\section{Introduction} The zeta functions associated to finite graphs by Ihara \cite{Ihara}, Hashimoto \cite{HaHo,Hashi}, Bass \cite{Bass} and others, combine features of Riemann's zeta function, Artin L-functions, and Selberg's zeta function, and may be viewed as analogues of the Dedekind zeta function of a number field. They are defined by an Euler product and have an analytic continuation to a meromorphic function satisfying a functional equation. They can be expressed as the determinant of a perturbation of the graph Laplacian and, for Ramanujan graphs, satisfy the Riemann hypothesis \cite{StTe}. The first attempt to study infinite graphs in this context was made by Grigorchuk and \.{Z}uk \cite{GrZu}, who considered graphs obtained as a suitable limit of a sequence of finite graphs. They proved that their definition does not depend on the approximating sequence in the case of Cayley graphs of finitely generated residually finite groups, and, more generally, in the case of graphs obtained as Schreier graphs of a pair $(G,H)$ of a finitely generated group $G$ and a separable subgroup $H$. The definition of the zeta function was extended to (countable) periodic graphs by Clair and Mokhtari-Sharghi in \cite{ClMS1}, where the determinant formula was proved. They deduced this result as a specialization of the treatment of group actions on trees (the so-called theory of tree lattices, as developed by Bass, Lubotzky and others, see \cite{BaLu}). The purpose of this work is to give a more direct proof of that result, for the case of periodic simple graphs with a free action. We hope that our treatment, being quite elementary, could be useful for someone seeking an introduction to the subject. 
In a sequel to this paper \cite{GILa03}, we shall prove that for periodic amenable graphs, the Ihara zeta function can be approximated by the zeta functions of a suitable sequence of finite graphs, thereby answering in the affirmative a question raised by Grigorchuk and \.{Z}uk in \cite{GrZu}. In order to provide a self-contained approach to the subject, we start by recalling the definition and some properties of the zeta function for finite graphs. Then, after having introduced some preliminary notions, we define in Section \ref{sec:Zeta} the analogue of the Ihara zeta function, and show that it is a holomorphic function, while, in Section \ref{sec:DetFormula}, we prove a corresponding determinant formula. The latter requires some care, because it involves the definition and properties of a determinant for bounded operators (acting on an infinite dimensional Hilbert space and) belonging to a von Neumann algebra with a finite trace. This question is addressed in Section \ref{sec:AnalyticDet}. In the final section, we establish several functional equations. In closing this introduction, we note that the operator-algebraic techniques used here were introduced by the authors in \cite{GILa01} in order to study the Ihara zeta functions attached to a new class of infinite graphs, called self-similar fractal graphs. \section{Zeta function for finite graphs} The Ihara zeta function is defined by means of equivalence classes of prime cycles. Therefore, we need to introduce some terminology from graph theory, following \cite{Serre, StTe} with some modifications. A {\it graph} $X=(VX,EX)$ consists of a collection $VX$ of objects, called {\it vertices}, and a collection $EX$ of objects called (oriented) {\it edges}, together with two maps $e\in EX\mapsto (o(e),t(e))\in VX\times VX$ and $e\in EX\mapsto \overline{e}\in EX$, satisfying the following conditions: $ \overline{\overline{e}}=e$, $o(\overline{e})=t(e)$, $\forall e\in EX$. 
The vertex $o(e)$ is called the {\it origin} of $e$, while $t(e)$ is called the {\it terminus} of $e$. The couple $\set{e,\overline{e}}$ is called a {\it geometric edge}. A graph is called {\it simple} if $EX \subset \set{(u,v)\in VX\times VX: u \neq v}$, $o(u,v) = u$, $t(u,v) = v$, $\overline{(u,v)} = (v,u)$; therefore, the set of geometric edges can be identified with a set of unordered pairs of distinct vertices. Observe that, in the literature, what we have called a graph is also called a multigraph, while a simple graph is also called a graph. We will only deal with simple graphs. The edge $e=\set{u,v}$ is said to join the vertices $u,v$, while $u$ and $v$ are said to be {\it adjacent}, which is denoted $u\sim v$. A {\it path} (of length $m$) in $X$ from $v_0\in VX$ to $v_m\in VX$ is $(v_{0},\ldots,v_{m})$, where $v_{i}\in VX$ and $v_{i+1}\sim v_i$, for $i=0,\ldots,m-1$. In the following, we denote by $|C|$ the length of a path $C$. A path is {\it closed} if $v_{m}=v_{0}$. A graph is {\it connected} if there is a path between any pair of distinct vertices. \begin{Dfn}[Proper closed Paths] \label{def:redPath} \itm{i} A path in $X$ has {\it backtracking} if $v_{i-1}=v_{i+1}$, for some $i\in\{1,\ldots,m-1\}$. A path with no backtracking is also called {\it proper}. Denote by $\Ci$ the set of proper closed paths. \itm{ii} A proper closed path $C=(v_{0},\ldots,v_{m}=v_{0})$ has a {\it tail} if there is $k\in\{1,\ldots,[m/2]-1\}$ s.t. $v_{j}=v_{m-j}$, for $j=1,\ldots,k$. Denote by $\Ta$ the set of proper closed paths with tail, and by $\Nt$ the set of proper tail-less closed paths, also called {\it reduced} closed paths. Observe that $\Ci=\Ta\cup\Nt$, $\Ta\cap\Nt=\emptyset$. \itm{iii} A reduced closed path is {\it primitive} if it is not obtained by going $n\geq 2$ times around some other closed path. \end{Dfn} \begin{exmp} Some examples of non-reduced closed paths are shown in figures \ref{fig:Backtracking}, \ref{fig:Tail}. 
\begin{figure}[ht] \centering \psfig{file=backtracking.eps,height=1.5in} \caption{Closed path with backtracking} \label{fig:Backtracking} \end{figure} \begin{figure}[ht] \centering \psfig{file=tail.eps,height=1.5in} \caption{Closed path with tail} \label{fig:Tail} \end{figure} \end{exmp} We also need an equivalence relation for closed paths. \begin{Dfn}[Cycles] Given closed paths $C=(v_{0},\ldots,v_{m}=v_{0})$, $D=(w_{0},\ldots,w_{m}=w_{0})$, we say that $C$ and $D$ are {\it equivalent}, and write $C\sim_{o} D$, if there is $k$ s.t. $w_{j}=v_{j+k}$, for all $j$, where the addition is taken mod $m$, that is, the origin of $D$ is shifted $k$ steps w.r.t. the origin of $C$. The equivalence class of $C$ is denoted $[C]_o$. An equivalence class is also called a {\it cycle}. Therefore, a closed path is just a cycle with a specified origin. Denote by $\Re$ the set of reduced cycles, and by $\Pr\subset\Re$ the subset of primitive reduced cycles, also called {\it prime} cycles. \end{Dfn} Then Ihara \cite{Ihara} defined the zeta function of a finite graph, that is, a graph $X=(VX,EX)$ with $VX$ and $EX$ finite sets, as \begin{Dfn}[Zeta function] $$ Z_{X}(u) := \prod_{C\in \Pr} (1-u^{|C|})^{-1}, \qquad u\in{\mathbb C}. $$ \end{Dfn} Ihara also proved the main result of this theory, though in the particular case of regular graphs; subsequently, through the efforts of Sunada \cite{Sunada}, Hashimoto \cite{HaHo,Hashi} and Bass \cite{Bass}, that result was proved in full generality. Nowadays, there exist many different proofs of Theorem \ref{Thm:Ihara}, see $e.g.$ \cite{StTe,FoZe,KoSu}. To state it, we need to introduce some more notation. 
Let us denote by $A=[A(v,w)]$, $v,w\in VX$, the adjacency matrix of $X$, that is, $$ A(v,w)= \begin{cases} 1&\set{v,w}\in EX\\ 0&\text{otherwise.} \end{cases} $$ Let $Q:= \text{diag}(\deg(v_{1})-1,\deg(v_{2})-1,\ldots)$, where $\deg(v)$ is the number of vertices adjacent to $v$, and $\D(u):=I-Au+Qu^{2}$, $u\in{\mathbb C}$, a deformation of the usual Laplacian on the graph, which is $\D(1) = (Q+I)-A$. Then, with $d:= \max_{v\in VX} \deg(v)$, and $\chi(X)=|VX|-|EX|$, the Euler characteristic of $X$, we get \begin{Thm}[Determinant formula] \label{Thm:Ihara}{\rm \cite{Ihara,Sunada,HaHo,Hashi,Bass}} $$ \frac{1}{Z_{X}(u)} = (1-u^{2})^{-\chi(X)} \text{det}(\D(u)), \ \text{ for } |u|<\frac{1}{d-1}. $$ \end{Thm} \begin{exmp} We can compute the zeta function of the example shown in figure \ref{fig:Example1} by using the determinant formula. We obtain $Z_X(u)^{-1} = (1 - u^2)^2(1 - u)(1 - 2 u)(1 + u + 2 u^2)^3$. \begin{figure}[ht] \centering \psfig{file=Example1.eps,height=1.5in} \caption{A graph} \label{fig:Example1} \end{figure} \end{exmp} The zeta function has been used to establish some properties of graphs. For example \begin{Thm} {\rm \cite{Hashi,Hashi1,Bass,North,KoSu}} Let $X$ be a finite graph, and let $r= |EX| - |VX| +1$ be the rank of the fundamental group $\pi_1(X,x_0)$. Then $r$ is the order of the pole of $Z_X(u)$ at $u=1$. If $r>1$, then $$ \lim_{u\to 1^-} Z_X(u)(1-u)^r = -\frac{1}{2^r (r-1)\kappa_X}, $$ where $\kappa_X$ is the number of spanning trees in $X$. \end{Thm} \begin{Thm} {\rm \cite{Hashi2,HSTe} } Let $X$ be a finite graph, and let $R_X$ be the radius of convergence of $Z_X$. Denote by $\pi_{n}$ the number of prime cycles of length $n$. If $g.c.d. \{ |C| : C\in\Pr \} =1$, then $$ \pi_{n} \sim \frac{R_X^{-n}}{n},\ n\to\infty. $$ \end{Thm} \begin{Thm}{\rm \cite{Ihara,Lubo,StTe} } Let $X$ be a finite graph which is $(q+1)$-regular, $i.e.$ $\deg(v) = q+1$ for all $v\in VX$. 
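The example above can be checked numerically: the stated polynomial is the one produced by the complete graph $K_{4}$ (the unique $3$-regular simple graph with $\chi(X)=-2$), so, assuming figure \ref{fig:Example1} indeed depicts $K_{4}$ (an editorial assumption consistent with the stated formula), the following sketch verifies the determinant formula at a few sample points inside $|u|<\frac{1}{d-1}=\frac12$.

```python
import numpy as np

# Numerical check of the determinant formula for the complete graph K4,
# assumed to be the graph of the example: |VX| = 4, |EX| = 6, so
# chi(X) = -2, and Q = 2I since every vertex has degree 3.
A = np.ones((4, 4)) - np.eye(4)   # adjacency matrix of K4
Q = 2 * np.eye(4)                 # Q = diag(deg(v) - 1)
chi = 4 - 6                       # Euler characteristic

for u in (0.05, 0.1, -0.2):       # sample points with |u| < 1/(d-1) = 1/2
    lhs = (1 - u**2) ** (-chi) * np.linalg.det(np.eye(4) - u * A + u**2 * Q)
    rhs = (1 - u**2) ** 2 * (1 - u) * (1 - 2*u) * (1 + u + 2*u**2) ** 3
    assert abs(lhs - rhs) < 1e-12
```

The factorization is visible from the spectrum of $K_4$: the eigenvalues of $A$ are $3$ and $-1$ (with multiplicity $3$), so $\text{det}(\D(u)) = (1-3u+2u^2)(1+u+2u^2)^3 = (1-u)(1-2u)(1+u+2u^2)^3$.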
Then the following are equivalent \itm{i} $$ (RH)\qquad Z_X(q^{-s})^{-1} = 0 \ \text{ and } \ \Re s \in(0,1) \implies \Re s = \frac12. $$ \itm{ii} $X$ is a Ramanujan graph, $i.e.$ $\lambda\in\sigma(A),\ | \lambda | < q+1 \implies |\lambda| \leq 2\sqrt{q}$. \end{Thm} More results on the Ihara zeta function are contained in \cite{Serre1,StTe2,StTe3} and in various papers by Mizuno and Sato. In closing this section, we mention a generalization of the Ihara zeta function recently introduced by Bartholdi \cite{Bartholdi} and studied by Mizuno and Sato (see \cite{MiSa} and references therein). \section{Periodic simple graphs} Let $X=(VX,EX)$ be a simple graph, which we assume to be (countable and) with bounded degree, $i.e.$ the degree of the vertices is uniformly bounded. Let $\G$ be a countable discrete subgroup of automorphisms of $X$, which acts freely on $X$ ($i.e.$ any $\gamma\in\G$, $\gamma\neq id$, has no fixed points), and with finite quotient $B:=X/\G$. Denote by ${\mathcal F}\subset VX$ a set of representatives for $VX/\G$, the vertices of the quotient graph $B$. Let us define a unitary representation of $\G$ on $\ell^{2}(VX)$ by $(\lambda(\gamma)f)(x):= f(\gamma^{-1}x)$, for $\gamma\in\G$, $f\in\ell^{2}(VX)$, $x\in VX$. Then the von Neumann algebra ${\mathcal N}(X,\G):= \{ \lambda(\gamma) : \gamma\in\G\}'$ of bounded operators on $\ell^{2}(VX)$ commuting with the action of $\G$ inherits a trace given by $Tr_{\G}(T) = \sum_{x\in{\mathcal F}} T(x,x)$, for $T\in{\mathcal N}(X,\G)$. Let us denote by $A$ the adjacency matrix of $X$. Then (by \cite{Mohar}, \cite{MoWo}) $\|A\|\leq d:=\sup_{v\in VX} \deg(v) <\infty$, and it is easy to see that $A\in{\mathcal N}(X,\G)$. 
For any $m\in{\mathbb N}$, let us denote by $A_{m}(x,y)$ the number of proper paths in $X$, of length $m$, with initial vertex $x$ and terminal vertex $y$, for $x,y\in VX$. Then $A_{1}=A$. Let $A_{0}:= I$ and $Q:= \text{diag}(\deg(v_{1})-1,\deg(v_{2})-1,\ldots)$. Then \begin{Lemma}\label{lem:Lemma1} \itm{i} $A_{2} = A^{2}-Q-I\in{\mathcal N}(X,\G)$, \itm{ii} for $m\geq 3$, $A_{m} = A_{m-1}A-A_{m-2}Q\in{\mathcal N}(X,\G)$, \itm{iii} let $\alpha:= \frac{d+\sqrt{d^{2}+4d}}{2}$; then $\|A_{m}\| \leq \alpha^{m}$, for $m\geq0$. \end{Lemma} \begin{proof} $(i)$ if $x = y$ then $A_{2}(x,x)=0$ because there are no proper closed paths of length $2$ starting at $x$, whereas $A^{2}(x,x) = \deg(x) = (Q+I)(x,x)$, so that $A_{2}(x,x)=A^{2}(x,x)-(Q+I)(x,x)$. If $x\neq y$, then $A^{2}(x,y)$ is the number of paths of length $2$ (necessarily proper) from $x$ to $y$, so $A_{2}(x,y)=A^{2}(x,y) = A^{2}(x,y)-(Q+I)(x,y)$. \\ $(ii)$ for $x,y\in VX$, the sum $\sum_{z\in VX} A_{m-1}(x,z)A(z,y)$ counts the proper paths of length $m$ from $x$ to $y$, plus additional paths formed of a proper path of length $m-2$ from $x$ to $y$ followed by a path of length $2$ from $y$ to $z$ and back; since the path from $x$ to $y$ and then to $z$ is a proper path of length $m-1$ (one of those counted by $A_{m-1}(x,z)$), $z$ can only be one of the $\deg(y)-1=Q(y,y)$ vertices adjacent to $y$, the last one being on the proper path from $x$ to $y$. Therefore $\sum_{z\in VX} A_{m-1}(x,z)A(z,y) = A_{m}(x,y) + A_{m-2}(x,y)Q(y,y)$, and the statement follows. \\ $(iii)$ We have $\|A_{1}\|=\|A\|\leq d$, $\|A_{2}\|\leq d^{2}+d$, and $\|A_{m}\|\leq d(\|A_{m-1}\|+\|A_{m-2}\|)$, from which the claim follows by induction. \end{proof} Denote by $\Ci_{m}$ the subset of $\Ci$ consisting of the proper closed paths of length $m$, and attach a similar meaning to $\Ta_{m}$, $\Nt_{m}$, $\Re_{m}$ and $\Pr_{m}$. 
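The recursion of the lemma can be sanity-checked against a brute-force enumeration of proper (non-backtracking) paths on a small finite graph; the following sketch uses $K_{4}$ as an illustrative choice (any small simple graph would do).

```python
import numpy as np
from itertools import product

# Compare A_m computed via the recursion of the lemma with a direct
# count of non-backtracking paths on the complete graph K4.
n = 4
A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)
Q = np.diag(A.sum(axis=1) - 1)   # Q = diag(deg(v) - 1)

def proper_path_count(m):
    """Matrix whose (x, y) entry counts non-backtracking paths of length m."""
    count = np.zeros((n, n), dtype=int)
    for path in product(range(n), repeat=m + 1):
        ok = all(A[path[i], path[i + 1]] for i in range(m))       # adjacency
        ok = ok and all(path[i - 1] != path[i + 1] for i in range(1, m))
        if ok:
            count[path[0], path[-1]] += 1
    return count

# A_0 = I, A_1 = A, A_2 = A^2 - Q - I, then A_m = A_{m-1} A - A_{m-2} Q
Am = [np.eye(n, dtype=int), A, A @ A - Q - np.eye(n, dtype=int)]
for m in range(3, 6):
    Am.append(Am[-1] @ A - Am[-2] @ Q)

for m in range(1, 6):
    assert (Am[m] == proper_path_count(m)).all()
```

For instance, $A_{3}(x,x)=6$ on $K_{4}$: the six oriented triangles through each vertex.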
\begin{Lemma}\label{lem:countTail} Let $t_{m}:= \sum_{x\in {\mathcal F}} |\{C\in\Ta_{m}: C \text{ starts at } x \} |$, where $|\cdot|$ denotes the cardinality of a set. Then \itm{i} $t_{1}=t_{2}=0$, and, for $m\geq 3$, $t_{m} = Tr_{\G}((Q-I)A_{m-2}) + t_{m-2}$, \itm{ii} $t_{m} = Tr_{\G}\left( (Q-I)\sum_{j=1}^{[\frac{m-1}{2}]} A_{m-2j} \right)$. \end{Lemma} \begin{proof} $(i)$ Indeed, we have \begin{align*} t_{m} &= \sum_{x\in {\mathcal F}} |\{C\in\Ta_{m}: C \text{ starts at } x \} |\\ &= \sum_{x\in {\mathcal F}} \sum_{y\sim x} |\{C\in\Ta_{m}: C \text{ starts at } x \text{ and goes to } y \text{ in the first step} \} |\\ &= \sum_{y\in {\mathcal F}} \sum_{x\sim y} |\{C\in\Ta_{m}: C \text{ starts at } x \text{ and goes to } y \text{ in the first step} \} |, \end{align*} where the last equality follows from the fact that the cardinalities above are $\G$-invariant, and we can choose $\gamma\in\G$ for which the second vertex $y$ of $\gamma C$ is in ${\mathcal F}$. A path $C$ in the last set goes from $x$ to $y$, then over a closed path $D$ of length $m-2$, and then back to $x$. There are two kinds of closed paths $D$ at $y$: those with tails and those without. In either case, $x$ is one of the $Q(y,y)+1$ vertices adjacent to $y$, but $x$ cannot coincide with the second or the second-to-last vertex of $D$ (otherwise, $C$ would have backtracking). If $D$ has no tail, these two vertices are distinct, which leaves $Q(y,y)-1$ possibilities. If $D$ has a tail, they coincide, which leaves $Q(y,y)$ possibilities. 
Therefore, we get \begin{align*} &\sum_{x\sim y} | \{C\in\Ta_{m}: C \text{ starts at } x \text{ and goes to } y \text{ in the first step} \} |\\ &= (Q(y,y)-1)\cdot | \{D\in\Nt_{m-2}: D \text{ starts at } y \} | \\ &\qquad + Q(y,y)\cdot | \{D\in\Ta_{m-2}: D \text{ starts at } y \} | \\ & = (Q(y,y)-1)\cdot | \{D\in\Ci_{m-2}: D \text{ starts at } y \} | \\ & \qquad + | \{D\in\Ta_{m-2}: D \text{ starts at } y \} |\, , \end{align*} so that \begin{align*} t_{m} &= \sum_{y\in {\mathcal F}} (Q(y,y)-1)\cdot | \{D\in\Ci_{m-2}: D \text{ starts at } y \} | \\ & \qquad + \sum_{y\in {\mathcal F}} | \{D\in\Ta_{m-2}: D \text{ starts at } y \} | \\ & = \sum_{y\in {\mathcal F}} (Q(y,y)-1)A_{m-2}(y,y) +t_{m-2}\\ & = Tr_{\G}((Q-I)A_{m-2})+t_{m-2}. \end{align*} $(ii)$ Follows from $(i)$. \end{proof} We need to introduce an equivalence relation between reduced cycles. \begin{Dfn}[Equivalence relation between reduced cycles] Given $C$, $D\in\Re$, we say that $C$ and $D$ are $\G$-{\it equivalent}, and write $C \sim_{\G} D$, if there is an automorphism $\gamma\in \G$ s.t. $D=\gamma(C)$. We denote by $[\Re]_{\G}$ the set of $\G$-equivalence classes of reduced cycles, and analogously for the subset $\Pr$. \end{Dfn} For the purposes of the next result, for any closed path $D=(v_{0},\ldots,v_{m}=v_{0})$, we also denote $v_{j}$ by $v_{j}(D)$. Let us now assume that $C$ is a prime cycle of length $m$. Then the {\it stabilizer} of $C$ in $\G$ is the subgroup $\G_{C}= \{\gamma\in\G : \gamma(C)=C\}$ or, equivalently, $\gamma\in\G_{C}$ if there exists $p(\gamma)\in{\mathbb Z}_{m}$ s.t., for any choice of the origin of $C$, $v_{j}(\gamma C)=v_{j-p(\gamma)}(C)$, for any $j$. Let us observe that $\gamma\mapsto p(\gamma)$ is a group homomorphism from $\G_{C}$ to ${\mathbb Z}_{m}$, which is injective because $\G$ acts freely. As a consequence, $|\G_{C}|$ divides $m$. 
\begin{Dfn} Let $C\in\Pr$ and define $\displaystyle \nu(C) := \frac{|C|}{|\G_{C}|}$. If $C=D^{k}\in\Re$, where $D\in\Pr$, define $\nu(C)=\nu(D)$. Observe that $\nu(C)$ only depends on $[C]_{\G}\in[\Re]_{\G}$. \end{Dfn} \begin{Lemma}\label{lem:estim.for.N} Let us set $N_{m} := \sum_{[C]_{\G}\in[\Re_{m}]_{\G}}\nu(C)$. Then \itm{i} $N_{m} = Tr_{\G}(A_{m}) - t_{m}$, \itm{ii} $N_{m} \leq d(d-1)^{m-1}|{\mathcal F}|$. \end{Lemma} \begin{proof} $(i)$ Let us assume that $[C]_{\G}$ is an equivalence class of prime cycles in $[{\mathcal P}_{m}]_{\G}$, and consider the set $U$ of all primitive closed paths with the origin in ${\mathcal F}$ and representing $[C]_{\G}$. If $C$ is such a representative, any other representative can be obtained in this way: choose $k\in{\mathbb Z}_{m}$, let $\gamma(k)$ be the (unique) element in $\G$ for which $\gamma(k)v_{k}(C)\in{\mathcal F}$, and define $C_{k}$ as $$ v_{j}(C_{k})=\gamma(k)v_{j+k}(C),\ j\in{\mathbb Z}_{m}. $$ If we want to count the elements of $U$, we should know how many of the elements $C_{k}$ above coincide with $C$. For this to happen, $\gamma(k)$ should clearly be in the stabilizer $\G_{C}$ of the cycle $[C]_{o}$. Conversely, for any $\gamma\in\G_{C}$, there exists $p=p(\gamma)\in{\mathbb Z}_{m}$ such that $\gamma v_{j}(C)=v_{j-p}(C)$, therefore $\gamma=\gamma(p)$. As a consequence, $v_{j}(C_{p(\gamma)})=\gamma(p) v_{j+p}(C)=v_{j}(C)$, so that $C_{p(\gamma)}=C$. We have proved that the cardinality of $U$ is equal to $\nu(C)$. The proof for a non-prime cycle is analogous. 
Therefore, \begin{align} N_{m}&= \sum_{[C]_{\G}\in[\Re_{m}]_{\G}} | \{ D\in{\mathcal C}^{notail}_{m}: [D]_{o}\sim_{\G} C, v_{0}(D)\in{\mathcal F} \} | \notag \\ & = | \{ C\in{\mathcal C}^{notail}_{m}, v_{0}(C)\in{\mathcal F} \} | \label{eq:Nm}\\ &= | \{ C\in{\mathcal C}_{m}, v_{0}(C)\in{\mathcal F} \} | - | \{ C\in{\mathcal C}^{tail}_{m}, v_{0}(C)\in{\mathcal F} \} | \notag\\ &= Tr_{\G}(A_{m}) - t_{m}. \notag \end{align} $(ii)$ Follows from (\ref{eq:Nm}). \end{proof} \section{The Zeta function}\label{sec:Zeta} In this section, we define the Ihara zeta function for a periodic graph, and prove that it is a holomorphic function. \begin{Dfn}[Zeta function] We let $$ Z_{X,\G}(u) := \prod_{[C]_{\G}\in [\Pr]_{\G}} (1-u^{|C|})^{ -\frac{1}{ |\G_{C}| } }, $$ for all $u\in{\mathbb C}$ sufficiently small so that the infinite product converges. \end{Dfn} \begin{Lemma}\label{lem:power.series} \itm{i} $Z(u):=\prod_{[C] \in [\Pr]_{\G}} (1-u^{|C|})^{ -\frac{1}{ |\G_{C}| } }$, defines a holomorphic function in $\{u\in{\mathbb C}: |u|<\frac{1}{d-1}\}$, \itm{ii} $u\frac{Z'(u)}{Z(u)} = \sum_{m=1}^{\infty} N_{m}u^{m}$, for $|u| < \frac{1}{d-1}$, \itm{iii} $Z(u) = \exp\left( \sum_{m=1}^{\infty} \frac{N_{m}}{m}u^{m} \right)$ , for $|u| < \frac{1}{d-1}$. \end{Lemma} \begin{proof} Let us observe that, for $|u|<\frac{1}{d-1}$, \begin{align*} \sum_{m=1}^{\infty} N_{m} u^{m} & = \sum_{[C]_{\G}\in [\Re]_{\G}}\nu(C)\, u^{|C|} \\ & = \sum_{m=1}^{\infty} \sum_{[C]_{\G}\in [\Pr]_{\G}}\frac{|C|}{|\G_{C}|}\, u^{|C^{m}|} \\ &= \sum_{[C]_{\G}\in [\Pr]_{\G}}\frac{1}{|\G_{C}|}\, \sum_{m=1}^{\infty} |C| u^{|C|m} \\ &= \sum_{[C]_{\G}\in [\Pr]_{\G}} \frac{1}{|\G_{C}|}\, u\frac{d}{du} \sum_{m=1}^{\infty} \frac{u^{|C|m}}{m} \\ &= -\sum_{[C]_{\G}\in [\Pr]_{\G}} \frac{1}{|\G_{C}|}\, u\frac{d}{du} \log(1-u^{|C|}) \\ & = u\frac{d}{du} \log Z(u), \end{align*} where, in the last equality we used uniform convergence on compact subsets of $\set{u\in{\mathbb C}: |u|<\frac{1}{d-1}}$. The rest of the proof is clear. 
\end{proof} \begin{exmp} Some examples of cycles with different stabilizers are shown in figures \ref{fig:cycle1}, \ref{fig:cycle2}. They refer to the graph in figure \ref{fig:Example2} which is the standard lattice graph $X={\mathbb Z}^{2}$ endowed with the action of the group $\G$ which is generated by the rotation by $\frac{\pi}{2}$ around the point $P$ and the translations by elements $(m,n)\in{\mathbb Z}^{2}$ acting as $(m,n)(v_{1},v_{2}):= (v_{1}+2m,v_{2}+2n)$, for $v=(v_{1},v_{2})\in VX={\mathbb Z}^{2}$. \begin{figure}[ht] \centering \psfig{file=Example2.eps,height=1.5in} \caption{A periodic graph $X$ with its quotient $B=X/\G$} \label{fig:Example2} \end{figure} \begin{figure}[ht] \centering \psfig{file=cycle1.eps,height=1.5in} \caption{A cycle with $|\G_{C}|=4$} \label{fig:cycle1} \end{figure} \begin{figure}[ht] \centering \psfig{file=cycle2.eps,height=1.5in} \caption{A cycle with $|\G_{C}|=2$} \label{fig:cycle2} \end{figure} \end{exmp} The interested reader can find the computation of the Ihara zeta function for several periodic simple graphs in \cite{GrZu,ClMS1,ClMS2,Clair}. \section[Analytic determinant for von Neumann algebras]{An analytic determinant for von Neumann algebras with a finite trace}\label{sec:AnalyticDet} In this section, we define a determinant for a suitable class of not necessarily normal operators in a von Neumann algebra with a finite trace. The results obtained are used in Section \ref{sec:DetFormula} to prove a determinant formula for the zeta function. In a celebrated paper \cite{FuKa}, Fuglede and Kadison defined a positive-valued determinant for finite factors ($i.e.$ von Neumann algebras with trivial center and finite trace). Such a determinant is defined on all invertible elements and enjoys the main properties of a determinant function, but it is positive-valued. 
Indeed, for an invertible operator $A$ with polar decomposition $A=UH$, where $U$ is a unitary operator and $H:= \sqrt{A^{*}A}$ is a positive self-adjoint operator, the Fuglede--Kadison determinant is defined by $$ Det(A)=\exp\, \circ\ \tau\circ\log H, $$ where $\log H$ may be defined via the functional calculus. For the purposes of the present paper, we need a determinant which is an analytic function. As we shall see, this can be achieved, but corresponds to a restriction of the domain of the determinant function and implies the loss of some important properties. Let $({\mathcal A},\tau)$ be a von Neumann algebra endowed with a finite trace. Then, a natural way to obtain an analytic function is to define, for $A\in{\mathcal A}$, $\text{det}_\tau(A)=\exp\, \circ\ \tau\circ\log A$, where $$ \log(A) := \frac{1}{2\pi i} \int_\Gamma \log \lambda (\lambda-A)^{-1} d\lambda, $$ and $\Gamma$ is the boundary of a connected, simply connected region $\Omega$ containing the spectrum of $A$. Clearly, once the branch of the logarithm is chosen, the integral above does not depend on the choice of $\Gamma$, provided $\Gamma$ is given as above. Then a na\"{\i}ve way of defining $\text{det}_\tau$ is to allow all elements $A$ for which there exists an $\Omega$ as above, and a branch of the logarithm whose domain contains $\Omega$. Indeed, the following holds. \begin{Lemma} Let $A$, $\Omega$, $\Gamma$ be as above, and $\varphi$, $\psi$ two branches of the logarithm such that both domains contain $\Omega$. Then $$ \exp\, \circ\ \tau\circ\varphi(A) = \exp\, \circ\ \tau\circ\psi(A). $$ \end{Lemma} \begin{proof} The function $\varphi(\lambda)-\psi(\lambda)$ is continuous and everywhere defined on $\Gamma$. Since it takes its values in $2\pi i \mathbb{Z}$, it must be constant on $\Gamma$. Therefore \begin{align*} \exp\, \circ\ \tau\circ\varphi (A) & = \exp\, \circ\ \tau\left(\frac{1}{2\pi i} \int_\Gamma 2\pi i n_{0} (\lambda-A)^{-1} d\lambda \right) \exp\, \circ\ \tau\circ\psi(A)\\ &=\exp\, \circ\ \tau\circ\psi(A). 
\end{align*} \end{proof} The problem with the previous definition is its dependence on the choice of $\Omega$. Indeed, it is easy to see that when $A=\begin{pmatrix}1&0\\0&i\end{pmatrix}$ and we choose $\Omega$ containing $\{e^{i\vartheta},\vartheta\in[0,\pi/2]\}$ and any suitable branch of the logarithm, we get $det(A)=e^{i\pi/4}$, by using the normalized trace on $2\times 2$ matrices. On the other hand, if we choose $\Omega$ containing $\{e^{i\vartheta},\vartheta\in[\pi/2,2\pi]\}$ and a corresponding branch of the logarithm, we get $det(A)=e^{5i\pi/4}$. Therefore, we make the following choice. \begin{Dfn} Let $({\mathcal A},\tau)$ be a von Neumann algebra endowed with a finite trace, and consider the subset ${\mathcal A}_{0}=\{A\in{\mathcal A} : 0\not\in \text{conv}\,\sigma(A)\}$, where $\sigma} \def\S{\Sigma(A)$ denotes the spectrum of $A$. For any $A\in{\mathcal A}_{0}$ we set $$ \text{det}_\tau(A)=\exp\, \circ\ \tau\circ\left(\frac{1}{2\pi i} \int_\Gamma \log \lambda (\lambda-A)^{-1} d\lambda\right), $$ where $\Gamma$ is the boundary of a connected, simply connected region $\Omega$ containing $\text{conv}\,\sigma(A)$, and $\log$ is a branch of the logarithm whose domain contains $\Omega$. \end{Dfn} \begin{Cor}\label{cor:det.analytic} The determinant function defined above is well-defined and analytic on ${\mathcal A}_{0}$. \end{Cor} We collect some properties of our determinant in the following result. \begin{Prop}\label{properties} Let $({\mathcal A},\tau)$ be a von Neumann algebra endowed with a finite trace, $A\in{\mathcal A}_{0}$. Then \item[$(i)$] $\text{det}_\tau(zA)=z^{\tau(I)}\text{det}_\tau(A)$, for any $z\in\mathbb{C}\setminus\{0\}$, \item[$(ii)$] if $A$ is normal, and $A=UH$ is its polar decomposition, $$\text{det}_\tau (A)=\text{det}_\tau(U)\text{det}_\tau(H),$$ \item[$(iii)$] if $A$ is positive, $\text{det}_\tau(A)=Det(A)$, where the latter is the Fuglede-Kadison determinant. 
\end{Prop} \begin{proof} $(i)$ If the half-line $\{\rho e^{i\vartheta_0}\in\mathbb{C} : \rho>0\}$ does not intersect $\text{conv}\,\sigma(A)$, then the half-line $\{\rho e^{i(\vartheta_0+t)}\in\mathbb{C} : \rho>0\}$ does not intersect $\text{conv}\,\sigma(zA)$, where $z=re^{it}$. If $\log$ is the branch of the logarithm defined on the complement of the real negative half-line, then $\varphi(x)=i(\vartheta_{0}-\pi) + \log(e^{-i(\vartheta_{0}-\pi)}x)$ is suitable for defining $\text{det}_\tau(A)$, while $\psi(x)=i(\vartheta_{0}+t-\pi) + \log(e^{-i(\vartheta_{0}+t-\pi)}x)$ is suitable for defining $\text{det}_\tau(zA)$. Moreover, if $\Gamma$ is the boundary of a connected, simply connected region $\Omega$ containing $\text{conv}\,\sigma(A)$, then $z\Gamma$ is the boundary of a connected, simply connected region $z\Omega$ containing $\text{conv}\,\sigma(zA)$. Therefore, \begin{align*} \text{det}_\tau(zA) &= \exp\, \circ\ \tau\left(\frac{1}{2\pi i} \int_{z\Gamma} \psi(\lambda) (\lambda-zA)^{-1} d\lambda\right)\\ &= \exp\, \circ\ \tau\left(\frac{1}{2\pi i} \int_{\Gamma} (i(\vartheta_{0}+t-\pi) + \log(e^{-i(\vartheta_{0}+t-\pi)} re^{it}\mu)) (\mu-A)^{-1} d\mu\right)\\ &= \exp\, \circ\ \tau\left((\log r + it)I+\frac{1}{2\pi i} \int_{\Gamma} \varphi(\mu) (\mu-A)^{-1} d\mu\right)\\ &= z^{\tau(I)} \text{det}_\tau(A). \end{align*} $(ii)$ When $A=UH$ is normal, $U=\int_{[0,2\pi]} e^{i\vartheta}\ du(\vartheta)$, $H=\int_{[0,\infty)}r\ dh(r)$, then $ A = \int_{[0,\infty)\times[0,2\pi]} r e^{i\vartheta} \ d(h(r)\otimes u(\vartheta))$. 
The property $0\not\in\text{conv}\,\sigma(A)$ is equivalent to the fact that the support of the measure $d(h(r)\otimes u(\vartheta))$ is compactly contained in some open half-plane $$\{\rho e^{i\vartheta} : \rho>0, \vartheta \in (\vartheta_{0} - \pi/2, \vartheta_{0} +\pi/2)\},$$ or, equivalently, that the support of the measure $dh(r)$ is compactly contained in $(0,\infty)$, and the support of the measure $d u(\vartheta)$ is compactly contained in $(\vartheta_{0} - \pi/2, \vartheta_{0} +\pi/2)$. Therefore $A\in{\mathcal A}_{0}$ is equivalent to $U,H\in{\mathcal A}_{0}$. Then $$\log A = \int_{[0,\infty) \times (\vartheta_{0} - \pi/2, \vartheta_{0} +\pi/2)} (\log r + i\vartheta) \ d(h(r)\otimes u(\vartheta)),$$ which implies that \begin{align*} \text{det}_\tau(A) &= \exp\, \circ\ \tau\left(\int_{0}^{\infty} \log r\ dh(r) + \int_{\vartheta_{0} - \pi/2}^{\vartheta_{0} +\pi/2} i\vartheta \ du(\vartheta)\right) \\ &= \text{det}_\tau(U)\cdot \text{det}_\tau(H). \end{align*} $(iii)$ Follows by the above argument. \end{proof} \begin{rem} We note that the above defined determinant function strongly violates the product property $\text{det}_\tau(AB)=\text{det}_\tau(A)\text{det}_\tau(B)$. Indeed, the fact that $A,B\in{\mathcal A}_{0}$ does not imply $AB\in{\mathcal A}_{0}$, as is seen e.g. by taking $A=B=\begin{pmatrix}1&0\\0&i\end{pmatrix}$. Moreover, even if $A,B,AB\in{\mathcal A}_{0}$ and $A$ and $B$ commute, the product property may be violated, as is shown by choosing $A=B=\begin{pmatrix}1&0\\0&e^{3i\pi/4}\end{pmatrix}$, and using the normalized trace on $2\times 2$ matrices. \end{rem} \section{The determinant formula}\label{sec:DetFormula} In this section, we prove the main result in the theory of Ihara zeta functions, which says that $Z$ is the reciprocal of a holomorphic function, which, up to a factor, is the determinant of a deformed Laplacian on the graph. We first need some technical results. 
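Before doing so, we remark that the $2\times 2$ examples above (the branch dependence of the na\"{\i}ve definition and the failure of the product property) are easily reproduced numerically; the following sketch, which uses the normalized trace on diagonal $2\times 2$ matrices, is purely illustrative.

```python
import cmath

# For a diagonal 2x2 matrix with the normalized trace, det_tau is the
# exponential of the average of the chosen logarithms of the eigenvalues.
def det_tau(logs):
    return cmath.exp(sum(logs) / len(logs))

# A = diag(1, i): the two admissible branches give different values.
d1 = det_tau([cmath.log(1), cmath.log(1j)])                  # Omega around [0, pi/2]
d2 = det_tau([cmath.log(1), cmath.log(1j) - 2j * cmath.pi])  # Omega around [pi/2, 2pi]
assert abs(d1 - cmath.exp(1j * cmath.pi / 4)) < 1e-12        # e^{i pi/4}
assert abs(d2 - cmath.exp(5j * cmath.pi / 4)) < 1e-12        # e^{5 i pi/4}

# A = B = diag(1, e^{3 i pi/4}): A, B and AB all lie in A_0, yet
# det_tau(AB) differs from det_tau(A) det_tau(B) (principal branch).
w = cmath.exp(3j * cmath.pi / 4)
dA = det_tau([cmath.log(1), cmath.log(w)])
dAB = det_tau([cmath.log(1), cmath.log(w * w)])
assert abs(dA * dA - dAB) > 1
```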
Let us recall that $d:=\sup_{v\in VX} \deg(v)$, and $\alpha:= \frac{d+\sqrt{d^{2}+4d}}{2}$. \begin{Lemma}\label{lem:eq.for.A} \itm{i} $\left(\sum_{m\geq 0} A_{m}u^{m}\right)(I-Au+Qu^{2}) = (1-u^{2})I$, for $|u|<\frac{1}{\alpha}$, \itm{ii} $\left(\sum_{m\geq 0} \left( \sum_{k=0}^{[m/2]}A_{m-2k} \right) u^{m}\right)(I-Au+Qu^{2}) = I$, for $|u|<\frac{1}{\alpha}$. \end{Lemma} \begin{proof} $(i)$ From Lemma \ref{lem:Lemma1} we obtain \begin{align*} \biggl(\sum_{m\geq 0} A_{m}u^{m}\biggr)&(I-Au+Qu^{2}) = \sum_{m\geq 0} A_{m}u^{m} - \sum_{m\geq 0}\left( A_{m}Au^{m+1} -A_{m}Qu^{m+2}\right) \\ &= \sum_{m\geq 0} A_{m}u^{m} -A_{0}Au -A_{1}Au^{2} +A_{0}Qu^{2} \\ & \qquad - \sum_{m\geq 3}\left( A_{m-1}A -A_{m-2}Q\right)u^{m} \\ &= \sum_{m\geq 0} A_{m}u^{m} -Au -A^{2}u^{2} +Qu^{2} - \sum_{m\geq 3} A_{m}u^{m} \\ &= I +Au +A_{2}u^{2} -Au -A^{2}u^{2} +Qu^{2} \\ & = (1-u^{2})I. \end{align*} $(ii)$ \begin{align*} I &= (1-u^{2})^{-1} \biggl(\sum_{m\geq 0} A_{m}u^{m}\biggr)(I-Au+Qu^{2}) \\ &= \biggl(\sum_{m\geq 0} A_{m}u^{m}\biggr) \biggl( \sum_{j=0}^{\infty}u^{2j}\biggr) (I-Au+Qu^{2}) \\ &= \biggl(\sum_{k\geq 0}\sum_{j=0}^{\infty} A_{k}u^{k+2j}\biggr)(I-Au+Qu^{2}) \\ &= \biggl(\sum_{m\geq 0}\biggl( \sum_{j=0}^{[m/2]} A_{m-2j}\biggr) u^{m}\biggr)(I-Au+Qu^{2}). \end{align*} \end{proof} \begin{Lemma}\label{lem:eq.for.B} Denote by $B_{m} := A_{m} - (Q-I) \sum_{k=1}^{[m/2]}A_{m-2k} \in{\mathcal N}(X,\G)$, for $m\geq 0$. Then \itm{i} $B_{0}=I$, $B_{1}=A$, \itm{ii} $B_{m} = QA_{m} - (Q-I) \sum_{k=0}^{[m/2]}A_{m-2k}$, \itm{iii} $$Tr_{\G} B_{m} = \begin{cases} N_{m} - Tr_{\G}(Q-I) & m \text{ even} \\ N_{m} & m \text{ odd,} \end{cases}$$ \itm{iv} $$ \sum_{m\geq 1} B_{m}u^{m} = \left( Au-2Qu^{2}\right)\left(I-Au+Qu^{2}\right)^{-1}, \ \text{ for } |u|<\frac{1}{\alpha}. $$ \end{Lemma} \begin{proof} $(i)$, $(ii)$ follow from computations involving bounded operators. 
$(iii)$ It follows from Lemma \ref{lem:countTail} $(ii)$ that, if $m$ is odd, $$Tr_{\G} B_{m} = Tr_{\G}(A_{m}) - t_{m} = N_{m} ,$$ whereas, if $m$ is even, $$ Tr_{\G} B_{m} = Tr_{\G}(A_{m}) - t_{m} - Tr_{\G}((Q-I)A_{0}) = N_{m} - Tr_{\G}(Q-I). $$ $(iv)$ \begin{align*} \biggl( \sum_{m\geq 0} B_{m}u^{m} \biggr)& (I-Au+Qu^{2}) \\ & = \biggl( Q\sum_{m\geq 0} A_{m}u^{m} - (Q-I)\sum_{m\geq 0}\sum_{j=0}^{[m/2]} A_{m-2j}u^{m}\biggr) (I-Au+Qu^{2}) \\ & = Q(1-u^{2})I - (Q-I)\biggl( \sum_{m\geq 0}\sum_{j=0}^{[m/2]} A_{m-2j}u^{m}\biggr) (I-Au+Qu^{2})\\ & = (1-u^{2})Q - (Q-I) = I-u^{2}Q, \end{align*} where the second equality follows by Lemma \ref{lem:eq.for.A} $(i)$ and the third equality follows by Lemma \ref{lem:eq.for.A} $(ii)$. Since $B_{0}=I$, we get \begin{align*} \biggl( \sum_{m\geq 1} B_{m}u^{m} \biggr) (I-Au+Qu^{2}) &= I-u^{2}Q - B_{0}(I-Au+Qu^{2})\\ & = Au-2Qu^{2}. \end{align*} \end{proof} \begin{Lemma}\label{lem:Lemma3} Let $f:u\in B_{\eps}\equiv \{u\in{\mathbb C}: |u|<\eps\} \mapsto f(u)\in{\mathcal N}(X,\G)$, be a $C^{1}$- function, $f(0)=0$, and $\|f(u)\|<1$, for all $u\in B_{\eps}$. Then $$ Tr_{\G}\left( -\frac{d}{du} \log(I-f(u)) \right) = Tr_{\G}\left( f'(u)(I-f(u))^{-1}\right). $$ \end{Lemma} \begin{proof} To begin with, $-\log(I-f(u)) = \sum_{n\geq 1} \frac{1}{n} f(u)^{n}$, converges in operator norm, uniformly on compact subsets of $B_{\eps}$. Moreover, $$ \frac{d}{du} f(u)^{n} = \sum_{j=0}^{n-1} f(u)^{j}f'(u) f(u)^{n-j-1}. $$ Therefore, $-\frac{d}{du} \log(I-f(u)) = \sum_{n\geq 1} \frac{1}{n} \sum_{j=0}^{n-1} f(u)^{j}f'(u) f(u)^{n-j-1}$, so that \begin{align*} Tr_{\G}\biggl( -\frac{d}{du} \log(I-f(u)) \biggr) & = \sum_{n\geq 1} \frac{1}{n} \sum_{j=0}^{n-1} Tr_{\G}\left( f(u)^{j}f'(u) f(u)^{n-j-1} \right) \\ & = \sum_{n\geq 1} Tr_{\G}( f(u)^{n-1}f'(u) ) \\ & = Tr_{\G}\biggl( \sum_{n\geq 0} f(u)^{n}f'(u) \biggr) \\ & = Tr_{\G}( f'(u)(I-f(u))^{-1} ), \end{align*} where we have used the fact that $Tr_{\G}$ is norm continuous. 
\end{proof} \begin{Cor} $$ Tr_{\G}\left( \sum_{m\geq 1} B_{m}u^{m} \right) = Tr_{\G}\left( -u\frac{d}{du} \log(I-Au+Qu^{2}) \right), \ |u|<\frac{1}{\alpha}. $$ \end{Cor} \begin{proof} It follows from Lemma \ref{lem:eq.for.B} $(iv)$ that \begin{align*} Tr_{\G}\biggl( \sum_{m\geq 1} B_{m}u^{m} \biggr) &= Tr_{\G}( (Au-2Qu^{2}) (I-Au+Qu^{2})^{-1} )\\ &= Tr_{\G} \Bigl( -u\frac{d}{du} \log(I-Au+Qu^{2}) \Bigr), \end{align*} where the last equality follows from the previous lemma applied with $f(u) := Au-Qu^{2}$. \end{proof} Observe that for the $L^2$-Euler characteristic of $X$ we have $$ \chi^{(2)}(X) := -\frac12 Tr_{\G}(Q-I) = |V(B)| - |E(B)| = \chi(B), $$ where $\chi(B)$ is the Euler characteristic of the quotient graph $B=X/\G$. \begin{Thm}[Determinant formula] $$ \frac{1}{Z_{X,\G}(u)} = (1-u^{2})^{-\chi(B)} \text{det}_{\G}(I-Au+Qu^{2}), \ \text{ for } |u|<\frac{1}{\alpha}. $$ \end{Thm} \begin{proof} \begin{align*} Tr_{\G}\biggl( \sum_{m\geq 1} B_{m}u^{m} \biggr) &= \sum_{m\geq 1} Tr_{\G}( B_{m} ) u^{m}\\ &= \sum_{m\geq 1} N_{m}u^{m} - \sum_{k\geq 1} Tr_{\G}(Q-I) u^{2k} \\ &= \sum_{m\geq 1} N_{m}u^{m} - Tr_{\G}(Q-I) \frac{u^{2}}{1-u^{2}}, \end{align*} where the second equality follows by Lemma \ref{lem:eq.for.B} $(iii)$. Therefore, \begin{align*} u\frac{d}{du} \log Z_{X,\G}(u) & = \sum_{m\geq 1} N_{m}u^{m} \\ &= Tr_{\G}\left( -u\frac{d}{du} \log(I-Au+Qu^{2}) \right) - \frac{u}{2}\frac{d}{du} \log(1-u^{2}) Tr_{\G}(Q-I) \end{align*} so that, dividing by $u$ and integrating from $u=0$ to $u$, we get $$ \log Z_{X,\G}(u) = - Tr_{\G}\left( \log(I-Au+Qu^{2}) \right) -\frac12 Tr_{\G}(Q-I) \log(1-u^{2}), $$ which implies that, for $|u|<\frac{1}{\alpha}$, we have $$ \frac{1}{Z_{X,\G}(u)} = (1-u^{2})^{\frac12 Tr_{\G}(Q-I)} \cdot\exp Tr_{\G} \log(I-Au+Qu^{2}). $$ \end{proof} \section{Functional equations} In this final section, we obtain several functional equations for the Ihara zeta functions of $(q+1)-$regular graphs, $i.e.$ graphs with $\deg(v)=q+1$, for any $v\in VX$. 
The various functional equations correspond to different ways of completing the zeta functions, as is done in \cite{StTe} for finite graphs. \begin{Lemma} \label{prop:holomorphy} Let $X$ be a $(q+1)$-regular graph and $\D(u) := (1+qu^2)I-uA$. Then \itm{i} $\chi^{(2)}(X)=\chi(B)= |V(B)|(1-q)/2\in{\mathbb Z}$, \itm{ii} $\displaystyle Z_{X,\G}(u) = (1-u^2)^{\chi(B)} \text{det}_{\G}(\D(u))^{-1}$, for $|u| < \frac{1}{q}$, \itm{iii} by using the determinant formula in $(ii)$, $Z_{X,\G}$ can be extended to a function holomorphic at least in the open set $$ \O:={\mathbb R}^2 \setminus \left(\set{(x,y)\in{\mathbb R}^2: x^2+y^2=\frac{1}{q}} \cup \set{(x,0)\in{\mathbb R}^2: \frac{1}{q}\leq |x|\leq 1 }\right). $$ See figure \ref{fig:Omega}. \begin{figure}[ht] \centering \psfig{file=Omega.eps,height=1.5in} \caption{The open set $\Omega$} \label{fig:Omega} \end{figure} \itm{iv} $\displaystyle \text{det}_\G\Bigl(\D ( \frac{1}{qu})\Bigr) = (qu^2)^{-|VB|} \text{det}_\G(\D(u))$, for $u\in\O\setminus \set{0}$. \end{Lemma} \begin{proof} $(i)$ This follows by a simple computation. $(ii)$ This follows from $(i)$. $(iii)$ Let us observe that $$ \sigma} \def\S{\Sigma(\D(u)) = \set{1+qu^2-u\lambda} \def\La{\Lambda: \lambda} \def\La{\Lambda\in\sigma} \def\S{\Sigma(A)} \subset \set{1+qu^2-u\lambda} \def\La{\Lambda: \lambda} \def\La{\Lambda\in[-d,d]}. $$ It follows that $0\not\in\text{conv}\,\sigma} \def\S{\Sigma(\D(u))$ at least for $u\in{\mathbb C}$ such that $1+qu^2-u\lambda} \def\La{\Lambda\neq0$ for $\lambda} \def\La{\Lambda\in[-d,d]$, that is for $u=0$ or $\frac{1+qu^2}{u}\not\in[-d,d]$, or equivalently, at least for $u\in\O$. The rest of the proof follows from Corollary \ref{cor:det.analytic}. $(iv)$ This follows by Proposition \ref{properties} $(i)$ and the fact that $Tr_\G(I_V) = |VB|$. \end{proof} \begin{Prop} [Functional equations] Let $X$ be $(q+1)$-regular. 
Then, for all $u\in\O$, we have \itm{i} $\La_{X,\G}(u) := (1-u^{2})^{-\chi(B)}(1-u^{2})^{|VB|/2} (1-q^{2}u^{2})^{|VB|/2} Z_{X,\G}(u) = -\La_{X,\G}\Bigl(\frac{1}{qu}\Bigr)$, \itm{ii} $\xi_{X,\G}(u) := (1-u^{2})^{-\chi(B)} (1-u)^{|VB|} (1-qu)^{|VB|} Z_{X,\G}(u) = \xi_{X,\G}\Bigl(\frac{1}{qu}\Bigr)$, \itm{iii} $\Xi_{X,\G}(u) := (1-u^{2})^{-\chi(B)} (1+qu^{2})^{|VB|} Z_{X,\G}(u) = \Xi_{X,\G}\Bigl(\frac{1}{qu}\Bigr)$. \end{Prop} \begin{proof} $(i)$ \begin{align*} \La_{X}(u) & = (1-u^{2})^{|VB|/2} (1-q^{2}u^{2})^{|VB|/2} \text{det}_{\G}(\D(u))^{-1} \\ &= u^{|VB|}\Bigl(\frac{q^{2}}{q^{2}u^{2}}-1 \Bigr)^{|VB|/2} (qu)^{|VB|} \Bigl(\frac{1}{q^{2}u^{2}}-1 \Bigr)^{|VB|/2} \frac{1}{(qu^{2})^{|VB|}}\text{det}_{\G}\Bigl( \D(\frac{1}{qu}) \Bigr)^{-1}\\ &= -\La_{X}\Bigl(\frac{1}{qu}\Bigr). \end{align*} $(ii)$ \begin{align*} \xi_{X}(u) & = (1-u)^{|VB|} (1-qu)^{|VB|} \text{det}_{\G}(\D(u))^{-1}\\ & = u^{|VB|} \Bigl( \frac{q}{qu} -1 \Bigr)^{|VB|} (qu)^{|VB|} \Bigl( \frac{1}{qu} -1 \Bigr)^{|VB|} \frac{1}{(qu^{2})^{|VB|}}\text{det}_{\G}\Bigl( \D(\frac{1}{qu}) \Bigr)^{-1}\\ & = \xi_{X}\Bigl(\frac{1}{qu}\Bigr). \end{align*} $(iii)$ \begin{align*} \Xi_{X}(u) & = (1+qu^{2})^{|VB|} \text{det}_{\G}(\D(u))^{-1} \\ &= (qu^{2})^{|VB|} \Bigl( \frac{q}{q^{2}u^{2}} +1 \Bigr)^{|VB|} \frac{1}{(qu^{2})^{|VB|}}\text{det}_{\G}\Bigl( \D(\frac{1}{qu}) \Bigr)^{-1}\\ &= \Xi_{X}\Bigl(\frac{1}{qu}\Bigr). \end{align*} \end{proof} \begin{ack} The second and third named authors would like to thank respectively the University of California, Riverside, and the University of Roma ``Tor Vergata'' for their hospitality at different stages of the preparation of this paper. \end{ack}
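Although the determinant formula is stated here for the infinite periodic setting, it reduces to the classical Ihara formula when the group is trivial, in which case $Tr_{\G}$ and $\text{det}_{\G}$ are the ordinary trace and determinant. The following numerical sketch (an illustration only, not part of the formal development; the graph, the truncation order and the evaluation point are arbitrary choices) checks Lemma \ref{lem:eq.for.A} $(i)$ and the determinant formula on the $4$-cycle, a $2$-regular graph with $Q=I$ and $\chi(B)=0$:

```python
import numpy as np

# 4-cycle C_4: 2-regular, so Q = D - I = I and chi(B) = |VB| - |EB| = 0.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
n = 4
I = np.eye(n)
Q = I

u = 0.15  # inside |u| < 1/alpha, with alpha = (d + sqrt(d^2 + 4d))/2 ~ 2.73

# A_m via A_0 = I, A_1 = A, A_2 = A^2 - (Q + I), A_m = A_{m-1}A - A_{m-2}Q.
M = 200
Am = [I, A, A @ A - (Q + I)]
for m in range(3, M):
    Am.append(Am[-1] @ A - Am[-2] @ Q)

# Lemma (i): (sum_m A_m u^m)(I - Au + Qu^2) = (1 - u^2) I  (truncated series).
S = sum(Am[m] * u**m for m in range(M))
assert np.allclose(S @ (I - A * u + Q * u**2), (1 - u**2) * I)

# Determinant formula: with log Z(u) = sum_{m>=1} N_m u^m / m, where
# N_m = Tr(B_m) + Tr(Q - I) for even m and N_m = Tr(B_m) for odd m,
# one should find Z(u)^{-1} = (1 - u^2)^{-chi} det(I - Au + Qu^2).
Bm = [Am[m] - (Q - I) @ sum(Am[m - 2 * k] for k in range(1, m // 2 + 1))
      if m >= 2 else Am[m] for m in range(M)]
Nm = [np.trace(Bm[m]) + (np.trace(Q - I) if m % 2 == 0 else 0.0)
      for m in range(M)]
logZ = sum(Nm[m] * u**m / m for m in range(1, M))
Z_det = 1.0 / np.linalg.det(I - A * u + Q * u**2)  # chi(B) = 0 for the cycle
assert abs(np.exp(logZ) - Z_det) < 1e-10
```

For the $n$-cycle the zeta function is known in closed form, $Z(u)=(1-u^{n})^{-2}$, which the computed determinant reproduces.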
\section{Introduction} Neutron star low mass X-ray binaries (LMXB) are divided into two classes based upon the different morphology of their tracks in colour-colour and colour-intensity diagrams and their correlated timing behaviour. Six of the brightest LMXBs are classified as Z-type since they trace a Z-shaped pattern on such diagrams. The three branches of the Z are named Horizontal (HB), Normal (NB) and Flaring (FB) branches, respectively, while atoll-type sources are characterized by an ``island'' and a ``banana'' state \cite{Has89}. The spectral state of LMXBs is most easily determined by using X-ray colour-colour and colour-intensity diagrams; in fact, track morphology is due to spectral variations on timescales of weeks, days or hours. Tracks are commonly interpreted as an accretion sequence \cite{Hasetal89}, since it is thought that the mass accretion rate $\dot M$ is the parameter governing spectral variations along the track (see however \cite{bcbc03}). Secular shifts and shape changes of the Z track were reported for several Z sources such as Cyg X-2 \cite{Kuul96}, whose shifts were interpreted in terms of occultation of the emitting region by a precessing accretion disk, or very recently for LMC X-2 \cite{shk03}, in which secular shifts of 2.5-10\% are seen. Conversely, significant secular variations of the track of Scorpius X-1 were never observed \cite{dvdk00}. Scorpius X-1 is the brightest extra-solar X-ray source and is an LMXB of the Z-type showing a high level of activity in the X-ray, optical and radio bands, where radio jets were recently observed. In the case of Scorpius X-1, the complete Z track is generally traced out in a few hours to a day. Its timing and spectral properties were studied using data from many observatories such as EXOSAT \citep{dvdk00} and RXTE \citep{bcbc03, BrGeFo03}. This is the first systematic study of Scorpius X-1 using BeppoSAX Wide Field Cameras (WFC).
WFCs are two coded mask instruments (WFC1 \& WFC2) with a wide field of view of $40\times40$ deg, pointing away from each other and perpendicular to the Narrow Field Instruments. In the large majority of cases WFCs observed random sky positions during primary NFI observations, giving the opportunity to monitor a large number of sources over the full six-year satellite lifetime. Here we select fifty-five observations for a total monitoring duration of more than 600 hr and a total net exposure of the source of about 200 hr. \section{Colour-Colour and Colour-Intensity Diagrams} In order to obtain colour-colour and colour-intensity diagrams we define the total intensity as the count rate in the 1.7-19.1 keV band, the soft colour as the ratio [3.5-6.4 keV / 2.0-3.5 keV] and the hard colour as the ratio [9.5-16.4 keV / 6.4-9.5 keV]. \begin{figure}[h] \includegraphics[height=.27\textheight, angle=-90]{ccCrabRland.ps} \hspace{1cm} \includegraphics[height=.27\textheight, angle=-90]{ccCrabland.ps} \caption{Crab colour-colour diagram over six years; each point represents an observation. Left panel shows raw data; the separation between WFC1 (upper left clump of points) and WFC2 (lower right clump) observations is apparent. Right panel shows offset-corrected scaled data; the diagram dimensions are on the same scale as the Scorpius X-1 colour-colour diagram in the right panel of Figure \ref{ccScoraw} for comparison.} \label{ccCrabraw} \end{figure} \begin{figure}[h] \includegraphics[height=.27\textheight, angle=-90]{ccRaw1.ps} \hspace{1cm} \includegraphics[height=.27\textheight, angle=-90]{ccRaw2.ps} \caption{Scorpius X-1 colour-colour diagram over six years. Left panel shows raw data; the separation between WFC1 (upper left clump of tracks) and WFC2 (lower right clump) observations is apparent. Right panel shows offset-corrected scaled data.
} \label{ccScoraw} \end{figure} Since we want to compare, on these diagrams, observations pointed off the source by different offset angles and at different epochs, the detector aging and the spatial response variations must be taken into account by scaling data to a common reference condition, for instance to the center of the detector at a certain epoch. Scaling factors were calculated taking into account the response of each camera at the source position in the field of view (FOV) for each epoch; count rates and hardness ratios were then scaled to the central quadrant of WFC1 at the epoch of January 2002. These scaling factors change the intensity by 1\% at most, while for the soft colour and the hard colour the maximum change is 15\% and 8\%, respectively. Systematic residuals were checked using observations of the Crab Nebula and an empirical correction to systematic residual effects was estimated \citep{pat1} as a function of the off-axis angle. The maximum empirical corrections found are 13\%, 7\%, 2\% for the intensity, hard colour and soft colour, respectively, over the 0 to 20 deg range. Therefore corrections are applied to intensity and hard colour only. Figure \ref{ccCrabraw} shows the Crab Nebula colour-colour diagram. A clear separation between WFC1 and WFC2 data and a noticeable spread within each camera data set is seen in the uncorrected diagram (left panel), while points from both WFCs overlap in the corrected diagram (right panel). Average Crab colours are $\langle\mathrm{soft}\rangle=1.34$ and $\langle\mathrm{hard}\rangle=0.89$, with spreads of $3\sigma/\langle\mathrm{soft}\rangle=5\%$ and $3\sigma/\langle\mathrm{hard}\rangle=7\%$. \begin{figure}[h] \includegraphics[scale=.3]{si.ps} \hspace{1cm} \includegraphics[scale=.3]{double.ps} \caption{Scorpius X-1 corrected colour-intensity diagrams over six years. Parallel tracks appear due to large intensity variations. Arrows indicate line spacing of $\Delta I = 10$-$13$ cts/cm$^2$/s.
Left panel shows the soft colour versus intensity; the dashed lines mark, from left to right, the upper envelope to all data, the lower envelope to the bulk of ``normal'' observations and the lower envelope to all data. The right panels show the hard colour versus intensity; the lower panel includes all observations while the upper panel shows only two normal and two shifted observations. The vertical dashed lines mark, from left to right, the two most widely intensity-shifted (almost vertical) horizontal branches and the tips of the two most widely intensity-shifted flaring branches. } \label{ci} \end{figure} Figure \ref{ccScoraw} shows Scorpius X-1 colour-colour diagrams over six years. In the left panel a clear separation appears between the Z-tracks measured with WFC1, located in the upper left part of the diagram, and those measured with WFC2, located in the lower right part. The right panel shows scaled and offset-corrected data; a spread is still present in this diagram, and vertex points can differ by as much as $\Delta$soft=6\% and $\Delta$hard=14\%. Thus observations suggest that secular variations are present. We checked that about 10\% of the tracks in the sample are shifted with respect to the remainder of ``normal'' observations. Secular shifts are more evident in the hardness-intensity diagrams in Figure \ref{ci}, where some tracks are characterized by softer hardness ratios and higher total intensity. Large intensity variations produce parallel tracks in both diagrams. The soft colour is well correlated with intensity (see left panel in Figure \ref{ci}) but tracks, which look like slanting lines, may differ by up to 30\% in intensity at equal soft colour. Also in the right panels of Figure \ref{ci} the tips of the flaring branches may differ by 30\% and the horizontal branches by 50\% in intensity, while in the Crab hardness-intensity diagram (not shown) the largest percent variation for intensities is 16\%.
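For illustration, the colour computation and multiplicative rescaling described above can be sketched as follows. This is a hypothetical sketch, not the actual WFC reduction pipeline; the function name, band rates and scale factors are made-up numbers:

```python
def colours(r_20_35, r_35_64, r_64_95, r_95_164,
            soft_scale=1.0, hard_scale=1.0):
    """Soft colour = [3.5-6.4 keV]/[2.0-3.5 keV] and
    hard colour = [9.5-16.4 keV]/[6.4-9.5 keV], each multiplied by a
    factor rescaling the ratio to the common reference condition
    (central quadrant of WFC1, epoch January 2002)."""
    soft = (r_35_64 / r_20_35) * soft_scale
    hard = (r_95_164 / r_64_95) * hard_scale
    return soft, hard

# hypothetical band count rates (cts/s) with a 5% soft-colour correction
soft, hard = colours(30.0, 42.0, 18.0, 15.0, soft_scale=1.05)
```

One corrected (soft, hard) pair like this corresponds to a single point in the colour-colour diagrams discussed above.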
It is known that on long timescales the total luminosity of LMXBs may vary secularly and that temporal variability indices like the QPO frequency and spectral parameters form a set of parallel lines when plotted versus the total luminosity, each line reflecting the short-term correlation with luminosity and the offset reflecting the average luminosity difference in different epochs (see \cite{MvKF01, vanderkli01}). A possible explanation for the ``parallel tracks'' phenomenon was proposed by \cite{vanderkli01}. \section{Spectra} Spectral analysis is performed in the energy range 2.0-22 keV along the Z-track of two observations, the ``normal'' observation 20143001 of January 25 1997 and the ``most shifted'' observation 20274001 of March 13 1997. Figure \ref{boxes} shows corrected colour-colour diagrams of both observations with boxes delimiting the data used to make spectra. Our best fit model is made up of a Comptonization component, a Gaussian component with a fixed width $\sigma$=0.5 keV to model an iron line component and an absorbing column density set to the Galactic value N$_{H}$ = 0.3$\times$10$^{22}$ cm$^{-2}$. A 2\% systematic error was assumed for all spectra. \begin{figure}[bh] \includegraphics*[height=.28\textheight]{20274001Num_cc97.ps} \hspace{1cm} \includegraphics*[height=.28\textheight]{20143001Num_cc97.ps} \caption{Scorpius X-1 corrected colour-colour diagrams of observations 20274001 (left panel) and 20143001 (right panel). The overlaid boxes delimit the data used to make spectra and are numbered from HB-NB to FB.} \label{boxes} \end{figure} The general behaviour of spectral changes is the same along both tracks. As an example spectra of observation 20274001 and best fit models are shown in the upper panel of Figure \ref{spec} while lower panel shows residuals in terms of sigmas with error bars of size one. 
Residuals are close to zero in the low energy range of both normal and ``shifted'' observations; consequently, our choice of the Galactic value for N$_{H}$ does not seem to affect the results. Residuals at energies higher than 18 keV suggest that a further hard tail component is needed. \begin{figure}[ht] \includegraphics*[height=.35\textheight]{spectrum-del.ps} \caption{Spectral fits to spectra along the track of observation 20274001. Upper panel shows data and fitted models; labels indicating model components are positioned in the corresponding component energy ranges. Lower panel shows residuals in terms of sigmas; arrows indicate where expected (or unexpected!) variations occur. } \label{spec} \end{figure} \begin{figure}[b] \includegraphics*[height=.24\textheight]{kttaux2-n.ps} \hspace{.5cm} \includegraphics*[height=.24\textheight]{kttox2-n.ps} \caption{90\% $\chi^2$ confidence level contour plot for the Comptonization component parameters. The hot plasma temperature (kT) is reported as a function of the optical depth (tau) in the left panel and of the seed photon temperature (T0) in the right panel. Contour numbers correspond to spectrum boxes in Figure \ref{boxes}; solid lines belong to observation 20274001 while gray dotted lines belong to observation 20143001. } \label{contours} \end{figure} This is also supported by the comparison with Crab Nebula spectra (not shown), which show no such systematic distortion and whose data-over-model ratios are randomly distributed around unity. In Figure \ref{contours} we report, for both observations, the 90\% $\chi^2$ confidence level contour plots for two Comptonization component parameters. The hot plasma temperature is plotted as a function of the optical depth in the left panel and of the seed photon temperature in the right panel.
There is a strong correlation between the plasma temperature and the optical depth: the temperature decreases by about 20\% from the HB-NB to the FB, while the optical depth increases by 50\% from the vertex (V), namely the transition point between NB and FB, to the FB. We interpret the plasma temperature decrease from HB-NB to FB in terms of Compton cooling due to increasing mass accretion rate, and therefore to increasing seed photon number. The $\dot M$ increase is accompanied by a growing number of plasma particles in the corona, which contributes to the optical depth. Luminosity variations are also interpreted as an effect of the $\dot M$ increase; however, we note that the total luminosity follows the intensity behaviour seen in colour-intensity plots, where the total count rate increases on the FB but slightly decreases from the HB-NB to the V. The optical depth behaves like the total luminosity. The luminosity along the track of the shifted observation is systematically higher by 10-20\%, but the ranges of variation of the optical depth and the plasma temperature also seem to be affected. Comparison between these two parameters on corresponding branches (cf.\ Figure \ref{contours}) shows that their ranges move to lower values by as much as 10\%. This decrease is consistent with the softer hardness ratios found in the previous paragraph. Thus the two parameters seem to correlate with luminosity changes as the average luminosity varies in different epochs. \section{Conclusions} Comparison of Scorpius X-1 colour-colour and hardness-intensity diagrams of fifty-five observations shows secular shifts of the tracks in 10\% of the sample. Spectra from two observations are fitted with a constant column density, a Comptonization component and an iron line component. The spectral analysis shows large spectral changes from the HB-NB to the FB. The plasma temperature decreases along the track while the optical depth increases together with the total luminosity.
The interpretation of these changes in terms of inverse Compton cooling confirms the widespread idea that it is the mass accretion rate $\dot M$ that varies along the track. The analysed observations show a systematic luminosity difference and shifted tracks. Comparison between spectral parameters on corresponding branches suggests that the secular shift is also accompanied by correlated secular changes in the optical depth and plasma temperature.
\section{Introduction} Music source separation is an active research area in the context of audio signal processing and machine learning. The task is to estimate musical sources from observed musical mixture signals. One of the biggest challenges in music source separation is the estimation of singing voice from monaural (i.e. single channel) mixture signals~\cite{sisec17}. That is due to the high overlap that the sources exhibit in various signal representations~\cite{giannoulis11}. Most approaches for source separation use time-frequency masking~\cite{mim17}. Modern state-of-the-art methods estimate the mask with neural networks of various architectures. Given the strong and long-term temporal patterns and structures of music (e.g. rhythm, beat/tempo, melody), architectures that can model long time dependencies, e.g. recurrent neural networks (RNNs), seem to be a great fit for the music source separation task. However, local structures usually dominate the learning signal, because the RNN focuses on the most recent information~\cite{serdyuk:twinnet}; this is a known issue with RNNs that has been pointed out in many seminal works, e.g.~\cite{bengio:nn:1994,hochreiter:lsm:1997}. As a result, the long-term temporal patterns of music (e.g. tempo/beat, melody, and rhythm) might not be modeled correctly by an RNN, because the learning signal will be heavily influenced by the local structures~\cite{serdyuk:twinnet}. This means that the RNN will focus more on the local structures instead of the long-term temporal patterns~\cite{serdyuk:twinnet}. The Twin Network (TwinNet)~\cite{serdyuk:twinnet} is an effective way to regularize generative RNNs when the generation is conditioned on some input (e.g. past content), making the RNN also take into account the expected future content. This technique uses a second RNN which generates the same output in the backward direction and ensures that the hidden states of the two networks are close.
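The core of this regularizer can be illustrated with a minimal numerical sketch. The shapes are made-up, the trainable map $g$ is reduced to a simple affine function, and this is not the implementation of~\cite{serdyuk:twinnet}: it only shows the penalty that pushes (a transform of) the forward hidden states towards the backward hidden states at co-temporal timesteps.

```python
import numpy as np

def twinnet_penalty(h_fwd, h_bwd, W_g, b_g):
    """L2 penalty between g(h_t^forward) and h_t^backward.
    h_fwd, h_bwd: (T, H) hidden-state sequences of the two RNNs.
    W_g, b_g: parameters of the trainable affine map g. In the original
    scheme the gradient of this penalty is not propagated into the
    backward network; here we only evaluate the penalty value."""
    g_h = h_fwd @ W_g + b_g                     # g(h_t^f), shape (T, H)
    return np.mean(np.sum((g_h - h_bwd) ** 2, axis=1))

rng = np.random.default_rng(0)
T, H = 10, 8                                    # toy sequence length / width
h_f = rng.standard_normal((T, H))
h_b = rng.standard_normal((T, H))
loss = twinnet_penalty(h_f, h_b, np.eye(H), np.zeros(H))
```

The penalty is added to the usual training loss of the forward (generative) network, so minimizing it encourages each forward state to encode the future information that the backward state summarizes.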
In this paper we present a method that performs music source separation using the TwinNet and is capable of modeling long-term structures of music. We evaluate the proposed method by focusing on singing voice separation (i.e. separating the singing voice from musical mixtures). This work builds upon the method for masking and denoising simultaneously~\cite{mim17}. The main contributions of this paper are: \begin{enumerate}[i] \item We present a method that improves the previous objective SOTA SDR and SIR by 0.37~dB and 0.23~dB, respectively. Our method is less computationally intensive compared to the one presented in~\cite{mim17}, which uses the recurrent inference (an iterative method which allows deep learning architectures to have stochastic depth \cite{stoch_depth}); \item We show that the TwinNet-based regularization can be used for enhancing the results obtained by the MaD architecture. \end{enumerate} The rest of the paper is organized as follows. In Section~\ref{sec:overview} we give a brief overview of related work on source separation tasks and approaches. The proposed method is thoroughly presented in Section~\ref{sec:proposedmethod}. The experimental procedure is described in Section~\ref{sec:experiments} and the obtained results are reported and discussed in Section~\ref{sec:results}. Section~\ref{sec:conclusions} concludes this work. \section{Related work} \label{sec:overview} A common approach to estimate individual sources from monaural mixtures is to apply time-varying filters to the mixture signal~\cite{ps_masks}. The most straightforward way to derive and apply these filters is to treat audio signals as wide-sense stationary and compute a time-frequency representation of the signals via the short-time Fourier transformation (STFT).
Then, the source estimate can be obtained by: \begin{equation}\label{eq:masking} {\mathbf{\hat{Y}}^{j}} = {\mathbf{Y}} \odot {\mathbf{M}^j} \text{,} \end{equation} where ${\mathbf{Y}}$ and ${\mathbf{\hat{Y}}^{j}} \in \mathbb{C}^{M \times N}$ are the complex-valued STFT representations of the mixture signal vector ${\mathbf{x}}$ and the $j$-th source estimated signal ${\mathbf{\hat{x}}^j}$ respectively, with $M$ overlapping time frames and $N$ frequency sub-bands. ${\mathbf{M}^j} \in \mathbb{R}_{\geq 0}^{M \times N}$ is the $j$-th source-dependent filter, which we will refer to as \emph{mask}, and $\odot$ denotes the Hadamard product. The question that remains open is how to compute ${\mathbf{M}^j}$. In the case that all the $j$ sources are known \textit{a priori}, the source dependent masks are computed by employing ratios of the known sources' time-frequency representations~\cite{ps_masks,liutkus_alpha,voran17}. Selecting an appropriate method for mask computation when all the sources are known is outside the scope of this work. Interested readers are kindly referred to the following works~\cite{ps_masks, liutkus_alpha, voran17}. However, it is important to note that the mask computation is an \textit{open} optimization problem~\cite{fitz_masks} and in many cases assumptions about the source additivity~\cite{liutkus_alpha} and the phase dependencies~\cite{ps_masks, voran17} have to be made for many mask computations. When the sources in an observed mixture are not known \textit{a priori}, supervised approaches relying on deep learning based optimization have yielded state-of-the-art results~\cite{sisec17}. Deep learning approaches for singing voice separation can be distinguished in three categories. 
The first category includes methods that train a deep neural network (DNN) to predict ${\mathbf{M}^j}$, conditioned on features computed using ${\mathbf{Y}}$~\cite{grais16}, such as the magnitude spectrogram $\mathbf{V} = |{\mathbf{Y}}|$, where $|\cdot|$ denotes the matrix entry-wise absolute value operator. During training (when all sources are known for a given dataset), the pre-computation of the target ${\mathbf{M}^j}$ can rely on the ideal ratio mask (IRM, ${\mathbf{M}^j}_\text{IRM}\in[0,1]$), defined as \begin{equation} \mathbf{M}^{j}_\text{IRM}=\frac{\mathbf{V}^{j}}{\sum\limits_{j' \in J}\mathbf{V}^{j'}}\text{, where }\label{eq:irm-mask} \end{equation} \noindent $J$ is the total number of sources in a mixture. As can be seen, the form of the denominator implies the strong assumption that all the sources are additive. It must be noted here that the ideal amplitude mask (IAM, ${\mathbf{M}^j}_\text{IAM}\in\mathbb{R}_{\geq0}$), a usual alternative way of computing ${\mathbf{M}^j}$, defined as \begin{equation} \mathbf{M}^{j}_\text{IAM}=\frac{\mathbf{V}^{j}}{\mathbf{V}}\text{, } \end{equation} \noindent is considered inappropriate for deep learning approaches, due to the lack of an upper limit on the mask~\cite{ps_masks}. As shown in~\cite{mim17_mlsp} and for tasks like singing voice separation, deep learning approaches that are trained to predict masks can be outperformed by methods that are not relying on pre-computed masks. The latter methods are discussed in the following paragraphs. The second category follows the idea that was introduced in denoising autoencoders (DAEs)~\cite{vincent_den,bengio_den}. Specifically, DNNs are trained to recover the target source magnitude spectrogram $\mathbf{V}^{j}$ from a corrupted version of $\mathbf{V}^{j}$. The corrupted version of $\mathbf{V}^{j}$ is assumed to be the observed mixture magnitude spectrogram $\mathbf{V}$~\cite{uhl15, mim16}.
For such methods, it was observed that the performance of the separation is highly dependent on post-processing steps, involving either the fusion of multiple trained DNNs \cite{uhl17} and/or the masking of the mixture signal using the outcome of the DNNs~\cite{mim17_mlsp, uhl15, uhl17, huang, takahashi17}. Aiming to encapsulate the process of masking into deep learning optimization routines, the approaches of the third category introduced skip connections to the DNNs. The skip connections propagate the mixture signal $\mathbf{V}$ through two information paths. The first information path is the typical forward propagation of $\mathbf{V}$ through the layers of the DNNs and the second information path allows $\mathbf{V}$ to directly reach the output of the DNNs, which is used to mask $\mathbf{V}$ using Eq.~(\ref{eq:masking}), yielding the final DNN estimate. Specifically, the work in~\cite{huang} employed deep RNNs trained to yield magnitude estimates for all the sources concurrently (i.e. $\hat{\mathbf{V}}^{j}\;\forall\,j$). The magnitude estimates are then given to a deterministic function involving the computation of the ratio of the estimated magnitudes. The ratio outputs the mask that is applied to $\mathbf{V}$ and is encapsulated through the training procedure~\cite{huang}. That approach does not allow the deep RNNs to learn the masking process, but rather to output magnitude estimates that can be used to compute the mask. With the main ambition to also learn the masking process, the work of~\cite{mim16} proposed the usage of highway networks~\cite{hw15} that allow $\mathbf{V}$ to be masked directly by the output of a neural network layer. An extension to temporal sequences employing gated recurrent units (GRU)~\cite{bahdanau15} was presented in~\cite{mim17_mlsp}, where the term skip-filtering connections was introduced. In~\cite{jannson17} a deep, ladder-structured, convolutional neural network (CNN) was presented.
The output of the CNN was used to mask the input $\mathbf{V}$ to the CNN to provide magnitude estimates of the singing voice. The limitations of the above methods are that the highway networks of~\cite{mim16} do not compute any latent variables that can be used for denoising~\cite{vincent_den}. On the other hand, CNN architectures are prone to learn statistical irregularities of the data~\cite{jo17_cnns}, making these two architectures not robust against small data perturbations~\cite{vincent_den, jo17_cnns}, while the GRU encoder-decoder of~\cite{mim17_mlsp} is not robust against interferences from other sources concurrently active in the mixture signal~\cite{mim17}. To tackle these problems, the \textit{masker-denoiser} (MaD) architecture was introduced in~\cite{mim17}. This architecture builds upon~\cite{mim17_mlsp} by incorporating a sparse transformation and a stochastic-depth optimization~\cite{stoch_depth} step that are used to generate the mask applied to $\mathbf{V}$ (i.e. the masker). As a final step, a DAE with skip-filtering connections (i.e. the denoiser) is responsible for eliminating remaining interferences from other music sources~\cite{mim17}. \begin{figure*}[!ht] \includegraphics[width=\textwidth]{TwinNet_Source_Separation_Architecture.png} \caption{Illustration of the proposed method. The parts in magenta color are used only during training.} \label{fig:method_arch} \end{figure*} Although the approach in~\cite{mim17} provided state-of-the-art results in deep learning based monaural singing voice separation, the quality of the decoded mask relies on stochastic optimization of a data-driven depth of the GRU decoder via recurrent inference~\cite{stoch_depth}. As a consequence, and as the results reported in the experimental procedure of~\cite{mim17} suggest, the recurrent inference imposes a computationally cumbersome optimization of the model, given the available data for singing voice separation.
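To make the masking operation that these architectures learn concrete, the following toy sketch applies an IRM-style mask through element-wise filtering, as in Eq.~(\ref{eq:masking}). The shapes and values are illustrative, not part of any of the cited systems, and the mixture magnitude is deliberately constructed as the sum of the source magnitudes, mirroring the additivity assumption behind Eq.~(\ref{eq:irm-mask}):

```python
import numpy as np

rng = np.random.default_rng(1)
M_frames, N_bins = 6, 5                        # toy spectrogram shape

# Known source magnitudes (training time): IRM of Eq. (2).
V_voice = rng.random((M_frames, N_bins))
V_accomp = rng.random((M_frames, N_bins))
irm = V_voice / (V_voice + V_accomp)           # in [0, 1] by construction

# Complex mixture STFT (additive magnitudes, random phase) and Eq. (1):
# element-wise (Hadamard) masking of the mixture.
phase = rng.uniform(0.0, 2.0 * np.pi, (M_frames, N_bins))
Y = (V_voice + V_accomp) * np.exp(1j * phase)
Y_voice_hat = Y * irm                          # \hat{Y}^j = Y  (.)  M^j
```

Under the additivity assumption built into this toy example, the masked magnitude recovers the voice magnitude exactly; in practice the network has to predict the mask from the mixture alone.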
To tackle that, we propose to replace the recurrent inference process by the recently developed Twin Network (TwinNet)~\cite{serdyuk:twinnet}. Similarly to a bidirectional RNN, the TwinNet employs a backward-running network; however, while bidirectional RNNs are limited to representation learning, the TwinNet can be applied to generative RNNs. First, the TwinNet adds to the original RNN a second recurrent network that is a copy of the original, with the exception that it generates the output in the backward direction. Second, the TwinNet adds a term to the loss function that depends on a trainable function of the backward-running network. This loss function pushes together the hidden states of the forward net and the backward net for co-temporal timesteps. This cost ensures that the hidden state of the forward network encodes information stored in the state of the backward network. In other words, the TwinNet cost encourages the RNN to anticipate the future encoded in the backward-running RNN, resulting in better modeling of both past and future context with the RNN. We use the TwinNet under the hypothesis that the time-frequency mask for singing voice separation should remain the same regardless of the direction (i.e. forward or backward in time) in which one chooses to traverse the sequence of frames of the time-frequency representation of the mixture signal. \section{Proposed method} \label{sec:proposedmethod} Our proposed method takes as an input the raw audio signal of a mixture of sources, and outputs the raw audio signal of the target source (i.e. the singing voice). The method uses a deep neural network architecture, which consists of two parts. The first part takes the mixture magnitude spectrogram of the input time-domain audio as an input, estimates the target source magnitude spectrogram from the mixture by predicting a time-frequency filter and applying it to the input, and outputs the estimated magnitude spectrogram of the target source.
The second part takes the output of the first part as an input, predicts and applies a denoising mask, and outputs a filtered version of its input. Since the input to the first part is the mixture signal and the output is an estimate of the signal of the target source, we frame the time-frequency filter of the first part as a time-frequency mask. Similarly, since the input to the second part is a representation of the target source and its output is the same representation of the same source, we frame the time-frequency filter of the second part as a denoising filter. The first part of our method is denoted as {\em the Masker} and the second as {\em the Denoiser}. The Masker and the Denoiser are neural network architectures based on DAEs~\cite{vincent_den}. According to the initial paper on DAEs~\cite{vincent_den}, DAEs try to learn stochastically the manifold of the clean data. Source separation can be understood as a process that transforms the samples of the corrupted data manifold (i.e. the mixture) into samples that reside in the manifold of clean data (i.e. the target source)~\cite{den_ss}. For audio data, this transformation can be understood as time-frequency masking~\cite{ps_masks}. We want to include in our optimization graph the generation of the mask and the denoising filter. Consequently, we set up our method in a way that the Masker predicts the mask that is applied to the input magnitude spectrogram, and the Denoiser predicts the values of the denoising filter that is applied to the output of the Masker. To do so, we employ the skip-filtering connections~\cite{mim17_mlsp}, which allow us to define the magnitude spectrogram of the target source as the target of the Masker and the Denoiser, while the output of the last neural network layer in the Masker and the Denoiser is the mask and the denoising filter, respectively. We claim that the direction in which one chooses to view the time-frequency representation of the mixture signal (i.e. 
forward or backward in time), does not affect the mask that is used in order to separate the singing voice from the mixture. Additionally, we hypothesize that a reverse traversal of the time-frequency representation of the mixture will make the Masker learn and anticipate the strong temporal patterns and structures of music. For these reasons, we use the recently proposed TwinNet for regularizing the Masker during training~\cite{serdyuk:twinnet}. Our method is illustrated in Figure~\ref{fig:method_arch} and thoroughly presented below. \subsection{Input pre-processing} The input to our method is the vector of audio samples of the mixture signal $\mathbf{x}=[x_{1}, x_{2}, \ldots , x_{N}],\,\, x_{n} \in [-1,\,1]$, sampled at 44.1 kHz. We transform the input mixture signal into a time-frequency representation by using the STFT. For the STFT, we use overlapping frames of $N=2049$ samples ($\approx46$ milliseconds), segmented using the Hamming window function and zero-padded to $N'=4096$ samples. The hop size is set to 384 samples ($\approx9$ milliseconds). After the STFT, we retain only the first $N$ frequency sub-bands (i.e. up to the Nyquist frequency, including the DC term), resulting in the time-frequency representation of $\mathbf{x}$, $\mathbf{Y} \in \mathbb{C}^{M\times N}$, with $\mathbf{V} \equiv |\mathbf{Y}|$ being the magnitude of $\mathbf{Y}$. From $\mathbf{V}$, we create overlapping subsequences of length $T$ that overlap by $2L$ frames, in order to use context information from the previous ($L$) and next (again $L$) frames. This results in $B=\lceil{M/T}\rceil$ sequences of the form $\mathbf{V}_\text{in} \in \mathbb{R}^{T\times N}_{\geq0}$ for the whole input signal $\mathbf{x}$, where $\lceil\cdot\rceil$ is the ceiling function. Each $\mathbf{V}_\text{in}$ is used as an input to the next part of our method, which is the Masker. 
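As a rough illustration of this segmentation, the following NumPy sketch splits a magnitude spectrogram into $B=\lceil M/T \rceil$ overlapping subsequences. The function name and the zero-padded tail handling are ours; the exact indexing in the actual implementation may differ.

```python
import numpy as np

def make_subsequences(v, t=60, l=10):
    """Split a magnitude spectrogram v of shape (M, N) into
    B = ceil(M / t) subsequences of t frames that overlap by 2 * l
    frames, zero-padding the tail. Illustrative helper only."""
    m, n = v.shape
    hop = t - 2 * l                  # each subsequence adds T' = T - 2L new frames
    b = int(np.ceil(m / t))          # number of subsequences, as in the text
    out = np.zeros((b, t, n))
    for i in range(b):
        seg = v[i * hop:i * hop + t]
        out[i, :seg.shape[0]] = seg  # shorter tail segments are zero-padded
    return out
```

For instance, a spectrogram with $M=100$ frames and the paper's $T=60$, $L=10$ yields two subsequences whose frames overlap by $20$.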
\subsection{The Masker} The Masker consists of a frequency trimming process, a bi-directional recurrent neural network (Bi-RNN) encoder (RNN\textsubscript{enc}), a forward RNN decoder (RNN\textsubscript{dec}), a sparsifying transform which is implemented with a rectified linear unit (ReLU) and a feed-forward neural network (FNN), and the skip-filtering connections. The input to the Masker is the sequence $\mathbf{V}_\text{in}$ and its output is the predicted magnitude of the $j$-th target source, $\hat{\mathbf{V}}'^{j}_{\text{filt}} \in \mathbb{R}_{\geq0}^{T'\times N}$, with $T' = T-2L$. $\mathbf{V}_\text{in}$ is trimmed to $\mathbf{V}_\text{tr} \in \mathbb{R}_{\geq0}^{T\times F}$, with $F=744$. This is done in order to reduce the input dimensionality of the RNN\textsubscript{enc}~and thus the number of training parameters. Consequently, information up to 8 kHz is retained. Since most of the relevant information for the singing voice is below 8 kHz, the aforementioned reduction is considered not to have a great impact on the process of the Masker. $\mathbf{V}_\text{tr}$ is used as an input to the RNN\textsubscript{enc}. The forward RNN of the RNN\textsubscript{enc}~takes the sequence $\mathbf{V}_\text{tr}$ as an input, and the backward RNN takes $\overleftarrow{\mathbf{V}_\text{tr}} = [{\mathbf{v}_\text{tr}}_{T}, \ldots, {\mathbf{v}_\text{tr}}_{t}, \ldots, {\mathbf{v}_\text{tr}}_{1}]$. 
The hidden states of the forward and the backward RNNs of the RNN\textsubscript{enc}~at frame $t$, $\mathbf{h}_{t}$ and $\overleftarrow{\mathbf{h}_{t}}$ respectively, are concatenated and summed with the input (with residual connections) as \begin{equation} \label{eq:res_conn} \mathbf{h}_{\text{enc}_{t}} = [(\mathbf{h}_{t} + {\mathbf{v}_\text{tr}}_{t})^{\text{T}}, (\overleftarrow{\mathbf{h}_{t}} + \overleftarrow{\mathbf{v}_\text{tr}}_{t})^{\text{T}}]^{\text{T}}\text{ ,} \end{equation} \noindent leading to the output of the RNN\textsubscript{enc}, $\mathbf{H}'_{\text{enc}} \in \mathbb{R}_{\geq-1}^{T\times2F}$, \begin{equation} \mathbf{H}'_{\text{enc}} = [\mathbf{h}_{\text{enc}_{1}}, \mathbf{h}_{\text{enc}_{2}}, \ldots, \mathbf{h}_{\text{enc}_{T}}]\text{ .} \end{equation} \noindent Because we want the RNN\textsubscript{dec}~to focus only on the frames of $\mathbf{H}'_{\text{enc}}$ that are relevant to the sequence from which we want to extract the target source (i.e. the frames in the range $[1+L, T-L]$), we drop the first and the last $L$ vectors $\mathbf{h}_{\text{enc}_{t}}$. This results in the output $\mathbf{H}_{\text{enc}} \in \mathbb{R}_{\geq-1}^{T'\times2F}$ of the RNN\textsubscript{enc}, \begin{equation} \mathbf{H}_{\text{enc}} = [\mathbf{h}_{\text{enc}_{1+L}}, \mathbf{h}_{\text{enc}_{2+L}}, \ldots, \mathbf{h}_{\text{enc}_{T-L}}]\text{ .} \end{equation} $\mathbf{H}_{\text{enc}}$ is used as an input to the RNN\textsubscript{dec}, which outputs the hidden states $\mathbf{H}^{j}_{\text{dec}} \in [-1, 1]^{T'\times F}$. 
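The residual concatenation of Eq.~(\ref{eq:res_conn}) can be sketched with NumPy as follows. This is a minimal sketch with names of our own; we assume the backward hidden states are stored in processing order, so they are paired with the time-reversed input.

```python
import numpy as np

def encoder_output(h_fwd, h_bwd, v_tr):
    """Residual connections of the Bi-RNN encoder: add the input (and
    its time-reversed copy) to the forward and backward hidden states,
    then concatenate along the feature axis. All inputs have shape
    (T, F); the result has shape (T, 2F)."""
    return np.concatenate([h_fwd + v_tr, h_bwd + v_tr[::-1]], axis=1)
```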
Consequently, $\mathbf{H}^{j}_{\text{dec}}$ is used as an input to the sparsifying transform (the FNN with the ReLU) to obtain the time-frequency mask for the target source $j$, $\tilde{\mathbf{M}}^{j} \in \mathbb{R}_{\geq0}^{T'\times N}$, as \begin{equation} \label{eq:sparse-transform} \tilde{\mathbf{M}}^{j} = g(\mathbf{H}^{j}_{\text{dec}}\mathbf{W}_{\text{FNN}_{\text{M}}} + \mathbf{b}_{\text{FNN}_{\text{M}}})\text{ ,} \end{equation} \noindent where $\mathbf{W}_{\text{FNN}_{\text{M}}}$ and $\mathbf{b}_{\text{FNN}_{\text{M}}}$ are the weight matrix and bias vector of the FNN, respectively, and $g$ is the element-wise ReLU activation. The output of the Masker is obtained by employing the skip-filtering connections as \begin{equation} \label{eq:skip-filtering-connection-masker} \hat{\mathbf{V}}'^{j}_{\text{filt}} = \tilde{\mathbf{M}}^{j} \odot \mathbf{V}_\text{in}'\text{ ,} \end{equation} \noindent where $\mathbf{V}_\text{in}'= [{\mathbf{v}_\text{in}}_{1+L}, {\mathbf{v}_\text{in}}_{2+L}, \ldots, {\mathbf{v}_\text{in}}_{T-L}]$. \subsection{TwinNet architecture and regularization} The usage of a backward RNN as a regularizer of a forward RNN during training has been proposed in~\cite{serdyuk:twinnet,goyal2017z}. The goal is to make the forward RNN capable of better modeling the long-term temporal structures and patterns in the sequences that the RNN processes. To do so, the authors in~\cite{serdyuk:twinnet} use the hidden states of the backward RNN as a target for the hidden states of the forward RNN in a deterministic way, as can be seen in Figure~\ref{fig:twin}. 
\begin{figure} \centering \begin{tikzpicture}[->,thick] \scriptsize \tikzstyle{main}=[circle, minimum size = 7mm, thin, draw =black!80, node distance = 12mm] \foreach \name in {1,...,4} \node[main, fill = white!100] (y\name) at (\name*1.5,3.5) {$s_\name$}; \foreach \name in {1,...,4} \node[main, fill = white!100, draw=black!30!green] (hf\name) at (\name*1.5,1.5) {$\overrightarrow{h}_\name$}; \foreach \name in {1,...,4} \node[main, fill = white!100,draw=black!30!magenta] (hb\name) at (\name*1.5,0) {$\overleftarrow{h}_\name$}; \foreach \h in {1,...,4} { \draw[<->,draw=black!30!magenta,dashed] (hf\h) to [bend right=45] node[midway,left] {$\mathcal{L}^{\text{twin}}_{\h}$} (hb\h) {}; \path[draw=black!30!green] (hf\h) edge [bend left] (y\h); } \foreach \current/\next in {1/2,2/3,3/4} { \path[draw=black!30!green] (hf\current) edge (hf\next); \path[draw=black!30!magenta] (hb\next) edge (hb\current); } \foreach \h in {1,...,4} { \path (hb\h) edge [draw=black!30!magenta,bend right] (y\h); } \end{tikzpicture} \caption{Illustration of the hidden states regularization with the TwinNet. The forward RNN is depicted in green color. The TwinNet (backward RNN) and the TwinNet regularization are depicted in magenta color.} \label{fig:twin} \bigskip \end{figure} More specifically, in the original proposal of the TwinNet~\cite{serdyuk:twinnet}, the authors consider a sequence of inputs $\mathbf{S} =[\mathbf{s}_{1},\ldots, \mathbf{s}_{T}]$, aiming at estimating the density $p(\mathbf{S})$. To do so, they aim at maximizing the log-likelihood $\log p(\mathbf{S})$, using a forward RNN to process $\mathbf{S}$ and a non-linear transformation on top of the RNN for predicting $p_{\text{f}}(\mathbf{s}_{t}|\mathbf{s}_{< t}) = \Psi_{\text{f}}(\overrightarrow{h}_{t})$. Here, $\overrightarrow{h}_{t}$ is the output of the forward RNN and $\Psi_{\text{f}}(\cdot)$ is the non-linear transformation applied on top of the forward RNN (e.g. a softmax). 
To encourage the forward RNN to take into account upcoming (future) inputs, the authors in~\cite{serdyuk:twinnet} use a backward RNN to predict $p_{\text{b}}(\mathbf{s}_{t}|\mathbf{s}_{> t}) = \Psi_{\text{b}}(\overleftarrow{h}_{t})$, where $\overleftarrow{h}_{t}$ is the output of the backward RNN and $\Psi_{\text{b}}(\cdot)$ is a non-linear transformation applied on top of the backward RNN. Additionally, to regularize the learning process, they calculate the distance $\mathcal{L}^{\text{twin}}_{t} = ||f(\overrightarrow{h}_{t}) - \overleftarrow{h}_{t}||{}_2$, where $f$ is a learned affine transformation. All the above components are jointly optimized by maximizing \begin{equation}\label{eq:twineqoriginal} Q=\sum\limits_{t}\log p_{\text{f}}(\mathbf{s}_{t}|\mathbf{s}_{< t}) + \log p_{\text{b}}(\mathbf{s}_{t}|\mathbf{s}_{>t}) - \mathcal{L}^{\text{twin}}_{t}. \end{equation} \noindent The term $\mathcal{L}^{\text{twin}}_{t}$ in Eq.~(\ref{eq:twineqoriginal}) is the one that encourages the forward RNN to take into account the future inputs. During the evaluation/testing process, the backward RNN and the associated parts (i.e. the magenta parts in Figure~\ref{fig:twin}) are not used. Finally, in the optimization graph, the parts of the network preceding the backward RNN do not receive a gradient signal from the objective of the backward RNN. This means that the input to the backward RNN is disconnected from the computation graph and is used only to optimize the backward RNN. In our work, we use the TwinNet only during training to regularize $\mathbf{H}^{j}_{\text{dec}}$. We claim that the RNN\textsubscript{dec}~can greatly benefit from compensating for the future time frames, due to the strong temporal patterns and structures of music. We set up the TwinNet as a duplicate/twin of the RNN\textsubscript{dec}, the sparsifying transform, and the skip-connections, as can be seen in Figure~\ref{fig:method_arch}. 
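With an affine $f$, the twin distance $\mathcal{L}^{\text{twin}}_{t}$ above, summed over frames, can be sketched with NumPy as follows (function and variable names are ours):

```python
import numpy as np

def twin_distance(h_fwd, h_bwd, w_aff, b_aff):
    """Sum over frames t of ||f(h_fwd_t) - h_bwd_t||_2, where f is an
    affine map f(h) = h @ w_aff + b_aff.
    h_fwd, h_bwd: (T, F) arrays; w_aff: (F, F); b_aff: (F,)."""
    diff = h_fwd @ w_aff + b_aff - h_bwd
    return float(np.sum(np.linalg.norm(diff, axis=1)))
```

When $f$ is the identity and the two sequences of hidden states coincide, the distance is zero, as expected.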
The input to the TwinNet is $\mathbf{H}_{\text{enc}}$ and its output is $\mathbf{H}^{j}_{\text{twin}}$, calculated exactly as $\mathbf{H}^{j}_{\text{dec}}$. We apply \begin{equation} \label{eq:cost-twin} \mathcal{L}^{\text{twin}} = \sum\limits_{t}||f(\mathbf{h}_{\text{dec}_{t}}) - \mathbf{h}_{\text{twin}_{t}}||{}_2 \end{equation} \noindent as the cost for the regularization of the RNN\textsubscript{dec}. The affine transform $f$ (obtained from a feed-forward network without any non-linearity applied to its output) is not used during evaluation/testing. In our work, we transmit the gradient signal from the TwinNet back to the RNN\textsubscript{enc}, in order to imbue the compensation for future values to the encoding part of the Masker as well. The output of the TwinNet that is used to optimize it is ${\hat{\mathbf{V}}^{j}}_{\text{twin}}$, and it is obtained exactly as $\hat{\mathbf{V}}'^{j}_{\text{filt}}$. \subsection{The Denoiser} We expect that the masking process implemented by the Masker will introduce artifacts to the magnitude spectrogram of the separated source. For that reason, we employ an extra learnable time-frequency filter applied to $\hat{\mathbf{V}}'^{j}_{\text{filt}}$, in order to refine the latter and make it as close as possible to ${\mathbf{V}^{j}}'_{\text{in}}$. We perceive this process as denoising, hence we term the module that implements it the Denoiser. The Denoiser consists of two FNNs, the FNN\textsubscript{enc} and the FNN\textsubscript{dec}, takes $\hat{\mathbf{V}}'^{j}_{\text{filt}}$ as an input, and outputs $\hat{\mathbf{V}}^{j}_{\text{filt}}\in\mathbb{R}_{\geq0}^{T'\times N}$. The two FNNs of the Denoiser are set up in a DAE fashion and have shared weights through time. 
The FNN\textsubscript{enc} takes $\hat{\mathbf{V}}'^{j}_{\text{filt}}$ as the input and outputs $\mathbf{H}_{\text{FNN\textsubscript{enc}}}\in\mathbb{R}_{\geq0}^{T'\times N''}$, where $N'' = \lfloor N/2 \rfloor$ and $\lfloor\cdot\rfloor$ is the floor function, as \begin{align} \mathbf{H}_{\text{FNN\textsubscript{enc}}} = g(\hat{\mathbf{V}}'^{j}_{\text{filt}} \mathbf{W}_{\text{FNN\textsubscript{enc}}} + \mathbf{b}_{\text{FNN\textsubscript{enc}}})\text{ ,} \end{align} \noindent where $\mathbf{W}_{\text{FNN\textsubscript{enc}}}$ and $\mathbf{b}_{\text{FNN\textsubscript{enc}}}$ are the weight matrix and bias vector, respectively, of the FNN\textsubscript{enc}. The FNN\textsubscript{dec} accepts $\mathbf{H}_{\text{FNN\textsubscript{enc}}}$ and outputs $\mathbf{H}_{\text{FNN\textsubscript{dec}}}\in\mathbb{R}_{\geq0}^{T'\times N}$, as \begin{align} \mathbf{H}_{\text{FNN\textsubscript{dec}}} = g(\mathbf{H}_{\text{FNN\textsubscript{enc}}} \mathbf{W}_{\text{FNN\textsubscript{dec}}} + \mathbf{b}_{\text{FNN\textsubscript{dec}}})\text{ ,} \end{align} \noindent where $\mathbf{W}_{\text{FNN\textsubscript{dec}}}$ and $\mathbf{b}_{\text{FNN\textsubscript{dec}}}$ are the weight matrix and bias vector, respectively, of the FNN\textsubscript{dec}. The final output of the Denoiser, $\hat{\mathbf{V}}^{j}_{\text{filt}}$, is obtained as \begin{equation} \label{eq:denoiser_output} \hat{\mathbf{V}}^{j}_{\text{filt}} = \mathbf{H}_{\text{FNN\textsubscript{dec}}}\odot\hat{\mathbf{V}}'^{j}_{\text{filt}}\text{ .} \end{equation} \subsection{Output processing} By iterating through all the overlapping subsequences of the analyzed input signal, the estimates from Eq. (\ref{eq:denoiser_output}) are aggregated together and reshaped to form the magnitude spectrogram of the $j$-th target source $\hat{\mathbf{V}}^{j} \in \mathbb{R}_{\geq 0}^{M\times N}$. 
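The two FNNs of the Denoiser and the skip-filtering step of Eq.~(\ref{eq:denoiser_output}) amount to the following NumPy sketch (shapes and names are ours, not the actual implementation):

```python
import numpy as np

def denoiser(v_filt, w_enc, b_enc, w_dec, b_dec):
    """v_filt: (T', N) Masker output; w_enc: (N, N'') with N'' = N // 2;
    w_dec: (N'', N). Returns the refined magnitudes of shape (T', N)."""
    relu = lambda x: np.maximum(x, 0.0)
    h_enc = relu(v_filt @ w_enc + b_enc)  # encode to N'' = floor(N / 2) units
    h_dec = relu(h_enc @ w_dec + b_dec)   # decode back to N units (the filter)
    return h_dec * v_filt                 # skip-filtering: filter the Masker output
```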
For the target source, we obtain the complex-valued representation by means of the Griffin-Lim algorithm (least squares error estimation from the modified STFT magnitude)~\cite{gla}, which uses the synthesis window and the mixture's phase information. The inverse STFT is then applied to compute the time-domain samples of the target source $\mathbf{\hat{x}}^{j=1}$. \subsection{Implementation and training details} \label{subsec:training-details} We jointly train all components of our method. We treat $\hat{\mathbf{V}}'^{j}_{\text{filt}}$, $\hat{\mathbf{V}}^{j}_{\text{filt}}$, and $\hat{\mathbf{V}}^{j}_{\text{twin}}$ as matrices with unnormalized probabilities (i.e. the values do not sum up to one), allowing us to use the generalized Kullback-Leibler divergence as the cost function, and the employed objective is \begin{align} \begin{split} \mathcal{L} = &\mathcal{L}_{\text{D}} + \mathcal{L}_{\text{M}} + \mathcal{L}_{\text{TW}} + 0.5\mathcal{L}^{\text{twin}}\\ &+\lambda_{1}|\text{diag}\{\mathbf{W}_{\text{FNN}_{\text{M}}}\}|_{1}+\lambda_{2}||\mathbf{W}_{\text{FNN\textsubscript{dec}}}||^{2}_{2}\text{, where} \end{split}\\ \mathcal{L}_{\text{D}} = &D_{\text{KL}}(\mathbf{V}^{j} \,\, || \,\, \hat{\mathbf{V}}^{j}_{\text{filt}})\text{,} \\ \mathcal{L}_{\text{M}} = &D_{\text{KL}}(\mathbf{V}^{j} \,\, || \,\, \hat{\mathbf{V}}'^{j}_{\text{filt}})\text{,}\\ \mathcal{L}_{\text{TW}} = &D_{\text{KL}}(\mathbf{V}^{j} \,\, || \,\, \hat{\mathbf{V}}^{j}_{\text{twin}})\text{, and} \end{align} \noindent $\mathcal{L}_{\text{M}}$, $\mathcal{L}_{\text{D}}$, and $\mathcal{L}_{\text{TW}}$ are the objectives of the Masker, the Denoiser, and the TwinNet respectively, $D_{\text{KL}}$ is the generalized Kullback-Leibler divergence, $\lambda_{1}=\num{1e-2}$ and $\lambda_{2}=\num{1e-4}$ are regularization terms, $|\cdot|_{1}$ is the $\ell_{1}$ vector norm, and $||\cdot||_{2}^{2}$ is the $L_{2}$ matrix norm. 
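The generalized Kullback-Leibler divergence used in the objectives above is, for non-negative matrices with unnormalized values, $D_{\text{KL}}(\mathbf{V}\,||\,\hat{\mathbf{V}}) = \sum_{i,j} v_{ij}\log(v_{ij}/\hat{v}_{ij}) - v_{ij} + \hat{v}_{ij}$; a small sketch (the $\epsilon$ for numerical stability is our addition):

```python
import numpy as np

def generalized_kl(v, v_hat, eps=1e-12):
    """Generalized (unnormalized) KL divergence between two
    non-negative arrays of the same shape; zero iff they coincide."""
    v, v_hat = v + eps, v_hat + eps
    return float(np.sum(v * np.log(v / v_hat) - v + v_hat))
```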
$\text{diag}\{\mathbf{W}_{\text{FNN}_{\text{M}}}\}$ is the main diagonal of the weight matrix of the FNN of the Masker (i.e. the elements $w_{ij}$ of $\mathbf{W}_{\text{FNN}_{\text{M}}}$ with $i=j$). The values for $\lambda_{1}$ and $\lambda_{2}$ follow the vanilla version of the Masker-Denoiser architecture~\cite{mim17}. We employ the $\lambda_{1}|\text{diag}\{\mathbf{W}_{\text{FNN}_{\text{M}}}\}|_{1}$ term in order to enforce the FNN of the Masker not to have energy in its main diagonal. We observed that high energy in the main diagonal of $\mathbf{W}_{\text{FNN}_{\text{M}}}$ results in a source-dependent activity detector, rather than a source-dependent filter. We employ the $\lambda_{2}||\mathbf{W}_{\text{FNN\textsubscript{dec}}}||^{2}_{2}$ term as a weight decay to avoid overfitting of the Denoiser and induce a sparsity factor. All RNNs are GRUs and are initialized using the orthogonal initialization technique from~\cite{saxe}, and all other matrices using random samples from a normal distribution~\cite{glorot}. Biases are initialized with zeros. All parameters are jointly optimized using the Adam algorithm~\cite{adam}. The learning rate is set equal to $10^{-4}$, the batch size to $16$, and a gradient $L_2$ norm clipping equal to $0.5$ is applied. Our method is implemented using the PyTorch framework\footnote{\url {http://pytorch.org/}} and the code can be found online\footnote{\url{https://github.com/dr-costas/mad-twinnet}}. \section{Experimental procedure} \label{sec:experiments} We assess the performance of our proposed method by focusing on the task of singing voice separation. We use the development subset of the Demixing Secret Dataset\footnote{\url{http://www.sisec17.audiolabs-erlangen.de}} (DSD$100$) and the non-bleeding/non-instrumental stems of MedleydB~\cite{medleydb} for training our approach in a supervised fashion. Our total training set consists of 116 mixtures and their corresponding individual sources. 
The evaluation subset of DSD$100$ (50 mixtures and corresponding sources) is used for measuring the objective performance of our method in terms of signal-to-distortion ratio (SDR) and signal-to-interference ratio (SIR), as proposed by the music source separation evaluation campaign (SiSeC)~\cite{sisec17}. Each multi-track contained in the available data is used to generate a monaural version of each of the four sources by averaging the two available channels. The target that we used for training (i.e. $\mathbf{V}^{j}$) is the outcome of the ideal ratio masking process~\cite{ps_masks}, scaled by a factor of $2$. The masking and the scaling processes are performed to avoid the inconsistencies in time delays and/or mixing gains between the mixture signal and the singing voice, which were apparent in the stems of the MedleydB dataset. Through our experiments, it was observed that inconsistencies in the mixing gains yielded target source estimates that lacked amplitude, slightly decreasing the performance of the Masker. The length of the sequences is set to $T = 60$ (approximately equal to $0.5$ seconds), and the context information parameter to $L = 10$. The values for $T$ and $L$ are chosen after the initial proposal of the MaD architecture~\cite{mim17}. We compare our proposed method with established SOTA approaches that solely deal with monaural singing voice separation. These approaches and their corresponding results are listed at the on-line results page of the signal separation evaluation campaign for music signals (SiSeC-MUS)\footnote{\url{https://sisec17.audiolabs-erlangen.de/#/results/1/4/2}}. The approach denoted as GRA3~\cite{grais16} is a supervised DNN approach that yields estimates for the ideal and/or IRM masks that are used to process the mixture magnitude spectrogram. The method denoted as CHA~\cite{cha17} is a CNN approach that yields estimates of all sources and then post-processes them by using an IRM mask. 
STO2 is a DNN approach that operates on the common-fate signal representation~\cite{cmf}. MIM-RINF and MIM-NINF are MaD based methods, but \emph{none of them uses the TwinNet regularization}. The MIM-RINF approach incorporates the recurrent inference stochastic optimization for the RNN\textsubscript{dec}, using a maximum number of $10$ iterations and a termination threshold equal to $\num{1e-3}$. The MIM-NINF approach does not incorporate the recurrent inference optimization procedure and is the vanilla MaD architecture. Finally, a supervised method based on robust principal component analysis (RPCA)~\cite{rpca17} for singing voice separation is also taken into consideration for the objective assessment; this RPCA-based method is denoted as JEO2. The results from all the above mentioned approaches were obtained from the reported results of~\cite{sisec17} and~\cite{mim17}, following the same evaluation data and protocol proposed by SiSeC-MUS in \cite{sisec17}. \section{Results} \label{sec:results} Figures~\ref{fig:sdr-results} and~\ref{fig:sir-results} show the box plots of the obtained results for the employed metrics (i.e. SDR and SIR), compared with the previous SOTA results. Table~\ref{tab:results} lists the median values of the obtained results, compared with the same previous approaches. We use the median value because it is the one proposed and used by the SiSeC. 
An online demo of the separated audio sequences is available.\footnote{\url{http://arg.cs.tut.fi/demo/mad-twinnet/}} \begin{figure}[!t] \includegraphics[width=\columnwidth]{SDR_all.png} \caption{Box-plots for the SDR obtained by MaDTwinNet and previous SOTA approaches.} \label{fig:sdr-results} \end{figure} \begin{figure}[!t] \includegraphics[width=\columnwidth]{SIR_all.png} \caption{Box-plots for the SIR obtained by MaDTwinNet and previous SOTA approaches.} \label{fig:sir-results} \end{figure} \sisetup{detect-weight=true,detect-inline-weight=math} \begin{table}[!t] \centering \caption{The median values for SDR and SIR of the proposed method and previous approaches.} \label{tab:results} \begin{tabular}{l S[ table-format=1.2, table-space-text-pre={$\approx$} ] S[ table-format=1.2, table-space-text-pre={$\approx$} ]} \multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{Metric}} \\ \multicolumn{1}{c}{\textbf{Approach}} & {\textbf{SDR}} & {\textbf{SIR}}\\ \midrule GRA3 & -1.74 & 1.28\\ CHA & 1.58 & 5.17\\ MIM-NINF & 3.63 & 7.06\\ STO2 & 3.92 & 6.75\\ JEO2 & 4.07 & 6.09\\ MIM-RINF & 4.20 & 7.94\\ MaDTwinNet & \bfseries 4.57 & \bfseries 8.17\\ \end{tabular} \end{table} As can be seen in Table~\ref{tab:results} and in Figures~\ref{fig:sdr-results} and~\ref{fig:sir-results}, for all the metrics the MaD TwinNet method achieves higher scores than the previous approaches. Specifically, in Figure~\ref{fig:sdr-results} it can be seen that the MaD TwinNet achieves a better score than the previously best approach (i.e. MIM-RINF). At the same time, the MaD TwinNet has a smaller range of values than the MIM-RINF approach, indicating more consistent results than MIM-RINF. Since the basic difference from the MIM-* approaches is the TwinNet versus the recurrent inference, the results for SDR indicate that the usage of the TwinNet leads to more robust methods. 
Additionally, MaD TwinNet surpasses in all aspects all other presented approaches in Figure~\ref{fig:sdr-results}. Almost the same trend can be observed in the results for the SIR, depicted in Figure~\ref{fig:sir-results}. Again, the MaD TwinNet surpasses all previous monaural approaches in terms of the achieved SIR. The MaD TwinNet approach seems to yield higher SIR values, compared to the MIM-RINF approach. Together with the results for the SDR, it can be clearly seen that the TwinNet regularization increases the performance of the MaD architecture. \section{Conclusions} \label{sec:conclusions} In this paper, we proposed a method for music source separation, able to model both past and future context of a musical sound source. We augmented our previously proposed MaD architecture with the recently proposed TwinNet. The Masker (of the MaD architecture) outputs a first estimate of the magnitude spectrogram of the targeted source, and the Denoiser enhances this first estimate by removing artifacts introduced by the Masker. We used the TwinNet to regularize the Masker. We evaluated our proposed method using the free DSD dataset, focusing on the singing voice separation task. The results showed an increase over the previously obtained SOTA results on the same task. Specifically, we achieved an increase of 0.37~dB in SDR and 0.23~dB in SIR. The obtained results show that the TwinNet can enhance the performance of the MaD architecture. As future work, we propose focusing on end-to-end methods, meaning that the neural network should receive audio samples directly as input and produce audio samples directly as output. This will allow the time-frequency transformation to be included in the optimization graph and, probably, yield superior results. \section*{Acknowledgments} Part of the computations leading to these results was performed on a TITAN-X GPU donated by NVIDIA to K. Drossos. K. Drossos and T. 
Virtanen wish to acknowledge CSC-IT Center for Science, Finland, for computational resources. D. Serdyuk would like to acknowledge the support of the following agencies for research funding and computing support: Samsung, NSERC, Calcul Qu\'{e}bec, Compute Canada, the Canada Research Chairs, and CIFAR. S.-I. Mimilakis is supported by the European Union's H2020 Framework Programme (H2020-MSCA-ITN-2014) under grant agreement no 642685 MacSeNet. The authors would like to thank P. Magron and G. Naithani (TUT, Finland) for their valuable comments and feedback during the writing process. \bibliographystyle{IEEEbib}
\section{Background} \label{sec:bg} \subsection{Linear Temporal Logic (LTL)} Let $\mathit{AP}$ be a finite set of {\em atomic propositions} and $\Sigma = 2^{\mathit{AP}}$ a finite {\em alphabet}. A {\em trace} is a finite or infinite sequence of letters $w=a_0a_1\dots$, where for all $i \geq 0$, we have $a_i \in \Sigma$. We denote the set of all finite traces by $\Sigma^*$ and the set of all infinite traces by $\Sigma^\omega$. For a finite trace $u$ and a trace $w$, we write $uw$ to denote their {\em concatenation}. \begin{definition}[LTL Syntax] {\sc Ltl}\xspace formulas are defined using the following grammar: $$\varphi ::= \top \;\mid\; p \; \mid \; \neg\varphi \;\mid\; \varphi_1\vee\varphi_2 \; \mid \; \mathbf{X}\varphi \; \mid \; \varphi_1 \mathbf{U} \varphi_2$$ \noindent where $p \in \mathit{AP}$, and $\mathbf{X}$ (next) and $\mathbf{U}$ (until) are temporal operators.$~\blacksquare$ \vspace{2mm} \end{definition} \begin{definition}[{\sc Ltl}\xspace Semantics] Let $w$ be an infinite trace in $\Sigma^\omega$, $i \geq 0$, and $\models$ denote the {\em satisfaction} relation. The semantics of {\sc Ltl}\xspace is defined as follows: \begin{tabbing} \= $w, i \models \top$\\ \> $w, i \models p$ \hspace*{1cm} \= iff \hspace*{.5cm} \= $p \in a_i$ \\ \> $w, i \models \neg \varphi$ \> iff \> $w,i \not \models \varphi$ \\ \> $w, i \models \varphi_1\vee\varphi_2$ \> iff \> $w,i \models \varphi_1 \; \vee \; w, i \models \varphi_2$ \\ \> $w, i \models \mathbf{X} \varphi$ \> iff \> $w, i+1 \models \varphi$ \\ \> $w, i \models \varphi_1 \, \mathbf{U} \, \varphi_2$ \> iff \\ \> \hspace{1cm}$\exists k \geq i : w, k \models \varphi_2 \; \wedge \; \forall j: i \leq j < k: w, j \models \varphi_1$. 
\end{tabbing} \noindent Also, $w \models \varphi$ holds \; iff \; $w, 0 \models \varphi$ holds.$~\blacksquare$ \vspace{2mm} \end{definition} For example, in a multi-threaded system, correct thread initialization (i.e., a thread cannot be spawned unless it is initialized) can be captured by the {\sc Ltl}\xspace property $\varphi \equiv (\neg \mathit{spawn} \; \U \; \mathit{init})$. An {\sc Ltl}\xspace formula $\varphi$ defines a set of traces, called a {\em property}, that satisfies the semantics of that formula. Throughout the paper, we use the terms `formula' and `property' interchangeably. We introduce abbreviated temporal operators: $\mathbf{F} \varphi$ ({\em finally} $\varphi$) denotes $\top \, \mathbf{U} \, \varphi$, and $\mathbf{G} \varphi$ ({\em globally} $\varphi$) denotes $\neg \mathbf{F} \neg\varphi$. For instance, the {\em request-response} property $\mathbf{G}(p \Rightarrow \mathbf{F} q)$ means that `it is always the case that if proposition $p$ holds, then eventually proposition $q$ holds'. {\sc Ltl}\xspace is commonly used to verify a program against a certain property. We assume that a {\em program} is defined as a set $p$ of infinite traces. Normally, each program trace is a sequence of states that satisfies certain propositions, where each state is a valuation of the program variables. We say that $p$ satisfies an {\sc Ltl}\xspace property $\varphi$ (denoted $p \models \varphi$) \; iff \; for each trace $w \in p$, we have $w \models \varphi$. \subsection{3-valued Linear Temporal Logic ({\sc Ltl}$_3$\xspace)} Verifying {\sc Ltl}\xspace properties at runtime boils down to the following problem: given the current {\em finite} program trace $w=a_0a_1a_2\dots a_n$, decide whether or not $w$ belongs to the set of words defined by some property $\varphi$. However, {\sc Ltl}\xspace semantics is defined over infinite traces, and a running program can deliver only a finite trace at run time. 
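As a small illustration of the $\mathbf{U}$ clause, the following sketch (ours) checks on a finite prefix whether a witness position for $\varphi_1 \, \mathbf{U} \, \varphi_2$ has already appeared; letters are sets of atomic propositions, and the atomic formulas are given as predicates over letters:

```python
def holds_until(prefix, phi1, phi2):
    """Return True iff some position k of the prefix satisfies phi2
    while phi1 holds at every earlier position; absent such a witness,
    nothing can be concluded from a finite prefix, so return False."""
    for letter in prefix:
        if phi2(letter):
            return True
        if not phi1(letter):
            return False
    return False  # no witness in this (finite) prefix
```

For the thread-initialization property $(\neg \mathit{spawn} \; \U \; \mathit{init})$, the prefix $\emptyset, \{\mathit{init}\}$ already contains a witness, while the prefix $\{\mathit{spawn}\}$ already violates the property.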
For example, given a finite trace $w = a_0a_1 \dots a_n$, it may be impossible for an observer (e.g., a runtime {\em monitor}) to decide whether the property $\mathbf{F} p$ is satisfied. In~\cite{bls11}, the authors introduce 3-valued {\sc Ltl}\xspace, where `$\top$' denotes that the property is satisfied, `$\bot$' denotes that the property is violated, and `?' (the third value) denotes an unknown verdict, which means that, given the current finite trace, it is impossible to determine whether the property is satisfied or violated. \begin{definition}[{\sc Ltl}$_3$\xspace semantics] \label{def:ltl3} Let $ u \in \Sigma^{*}$ be a finite word. The truth value of an {\sc Ltl}$_3$\xspace formula $\varphi$ with respect to $u$, denoted by $[u \models \varphi]$, is defined as follows: \begin{equation*} \left[ u \models \varphi \right] = \begin{cases} \top & \text{if } \forall w \in \Sigma^\omega : u \cdot w \models \varphi,\\ \bot & \text{if } \forall w \in \Sigma^\omega : u \cdot w \not \models \varphi,\\ ? & \text{otherwise}.~\blacksquare \end{cases} \end{equation*} \end{definition} Note that the {\sc Ltl}$_3$\xspace semantics $\left[u \models \varphi \right]$ is defined over finite words, as opposed to the {\sc Ltl}\xspace semantics $u \models \varphi$, which is defined over infinite words. For example, if proposition $p$ holds in some state of a finite trace, then property $\varphi=\mathbf{F}p$ holds for that trace. 
Otherwise, the valuation of $\varphi$ is `?', as it is unknown whether $p$ will eventually hold in some continuation of the trace. \begin{definition} [Good, Bad, Ugly Prefixes] Given a language $L\subseteq\Sigma^{\omega}$ of infinite words over $\Sigma$, we call a finite word $u \in \Sigma^{*}$ \begin{itemize} \item a \emph{good prefix} for $L$, if $\forall w \in \Sigma^{\omega} : u \cdot w \in L$ \item a \emph{bad prefix} for $L$, if $\forall w \in \Sigma^{\omega} : u \cdot w \notin L$ \item an \emph{ugly} prefix otherwise.$~\blacksquare$ \vspace{2mm} \end{itemize} \end{definition} In \cite{bls11}, the authors introduce a stepwise method that takes as input an {\sc Ltl}$_3$\xspace property $\varphi$ and synthesizes a deterministic finite state machine (FSM) as a monitor that declares one of the values $\top$, $\bot$, or ? for each change of state of the program under inspection. Intuitively, simulating a finite word $u$ on this FSM leads to a state whose label is the valuation of $[u \models \varphi]$. \begin{definition} [Monitor] \label{def:monitor} Let $\varphi$ be an {\sc Ltl}$_3$\xspace formula over alphabet $\mathrm{\Sigma}$. The {\em monitor} $\mathcal{M}^{\varphi}$ of $\varphi$ is the unique FSM $(\Sigma, Q, q_0, \delta, \lambda)$, where $Q$ is a set of states, $q_0$ is the initial state, $\delta \subseteq Q \times \Sigma \times Q$ is the transition relation, and $\lambda$ is a function that maps each state in $Q$ to a value in $\{\top, \bot, ?\}$, such that: \begin{equation*} \left[u \models \varphi\right] = \lambda(\delta(q_0, u)).~\blacksquare \end{equation*} \end{definition} \begin{figure}[t] \centering \includegraphics[scale=.8]{exprop.pdf} \caption{The monitor for property $\varphi \equiv (\neg \mathit{spawn} \; \U \; \mathit{init})$} \label{fig:monitor} \end{figure} For example, consider the property $\varphi \equiv (\neg \mathit{spawn} \; \U \; \mathit{init})$ (i.e., a thread is not spawned until it is initialized). The corresponding monitor is shown in Figure~\ref{fig:monitor}~\cite{bls11}.
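For illustration, such a monitor can be encoded as a small transition table. The following Python sketch is a hypothetical rendering of the three-state monitor of Figure~\ref{fig:monitor} for $\varphi \equiv (\neg \mathit{spawn} \; \U \; \mathit{init})$; the state names and the representation of events as sets of propositions are our own conventions, not part of the construction in~\cite{bls11}.

```python
# Hypothetical three-state LTL3 monitor for (not spawn) U init.
# Events are sets of atomic propositions; "q?" is the only inconclusive
# state, while "q_top" and "q_bot" are conclusive trap states.
LABELS = {"q?": "?", "q_top": "T", "q_bot": "F"}

def step(state, event):
    if state != "q?":            # conclusive states are traps
        return state
    if "init" in event:          # minimal good prefix recognized
        return "q_top"
    if "spawn" in event:         # minimal bad prefix recognized
        return "q_bot"
    return "q?"                  # still inconclusive

def verdict(trace):
    state = "q?"
    for event in trace:
        state = step(state, event)
    return LABELS[state]
```

Running \code{verdict} on a trace whose first event contains $\mathit{init}$ yields $\top$, a trace whose first event contains $\mathit{spawn}$ (but not $\mathit{init}$) yields $\bot$, and a trace containing neither stays at ?.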
The proposition $\mathit{true}$ denotes the set $\mathit{AP}$ of all propositions. We use the term \emph{conclusive state} to refer to monitor states $q_{\top}$ and $q_{\bot}$; i.e., states where $\lambda(q) = \top$ and $\lambda(q) = \bot$, respectively. Other states are called {\em inconclusive states}. A monitor $\mathcal{M}^\varphi$ is constructed in a way that it recognizes minimal good and bad prefixes of $L(\varphi)$. Hence, if $\mathcal{M}^\varphi$ reaches a conclusive state, it stays in this {\em trap} state. \begin{definition} [Monitorable Property] \label{def:monitorable} An {\sc Ltl}$_3$\xspace \linebreak property $\varphi$ is {\em monitorable} if $L(\varphi)$ has no ugly prefixes. We denote the set of all monitorable {\sc Ltl}$_3$\xspace properties by {\sc Ltl}$_3^{\mathit{mon}}$\xspace.$~\blacksquare$ \vspace{2mm} \end{definition} In other words, a property is \emph{monitorable} if for every finite word, there still exists a (possibly) infinite continuation that will determine whether the property is violated or satisfied. For example, property $\mathbf{G}\mathbf{F} p$ is not monitorable. This is intuitively because to declare satisfaction of $\mathbf{G}\mathbf{F} p$ the monitor has to identify an infinite loop that visits a state labeled by $p$, which is, of course, not possible at run time. \subsection{Finite LTL~\cite{mp95}} \label{def:fltl} The semantics of {\sc Ltl}\xspace is defined over infinite traces. Finite {\sc Ltl}\xspace ({\sc Fltl}\xspace) allows us to reason about finite traces for verifying properties at run time. The syntax of {\sc Fltl}\xspace is identical to that of {\sc Ltl}\xspace and the semantics are based on the truth values $\mathbb{B}_2=\{\top,\bot\}$. \begin{definition} [{\sc Fltl}\xspace semantics] Let $\varphi$ and $\psi$ be {\sc Ltl}\xspace properties, and $u = u_0u_1 \cdots u_{n-1}$ be a finite trace.
\begin{align*} \left[u \models_{\text{\normalfont F}} \mathbf{X}\, \varphi\right] &= \begin{cases} [u^1 \models_{\text{\normalfont F}} \varphi] & \text{\normalfont if } u^1 \neq \epsilon \\ \bot & \text{\normalfont otherwise} \end{cases}\\ \\ \left[u \models_{\text{\normalfont F}} \varphi \,\mathbf{U}\, \psi\right] &= \begin{cases} \top & \exists k \in [0,n-1]: [u^k \models_{\text{\normalfont F}} \psi] = \top \;\;\wedge \\ &\forall l \in [0,k) : [u^l \models_{\text{\normalfont F}} \varphi] = \top\\ \bot & \text{\normalfont otherwise} \end{cases} \end{align*} where $\epsilon$ is the empty trace and $u^i$ denotes the suffix $u_iu_{i+1} \cdots u_{n-1}$ of $u$. The semantics of {\sc Fltl}\xspace for atomic propositions and Boolean combinations are identical to that of {\sc Ltl}\xspace. $~\blacksquare$ \vspace{2mm} \end{definition} \subsection{4-valued LTL ({\sc Ltl}$_4$\xspace)~\cite{bls10-jlc}} {\sc Ltl}$_4$\xspace is designed for runtime verification by producing more informative verdicts than {\sc Fltl}\xspace. The syntax of {\sc Ltl}$_4$\xspace is identical to that of {\sc Ltl}\xspace. The semantics of {\sc Ltl}$_4$\xspace is defined based on the values $\mathbb{B}_4=\{\top,{\top_p},{\bot_p},\bot\}$ ({\em true}, {\em presumably true}, {\em presumably false}, and {\em false}, respectively), and builds on the semantics of {\sc Ltl}\xspace and {\sc Fltl}\xspace. \begin{definition} [{\sc Ltl}$_4$\xspace semantics] \label{def:ltlfour} Let $\varphi$ be an {\sc Ltl}$_4$\xspace property and $u$ be a finite prefix of a trace.
\begin{equation*} \left[u \models_4 \varphi\right] = \begin{cases} \top & \text{if } \forall v \in \mathrm{\Sigma}^\omega : uv \models \varphi\\ \bot & \text{if } \forall v \in \mathrm{\Sigma}^\omega: uv \not \models \varphi\\ {\top_p} & \text{if } [u \models_F \varphi] = \top \, \wedge \, \exists v \in \mathrm{\Sigma}^\omega: uv \not \models \varphi \\ {\bot_p} & \text{if } [u \models_F \varphi] = \bot \, \wedge \, \exists v \in \mathrm{\Sigma}^\omega: uv \models \varphi \end{cases}~\blacksquare \end{equation*} \end{definition} Thus, an {\sc Ltl}$_4$\xspace property evaluates to $\top$ with respect to a finite trace $u$, if the property remains {\em permanently satisfied}, meaning that for all possible infinite continuations of the trace, the property will always be satisfied in {\sc Ltl}\xspace. Likewise, a valuation of $\bot$ means that the property will be {\em permanently violated}. If the property evaluates to ${\top_p}$, this denotes that currently the property is satisfied yet there exists a continuation that could violate it. Finally, value ${\bot_p}$ denotes that currently the property is violated yet there exists a continuation that could satisfy it. In \cite{bls10-jlc}, the authors introduce a method of synthesizing a {\em monitor}, as a deterministic finite state automaton, for an {\sc Ltl}$_4$\xspace property. \begin{definition} [Monitor] \label{def:monitor2} Let $\varphi$ be an {\sc Ltl}$_4$\xspace formula over alphabet $\mathrm{\Sigma}$.
The {\em monitor} $\mathcal{M}^{\varphi}$ of $\varphi$ is the unique FSM $(\Sigma, Q, q_0, \delta, \lambda)$, where $Q$ is a set of states, $q_0$ is the initial state, $\delta \subseteq Q \times \Sigma \times Q$ is the transition relation, and $\lambda$ is a function that maps each state in $Q$ to a value in $\{\top,{\top_p},{\bot_p},\bot\}$, such that: \begin{equation*} \left[u \models_4 \varphi\right] = \lambda(\delta(q_0, u)).~\blacksquare \end{equation*} \end{definition} Thus, given an {\sc Ltl}$_4$\xspace property $\varphi$ and a finite trace $u$, monitor $\mathcal{M}^{\varphi}$ is capable of producing a truth value in $\mathbb{B}_4$, which is equal to $[u \models_4 \varphi]$. For example, Figure~\ref{fig:ltl4} shows the monitor for property $\varphi = \mathbf{G} a \; \vee \; (b \, \mathbf{U} \, c)$. Observe that a monitor has two {\em trap} states (each having only an outgoing self-loop), which map to truth values $\top$ and $\bot$. They are trap states since these truth values imply permanent satisfaction (respectively, violation). The states labeled by ${\top_p}$ and ${\bot_p}$, on the other hand, can have outgoing transitions to other states. \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{ltl4} \caption{{\sc Ltl}$_4$\xspace monitor for property $\varphi = \mathbf{G} a \; \vee \; (b \, \mathbf{U} \, c)$.} \label{fig:ltl4} \end{figure} \section{Conclusion} \label{sec:concl} In this paper, we proposed a specification language \linebreak ({\sc Ltl}$_4-${\sc C}\xspace) for runtime verification of properties of types of objects in software and networked systems. Our language is an extension of {\sc Ltl}\xspace that adds counting semantics with numerical constraints. The six truth values of the semantics of {\sc Ltl}$_4-${\sc C}\xspace allow system designers to obtain informative verdicts about the status of system properties at run time. We also introduced an efficient and effective parallel algorithm with two implementations on multi-core CPU and GPU technologies.
The results of our experiments on three real-world case studies show that runtime monitoring using GPU provides us with the best throughput and CPU utilization, resulting in minimal intervention in the normal operation of the system under inspection. For future work, we are planning to design a framework for monitoring {\sc Ltl}$_4-${\sc C}\xspace properties in distributed systems and cloud services. Another direction is to extend {\sc Ltl}$_4-${\sc C}\xspace such that it allows non-canonical strings of quantifiers. Finally, we are currently integrating {\sc Ltl}$_4-${\sc C}\xspace in our tool RiTHM~\cite{njsbmfb13}. \section{Parallel Algorithm Design} \label{sec:design} The main challenge in designing a runtime monitor is to ensure that its behavior does not interfere with the functional and extra-functional (e.g., timing constraints) behavior of the program under scrutiny. This section presents a parallel algorithm for verification of {\sc Ltl}$_4-${\sc C}\xspace properties. Our idea is that such a parallel algorithm enables us to offload the monitoring tasks onto a different computing unit (e.g., the GPU). The algorithm utilizes the popular {\em MapReduce} technique to spawn and merge submonitors to determine the final verdict. This section is organized as follows: Subsection~\ref{subsec:valext} describes how valuations are extracted from a trace in run time, and Subsection~\ref{subsec:algsteps} describes the steps of the algorithm in detail. \subsection{Valuation Extraction} \label{subsec:valext} Valuation extraction refers to obtaining a valuation of quantified variables from the trace. As described in {\sc Ltl}$_4-${\sc C}\xspace semantics, the predicate $p_i(x_i)$ identifies the subset of the domain of $x_i$ over which the quantifier is applied: namely the subset that exists in the trace. From a theoretical perspective, we check whether the predicate is a member of some trace event, which is a set of predicates.
From an implementation perspective, the trace event is a key-value structure, where the key is for instance a string identifying the quantified variable, and the value is the concrete value of the quantified variable in that trace event. Consider the following property: \begin{equation} \label{eq:prop} \varphi=\mathbb{A}\xspace_{\ge 0.95} \, s \, \text{:}\, \code{socket}(s) \Rightarrow \left( \mathbf{G} \,\code{receive}\left(s\right) \Rightarrow \textbf{F} \, \code{respond}\left(s\right) \right) \end{equation} Predicate $p$ in this case is $\code{socket}(s)$, and a trace event should contain a key $\code{socket}$ and a value $\in [0,65535]$ representing the socket file descriptor in the system. Thus, the valuation extraction function $\varepsilon(u_i,K)=D_{\varphi}$ returns a map where keys are in $K$, and the value of each key is the value of the quantified variable corresponding to this key. These keys are defined by the user. \subsection{Algorithm Steps} \label{subsec:algsteps} Algorithm~\ref{alg:monitor} presents the pseudocode of the parallel monitoring algorithm. Given an {\sc Ltl}$_4-${\sc C}\xspace property $\varphi=\mathbb{Q}_\varphi\, \psi$, the input to the algorithm is the {\sc Ltl}$_4$\xspace monitor $\mathcal{M}^*$ of {\sc Ltl}$_4$\xspace property $\psi$, a finite trace $u$, the set of quantifiers $\mathbb{Q}_\varphi$, and the vector of keys $K$ used to extract valuations. Note that the algorithm supports both online and offline runtime verification. Offline mode is straightforward since the algorithm receives a finite trace that it can evaluate. In the case of online mode, the algorithm maintains data structures that represent the tree structure shown in Figure~\ref{fig:tree}, and repeated invocation of the algorithm updates these data structures incrementally. Thus, a monitoring solution can invoke the algorithm periodically or based on some event in an online fashion, and still receive an evolving verdict.
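Before walking through the algorithm, the extraction function $\varepsilon(u_i,K)$ described above can be sketched as follows; this is a simplified, hypothetical Python rendering in which each trace event is a key-value map and value vectors are tuples (both representation choices are ours, not the actual implementation).

```python
def extract_valuation(event, keys):
    # epsilon(u_i, K): return the value vector binding each quantified
    # variable named in keys, or None when the event does not bind every
    # key (i.e., the predicate does not hold at this event).
    if all(k in event for k in keys):
        return tuple(event[k] for k in keys)
    return None
```

For Property~(\ref{eq:prop}), an event carrying a $\code{socket}$ key such as \code{\{"socket": 42\}} yields the value vector \code{(42,)}, while an event without that key yields \code{None}.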
The entry point to the algorithm is at Line~\ref{line:sort}, which is invoked when the monitor receives a trace to process. The algorithm returns a truth value of the property at Line~\ref{line:applyq}. Subsections~\ref{subsubsec:sort} -- \ref{subsubsec:apply} describe the function calls between Lines~\ref{line:sort} -- \ref{line:applyq}. The MapReduce operations are visible in functions {\em SortTrace} and {\em ApplyQuantifiers}, which perform a {\em map} ($\rightrightarrows$) in Lines~\ref{line:getval} and~\ref{line:getc} respectively. {\em ApplyQuantifiers} also performs a reduction ($\rightarrowtail$) in Line~\ref{line:reduce}. \subsubsection{Trace Sorting} \label{subsubsec:sort} As shown in Algorithm~\ref{alg:monitor}, the first step in the algorithm is to sort the input trace $u$ (Line~\ref{line:sort}). The function {\em SortTrace} performs this functionality as follows: \begin{enumerate} \item The function performs a parallel map of every trace event to the value vector that it holds using $\varepsilon$ (Line~\ref{line:getval}). \item The mapped trace is sorted in parallel using the quantifier variable keys (Line~\ref{line:sortk}). For instance, according to Property~\ref{eq:prop}, the key used for sorting will be $\code{socket}$, effectively sorting the trace by socket identifier. \item The sorted trace is then compacted based on valuations, and the function returns a map $\mu$ where keys are value vectors and values are the ranges of where these value vectors exist in trace $u$ (Line~\ref{line:compact}). A range contains the start and end index. This essentially defines the subsequences $u^{D_{\varphi}}$ for each property instance $\hat{\varphi}(D_{\varphi})$ (refer to Subsection~\ref{subsec:semantics}). \end{enumerate} \subsubsection{Monitor Spawning} \label{subsubsec:spawning} Monitor spawning is the second step of the algorithm (Line~\ref{line:spawn}).
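The map $\mu$ produced by {\em SortTrace} drives the remaining steps. As a hypothetical sequential sketch of the three sorting steps above (the actual implementation performs the map and the sort in parallel; all identifiers here are illustrative), in Python:

```python
from collections import OrderedDict

def sort_trace(trace, keys):
    # Step 1: map each event to its value vector (a parallel map in the paper).
    tagged = [(tuple(e[k] for k in keys), e) for e in trace]
    # Step 2: sort the mapped trace by value vector (a parallel sort in the paper).
    tagged.sort(key=lambda pair: pair[0])
    # Step 3: compact equal value vectors into (start, end) index ranges,
    # yielding the map mu from value vectors to subsequence ranges.
    mu, start = OrderedDict(), 0
    for i, (vec, _) in enumerate(tagged):
        if i + 1 == len(tagged) or tagged[i + 1][0] != vec:
            mu[vec] = (start, i)
            start = i + 1
    return [e for _, e in tagged], mu
```

On a toy trace of socket events \code{[\{"socket": 2\}, \{"socket": 1\}, \{"socket": 2\}]}, the returned $\mu$ maps \code{(1,)} to range \code{(0, 0)} and \code{(2,)} to range \code{(1, 2)}, i.e., one subsequence $u^{D_{\varphi}}$ per property instance.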
The function {\em SpawnMonitors} receives a map $\mu$ and searches the cached collection of previously encountered value vectors $\mathbb{D}$ for duplicates. If a value vector in $\mu$ is new, it creates submonitors and inserts them in the tree of submonitors $T$ (Line~\ref{line:addt}). The function {\em AddToTree} attempts to generate $|\mathbb{Q}_\varphi|-1$ quantifier submonitors $\mathcal{M}^{\mathcal{Q}}$ (Line~\ref{line:submon}) ensuring there are no duplicate monitors in the tree (Line~\ref{line:nodup}). After all quantifier submonitors are created, {\em SpawnMonitors} creates an {\sc Ltl}$_4$\xspace submonitor $\mathcal{M}^{*}$ and adds it as a child to the leaf quantifier submonitor in the tree representing the value vector (Line~\ref{line:addm}). This resembles the structure in Figure~\ref{fig:tree}. Creation of submonitors is performed in parallel for all value vectors in trace $u$. \subsubsection{Distributing the Trace} The next step in the algorithm is to distribute the sorted trace to all {\sc Ltl}$_4$\xspace submonitors (Line~\ref{line:dist}). The function {\em Distribute} instructs every {\sc Ltl}$_4$\xspace submonitor to process its respective trace by passing the full trace and the range of its respective subsequence, which is provided by the map $\mu$ (Line~\ref{line:procbuf}). The {\sc Ltl}$_4$\xspace monitor updates its state according to the trace subsequence and stores its truth value $b$. \subsubsection{Applying Quantifiers} \label{subsubsec:apply} Applying quantifiers is a recursive process, beginning with the leaf quantifier submonitors and proceeding upwards towards the root of the tree (Line~\ref{line:applyq}). Function {\em ApplyQuantifiers} operates in the following steps: \begin{enumerate} \item The function retrieves all quantifier submonitors at the $i^{th}$ level in the tree $T$ (Line~\ref{line:treelevel}). 
\item In parallel, for each quantifier submonitor, all child submonitor truth values are reduced into a single truth value of that quantifier submonitor (Lines~\ref{line:getc}-\ref{line:getb}). This step essentially {\em reduces} all child truth vectors into a single vector and then applies {\sc Ltl}$_4-${\sc C}\xspace semantics to determine the truth value of the current submonitor. \item The function proceeds recursively calling itself on submonitors that are one level higher. It terminates when the root of the tree is reached, where the truth value is the final verdict of the property with respect to the trace. \end{enumerate} \input{Algorithm1.tex} \section{Implementation and \\ Experimental Results} \label{sec:exp} We have implemented Algorithm~\ref{alg:monitor} for two computing technologies: multi-core CPUs and GPUs. We applied three optimizations in our GPU-based implementation: (1) we use {\em CUDA Thrust API} to implement parallel sort, (2) we use {\em Zero-Copy Memory}, which parallelizes data transfer with kernel operation without caching, and (3) we enforce alignment, which enables coalesced reads of trace events by monitor instances. In order to intercept system calls, we have integrated our algorithm with the Linux \texttt{strace} application, which logs all system calls made by a process, including the parameters passed, the return value, the time the call was made, etc. Notice that using \texttt{strace} has the benefit of eliminating static analysis for instrumentation. The work in~\cite{caceres2002syscall, wang2004file, ramsbrock2007profiling} also uses {\tt strace} to debug the behavior of applications. Subsection~\ref{subsec:casestudies} presents the case studies implemented to study the effectiveness of the GPU implementation in online and offline monitoring. Subsection~\ref{subsec:setup} discusses the experimental setup, while Subsection~\ref{subsec:results} analyzes the results.
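Throughout the case studies below, percentage quantifiers such as $\mathbb{A}\xspace_{\ge 0.95}$ are reduced over the verdicts of their child submonitors. As a minimal sketch of this reduction (hypothetical names; Boolean verdicts stand in for the full six-valued semantics), in Python:

```python
def reduce_percentage(child_verdicts, threshold):
    # Fold the verdicts of a quantifier submonitor's children under
    # A_{>= threshold}: the numerical constraint currently holds iff the
    # fraction of satisfying instances meets the threshold.
    if not child_verdicts:
        return True  # vacuously satisfied: no instances observed yet
    satisfied = sum(1 for v in child_verdicts if v)
    return satisfied / len(child_verdicts) >= threshold
```

For instance, with threshold $0.95$, nineteen satisfied sockets out of twenty meet the constraint, while eighteen out of twenty do not.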
\subsection{Case studies} \label{subsec:casestudies} We have conducted the following three case studies: \vspace{-2mm} \begin{enumerate} \itemsep0em \item \textbf{Ensuring every request on a socket is responded to.} This case study monitors the responsiveness of a web server. Web servers under heavy load may experience some timeouts, which results in requests that are not responded to. This is a factor contributing to the uptime of the server, along with other factors like power failure or system failure. Thus, we monitor that at least $95\%$ of requests are indeed responded to: \begin{equation*} \mathbb{A}\xspace_{\ge 0.95} \, s \, \text{: } \code{socket}(s) \Rightarrow \left( \mathbf{G}\, \code{receive}\left(s\right) \Rightarrow \textbf{F} \, \code{respond}\left(s\right) \right) \end{equation*} We utilize the Apache Benchmarking tool to generate different load levels on the Apache Web Server. \item \textbf{Ensuring fairness in utilization of personal cloud storage services.} This case study is based on the work in~\cite{drago2012inside}, which discusses how profiling DropBox traffic can identify the bottlenecks and improve the performance. Among the issues detected during this analysis is a user repeatedly uploading chunks of maximum size to DropBox servers. Thus, it is beneficial for a runtime verification system to ensure that the average chunk size of all clients falls below a predefined maximum threshold, effectively ensuring fairness of service use. The corresponding {\sc Ltl}$_4-${\sc C}\xspace property is as follows: \begin{equation*} \mathbb{A}\xspace u \, \text{: } \code{user}(u) \Rightarrow \textbf{F} \, (\code{avg\_chunksize}\left(u\right) \le \code{maximum}) \end{equation*} where $\code{avg\_chunksize}$ is a predicate that is based on a variable in the program representing the average chunk size of the current user's session.
\item \textbf{Ensuring proxy cache is functioning correctly.} This experiment is based on a study that shows the effectiveness of utilizing proxy cache in decreasing \linebreak YouTube videos requests in a large university campus~\cite{zink2008watch}. Thus, we monitor that no video is requested externally while existing in the cache: \begin{equation*} \mathbb{A}\xspace v : \code{vid}(v) \Rightarrow \mathbb{E}\xspace_{=0}\, r : \code{req}(r) \Rightarrow (\code{cached}(v)\, \wedge\, \code{external}(r)) \end{equation*} \end{enumerate} \subsection{Experimental Setup} \label{subsec:setup} \noindent \textbf{Experiment Hardware and Software.} The machine we use to run experiments comprises a 12-core Intel Xeon E5-1650 CPU, an Nvidia Tesla K20c GPU, and $32$GB of RAM, running Ubuntu 12.04.\\ \noindent \textbf{Experimental Factors.} The experiments involve comparing the following factors: \begin{itemize} \setlength{\itemsep}{-1pt} \item {\em Implementation.} We compare three implementations of the {\sc Ltl}$_4-${\sc C}\xspace monitoring algorithm: \vspace{-2mm} \begin{itemize} \itemsep0em \item {\em Single Core CPU.} A CPU implementation running on a single core. The justification for using a single core is to allow the remaining cores to perform the main functionality of the system without causing contention from the monitoring process. \item {\em Parallel CPU.} A CPU implementation running on all 12 cores of the system. The implementation uses OpenMP. \item {\em GPU.} A parallel GPU-based implementation. \end{itemize} \item {\em Trace size.} We also experiment with different trace sizes to study the scalability of the monitoring solution, increasing exponentially from $16,384$ to $8,388,608$ events. \end{itemize} \noindent \textbf{Experimental Metrics.} Each experiment results in values for the following metrics: \vspace{-2mm} \begin{itemize} \itemsep0em \item {\em Total execution time.} The total execution time of the monitor.
\item {\em Monitor CPU utilization.} The CPU utilization of the monitor process. \end{itemize} In addition, we measure the following metrics for Case Study 1, since it utilizes an online monitor: \vspace{-2mm} \begin{itemize} \setlength{\itemsep}{-2pt} \item {\em Monitored program CPU utilization.} The CPU utilization of the monitored program. This is to demonstrate the impact of monitoring on overall CPU utilization. \item {\tt strace} {\em parsing CPU utilization.} The CPU utilization of the \code{strace} parsing module. This module translates \code{strace} strings into a numerical table. \end{itemize} We perform $20$ replicates of each experiment and present error bars of a $95\%$ confidence interval. \subsection{Results} \label{subsec:results} The results of Case Study 1 are shown in Figure~\ref{fig:strace}. As seen in the figure, the GPU implementation scales efficiently with increasing trace size, resulting in the lowest monitoring time of all three implementations. The GPU versus single core CPU speedup ranges from $0.8$ to $1.6$, increasing with the increasing trace size. When compared to parallel CPU (CPU ||), the speedup ranges from $0.78$ to $1.59$. This indicates that parallel CPU outperforms GPU for smaller traces ($32768$), yet does not scale as well as GPU in this case study. This is attributed to the low number of individual objects in the trace, making parallelism less impactful. CPU utilization results in Figure~\ref{fig:strace} show a common trend with the increase of trace size. When the trace size is small, parallel implementations incur high CPU utilization as opposed to a single core implementation, which could be attributed to the overhead of parallelization relative to the small trace size. On the other hand, GPU shows a stable utilization percentage, with a $78\%$ average utilization. The single core CPU implementation shows a similar trend, yet slightly elevated average utilization (average $86\%$).
The parallel CPU implementation imposes a higher CPU utilization (average $115\%$), since more cores are being used to process the trace. This result indicates that shipping the monitoring workload to GPU consistently provides more time for CPU to execute other processes including the monitored process. The results of Case Study 2 and Case Study 3 in Figures~\ref{fig:dropbox} and~\ref{fig:youtube} respectively provide a different perspective. The number of individual objects in these traces is large, making parallelism highly effective. For Case Study 2, the speedup of the GPU implementation over single core CPU ranges from $1.8$ to $3.6$, and $0.83$ to $1.18$ over parallel CPU. The average CPU utilization of GPU, single core CPU, and parallel CPU is $64\%$, $82\%$, and $598\%$ respectively. For Case Study 3, speedup is more significant, with $6.3$ average speedup of GPU over single core CPU, and $1.75$ over parallel CPU. The average CPU utilization of GPU, single core CPU, and parallel CPU is $73\%$, $95\%$, and $680\%$ respectively. Thus, the parallel CPU implementation is showing large speedup similar to the GPU implementation, yet also results in a commensurate CPU utilization percentage, since most cores of the system are fully utilized.
\begin{figure}[t] \centering \includegraphics[scale=.45]{strace} \caption{Results of Case Study 1.} \label{fig:strace} \vspace{-1mm} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=.45]{dropbox} \caption{Results of Case Study 2.} \label{fig:dropbox} \vspace{-5mm} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=.45]{youtube} \caption{Results of Case Study 3.} \label{fig:youtube} \vspace{-5mm} \end{figure} \begin{center} \vspace{-1mm} \fbox{\rule{1mm}{0mm} \begin{minipage}[t]{.9\columnwidth} {\em Although the parallel CPU implementation provides reasonable speedup, and the single-core CPU implementation imposes low CPU utilization overhead, the GPU implementation manages to achieve both simultaneously.} \end{minipage} }\end{center} \section{Introduction} In this paper, we study runtime verification of properties specified in an extension of linear temporal logic ({\sc Ltl}\xspace) that supports expression of counting semantics with numerical constraints. Runtime verification (RV) is an automated specification-based technique, where a {\em monitor} evaluates the correctness of a set of logical properties on a particular execution either on the fly (i.e., at run time) or based on log files. Runtime verification complements exhaustive approaches such as model checking and theorem proving and under-approximated methods such as testing. The addition of counting semantics to properties is of particular interest, as they can express parametric requirements on types of execution entities (e.g., processes and threads), user- and kernel-level events and objects (e.g., locks, files, sockets), web services (e.g., requests and responses), and network traffic. For example, the requirement `every open file should eventually be closed' specifies a rule for causal and temporal order of opening and closing individual objects which generalizes to {\em all} files. 
Such properties cannot be expressed using traditional RV frameworks, where the specification language is propositional {\sc Ltl}\xspace or regular expressions. In this paper, we extend the 4-valued semantics of {\sc Ltl}\xspace (i.e., {\sc Ltl}$_4$\xspace), designed for runtime verification~\cite{bls10-jlc}, by adding counting semantics with numerical constraints, and propose an efficient parallel algorithm for their verification at run time. Inspired by the work in~\cite{libkin2004elements}, the syntax of our language (denoted {\sc Ltl}$_4-${\sc C}\xspace) extends {\sc Ltl}\xspace syntax by the addition of {\em counting quantifiers}. That is, we introduce two quantifiers: the {\em instance counting quantifier} ($\mathbb{E}\xspace$), which allows expressing properties that reason about the number of satisfied or violated instances, and the {\em percentage counting quantifier} ($\mathbb{A}\xspace$), which allows reasoning about the percentage of satisfied or violated instances out of all instances in a trace. These quantifiers are subscripted with numerical constraints to express the conditions used to evaluate the count. For example, the following {\sc Ltl}$_4-${\sc C}\xspace formula: $$\mathbb{A}\xspace_{\ge 0.95} \, s \, \text{:}\, \code{socket}(s) \Rightarrow \left( \mathbf{G} \,\code{receive}\left(s\right) \Rightarrow \textbf{F} \, \code{respond}\left(s\right) \right)$$ intends to express the property that `for at least $95\%$ of open TCP/UDP sockets, every received request must eventually be responded to'. Also, the formula: $$\mathbb{A}\xspace x : \code{user}(x) \Rightarrow \left(\mathbb{E}\xspace_{\le3} \,r:\code{rid}(r) \Rightarrow \left( \code{login} \wedge \code{unauthorized} \right) \right)$$ intends to capture the requirement that `for all users, there exist at most $3$ requests of type login that end with an unauthorized status'.
The semantics of {\sc Ltl}$_4-${\sc C}\xspace is defined over six truth values: \begin{itemize} \item {\bf True} ($\top$) denotes that the property is already permanently satisfied. \item {\bf False} ($\bot$) denotes that the property is already permanently violated. \item {\bf Currently true} (${\top_c}$) denotes that the current execution satisfies the quantifier constraint of the property, yet it is possible that an extension violates the constraint. \item {\bf Currently false} (${\bot_c}$) denotes that the current execution violates the quantifier constraint of the property, yet it is possible that an extension satisfies it. \item {\bf Presumably true} (${\top_p}$) denotes that the current execution satisfies the inner {\sc Ltl}\xspace property and the quantifier constraint of the property. \item {\bf Presumably false} (${\bot_p}$) denotes that the current execution violates the inner {\sc Ltl}\xspace property and the quantifier constraint of the property. \end{itemize} We claim that these truth values provide us with informative verdicts about the status of different components of properties (i.e., quantifiers and their numerical constraints as well as the inner {\sc Ltl}\xspace formula) at run time. The second contribution of this paper is a divide-and-conquer-based online monitor generation technique for \linebreak {\sc Ltl}$_4-${\sc C}\xspace specifications. In fact, {\sc Ltl}$_4-${\sc C}\xspace monitors have to be generated at run time; otherwise, an enormous number of monitors (in the order of the size of the cross-product of the domains of all variables) has to be created statically, which is clearly impractical. Our technique first synthesizes an {\sc Ltl}$_4$\xspace monitor for the inner {\sc Ltl}\xspace property of {\sc Ltl}$_4-${\sc C}\xspace properties at compile time using the technique in~\cite{bls10-jlc}.
Then, based upon the values of variables observed at run time, submonitors are generated and merged to compute the current truth value of a property for the current program trace. Our third contribution is an algorithm that implements the above approach for verification of {\sc Ltl}$_4-${\sc C}\xspace properties at run time. This algorithm enjoys two levels of parallelism: the monitor (1) works in parallel with the program under inspection, and (2) evaluates properties in a parallel fashion as well. While the former ensures that the runtime monitor does not interfere with the normal operation of the program under inspection, the latter attempts to maximize the throughput of the monitor. The algorithm utilizes the popular {\em MapReduce} technique to (1) spawn submonitors that aim at evaluating subformulas using partial quantifier elimination, and (2) merge partial evaluations to compute the current truth value of properties. Our parallel algorithm for verification of {\sc Ltl}$_4-${\sc C}\xspace properties is fully implemented on multi-core CPU and GPU technologies. We report rigorous experimental results by conducting three real-world independent case studies. The first case study is concerned with monitoring HTTP requests and responses on an Apache Web Server. The second case study attempts to monitor users uploading maximum chunk packets repeatedly to a personal cloud storage service based on a dataset for profiling DropBox traffic. The third case study monitors a network proxy cache to reduce the bandwidth usage of online video services, based on a YouTube request dataset. We present performance results comparing single-core CPU, multi-core CPU, and GPU implementations. Our results show that our GPU-based implementation provides an average speedup of $7$x when compared to single-core CPU, and $1.75$x when compared to multi-core CPU.
The CPU utilization of the GPU-based implementation is negligible compared to the multi-core CPU implementation, freeing up the system to perform more computation. Thus, the GPU-based implementation manages to provide competitive speedup while maintaining a low CPU utilization, two goals that the CPU cannot achieve at the same time. Put another way, the GPU-based implementation incurs minimal monitoring costs while maintaining a high throughput. The rest of the paper is organized as follows. Section~\ref{sec:prob} describes the syntax and semantics of {\sc Ltl}$_4-${\sc C}\xspace. In Section~\ref{sec:monitor}, we explain our online monitoring approach, while Section~\ref{sec:design} presents our parallelization technique based on MapReduce. Experimental results are presented in Section~\ref{sec:exp}. Related work is discussed in Section~\ref{sec:related}. Finally, we make concluding remarks and discuss future work in Section~\ref{sec:concl}. \section{Divide-and-Conquer-based \\ Monitoring of LTL4-C} \label{sec:monitor} In this section, we describe our technique inspired by divide-and-conquer for evaluating {\sc Ltl}$_4-${\sc C}\xspace properties at run time. This approach forms the basis of our parallel verification algorithm in Section~\ref{sec:design}. Unlike runtime verification of propositional {\sc Ltl}$_4$\xspace properties, where the structure of a monitor is determined solely by the property itself, a monitor for an {\sc Ltl}$_4-${\sc C}\xspace property needs to evolve at run time, since the valuations of quantified variables change over time. More specifically, the monitor $\mathcal{M}_\varphi$ for an {\sc Ltl}$_4-${\sc C}\xspace property $\varphi = \mathbb{Q}_\varphi \psi$ relies on instantiating a {\em submonitor} for each property instance $\hat{\varphi}$ obtained at run time. 
We incorporate two types of submonitors: (1) {\em {\sc Ltl}$_4$\xspace submonitors}, which evaluate the inner {\sc Ltl}\xspace property $\psi$, and (2) {\em quantifier submonitors}, which deal with the quantifiers in $\mathbb{Q}_\varphi$; they are described in Subsections~\ref{subsec:ltl4sub} and~\ref{subsec:quantsub}, respectively. In Subsection~\ref{subsec:instant}, we explain the conditions under which a submonitor is instantiated at run time. Finally, in Subsection~\ref{subsec:truthsub}, we elaborate on how submonitors evaluate an {\sc Ltl}$_4-${\sc C}\xspace property. \subsection{LTL4 Submonitors} \label{subsec:ltl4sub} Let $\varphi = \mathbb{Q}_\varphi \psi$ be an {\sc Ltl}$_4-${\sc C}\xspace property. If $|\mathbb{Q}_\varphi| = 0$ (respectively, one wants to evaluate $\hat{\varphi}(D_{\varphi}|^i)$, where $i=|\mathbb{Q}_\varphi|$), then $\varphi$ (respectively, $\hat{\varphi}(D_{\varphi}|^i)$) is free of quantifiers and, thus, the monitor (respectively, submonitor) of such a property is a standard {\sc Ltl}$_4$\xspace monitor (see Definition~\ref{def:monitor2}). We denote {\sc Ltl}$_4$\xspace submonitors by $\mathcal{M}^*_{D_{\varphi}}$, where $D_{\varphi}$ is the value vector with which the monitor is initialized. \subsection{Quantifier Submonitors} \label{subsec:quantsub} Given a finite trace $u$ and an {\sc Ltl}$_4-${\sc C}\xspace property $\varphi = \mathbb{Q}_\varphi \psi$, a {\em quantifier submonitor} ($\mathcal{M}^{\mathcal{Q}}$) is a monitor responsible for determining the valuation of a property instance $\hat{\varphi}(D_{\varphi}|^i)$ with respect to a trace subsequence $u^{D_{\varphi}|^i}$, if $i < |\mathbb{Q}_\varphi|$. Obviously, such a valuation is in $\mathbb{B}_6$. Let $\mathbb{V}$ be a six-dimensional vector space, where each dimension represents a truth value in $\mathbb{B}_6$. 
\begin{definition} [Quantifier Submonitor] \label{def:qmonitor} Let \linebreak $\varphi=\mathbb{Q}_\varphi\psi$ be an {\sc Ltl}$_4-${\sc C}\xspace property and $\hat{\varphi}(D_{\varphi}|^i)$ be a property instance, with $i \in [0,|\mathbb{Q}_\varphi|-1]$. The {\em quantifier submonitor} for $\hat{\varphi}(D_{\varphi}|^i)$ is the tuple $\mathcal{M}_{D_{\varphi}|^i}^{\mathcal{Q}} = \langle \mathcal{Q}_i,\mathbb{M}_{D_{\varphi}|^i},v,b\rangle$, where \begin{itemize} \itemsep0em \item $\mathcal{Q}_i$ encapsulates the quantifier information (see Equation~\ref{eq:bigq}) \item $v \in \mathbb{V}$ represents the current number of child property instances that evaluate to each truth value in $\mathbb{B}_6$ with respect to their trace subsequences, \item $b \in \mathbb{B}_6$ is the current value of ${[u^{D_{\varphi}|^i} \models_6 \hat{\varphi}(D_{\varphi}|^i)]}$, \item $\mathbb{M}_{D_{\varphi}|^i}$ is the set of child submonitors (submonitors of child property instances) defined as follows: \begin{equation*} \mathbb{M}_{D_{\varphi}|^i} = \begin{cases} \{\mathcal{M}^*_{D_{\varphi}^\prime} \mid D_{\varphi}^\prime|^i=D_{\varphi}|^i\} & \text{if } i = |\mathbb{Q}_\varphi|-1 \\ \{\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}^\prime|^{i+1}} \mid D_{\varphi}^\prime|^i=D_{\varphi}|^i\} & \text{if } i < |\mathbb{Q}_\varphi| - 1 \end{cases} \end{equation*} \end{itemize} Thus, if $i = |\mathbb{Q}_\varphi| - 1$, all child submonitors are {\sc Ltl}$_4$\xspace submonitors. Otherwise, they are quantifier submonitors of the respective child property instances.$~\blacksquare$ \vspace{2mm} \end{definition} Based on the definition, every quantifier submonitor references a set of child monitors. 
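To make the shape of this definition concrete, the tuple $\langle \mathcal{Q}_i,\mathbb{M}_{D_{\varphi}|^i},v,b\rangle$ can be sketched as a small data structure. The following is a Python sketch with field names of our own choosing; it is an illustration, not the implementation of Section~\ref{sec:design}:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class QuantifierSubmonitor:
    """Sketch of M^Q = <Q_i, M, v, b> from the definition above."""
    quantifier: tuple                 # Q_i = (Q, ~, c, x_i, p_i), kept abstract here
    children: Dict[tuple, object] = field(default_factory=dict)  # M: child submonitors, keyed by value vector
    v: Dict[str, int] = field(default_factory=dict)              # per-truth-value counts over the children
    b: str = "presumably_true"        # current verdict in B6

    def refresh_counts(self) -> None:
        # v_b = number of child submonitors whose current verdict is b
        self.v = {}
        for child in self.children.values():
            self.v[child.b] = self.v.get(child.b, 0) + 1

# tiny demo: two child verdicts feed the counter v
class _Leaf:                          # stands in for an LTL4 submonitor
    def __init__(self, b): self.b = b

demo = QuantifierSubmonitor(quantifier=("A", ">=", 0.5, "f", "intrace"))
demo.children = {("f1",): _Leaf("true"), ("f2",): _Leaf("presumably_true")}
demo.refresh_counts()
```

Deriving the verdict $b$ from the counter $v$ requires the quantifier's numerical constraint, which is deferred to the semantics in Section~\ref{sec:prob}.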
We use the following notation to denote a hierarchy of a submonitor and its children: $$\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^i} \big\{ \mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^{i+1}}, \mathcal{M}^{\mathcal{Q}}_{D_{\varphi}^\prime|^{i+1}}, \mathcal{M}^{\mathcal{Q}}_{D_{\varphi}^{\prime\prime}|^{i+1}},\cdots\big\}$$ such that $D_{\varphi}|^i=D_{\varphi}^\prime|^i = D_{\varphi}^{\prime\prime}|^{i}\cdots$ and $i < |\mathbb{Q}_\varphi|-1$, which is why the child monitors are themselves quantifier submonitors. \subsection{Instantiating Submonitors} \label{subsec:instant} Let an {\sc Ltl}$_4-${\sc C}\xspace monitor $\mathcal{M}_\varphi$ for property $\varphi$ evaluate the property with respect to a finite trace $u=u_0u_1 \cdots$. Let $D_{\varphi}=\langle d_0,d_1,\cdots\rangle$ be a value vector and $u_k$ the first trace event such that $\forall d_i : p_i(d_i) \in u_k$, where $p_i$ is the predicate within each quantifier (i.e., $\mathbb{A}\xspace x_i : p_i(x_i) \Rightarrow \cdots$). In this case, the {\sc Ltl}$_4-${\sc C}\xspace monitor instantiates submonitors for every property instance resulting from that value vector. A value vector of length $|\mathbb{Q}_\varphi|$ results in $|\mathbb{Q}_\varphi|+1$ property instances: one for each quantifier, in addition to an {\sc Ltl}$_4$\xspace inner property. 
The hierarchy of the instantiated submonitors is as follows: $$\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^0} \bigg\{\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^1} \Big\{ \cdots \big\{ \mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^{|\mathbb{Q}_\varphi|-1}} \{\mathcal{M}^{*}_{D_{\varphi}}\}\big\}\Big\}\bigg\}$$ If another value vector $D_{\varphi}^\prime$ is subsequently encountered for the first time, the hierarchy of submonitors becomes as follows: $$\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^0} \Big\{\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^1} \big\{ \cdots \{\mathcal{M}^{*}_{D_{\varphi}}\}\big\}, \mathcal{M}^{\mathcal{Q}}_{D_{\varphi}^\prime|^1} \big\{ \cdots \{\mathcal{M}^{*}_{D_{\varphi}^\prime}\}\big\}\Big\}$$ Since the hierarchy is formulated as a recursive set, no duplicate submonitors are allowed. Two submonitors are duplicates if they represent identical value vectors. If $D_{\varphi}|^1 = D_{\varphi}^\prime|^1$, the respective monitors are merged. Such merging is explained in detail in Section~\ref{sec:design}. \subsection{Evaluating LTL4-C Properties} \label{subsec:truthsub} Once the {\sc Ltl}$_4-${\sc C}\xspace monitor instantiates its submonitors, every submonitor is responsible for updating its truth value. The truth value of an {\sc Ltl}$_4$\xspace submonitor ($\mathcal{M}^*$) maps to the current state of the submonitor's automaton, as described in Definition~\ref{def:monitor2}. Quantifier submonitors update their truth value based on the truth values of all child submonitors. The number of child submonitors whose truth value is $\top$ is stored in $v_{\top}$ (i.e., the $\top$ dimension of vector $v$), and so on for all truth values in $\mathbb{B}_6$. Then, {\sc Ltl}$_4-${\sc C}\xspace semantics are applied, beginning with function $\mathcal{S}$ (see Equation~\ref{eq:S}), which in turn relies on the cardinality of function $\mathcal{B}(\varphi,u,D_{\varphi}|^{i},b)$, where $b$ is a truth value. 
This cardinality is readily provided by the vector $v$, such that, for instance, $\mathcal{B}(\varphi,u,D_{\varphi}|^{i},\top) = v_\top$, and so on. Since each submonitor depends on its child submonitors, updating truth values proceeds outwards, starting at the {\sc Ltl}$_4$\xspace submonitors; parent submonitors then recursively update their truth values up to the root submonitor $\mathcal{M}^{\mathcal{Q}}_{D_{\varphi}|^0}$. The truth value of the root submonitor is the truth value of property $\varphi$ with respect to trace $u$. This is visualized as the tree shown in Figure~\ref{fig:tree}. \begin{figure}[t] \centering \includegraphics[scale=.5]{tree} \caption{Tree structure of an {\sc Ltl}$_4-${\sc C}\xspace monitor.} \label{fig:tree} \vspace{-3mm} \end{figure} \section{LTL with counting semantics} \label{sec:prob} To introduce our logic, we first define a set of basic concepts. \begin{definition}[Predicate] Let $V = \{x_1,x_2,\dots, x_n\}$ be a set of variables with (possibly infinite) domains \linebreak $\mathcal{D}_1, \mathcal{D}_2,\dots, \mathcal{D}_n$, respectively. A {\em predicate} $p$ is a binary-valued function on the domains of variables in $V$ such that $$p: \mathcal{D}_1 \times \mathcal{D}_2 \times \cdots \times \mathcal{D}_n \, \rightarrow \, \{\text{true},\text{false}\}~\blacksquare$$ \end{definition} The arity of a predicate is the number of variables it accepts. A predicate is {\em uninterpreted} if the domains of its variables are not known concrete sets. For instance, $p(x_1,x_2)$ is an uninterpreted predicate, yet we can interpret it as (for instance) a binary function that checks whether or not $x_1$ is less than $x_2$ over the natural numbers. Let $\mathit{UP}$ be a finite set of uninterpreted predicates, and let $\Sigma=2^{\mathit{UP}}$ be the power set of $\mathit{UP}$. We call each element of $\Sigma$ an {\em event}. 
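Concretely, an event can be mocked up as a set of interpreted predicate instances. The following Python sketch uses a tuple encoding of our own choosing; it is an illustration, not a prescribed format:

```python
# An event is a set of interpreted predicate instances.
# An n-ary instance p(d1, ..., dn) is encoded as the tuple ("p", d1, ..., dn),
# so open(1) becomes ("open", 1) and a 0-arity predicate r becomes ("r",).
event = {("open", 1), ("r",), ("anony",)}

def holds(pred: tuple, evt: set) -> bool:
    """True iff the interpreted predicate instance occurs in the event."""
    return pred in evt
```

For example, `holds(("open", 1), event)` is true while `holds(("open", 2), event)` is false; a trace is then simply a sequence of such events, as defined next.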
\begin{definition}[Trace] A {\em trace} $w=w_0w_1\cdots$ is a finite or infinite sequence of events; i.e., $w_i \in \Sigma$, for all $i \geq 0$.$~\blacksquare$ \vspace{2mm} \end{definition} We denote the set of all infinite traces by $\Sigma^{\omega}$ and the set of all finite traces by $\Sigma^{*}$. A {\em program trace} is a sequence of events, where each event consists of {\em interpreted} predicates only. For instance, the following trace is a program trace: $$w=\{\code{open}(1),\code{r},\code{anony}\}\, \{\code{open}(2),\code{rw},\code{user}(5)\} \, \cdots$$ where \code{open} and \code{user} are unary predicates and \code{r}, \code{anony}, and \code{rw} are 0-arity predicates. Predicate \code{open} is interpreted as opening a file, \code{r} is interpreted as read-only permissions, \code{anony} is interpreted as an anonymous user, and so on. \subsection{Syntax of LTL4-C} {\sc Ltl}$_4-${\sc C}\xspace extends {\sc Ltl}$_4$\xspace with two counting quantifiers: the instance counting quantifier ($\mathbb{E}\xspace$) and the percentage counting quantifier ($\mathbb{A}\xspace$). The semantics of these quantifiers is introduced in Subsection~\ref{subsec:semantics}. 
The syntax of {\sc Ltl}$_4-${\sc C}\xspace is defined as follows: \begin{definition}[{\sc Ltl}$_4-${\sc C}\xspace Syntax] \label{def:syntax} {\sc Ltl}$_4-${\sc C}\xspace formulas \linebreak are defined using the following grammar: \begin{equation} \nonumber \begin{split} \varphi \, \mathtt{::=} \,&\mathbb{A}\xspace_{\sim k} \;x:p(x) \Rightarrow \varphi \; \mid \; \mathbb{E}\xspace_{\sim l} \;x:p(x) \Rightarrow\varphi \; \mid \; \psi \\ \psi \, \mathtt{::=} \, & \top \; \mid \; p\left(x_1 \cdots x_n\right)\; \mid \; \neg \psi \;\mid \psi_1 \wedge \psi_2 \; \mid \; \\ & \mathbf{X} \,\psi \; \mid \; \psi_1\,\mathbf{U}\, \psi_2 \end{split} \end{equation} \noindent where $\mathbb{A}\xspace$ is the percentage counting quantifier, $\mathbb{E}\xspace$ is the instance counting quantifier, $x$, $x_1\cdots x_n$ are variables with possibly infinite domains $\mathcal{D}, \mathcal{D}_1,\cdots \mathcal{D}_n$, $\sim \, \in \left\{ <,\le,>,\ge,= \right\}$, $k \in \left[0,1\right] \subseteq \mathbb{R}$, $l \in \mathbb{Z}^+$, and $\mathbf{X}$ and $\mathbf{U}$ are the next and until temporal operators, respectively.$~\blacksquare$ \vspace{2mm} \end{definition} If we omit the numerical constraint in $\mathbb{A}\xspace_{\sim k}$ (respectively, $\mathbb{E}\xspace_{\sim l}$), we mean $\mathbb{A}\xspace_{= 1}$ (respectively, $\mathbb{E}\xspace_{\ge 1}$). The syntax of \linebreak {\sc Ltl}$_4-${\sc C}\xspace restricts formulas to a string of counting quantifiers followed by a quantifier-free formula. We emphasize that $\mathbb{A}\xspace$ and $\mathbb{E}\xspace$ do not necessarily resemble the standard first-order quantifiers $\forall$ and $\exists$. In fact, as we will explain, $\neg \mathbb{A}\xspace$ and $\mathbb{E}\xspace$ are not generally equivalent. Consider the {\sc Ltl}$_4-${\sc C}\xspace property $\varphi = \mathbb{A}\xspace x : p(x) \Rightarrow \psi$, where the domain of $x$ is $\mathcal{D}$. 
This property denotes that for any possible valuation of the variable $x$ ($[x:=v]$), if $p(v)$ holds, then $\psi$ should hold. If $p(v)$ does not hold, then $p(v) \Rightarrow \psi$ trivially evaluates to true. This effectively means that the quantifier $\mathbb{A}\xspace x$ is in fact applied only over the following sub-domain: $$\{v \in \mathcal{D} \mid p(v)\} \subseteq \mathcal{D}$$ To give an intuition, consider the scenarios where file management anomalies can cause serious problems at run time (e.g., in NASA's Spirit Rover on Mars in 2004). For example, the following {\sc Ltl}$_4-${\sc C}\xspace property expresses ``at least half of the files that a process has previously opened must be closed'': \begin{equation} \label{eq:example} \varphi_1 = \mathbb{A}\xspace_{\ge 50\%} \,f: \code{intrace}(f) \Rightarrow (\code{opened}(f) \,\mathbf{U}\, \code{close}(f)) \end{equation} where $\code{intrace}$ denotes the fact that the concrete file appeared in any event in the trace. \subsection{4-Valued LTL~\cite{bls10-jlc}} First, we note that the syntax of {\sc Ltl}$_4$\xspace can be easily obtained from Definition~\ref{def:syntax} by (1) removing the counting quantifier rules and (2) reducing the arity of predicates to 0 (i.e., predicates become atomic propositions). \subsubsection{FLTL} To introduce {\sc Ltl}$_4$\xspace semantics, we first introduce Finite {\sc Ltl}\xspace. Finite {\sc Ltl}\xspace ({\sc Fltl}\xspace)~\cite{mp95} allows us to reason about finite traces for verifying properties at run time. The semantics of {\sc Fltl}\xspace is based on the truth values $\mathbb{B}_2=\{\top,\bot\}$. \vspace{-2mm} \begin{definition} [{\sc Fltl}\xspace semantics] Let $\varphi$ and $\psi$ be {\sc Ltl}\xspace properties, and $u = u_0u_1 \cdots u_{n-1}$ be a finite trace. 
\begin{align*} \left[u \models_{\text{\normalfont F}} \mathbf{X}\, \varphi\right] &= \begin{cases} [u^1 \models_{\text{\normalfont F}} \varphi] & \text{\normalfont if } u^1 \neq \epsilon \\ \bot & \text{\normalfont otherwise} \end{cases}\\ \\ \left[u \models_{\text{\normalfont F}} \varphi \,\mathbf{U}\, \psi\right] &= \begin{cases} \top & \exists k \in [0,n-1]: [u^k \models_{\text{\normalfont F}} \psi] = \top \;\;\wedge \\ &\forall l \in [0,k) : [u^l \models_{\text{\normalfont F}} \varphi] = \top\\ \bot & \text{\normalfont otherwise} \end{cases} \end{align*} where $u^i$ denotes the suffix $u_iu_{i+1}\cdots u_{n-1}$ of $u$, and $\epsilon$ is the empty trace. The semantics of {\sc Fltl}\xspace for atomic propositions and Boolean combinations is identical to that of {\sc Ltl}\xspace. $~\blacksquare$ \vspace{2mm} \end{definition} Similar to standard {\sc Ltl}\xspace, $\mathbf{F} p \equiv \top \, \mathbf{U} \,p$ and $\mathbf{G} p \equiv \neg \mathbf{F} \neg p$. \subsubsection{LTL4 Semantics} {\sc Ltl}$_4$\xspace is designed for runtime verification by producing more informative verdicts than {\sc Fltl}\xspace. The semantics of {\sc Ltl}$_4$\xspace is defined over the truth values $\mathbb{B}_4=\{\top,{\top_p},{\bot_p},\bot\}$ ({\em true}, {\em presumably true}, {\em presumably false}, and {\em false}, respectively), building on the semantics of {\sc Ltl}\xspace and {\sc Fltl}\xspace. \begin{definition} [{\sc Ltl}$_4$\xspace semantics] \label{def:ltlfour} Let $\varphi$ be an {\sc Ltl}$_4$\xspace \linebreak property and $u$ be a finite prefix of a trace. 
\begin{equation*} \left[u \models_4 \varphi\right] = \begin{cases} \top & \forall v \in \mathrm{\Sigma}^\omega : uv \models \varphi\\ \bot & \forall v \in \mathrm{\Sigma}^\omega: uv \not \models \varphi\\ {\top_p} & [u \models_F \varphi] \, \wedge \, \exists v \in \mathrm{\Sigma}^\omega: uv \not \models \varphi \\ {\bot_p} & [u \not \models_F \varphi] \, \wedge \, \exists v \in \mathrm{\Sigma}^\omega: uv \models \varphi \end{cases} \end{equation*} $~\blacksquare$ \vspace{2mm} \end{definition} In this definition, $\models$ denotes the satisfaction relation defined by standard {\sc Ltl}\xspace semantics over infinite traces. Thus, an {\sc Ltl}$_4$\xspace property evaluates to $\top$ with respect to a finite trace $u$, if the property remains {\em permanently satisfied}, meaning that for all possible infinite continuations of the trace, the property will always be satisfied in {\sc Ltl}\xspace. Likewise, a valuation of $\bot$ means that the property will be {\em permanently violated}. If the property evaluates to ${\top_p}$, this denotes that currently the property is satisfied, yet there exists a continuation that could violate it. Finally, the value ${\bot_p}$ denotes that currently the property is violated, yet there exists a continuation that could satisfy it. \subsubsection{LTL4 Monitors} In \cite{bls10-jlc}, the authors introduce a method of synthesizing a {\em monitor}, as a deterministic finite state machine (FSM), for an {\sc Ltl}$_4$\xspace property. \begin{definition} [{\sc Ltl}$_4$\xspace Monitor] \label{def:monitor2} Let $\varphi$ be an {\sc Ltl}$_4$\xspace formula over $\mathrm{\Sigma}$. 
The {\em monitor} $\mathcal{M}_{\varphi}$ of $\varphi$ is the unique FSM $(\Sigma, Q, q_0, \delta, \lambda)$, where $Q$ is a set of states, $q_0$ is the initial state, $\delta: Q \times \Sigma \rightarrow Q$ is the transition function (extended to finite traces in the usual way), and $\lambda: Q \rightarrow \mathbb{B}_4$ is a labeling function such that: \begin{equation*} \left[u \models_4 \varphi\right] = \lambda(\delta(q_0, u)).~\blacksquare \end{equation*} \end{definition} Thus, given an {\sc Ltl}$_4$\xspace property $\varphi$ and a finite trace $u$, monitor $\mathcal{M}_{\varphi}$ is capable of producing a truth value in $\mathbb{B}_4$ that is equal to $[u \models_4 \varphi]$. For example, Figure~\ref{fig:ltl4} shows the monitor for property $\varphi = \mathbf{G} a \; \vee \; (b \, \mathbf{U} \, c)$. Observe that a monitor has two {\em trap} states (states whose only outgoing transition is a self-loop), which map to the truth values $\top$ and $\bot$. They are trap states since these truth values imply permanent satisfaction (respectively, violation). Otherwise, states labeled by ${\top_p}$ and ${\bot_p}$ can have outgoing transitions to other states. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{ltl4} \caption{{\sc Ltl}$_4$\xspace monitor for property $\varphi = \mathbf{G} a \; \vee \; (b \, \mathbf{U} \, c)$.} \label{fig:ltl4} \vspace{-3mm} \end{figure} \subsection{Truth Values of LTL4-C} The objective of {\sc Ltl}$_4-${\sc C}\xspace is to verify the correctness of quantified properties at run time with respect to finite program traces. Such verification attempts to produce a sound verdict regardless of future continuations. We incorporate six truth values to define the semantics of {\sc Ltl}$_4-${\sc C}\xspace: $\mathbb{B}_6=\{\top, \bot, {\top_c}, {\bot_c}, {\top_p}, {\bot_p}\}$; {\em true}, {\em false}, {\em currently true}, {\em currently false}, {\em presumably true}, and {\em presumably false}, respectively. 
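Before turning to the six-valued semantics, note that a monitor in the sense of Definition~\ref{def:monitor2} is operationally just a labeled transition table. The following minimal Python sketch is our own toy encoding for the simpler property $\mathbf{G}\,a$ (not the property of Figure~\ref{fig:ltl4}, and not the synthesis construction of~\cite{bls10-jlc}):

```python
# LTL4 monitor for G a: state q0 ("every event so far contained a") is labeled
# presumably true; state qF ("some event lacked a") is the bottom trap state.
LABEL = {"q0": "presumably_true", "qF": "false"}   # the labeling function lambda

def delta(state: str, event: set) -> str:
    if state == "qF":                              # trap state: stay forever
        return "qF"
    return "q0" if "a" in event else "qF"

def verdict(trace) -> str:
    """[u |=4 G a]: fold the finite trace through delta and read off lambda."""
    state = "q0"
    for event in trace:
        state = delta(state, event)
    return LABEL[state]
```

Here `verdict([{"a"}, {"a"}])` yields `presumably_true`, while any event without `a` drives the monitor into the $\bot$ trap, so `verdict([{"a"}, set()])` yields `false`. Note that $\mathbf{G}\,a$ can never evaluate to $\top$ or ${\bot_p}$, which is why this particular monitor needs only two states.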
The values in $\mathbb{B}_6$ form a lattice ordered as follows: $\bot < {\bot_c} < {\bot_p} < {\top_p} < {\top_c} < \top$. Given a finite trace $u$ and an {\sc Ltl}$_4-${\sc C}\xspace property $\varphi$, the informal description of the evaluation of $u$ with respect to $\varphi$ is as follows: \begin{itemize} \item {\bf True} ($\top$) denotes that any infinite extension of $u$ satisfies $\varphi$. \item {\bf False} ($\bot$) denotes that any infinite extension of $u$ violates $\varphi$. \item {\bf Currently true} (${\top_c}$) denotes that currently $u$ satisfies the counting quantifier constraint of $\varphi$, yet it is possible that an extension of $u$ violates the constraint. For instance, the valuation of Property~\ref{eq:example} (i.e., $\varphi_1$) is ${\top_c}$, if in a trace $u$, currently $50\%$ of the files previously opened are closed. This is because (1) the inner {\sc Ltl}\xspace property is permanently satisfied for at least $50\%$ of the files previously opened, and (2) it is possible for a trace continuation to change this percentage to less than $50\%$ in the future (a trace in which enough new files are opened and not closed). \item {\bf Currently false} (${\bot_c}$) denotes that currently $u$ violates the quantifier constraint of $\varphi$, yet it is possible that an extension of $u$ satisfies the constraint. For instance, the valuation of Property~\ref{eq:example} (i.e., $\varphi_1$) in a finite trace $u$ is ${\bot_c}$, if the number of files that were not successfully opened is currently greater than $50\%$. This could happen in the scenario where opening a file fails, possibly due to lack of permissions. Analogous to ${\top_c}$, the property is evaluated to ${\bot_c}$ because (1) the inner {\sc Ltl}\xspace property is permanently satisfied for less than $50\%$ of the files in the program trace, and (2) it is possible for a trace continuation to change this percentage to at least $50\%$ in the future. 
\end{itemize} Now let us consider modifying the property to support multiple open and close operations on the same file. For this purpose, we reformulate the property as follows: \begin{equation} \label{eq:example2} \varphi_2 = \mathbb{A}\xspace_{\ge 50\%} \,f: \code{intrace}(f) \Rightarrow \left(\mathbf{G} \left(\code{opened}(f) \,\mathbf{U}\, \code{close}(f)\right)\right) \end{equation} \begin{itemize} \item {\bf Presumably true} (${\top_p}$) extends the definition of {\em presumably true} in {\sc Ltl}$_4$\xspace~\cite{bls10}, where ${\top_p}$ denotes that $u$ satisfies the inner {\sc Ltl}\xspace property and the counting quantifier constraint in $\varphi$, if the program terminates after execution of $u$. For example, Property~\ref{eq:example2} (i.e., $\varphi_2$) evaluates to ${\top_p}$, if at least $50\%$ of the files in the program trace are closed. Closed files presumably satisfy the property, since they satisfy the $\mathbf{G}$ operator thus far, yet can potentially violate it if the file is opened a subsequent time without being closed. Note that this property can never evaluate to ${\top_c}$, since no finite trace prefix can permanently satisfy the inner {\sc Ltl}\xspace property. However, if the inner property can be permanently satisfied ($\top$) and presumably satisfied (${\top_p}$), then the entire {\sc Ltl}$_4-${\sc C}\xspace property can potentially evaluate to ${\top_c}$ if the numerical condition of the quantifier is satisfied. A property can evaluate to ${\top_p}$ only if the conditions for ${\top_c}$ are not met, since ${\top_c}$ is higher up the partial order of $\mathbb{B}_6$. \item {\bf Presumably false} (${\bot_p}$) extends the definition of {\em presumably false} in {\sc Ltl}$_4$\xspace~\cite{bls10}, which denotes that $u$ presumably violates the quantifier constraint in $\varphi$. 
According to Property~\ref{eq:example2}, this scenario will occur when the number of files that are either closed, or opened and not yet closed, is at least $50\%$ of all files in the trace. Opened files presumably violate the inner property, since closing the file is required but has not yet occurred. This condition does not conflict with ${\top_p}$ or ${\top_c}$, since those values are higher in the partial order of $\mathbb{B}_6$, and thus ${\bot_p}$ only occurs if the conditions for ${\top_p}$ and ${\top_c}$ do not hold. \end{itemize} \subsection{Semantics of LTL4-C} \label{subsec:semantics} An {\sc Ltl}$_4-${\sc C}\xspace property essentially defines a set of traces, where each trace is a sequence of events (i.e., sets of uninterpreted predicates). We define the semantics of {\sc Ltl}$_4-${\sc C}\xspace with respect to finite traces and present a method of utilizing these semantics for runtime verification. In the context of runtime verification, the objective is to ensure that a program trace (i.e., a sequence of sets of {\em interpreted} predicates) is in the set of traces that the property defines, given the interpretations of the property predicates within the program trace. To introduce the semantics of {\sc Ltl}$_4-${\sc C}\xspace, we examine counting quantifiers further. Since the syntax of {\sc Ltl}$_4-${\sc C}\xspace allows nesting of counting quantifiers, a canonical form of properties is as follows: \begin{equation} \label{eq:genprop} \varphi = \mathbb{Q}_\varphi \; \psi \end{equation} where $\psi$ is an {\sc Ltl}\xspace property and $\mathbb{Q}_\varphi$ is a string of counting quantifiers \begin{equation} \label{eq:bigq} \mathbb{Q}_\varphi=\mathcal{Q}_0\mathcal{Q}_1\cdots\mathcal{Q}_{n-1} \end{equation} such that each $\mathcal{Q}_i = \langle Q_i,\sim_i,c_i,x_i,p_i\rangle$, $0 \leq i \leq n-1$, is a tuple encapsulating the counting quantifier information. 
That is, $Q_i \in \{\mathbb{A}\xspace,\mathbb{E}\xspace\}$, $\sim_i \in \left\{ <,\le,>,\ge,= \right\}$, $c_i$ is the constraint constant, $x_i$ is the bound variable, and $p_i$ is the predicate within the quantifier (see Definition~\ref{def:syntax}). We present the semantics of {\sc Ltl}$_4-${\sc C}\xspace in a stepwise manner: \begin{enumerate} \item \textbf{Variable valuation.} First, we demonstrate how variable valuations are extracted from the trace and used to substitute variables in the formula. \item \textbf{Canonical variable valuations.} Next, we demonstrate how to build a canonical structure of the variable valuations provided in Step 1. This canonical structure mirrors the canonical structure of {\sc Ltl}$_4-${\sc C}\xspace properties. \item \textbf{Valuation of property instances.} A {\em property instance} is a unique substitution of variables in the property with values from their domains. This step demonstrates how to evaluate property instances. \item \textbf{Applying quantifier numerical constraints.} This step demonstrates how to evaluate counting quantifiers by applying their numerical constraints on the valuation of a set of property instances from Step 3. The set of property instances is retrieved with respect to the canonical structure defined in Step 2. \item \textbf{Inductive semantics.} Using the canonical structure in Step 2, and the valuation of counting quantifiers in Step 4, we define semantics that begin at the outermost counting quantifier of an {\sc Ltl}$_4-${\sc C}\xspace property and evaluate quantifiers recursively inwards. \end{enumerate} \subsubsection{Variable Valuation} We define a vector $D_{\varphi}$ with respect to a property $\varphi$ as follows: $$D_{\varphi}=\langle d_0,d_1,\cdots,d_{n-1}\rangle$$ where $n = |\mathbb{Q}_\varphi|$ and $d_i$, $0 \leq i \leq n-1$, is a value for variable $x_i$. We denote the first $m$ components of the vector $D_{\varphi}$ (i.e., $\langle d_0,d_1,\cdots,d_{m-1}\rangle$) by $D_{\varphi}|^m$. 
We refer to $D_{\varphi}$ as a {\em value vector} and to $D_{\varphi}|^m$ as a {\em partial value vector}. A {\em property instance} $\hat{\varphi}(D_{\varphi}|^m)$ is obtained by replacing every occurrence of the variables $x_0\cdots x_{m-1}$ in $\varphi$ with the values $d_0 \cdots d_{m-1}$, respectively. Thus, $\hat{\varphi}(D_{\varphi}|^m)$ is free of quantifiers of index less than $m$, yet remains quantified over variables $x_m\cdots x_{n-1}$. For instance, for the following property $$\varphi = \mathbb{A}\xspace_{>c_1}\,x: p_x(x) \Rightarrow \left( \mathbb{A}\xspace_{<c_2} \,y : p_y(y) \Rightarrow \mathbf{G}\,q(x,y)\right)$$ and value vector $D_{\varphi} = \langle 1,2 \rangle$ (i.e., the vector of values for variables $x$ and $y$, respectively), $\hat{\varphi}(D_{\varphi})$ will be $$\hat{\varphi}(\langle 1,2\rangle) =p_x(1) \Rightarrow \left(p_y(2) \Rightarrow \mathbf{G}\,q(1,2)\right)$$ We now define the set $\mathbb{D}_{\varphi,u}$ as the set of all value vectors with respect to a property $\varphi = \mathbb{Q}_\varphi \; \psi$ and a finite trace $u = u_0u_1\cdots u_k$: \begin{equation} \label{eq:D} \mathbb{D}_{\varphi,u} = \{D_{\varphi} \mid \exists j \in [0,k]: \forall i \in [0,n-1]: p_i(d_i) \in u_j\} \end{equation} where $n=|\mathbb{Q}_\varphi|$. \subsubsection{Canonical Variable Valuations} An {\sc Ltl}$_4-${\sc C}\xspace property follows a canonical structure, in which every counting quantifier $\mathcal{Q}_i$ has a {\em parent} quantifier $\mathcal{Q}_{i-1}$, except for $\mathcal{Q}_0$, which is the {\em root} counting quantifier. A counting quantifier $\mathcal{Q}_i$ is applied over all valuations of its variable $x_i$ given a unique valuation of its predecessor variables $x_0,\cdots ,x_{i-1}$. 
Hence, we define function $\mathcal{P}$, which takes as input a partial value vector $D_{\varphi}|^m$ and returns all partial value vectors in $\mathbb{D}_{\varphi,u}$ of length $m+1$ such that the first $m$ elements of these vectors are the same as $D_{\varphi}|^m$. In this context, we refer to $D_{\varphi}|^m$ as a {\em parent} vector and to all the returned vectors as {\em child} vectors. Similarly, a property instance can have a parent; for instance, $\hat{\varphi}(D_{\varphi}|^m)$ is the parent of $\hat{\varphi}(D_{\varphi}|^{m+1})$. \begin{equation*} \label{eq:eqP} \mathcal{P}(\varphi, u, D_{\varphi}|^m) = \bigg\{D_{\varphi}^\prime|^{m+1} \;\bigg|\; D_{\varphi}^\prime \in \mathbb{D}_{\varphi,u} \, \wedge \, D_{\varphi}^\prime|^m = D_{\varphi}|^m\bigg\} \end{equation*} Following the example above, assume there are two value vectors: $\langle 1,2\rangle$ and $\langle 1,3 \rangle$. In this case, \begin{equation*} \label{eq:exP} \mathcal{P}(\varphi,u,\langle 1 \rangle) = \big\{\langle 1,2 \rangle,\langle 1,3\rangle\big\} \end{equation*} \subsubsection{Valuation of Property Instances} As per the definition of $\mathbb{D}_{\varphi,u}$, every value vector $D_{\varphi}=\langle d_0 \cdots d_{n-1}\rangle$ in $\mathbb{D}_{\varphi,u}$ contains values for which the predicates $p_i(d_i)$ hold in some trace event $u_j$. For simplicity, we say that such a value vector is {\em in} the trace event $u_j$. These value vectors can possibly be in multiple and interleaved events in the trace. 
Thus, we define a trace $u^{D_{\varphi}}=u^{D_{\varphi}}_0 u^{D_{\varphi}}_1 \cdots u^{D_{\varphi}}_l$ as the subsequence of the trace $u$ such that the value vector $D_{\varphi}$ is in every event: $$\forall \,j \in [0,l] : \forall \,i \in [0,n-1]: p_i(d_i) \in u^{D_{\varphi}}_j$$ For any property instance $\hat{\varphi}(D_{\varphi})$, we wish to evaluate \linebreak $[u^{D_{\varphi}} \models_6 \hat{\varphi}(D_{\varphi})]$ (read as the valuation of $\hat{\varphi}(D_{\varphi})$ with respect to $u^{D_{\varphi}}$ for {\sc Ltl}$_4-${\sc C}\xspace), since any other event in trace $u$ is not of interest to $\hat{\varphi}(D_{\varphi})$. By leveraging $u^{D_{\varphi}}$, we define function $\mathcal{B}$ as follows: \\ \noindent$\mathcal{B}(\varphi,u,D_{\varphi}|^m,b) = $ \begin{equation*} \begin{cases} \big\{D_{\varphi}^\prime|^{m+1} \in \mathcal{P}(\varphi,u,D_{\varphi}|^m) \mid & \\ \quad [u^{D_{\varphi}^\prime|^{m+1}} \models_6 \hat{\varphi}(D_{\varphi}^\prime|^{m+1})] = b\big\} & \text{if}~m<|\mathbb{Q}_\varphi|-1 \\ \big\{D_{\varphi}^\prime|^{m+1} \in \mathcal{P}(\varphi,u,D_{\varphi}|^m) \mid & \\ \quad [u^{D_{\varphi}^\prime|^{m+1}} \models_4 \hat{\varphi}(D_{\varphi}^\prime|^{m+1})] = b\big\} & \text{if}~m=|\mathbb{Q}_\varphi|-1 \\ \end{cases} \end{equation*} where $b$ is a truth value in $\mathbb{B}_6$. Function $\mathcal{B}$ can be implemented in a straightforward manner: it iterates over all the child value vectors $D_{\varphi}^\prime|^{m+1}$, which are retrieved using $\mathcal{P}$. For every child vector, the function checks whether $\hat{\varphi}(D_{\varphi}^\prime|^{m+1})$ evaluates to $b$ with respect to the trace subsequence $u^{D_{\varphi}^\prime|^{m+1}}$. To clarify $\mathcal{B}$, let us refer to our example earlier. Let a program trace $u$ be as follows: $$u = \{p_x(1),p_y(2),\cdots\},\{p_x(1),p_y(3),\cdots\},\{p_x(1),p_y(2),\cdots\}$$ With respect to this trace, $\mathcal{P}(\varphi,u,\langle 1 \rangle) = \{\langle 1,2\rangle,\langle 1,3 \rangle\}$. 
As per the definition of $u^{D_{\varphi}}$, $u^{\langle 1,2\rangle}=u_0u_2$, and $u^{\langle 1,3\rangle}=u_1$. Thus, $\mathcal{B}(\varphi,u,\langle 1 \rangle,b)$ checks the following: \begin{align*} [u^{\langle 1,2\rangle} &\models_4 p_x(1) \Rightarrow \left(p_y(2) \Rightarrow \mathbf{G}\,q(1,2)\right)] = b \\ [u^{\langle 1,3\rangle} &\models_4 p_x(1) \Rightarrow \left(p_y(3) \Rightarrow \mathbf{G}\,q(1,3)\right)] = b \end{align*} The definition of $u^{D_{\varphi}}$ implies that $p_i(d_i) \in u^{D_{\varphi}}_j$ for all $j$. Thus, we can simplify the property by omitting the $p$ predicates since they hold by definition: \begin{align*} [u^{\langle 1,2\rangle} &\models_4 \mathbf{G}\,q(1,2)] = b \\ [u^{\langle 1,3\rangle} &\models_4 \mathbf{G}\,q(1,3)] = b \end{align*} For instance, if only $[u^{\langle 1,2\rangle} \models_4 \mathbf{G}\,q(1,2)] = b$ holds, then $$\mathcal{B}(\varphi,u,\langle 1 \rangle,b) = \{\langle 1 ,2 \rangle\}$$ As can be seen in the example, the property instances that are evaluated are {\sc Ltl}$_4$\xspace properties. This is because the input to $\mathcal{B}$ is $D_{\varphi}|^1=D_{\varphi}|^{|\mathbb{Q}_\varphi|-1}$, which represents the innermost quantifier. \subsubsection{Applying Quantifier Numerical Constraints} Finally, numerical constraints should be incorporated into the semantics. We define function $\mathcal{S}$ as follows: \begin{equation} \small \label{eq:S} \mathcal{S}(\varphi,u,D_{\varphi}|^m,B) = \begin{cases} \vphantom{\Bigg|}\bigg|\bigcup\limits_{b \in B}{\mathcal{B}(\varphi,u,D_{\varphi}|^m,b)}\bigg| \sim_i \\ \quad c_i \times \big|\mathcal{P}(\varphi,u,D_{\varphi}|^m)\big| & \text{iff } Q_m = \mathbb{A}\xspace\\ \vphantom{\Bigg|}\bigg|\bigcup\limits_{b \in B}{\mathcal{B}(\varphi,u,D_{\varphi}|^m,b)}\bigg| \sim_i c_i & \text{iff } Q_m = \mathbb{E}\xspace \end{cases} \end{equation} where $B \subseteq \mathbb{B}_6$ is a set of truth values.
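The interplay of $\mathcal{P}$, the trace subsequences $u^{D_\varphi}$, and $\mathcal{B}$ on this running example can be sketched in a few lines of Python. The trace encoding and helper names below are illustrative assumptions (not the paper's implementation), and the six-valued verdicts are collapsed into a Boolean check of $\mathbf{G}\,q(x,y)$ over the finite slice:

```python
# Illustrative sketch of P and B for the running example
#   A x : p_x(x) => (A y : p_y(y) => G q(x, y)).
# Events are sets of (predicate, value) pairs; q(x, y) is stored as
# ("q", (x, y)). Value vectors are tuples.

PREDS = ["p_x", "p_y"]  # predicate guarding each quantifier level

trace = [
    {("p_x", 1), ("p_y", 2), ("q", (1, 2))},   # u_0
    {("p_x", 1), ("p_y", 3), ("q", (1, 3))},   # u_1
    {("p_x", 1), ("p_y", 2)},                  # u_2: q(1, 2) fails here
]

def subtrace(tr, vec):
    """u^{D}: the events in which every p_i(d_i) of the value vector holds."""
    return [e for e in tr
            if all((PREDS[i], d) in e for i, d in enumerate(vec))]

def children(tr, parent):
    """P: all child value vectors extending `parent` by one more value."""
    kids = set()
    for e in subtrace(tr, parent):
        for pred, val in e:
            if pred == PREDS[len(parent)]:
                kids.add(parent + (val,))
    return kids

def holds_G_q(slice_, vec):
    """[u^{D} |= G q(x, y)] over the finite slice: q must hold in every event."""
    return all(("q", vec) in e for e in slice_)

def B(tr, parent, b):
    """B: the children whose instance evaluates to truth value b on its slice."""
    return {c for c in children(tr, parent)
            if holds_G_q(subtrace(tr, c), c) == b}

assert children(trace, (1,)) == {(1, 2), (1, 3)}
assert B(trace, (1,), True) == {(1, 3)}    # q(1, 3) holds on u_1
assert B(trace, (1,), False) == {(1, 2)}   # q(1, 2) fails on u_2
```

As in the text, $u^{\langle 1,2\rangle}$ consists of $u_0$ and $u_2$, and $\mathbf{G}\,q(1,2)$ fails on that slice, so only $\langle 1,3\rangle$ is returned for $b = \top$.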
This function returns whether a counting quantifier constraint is satisfied or not based on any of the truth values $b \in B$. Observe that, for percentage-counting quantifiers, the constraint value denotes the percentage of property instances that evaluate to $b$. For instance-counting quantifiers, the constraint value denotes the number of property instances that evaluate to $b$. For instance, consider Property~\ref{eq:example3}, which is read as: for all users, there exist at most $3$ requests of type login that end with an unauthorized status. For such a property, if $4$ or more unauthorized login attempts are detected for the same user, the property is permanently violated. \subsubsection{Inductive Semantics} Using the previously defined set of functions, we now formalize {\sc Ltl}$_4-${\sc C}\xspace semantics. \begin{definition}[{\sc Ltl}$_4-${\sc C}\xspace Semantics] \label{def:semantics} {\sc Ltl}$_4-${\sc C}\xspace semantics for properties with counting quantifiers are defined as follows: \begin{equation*} \label{eq:semantics} [u \models_6 \varphi] = \begin{cases} \top & \text{iff }\;\; \mathcal{S}(\varphi,u,\langle\rangle,\{\top\}) = 1 \; \wedge \\ &\forall v \in \Sigma^{\omega} : [uv \models_6 \varphi] = \top\\ \bot & \text{iff}\;\; \mathcal{S}(\varphi,u,\langle\rangle,\mathbb{B}_6-\{\bot\})= 0 \; \wedge \\ &\forall v \in \Sigma^{\omega} : [uv \models_6 \varphi] = \bot\\ {\top_c} & \text{iff}\;\; \mathcal{S}(\varphi,u,\langle\rangle,\{\top,{\top_c}\}) = 1 \; \wedge \\ &\exists v \in \Sigma^{\omega} : [uv \models_6 \varphi] \neq {\top_c}\\ {\bot_c} & \text{iff }\;\; \mathcal{S}(\varphi,u,\langle\rangle,\mathbb{B}_6-\{\bot,{\bot_c}\}) = 0 \; \wedge \\ &\exists v \in \Sigma^{\omega} : [uv \models_6 \varphi] \neq {\bot_c} \\ {\top_p} & \text{iff }\;\; \mathcal{S}(\varphi,u,\langle\rangle,\{\top,{\top_c},{\top_p}\}) = 1\; \wedge \\ &\mathcal{S}(\varphi,u,\langle\rangle,\{\top,{\top_c}\}) = 0\\ {\bot_p} & \text{iff }\;\;
\mathcal{S}(\varphi,u,\langle\rangle,\{\top,{\top_c},{\top_p}\}) = 0\; \wedge \\ &\mathcal{S}(\varphi,u,\langle\rangle,\mathbb{B}_6-\{\bot,{\bot_c}\}) = 0 \end{cases} \end{equation*} \hfill$\blacksquare$ \vspace{2mm} \end{definition} Note that these semantics are applied recursively until there is only one counting quantifier left in the formula, at which point $\mathcal{B}$ checks the valuation based on {\sc Ltl}$_4$\xspace semantics ($[u^{D_{\varphi}} \models_4 \hat{\varphi}(D_{\varphi})] = b$). When checking the valuation of these {\sc Ltl}$_4$\xspace properties, $\mathcal{B}$ will always return an empty set in case the input $b$ is ${\top_c}$ or ${\bot_c}$, since these truth values are inapplicable to {\sc Ltl}$_4$\xspace properties. As mentioned earlier, truth values in $\mathbb{B}_6$ form a lattice. Standard lattice operators $\sqcap$ and $\sqcup$ are defined as expected based on the lattice's partial order. Permanent satisfaction $(\top)$ or violation $(\bot)$ is applicable to $\mathbb{E}\xspace$ quantifiers regardless of the comparison operator, as well as to a special case of $\mathbb{A}\xspace$ quantifiers: \begin{itemize} \item \textbf{$\mathbb{A}\xspace$ quantifier.} As mentioned earlier, if the $\mathbb{A}\xspace$ quantifier is not subscripted, it is assumed to denote $\mathbb{A}\xspace_{=1}$. In this case, a single violation among its child property instances causes a permanent violation of the quantified property. \item \textbf{$\mathbb{E}\xspace$ quantifier.} Permanent violation is possible for any numerical constraint attached to an $\mathbb{E}\xspace$ quantifier, since it is a condition on the {\em number} of satisfied property instances. \end{itemize} Property~\ref{eq:example3} illustrates an example of an $\mathbb{E}\xspace$ quantifier that can be permanently violated.
Also, since the $\mathbb{A}\xspace$ quantifier in Property~\ref{eq:example3} defaults to $\mathbb{A}\xspace_{=1}$, it will be violated if a single user makes more than three unauthorized login attempts. In such a case, the entire property evaluates to $\bot$. Table~\ref{table:exists} illustrates how permanent satisfaction or violation applies to the different numerical constraints of $\mathbb{E}\xspace$ quantifiers. \begin{equation} \label{eq:example3} \mathbb{A}\xspace x : \code{user}(x) \Rightarrow \left(\mathbb{E}\xspace_{\le3} \,r:\code{rid}(r) \Rightarrow \left( \code{login} \wedge \code{unauthorized} \right) \right) \end{equation} \begin{table} [h!] \caption{Rules of permanent satisfaction or violation of $\mathbb{E}\xspace$ constraints} \label{table:exists} \centering \begin{tabular} {c|l} Operator & Verdict \\ \hline $> c$ & Permanent satisfaction if $> c$ \\ $\ge c$ & Permanent satisfaction if $\ge c$ \\ $= c$ & Permanent violation if $> c$ \\ $< c$ & Permanent violation if $\ge c$ \\ $\le c$ & Permanent violation if $> c$ \\ \end{tabular} \end{table} To clarify the semantics, consider Property~\ref{eq:example3} and the following program trace: \begin{align*} &\{\code{rid}(12),\code{user}(Adam),\code{login},\code{unauthorized}\}\\ &\{\code{rid}(13),\code{user}(Adam),\code{login},\code{unauthorized}\}\\ &\{\code{rid}(14),\code{user}(Jack),\code{login},\code{authorized}\}\\ &\{\code{rid}(15),\code{user}(Adam),\code{login},\code{unauthorized}\}\\ &\{\code{rid}(16),\code{user}(Adam),\code{login},\code{unauthorized}\} \end{align*} where each line represents an event: a set of interpreted predicates. Each event contains a request identifier (\code{rid}), a username, a request type ($\code{login}$), and a response status \linebreak ($\code{authorized}$ or $\code{unauthorized}$).
As seen in the trace, there are $5$ distinct value vectors: $\langle Adam, 12\rangle$, $\langle Adam, 13\rangle$, $\langle Jack, 14\rangle$, \linebreak $\langle Adam, 15\rangle$, and $\langle Adam, 16\rangle$. Now, let us apply the inductive semantics to the property. \textbf{Step 1.} We begin by checking the truth value of $[u \models_6 \varphi]$, which requires determining which condition in Definition~\ref{def:semantics} applies. This requires the evaluation of function $\mathcal{S}$ for the different truth values shown. Since we are verifying $\varphi$, we begin with the outermost counting quantifier, which is an $\mathbb{A}\xspace$ quantifier. Thus, $\mathcal{S}$ will require calculating the cardinality of the set $\mathcal{P}(\varphi,u,D_{\varphi}|^0)$, which for this trace is $|\{Adam,Jack\}| = 2$. Now, in order to evaluate $\mathcal{S}$, one has to evaluate $\mathcal{B}$ to determine whether each property instance evaluates to a certain truth value or not. The two property instances thus far are: \begin{align*} \hat{\varphi}(D_{\varphi}|^1)&=\hat{\varphi}(Adam)=\mathbb{E}\xspace_{\le3} \,r:\code{rid}(r) \Rightarrow \left( \code{login} \wedge \code{unauthorized} \right) \\ \hat{\varphi}(D_{\varphi}^\prime|^1)&=\hat{\varphi}(Jack)=\mathbb{E}\xspace_{\le3} \,r:\code{rid}(r) \Rightarrow \left( \code{login} \wedge \code{unauthorized} \right) \end{align*} The trace subsequences for these property instances are, respectively: \begin{align*} u^{D_{\varphi}|^1}=&\{\code{rid}(12),\cdots\}\{\code{rid}(13),\cdots\}\{\code{rid}(15),\cdots\}\{\code{rid}(16),\cdots\} \\ u^{D_{\varphi}^\prime|^1}=&\{\code{rid}(14),\cdots\} \end{align*} Note that $\code{user}(Adam) \Rightarrow \cdots$ is omitted from $\hat{\varphi}(D_{\varphi}|^1)$ since $\code{user}(Adam)$ holds according to the trace subsequence. The same applies to $\code{user}(Jack)$.
Evaluating these property instances with respect to the trace subsequences requires referring to Definition~\ref{def:semantics} again, which marks the second level of recursion. \textbf{Step 2.} Let us consider the property instance $\hat{\varphi}(D_{\varphi}|^1)$, which begins with an $\mathbb{E}\xspace$ quantifier and has {\sc Ltl}$_4$\xspace properties as child instances (refer to $\mathcal{P}$). These properties are of the form $\code{login}\, \wedge\, \code{unauthorized}$, where there is one instance for each distinct request identifier. We can deduce that the property holds for all $4$ requests: $12$, $13$, $15$, and $16$, thus evaluating to $\top$. Therefore, the following holds: $$\big|\mathcal{B}(\hat{\varphi}(D_{\varphi}|^1),u^{D_{\varphi}|^1},D_{\varphi}|^1,\top)\big| = 4$$ This cardinality, when used in $\mathcal{S}(\hat{\varphi}(D_{\varphi}|^1),u^{D_{\varphi}|^1},D_{\varphi}|^1,\{\top\})$, violates the numerical condition ($4 \not\le 3$), resulting in $\mathcal{S}$ evaluating to $0$ (false). Based on the conditions in Definition~\ref{def:semantics} and the rules of permanent violation, this property instance becomes permanently violated and thus the verdict is $\bot$. The other property instance $\hat{\varphi}(D_{\varphi}^\prime|^1)$ will, however, evaluate to $\top$, since its child property instance $$\hat{\varphi}(D_{\varphi}^\prime|^2)=\hat{\varphi}(\langle Jack,14\rangle) = \code{login} \,\wedge\, \code{unauthorized}$$ is violated, and thus the number of satisfied instances does not exceed $3$. \textbf{Step 3.} In this step, we use the valuations determined in Step 2 to produce verdicts for the property instances in Step 1. Based on $\mathcal{S}$, the $\mathbb{A}\xspace$ quantifier's numerical condition is violated, since not {\em all} instances are satisfied. The final verdict should thus be $[u \models_6 \varphi] = \bot$, which denotes a permanent violation of the property.
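The three steps of this worked example can be condensed into a short executable sketch. Everything below (the dictionary trace encoding, helper names) is an illustrative assumption rather than the paper's implementation, and the six-valued verdicts are collapsed into a Boolean "not permanently violated" check:

```python
# End-to-end sketch of the worked example for
#   A x : user(x) => (E_{<=3} r : rid(r) => login AND unauthorized).

trace = [
    {"rid": 12, "user": "Adam", "type": "login", "status": "unauthorized"},
    {"rid": 13, "user": "Adam", "type": "login", "status": "unauthorized"},
    {"rid": 14, "user": "Jack", "type": "login", "status": "authorized"},
    {"rid": 15, "user": "Adam", "type": "login", "status": "unauthorized"},
    {"rid": 16, "user": "Adam", "type": "login", "status": "unauthorized"},
]

def inner_instance_holds(e):
    """Child LTL_4 instance: login AND unauthorized."""
    return e["type"] == "login" and e["status"] == "unauthorized"

def user_verdict(tr, user):
    """E_{<=3} over request ids: fails once more than 3 instances hold."""
    sat = sum(1 for e in tr if e["user"] == user and inner_instance_holds(e))
    return sat <= 3

users = {e["user"] for e in trace}                     # Step 1: A-quantified domain
per_user = {u: user_verdict(trace, u) for u in users}  # Step 2: E_{<=3} per user
overall = all(per_user.values())                       # Step 3: A defaults to A_{=1}

assert per_user == {"Adam": False, "Jack": True}
assert overall is False   # matches the paper's verdict [u |=_6 phi] = bot
```

Adam accumulates $4$ satisfied child instances, violating $\le 3$; Jack accumulates $0$; and the unsubscripted $\mathbb{A}$ quantifier then fails because not all child instances hold.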
\section{Related Work} \label{sec:related} Runtime verification of parametric properties has been studied by Rosu et al.~\cite{Hussein,6227231,meredith2013efficient}. In this line of work, it is possible to build a runtime monitor parameterized by objects in a Java program. The work by Chen and Rosu~\cite{chen2009parametric} presents a method of monitoring parametric properties in which a trace is divided into slices, such that each monitor operates on its own slice. This resembles our method of identifying trace subsequences and processing them with submonitors. However, parametric monitoring does not provide a formalization of applying existential and numerically constrained quantifiers over objects. Bauer et al.~\cite{bauer2013propositional} present a formalization of a variant of first-order logic combined with LTL. This work is related to ours in that it instantiates monitors at run time according to valuations, and defines quantification over a finite subset of the quantified domain, normally with that subset being defined by the trace. Our work extends this notion with numerical constraints over quantifiers, as well as a parallel algorithm for monitoring such properties. The work by Leucker et al. presents a generic approach for monitoring modulo theories~\cite{deckermonitoring}, providing a more expressive specification language: our work enforces a canonical syntax that is not required in~\cite{deckermonitoring}, which grants the latter greater expressiveness. However, the monitoring solution in~\cite{deckermonitoring} requires SMT solving at run time. This may induce substantial overhead compared to the lightweight parallel algorithm presented in this paper, which is designed to allow offloading the workload onto a GPU. SMT solving also runs the risk of undecidability, and it is unclear whether this is accounted for.
{\sc Ltl}$_4-${\sc C}\xspace is based on six-valued semantics, extending {\sc Ltl}$_4$\xspace with two truth values: ${\top_c}$ and ${\bot_c}$. These truth values are added to support quantifiers and their numerical constraints. This six-valued semantics provides a more accurate assessment of the satisfaction of a property based on finite traces than the three-valued semantics in~\cite{deckermonitoring}. Finally, although {\sc Ltl}$_4-${\sc C}\xspace does not support the expressiveness of full first-order logic, numerical constraints add a flavor of second-order logic, increasing its expressiveness in the domain of properties where some percentage or count of satisfied instances needs to be enforced. The work in~\cite{barre2013mapreduce} presents a method of using MapReduce to evaluate LTL properties. The algorithm is capable of processing arbitrary fragments of the trace in parallel. Similarly, the work in~\cite{basinscalable} presents a MapReduce method for offline verification of LTL properties with first-order quantifiers. Our work uses a similar approach in leveraging MapReduce, yet also adds the expressiveness of counting semantics with numerical constraints. In addition, our approach supports both offline and online monitoring by introducing six-valued semantics, which can reason about the satisfaction of a partial trace; whether~\cite{basinscalable} supports online monitoring is unclear, as no evidence of it is provided. Finally, the work in~\cite{bbf13} presents two parallel algorithms for verification of propositional {\sc Ltl}\xspace specifications at run time. These algorithms are implemented in the tool RiTHM~\cite{njsbmfb13}. This paper enhances the framework in \cite{bbf13, njsbmfb13} by introducing a significantly more expressive formal specification language along with a parallel runtime verification system.
\section{Introduction} \label{sec:introduction} Machine-type communications (MTC) are typically characterized by a massive number of machine-type devices that connect to the network to transmit small data payloads. These features present a significant challenge to cellular networks, whose radio access part is traditionally designed to deal with a rather low number of connections with high data requirements. Specifically, current cellular networks, such as LTE-A, are connection-oriented~\cite{TribudiWiriaatmadja2014}, requiring a connection establishment between the device and the Base Station (BS) before the device can transmit its data packet. As an example, the connection establishment in LTE-A involves a high amount of signaling overhead, which is particularly emphasized when the data payload is small, e.g., less than 1000 bytes~\cite{3GPPTR37.869}. Therefore, 3GPP proposed an approach to optimize the connection establishment by reducing the signaling overhead~\cite{3GPPTR36.888}. The resulting simplified connection establishment protocol starts with the contention-based Access Reservation Protocol (ARP)~\cite{3GPPTS36.321}, depicted in the first four steps of Fig.~\ref{fig:ARPComparison}(a), followed by a fifth message in which the signaling and a small data payload are concatenated. The signaling exchanges related to the security mechanisms are omitted in the optimized version of the LTE-A connection establishment, by reusing an a-priori established security context~\cite{3GPPTR37.869}. The throughput and blocking probability of the ARP are rather sensitive to the number of contending devices. Specifically, the devices contend for access by sending their preambles in a designated and periodically occurring uplink sub-frame, here termed a random access opportunity (RAO).
When the number of contending devices is high~\cite{7397849}, multiple devices activate the same preamble in a RAO, which leads to collisions of their RRC Connection Requests, see Fig.~\ref{fig:ARPComparison}(a). Consequently, most devices are unable to establish a connection in the first attempt and perform subsequent attempts that, due to the high load, are also likely to result in collisions. A solution put forward to cope with congestion was the extended access class barring (EAB) \cite{36331}, where certain classes of devices are temporarily blocked from participating in the ARP, but at the cost of an increased access latency for those same devices. Another drawback of the ARP is that the network learns the devices' identities and connection establishment causes only after the RRC Connection Request is successfully received, as the contention is performed via randomly chosen preambles that do not carry information. A solution that allows the network to learn the identities and connection establishment causes of the contending devices already at the beginning of the ARP, could enable their differentiated treatment in later phases of the connection establishment and even skip some of the steps in the LTE-A random access protocol, as indicated in Fig.~\ref{fig:ARPComparison}. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{ARPComparison} \caption{(a) LTE-A connection establishment protocol optimized for MTC~\cite{3GPPTR36.888} and (b) signature-based modification of LTE-A connection establishment.} \vspace{-0.5cm} \label{fig:ARPComparison} \end{figure} In this paper we propose a new access method based on signatures and Bloom filtering~\cite{Bloom1970}. The method is demonstrated in the context of the LTE-A ARP, however, we note that it can be employed in the next generation ARPs~\cite{FANTASTICIR412016} following similar principles.
In the proposed method, instead of contending with a single preamble in a RAO, the devices contend by transmitting a predefined sequence of preambles in a frame composed of several RAOs. The transmitted sequence of preambles is denoted as the \emph{device signature}. The presented ideas are a conceptual extension of the work in~\cite{ETT:ETT2656}, where the devices contend for access by selecting a random signature, generated by combining random preambles over consecutive RAOs. In contrast, in the method described here, each device contends with a unique signature generated using the International Mobile Subscriber Identity (IMSI) of the device and its connection establishment cause, in further text referred to as the device's identification.\footnote{We note that the proposed method can be straightforwardly applied to cases where some other information is used for signature generation.} Specifically, we apply the Bloom-filter~\cite{Bloom1970} principles for signature generation, where the device's identification is hashed over multiple independent hash functions and the resulting output is used to select which preamble in which RAO to activate. We introduce an analytical framework through which we tune the signature properties, i.e., its length and the number of activated preambles, based on the number of expected arrivals and the target efficiency of the use of system resources, denoted as the goodput. We also investigate the expected latency and signature detection probability of the proposed method. Finally, we show that, when the arrivals are synchronous, the proposed method outperforms the LTE-A connection establishment procedure in terms of goodput, while achieving similar or lower average latency. The rest of the paper is organized as follows. Section~\ref{sec:LTE_ARP} summarizes the standard ARP in LTE-A.
Section~\ref{sec:proposed_contention_modifications} describes the proposed access method and Section~\ref{sub:analytical_performance_model} presents the corresponding analysis. Section~\ref{sec:system_performance_evaluation} evaluates the performance of the proposed method, comparing it with the reference LTE-A procedure for MTC traffic. Section~\ref{sec:conclusions} concludes the paper. \section{LTE-A Access Reservation Procedure} \label{sec:LTE_ARP} A successful LTE-A access reservation entails the exchange of four messages\footnote{For the sake of brevity, we omit the details that are nonessential for the proposed method, such as the power ramping procedure etc.}, as depicted in Fig.~\ref{fig:ARPComparison}(a). Initially, a device randomly chooses a preamble to be transmitted in a RAO from a set of available preambles generated using Zadoff-Chu sequences~\cite{1054840}. The preambles are orthogonal and can be simultaneously detected by the BS. We also note that the BS is able to detect a preamble even when it is transmitted by multiple devices~\cite{TribudiWiriaatmadja2014,ETT:ETT2656}, i.e., a collision in the ``preamble space'' is still interpreted as an activated preamble. This represents a logical OR operation, since the preamble is detected as activated if there is \emph{at least} one device that transmits the preamble. This observation motivates the use of a Bloom filter, a data structure based on the OR operation for testing set membership. The devices whose preambles are detected are notified via a Random Access Response (RAR) in the downlink and assigned a temporary network identifier. The reception of the RAR triggers the transmission of the RRC Connection Request in the allocated uplink sub-frame. At this point, the BS is able to detect the collision of the multiple connection requests, sent by the devices that originally sent the same preamble. The successfully received connection requests are acknowledged, marking the start of the data transmission phase.
On the other hand, the devices whose connection requests collided do not receive the feedback and either contend again by sending a new preamble or end up in outage when the number of connection attempts reaches the predefined limit. In the RRC Connection Request, the device informs the network of its temporary identifier, IMSI, and the connection establishment cause. From these, the network can confirm if the device is authorized for access, track the device's subscribed services and reestablish the preexisting security context~\cite{3GPPTR37.869}. As already mentioned, the channel over which the devices contend can be modeled as an OR multiple access channel (OR-MAC). Let $A=\{a_i, i = 0,1,..., M \}$ denote the set of available preambles, where the absence of preamble activation is denoted by the idle preamble $a_0$. Assume that there are $T$ devices in total. We model the contention by assuming that the device $h$, $h=1,\dots, T$, transmits a binary word \begin{align}\label{eq:x} \mathbf{x}^{(h)} = [ x^{(h)}_0, x^{(h)}_1, \cdots, x^{(h)}_M ], \end{align} where $x^{(h)}_i=1$ indicates that device $h$ transmitted preamble $a_i$. Note that only a single entry $x^{(h)}_i$, $0 \leq i \leq M$, can be set to 1 since a device can only transmit a single preamble in a single RAO. The BS observes \begin{align}\label{eq:y} \mathbf{y} = \bigoplus_{h = 1}^{T} \hat{\mathbf{x}}^{(h)}, \end{align} where $\bigoplus$ denotes a bit-wise OR operator and $\hat{\mathbf{x}}^{(h)}$ is the detected binary word of device $h$. In particular, the BS detects a transmitted preamble with probability $p_d \leq 1$ and with probability $p_f \geq 0$ falsely detects a non-transmitted preamble, which may cause $\mathbf{x}^{(h)} \neq \hat{\mathbf{x}}^{(h)}$. In practice, the preamble detection at the BS should ensure that $p_d > 0.99$ and $p_f <10^{-3}$~\cite{3GPPTS36.141}\footnote{The $p_d$ requirement in~\cite{3GPPTS36.141} corresponds to the single activation of a preamble.
When a preamble is activated by multiple devices, it is expected that the effective $p_d$ will be higher~\cite{TribudiWiriaatmadja2014}.}. Finally, every non-zero entry in $\mathbf{y}$ implies a detection of the corresponding preamble. Obviously, in the best-case scenario, the BS can detect up to $M$ different devices in a RAO. \section{The Proposed Method} \label{sec:proposed_contention_modifications} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{LTEORMAC} \vspace{-0.5cm} \caption{Illustration of the mapping of the LTE-A preambles into a signature frame composed by multiple RAOs.} \vspace{-0.5cm} \label{fig:LTEORMAC} \end{figure} The essence of the proposed method lies in the idea of devices contending with combinations of $K$ preambles transmitted over $L$ RAOs, denoted as signatures. Each preamble of a signature is sent in a separate RAO, while $L$ RAOs define a signature frame, see Fig.~\ref{fig:LTEORMAC}. Extending the model introduced in Section~\ref{sec:LTE_ARP}, device $h$ contends by transmitting its signature \begin{align} \mathbf{s}^{(h)} = [ \mathbf{x}^{(h)}_{1}, \mathbf{x}^{(h)} _{2}, \cdots, \mathbf{x}^{(h)} _{L}], \end{align} where the binary words $\mathbf{x}^{(h)}_i$, $i = 1, \dots, L$, follow the structure introduced in \eqref{eq:x}. Obviously, the number of available signatures is $\binom{L}{K} M^K$, potentially allowing for the detection of exponentially more contenders compared to the case in which the preambles sent in each of the $L$ RAOs are treated independently and where the maximal number of detected contenders is $L \cdot M$. Similarly to \eqref{eq:y}, the BS observes \begin{equation}\label{eq:y_new} \mathbf{y} = \bigoplus_{h = 1}^{N} \hat{\mathbf{s}}^{(h)}, \end{equation} where $\hat{\mathbf{s}}^{(h)}$ is the detected version of $\mathbf{s}^{(h)}$.
The BS decodes all signatures $\mathbf{s}$ for which the following holds \begin{align}\label{eq:det} \mathbf{s} = \mathbf{s} \bigotimes \mathbf{y}, \end{align} where $\bigotimes$ is the bit-wise AND. At this point, we turn to a phenomenon intrinsically related to the proposed contention method~\cite{ETT:ETT2656}. Namely, even in the case of perfect preamble detection ($p_d = 1$) and no false detections ($p_f = 0$), the BS may also decode signatures that have \emph{not} been transmitted but for which \eqref{eq:det} also holds. In other words, the BS may decode \emph{false positives}. An example of this is shown in Fig.~\ref{fig:ORMACSignatureTransmissionDetectionExample}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{ORMACSignatureTransmissionDetectionExample} \caption{Example of: (a) synchronous transmission of $3$ signatures when $L = 3$ and $M = 3$ and (b) erroneous decoding of a signature which was not present in the original transmission ($p_d = 1$ and $p_f = 0$).} \vspace{-0.5cm} \label{fig:ORMACSignatureTransmissionDetectionExample} \end{figure} The performance of the random signature construction in terms of probability of decoding false positives was first analyzed in \cite{ETT:ETT2656}, where they are referred to as phantom sequences. On the other hand, there is extensive work on the construction of OR-MAC signatures~\cite{Gyori20081407} based on the following criterion: if up to $N$-out-of-$T$ signatures are active, then there are no false positives. However, these constructions are not directly applicable to the LTE-A access, as they would (1) require that a device sends multiple preambles in the same RAO, and (2) imply rather long signature lengths, i.e., $\frac{N^2 \log_2 T}{2 M \log_2 N} \leq L \leq \frac{N^2 \log_2 T}{M\ln 2}$, which implies an increased access latency.
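The superposition model and the decoding rule $\mathbf{s} = \mathbf{s} \bigotimes \mathbf{y}$, together with the phantom-signature phenomenon, can be illustrated with a minimal sketch. The toy dimensions ($L = 2$ RAOs, $M = 2$ preambles) and ideal detection ($p_d = 1$, $p_f = 0$) are illustrative assumptions, as are all helper names:

```python
# Toy sketch of the OR-MAC model: the BS observes the bit-wise OR of the
# transmitted signatures and declares active every signature s satisfying
# s = s AND y. Phantom signatures can pass this test (cf. Fig. 3).

L_RAO, M_PRE = 2, 2   # RAOs per signature frame, preambles per RAO

def to_bits(active):
    """Map a list of (rao, preamble) activations to a flat L*M bit tuple."""
    bits = [0] * (L_RAO * M_PRE)
    for rao, pre in active:
        bits[rao * M_PRE + pre] = 1
    return tuple(bits)

def observe(signatures):
    """The OR-MAC observation y with ideal detection: bit-wise OR."""
    y = [0] * (L_RAO * M_PRE)
    for s in signatures:
        y = [a | b for a, b in zip(y, s)]
    return tuple(y)

def decodes(s, y):
    """Decoding rule: s is declared active iff s = s AND y."""
    return all(not sb or yb for sb, yb in zip(s, y))

s1 = to_bits([(0, 0), (1, 0)])   # preamble 1 in both RAOs
s2 = to_bits([(0, 1), (1, 1)])   # preamble 2 in both RAOs
y = observe([s1, s2])

phantom = to_bits([(0, 0), (1, 1)])   # never transmitted
assert decodes(s1, y) and decodes(s2, y)
assert decodes(phantom, y)   # a false positive, despite p_d = 1, p_f = 0
```

Here two transmitted signatures jointly activate all four (RAO, preamble) positions, so a third, never-transmitted combination also satisfies the decoding rule.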
Inspired by Bloom filters~\cite{Bloom1970}, we propose a novel signature construction that uses much lower signature lengths, at the expense of introducing false positives in a controlled manner. \subsection*{Signature Construction based on Bloom Filtering} \label{sub:bloom_filter_inspired_signatures} In the proposed method, the device signature is constructed in such a way that it provides a representation of the device's identification, which is assumed to be a-priori known to the network. To illustrate how a signature is constructed, we first consider the case where a single preamble is available at each of the $L$ RAOs dedicated to the signature transmission, i.e., $M=1$. Taking the view of device $h$, we start with the binary array $\mathbf{s}^{(h)}$ of length $L$, indexed from $1$ to $L$, where all the bits are initially set to $0$. We then activate $K$ index positions in this array, i.e., we set them to $1$; note that $K$ is a predefined constant valid for all devices. This is done by using $K$ independent hash functions, $f_j ( \mathbf{u}^{(h)} )$, $j = 1, \dots, K$, whose output is an integer value between 1 and $L$, corresponding to an index position in the array, and where $\mathbf{u}^{(h)}$ is the representation of the device's identity. The resulting binary array becomes the device signature. This construction follows the same steps as the object insertion operation in a Bloom filter~\cite{Bloom1970}. When $M>1$, the signature construction occurs in two stages. The first stage corresponds to the selection of the $K$ active RAOs using hash functions $f_j ( \mathbf{u}^{(h)} )$, $j = 1, \dots, K$, as described previously. In the second stage, for each of the activated RAOs, a contending device randomly selects and transmits one of $M$ preambles.
This is performed by hashing the device identity using another set of independent hash functions $g_j ( \mathbf{u}^{(h)} )$, $j = 1, \dots, K$, i.e., a separate hash function for each RAO, whose output is an integer between $1$ and $M$ that corresponds to one of the available preambles. \subsection*{Signature-Based ARP} \label{sub:signature_ARP} The signature-based access reservation protocol is depicted in Fig.~\ref{fig:ARPComparison}(b), and starts with the devices transmitting their signatures. Upon the successful decoding of a signature, the BS transmits the \emph{RRC Connection Setup} message. In contrast with the LTE-A ARP depicted in Fig.~\ref{fig:ARPComparison}(a), the messages 2 and 3 are not required in the signature-based access, since the BS is able to determine from the signature the IMSI of the device and the connection establishment cause. The protocol concludes with the transmission of the small data payload together with the completion of the RRC connection message. \subsection*{Practical Considerations} \label{sub:practical_considerations} The described signature generation raises two important issues: (i) out of $K$ hash functions $f_j ( \mathbf{u}^{(h)} )$, $j = 1, \dots, K$, there is a probability of $1 - K!\binom{L}{K}/L^K$ that at least two of these functions generate the same output, leading to less than $K$ distinct RAOs active in a signature; (ii) there is a non-zero probability that two or more devices share the same signature, given by \begin{equation} \sum_{i=2}^{T} \binom{T}{i} p^i(1-p)^{T-i} \mbox{ with } p = \left[ \binom{L}{K} (M)^K \right]^{-1} \end{equation} and $T$ as the total number of devices. The above probabilities can be minimized by increasing the signature length $L$, which is the reason why these issues are commonly ignored within the Bloom filter related literature, where $L$ is of the order of $10^4$.
Although we do not use such large ranges for $L$, we note that for values of $L>10$ and $5 < K < L$ that are used in the performance evaluation in Section~\ref{sec:system_performance_evaluation}, the second probability can be neglected, as in this case $T \ll \binom{L}{K} (M)^K$. \begin{algorithm}[t]\label{alg:bloomfilterinsertion} \textbf{Input}: {$\mathbf{u}^{(h)}$, $L$, $M$, $K$}; \\ \textbf{Initialize}: $\mathbf{s}^{(h)} \gets \mathbf{0} $, $ \mathbf{L} \gets 1...L$, $ \mathbf{M} \gets 1...M$ \; \For{$ j : 1 \cdots K$}{ $i \gets \mathbf{L} (\text{mod}(\mathbf{u}^{(h)},L+1-j))$\; $\mathbf{L} = \mathbf{L} \setminus \{i\}$\; $m \gets \mathbf{M} (\text{mod}(\mathbf{u}^{(h)},M+1-j))$\; $\mathbf{M} = \mathbf{M} \setminus \{m\}$\; $x_{i,m}^{(h)} = 1$\; } Output {$\mathbf{s}^{(h)}$}; \\ \caption{Signature generation for $h^{th}$ device, where $\mathbf{u}^{(h)}$ is the device's identification and $x_{i,m}^{(h)}$ indicates activation of $m^{th}$ preamble in $i^{th}$ RAO of the signature $\mathbf{s}^{(h)}$.} \end{algorithm} The first issue can be addressed by a signature construction that enforces $K$ distinct active RAOs per signature. We provide in Alg.~\ref{alg:bloomfilterinsertion} a description of a practical signature construction that uses the modulus operation as the basis for the hashing. This construction ensures that $K$ distinct RAOs are active per signature, by removing the RAOs selected in previous iterations from the set of available RAOs. Further, the preambles activated in previously selected RAOs are removed from the set of preambles available for the next iteration. This operation limits the generation of signatures to $K\leq \min(M,L)$ active RAOs; however, this is within the operating range of interest where $K<M$ and allows us to apply probabilistic tools, as presented in the analysis in Section~\ref{sub:analytical_performance_model}, to design the signature length $L$ and number of active RAOs $K$.
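The construction above can be transcribed into a short Python sketch. This is a hedged reading of Alg.~1, with the modulus operation standing in for the hash functions $f_j$ and $g_j$, an integer $u$ assumed as the encoding of the device's identification, and 0-based list indexing replacing the 1-based notation of the pseudocode:

```python
# Sketch of the modulus-based signature generation of Alg. 1: K distinct
# RAOs (and preambles) are guaranteed by removing each selection from the
# candidate sets before the next iteration.

def signature(u, L, M, K):
    """Return K (RAO, preamble) activations, all RAOs and preambles distinct."""
    assert K <= min(L, M)          # operating range discussed in the text
    raos = list(range(1, L + 1))
    preambles = list(range(1, M + 1))
    active = []
    for j in range(1, K + 1):
        i = raos.pop(u % (L + 1 - j))        # pick and remove a RAO
        m = preambles.pop(u % (M + 1 - j))   # pick and remove a preamble
        active.append((i, m))
    return active

sig = signature(u=123456789, L=5, M=4, K=3)   # u is an illustrative identity
assert len(sig) == 3
assert len({r for r, _ in sig}) == 3   # K distinct RAOs, as Alg. 1 enforces
assert len({m for _, m in sig}) == 3   # and K distinct preambles
```

At iteration $j$ the candidate RAO list has exactly $L+1-j$ elements, so the modulus always yields a valid index, mirroring the shrinking sets $\mathbf{L}$ and $\mathbf{M}$ in the pseudocode.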
As will be shown in Section~\ref{sec:system_performance_evaluation}, the proposed signature generation algorithm matches the derived analytical model well. Finally, we note that an essential prerequisite for the proposed signature access scheme is that the signature generation algorithm and all the hash functions are known both to the devices and to the BS. This can be accomplished via the existing periodic broadcasts that include the network configuration; an alternative would be to include this information already in the device's subscriber identity module. \section{Analysis} \label{sub:analytical_performance_model} We analyze a single instance of the contention process, assuming a synchronous batch arrival of $N_\text{a}$ devices. We assume that the probability of an arrival of a device is $p_a = \mathrm{E} [ N_\text{a} ] /T$, and denote the expected number of arrivals as $N = \mathrm{E} [ N_\text{a} ]$. The parameters of the proposed scheme are the signature frame size, denoted by $L$, the number of active RAOs in the signature, denoted by $K$, and the number of preambles per RAO that are available for signature construction, denoted by $M$. The first two parameters are subject to design, and we analyze their dimensioning such that, when on average $N$-out-of-$T$ signatures are active, the false positive rate is below a threshold. In contrast, $M$ is assumed to be fixed, which corresponds to the typical scenario in LTE-A systems. We start by establishing the relationship between the correctly detected signatures and all detected signatures, which also include the false positives, after all the contenders have completed the $3^{rd}$ step of the proposed method, see Fig.~\ref{fig:ARPComparison}(b). We denote this metric as the goodput $G$. In essence, the goodput reflects the efficiency of the subsequent small data transmission, as the BS will also attempt to serve the falsely detected signatures.
The expected goodput is \begin{equation} \label{eq:G_def} \mathrm{E} \left[ G \right] = \mathrm{E} \left[ \frac{N_\text{a} }{N_\text{a} + P} \right] \approx \frac{\mathrm{E} [N_\text{a}] }{ \mathrm{E} [N_\text{a}] + \mathrm{E}[P]} = \frac{N}{N + \mathrm{E}[P]}, \end{equation} where $P$ is the number of false positives. From \eqref{eq:G_def} it follows that \begin{align} \label{eq:G_bounds} \frac{N}{T} \leq \mathrm{E} [ G ] \leq 1, \end{align} as there can be no more than $T$ detected signatures. The mean number of false positives $\mathrm{E}[P]$ can be approximated as \begin{equation*} \mathrm{E}[P] \approx p_\text{fa} (T - N ), \end{equation*} where $T- N$ corresponds to the mean number of inactive signatures, while $p_\text{fa}$ denotes the false positive probability, i.e., the probability of an inactive signature being perceived as active. Eq.~\eqref{eq:G_def} now becomes \begin{align} \label{eq:G_approx} \mathrm{E} \left[ G \right] \approx \frac{N}{N + p_\text{fa} (T - N )}. \end{align} Using \eqref{eq:G_approx}, we proceed by setting the target goodput $\hat{G}$ and establishing the relation between $\hat{G}$ and the corresponding target $\hat{p}_\text{fa}$ \begin{equation} \label{eq:phantomTarget} \hat{p}_\text{fa} = \frac{N ( 1 - \hat{G}) }{ ( T - N ) \hat{G}}. \end{equation} To compute $p_\text{fa}$, we rely on approximations that hold when the number of simultaneously active signatures $N$ is high enough. Specifically, $p_\text{fa}$ is the probability that all $K$ preambles associated with an inactive signature are detected as activated by the BS. Each of these $K$ preambles can be (i) actually activated by an active signature and detected as such by the BS, or (ii) not activated by any of the active signatures, but falsely detected as activated by the BS.
Now, the probability that a particular preamble in a particular RAO is not activated by any of the signatures, denoted by $p_\text{idle}$, is \begin{equation} p_\text{idle} = \left( 1 - \frac{K}{L \cdot M}\right)^{N}, \end{equation} where $L \cdot M$ is the total number of preambles in the $L$ RAOs, $K$ is the number of preamble activations per user, $N$ is the number of active signatures, and it is assumed that the selection of any preamble in any RAO is equally likely. The detection of a preamble is non-ideal and therefore we have to distinguish between two events: (i) detection of a preamble transmitted by at least one device, with probability $p_d$; (ii) false detection of a non-transmitted preamble, with probability $p_f$. We approximate $p_\text{fa}$ as \begin{align}\label{eq:PhantomSignature} p_\text{fa} &\overset{(a)}{\approx} \left[ (1 - p_\text{idle}) \cdot p_d + p_\text{idle} \cdot p_{f} \right]^K \\ \nonumber &= \left[ p_d + (p_{f} - p_d) \cdot p_\text{idle} \right]^K, \end{align} where (a) becomes a lower bound when $M=1$, $p_d = 1$ and $p_f = 0$~\cite{Christensen:2010:NAF:1850837.1850860}.
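The expressions for $p_\text{idle}$, $p_\text{fa}$ and the approximate goodput can be chained directly; a sketch (function names are ours, default detection probabilities follow the values used later in the evaluation):

```python
def false_alarm_prob(N, L, M, K, p_d=0.99, p_f=1e-3):
    # Probability that a given preamble position stays untouched by all
    # N active signatures, then the approximation that all K positions
    # of an inactive signature are nevertheless seen as active.
    p_idle = (1.0 - K / (L * M)) ** N
    return (p_d + (p_f - p_d) * p_idle) ** K

def expected_goodput(N, T, L, M, K, p_d=0.99, p_f=1e-3):
    # E[G] ~ N / (N + p_fa (T - N)), the goodput approximation above
    return N / (N + false_alarm_prob(N, L, M, K, p_d, p_f) * (T - N))
```

As expected, enlarging the signature frame $L$ drives $p_\text{idle}$ towards one and the false alarm probability towards $p_f^K$, so the goodput approaches one.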
From \eqref{eq:PhantomSignature}, the required signature frame size $\hat{L}$ to meet the target $\hat{p}_\text{fa}$ is \begin{equation}\label{eq:LNonOptimal} \hat{L} = \frac{K}{M} \left[ 1 - \left(\frac{\hat{p}_\text{fa}^{1/K}-p_d}{p_f - p_d}\right)^{1/N} \right]^{-1}. \end{equation} \begin{algorithm}[t]\label{alg:signatureDetection} \textbf{Input}: {$\mathbf{S}$, $\mathbf{y}$, $L$, $M$, $K$}; \\ \textbf{Initialize}: $\mathbf{V} = \mathbf{S}$, $\mathbf{D} = \emptyset$\; \For{$ i : 1 \cdots L \, M$}{ \For{$ \mathbf{s^{(h)}} \in \mathbf{V} \setminus \mathbf{D}$}{ \If{$\mathbf{s^{(h)}}(1:i) \neq \mathbf{s^{(h)}}(1:i) \bigotimes \mathbf{y}(1:i)$}{ $\mathbf{V} = \mathbf{V} \setminus \{\mathbf{s^{(h)}}\}$\; } \If{$ \left( \mathbf{V} \setminus \mathbf{s^{(h)}}(1:i ) \right) \bigotimes \mathbf{y}(1:i) \neq \mathbf{y}(1:i) $ } { $\mathbf{D} = \mathbf{D} \cup \{\mathbf{s^{(h)}}\}$\; Report to $\mathbf{u^{(h)}}$ that $\mathbf{s^{(h)}}$ is decoded\;} } } \For{$ \mathbf{s^{(h)}} \in \mathbf{V} \setminus \mathbf{D}$}{ $\mathbf{D} = \mathbf{D} \cup \{\mathbf{s^{(h)}}\}$; Report to $\mathbf{u^{(h)}}$ that $\mathbf{s^{(h)}}$ is decoded\; } \caption{Iterative signature decoding where $\mathbf{S}$ is the set of signatures and $\mathbf{D}$ is the set of decoded signatures.} \end{algorithm} To compute the $K$ that minimizes $\hat{L}$ in \eqref{eq:LNonOptimal}, we assume $p_d = 1$ and $p_f = 0$. Then, for a given $N$ and $L$, the value of $K$ that minimizes $p_\text{fa}$ is given by~\cite{Mitzenmacher2001} \begin{equation}\label{eq:OptimalK} K_{\min} = \frac{L \cdot M}{N} \ln 2. \end{equation} We use \eqref{eq:OptimalK} to find the minimal required $\hat{L}$ via \eqref{eq:LNonOptimal}. Furthermore, recall that each device can only activate up to a single preamble per RAO, resulting in the constraint \begin{align} K_{\min} = L \, \min \left(1,\frac{M}{N} \ln 2\right), \end{align} where we assume operation in the regime in which $\frac{M}{N} \ln 2 < 1$, i.e., where $N > M \ln 2$.
Now, the minimum $\hat{L}$ can be obtained by iteratively solving the following fixed-point equation, obtained by combining \eqref{eq:LNonOptimal} and \eqref{eq:OptimalK} \begin{equation}\label{eq:IterativeL} \hat{L} = \ceil[\Bigg]{ \frac{\ceil{K_{\min}}}{M} \left[ 1 - \left( \frac{\hat{p}_\text{fa}^{1 / \ceil{K_{\min}}} - p_d}{p_f - p_d} \right)^{1/N}\right]^{-1}}, \end{equation} which converges for $p_d \geq 0.99$ and $p_f \leq 10^{-3}$, i.e., the prescribed preamble detection performance~\cite{3GPPTS36.141}. \subsection{Signature Decoding} \label{sub:receiver_performance} A straightforward approach to signature decoding is to perform it only after all RAOs of the signature frame have been received, i.e., after the BS has observed the whole signature frame. An alternative is to perform the decoding iteratively after every received signature RAO, i.e., the BS attempts to decode a signature while only having access to a partial observation of the signature frame. The latter strategy is inspired by the fact that the $K$ active RAOs constituting a signature are randomly spread over the signature frame and, in principle, the BS does not have to wait until the end of the frame to detect a signature. The decoding performance is the same for both strategies once all $L$ RAOs in the signature frame have been received, but the average latency of the latter approach is lower. We provide in Alg.~\ref{alg:signatureDetection} an algorithmic description of the iterative signature decoding, where the notation $\mathbf{z}(1:i)$ denotes the first $i$ entries of vector $\mathbf{z}$. The key steps of Alg.~\ref{alg:signatureDetection} are steps 5 and 7. In particular, in step 5 the BS discards from the set of potentially active signatures $\mathbf{V}$ the signatures that could not have generated the partial observation $\mathbf{y}(1:i)$. It is therefore expected that $\mathbf{V}$ will shrink as additional RAOs are received.
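The dimensioning procedure, combining the target false alarm probability, the optimal $K$ and the fixed-point iteration for $\hat{L}$, can be sketched as follows (our own implementation; the exact rounding may differ slightly from the one used for the figures):

```python
import math

def required_frame_size(N, T, M, G_target, p_d=0.99, p_f=1e-3, max_iter=100):
    """Iterate the fixed-point equation for the signature frame size.

    p_fa is the target false alarm probability derived from the goodput
    target; K is ceil(L*M/N * ln 2), capped at one preamble per RAO (K <= L).
    """
    p_fa = N * (1 - G_target) / ((T - N) * G_target)   # target p_fa
    L = 1
    for _ in range(max_iter):
        K = max(1, min(L, math.ceil(L * M / N * math.log(2))))
        inner = (p_fa ** (1.0 / K) - p_d) / (p_f - p_d)
        L_new = math.ceil((K / M) / (1.0 - inner ** (1.0 / N)))
        if L_new == L:
            break
        L = L_new
    return L
```

For the parameters used later in the evaluation ($T=1000$, $N=200$, $M=54$, $\hat{G}=0.99$), this sketch converges in a handful of iterations to a frame size in the mid-forties of RAOs.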
In step 7, the BS detects the signatures whose combinations of active RAOs and preambles contribute uniquely to the partial observation $\mathbf{y}(1:i)$. The BS then reports to the respective device that its signature has been decoded, which in the LTE-A protocol realization corresponds to the RRC Connection Setup message, as shown in Fig.~\ref{fig:ARPComparison}(b). Finally, in steps 10--12, when all RAOs have been received, the BS reports all the signatures within the set $\mathbf{V} \setminus \mathbf{D}$ as decoded. \begin{figure}[t] \centering \includegraphics[width=0.85\linewidth]{advancedReceiverTrace} \caption{Evolution of the number of potentially active and already decoded signatures by the BS as the RAOs of the signature frame elapse, for $T=1000$, $N = 200$, $\hat{G}=0.99$, $p_d = 0.99$, $p_f = 10^{-3}$, and $\hat{L}=47$ from~\eqref{eq:IterativeL}.} \label{fig:advancedReceiverTrace} \vspace{-0.4cm} \end{figure} In Fig.~\ref{fig:advancedReceiverTrace}, we provide a simulation snapshot showing how many signatures are considered potentially active and how many have actually been decoded as the RAOs of the signature frame elapse. The iterative signature decoding occurs in a spread manner, which in turn spreads the feedback messages acknowledging the decoding of each signature, i.e., the RRC Connection Setup messages in Fig.~\ref{fig:ARPComparison}(b). In this way, the scenario in which a high number of devices attempt to complete the access reservation protocol simultaneously is avoided, i.e., the occurrence of congestion at the later stages of the ARP is reduced. Another important observation is that most of the signatures are decoded well before the end of the signature frame.
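For concreteness, a Python sketch of the iterative decoding logic of Alg.~\ref{alg:signatureDetection}, operating on sets of (RAO, preamble) activations rather than on bit vectors (names and data layout are ours):

```python
def iterative_decode(signatures, y, L):
    """signatures: dict device -> set of (rao, preamble) activations;
    y: set of (rao, preamble) pairs detected by the BS over the frame.
    Returns the decoded devices in decoding order."""
    V = set(signatures)   # potentially active signatures
    D = []                # decoded signatures, in decoding order
    for i in range(1, L + 1):
        y_i = {(r, m) for (r, m) in y if r <= i}   # partial observation
        # step 5: drop candidates inconsistent with the partial observation
        V = {h for h in V
             if {(r, m) for (r, m) in signatures[h] if r <= i} <= y_i}
        # step 7: if the other candidates cannot explain all of y_i,
        # this signature must be active, so it is declared decoded
        for h in sorted(V):
            if h in D:
                continue
            others = set().union(
                *({(r, m) for (r, m) in signatures[g] if r <= i}
                  for g in V if g != h))
            if not y_i <= others:
                D.append(h)
    # frame end: remaining consistent candidates are reported as decoded
    D += [h for h in sorted(V) if h not in D]
    return D
```

In this toy form a signature whose activations are fully covered by other active signatures is only reported at the end of the frame, which is exactly how false positives survive until steps 10--12.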
\section{Performance Evaluation} \label{sec:system_performance_evaluation} \subsection{Scenario description} \label{sub:scenario_description} In order to evaluate the performance of the proposed signature-based access and compare it with the 3GPP LTE-A solution for MTC traffic~\cite{3GPPTR37.869}, we have implemented an event-driven simulator in which the main downlink and uplink LTE channels are modeled. Specifically, the simulator implements both procedures depicted in Fig.~\ref{fig:ARPComparison}(a) and Fig.~\ref{fig:ARPComparison}(b), while the downlink control and data channels (PDCCH and PDSCH, respectively) and the uplink data and random access channels (PUSCH and PRACH) are modeled as in \cite{3GPPTR37.869}. We consider a typical cell, configured with one RAO every 1~ms and $M=54$ preambles available for contention~\cite{3GPPTR37.869}. We assume a total population of size $T = 1000$ and a batch arrival of $N_\text{a}$ devices, each with a payload of $100$ bytes to transmit. The arrival probability of an individual device is given by $p_a = N/T$, i.e., $N_\text{a}$ is a binomially distributed random variable with mean $\mathrm{E} [ N_\text{a} ] = N$. The mean number of arrivals $N$ is assumed to be known, and the signature-based scheme is dimensioned for it.\footnote{$N$ can be estimated, e.g., using techniques that take advantage of the LTE-A ARP, such as the one proposed in~\cite{MassiveM2MAccessWithReliabilityGuaranteesInLTESystems}.} The probability of preamble detection by the BS is set to $p_d = 0.99$ and the probability of false detection of a preamble is set to $p_f = 10^{-3}$ \cite{3GPPTS36.141}. In the baseline, i.e., the 3GPP scheme, we assume the typical values of a 20~ms backoff window and a maximum of $10$ connection attempts~\cite{3GPPTR37.869}.
Upon becoming active, the devices contend for access by activating randomly one preamble in one of the available RAOs within the backoff interval, i.e., the batch arrival is spread over the backoff interval.\footnote{Note that this initial backoff is a modification of the original LTE-A access procedure, in which the devices contend by activating a preamble in the nearest RAO~\cite{3GPPTR37.868}. The purpose of this modification is to force a spread in the batch arrival and prevent the consequent imminent collision; the resulting performance of the baseline scheme is actually better than could be expected otherwise.} If a device is the only one that selected a given preamble in a given RAO and this preamble has been detected, then the access procedure, as depicted in Fig.~\ref{fig:ARPComparison}(a), proceeds until completion. Otherwise, the device re-attempts access within the backoff window after the timer to receive the RAR has elapsed. When multiple devices select the same preamble within a RAO, the resources assigned by the BS in step 3 of the protocol are wasted by the collided devices, and the collided devices re-attempt access later by selecting a random RAO within the backoff interval. The devices re-attempt access until either successful or until exceeding the allowed number of retransmissions. In the proposed method, the devices contend by transmitting their signatures, where the signature frame length $L$ is obtained from~\eqref{eq:IterativeL}. For the sake of comparison, we also evaluate the performance of the random signature construction~\cite{ETT:ETT2656}, where $K = L$. Each device whose signature is decoded, even in the case of a false positive, receives the feedback RRC connection setup message and is assigned uplink data resources for the transmission of the third and final message, see Fig.~\ref{fig:ARPComparison}(b).
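The contention dynamics of the baseline can be illustrated with a toy Monte Carlo sketch (our own simplification of a single contention round; the actual evaluation uses the full event-driven simulator with backoff and retransmissions):

```python
import random
from collections import Counter

def baseline_collision_rate(n, raos, M, trials=2000, seed=1):
    """Toy model: n devices each pick one (RAO, preamble) uniformly at
    random out of raos*M resources; a device succeeds in step 1 of the
    baseline only if its choice is a singleton. Returns the fraction of
    devices involved in a collision."""
    rng = random.Random(seed)
    singles = 0
    for _ in range(trials):
        picks = [(rng.randrange(raos), rng.randrange(M)) for _ in range(n)]
        counts = Counter(picks)
        singles += sum(1 for c in picks if counts[c] == 1)
    return 1 - singles / (trials * n)
```

The collision fraction grows quickly with the number of contenders, which is the mechanism behind the goodput degradation of the baseline discussed below.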
The performance is evaluated in terms of: (i) the average goodput $\mathrm{E}[G]$; (ii) the average latency until the first step of each access scheme is successful, corresponding to a singleton preamble in the baseline and a successfully decoded signature in the proposed scheme; (iii) the average latency until the small data transmission takes place, corresponding to step 5 in the baseline and to step 3 in the proposed scheme, see Fig.~\ref{fig:ARPComparison}; and (iv) the probability of a device being successfully detected upon the completion of the access protocol. The average goodput $\mathrm{E}[G]$ is evaluated as the ratio between the successfully used resources and the total resources spent in the third step of both access protocols. It directly relates to the efficient use of resources, since the BS is only able to discern whether there is a correctly detected device upon successful completion of the third step. In the baseline scheme, the system resources are wasted whenever two or more devices select the same preamble within a RAO; the goodput in this case is given by the ratio between the total number of messages that are exchanged successfully and the total number of exchanged messages in the third step, including the ones that failed due to collisions. In the case of the signature-based access, resources in the third step are wasted whenever a false positive signature occurs, and the goodput is given by~\eqref{eq:G_def}. \begin{figure}[tb] \centering \includegraphics[width=0.95\linewidth]{PerformanceCurves_Goodput} \vspace{-0.1cm} \caption{$E[G]$ observed with increasing $N$, for the 3GPP scheme, random signature construction~\cite{ETT:ETT2656} and the proposed signature construction. ($T=1000$)} \vspace{-0.5cm} \label{fig:Goodput} \end{figure} \subsection{Results} \label{sub:numerical_results_and_discussion} The expected goodput is depicted in Fig.~\ref{fig:Goodput}, where the goodput target for the proposed method in~\eqref{eq:phantomTarget} is set to $\hat{G} = 0.99$.
We observe that the actual goodput of the proposed method meets the design target at higher access loads. On the other hand, at lower $N$, the performance deviates from the target value $\hat{G} = 0.99$. This is due to the assumption that the false positive signatures are independently and uniformly generated from the idle signatures, which is the basis of the approximation in \eqref{eq:PhantomSignature}. We can also observe that the goodput performance of the proposed method is always superior to that of the 3GPP scheme. Specifically, in the 3GPP scheme the devices re-attempt transmission upon colliding until they are either successful or the number of retransmissions is exceeded. Each subsequent failed retransmission results in additional wasted system resources, which explains the observed degradation of the baseline goodput with an increasing number of active devices. Finally, the goodput achieved with the random signature construction \cite{ETT:ETT2656} is quite low, due to the high number of false positives. \begin{figure}[tb] \centering \includegraphics[width=0.95\linewidth]{PerformanceCurves_Latency} \vspace{-0.1cm} \caption{Mean latency of the 3GPP scheme, random signature construction and the proposed signature construction with optimal $K$ and minimum $\hat{L}$ computed from~\eqref{eq:IterativeL}, at different stages of the access procedures. ($T=1000$)} \vspace{-0.5cm} \label{fig:Latency} \end{figure} In Fig.~\ref{fig:Latency} we depict the mean latency at step 1 in all schemes, as well as at steps 3 and 5 in the signature and 3GPP schemes, respectively. An important observation is that the latency of the proposed method is always lower than that of the 3GPP scheme, and the gap between the two schemes increases for higher $N$. This is a consequence of the more efficient detection of active users, as can be seen when comparing the latency of these two schemes at step 1.
Furthermore, the random signature construction has the worst performance, the reason being that a signature cannot be decoded before all $L$ RAOs of the signature frame have been received~\cite{ETT:ETT2656}. Finally, in Tab.~\ref{table:ProbDetectionTable} we show the probability of a device being successfully detected at the end of the access protocol. Here the proposed method shows a slight performance degradation compared to the 3GPP scheme, but this degradation diminishes at higher access loads. The 3GPP scheme achieves higher detection performance because it only requires one transmission out of all preamble retransmissions to be successful, making it more robust, but at the cost of lower goodput and higher latency. On the other hand, the random signature construction leads to a very low detection performance, as it requires the successful detection of all the active preambles~\cite{ETT:ETT2656}. \section{Discussion and Conclusions} \label{sec:conclusions} Following the insights provided by Bloom filters, we have introduced the concept of signatures with probabilistic guarantees and applied it to a system model derived from the LTE-A access reservation protocol. The most important feature of the proposed method is that it allows the device to be identified already at the access stage. Moreover, the method is very efficient in terms of the use of system resources and has a favorable performance in terms of decoding latency. In the paper we assumed that the base station serves the successfully connected devices without preferences. Nevertheless, it is straightforward to modify the proposed solution for scenarios in which the BS serves devices based on the identifications inferred from the decoded signatures, i.e., IMSIs and/or connection establishment causes. In such cases, the proposed access method enables differentiated treatment by the BS from the very beginning.
Finally, we note that in the paper we assessed a simplified scenario of a synchronous batch arrival in order to present the key concepts and the related analysis. Tuning the proposed scheme for the other typical models, like the Beta arrival model for synchronous arrivals or the Poisson arrival model for asynchronous arrivals, is left for further work. \begin{table}[t] \centering \begin{tabular}{ c c c c c c } \hline \textbf{N} & 100 & 300 & 500 & 700 & 900 \\ \hline Proposed method & 96 & 98 & 98 & 98 & 98 \\ 3GPP scheme & 100 & 100 & 100 & 100 & 100 \\ Random construction \cite{ETT:ETT2656} & 86 & 53 & 42 & 37 & 44 \\ \hline \end{tabular} \vspace{0.1cm} \caption{Probability of successfully detecting a device [\%]. (T = 1000)} \vspace{-0.9cm} \label{table:ProbDetectionTable} \end{table} \section*{Acknowledgment} This work was performed partly in the framework of H2020 project FANTASTIC-5G (ICT-671660), partly supported by the Danish Council for Independent Research grant no. DFF-4005-00281 ``Evolving wireless cellular systems for smart grid communications'' and by the European Research Council (ERC Consolidator Grant Nr. 648382 WILLOW) within the Horizon 2020 Program. The authors acknowledge the contributions of the colleagues in FANTASTIC-5G.
\section{Mining strongly lensed quasars with machine learning} \section{Introduction} Strongly lensed quasars (SLQSOs), particularly quadruply lensed systems, are very rare (\citealt{Oguri10}), but very valuable probes of cosmology and extragalactic astrophysics. With the KiDS Strongly lensed QUAsar Detection project (KiDS-SQuaD, \citealt{Spiniello18}, hereafter S18) we have started a systematic census of lensed quasars in the Kilo Degree Survey (KiDS, \citealt{deJong15}), taking advantage of the high spatial resolution of VST (0.2$\arcsec$/pixel) and its stringent seeing constraints ($<0.8\arcsec$ in r-band). In S18 we selected QSO-like objects based on infrared and optical colors and then used different methods to identify multiple systems, which we then visually inspected. However, with these methods, the number of candidates depends strongly on the (somewhat arbitrary and often calibrated on previous findings) selection criteria. Generally this number is of the order of thousands every 100 deg$^2$, making the visual inspection long and tedious. Thus, to make our search more effective and able to handle the larger amounts of data coming with future data releases and new wide-sky surveys (e.g. Euclid or LSST), we developed a new method based on machine learning. Such techniques have the great advantage of exploring, with little computing time, a large number of candidates with less stringent pre-selections, with reasonably high precision (recovery rate) and little contamination (spurious detections). They have already been applied to the search for strong gravitational arcs in KiDS (\citealt{Petrillo18}) and for SLQSOs in other wide-sky surveys (\citealt{Agnello15}). \section{Mining strongly lensed quasars with machine learning} Although the combination of near-infrared and optical colours is the most effective way to separate quasars from stars within photometric surveys (\citealt{Akhmetov17}), infrared information is not always available and often not deep enough.
For this reason, we developed a machine-learning-based method to separate QSOs from stars using only 4 optical photometric bands (u, g, r, i). We used a Random Forest classifier (self-written python code) trained on spectroscopically confirmed stars ($622,052$) and QSOs ($484,372$) from SDSS DR14 (\citealt{Abolfathi18}). We worked in the 6-dimensional colour space made of all possible differences of magnitudes in the different bands. The classification was performed with a 5-fold cross-validation, obtaining 99\% purity and 65\% completeness on the quasar validation sample. We then applied this procedure to the KiDS-DR3 catalog and selected QSO-like multiple sources (with a separation $\le 10\arcsec$). We ended up with 3187 candidates, for which we inspected the combined ugr KiDS cutouts. A detailed description of the method will be given in a future publication. \pagebreak \section{KiDS 0239-3211} KiDS 0239-3211 (Ra: $02$:$39$:$29.69$, Dec: $-32$:$11$:$29.66$) is without doubt the most promising candidate we found in the public KiDS-DR3. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.53,angle=0]{Fig1_new.pdf} \end{center} \end{figure} In the figure we show the combined $gri$ KiDS cutout of the quad, as well as single-band images and the GALFIT model and residuals. The system is composed of a red galaxy in the middle (G) surrounded by five blue blobs within a few arcsec (listed in the Table). We obtained differential photometry directly from GALFIT, calibrating the zero-point with a reference star from the KiDS catalog. We note that all blobs have consistent colors, within the errors, which supports the lensing hypothesis. Given the geometry, colors and shapes of the blobs, we believe that four of them are multiple images of a point-like source (A, B, C, D in green), whereas the fifth, more distant one is a contaminant (X in cyan).
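The 6-dimensional colour space is simply the set of all pairwise magnitude differences of the 4 bands; as a minimal sketch (names and conventions are ours, and the Random Forest classifier itself is omitted):

```python
from itertools import combinations

BANDS = ("u", "g", "r", "i")

def colour_features(mags):
    """All pairwise magnitude differences: 4 bands give 6 colours.

    mags: dict band -> magnitude; returns a dict of colour features
    such as "u-g" that can be fed to a classifier."""
    return {f"{a}-{b}": mags[a] - mags[b] for a, b in combinations(BANDS, 2)}
```

For example, an object with $u=20.0$, $g=19.0$, $r=18.5$, $i=18.0$ yields the six colours $u-g=1.0$, $u-r=1.5$, $u-i=2.0$, $g-r=0.5$, $g-i=1.0$, $r-i=0.5$.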
This hypothesis is further supported by the machine-learning-based estimates of the photometric redshifts ($z\sim0.5$ for G and X and $z\sim1.6-2.0$ for A and C, from the catalog in \citealt{deJong17}, derived using the Multi Layer Perceptron with Quasi Newton Algorithm technique presented in \citealt{Cavuoti15}). \pagebreak \section{Discussion and conclusions} The discovery of KiDS 0239-3211, missed by other methods, demonstrates that our machine learning set-up, although still preliminary, represents a considerable step towards effectively finding new SLQSOs in present and future public wide-sky photometric surveys. Nevertheless, to unambiguously confirm the lensing nature of the system and to translate the `geometric' lens model results (e.g. Einstein radius) into physical measurements of luminous and dark masses, the redshifts of the source and of the deflector are needed. We are therefore setting up a systematic, multi-site, multi-facility campaign for spectroscopic observations of the best candidates in KiDS, including KiDS 0239-3211, and the publication of the object coordinates and fluxes is meant to encourage the community to help out with this task. \section{Acknowledgments} We thank the organizers and participants of the Italy-Ukraine Collaboration in Astronomy Meeting, during which this discovery was made. We acknowledge the KiDS collaboration for the work done to make the data publicly available. CS and NRN have received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie actions, grant agreements No 664931 and No 721463 to the SUNDIAL ITN network.
\section{Introduction} \label{intro} Giant radio galaxies (GRGs) are radio galaxies whose radio emitting regions (jets, lobes) extend over projected distances $ \geq 1 $ Mpc \citep{RefWorks:222, RefWorks:40, RefWorks:254, RefWorks:236, RefWorks:101}. Their morphology can be classified as core-brightened FRI or edge-brightened FRII \citep{Fanaroff1974}. There is no evidence that they are particularly more energetic than the general population of radio galaxies \citep[e.g.][]{Lara2001}. A low-density environment may be the key factor enabling their growth to such large sizes. \cite{RefWorks:148} have indeed found that the surrounding medium for their sample of GRGs is an order of magnitude less dense than that around smaller radio sources. Hence the radio sources can expand freely, implying that expansion losses rather than radiative losses are the dominant energy loss mechanism for the relativistic particle populations in the radio lobes. Apart from their size, GRGs are not fundamentally different from other radio galaxies, and they are expected to be subject to the same processes that are present in smaller radio galaxies. The AGN that power them go through a cycle of active and inactive periods \citep[e.g.][]{McNamara2007, Morganti2017}. Hence, we might expect GRGs to show evidence of multiple activity episodes, both in their radio morphology and spectra. They may exhibit double-double morphologies \citep[some examples can be found in the sample of][]{Malarecki2013} and show signs of spectral curvature indicating radiative ageing of the relativistic particles responsible for their extended radio emission. There are several studies of the ages of GRGs using radio data. \cite{RefWorks:148} have performed radiative ageing analysis of five giant radio galaxies (including 3C~236), obtaining ages less than 400 Myr. More recently \cite{Hunik2016} have presented a restarted giant radio galaxy for which they derive a radiative age of 160 Myr. Also, Cantwell et al. 
(submitted) studied NGC~6251 and found ages ranging from 50 Myr to greater than 200 Myr. \cite{Orru2015} have studied the famous double-double GRG B1864+620 and showed that the source ages derived for its outer pair of radio lobes indicate that the AGN activity has stopped in the recent past. For many years following its discovery in the late 1950s, 3C~236 was an unresolved source. It was catalogued as such in the first 3C catalogue and kept its status up to and including the study of \cite{RefWorks:224}. However, using the Westerbork Synthesis Radio Telescope (WSRT), \cite{RefWorks:222} discovered low surface brightness radio lobes emanating from the compact source, extending over a total projected linear size of 4.5 Mpc ($ z = 0.1005 $)\footnote{We assume a flat $\Lambda$CDM cosmology with $ H_{0} $ = 67.8$\,$km$\,$s$^{-1}$Mpc$^{-1}$ and $ \Omega_{m} $ = 0.308, taken from the cosmological results of the full-mission Planck observations \citepalias{Planck2016}, and use the cosmology calculator of \cite{Wright2006}. At the redshift of 3C~236, 1$\arcsec$ = 1.8 kpc.}. For decades, it was the largest known radio galaxy \citep[but see][for the current record holder]{RefWorks:236}, hence 3C~236 is listed as a GRG in radio survey catalogues. \cite{RefWorks:130} investigated the radio morphology at a variety of scales. They noted that the low surface brightness emission of the lobes, especially the north-west (NW) one, shows a large-scale (300 kpc) wiggle, possibly associated with the jet slightly changing its orientation over the source's lifetime (see their Figure 4). The NW lobe terminates in a diffuse plateau, and there is an inner hotspot embedded in it, which may indicate a separate episode of AGN activity or intermittent accretion. The south-east (SE) lobe is more extended and shows a double hotspot morphology which the authors suggest may be caused by an oblique shock deflecting the jet.
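The angular-scale conversion quoted in the footnote can be reproduced with a short numerical integration, a sketch of the computation performed by the cited cosmology calculator (function names are ours; the last digit depends on the integration and rounding details):

```python
import math

# Flat LambdaCDM parameters quoted in the text (Planck 2016)
H0 = 67.8        # Hubble constant, km/s/Mpc
OM = 0.308       # matter density parameter
C = 299792.458   # speed of light, km/s

def kpc_per_arcsec(z, steps=1000):
    """Angular scale at redshift z for a flat LambdaCDM cosmology.

    Simpson integration of 1/E(z') gives the comoving distance D_C;
    the angular diameter distance is D_A = D_C / (1 + z)."""
    def inv_E(zp):
        return 1.0 / math.sqrt(OM * (1 + zp) ** 3 + (1 - OM))
    h = z / steps
    s = inv_E(0.0) + inv_E(z)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * inv_E(k * h)
    d_c = (C / H0) * s * h / 3          # comoving distance, Mpc
    d_a = d_c / (1 + z)                 # angular diameter distance, Mpc
    return d_a * 1e3 * math.pi / (180 * 3600)   # kpc per arcsec
```

At $z = 0.1005$ this yields an angular scale of roughly 1.9 kpc per arcsec, of the order of the value quoted in the footnote.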
\cite{RefWorks:225} studied the spectral index variations across the lobes and found that the spectral index steepens going from the outer edges of the lobes towards the host galaxy, similar to what is observed in (hotspot dominated) FRII radio galaxies. The host galaxy of 3C~236 has been studied in detail by \citet{RefWorks:227}, \citet{RefWorks:228} and \citet{RefWorks:51}. Hubble Space Telescope (HST) imaging has revealed repeated bursts of star formation (on timescales of $ \sim 10^{7} $ and $ \sim 10^{9} $ yrs) in a warped dusty disk surrounding the AGN. This suggests that the younger starburst may be connected to the AGN reactivation which produced the currently active Compact Steep Spectrum (CSS) radio source at its centre \citep{RefWorks:51}. Thus, 3C~236 is an example of a radio galaxy showing signs of multiple epochs of radio activity. The central regions of this radio galaxy are rich in gas. Atomic neutral hydrogen was revealed as a deep narrow absorption feature near the systemic velocity by \cite{vanGorkom1989}. The distribution of this gas was studied at high spatial resolution using VLBI techniques by \cite{Struve2012}, who speculate about the radio source interacting with the cold ISM gas. \cite{Morganti2005} have discovered a broad and shallow wing, blueshifted by up to 1000$\,$km$\,$s$^{-1}$. This absorption is associated with a fast outflow (a feature shared by a number of restarted radio galaxies), and has been recently traced also on VLBI (pc) scales \citep{Schulz2018}. The presence of cold molecular gas (traced by CO) was found by \cite{Labiano2013}, using the Plateau de Bure Interferometer at 209.5 GHz. The gas appeared to be rotating around the AGN, and was observed to be co-spatial with the dusty disk in which the AGN is embedded. With the advent of the LOw Frequency ARray \citep[LOFAR;][]{RefWorks:157} it is now possible, for the first time, to study the extended GRG morphology in a comprehensive manner at very low frequencies.
LOFAR is sensitive to extended low surface brightness features due to its dense sampling of short spacings in the UV plane, while at the same time enabling high spatial resolution imaging leveraging its long (Dutch) baselines. Within the framework of the LOFAR Surveys Key Science Project, the nearby-AGN working group has observed two GRGs: 3C~236 and NGC~6251. These are among the largest GRGs and have never been studied in such detail as LOFAR can provide in its frequency range. In this work, we present the results related to 3C~236. Our goal is to perform high resolution mapping of the radio morphology of its extended structure at the lowest frequencies to date, enabling us to trace the oldest emission regions. Our aim is also to extend the (resolved) spectral index studies of this object a factor of two lower in frequency compared to previous studies. This enables us to place tighter constraints on the source energetics and activity history, tying in with previous studies of this object. The organization of this work is as follows. Section \ref{data} describes the observations and the reduction procedure. In Section \ref{res} we outline our results, we discuss them in Section \ref{dis} and conclude with Section \ref{con}. \section{Observations} \label{data} \subsection{LOFAR observations} The observations were performed with the LOFAR telescope operating in high-band (HBA) mode, for a total on-source time of 8 hours, on the morning of October 9, 2018. Details of the observation are given in Table \ref{table:obs}. Main and secondary calibrator scans were also performed before and after the target observation, ten minutes each in duration. 
\begin{table}[!htpb] \noindent \caption{\small LOFAR configuration} \label{table:obs} \centering \small \begin{tabular}{ll} \hline\hline\\ Project code & LT10\_010 \\ Central Frequency [MHz] & 143.65 \\ Bandwidth [MHz] & 47 \\ Integration time & 1 second \\ Observation duration & 8 hours\\ Polarization & Full Stokes \\ Primary flux reference & 3C~147\\ \hline \end{tabular} \end{table} The data were initially pre-processed (flagged and averaged) by the LOFAR observatory pipeline \citep{RefWorks:180}. The station gains were determined using the main calibrator pointing and set to the \citet{RefWorks:181} flux density scale. \begin{figure*}[!ht] \centering \includegraphics[width=0.45\textwidth]{UV_cov_all.png} \includegraphics[width=0.45\textwidth]{UV_profile_all.png} \caption{UV coverage of the LOFAR data (left) and its radially averaged profile (right).} \label{3C236:uv} \end{figure*} To image the entire field of view at these low frequencies, the influence of the ionosphere has to be properly taken into account. The observation used was part of the ongoing LOFAR Two-metre Sky Survey (LoTSS) project and the data were processed using its reduction pipelines which perform directional (self) calibration and imaging. For a full description, please refer to \cite{Shimwell2017,Shimwell2019}. 3C~236 was the brightest source in the field, and our main source of interest, so we did not calibrate and image across the entire FoV, focusing only on the area around the target (Tasse et al., van Weeren et al. in prep.). The calibrated data set (with uv-coverage as shown in Figure \ref{3C236:uv}) was imaged with WSclean \citep{Offringa2014} using multi-scale cleaning; scales of $ 0 - 2^{n}\, , n = [2, 6] $ pixels. The image shown in the main panel of Figure \ref{3C236:map} was obtained using a UV taper of $ 7.4\,\mathrm{k}\lambda $ and Briggs weights with robustness set to $ -0.5 $.
To emphasize source structure on the smallest scale, we have imaged without tapering using the same weights as described previously. The final image properties are listed in Table \ref{table:imgs}. LOFAR flux densities are known to suffer from a systematic effect when the amplitude scale is set by transferring the gains from calibrator to target pointing. Different elevations of the target and calibrator sources will yield different gain normalization of the LOFAR beam pattern, which can appear as a frequency dependent flux density offset \citep{Shimwell2019}. To further verify our flux scale as measured from the images we have obtained, a check was performed measuring the flux density of the unresolved bright core as well as another nearby point source and comparing it with catalogue values; we found a residual flux excess of 42\% which we corrected for by down-scaling the LOFAR image. \subsection{Literature data} In order to perform the spectral study of 3C~236, we have collected images from the literature that trace the emission of the extended lobes and could be combined with the LOFAR data. This combination needs to be done with care and in Section \ref{spec_ind} we comment more on the caveats. We have used a legacy Very Large Array (VLA) survey image (NVSS\footnote{NVSS stands for the NRAO VLA Sky Survey carried out at a frequency of 1.4 GHz \citep{RefWorks:139}}), as well as Westerbork Synthesis Radio Telescope (WSRT) images. The image properties are listed in Table \ref{table:imgs}. The mid and high resolution LOFAR images are shown in Figure \ref{3C236:map}. The images collected allow us to produce spectral index maps and derive the spectral curvature and ages of the lobes.
In Figure \ref{3C236:int} we plot the integrated source flux density measurements taken from the study of \cite{Mack1997} (with frequency coverage up to 10550 MHz, given in Table \ref{table:intflux}), together with those measured in our low resolution LOFAR map and the NVSS map, both listed in Table \ref{table:imgs}. \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{Int_spec-eps-converted-to.pdf} \caption{Integrated flux density of 3C~236.} \label{3C236:int} \end{figure} The LOFAR integrated flux density (marked in red) shows a slight flattening of the integrated source spectrum at low frequencies compared to the points at high frequency. This is to be expected, as we shall discuss in the forthcoming sections of this work. This flattening was hinted at in previous studies \citep[e.g.][]{Mack1997}. As can be discerned from Figure \ref{3C236:int}, the flux density scale of our LOFAR observations is as expected, within the errors. \begin{table} \centering \noindent \caption{\small Image properties.} \label{table:imgs} \small \begin{tabular}{c c c c c} \hline\hline\\ \small Instrument & \small $ \nu $ [MHz] & \small $ \Delta \nu $ [MHz] & \small $ \sigma $ [mJy/b] & \small Beam size\\ \hline\\ LOFAR & 143.65 & 46.87 & 0.26 & $ 11.77\arcsec \times 6.82\arcsec $ \\ LOFAR & 143.65 & 46.87 & 0.5 & $ 23.81\arcsec \times 19.18\arcsec $ \\ LOFAR & 143 & 53 & 3.0 & $ 50.0\arcsec $ \\ WSRT\tablefootmark{a} & 609 & - & 0.7 & $ 48\arcsec \times 28\arcsec $ \\ VLA\tablefootmark{b} & 1400 & 42 & 0.4 & $ 45\arcsec $ \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{Image provided by K. H. 
Mack, \citep{Mack1997}} \tablefoottext{b}{from VLA NVSS} } \end{table} \begin{table} \centering \noindent \caption{\small Integrated flux density values.} \label{table:intflux} \small \begin{tabular}{l l} \hline\hline\\ \small $ \nu $ [MHz] & \small $ S_{\mathrm{int}} $ [mJy] \\ \hline\\ 143 & $ 17744 \pm 3568.7 $\\ 326 & $ 13132 \pm 140 $\\ 609 & $ 8227.7 \pm 90.8 $\\ 2695 & $ 3652 \pm 90.8 $\\ 4750 & $ 2353.5 \pm 71.2 $\\ 10550 & $ 1274.7 \pm 41.7 $\\ \hline \end{tabular} \tablefoot{ All other values except the 143 MHz one are taken from \cite{Mack1997} } \end{table} \section{Results} \label{res} \subsection{The total intensity structure of 3C~236 at 143 MHz} The intermediate resolution image ($ 23.8\arcsec \times 19.2\arcsec $) of 3C~236 obtained with LOFAR is shown in the main panel of Fig. \ref{3C236:map}, while the insets zoom in on the two lobes and show their morphology as it appears in our high resolution LOFAR image. Figure \ref{3C236:spix} shows the contour levels of the emission at lower resolution ($ 50\arcsec $), better emphasizing some of the new low surface brightness features detected by LOFAR. An overview of the images is presented in Table \ref{table:imgs}. The map presented in Figure \ref{3C236:map} shows some residual artifacts due to limitations in the calibration around the stronger sources (e.g., the central, very bright compact core of 3C~236), while the regions of diffuse emission are less affected. Despite the huge source size (more than half a degree), the LOFAR observations recover the full extent of the source showing, for the first time, its structure at low frequencies and relatively high spatial resolution. The image reproduces well the main (previously known) morphological features of 3C~236 \citep{RefWorks:130}. The overall structure (about $40\arcmin$ in size, corresponding to $ 4.32 $ Mpc) consists of two very elongated lobes.
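The quoted linear size follows directly from the angular extent and the adopted angular scale of 1$\arcsec$ = 1.8 kpc; a minimal Python sketch of this arithmetic (illustrative only, not part of the analysis pipeline):

```python
# Angular-to-linear size conversion for 3C 236, using the angular
# scale adopted in the introduction (1 arcsec = 1.8 kpc at z = 0.1005).
scale_kpc_per_arcsec = 1.8   # adopted angular scale
size_arcmin = 40.0           # approximate angular extent of the source

# 1 arcmin = 60 arcsec; convert kpc to Mpc at the end
size_mpc = size_arcmin * 60.0 * scale_kpc_per_arcsec / 1000.0
print(f"projected size ~ {size_mpc:.2f} Mpc")  # ~ 4.32 Mpc
```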
The north-west (NW) lobe radio emission is more extended transversely to the longitudinal symmetry axis (jet propagation direction) compared to the south-east (SE) lobe (about $ 4\arcmin $ and $ 2\arcmin $ in width towards the NW and SE respectively). At the resolution of the LOFAR observations, the central region including the restarted CSS is unresolved. The asymmetry of the large-scale structure, with the SE lobe extending farther away from the core compared to the NW, is also confirmed by the LOFAR images. Given the size of the source, projection effects are unlikely and hence the source asymmetry must be intrinsic. The LOFAR images (especially the one at intermediate resolution) show that both lobes extend all the way to the core. The emission connecting the SE lobe to the core has very low surface brightness (around $ 0.5$ mJy beam$^{-1}$), nevertheless maintaining its width for the full length of the lobe (see the intensity contours in Fig. \ref{3C236:spix}). There are no signs of spurs or very extended emission transverse to the lobe extent, with the exception of the extension to the south in the NW lobe, right next to the core (Figure \ref{3C236:spix}). This extension was earlier seen at higher frequencies, although much weaker \citep{RefWorks:130}. It is reminiscent of structures created by back-flows of plasma deposited in the lobe by the jet seen in other \citep[e.g., X-shaped,][]{Leahy1984, Hardcastle2019, Cheung2009, Saripalli2018} radio galaxies. The high spatial resolution images of the lobes (seen in the insets of Fig. \ref{3C236:map}) show that in the NW lobe, the leading edge does not show a hotspot, but only a diffuse enhancement in surface brightness. However, as first noted by \cite{RefWorks:130}, there is a compact region in the middle of the lobe, confirmed by the LOFAR image (Fig. \ref{3C236:map}). 
This inner lobe region is split into two clumps (marked by a dashed ellipse in Figure \ref{3C236:map}), the leading one of which is probably a jet termination/hotspot. Structures of this type are seen in other objects \citep[c.f.][]{Malarecki2013}. The location of the hotspot within the lobe suggests that it may trace a separate event of source activity, propagating through the lobe, or tracing an interruption (flicker) in the accretion history of the activity episode that produced the large-scale lobes. At the end of the SE lobe, a double hot-spot appears to be present (see bottom right inset in Fig. \ref{3C236:map}). For the first time, we observe that the southern hotspot of the pair itself has a double structure, having two distinct brightness peaks (labeled H2 and H3 in the lower right inset of Figure \ref{3C236:map}). This may be a sign of a jet interaction with IGM gas \citep[e.g.][]{Lonsdale1986}, possibly indicating that the jet used to terminate at the location of the H1 hotspot, then the termination point moved to H2 and is currently located at H3. This is consistent with the hypothesis that the most compact hotspot is where the jet terminates at the current epoch \citep{Hardcastle2001}. Also, it would explain why the SE lobe has a steeper spectrum along the northern edge (discussed below). It is also possible that H1 and H2 are created by lateral emission of plasma from the H3 termination shock. In the 3CR sample, one-sixth of the sources have multiple hotspots; the statistics vary depending on the definitions used and source samples under consideration \citep{Valtaoja1984}. The differences in the structure of the lobes in 3C~236 suggest that they are not solely the result of intermittence in the activity and/or changes in the position angle of the jet. Other effects due to e.g. the propagation of the jets in the inner region must have affected the large-scale morphology.
Given the overall size of the source (more than half a degree), several unresolved background sources are embedded in the lobe emission. Their positions are noted in \cite{Mack1997}. Some of these sources are relatively bright, but we find that they do not present an issue for our analysis. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{3C236_LOFAR_hires_v5_invert-eps-converted-to.pdf} \caption{ LOFAR intensity map (linear scale, level limits at 1 and 150 mJy beam$^{-1}$) of 3C~236 at 143.6 MHz. Fifteen positive contours are overlaid as solid gray lines with levels at $ \left(\sqrt{2}\right)^{\mathrm{n}} 3\sigma $, where $ \sigma = 0.6 $ mJy beam$^{-1}$ , and $ \mathrm{n} $ increasing from $ 0 $ to $ 14 $ in increments of $ 1 $. One negative contour level at $ -3 \sigma $ is overlaid as a dashed gray line. The restoring beam size of $ 23.81\arcsec \times 19.18\arcsec $ is shown in the lower left corner. High resolution image insets (logarithmic intensity scale, limits at 1 , 100 and 500 mJy beam$^{-1}$ , $ 11.77\arcsec \times 6.82\arcsec $ restoring beam, $ \sigma = 0.26 $ mJy beam$^{-1}$) of the NW and SE lobes are shown in the top-left and bottom-right corners respectively. Regions of interest are marked with a dashed ellipse and labeled.} \label{3C236:map} \end{figure*} \begin{figure*}[!htpb] \includegraphics[width=\textwidth]{3C236_spix-eps-converted-to.pdf} \caption{$ \alpha_{143}^{609} $ spectral index map. The restoring beam size is $ 50\arcsec $ (bottom left). Overlaid in black are contours tracing the emission from the convolved LOFAR image (listed in Table \ref{table:imgs}) with levels at $ \left(\sqrt{2}\right)^{n} 5\sigma $, where $ n = [0, 9] $ and $ \sigma = 3 $ mJy beam$^{-1}$ . Inset are enlarged views of the lobes. Profile paths along which the spectral index values shown in Fig. 
\ref{3C236:profiles} are measured are shown using dashed lines, as well as measurement regions of the spectral index values listed in Table \ref{table:spix_regs} (solid labelled rectangles). Point sources embedded in the lobes have been masked.} \label{3C236:spix} \end{figure*} \begin{figure}[!htpb] \centering \includegraphics[width=0.5\textwidth]{3C236_spixerr-eps-converted-to.pdf}\\ \includegraphics[width=0.5\textwidth]{NW_lobe_spix_profile.pdf} \includegraphics[width=0.5\textwidth]{SE_lobe_spix_profile.pdf} \caption{Spectral index error (top panel) and spectral index profiles along the paths shown in Fig. \ref{3C236:spix}. The profile paths start in the inner part of the lobes. The shaded area in the spectral profile plots represents the spectral index error.} \label{3C236:profiles} \end{figure} \subsection{Spectral index} \label{spec_ind} We have derived a low frequency spectral index ($ S \propto \nu^{\alpha} $) map between 143 and 609~MHz using the images listed in Table \ref{table:imgs}, implementing the following procedure. The 609 MHz image \citep[from][]{Mack1997} was first re-gridded to a J2000 epoch, then we registered the lowest resolution 143~MHz and the 1400~MHz image to the same pixel scale as the 609~MHz image. Finally, we convolved the images with a Gaussian kernel resulting in a circular PSF of $ 50\arcsec $. The re-gridding and convolution were performed using the {\tt imregrid} and {\tt imsmooth} {\tt CASA} \citep{McMullin2007} tasks. Producing spectral index images from datasets taken with different telescopes is always subject to caveats. In particular, observations at different frequencies must be sensitive to similar spatial structures. The UV-coverage of the datasets can help in checking whether this condition is fulfilled. 
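The convolution to a common resolution relies on Gaussian beams adding in quadrature: the FWHM of the convolving kernel along each beam axis is $ \sqrt{\theta_{\mathrm{target}}^{2} - \theta_{\mathrm{beam}}^{2}} $ (position-angle effects neglected). A minimal Python sketch with the mid-resolution LOFAR beam from Table \ref{table:imgs}; this is illustrative only, as {\tt imsmooth} performs the equivalent computation internally:

```python
import math

def convolving_kernel_fwhm(target, bmaj, bmin):
    """FWHM (arcsec) of the Gaussian kernel that degrades an elliptical
    beam (bmaj x bmin) to a circular target beam, per beam axis,
    assuming Gaussian beams add in quadrature."""
    return (math.sqrt(target**2 - bmaj**2),
            math.sqrt(target**2 - bmin**2))

# Mid-resolution LOFAR beam (23.81" x 19.18") to the common 50" beam
kmaj, kmin = convolving_kernel_fwhm(50.0, 23.81, 19.18)
print(f"kernel FWHM: {kmaj:.1f} x {kmin:.1f} arcsec")
```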
The UV-range of the mid-resolution LOFAR image (from which the lowest resolution LOFAR map is obtained by convolution) is well matched to that of the WSRT image (upper cut of around $ 7\,\mathrm{k}\lambda $), so the low frequency spectral index derivation is unaffected by spatial filtering. The NVSS image has relatively few short spacings, i.e., it is less sensitive to extended emission \citep{RefWorks:139,Jamrozy2004}. We keep this limitation in mind when interpreting our (spectral ageing) results. The flux density scale was taken to be uncertain at a level of 20\% for the LOFAR \citep{Shimwell2017} and 5\% for the WSRT \citep{Mack1997} and VLA \citep{RefWorks:139} observations, respectively. We added these uncertainties in quadrature to the r.m.s. errors of the corresponding maps. We have derived the $ \alpha_{143}^{609} $ spectral index using the standard expression: $ \alpha = \log(S_{1} / S_{2}) / \log(\nu_{1} / \nu_{2}) $. We show the results in Figure \ref{3C236:spix}, restricted to pixels having flux density values above $5\sigma$ in the corresponding images. The spectral index errors were obtained by error propagation of the flux density errors in the corresponding images: \[ \delta\alpha = \frac{1}{\ln\frac{\nu_{1}}{\nu_{2}}}\sqrt{\left(\frac{\delta S_{1}}{S_{1}}\right)^{2} + \left(\frac{\delta S_{2}}{S_{2}}\right)^{2}} \] \noindent where $ \delta S $ represents the flux density measurement error. The resulting spectral index error map is shown in Fig. \ref{3C236:profiles}, top panel. We have also measured flux densities in eleven characteristic regions (along the lobes, encompassing the lobe termination regions and across the NW lobe). These numbered regions are listed in Table \ref{table:spix_regs}, and indicated in Fig. \ref{3C236:spix}. For each region we have computed the spectral index to investigate whether differences with the values reported in the spectral index map (which samples smaller spatial scales) are present.
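The two-point spectral index and its propagated uncertainty defined above can be evaluated in a few lines of Python; the numbers used here are the integrated NW-lobe flux densities and errors from Table \ref{table:spix_regs}, purely as an illustration:

```python
import math

def spectral_index(s1, s2, nu1, nu2):
    """Two-point spectral index alpha, with S proportional to nu**alpha."""
    return math.log(s1 / s2) / math.log(nu1 / nu2)

def spectral_index_error(s1, ds1, s2, ds2, nu1, nu2):
    """Propagated error on alpha from the flux density errors dS1, dS2."""
    return (1.0 / abs(math.log(nu1 / nu2))) * math.sqrt(
        (ds1 / s1) ** 2 + (ds2 / s2) ** 2
    )

# Integrated NW-lobe flux densities [mJy]: 5030 +/- 80 at 143 MHz,
# 1930 +/- 390 at 609 MHz
alpha = spectral_index(5030.0, 1930.0, 143.0, 609.0)
err = spectral_index_error(5030.0, 80.0, 1930.0, 390.0, 143.0, 609.0)
print(f"alpha_143^609 = {alpha:.2f} +/- {err:.2f}")  # -0.66 +/- 0.14
```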
We find no significant deviations, indicating the robustness of the spectral index map. Also, we show the spectral index profiles along both lobes; the profiles are indicated as dashed lines in Fig. \ref{3C236:spix}. \begin{table*}[!htpb] \centering \noindent \caption{\small Spectral index and spectral ages for the regions defined in Fig. \ref{3C236:spix}.} \label{table:spix_regs} \small \begin{tabular}{c c c c c c c c c} \hline\hline\\ \small Region ID & \multicolumn{3}{c}{\small $ \alpha_{143}^{609} $} & \small Model & \small $ \alpha_{\mathrm{inj}} $ & \small Spectral age [Myr] & & $ \small \chi^{2}_{\mathrm{red}} $ \\ \hline\\ $ 1 $ & & $ -0.82 \pm 0.14 $ & & JP & $ - $ & $ - $ & & $ - $ \\ $ 2 $ & & $ -0.74 \pm 0.14 $ & & JP & $ -0.57 $ & $ 51.3^{+7.2}_{-6.9} $ & & $ 0.01 $ \\ $ 3 $ & & $ -0.64 \pm 0.14 $ & & JP & $ -0.20 $ & $ 140.3^{+6.4}_{-1.5} $ & & $ 0.89 $ \\ $ 4 $ & & $ -0.32 \pm 0.14 $ & & JP & $ -0.20 $ & $ 116.7^{+3.5}_{-6.4} $ & & $ 2.63 $ \\ $ 5 $ & & $ -0.60 \pm 0.15 $ & & JP & $ - $ & $ - $ & & $ - $ \\ $ 6 $ & & $ -0.60 \pm 0.15 $ & & JP & $ -0.20 $ & $ 153.2^{+6.3}_{-6.3} $ & & $ 3.56 $ \\ $ 7 $ & & $ -0.88 \pm 0.14 $ & & JP & $ -0.54 $ & $ 159.2^{+2.5}_{-2.6} $ & & $ 0.00 $ \\ $ 8 $ & & $ -0.70 \pm 0.14 $ & & JP & $ -0.20 $ & $ 135.3^{+5.5}_{-1.9} $ & & $ 0.06 $ \\ $ 9 $ & & $ -0.67 \pm 0.14 $ & & JP & $ -0.40 $ & $ 83.8^{+3.5}_{-5.3} $ & & $ 0.03 $ \\ $ 10 $ & & $ -0.68 \pm 0.14 $ & & JP & $ -0.40 $ & $ 91.6^{+2.8}_{-8.0} $ & & $ 0.12 $ \\ $ 11 $ & & $ -0.59 \pm 0.14 $ & & JP & $ -0.40 $ & $ 75.6^{+3.1}_{-9.6} $ & & $ 0.67 $ \\ \hline\\ &\multicolumn{3}{c}{\small $ S_{\mathrm{int}} $[mJy]} & & & & \small $ \nu_{\mathrm{br}} $ [MHz] & \\ & 143[MHz] & 609[MHz] & 1400[MHz] & & & & &\\ \hline\\ \small NW lobe & $ 5030 \pm 80 $ & $ 1930 \pm 390 $ & $ 730 \pm 150 $ & CI & $ -0.55 $ & $ 129.1^{+41.2}_{-30.6} $ & $ 479 $ & $ 0.50 $ \\ \small SE lobe & $ 3580 \pm 100 $ & $ 1260 \pm 250 $ & $ 580 \pm 120 $ & CI & $ -0.55 $ & $ 117.1^{+41.1}_{-30.9} $ & 
$ 582 $ & $ 0.01 $ \\ \hline\\ \end{tabular} \end{table*} The spectral index map shows that the outer lobe regions have spectral index values (between $ -0.5 $ and $ -0.65 $) typical of regions with ongoing particle acceleration. This is also observed in the embedded hotspot region in the NW lobe and in the hotspot in the SE lobe (although here the spectral index is around $ 0.1 $ steeper). The spectral index generally steepens (see bottom two panels in Figure \ref{3C236:profiles}) toward the edges of the lobes and toward the core (consistent with the FRII overall source morphology), indicating loss of energy and (spectral) ageing of the particle populations as they back-flow in the direction of the core. However, curiously, the SE lobe has a region of very flat spectral index in its core-facing end; a hint of this region is also observed in the higher frequency spectral index maps of \cite{RefWorks:148}. These trends are shown in the spectral index profiles of the lobes presented in Figure \ref{3C236:profiles}. The SE lobe has the flattest spectral index along its southern edge. There is a transition to steeper spectral index values of $ \sim -0.9 $ along the northern edge of its outer region. Whereas the interpretation of the spectral index map in this region is not straightforward, the observed steepening could be real at least in some parts and warrants further investigation in future studies of this object. \cite{RefWorks:148} derive $ \alpha_{326}^{609} $ spectral indices for the NW lobe of around $ -1 $ for the outer regions to $ -1.2 $ going toward the core. Our $ \alpha_{143}^{609} $ spectral index map shows much flatter spectral index values, around $ -0.5 $ to $ -0.6 $, (c.f. the spectral profiles in Figure \ref{3C236:profiles}) meaning that LOFAR detects the particle population related to the primary injection of energy in the lobe.
For the SE lobe, the agreement between our spectral index values and those derived by \cite{RefWorks:148} is better, although we have flatter values, with $ \alpha_{143}^{609} \sim -0.6 $ versus their values of $ \alpha_{326}^{609} \sim -0.8 $ for the outer lobe. We have derived the spectral index for several regions in the source (Table \ref{table:spix_regs}) to test whether the values we obtain in our spectral index map are reliable; the spectral index values per region match those from the map. High resolution mapping helps to disentangle the detailed spectral behaviour. \subsection{Source energetics and radiative ages} Before discussing the 3C~236 energetics, we estimate the magnetic field value throughout the source. We make the assumption that the field structure in the lobes is uniform and oriented perpendicular to the line of sight. We use cylindrical geometry for the lobes, and calculate the magnetic field strength assuming that equal energy is contained in the relativistic particles and the field (equipartition assumption). Furthermore, we set the ratio of electrons to protons to unity, as well as the filling factor of the lobes. We adopt limits of the spectral range from 10~MHz to $ 10^{5} $ MHz, and we set the spectral index of the radiation in this range to $ \alpha = -0.85 $ (taking into account the observed values at low frequencies, as well as assuming spectral steepening to higher frequencies). Using the relation by \cite{Miley1980}, calculating at a frequency of 143 MHz (Table \ref{table:fluxes}) and averaging over both lobes, we obtain $ \mathrm{B} = 0.75 \, \mathrm{\mu} $G for the equipartition magnetic field strength. As was noted by \cite{Brunetti1997}, the equipartition field calculated in this manner should be corrected, to take into account a low energy cut-off value for the spectrum of the particles and a value for the particle spectral index at injection time. 
With $ \gamma_{min} = 200 $, and $ \mathrm{\alpha}_{\mathrm{inj}} = -0.75 $ (average low frequency spectral index in the lobes), we find $ \mathrm{B} = 1.28 \, \mathrm{\mu} $G for the average source magnetic field, a value we will be using further in our analysis. This value of the magnetic field is lower than the CMB equivalent magnetic field at the source redshift ($ B_{CMB} = 3.93 \, \mathrm{\mu} $G). Thus, the dominant energy loss mechanism of the relativistic particles generating the synchrotron radio emission will be inverse Compton scattering off the omnipresent CMB photons. The spectral ages of the emitting regions are calculated using two different approaches: first for the regions defined in Figure \ref{3C236:spix} and second for measurement regions encompassing the NW and SE lobes separately, avoiding embedded point sources and measuring out to the $ 5\sigma $ contour in the 143 MHz image. We have used the {\tt fitintegrated} and {\tt fitcimodel} tasks of the {\tt BRATS}\footnote{http://www.askanastronomer.co.uk/brats/} software package \citep{Harwood2013,Harwood2015} for the two cases respectively. The fitting was performed using flux density measurements at three different frequencies, using the low resolution LOFAR image and the WSRT and VLA images listed in Table \ref{table:imgs}. In the {\tt fitintegrated} task we fitted a Jaffe-Perola \citep[JP,][]{Jaffe1973} model, and the {\tt fitcimodel} task fits a continuous injection (CI) model to the integrated flux densities for the source regions under consideration. The CI model was used when modelling the lobes assuming that they encompass regions where particle acceleration is ongoing, which cannot be assumed for (all) of the smaller measurement regions. Although the models do not give intrinsic ages \citep{Harwood2017}, they are useful to address the source properties.
The injection spectral index was kept fixed for each fitting run, and the fitting was performed over a search grid spanning values from $ \alpha_{\mathrm{inj}} = -0.2 $ to $ \alpha_{\mathrm{inj}} = -0.9 $. The derived spectral ages and spectral break frequencies (in the CI fit case) resulting from the fitting procedure are shown in Table \ref{table:spix_regs}. The average reduced $ \chi^{2} $ measure for the goodness of the fit (one degree of freedom) is given in the last column. The fit results are shown in Figure \ref{CI_fits}. \begin{figure}[!htpb] \centering \includegraphics[width=0.5\textwidth]{NW_fit.pdf}\\ \includegraphics[width=0.5\textwidth]{SE_fit.pdf} \caption{CI model fits as reported in the lower section of Table \ref{table:spix_regs}.} \label{CI_fits} \end{figure} The derived ages for individual regions indicate older plasma as one goes from the lobe outer edges toward the core, consistent with what would be expected if the main acceleration regions are located in the outer lobes. The injection spectral indices which best fit the data are not steep, indicating that LOFAR is probing particles which have retained their energy since their acceleration phase. We will discuss the spectral ages further in Section \ref{dis}. 
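As a rough cross-check on the fitted CI ages, one can evaluate the standard break-frequency relation $ t \simeq 1590\, B^{1/2} / (B^{2} + B_{\mathrm{CMB}}^{2}) \left[(1+z)\,\nu_{\mathrm{br}}\right]^{-1/2} $ Myr (with $ B $ in $ \mathrm{\mu} $G and $ \nu_{\mathrm{br}} $ in GHz) using the values quoted in the text. Note that this simple estimate adopts conventions that differ somewhat from those used internally by {\tt BRATS}, and it agrees with the fitted NW-lobe CI age only to within $ \sim 15\% $:

```python
import math

def sync_age_myr(B_uG, B_cmb_uG, nu_break_ghz, z):
    """Radiative (synchrotron + inverse Compton) age in Myr for a
    spectral break at nu_break (GHz), magnetic field B and
    CMB-equivalent field (both in microgauss)."""
    return (1590.0 * math.sqrt(B_uG)
            / (B_uG**2 + B_cmb_uG**2)
            / math.sqrt(nu_break_ghz * (1.0 + z)))

# Values from the text: B = 1.28 uG, B_CMB = 3.93 uG, z = 0.1005,
# and the CI break frequency of the NW lobe, nu_br = 479 MHz
t = sync_age_myr(1.28, 3.93, 0.479, 0.1005)
print(f"t ~ {t:.0f} Myr")  # ~ 145 Myr, vs. the fitted CI age of ~ 129 Myr
```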
The particle energy assuming equipartition can be expressed as \citep{RefWorks:148,Pacholzcyk1970} \[ \mathrm{E}_{\mathrm{eq}} = \dfrac{7}{4}\left[(1+\mathrm{k})\mathrm{c}_{\mathrm{12}}\mathrm{P}\right]^{\frac{4}{7}}\left(\dfrac{\mathrm{\Phi} \mathrm{V}}{\mathrm{6\pi}}\right)^{\frac{3}{7}} \] \noindent where $ \mathrm{\Phi} = 1 $ is the volume filling factor, $ \mathrm{V} $ the volume of the region filled with relativistic particles and fields, $ \mathrm{k} $ (= 1) the electron to proton ratio, $ \mathrm{P} $ the radio luminosity integrated over the earlier specified frequency range, for the regions under consideration, and $ \mathrm{c}_{\mathrm{12}} $ is a constant \citep[in our case $ \mathrm{c}_{\mathrm{12}} = 1.6 \times 10^{7} $;][]{Pacholzcyk1970}. Assuming that the lobes are in pressure balance with the intergalactic medium (IGM), the relativistic gas pressure in the lobes, $ \mathrm{p}_{\mathrm{l}} $\footnote{$ \mathrm{p}_{\mathrm{l}} = (\gamma - 1)\mathrm{e}_{\mathrm{eq}} $, where $ \mathrm{e}_{\mathrm{eq}} = \mathrm{E}_{\mathrm{eq}} / \mathrm{V} $ is the lobe energy density and $ \gamma $ is the ratio of specific heats; for relativistic gas $ \gamma = \frac{4}{3} $} should balance with the gas pressure of the IGM ($ \mathrm{p}_{\mathrm{IGM}} = \mathrm{n}_{\mathrm{IGM}}\mathrm{kT} $), where $ \mathrm{k} $ is the Boltzmann constant and $ \mathrm{T} $ the temperature of the IGM in degrees Kelvin. Adopting $ \mathrm{T} = 10^{7} $K, we can roughly estimate the IGM particle density values $ \mathrm{n}_{\mathrm{eq}} = \frac{\mathrm{e}_{\mathrm{eq}}}{3\mathrm{kT}} $ \citep{RefWorks:148, Hunik2016}, and list the resulting values in Table \ref{table:fluxes}. 
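The pressure and density estimates above amount to simple arithmetic on $ \mathrm{E}_{\mathrm{eq}} $ and $ \mathrm{V} $; a minimal Python sketch using the NW-lobe values of Table \ref{table:fluxes}, which reproduces the tabulated $ \mathrm{P} $ and $ \mathrm{n}_{\mathrm{IGM}} $ to within the rounding of the inputs:

```python
k_B = 1.380649e-23  # Boltzmann constant [J/K]

def lobe_pressure(E_eq, V, gamma=4.0 / 3.0):
    """Relativistic gas pressure p_l = (gamma - 1) * e_eq [Pa],
    with e_eq = E_eq / V the lobe energy density."""
    return (gamma - 1.0) * E_eq / V

def igm_density(E_eq, V, T=1e7):
    """Equivalent IGM particle density n = e_eq / (3 k T), in cm^-3,
    assuming pressure balance with IGM gas at temperature T [K]."""
    n_si = (E_eq / V) / (3.0 * k_B * T)  # [m^-3]
    return n_si * 1e-6                   # [cm^-3]

# NW lobe: E_eq = 2.0e53 J, V = 0.9e67 m^3 (Table of source parameters)
p = lobe_pressure(2.0e53, 0.9e67)
n = igm_density(2.0e53, 0.9e67)
print(f"p_l ~ {p:.1e} Pa, n_IGM ~ {n:.1e} cm^-3")
```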
\begin{table*}[!htpb] \centering \noindent \caption{\small Source parameters and derived quantities} \label{table:fluxes} \small \begin{tabular}{c c c c c c c} \hline\hline\\ \small ID & \small $\mathrm{S}_{\mathrm{143}}$ [Jy] & \small $\mathrm{L}_{\mathrm{143}}$ [W$\mathrm{Hz}^{-1}$] & $ \mathrm{E}_{\mathrm{eq}} $ [J] & $ \mathrm{V} $ [$ \mathrm{m}^{3} $] & $ \mathrm{n}_{\mathrm{IGM}} $ [cm$ ^{-3} $] & $ \mathrm{P} $ [Pa] \\ \hline NW lobe & $ 5.0 $ & $ 1.2 \times 10^{26} $ & $ 2.0\times10^{53} $ & $ 0.9\times10^{67} $ & $ 5.2 \times 10^{-5} $ & $ 7.2\times10^{-15} $ \\ Core & $ 9.0 $ & $ 2.2 \times 10^{26} $ & - & - & - & - \\ SE lobe & $ 3.6 $ & $ 8.9 \times 10^{25} $ & $ 1.7\times10^{53} $ & $ 1.0\times10^{67} $ & $ 4.2\times10^{-5} $ & $ 5.8\times10^{-15} $ \\ \hline\\ \end{tabular} \end{table*} \section{Discussion} \label{dis} Our LOFAR imaging recovers the source structure as described previously in the literature \citep{RefWorks:130, Mack1997}, and discussed in the previous section of this work. Owing to the high surface brightness sensitivity in our LOFAR images, we can now trace the SE lobe emission all the way to the core even in the intermediate resolution maps. We note that the NW lobe is shorter than the SE lobe. Interestingly, this asymmetry is inverted for the small-scale emission of the CSS core. There, the NW extension is longer than the SE as seen by \cite{RefWorks:258}. They also speculate that the dust lane imaged by HST close to the core may be part of the material that helps collimate the radio emission. As was noted in Section \ref{intro}, there is a hint of a slight wiggle of the ridge line connecting the core and the outer lobe edges. It is visible in Figure \ref{3C236:map} as a departure from the symmetry axis in the NW lobe, where the inner hotspot and the outer diffuse region are not on a straight line to the core. This may be due to the wobble of the jet as it drills through the IGM/ICM over time. 
In this context, the appearance of the SE lobe hotspot is intriguing. It was described as a double hotspot in the literature \citep{RefWorks:130}; now, using LOFAR, we can see that it is in fact a triple hotspot; the southern component is split in two (Figure \ref{3C236:map}). It may be that the jet was deflected at the hotspot producing the observed morphology, or that the jet working surface has shifted over time. \cite{Lonsdale1986} have suggested that such hotspots can originate from a flow redirection in the primary hotspot. \cite{RefWorks:228} have classified 3C~236 as a double-double \citep{Saikia2009,Orru2015} radio galaxy, since the restarted activity in the core has extended emission aligned with the large-scale lobes. If so, 3C~236 may be a ``triple-double'' radio galaxy, with the inner hotspot in the NW lobe signifying a stall in the jet, or a short sputter in the accretion episode responsible for the creation of the large-scale lobes. In this view, the diffuse outer region of the NW lobe is the (still active) remnant hotspot of the previous activity episode and the embedded hotspot is produced by the jet expelled during the re-activation episode, which is still advancing through the lobe material. Within this context, the wiggle noticeable in the source morphology (mentioned above) can be explained by a slight shift in the jet axis during the jet stall/sputter event. The lobe length asymmetry and the position of the hotspots may be caused by a difference in material density on the opposite sides of the host galaxy, at the position of the lobes, and higher for the NW lobe. This is tentatively supported by the particle density we have derived, presented in Table \ref{table:fluxes}, which is only a factor of $ \sim 3 $ higher than the medium density obtained by \cite{Mack1997}, and hence broadly comparable with their result. Owing to their sizes, GRGs can be used as probes of the physical conditions of the IGM.
\cite{Malarecki2013} have performed such a study on a sample of 19 GRGs; we are in agreement with the values they have derived for the mean lobe pressures in their sample (ranging from $ 1.34\times10^{-15} $ to $ 1.91\times10^{-14} $ Pa). In a subsequent study of the same sample, \citet{Malarecki2015} find that GRGs tend to occur in sparse environments (such as the one of the 3C~236 host galaxy), and they show tentatively that shorter lobes expand in regions of (on average) higher galaxy density. This may be relevant to explain the lobe morphology of 3C~236. Further studies on the immediate environment of the host galaxy of 3C~236 should test this hypothesis. If true, the environment should be denser to the north-west, where the shorter lobe extends. On the other hand, the large-scale asymmetry and the (reverse) small-scale asymmetry may have a physical origin in asymmetric host galaxy properties. Recent studies of the GRG NGC~6251 by Cantwell et al. (submitted) have found ages for its lobes of less than 50 Myr, and show that the newly discovered faint lobe extensions have ages greater than 200 Myr. The radiative ages for the lobes of 3C~236 we derive fall between the ages derived for the different regions of NGC~6251. Given the morphological difference between the lobes of these two GRGs (the lobes of NGC~6251 are far less confined, and it is an FRI radio galaxy), the results of the studies are consistent. The lobe pressure values they find ($ 4.9\times10^{-16} $ to $ 4.8\times10^{-13} $ Pa) are also consistent with our findings. Our derivations of the lobe ambient medium assume that the lobes are in pressure balance with the IGM. One may find this assumption objectionable, as radio galaxy lobes are often observed to be under-pressured. However, \cite{Croston2014} have argued that FRIs can be in pressure balance if there is entrainment of surrounding material by the jet flow. 
Recent reports that FRI lobes are energetically dominated by protons \cite{Croston2018} seem to support the entrainment scenario. Similarly, \cite{harwood2016, harwood2017b} argue that FRIIs can be brought back into agreement by considering the steeper than expected injection index which is sometimes derived using model fits to data from low-frequency observations. Our 3C~236 spectral index map, with the highest spatial resolution obtained so far at these frequencies, allows us for the first time to associate morphological and spectral features. We clearly see a flatter (compared to the surrounding regions) spectral index associated with the inner hotspot of the NW lobe, hinting at it being a particle acceleration region. Also, while previous spectral index maps (Figure 3, top panel in \cite{RefWorks:148}) only weakly hinted at the spectral index steepening toward the lobe edges, we can now better trace that transition. The curious flattening of the spectral index in the inner SE lobe which was hinted at previously \citep{RefWorks:148}, now stands out. It can be a signature of the interaction between the jet inflating the SE lobe and the IGM at that position; the spectral index indicating that acceleration is ongoing. In general, the spectral ages we have derived do not agree with the values published by \cite{RefWorks:148}; these authors obtain lower age estimates (ranging from less than $ 8 $ Myr to $ 20 $ Myr). The only exception is region two, where our age is broadly comparable with their estimate (they have an age of around $ 20 $ Myr for that region of the source, while our value is around $ 50 $ Myr, both using the JP model). The ages we derive measuring the integrated flux density of the lobes are substantially higher than what \cite{RefWorks:148} derive. 
The fact that the age for region eight using the single-injection JP model is comparable with the age for the NW lobe derived using the CI model, suggests that our ages are a more robust measure of the characteristic age compared to previous studies. The break-frequencies in our model lobe spectra are located in the lower frequency range of the data used by \cite{RefWorks:148}, which is consistent with the fact that we measure flatter spectral indices in the lobes. LOFAR can trace the spectral flattening towards still lower frequencies, and thus characterize the spectral break. The age difference between these studies is most likely due to the fact that LOFAR measures the emission from the oldest particle population, affecting our estimates. It should also be noted that due to the uncertainties in the assumed values of the input parameters (especially the magnetic field value; \cite{RefWorks:148} use values between 0.3 $ \mu $G and 0.7 $ \mu $G), uncertainties in the model, mapping resolution as well as the sparse frequency coverage, these ages should be taken as limits to the actual values. Our age estimates support a scenario where the lobes are built up by a jet advancing with a speed of around $ 0.1\mathrm{c} $ \citep[as argued by][]{RefWorks:130}, i.e. that speed is required to inflate the lobes to their present linear size in a time broadly consistent with their derived ages (overall ages based on the CI model). Further, as was already mentioned in the introduction section of this work, \cite{RefWorks:228} suggest (based on their HST studies of star formation in the nucleus of the host galaxy) an age of the large-scale lobes in the range of $ 10^{8} $ to $ 10^{9} $ years, in line with our findings. \section{Conclusion} \label{con} We have presented new LOFAR observations of the GRG 3C~236. We have studied this radio galaxy for the first time at a resolution of up to $ 6\arcsec $ at 143 MHz. 
Also, we have derived the highest resolution spectral index maps to date (at $ 50 \arcsec $ resolution). Our main conclusions are: \begin{itemize} \item We observe an inner hotspot in the north-western lobe, separate from its more diffuse outer region. It is also discernible in the spectral index map, as a region undergoing more recent particle acceleration (flatter spectral index values). This detection, taken together with the overall source morphology, may be an indication of a short interruption of the accretion episode/jet sputter. \item The brighter component of the SE lobe double hotspot is resolved into two components, making this feature a triple hotspot. \item The source energy/pressure balance with the IGM suggests that confinement by the IGM may be responsible for the morphology of the lobes; the NW lobe is probably confined and the SE one has expanded in a lower density medium, reflected in the somewhat steeper spectrum of its outer region/northern edge. \item The derived spectral ages are consistent with a jet advancing at $ 0.1\mathrm{c} $ in the surrounding medium of the host galaxy. \end{itemize} LOFAR is a valuable instrument for studies of giant radio sources. Its sensitivity to low surface brightness features, combined with its capability for high resolution imaging at low frequencies, offers an unprecedentedly detailed view of source emission regions containing low energy plasma. This is useful to uncover previously unknown features even in targets which have been studied for decades, such as 3C~236. \begin{acknowledgements} LOFAR, the Low Frequency Array designed and constructed by ASTRON, has facilities in several countries that are owned by various parties (each with their own funding sources), and that are collectively operated by the International LOFAR Telescope (ILT) foundation under a joint scientific policy. We would like to thank Karl-Heinz Mack for providing FITS images for the previously published WSRT map. 
RM gratefully acknowledges support from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Advanced Grant RADIOLIFE-320745. MB acknowledges support from INAF under PRIN SKA/CTA ``FORECaST''. GJW gratefully acknowledges support from the Leverhulme Trust. SM acknowledges funding through the Irish Research Council New Foundations scheme and the Irish Research Council Postgraduate Scholarship scheme. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\\ This research has made use of APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration, 2019). \end{acknowledgements} \bibliographystyle{3C236_lofar}
\section{Introduction} \label{sec:Introduction} The art of game design has advanced tremendously in recent years, fueling and being fueled by a modern renaissance in tabletop and video gaming. The methods and vocabulary used by designers and presented in game design schools have become increasingly sophisticated, as efforts have been made to systematize our understanding of game design elements and process (e.g., \cite{tekinbas_rules_2003,koster_theory_2005,adams_game_2012,schell_art_2019,engelstein_building_2019}). Games are designed and used to entertain, train, educate, tell stories, sell products, study psychology, and simulate war, to name a few of their roles. They are also advancing as experimental tools, probing real-world questions through the design and play of publicly accessible board and video games \cite{reddie_next-generation_2018}. However, little mathematical attention has been given to questions of game design. Games are distinct among artistic media in that the interactive systems underlying them can often be precisely defined. This has enabled great mathematical progress in understanding game-centric decision-making processes, through efforts in game theory and artificial intelligence (AI) (e.g., \cite{yannakakis_artificial_2018}), but this progress has been largely isolated from other, ``softer'' subfields of game studies and design \cite{melcer_games_2015,melcer_toward_2017}. This definability of games, however, could also be used to formally explore questions of interest to those softer subfields: What makes a game broken, or balanced, or hyper-competitive? How much does a game mechanic matter for overall gameplay? How are these two games related? What's the best user interface to reflect the underlying rules? What makes this game hard to learn? Can we predict behavior or strategy in one game based on data from a similar game? 
Designers and players can have strong intuitions for questions like these and others, and those intuitions often draw upon the precisely definable underlying game systems. This is an opportunity to complement existing game design discourse with mathematically formal underpinnings. We thus propose the study of \emph{``Mathematical Ludology'': a mathematical exploration of the space of games and their properties in order to better understand game design principles, gameplay phenomena, and player behavior in complex games.} The goal of the present paper is to lay groundwork for this exploration by developing some basic perspectives and accompanying mathematical framework through which complex games can be precisely and compactly described, and their design interrogated. Mathematical ludology will be unavoidably experimental at some level, since we are chiefly interested in human interactions with games. Observational data, such as human gameplay data, will be important for developing and testing analytical models. Simulations using AI agents may provide useful approximations \cite{khalifa_modifying_2016} of human play---indeed, some progress has already been made on simulation-based measurements of game design properties \cite{browne_evolutionary_2011}. Ultimately, our goal is to discover how much we can describe games and predict their gameplay before players sit down to play, using analytical methods inspired from such measurements---perhaps even enabling back-of-the-envelope approximations usable by ludologers or designers on the fly. In this sense, mathematical ludology can be viewed as an experimental science of complex games, using real-world observations and formal methods to develop descriptive and predictive models of games. We think there is great potential benefit to this formal exploration, even beyond game design. 
Identifying important mechanics or inter-game relationships could be useful for training general gameplaying AI agents, or developing approximate game theoretic solutions. Learning to derive a user interface from logical game rules could be useful in varied applications of interactive system design. Understanding facets of typical player behavior in complex games could help interpret gameplay data from simulation games (e.g., \cite{reddie_next-generation_2018}), making it more possible to isolate laboratory effects (behavior unique to gameplaying) from behavior that reflects real-world responses. By the same token, mathematical ludology will necessarily be a multidisciplinary effort (like most of game studies), and will benefit from existing research in game theory, general gameplaying, cognitive science, psychology, interactive system design, wargaming, and other fields, not to mention game design. Two especially notable companion fields are game design efforts towards ``game grammar,'' building formal methods to atomize and diagram games (e.g., \cite{cousins_elementary_2004,koster_grammar_2005,adams_game_2012,stephane_game_2006}); and recent computational efforts in digital archaeoludology \cite{browne_modern_2018}, building on prior work in automated game design \cite{browne_evolutionary_2011}. These are the two research areas which, to our knowledge, are most similarly interested in mathematical questions of game design and structure, and we will discuss them more later on. The rest of this paper proceeds as follows: In \cref{sec:GameDescriptions}, we describe a hierarchy of game descriptions that will help frame and contextualize different questions of interest. In \cref{sec:UnderlyingGameSystem}, we define our lowest level formal description of a discrete game system, which will serve as a foundation for further study. We also discuss why existing formal descriptions are insufficient for our purposes. 
We discuss how to build and interpret game trees and automata in \cref{sec:GameTreesAndAutomata}. Using these game trees, we develop equivalence relations on the space of game systems in \cref{sec:AgencyEquivalence}, with \cref{sec:EquivalenceTechnicalDefinitions} containing the technical details. \cref{sec:StructuredNotation} builds on the formal description of \cref{sec:UnderlyingGameSystem}, offering more powerful notation to ease the expression and analysis of complex games. We discuss the relationship of our work to formal methods in game design and digital archaeoludology in \cref{sec:RelatedWork}, and wrap up with discussion in \cref{sec:Discussion}. \section{Game Description Hierarchy} \label{sec:GameDescriptions} \begin{figure} \centering \includegraphics[scale=1.00]{figures/figure-Hierarchy.pdf} \caption{The game description hierarchy. We focus on Underlying Game Systems in this paper. } \label{fig:GameDescriptionHierarchy} \end{figure} In order to isolate some of the facets that make up a game, and contextualize the work of this paper, we propose a four-tier hierarchy of game descriptions. We will illustrate these tiers by contrasting the following two games: standard chess, played by two people moving black and white pieces on a checkered board; and blindfold chess, played by two people dictating their moves (e.g., ``knight to c3''), and each having to remember the game state, without any external record of it. At the lowest tier, there is the \emph{underlying game system} (UGS), or simply \emph{game system} when there is no risk of confusion. This will be the focus of the present paper. A UGS describes the underlying rules and mechanics of the game,\footnote{Comparable to the ``constituative rules'' of \cite{tekinbas_rules_2003}.} the logic of a game, without any specification of the user interface or the information given to (or hidden from) the players. 
Specifically, a UGS provides: \begin{itemize}[noitemsep] \item Who is playing the game \item A way to describe the current game state \item How the game may be set up at the beginning \item The choices each player may make at each state \item The consequences of those choices, which alter the game state \item The results of a finished game, like who won or lost, or the final score \end{itemize} Standard and blindfold chess can be described with identical underlying game systems, since they share the same logical rules. The second tier is a \emph{perceived game system} (PGS). This builds on a UGS by adding an information layer, specifying what players are told (and not told) about the UGS and its state, other players' information, and how player knowledge might interact with the UGS rules (e.g., to describe how players obtain new information, or to describe epistemic games \cite{thielscher_gdl-iii:_2017}). A PGS is needed to account for hidden game states (e.g., a secret hand of cards) or hidden game rules (e.g., \emph{Betrayal at House on the Hill} \cite{glassco_betrayal_2004}, the card game Mao, or \emph{Gloomhaven} \cite{childres_gloomhaven_2017}). In a PGS for a perfect information game, like standard chess, every aspect of the UGS and its state is common knowledge among the players. In blindfold chess, the UGS rules, initial state, and each move are common knowledge, but the game state is otherwise hidden. We find this information layer useful to separate from the UGS for two reasons: first, because the UGS is interesting to study in its own right, and second, because information specification can be very recursive and complicated (e.g., ``I know that you know that I don't know whether we're playing this game or that game''), and it is useful to have a separate object to ground this specification. We offer some brief suggestions on how hidden information might be treated in \cref{sec:HiddenInformation}, for instance by employing intervals on the space of UGSs. 
Note it is useful to distinguish between \emph{information} given to players, which is explicitly revealed through the game, and \emph{knowledge} that players have, which may additionally include memories or inferences. For instance, the current game state in blindfold chess is never given as information (after the initial state), but it may be part of player knowledge (if each player can remember how it has evolved). A perceived game system specifies information given to players, while inference and memory are left to player models (see below). The third tier description is a \emph{game representation}. A game representation builds on a PGS by additionally specifying the game's user interface, which should respect the rules and hidden information described in the PGS. Standard chess uses pieces on a board as its interface, while blindfold chess uses a purely audio interface with certain accepted utterances. It is useful to separate the interface from earlier tiers, because two game representations might have identical PGSs and yet very different interfaces (e.g., versions of tic-tac-toe, see \cref{ex:ArithmeticTic-tac-toe}), or identical interfaces but very different PGSs or UGSs (e.g., Go and Gomoku). This interface specification may need varying levels of fidelity or abstraction for different research questions. It may suffice to describe a graph resembling the board shape, for instance, with symbols that can be placed on graph nodes to represent pieces on squares, but for understanding various gameplay effects it might also be useful to distinguish piece shapes, card iconography, where players are sitting \cite{rogers_6_2019}, etc. And finally, the fourth tier is a \emph{game actualization}. This has all the elements of a game representation, while also including means of communicating to each player the appropriate information, providing a realization of the user interface, and providing means of actually playing the game. 
Game actualizations are the ``real-world games'' that people actually play and interact with. A physical chess set with a rulebook could be an actualization of standard chess, while just a rulebook might suffice for blindfold chess; an appropriate piece of software could also function as an actualization for either. In contrast to game theoretic game descriptions, we do not include player preferences or payoffs in the description of the game at any of these levels. We relegate these and everything else about each player into a \emph{player model}, including their preferences, skill level, play style, faculties of memory and inference, personal familiarity with the game, grudges against other players, or anything else that might be relevant for the question of interest. These will be key to studying player behaviors and gameplay phenomena, and will likely benefit from existing research and methods in other fields, like game theory and cognitive science. \section{Underlying Game Systems} \label{sec:UnderlyingGameSystem} The focus of the present paper will be on mathematizing underlying game systems and investigating some of their properties. Specifically, we will focus on games that are discrete in time and space and that have a finite state space; \cref{def:UnderlyingGameSystem} can describe the game system of any such game. This captures the majority of board and card games, and many video games. It can approximate real-time or continuous-space games, but only insofar as they can be discretized into a finite state space. 
A different formalism will be needed to describe improvisational games, like tabletop role-playing games or nomic games, where game rules are created and modified in unforeseeable ways during the course of play.\footnote{Such a formalism will at least need an alternative form of state space, perhaps related to suggestions in \cite{hammond_schumpeterian_2007}.} The formalism (\cref{def:UnderlyingGameSystem}) we develop in this paper is not the only way to describe a game system, but we capture here those minimal elements essential to the logic of a game, while also making clear what agency each player has versus what is outside their control. It will provide our foundation on which to begin formalizing games: \begin{definition} \label{def:UnderlyingGameSystem} An \emph{(underlying) game system} $\mathcal{G}$ with $n$ players is a $9$-tuple $\mathcal{G} = \langle \mathcal{P}, \mathcal{T}, \mathcal{S}_0, \mathcal{D}, \mathcal{A}, C, L, \mathcal{O}, \Omega \rangle$, where: \begin{itemize} \item $\mathcal{P} = (1, \ldots, n)$ is a list of \emph{players}, agents which may make decisions in the game. \item $\mathcal{T} = (T_1, \ldots, T_m)$ is a finite list of finite sets, called \emph{substate tracks}. The set of \emph{game states} $\mathbb{S}$ is given by $\mathbb{S} = T_1 \times \cdots \times T_m.$ \item $\mathcal{S}_0 \subset \mathbb{S}$ is a set of \emph{initial conditions}. \item $\mathcal{D}$ is a set of \emph{decisions}, the choices which players may make in order to influence (but not directly change) the game state. This is extended with the \emph{null decision} $0 \notin \mathcal{D}$ to form $\mathcal{D}_0 \equiv \mathcal{D}\cup\{0\}$. \item $\mathcal{A}$ is a set of \emph{actions} $a: \mathbb{S} \to \mathbb{S}$, which can directly modify the game state. 
\item $C$ is a \emph{consequence function} $C(d_0^n, s)$ which takes a decision tuple $d_0^n\in\mathcal{D}_0^n$ (i.e., one possibly null decision per player) and state $s\in\mathbb{S}$ and returns a nonempty set of \emph{consequences}: a set of pairs $(p_a, a)$, where $p_a \in (0,1]$ is a non-zero probability and $a$ is an action or product of actions. The sum of probabilities in the set must equal 1. These are the consequences of decisions, which may be outside any one player's control. \item $L: \mathcal{P} \times \mathcal{D}\to 2^{\mathbb{S}}$ is a \emph{legality function}, which returns a (possibly empty) subset of $\mathbb{S}$ for each player $p\in\mathcal{P}$ and decision $d\in \mathcal{D}$, reflecting when that player can make that decision. The \emph{legal set of decisions for player $p$ at state $s$} is the set $L_p(s) \equiv \{ d\in\mathcal{D}: s \in L(p,d) \}$. A decision $d \in L_p(s)$ is \emph{legal} for $p$ at $s$, and \emph{illegal} otherwise. \item $\mathcal{O}$ is the set of \emph{outcomes} that can result from the game. \item $\Omega$ is an \emph{outcome function} $\Omega: \mathcal{S}_\text{ter}\to\mathcal{O}$, where $\mathcal{S}_\text{ter} \equiv \{ s\in\mathbb{S}: L_p(s) = \varnothing \text{ for all }p\in\mathcal{P}\}$ is the set of \emph{terminal game states}. Intuitively, $\mathcal{S}_\text{ter}$ is the set of game states at which no legal decisions can be made, so the game ends and the result is computed by $\Omega$. \end{itemize} The collection of all such objects $\mathcal{G}$ forms the space of game systems $\mathbb{G}$. 
\end{definition} \cref{def:UnderlyingGameSystem} provides all the elements necessary for a game system (see bulleted list in \cref{sec:GameDescriptions}): players ($\mathcal{P}$), the game state space ($\mathbb{S}$, factorized via $\mathcal{T}$), initial game states ($\mathcal{S}_0$), decisions available to players ($\mathcal{D}$), when those decisions are legal ($L$), the possible consequences of those decisions ($C$), how the game state is changed as a result ($\mathcal{A}$), and possible game outcomes ($\mathcal{O}, \Omega$). It allows for mixed sequential and simultaneous play, deterministic or nondeterministic, and makes no additional assumptions about the content of games, not even presuming the existence of boards and pieces. The separation of decisions (player choices) from actions (changes to the game state) via consequence functions provides a formal separation between what individual players can and cannot control. Consequence functions are necessary to handle random chance and simultaneous play. In both cases, each player can influence play by their chosen decision, but the ultimate effect on the game state (the consequent action) is determined probabilistically and/or after considering the simultaneous decisions of other players (see \cref{def:GameplayAlgorithm,ex:FlippingCoin}). In sequential, deterministic games, it is possible to have a one-to-one mapping between decisions and actions---individual players can directly determine game state changes---and so consequence functions are superfluous (see \cref{ex:GuessingGame}). Note we have included randomness in the game description itself in order to allow a total conceptual separation of game system and player models---in contrast to typical game theory or general gameplaying formalisms, which relegate randomness to an extra fictional player who behaves randomly \cite{rasmusen_games_2007,thielscher_general_2010,piette_ludii_2019}. 
(There are also other reasons we have not used one of these existing formal game descriptions, see \cref{sec:ExistingDescriptions}.) The following algorithm can be used (by players) to play any game with an underlying game system described by \cref{def:UnderlyingGameSystem}: \begin{definition}[Gameplay Algorithm] \hfill \label{def:GameplayAlgorithm} \begin{enumerate} \item All players must agree on some $s_0\in\mathcal{S}_0$. Let the current state be $s' = s_0$. \item \label{item:GameplayAlgorithm-StartLoop} Each player $p$ must select one decision from their respective legal set $L_p(s')$ at the current state. If $L_p(s') = \varnothing$ for a player $p$, that player is assigned the null decision: $d_p = 0$. \item If $d_p = 0$ for all $p$, the game is over. Go to step \ref{item:GameplayAlgorithm-StopLoop}. \item Compute the set of consequences from the decision tuple and the current state: $c = C((d_1, \ldots, d_n), s') = \{(p_1, a_1), \ldots, (p_m, a_m)\}$. Randomly select a single consequent action from $c$, where $a_j$ is selected with probability $p_j$. \item Compute the new game state $s'' = a_j s'$. Repeat from step~\ref{item:GameplayAlgorithm-StartLoop}, with the new current state $s' = s''$. \item \label{item:GameplayAlgorithm-StopLoop} The current state is a terminal state, $s' \in \mathcal{S}_\text{ter}$. Compute the outcome $\Omega(s')$. \end{enumerate} \end{definition} This algorithm can be used to describe which game systems are \emph{playable}, and to distinguish between \emph{legal} and \emph{illegal} play (see \cref{sec:CompleteGamesAndLegalPlay}). It is also closely related to the construction of game trees (see \cref{sec:GameTreesAndAutomata}). \subsection{Basic Notation} \label{sec:SomeNotation} Before we proceed to examples, let us introduce some basic notation for use with \cref{def:UnderlyingGameSystem}. 
Here and throughout, we choose to use fairly compact symbolic notation, for instance ``$L(\text{P1},\text{flip}) = (-)_\text{coin}$'' which means ``Player `P1' can legally choose decision `flip' when the track `coin' takes value `$-$'.'' This has the benefit of efficiency, but may take some practice to read and write fluently. We include commentary with each example to build familiarity. We will introduce additional structured notation in \cref{sec:StructuredNotation}, which will help to improve readability, compactness, and ease of analysis for more complex game descriptions. \emph{Regarding game states.} We write $(v)_t$ to express that track $T_t$ takes value $v$. These may be combined using product, sum, and overline notation (for Boolean AND, OR, and NOT, respectively) to express a subset (or ``\emph{slice}'') of state space $\mathbb{S}$. For instance: $u = (1)_a(2)_b + \overline{(3)}_c$ is the subset of $\mathbb{S}$ such that (($T_a$ takes value 1 AND $T_b$ takes value 2) OR ($T_c$ does NOT take value 3)). \emph{Regarding actions.} By $a: S\mapsto (v_1)_1\cdots(v_k)_k$, we mean that for any state in the slice $S \subset \mathbb{S}$, the action $a$ changes the value of track $T_1$ to $v_1$, and so on to track $T_k$, acting as the identity on any tracks not appearing on the right-hand side. It also acts as the identity on any state $s\notin S$. E.g., if $a: \mathbb{S}\mapsto (1)_a$ and we have some state $s = (2)_a(3)_b$, then $a\cdot s = (1)_a(3)_b$. \emph{Regarding outcome functions.} We take $\Omega: S\mapsto \omega$ to mean $\Omega(z) = \omega$ for all terminal states $z \in S$. \subsection{Basic Examples} \label{sec:BasicExamples} Here are two examples of very simple games. Their respective game trees are illustrated in \cref{fig:CoinFlipGameTree,fig:GuessingGameTree}. We will see examples of more complicated games after introducing richer notation in \cref{sec:StructuredNotation}. 
\begin{example}[Flipping a coin] \label{ex:FlippingCoin} Two players face off and flip a coin once. The first player wins on heads, the second on tails. \begin{align*} &\mathcal{P} = \{ \text{P1}, \text{P2} \} \\ &\mathcal{T}: T_\text{coin} = \{ -, \text{heads}, \text{tails} \} \\ &\mathcal{S}_0 = (-)_\text{coin} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{D} = \{ \text{flip} \} \\ &L(\text{P1}, \text{flip}) = L(\text{P2}, \text{flip}) = (-)_\text{coin} \\ &C((\text{flip}, \text{flip})) = \{ (1/2, A_\text{heads}), (1/2, A_\text{tails}) \} \\ &\mathcal{A} = \{ A_\text{heads}, A_\text{tails} \}, \quad A_c: (-)_\text{coin} \mapsto (c)_\text{coin} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{O} = \{ \text{P1 win}, \text{P2 win} \} \\ &\Omega: (\text{heads})_\text{coin} \mapsto \text{P1 win}, \quad (\text{tails})_\text{coin} \mapsto \text{P2 win} \end{align*} The lines roughly divide the setup, gameplay, and ending portions of the system. From the initial condition, both players can only legally choose the decision ``flip.'' As a consequence of this joint decision, the value of track $T_\text{coin}$ becomes heads or tails with a 50-50 chance. The game then ends, with no more legal decisions available, and the winner is declared. Remember at this point that all of the symbols used for players, decisions, actions, etc. (e.g., ``P1'', ``flip'', or ``$A_\text{heads}$'') are only labels---we could have just as well used pictographs. We treat these labels as semantically void so far as the game system is concerned (see coordinates in \cref{def:StructuredNotationSamples} for contrast), though they may certainly become relevant as part of a game representation, where aesthetic description and theming may matter for the user interface. \end{example} \begin{example}[Guessing game] \label{ex:GuessingGame} One player chooses a number between one and five. 
A second player tries to guess the number, winning if they guess it correctly and losing otherwise. \begin{align*} &\mathcal{P} = \{ \text{P1}, \text{P2} \} \\ &\mathcal{T}: T_{\text{P1}} = T_{\text{P2}} =\{ -, 1, 2, 3, 4, 5 \}\ \\ &\phantom{\mathcal{T}:}\ T_{\text{picked}} = T_{\text{guessed}} = \{ \text{no}, \text{yes} \}\ \\ &\mathcal{S}_0 = (-)_{\text{P1}}(-)_{\text{P2}} (\text{no})_\text{picked} (\text{no})_\text{guessed} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{D} \sim \mathcal{A}\quad (C: \text{trivial}) \\ &\mathcal{A} = \{ \text{1}_{\text{P1}}, \text{1}_{\text{P2}}, \ldots, \text{5}_\text{P1}, \text{5}_\text{P2} \}, \\ &\qquad \qquad i_\text{P1}: \mathbb{S}\mapsto (i)_\text{P1}(\text{yes})_\text{picked} \\ &\qquad \qquad i_\text{P2}: \mathbb{S}\mapsto (i)_\text{P2}(\text{yes})_\text{guessed} \\ &L(\text{P1}, i_\text{P1}) = (\text{no})_\text{picked} \\ &L(\text{P2}, i_\text{P2}) = (\text{yes})_\text{picked} (\text{no})_\text{guessed} \\ &L(p, d) = \varnothing,\quad \text{otherwise} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{O} = \{ \text{P2 win}, \text{P2 lose} \} \\ &\Omega: (1)_\text{P1}(1)_\text{P2} + \cdots + (5)_\text{P1} (5)_\text{P2} \mapsto \text{P2 win}, \\ &\phantom{\Omega: }\ \text{otherwise} \mapsto \text{P2 lose} \end{align*} This is a sequential, deterministic game, with a one-to-one mapping between decisions and actions (denoted $\mathcal{D} \sim \mathcal{A}$). Thus we only need to list out the actions, instead of the decisions as well. The consequence function just maps each decision to its corresponding action with probability 1, regardless of state; we denote this with $(C: \text{trivial})$. In this game P1 picks a number by choosing one of the decisions $\{ 1_\text{P1},\ldots,5_\text{P1} \}$, while P2 is forced to pick the null decision because no legal options are available. 
Then P2 guesses a number by choosing one of the decisions $\{ 1_\text{P2},\ldots,5_\text{P2} \}$, while P1 picks the null decision. The game then ends, since no one can legally make another decision. This is a prime example where the underlying game system is not sufficient to capture the real-world experience. In particular, this game is only interesting if P2 can't see P1's pick. A perceived game system would specify this hidden information, in particular hiding $T_\text{P1}$ from P2 while still letting them see $T_\text{picked}$ so they know when to guess. The underlying game system is only concerned with the game logic below this information layer. Alternatively, an equivalent underlying game system could do away with $T_\text{picked}$ and $T_\text{guessed}$ altogether, replacing the legality function mappings with $L(\text{P1}, i_\text{P1}) = (-)_\text{P1}$ and $L(\text{P2}, i_\text{P2}) = \overline{(-)}_\text{P1}(-)_\text{P2}$. A corresponding perceived game system would then selectively reveal to P2 whether the current state $s \in (-)_\text{P1}$ or not, instead. We say more about what we mean by ``equivalent'' in \cref{sec:AgencyEquivalence}. \end{example} \subsection{Hidden Information} \label{sec:HiddenInformation} We will not treat hidden information carefully here, but here are some brief suggestions on how it could be approached for perceived game systems built on \cref{def:UnderlyingGameSystem} as the underlying game system. Many games only hide aspects of the current state from the players, for instance the cards in an opponent's hand, or which number player one has picked. Players otherwise have full common knowledge of the rules and game system. Such hidden state information can be specified simply by flagging which track values are visible or invisible to each player, as in the comments following \cref{ex:GuessingGame}. These flags could even be included as additional tracks in $\mathcal{T}$ as part of the game state. 
(This is essentially how Ludii manages hidden information \cite{piette_ludii_2019}.) However, it is also very possible for other aspects of the game system to be hidden. This is necessary to model story-based games, like many video games or modern campaign or legacy board games (e.g., \cite{childres_gloomhaven_2017}), where future scenarios and rules are unknown until they are reached. It is also important for modeling and understanding the dynamics of tutorial games: games where the rules are revealed to players a bit at a time, often as a method for teaching first-time players. Games like these, with partially hidden systems, could be modeled by specifying intervals on the space of underlying game systems $\mathbb{G}$, representing all of the possible games that each player is told they might be playing. This would also require appropriate maps from each of these possible games (and their states, and the information available to other players) to the true game (and its actual state, and the actual information available to other players). \subsection{Complete Games and Legal Play} \label{sec:CompleteGamesAndLegalPlay} The gameplay algorithm \cref{def:GameplayAlgorithm} provides our first technical distinction between kinds of gameplay: Any gameplay which strictly follows the gameplay algorithm is said to be \emph{legal play}, and a game state is said to be \emph{legally accessible} if it can be produced in some application of this algorithm. Gameplay which deviates from the algorithm constitutes \emph{illegal play}: for instance, starting the game at some $s\notin\mathcal{S}_0$, neglecting the legality function when choosing decisions, ignoring the probabilities when selecting a consequent action, modifying the game state in a way not given by a consequent action, etc. 
It will be interesting in future work to explore the effects of illegal play, intentional and accidental---especially to understand the effects of accidental rule-breaking (common in some complex games, e.g., \emph{Mage Knight} \cite{chvatil_mage_2011}), and how to make the designer's goals of gameplay robust against it. For the remainder of this paper, however, we will assume legal play. The gameplay algorithm additionally furnishes a sense of when a game description is complete: \begin{definition} \label{def:CompleteGame} We call a game system $\mathcal{G}$ \emph{complete} or \emph{playable} if the algorithm \cref{def:GameplayAlgorithm} can always be faithfully performed. That is, if $\mathcal{P}, \mathbb{S}, \mathcal{S}_0, \mathcal{D}, \mathcal{A}$ are all nonempty, the legality function $L$ is uniquely defined for all player-decision pairs, and the consequence function $C$ is uniquely defined at all legally accessible decision tuples ($d_0^n \in \mathcal{D}_0^n$) and states. If any terminal states are legally accessible, then we also require $\mathcal{O}$ to be nonempty and the outcome function $\Omega$ to be uniquely defined on all legally accessible terminal game states. \end{definition} It may in general be difficult to confirm if a game system is playable, because determining which game states are legally accessible is a nontrivial problem \cite{chinchalkar_upper_1996}. The simplest way to ensure a game system is playable is to make it \emph{overcomplete}: with the consequence function $C$ uniquely defined for every decision tuple (except possibly the all-null tuple) and state, and the outcome function $\Omega$ uniquely defined on all terminal game states, in addition to $L$ uniquely defined on all player-decision pairs, and nonempty $\mathcal{P}, \mathbb{S}, \mathcal{S}_0, \mathcal{D}, \mathcal{A}, \mathcal{O}$. 
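As a rough illustration of these overcompleteness criteria, consider the following Python sketch. The dictionary encoding and field names are our own invention, and the analogous totality check on the consequence function $C$ is elided for brevity:

```python
# Sketch of a structural overcompleteness check for a toy encoding of a
# game system as a dict. Encoding and field names are invented; the
# totality check on the consequence function C is elided.

def is_overcomplete(g):
    nonempty = all(g[k] for k in ("players", "states", "initial",
                                  "decisions", "actions", "outcomes"))
    # L must be defined on every player-decision pair.
    legality_total = all((p, d) in g["legality"]
                         for p in g["players"] for d in g["decisions"])
    # A state is terminal when no player has a legal decision there;
    # the outcome function must be defined on every terminal state.
    def terminal(s):
        return not any(s in g["legality"].get((p, d), set())
                       for p in g["players"] for d in g["decisions"])
    outcomes_total = all(s in g["outcome_fn"]
                         for s in g["states"] if terminal(s))
    return nonempty and legality_total and outcomes_total

coin_flip = {
    "players": {"P1", "P2"},
    "states": {"-", "heads", "tails"},
    "initial": {"-"},
    "decisions": {"flip"},
    "actions": {"A_heads", "A_tails"},
    "outcomes": {"P1 win", "P2 win"},
    "legality": {("P1", "flip"): {"-"}, ("P2", "flip"): {"-"}},
    "outcome_fn": {"heads": "P1 win", "tails": "P2 win"},
}
print(is_overcomplete(coin_flip))  # True
```

Checks like this are cheap precisely because overcompleteness quantifies over the whole state space rather than the legally accessible states.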
In fact, such a game is guaranteed to play faithfully from any arbitrary state $s \in \mathbb{S}$, not just $s \in \mathcal{S}_0$, which may be interesting for exploring illegal play. All examples in this paper are complete, though none of them are overcomplete. \cref{ex:FlippingCoin} does not define consequences for the tuple $(\text{flip},0)$, for instance. \subsection{Other Game Formalisms} \label{sec:ExistingDescriptions} Let us now address why we have developed a new formal game description, rather than using an existing one---besides the issue of randomness mentioned above (including it in-system rather than as a fictional player). The likely candidates are formal descriptions from game theory, general gameplaying, and formal methods in the game design community. Game theoretic descriptions are too limited in the games they can practically express. There are two issues in view here: first, game theoretic descriptions are unable to faithfully describe mixed sequential and simultaneous play. Strategic- and extensive-form games respectively describe simultaneous and sequential games well, but lose important nuance when trying to mix the two \cite{cooper_communication_1992,salles_beyond_2008}. Second, complex game descriptions are intractable with game theoretic formal descriptions. Generally, either the full game can be written explicitly as a strategic, extensive, or combinatorial game (all intractable, e.g., for chess), or else game theorists rely on ad hoc natural language description and reader familiarity to communicate the rules before proceeding to analysis (e.g., \cite{beck_combinatorial_2008}). We want a way to formally, and tractably, describe game rules even for complex games. We also find general gameplaying descriptions inadequate for our purposes. 
GDL and its extensions \cite{love_general_2006,thielscher_general_2010,thielscher_gdl-iii:_2017} are currently able to express the widest variety of games, but through code that is often intractably verbose, and from which it is difficult to extract structural features of games \cite{piette_ludii_2019}. \cref{def:UnderlyingGameSystem} bears some formal similarity to GDL-II without hidden information, and we believe it is just as expressive, though we offer a simpler specification of state space, a different treatment of randomness, and prefer more compact and extensible notation. More recent entrants in the field, RBG \cite{kowalski_regular_2019} and Ludii \cite{piette_ludii_2019}, more closely reflect the goals of mathematical ludology, with Ludii in particular being concerned with questions of game structure and design (see \cref{sec:RelatedWork}). However, both require the construction of a game board tightly integrated with the rules, distinguishing them as game representations (Tier 3 game descriptions). The underlying game systems \cite{piette_ludii_2019,kowalski_regular_2019} are related to \cref{def:UnderlyingGameSystem} (Ludii more than RBG), but designed for use with particular software and associated code bases, and with particular classes of games in mind. We desire something general and self-contained that we can develop to study any discrete game, with the flexibility to relate to such projects and software while not being beholden to them. There have also been formal descriptions and diagramming methods developed within the game design community (e.g., \cite{koster_game_2015,adams_game_2012,stephane_game_2006}), but they are built mostly as tools for high-level design or analyzing subsystems within games. They are unable to compactly capture the full detail of a game's rules, as we require. See \cref{sec:RelatedWork} for further discussion on these and Ludii as related work. 
\section{Game Trees and Automata} \label{sec:GameTreesAndAutomata} A complete game system from \cref{def:UnderlyingGameSystem} can be used to generate a (possibly infinite) game tree or a finite (possibly nondeterministic) game automaton. These are useful for visualizing and analyzing the game systems, as well as for making connections with existing work in game theory and AI. They are graphical representations of the algorithm \cref{def:GameplayAlgorithm}: each playthrough from that algorithm can be identified as a path from an initial node to a terminal node in one of these objects. We will focus on game trees, rather than automata, in this paper. Examples are illustrated in \cref{fig:CoinFlipGameTree,fig:GuessingGameTree}, and also \cref{fig:MatchingDecisionTreesExample,fig:BookkeepingReductionExample,fig:SinglePlayerReductionExample} (in \cref{sec:EquivalenceTechnicalDefinitions}). \begin{definition} \label{def:GameSystemTree} The \emph{game trees} of a complete game system $\mathcal{G}$ are the set $\tau(\mathcal{G}) \equiv \{ \tau(\mathcal{G},s_0): s_0\in\mathcal{S}_0 \}$, one for each initial condition. Each game tree $\tau(\mathcal{G},s_0)$ is given by the following construction: \begin{enumerate} \item Draw a root node, assigned the initial state $s_0$. \item \label{item:GenerateGameTreeAlgorithm-StartLoop} For each terminal node $w$ in the current tree, with assigned state $s(w)$ but no assigned outcome, do the following: \begin{enumerate} \item If $s(w) \in \mathcal{S}_\text{ter}$, assign the outcome $\Omega(s(w))$ to the terminal node $w$, then stop for node $w$. 
If $s(w)\notin\mathcal{S}_\text{ter}$, then proceed: \item Generate the set of all legal decision tuples at this state from the legal set $L_p(s(w))$ for each player: $D_0^n(s(w)) \equiv \{ (d_1, \ldots, d_n): d_p = 0 \text{ if } L_p(s(w)) = \varnothing,\text{ else }d_p\in L_p(s(w)) \}$ \item \label{step:DecisionMatrix} For each tuple $d_0^n \in D_0^n(s(w))$, draw a new child node $w'$ with a directed edge from $w$ to $w'$. Assign $d_0^n$ to this edge. \item For each child node $w'$ of $w$, do the following: \begin{enumerate} \item Compute the set of consequences $c' = C(d_0^n,s(w))$. \item If $|c'| = 1$, i.e. $c' = \{ (1, a) \}$, assign the state $a\cdot s(w)$ to $w'$. Otherwise: \item For each probability-action pair $(p_i,a_i) \in c'$, draw a new child node $w''$ with a directed edge from $w'$ to $w''$. Assign $p_i$ to this edge, and the state $a_i\cdot s(w)$ to $w''$. \end{enumerate} \end{enumerate} \item If all terminal nodes in the current tree have outcomes assigned to them, stop: this tree $\tau(\mathcal{G},s_0)$ is finished. Otherwise, repeat from step~\ref{item:GenerateGameTreeAlgorithm-StartLoop} with the current tree. \end{enumerate} \end{definition} \captionsetup[figure]{skip=9pt} \begin{figure} \centering \includegraphics[scale=1.0]{figures/figure-CoinFlipTree.pdf} \caption{The game tree for the coin flipping game system in \cref{ex:FlippingCoin}, with all nodes and edges fully labeled: the root state node with state $(-)_\text{coin}$, the decision edge with tuple $(\text{flip},\text{flip})$, the chance edges both with probability $1/2$, and the terminal state nodes $(\text{heads})_\text{coin}$ (outcome: P1 win) and $(\text{tails})_\text{coin}$ (outcome: P2 win). Solid nodes are state nodes, while the open node is a chance node. 
} \label{fig:CoinFlipGameTree} \vspace{-7pt} \end{figure} In summary, each resulting tree has the following structure: \emph{State nodes} have assigned states and outgoing \emph{decision edges}, which have assigned decision tuples. These decision edges lead to either new state nodes, or to unlabeled \emph{chance nodes} which have outgoing \emph{chance edges} labeled with probabilities. These chance edges lead to new state nodes. State nodes may be further subdivided into \emph{single-player nodes}, in which only a single player has legal decisions available; \emph{multiplayer nodes}, in which multiple players have legal decisions available; and \emph{terminal state nodes}, which correspond to terminal states and have outcomes assigned to them. Single-player nodes can be said to \emph{belong} to the appropriate player. It is worth noting that these game trees are not guaranteed to be finite, even if the game in practice would end in finite time. Consider a game of rock-paper-scissors, for instance, where the two players can in principle keep tying forever. In cases like these, it might be more practical to consider game automata instead of game trees. These would be similarly defined, except each game state would only appear once. If a newly drawn edge would lead to a state that exists elsewhere in the automaton, it would be directed to the existing state node instead of drawing a new one. 
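To make the tree construction concrete, here is a short Python sketch (an informal rendering of \cref{def:GameSystemTree} with our own encoding, not part of the formalism) that builds the coin-flip tree of \cref{ex:FlippingCoin} as nested dictionaries:

```python
# Sketch: game trees as nested dicts. State nodes carry a state and
# decision edges; chance nodes carry probability-labeled chance edges.

def build_tree(state, legal, consequence, outcome, terminal):
    if terminal(state):
        return {"state": state, "outcome": outcome(state)}
    edges = {}
    for tup in legal(state):
        results = consequence(tup, state)      # list of (prob, next_state)
        if len(results) == 1:                  # probability 1: new state node
            (_, s2), = results
            edges[tup] = build_tree(s2, legal, consequence, outcome, terminal)
        else:                                  # chance node with chance edges
            edges[tup] = {"chance": [
                (p, build_tree(s2, legal, consequence, outcome, terminal))
                for p, s2 in results]}
    return {"state": state, "edges": edges}

# The coin-flip system: one joint legal decision, then a 50-50 chance.
legal = lambda s: [("flip", "flip")] if s == "-" else []
consequence = lambda tup, s: [(0.5, "heads"), (0.5, "tails")]
terminal = lambda s: s in ("heads", "tails")
outcome = lambda s: "P1 win" if s == "heads" else "P2 win"

tree = build_tree("-", legal, consequence, outcome, terminal)
print(tree["state"])  # '-'
```

Building an automaton instead would amount to memoizing this recursion on states, so each state is expanded at most once.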
Because the state space always has a finite size by virtue of \cref{def:UnderlyingGameSystem}, and any non-determinism follows specific probabilities, the game automaton would be most similar to a probabilistic automaton \cite{rabin_probabilistic_1963}.\footnote{ The 4-tuple $\langle \mathcal{D}, \mathcal{A}, C, L \rangle \subset \mathcal{G}$, which determines the moment-to-moment gameplay rules, could be combined into a single stochastic transition function $\delta(s,d_0^n)$ that takes a state and decision tuple and returns all possible subsequent states with corresponding probabilities. (Illegal transitions would have probability 0.) A probabilistic automaton could use this transition function, with state space $\mathbb{S}$ and input alphabet $\mathcal{D}_0^n$. } A game tree generated by \cref{def:GameSystemTree} is close to the classic game theoretic extensive form game, for instance as given in \cite{rasmusen_games_2007}, except without any information sets. In this sense, a game system might almost be considered a grammar to generate extensive form games without hidden information. The notable exception is how we handle simultaneous play. Unlike extensive form games, some nodes in the tree (so-called multiplayer nodes) require multiple players to make a decision at the same time. This avoids the ambiguities inherent in the information set construction of simultaneous play in extensive form games \cite{bonanno_set-theoretic_1992}, which relies on sequential moves with hidden information, and can be experimentally different from true simultaneous play \cite{cooper_communication_1992,salles_beyond_2008}. \captionsetup[figure]{skip=9pt} \begin{figure} \centering \hspace{-3.5mm} \includegraphics[scale=0.95]{figures/figure-GuessingGameTree.pdf} \caption{The game tree for the guessing game system in \cref{ex:GuessingGame}, with only a few state nodes and decision edges labeled, for brevity. 
Each of the five bottom subtrees has the same five decision tuples on its edges: $(0,1_\text{P2}), (0,2_\text{P2}),\ldots, (0,5_\text{P2})$. There are no chance nodes; this is a deterministic game. } \label{fig:GuessingGameTree} \vspace{-7pt} \end{figure} With this difference, it would be more accurate to consider a game tree by our definition as a hybrid between game theoretic extensive form and strategic form games. A game tree with no multiplayer nodes is simply an extensive form game, in which a single player has control of each node and chooses one of the outgoing edges to follow, ultimately producing an outcome at a terminal node. In a game tree with multiplayer nodes, however, each multiplayer node acts as a strategic form game: several players must simultaneously make a decision, which is then evaluated by an umpire to choose an outgoing edge to follow. (We describe this strategic form game with a \emph{decision matrix}, see \cref{def:DecisionMatrix}.) With this understanding, we could use our game tree \cref{def:GameSystemTree} along with a single marker as a user interface in a game representation, for playing any game without hidden information. \section{Agency Equivalence} \label{sec:AgencyEquivalence} The game system \cref{def:UnderlyingGameSystem} is deliberately flexible, to act as a foundation for a wide variety of game descriptions. However, its flexibility means that there are several ways to express the same game system. A key goal in mathematical ludology will be to measure distances between different games, so here we take a first step: what does it mean for two game systems to be equivalent? We will develop two important senses of this, \emph{game tree equivalence (up to relabeling)} and \emph{agency equivalence}. We describe them heuristically here, leaving the technical details for \cref{sec:EquivalenceTechnicalDefinitions}. 
Game tree equivalence up to relabeling (\cref{def:GameTreeEquivalenceUpToRelabeling}) matches game systems if they produce the same game trees, with some differences in aesthetic labeling---e.g., the names of decisions or states can differ, but the probabilities cannot. Two such game systems will be very similar in their formal descriptions. Agency equivalence (\cref{def:AgencyEquivalence}) matches game systems if they offer players the same agency, that is, if the systems offer players the same sorts of meaningful choices with the same sorts of consequences. We will define this by performing a series of reductions on game trees to prune spurious differences, declaring two game systems equivalent if their reduced trees match. There are four main kinds of difference that we consider spurious for this purpose: bookkeeping subtrees, single-player subtrees, symmetry-redundant subtrees, and decision matrix redundancies. Recall throughout that we are talking about underlying game systems (Tier 1 game descriptions), so differences due to hidden information and user interfaces are not in view right now. A \emph{bookkeeping subtree} (\cref{def:BookkeepingSubtree}) is a portion of a game tree where there is only one decision available at each state. Though there may be randomness involved, play continues on the subtree inevitably, without any chance for player influence. Perhaps a game has a cleanup phase, for instance, where players must discard all cards and end their turn. One game system may lump these together, while another might include a state where players must discard, followed by another state where they must end their turn. We do not consider these different from the standpoint of player agency. A \emph{single-player subtree} (\cref{def:SinglePlayerSubtree}) is a portion of the game tree where the same single player makes several deterministic decisions in a row. 
There is no difference in options or outcomes if the player makes these decisions one at a time or all at once. Consider pawn promotion in chess. One game system might have a player choose to advance a pawn to the final row, then from that state choose what piece to promote it into. Another might include move-and-promote-into-queen, move-and-promote-into-knight, etc., all as lumped decisions with no intermediate state. We do not consider these different from the standpoint of player agency. A \emph{symmetry-redundant subtree} (\cref{def:SymmetryRedundantSubtree}) is a portion of the game tree that is unnecessary because it duplicates a sibling subtree. Consider the first move in a game of tic-tac-toe. Although there are technically nine options, there are in practice only three: center, corner, or side. Playing on the right side does produce a different game state than playing on the left side, but because of the symmetry of the game board the substance of the remaining decisions is identical. If players were restricted to only play in the center, left-middle side, or bottom-left corner on the first turn, we would consider this identical to tic-tac-toe from the standpoint of meaningful player agency. Finally, a \emph{decision matrix redundancy} (\cref{def:DecisionMatrixRedundancy}) occurs when a player has two decisions at a state that would have identical results---it really does not matter which one they pick. This player would have the same agency if they had only one of those decisions available. 
Putting these together: \begin{definition} \label{def:AgencyEquivalence} We say two game systems $\mathcal{G}$ and $\mathcal{G}'$ are \emph{agency equivalent} if their respective game trees can be made equivalent up to relabeling (\ref{def:GameTreeEquivalenceUpToRelabeling}) by performing the following reductions, as many times as necessary, in any order: \begin{itemize}[noitemsep] \item Bookkeeping subtree reduction (\ref{def:BookkeepingSubtree}) \item Single-player subtree reduction (\ref{def:SinglePlayerSubtree}) \item Symmetry-redundant subtree reduction (\ref{def:SymmetryRedundantSubtree}) \item Decision matrix redundancy reduction (\ref{def:DecisionMatrixRedundancy}) \end{itemize} \end{definition} Note this definition can only be usefully applied for finite game trees, though infinite trees could be truncated at a certain depth and similarly compared. Practically speaking, establishing equivalence will probably be easiest by transforming and subsequently comparing the game system grammars, not the game trees. This definition gives intuitive and technical guidance on what those transformations must accomplish. \section{Structured Notation} \label{sec:StructuredNotation} The game system \cref{def:UnderlyingGameSystem} is sufficient for describing the game system of any finite discrete game, but writing down complex games with only the basic notation of \cref{def:UnderlyingGameSystem} and \cref{sec:SomeNotation} could be intractably verbose. The basic notation also does little to expose game mechanics or related rules that might appear in many games, like the $n$-in-a-row victory condition of Tic-tac-toe and Connect Four, or the sliding movement of a queen in Chess and Amazons. Relatedly, the labels given to tracks, decisions, etc. are semantically void so far as the formal description is concerned, being only convenient monikers to the ludologer writing them; integer labels need not respect addition or multiplication on the integers, for instance. 
In order to express games in a form most appropriate for their analysis, it will be helpful to use \emph{structured notation}. By this we simply mean notation for expressing game systems (\cref{def:UnderlyingGameSystem}) besides or in addition to the basic notation introduced earlier, especially notation that emphasizes structural patterns within game descriptions. We expect such structured notations will be critical for developing distance metrics on the space of game systems $\mathbb{G}$, for instance, since human intuitions for game similarity tend to rely strongly on high-level patterns in game designs. In this paper we use formal grammar-like notations, but different structured notations like diagrammatic methods or the ludeme-tree code of Ludii could be useful in other contexts (see \cref{sec:RelatedWork}). So without further ado, let us introduce a few key additional features that will ease the expression and analysis of a wide variety of games: \begin{definition}[Sample structured notations] \label{def:StructuredNotationSamples} The following notational features may be used along with the basic notation of \cref{def:UnderlyingGameSystem} and \cref{sec:SomeNotation}, to express game systems: \begin{enumerate} \item A set of \emph{ending states} $E\subset\mathbb{S}$ may be provided, at which the game should end even if legal decisions would otherwise be available. \emph{Partial legality functions} $\hat{L}_i(p, d)$ may be defined on subsets of $\mathcal{P}\times\mathcal{D}$ (returning subsets of $\mathbb{S}$), and a single player-decision pair $(p, d)$ may appear in multiple partial legality functions $\hat{L}_1(p, d), \ldots, \hat{L}_m(p, d)$. Each $\hat{L}_i$ adds additional restrictions on the legality of $(p,d)$. The overall legality function for $(p, d)$ is then given by $L(p, d) = \hat{L}_1(p, d) \cap \cdots \cap \hat{L}_m(p, d) \cap \overline{E}$. 
\item Each element of $\mathcal{P}, \mathcal{T}, \mathcal{D}, \mathcal{A}, \mathcal{O}$, as well as each substate value $v\in T\in\mathcal{T}$, may be assigned one or more \begin{enumerate} \item \emph{coordinates} $\phi$: an element of some mathematical object $\Phi$, with all the structure of that object available for manipulations on $\phi$. For instance, $\Phi$ may be a group or graph and $\phi$ may be a group element or node of a graph, so that group addition or node adjacency are well-defined notions inherited from $\Phi$ that can be used in the game description. \item \emph{tags} $t$: a label used to group related elements. Those elements of the same type (players, substate tracks, track values, decisions, actions, outcomes) which share the same tag may be referred to collectively by the tag name. E.g., $(t)^\text{p}$ is a set of players which share the same tag $t$, and similarly $(t)^\text{t}, (t)^\text{v}, (t)^\text{d}, (t)^\text{a}, (t)^\text{o}$ for the other types, respectively. \end{enumerate} \item Any element of the game description may be defined through the use of auxiliary \emph{ludemic functions}. These may take as arguments any element of the game description or current game state, including associated tags and coordinates, and return whatever is useful (e.g., state slices, a choice from a list, \ldots). \end{enumerate} \end{definition} Each of these features captures and highlights a different kind of structure in a game description. Ludemic functions are particularly flexible, intended to capture atomic patterns in a game's design (see \cref{sec:RelatedWork}), like the movement pattern of a chess piece or the probability distribution of an opposed dice roll in \emph{Risk} \cite{lamorisse_risk:_1959}. They may be defined for an individual game description, but will be most useful when collected in a catalog, for portable use in different games that share those same patterns of game logic. 
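To give a flavor of what a catalog entry might look like, here is a hypothetical $n$-in-a-row ludemic function sketched in Python. It checks the rows, columns, and two main diagonals of a square grid; the encoding and names are our own, and off-diagonal lines (as needed for Connect Four) are omitted for brevity:

```python
# Hypothetical catalog entry: an n-in-a-row ludemic function over a
# square grid encoded as a dict {(row, col): mark or None}.

def n_in_a_row(board, n, mark):
    """True if `mark` occupies n consecutive cells along a row, column,
    or main diagonal of the grid."""
    size = max(r for r, _ in board) + 1
    lines = [[(r, c) for c in range(size)] for r in range(size)]   # rows
    lines += [[(r, c) for r in range(size)] for c in range(size)]  # columns
    lines += [[(i, i) for i in range(size)],                       # diagonals
              [(i, size - 1 - i) for i in range(size)]]
    for line in lines:
        run = 0
        for cell in line:
            run = run + 1 if board.get(cell) == mark else 0
            if run >= n:
                return True
    return False

# Tic-tac-toe victory: X on the main diagonal of a 3x3 board.
board = {(r, c): None for r in range(3) for c in range(3)}
board[(0, 0)] = board[(1, 1)] = board[(2, 2)] = "X"
print(n_in_a_row(board, 3, "X"))  # True
```

A single function like this could serve as the victory condition of tic-tac-toe, part of the victory condition of Connect Four, or a building block for sliding-movement legality checks, which is exactly the kind of reuse a catalog is meant to enable.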
Using ludemic functions from this catalog may improve readability and analysis of game descriptions, but with the trade-off of requiring reader familiarity with each function used.\footnote{An extreme example of this is Ludii, in which every aspect of the game description must be drawn from a library of ``ludemes'' \cite{soemers_ludii_2019}, a kind of computational cousin to a catalog of ludemic functions (see \cref{sec:RelatedWork} for further discussion). This can result in compact and readable descriptions for a variety of games \cite{noauthor_notitle_nodate}, but requires outside knowledge of every ludeme used. } Functions in this catalog may be related to each other: for instance, a chess queen's movement function is a special case of a 2D translation, which is a special case of an $n$D translation. It even relates to the $n$-in-a-row pattern found in tic-tac-toe, since there must be $n$-in-a-row blank spaces for a queen to move $n+1$ spaces. Such relationships could be used to build distance metrics on the catalog of ludemic functions, which would then be useful in building metrics on the space of game systems. We leave this catalog and related metrics to future work, though we illustrate a couple examples of ludemic functions in \cref{ex:RockPaperScissors,ex:ArithmeticTic-tac-toe}. Additional kinds of structured notation will likely be useful for different classes of games. For instance, some games involve a stack or some other kind of finite ``memory'' of past states (e.g., \emph{Magic: The Gathering} \cite{garfield_magic:_1999}, or chess stalemate rules). Similarly, some games like chess employ ``foresight,'' determining the legality of a move now based on what moves might become legal as a result (e.g., the ``check'' mechanic). Defining a special kind of ``memory'' substate alongside the usual substate tracks, or a ``foresight'' legality function, might be useful for such games. We emphasize that not every kind of structure is admissible here. 
Any game with unbounded memory requirements would still not be expressible, for instance, because that would require an unbounded state space. Describing games of that sort would require more fundamental modifications of \cref{def:UnderlyingGameSystem}. \subsection{Boolean brackets} \label{sec:BooleanBrackets} Before we proceed to examples, let us see how we can use these new features to express subsets of state space, adding to our structured notation. Note we will use $\phi(e)$ to refer to the coordinate of a game element $e$, and $T(s)$ to refer to the value of track $T$ at state $s$. We can describe state slices by leveraging Boolean relations on coordinates and tags. We delineate these Boolean calculations by square brackets. For instance, $[\phi(T) > 5]$ is either $\mathbb{S}$ if the track $T$ has integer coordinate greater than 5, or $\varnothing$ otherwise. If an element has multiple coordinates, we denote the intended one by subscripts, for instance $[\phi_i(T) > 5]$, $[\phi(T) >_i 5]$, or $[\phi(T) > 5]_i$ for the $i$th coordinate. (Recall that tracks can have coordinates independent from their values.) To extend this to track values, we let $[B(v(T))] \equiv \{ s \mid B(T(s)) \}$ for Boolean relation $B$ and state $s \in \mathbb{S}$. For instance, $[v(T) \in (b)^\text{v}] = \{ s \mid T(s) \in (b)^\text{v} \} \subset \mathbb{S}$ is the set of all states where the track $T$ takes some value that has tag $b$. These will usually return nonempty proper subsets of $\mathbb{S}$. We might call this bracket notation ``Boolean slice functions'' or ``Boolean brackets.'' They could be considered a special case of ludemic functions, though it may be useful to conceptually distinguish them when considering atomic elements of games (see \cref{sec:RelatedWork}). We'll illustrate Boolean brackets along with other new notation through the following examples. 
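Computationally, a Boolean bracket behaves like a predicate on states. A small Python sketch (with invented names; states are dicts from track names to values, as in our earlier sketches) of $[B(v(T))]$ as a membership test:

```python
# Boolean brackets as predicates: [B(v(T))] is the slice {s | B(T(s))}.

def bracket(track, relation):
    """[B(v(T))]: membership test for the slice {s | B(T(s))}."""
    return lambda s: relation(s[track])

# [v(T) in (b)^v]: states where track T takes a value tagged b.
tag_b = {"b1", "b2"}                       # hypothetical tag group (b)^v
in_tag_b = bracket("T", lambda v: v in tag_b)

print(in_tag_b({"T": "b1"}))  # True
print(in_tag_b({"T": "x"}))   # False

# [phi(T) > 5]: all of S or the empty set, depending only on T's
# coordinate, not on the state.
phi = {"T": 7}                             # assumed integer coordinate
gt5 = (lambda s: True) if phi["T"] > 5 else (lambda s: False)
print(gt5({"T": "anything"}))  # True
```

Note the asymmetry the sketch makes visible: coordinate brackets like $[\phi(T) > 5]$ are constant (all of $\mathbb{S}$ or nothing), while value brackets genuinely depend on the state.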
\subsection{Examples with Structured Notation} \label{sec:StructuredNotationAndExamples} We now present three examples---Rock-Paper-Scissors, Tic-Tac-Toe, and the hand game Chopsticks---to illustrate this structured notation, describing each to explain new shorthand and build familiarity with interpreting the formalism. \begin{example}[Rock-Paper-Scissors] \label{ex:RockPaperScissors} Two players play rock-paper-scissors; best two out of three wins. \begin{align*} &\mathcal{P} = (\text{P1},\text{P2}), \quad \mathcal{O} = \mathcal{P} \\ &\mathcal{T}: \quad \mathcal{P} \leftarrow [0,2] \\ &\mathcal{S}_0 = (0)_\text{P1} (0)_\text{P2} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{D} = \{ \text{rock}, \text{paper}, \text{scissors} \} \sim \mathbb{Z}_3 \\ &\mathcal{A}: \quad \mathcal{P} \ni p: \mathbb{S} \mapsto (+1)_p;\quad \mathds{1}: \mathbb{S}\mapsto\mathbb{S} \\ &C: (d_1, d_2)\mapsto \text{RPS}((d_1, d_2), \mathds{1}, \text{P1}, \text{P2}) \\ &\hat{L}(p, d) = \mathbb{S} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &E = E_\text{P1}\cup E_\text{P2}, \quad E_p = (2)_p, \quad p \in \mathcal{T} \\ &\Omega: E_p \mapsto p \end{align*} Let's unpack the notation a bit: There are no tags in this example. We reuse the list $\mathcal{P}$ as track names, so $\mathcal{P} \leftarrow [0,2]$ means we have two tracks, P1 and P2, which each take the values $[0,2] = \{0,1,2\}$. We implicitly take these values to have integer coordinates---on the ring $\mathbb{Z}$, with the usual addition and multiplication---inferred from the integer interval notation. (We will use the interval notation $[a, b]$ to refer to integers by default, rather than reals.) The decisions, by contrast, have coordinates on $\mathbb{Z}_3$ (the ring of integers modulo 3) with $\phi(\text{rock}) = 0$, $\phi(\text{paper}) = 1$, and $\phi(\text{scissors}) = 2$, and respecting modular arithmetic (e.g., $1-2 = -1 \text{ mod } 3 = 2$). 
Note we use $ \sim $ to associate coordinates to sets, drawing correspondence from the written or canonical ordering of each. Two of the actions are labeled by the players, P1 and P2, and increment the corresponding track by 1. Note we've leveraged a coordinate operator as a nice shorthand: $a: \mathbb{S}\mapsto (+1)_T$ is shorthand (see also \cref{sec:SomeNotation}) for $a: s\mapsto(\phi(T(s)) + 1)_T, s\in\mathbb{S}$. The third action $\mathds{1}$ is the identity function. The consequence function uses the ludemic function $\text{RPS}((d_1,d_2),a_0,a_1,a_2)$. This takes a tuple of two decisions with coordinates on $\mathbb{Z}_3$,\footnote{Since the null decision hasn't been given a coordinate on $\mathbb{Z}_3$, the RPS function is only well-defined on non-null decisions in this example.} and returns the action $a_i$ if $\phi(d_1) - \phi(d_2) = i$. Note here we've effectively written the consequence function as $C: d_0^n \mapsto a$. This is shorthand for $C(d_0^n,s) = \{ (1,a) \}$ for all $s \in \mathbb{S}$. The partial legality function $\hat{L}$ adds no restrictions: any decision is legal for any player at any state that isn't an ending state ($s \notin E$). The ending states are those states where either track takes value 2---which track determines the outcome (winner), though we have been lazy and defined the outcomes as $\mathcal{O} = \{\text{P1}, \text{P2}\}$ instead of the more descriptive $\{\text{P1 wins}, \text{P2 wins}\}$. In either case, it's up to the players to determine how they value each outcome, according to their player models. It is interesting to note that the outcome function $\Omega$ is not uniquely defined on all terminal game states: the state $z = (2)_\text{P1}(2)_\text{P2}$ is a terminal state (since $z \in E$), but since $z \in E_\text{P1}$ \emph{and} $z \in E_\text{P2}$, it is ambiguous which outcome results. The state $z$ is not legally accessible, however, so the game system is still complete and playable. 
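The RPS ludemic function just described can be sketched in a few lines of Python. The representation of decisions and actions below is our own illustrative choice; only the $\mathbb{Z}_3$ coordinate arithmetic is taken from the example.

```python
# Illustrative sketch of the RPS ludemic function: coordinates on Z_3
# are rock=0, paper=1, scissors=2, and RPS((d1,d2), a0, a1, a2)
# returns a_i where phi(d1) - phi(d2) = i (mod 3).

PHI = {"rock": 0, "paper": 1, "scissors": 2}

def rps(d1, d2, a0, a1, a2):
    """Return a_i with phi(d1) - phi(d2) = i (mod 3)."""
    i = (PHI[d1] - PHI[d2]) % 3
    return (a0, a1, a2)[i]

# a0 = identity action (tie), a1 = increment P1's track, a2 = P2's.
assert rps("paper", "rock", "tie", "p1_scores", "p2_scores") == "p1_scores"
assert rps("rock", "scissors", "tie", "p1_scores", "p2_scores") == "p1_scores"
assert rps("scissors", "scissors", "tie", "p1_scores", "p2_scores") == "tie"
```

In the consequence function of the example, $a_0$ is the identity action $\mathds{1}$ and $a_1, a_2$ are the two players' increment actions.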
If this ambiguity were not present, and if $C$ were also defined on the partially null tuples $(d,0)$ and $(0,d)$, this system would be overcomplete. \end{example} \begin{example}[Arithmetic Tic-tac-toe] \label{ex:ArithmeticTic-tac-toe} Two players take turns claiming numbers from the set $\{ -4, -3, -2, -1, 0, 1, 2, 3, 4 \}$. Each number can only be chosen once, and a player wins by gathering any three numbers that can add to 0. The game ends when either a player wins or all numbers have been claimed, at which point the game ends in a draw if nobody has a triple that sums to 0. \begin{align*} &\mathcal{P} = (\text{P1},\text{P2}), \quad \mathcal{O} = \mathcal{P} \cup \{ \text{draw} \} \\ &\mathcal{T}: \begin{array}[t]{lllll} \text{(numbers)} & [-4,4] &\leftarrow& \mathcal{P} \cup \{-\} & \\ & \text{turn} &\leftarrow& \mathcal{P} &\sim \mathbb{Z}_2 \end{array} \\ &\mathcal{S}_0 = (-)_\text{(numbers)} (\text{P1})_\text{turn} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{D} \sim \mathcal{A}\quad (C: \text{trivial}) \\ &\mathcal{A} = \mathcal{P}\times\text{(numbers)} \ni (p, n): \mathbb{S} \mapsto (p)_n (+1)_\text{turn} \\ &\hat{L}(p, (p',n)) = (-)_n (p)_\text{turn} [p = p'] \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &E = E_\text{P1}\cup E_\text{P2}, \\ &\quad E_p = \text{TripleSumsToZero}(\text{(numbers)}, p) \\ &\Omega: E_p \mapsto p,\quad \mathcal{S}_\text{ter} \setminus E \mapsto \text{draw} \end{align*} This is our first example with tags, and coordinates on track values, so let's again unpack this a bit. There are ten tracks here, which in basic notation we might write $T_{-4}, T_{-3}, \ldots, T_4, T_\text{turn}$. The first nine tracks have integer names, and also (implicitly by means of the integer interval notation) corresponding integer coordinates. They are grouped into the tag set $(\text{numbers})^\text{t}$, and each track takes the values $\{ \text{P1}, \text{P2}, - \}$. 
The tenth track has name ``turn,'' and its values have coordinates on $\mathbb{Z}_2$: $\phi(\text{P1}) = 0$, $\phi(\text{P2}) = 1$. We use the tags to compactly express the initial condition. In general, we can reference a set of tracks to define a slice: e.g., if tracks $T_1, T_2$ both have tag $t$ and possible value $v$, then $(v)_{(t)} = (v)_1(v)_2 \subset \mathbb{S}$. There are eighteen actions, named as pairs with one member $p$ from $\mathcal{P}$ and one member $n$ from $(\text{numbers})$. As we have here, we will often drop the type superscript on tag sets where there is no risk of confusion, just writing $(\text{numbers})$ instead of $(\text{numbers})^\text{t}$. Each action marks the number $n$ as claimed by player $p$, and toggles the turn. Each player $p$ can legally claim number $n$ for player $p'$ (i.e., make decision $(p',n)$) if the number $n$ isn't taken, it is $p$'s turn, and they are claiming it for themselves ($[p=p']$). The ending states are defined via the ludemic function $\text{TripleSumsToZero}(K, v)$, which takes a set $K$ of tracks $t$ with integer coordinates $\phi(t)\in\mathbb{Z}$ and possible value $v \in t$, and returns the union of all slices $(v)_{t_1}(v)_{t_2}(v)_{t_3}$ such that $\phi(t_1) + \phi(t_2) + \phi(t_3) = 0$. E.g., $\text{TripleSumsToZero}(\text{(numbers)}^\text{t}, \text{P1}) = (\text{P1})_{-4}(\text{P1})_{0}(\text{P1})_{4} + (\text{P1})_{-4}(\text{P1})_{1}(\text{P1})_{3} + \cdots$. 
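A minimal sketch of how the $\text{TripleSumsToZero}$ ludemic function could be realized, assuming states are represented as maps from track coordinates to values (an illustrative encoding, not the paper's formalism):

```python
# Sketch of the TripleSumsToZero ludemic function. Tracks are identified
# with their integer coordinates -4..4; a state is a dict mapping each
# coordinate to "P1", "P2", or "-". Names are illustrative choices.

from itertools import combinations

def triple_sums_to_zero(coords, state, v):
    """True iff some 3 distinct tracks, whose coordinates sum to 0,
    all take the value v in the given state."""
    return any(a + b + c == 0 and all(state[t] == v for t in (a, b, c))
               for a, b, c in combinations(coords, 3))

coords = range(-4, 5)
state = {t: "-" for t in coords}
state[-4], state[1], state[3] = "P1", "P1", "P1"   # -4 + 1 + 3 = 0
assert triple_sums_to_zero(coords, state, "P1")
assert not triple_sums_to_zero(coords, state, "P2")
```

Here the function returns a Boolean for a single state; the slice-valued version in the example corresponds to filtering $\mathbb{S}$ with this predicate.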
Note that this is essentially the same game system we would use to describe tic-tac-toe: \captionsetup[figure]{skip=8pt} \begin{figure}[H] \centering \begin{tabular}{|r|r|r|}\hline $-3$ & $2$ & $1$ \\\hline $4$ & $0$ & $-4$ \\\hline $-1$ & $-2$ & $3$ \\\hline \end{tabular} \caption{``Magic square'' correspondence between the arithmetic and grid-based versions of tic-tac-toe.} \vspace{-8pt} \end{figure} \noindent To replicate the more familiar grid-based thinking, we could replace the integer coordinates $[-4,4]$ of the nine (numbers)$^\text{t}$ tracks with 2D coordinates $[1,3]^2\subset\mathbb{Z}^2$, and replace the ending states with an analogous function $E_p = \text{ThreeInARow}(\text{(numbers)}, p)$, for instance using the definition of $n$-in-a-row on $\mathbb{Z}^d$ from \cite{beck_combinatorial_2008}. These two game systems would be game tree equivalent up to relabeling, and with identical players and outcomes. The version with tracks on $\mathbb{Z}^2$ might be easier to compare to a $4\times 4$ version of tic-tac-toe (e.g., extended with tracks $[1,4]^2$ and everything else the same), but the arithmetic version might be easier to compare to a version with extended tracks $[-5,5]$, which has no clear grid-based analog. Which structured expression is more useful will depend on the question being asked. This is also an interesting case study in technical game atoms (see \cref{sec:RelatedWork}): here we see how different game descriptions might use different ludemic functions, reflecting different atomic game concepts, and yet describe the same game system.
\end{example} \begin{example}[Chopsticks] \label{ex:Chopsticks} \hspace{8mm} From Wikipedia: ``Chopsticks is a hand game for two players in which players extend a number of fingers from each hand and transfer those scores by taking turns to tap one hand against another'' \cite{wikipedia_contributors_chopsticks_2019}. A full natural language rules description can be found there, and is implemented and explained here: \begin{align*} &\mathcal{P} = (\text{P1},\text{P2}), \quad \mathcal{O} = \mathcal{P} \\ &\mathcal{T}: \begin{array}[t]{lllll} \text{(hands, P1)} & \{ \text{1L}, \text{1R} \} & \leftarrow [0, 4] & \sim \mathbb{Z}_5, [0, 4] \\ \text{(hands, P2)} & \{ \text{2L}, \text{2R} \} & \leftarrow [0, 4] & \sim \mathbb{Z}_5, [0, 4] \\ & \text{turn} &\leftarrow \mathcal{P} &\sim \mathbb{Z}_2 \end{array} \\ &\mathcal{S}_0 = (1)_\text{(hands)} (\text{P1})_\text{turn} \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\mathcal{D} \sim \mathcal{A}\quad (C: \text{trivial}) \\ &\mathcal{A} = (\text{add})^\text{a}\cup(\text{transfer})^\text{a} \\[\topgap]&\makebox{\color{lightgray}\tikz\draw[dashed] (0,0) -- +(1.5in,0);}\\[\bottomgap] &\hat{L}_1(p, \mathcal{D}) = (p)_\text{turn} \\[\topgap]&\makebox{\color{lightgray}\tikz\draw[dashed] (0,0) -- +(1.5in,0);}\\[\bottomgap] &(\text{add})^\text{a} = \text{(hands)}\times\text{(hands)} \ni \\ &\quad \quad \quad (h, g): (v)_h (v')_g \mapsto (v' +_1 v)_g (+1)_\text{turn} \\ &\hat{L}_2(p, (h,g)) = [h \in (\text{hands}, p)]\, \overline{(0)}_h \overline{(0)}_g \\[\topgap]&\makebox{\color{lightgray}\tikz\draw[dashed] (0,0) -- +(1.5in,0);}\\[\bottomgap] &(\text{transfer})^\text{a} = \text{(hands)}\times\text{(hands)}\times[1, 4] \ni \\ &\hspace{5mm} (h, g, n): (v)_h (v')_g \mapsto (v -_2 n)_h (v' +_2 n)_g (+1)_\text{turn} \\
&\hat{L}_2(p, (h,g,n)) = [h,g \in (\text{hands}, p)]\ [v(h) > n]_2 \\ &\quad \times [v(g) + n < 5]_2\ [v(h) - v(g) \neq n]_2 \\[\topgap]&\makebox{\color{lightgray}\rule{2in}{0.5pt}}\\[\bottomgap] &\Omega: (0)_{(\text{P1})}\mapsto \text{P2}, \quad (0)_{(\text{P2})}\mapsto \text{P1} \end{align*} This is a more involved example, with a more mathematical game, to illustrate more uses of coordinates. It also illustrates how our notation can compactly express some complex games even without any ludemic functions (though with ample use of Boolean brackets). Here we have five tracks, named 1L, 1R, 2L, 2R (reflecting players' hands, with five fingers each), and turn. We have three tags: hands, P1, and P2. In general, we will notate the intersection of two tag-sets $(t)^\text{y}, (t')^\text{y}$ of the same type y by the comma-separated $(t, t')^\text{y}$, so we see for instance that the track 1L has two tags: hands and P1. The turn track has values $\{\text{P1}, \text{P2}\}$, and coordinates on $\mathbb{Z}_2$, a cyclic turn track like in \cref{ex:ArithmeticTic-tac-toe}. This is a turn-based game, because the partial legality function $\hat{L}_1$ prevents any player from acting out of turn, and every action advances the turn track. The (hands)$^\text{t}$ tracks have integer values $[0,4]$, but because we see the $\sim$ coordinate assignment we will \emph{not} implicitly assume these have just integer coordinates: instead we see they have two sets of coordinates, first $\mathbb{Z}_5$ (coordinate 1) and second $[0,4]\subset\mathbb{Z}$ (coordinate 2). The order matters here, because we will need to refer to them later on. We have grouped the actions with tags for readability, but notice how each uses the coordinates: The (add) actions allow a player to tap a hand $h$ against another hand $g$, using modular arithmetic (coord. 1) to add the number on $h$ to the number on $g$.
Players can legally tap one of their own non-empty hands against any other non-empty hand, as long as it's their turn. The (transfer) actions allow a player to tap a hand $h$ against another hand $g$, using integer arithmetic (coord. 2) to transfer 1--4 points from $h$ to $g$. Players can legally transfer between their own hands on their turn, as long as neither hand ends up empty (with zero or five fingers: $[v(h) > n]_2, [v(g) + n < 5]_2$) and the hands don't just switch values ($[v(h) - v(g) \neq n]_2$). The game starts with 1 finger on each hand, and ends when one player has both hands empty, with the other player winning. Notice we don't need to specify additional ending states (we can take $E = \varnothing$), because the partial legality functions are already sufficient: no legal decisions can be made when it reaches the turn of a player with two empty hands. \end{example} \section{Related work} \label{sec:RelatedWork} As mentioned in the Introduction, there are many other lines of research that may be relevant to progress in mathematical ludology. Here we discuss the two which, to our knowledge, share the most particular kinship. The first is an effort within the game design community, launched especially by Benjamin Cousins \cite{cousins_elementary_2004} and Raph Koster \cite{koster_grammar_2005}, to formally break down games into their constituent atoms and develop notation to diagram them, as a means to improve the critical study and design of games. This has produced several notational systems and notions of game atoms, each best suited to unpacking different kinds of features from different kinds of board games and video games (e.g., \cite{cousins_elementary_2004,koster_theory_2005,stephane_game_2006,adams_game_2012,koster_game_2015}). Most of these notations have been developed by game designers for game designers, and as such prioritize higher-level concepts in order to better survey, iterate, and improve game designs.
Even the most detailed of these notational systems (``machination diagrams'' \cite{adams_game_2012}) would struggle to provide the level of rule detail needed to build a game tree, like we require from \cref{def:UnderlyingGameSystem} for careful analysis. It may be interesting to build structured notations (beyond \cref{def:StructuredNotationSamples}) which leverage these game diagramming methods, in order to probe higher-level analysis questions and visualizations. It will also be interesting to formally understand and build technical analogues (e.g., ludemic functions) for the various game atoms and constituents that have been proposed by designers before and after Cousins and Koster: choice molecules \cite{tekinbas_rules_2003}, primary elements \cite{cousins_elementary_2004}, ludemes \cite{koster_theory_2005,koster_atomic_2012,parlett_whats_2016,browne_evolutionary_2011}, mechanisms (e.g., as compiled in \cite{engelstein_building_2019}), and others. Another, more recent research direction is digital archaeoludology, spearheaded by the Digital Ludeme Project (DLP) \cite{browne_modern_2018}. The DLP aims to bring computational attention and AI methods to historical studies of traditional strategy games. As a key part of this, games are modeled and their designs studied in Ludii, a software program under development which provides a means to digitally describe and play games \cite{piette_ludii_2019}. The game properties and relationships of particular interest to the DLP, e.g., strategic depth or phylogenetic distance \cite{browne_modern_2018}, are interesting case studies in our more general interest in game metrics, and will need to overcome similar challenges. For instance, the same game can be described many ways in Ludii code, and we think this flexibility should not be allowed to unduly affect phylogenetic distance measurements when they are developed---similar to our considerations around agency equivalence, and an issue for any game comparison method.
Ludii is intended as a general-purpose tool with AI capabilities \cite{piette_ludii_2019,noauthor_notitle_nodate} and we may find it useful, e.g., for game description testing, gameplay data collection, and automating certain game measurements (especially via simulations). Ludii has chosen a particular game atom for its descriptions, the ludeme,% \footnote{The ludeme of Ludii \cite{soemers_ludii_2019}, and its predecessor Ludi \cite{browne_evolutionary_2011}, is rather different from the ludeme of Koster \cite{koster_theory_2005}, and includes both smaller and larger structures: every part of a Ludii description is a Ludii-ludeme, from a solitary Boolean AND function to the entire game itself \cite{soemers_ludii_2019}. See \cite{parlett_whats_2016} for discussion of the origins and pre-Ludi(i) uses of ``ludeme,'' or \cite{koster_atomic_2012,koster_theory_2005} for an early understanding of Koster's ludeme. } somewhat related to ludemic functions and bringing similar benefits and challenges (see \cref{sec:StructuredNotation}). It will be interesting to explore its meaning and use alongside other candidate game atoms. Ludii will also be interesting as a case study for Tiers 3 and 4 of our game description hierarchy. In our terms, Ludii provides game representations and actualizations (see \cref{sec:GameDescriptions}), since hidden information and interface specification are woven into Ludii game descriptions, and the software provides means of playing the described games \cite{piette_ludii_2019}. We will be well-positioned to study the structure of Ludii game descriptions after studying the more primitive underlying and perceived game systems (Tiers 1 and 2), like we have begun in this paper. For instance, Ludii uses a tree structure to arrange the atoms of its game descriptions \cite{browne_modern_2018,piette_ludii_2019}, a kind of structured notation which our approach could enable us to generalize beyond Ludii code, and to probe its uses and limitations. 
\section{Discussion} \label{sec:Discussion} We have proposed a new line of game studies research, mathematical ludology, which aims to formally explore the space of games and their properties in order to better understand game design principles, gameplay phenomena, and player behavior in complex games. In this paper we have developed some basic mathematical formalism that allows the compact description of complex, finite discrete games, with notation that helps to expose their structure for later comparison and analysis. We have focused mostly on underlying game systems, the first tier of our game description hierarchy: much work still remains to flesh out the specification of hidden information and user interfaces. Existing progress in game theory and general gameplaying may be helpful for some of those next steps, since these efforts have typically begun at the second tier and higher. We have also begun developing equivalence relations on the space of game systems, a first step in learning to formally compare games. It will be interesting next to develop a variety of distance metrics on the space of games, each highlighting different aspects of game descriptions. This will require a careful understanding of game structure, as well as an understanding of which differences are not meaningful, in the same way that many game system differences were not meaningful in establishing agency equivalence. It will be useful to develop distance metrics on the space of ludemic functions, ludemes, or other technical game atoms, as part of this effort. Taken together, these metrics will enable formal taxonomies of games, and provide a reference point for comparing gameplay trends or strategies in related games. A natural next step will be to consider the distinct properties of common structures in game rules, for instance game phases, resources, or decks as common (and perhaps uniquely identifiable) kinds of substates in underlying game systems. 
This could be a stepping stone towards comparing games, building user interfaces, or generating natural language descriptions of rules. For instance, we have a method under development to derive user interface graphs from underlying game systems, which may benefit from understanding such structures \cite{riggins_in_progress}. Great progress has already been made in the study, design, and application of games, thanks to the efforts of academics and practitioners in both mathematical and non-mathematical disciplines. As game studies continues to be a multidisciplinary effort, we hope that progress in mathematical ludology might help to bridge some of the gap between mathematical and game design experts, enriching the work of both. \section*{Acknowledgments} We would like to thank Stephen Crane, Marquita Ellis, Ryan Janish, Kiran Lakkaraju, Kweku Opoku-Agyemang, Stephen Phillips, and Ben Wormleighton for useful discussions. We would also like to thank Vlaada Chv\'atil for designing the board game \emph{Mage Knight} \cite{chvatil_mage_2011}, which inspired us towards this line of research. \renewcommand{\bibname}{References} \printbibliography
\section{Introduction} Transverse Anderson localization (TAL) of light was first proposed by Abdullaev, et al.~\cite{transverse-Abdullaev} and De Raedt, et al.~\cite{transverse-DeRaedt} in a dielectric medium with a transversely random and longitudinally uniform refractive index profile. They showed that an optical beam can propagate freely in the longitudinal direction while being trapped (Anderson-localized~\cite{Anderson1,Anderson1980,Abrahams-50-book,Lagendijk-Physics-Today-2009,Abrahams-Scaling-Theory,Stone,sheng2006introduction}) in the disordered transverse direction(s). TAL of light has since been observed in various optical systems with one or two transversely disordered dimensions~\cite{Schwartz2007,Lahini-1D-AL-2008,Martin-1D-AL-2011,Mafi-Salman-OL-2012,Mafi-Salman-OPEX-2012,Mafi-Salman-OMEX-2012,SegevNaturePhotonicsReview,Mafi-AOP-2015,fratini2015anderson,yao2017beam,yilmaz2019transverse,PhysRevA.99.063807}. In particular, Karbasi, et al. reported the first observation of TAL in disordered optical fibers~\cite{Mafi-Salman-OL-2012,Mafi-Salman-OPEX-2012,Mafi-Salman-OMEX-2012}. The disordered optical fibers have since been used for high-quality image transport~\cite{Mafi-Salman-Nature-2014,Tuan:18,tong2018characterization,zhao2018image,zhao2018deep}, beam multiplexing~\cite{Mafi-Salman-Multiple-Beam-2013}, wave-front shaping and sharp focusing~\cite{Mafi-Marco-singlemode-2017,Mafi-Marco-Nature-light-focusing-2014,Mafi-Behnam-Optica-2018}, nonlocal nonlinearity~\cite{Mafi-Marco-APL-self-focusing-2014,Mafi-Marco-PRL-Migrating-NL-2014}, single-photon data packing~\cite{Mafi-Marco-information-2016}, optical diagnostics~\cite{liang2018surface}, and random lasers~\cite{Mafi-Behnam-Random-Laser-2017,joshi2019effect}. TAL optical fiber (TALOF) is essentially a {\em highly} multimode optical fiber (MMF) with a {\em transversely} random refractive index profile. 
What sets a TALOF apart from a conventional MMF is that its guided modes are spatially localized due to the transverse disorder, while the guided modes in a conventional MMF typically cover all or a large portion of the guiding region~\cite{okamoto2005fundamentals,gloge1973multimode,Feit:78}. The modal characteristics of MMFs are generally responsible for their performance for the desired functionality~\cite{jolivet2009beam,vcivzmar2012exploiting,papadopoulos2013high,amitonova2016high,zhu2010coherent}; e.g., the mean localization radius of the modes in an imaging TALOF determines the average point spread function (PSF) across the tip of the fiber, where a stronger localization leads to a narrower PSF and a higher resolution image transport~\cite{Mafi-Salman-Nature-2014}. Similarly, the standard deviation in the localization radius of the modes determines the uniformity of the image transport across the fiber. Because the refractive index profile of a TALOF is random and the guided modes are numerous, the modal characteristics of a TALOF must be studied statistically~\cite{Mafi-AOP-2015,Mafi-Salman-Modal-JOSAB-2013}. This stochastic nature of TALOFs and the diversity of the physical attributes of the localized modes is the key differentiating factor between the linear/nonlinear dynamics observed in TALOFs versus conventional MMFs. The modal area statistics of disordered quasi-one-dimensional (quasi-1D) and quasi-two-dimensional optical waveguides were studied recently by Abaie, et al.~\cite{Mafi-Behnam-Scaling-PRB-2016,Mafi-Abaie-OL-2018} using the mode-area probability-density-function (PDF). The mode-area PDF characterizes the relative distribution of the mode-areas of the guided modes in a disordered waveguide. In particular, Abaie, et al. showed that the mode-area PDF converges to a terminal configuration as the transverse dimensions of the waveguide are increased. 
Therefore, it may not be necessary to study a real-sized disordered structure to obtain its full statistical localization properties and the PDF can be obtained for a considerably smaller structure. This observation is not only important from a fundamental standpoint but also has practical implications because it can reduce the often demanding computational cost that is required to study the statistical properties of Anderson localization in disordered waveguides. We emphasize that the mode-area PDF encompasses all the relevant statistical information on spatial localization of the guided modes and is a powerful tool for studying the TAL. In this paper, we employ a similar statistical analysis based on the {\em probability-density-function} to study the dispersive properties of TAL in disordered waveguides. In the modal language, the dispersive properties of a waveguide are determined by the frequency ($\omega$) dependence of the propagation constant, $\beta(\omega)$, of the guided modes~\cite{agrawal2013nonlinear,agrawal2012fiber}. Determining the optical dispersive properties of TALOFs is critical to the understanding of their linear and nonlinear characteristics, in both continuous wave (CW) and pulsed laser operation; a few examples are as follows. For each mode labeled with an index $i$, the full form of $\beta_i(\omega)$ over a broad frequency range is needed to determine the phase-matching wavelengths for the intermodal (nonlinear) four-wave mixing (FWM) process~\cite{stolen1974phase,lin1981large,hill1981efficient,Pourbeyram:15,Nazemosadat:13}. In some cases, the Taylor expansion of $\beta(\omega)$ around a central frequency of $\omega_0$ and the corresponding local frequency derivatives, $\beta^{(n)}_i=\partial^n_\omega\beta_i|_{\omega_0}$, are sufficient to characterize the dispersive properties of an optical fiber~\cite{agrawal2013nonlinear,agrawal2012fiber}.
For example, consider $\beta^{(1)}_i(\omega)$, which determines the group velocity associated with the mode labeled with the index $i$: in the linear regime, the difference between group velocities of different modes in a multimode fiber is responsible for the intermodal dispersion, which is generally the main limiting factor for the achievable transmission bandwidth (data-rate) in a multimode optical fiber communications system~\cite{agrawal2012fiber,gloge1973impulse,fan2005principal}. When an optical pulse is sent through an MMF, the intermodal dispersion (different values of $\beta^{(1)}_i$ for different modes) causes the pulse to break into multiple sub-pulses, each propagating with a different group velocity. Therefore, the distribution of group velocities determines the achievable data-rate. For example, a nanosecond-long optical pulse is hardly affected by the propagation through a 1\,km-long high-quality graded index fiber~\cite{Olshansky:76,koike1995high}; however, the same pulse is highly distorted by the modal dispersion in a comparable step-index optical fiber. $\beta^{(1)}_i(\omega)$ also determines the pulsed nonlinear dynamics, including that of soliton propagation in MMF~\cite{WiseA5,WiseA13,Raghavan}. For CW laser applications, the cavity response, including the free spectral range (FSR), is determined by the values of $\beta_i(\omega)$ for the relevant modes, which are excited in the lasing process; their distribution can dictate the spectrum of the laser via the optical Vernier effect~\cite{Vasdekis:07,EsmaeilSelectivity}. The pulsed dynamics of such lasers, including the Q-switching and mode-locking, are similarly controlled by the values of $\beta^{(1)}_i(\omega)$ for the relevant modes~\cite{Mafi-Behnam-Random-Laser-2017}. \subsection{Studying a quasi-1D disordered slab waveguide} In this manuscript, we focus on the statistics of the group velocity (GV) of the guided modes and determine the GV-PDF of disordered waveguides.
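The intermodal-dispersion picture above admits a quick back-of-the-envelope check: the arrival-time spread of a pulse over a fiber of length $L$ is $\Delta T = L\,\Delta(1/v_g)$. The group indices below are illustrative round numbers for a step-index MMF, not values from this work.

```python
# Rough numeric illustration of intermodal pulse spreading over 1 km.
# n_g_fast / n_g_slow are assumed illustrative group indices of the
# fastest and slowest modes; dT = L * (n_g_slow - n_g_fast) / c.

c = 299792458.0                      # speed of light in vacuum, m/s
L = 1000.0                           # fiber length: 1 km
n_g_fast, n_g_slow = 1.480, 1.495    # illustrative extreme group indices

dT = L * (n_g_slow - n_g_fast) / c
assert 4e-8 < dT < 6e-8   # ~50 ns spread: ruinous for a ns-scale pulse
```

A spread of tens of nanoseconds over 1 km is consistent with the statement that a nanosecond pulse is strongly distorted in a step-index MMF while surviving a high-quality graded-index fiber, where the group-index spread is far smaller.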
Understanding the GV distribution underlies a large number of dispersive phenomena in guided wave systems~\cite{ho2011statistics}. Solving for all the guided modes for a given TALOF and obtaining proper statistical averages over many fiber samples is a formidable task even for large computer clusters. For example, the V-number of the disordered polymer TALOF in Ref.~\cite{Mafi-Salman-OL-2012} with an air cladding is approximately 2,200 at 405~nm wavelength resulting in more than 2.3 million guided modes. Recall that the V-number is given by \begin{align} V=\dfrac{\pi t}{\lambda}\sqrt{n^2_{\rm co}-n^2_{\rm cl}}, \label{eq:V-number} \end{align} where $\lambda$ is the optical wavelength, $t$ is the core diameter of the fiber (or the core width for the case of a quasi-1D slab waveguide), and $n_{\rm co}$ ($n_{\rm cl}$) is the effective refractive index of the core (cladding). The total number of the bound guided modes in a step-index optical fiber is $\approx V^2/2$. As such, in order to lay the groundwork for understanding the statistical behavior of GV distribution in TALOFs, we have decided to present a comprehensive characterization of a quasi-1D Anderson localized optical waveguide in this manuscript. This exercise is quite illuminating as it sheds light on the general statistical behavior of GV distribution and shows the extent of information that can be extracted from such distributions. The detailed analysis of the TALOF structure will be presented in a future publication. \subsection{Wave equation for the guided modes} Here, we have chosen to calculate the transverse electric (TE) modes of the disordered waveguide using the finite element method (FEM) presented in Refs.~\cite{Mafi-Behnam-Boundary-OC-2016,El-Dardiry-microwave-2012,Kartashov-NL-2012,Lenahan}. Similar observations can be drawn for transverse magnetic (TM) guided modes, but we limit our analysis to TE in this paper for simplicity. 
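Eq.~(\ref{eq:V-number}) and the $\approx V^2/2$ mode-count estimate are easy to check numerically. The sketch below reuses only the quoted $V\approx 2{,}200$ figure; the sample arguments in the test are illustrative, not the parameters of the fiber in Ref.~\cite{Mafi-Salman-OL-2012}.

```python
# Numeric check of the V-number formula and the ~V^2/2 mode count.
import math

def v_number(t, lam, n_co, n_cl):
    """V = (pi * t / lambda) * sqrt(n_co^2 - n_cl^2), as quoted above."""
    return math.pi * t / lam * math.sqrt(n_co**2 - n_cl**2)

def mode_count(V):
    """Approximate number of bound guided modes of a step-index fiber."""
    return V**2 / 2

# With V ~ 2200 (the quoted value for the air-clad polymer TALOF at
# 405 nm), the bound-mode count indeed exceeds 2.3 million:
assert mode_count(2200) > 2.3e6
```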
The appropriate partial differential equation that will be solved in this manuscript is the Helmholtz equation for electromagnetic wave propagation in a $z$-invariant dielectric waveguide \begin{align} \label{eq:helmholtz} \nabla^2_{\rm T}A({\bf x}_{\rm T})+n^2({\bf x}_{\rm T})k_0^2A({\bf x}_{\rm T})=\beta^2A({\bf x}_{\rm T}), \end{align} where $A({\bf x}_{\rm T})$ is the transverse profile of the (TE) electric field $E({\bf x}_{\rm T},z,t)=A({\bf x}_{\rm T})\exp(i \beta z-i\omega t)$, which is assumed to propagate freely in the $z$ direction, $\beta$ is the propagation constant, $n({\bf x}_{\rm T})$ is the (random) refractive index of the waveguide, ${\bf x}_{\rm T}$ is generally the one (two) transverse dimension(s) in 1D (2D), $\omega=ck_0$, and $k_0=2\pi/\lambda$ where $c$ is the speed of light in vacuum. Equation~\ref{eq:helmholtz} is an eigenvalue problem in $\beta^2$ and guided modes are those solutions (eigenfunctions) with $\beta^2>n^2_{\rm cl}k_0^2$. Here, because we consider only quasi-1D waveguides, we only have one transverse dimension, so ${\bf x}_{\rm T}=x$. We use the Dirichlet boundary condition, while noting that the choice of the boundary condition is largely inconsequential because most guided modes strongly decay before reaching the boundary. In this manuscript, we do not consider the chromatic dispersion of the constituent optical materials; therefore, all refractive indexes are assumed to be independent of the optical frequency. The reason is two-fold: first, we would like to isolate primarily the waveguiding contribution to the dispersion, which is driven by the TAL; and second, the size of the chromatic dispersion depends on the choice of the constituent materials and only makes sense in the context of a specific design, rather than the broad observations and arguments presented here.
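While this work employs an FEM solver, the structure of the quasi-1D eigenvalue problem in Eq.~(\ref{eq:helmholtz}) can be illustrated with a simpler second-order finite-difference stand-in: discretize $\partial_x^2$ with Dirichlet boundaries, add the $n^2(x)k_0^2$ term on the diagonal, and keep eigenpairs with $\beta^2 > n_{\rm cl}^2 k_0^2$. All parameters below are illustrative, not those of the paper.

```python
# Finite-difference stand-in for the quasi-1D TE eigenproblem
# A'' + n^2 k0^2 A = beta^2 A with Dirichlet boundary conditions.
import numpy as np

def te_modes_1d(n_profile, dx, lam, n_cl):
    k0 = 2 * np.pi / lam
    N = len(n_profile)
    # Tridiagonal d^2/dx^2 (Dirichlet BC) plus the n^2 k0^2 term
    H = (np.diag(np.full(N - 1, 1.0), -1)
         + np.diag(np.full(N, -2.0))
         + np.diag(np.full(N - 1, 1.0), 1)) / dx**2
    H += np.diag(n_profile**2 * k0**2)
    beta2, A = np.linalg.eigh(H)
    guided = beta2 > (n_cl * k0)**2          # keep only guided modes
    return beta2[guided], A[:, guided]

# Sanity check: a uniform high-index slab in low-index padding guides
# at least one mode (illustrative parameters).
lam, dx = 1.0, 0.05
n = np.full(400, 1.40)
n[150:250] = 1.50                            # a 5*lam-wide core
b2, modes = te_modes_1d(n, dx, lam, 1.40)
assert len(b2) >= 1
assert np.all(b2 <= (1.50 * 2 * np.pi / lam)**2)   # below n_co^2 k0^2
```

For the disordered profiles considered here, the same routine would return the localized modes whose $\beta^{(1)}$ statistics are of interest; a proper study would replace the dense `eigh` call with a sparse or FEM solver.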
\section{Quasi-1D disordered lattice waveguide index profile} A quasi-1D ordered optical lattice waveguide can be realized by periodically stacking dielectric layers with different refractive indexes on top of each other. Fig.~\ref{fig:IndexProfile}(a) shows the refractive index profile of a periodic quasi-1D optical waveguide where $n_{0}$, $n_{1}$, and $n_{c}$ correspond to the lower index layers, higher index layers, and cladding, respectively. We also define the refractive index contrast as $\Delta n=n_1-n_0$. In order to make a disordered waveguide, the thickness of the layers is randomized around an average value. We always assume that $n_{1}=1.50$ and $n_{c}=n_{0}$, while our simulations are carried out for either high contrast ($\Delta n=0.1$, $n_0=1.4$) or low contrast ($\Delta n=0.05$, $n_0=1.45$). The total number of layers alternating in $n_{0}$ and $n_{1}$ in each sample waveguide is always 100, and the average thickness of each layer is $2\lambda$, where $\lambda$ is the optical wavelength at which the simulations are performed. The average thickness is chosen for maximum localization according to Ref.~\cite{Mafi-Salman-Modal-JOSAB-2013}. The actual thickness of each layer is a random number $2\lambda + \Delta d$, where $\Delta d$ represents the variation in the thickness. $\Delta d$ is a random number that is chosen from a uniform random distribution of ${\rm unif}[-2r\lambda,2r\lambda]$. The amount of randomness is controlled by the $r$-parameter, $0\le r\le 1$, where $r=0$ corresponds to the periodic lattice. We refer to $r$ as the {\em disorder strength}. The waveguide is padded from each side with a layer of $5\lambda$ in thickness and refractive index of $n_{c}$. Fig.~\ref{fig:IndexProfile}(b) shows a sketch of the refractive index profile of a quasi-1D disordered optical waveguide, where the number of layers is reduced for an easier visualization.
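The randomization recipe above can be sketched as follows. The function name, sampling step, and the choice of which index the first layer takes are ours; everything else (100 alternating layers, mean thickness $2\lambda$, perturbation ${\rm unif}[-2r\lambda,2r\lambda]$, and $5\lambda$ cladding padding with $n_c=n_0$) follows the text.

```python
import numpy as np

def disordered_index_profile(r, lam=1.0, n0=1.4, n1=1.5, n_layers=100,
                             pad=5.0, dx=0.05, rng=None):
    """Refractive index profile n(x) per the recipe above: n_layers
    alternating layers of thickness 2*lam + unif[-2r*lam, 2r*lam],
    padded on each side by pad*lam of cladding (n_c = n0).
    The starting layer index and grid step dx are arbitrary choices."""
    rng = np.random.default_rng() if rng is None else rng
    d = 2.0 * lam + rng.uniform(-2.0 * r * lam, 2.0 * r * lam, size=n_layers)
    edges = pad * lam + np.concatenate(([0.0], np.cumsum(d)))
    x = np.arange(0.0, edges[-1] + pad * lam, dx)
    n = np.full_like(x, n0)                     # cladding everywhere first
    for i in range(n_layers):
        layer = (x >= edges[i]) & (x < edges[i + 1])
        n[layer] = n1 if i % 2 == 0 else n0     # alternate n1 / n0 layers
    return x, n
```

For $r=0$ this reproduces the periodic lattice with a total width of $100\times2\lambda$ plus $2\times5\lambda$ of padding.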
Note that because the total number of alternating layers is fixed at 100 but the thickness of each layer is random, the total width of the waveguide varies in each element of the random ensemble (each waveguide). The width variation ranges from zero (at $r=0$) to 5.6\% (at $r=1$). Although a larger value of $r$ is associated with a larger variation in the width of the waveguide, it also results in a stronger confinement, hence making the calculations less dependent on the width of the waveguide~\cite{Mafi-Behnam-Scaling-PRB-2016}. Therefore, the very same disorder that induces the waveguide width variation is responsible for Anderson localization, which makes the results of this paper independent of the width variation. \begin{figure}[t] \centerline{ \includegraphics[width=\columnwidth]{IndexProfile.png} } \caption{\label{fig:IndexProfile} Sample refractive index profiles of (a) ordered and (b) disordered slab waveguides are shown.} \end{figure} In Fig.~\ref{fig:ModeProfile}(a), we plot the two guided modes of a quasi-1D periodic waveguide with the highest propagation constants. These two modes belong to a large group of standard {\em extended} Bloch periodic guided modes supported by the ordered optical waveguide, which are modulated by the overall mode profile of the quasi-1D waveguide~\cite{Mafi-Salman-Modal-JOSAB-2013}. The total number of guided modes depends on the total thickness and the refractive index values of the slabs and cladding. The key point is that each mode of the periodic structure extends over the entire width of the waveguide structure. A similar exercise can be done with a quasi-1D disordered waveguide, where two arbitrarily selected modes are plotted in Fig.~\ref{fig:ModeProfile}(b) using the same refractive index parameters as those of the periodic waveguide. It is clear that the modes become localized in the quasi-1D disordered waveguide.
While there are variations in the shape and width of the modes, the mode profiles shown in Fig.~\ref{fig:ModeProfile}(b) are typical. \begin{figure}[t] \centerline{ \includegraphics[width=\columnwidth]{ModeProfile.png} } \caption{\label{fig:ModeProfile}Typical mode profiles for (a) an ordered slab waveguide where each mode extends over the entire waveguide, and (b) a disordered slab waveguide, where the modes are spatially localized.} \end{figure} \section{Mode group index PDF} \label{sec:PDF} The group index of mode $i$ is defined as $n^i_g=c \beta^{(1)}_i$, where $\beta^{(1)}_i=d\beta_i/d\omega$ and $c$ is the speed of light in vacuum. In Fig.~\ref{fig:gv-pdf-1}, we plot the group index PDF for the periodic waveguide ($r=0.0$), and disordered waveguides with $r=0.25$, $r=0.50$, and $r=1.0$; with $\Delta n=0.1$. The area under each PDF curve integrates to unity, and each curve is generated using the statistical information from simulating 6,000 waveguides, amounting to nearly 770,000 guided modes. The PDF of the periodic waveguide is highly peaked around $n_g\approx 1.5065$; however, there are also broad secondary peaks near $n_g\approx 1.520$ and $n_g\approx 1.485$. The modal patterns do not give any obvious clues as to which categories of mode shapes belong to which group-index bins. When a moderate amount of disorder with $r=0.25$ is introduced, the broad peak near $n_g\approx 1.485$ disappears, the secondary peak near $n_g\approx 1.520$ is lowered and broadened, while the main peak around $n_g\approx 1.5065$ is raised. As the amount of disorder is further increased to $r=0.5$ and $r=1.0$, third and fourth peaks appear at larger values of group index, respectively. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{gv-pdf-1.png} } \caption{\label{fig:gv-pdf-1} Mode group index PDF for the periodic waveguide ($r=0.0$), and disordered waveguides with $r=0.25$, $r=0.50$, and $r=1.0$.
Simulations are for the refractive index contrast of $\Delta n=0.1$ and the area under each PDF curve integrates to unity.} \end{figure} By looking at the general shapes of the PDF curves in Fig.~\ref{fig:gv-pdf-1}, one can claim that a higher level of disorder amounts to a broader PDF curve, i.e., the diversity in group index values is increased. In other words, on average, a broader range of group velocities becomes accessible in the presence of increasing disorder. In order to further verify this claim, in Fig.~\ref{fig:gv-pdf-2}, we repeat the simulations of Fig.~\ref{fig:gv-pdf-1}, except for the lower refractive index contrast of $\Delta n=0.05$. In Fig.~\ref{fig:gv-pdf-2}, the mode group index PDF for $r=0.25$ is the narrowest and tallest. Looking at Fig.~\ref{fig:gv-pdf-2}, one can claim that there appears to be an optimal amount of disorder strength that narrows the range of group velocities and possibly reduces the pulse broadening. We will come back to these two seemingly contradictory conclusions in section~\ref{sec:broadening}, but for now we continue to look for other clues on the impact of disorder on the statistical behavior of the group index in these waveguides. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{gv-pdf-2.png} } \caption{\label{fig:gv-pdf-2}Similar to Fig.~\ref{fig:gv-pdf-1}, except for the refractive index contrast of $\Delta n=0.05$.} \end{figure} In Fig.~\ref{fig:GVD-vs-1-A} we present a scatter plot of the group index, $n_g$, versus the effective mode refractive index, $n_p=c\beta/\omega$, for each guided mode. The plots are presented again for $r=0.0$, $r=0.25$, $r=0.50$, and $r=1.0$, and each scatter plot is generated from simulating 100 waveguides with the refractive index contrast of $\Delta n=0.1$. Note that for the periodic case of $r=0.0$, the result is always the same, because the waveguide refractive index profile is fully deterministic.
For $r=0.0$, bandgaps in $n_p$ are observed and the range of $n_g$ is limited. As the disorder is introduced and gradually increased, the available ranges of both $n_p$ and $n_g$ are expanded and the gaps in $n_g$ eventually close. Again, the disorder increases the diversity in the values of both $n_p$ and $n_g$. We recall that the accessible values of $n_p$ in a waveguide are responsible for the shape of the spatial patterns, in addition to the {\em modal intensity profiles}. For example, if the values of $n_p$ are regularly spaced, the beam pattern in the waveguide repeats its shape periodically; e.g., in a graded-index optical fiber, this repetition happens with a sub-millimeter period as the beam propagates along the fiber~\cite{Mafi:11,liu2016kerr,wright2016self}. For disordered waveguides where such an {\em order} is broken, pattern repetition is eliminated because of the large number of modes and random values of $n_p$. This behavior combined with the localized {\em modal intensity profiles} is responsible for the high quality of image transport through TALOFs. Similar comments can be made about the diversity in values of $n_g$ and its impact on the temporal shape of an optical pulse, which will be discussed in detail in section~\ref{sec:broadening}. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{GVD-vs-1-A.png} } \caption{\label{fig:GVD-vs-1-A} Scatter plot of the group index, $n_g$, versus the effective mode refractive index, $n_p=c\beta/\omega$, for each guided mode. The plots are presented for $r=0.0$, $r=0.25$, $r=0.50$, and $r=1.0$. The refractive index contrast is $\Delta n=0.1$.} \end{figure} In order to further elaborate on the diversity of the $n_g$ values of the guided modes, in Fig.~\ref{fig:GVD-vs-1-B} we plot the values of $n_g$ for both the periodic waveguide with $r=0.0$ and the maximally disordered waveguide $r=1.0$. The refractive index contrast is assumed to be $\Delta n=0.1$ in this figure.
In the left panel corresponding to $r=0.0$, we plot the values of $n_g$ versus the mode number (138 guided modes), where we have ordered the modes based on their $n_g$ values. In the right panel corresponding to $r=1.0$, we simulate 100 waveguides and show the $n_g$ values in ascending order. The result shows that in a highly disordered waveguide, the group index values for the majority of the modes are still similar to those of a periodic waveguide with $r=0.0$; however, nearly 10\%--20\% of the modes exhibit strong deviations in group index and assume considerably smaller or larger values. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{GVD-vs-1-B.png} } \caption{\label{fig:GVD-vs-1-B}Values of the group index, $n_g$, versus the mode number for the periodic waveguide with $r=0.0$ and the maximally disordered waveguide $r=1.0$. For $r=1.0$, we simulate 100 waveguides and show the $n_g$ values in ascending order. The refractive index contrast is $\Delta n=0.1$.} \end{figure} \section{Pulse propagation and broadening} \label{sec:broadening} The results presented in Figs.~\ref{fig:gv-pdf-1} and~\ref{fig:gv-pdf-2} show the impact of the disorder strength and refractive index contrast on the group index distribution in these disordered waveguides. However, it may be hard to draw a concrete conclusion about pulse broadening from such figures, especially because the results appear to be somewhat contradictory as explained in section~\ref{sec:PDF}. The reason is that the pulse broadening is affected by the GV distribution of only those guided modes which are excited by the input pulse. For example, a typical input beam with a Gaussian spatial profile is likely to excite those modes which have less phase variation. As we explained in section~\ref{sec:PDF}, we tried to look at the modal profiles to build a correlation between the profiles and GV values; however, this was inconclusive.
As such, in this section, we resort to direct computation to evaluate the impact of the GV distribution of Figs.~\ref{fig:gv-pdf-1} and~\ref{fig:gv-pdf-2} on pulse width. The pulse width is important in setting the accessible communication bandwidth and is also essential for nonlinear properties of these waveguides. In order to evaluate the pulse broadening, we assume that the in-coupling electric field is Gaussian in spatial profile, but is extremely narrow (Dirac delta function) in its temporal profile. The Gaussian spatial profile has a radius of $w=5$\,\textmu m, is centrally aligned with the waveguide, and is expressed as \begin{equation} \label{eq:EprofileSMin} \ket{W}= \Big(\dfrac{2}{\pi w^2}\Big)^{1/4} \exp{\left(-\dfrac{x^2}{w^2}\right)}, \end{equation} where $\braket{W|W}=1$. The bra-ket notation indicates integration in the transverse $x$-coordinate. The guided modes in each waveguide are identified with $\ket{i}$ ($\braket{i|i}=1$), where $i=1,\cdots,M$ is the mode index. For example, for $\Delta n=0.1$, there are approximately $M=140$ guided modes supported by the waveguide. For each waveguide, the modal excitation amplitudes are calculated as $c_i=\braket{i|W}$ and the fractional power in each mode is given by $p_i=|c_i|^2$. The fractional power is nonzero mainly for those modes which are positioned near the center of the waveguide and have an overlap with $\ket{W}$. We define the coupling efficiency as $\eta=\sum^M_{i=1}p_i$, where $0\leq \eta\leq 1$. When $\eta<1$, which is almost always the case, some of the power does not couple to the guided modes and is radiated out. In order to calculate the pulse broadening due to the modal dispersion, we follow the procedure outlined in Ref.~\cite{Pepeljugoski,Mafi-bandwidth}. 
The temporal profile of the input excitation is $\delta(t)$; however, as it couples into different modes that propagate with different GVs, the pulse breaks into multiple subpulses: \begin{align} \delta(t)\to \sum^M_{i=1}p_i\delta(t-\tau_i), \end{align} where $\tau_i=n^i_gL/c$ is the modal delay for mode $\ket{i}$, $n^i_g$ is the group index of mode $\ket{i}$, $L$ is the propagation length, and $c$ is the speed of light in vacuum. The pulse broadening, $\delta\tau$, is calculated using \begin{align} (\delta\tau)^2=2\eta^{-1}\sum^M_{i=1}p_i(\tau_i-\bar{\tau})^2, \end{align} where $\bar{\tau}$ is the temporal center of the broken (broadened) pulse given by \begin{align} \bar{\tau}=\eta^{-1}\sum^M_{i=1}p_i\tau_i. \end{align} In Fig.~\ref{fig:coupling-efficiency-vs-disorder}, we plot the coupling efficiency, $\eta$, as a function of the disorder strength for the two cases of $\Delta n=0.1$ and $\Delta n=0.05$. For each data point, we have simulated 1,000 waveguides and calculated the value of $\eta$ for each waveguide; the dark circle shows the mean value of $\eta$ averaged over the 1,000 waveguides and the error-bar indicates one standard deviation around the mean value. Of course, $\eta$ is generally higher for $\Delta n=0.1$ than $\Delta n=0.05$, because a larger number of guided modes are supported in the former case. This result is important because it shows that in coupling to a typical disordered waveguide, on average, only 70\%-80\% of the power can be coupled in and the rest is radiated out. In Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}, we show the pulse broadening per unit length, $\delta \tau/L$, as a function of the disorder strength for two cases of $\Delta n=0.1$ and $\Delta n=0.05$. Again, the results are averaged over the 1,000 waveguides for each data point. 
These figures indicate that a small amount of disorder, typically around $r\approx 0.1$--$0.15$, achieves minimal pulse broadening compared with the cases of no disorder or strong disorder. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{coupling-efficiency-vs-disorder.png} } \caption{\label{fig:coupling-efficiency-vs-disorder}The coupling efficiency is plotted as a function of the disorder strength for two cases of $\Delta n=0.1$ and $\Delta n=0.05$. Each data point is averaged over 1,000 waveguides and the error-bar indicates one standard deviation around the mean value.} \end{figure} \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{GV-pulse-broadening-vs-disorder.png} } \caption{\label{fig:GV-pulse-broadening-vs-disorder}Pulse broadening per unit length is plotted as a function of the disorder strength for two cases of $\Delta n=0.1$ and $\Delta n=0.05$. Each data point is averaged over 1,000 waveguides and the error-bar indicates one standard deviation around the mean value.} \end{figure} The data from Fig.~\ref{fig:GV-pulse-broadening-vs-disorder} indicates an optimal disorder value to achieve a minimal amount of pulse broadening. Equipped with this information, we can now go back to the discussion surrounding Figs.~\ref{fig:gv-pdf-1} and~\ref{fig:gv-pdf-2} in section~\ref{sec:PDF}. We recall that Fig.~\ref{fig:gv-pdf-1} indicated that a higher level of disorder amounts to a broader range of group velocities, while Fig.~\ref{fig:gv-pdf-2} indicated an optimal value for the disorder strength. In order to address this issue, in Fig.~\ref{fig:gv-pdf-3}, we plot the mode group index PDFs at $r=0.1$ for $\Delta n=0.1$ and $r=0.15$ for $\Delta n=0.05$, respectively. These values correspond to the minima of the pulse broadening curves in Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}.
By comparing Fig.~\ref{fig:gv-pdf-3} and Fig.~\ref{fig:gv-pdf-2}, it can be clearly seen that for $\Delta n=0.05$, $r=0.15$ provides the narrowest and tallest PDF curve, which is clearly consistent with the results reported in Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}. For $\Delta n=0.1$, by comparing Fig.~\ref{fig:gv-pdf-3} and Fig.~\ref{fig:gv-pdf-1}, in particular comparing $r=0.1$ and $r=0.0$, it can be observed that both the primary peak at $n_g\approx 1.5065$ and the secondary peak at $n_g\approx 1.520$ narrow down considerably for $r=0.1$, and the mode group index values below the primary peak disappear. As such, both results confirm the presence of an optimal value in the disorder strength to achieve the minimum pulse broadening. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{gv-pdf-3.png} } \caption{\label{fig:gv-pdf-3}Similar to Fig.~\ref{fig:gv-pdf-1}, except for $\Delta n=0.1$ and $r=0.1$ in the left panel and $\Delta n=0.05$ and $r=0.15$ in the right panel.} \end{figure} It must be noted that although the input pulse is assumed to be extremely narrow, i.e., a Dirac delta function, the same results would be readily obtained with a longer input pulse. The choice of a Dirac delta function is merely a matter of convenience. In other words, for a Gaussian input pulse of temporal width $\tau_{\rm in}$, which broadens to $\tau_{\rm out}$ upon propagation through the waveguide, it can be shown that $\delta \tau=(\tau^2_{\rm out}-\tau^2_{\rm in})^{1/2}$, where $\delta \tau$ is independent of $\tau_{\rm in}$. Therefore, the same value of pulse broadening is obtained from a broader Gaussian pulse as from a Dirac delta function. Note that this statement does not strictly hold if the second or higher order dispersive effects are taken into account, all of which are higher-order contributions that play a less important role in the pulse broadening in normal circumstances.
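The broadening recipe of the preceding equations is straightforward to script. The sketch below implements $\eta$, $\bar{\tau}$, and $\delta\tau$ exactly as written above; the two-mode inputs at the end are a hypothetical toy example, not data from the simulations.

```python
import numpy as np

C = 299792458.0  # speed of light in vacuum (m/s)

def pulse_broadening(p, n_g, L=1.0):
    """Coupling efficiency eta, temporal center tau_bar, and broadening
    delta_tau for a delta-function input, given the fractional modal
    powers p_i and group indices n_g^i over propagation length L."""
    p, n_g = np.asarray(p, float), np.asarray(n_g, float)
    tau = n_g * L / C                       # modal delays tau_i = n_g^i L / c
    eta = p.sum()                           # coupling efficiency
    tau_bar = (p * tau).sum() / eta         # temporal center of the pulse
    dtau = np.sqrt(2.0 * (p * (tau - tau_bar) ** 2).sum() / eta)
    return eta, tau_bar, dtau

# Toy example: equal power (0.4 each, so eta = 0.8) in two modes with
# group indices 1.50 and 1.52, over L = 1 m.
eta, tau_bar, dtau = pulse_broadening([0.4, 0.4], [1.50, 1.52], L=1.0)
# dtau = sqrt(2) * 0.01 / c, i.e., about 47 ps per metre for this toy case
```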
\subsection{Metric for pulse broadening using the PDF curves} In light of the observations in Fig.~\ref{fig:GV-pulse-broadening-vs-disorder} and how they relate to Figs.~\ref{fig:gv-pdf-1},~\ref{fig:gv-pdf-2}, and~\ref{fig:gv-pdf-3}, we define a metric to assess the width of the PDF curves; the goal is to establish a relationship between this metric and the pulse broadening values in Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}. A metric, at the minimum, should be able to predict the optimal value of the disorder strength for minimal pulse broadening. We use the square of the {\em inverse participation ratio} (IPR) of the PDF curves as the metric. The IPR is defined as: \begin{align} \label{eq:defIPR} {\rm IPR} = \int \big[\rm PDF(n_g)\big]^2\,dn_g, \end{align} where ${\rm PDF}(n_g)$ represents any of the PDF curves in Figs.~\ref{fig:gv-pdf-1},~\ref{fig:gv-pdf-2}, and~\ref{fig:gv-pdf-3}. Note that unlike the commonly used 4th power in the definition of the IPR (see, e.g., Ref.~\cite{PhysRevLett.105.183901}), we only use the 2nd power of the PDF in Eq.~\ref{eq:defIPR}: the reason is that the PDF is a non-negative probability density function and is similar to $|\psi|^2$, if $\psi$ is regarded as the (quantum-mechanical) wave amplitude. We recall that the area under each PDF curve integrates to unity: $\int {\rm PDF}(n_g)\,dn_g=1$. In Fig.~\ref{fig:broadening-metric}, we plot the metric, i.e., ${\rm IPR^2}$, as a function of the disorder strength, both for $\Delta n=0.1$ and $\Delta n=0.05$. Fig.~\ref{fig:broadening-metric} should be compared with Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}; the disorder strength corresponding to the minimum pulse broadening is almost exactly predicted by the metric. Moreover, the correlation factor between the metric and the mean values presented in Fig.~\ref{fig:broadening-metric} is 89\% for $\Delta n=0.1$ and 99\% for $\Delta n=0.05$.
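In practice, the integral of Eq.~(\ref{eq:defIPR}) is evaluated from a histogram-based estimate of ${\rm PDF}(n_g)$; a minimal sketch (the function name and bin count are our choices) is:

```python
import numpy as np

def ipr_squared_metric(ng_samples, bins=50):
    """IPR = integral of PDF(n_g)^2 dn_g, estimated from a density
    histogram of group index samples, and the IPR^2 metric built from it.
    The bin count is an arbitrary choice for this illustration."""
    pdf, edges = np.histogram(ng_samples, bins=bins, density=True)
    ipr = float(np.sum(pdf**2 * np.diff(edges)))   # discretized integral
    return ipr, ipr**2
```

A quick check of the behavior: for samples uniform over a range of width $w$, ${\rm PDF}=1/w$ and the IPR approaches $1/w$, so a narrower (taller) PDF yields a larger IPR.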
Therefore, the ${\rm IPR^2}$ metric appears to be a powerful tool that can predict the pulse broadening performance of such disordered waveguides directly using the PDF curves and without resorting to specific pulse propagation simulations. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{broadening-metric.png} } \caption{\label{fig:broadening-metric}The ${\rm IPR^2}$ metric is calculated from the PDF curves as a function of the disorder strength ($r$) for both $\Delta n=0.1$ and $\Delta n=0.05$. Each PDF curve is obtained by simulating 1,000 waveguides.} \end{figure} \section{Mode group index PDF versus localization length} We discussed earlier that the presence of disorder results in the TAL of the guided modes. Therefore, as the disorder strength is increased, on average, the modes should become smaller in width. In this section, we explore the correlation between the localization width of the guided modes and their group index. For each guided mode, the mode width $\mathcal{W}$ is defined based on the standard deviation $\sigma$ of the (1D) normalized intensity distribution $I(x)\propto |A(x)|^2$ of the mode according to \begin{align} \label{eq:sigma} \sigma^2 = \int_{-\infty}^{+\infty}~(x-\bar{x})^2~I(x)~dx, \end{align} where we define the mode center as \begin{align} \label{eq:xbar} \bar{x} &= \int_{-\infty}^{+\infty} x~I(x)~dx. \end{align} $x$ is the spatial coordinate across the width of the waveguide and the mode intensity profile is normalized such that $\int_{-\infty}^{+\infty} I(x) dx = 1$. We define $\mathcal{W}=\sqrt{2}\sigma$ as a measure of the width of the modes, i.e., a larger $\mathcal{W}$ signifies a wider mode intensity profile distribution. In Fig.~\ref{fig:gv-width-1}, we present a scatter plot of the group index, $n_g$, versus the mode width, $\mathcal{W}$, for each guided mode. The plots are presented for $r=0.1$, $r=0.25$, $r=0.50$, and $r=1.0$, while the refractive index contrast is assumed to be $\Delta n=0.1$.
We note that $r=0.1$ corresponds to the minimum pulse spreading for $\Delta n=0.1$ according to the left panel in Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}. Similarly, in Fig.~\ref{fig:gv-width-2}, we present a scatter plot of the group index, $n_g$, versus the mode width, $\mathcal{W}$, for each guided mode. The plots are presented for $r=0.15$, $r=0.25$, $r=0.50$, and $r=1.0$, while the refractive index contrast is assumed to be $\Delta n=0.05$. We note that $r=0.15$ corresponds to the minimum pulse spreading for $\Delta n=0.05$ according to the right panel in Fig.~\ref{fig:GV-pulse-broadening-vs-disorder}. The data in each subfigure is generated from the simulation of 1,000 independent waveguides resulting in 138,000 modes. We note that the lowest value of $r$ in each figure corresponds to the narrowest group index distribution and the widest mode-width distribution. As the disorder is increased, the group index distribution broadens, while the mode-width distribution narrows. This observation is consistent with the TAL behavior in disordered waveguides and our discussions on the group index distribution in previous sections. \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{gv-width-1.png} } \caption{\label{fig:gv-width-1}Scatter plot of the group index, $n_g$, versus the mode width, $\mathcal{W}$, for each guided mode. The plots are presented for $r=0.1$, $r=0.25$, $r=0.50$, and $r=1.0$. The refractive index contrast is $\Delta n=0.1$.} \end{figure} \begin{figure}[t] \centerline{ \includegraphics[width=0.95\columnwidth]{gv-width-2.png} } \caption{\label{fig:gv-width-2}Scatter plot of the group index, $n_g$, versus the mode width, $\mathcal{W}$, for each guided mode. The plots are presented for $r=0.15$, $r=0.25$, $r=0.50$, and $r=1.0$.
The refractive index contrast is $\Delta n=0.05$.} \end{figure} \section{Conclusion} In this manuscript, we have introduced the mode group index PDF as a powerful tool to study the dispersion properties of guided modes in a disordered quasi-1D slab optical waveguide. We observe that the minimum amount of modal dispersion corresponds to a small amount of disorder, i.e., no disorder or large disorder both result in a large modal dispersion. We establish a metric that is applied to the mode group index PDF and can reliably predict the optimal amount of disorder for minimal pulse dispersion. The metric is a measure of the width of the PDF and its value is strongly correlated with the modal dispersion of a pulse propagating in the disordered waveguide. While the simulations are for a certain class of disordered quasi-1D waveguides, they appear to conform well with physical intuition and are likely to hold in other designs. The results presented in the manuscript are intended to establish the framework for a comprehensive analysis of the group velocity statistics for quasi-2D transverse Anderson localization in disordered optical fibers in the future. It is quite plausible to expect that in a transversely disordered optical fiber, similar to the disordered quasi-1D slab optical waveguide, the minimum amount of modal dispersion corresponds to an optimal (and likely a small) amount of disorder. While a longitudinally invariant and transversely disordered optical fiber is not inherently more lossy than a conventional core-cladding optical fiber, it is likely to be fabricated by a method that is more prone to manufacturing uncertainties, such as the stack-and-draw method~\cite{Mafi-Salman-OL-2012}. Such manufacturing uncertainties can break the longitudinal invariance and result in attenuation, as well as polarization coupling. The undesirable attenuation must be addressed on a case-by-case basis by making better fibers or amplifying the signal.
Pulse broadening due to the random polarization coupling is likely going to be negligible compared to the modal dispersion, similar to a conventional optical fiber; however, this issue warrants further research. \section*{ACKNOWLEDGMENT} A. Mafi gratefully acknowledges support by Grant Number 1807857 from the National Science Foundation (NSF). \section*{References}
\section{Introduction} Directly imaged planets typically have their masses inferred indirectly from their luminosity and age, using uncalibrated evolutionary models that assume an initial thermal state. Most commonly-used models assume an initially high specific entropy \citep[hot start; e.g.,][]{1997ApJ...491..856B,2003A&A...402..701B}, but the planet formation process might radiate away a significant amount of energy leading to a much lower initial specific entropy \citep[cold or warm start; e.g.,][]{2007ApJ...655..541M,2012ApJ...745..174S}. Furthermore, planet assembly could be slow and only conclude well after the star is formed, in which case young planets could appear even more luminous than hot-start models would predict from the host star's age. The crucial observations needed to sort out these various possibilities are masses of planets with known age and luminosity. $\beta$~Pic~b was one of the first directly imaged planets to be discovered \citep{2010Sci...329...57L}, and its host star is the namesake of a young moving group of well-determined age \citep[$22\pm6$\,Myr; e.g.,][]{2014MNRAS.438L..11B,2017AJ....154...69S}. We present here a new model-independent dynamical mass for $\beta$~Pic~b. We use the methodology of \citet{Brandt_Dupuy_Bowler_2018} to perform a joint orbital analysis of relative astrometry, radial velocities, and host-star astrometry from the cross-calibrated {\sl Hipparcos}--{\sl Gaia}\ Catalog of Accelerations \citep[HGCA;][]{Brandt_2018}. Our new mass is consistent with recent results from \citet{2018NatAs.tmp..114S} but with broader uncertainties owing to our re-assessment of errors reported in {\sl Hipparcos}\ and {\sl Gaia}~DR2 catalogs. \section{Data} \label{sec:data} \subsection{Host-Star Astrometry} \citet{Brandt_2018} has cross-calibrated {\sl Hipparcos}\ and {\sl Gaia}~DR2, placing them on a common reference frame. 
Figure~1 of \cite{Brandt_2018} shows that neither the {\sl Hipparcos}\ re-reduction \citep{2007A&A...474..653V} nor the {\sl Gaia}~DR2 astrometry \citep{2018A&A...616A...2L} is suitable for orbit fitting in its published form: the ensemble of proper motion differences is inconsistent with the formal uncertainties. Moreover, Figure~9 of \cite{Brandt_2018} shows that the cross-calibrated HGCA proper motions satisfy the standard assumptions of Gaussianity but that the lowest-precision stars in {\sl Gaia}\ (like $\beta$~Pic) have uncertainties that remain underestimated. The HGCA contains three proper motions: a (nearly) instantaneous proper motion near 1991.25, another near 2015.5, and the positional difference between the catalogs scaled by the time between them. The three proper motions are nearly independent. \citet{Brandt_2018} also gives the central epoch at which a position was measured; this is the epoch with the minimum positional uncertainty (which differs slightly in right ascension and declination).
\begin{deluxetable*}{lccccccr} \tablewidth{0pt} \tablecaption{Absolute Stellar Astrometry} \tablehead{ \colhead{Mission} & \colhead{$\mu_{\alpha*}$} & \colhead{$\sigma[\mu_{\alpha*}]$} & \colhead{$\mu_{\delta}$} & \colhead{$\sigma[\mu_\delta]$} & \colhead{Corr$[\mu_{\alpha*},\mu_\delta]$} & \colhead{$t_{\alpha*}$} & \colhead{$t_\delta$} \\ \colhead{} & \multicolumn{2}{c}{(mas\,yr$^{-1}$)} & \multicolumn{2}{c}{(mas\,yr$^{-1}$)} & \colhead{} & \multicolumn{2}{c}{(year)} } \startdata {\sl Hipparcos} & 4.4 & 0.4 & 82.8 & 0.4 & 0.002 & 1991.33 & 1991.26 \\ {\sl Hipparcos}--{\sl Gaia} & 4.796 & 0.027 & 83.863 & 0.028 & 0.025 & \nodata & \nodata \\ {\sl Gaia} & 2.5 & 2.5\tablenotemark{a} & 82.6 & 2.5\tablenotemark{a} & 0.040 & 2015.58 & 2015.67 \enddata \tablenotetext{a}{{\sl Gaia}~DR2 errors have been inflated by a factor of two as recommended by \cite{Brandt_2018} for stars like $\beta$~Pic\ that have large reported proper motion errors in DR2 ($\gtrsim0.7$\,\hbox{mas\,yr$^{-1}$}).} \label{tab:hip_gaia} \end{deluxetable*} Table \ref{tab:hip_gaia} lists our HGCA proper motions for $\beta$~Pic, the correlation coefficients between proper motion in RA and Dec, and the central epoch for each measurement. $\beta$~Pic\ is heavily saturated in {\sl Gaia}\ data and thus is among the least-precisely measured stars in the HGCA. Figure~9 of \citet{Brandt_2018} indicates that the inflated uncertainties of such stars remain underestimated by as much as a factor of two. We have therefore doubled the {\sl Gaia}~DR2 proper motion errors beyond the values in the HGCA. For parallax, we adopt the same 60/40 linear combination of the {\sl Hipparcos}\ catalogs as the HGCA and add the same 0.20\,mas error inflation in quadrature; this results in a value of $51.61\pm0.39$\,mas.
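The adopted parallax follows a simple prescription: a 60/40 linear combination of the two {\sl Hipparcos}\ reductions with a fixed 0.20\,mas term added in quadrature. A sketch of that arithmetic is below; the input catalog values are placeholders (not the actual measurements), and treating the two reductions as statistically independent is a simplification made only for this illustration.

```python
import math

def hgca_style_parallax(plx1, sig1, plx2, sig2, w=0.6, inflation=0.20):
    """60/40 linear combination of two catalog parallaxes (mas) with a
    fixed error term added in quadrature. Independence of the two
    reductions is assumed here purely for illustration."""
    plx = w * plx1 + (1.0 - w) * plx2
    sig = math.sqrt((w * sig1) ** 2 + ((1.0 - w) * sig2) ** 2 + inflation ** 2)
    return plx, sig

# Placeholder inputs (NOT the actual catalog values):
plx, sig = hgca_style_parallax(51.6, 0.40, 51.6, 0.45)
```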
The uncertainties for the {\sl Hipparcos}\ proper motions are much larger in the \cite{Brandt_2018} catalog than in the {\sl Hipparcos}\ re-reduction, though they are slightly smaller than the uncertainties of the original {\sl Hipparcos}\ reduction. This is a generic feature of bright stars in the HGCA. As shown in Figure~1 of \cite{Brandt_2018}, stars with higher-precision proper motions depart most strongly from the standard normal distribution in their residuals. Even for the most precise 20\% of stars, a $\sim$60/40 linear combination of the two {\sl Hipparcos}\ reductions gives lower residuals than the \citet{2007A&A...474..653V} proper motions alone, and further error inflation is necessary to bring the residuals into agreement with a normal distribution. Given the cross-calibration approach used in the HGCA, it would be infeasible to use {\sl Hipparcos}\ epoch astrometry, as in \citet{2018NatAs.tmp..114S}, and ensure independence of individual measurements. \subsection{Literature Relative Astrometry \& Radial Velocities} We consider all available relative astrometry of $\beta$~Pic~b in our orbit analysis, setting aside duplicate measurements when the same data have been analyzed separately in the literature. This includes astrometry from VLT/NaCo \citep{2010ApJ...722L..49Q,2011A&A...528L..15B,2013A&A...555A.107B,2012A&A...542A..41C,2013A&A...559L..12A,2014A&A...566A..91M}, Gemini-S/NICI \citep{2014ApJ...794..158N}, Magellan/MagAO \citep{2014ApJ...794..158N}, Gemini-S/GPI \citep{2016AJ....152...97W}, and VLT/SPHERE \citep{2018arXiv180908354L}. This comprises 50 measurements spanning sixteen years, with two observations on the northeastern side of the orbit (in 2003~November and 2018~September). The radial velocity of the host star has been monitored from 2003--2011 with the HARPS spectrograph \citep{2012A&A...542A..18L}. 
We use all 1049 individual published measurements and account for the substantial intrinsic ``jitter'' that is expected for a young star like $\beta$~Pic. We also use the measurement of the planet's relative radial velocity ($\Delta{\rm RV} = {\rm RV}_{\rm comp} - {\rm RV}_{\rm host}$) from \citet{2014Natur.509...63S} in our orbit fit. \section{Orbit Analysis} \label{sec:orbit} Relative astrometry from direct imaging has already been shown to constrain many orbital parameters of $\beta$~Pic~b given the long time baseline and intensive monitoring \citep[e.g.,][]{2016AJ....152...97W,2018arXiv180908354L}. Therefore, as a first step we fit the relative astrometry with a standard seven-parameter Keplerian orbit in order to assess any systematics in combining astrometry from many different instruments and data reduction methods. We found an unreasonably large $\chi^2$ of 165 for 93 degrees of freedom (dof), $p(\chi^2) = 6\times10^{-6}$, when taking all reported astrometric errors at face value. To achieve $p(\chi^2) = 0.5$ we estimated that errors of 4\,mas and 0$\fdg$3 would need to be added in quadrature to all separation and PA measurements, respectively. Alternatively, we could exclude a handful of outlier measurements (which have reasons for being suspect) to decrease the $\chi^2$ of the maximum likelihood solution to a reasonable value. Five epochs of VLT/NaCo astrometry from \citet{2014A&A...566A..91M} account for 30\% of the $\chi^2$ in the relative orbit fit. \citet{2012A&A...542A..41C} and \citet{2018arXiv180908354L} did not use any of these five epochs, even though they each could have used at least some. We therefore exclude all \citet{2014A&A...566A..91M} astrometry, which is contemporaneous with other available measurements. 
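The quoted $p(\chi^2)$ can be reproduced with a short calculation. Below is a minimal sketch (our own; the paper does not state which method it used) based on the Wilson--Hilferty normal approximation to the $\chi^2$ survival function:

```python
import math

def chi2_sf(x, k):
    """Chi-squared survival function via the Wilson-Hilferty
    normal approximation (accurate for large k)."""
    c = 2.0 / (9.0 * k)
    z = ((x / k) ** (1.0 / 3.0) - (1.0 - c)) / math.sqrt(c)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# chi^2 = 165 with 93 degrees of freedom, as in the relative-astrometry fit
p = chi2_sf(165.0, 93)
# p is of order 1e-6 to 1e-5, consistent with the quoted 6e-6
```

With `scipy.stats.chi2.sf(165, 93)` one would obtain essentially the same number without the approximation.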
Likewise, Gemini/NICI astrometry from \citet{2014ApJ...794..158N} has three highly discrepant measurements (25\% of the total $\chi^2$), each of which was obtained on the same night as another measurement that is more consistent with the orbit fit. We exclude these three measurements as well, using 42 relative astrometry measurements in our final orbital analysis. As shown by \cite{Brandt_Dupuy_Bowler_2018}, simultaneous measurements of projected relative separation, host-star radial velocity, and host-star astrometric acceleration can provide a direct measurement of companion mass. In practice, observations of directly imaged companions are never truly simultaneous, although for very long orbital periods on the order of centuries this can be a good approximation. The orbit of $\beta$~Pic~b is of order decades \citep[e.g.,][]{2012A&A...542A..41C,2014ApJ...794..158N}, so more detailed analysis is needed to produce a companion mass from combining these three types of measurements. Our approach is described in detail in \cite{Brandt_Dupuy_Bowler_2018} and briefly here. \begin{figure*} \centerline{ \includegraphics[width=0.30\linewidth]{BPIC-xy.pdf} \hskip -0.1 truein \includegraphics[width=0.30\linewidth]{BPIC-rv.pdf} \hskip -0.1 truein \includegraphics[width=0.30\linewidth]{BPIC-hgca.pdf} \hskip -1.3 truein \includegraphics[width=0.30\linewidth]{BPIC-colorbar.pdf} } \vskip 0.0 truein \caption{Our joint orbit fit to relative astrometry (left), RVs (middle), and absolute astrometry of the host star from HGCA (right). In all panels, the thick black line indicates the highest likelihood orbit, and thin lines are 100~orbits drawn randomly from our posterior distribution colored according to orbital eccentricity. {\bf Left:} Small filled circles along the maximum likelihood deprojected orbit indicate epochs spaced by 2~years from 2002 until 2022. The dotted line indicates periastron.
Open symbols of different shapes and colors are plotted along the maximum likelihood orbit at the epochs corresponding to the relative astrometry used in our analysis. {\bf Middle:} Over $10^3$ RVs for $\beta$~Pic\ from \citet{2012A&A...542A..18L} are plotted as small blue dots, displaying a large jitter of $269\pm6$\,m\,s$^{-1}$. The bottom panel shows $\beta$~Pic~b's RV relative to its host star along with the measurement of $-15.4\pm1.7$\,\mbox{km\,s$^{-1}$}\ from \citet{2014Natur.509...63S}. {\bf Right:} Each plotted measurement is the difference between the proper motion measured in one mission ({\sl Hipparcos}\ in 1991.3 or {\sl Gaia}\ in 2015.6) and the proper motion computed from the change in RA and Dec between the two missions. The strongest constraint on acceleration caused by $\beta$~Pic~b comes from {\sl Hipparcos}\ given the large astrometric errors for $\beta$~Pic\ in {\sl Gaia}~DR2.} \label{fig:orbit} \end{figure*} \begin{deluxetable*}{lccc} \tablecaption{MCMC Orbital Posteriors for $\beta$ Pic b \label{tbl:mcmc-BPIC}} \setlength{\tabcolsep}{0.10in} \tabletypesize{\tiny} \tablewidth{0pt} \tablehead{ \colhead{Property} & \colhead{Median $\pm$1$\sigma$} & \colhead{95.4\% c.i.} & \colhead{Prior} } \startdata \multicolumn{4}{c}{Fitted parameters} \\[1pt] \cline{1-4} \multicolumn{4}{c}{} \\[-5pt] Companion mass $M_{\rm comp}$ (\mbox{$M_{\rm Jup}$}) & $13.1_{-3.2}^{+2.8}$ & 7.2, 19.5 & $1/M$ (log-flat) \\[3pt] Host-star mass $M_{\rm host}$ (\mbox{$M_{\sun}$}) & $1.84\pm0.05$ & 1.74, 1.94 & $1/M$ (log-flat) \\[3pt] Parallax (mas) & $51.60_{-0.39}^{+0.40}$ & 50.82, 52.37 & $\exp[-0.5((\varpi-\varpi_{\rm DR2})/\sigma[\varpi_{\rm DR2}])^2]$ \\[3pt] Semimajor axis $a$ (AU) & $11.8_{-0.9}^{+0.8}$ & 10.3, 13.7 & $1/a$ (log-flat) \\[3pt] Inclination $i$ (\mbox{$^{\circ}$}) & $88.87\pm0.08$ & 88.71, 89.04 & $\sin(i)$, $0\mbox{$^{\circ}$} < i < 180\mbox{$^{\circ}$}$ \\[3pt] $\sqrt{e}\sin{\omega}$ & $-0.080_{-0.029}^{+0.027}$ & $-$0.134, $-$0.017 & uniform \\[3pt] 
$\sqrt{e}\cos{\omega}$ & $-0.48\pm0.05$ & $-$0.59, $-$0.36 & uniform \\[3pt] Mean longitude at $t_{\rm ref}=2455197.5$~JD, $\lambda_{\rm ref}$ (\mbox{$^{\circ}$}) & $150\pm4$ & 142, 159 & uniform \\[3pt] PA of the ascending node $\Omega$ (\mbox{$^{\circ}$}) & $31.65\pm0.09$ & 31.48, 31.82 & uniform \\[3pt] RV zero point (m\,s$^{-1}$) & $73_{-15}^{+14}$ & 45, 103 & uniform \\[3pt] RV jitter $\sigma$ (m\,s$^{-1}$) & $269\pm6$ & 257, 281 & $1/\sigma$ (log-flat) \\[3pt] \cline{1-4} \multicolumn{4}{c}{} \\[-5pt] \multicolumn{4}{c}{Computed properties} \\[1pt] \cline{1-4} \multicolumn{4}{c}{} \\[-5pt] Orbital period $P$ (yr) & $29.9_{-3.2}^{+2.9}$ & 24.1, 36.8 & \nodata \\[3pt] Semimajor axis (mas) & $610_{-50}^{+40}$ & 530, 700 & \nodata \\[3pt] Eccentricity $e$ & $0.24\pm0.06$ & 0.13, 0.35 & \nodata \\[3pt] Argument of periastron $\omega$ (\mbox{$^{\circ}$}) & $189.3_{-2.9}^{+3.0}$ & 182.3, 195.5 & \nodata \\[3pt] Time of periastron $T_0=t_{\rm ref}-P\frac{\lambda-\omega}{360\mbox{$^{\circ}$}}$ (JD)& $2456380_{-60}^{+80}$ & 2456210, 2456520 & \nodata \\[3pt] Mass ratio $q = M_{\rm comp}/M_{\rm host}$ & $0.0068_{-0.0016}^{+0.0015}$ & 0.0038, 0.0101 & \nodata \enddata \tablecomments{The $\chi^2$ of relative astrometry is 35.5 for separations and 32.3 for PAs, with 42 measurements for each. The $\chi^2$ of the {\sl Hipparcos}\ and {\sl Gaia}~DR2 proper motion differences is 1.09 for four measurements. For the parallax, we use a combination of the original and re-reduced {\sl Hipparcos}\ measurements re-weighted according to \citet{Brandt_2018}, $\varpi_{\rm HGCA} = 51.61\pm0.39$\,mas.} \end{deluxetable*} \begin{figure*} \vskip -1.75 truein \includegraphics[width=1.0\linewidth]{BPIC-posterior.pdf} \vskip -2.25 truein \caption{Marginalized distributions for four orbital parameters (histograms) along with their joint posteriors (grayscale images with contours) for the $\beta$~Pic\ system. 
In histograms, the thick solid lines indicate the highest likelihood orbit, and dashed and dotted lines show 1$\sigma$ and 2$\sigma$ ranges, respectively. In 2-d plots, the 1$\sigma$ and 2$\sigma$ areas of the joint posteriors are indicated by dark dashed contours and lighter dash-dotted contours, respectively. The strongest covariance is between eccentricity and semimajor axis such that both smaller, less eccentric and larger, more eccentric orbits are consistent with observations. Given the positive correlation between semimajor axis and period (not shown), less eccentric orbits also correspond to shorter orbital periods.} \label{fig:posterior} \end{figure*} Posteriors of orbital parameters were determined using the parallel-tempering Markov chain Monte Carlo (PT-MCMC) ensemble sampler in \texttt{emcee~v2.1.0} \citep{2013PASP..125..306F} based on the algorithm described by \citet{2005PCCP....7.3910E}. We ran 30 temperatures and 100 walkers fitting for eleven parameters, including the masses of the host star ($M_{\rm host}$) and planet ($M_{\rm comp}$). Eight others define the orbit, including the zero point of the system velocity (RV$_{\rm zero}$) and the intrinsic RV jitter ($\sigma_{\rm jit}$). We also included parallax ($\varpi$) as a fitted parameter with a Gaussian prior based on the measured value. For the initial step, we drew random values according to our priors across all valid parameter space, where for log-flat priors we used bounds of 0.3--3.0\,\mbox{$M_{\sun}$}\ in $M_{\rm host}$, 0.001--0.1\,\mbox{$M_{\sun}$}\ in $M_{\rm comp}$, 1--100\,AU in $a$, and 0.3--300\,m\,s$^{-1}$ in $\sigma_{\rm jit}$. We used $3\times10^5$ steps in our PT-MCMC analysis, saving every 50th step of our chains. After ensuring that all walkers had stabilized in the mean and standard deviation of the posterior for each of the parameters we discarded all but the last $10^3$ samples as the burn-in portion yielding $10^5$ PT-MCMC samples across all walkers in the cold chain. 
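The ``computed properties'' in Table~\ref{tbl:mcmc-BPIC} follow deterministically from the fitted parameters. A minimal sketch of those conversions, using the median values from the table as illustrative inputs:

```python
import math

MJUP_IN_MSUN = 0.0009543  # Jupiter mass in solar masses

# Median fitted values from the table (illustrative inputs)
sqrt_e_sin_w = -0.080
sqrt_e_cos_w = -0.48
a_au   = 11.8                  # semimajor axis (AU)
m_host = 1.84                  # host-star mass (Msun)
m_comp = 13.1 * MJUP_IN_MSUN   # companion mass (Msun)

# Eccentricity and argument of periastron from the fitted combination
e = sqrt_e_sin_w**2 + sqrt_e_cos_w**2
omega_deg = math.degrees(math.atan2(sqrt_e_sin_w, sqrt_e_cos_w)) % 360.0

# Orbital period from Kepler's third law (AU / Msun / yr units)
period_yr = math.sqrt(a_au**3 / (m_host + m_comp))
# e ~ 0.24, omega ~ 189 deg, P ~ 30 yr, matching the tabulated values
```

The $(\sqrt{e}\sin\omega, \sqrt{e}\cos\omega)$ parameterization is a standard choice that avoids the $e$--$\omega$ degeneracy at low eccentricity.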
Table~\ref{tbl:mcmc-BPIC} provides information on all our priors and posteriors, Figure~\ref{fig:orbit} shows our orbit fit compared to the input measurements, and Figure~\ref{fig:posterior} shows posteriors of astrophysically important parameters. \section{Discussion} \label{sec:discussion} Previous work has established key aspects of the orbit of $\beta$~Pic~b, such as the viewing geometry of the nearly edge-on orbit and total system mass \citep[e.g.,][]{2012A&A...542A..41C,2014ApJ...794..158N,2014PNAS..11112661M,2014Natur.509...63S,2016AJ....152...97W}. The reflex motion induced on $\beta$~Pic\ by $\beta$~Pic~b is this same orbit scaled down by the mass ratio. Because of stellar proper motion, detecting this reflex motion and obtaining a dynamical mass for $\beta$~Pic~b requires measuring nonlinear perturbations on the motion of $\beta$~Pic. In principle, RVs could determine a mass for $\beta$~Pic~b, but the substantial RV jitter on such a young, active star ($\sigma_{\rm jit} = 269\pm6$\,m\,s$^{-1}$) hampers the measurement of the $\sim$100\,m\,s$^{-1}$ expected semiamplitude of the planet. HGCA reports deviations from constant proper motion of only 1--2$\sigma$ for $\beta$~Pic. Our joint fit of astrometry and RVs yields a mass posterior of $13\pm3$\,\mbox{$M_{\rm Jup}$}\ (23\% uncertainty) for $\beta$~Pic~b. This is not as precise as the value of $11\pm2$\,\mbox{$M_{\rm Jup}$}\ (18\% uncertainty) from \citet{2018NatAs.tmp..114S} because we adopted cross-calibrated {\sl Hipparcos}\ and {\sl Gaia}~DR2 astrometry, which \citet{Brandt_2018} found requires error inflation of reported astrometric errors in both catalogs. Moreover, our analysis does not assume a host-star mass, although it broadly supports previous assumptions of 1.75\,\mbox{$M_{\sun}$}\ with a remarkably precise model-independent mass of $1.84\pm0.05$\,\mbox{$M_{\sun}$}. 
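The $\sim$100\,m\,s$^{-1}$ expected semiamplitude quoted above can be estimated from the fitted elements with the standard RV formula; a sketch, again using the median values as illustrative inputs:

```python
import math

AU_M = 1.495978707e11   # AU in meters
YR_S = 3.15576e7        # Julian year in seconds
MJUP_IN_MSUN = 0.0009543

# Median orbital elements (illustrative inputs)
a_au, p_yr, e, inc_deg = 11.8, 29.9, 0.24, 88.87
m_host, m_comp = 1.84, 13.1 * MJUP_IN_MSUN

# The host star's orbit about the barycenter is the relative orbit
# scaled down by m_comp / (m_host + m_comp)
a_host_m = a_au * AU_M * m_comp / (m_host + m_comp)

# Semiamplitude K = 2 pi a_host sin(i) / (P sqrt(1 - e^2))
K = (2.0 * math.pi * a_host_m * math.sin(math.radians(inc_deg))
     / (p_yr * YR_S * math.sqrt(1.0 - e**2)))
# K comes out near 80 m/s, of order the quoted ~100 m/s and well
# below the 269 m/s jitter
```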
Our mass determination for $\beta$~Pic~b is chiefly driven by the small offset between the {\sl Hipparcos}\ proper motion and the {\sl Hipparcos}-to-{\sl Gaia}\ positional difference, as was the case in \citet{2018NatAs.tmp..114S}, because the uncertainty in the {\sl Gaia}~DR2 proper motion is very large due to $\beta$~Pic\ being saturated in {\sl Gaia}. Combining our mass for $\beta$~Pic~b with the luminosity of $\log(\mbox{$L_{\rm bol}$}/\mbox{$L_{\sun}$}) = -3.78\pm0.03$\,dex determined by \cite{2015ApJ...815..108M} we calculate upper and lower limits on the substellar cooling age from the hot-start evolutionary models of \citet{2008ApJ...689.1327S}. We use the same method described in \citet{2017ApJS..231...15D}, with a uniform prior in age, our orbit posterior as the prior on mass, and rejection-sampling on \mbox{$L_{\rm bol}$}\ to select models consistent with $\beta$~Pic~b. The posterior on the age is wide, as expected given the low-precision mass. The 3$\sigma$ confidence interval on the cooling age ranges from 7--65\,Myr. Combining the 7\,Myr lower limit with external age information for $\beta$~Pic\ and its eponymous young moving group directly determines the amount of time that could have elapsed between the formation of the host star and planet. Adopting the $\beta$~Pic\ moving group age of $22\pm6$\,Myr from \citet{2017AJ....154...69S}, we thus find an upper limit of $15\pm6$\,Myr in the difference between the times of formation (a.k.a.\ $t = 0$) for $\beta$~Pic~b and its host star. This is not particularly constraining on theory, but improved precision in the mass of $\beta$~Pic~b in the future will result in stronger tests of the timescale of giant planet formation. Our dynamical mass of $13\pm3$\,\mbox{$M_{\rm Jup}$}\ for $\beta$~Pic~b is broadly consistent with hot-start formation models, as these predict a mass of $13.0^{+0.4}_{-0.3}$\,\mbox{$M_{\rm Jup}$}\ at $22\pm6$\,Myr \citep{2018AJ....156...57D}. 
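The rejection-sampling scheme for the cooling age can be illustrated schematically. The sketch below substitutes a toy cooling law $L \propto M^2/t$ for the \citet{2008ApJ...689.1327S} tracks and a Gaussian stand-in for the mass posterior, so its numbers illustrate only the method, not the quoted 7--65\,Myr interval:

```python
import math, random

random.seed(0)

LOGL_OBS, LOGL_ERR = -3.78, 0.03   # measured log(L/Lsun) and its error

def toy_logl(mass_mjup, age_myr):
    # Toy cooling law L ~ M^2 / t, normalized so that 13 Mjup at
    # 22 Myr reproduces the observed luminosity (NOT a real model)
    return (LOGL_OBS + 2.0 * math.log10(mass_mjup / 13.0)
            - math.log10(age_myr / 22.0))

ages = []
for _ in range(20000):
    age = random.uniform(1.0, 100.0)    # uniform age prior (Myr)
    mass = random.gauss(13.1, 3.0)      # stand-in for the mass posterior
    if mass <= 0.0:
        continue
    # Keep the draw with probability given by the Lbol likelihood
    weight = math.exp(-0.5 * ((toy_logl(mass, age) - LOGL_OBS)
                              / LOGL_ERR) ** 2)
    if random.random() < weight:
        ages.append(age)
# The surviving `ages` sample approximates the cooling-age posterior
```

Because the likelihood weight is bounded by one, accepting each draw with that probability is a valid rejection step; the low-precision mass translates directly into the wide age posterior described above.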
The high-mass end of our posterior is also consistent with warm-start models. Our 2$\sigma$ upper limit on the mass of $\beta$~Pic~b is 19.5\,\mbox{$M_{\rm Jup}$}\ (0.019\,\mbox{$M_{\sun}$}). Interpolating hot-start evolutionary tracks from \citet{2008ApJ...689.1327S}, an object of this mass should have a luminosity of $\log(\mbox{$L_{\rm bol}$}/\mbox{$L_{\sun}$}) = -3.1$\,dex at an age of 22\,Myr. The actual luminosity of $\beta$~Pic~b is 0.7\,dex (1.7\,mag) fainter than this. This decrement corresponds to the intermediate range of 10-\mbox{$M_{\rm Jup}$}\ warm-start models from \citet[][see their Figure~9]{2012ApJ...745..174S} and is highly inconsistent with cold-start models that are $\approx$5\,mag fainter than hot-start tracks at 22\,Myr. However, warm-start models would need to be computed beyond 10\,\mbox{$M_{\rm Jup}$}\ for a more accurate appraisal. Our relative orbit for $\beta$~Pic~b is consistent with past work within the uncertainties, but our posteriors are notably lacking any near-circular orbits, with $e<0.1$ excluded at $>2\sigma$. Previous work was generally consistent with eccentricities up to 0.1--0.2 but preferred more circular orbits, unlike our orbit fit ($e=0.24\pm0.06$). Based on tests using various subsets of the relative astrometry, we find that our results are simply the consequence of combining all published measurements in a joint fit. Recent results from \citet{2016AJ....152...97W} did not have access to the VLT/SPHERE measurements, and \citet{2018arXiv180908354L} used only VLT astrometry in their analysis. We note that our choice to exclude eight relative astrometry outliers out of 50 measurements does not significantly impact this result. We ran an identical PT-MCMC using all 50 measurements, with errors of 4\,mas and 0$\fdg$3 added in quadrature to all separations and PAs in order to make a reasonable $\chi^2$. All parameter posteriors were very similar, including a slightly higher eccentricity of $0.28\pm0.06$. 
The strong preference of our fit for non-circular orbits has implications for the origin of $\beta$~Pic~b and its history of dynamical interactions with the disk. While the focus of the literature has been on the planet--disk interaction as a means to explain the disk warp \citep[e.g.,][]{2011ApJ...743L..17D}, the eccentricity may help explain other observations \citep{2015ApJ...800..136A,2015ApJ...811...18M,2016AJ....152...97W}. The higher eccentricity for the planet ($e\sim0.2$) is also consistent with the exo-comet hypothesis put forth to explain the occasional absorption features in the host star's spectrum. \citet{2001A&A...376..621T} note that the frequency of observed events is well explained by the excitation of cometary bodies in a 3:1 resonance with a massive perturber at roughly 10\,AU. However, the location from which these comets would be launched lies somewhat inside the inner edge of the disk as fit by \citet{2015ApJ...811...18M}. Simulations of the planet--disk interaction at lower values of $e$ have not successfully explained the observed inner edge and disk scale height \citep{2015ApJ...811...18M,2015ApJ...815...61N}. These authors also note that a low eccentricity ($e<0.1$) is unable to account for the observed northeast--southwest brightness asymmetry in the disk. Although several authors have proposed a possible unseen second planet to explain these features \citep{2015ApJ...800..136A,2015ApJ...811...18M,2015ApJ...815...61N,2016AJ....152...97W}, further modeling with $\beta$~Pic~b alone at a higher eccentricity may be warranted. While the higher eccentricity for $\beta$~Pic~b may help explain some present-day disk observations, it is consistent with a wide range of past formation scenarios. Giant planets are thought to form on relatively circular orbits due to efficient damping in the natal protoplanetary disk \citep{2011ARA&A..49..195A}, but many mechanisms can subsequently pump their eccentricities. 
High eccentricities can easily be generated by secular, resonant, or scattering interactions with a massive perturber in the form of another planet or nearby stars \citep{1997Natur.386..254H,1998ApJ...508L.171L,1996Sci...274..954R,2008ApJ...686..621F}. The perturber responsible for $\beta$~Pic~b's eccentricity need not remain in the system and be observable today; it could have been ejected by a strong scattering event. As noted above, the presence of a second planet is favored in some models to explain disk structures. Detailed analysis of long-term RV monitoring excludes much of the parameter space for additional planets \citep{2018A&A...612A.108L}, and while absolute astrometry can potentially rule out more planets \citep{2018arXiv181108902K}, the fact that $\beta$~Pic~b is only marginally detected in current observations complicates the interpretation of additional astrometric signals due to more planets. A second massive planet is not, however, required to generate an eccentricity as high as $e=0.2$--0.3. In principle, migration of a very massive planet like $\beta$~Pic~b through a massive gas disk \citep[e.g.,][]{2001A&A...366..263P,2018MNRAS.474.4460R} or even a planetesimal disk \citep{1998Sci...279...69M} can generate substantial eccentricity growth. While more modest-mass planets have their eccentricities damped by the disk, planets with masses $\gtrsim10\,\mbox{$M_{\rm Jup}$}$ can have their eccentricities pumped through interaction with the 3:1 outer Lindblad resonance: the planet excites eccentricity in the disk, which back-reacts to excite eccentricity in the planet \citep{2012ARA&A..50..211K}. \citet{2001A&A...366..263P} explicitly predict that a massive eccentric planet inside a disk cavity is a natural outcome of this process. A key implication of this mechanism is that $\beta$~Pic~b formed exterior to its current orbit. 
\setlength{\tabcolsep}{0.047in} \begin{deluxetable*}{lcccccccccccc} \tablecaption{Predicted Future Astrometry and Radial Velocities for $\beta$ Pic b \label{tbl:future}} \tablewidth{0pt} \tablehead{ \colhead{Epoch} & \colhead{} & \multicolumn{3}{c}{Separation (mas)} & \colhead{} & \multicolumn{3}{c}{PA (\mbox{$^{\circ}$})} & \colhead{} & \multicolumn{3}{c}{$\Delta{\rm RV}$ (\mbox{km\,s$^{-1}$})} \\ \colhead{} & \colhead{} & \colhead{$e=0.10$} & \colhead{$e=0.20$} & \colhead{$e=0.30$} & \colhead{} & \colhead{$e=0.10$} & \colhead{$e=0.20$} & \colhead{$e=0.30$} & \colhead{} & \colhead{$e=0.10$} & \colhead{$e=0.20$} & \colhead{$e=0.30$} } \startdata 2019 Jan 1 & & $176\pm 3$ & $173\pm 3$ & $169\pm 3$ & & $28.21\pm0.31$ & $27.97\pm0.33$ & $27.79\pm0.35$ & & \phn$ 2.92\pm0.23$ & $ 1.24\pm0.20$ & $-0.25\pm0.19$\phs \\ 2020 Jan 1 & & $301\pm 3$ & $299\pm 3$ & $296\pm 3$ & & $29.81\pm0.20$ & $29.63\pm0.21$ & $29.49\pm0.22$ & & \phn$ 5.76\pm0.29$ & $ 3.68\pm0.24$ & $ 1.87\pm0.22$ \\ 2021 Jan 1 & & $407\pm 3$ & $412\pm 3$ & $413\pm 3$ & & $30.50\pm0.15$ & $30.32\pm0.17$ & $30.19\pm0.18$ & & \phn$ 8.07\pm0.32$ & $ 5.68\pm0.27$ & $ 3.61\pm0.25$ \\ 2022 Jan 1 & & $490\pm 3$ & $508\pm 3$ & $517\pm 3$ & & $30.92\pm0.13$ & $30.73\pm0.14$ & $30.59\pm0.15$ & & \phn$ 9.78\pm0.33$ & $ 7.24\pm0.28$ & $ 5.00\pm0.26$ \\ 2023 Jan 1 & & $545\pm 6$ & $585\pm 4$ & $608\pm 3$ & & $31.24\pm0.11$ & $31.02\pm0.12$ & $30.86\pm0.13$ & & $10.89\pm0.30$ & $ 8.41\pm0.28$ & $ 6.10\pm0.26$ \\ 2024 Jan 1 & & \phn$571\pm10$ & $642\pm 6$ & $684\pm 4$ & & $31.51\pm0.11$ & $31.25\pm0.11$ & $31.06\pm0.12$ & & $11.40\pm0.24$ & $ 9.22\pm0.26$ & $ 6.95\pm0.26$ \\ 2025 Jan 1 & & \phn$567\pm16$ & $679\pm 9$ & $747\pm 6$ & & $31.78\pm0.10$ & $31.44\pm0.10$ & $31.23\pm0.11$ & & $11.29\pm0.17$ & $ 9.70\pm0.22$ & $ 7.58\pm0.25$ \\ 2026 Jan 1 & & \phn$533\pm23$ & \phn$695\pm13$ & $794\pm 8$ & & $32.06\pm0.11$ & $31.62\pm0.10$ & $31.37\pm0.11$ & & $10.59\pm0.18$ & $ 9.86\pm0.18$ & $ 8.02\pm0.23$ \\ 2027 Jan 1 & & 
\phn$471\pm30$ & \phn$690\pm18$ & \phn$827\pm11$ & & $32.40\pm0.14$ & $31.80\pm0.10$ & $31.51\pm0.10$ & & \phn$ 9.30\pm0.31$ & $ 9.73\pm0.14$ & $ 8.30\pm0.20$ \\ 2028 Jan 1 & & \phn$383\pm38$ & \phn$664\pm24$ & \phn$845\pm15$ & & $32.88\pm0.22$ & $31.99\pm0.10$ & $31.63\pm0.10$ & & \phn$ 7.44\pm0.52$ & $ 9.31\pm0.13$ & $ 8.42\pm0.17$ \enddata \tablecomments{Computed from subsets of our posterior selected by rejection sampling using Gaussian eccentricity priors with $\sigma_e = 0.01$.} \end{deluxetable*} Given the still limited observational coverage of the $\approx$30-year orbit of $\beta$~Pic~b, and a particular lack of data on the northeastern side, its eccentricity is still relatively uncertain. In Table~\ref{tbl:future}, we provide predicted astrometry and RVs for three representative eccentricities from our PT-MCMC posterior ($e=0.1$, 0.2, and 0.3). In the near term, the RV of $\beta$~Pic~b is the most discriminating between different eccentricities, but after a few years separation measurements will cleanly define the orbit. Lower eccentricity orbits predict smaller separations in the next decade and a more imminent turnaround toward decreasing separation. \acknowledgments We thank the referee for a thoughtful and timely review. This work has made use of data from the European Space Agency mission {\sl Gaia}\ (\url{https://www.cosmos.esa.int/gaia}), processed by the Gaia Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. T.J.D.\ acknowledges research support from Gemini Observatory. T.D.B.\ gratefully acknowledges support from the Heising-Simons foundation and from NASA under grant \#80NSSC18K0439. Our research has employed NASA ADS; SIMBAD; VizieR; and J.~R.~A.\ Davenport's IDL implementation of the cubehelix color scheme \citep{2011BASI...39..289G}.
\section{#1}\indent} \renewcommand{\theequation}{\thesection.\arabic{equation}} \textwidth 165mm \textheight 220mm \renewcommand{\thefootnote}{\fnsymbol{footnote}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\vs}[1]{\vspace{#1 mm}} \newcommand{\hs}[1]{\hspace{#1 mm}} \renewcommand{\a}{\alpha} \renewcommand{\b}{\beta} \renewcommand{\c}{\gamma} \renewcommand{\d}{\delta} \newcommand{\epsilon}{\epsilon} \newcommand{\omega}{\omega} \newcommand{\Gamma}{\Gamma} \renewcommand{\Im}{{\rm Im}\,} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\partial}{\partial} \newcommand{\frac{dz}{2\pi i}}{\frac{dz}{2\pi i}} \newcommand{\nearrow \kern-1em \searrow}{\nearrow \kern-1em \searrow} \newcommand{{1 \over2}}{{1 \over2}} \newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}} \newcommand{\PL}[1]{Phys.\ Lett.\ {\bf #1}} \newcommand{\CMP}[1]{Comm.\ Math.\ Phys.\ {\bf #1}} \newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}} \newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}} \newcommand{\PTP}[1]{Prog.\ Theor.\ Phys.\ {\bf #1}} \newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}} \newcommand{\IJMP}[1]{Int.\ Jour.\ Mod.\ Phys.\ {\bf #1}} \newcommand{\AJ}[1]{Astorophys. 
\ J.\ {\bf #1}} \newcommand{\JMP}[1]{J.\ Math.\ Phys.\ {\bf #1}} \newcommand{\ZETF}[1]{Zh.\ Eksp.\ Teor.\ Fiz.\ {\bf #1}} \newcommand{\GRG }[1]{ Gen.\Rel.\ and Grav.\ { \bf #1 } } \newcommand{\wedge}{\wedge} \newcommand{{\cal{Z}}}{{\cal{Z}}} \newcommand{{\cal{G}}}{{\cal{G}}} \newcommand{{\cal{M}}}{{\cal{M}}} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\displaystyle}{\displaystyle} \newcommand{\tilde}{\tilde} \makeatletter \def\eqnarray{% \stepcounter{equation}% \let\@currentlabel=\theequation \global\@eqnswtrue \global\@eqcnt\omega@ \tabskip\@centering \let\\=\@eqncr $$\halign to \displaywidth\bgroup\@eqnsel\hskip\@centering $\displaystyle\tabskip\omega@{##}$&\global\@eqcnt\@ne \hfil$\displaystyle{{}##{}}$\hfil &\global\@eqcnt\tw@$\displaystyle\tabskip\omega@{##}$\hfil \tabskip\@centering&\llap{##}\tabskip\omega@\cr} \makeatother \begin{document} \begin{titlepage} \setcounter{page}{0} \begin{flushright} EPHOU 96-008\\ December 1996\\ \end{flushright} \vs{6} \begin{center} {\Large On Explicit Evaluations Around the Conformal Point in \\$N=2$ Supersymmetric Yang-Mills Theories } \vs{6} {\large Takahiro Masuda \footnote{e-mail address: [email protected]} \\ and \\ Hisao Suzuki\footnote{e-mail address: [email protected]}}\\ \vs{6} {\em Department of Physics, \\ Hokkaido University \\ Sapporo, Hokkaido 060 Japan} \\ \end{center} \vs{6} \centerline{{\bf{Abstract}}} We show how to give the expression for periods, Higgs field and its dual of $N=2$ supersymmetric Yang-Mills theory around the conformal point. This is achieved by evaluating the integral representation in the weak coupling region, and by using analytic continuation to the conformal point. The explicit representation is shown for the $SU(2)$ theory with matter fields and also for pure $SU(N)$ and pure $SO(2N)$ theory around the conformal point where the relation to the beta function of the theory is clarified. 
We also discuss a relation between the fixed points in the $SU(2)$ theories with matter fields and the Landau-Ginzburg point of 2-D $N=2$ SCFT. \end{titlepage} \newpage \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \sect{Introduction} In recent years much progress has been made on four-dimensional $N=2$ supersymmetric Yang-Mills theories. Seiberg and Witten \cite{SW} solved the low energy effective theory of the $SU(2)$ theory without matter fields exactly, based on duality and holomorphy, by introducing an elliptic curve. Following this work, various generalizations that introduce matter fields or gauge groups of rank higher than $SU(2)$ have been investigated by many people \cite{KLTY,APS,HO,DS,Hanany,AS,MW,CDF}. The solutions of these theories have been extensively studied in the weak coupling region by various methods, such as solving the Picard-Fuchs equations \cite{KLT,IY} and perturbative treatments to obtain the prepotential \cite{DKP,Ohta}. Direct tests have been made by comparison with the instanton method in the case of the $SU(2)$ theory with matter fields \cite{IS1,HS}, and even for higher rank gauge groups \cite{IS2}, with consistent results. Apart from the analysis in the weak coupling region, the power of the exact results should be brought to bear on the strong coupling region, where one finds truly non-perturbative results. Among the analyses of the strong coupling region, one of the striking facts is the existence of conformal points \cite{AD,APSW,EHIY}, where the prepotentials have no dependence on the dynamical mass scale. These theories are classified by their scaling behaviors around the conformal point \cite{EHIY}. It seems interesting to investigate the theories around these points by deriving the explicit form of the fields, which should provide us with a more concrete picture of the critical theories.
In previous works \cite{MS1,MS2} we evaluated the integral representations of the period, the Higgs field and its dual in such situations. The results for the $SU(2)$ Yang-Mills theory with matter fields \cite{MS1} can be analytically continued to the conformal point when the bare masses take their critical values. In this article, we generalize this approach to investigate the expressions around the conformal point. We treat the moduli parameters and bare masses as deviations from the conformal point. After evaluating the integral representation explicitly in the region where only one parameter is very large while the other parameters are near the conformal point, we analytically continue the large parameter to the neighborhood of the conformal point. By use of this analytic continuation we obtain the expressions around the conformal point. Usually, analytic continuation to a region where a logarithmic singularity exists must be treated with care, because the result may depend on the choice of variables. In other words, some choices of variables are valid only within a certain branch. However, when we consider the analytic continuation to the critical point, where there is no such singularity, the confirmation of the analytic continuation turns out to be easy, as we will see in each case. As a matter of fact, this approach can be considered a generalization of the one used to obtain the periods of Calabi-Yau systems \cite{BCOFHJQ}. Of course, there is a variety of classes of conformal points \cite{EHIY}, so we cannot exhaust all known cases. In this paper, we deal with the $SU(2)$ Yang-Mills theory with massive hypermultiplets, and with the $SU(N)$ and $SO(2N)$ Yang-Mills theories without matter fields. We also provide expressions for the Higgs field and its dual around the conformal point for the pure $SU(N)$ and $SO(2N)$ Yang-Mills theories. This article is organized as follows.
In section 2, we obtain the expression of the fields around the conformal point in the case of the $SU(2)$ theory with matter fields, and verify that the result recovers our previous result, which was obtained by transformations of the hypergeometric functions \cite{MS1}, when the deviations of the mass parameters from the conformal point are set to zero. We also discuss the relation to 2-D $N=2$ SCFT through a simple correspondence to the deformation of the curve on $W\bf{CP}^2$. This relation has been pointed out recently in a different context in ref. \cite{LMW}. In section 3, we derive the form of the Higgs field and its dual in the pure $SU(N)$ theory around the conformal point and clarify the relation to the beta function of the theory. In section 4, we study the pure $SO(2N)$ theory around the conformal point and discuss the validity of the expression. The last section is devoted to some discussion. \sect{$SU(2)$ Yang-Mills theories with matter fields} In this section we treat the $SU(2)$ theory with $N_f$ matter fields ($N_f\le 3$), to verify that our approach recovers the previous results obtained by transformations of the hypergeometric functions \cite{MS1}. First of all we consider the $N_f=1$ case, whose curve is given by \begin{eqnarray} y^2&=&x^2(x-u)+{1\over 4}m\Lambda^3 x-{\Lambda^6\over 64}. \end{eqnarray} In order to calculate the periods \begin{eqnarray} {\partial a\over \partial u}=\oint_{\alpha}{dx\over y},\ \ \ {\partial a_D\over \partial u}=\oint_{\beta}{dx\over y}, \end{eqnarray} around the conformal point $u={3\over 4}\Lambda^2,\ m={3\over 4}\Lambda$, where the curve degenerates to $y^2=(x-{\Lambda^2\over 4})^3$, we introduce the deviations from the conformal point as $\tilde u=u-{3\over 4}\Lambda^2,\ \tilde m= m-{3\over 4}\Lambda$. The strategy is that, after calculating the period in the weak coupling region $\tilde u\sim \infty$, we analytically continue it to the conformal point $\tilde u\sim 0$.
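As a quick check (a verification step we add here for completeness), substituting the critical values $u={3\over 4}\Lambda^2$ and $m={3\over 4}\Lambda$ into the curve indeed yields a perfect cube, \begin{eqnarray} y^2=x^3-{3\over 4}\Lambda^2 x^2+{3\over 16}\Lambda^4 x-{\Lambda^6\over 64}=\left(x-{\Lambda^2\over 4}\right)^3,\nonumber \end{eqnarray} so that all three branch points collide at $x={\Lambda^2\over 4}$ and both cycles degenerate simultaneously.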
It should be noted that similar consideration has been used to evaluate periods for Calabi-Yau manifolds \cite{BCOFHJQ}. Rescaling the variable as $x={1\over 4}\Lambda^2 z$ the curve becomes \begin{eqnarray} y^2=({\Lambda^2\over 4})^3(z-1)^3-({\Lambda^2\over 4})^2\tilde u z^2-({\Lambda^2\over 4})^2 \Lambda mz, \end{eqnarray} and thus we consider $\tilde u\over \Lambda^2$ and $\tilde m\over \Lambda$ as perturbations from the conformal point. Expanding the period with respect to $1/\tilde u$ we have the expression for the period as follows: \begin{eqnarray} \oint{dx\over y}= \tilde u^{-1/2}\int_{-i\infty}^{i\infty}{ds\over 2\pi i} \sum_{m=0}^{\infty}& &{\Gamma({1\over 2}+s) \Gamma(s+1)\Gamma(-s)\over \Gamma({1\over 2})\Gamma(s-m+1)m! }\nonumber \\ & &\times \oint dz\, (1-z)^{3(s-m)}z^{-2s+m-1}\left( {\Lambda^2 \over 4\tilde u}\right)^s\left({\tilde m\over \Lambda}\right)^m, \end{eqnarray} where we have introduced Barnes-type integral representation. From this expression we can find that ${\partial a\over \partial u}$ is obtained by picking up poles $z=0$ along $\alpha$ cycle in the weak coupling region. 
In this way, we find ${\partial a \over \partial u}$ in the weak coupling region is of the form \begin{eqnarray} {\partial a\over \partial u}&=&{\sqrt 3\over \tilde u^{1/2}2\pi} \sum_{n,m=0}^{\infty}{\Gamma(n+m+{1\over 2})\Gamma(n+{1\over 3}) \Gamma(n+{2\over 3})\over \Gamma(n+{m\over 2}+{1\over 2}) \Gamma(n+{m\over 2}+1)n!m!} \left(-{27\Lambda^2\over 16\tilde u}\right)^n \left({\Lambda\tilde m\over 8 \tilde u}\right)^m\nonumber \\ &=&{\sqrt 3 \over \tilde u^{1/2}2\pi}\sum_{m=0}^ {\infty}{\Gamma(m+{1\over 2})\Gamma({1\over 3})\Gamma({2\over 3})\over \Gamma({m+1\over 2})\Gamma({m\over 2}+1)m!} \left({\Lambda\tilde m\over 8 \tilde u}\right)^m\\ & &\hspace{4cm}\times \, _3F_2\left({1\over 3}, {2\over 3},{1\over 2}+m;{m+1\over 2} ,{m\over 2}+1;-{27\Lambda^2\over 16\tilde u}\right),\nonumber \end{eqnarray} where $_3F_2$ is the generalized hypergeometric function, which is defined as \cite{HTF} \begin{eqnarray} _pF_{p-1}(a_1,\cdots,a_p;b_1,\cdots,b_{p-1};z)= \sum_{n=0}^{\infty}{(a_1)_n\cdots (a_p)_n\over (b_1)_n\cdots (b_{p-1})_n} {z^n\over n!},\ \ (a)_n={\Gamma(a+n)\over \Gamma(a)}. 
\end{eqnarray} Integrating with respect to $\tilde u$ of (2.5) we have Higgs field $a$ up to mass residue in the weak coupling region in the following form \begin{eqnarray} a&=&{\sqrt 3 \tilde u^{1/2}\over \pi}\sum_{m=0}^ {\infty}{\Gamma(m+{1\over 2})\Gamma({1\over 3})\Gamma({2\over 3})\over \Gamma({m+1\over 2})\Gamma({m\over 2}+1)m!} \left({\Lambda\tilde m\over 8 \tilde u}\right)^m\\ & &\hspace{4cm}\times \, _3F_2\left({1\over 3}, {2\over 3},m-{1\over 2};{m+1\over 2} ,{m\over 2}+1;-{27\Lambda^2\over 16\tilde u}\right).\nonumber \end{eqnarray} The analytic continuation from this expression to around the conformal point can be performed to obtain \begin{eqnarray} a&=&\sqrt{\pi}\tilde u^{1/2}\left({27\Lambda^2\over 16\tilde u}\right)^{-1/3} \sum_{m=0}^{\infty}\left\{{\Gamma({1\over 3})\Gamma({1\over 3})\Gamma(m- {5\over 6})\over \Gamma({11\over 6})\Gamma(-{1\over 6})\Gamma(m+{1\over 3})m!} \left({\Lambda \tilde m\over 4\tilde u}\right)^m\right. \nonumber \\ & & \hspace{5cm}\times \ _3F_2\left({1\over 3},-{m\over 2}+{5\over 6},-{m\over 2}+{1\over 3}; {2\over 3},-m+{11\over 6}; -{16\tilde u\over 27\Lambda^2}\right) \nonumber\\ & & - \left({27\Lambda^2\over 16\tilde u}\right)^{-1/3} {\Gamma({2\over 3})\Gamma(-{1\over 3})\Gamma(m-{7\over 6})\over \Gamma(-{7\over 6})\Gamma({13\over 6})\Gamma(m-{1\over 3})m!} \left({\Lambda \tilde m\over 4\tilde u}\right)^m \\ & &\hspace{4cm} \left. \times\ _3F_2\left({2\over 3},-{m\over 2}+{7\over 6},-{m\over 2}+{2\over 3}; {4\over 3},-m+{13\over 6}; -{16\tilde u\over 27\Lambda^2}\right) \right\}. \nonumber \end{eqnarray} If we set $\tilde m=0$ we can recover the previous result \cite{MS1} where $a$ is represented by the generalized hypergeometric function $_3F_2$ in terms of $\tilde u$. Next we consider $a_D$. In this case we integrate (2.4) from $z=0$ to $z=1$ and evaluate double poles which give the logarithmic terms \cite{MS2}. 
Quite similarly $a_D$ in the weak coupling region can be written as \begin{eqnarray} {\partial a_D\over \partial u}&=&{-1\over 2(-)^{1\over 2} \tilde u^{1\over 2}\pi}\sum_{n,m} {\Gamma(n+m+{1\over 2})\Gamma(3n+1)\over \Gamma({1\over 2}) \Gamma(n+1)^2\Gamma(2n+m+1)}\left({\Lambda^2\over 4\tilde u}\right)^n \left(\Lambda\tilde m\over 4\tilde u\right)^m\nonumber \\ & &\times\left[\psi(n+m+{1\over 2})+3\psi(3n+1)-2\psi(n+1)- 2\psi(2n+m+1)+\ln\left(\Lambda^2\over 4\tilde u\right)\right], \end{eqnarray} where $\psi(z)$ is defined by ${d\Gamma(z)\over dz}=\psi(z)\Gamma(z)$. Analytic continuation to the region $\tilde u\sim 0$ and integration with respect to $\tilde u$ give the expression up to mass residue around the conformal point as follows \begin{eqnarray} a_D&=&{-\sqrt \pi\tilde u^{1\over 2}\over (-1)^{1\over 2}} \left({27\Lambda^2\over 16\tilde u}\right)^{-1/3} \sum_{m=0}^{\infty}\left\{{\Gamma({1\over 3})\Gamma({1\over 3})\Gamma(m- {5\over 6})\over \Gamma({11\over 6})\Gamma(-{1\over 6})\Gamma(m+{1\over 3})m!} \left({\Lambda m\over 4\tilde u}\right)^m\right. \nonumber \\ & & \hspace{5cm}\times \ _3F_2\left({1\over 3},-{m\over 2}+{5\over 6},-{m\over 2}+{1\over 3}; {2\over 3},-m+{11\over 6}; -{16\tilde u\over 27\Lambda^2}\right) \nonumber\\ & & +\left({27\Lambda^2\over 16\tilde u}\right)^{-1/3} {\Gamma({2\over 3})\Gamma(-{1\over 3})\Gamma(m-{7\over 6})\over \Gamma(-{7\over 6})\Gamma({13\over 6})\Gamma(m-{1\over 3})m!} \left({\Lambda m\over 4\tilde u}\right)^m \\ & &\hspace{4cm} \left. \times\ _3F_2\left({2\over 3},-{m\over 2}+{7\over 6},-{m\over 2}+{2\over 3}; {4\over 3},-m+{13\over 6}; -{16\tilde u\over 27\Lambda^2}\right) \right\}. \nonumber \end{eqnarray} As the parameter approaching the point ${\tilde m\over \tilde u}\rightarrow 0,\ {\tilde u\over \Lambda^2}\rightarrow 0$, we find $a=a_D$, which implies that the theory is completely free theory. Therefore the conformal point is certainly the fixed point where the beta function of the theory vanishes. 
Since $a,\ a_D\sim \tilde u^{5\over 6}$ near the conformal point and $a,\ a_D$ are propotional to mass scale of the theory, the conformal dimention of $\tilde u$ is ${6\over 5}$, which has been observed in \cite{APSW}. In the $N_f=2$ theory, we use the curve of forth order: \begin{eqnarray} y^2&=&(x^2-u+{\Lambda^2\over 8})^2-\Lambda^2(x+m_1)(x+m_2)\\ &=&(x^2-u+{\Lambda^2\over 8})^2-\Lambda^2(x^2+Mx+N), \end{eqnarray} where we introduce symmetrized mass parameters $M=m_1+m_2$ and $N=m_1m_2$. We shift the parameters from the conformal point as $\tilde u=u-{3\Lambda^2\over 8},\ \ \tilde M=M-\Lambda,\ \ \tilde N=N-{\Lambda^2\over 4}$, and rescale $x+{\Lambda\over 2}=\tilde u^{1/2}z$, we find that the curve can be written as \begin{eqnarray} y^2=\tilde u^2(z^2-1)^2-2\Lambda\tilde u^{3\over 2}(z^3-z)- \Lambda^2\tilde M\tilde u^{1\over 2}z-\Lambda^2\tilde N', \end{eqnarray} where $\tilde N'=\tilde N-\tilde M\Lambda/2$. As is the case of $N_f=1$, we evaluate the period in the weak coupling region by expanding with respect to $1/\tilde u$ and mass parameters, and by picking up poles at $z=1$ along the $\alpha$ cycle to find \begin{eqnarray} {\partial a\over \partial u}&=& {\tilde u^{-{1\over 2}}\over 2\pi} \sum_{m,l=0}^{\infty}{\Gamma(2\alpha_{l,m}+{1\over 2}) \Gamma(\beta_{l,m}+{1\over 2}) \over \Gamma(l+1)\Gamma(m+1)\Gamma(4\alpha_{l,m}+1)} \left(-{\Lambda^4\tilde M^2\over 4\tilde u^3}\right)^l \left({\Lambda^2\tilde N'\over \tilde u^2}\right)^m \\ & & \hspace{1cm} \times _4F_3\left(\alpha_{l,m}+{1\over 4},\alpha_{l,m}+{3\over 4}, l+{1\over 2},\beta_{l,m}+{1\over 2}; {1\over 2},2\alpha_{l,m}+{1\over 2},2\alpha_{l,m}+1 ;-{\Lambda^2\over \tilde u}\right) \nonumber \\ &+&{\tilde u^{-{1\over 2}}\over 2\pi}\left({2\Lambda\over \tilde u^{1\over 2}} \right)\left({\Lambda^2\tilde M\over \tilde u^{3\over 2}}\right) \sum_{m,l=0}^{\infty}{\Gamma(2\alpha_{l,m}+{5\over 2}) \Gamma(\beta_{l,m}+{5\over 2})\over \Gamma(l+1)\Gamma(m+1)\Gamma(4\alpha_{l,m}+4)} \left(-{\Lambda^4\tilde 
M^2\over 4\tilde u^3}\right)^l \left({\Lambda^2\tilde N'\over \tilde u^2}\right)^m \nonumber \\ & &\hspace{1cm} \times _4F_3\left(\alpha_{l,m}+{5\over 4},\alpha_{l,m}+{ 7\over 4},l+{3\over 2},\beta_{l,m}+{5\over 2};{3\over 2}, 2\alpha_{l,m}+{5\over 2},2\alpha_{l,m}+2;-{\Lambda^2\over \tilde u}\right),\nonumber \end{eqnarray} where $\alpha_{l,m}=l+{m\over 2}$, $\beta_{l,m}=2m+3l$. In the $N_f=2$ theory, ${\partial a\over \partial u}$ has a additional part which is propotional to $\tilde M$ and vanishes when $\tilde M=0$. Next we consider ${\partial a_D\over \partial u}$. Performing the line integral from $z=0$ to $z=1$ and evaluating double poles, we have ${\partial a_D\over \partial u}$ in the weak coupling region as \begin{eqnarray} {\partial a_D\over \partial u}&=& {\tilde u^{-{1\over 2}}\over 4\pi^2 i} \sum_{l,m,n}{\Gamma(2\alpha_{l,m}+{1\over 2}) \Gamma(\beta_{l,m}+{1\over 2}) \over \Gamma(l+1)\Gamma(m+1)\Gamma(4\alpha_{l,m}+1)} \left(-{\Lambda^4\tilde M^2\over 4\tilde u^3}\right)^l \left({\Lambda^2\tilde N'\over \tilde u^2}\right)^m \left(-{\Lambda^2\over \tilde u}\right)^n\nonumber \\ & &\hspace{1cm} \times {(\alpha_{l,m}+{1\over 4})_n(\alpha_{l,m}+{3\over 4})_n(l+{1\over 2})_n (\beta_{l,m}+{1\over 2})_n\over ({1\over 2})_n(2\alpha_{l,m}+ {1\over 2})_n(2\alpha_{l,m}+1)_n} \left[ \ln\left(-{\Lambda^2\over \tilde u}\right)\right.\\ & &\hspace{2cm}+\psi_n(\alpha_{l,m}+{1\over 4})+\psi_n(\alpha_{l,m}+{3\over 4}) +\psi_n(\beta_{l,m}+{1\over 2})\nonumber \\ & &\hspace{3cm}\left.+\psi_n(l+{1\over 2}) -\psi_n(1)-\psi_n({1\over 2})- \psi_n(2\alpha_{l,m}+{1\over 2})- \psi_n(2\alpha_{l,m}+1)\right]\nonumber \\ &+&{\tilde u^{-{1\over 2}}\over 4\pi^2 i}\left({2\Lambda\over \tilde u}\right) \left({\Lambda^2\tilde M\over \tilde u^{3\over 2}}\right) \sum_{l,m,n}{\Gamma(2\alpha_{l,m}+{5\over 2}) \Gamma(\beta_{l,m}+{5\over 2})\over \Gamma(l+1)\Gamma(m+1)\Gamma(4\alpha_{l,m}+4)} \left(-{\Lambda^4\tilde M^2\over 4\tilde u^3}\right)^l \left({\Lambda^2\tilde N'\over \tilde 
u^2}\right)^m \nonumber \\ & &\hspace{1cm}\times\left(-{\Lambda^2\over \tilde u}\right)^n {(\alpha_{l,m}+{5\over 4})_n(\alpha_{l,m}+{7\over 4})_n(l+{3\over 2})_n (\beta_{l,m}+{5\over 2})_n\over ({3\over 2})_n(2\alpha_{l,m}+{5\over 2})_n(2\alpha_{l,m}+2)_n} \left[ \ln\left(-{\Lambda^2\over \tilde u}\right)\right. \nonumber \\ & &\hspace{2cm}+\psi_n(\alpha_{l,m}+{5\over 4})+ \psi_n(\alpha_{l,m}+{7\over 4})+ \psi_n(\beta_{l,m}+{5\over 2})\nonumber \\ & &\hspace{3cm}\left.+\psi_n(l+{3\over 2})-\psi_n(1)-\psi_n({3\over 2})- \psi_n(2\alpha_{l,m}+2)-\psi_n(2\alpha_{l,m}+{5\over 2})\right],\nonumber \end{eqnarray} where $\psi_n(\alpha)=\psi(n+\alpha)$. Analytic continuation of ${\partial a\over \partial u}$ and ${\partial a_D \over \partial u}$ to the region $\tilde u\sim 0$ gives four kinds of $_4F_3$, and $a$ and $a_D$ are also represented by $_4F_3$ after integration with respect to $\tilde u$. By defining $\Phi$ as \begin{eqnarray} \Phi(\delta,\epsilon;\rho,\sigma,\mu)&=&\,_4F_3 \left(-\alpha_{l,m}+\delta,-\alpha_{l,m}+\delta+{1\over 2}, \alpha_{l,m}+\epsilon,\alpha_{l,m}+\epsilon+{1\over 2}\right. \\ & &\hspace{5cm}\left.;\alpha_{l,m}-\beta_{l,m}+\rho,{m\over 2}+\sigma,\mu; -{\tilde u\over \Lambda^2}\right),\nonumber \end{eqnarray} and using this function, we find that $a$ around the conformal point can be written in the form: \begin{eqnarray} a&=&{\sqrt \pi\tilde u^{1\over 2}\over 2} \sum_{m,l}\left(-{\Lambda^2\tilde M^2\over \tilde u^2}\right)^l \left({\Lambda\tilde N'\over 2\tilde u^{3\over 2}}\right)^m {1\over \Gamma({1\over 2})\Gamma(m+1)\Gamma(2l+1)\sqrt 2} \nonumber \\ &\times &\left[c_1 \left({\Lambda^2\over \tilde u}\right)^{-{1\over 4}} {\Gamma(-{m\over 2}+{1\over 4})\Gamma(\beta_{l,m}-\alpha_{l,m}+{1\over 4}) \over \Gamma(\alpha_{l,m}+{3\over 4}) \Gamma({1\over 4}-\alpha_{l,m})}\right. 
\Phi\left({1\over 4},{1\over 4};{7\over 4},{3\over 4},{1\over 2}\right) \nonumber \\ & &-c_2\left({\Lambda^2\over \tilde u}\right)^{-{3\over 4}} {\Gamma(2\alpha_{l,m}+{1\over 2})\Gamma(-{m\over 2}-{1\over 4}) \Gamma(\beta_{l,m}-\alpha_{l,m}-{1\over 4})\over \Gamma(\alpha_{l,m}+{1\over 4}) \Gamma(-\alpha_{l,m}-{1\over 4})\Gamma(2\alpha_{l,m}-{1\over 2})} \Phi\left({3\over 4},{3\over 4};{9\over 4},{5\over 4},{3\over 2}\right) \\ & &+c_3\left({\Lambda^2\over \tilde u}\right)^{-{3\over 4}} \left({\Lambda^2\tilde M\over 2\tilde u^{3\over 2}} \right) {\Gamma(2\alpha_{l,m}+{5\over 2})\Gamma({1\over 4}-{m\over 2}) \Gamma(\beta_{l,m}-\alpha_{l,m}+{5\over 4})\over \Gamma(\alpha_{l,m}+{3\over 4})\Gamma({1\over 4}-\alpha_{l,m}) \Gamma(2\alpha_{l,m}+{3\over 2})} \Phi\left(-{1\over 4},{3\over 4};{3\over 4},{3\over 4}, {1\over 2}\right) \nonumber\\ & &\left.-c_4\left({\Lambda^2\over \tilde u}\right)^{-{5\over 4}} \left({\Lambda^2\tilde M\over 2\tilde u^{3\over 2}} \right) {\Gamma(2\alpha_{l,m}+{5\over 2})\Gamma(-{m\over 2}-{1\over 4}) \Gamma(\beta_{l,m}-\alpha_{l,m}+{3\over 4}) \over \Gamma(\alpha_{l,m}+{1\over 4})\Gamma(-\alpha_{l,m}-{1\over 4}) \Gamma(2\alpha_{l,m}+{3\over 2})} \Phi\left({1\over 4},{5\over 4};{5\over 4},{5\over 4},{3\over 2}\right)\right] ,\nonumber \end{eqnarray} where $c_1=c_2=c_3=c_4=1$. We find that the expression for $a_D$ is given by changing $c_i$ as $c_1=c_3=(-1)^m,\ c_2=c_4=-(-1)^m$. If we set $\tilde M=\tilde N'=0$ we can recover the previous result \cite{MS1}. As in the $N_f=1$ theory, we see that the conformal point is the fixed point of this theory from the relation $a\sim a_D$ on this point. Reading the leading power of the expression (2.17), we see that the conformal dimension of $\tilde u$ is ${4\over 3}$ \cite{APSW}. 
In the $N_f=3$ theory, the curve is given by \begin{eqnarray} y^2&=&(x^2-u+{\Lambda\over 4}x+{(m_1+m_2+m_3)\Lambda\over 8})^2 -\Lambda(x+m_1)(x+m_2)(x+m_3) \nonumber \\ &=&(x^2-u+{\Lambda\over 4}x+{\Lambda L\over 8})^2-\Lambda( x^3+Lx^2+Mx+N), \end{eqnarray} where $L=m_1+m_2+m_3,\ \ M=m_1m_2+m_2m_3+m_3m_1,\ \ N=m_1m_2m_3$. We shift the parameter from the conformal point as $u'=u-{\Lambda^2\over 32},\ \ \tilde L=L-{3\Lambda\over 8},\ \ \tilde M=M-{3\Lambda^2\over 64},\ \ \tilde N=N-{\Lambda^3\over 512}$, the curve becomes \begin{eqnarray} y^2=(x+{\Lambda\over 8})^3(x-{7\Lambda\over 8})-2(u'- {\Lambda\tilde L\over 8})(x+{\Lambda\over 8})^2- \Lambda(\tilde Lx^2+\tilde Mx+\tilde N) +(u'-{\Lambda\tilde L\over 8})^2. \end{eqnarray} Setting $\tilde u=u'-{\Lambda\tilde L\over 8}$ and rescaling $x+{\Lambda\over 8}=\tilde u^{1/2}z$, we find that the curve can be written as \begin{eqnarray} y^2=\tilde u^2(z^2-1)^2-\tilde u^{3\over 2}\Lambda z^3- \tilde L\Lambda \tilde u z^2-\tilde u^{1\over 2} Az+ B, \end{eqnarray} where $A=\Lambda\tilde M-{\Lambda^2\tilde L\over 4},\ B= {\Lambda^2\tilde L^2\over 64}+{\Lambda^2\tilde M\over 8} -\Lambda\tilde N$. Evaluation of the integral for the period and analytic continuation from $\tilde u\sim \infty$ to $\tilde u\sim 0$ are same as $N_f=2$ case. In this way, we can obtain the period in the weak coupling region in the form: \begin{eqnarray} {\partial a\over \partial u}&=& {\tilde u^{-{1\over 2}}\over 2\sqrt \pi} \sum_{l,m,p,q}^{\infty} {\Gamma(3\eta_{l,p}+{1\over 2}) \Gamma(\omega_{l,p,q}+{1\over 2})\over \Gamma({1\over 2})\Gamma(2\chi_{l,p,q}+1) l!m!(2p)!}\left({\Lambda\tilde L\over \tilde u}\right)^l \left({ A^2\over \tilde u^3}\right)^p \left({B\over \tilde u^2}\right)^q \nonumber \\ & & \times \ _4F_3\left(\eta_{l,p}+{1\over 6}, \eta_{l,p}+{1\over 2},\eta_{l,p} +{5\over 6}, \omega_{l,p,q}+{1\over 2}\right. \\ & &\hspace{5cm}\left. 
;{1\over 2},\chi_{l,p,q}+{1\over 2},\chi_{l,p,q}+1; -{27\Lambda^2\over 256\tilde u}\right).\nonumber \\ &-& {\tilde u^{-{1\over 2}}\over 2\sqrt \pi} \left({\Lambda A\over \tilde u^2}\right) \sum_{l,m,p,q}^{\infty} {\Gamma(3\eta_{l,p}+{5\over 2}) \Gamma(\omega_{l,p,q}+{5\over 2})\over \Gamma({1\over 2})\Gamma(2\chi_{l,p,q}+3) l!m!(2p+1)!}\left({\Lambda\tilde L\over \tilde u}\right)^l \left({ A^2\over \tilde u^3}\right)^p \left({B\over \tilde u^2}\right)^q \nonumber \\ & & \times \ _4F_3\left(\eta_{l,p}+{5\over 6}, \eta_{l,p}+{7\over 6},\eta_{l,p} +{9\over 6}, \omega_{l,p,q}+{5\over 2}\right. \\ & &\hspace{5cm}\left. ;{3\over 2},\chi_{l,p,q}+{3\over 2},\chi_{l,p,q}+2; -{27\Lambda^2\over 256\tilde u}\right),\nonumber \end{eqnarray} \begin{eqnarray} {\partial a_D\over \partial u}&=& {\tilde u^{-{1\over 2}}\over 4\pi^2 i} \sum_{l,m,p,q}^{\infty} {\Gamma(3\eta_{l,p}+{1\over 2}) \Gamma(\omega_{l,p,q}+{1\over 2})\over \Gamma(2\chi_{l,p,q}+1) l!m!(2p)!}\left({\Lambda\tilde L\over \tilde u}\right)^l \left({ A^2\over \tilde u}\right)^p \left({B\over \tilde u^2}\right)^q \nonumber \\ & & \times \sum_{n=0}^{\infty} {(\eta_{l,p}+{1\over 6})_n(\eta_{l,p}+{1\over 2})_n ( \eta_{l,p}+{5\over 6})_n (\omega_{l,p,q}+{1\over 2})_n\over ({1\over 2})_n(\chi_{l,p,q}+{1\over 2})_n(\chi_{l,p,q}+1)_n n!} \left(-{27\Lambda^2\over 256\tilde u}\right)^n \nonumber \\ & & \hspace{1cm}\times \left[\ln\left(-{27\Lambda^2\over 256\tilde u}\right)+ \psi_n(\eta_{l,p}+{1\over 6})+\psi_n(\eta_{l,p}+{1\over 2}) +\psi_n(\eta_{l,p}+{5\over 6})\right. \\ & & \hspace{1cm} \left. 
+\psi_n(\omega_{l,p,q}+{1\over 2}) -\psi_n({1\over 2})-\psi_n(\chi_{l,p,q}+{1\over 2})-\psi_n( \chi_{l,p,q}+1)-\psi_n(1)\right]\nonumber \\ &-& {\tilde u^{-{1\over 2}}\over 4\pi^2 i} \left({\Lambda A\over \tilde u^2}\right) \sum_{l,m,p,q}^{\infty} {\Gamma(3\eta_{l,p}+{5\over 2}) \Gamma(\omega_{l,p,q}+{5\over 2})\over \Gamma(2\chi_{l,p,q}+3) l!m!(2p+1)!}\left({\Lambda\tilde L\over \tilde u}\right)^l \left({ A^2\over \tilde u}\right)^p \left({B\over \tilde u^2}\right)^q \nonumber \\ & &\times \sum_{n=0}^{\infty} {(\eta_{l,p}+{5\over 6})_n(\eta_{l,p}+{7\over 6})_n ( \eta_{l,p}+{9\over 6})_n (\omega_{l,p,q}+{5\over 2})_n\over ({3\over 2})_n(\chi_{l,p,q}+{3\over 2})_n(\chi_{l,p,q}+2)_n n!} \left(-{27\Lambda^2\over 256\tilde u}\right)^n \nonumber \\ & & \hspace{1cm}\times \left[\ln\left(-{27\Lambda^2\over 256\tilde u}\right)+ \psi_n(\eta_{l,p}+{5\over 6})+\psi_n(\eta_{l,p}+{7\over 6}) +\psi_n(\eta_{l,p}+{9\over 6})\right.\nonumber \\ & & \hspace{1cm} \left. +\psi_n(\omega_{l,p,q}+{5\over 2}) -\psi_n({3\over 2})-\psi_n(\chi_{l,p,q}+{3\over 2})-\psi_n( \chi_{l,p,q}+2)-\psi_n(1)\right], \nonumber \end{eqnarray} where \begin{eqnarray} \eta_{l,p}={l\over 3}+{p\over 3},\ \ \omega_{l,p,q}=l+3p+2q,\ \ \chi_{l,p,q}={l\over 2}+p+{q\over 2}. \end{eqnarray} By analytic continuation and by integration with respect to $\tilde u$, we obtain $a$ around the conformal point in the form: \begin{eqnarray} a&=&-{2u^{1\over 2}}\sum_{l,p,q} {2^{\chi_{l,p,q}+1} \over l!(2p)!q!3^{\eta_{l,p}}} \left({\Lambda\tilde L\over \tilde u}\right)^l\left({A^2\over \tilde u^3}\right)^p \left({B\over \tilde u^2}\right)^q \left(-{256\tilde u\over 27\Lambda^2}\right)^{-\eta_{l,p}} \nonumber \\ &\times & \left\{{c_1\Gamma({1\over 3})\Gamma({2\over 3})\Gamma(\omega_{l,p,q}-\eta_{l,p} +{1\over 3})\Gamma(\eta_{l,p}+{1\over 6}) \over \Gamma({1\over 3}-\eta_{l,p}) \Gamma({1\over 3}+\chi_{l,p,q}-\eta_{l,p})\Gamma( {5\over 6}+\chi_{l,p,q}-\eta_{l,p})}\left(-{256\tilde u\over 27\Lambda^2}\right)^ {1\over 6}\right. 
\Psi\left({1\over 6},{1\over 6};{1\over 3},{2\over 3},{5\over 3}\right) \nonumber \\ & &+{c_2\Gamma(-{1\over 3})\Gamma({1\over 3})\Gamma(\omega_{l,p,q}- \eta_{l,p})\Gamma(\eta_{l,p}+{1\over 2})\over \Gamma(-\eta_{l,p})\Gamma(\chi_{l,p,q}-\eta_{l,p}) \Gamma({1\over 2}+\chi_{l,p,q}-\eta_{l,p})} \left(-{256\tilde u\over 27\Lambda^2}\right)^{1\over 2} \Psi\left({1\over 2},{1\over 2};{2\over 3},{4\over 3},2\right)\\ & &+\left. {c_3\Gamma(-{2\over 3})\Gamma(-{1\over 3})\Gamma(\omega_{l,p,q}-\eta_{l,p} -{1\over 3})\Gamma(\eta_{l,p}+{5\over 6})\over \Gamma(-\eta_{l,p}-{1\over 3})\Gamma(\chi_{l,p,q}-\eta_{l,p}-{1\over 3}) \Gamma(\chi_{l,p,q}-\eta_{l,p}+{1\over 6})} \left(-{256\tilde u\over 27\Lambda^2}\right)^{5\over 6} \Psi\left({5\over 6},{5\over 6} ;{4\over 3},{5\over 3},{7\over 3}\right)\right\} \nonumber\\ &+&{2\pi\sqrt{\pi}\Lambda A\over \tilde u^{1\over 2}}\sum_{l,p,q} {2^{\chi_{l,p,q}+2} \over l!(2p)!q!3^{\eta_{l,p}+2}} \left({\Lambda\tilde L\over \tilde u}\right)^l\left({A^2\over \tilde u^3}\right)^p \left({B\over \tilde u^2}\right)^q \left(-{256\tilde u\over 27\Lambda^2}\right)^{-\eta_{l,p}} \nonumber \\ &\times & \left\{{c_4\Gamma({1\over 3})\Gamma({2\over 3})\Gamma(\omega_{l,p,q}-\eta_{l,p} +{5\over 3})\Gamma(\eta_{l,p}+{5\over 6}) \over \Gamma({2\over 3}-\eta_{l,p}) \Gamma({2\over 3}+\chi_{l,p,q}-\eta_{l,p})\Gamma( {7\over 6}+\chi_{l,p,q}-\eta_{l,p})}\left(-{256\tilde u\over 27\Lambda^2}\right)^ {5\over 6}\right. \Psi\left({1\over 3},-{1\over 6};{1\over 3},{2\over 3},{1\over 3}\right) \nonumber \\ & &+{c_5\Gamma(-{1\over 3})\Gamma({1\over 3})\Gamma(\omega_{l,p,q}- \eta_{l,p}+{4\over 3})\Gamma(\eta_{l,p}+{7\over 6})\over \Gamma({1\over 3}-\eta_{l,p})\Gamma({1\over 3}+\chi_{l,p,q}-\eta_{l,p}) \Gamma({5\over 6}+\chi_{l,p,q}-\eta_{l,p})} \left(-{256\tilde u\over 27\Lambda^2}\right)^{7\over 6} \Psi\left({2\over 3},{1\over 6};{2\over 3},{4\over 3},{2\over 3}\right) \nonumber \\ & &+\left. 
{c_6\Gamma(-{2\over 3})\Gamma(-{1\over 3})\Gamma(\omega_{l,p,q}-\eta_{l,p} +1)\Gamma(\eta_{l,p}+{3\over 2})\over \Gamma(-\eta_{l,p})\Gamma(\chi_{l,p,q}-\eta_{l,p}) \Gamma(\chi_{l,p,q}-\eta_{l,p}+{1\over 2})} \left(-{256\tilde u\over 27\Lambda^2}\right)^{3\over 2} \Psi\left(1,1;{4\over 3},{5\over 3},1\right)\right\},\nonumber \end{eqnarray} where \begin{eqnarray} \Psi(a,b;c,d,e)=\! _4F_3\left(\eta_{l,p}+a,\eta_{l,p}+a+{1\over 2}, \eta_{l,p}-\chi_{l,p,q}+b\right. &,&\eta_{l,p}-\chi_{l,p,q}+b+{1\over 2} \\ & &\left. ; c,d,\eta_{l,p}-\omega_{l,p,q}+e;-{256\tilde u\over 27\Lambda^2}\right),\nonumber \end{eqnarray} and $c_1=\cdots =c_6=1$. The expression for $a_D$ is given by changing $c_i$ as $c_1=\cot(\eta_{l,p}+{1\over 6})\pi,\ c_2=\cot(\eta_{l,p}+{1\over 2})\pi,\ c_3=c_4= \cot(\eta_{l,p}+{5\over 6})\pi,\ c_5= \cot(\eta_{l,p}+{7\over 6})\pi,\ c_6=\cot(\eta_{l,p}+{3\over 2})\pi$. If we set $\tilde L=\tilde M=\tilde N=0$, i.e., $A=B=0$ and $\tilde u=u'=u- {\Lambda^2\over 32}$, we can recover the previous result \cite{MS1}. As was the case of $N_f=1,2$, the relation $a\sim a_D$ hold on the conformal point, therefore we can recognize the conformal point is the fixed point of this theory. Let us compare our expressions to the ones obtained by the expansion around the different point from the conformal point. If we consider the massive theory as the generalization from the massless theory, we would treat the bare mass parameter as the deviation from the massless theory. In order to see the behaviour of the field $a$ and $a_D$ in the weak coupling region in this case, we expand the meromorphic differential $\lambda$ with respect to $\Lambda$ and mass parameters, and evaluate the integral representation along the corresponding cycle. 
For example in the case of $N_f=1$, $\lambda$, $a$ and $a_D$ are given by \begin{eqnarray} \lambda&=&{x\ dx\over 2\pi i\ y}\left({(x^2-u)\over 2(x+m)}-(x^2-u)'\right), \nonumber \\ a&=&\oint_{\alpha}\lambda,\ \ \ \ \ a_D=\oint_{\beta}\lambda, \end{eqnarray} where we use the curve of forth order. The result of the calculation for the field $a$ can be written as \begin{eqnarray} a&=&{\sqrt{u}\over 12 \sqrt{\pi}}\sum_{n,l=0} {\Gamma(n-{1\over 6})\Gamma(n+{1\over 6})\Gamma( l-n)\Gamma(l+3n-{1\over 2})\over \Gamma(n+1)\Gamma(-n)\Gamma(3n-{1\over 2})\Gamma(l+{1\over 2})n!l!} \left({m^2\over u}\right)^l\left({ -27\Lambda^6\over 256u^3}\right)^n \\ &+&{3\sqrt u\over 32\sqrt{\pi}}\left({\Lambda^3m\over u^2}\right) \sum_{n,l=0}{\Gamma(n+{7\over 6})\Gamma(n+{5\over 6}) \Gamma(l-n)\Gamma(l+3n+{3\over 2})\over (2n+1)\Gamma(n+1)\Gamma(-n)\Gamma(3n+{3\over 2})\Gamma(l+{3\over 2}) n!l!}\left({m^2\over u}\right)^l\left({ -27\Lambda^6\over 256u^3}\right)^n.\nonumber \end{eqnarray} In the massless limit, this expression reduces to the previous result obtained by solving the Picard-Fucks equation \cite{IY}, which is represented by using the Gauss' hypergeometric function. The expression (2.28) can be verified by expanding the following expression which is represented by using the modular invariant form \cite{MS1}: \begin{eqnarray} {\partial a\over \partial u}&=&(-D)^{-{1\over 4}}F\left({1\over 12}, {5\over 12};1;-{27\Delta\over 4D^3}\right),\nonumber \\ \Delta&=&-\Lambda^6(256u^3-256u^2m^2-288um\Lambda^3+256m^3\Lambda^3+27\Lambda^6),\\ D&=&-16u^2+12m\Lambda^3.\nonumber \end{eqnarray} in the weak coupling region, and by comparing two expressions order by order after $u$ integration. In the $N_f=2,3$ case, instead of integrating $\lambda$ to obtain fields $a$ and $a_D$, we can evaluate ${\partial a\over \partial u}$ and ${\partial a_D\over \partial u}$ by expanding around the massless point in a similar manner. 
The results are expressed in terms of the following arguments: \begin{eqnarray} {1\over 64}\left({\Lambda^2\over u}\right)^2\ (N_f=2),\hspace{1cm} {1\over 256}\left({\Lambda^2\over u}\right)\ (N_f=3), \end{eqnarray} and appropriate combinations of mass parameters. These are identical to the argument of the hypergeometric function describing the massless theories \cite{IY}. These powers of $\Lambda$ are equivalent to the powers of the instanton term of the curve, and vary as the number of matters we have introduced. On the contrary, the argument of the expression we have derived in this section is simple compared to (2.29), which is the argument based of the the deviation from the conformal point, and the form of these deviations does not depent on the number of the matters as we have seen in this section. Thus if we use the parametrization from the conformal point, the theory can be described by using the simple deviation from the conformal point even in such case that we discuss the weak coupling behaviour. Furthermore the expression around the massless point in the $N_f=1,2$ case can be obtained from our expression for the $N_f=3$ case by taking suitable double scaling limit to decouple the irrelevant mass parameters. These are obvious advantages to observe the behaviour of the theory by using the expression around the conformal point. Before closing this section, we discuss the relation between 4-D $SU(2)$ $N=2$ supersymmetric QCD and 2-D $N=2$ SCFT, which has been partially analized in our previous paper \cite{MS1}. Let us review the Landau-Ginzburg description of 2-D $N=2$ superconformal minimal models with $c=3$ which describe the torus. Since the theory with central charge $c=3k/k+2$ corresponds to the Landau-Ginzburg potential $x^{k+2}$, we have three type of description; $(k=1)^3$, $(k=2)^2$ and $(k=1)(k=4)$, as \begin{eqnarray} f_1&=&x^3+y^3+z^3,\nonumber \\ f_2&=&x^4+y^4+z^2, \\ f_3&=&x^6+y^3+z^2. 
\nonumber \end{eqnarray} These are known as the algebraic curve on the (weighted) complex projective space $(W)\bf{CP}^2$ with homogeneous coordinates $[x,y,z]$ describing singular torus, and their typical deformation in one parameter $\psi$ are following \begin{eqnarray} \tilde E_6\ :\ &f&={x^3\over 3}+{y^3\over 3}+{z^3\over 3}-\psi_6 xyz=0, \nonumber \\ \tilde E_7\ :\ &f&={x^4\over 4}+{y^4\over 4}+{z^2\over 2}-\psi_7 xyz=0,\\ \tilde E_8\ :\ &f&={x^6\over 6}+{y^3\over 3}+{ z^2\over 2}-\psi_8 xyz=0,\ \nonumber \end{eqnarray} where we have used appropriate normalization. We can evaluate the period ${\cal W}$: \begin{eqnarray} {\cal W}=\psi\int_{\Gamma} {dxdydz\over f}, \end{eqnarray} on each curve in the region $\psi\sim \infty$ by picking up poles of the integrant expanded by $1/\psi$. Altanative approach to obtain the period is solving the Picard-Fuchs equation corresponding to these curves \begin{eqnarray} (1-y)y{d^2{\cal W}\over dy^2}+(1-2y){d{\cal W}\over dy}-{1\over \alpha} (1-{1\over \alpha}) {\cal W}=0, \end{eqnarray} where $y=\psi^{-\alpha}$ and $\alpha=3\ (\tilde E_6),\ 4\ (\tilde E_7),\ 6\ (\tilde E_8)$. As a result, periods are expressed as linear combinations of $F({1\over \alpha},1-{1\over \alpha},1; y)$ and $F^*({1\over \alpha},1-{1\over \alpha},1; y)$ around $y=0$ where $F$ is Gauss' hypergeometric function $_2F_1$, and $F^*$ is another independent solution corresponding to $F$. Comparing these results to the expression obtained by setting mass deviations zero in the results we have derived in this section, or the more obvious expression in \cite{MS1}, we find that periods of $\tilde E_6,\ \tilde E_7$ and $\tilde E_8$ curves are identical to the periods ${\partial a\over \partial u},\ {\partial a_D\over \partial u}$ of 4-D $N=2$ supersymmetric $SU(2)$ QCD with $N_f=1,\ 2$ and $3$ matter fields respectively in the weak coupling region $\tilde u\sim \infty$ when the theory has the conformal point. 
In this way, we can find a simple identification between the moduli parameter of each theory, which is \begin{eqnarray} \psi^{\alpha}\ \longleftrightarrow\tilde u, \end{eqnarray} up to irrelevant constant factors, and Landau-Ginzburg point $\psi=0$ of torus corresponds to the fixed point $\tilde u=0$ of N=2 supersymmetric SU(2) QCD. This is another confirmation of our expression around the critical points. It is also interesting that another toric description of torus: \begin{eqnarray} {1\over 2}x^2+{1\over 2}y^2-\psi zw=0,\ \ \ {1\over 2}z^2+{1\over 2}w^2-xy=0, \end{eqnarray} can be regarded as the curve which corresponds to $N_f=0$ curve whose parameter $\psi^2$ can be identified by deviation from the dyon point. \sect{$SU(N)$ pure Yang-Mills theories} In this section we study pure $SU(N)$ theory. In this case the curve is given by \begin{eqnarray} y^2=(x^N-\sum_{i=2}^Ns_ix^{N-i})^2-\Lambda^{2N}, \end{eqnarray} where $s_i\ (2\le i\le N)$ are gauge invariant moduli parameter. We treat meromorphic differential $\lambda$ directly, and calculate the period of meromorphic differential $\lambda$, i.e. Higgs field and its dual, which are defined by \begin{eqnarray} \lambda={x\over 2\pi i}(x^N-\sum_{i=2}^Ns_ix^{N-i})'{dx\over y}\\ a_i=\oint_{\alpha_i}\lambda,\ \ \ \ a_D^i=\oint_{\beta_i}\lambda. \end{eqnarray} We consider so called $Z_N$ critical point $s_2=\cdots=s_{N-1}=0,\ s_N=\pm \Lambda^{N}$ where the curve becomes \cite{AD,APSW,EHIY} \begin{eqnarray} y^2=x^{N}(x^N\pm 2\Lambda^N). \end{eqnarray} First we evaluate the integral in the region $s_i\sim 0\ (2\le i\le N-1),\ s_N\sim \infty$. To this end, we expand the meromorphic differential $\lambda$ with respect to $\Lambda^{2N}$ in the form \begin{eqnarray} \lambda={dx\over 2\pi i}\sum_{n=0}^{\infty}{\Gamma(n+{1\over 2}) (\Lambda^{2N})^n\over \Gamma({1\over 2})\ n!\ 2n} (x^N-\sum_{i=2}^Ns_ix^{N-i})^{-2n}. 
\end{eqnarray} Rescaling $x=s_N^{1/N}z$ and $\alpha_i=s_is_N^{-i/N}$, and expanding with respect to $1/s_N$ and $\alpha_i$, $\lambda$ becomes \begin{eqnarray} \lambda&=&{s_N^{1/N}dz \over 2\pi i}\sum_{n=0}^{\infty} {\Gamma(n+{1\over 2})\over \Gamma({1\over 2})n!2n}\left( {\Lambda^{2N}\over s_N^2}\right)^n(z^N-1)^{-2n}\nonumber \\ & & \hspace{3cm} \times \sum_{\{m\}}^{\infty}{\Gamma(a_{\{m\}}+2n)\over \Gamma(2n)} \prod_{i=2}^{N-1}{1\over m_i !}\left({\alpha_iz^{N-i}\over z^N-1}\right)^{m_i}, \end{eqnarray} where $\{m\}=\{m_2,\cdots,m_{N-1}\}$ and $ a_{\{m\}}=\sum_{i=2}^{N-1}m_i$. In order to calculate $a_i$, we pick up the poles at $\displaystyle e^{ {2\pi ik\over N}}$ in meromorphic differential along $\alpha_i$ cycle. By introducing Barnes-type integral representation \cite{MS2} and multiplying $\sin 2s\pi/\pi$, we integrate from $z=0$ to $\displaystyle z=e^{{2\pi ik\over N}}$ to pick up the poles as \begin{eqnarray} a_k&=&s_N^{{1\over N}}\int_{-i\infty}^{i\infty} {ds\over 2\pi i}\sum_{\{m\}}^{\infty}\int_0^{e^{2\pi ik\over N}} dz {\Gamma(s+{1\over 2})(-1)^s\Gamma(-s)\Gamma(a_{\{m\}}+2s)\over \Gamma({1\over 2})2s \Gamma(2s)}{\sin 2s\pi\over \pi}\nonumber\\ & &\hspace{2cm}\times (z^N-1)^{-2s-a_{\{m\}}}\,z^{Na_{\{m\}}-b_{\{m\}}} \prod_{i=2}^{N-1} {\alpha_i^{m_i}\over m_i!}\left({\Lambda^{2N}\over s_N^2}\right)^s, \end{eqnarray} where $ b_{\{m\}}=\sum_{i=2}^{N-1}i\, m_i$. Therefore we find that $a_k$ in the region where $s_N\sim \infty$ is given by \begin{eqnarray} a_k&=&{s_N^{1\over N}\over N}\sum_{n,\{m_i\}}^{\infty} {e^{-2\pi ikb'_{\{m\}}} \Gamma(n+{1\over 2})\Gamma(a_{\{m\}}-b'_{\{m\}}) \over \Gamma({1\over 2})\Gamma(2n+1)n! \Gamma(-2n-b'_{\{m\}}+1)} \prod_{i=2}^{N-1}{\alpha_i^{m_i}\over m_i!}\left( {\Lambda^{2N}\over s_N^2}\right)^n, \end{eqnarray} where $b'_{\{m\}}=(b_{\{m\}}-1)/N$. Note that the phase factor guarantees the constraint $\sum_{i=1}^N a_i=0$. 
In order to continue analytically to the region $s_N\sim \Lambda^{N}$ and to use various identities, we re-express (3.8) by using the hypergeometric function as \begin{eqnarray} a_k&=&{s_N^{1\over N}\over N} \sum_{n,\{m_i\}}^{\infty} {e^{-2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}})} \prod_{i=2}^{N-1}{\alpha_i^{m_i}\over m_i!} F\left({b'_{\{m\}}\over 2},{b'_{\{m\}}+1\over 2};1;{\Lambda^{2N}\over s_N^2} \right). \end{eqnarray} Quite generally the expression for $a_k$ depends on the choice of branch; therefore we cannot perform the analytic continuation of the expression beyond the domain of convergence. In the case of $SU(2)$, this process can be justified by comparison with the singular elliptic curve constructed for the torus. For a general hyper-elliptic curve, there is no such guarantee. However, in our expression ${\Lambda^{2N}\over s^2_N}=1$ is the critical point, which lies exactly on the boundary of the domain of convergence; therefore we can obtain an expression around ${\Lambda^{2N}\over s^2_N}=1$. Performing the analytic continuation to ${\Lambda^{2N}\over s_N^2}\sim 1$, and using the identity for the hypergeometric function \begin{eqnarray} F(a,b,c,w)=(1-w)^{-a}F(a,c-b,c,{w\over w-1}), \end{eqnarray} the quadratic transformation \cite{HTF} \begin{eqnarray} F(2a,2b,a+b+{1\over 2},z)=F(a,b,a+b+{1\over 2},4z(1-z)), \end{eqnarray} and also another identity \begin{eqnarray} F(a,b,c,z)=(1-z)^{c-a-b}F(c-a,c-b,c,z), \end{eqnarray} where $z={1\over 2}-{s_N\over 2\Lambda^N}$, we can put $a_{k}$ around the conformal point in the form: \begin{eqnarray} a_{k}&=&{s_N^{1\over N}\over N} \sum_{\{m\}}^{\infty} {e^{-2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}})} \prod_{i=2}^{N-1}{\alpha_i^{m_i}\over m_i!} \left({s_N+\Lambda^N\over 2\Lambda^N}\right)^{1\over 2}\nonumber\\ &\times &\left\{{\Gamma({1\over 2}-b'_{\{m\}}) \over \Gamma(1-b'_{\{m\}}) }\right.
\left({\Lambda^N+s_N\over s_N}\right)^{-b'_{\{m\}}} F\left({1\over 2},{1\over 2};b'_{\{m\}}+{1\over 2};z\right) \label{eq:su(n)ac}\\ & &\left.+{\Gamma(b'_{\{m\}}-{1\over 2}) \over \Gamma(b'_{\{m\}})} \left({\Lambda^N+s_N\over s_N}\right)^{-{1\over 2}} \left({s_N-\Lambda^N\over s_N}\right)^{{1\over 2}-b'_{\{m\}}} F\left({1\over 2},{1\over 2}; {3\over 2}-b'_{\{m\}};z\right)\right\}. \nonumber \end{eqnarray} Next we consider $a_D^i$. In this case we integrate from $z=0$ to $z=e^{2\pi i k\over N}$ without multiplying by $\sin 2s\pi$, as \begin{eqnarray} a_D^k&=&{s^{1\over N}_N\over \pi iN} \int_{-i\infty}^{i\infty}{ds\over 2\pi i} \sum_{\{m\}}e^{-2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})(-1)^{-2s-a_{\{m\}}}\\ & &\hspace{1cm}\times{\Gamma(s+{1\over 2})\Gamma(-s)\Gamma(a_{\{m\}}+2s) \Gamma(-2s-a_{\{m\}}+1)\over \Gamma({1\over 2})\Gamma(2s+1)\Gamma(-2s- b'_{\{m\}}+1)} \left(\prod_{i=2}^{N-1} {\alpha_i^{m_i}\over m_i!}\right)\left(-{\Lambda^{2N}\over s_N^2}\right)^s, \nonumber \end{eqnarray} which is defined modulo $a_k$ in the weak coupling region.
We evaluate the double poles of this integral and also subtract the contribution from $z=0$ \cite{MS2} to obtain $a_D^k$ \begin{eqnarray} a_D^k&=& {s^{1\over N}_N\over \pi iN}\sum_{\{m\}}^{\infty}{e^{- 2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}})} \left(\prod_{i=2}^{N-1}{\alpha_i^{m_i}\over m_i!}\right) \nonumber \\ & &\times\sum_{n=0}^{\infty}{({b'_{\{m\}}\over 2})_n ({b'_{\{m\}}\over 2}+{1\over 2})_n\over \Gamma(n+1)n!} \left({\Lambda^{2N}\over s_N^2}\right)^n \\ & &\times\left\{\ln\left({\Lambda^{2N}\over s_N^2}\right)+ \psi(n+{b'_{\{m\}}\over 2})\right.\nonumber \\ & &\hspace{2cm}\left.+ \psi(n+{b'_{\{m\}}\over 2}+{1\over 2}) -2\psi(n+1)+\pi\cot(b'_{\{m\}}\pi)\right\}.\nonumber \end{eqnarray} Using the analytic continuation to the region $s_N\sim \Lambda^N$, we have \begin{eqnarray} a_D^k&=& {s^{1\over N}_N\over \pi iN}\sum_{\{m\}} {e^{-2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}})} \left(\prod_{i=2}^{N-1}{\alpha_i^{m_i}\over m_i!}\right) \left({s_N+\Lambda^N\over 2\Lambda^N}\right)^{1\over 2}\nonumber \\ & &\times\left\{{\pi\cot(b'_{\{m\}}\pi) \Gamma(b'_{\{m\}}-{1\over 2})\over \Gamma(b'_{\{m\}}) }\right. \label{eq:su(n)adc}\\ & &\hspace{3cm}\times \left. \left({s_N+\Lambda^N\over s_N}\right)^{-{1\over 2}} \left({s_N-\Lambda^N\over s_N}\right)^{{1\over 2}-b'_{\{m\}}} F\left({1\over 2},{1\over 2};{3\over 2}- b'_{\{m\}};z\right) \nonumber \right\}. \end{eqnarray} Around the critical point, the original roots of the curve $e_k^{+},\ e_k^{-}$, which both reduce to $e_k$ for $\Lambda=0$, become $ e_k^{+}\simeq \Lambda e^{2\pi ik\over N},\ e_k^{-}\simeq 0$. The expressions (\ref{eq:su(n)ac}) and (\ref{eq:su(n)adc}) show that $a_k$ consists of the contribution from both poles whereas $a_D^k$ consists of the contribution from $e_k^{-}$, which vanishes at the critical point.
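This behaviour of the roots can be seen directly from the factorized form of the curve: at $s_2=\cdots=s_{N-1}=0$ the branch points satisfy \begin{eqnarray} x^N=s_N\pm\Lambda^N, \end{eqnarray} so for $\Lambda\to 0$ both sets reduce to $e_k=s_N^{1\over N}e^{2\pi ik\over N}$, while at $s_N=\Lambda^N$ one set sits at $x^N=2\Lambda^N$, i.e. $e_k^{+}=2^{1\over N}\Lambda e^{2\pi ik\over N}$, and the other collapses to $e_k^{-}=0$.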
Of course, we can find an expression for $a_D^k$ which reduces to $a_k$ at the conformal point, i.e., $a_D^{'k}=a_D^k+ a_k$, because $a_D^k$ was defined modulo $a_k$, which cannot be determined by analytic continuation but only by consistency. Therefore, around the conformal point $a_k$ and $a_D^k$ behave as \begin{eqnarray} a_k\sim a_D^k\sim(s_N-\Lambda^N)^{N+2\over 2N}+\ const. \end{eqnarray} From this result, we recognize that the conformal point is certainly the fixed point of the theory, and the conformal dimension of $s_N$ is ${2N\over N+2}$ \cite{EHIY}. We have used the ordinary type of analytic continuation, but the factor $\Gamma(-b'_{\{m\}}+{1\over 2})$ has poles, so the expressions (\ref{eq:su(n)ac}) and (\ref{eq:su(n)adc}) contain logarithmic terms. To see this, we decompose $b_{\{m\}}$ mod $N$ as $b_{\{m\}}=Nl+\lambda$, where $l=0,1,2,\cdots$ and $0\le \lambda\le N-1$. Noticing that $b'_{\{m\}}=(b_{\{m\}}-1)/N$, when $N$ is even and $\lambda=1+{N\over 2}$, $b'_{\{m\}}-{1\over 2}$ becomes an integer, and thus we find that $\Gamma(-b'_{\{m\}}+{1\over 2})$ has poles. That is, around the conformal point of the moduli space of the pure $SU(2n)$ theory, there are unstable directions along which $a_i$ and $a_D^i$ have logarithmic terms. However, except along these directions $a_i$ and $a_D^i$ contain no logarithmic terms, and since just at the conformal point $\Gamma( -b'_{\{m\}}+{1\over 2})=\Gamma(-{1\over N}+{1\over 2})$ there is no logarithmic singularity except for $N=2$, the conformal point is still the fixed point of the theory. When we set $N=2$, i.e., gauge group $G=SU(2)$, the point we considered is a dyon point. Therefore it is natural that $a$ and $a_D$ have such logarithmic contributions. As a check of our result and as an example, we consider the gauge group $G=SU(3)$. We set $u=s_2,\ v=s_3,\ \alpha_2=u/v^{2\over 3}$ and $a_{\{m\}}=m,\ b'_{\{m\}}=(2m-1)/3$.
In the weak coupling region $v\sim \infty$, our expression reduces to Appell's $F_4$ system \cite{HTF} with arguments ${ 4u^3\over 27v^2},\ {\Lambda^6\over v^2}$. Analytic continuation to the region $u\sim \infty$ recovers the result in ref.\cite{KLT} up to the choice of branch for the logarithmic term of $a_D^i$, which is again represented by Appell's $F_4$ system. By analytic continuation to around the conformal point, we find that our expression becomes Horn's $H_7$ system \cite{HTF}. To see this, we set $m=3l+\lambda\ (l=0,1,2,\cdots,\ \lambda=0,1,2)$ so that $a_k$ and $a_D^k$ are decomposed as $a_k=\sum_{\lambda=0}^2 a_k^{\lambda},\ a_D^k=\sum_{\lambda=0}^2a_D^{k\lambda}$. For example, $a_k^{\lambda}$ can be written as \begin{eqnarray} a_k^{\lambda}&=&{v^{1\over 3}e^{-{2\pi i k\over 3}(2\lambda-1)} \sin({2\lambda-1\over 3})\pi \ 2^{2\lambda-1\over 3}\over i6\pi\Gamma({1\over 2})^3}\left({u^3\over v^2}\right)^{\lambda\over 3} \left({v\over \Lambda^3}\right)^{1\over 2} \sum_{n,l}{\Gamma(l+{\lambda+1\over 3}) \over \Gamma(3l+\lambda+1)} \label{eq:su(3)ac}\\ & &\times \left\{ \left({\Lambda^3\over v}\right)^{-{2\lambda\over 3}+{5\over 6}} {\Gamma(2l+n+{2\lambda\over 3}-{1\over 3})^2\sin({2\lambda\over 3}-{1\over 3})\pi \ z^n\over \Gamma(2l+n+{2\lambda\over 3}+{1\over 6}) \sin({2\lambda\over 3}+{1\over 6})\pi\ n!} \left(u^3\over 4\Lambda^6\right)^l \right.\nonumber \\ & &+\left. \left({2(v-\Lambda^3)\over v}\right)^{-{2\lambda\over 3}+{5\over 6}} \Gamma(n+{1\over 2})^2\Gamma(2l-n+{2\lambda\over 3}-{5\over 6})\ {(-z)^n\over n!}\left({u^3\over (v-\Lambda^3)^2}\right)^l\right\},\nonumber \end{eqnarray} where $ z={1\over 2}(1-{v\over \Lambda^3})$. Because of the factor $\sin ({2\lambda-1\over 3}) \pi$ in (\ref{eq:su(3)ac}), the component for $\lambda=2$ disappears, i.e., $a_k^2=0$.
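Explicitly, the prefactor takes the values \begin{eqnarray} \sin\left({2\lambda-1\over 3}\right)\pi=-{\sqrt{3}\over 2},\ \ {\sqrt{3}\over 2},\ \ 0\ \ \ \ {\rm for}\ \ \lambda=0,1,2, \end{eqnarray} respectively, so only the $\lambda=0,1$ components survive.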
For $\lambda=0,1$, the second term can be expressed by Horn's $H_7$ function as \begin{eqnarray} H_7\left(-{5-4\lambda\over 6},{1\over 2},{1\over 2}, {2+2\lambda\over 3},{u^3\over 27(v-\Lambda^3)^2},-{1\over 2}(1-{v\over \Lambda^3})\right), \end{eqnarray} where $H_7(a,b,c,d,x,y)$ is given by \cite{HTF} \begin{eqnarray} H_7(a,b,c,d,x,y)=\sum_{n,m}{(a)_{2m-n}(b)_n(c)_n\over (d)_m m! n!}x^m y^n. \end{eqnarray} This means that if we choose the variables $x={u^3\over 27(v-\Lambda^3)^2}$ and $y=-{1\over 2}(1-{v\over \Lambda^3})$, the Picard-Fuchs equations of the theory should reduce to the differential equations of the $H_7(a,b,c,d,x,y)$ system, which are given by \cite{HTF} \begin{eqnarray} & \left\{-y(1+y)\partial^2_y+2x{\partial_x\partial_y}+(a-1-(b+c+1)y)\partial_y-bc\right\}H_7=0, \label{eq:h7eq} \\ &\left\{x(1-4x)\partial^2_x+4xy\partial_x\partial_y-y^2 \partial^2_y+(d-(4d+6)x)\partial_x+2ay\partial_y-a(a+1)\right\}H_7=0,\nonumber \end{eqnarray} where we have corrected a misprint in ref.\cite{HTF}. Furthermore, noticing that the four independent solutions of this system can be written as \begin{eqnarray} & &H_7(a,b,c,d,x,y)\nonumber \\ &x&^{1-d}H_7(a-2d+2,b,c,2-d,x,y),\nonumber \\ &y&^a\sum_{m,n=0}^{\infty}{(b+a)_{2m+n}(c+a)_{2m+n}\over (d)_m(1+a)_{2m+n}m!n!}(xy^2)^m(-y)^n,\label{eq:h7}\\ &y&^{a-2d+2}\sum_{m,n}^{\infty}{(b+a-2d+2)_{2m+n}(c+a-2d+2)_{2m+n} \over (2-d)_m(a-2d+3)_{2m+n}m!n!}(xy^2)^m(-y)^n,\nonumber \end{eqnarray} we see that the first and second terms of (\ref{eq:su(3)ac}) with $\lambda=0,1$ correspond to the above solutions of this system. Let us check this point.
We start with the Picard-Fuchs equations in this theory for $\Pi=\oint \lambda$ \cite{KLT}: \begin{eqnarray} {\cal L}_1\Pi&=& \left\{(27\Lambda^6-4u^3-27v^2)\partial_u^2-12u^2v\partial_u\partial_v-3uv\partial_v-u\right\} \Pi=0,\nonumber \\ {\cal L}_2\Pi&=& \left\{(27\Lambda^6-4u^3-27v^2)\partial_v^2-36uv\partial_u\partial_v-9v\partial_v-3\right\} \Pi=0.\label{eq:su(3)pf} \end{eqnarray} By a direct change of variables $x={u^3\over 27(v-\Lambda^3)^2}$ and $y=-{1\over 2}(1-{v\over \Lambda^3})$, and some linear combinations of these equations, we can check that the Picard-Fuchs equations (\ref{eq:su(3)pf}) can be written as \begin{eqnarray} &x&(1-4x)\partial^2_x\Pi_0-y^2\partial^2_y\Pi_0+4xy\partial_x\partial_y\Pi_0+{2\over 3} (1-4x)\partial_x\Pi_0-{5\over 3}y\partial_y\Pi_0+ {5\over 36}\Pi_0=0,\nonumber\\ &y&(1+y)\partial^2_y\Pi_0-2x\partial_x\partial_y\Pi_0+ {11+12y\over 6}\partial_y\Pi_0+{1\over 4}\Pi_0=0, \end{eqnarray} where $\Pi_0=y^{-{5\over 6}}\Pi$. Comparing this to (\ref{eq:h7eq}), we see that this system is identical to (\ref{eq:h7eq}) with $a=-{5\over 6},\ b=c={1\over 2},\ d={2\over 3}$. Substituting these into (\ref{eq:h7}), we find directly that the four solutions of the Picard-Fuchs equations of this theory are identical to the four functions in the expression (\ref{eq:su(3)ac}) with $\lambda=0,1$, although the first term of (\ref{eq:su(3)ac}) is not within Horn's list. \sect{$SO(2N)$ pure Yang-Mills theories} In this section we discuss the pure $SO(2N)$ theory, whose singular points in the strong coupling region are known for arbitrary $N$ \cite{EHIY}. In the pure $SO(2N)$ theory the curve and meromorphic differential are given by \begin{eqnarray} y^2&=&P(x)^2-\Lambda^{4(N-1)}x^4= \left(x^{2N}-\sum_{i=1}^Nx^{2(N-i)}s_i\right)^2-\Lambda^{4(N-1)}x^4,\\ \lambda&=&(2P(x)-xP'(x)){dx\over y}. \end{eqnarray} Since the difference from the $SU(N)$ theory lies only in the powers of $\Lambda$ in the instanton correction term, the calculation is almost the same as in the $SU(N)$ theory.
What we need is the expression around the point $ s_i=0 \, (i\ne N-1),\ s_{N-1}=\pm\Lambda^{2N-2}$, where the curve degenerates as \cite{EHIY} \begin{eqnarray} y^2=x^{2N+2}(x^{2N-2}\pm 2\Lambda^{2N-2}). \end{eqnarray} To this end, it is convenient to evaluate the integral in the region $s_i\sim 0\, (i\ne N-1),\ s_{N-1}\sim \infty$. Expanding $\lambda$ with respect to $\Lambda^{4(N-1)}$ and integrating by parts, we can rewrite $\lambda$ in the following form: \begin{eqnarray} \lambda=\int_{-i\infty}^{i\infty} {ds\over 2\pi i}{dx\over 2\pi i} {\Gamma(s+{1\over 2})\Gamma(-s)\over \Gamma({1\over 2})2s}\left(-\Lambda^{4(N-1)}x^4\right)^sP(x)^{-2s}, \end{eqnarray} where we have introduced a Barnes-type integral representation as before. Rescaling the variables as \begin{eqnarray} x=s_{N-1}^{1/(2N-2)}z=uz, \ \ s_i=u^{2i}\alpha_i\, (i\ne N-1), \end{eqnarray} and expanding with respect to $\alpha_i$ and $\Lambda^{4N-4}/u^{4N-4}$, we have $\lambda$ in the form: \begin{eqnarray} \lambda&=&u\int_{-i\infty}^{i\infty}{ds\over 2\pi i}{\Gamma(s+{1\over 2}) \Gamma(-s)\Gamma(2s+a_{\{m\}})\over \Gamma({1\over 2})\Gamma(2s+1)} \left(-{\Lambda^{4N-4}\over u^{4N-4}}\right)^s\sum_{\{m\}}\prod_{i\ne N-1}^{N} \left({\alpha_i\over m_i!}\right)^{m_i}\\ & &\hspace{2cm} \times \int{dz\over 2\pi i} z^{2(N-1)a_{\{m\}}-2b_{\{m\}}}(z^{2N-2}-1)^{-2s-a_{\{m\}}},\nonumber \end{eqnarray} where $\{m\}=\{m_1,\cdots,m_{N-2},m_N\}$ and $ a_{\{m\}}=\sum_{i=1,i\ne N-1}^{N}m_i,\ \ b_{\{m\}}=\sum_{i=1,i\ne N-1}^{N}im_i$. In order to obtain $a_k$, we pick up the poles at $z= e^{2\pi i k\over 2N-2}\ (0\le k\le N-1)$ along the $\alpha_k$ cycle and at $z=0$ along the $\alpha_N$ cycle. First we calculate $a_k\ (0\le k\le N-1)$.
To pick up the poles at $z=e^{2\pi i k\over 2N-2}$ we integrate from $z=0$ to $z= e^{2\pi i k\over 2N-2}$, multiplying by $\sin 2s \pi/\pi$, to find that $a_k$ can be expressed in the form: \begin{eqnarray} a_k&=&{u\over 2N-2}\sum_{n,\{m\}}^{\infty}{e^{-2\pi ikb'_{\{m\}}} \Gamma({1\over 2}+n)\over \Gamma({1\over 2})\Gamma(2n+1) n!}\left({\Lambda^{4N-4}\over u^{4N-4}}\right)^n\prod_{i\ne N-1} \left({\alpha_i\over m_i!}\right)^{m_i} {\Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(-2n-b'_{\{m\}}+1)}\nonumber\\ &=&{2u\over 2N-2}\sum_{n,\{m\}}^{\infty} {e^{-2\pi ikb'_{\{m\}}}\Gamma(a_{\{m\}}-b'_{\{m\}}) \over \Gamma(1-b'_{\{m\}})}\\ & & \hspace{4cm}\times \prod_{i\ne N-1}\left({\alpha_i\over m_i!}\right)^{m_i} F\left({b'_{\{m\}}\over 2},{b'_{\{m\}}+1\over 2};1; {\Lambda^{4N-4}\over u^{4N-4}}\right),\nonumber \end{eqnarray} where $ b'_{\{m\}}={b_{\{m\}}\over (N-1)}-{1\over (2N-2)}$. The analytic continuation to the region $u^{2N-2}=s_{N-1}\sim \Lambda^{2N-2}$ and the quadratic transformation show that the result is \begin{eqnarray} a_k&=& {2u\over 2N-2}\sum_{n,\{m\}}^{\infty}{e^{-2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}})} \prod_{i\ne N-1}\left({\alpha_i\over m_i!}\right)^{m_i} \nonumber \\ &\times &\left[{\Gamma({1\over 2}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}}) }\left({\Lambda^{2N-2}\over u^{2N-2}}\right)^{-b'_{\{m\}}}\left({\Lambda^{2N-2}+u^{2N-2}\over \Lambda^{2N-2}} \right)^{{1\over 2}-b'_{\{m\}}} F\left({1\over 2},{1\over 2};b'_{\{m\}}+{1\over 2};z\right) \right.\nonumber \\ & &\hspace{1cm}+ {\Gamma(b'_{\{m\}}-{1\over 2})\over \Gamma(b'_{\{m\}}) }(1-{\Lambda^{4N-4}\over u^{4N-4}})^{ {1\over 2}-b'_{\{m\}}} \left({\Lambda^{2N-2}\over u^{2N-2}}\right)^{b'_{\{m\}}-1} \\ & & \hspace{4cm}\times \left.\left({\Lambda^{2N-2}+u^{2N-2}\over \Lambda^{2N-2}} \right)^{b'_{\{m\}}-{1\over 2}} F\left({1\over 2},{1\over 2};{3\over 2}-b'_{\{m\}};z\right)\right],\nonumber \end{eqnarray} where $z={1\over 2}(1-{u^{2N-2}\over \Lambda^{2N-2}})$.
Next we consider $a_D^k\ (1\le k\le N-1)$. In this case we integrate the meromorphic differential $\lambda$ from $z=-e^{2\pi i k\over 2N-2}$ to $z= e^{2\pi i k\over 2N-2}$, evaluate the double poles of the integrand without multiplying by $\sin 2s \pi$, and subtract ${1\over 2}a_k$ \cite{MS2}. We have $a_D^k$ in the form: \begin{eqnarray} a_D^k&=&{u\over 2\pi^2 i}\sum_{n,\{m\}} {e^{-2\pi ik(b'_{\{m\}})}\Gamma(a_{\{m\}}-b'_{\{m\}})\sin(b'_{\{m\}}\pi) 2^{b'_{\{m\}}}\over (2N-2)\Gamma({1\over 2})\Gamma(n+1)^2} \prod_{i\ne N-1}^{N}\left({\alpha_i\over m_i!}\right)^{m_i}\nonumber \\ & &\hspace{2cm} \times \Gamma(n+{b'_{\{m\}}\over 2})\Gamma(n+{b'_{\{m\}}\over 2}+{1\over 2}) \left({\Lambda^{4N-4}\over u^{4N-4}}\right)^n\\ & &\times\left[\psi(n+{b'_{\{m\}}\over 2})+\psi( n+{b'_{\{m\}}\over 2}+{1\over 2})-2\psi(n+1)+\ln\left({\Lambda^{4N-4}\over u^{4N-4}}\right)+2\pi\cot(b'_{\{m\}}\pi)\right].\nonumber \end{eqnarray} We make use of the analytic continuation of $a_D^k$ around the conformal point to get \begin{eqnarray} a_D^k&=& {2u\over (2N-2) i}\sum_{n,\{m\}}^{\infty}{e^{-2\pi ikb'_{\{m\}}} \Gamma(a_{\{m\}}-b'_{\{m\}})\over \Gamma(1-b'_{\{m\}})} \prod_{i\ne N-1}\left({\alpha_i\over m_i!}\right)^{m_i} \nonumber \\ & &\times \cot(b'_{\{m\}}\pi) {\Gamma(b'_{\{m\}}-{1\over 2})\over \Gamma(b'_{\{m\}})} (1-{\Lambda^{4N-4}\over u^{4N-4}})^{ {1\over 2}-b'_{\{m\}}} \left({\Lambda^{2N-2}\over u^{2N-2}}\right)^{b'_{\{m\}}-1} \nonumber \\ & & \hspace{4cm}\times \left({\Lambda^{2N-2}+u^{2N-2}\over \Lambda^{2N-2}} \right)^{b'_{\{m\}}-{1\over 2}} F\left({1\over 2},{1\over 2};{3\over 2}-b'_{\{m\}};z\right).\nonumber \end{eqnarray} As in the pure $SU(N)$ theory, we can claim that $a_k\sim a_D^k$ at the critical point. The behavior of $a$ and $a_D$ near $s_{N-1}=\Lambda^{2N-2},\ s_i=0\ (i\ne N-1)$ is \begin{eqnarray} a_k\sim a_D^k\sim (s_{N-1}-\Lambda^{2N-2})^{{1\over 2}-{1\over 2N-2}}+\ const. \end{eqnarray} Therefore, we see that the conformal dimension of $s_{N-1}$ is ${2N-2\over N}$ \cite{EHIY}.
As in the case of $SU(2n)$, $a_i$ and $a_D^i$ contain logarithmic terms coming from the factor $\Gamma({1\over 2}-b'_{\{m\}})$ when $N$ of $SO(2N)$ is even, and these vanish at the conformal point. Next we consider $a_N$ and $a_D^N$. Up to now the calculation has been the same as in the $SU(N)$ case. However, in order to calculate $a_N$ and $a_D^N$, we have to pick up the pole at $x\sim 0$. To this end we rescale the variables of the curve as \begin{eqnarray} x^2=-{s_N\over s_{N-1}}z^2,\ \ \beta_i={s_i\over s_N}\left(-{s_N\over s_{N-1}}\right) ^{N-i},\end{eqnarray} where $s_0=-1$, and $\lambda$ becomes \begin{eqnarray} \lambda&=&\left(-{s_N\over s_{N-1}}\right)^{1\over 2} \int_{-i\infty}^{i\infty}{ds\over 2\pi i} \sum_{m}{\Gamma(s+{1\over 2})\Gamma(-s)\Gamma(2s+c_{\{m\}})\over \Gamma({1\over 2})2s\Gamma(2s)}\left(-{\Lambda^{4N-4}\over s_{N-1}^2}\right)^s\nonumber \\ & &\hspace{3cm}\times \prod_{i=0}^{N-2}\left({\beta_i^{m_i}\over m_i !}\right) z^{4s+2Nc_{\{m\}}-2d_{\{m\}}}(z^2-1)^ {-2s-c_{\{m\}}}, \end{eqnarray} where $\{m\}=\{m_0,m_1,\cdots,m_{N-2}\}$ and $ c_{\{m\}}=\sum_{i=0}^{N-2}m_i,\ \ d_{\{m\}}=\sum_{i=0}^{N-2}(N-i)m_i $.
By evaluating the line integral from $z=0$ to $z=1$ and multiplying by $\sin 2s\pi/\pi$ to pick up the pole at $z=1$, we get $a_N$ in the region ${s^2_{N-1}\over \Lambda^{4N-4}}\gg 1$ in the form: \begin{eqnarray} a_N&=&\left(-{s_N\over s_{N-1}}\right)^{1\over 2} \sum_{n,\{m\}}{\Gamma(n+{1\over 2})\Gamma(2n+d_{\{m\}}+ {1\over 2})\over \Gamma({1\over 2})\Gamma(2n+1)\Gamma((N-1)c_{\{m\}}- d_{\{m\}}+{3\over 2})}\left({\Lambda^{4N-4}\over s_{N-1}^2}\right)^n \prod_{i=0}^{N-2}\left({\beta_i^{m_i}\over m_i !}\right)\nonumber \\ &=&2\left(-{s_N\over s_{N-1}}\right)^{1\over 2}\sum_{\{m\}} {\Gamma(d_{\{m\}}+ {1\over 2})\over \Gamma(-c_{\{m\}}+d_{\{m\}}+{3\over 2})} \prod_{i=0}^{N-2}\left({\beta_i^{m_i}\over m_i !}\right)\\ & &\hspace{3cm}\times F\left({d_{\{m\}}\over 2}+{1\over 4}, {d_{\{m\}}\over 2}+{3\over 4};1;{\Lambda^{4N-4}\over s_{N-1}^2 }\right).\nonumber \end{eqnarray} Notice that this hypergeometric function gives a logarithmic term under analytic continuation to the region ${\Lambda^{4N-4}\over s_{N-1}^2}\sim 1$.
To see this, we set the variables as \begin{eqnarray} y={\Lambda^{4N-4}\over s_{N-1}^2},\ \ z={\Lambda^{2N-2}-s_{N-1}\over 2\Lambda^{2N-2}}, \end{eqnarray} and perform the analytic continuation to the region $s_{N-1}\sim \Lambda^{2N-2}$ as \begin{eqnarray} a_N&=&\left(-{s_N\over s_{N-1}}\right)^{1\over 2}\sum_{\{m\}} {\Gamma(d_{\{m\}}+ {1\over 2})\over \Gamma({1\over 2})\Gamma(-c_{\{m\}}+d_{\{m\}}+{3\over 2})} \prod_{i=0}^{N-2}\left({\beta_i^{m_i}\over m_i !}\right)\nonumber \\ &\times &\left\{ \left(1-y\right)^{-d_{\{m\}}-{1\over 2}} y^{{d_{\{m\}}\over 2}-{1\over 4}} {\Gamma(d_{\{m\}})\over \Gamma({d_{\{m\}}\over 2}+{1\over 4})^2} \sum_{n=0}^{d_{\{m\}}-1}{({1\over 4}-{d_{\{m\}}\over 2})_n ({1\over 4}-{d_{\{m\}}\over 2})_n \over n!(-d_{\{m\}}+1)_n}\left(1-y\right)^n \right.\nonumber\\ & &+{y^{-{d_{\{m\}}\over 2}-{1\over 4}} (1-z)^{-d_{\{m\}}} \over \Gamma({3\over 4}-{d_{\{m\}}\over 2})\Gamma({1\over 4}- {d_{\{m\}}\over 2}) \Gamma(d_{\{m\}}+1)}\sum_{n=0}^{\infty} {({1\over 2})_n({1\over 2})_n\over n!(d_{\{m\}}+1)_n}z^n\\ & &\times \left.\left[ \psi(n+1)+\psi(n+d_{\{m\}}+1)-2\psi(n+{1\over 2})-\pi -\log(-z)\right]\right\}.\nonumber \end{eqnarray} Next we calculate $a_D^N$. In the region $s_{N-1}\sim \infty$, $a_D^N$ is given by integrating the meromorphic differential $\lambda$ from $z=-1$ to $z=1$ without multiplying by $\sin 2s\pi$ and subtracting ${1\over 2}a_N$, and by evaluating the double poles as \begin{eqnarray} a_D^N&=&{is_N^{1\over 2}\over s_{N-1}^{1\over 2}2\pi i} \sum_{n,\{m\}}{\Gamma(d_{\{m\}}+ {1\over 2})\over \Gamma({1\over 2})\Gamma(-c_{\{m\}}+d_{\{m\}}+{3\over 2})} \prod_{i=0}^{N-2}\left({\beta_i^{m_i}\over m_i !}\right) {({d_{\{m\}}\over 2}+{1\over 4})_n({d_{\{m\}}\over 2}+{3\over 4})_n \over (n!)^2}y^n \nonumber \\ & &\hspace{1cm}\times \left[\psi(n+{d_{\{m\}}\over 2}+{1\over 4})+ \psi(n+{d_{\{m\}}\over 2}+{3\over 4})-2\psi(n+1)+ \ln y \right], \end{eqnarray} where $y={\Lambda^{4N-4}\over s_{N-1}^2}$.
Although this logarithmic term disappears under the analytic continuation to the region $s_{N-1}\sim \Lambda^{2N-2}$, another logarithmic term appears: \begin{eqnarray} a_D^N&=&{s_N^{1\over 2}\over 2s_{N-1}^{1\over 2}} \sum_{n,\{m\}}{\Gamma(d_{\{m\}}+ {1\over 2})\over \Gamma({1\over 2})\Gamma(-c_{\{m\}}+d_{\{m\}}+{3\over 2})} \prod_{i=0}^{N-2}\left({\beta_i^{m_i}\over m_i !}\right)\nonumber \\ &\times &\left\{ \left(1-y\right)^{-d_{\{m\}}-{1\over 2}} y^{{d_{\{m\}}\over 2}-{1\over 4}} {i\pi \Gamma(d_{\{m\}})\over \Gamma({d_{\{m\}}\over 2}+{1\over 4})^2} \sum_{n=0}^{d_{\{m\}}-1}{({1\over 4}-{d_{\{m\}}\over 2})_n ({1\over 4}-{d_{\{m\}}\over 2})_n \over n!(-d_{\{m\}}+1)_n}\left(1-y\right)^n \right.\nonumber\\ & &+ {y^{-{d_{\{m\}}\over 2}-{1\over 4}}(1-z)^{-d_{\{m\}}}\over \Gamma({3\over 4}-{d_{\{m\}}\over 2})\Gamma({1\over 4}-{d_{\{m\}}\over 2}) \Gamma(d_{\{m\}}+1)}\sum_{n=0}^{\infty}{({1\over 2})_n({1\over 2})_n z^n\over n!(d_{\{m\}}+1)_n} \\ & &\times\left. \left[\psi(n+1)+\psi(n+d_{\{m\}}+1)-2\psi(n+{1\over 2})- \log(-z)-\pi\right]\right\}.\nonumber \end{eqnarray} Thus in the $SO(2N)$ theory $a_N$ and $a_D^N$ have logarithmic terms around this point even though the curve becomes multiply degenerate. Let us consider what is happening. Near $x\sim 0$, the $\alpha_N$ cycle and the $\beta_N$ cycle form a small torus, and the curve looks like the curve of the pure $SU(2)$ theory. In this case, due to our choice of how we approach the point $s_{N-1}= \Lambda^{2N-2},\ s_N= 0$, this point corresponds to the dyon point for $a_N$ and $a_D^N$, and these certainly have logarithmic terms. These logarithmic terms are simply caused by the fact that we consider a branch where two of the singularities approach zero before the theory reaches the critical point. This point has been understood in the framework of the $SU(3)$ theory near $u=0,\ v=\Lambda^3$ \cite{AD}. From the expressions (4.16) and (4.18), we see that $a_N\sim a_D^N$ at the conformal point.
Therefore the existence of logarithmic terms in the expressions (4.16) and (4.18) is not harmful. \sect{Discussion} We have derived the expressions for the periods, i.e., the Higgs fields and their duals, around the conformal points of $SU(2)$ Yang-Mills theory with matter fields and of pure $SU(N)$ and pure $SO(2N)$ Yang-Mills theories. In the $SU(2)$ theory with matter fields and the pure $SU(N)$ theory, we have directly recognized the structure of the theories near the conformal points. We find a simple correspondence between the fixed point of 4-D $N=2$ $SU(2)$ Yang-Mills theory with matter fields and the Landau-Ginzburg description of 2-D $N=2$ SCFT with $c=3$. For the $SU(N)$ and $SO(2N)$ theories we could verify the analytic continuation by means of well-known formulas for the hypergeometric functions. It seems interesting that we could obtain explicit expressions for the fields around the conformal point even for theories with higher-rank gauge groups. However, the examples treated in this paper are elementary compared to the more complicated varieties of critical points shown in \cite{EHIY}. At present, we do not know whether we can find more interesting examples for which one can calculate the explicit form of the fields. An important question is the verification of the validity of the analytic continuation for these cases, which requires further investigation. \vskip 2cm \begin{center} {\bf Acknowledgment} \end{center} We would like to thank the members of the particle physics group at Hokkaido University for encouragement. One of the authors (T.M.) is partially supported by the Nukazawa Science Foundation. \newpage
\section{Introduction}\label{sec:intro} A graph search is a mechanism for systematically visiting the vertices of a graph. It has been a fundamental technique in the design of graph algorithms since the early days of computer science. Many of the early search methods were based on Breadth First Search (BFS) or Depth First Search (DFS) and resulted in efficient algorithms for practical problems such as the distance between two vertices, diameter, connectivity, network flows and the recognition of planar graphs; see \cite{CormenLR89}. Many variants of these searches have been introduced since, providing elegant and simple solutions to many problems. For example, Lexicographic Breadth First Search (LBFS)~\cite{RTL}, and its generalization Maximal Neighbourhood Search (MNS) \cite{Shier}, were shown to yield simple linear time algorithms for chordal graph recognition. More recently, Lexicographic Depth First Search (LDFS) was introduced in \cite{CK} based on its symmetrical ``pattern-condition'' with LBFS. A few years after its discovery it was shown that LDFS, when applied to cocomparability graphs, yields simple and efficient algorithms for solving various Hamiltonian Path related problems \cite{CDH, MC, CHK}. Some recent applications of graph searches involve a controlled tie-break mechanism in a series of consecutive graph searches, see \cite{Corneil04, DOS09, CDH, DH13}. Examples include the strongly connected components computation using double-DFS \cite{S81} and the series of an arbitrary LBFS followed by two LBFS$^+$s used to recognize unit interval graphs \cite{Corneil04}. Note that a ``$^+$ search'' breaks ties by choosing (amongst the tied vertices) the vertex that is rightmost with respect to a given ordering of the vertices. This motivates a general study of these graph searches equipped with a tie-break mechanism that incorporates such multi-sweep usage of graph searches.
This is the goal of the present paper: to define the simplest framework powerful enough to capture many graph searches, either used individually or in a multi-sweep fashion, and simple enough to allow general theorems on graph searches. Building on the General Label Search (GLS) framework from \cite{BGS11}, we not only simplify their model but also unify it with the ``pattern-conditions'' formalism of \cite{CK}. This paper is organised as follows. After basic notation and definitions in Section 2, Section~\ref{sectTBLS} introduces the Tie-Breaking Label Search (TBLS) formalism to address graph searches. We then illustrate the TBLS by expressing some classical graph searches in this formalism. We will also show the relationship between our formalism and the ``pattern-conditions'' of search orderings introduced in \cite{CK}, thereby yielding some new pattern-conditions for various classical searches. In Section~\ref{compa} we show that the TBLS and GLS models capture the same set of graph searches. We then propose a unified method for recognizing whether a given ordering of the vertices could have been produced by a specific graph search. Finally, in Section~\ref{Imp} we present a TBLS implementation framework in a particular case. \section{Preliminaries and notation} In this paper, $G=(V,E)$ always (and sometimes implicitly) denotes a graph with $n$ vertices and $m$ edges. All graphs considered here are supposed to be finite. We identify the vertex-set with $\{1,...,n\}$, allowing us to see a total ordering on $V$ as a permutation on $\{1,...,n\}$. We define a \emph{graph search} to be an algorithm that visits all the vertices of a graph according to some rules, and a \emph{search ordering} to be the ordering $\sigma$ of the vertices yielded by such an algorithm. The link between these two notions is an overriding theme of this paper.
Vertex $\sigma(i)$ is the $i$th vertex of $\sigma$ and $\sigma^{-1}(x)\in\{1,...,n\}$ is the position of vertex $x$ in $\sigma$. A vertex $u$ is the \emph{leftmost} (respectively \emph{rightmost}) vertex with property $X$ in $\sigma$ if there is no vertex $v$ such that $X(v)$ and $v<_\sigma u$ (respectively $u<_\sigma v$). Our graphs are assumed to be undirected, but most searches (especially those captured by TBLS) may be performed on directed graphs without any modifications to the algorithm (if $xy$ is an arc then we say that $y$ is a neighbour of $x$ while $x$ is \emph{not} a neighbour of $y$). The symmetric difference of two sets $A$ and $B$, namely $(A-B) \cup (B-A)$, is denoted by $A \bigtriangleup B$. Furthermore, $\mathbb{N}^+$ represents the set of integers strictly greater than $0$ and $\mathbb{N}^+_p$ represents the set of integers strictly greater than $0$ and less than $p$. $P(\mathbb{N}^+)$ denotes the power-set of $\mathbb{N}^+$ and $P_f(\mathbb{N}^+)$ denotes the set of all finite subsets of $\mathbb{N}^+$. By $\mathfrak{S}_n $ we denote the set of all permutations of $\{1,...,n\}$. For finite $A\in P(\mathbb{N}^+)$, let $umin(A)$ be: if $A=\emptyset$ then $umin(A)=\infty$, else $umin(A)=min\{i \mid i\in A\}$; and let $umax(A)$ be: if $A=\emptyset$ then $umax(A)=0$, else $umax(A)=max\{i \mid i\in A\}$. We always use the notation $<$ for the usual strict (i.e., irreflexive) order between integers, and $\prec$ for a partial strict order between elements from $P_f(\mathbb{N}^+)$ (or from another set when specified). Definitions of most of the searches we will consider appear in \cite{CK} or \cite{GOL}. \section{TBLS, a Tie-Breaking Label Search}\label{sectTBLS} A \emph{graph search} is an iterative process that chooses at each step a vertex of the graph and numbers it (from 1 to $n$). Each vertex is chosen (also said \emph{visited}) exactly once (even if the graph is disconnected). Let us now define a general \emph{Tie-Breaking Label Search} (TBLS).
It uses \emph{labels} to decide the next vertex to be visited; $label(v)$ is a subset of $\{1,...,n\}$. A TBLS is defined on: \begin{enumerate} \item A graph $G=(V,E)$ on which the search is performed; \item A strict partial order $\prec$ over the label-set $P_f(\mathbb{N}^+)$; \item An ordering $\tau$ of the vertices of $V$ called the \emph{tie-break permutation}. \end{enumerate} The output of $\mbox{TBLS}(G,\prec,\tau)$ is a permutation $\sigma$ of $V$, called a \emph{TBLS-ordering} or also the \emph{search ordering} or \emph{visiting ordering}. Let us say a vertex $v$ is \emph{unnumbered} until $\sigma(i)\leftarrow v$ is performed, and then $i$ is its \emph{visiting date}. Thanks to the following algorithm, $label(v)$ is always the set of visiting dates of the neighbours of $v$ visited before $v$. More specifically $label_i(v)$ for a vertex $v$ denotes the label of $v$ at the beginning of step $i$. This formalism identifies a search with the orderings it may produce, as in \cite{CK}, while extending the formalism of General Label Search (GLS) of \cite{BGS11} by the introduction of a \emph{tie-break} ordering $\tau$, making the result of a search algorithm purely deterministic (no arbitrary decision is taken). \vspace{0.5cm} {\small \begin{algorithm}[ht] \caption{TBLS($G,\prec,\tau$)\label{gtls}} \lForEach{$v\in V$}{$label(v)\leftarrow \emptyset$}\ \For{$i\leftarrow 1$ \KwTo $n$}{ $Eligible\gets \{x \in V ~ | ~ x $ unnumbered and $\nexists$ unnumbered $y \in V \text{ such that } label(x) \prec label(y)\}$\; Let $v$ be the leftmost vertex of $Eligible$ according to the ordering $\tau$\; $\sigma(i)\leftarrow v$\; \ForEach{\textup{unnumbered vertex $w$ adjacent to $v$}}{ $label(w)\leftarrow label(w)\cup\{i\}$\; } } \end{algorithm} } \vspace{0.5cm} \subsection*{Remarks on this formalism: } \begin{enumerate} \item Notice that during a TBLS search vertices are always labelled from $1$ up to $n$.
The original description of LBFS generated labels from $n$ down to $1$. Since a label is always an unordered set rather than a string, as often seen with LBFS and LDFS, we avoid having to prepend or append elements to existing labels. It should also be noticed that the TBLS model does not require the graph to be connected, and therefore in the following we will extend classical graph searches to disconnected graphs. Since we just need to specify $\prec$ to describe a particular search, no implementation details have to be discussed in the specification of the search. Finally, by requiring a tie-breaking permutation $\tau$ we have a mechanism for choosing a specific vertex from $Eligible$. Many existing recognition algorithms, such as the unit interval recognition algorithm in \cite{Corneil04}, use a series of LBFS sweeps where ties are broken by choosing the rightmost eligible vertex in the previous LBFS search; to accomplish this in the TBLS formalism, $\tau$ is set to be the reverse of the previous LBFS ordering. \item Note that all elements of the set $Eligible$ have a label which is maximal with respect to $\prec$ among the labels of the unnumbered vertices; since this set of labels is finite and nonempty, it has at least one maximal element, and therefore $Eligible$ cannot be empty. In the context of LBFS the $Eligible$ set is often called a \emph{slice}. The reader should be aware that we make no claims on the complexity of computing the strict partial order $\prec$ over the label-set $P_f(\mathbb{N}^+)$; unfortunately, it could even be NP-hard. \end{enumerate} Let us first present an easy characterization property of TBLS search that will be used throughout the paper and which is a direct translation of the algorithm. 
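Before turning to that property, note that Algorithm \ref{gtls} admits a direct, unoptimized transcription. The sketch below is our own illustration and not part of the formalism: `prec(A, B)` returns True exactly when label $A$ is strictly below label $B$ in the chosen partial order, and `tau` is the tie-break permutation given as a list of vertices.

```python
# Sketch of Algorithm TBLS (our transcription, for illustration only).
# adj : dict mapping each vertex to the set of its neighbours
# prec: strict partial order on labels, prec(A, B) == True iff A is below B
# tau : tie-break permutation, as a list of vertices

def tbls(adj, prec, tau):
    label = {v: frozenset() for v in tau}
    numbered, sigma = set(), []
    for i in range(1, len(tau) + 1):
        unnumbered = [v for v in tau if v not in numbered]  # kept in tau order
        # Eligible: unnumbered vertices whose label is maximal w.r.t. prec.
        eligible = [x for x in unnumbered
                    if not any(prec(label[x], label[y]) for y in unnumbered)]
        v = eligible[0]            # leftmost eligible vertex according to tau
        sigma.append(v)
        numbered.add(v)
        for w in adj[v]:           # i becomes a visiting date in w's label
            if w not in numbered:
                label[w] = label[w] | {i}
    return sigma

# Generic search: A is below B exactly when A is empty and B is not.
def prec_gen(A, B):
    return len(A) == 0 and len(B) > 0

# Path a-b-c plus an isolated vertex d (the model allows disconnected graphs).
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}, 'd': set()}
print(tbls(adj, prec_gen, ['a', 'b', 'c', 'd']))  # ['a', 'b', 'c', 'd']
print(tbls(adj, prec_gen, ['d', 'c', 'b', 'a']))  # ['d', 'c', 'b', 'a']
```

With the second tie-break the run starts at the isolated vertex $d$, then enters the path at $c$ and is forced to take $b$ before $a$, as a generic search must.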
First we define $N_\sigma(u,v)$ to be the set of visiting dates of neighbours of $u$ that occur before $v$ in $\sigma$; formally, $N_\sigma(u,v) = \{i \mid \sigma(i) \in N(u) $ and $ \sigma(i)<_{\sigma} v\}$. \begin{property}\label{metaordering} Let $S$ be a TBLS search with partial order relation $\prec_S$. An ordering $\sigma$ of the vertices of a graph $G$ is an $S$-ordering if and only if for every $x, y \in V$, if $x <_\sigma y$, then $N_\sigma(x,x) \not\prec_S N_\sigma(y,x)$. \end{property} \begin{proof} The forward direction follows directly from the definition. For the backward direction, assume that for every $x, y \in V$ with $x <_\sigma y$ we have $N_\sigma(x,x) \not\prec_S N_\sigma(y,x)$, but that $\sigma$ is not an $S$-ordering. Let $\gamma=\mbox{TBLS}(G,\prec_S,\sigma)$. Since $\sigma$ is not an $S$-ordering, we know that $\sigma \neq \gamma$. Now let $i$ be the first index such that $\sigma(i) \neq \gamma(i)$, and let $x=\sigma(i)$ and $y=\gamma(i)$. Note that $y$ is still unnumbered at step $i$ and therefore occurs at a position greater than $i$ in $\sigma$, so $x <_\sigma y$. Since $\mbox{TBLS}(G,\prec_S,\sigma)$ did not choose $x$ at step $i$, vertex $x$ did not have a maximal label at that step. Therefore there must exist an unnumbered vertex $z$, necessarily with $x <_\sigma z$, such that $label_i(x) \prec_S label_i(z)$. But since $label_i(x)=N_\sigma(x,x)$ and $label_i(z)=N_\sigma(z,x)$, the pair of vertices $x$, $z$ contradicts the assumption that for all $x, y \in V$ with $x <_\sigma y$ we have $N_\sigma(x,x) \not\prec_S N_\sigma(y,x)$. \end{proof} With this formalism, in order to specify a particular search we just need to specify $\prec$, the partial order relation on the label sets for that search. As a consequence, we can transfer relationships between partial orders to their associated graph searches. 
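Property \ref{metaordering} is itself directly executable: it yields a membership test that never runs the search. The sketch below is our own illustration (the names `n_sigma` and `is_search_ordering` are ours, not part of the formalism):

```python
# Sketch of Property [metaordering] as a predicate (our code, for illustration).

def n_sigma(adj, sigma, u, v):
    """N_sigma(u, v): visiting dates (1-based) of neighbours of u before v."""
    pos = {x: i for i, x in enumerate(sigma, start=1)}
    return frozenset(pos[w] for w in adj[u] if pos[w] < pos[v])

def is_search_ordering(adj, sigma, prec):
    # sigma is an S-ordering iff for all x <_sigma y:
    #     N_sigma(x, x) is NOT prec-below N_sigma(y, x).
    for i, x in enumerate(sigma):
        for y in sigma[i + 1:]:
            if prec(n_sigma(adj, sigma, x, x), n_sigma(adj, sigma, y, x)):
                return False
    return True

def prec_gen(A, B):   # generic search: empty set strictly below any nonempty set
    return len(A) == 0 and len(B) > 0

adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}            # the path a-b-c
print(is_search_ordering(adj, ['a', 'b', 'c'], prec_gen))  # True
print(is_search_ordering(adj, ['a', 'c', 'b'], prec_gen))  # False
```

On the path $a$--$b$--$c$, visiting $c$ second is refused: at that step $b$ carries the visiting date of $a$ while $c$ carries the empty label.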
There are two natural ways of saying that a search $S$ \textbf{is a} search $S'$ (for instance, that LBFS \textbf{is a} BFS): either the $\prec$ ordering used by $S$ is a refinement of that of $S'$; or any search ordering $\sigma$ output by an execution of $S$ could also have been output by an execution of $S'$. In fact it can easily be shown that both formulations are equivalent, as stated in Theorem \ref{metaexten}. \begin{definition} For two TBLS searches $S$, $S'$, we say that $S'$ is an extension of $S$ (denoted by $S \ll S'$) if and only if every $S'$-ordering $\sigma$ is also an $S$-ordering. \end{definition} The statement and proof of the next lemma follow the work of \cite{BGS11}, where there are similar results for the GLS formalism. \begin{lemma}[see \cite{BGS11}]\label{K} For any TBLS $S$, any integer $p\geq 1$ and any sets $A$ and $B$ of $P(\mathbb{N}^+_p)$, if $A \not\prec_S B$ then there exists a graph $G$ and an $S$-ordering $\sigma$ such that at step $p-1$ the label of the $(p-1)$st vertex is $A$ and the label of the $p$th vertex is $B$ (i.e., $label_{p-1}(\sigma(p-1))=A$ and $label_{p-1}(\sigma(p))=B$). \end{lemma} \begin{proof} Let $G=(V,E)$, with $V=\{ z_1,...,z_p\}$ and $E = \{z_iz_k \mid 1 \leq k <i \leq p-2$ and if $A \cap \mathbb{N}^+_i \prec_S B \cap \mathbb{N}^+_i$ then $k \in B$ else $k \in A\} \cup \{ z_{p-1}z_k \mid k \in A\} \cup \{ z_{p}z_k \mid k \in B\}$. Let $\sigma = (z_1,\dots,z_p)$. By the definitions of $E$ and $\sigma$, for any integers $i,j$ such that $1 \leq i \leq j \leq p$, $N_\sigma(\sigma(j),\sigma(i))=A \cap \mathbb{N}^+_i$ or $N_\sigma(\sigma(j),\sigma(i))=B \cap \mathbb{N}^+_i$, with $N_\sigma(\sigma(i), \sigma(i)) \not\prec_S N_\sigma(\sigma(j), \sigma(i))$. By Property \ref{metaordering}, $\sigma$ is an $S$-ordering, and we have $N_\sigma(\sigma(p-1),\sigma(p-1))=A$ and $N_\sigma(\sigma(p),\sigma(p-1))=B$. 
\end{proof} \begin{definition} For two partial orders $\prec_P, \prec_Q$ on the same ground set $X$, we say that $\prec_P$ is an extension of $\prec_Q$ if $\forall x,y \in X$, $x\prec_Q y$ implies $x\prec_P y$. \end{definition} \begin{theorem}\label{metaexten} Let $S$, $S'$ be two TBLS. $S'$ is an extension of $S$ if and only if $\prec_{S'}$ is an extension of $\prec_S$. \end{theorem} \begin{proof} For the forward direction, assume for contradiction that $S'$ is an extension of $S$ but $\prec_{S'}$ is not an extension of $\prec_{S}$. Therefore there exist $A$, $B$ such that $A \prec_S B$ and $A \not\prec_{S'} B$. Now, using Lemma \ref{K}, there exist a graph $G$ and an $S'$-ordering $\sigma$ such that $label_{p-1}(\sigma(p-1))=A$ and $label_{p-1}(\sigma(p))=B$. Since $A \prec_S B$, using Property \ref{metaordering} we deduce that $\sigma$ is not an $S$-ordering, which contradicts the fact that $S'$ is an extension of $S$. For the backward direction, suppose that $\sigma$ is an $S'$-ordering. Then, using Property \ref{metaordering}, we know that for every $x, y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{S'} N_\sigma(y,x)$. Since $\prec_{S'}$ is an extension of $\prec_S$, we deduce that for every $x, y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{S} N_\sigma(y,x)$. Now, using Property \ref{metaordering} again, we deduce that $\sigma$ is an $S$-ordering. \end{proof} The choice of the permutation $\tau$ is useful in some situations described below; otherwise, we consider the orderings output under an arbitrary choice of $\tau$, thanks to the following definition: \begin{definition}\label{defordre} Let $\prec$ be some ordering over $P_f(\mathbb{N}^+)$. Then $\sigma$ is a TBLS ordering for $G$ and $\prec$ if there exists $\tau$ such that $\sigma=\mbox{TBLS}(G,\prec,\tau)$. \end{definition} Before giving some examples of appropriate $\prec$ for well-known searches, let us start with a kind of fixed point theorem and some general features of the TBLS formalism. 
\begin{theorem}\label{thm:fixpoint} Let $G$ be a graph, $\prec$ a search rule, and $\sigma$ an ordering of the vertices of $G$. Then there exists $\tau$ such that $\sigma=\mbox{TBLS}(G,\prec,\tau)$ if and only if $\sigma=\mbox{TBLS}(G,\prec,\sigma)$. \end{theorem} \begin{proof} One direction is obvious. For the other direction, assume that $\sigma=\mbox{TBLS}(G, \prec, \tau)$ for some $\tau$, and consider $\sigma'=\mbox{TBLS}(G,\prec,\sigma)$. Assume, by contradiction, that $\sigma \neq \sigma'$, and consider $i$, the index of the first difference between $\sigma$ and $\sigma'$. Let $Eligible_i^{\sigma}$ be the set of eligible vertices at step $i$ of the algorithm that generated $\sigma$, and let $Eligible_i^{\sigma'}$ be the set of eligible vertices at step $i$ of the algorithm that generated $\sigma'$. Since $\sigma$ and $\sigma'$ are equal up to index $i$, $Eligible_i^{\sigma}=Eligible_i^{\sigma'}$. By the definition of TBLS, $\sigma(i)\in Eligible_i^{\sigma}$. Every vertex of $Eligible_i^{\sigma}$ is unnumbered at step $i$ and therefore occurs at position $i$ or later in $\sigma$; hence $\sigma(i)$ is the leftmost vertex of this set according to $\sigma$, the tie-break rule chooses it, and so $\sigma(i)=\sigma'(i)$, a contradiction. \end{proof} This easy result (a direct consequence of our model being equipped with a built-in tie-break process) is in fact a very powerful theoretical tool to show that some ordering is not a TBLS ordering, and we will use it several times in the proofs of the next sections, as for example in Theorem \ref{genericsearch}. As a first conclusion, the TBLS model is a purely mathematical abstraction of graph search algorithms via partial orders, with no data structures involved. Moreover, if we have a characterization of the total orderings produced by a given TBLS (as, for example, the usual search characterizations of Section \ref{charac}), then we can reason independently of the implementation itself, which may be parallel or sequential. In the next sections we will exhibit some easy consequences of this model. 
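The fixed point of Theorem \ref{thm:fixpoint} is easy to observe numerically. In the sketch below (our own code, for illustration), `tbls` is a direct transcription of Algorithm \ref{gtls} and `prec_bfs` anticipates the BFS order of the next section; rerunning the search with its own output as tie-break reproduces that output.

```python
# Illustration (our code) of the fixed-point test: an ordering sigma is a
# TBLS ordering for prec iff TBLS(G, prec, sigma) == sigma.

def tbls(adj, prec, tau):
    label = {v: frozenset() for v in tau}
    numbered, sigma = set(), []
    for i in range(1, len(tau) + 1):
        unnumbered = [v for v in tau if v not in numbered]
        eligible = [x for x in unnumbered
                    if not any(prec(label[x], label[y]) for y in unnumbered)]
        v = eligible[0]            # leftmost eligible vertex according to tau
        sigma.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                label[w] = label[w] | {i}
    return sigma

umin = lambda A: min(A) if A else float('inf')   # umin(emptyset) = infinity
prec_bfs = lambda A, B: umin(A) > umin(B)

# The 4-cycle a-b-c-d-a.
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
sigma = tbls(adj, prec_bfs, ['a', 'b', 'c', 'd'])
print(sigma)                                # ['a', 'b', 'd', 'c']
print(tbls(adj, prec_bfs, sigma) == sigma)  # True: sigma is a fixed point
# Rerunning with a non-BFS ordering as tie-break deviates from it:
print(tbls(adj, prec_bfs, ['a', 'c', 'b', 'd']))  # ['a', 'b', 'd', 'c']
```

The last call certifies that $(a,c,b,d)$ is not a BFS-ordering of the 4-cycle: the rerun already deviates at the second step.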
But first, let us demonstrate its expressive power, in particular to deal with multi-sweep algorithms (i.e., algorithms written as a series of successive graph searches). To this aim, let us consider the sequence $\{\sigma_i \}_{i \in \mathbb{N}}$ of total orderings of the vertices, satisfying the following recursive equations: (i) $\sigma_0$ is an arbitrary total ordering of the vertices; (ii) $\sigma_i =\mbox{TBLS}(G,\prec,\sigma^r_{i-1})$, where $\sigma^r$ denotes the reverse ordering of $\sigma$. \vspace{0.5cm} When $\prec$ is the partial order associated with the LBFS search, as described in the next section, it was proven: (i) in \cite{Corneil04}, that if $G$ is a unit interval graph then $\sigma_{3}$ is a unit interval ordering\footnote{an ordering $\tau$ of $V$ such that for all $x <_{\tau} y <_{\tau} z$ with $xz \in E$, we have $xy, yz \in E$.}; (ii) in \cite{DH13}, that if $G$ is a cocomparability graph then $\sigma_{|V|}$ is a cocomp ordering\footnote{an ordering $\tau$ of $V$ such that for all $x <_{\tau} y <_{\tau} z$ with $xz \in E$, we have at least one of $xy, yz \in E$.}. \section{Characterizing classical searches using TBLS}\label{charac} In this section we show how various classical searches (see \cite{CK} for their definitions) may be expressed in the TBLS formalism. In each case we will state an appropriate $\prec$ order and, where applicable, we will establish various characterizations of the search, including the ``pattern-condition'' presented in \cite{CK}. In many cases we will exhibit new vertex ordering characterizations. \begin{definition}\label{lnrndef} For every vertex $x$, let $ln(x)$ be the leftmost (in $\sigma$) left neighbour of $x$, and let $rn(x)$ be the rightmost (in $\sigma$) right neighbour of $x$. In both cases, if $x$ has no left (respectively right) neighbour, then $ln(x)$ (respectively $rn(x)$) $=-1$. 
\end{definition} \subsection{Generic Search} A \emph{Generic Search}, as described by Tarjan \cite{TAR}, is any search that wherever possible visits neighbours of already visited vertices (this corresponds to the usual notion of graph search). We now give an alternative proof, based on our formalism, of the characterization of a generic search ordering, called a GEN-ordering throughout the rest of the paper. \begin{theorem}[see \cite{CK}]\label{genericsearch} We define $A\prec_{gen} B$ if and only if $A=\emptyset$ and $B\ne \emptyset$. Let $\sigma$ be a permutation of $V$. The following conditions are equivalent:\\[-.6cm] \begin{enumerate} \item Vertex ordering $\sigma$ is a GEN-ordering of $V$ (i.e., a TBLS using $\prec_{gen}$). \item For every triple of vertices $a,~b,~c$ such that $ a <_\sigma b <_\sigma c$ and $a \in N(c)-N(b)$, there exists $d \in N(b)$ such that $d <_\sigma b$. \item For every $x\in V$ and every $y\in V$ such that $x<_{\sigma}y <_{\sigma}rn(x)$, we have $ln(y) \neq -1$. \end{enumerate} \end{theorem} \begin{proof} Using Property \ref{metaordering} on $\sigma$, we know that:\\ $\sigma$ is a GEN-ordering\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{gen} N_\sigma(y,x)$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(y,x) = \emptyset$ or $N_\sigma(x,x) \neq \emptyset$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(y,x) \neq \emptyset \Rightarrow N_\sigma(x,x) \neq \emptyset$\\ $\iff$ for every triple of vertices $a,~b,~c$ such that $ a <_\sigma b <_\sigma c$ and $a \in N(c)-N(b)$, there exists $d \in N(b)$ such that $d <_\sigma b$.\\ Therefore we have proved the equivalence between 1 and 2. Let us consider 3, which is a reformulation of 2. The fact that $1 \Rightarrow 3$ is obvious. To prove the converse, we can use Theorem \ref{thm:fixpoint}. Suppose that there exists $\sigma$ satisfying 3 but not 1. 
Let $\sigma'=\mbox{TBLS}(G,\prec_{gen},\sigma) \neq \sigma$ and let $i$ be the leftmost index where they differ, with $z=\sigma'(i)\neq y= \sigma(i)$. Since the tie-break rule would otherwise have chosen $y$, vertex $y$ is not eligible at step $i$; hence $label_i(y) \prec_{gen} label_i(z)$, that is, $label_i(y)=\emptyset$ and $label_i(z)$ contains the visiting date of some vertex $x$ with $x <_\sigma y$. But then $x<_{\sigma}y <_{\sigma}z \leq_{\sigma}rn(x)$, so condition 3 implies that $ln(y) \neq -1$, contradicting $label_i(y)=\emptyset$. \end{proof} \begin{remark} Theorem \ref{thm:fixpoint} can be used in the same way in the subsequent proofs of this section, but we omit the details to avoid repetition. \end{remark} \subsection{BFS (Breadth-First Search)}\label{sec:bfs} We now focus on BFS. Here we will follow the definition of BFS given in \cite{CK}, that is, a graph search in which the eligible vertices are managed with a queue. Note that this differs, for example, from the definition given in \cite{CormenLR89}, where BFS stands for what we call a layered search. Our notion of BFS is the most common implementation of a layered search. \begin{theorem}\label{th:bfs} We define $A\prec_{BFS} B$ if and only if $umin(A)>umin(B)$. Let $\sigma$ be a permutation of $V$. The following conditions are equivalent:\\[-.6cm] \begin{enumerate} \item Vertex ordering $\sigma$ is a BFS-ordering (i.e., a TBLS using $\prec_{BFS}$). \item For every triple $a,b,c \in V$ such that $a<_\sigma b <_\sigma c$ and $a \in N(c) - N(b)$, there exists $d$ such that $d \in N(b)$ and $d <_\sigma a$. \item For every triple $a,b,c \in V$ such that $a<_\sigma b <_\sigma c$ and $a$ is the leftmost vertex of $ N(b) \cup N(c)$ in $\sigma$, we have $a \in N(b)$. \end{enumerate} \end{theorem} \begin{proof} The equivalence between condition (1) and condition (2) has been proved in \cite{CK}. We now prove that condition (1) is equivalent to condition (3). Suppose that $\sigma$ is a BFS-ordering. 
Using Property \ref{metaordering} on $\sigma$, we know that:\\ $\sigma$ is a BFS-ordering\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{BFS} N_\sigma(y,x)$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umin(N_\sigma(x,x)) \not> umin(N_\sigma(y,x))$ \\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umin(N_\sigma(x,x)) \leq umin(N_\sigma(y,x))$\\ $\iff$ for every triple of vertices $a,~b,~c$ such that $a<_\sigma b <_\sigma c$ and $a$ is the leftmost vertex of $ N(b) \cup N(c)$, we have $a \in N(b)$.\\ \end{proof} \subsection{DFS (Depth First Search)}\label{sec:dfs} We now turn our attention to Depth First Search. \begin{theorem}\label{th:dfs} We define $A\prec_{DFS} B$ if and only if $umax(A)<umax(B)$. Let $\sigma$ be a permutation of $V$. The following conditions are equivalent:\\[-.6cm] \begin{enumerate} \item Vertex ordering $\sigma$ is a DFS-ordering (i.e., a TBLS using $\prec_{DFS}$). \item For every triple of vertices $a,~b,~c$ such that $ a <_\sigma b <_\sigma c$ and $a \in N(c)-N(b)$, there exists $d \in N(b)$ such that $a <_\sigma d <_\sigma b$. \item For every triple of vertices $a,~b,~c$ such that $ a <_\sigma b <_\sigma c$ and $a$ is the rightmost vertex of $N(b) \cup N(c)$ to the left of $b$ in $\sigma$, we have $a \in N(b)$. \end{enumerate} \end{theorem} \begin{proof} The equivalence between conditions (1) and (2) has been proved in \cite{CK}. Let us show the equivalence between (1) and (3). Suppose that $\sigma$ is a DFS-ordering. 
Using Property \ref{metaordering} on $\sigma$, we know that:\\ $\sigma$ is a DFS-ordering\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{DFS} N_\sigma(y,x)$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umax(N_\sigma(x,x)) \not< umax(N_\sigma(y,x))$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umax(N_\sigma(y,x)) \leq umax(N_\sigma(x,x))$\\ $\iff$ for every triple of vertices $a,~b,~c$ such that $a<_\sigma b <_\sigma c$ and $a$ is the rightmost vertex of $ N(b) \cup N(c)$ to the left of $b$ in $\sigma$, we have $a \in N(b)$.\\ \end{proof} \subsection{LBFS (Lexicographic Breadth First Search)}\label{sec:lbfs} LBFS was first introduced in \cite{RTL} to recognize chordal graphs. Since then, many new applications of LBFS have been presented, ranging from recognizing various families of graphs to finding vertices with high eccentricity or computing the modular decomposition of a given graph, see \cite{HMPV00,SURV,DOS09,Ted}. \begin{theorem}\label{th:lbfs} We define $A \prec_{\LexBFS} B$ if and only if $umin(B-A)<umin(A-B)$. Let $\sigma$ be a permutation of $V$. The following conditions are equivalent:\\[-.6cm] \begin{enumerate} \item Vertex ordering $\sigma$ is an LBFS-ordering (i.e., a TBLS using $\prec_{\LexBFS}$). \item For every triple $a,b,c \in V$ such that $a<_\sigma b <_\sigma c$ and $a \in N(c)-N(b)$, there exists $d <_\sigma a$ with $d\in N(b)-N(c)$. \item For every triple $a,b,c \in V$ such that $a<_\sigma b <_\sigma c$ and $a$ is the leftmost vertex of $N(b) \bigtriangleup N(c)$ to the left of $b$ in $\sigma$, we have $a \in N(b)-N(c)$. \end{enumerate} \end{theorem} \begin{proof} The equivalence between (1) and (2) is well known, see \cite{RTL,GOL,BD97}. We now prove the equivalence between (1) and (3). Suppose that $\sigma$ is an LBFS-ordering. 
Using Property \ref{metaordering} on $\sigma$, we know that:\\ $\sigma$ is an LBFS-ordering\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{\LexBFS} N_\sigma(y,x)$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umin(N_\sigma(y,x) - N_\sigma(x,x)) \not< umin(N_\sigma(x,x) -N_\sigma(y,x))$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umin(N_\sigma(x,x)-N_\sigma(y,x)) \leq umin(N_\sigma(y,x)-N_\sigma(x,x))$\\ $\iff$ for every triple of vertices $a,~b,~c$ such that $a<_\sigma b <_\sigma c$ and $a$ is the leftmost vertex of $ N(b) \bigtriangleup N(c)$ to the left of $b$ in $\sigma$, we have $a \in N(b)-N(c)$.\\ \end{proof} \subsection{LDFS (Lexicographic Depth First Search)}\label{sec:ldfs} Lexicographic Depth First Search (LDFS) was introduced in \cite{CK}. \begin{theorem}\label{th:ldfs} We define $A \prec_{\LexDFS} B$ if and only if $umax(A-B)<umax(B-A)$. Let $\sigma$ be a permutation of $V$. The following conditions are equivalent:\\[-.6cm] \begin{enumerate} \item Vertex ordering $\sigma$ is an LDFS-ordering (i.e., a TBLS using $\prec_{\LexDFS}$). \item For every triple $a,b,c \in V$ such that $a<_\sigma b <_\sigma c$ and $a \in N(c)-N(b)$, there exists $d$ with $a <_\sigma d <_\sigma b$ and $d\in N(b)-N(c)$. \item For every triple $a,b,c \in V$ such that $a<_\sigma b <_\sigma c$ and $a$ is the rightmost vertex in $N(b) \bigtriangleup N(c)$ to the left of $b$ in $\sigma$, we have $a \in N(b)-N(c)$. \end{enumerate} \end{theorem} \begin{proof} The equivalence between (1) and (2) is well known, see \cite{CK}. We now prove the equivalence between (1) and (3). Suppose that $\sigma$ is an LDFS-ordering. 
Using Property \ref{metaordering} on $\sigma$, we know that:\\ $\sigma$ is an LDFS-ordering\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $N_\sigma(x,x) \not\prec_{\LexDFS} N_\sigma(y,x)$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umax(N_\sigma(x,x)-N_\sigma(y,x)) \not< umax(N_\sigma(y,x) - N_\sigma(x,x))$\\ $\iff$ for every $x,~y \in V$ such that $x <_\sigma y$, we have $umax(N_\sigma(y,x)-N_\sigma(x,x)) \leq umax(N_\sigma(x,x)-N_\sigma(y,x))$\\ $\iff$ for every triple of vertices $a,~b,~c$ such that $a<_\sigma b <_\sigma c$ and $a$ is the rightmost vertex of $ N(b) \bigtriangleup N(c)$ to the left of $b$ in $\sigma$, we have $a \in N(b)-N(c)$.\\ \end{proof} The symmetry between BFS and DFS (respectively LBFS and LDFS) becomes clear when using the TBLS ordering formalism. This symmetry was also clear using the pattern-conditions introduced in \cite{CK}, and in fact led to the discovery of LDFS. To finish with the classical searches, we notice that \textbf{Maximum Cardinality Search (MCS)}, as introduced in \cite{TY84}, can easily be defined using the partial order $A \prec_{MCS} B$ if and only if $|A| < |B|$. Similarly, \textbf{MNS (Maximal Neighbourhood Search)}, as introduced in~\cite{Shier} for chordal graph recognition, is the search such that $A\prec_{MNS} B$ if and only if $A \subsetneq B$, i.e., it uses the strict inclusion partial order between subsets. To conclude this section, we use Theorem \ref{metaexten} to easily rediscover the relationships amongst various graph searches noted in \cite{CK} and \cite{BGS11}. \begin{theorem}\label{Xhier} The partial order of the extension relation between classical searches is as described in Figure 1. \end{theorem} \begin{proof} To show that a search extends another one, we use Theorem \ref{metaexten}. 
Let us show that $\prec_{BFS}$ (respectively $\prec_{DFS}$, $\prec_{MNS}$) is an extension of $\prec_{gen}$. Let $A \prec_{gen} B$. By definition we have $A = \emptyset$ and $B \neq \emptyset$. As a consequence we have $umin(B) < umin(A)$ and thus $A \prec_{BFS} B$ (respectively $umax(A) < umax(B)$, implying $A \prec_{DFS} B$, and $A \subsetneq B$, implying $A \prec_{MNS} B$). We now show that $\prec_{\LexBFS}$ is an extension of $\prec_{BFS}$. Let $A \prec_{BFS} B$. By definition, we have $umin(B) < umin (A)$. As a consequence, $umin(B-A) < umin (A-B)$, implying $A \prec_{\LexBFS} B$. To see that $\prec_{\LexDFS}$ is an extension of $\prec_{DFS}$, first suppose that $A \prec_{DFS} B$. Therefore $umax(A) < umax (B)$ and as a consequence $umax(A-B) < umax (B-A)$, thereby implying $A \prec_{\LexDFS} B$. Similarly, $\prec_{\LexBFS}$ (respectively $\prec_{\LexDFS}$, $\prec_{MCS}$) is an extension of $\prec_{MNS}$. Let $A \prec_{MNS} B$; by definition we have $A \subsetneq B$. As a consequence, $umin(B-A) < umin (A-B)$ (respectively $umax(A-B) < umax (B-A)$ and $|A| < |B|$). So $A \prec_{\LexBFS} B$ (respectively $A \prec_{\LexDFS} B$ and $A \prec_{MCS} B$). \end{proof} \vspace{0.5cm} \begin{figure}[ht]\vspace{-.4cm} \begin{center} \begin{tikzpicture} \node[draw](B) at (3,2) {LBFS}; \node[draw](C) at (7,2) {LDFS}; \node[draw](D) at (5,2) {MCS}; \node[draw](E) at (5,1) {MNS}; \node[draw](G) at (3,1) {BFS}; \node[draw](H) at (7,1) {DFS}; \node[draw](J) at (5,0) {Generic Search}; \draw[->] (J)--(E); \draw[->] (J)--(G); \draw[->] (J)--(H); \draw[->] (E)--(B); \draw[->] (E)--(D); \draw[->] (E)--(C); \draw[->] (H)--(C); \draw[->] (G)--(B); \end{tikzpicture} \end{center}\vspace{-.4cm} \vspace{0.5cm} \caption{\label{fig:panorama}Summary of the hereditary relationships proved in Theorem \ref{Xhier}. 
An arc from Search $S$ to Search $S'$ means that $S'$ extends $S$.} \end{figure} \subsection{Limitations of the TBLS model} To finish, let us remark that there exists at least one known search that does not fit into the TBLS model. In the following, recall that $label_i(v)$ denotes the label of vertex $v$ at the beginning of step $i$ of Algorithm \ref{gtls}. \emph{Layered Search} starts at a vertex $s$ and ensures that if $dist(s,x)<dist(s,y)$ then $\sigma^{-1}(x)<\sigma^{-1}(y)$. In other words, it respects the layers (the sets of vertices at the same distance from the start vertex $s$). We now show that this search is not a TBLS by considering the graph $G$ in Figure \ref{fig:limit}. Assume that we have started the Layered Search with $x_1,~x_2,~x_3,~x_4$, so that $label_5(x_5)=\{3\}$ and $label_5(x_6)=\{4\}$. In a Layered Search, both $x_5$ and $x_6$ must be eligible at step 5. Thus we must have neither $\{3\}\prec \{4\}$ nor $\{4\}\prec \{3\}$; they are incomparable labels. But now consider graph $H$ in Figure \ref{fig:limit} and assume that again we have started the search with $v_1,~v_2,~v_3,~v_4$. So we have $label_5(v_5)=\{3\}$ and $label_5(v_6)=\{4\}$. But in this graph we have to visit $v_5$ before $v_6$. Therefore we must have $\{3\} \prec \{4\}$. In conclusion, no partial ordering of the labels can capture all Layered Search orderings, and so this search cannot be written in our formalism. The same appears to be true for Min-LexBFS as defined in \cite{Meister05} and Rightmost Neighbour as used in \cite{CDH}. 
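For reference, the label orders met in this section can be collected as small predicates, with $umin(\emptyset)=\infty$ and $umax(\emptyset)=0$ as defined earlier. The snippet below is our own illustration; its assertions spot-check, on sample labels, the extension relations of Theorem \ref{Xhier}.

```python
# The label orders of this section as predicates (our illustration);
# labels are finite frozensets of positive ints.
INF = float('inf')
umin = lambda A: min(A) if A else INF     # umin(emptyset) = infinity
umax = lambda A: max(A) if A else 0       # umax(emptyset) = 0

prec = {
    'gen':  lambda A, B: not A and bool(B),
    'bfs':  lambda A, B: umin(A) > umin(B),
    'dfs':  lambda A, B: umax(A) < umax(B),
    'lbfs': lambda A, B: umin(B - A) < umin(A - B),
    'ldfs': lambda A, B: umax(A - B) < umax(B - A),
    'mcs':  lambda A, B: len(A) < len(B),
    'mns':  lambda A, B: A < B,           # strict subset inclusion
}

A, B = frozenset(), frozenset({2, 5})     # A prec_gen B ...
for s in ('bfs', 'dfs', 'mns', 'lbfs', 'ldfs', 'mcs'):
    assert prec[s](A, B)                  # ... propagates to every extension

C, D = frozenset({1, 4}), frozenset({1, 2, 4})   # C prec_mns D (strict subset)
assert prec['lbfs'](C, D) and prec['ldfs'](C, D) and prec['mcs'](C, D)
print('extension spot-checks passed')
```

These checks only illustrate the theorem on particular labels; the proof above covers all pairs.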
\begin{figure}[ht] \begin{center} \begin{tikzpicture} \coordinate(X1) at (0,0.5); \coordinate(X2) at (1,0); \coordinate(X3) at (1,0.5); \coordinate(X4) at (1,1); \coordinate(X5) at (2,0.5); \coordinate(X6) at (2,1); \draw (X1) node[above left] {$x_1$} node{$\bullet$}; \draw (X2) node[above] {$x_2$} node{$\bullet$}; \draw (X3) node[above] {$x_3$} node{$\bullet$}; \draw (X4) node[above] {$x_4$} node{$\bullet$}; \draw (X5) node[above] {$x_5$} node{$\bullet$}; \draw (X6) node[above] {$x_6$} node{$\bullet$}; \coordinate(V1) at (4,0.5); \coordinate(V2) at (5,1); \coordinate(V3) at (5,0); \coordinate(V4) at (6,1); \coordinate(V5) at (6,0); \coordinate(V6) at (7,1); \draw(V1) node[above left]{$v_1$} node{$\bullet$}; \draw(V2) node[above]{$v_2$} node{$\bullet$}; \draw(V3) node[above]{$v_3$} node{$\bullet$}; \draw(V4) node[above]{$v_4$} node{$\bullet$}; \draw(V5) node[above]{$v_5$} node{$\bullet$}; \draw(V6) node[above]{$v_6$} node{$\bullet$}; \draw (X6)--(X4)--(X1)--(X2); \draw (X1)--(X3)--(X5); \draw (V1)--(V2)--(V4)--(V6); \draw (V1)--(V3)--(V5); \end{tikzpicture} \end{center} \caption{Graph $G$ on the left and $H$ on the right.\label{fig:limit}} \end{figure} \section{The relationship between GLS and TBLS} \label{compa} We are now interested in determining the relationship between TBLS and GLS. First let us recall GLS from \cite{BGS11}. It depends on a \emph{labeling structure} which consists of four elements: \begin{itemize} \item a set of labels $L$; \item a strict order $\prec_{GLS}$ over the label-set $L$; \item an initial label $l_0$; \item an UPLAB function $L\times \mathbb{N}^+ \to L$. \end{itemize} The GLS algorithm then takes as input a graph $G=(V,E)$ (over which the search is performed) as well as a labeling structure. The computational power of the UPLAB function is unbounded, even though it must be deterministic, and the label set $L$ may be any set. 
In contrast, TBLS uses a fixed initial label $\emptyset$, a fixed label set $P_f(\mathbb{N}^+)$, and a fixed simple updating function. Despite these restrictions, it is, however, equivalent to GLS in the sense of Theorem \ref{equivalence}. \vspace{0.5cm} {\small \begin{algorithm}[H] \caption{GLS($G,\{L,\prec_{GLS},l_0,UPLAB\}$)} \lForEach{$v\in V$}{$l(v)\leftarrow l_0$}\ \For{$i\leftarrow 1$ \KwTo $n$}{ Let $Eligible$ be the set of eligible vertices, i.e., those unnumbered vertices $v$ with $l(v)$ maximal with respect to $\prec_{GLS}$\; Let $v$ be some vertex from $Eligible$\; $\sigma(i)\leftarrow v$\; \ForEach{\textup{unnumbered vertex $w$ adjacent to $v$}}{ $l(w)\leftarrow UPLAB(l(w),i)$\; } } \end{algorithm} } \vspace{0.5cm} We now prove that for each GLS there is a $\prec_{TBLS}$ producing the same orderings, and conversely. First we need some notation. At each iteration $i$ of $\mbox{TBLS}(G,\prec_{TBLS}, \sigma)$, let $l_{TBLS,i}(v)$ be the label assigned to every unnumbered vertex $v$ by $\mbox{TBLS}(G,\prec_{TBLS}, \sigma)$, i.e., the label that will be used to choose the $i$th vertex. Similarly, let $l_{GLS,i}(v)$ be the label assigned to every unnumbered vertex $v$ by GLS($G, \{L,\prec_{GLS},l_0,UPLAB\}$), i.e., the label that will be used to choose the $i$th vertex. Given a graph $G=(V,E)$ and an ordering $\sigma$ of $V$, let us define $I_k^{\sigma}(v)=N(v) \cap \{\sigma(1), \dots, \sigma(k)\}$ to be the set of neighbours of $v$ visited at step $k$ or before, and let $p_k^j$ be the $j$th element of $I_k^{\sigma}(v)$ sorted by increasing visiting date. \begin{proposition}[\cite{BGS11}]\label{GLSlabelOK} Let $S$ be a labeling structure and $G=(V,E)$ a graph. At iteration $i$ of GLS($G,S$) computing an ordering $\sigma$, for every unnumbered vertex $v$: \\$l_{GLS,i}(v)= UPLAB (\dots UPLAB (l_0, p_1), \dots, p_{k})$, where $(p_1, \dots, p_k)$ is the sequence of numbers in $N_{\sigma}(v, \sigma(i))$ in increasing order. 
\end{proposition} \begin{proposition}\label{TBLSlabelOK} Let $G=(V,E)$ be a graph and $\sigma$ the ordering produced by TBLS($G,\prec, \tau$). At iteration $i$ of $\mbox{TBLS}(G,\prec, \tau$), for every unnumbered vertex $v$: \\$l_{TBLS,i}(v)=I_{i-1}^{\sigma}(v)$ (where we identify each visited vertex with its visiting date). \end{proposition} \begin{proof} The proof goes by induction. At the first step of the algorithm, every vertex has $\varnothing$ as its label and has no previously visited neighbour. Assume that at the beginning of iteration $i$, every unnumbered vertex $x$ has label $l_{TBLS,i}(x)= I_{i-1}^{\sigma}(x)$. After this iteration, every unnumbered neighbour $v$ of $\sigma(i)$ has $l_{TBLS,i+1}(v)=l_{TBLS,i}(v)\cup\{i\}$, which is indeed $I_{i}^{\sigma}(v)$, and every unnumbered non-neighbour $v$ of $\sigma(i)$ has $l_{TBLS,i+1}(v)=l_{TBLS,i}(v)$, which again equals $I_{i}^{\sigma}(v)$. \end{proof} \begin{theorem}\label{equivalence} A set $T$ of orderings of the vertices of a graph $G$ is equal to $\{\mbox{TBLS}(G,\prec_{TBLS},\tau) \mid \tau\in \mathfrak{S}_n \}$ if and only if there exists a labeling structure $S=(L,\prec_{GLS},l_0,UPLAB)$ such that $T$ is equal to the set of orderings produced by $GLS(G,S)$. \end{theorem} \begin{proof} First, consider an ordering $\prec_{TBLS}$. The set $\{\mbox{TBLS}(G,\prec_{TBLS},\tau) \mid \tau\in \mathfrak{S}_n \}$ is equal to the set of all orderings produced by GLS($G,S$) with $S=(P(\mathbb{N}^+), \prec_{TBLS},\varnothing,cons)$, where $cons(l(w),i)$ returns $l(w)\cup\{i\}$. Conversely, consider a labeling structure $S=(L,\prec_{GLS},l_0,UPLAB)$. We show that there exists an order $\prec_{TBLS}$ such that, for every graph $G$, the set of all orderings produced by GLS($G,S$) is equal to $\{\mbox{TBLS}(G,\prec_{TBLS},\tau) \mid \tau\in \mathfrak{S}_n \}$. 
By Propositions \ref{GLSlabelOK} and \ref{TBLSlabelOK} we can define a mapping $\phi$ from $P(\mathbb{N}^+)$ (the labels used by TBLS) onto the labels \emph{effectively} used by GLS (i.e., those that can be assigned to a vertex during some execution of the algorithm). $\phi$ is recursively defined by $\phi(\varnothing)=l_0$ and, if $max(A)=i$, $\phi(A)= UPLAB(\phi(A \setminus \{i\}),i)$. Notice that the same GLS-label $l$ may be reached in different ways. The subset $\phi^{-1} (l) \subset P(\mathbb{N}^+)$ is the set of TBLS-labels that correspond to that label; it is empty for all labels not effectively used. Then we define $\prec_{TBLS}$ as follows: $\forall A, A' \in P_f(\mathbb{N}^+)$, $A\prec_{TBLS} A'$ if and only if $\phi(A)\prec_{GLS} \phi(A')$. We are now ready to prove the theorem. The proof goes by induction. Before the first iteration, for every vertex $v$, $l_{GLS,1}(v)=l_0$ and $l_{TBLS,1}(v)=\varnothing$. GLS can pick any of these vertices, in particular the one that would be picked by $\mbox{TBLS}(G,\prec_{TBLS},\tau)$, and by setting $\tau$ to be equal to a given output of $GLS(G,S)$, TBLS would indeed choose the same vertex. Now assume that when step $i$ begins, both algorithms have produced the ordering $\sigma(1) \dots \sigma(i-1)$, and the $i$th vertex is about to be chosen. By Propositions \ref{GLSlabelOK} and \ref{TBLSlabelOK}, for every unnumbered vertex $x$, $l_{TBLS,i}(x)= I_{i-1}^{\sigma}(x)$ and $l_{GLS,i}(x)= UPLAB (\dots UPLAB (l_0, p_{i-1}^1)\dots ,p_{i-1}^{|I_{i-1}^{\sigma}(x)|})$. By the definition of $\phi$, we have that $l_{GLS,i}(x)=\phi(l_{TBLS,i}(x))$ and $l_{TBLS,i}(x)\in \phi^{-1}(l_{GLS,i}(x))$. 
Then, by the definition of $\prec_{TBLS}$, for two unnumbered vertices $v$ and $w$, we know that $l_{TBLS,i}(v) \prec_{TBLS} l_{TBLS,i}(w)$ if and only if $\phi(l_{TBLS,i}(v)) \prec_{GLS} \phi(l_{TBLS,i}(w))$, and $l_{GLS,i}(v) \prec_{GLS} l_{GLS,i}(w)$ if and only if for all $l_v\in \phi^{-1}(l_{GLS,i}(v))$ and all $l_w\in \phi^{-1}(l_{GLS,i}(w))$, $l_v\prec_{TBLS} l_w$. Thus, the set of eligible vertices at step $i$ is the same for both algorithms. $GLS$ can pick any of these vertices, in particular the one that would be picked by $\mbox{TBLS}(G,\prec_{TBLS},\tau)$, and by setting $\tau$ to be equal to GLS$(G,S)$, TBLS would indeed choose the right vertex. \end{proof} Although TBLS and GLS cover the same set of vertex orderings, we think that our TBLS formalism provides a simpler framework to analyze graph search algorithms, as can be seen in the next section. \section{Recognition of some TBLS search orderings}\label{tci} Let us now consider the following problem: \vspace{0.5cm} \textbf{Recognition of Search $\cal S$} \KwData{Given a total ordering $\sigma$ of the vertices of a graph $G$ and a TBLS search $\cal S$,} \KwResult{Does there exist $\tau$ such that $\sigma = \mbox{TBLS}(G, \prec_{\cal S}, \tau)$?} \vspace{0.5cm} Of course we can use Theorem~\ref{thm:fixpoint} and build an algorithm that tests whether or not $\sigma=\mbox{TBLS}(G,\prec_{\cal S}, \sigma)$. Let $\tau=\mbox{TBLS}(G,\prec_{\cal S},\sigma)$. If $\tau=\sigma$ then the answer is yes; otherwise it is no. We can certify the no answer using the first difference between $\tau$ and $\sigma$. Let $i$ be the first index such that $\sigma(i)\neq \tau(i)$. If TBLS chooses $\tau(i)$ and not $\sigma(i)$ at step $i$, then at this time $l(\sigma(i))\prec_{\cal S} l(\tau(i))$. So we can build a contradiction to the pattern-condition of this search. But we may want to be able to answer this question without applying a TBLS search, or modifying a TBLS algorithm. 
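Before looking at specialized recognition routines, it may help to see the naive fixpoint test spelled out. The following Python sketch is ours, not part of the paper: the encoding of $G$ as adjacency sets and of $\prec$ as a Boolean predicate on label sets are assumptions made purely for illustration.

```python
def tbls(adj, prec, tau):
    """Direct implementation of TBLS(G, prec, tau).

    adj  : dict mapping each vertex to the set of its neighbours
    prec : prec(A, B) is True when label A is strictly smaller than
           label B in the label ordering (A, B are frozensets of indices)
    tau  : list of all vertices, the tie-break ordering
    Returns the visit ordering sigma as a list of vertices.
    """
    label = {v: frozenset() for v in adj}
    unnumbered = set(adj)
    sigma = []
    for i in range(1, len(adj) + 1):
        # Eligible: unnumbered vertices whose label is maximal w.r.t. prec
        eligible = [v for v in unnumbered
                    if not any(prec(label[v], label[w]) for w in unnumbered)]
        x = min(eligible, key=tau.index)   # tie-break: first in tau
        sigma.append(x)
        unnumbered.discard(x)
        for v in adj[x] & unnumbered:      # label update step
            label[v] = label[v] | {i}
    return sigma


def is_search_ordering(adj, prec, sigma):
    """Fixpoint test: sigma is produced by TBLS(G, prec, .) for some
    tie-break iff TBLS(G, prec, sigma) returns sigma itself."""
    return tbls(adj, prec, sigma) == sigma
```

For instance, with a BFS-like label ordering (labels compared by their smallest element), on the path $a-b-c$ the ordering $a,c,b$ is rejected while $a,b,c$ is accepted.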
For example suppose that a distributed or parallel algorithm has been used to compute the ordering (for example when dealing with a huge graph \cite{BV12}) that is assumed to be a specific search ordering; how does one efficiently answer this question? Let us study some cases. \subsection{Generic Search} For Generic Search consider Algorithm \ref{alg:gencertif} where $\sigma$ is the ordering we want to check, and for all $i$ between $1$ and $n$, $ln(\sigma(i))$ has been computed; note that $G$ may be disconnected. Recall that $ln(x)$ is the position in $\sigma$ of the leftmost left neighbour of $x$; if $x$ has no left neighbours, then $ln(x) = -1$. The algorithm will output either ``YES'' or ``NO'' depending on whether or not $\sigma$ is a GEN-ordering. \vspace{0.5cm} \begin{algorithm}[H] \label{alg:gencertif} \caption{GEN-check} $J\leftarrow 1$; \hspace{4mm}\% \{ If $\sigma$ is a GEN-ordering, then $J$ is the index of the first \\ \hspace{1.5cm}vertex of the current connected component.\}\%\\ \For{$i\leftarrow 2$ \KwTo $n$}{ \If{$ln(\sigma(i)) = -1$}{$J \leftarrow i$} \Else{{\If{$ln (\sigma(i)) < J$}{\Return{``NO''}}} } } \Return{``YES''} \end{algorithm} \vspace{0.5cm} \begin{theorem}\label{genericreco} The GEN-check algorithm is correct and requires $O(n)$ time. The recognition of a GEN-ordering can be implemented to run in $O(n+m)$ time. \end{theorem} \begin{proof} If the algorithm reports that $\sigma$ is not a GEN-ordering, then the vertices $\sigma(ln(\sigma(i))), \sigma(J), \sigma(i)$ form a forbidden triple as stipulated in Condition 2 of Theorem \ref{genericsearch}. Note that $\sigma (J)$ has no neighbours to its left in $\sigma$. Now assume that the algorithm reports that $\sigma$ is a GEN-ordering but for the sake of contradiction there exists a forbidden triple on vertices $a <_{\sigma} b <_{\sigma} c$. Let $J$ be the rightmost such index less than $\sigma^{-1}(c)$ identified by the algorithm; note that $b \le_{\sigma} \sigma(J) <_{\sigma} c$ and $\sigma(ln(c)) \le_{\sigma} a$. 
When $i = \sigma^{-1} (c)$ the algorithm would have reported that $\sigma$ is not a GEN-ordering. For the preprocessing we need to compute the values of $ln (x)$ for every vertex $x$, following Definition \ref{lnrndef}. By sorting the adjacency lists with respect to $\sigma$ (in linear time), it is possible to find $ln (x)$ for every vertex in linear time by scanning the adjacency lists once and storing the values in an array. Given this information, Algorithm \ref{alg:gencertif} runs in $O(n)$ time. Including the preprocessing, the overall complexity is $O(n+m)$. \end{proof} \subsection{BFS} In order to handle the recognition of BFS-orderings and DFS-orderings, we will first prove variations of the conditions proposed in Theorems \ref{th:bfs} and \ref{th:dfs}, which are easier to check. Let us define, for every vertex $x$ in $V$, the following two intervals in $\sigma$: $Right(x)=[x,rn(x)]$ and $Left(x)=[ln(x), x]$. By convention, if $rn(x)=-1$ or $ln(x)=-1$ the corresponding interval is reduced to $[x]$. \begin{theorem}\label{BFScond} Vertex ordering $\sigma$ is a BFS-ordering of $V$ if and only if \begin{enumerate} \item Vertex ordering $\sigma$ is a GEN-ordering of $G$ \item For every pair of vertices $x, y$, if $x<_{\sigma} y$ then $ln(x) \leq_{\sigma} ln(y)$ \item For every pair of vertices $x, y$, if $x \ne y$ then neither of the intervals $Left(x)$ and $Left(y)$ can be strictly included in the other \footnote{One is included in the other and the two left extremities are different, as are the two right extremities.}. \end{enumerate} \end{theorem} \begin{proof} It is easy to show that Conditions 2 and 3 are equivalent. $\Rightarrow$ First, notice that every BFS-ordering $\sigma$ is also a GEN-ordering. Now assume for contradiction that Condition 3 is violated, namely that $x <_{\sigma} y$ and that $Left(y)$ strictly contains $Left(x)$. Then we have the configuration: $ln(y)<_{\sigma}ln(x) \le_{\sigma} x <_{\sigma} y$. 
Considering the triple $(ln(y), x, y)$, since $ln(y)<_{\sigma} ln(x)$, necessarily $x\,ln(y) \notin E$. Using the BFS 4-points condition on this triple, there exists $z$ such that $z <_{\sigma} ln(y)$ with $xz \in E$, thereby contradicting the definition of $ln(x)$ as the leftmost left neighbour of $x$. $\Leftarrow$ Assume that $\sigma$ respects all three conditions of the theorem. Consider a triple $(a, b, c)$ of vertices such that: $a<_{\sigma} b <_{\sigma} c$ with $ac \in E$ and $ab \notin E$. Since $\sigma$ is a GEN-ordering, $ac \in E$ implies that $ln(b) \neq -1$ (i.e., $b$ has a left neighbour in $\sigma$). Suppose $ln(b) >_{\sigma} a$. Since $ln(c)\leq_{\sigma}a$, this implies that $Left(c)$ strictly contains $Left(b)$, thereby contradicting Condition 3. Therefore $b$ has a neighbour before $a$ in $\sigma$. So $\sigma$ follows the BFS 4-points condition and is a legitimate BFS-ordering. \end{proof} To determine whether a given vertex ordering $\sigma$ is a BFS-ordering we first use Algorithm \ref{alg:gencertif} to ensure that $\sigma$ is a GEN-ordering. We then use Algorithm \ref{alg:BFScertif} to determine whether or not Condition 3 of Theorem \ref{BFScond} is satisfied and thus whether or not $\sigma$ is a BFS-ordering. As with Algorithm \ref{alg:gencertif}, we assume that $ln(\sigma(i))$ has been computed for all $i$ between $1$ and $n$. \vspace{0.5cm} \begin{algorithm}[H] \label{alg:BFScertif} \caption{BFS-check} $min \leftarrow n$; \hspace{4mm}\%\{$min$ will store the index of the current leftmost value of \\ \hspace{1.7cm}$ln (\sigma (j))$ for all $i \le j \le n$.\}\% \\ \For{$i\leftarrow n$ downto $1$}{ \If{ $ln(\sigma(i)) > min $}{{\Return{``NO''}}} \If{$ln(\sigma(i)) \neq -1$}{$min \leftarrow ln(\sigma(i))$;} } \Return{``YES''} \end{algorithm} \vspace{0.5cm} \begin{theorem}\label{BFSreco} Given a GEN-ordering $\sigma$, the BFS-check algorithm correctly determines whether $\sigma$ is a BFS-ordering in $O(n)$ time. The recognition of a BFS-ordering can be done in $O(n+m)$ time. 
\end{theorem} \begin{proof} If the algorithm reports that $\sigma$ is not a BFS-ordering, then consider the triple of vertices $\sigma(min), \sigma(i), \sigma(k)$, where $k$ is the value of $i$ when $min$ was determined. Note that $\sigma(i)$ is not adjacent to $\sigma(min)$ or to any vertices to the left of $\sigma(min)$ and thus this triple forms a forbidden triple as stipulated in Condition 2 of Theorem \ref{th:bfs}. Now assume that the algorithm reports that $\sigma$ is a BFS-ordering but for the sake of contradiction there exists a forbidden triple on vertices $a <_{\sigma} b <_{\sigma} c$. We let $a' = \sigma(ln(c))$ and note that since $b$ has no neighbours to the left of or equal to $a$, $b$ is not adjacent to $a'$ or to any vertices to its left. Thus when $i = \sigma^{-1} (c)$ the algorithm would have reported that $\sigma$ is not a BFS-ordering. The complexity argument is the same as in the proof of Theorem \ref{genericreco}. \end{proof} Concerning this particular result on BFS: when the graph is connected, it provides as a corollary a linear-time algorithm to certify a shortest path between the vertices $\sigma (1)$ and $\sigma(n)$. So in the spirit of \cite{McMNS11}, this can be used for certifying BFS-based diameter algorithms (see \cite{BV12, CGHLM13}). \vspace{0.5cm} \noindent \subsection{DFS} We now consider DFS and define $Lmax(x)$, for every vertex $x \in V$, to be the rightmost left neighbour of $x$ in $\sigma$; if $x$ has no left neighbours then by convention $Lmax(x)=-1$. The interval $RLeft(x)$ is defined to be $[Lmax(x), x]$; again by convention, if $Lmax(x)=-1$, $RLeft(x)$ is reduced to $[x]$. \begin{theorem} Let $G=(V,E)$ be a graph, and let $\sigma$ be an ordering of $V$. Vertex ordering $\sigma$ is a DFS-ordering of $G$ if and only if \begin{enumerate} \item $\sigma$ is a GEN-ordering of $G$ \item no two intervals $Right(x)$ and $RLeft(y)$, with $x \neq y$, strictly overlap as intervals. 
\end{enumerate} \end{theorem} \begin{proof} $\Rightarrow$ First, notice that every DFS-ordering is also a GEN-ordering. Then, assume, for contradiction, that $\sigma$ is a DFS-ordering of $G$, but that in $\sigma$ $Right(x)$ and $RLeft(y)$ overlap for some $x \neq y$. Necessarily $x <_{\sigma} y$ and $Lmax(y)<_{\sigma} x <_{\sigma} y <_{\sigma} rn(x)$. $Lmax(y)<_{\sigma} x $ implies $xy \notin E(G)$. But then the triple $(x, y, rn(x))$ violates the 4-points condition of $\sigma$, since $y$ has no neighbour between $x$ and $y$ in $\sigma$. $\Leftarrow$ Assume that $\sigma$ respects both conditions of the theorem but $\sigma$ is not a DFS-ordering. Consider a triple $(a, b, c)$ of vertices such that: $a<_{\sigma} b <_{\sigma} c$ with $ac \in E$ and $ab \notin E$ but there is no neighbour of $b$ between $a$ and $b$ in $\sigma$. Since $\sigma$ is supposed to be a GEN-ordering, $ac \in E$ implies that $b$ has a neighbour $d$ left to it in $\sigma$, which by the above argument, must be before $a$. Thus $Lmax(b) <_{\sigma} a$ and therefore the intervals $RLeft(b), Right(a)$ strictly overlap, a contradiction. \end{proof} \begin{corollary} DFS-orderings can be recognized in $O(n+m)$. \end{corollary} \begin{proof} Verifying that $\sigma$ is a generic-ordering can be done in $O(n+m)$ time using Theorem \ref{genericreco}. To check the second condition, it suffices to build the family of $2n$ intervals and apply a simple 2 states stack automaton \cite{HMU01} to check the overlapping in $O(n)$ time. \end{proof} \subsection{LBFS and LDFS} \begin{theorem}\label{cert} LBFS\xspace}%%{LDAMaxS\xspace and LDFS\xspace}%%{LIPMaxS\xspace-orderings can be recognized in $O(n(n+m))$ time. \end{theorem} \begin{proof} To build the recognition algorithm we use the third condition of the relevant theorems in Section \ref{charac}, in particular \ref{th:lbfs} (LBFS) and \ref{th:ldfs} (LDFS). Both of these conditions are pattern-conditions. 
The certificate is stored in a table whose entries are keyed by the pair $(b,c)$ where $b <_{\sigma} c$; each entry is either a vertex $a$ with $a <_{\sigma} b$ that satisfies the corresponding condition, or an error message indicating that the condition has been violated. For LBFS and LDFS, the pattern-condition examines $a$, the leftmost (LBFS) or the rightmost (LDFS) vertex of $N(b) \bigtriangleup N(c)$, and requires that $a \in N(b)-N(c)$. It is easy to show that this can be accomplished in time $O(|N(b)| + |N(c)|)$, for any $b$ and $c$. In all cases, if $a$ satisfies the membership condition then it is stored in the $(b,c)$'th entry of the table; otherwise ``error'' is stored. Regarding complexity, the table uses $O(n^2)$ space. For the lexicographic searches, the time requirement is bounded by $\sum_{b \in V} \sum_{c \in V} (|N(b)| + |N(c)|)$ to build the table and $O(n^2)$ time to search for an ``error'' entry, giving an $O(n(n+m))$ time complexity. \end{proof} These results for LBFS and LDFS do not seem to be optimal, but at least they yield a certificate in case of failure. To improve these algorithms we need to find some new characterizations of LBFS- and LDFS-orderings. \section{Implementation issues}\label{Imp} We now consider how to compute a TBLS search in the case where $\prec$ is a total order. In such a case, at each step of the search, the labels yield a total preorder on the vertices. Such a total preorder (also called a weak order in ordered-set terminology) can be efficiently represented using ordered partitions, as can be seen in the next result. \begin{theorem}\label{thpartoche} $TBLS(G,\prec,\tau)$ where $\prec$ is a total order can be implemented to run in $O(n+ mT(n)\log n)$ time, where the $\prec$ comparison time between two labels is bounded by $O(T(n))$. \end{theorem} \begin{proof} We use the framework of partition refinement \cite{HMPV00}. 
First we sort the adjacency lists with respect to $\tau$, and consider the following algorithm. The input to the algorithm is a graph $G=(V,E)$, a total order $\prec$ on $P_f(\mathbb{N}^+)$, and an ordering $\tau$ of $V$ and the output is the $\mbox{TBLS}(G,\prec,\tau)$-ordering $\sigma$ of $V$. \vspace{0.5cm} {\small \begin{algorithm}[H] \caption{Computing a TBLS ordering} Let $\mathcal{P}$ be the partition $\{V\}$, where the only part (i.e., $V$) is ordered with respect to $\tau$\; \For{$i\leftarrow 1$ \KwTo $n$} { Let $Eligible$ be the part of $\mathcal{P}$ with the largest label with respect to $\prec$ and\; let $x$ be its first vertex\; replace $Eligible$ by $Eligible-\{x\}$ in $\mathcal{P}$ \; $\sigma(i)\gets x$\; Refine($\mathcal{P}$, $N(x)$)\; } \end{algorithm} } The algorithm maintains an (unordered) partition $\mathcal{P}$ of the unnumbered vertices. Each part of $\mathcal{P}$ is an ordered list of vertices. The following two invariants hold throughout the execution of the algorithm: \begin{enumerate} \item The vertices of each part have the same unique (with respect to parts) label; \item Inside a part, the vertices are sorted with respect to $\tau$. \end{enumerate} The action of Refine$(\mathcal{P},A)$ is to replace each part $P\in\mathcal{P}$ with two new parts: $P\cap A$ and $P-A$ (ignoring empty new parts). It is possible to maintain the two invariants using the data structure from \cite{HMPV00}, provided the adjacency lists of $G$ are sorted with respect to $\tau$. After each refinement, each part of $\mathcal{P}$ therefore contains vertices that are twins with respect to the visited vertices (Invariant 1). Thanks to the second invariant, the chosen vertex is always the first vertex (with respect to $\tau$) of part $Eligible$; i.e., $\sigma(i)$ is indeed $x$. \vspace{0.5cm} For the time complexity, Refine($\mathcal{P}$, $N(x)$) takes $O(|N(x)|)$ time \cite{HMPV00}, so all refinements take $O(n+m)$ time. 
The only non-linear step is identifying part $Eligible$ among all parts of the partition. Each part has a label (the one shared with all its vertices) used as a key in a Max-Heap. Refine($\mathcal{P}$, $N(x)$) creates at most $|N(x)|$ new parts, so there are at most $m$ insertions into the heap. The label of a part does not change over time (but empty parts must be removed). There are no more removal operations than insertions, and each heap operation consists of at most $\log n$ label comparisons (since there are at most $n$ parts at any time). So we get the $O(n+mT(n) \log n)$ time bound. \end{proof} This complexity is not optimal, since it is well known and already used in some applications (see for example \cite{DOS09}) that classical searches such as BFS, DFS, LBFS can be implemented within the TBLS framework, i.e., solving the tie-break with a given total ordering $\tau$ of the vertices, within the same complexity as their current best implementations. To avoid the $T(n)$ costs and the $\log n$ factor, the trick is simply to use an implementation of the search that uses partition refinement (such an implementation exists for BFS, DFS, and LBFS). If we start with a set ordered via $\tau$, there exists a partition refinement implementation that preserves this ordering on each part of the partition, and the tie-break rule simply means choosing the first element of the $Eligible$ part. For LDFS, the best known complexity can also be achieved this way. But for Generic Search, MCS and MNS we do not yet know how to achieve linear time within the TBLS framework. \section{Concluding remarks} We have focused our study on a new formalism that captures many usual graph searches as well as the commonly used multi-sweep application of these searches. The TBLS formalism allows us to define a generic recognition algorithm for TBLS-orderings, and gives us a new point of view on the hierarchy amongst graph searches. 
The new pattern-conditions for Generic Search, BFS and DFS give us a better way (compared to the pattern-conditions presented in \cite{CK}) of certifying whether a given ordering could have been produced by such a search. Furthermore, for LBFS and LDFS we do not have to trust the implementation of the search (which can be complicated) but have presented a simple program that just visits the neighbourhood of the vertices of the graph and stores a small amount of information (see Theorem \ref{cert}). The size of this extra information, however, can be bigger than the size of the input, and it may take longer to compute than the actual time needed to perform the search itself. \vspace{0.3cm} The landscape of graph search is quite complex. Graph searches can be clustered using the data structures involved in their best implementations (queue, stack, partition refinement \dots). In this paper we have tried a more formal way to classify graph searches. This attempt yields an algebraic framework that could be of some interest. Clearly being an extension (see Section \ref{sectTBLS}) is a transitive relation. In fact $\ll$ structures the TBLS graph searches as a $\wedge$-semilattice. The $0$ search in this semilattice, called the null search and denoted $S_{null}$, corresponds to the empty ordering relation (no comparable pairs). At every step of $S_{null}$ the Eligible set contains all unnumbered vertices. Therefore for every $\tau$, $\mbox{TBLS}(G, \prec_{S_{null}}, \tau)=\tau$ and so any total ordering of the vertices can be produced by $S_{null}$. The infimum of two searches $S,S'$ can be defined as follows: \medskip \noindent For every pair of label sets $A, B$, we define: $A\prec_{S\wedge S'}B$ if and only if $A \prec_S B$ and $A \prec_{S'}B$. \medskip Clearly every common extension of $S$ and $S'$ is an extension of $S\wedge S'$. Similarly $S$ and $S'$ are extensions of $S\wedge S'$. 
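Encoding label orderings as Boolean predicates (an assumption of ours, matching the sketch style above rather than the paper's notation), the infimum is simply pointwise conjunction, and $S_{null}$ is the everywhere-false predicate:

```python
def meet(prec_s, prec_s2):
    """A prec_{S ^ S'} B  iff  A prec_S B and A prec_{S'} B."""
    return lambda A, B: prec_s(A, B) and prec_s2(A, B)

# The null search: no two labels are comparable, so every unnumbered
# vertex is always eligible and any total ordering can be produced.
prec_null = lambda A, B: False
```

In particular, meeting any search with the null search yields the null search again, as expected of a $0$ element.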
While TBLS is as general as GLS, we feel that it is closer to the pattern-conditions presented in \cite{CK}, since many of the $\prec$ conditions presented in this paper are a rewriting of their pattern-conditions. Still, there are many variants of the searches we studied that do not fall under the TBLS model, such as layered search. We wonder whether a more general search model can be found that would not only include some of these other common searches but would also retain the simplicity of TBLS. \section*{Acknowledgements} The authors thank Dominique Fortin for his careful reading and numerous suggestions to improve the paper. The first author wishes to thank the Natural Sciences and Engineering Research Council of Canada for their financial support.
\section{Introduction} Flowchart languages are a particular class of imperative programming languages which permit a pleasant and intuitive graphical representation of the control flow of programs. While conceptually very simple, flowchart languages form the foundation for modern imperative programming languages, and have been used for this reason as vehicles for program analysis (\emph{e.g.}, to measure coverage in white-box testing~\cite{Ammann2008}), program transformations (\emph{e.g.}, partial evaluation, see \cite{JonesGomardSestoft1993}), and to express fundamental properties of imperative programming, such as the equivalence of expressivity in \emph{structured} and \emph{unstructured} programming in the \emph{Böhm-Jacopini theorem}~\cite{BohmJacopini1966} (see also \cite{AshcroftManna1972,WilliamsOssher1978}). Figure \ref{fig:structured_flowchart} shows the (textual and graphical) flowchart structures used by structured flowchart languages. An interesting feature of flowchart languages is the dual presentation of predicates as \emph{conditions} and \emph{decisions}, depending on the context. On the one hand, the textual $\mathbf{if}~p~\mathbf{then}~c_1~\mathbf{else}~c_2$ seems to favor the view of $p$ as a condition, \emph{i.e.}, a predicate which has inherently nothing to do with control flow, but which may easily be combined with other conditions to form new ones. In other words, the textual representation considers the branching behaviour to be given by the semantics of $\mathbf{if}~\dots~\mathbf{then}~\dots~\mathbf{else}~\dots$ rather than by the semantics of $p$. 
This view is also emphasized by the usual (big-step) operational semantics of conditionals: Here, predicates are treated as expressions that may be evaluated in a state to yield a Boolean value, which the conditional may then branch on, as in \begin{equation*} \frac{\langle p, \sigma \rangle \to \mathbf{true} \qquad \langle c_1, \sigma \rangle \to \sigma'}{\langle \mathbf{if}~p~\mathbf{then}~c_1~\mathbf{else}~c_2, \sigma \rangle \to \sigma'} \qquad \text{ and } \qquad \frac{\langle p, \sigma \rangle \to \mathbf{false} \qquad \langle c_2, \sigma \rangle \to \sigma'}{\langle \mathbf{if}~p~\mathbf{then}~c_1~\mathbf{else}~c_2, \sigma \rangle \to \sigma'} \enspace. \end{equation*} On the other hand, the graphical representation of conditionals in Figure~\ref{fig:structured_flowchart}(c) seems to rather prefer the view of $p$ as a decision, \emph{i.e.}, a kind of flowchart operation intrinsically capable of directing control flow. That is to say, that this is a structured flowchart (corresponding to a conditional) is purely coincidental; for \emph{unstructured} flowcharts to make sense, $p$ \emph{must} be able to direct control flow on its own. However, where conditions are most naturally composed via the Boolean combinators, the only natural way of composing decisions seems to be in sequence (though this leads to additional output branches). While categorical models of structured flowchart languages have been widely studied (see, \emph{e.g.}, \cite{Stefanescu1986, Stefanescu1987, ManesArbib1980, ManesArbib1986,Elgot1975,Elgot1976}), none provide a treatment of this dual view of predicates. In this paper, we argue that extensive restriction categories are precisely categories that make clear this dual view on predicates as conditions and decisions, offering both the ease of combination of conditions and the control flow behaviour of decisions. 
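To fix intuitions before the categorical development, here is how the two views translate into one another in the concrete world of partial functions. This is a sketch of ours, not part of the paper: undefinedness is marked by None, and 'inl'/'inr' are hypothetical tags standing in for the coproduct injections.

```python
def decision_of(p):
    """Turn a condition p : X -> {True, False} (partial, None = undefined)
    into the corresponding decision X -> X + X."""
    def d(x):
        b = p(x)
        if b is None:
            return None                    # undefined stays undefined
        return ('inl', x) if b else ('inr', x)
    return d

def condition_of(d):
    """Recover the condition from a decision d : X -> X + X."""
    def p(x):
        r = d(x)
        return None if r is None else r[0] == 'inl'
    return p
```

The two translations are mutually inverse, previewing the natural isomorphism established in Section~\ref{sec:condition_decision_duality}.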
Restriction categories (introduced in \cite{Cockett2002, Cockett2003, Cockett2007}) are categories of partial maps, in which each morphism is equipped with a \emph{restriction idempotent} that, in a certain sense, gauges how partial that morphism is. Since models of flowchart languages must provide a notion of partiality (due to possible nontermination), restriction categories provide an ideal setting for such models. Coincidentally, the defining feature of extensive restriction categories\footnote{Note that while extensive restriction categories are strongly connected to extensive categories, they are confusingly \emph{not} extensive in the usual sense of extensive categories~\cite{Carboni1993}.} is the presence of certain morphisms called \emph{decisions}, which play a role similar to that of the decision view on predicates in flowchart languages. In this setting, we show that the correspondence between conditions and decisions is exhibited precisely as a natural isomorphism between the \emph{predicate fibration} $\Hom(X,1+1)$ of predicates and predicate transformers (see also \cite{CJWW2015,Jacobs2015}), and the \emph{decision fibration} $\Dec(X)$ of decisions (certain morphisms $X \to X+X$) and decision transformers. We then go on to explore the structure of $\Dec(X)$ (or equivalently, $\Hom(X,1+1)$), showing that this extends to a fibration over the category of \emph{De Morgan quasilattices} and homomorphisms, which give algebraic semantics~\cite{FinnGrigolia1993} to Kleene's \emph{weak logic} \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}~\cite{Kleene1952}. Intuitively, \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} can be seen as a partial version of classical (Boolean) logic. We make this statement precise in this setting by showing that if we restrict ourselves to \emph{total} decisions and decision transformations, classical logic can be recovered. 
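As a quick informal aside (our own rendering, not from the paper), the connectives of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} can be computed over the three values True, False and None, with None marking undefinedness; note that, unlike in strong Kleene logic, even ``true or undefined'' is undefined:

```python
def wand(a, b):
    """Weak-Kleene conjunction: undefined if either argument is."""
    return None if a is None or b is None else (a and b)

def wor(a, b):
    """Weak-Kleene disjunction: undefined if either argument is."""
    return None if a is None or b is None else (a or b)

def wnot(a):
    """Weak-Kleene negation."""
    return None if a is None else (not a)
```

On the defined values {True, False} these connectives restrict to the classical ones, matching the claim that totality recovers classical logic.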
Since the subcategory of objects and total morphisms of a (split) extensive restriction category is an extensive category in the usual sense (see, \emph{e.g.}, \cite{Carboni1993}), we can use this to provide an alternative proof of a statement from Effectus theory~\cite{Jacobs2015,CJWW2015} that predicates over each extensive category form a fibred Boolean algebra via the predicate fibration~\cite[Prop.~61, Prop.~88]{CJWW2015}. This yields a relationship diagram of effecti, extensive categories, and extensive restriction categories and their corresponding logics as shown in Figure~\ref{fig:exteff}. \begin{figure} \ctikzfig{fc_struct} \caption{The four flowchart structures.} \label{fig:structured_flowchart} \end{figure} This paper is structured as follows: Section \ref{sec:background} gives a brief introduction to extensive restriction categories. Section \ref{sec:condition_decision_duality} demonstrates the condition/decision duality of extensive restriction categories by showing that the decision and predicate fibrations are naturally isomorphic; and, as a consequence, that \emph{decisions are a property of the predicates}. Then, in Section~\ref{sec:the_internal_logic_of_extensive_restriction_categories}, we show that the decisions on an object form models of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}, with decision transformers as homomorphisms. By restricting to only \emph{total} decisions, we show that these restrict to models of classical logic. Finally, Section~\ref{sec:conclusion} offers some concluding remarks. 
\begin{figure} \centering \begin{tikzpicture} \node[draw,rounded corners] at (0.75,0) (ExtCat) {\begin{tabular}{c} \textbf{Extensive category} \\ \emph{Boolean algebras} \end{tabular}}; \node[draw,rounded corners] at (5.25,-4) (ExtRestCat) {\begin{tabular}{c} \textbf{Extensive restriction category} \\ \emph{De Morgan quasilattices} \end{tabular}}; \node[draw,rounded corners] at (-5.5,-4) (Effectus) {\begin{tabular}{c} \textbf{Effectus} \\ \emph{Effect algebras} \end{tabular}}; \draw[->] (Effectus) to node {} (ExtCat); \draw[->] (ExtRestCat) to node {} (ExtCat); \end{tikzpicture} \caption{Extensive categories, extensive restriction categories, and effecti: their relationships and associated logics.} \label{fig:exteff} \end{figure} \section{Extensive restriction categories} \label{sec:background} This section gives an introduction to extensive restriction categories as it will be applied in the sections that follow. The experienced reader may safely skip this section on a first reading, and instead refer back to it as necessary. Restriction categories are categories equipped with notions of partiality and totality of morphisms. This is done by means of a \emph{restriction combinator}, assigning to each morphism $X \xrightarrow{f} Y$ its \emph{restriction idempotent} $X \xrightarrow{\ridm{f}} X$ (subject to certain laws) which may intuitively be thought of as a partial identity defined precisely where $f$ is defined. In this way, restriction categories provide an axiomatic (and relatively light-weight) approach to partiality of morphisms in categories. 
Formally, restriction categories are defined in the following way: \begin{definition} A \emph{restriction structure} on a category consists of a combinator mapping each morphism $f$ to its \emph{restriction idempotent} $\ridm{f}$, \emph{i.e.} \begin{equation*} \frac{X \xrightarrow{f} Y}{X \xrightarrow{\ridm{f}} X} \end{equation*} subject to the \emph{restriction laws}: \begin{multicols}{2} \begin{itemize} \setlength{\itemindent}{3em} \item[(R1)] $f \ridm{f} = f$ for all $X \xrightarrow{f} Y$, \item[(R2)] $\ridm{f} \ridm{g} = \ridm{g} \ridm{f}$ for all $X \xrightarrow{f} Y$ and $X \xrightarrow{g} Z$, \item[(R3)] $\ridm{f\ridm{g}} = \ridm{f} \ridm{g}$ for all $X \xrightarrow{f} Y$ and $X \xrightarrow{g} Z$, and \item[(R4)] $\ridm{g} f = f \ridm{gf}$ for all $X \xrightarrow{f} Y$ and $Y \xrightarrow{g} Z$. \end{itemize} \end{multicols} A category equipped with a restriction structure is called a \emph{restriction category}. \end{definition} As the name suggests, a restriction structure is a structure on a category rather than a property of it; in particular, a category can be equipped with several different restriction structures. For this reason, we must in principle specify which restriction structure we are using when speaking of a particular category as a restriction category, though this is often omitted when the restriction structure is implicitly given to be a canonical one. Given that restriction categories are built on a foundation of idempotents, one would expect it to be occasionally useful when all such restriction idempotents split, and indeed this is the case. Say that a restriction structure is \emph{split} when all restriction idempotents split, and let $\Split(\ensuremath{\mathscr{C}})$ denote the category arising from the usual idempotent splitting (\emph{i.e.}, the \emph{Karoubi envelope}) of all restriction idempotents in \ensuremath{\mathscr{C}}. 
That $\Split(\ensuremath{\mathscr{C}})$ is a restriction category when $\ensuremath{\mathscr{C}}$ is follows from \cite[Prop.~2.26]{Cockett2002}. As a canonical example, the category \ensuremath{\mathbf{Pfn}}{} of sets and partial functions is a restriction category, with the restriction idempotent $X \xrightarrow{\ridm{f}} X$ for $X \xrightarrow{f} Y$ given by \begin{equation*} \ridm{f}(x) = \left\{ \begin{array}{ll} x & \text{if $f$ is defined at $x$} \\ \text{undefined}\enspace & \text{otherwise} \end{array}\right. \end{equation*} In a restriction category, say that a morphism $X \xrightarrow{f} Y$ is \emph{total} if $\ridm{f} = \id_X$, and that it is a \emph{partial isomorphism} if there exists $Y \xrightarrow{f^\dagger} X$ such that $f^\dagger f = \ridm{f}$ and $ff^\dagger = \ridm{f^\dagger}$. Partial isomorphisms thus generalize ordinary isomorphisms, as an isomorphism is then a partial isomorphism $X \xrightarrow{f} Y$ such that both $f$ and $f^\dagger$ are total. Since total morphisms are closed under composition and include all identities, they form an important subcategory $\Total(\ensuremath{\mathscr{C}})$ of any restriction category \ensuremath{\mathscr{C}}. Likewise, partial isomorphisms are closed under composition and include all identities, so all objects and partial isomorphisms of \ensuremath{\mathscr{C}}{} form the subcategory $\Inv(\ensuremath{\mathscr{C}})$. As the notation suggests, this category $\Inv(\ensuremath{\mathscr{C}})$ is not just a restriction category but an \emph{inverse category} (indeed, it is the \emph{cofree} such~\cite{Kaarsgaard2017}) in the usual sense (see \cite{Cockett2002,Kastl1979}). A useful property of restriction categories is that they come with a natural partial order on homsets (which extends to enrichment in \ensuremath{\mathbf{Poset}}) given by $f \le g$ iff $g \ridm{f} = f$. Intuitively, this can be thought of as an \emph{information order}; $f \le g$ if $g$ can do everything $f$ can do, and possibly more. 
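In \ensuremath{\mathbf{Pfn}}{} these notions are easy to make concrete. The following sketch is ours, not from the paper: partiality is encoded by returning None, and the natural order is checked pointwise over a finite test domain.

```python
def compose(g, f):
    """Composition of partial functions (None = undefined)."""
    def gf(x):
        y = f(x)
        return None if y is None else g(y)
    return gf

def restriction(f):
    """Restriction idempotent of f: the partial identity on dom(f)."""
    return lambda x: x if f(x) is not None else None

def leq(f, g, domain):
    """Natural order on a finite domain: f <= g iff g . rbar(f) = f."""
    rf = restriction(f)
    return all(compose(g, rf)(x) == f(x) for x in domain)
```

For instance, law (R1) says that composing $f$ with its own restriction changes nothing, and any total extension of $f$ sits above $f$ in the information order.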
As with any other categorical structure, when working in restriction categories we require everything in sight to cooperate with the restriction structure. One of the simplest examples of cooperation with restriction structure is given in the definition of a restriction terminal object: This is simply a terminal object $1$ in the usual sense, which further satisfies that the unique map $X \to 1$ is \emph{total} for all objects $X$. For coproducts, this means that we not only require the restriction category to have all finite coproducts in the usual sense, but also that the coproduct injections $X \xrightarrow{\kappa_1} X+Y$ and $Y \xrightarrow{\kappa_2} X+Y$ are \emph{total}. In this case, we say that the restriction category has \emph{restriction coproducts}. There is also a similar notion of a \emph{restriction zero} object $0$: a zero object in the usual sense which additionally satisfies that each zero endomorphism $X \xrightarrow{0_{X,X}} X$ is its own restriction idempotent, \emph{i.e.}, that $\ridm{0_{X,X}} = 0_{X,X}$ (or equivalently, that $\ridm{0_{X,Y}} = 0_{X,X}$ for all zero morphisms $0_{X,Y}$). When zero morphisms exist, each serves as the least element of its homset with respect to the natural ordering, and when a category has restriction coproducts and a restriction zero object, the restriction zero object serves as the unit for the restriction coproduct. When this is the case, restriction coproduct injections are further partial isomorphisms (\emph{e.g.}, the partial inverse to $X \xrightarrow{\kappa_1} X+Y$ is $X+Y \xrightarrow{[\id,0]} X$). Extensivity for restriction categories means that the restriction coproducts are particularly well-behaved, in the sense that they admit a \emph{calculus of matrices}~\cite{Cockett2007}.
Concretely, this means that each morphism $X \xrightarrow{f} Y+Z$ is associated with a unique morphism $X \xrightarrow{\dec{f}} X+X$, its \emph{decision}, which, intuitively, makes the same branching choices as $f$ does, but doesn't do any actual work. Extensive restriction categories are defined as follows. \begin{definition}\label{def:extensive} A restriction category with restriction coproducts and a restriction zero is said to be an \emph{extensive restriction category} if each morphism $f$ has a unique \emph{decision} $\dec{f}$, \emph{i.e.} \begin{equation*} \frac{X \xrightarrow{f} Y+Z}{X \xrightarrow{\dec{f}} X+X} \end{equation*} satisfying the \emph{decision laws} \begin{multicols}{2} \begin{itemize} \setlength{\itemindent}{3em} \item[(D1)] $\nabla \dec{f} = \ridm{f}$ \item[(D2)] $(f+f) \dec{f} = (\kappa_1 + \kappa_2) f$ \end{itemize} \end{multicols} where $X+X \xrightarrow{\nabla} X$ is the natural codiagonal $[\id,\id]$. \end{definition} Note that extensive restriction categories are not extensive in the usual sense -- rather, extensive restriction categories are the ``partial'' version of extensive categories. This connection is made precise by the following proposition due to \cite{Cockett2007}. \begin{proposition} Whenever \ensuremath{\mathscr{C}}{} is an extensive restriction category, $\Total(\Split(\ensuremath{\mathscr{C}}))$ is an extensive category. \end{proposition} A straightforward example of an extensive restriction category is \ensuremath{\mathbf{Pfn}}. Here, the decision $X \xrightarrow{\dec{f}} X+X$ of a partial function $X \xrightarrow{f} Y+Z$ is given by \begin{equation*} \dec{f}(x) = \left\{ \begin{array}{ll} \kappa_1(x) & \text{if $f(x) = \kappa_1(y)$ for some $y \in Y$} \\ \kappa_2(x) & \text{if $f(x) = \kappa_2(z)$ for some $z \in Z$} \\ \text{undefined} & \text{if $f$ undefined at $x$} \end{array}\right. \end{equation*} For further examples and details on extensive restriction categories, see \cite{Cockett2007}. 
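The displayed decision in \ensuremath{\mathbf{Pfn}}{} can likewise be run directly. Below is a minimal Python sketch under our own encoding: partial functions are dicts as before, and elements of a coproduct $Y+Z$ are tagged pairs \texttt{('inl', y)} or \texttt{('inr', z)}.

```python
# Elements of Y+Z are tagged pairs ('inl', y) or ('inr', z);
# partial functions are dicts keyed by their points of definition.

def dec(f):
    """Decision of f : X -> Y+Z in Pfn: tag each input with the branch f
    takes, without doing f's actual work; undefined exactly where f is."""
    return {x: (tag, x) for x, (tag, _) in f.items()}

f = {1: ('inl', 'a'), 2: ('inr', 'b'), 3: ('inl', 'c')}  # undefined at 4

assert dec(f) == {1: ('inl', 1), 2: ('inr', 2), 3: ('inl', 3)}

# (D1): postcomposing with the codiagonal [id, id] recovers ridm(f).
codiag = lambda tagged: tagged[1]
assert {x: codiag(v) for x, v in dec(f).items()} == {x: x for x in f}
```

The second assertion is exactly decision law (D1) in this model: forgetting the tags of $\dec{f}$ yields the restriction idempotent $\ridm{f}$.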
\section{Condition/decision duality} \label{sec:condition_decision_duality} Categorical models of flowcharts are categories with a notion of partiality (due to possible nontermination) and coproducts (corresponding to the control flows of the flowchart). As such, restriction categories with restriction coproducts serve as a good starting point for these. We show in this section that the additional requirement of \emph{extensivity} of the restriction coproduct allows the category to exhibit a condition/decision duality, analogous to the flowchart languages. This manifests in the category as a natural isomorphism between the \emph{decisions} and \emph{predicates} over an object (with their corresponding transformations). We start with a few technical lemmas regarding the partial order on morphisms in a restriction category as well as properties of decisions in extensive restriction categories. \begin{lemma}\label{lem:ridm} It is the case that \begin{multicols}{2} \begin{enumerate}[(i)] \item\label{lem:ridm:1} $g \le g'$ implies $\ridm{gf} \le \ridm{g'f}$, \item\label{lem:ridm:2} $\ridm{gf} \le \ridm{f}$, \item\label{lem:ridm:3} $f \le g$ implies $hf \le hg$ and $fh' \le gh'$, \item\label{lem:ridm:4} $f \le f'$ and $g \le g'$ iff $f+g \le f'+g'$. \item\label{lem:ridm:5} $f \le \ridm{g}$ implies $f = \ridm{f}$. \end{enumerate} \end{multicols} \end{lemma} \begin{lemma}\label{lem:utility} Let $X \xrightarrow{f} Y+Z$ and $X' \xrightarrow{g} X$ be arbitrary morphisms of an extensive restriction category, and $X \xrightarrow{\ridm{e}} X$ any restriction idempotent. 
It is the case that \begin{multicols}{2} \begin{enumerate}[(i)] \item\label{lem:1} $\dec{\mkern-3mu\dec{f}\mkern-3mu} = \dec{f}$ \item\label{lem:2} $\dec{f}$ is a partial isomorphism and $\dec{f}^\dagger = \left[\,\ridm{\kappa_1^\dagger f},\ridm{\kappa_2^\dagger f}\,\right]$ \item\label{lem:2.5} $\ridm{\dec{f}^\dagger} = \ridm{\kappa_1^\dagger f} + \ridm{\kappa_2^\dagger f}$ \item\label{lem:3} $\ridm{\dec{f}} = \ridm{f}$ \item\label{lem:4} $\gamma \dec{f} = \dec{\gamma f}$ \item\label{lem:5} $\dec{\mkern-3mu\dec{f}g} = \dec{fg}$ \item\label{lem:6lem} $(\ridm{e}+\ridm{e})\dec{f} = (\ridm{e}+\ridm{e})\dec{f}\ridm{e}$ \item\label{lem:6} $\dec{f}\ridm{e}$ is a decision and $\dec{f}\ridm{e} = (\ridm{e}+\ridm{e})\dec{f}$ \item\label{lem:7} $\dec{f}\ridm{e} = \dec{f \ridm{e}}$ \item\label{lem:8} $\kappa_i^\dagger \dec{f} = \ridm{\kappa_i^\dagger f}$ \item\label{lem:9} $\dec{g}f = (f+f)\dec{gf}$ \end{enumerate} \end{multicols} \end{lemma} A few of these identities were shown already in \cite{Cockett2007}; the rest are mostly straightforward to derive. Note that a direct consequence of \eqref{lem:1} is that $(\dec{f}+\dec{f})\dec{f} = (\kappa_1 + \kappa_2)\dec{f}$; we will make heavy use of this fact in Section~\ref{sec:the_internal_logic_of_extensive_restriction_categories}. Another particularly useful identity is the following, stating intuitively that anything that behaves as a decision in each component is, in fact, a decision. \begin{lemma}\label{lem:decrep} If $\kappa_1^\dagger p = \ridm{\kappa_1^\dagger f}$ and $\kappa_2^\dagger p = \ridm{\kappa_2^\dagger f}$ then $p = \dec{f}$. \end{lemma} \begin{proof} By Lemma~\ref{lem:utility}\eqref{lem:2} $\dec{f}^\dagger = \left[\,\ridm{\kappa_1^\dagger f},\ridm{\kappa_2^\dagger f}\,\right]$. 
Since $\kappa_1^\dagger p = \ridm{\kappa_1^\dagger f}$ and $\kappa_2^\dagger p = \ridm{\kappa_2^\dagger f}$ it follows that $\ridm{\kappa_1^\dagger f} = (\ridm{\kappa_1^\dagger f})^\dagger = (\kappa_1^\dagger p)^\dagger = p^\dagger \kappa_1$ and $\ridm{\kappa_2^\dagger f} = (\ridm{\kappa_2^\dagger f})^\dagger = (\kappa_2^\dagger p)^\dagger = p^\dagger \kappa_2$ so it follows by the universal mapping property of the coproduct that $p^\dagger = \left[\,\ridm{\kappa_1^\dagger f},\ridm{\kappa_2^\dagger f}\,\right] = \dec{f}^\dagger$, and finally $p = \dec{f}$ by unicity of partial inverses. \end{proof} As a corollary, $p$ is a decision if $\kappa_1^\dagger p$ and $\kappa_2^\dagger p$ are both restriction idempotents (\emph{i.e.}, if $\kappa_1^\dagger p = \ridm{\kappa_1^\dagger p}$ and $\kappa_2^\dagger p = \ridm{\kappa_2^\dagger p}$) since all decisions decide themselves (\emph{i.e.}, since $\dec{\mkern-3mu\dec{p}\mkern-3mu} = \dec{p}$). \begin{theorem} There is a functor $\ensuremath{\mathscr{C}}^{\text{op}} \xrightarrow{\Dec} \ensuremath{\mathbf{Set}}$ given by mapping objects to their decisions, and morphisms to decision transformers. \end{theorem} \begin{proof} Define this functor by $\Dec(X) = \{\dec{p} \mid p \in \Hom(X,X+X)\}$ on objects, and by $\Dec(f : Y \to X)(\dec{p}) = \dec{\mkern-3mu\dec{p}f}$ on morphisms. This is contravariantly functorial since $\Dec(\id_X)(\dec{p}) = \dec{\mkern-3mu\dec{p}\id_X} = \dec{\mkern-3mu\dec{p}\mkern-3mu} = \dec{p}$ by Lemma~\ref{lem:utility}\eqref{lem:1}, and $\Dec(gf)(\dec{p}) = \dec{\mkern-3mu\dec{p} gf} = \dec{\mkern-3mu\dec{\mkern-3mu\dec{p} g}f} = \Dec(f)(\Dec(g)(\dec{p}))$ by Lemma~\ref{lem:utility}\eqref{lem:5} and definition of $\Dec(f)$, as desired. \end{proof} From now on, we will use the notation $\Dec(Y) \xrightarrow{f^\diamond} \Dec(X)$ for the decision transformation $\Dec(f)$. 
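In \ensuremath{\mathbf{Pfn}}, the decision transformer $f^\diamond$ is simply precomposition with $f$ followed by taking the decision again. A small Python sketch under the same dict/tagged-pair encoding as before (the name \texttt{transform} is ours):

```python
def compose(g, f):
    """g after f on partial functions modelled as dicts."""
    return {x: g[f[x]] for x in f if f[x] in g}

def dec(f):
    """Decision of a partial function into a coproduct of tagged pairs."""
    return {x: (tag, x) for x, (tag, _) in f.items()}

def transform(f, p):
    """Decision transformer f^<> : Dec(Y) -> Dec(X), sending dec(p) to
    dec(dec(p) . f)."""
    return dec(compose(p, f))

p = {'a': ('inl', 'a'), 'b': ('inr', 'b')}  # a decision on Y = {'a','b'}
f = {1: 'a', 2: 'b'}                        # f : {1,2,3} -> Y, undefined at 3

# Pulling p back along f tags each input of f with the branch p takes at f(x).
assert transform(f, p) == {1: ('inl', 1), 2: ('inr', 2)}

# Functoriality on identities: id^<> is the identity on decisions.
id_Y = {'a': 'a', 'b': 'b'}
assert transform(id_Y, p) == p
```

This matches the functorial action $\Dec(f)(\dec{p}) = \dec{\mkern-3mu\dec{p}f}$ defined above, specialized to finite carriers.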
This is an example of a fibred category; such categories have historically been important in categorical presentations of logic, \emph{e.g.}, in topoi (see \cite{Jacobs1999} for a thorough treatment of indexed and fibred categories in categorical logic). In Section~\ref{sec:the_internal_logic_of_extensive_restriction_categories}, we will see that this indexed category extends beyond $\ensuremath{\mathbf{Set}}$ to a model of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}. For now, it is sufficient to show the equivalence between conditions (morphisms $X \to 1+1$) and decisions (morphisms $X \to X+X$ satisfying the \emph{decision laws} of Definition~\ref{def:extensive}). \begin{theorem}[Condition/decision duality]\label{thm:dec_cond_duality} Decisions and predicates are naturally isomorphic in any extensive restriction category with a restriction terminal object: $\Dec(-) \cong \Hom(-,1+1)\enspace$. \end{theorem} \begin{proof} Let \ensuremath{\mathscr{C}}{} be an extensive restriction category with a restriction terminal object, and $X$ some object of \ensuremath{\mathscr{C}}; we begin by showing that the mappings \begin{equation*} X \xrightarrow{\dec{f}} X+X\quad \mapsto\quad X\xrightarrow{\dec{f}} X+X \xrightarrow{!+!} 1+1 \qquad\text{and}\qquad X \xrightarrow{p} 1+1 \quad \mapsto \quad X \xrightarrow{\dec{p}} X+X \end{equation*} between $\Dec(X)$ and $\Hom(X,1+1)$ yield a bijection. In other words, we must show that $\dec{\mkern-2mu(!+!)\dec{f}\mkern-3mu} = \dec{f}$ and $p = (!+!)\dec{p}$.
To show $\dec{\mkern-2mu(!+!)\dec{f}\mkern-3mu} = \dec{f}$, we show that $\dec{f}$ decides $(!+!)\dec{f}$ by $\nabla \dec{f} = \ridm{f} = \ridm{\dec{f}} = \ridm{(!+!)\dec{f}}$ using the fact that the unique map $X \xrightarrow{!} 1$ is total by $1$ restriction terminal, and by $(((!+!)\dec{f}) + ((!+!)\dec{f})) \dec{f} = ((!+!)+(!+!)) (\dec{f} + \dec{f}) \dec{f} = ((!+!)+(!+!)) (\dec{f} + \dec{f}) \dec{\mkern-3mu\dec{f}\mkern-3mu} = ((!+!)+(!+!)) (\kappa_1 + \kappa_2) \dec{f} = (\kappa_1 + \kappa_2) (!+!) \dec{f}$. Thus $\dec{\mkern-2mu(!+!)\dec{f}\mkern-3mu} = \dec{f}$, as desired. To show that $p = (!+!)\dec{p}$ for $X \xrightarrow{p} 1+1$ we show something slightly more general, namely that $(!+!)f = (!+!)\dec{f}$ for any $X \xrightarrow{f} Y+Z$. That $p = (!+!)\dec{p}$ then follows as a special case since $\id_{1+1} = (!+!)$ by $1$ terminal, so $p = \id_{1+1} p = (!+!) p$. This slightly more general statement follows by commutativity of the diagram below. \begin{center} \begin{tikzpicture} \node (X) {$X$}; \node[right=25mm of X] (XX) {$X+X$}; \node[below=12mm of X] (11') {$Y+Z$}; \node[below=12mm of XX] (1111) {$(Y+Z)+(Y+Z)$}; \node[below right=10mm of 1111] (11) {$1+1$}; \node[below=5mm of X] (p1) {}; \node[right=11mm of p1] (i) {\footnotesize\emph{(i)}}; \node[below=5mm of 1111] (ii) {\footnotesize\emph{(ii)}}; \node[above=2mm of 1111] (phantom) {}; \node[right=8mm of phantom] (iii) {\footnotesize\emph{(iii)}}; \draw[->,font={\small}] (X) to node [above] {$\dec{f}$} (XX); \draw[->,font={\small}, bend left] (XX) to node [right] {$!+!$} (11); \draw[->,font={\small}] (X) to node [left] {$f$} (11'); \draw[->,font={\small}, bend right] (11') to node [below] {$!+!$} (11); \draw[->,font={\small}] (11') to node [below] {$\kappa_1 + \kappa_2$} (1111); \draw[->,font={\small}] (XX) to node [right] {$f+f$} (1111); \draw[->,font={\small}] (1111) to node [above right] {$!+!$} (11); \end{tikzpicture} \end{center} Here, \emph{(i)} commutes by the second axiom of decisions, 
while \emph{(ii)} and \emph{(iii)} both commute by $1$ terminal. To see that this bijection extends to a natural isomorphism, we must fix some $Y \xrightarrow{f} X$ and chase the diagram \begin{center} \begin{tikzpicture} \node (DecX) {$\Dec(X)$}; \node[right=15mm of DecX] (HomX) {$\Hom(X,1+1)$}; \node[below=10mm of DecX] (DecY) {$\Dec(Y)$}; \node[below=10mm of HomX] (HomY) {$\Hom(Y,1+1)$}; \draw[->,font={\small}] (DecX) to node [left] {$f^\diamond$} (DecY); \draw[<->,font={\small}] (DecX) to node [above] {$\cong$} (HomX); \draw[<->,font={\small}] (DecY) to node [below] {$\cong$} (HomY); \draw[->,font={\small}] (HomX) to node [right] {$f^*$} (HomY); \end{tikzpicture} \end{center} where we use $f^\diamond$ to denote the functorial action $\Dec(f)$, $\Dec(f)(\dec{p}) = \dec{\mkern-3mu\dec{p}f}$. Picking some $\dec{g} \in \Dec(X)$ we must have $(!+!) \dec{\mkern-3mu\dec{g} f} = (!+!) \dec{g} f$, which indeed follows by the statement above. On the other hand, picking some $p \in \Hom(X,1+1)$, chasing yields that we must have $\dec{\mkern-3mu\dec{p}f} = \dec{pf}$, which follows directly by Lemma~\ref{lem:utility} \eqref{lem:5}. \end{proof} A consequence of this equivalence in extensive restriction categories is that decisions are a property of the \emph{predicates} rather than a property of arbitrary maps, as it is commonly presented. This is shown in the following corollary to Theorem~\ref{thm:dec_cond_duality}. \begin{corollary} A restriction category with restriction coproducts, a restriction zero, and a restriction terminal object has all decisions (\emph{i.e.}, is extensive as a restriction category) iff it has all decisions of predicates. \end{corollary} \begin{proof} It follows directly that having decisions for all morphisms implies having decisions for all predicates. On the other hand, suppose that the category only has decisions for predicates, and let $X \xrightarrow{f} Y+Z$ be an arbitrary morphism. 
But then, by the proof of Theorem~\ref{thm:dec_cond_duality}, the decision for the predicate $X \xrightarrow{f} Y+Z \xrightarrow{!+!} 1+1$ decides $X \xrightarrow{f} Y+Z$ (by $\dec{(!+!)f} = \dec{(!+!)\dec{f}\mkern-3mu} = \dec{f}$), and we are done. \end{proof} \section{The internal logic of extensive restriction categories} \label{sec:the_internal_logic_of_extensive_restriction_categories} Having established the natural isomorphism of decisions and predicates (with their respective transformers) which forms the condition/decision duality at the categorical level, we now turn to their structure. The main result of this section, Theorem~\ref{thm:internal_logic}, shows that the decisions $\Dec(X)$ on an object $X$ form a model of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}, and that decision transformers $\Dec(Y) \xrightarrow{f^\diamond} \Dec(X)$ are homomorphisms of these models. We first recall Kleene's three valued logics, in particular \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} and its algebraic counterpart of De Morgan quasilattices. \subsection{Kleene's three valued logics and De Morgan quasilattices} \label{sub:weak_kleene_logic_and_de_morgan_quasilattices} Kleene's three valued logics of \ensuremath{\mathbf{K}_{\mathrm{3}}}{} (\emph{strong Kleene logic}) and \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} (\emph{weak Kleene logic}), both introduced in~\cite{Kleene1952}, are logics based on \emph{partial predicates} with a computational interpretation: Predicates are conceived of as programs which \emph{may not terminate}, but if they do, they terminate with a Boolean truth value as output. In this way, both \ensuremath{\mathbf{K}_{\mathrm{3}}}{} and \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} can be thought of as partial versions of classical logic. 
Here, possible nontermination is handled analogously to how it is handled in domain theory, \emph{i.e.}, by the introduction of a third truth value in addition to truth $t$ and falsehood $f$, denoted $u$ in Kleene's presentation~\cite{Kleene1952}, which should be read as ``undefined''. The difference between \ensuremath{\mathbf{K}_{\mathrm{3}}}{} and \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} lies in how they cope with undefined truth values. In \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} (see Figure~\ref{fig:k3w_semantics}), undefinedness is ``contagious'': if any part of an expression is undefined, the truth value of the entire expression is undefined as well\footnote{This contagious behaviour has also been used to explain other phenomena. In philosophy, \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} is better known as \ensuremath{\mathbf{B}_{\mathrm{3}}}{} or \emph{Bochvar's nonsense logic} (see, \emph{e.g.}, \cite{FinnGrigolia1993}), and the third truth value read as ``meaningless'' or ``nonsensical'' rather than ``undefined''. The central idea is that nonsense is contagious: \emph{e.g.}, ``$2+2=5$ and gobbledygook'' is nonsensical even if part of it can be assigned meaning.}. This fits well into a computation paradigm with possible nontermination and only sequential composition available. In contrast, the semantics of \ensuremath{\mathbf{K}_{\mathrm{3}}}{} is to try to recover definite truth values whenever possible, even if part of the computation fails to terminate. For example, in \ensuremath{\mathbf{K}_{\mathrm{3}}}{} (and unlike \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}), $p \land q$ is considered false if one of $p$ and $q$ is false, even if the other is undefined. While this allows for some recovery in the face of nontermination, computationally it seems to require parallel processing capabilities. 
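The contagious reading of $u$ is easy to make executable. The following Python sketch (our encoding: \texttt{True}/\texttt{False} for $t$/$f$ and \texttt{None} for $u$) implements the weak connectives:

```python
U = None  # the third truth value u ("undefined"); True and False play t and f

def neg(p):
    return U if p is U else (not p)

def conj(p, q):
    """Weak conjunction: any undefined operand makes the result undefined."""
    return U if p is U or q is U else (p and q)

def disj(p, q):
    """Weak disjunction: any undefined operand makes the result undefined."""
    return U if p is U or q is U else (p or q)

# Contagion: unlike in strong K3, 'false and undefined' is undefined...
assert conj(False, U) is U
assert disj(True, U) is U
# ...while on defined values the connectives behave classically.
assert conj(True, False) is False
assert neg(U) is U
```

The first two assertions are precisely the cases where \ensuremath{\mathbf{K}_{\mathrm{3}}}{} and \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} disagree: strong Kleene logic would recover $f$ and $t$ respectively.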
\begin{figure} \centering \subfloat[Weak conjunction.]{ \begin{tabular}{cc|ccc} & $P$ & $t$ & $f$ & $u$ \\ $Q$ & $\land$ & & & \\ \hline $t$ & & $t$ & $f$ & $u$ \\ $f$ & & $f$ & $f$ & $u$ \\ $u$ & & $u$ & $u$ & $u$ \end{tabular}} \hspace{5em} \subfloat[Weak disjunction.]{ \begin{tabular}{cc|ccc} & $P$ & $t$ & $f$ & $u$ \\ $Q$ & $\lor$ & & & \\ \hline $t$ & & $t$ & $t$ & $u$ \\ $f$ & & $t$ & $f$ & $u$ \\ $u$ & & $u$ & $u$ & $u$ \end{tabular}} \hspace{5em} \subfloat[Negation.]{ \begin{tabular}{c|ccc} $P$ & $t$ & $f$ & $u$ \\ \hline $\neg P$ & $f$ & $t$ & $u$ \end{tabular}} \caption{The three-valued semantics of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}.} \label{fig:k3w_semantics} \end{figure} Just as classical logic takes its algebraic semantics in Boolean algebras, the corresponding algebraic structure for \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} is that of \emph{De Morgan quasilattices} (see, \emph{e.g.}, \cite{FinnGrigolia1993}). As is sometimes done, we assume these to be distributive; \emph{i.e.}, what we call De Morgan quasilattices are sometimes called \emph{distributive De Morgan quasilattices} or even \emph{(distributive) De Morgan bisemilattices} (see, \emph{e.g.}, \cite{Ledda2018}). Note that we generally do \emph{not} require these to be bounded, \emph{i.e.}, for top and bottom elements $\top$ and $\bot$ to exist.
\begin{definition}\label{def:dmqlat} A \emph{De Morgan quasilattice} (in its algebraic formulation) is a quadruple $\mathfrak{A} = (|\mathfrak{A}|, \neg, \land, \lor)$ satisfying the following equations, for all $p,q,r \in |\mathfrak{A}|$: \begin{multicols}{2} \begin{enumerate}[(i)] \item $p \land p = p$, \label{def:con_idemp} \item $p \lor p = p$, \label{def:dis_idemp} \item $p \land q = q \land p$, \label{def:con_comm} \item $p \lor q = q \lor p$, \label{def:dis_comm} \item $p \land (q \land r) = (p \land q) \land r$, \label{def:con_assoc} \item $p \lor (q \lor r) = (p \lor q) \lor r$, \label{def:dis_assoc} \item $p \land (q \lor r) = (p \land q) \lor (p \land r)$, \label{def:con_dist} \item $p \lor (q \land r) = (p \lor q) \land (p \lor r)$, \label{def:dis_dist} \item $\neg\neg p = p$, \label{def:nne} \item $\neg(p \land q) = \neg p \lor \neg q$, \label{def:dm1} \item $\neg(p \lor q) = \neg p \land \neg q$, \label{def:dm2} \end{enumerate} \end{multicols} Further, a De Morgan quasilattice $\mathfrak{A}$ is said to be \emph{bounded} if there exist elements $\bot, \top \in |\mathfrak{A}|$ such that the following are satisfied (for all $p \in |\mathfrak{A}|$): \begin{multicols}{2} \begin{enumerate}[(i)]\setcounter{enumi}{11} \item $p \land \top = p$, \label{def:con_unit} and \item $p \lor \bot = p$. \label{def:dis_unit} \end{enumerate} \end{multicols} A homomorphism $\mathfrak{A} \xrightarrow{h} \mathfrak{B}$ of De Morgan quasilattices is a function $|\mathfrak{A}| \to |\mathfrak{B}|$ which preserves $\neg$, $\land$, and $\lor$. A homomorphism of bounded De Morgan quasilattices is one which additionally preserves $\top$ and $\bot$. \end{definition} Being a De Morgan quasilattice is a strictly weaker property than being a Boolean algebra. 
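Taking the three-valued tables of Figure~\ref{fig:k3w_semantics} as the intended model, the equations of Definition~\ref{def:dmqlat} can be verified exhaustively over the three truth values. A brute-force Python check, reusing the \texttt{None}-for-$u$ encoding from the earlier sketch; the final assertions witness that the Boolean absorption law $p = p \land (p \lor q)$ fails, and that the two orders derived from $\land$ and $\lor$ differ:

```python
from itertools import product

U = None  # the undefined truth value u
def neg(p):     return U if p is U else (not p)
def conj(p, q): return U if p is U or q is U else (p and q)
def disj(p, q): return U if p is U or q is U else (p or q)

vals = [True, False, U]

# A representative sample of the De Morgan quasilattice equations:
for p, q, r in product(vals, repeat=3):
    assert conj(p, q) == conj(q, p)                             # (iii)
    assert conj(p, disj(q, r)) == disj(conj(p, q), conj(p, r))  # (vii)
    assert neg(conj(p, q)) == disj(neg(p), neg(q))              # (x)

# The model is bounded, with top = t and bottom = f:
for p in vals:
    assert conj(p, True) == p and disj(p, False) == p           # (xii), (xiii)

# But it is not Boolean: absorption p = p /\ (p \/ q) fails at p = t, q = u,
assert conj(True, disj(True, U)) is U
# and the two derived orders differ: u /\ f = u, yet u \/ f = u != f.
assert conj(U, False) is U and disj(U, False) is not False
```

The remaining equations of the definition can be checked by the same exhaustive loop.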
In particular, a Boolean algebra is a bounded De Morgan quasilattice which further satisfies the \emph{absorption} laws $p = p \land (p \lor q)$ and $p = p \lor (p \land q)$, and the laws of contradiction and \emph{tertium non datur}, $p \land \neg p = \bot$ and $p \lor \neg p = \top$. De Morgan quasilattices and their homomorphisms form a category which we call \ensuremath{\mathbf{DMQLat}}. As for Boolean algebras, one can derive a partial order on De Morgan quasilattices by $p \preccurlyeq q$ iff $p \land q = p$, and another one by $p \sqsubseteq q$ iff $p \lor q = q$. Unlike as for Boolean algebras, however, these do \emph{not} coincide, though they are anti-isomorphic, as it follows from the De Morgan laws that $p \preccurlyeq q$ iff $\neg q \sqsubseteq \neg p$. We will return to these in Section~\ref{sec:the_internal_logic_of_extensive_restriction_categories} and argue why $\cdot \preccurlyeq \cdot$ is the one more suitable as the entailment relation for \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}. \subsection{The internal logic} \label{sub:the_internal_logic} With \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} and De Morgan quasilattices introduced, we return to the construction of the internal logic. To aid in its presentation (and subsequent proofs), we start by introducing a graphical language of extensive restriction categories, based on the one for cocartesian categories (see, \emph{e.g.}, \cite{Selinger2011}). Then, we show how the constants and connectives of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} can be interpreted (Definition~\ref{def:internal_log}) as decisions on an object (Lemma~\ref{lem:condec}). Finally, we show that decisions on an object form a model of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} (Lemma~\ref{lem:dec_dmq}), and that decision transformations are homomorphisms of these models (Lemma~\ref{lem:f_diam_homo}), concluding this construction. 
We go on to explore an important corollary to this construction, namely that if we restrict ourselves from ordinary decisions to total decisions and total decision transformations, we obtain a fibration over Boolean algebras instead (Corollary~\ref{cor:tdec} and Theorem~\ref{thm:ext_bool}). The latter is a well-known property of extensive categories first shown in \cite{CJWW2015}, though the proof given here uses entirely different machinery. Figure~\ref{fig:graphical_lang} shows the graphical language of extensive restriction categories, which has the restriction coproduct as its monoidal tensor. The first five gadgets are from cocartesian categories ($\gamma_{X,Y}$ is here the twist map, $[\kappa_2, \kappa_1]$). We add gadgets corresponding to decisions $X \xrightarrow{\dec{f}} X+X$, inverses to decisions $X+X \xrightarrow{\dec{f}^\dagger} X$ (as all decisions are partial isomorphisms, see Lemma~\ref{lem:utility}\eqref{lem:2}), and restriction idempotents $X \xrightarrow{\ridm{f}} X$. The gadget for inverses to decisions was inspired by \emph{assertions} in reversible flowcharts (see~\cite{YokoyamaAG2016}). Useful derived gadgets include \ctikzfig{graphical_derived} Just as in the graphical language of cocartesian categories, isomorphism or isotopy of diagrams is not enough for coherence -- equations only hold in the graphical language up to diagrammatic manipulations corresponding to the decision laws, as well as the diagrammatic manipulations for coproducts (\emph{e.g.}, the commutative monoid axioms and naturality for the codiagonal, the zero morphism laws, etc.). For more on the latter, see \cite{Selinger2011}. For example, graphically, the decision laws are \ctikzfig{decision_laws} As in the example above, when the signature is clear from the context, we omit the object annotations (\emph{e.g.}, $X, Y, Z$ in Figure~\ref{fig:graphical_lang}).
\begin{figure} \ctikzfig{graphical_language} \caption{An overview of the gadgets that make up the graphical language of extensive restriction categories.} \label{fig:graphical_lang} \end{figure} With the graphical language in place, we proceed to give the definition of the internal logic of decisions in an extensive restriction category, \emph{i.e.}, the entailment relation and construction of constants and propositional connectives. \begin{definition}\label{def:internal_log} In an extensive restriction category, propositional constants and connectives are defined for decisions as follows, using the graphical language: \ctikzfig{logic} Entailment is defined by $\dec{p} \vDash \dec{q}$ iff $\dec{p} \preccurlyeq \dec{q}$ (explicitly, iff $\dec{p} \land \dec{q} = \dec{p}$). \end{definition} For those more textually inclined, this defines $\top = \kappa_1$, $\bot = \kappa_2$, $\neg \dec{p} = \gamma \dec{p}$, $\dec{p} \lor \dec{q} = (\dec{p}^\dagger + \id) \alpha (\ridm{\dec{q}} + \dec{q})\dec{p}$, and $\dec{p} \land \dec{q} = (\id + \dec{p}^\dagger)\alpha(\dec{q} + \ridm{\dec{q}})\dec{p}$. Intuitively, we think of decisions as representing partial predicates by separating values into \emph{witnesses} and \emph{counterexamples} of that partial predicate (see also \cite{KaarsgaardGlueck2018}). The definitions of $\top$ and $\bot$ express the convention that the first component carries witnesses, while the second component carries counterexamples. Negation of partial predicates then amounts to swapping witnesses for counterexamples and vice versa, \emph{i.e.}, by composing with the symmetry. 
The intuition behind conjunction (and, dually, disjunction) is less obvious: Using the intuition of decisions as morphisms that tag inputs with a branch but do not change them otherwise, we see that a witness of $\dec{p} \land \dec{q}$ has to be a witness of both $\dec{p}$ and $\dec{q}$, while a counterexample of $\dec{p} \land \dec{q}$ is either a counterexample of $\dec{p}$ which is further defined for $\dec{q}$ (necessary to ensure commutativity), or a witness of $\dec{p}$ which is a counterexample of $\dec{q}$. The case for disjunctions is dual. Before we move on to show that this actually has the logical structure we're after, we are first obliged to show that these connectives and constants actually define well-formed decisions. This fact is expressed in the following lemma. \begin{lemma}\label{lem:condec} The constants and connectives of Definition~\ref{def:internal_log} are decisions. \end{lemma} \ifappendix \begin{proof} See appendix. \end{proof} \fi Before we can proceed, we need a small technical lemma. \begin{lemma}\label{lem:dec_misc} Let \begin{minipage}[m]{8mm}\tikzfig{pdec}\end{minipage} and \begin{minipage}[m]{8mm}\tikzfig{qdec}\end{minipage} be decisions. It is the case that \begin{multicols}{2} \begin{enumerate}[(i)] \item \begin{minipage}[m]{\linewidth}\tikzfig{commstmt}\end{minipage} \label{lem:commstmt} \item \begin{minipage}[m]{\linewidth}\tikzfig{con_irrev}\end{minipage} \label{lem:con_irrev} \item \begin{minipage}[m]{\linewidth}\tikzfig{dis_irrev}\end{minipage} \label{lem:dis_irrev} \end{enumerate} \end{multicols} \end{lemma} \ifappendix \begin{proof} See appendix. \end{proof} \fi The first part of this lemma can be seen as a form of commutativity for decisions -- and, indeed, it performs most of the heavy lifting in showing commutativity of conjunction and disjunction.
On the other hand, parts \eqref{lem:con_irrev} and \eqref{lem:dis_irrev} show that we could have defined conjunction and disjunction more simply in Definition~\ref{def:internal_log}. The reason why we chose the current definition is that it yields entirely \emph{reversible} models (see also \cite{KaarsgaardGlueck2018}), \emph{i.e.}, involving only partial isomorphisms. We will discuss this property further in Section~\ref{sec:conclusion}. For now, we continue with the internal logic. \begin{lemma}\label{lem:dec_dmq} $\Dec(X)$ is a bounded De Morgan quasilattice for any object $X$. \end{lemma} \begin{proof} \ifappendix We show only a few of the cases here using the graphical language. See the appendix for the rest. \else In the interest of conserving space, we show only a few of the cases using the graphical language. \fi Idempotence of conjunction, \emph{i.e.}, $\dec{p} \land \dec{p} = \dec{p}$, follows by \ctikzfig{con_idemp} and similarly for disjunction. That $\dec{p} \land \top = \dec{p}$ is shown simply by \ctikzfig{con_unit} and again, the unit law for disjunction has an analogous proof. The first De Morgan law, that $\neg \dec{p} \land \neg \dec{q} = \neg (\dec{p} \lor \dec{q})$, is shown by \ctikzfig{dm1} and the proof of the second De Morgan law follows similarly. \end{proof} As such, we have that each collection of decisions on an object forms a local model of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}, giving us the first part of the fibration. For the second, we need to show that decision transformers preserve entailment and the propositional connectives (though not necessarily the constants). This is shown in the following lemma. \begin{lemma}\label{lem:f_diam_homo} Let $X \xrightarrow{f} Y$.
Then $\Dec(Y) \xrightarrow{f^\diamond} \Dec(X)$ is a homomorphism of De Morgan quasilattices, \emph{i.e.}, \begin{multicols}{2} \begin{enumerate}[(i)] \item\label{lem:monot} $\dec{p} \vDash \dec{q}$ implies $f^\diamond(\dec{p}) \vDash f^\diamond(\dec{q})$ \item\label{lem:pres_neg} $f^\diamond(\neg\dec{p}) = \neg f^\diamond(\dec{p})$ \item\label{lem:pres_conj} $f^\diamond(\dec{p} \land \dec{q}) = f^\diamond(\dec{p}) \land f^\diamond(\dec{q})$ \item\label{lem:pres_disj} $f^\diamond(\dec{p} \lor \dec{q}) = f^\diamond(\dec{p}) \lor f^\diamond(\dec{q})$ \end{enumerate} \end{multicols} In addition, if $f$ is total then $f^\diamond$ is a homomorphism of \emph{bounded} De Morgan quasilattices; \emph{i.e.}, we also have $f^\diamond(\top) = \top$ and $f^\diamond(\bot) = \bot$. \end{lemma} \begin{proof} \eqref{lem:monot} follows by \eqref{lem:pres_conj} since $\dec{p} \vDash \dec{q}$ iff $\dec{p} \preccurlyeq \dec{q}$ iff $\dec{p} \land \dec{q} = \dec{p}$, which in turn implies that $f^\diamond(\dec{p}) \land f^\diamond(\dec{q}) = f^\diamond(\dec{p} \land \dec{q}) = f^\diamond(\dec{p})$, so $f^\diamond(\dec{p}) \preccurlyeq f^\diamond(\dec{q})$ as well, \emph{i.e.}, $f^\diamond(\dec{p}) \vDash f^\diamond(\dec{q})$. For \eqref{lem:pres_neg}, we compute $f^\diamond(\neg\dec{p}) = f^\diamond(\gamma\dec{p}) = f^\diamond(\dec{\gamma p}) = \dec{\mkern-3mu\dec{\gamma p}f} = \dec{\gamma pf} = \gamma \dec{pf} = \gamma \dec{\mkern-3mu\dec{p}f} = \neg f^\diamond(\dec{p})$ (using Lemma~\ref{lem:utility}). \eqref{lem:pres_conj} follows by lengthy but straightforward computation\ifappendix (see appendix)\fi. \eqref{lem:pres_disj} is analogous to the previous case. \end{proof} Notice the final part regarding preservation of units. Generally, $f^\diamond(\top) = \dec{\top f} = \dec{\kappa_1 f}$, so $\ridm{f^\diamond(\top)} = \ridm{\dec{\kappa_1 f}} = \ridm{\kappa_1 f} = \ridm{\ridm{\kappa_1} f} = \ridm{f}$, so if $f$ is not total then $\dec{\kappa_1 f} \neq \kappa_1$ (instead $\dec{\kappa_1 f} = \kappa_1 \ridm{f}$).
Putting the two lemmas together gives us the main result: \begin{theorem}\label{thm:internal_logic} In every extensive restriction category \ensuremath{\mathscr{C}}, decisions over \ensuremath{\mathscr{C}}{} form a fibred De Morgan quasilattice via the decision fibration. \end{theorem} \begin{proof} By Lemmas~\ref{lem:dec_dmq} and \ref{lem:f_diam_homo}. \end{proof} We previously claimed that the conjunction order was the more suitable one for entailment in extensive restriction categories. We are finally ready to state why: \begin{lemma}\label{lem:entailment} Entailment is upwards directed in truth and definedness: $\dec{p} \vDash \dec{q}$ iff $\kappa_1^\dagger \dec{p} \le \kappa_1^\dagger \dec{q}~\text{and}~\ridm{\dec{p}} \le \ridm{\dec{q}}$. \end{lemma} \ifappendix \begin{proof} See appendix. \end{proof} \fi In other words, $\dec{p}$ entails $\dec{q}$ iff $\dec{q}$ is both \emph{at least as true} and \emph{at least as defined} as $\dec{p}$ is. That is, entailment preserves not only \emph{truth} (as we expect all entailments to) but also \emph{information} (as we expect of orders on \emph{partial} maps). Compare this to the disjunction partial order for which $\dec{p} \sqsubseteq \dec{q}$ instead states that $\dec{q}$ is less \emph{false} and less \emph{defined} than $\dec{p}$: In other words, it prefers for information to be \emph{forgotten} rather than preserved. We move on now to an important special case of the situation above, which is when only \emph{total} decisions are considered rather than arbitrary ones. For this, we need a small lemma regarding the restriction idempotents of decisions when composed using the propositional connectives. 
\begin{lemma}\label{lem:ridmdec} We state some facts about restriction idempotents of decisions: \begin{multicols}{2} \begin{enumerate}[(i)] \item\label{lem:ridmdec:1} $\ridm{\neg\dec{p}} = \ridm{\dec{p}}$, \item\label{lem:ridmdec:2} $\ridm{\dec{p} \land \dec{q}} = \ridm{\dec{p}}\,\ridm{\dec{q}}$, \item\label{lem:ridmdec:3} $\ridm{\dec{p} \lor \dec{q}} = \ridm{\dec{p}}\,\ridm{\dec{q}}$, \item\label{lem:ridmdec:4} $\ridm{\dec{p} \land \dec{q}} \le \ridm{\dec{p}}$ and $\ridm{\dec{p} \land \dec{q}} \le \ridm{\dec{q}}$, \item\label{lem:ridmdec:5} $\ridm{\dec{p} \lor \dec{q}} \le \ridm{\dec{p}}$ and $\ridm{\dec{p} \lor \dec{q}} \le \ridm{\dec{q}}$. \end{enumerate} \end{multicols} \end{lemma} \ifappendix \begin{proof} See appendix. \end{proof} \fi We can now show that total decisions form a fibred Boolean algebra. \begin{corollary}\label{cor:tdec} $\TDec(X)$ is a Boolean algebra for any object $X$, and $f^\vartriangle : \TDec(Y) \to \TDec(X)$ is a homomorphism of Boolean algebras for any total $X \xrightarrow{f} Y$. \end{corollary} \begin{proof} Since $\Dec(X)$ is a De Morgan quasilattice (Lemma~\ref{lem:dec_dmq}), since total decisions are specifically decisions, and since the constants are total and the connectives preserve totality (Lemma~\ref{lem:ridmdec}), it suffices to show that when $\dec{p}$ and $\dec{q}$ are total they satisfy the \emph{absorption} laws $\dec{p} = \dec{p} \land (\dec{p} \lor \dec{q})$ and $\dec{p} = \dec{p} \lor (\dec{p} \land \dec{q})$, and the laws of contradiction and \emph{tertium non datur}, $\dec{p} \land \neg\dec{p} = \bot$ and $\dec{p} \lor \neg\dec{p} = \top$. The first absorption law follows by \ctikzfig{absorption} and the other follows analogously. Likewise, the law of contradiction can be shown as \ctikzfig{contradiction} and similarly for \emph{tertium non datur}.
\end{proof} Using the previous corollary, it follows (see \cite{CJWW2015} for the original proofs from effectus theory) that predicates over an extensive category form a fibred Boolean algebra. \begin{theorem}\label{thm:ext_bool} Predicates (or, equivalently, decisions) over an extensive category form a fibred Boolean algebra via the predicate fibration (or, equivalently, the decision fibration). \end{theorem} \begin{proof} Since total decisions on objects form Boolean algebras by Corollary~\ref{cor:tdec}, it suffices to show that every extensive category arises as the subcategory of total morphisms of an extensive restriction category. Let \ensuremath{\mathscr{C}}{} be an extensive category, and $\mathcal{M}$ denote the collection of all coproduct injections of \ensuremath{\mathscr{C}}{}. As remarked in \cite{Cockett2002}, this is a stable system of monics, and by Example 4.17 of \cite{Cockett2003}, $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$ is a classified restriction category under the $+1$ monad. Since \ensuremath{\mathscr{C}}{} has coproducts and $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$ is classified, it follows by Proposition 2.3 of \cite{Cockett2007} that $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$ has restriction coproducts. That $0$ is a restriction zero in $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$ follows straightforwardly, with the span $X \xhookleftarrow{!_X} 0 \xrightarrow{!_Y} Y$ as the unique zero morphism $X \xrightarrow{0_{X,Y}} Y$. As such, it suffices to show that decisions can be constructed in $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$. Let $X \xhookleftarrow{m} X' \xrightarrow{f} Y+Z$ be an arbitrary morphism of $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$.
Since \ensuremath{\mathscr{C}}{} is extensive, it has pullbacks of coproduct injections along arbitrary morphisms, so the two squares \begin{center} \begin{tikzcd} X_1 \arrow[r, "m_1"] \arrow[d, "f_1"'] & X' \arrow[d, "f"] & X_2 \arrow[l, "m_2"'] \arrow[d, "f_2"] \\ Y \arrow[r, "\kappa_1"'] & Y+Z & Z \arrow[l, "\kappa_2"] \end{tikzcd} \end{center} are pullbacks, and so the top row is a coproduct diagram (\emph{i.e.}, $X_1 + X_2 \cong X'$). But then it readily follows that \begin{center} \begin{tikzcd}[cramped, sep=tiny, font=\scriptsize] & & X_1+X_2 \arrow[rd, "m_1+m_2"] \arrow[ld, "\cong"'] & & \\ & X' \arrow[ld, "m"'] & & X'+X' \arrow[rd, "m+m"] & \\ X & & & & X+X \end{tikzcd} \end{center} is a decision for $X \xhookleftarrow{m} X' \xrightarrow{f} Y+Z$ in $\Par(\ensuremath{\mathscr{C}},\mathcal{M})$, and we are done. \end{proof} \section{Conclusion and future work} \label{sec:conclusion} Motivated by an observation from flowchart languages that predicates serve a dual role as both condition and decision, we have given an account of extensive restriction categories (due to \cite{Cockett2002,Cockett2003,Cockett2007}) as categories with an internal logic (namely \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}) that internalize this duality, in the form of a natural isomorphism between the predicate fibration and the decision fibration. We have also extended the graphical language of cocartesian categories to one for extensive restriction categories, and used our results to give an alternative proof of the fact that extensive categories, too, are categories with an internal logic -- classical logic. While the graphical language has proven itself useful in proving theorems, it does have its shortcomings. For example, the only way to express restriction idempotents of compositions, such as $\ridm{gf}$, is, awkwardly, as \tikzfig{ridmfg}.
That is, we would want only one representation of composition as placing gadgets in sequence, but since $\ridm{gf}$ cannot generally be expressed as a composite involving only smaller things (\emph{e.g.}, $\ridm{f}$ and $\ridm{g}$), we are forced in this case to let the textual representation (\emph{i.e.}, juxtaposition) bleed into the graphical language. The graphical notation for decisions has similar issues. An application of the developed theory is in reversible models of logics, which was also the motivation for defining the connectives in slightly more involved fashion, using partial inverses to decisions rather than the codiagonal. Indeed, the inspiration for using decisions as predicates came from the study of the categorical semantics of reversible flowchart languages (see \cite{Glueck2017,KaarsgaardGlueck2018}). Since a decision in \ensuremath{\mathscr{C}}{} is still a decision in $\Inv(\ensuremath{\mathscr{C}})$ (see \cite{KaarsgaardGlueck2018}), $\Dec(X)$ is still a De Morgan quasilattice in $\Inv(\ensuremath{\mathscr{C}})$, though the homomorphisms between fibres differ (\emph{i.e.}, only decision transformers that are partial isomorphisms occur in the decision fibration on $\Inv(\ensuremath{\mathscr{C}})$). We have only considered the weak Kleene logic \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} here, as it can be constructed by purely sequential means. However, we conjecture that the strong Kleene logic \ensuremath{\mathbf{K}_{\mathrm{3}}}{} can be modelled as well in extensive restriction categories if additionally a \emph{parallel} composition operator such as finite joins (see \cite{Guo2012}) is available. Finally, just the propositional fragment of \ensuremath{\mathbf{K}^{\mathrm{w}}_{\mathrm{3}}}{} and classical logic has been considered in this paper. 
Though decisions on an object yield a fibred category with a logical structure, we have not explored extensions to models of first-order logics, \emph{e.g.}, by investigating the feasibility of adjoints to substitution, as in the standard trick due to Lawvere~\cite{Lawvere1969} (see also \cite{Jacobs1999}).
\section{Introduction} \label{intro} Learning the causal relationships among variables using non-experimental data is of high importance in many scientific fields, such as economics and econometrics\footnote{For a general definition of causality specifically in economics and econometrics see \cite{hoover2017}.}. When the aim is specifically to recreate the causal mechanism that generated the data, graphical models, such as causal networks and Bayesian Networks (BNs)\footnote{Also known as Bayes networks, belief networks, decision networks, Bayes(ian) models or probabilistic directed acyclic graphical models.} are frequently employed. The advantages of BNs include simultaneous variable selection among all variables and hence the detection of conditional associations between variables. On a different route, BNs form the scheme for synthetic population generation \citep{sun2015} and have been used synergistically with agent-based models \citep{kocabas2009,kocabas2013}. BNs enjoy applications to numerous fields, but the focus of the current paper is on applications in economics-related fields, such as production economics \citep{hosseini2016}, macroeconomics \citep{spiegler2016} and environmental resource economics \citep{xue2017}. Applications of BNs can also be found in financial econometrics \citep{mele2017}, banking and finance \citep{chong2018}, credit scoring \citep{leong2016}, insurance \citep{sheehan2017} and customer service \citep{cugnata2014}, to name a few. Despite the plethora of applications of BNs, not many BN algorithms exist, and most importantly fewer are publicly available in free software environments, such as the statistical software \textit{R}.
The Max-Min Hill Climbing (MMHC) \citep{tsamardinos2006} is an example of a widely used BN learning algorithm\footnote{The relevant paper is one of the classic papers in the Artificial Intelligence field and has received more than 1,870 citations according to \textit{Google Scholar} by July 2022.} that is publicly available in the \textit{R} package \textit{bnlearn} \citep{bnlearn2010}. PC Hill Climbing (PCHC) \citep{tsagris2021} is a recently suggested hybrid algorithm that is also publicly available, in the \textit{R} package \textit{pchc} \citep{pchc2021}. \cite{tsagris2021} showed that when the sample size is on the order of hundreds of thousands, MMHC's implementation in the \textit{R} package \textit{bnlearn} requires more than a day with continuous data, even with 50 variables. On the contrary, PCHC is a computationally efficient and scalable BN learning algorithm \citep{tsagris2021}. With modern technology and vast data generation, the computational cost is a considerable parameter. Every novel algorithm must be computationally efficient and scalable to large sample sizes. Seen from the green economy point of view, this cost also has an economic and environmental impact; a faster algorithm will produce results in a more timely manner, facilitating faster decision making, consuming less energy and hence reducing its carbon footprint. Moving along those lines this paper proposes a new computationally highly efficient algorithm termed Forward with Early Dropping Hill Climbing (FEDHC) that is publicly available in the \textit{R} package \textit{pchc}. FEDHC shares common ideas with PCHC and MMHC. It applies the Forward Backward with Early Dropping (FBED) variable selection algorithm \citep{borboudakis2019} to each variable as a means of skeleton identification, followed by a Hill Climbing (HC) scoring phase. FEDHC can handle millions of observations in just a few minutes and retains similar or better accuracy levels than PCHC and MMHC.
With continuous data, FEDHC commits fewer errors than PCHC, but the converse is true with categorical data. FEDHC further enjoys the same scalability property as PCHC: its computational cost is proportional to the sample size of the data. Increasing the sample size by a factor increases the execution time by the same factor. Finally, a new, computationally efficient, implementation of MMHC is offered that is also publicly available in the \textit{R} package \textit{pchc}. The choice of the BN learning algorithm depends not only on the computational cost, but also on the quality of the learned BN. Regardless of the algorithm used, the quality of the learned BN can be significantly affected by outliers. Robustness to outliers is an important aspect that, surprisingly enough, has not attracted substantial research attention in the field of BN learning. \cite{kalisch2008} were the first to propose a robustified version of the PC algorithm by replacing the empirical standard deviations with robust scale estimates. \cite{cheng2018}, on the other hand, removed the outliers, but their algorithm is only applicable to BNs with a known topology. Robustification of the proposed FEDHC algorithm takes place by adopting techniques from the robust statistics literature. The key concept is to identify and remove the outliers prior to applying FEDHC. The rest of the paper is structured as follows. Preliminaries regarding BNs that will assist in making the paper comprehensible and a brief presentation of the PCHC and MMHC algorithms are unveiled in Section \ref{prelim}. FEDHC is introduced in Section \ref{fedhc} along with its robustified version\footnote{The robustified version is applicable to PCHC and MMHC as well.} that will be shown to be utterly insensitive to outliers. Theoretical properties and computational details of FEDHC and the conditional independence tests utilised for continuous and categorical data are delineated in the same section.
Section \ref{mc} contains Monte Carlo simulation studies comparing FEDHC to PCHC and MMHC in terms of accuracy, computational efficiency and number of tests performed. Section \ref{real} illustrates FEDHC, PCHC and MMHC on two real cross-sectional datasets using the \textit{R} package \textit{pchc}. The first dataset contains continuous data on the credit history for a sample of applicants for a type of credit card \citep{greene2003}. The second dataset contains categorical data on the household income plus some more demographic variables. Finally, Section \ref{concl} contains the conclusions drawn from this paper. \section{Preliminaries} \label{prelim} Graphical models or probabilistic graphical models express visually the conditional (in)dependencies between random variables ($V_i$, $i=1,\ldots,D$). Nodes (or vertices) are used to represent the variables $V_i$, and edges between the nodes, for example $V_i-V_j$, indicate a relationship between the variables $V_i$ and $V_j$. Directed graphs are graphical models that contain arrows, instead of edges, indicating the direction of the relationship, for example $V_i \rightarrow V_j$. The parents of a node $V_i$ are the nodes whose arrows point towards $V_i$. Consequently, node $V_i$ is termed a child of those nodes, and nodes that share a common child are called spouses. Directed acyclic graphs (DAGs) are stricter in the sense that they impose no cycles on these directions. For any path between $V_i$ and $V_j$, $V_i \rightarrow V_k \rightarrow \ldots \rightarrow V_j $, no path from $V_j$ to $V_i$ ($V_j \rightarrow \ldots \rightarrow V_i$) exists. In other words, the natural sequence or relationship between parents and children or ancestors and descendants is mandated. \begin{figure}[!ht] \centering \includegraphics[scale = 0.5, trim = 0 20 0 0]{exampledag.png} \caption{An example of a DAG. Nodes $V1$ and $V2$ are the parents of $V3$, whose children are nodes $V4$ and $V5$.
$V2$ is the spouse of $V1$ (and vice versa, $V1$ is the spouse of $V2$).} \label{dag1} \end{figure} \subsection{Bayesian Networks} Assume there is a collection $\mathbf{V}$ of $D$ variables whose joint distribution $P$ is known. The BN\footnote{BN is a special case of a DAG.} \citep{pearl1988,spirtes2000} $B = \langle G, \mathbf{V} \rangle$ arises from linking $P$ to $G$ through the Markov condition (or Markov property), which states that each variable is conditionally independent of its non-descendants given its parents. By using this condition, the joint distribution $P$ can be factorised as \begin{eqnarray} \label{markov} P(V_1, \dots, V_D) = \prod_{i=1}^D P\left(V_i | Pa(V_i)\right), \end{eqnarray} where Pa($V_i$) denotes the parents set of $V_i$ in $G$. If $G$ entails only conditional independencies in $P$ and all conditional independencies in $P$ are entailed by $G$, based on the Markov condition, then $G$ and $P$ are faithful to each other, and $G$ is a perfect map of $P$ \citep{spirtes2000}. The BN whose edges can be interpreted causally is called a causal BN: an edge $V_i \rightarrow V_j$ exists if $V_i$ is a direct cause of $V_j$. A necessary assumption made by the algorithms under study is causal sufficiency; there are no latent (hidden, non-observed) variables among the observed variables $\mathbf{V}$. The triplet $(V_i, V_k, V_j)$ where $V_i \rightarrow V_k \leftarrow V_j$ is called a V-structure. If there is no edge between $V_i$ and $V_j$, the node $V_k$ is called an unshielded collider. In Figure \ref{dag1} the triplet $(V_1, V_3, V_2)$ is a V-structure as there is no edge between $V_1$ and $V_2$ and hence node $V_3$ is an unshielded collider. The unshielded collider $V_k$ implies that $V_i$ and $V_j$ are marginally independent, but dependent conditional on $V_k$, provided that the faithfulness property holds true \citep{spirtes2000}.
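As a concrete instance of the factorisation (\ref{markov}), the DAG of Figure \ref{dag1} yields \begin{eqnarray*} P(V_1,\ldots,V_5) = P(V_1)\,P(V_2)\,P(V_3 | V_1, V_2)\,P(V_4 | V_3)\,P(V_5 | V_3). \end{eqnarray*}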
Conversely, the triplet of nodes $(V_i, V_k, V_j)$ such that $V_k \rightarrow V_i$ and $V_k \rightarrow V_j$ is termed a $\Lambda$-structure (nodes $V_3, V_4$ and $V_5$ in Figure \ref{dag1} form such an example). The $\Lambda$-structure implies that $V_i$ and $V_j$ are conditionally independent given $V_k$. Two or more BNs are called Markov equivalent if and only if they have the same skeleton and the same V-structures \citep{verma1991}. The set of all Markov equivalent BNs forms the Markov equivalence class that can be represented by a completed partially DAG, which in addition to directed edges contains undirected edges\footnote{Undirected edges may be oriented either way in BNs of the Markov equivalence class (in the set of all valid combinations), while directed and missing edges are shared among all equivalent BNs.}. \subsection{Classes of BN learning algorithms} BN learning algorithms are typically constraint-based, score-based or hybrid. Constraint-based learning algorithms, such as PC\footnote{PC stands for Peter and Clark, named after Peter Spirtes and Clark Glymour, the names of the two researchers who invented it.} \citep{spirtes1991} and FCI \citep{spirtes2000}, employ conditional independence (CI) tests to discover the structure of the network (skeleton), and then orient the edges by repetitively applying orientation rules. On the contrary, score-based methods \citep{cooper1992,heckerman1995,chickering2002} assign a score to the whole network and perform a search in the space of BNs to identify a high-scoring network. Hybrid algorithms, such as MMHC \citep{tsamardinos2006} and PCHC \citep{tsagris2021}, combine both aforementioned methods; they first perform CI tests to discover the skeleton of the BN and then employ a scoring method to direct the edges in the space of BNs. \subsection{PCHC and MMHC algorithms} PCHC's skeleton identification phase is the same as that of the PC algorithm \citep{tsagris2021}.
The phase commences with all pairwise unconditional associations and removes the edge between ordered pairs which are not statistically significantly associated. Subsequently, CI tests are performed with the cardinality of the conditioning set (denoted by $k$) increasing by 1 at a time. At every step, the conditioning set consists of subsets of the neighbours, adjacent to each variable $V_i$ ($adj(G, V_i)$). This process is repeated until no edge can be removed. \cite{spirtes2000} suggested three heuristics to select the pairs of variables, and the order is crucial as it can yield erroneous results. The optimal heuristic is, for a given variable $V_i$, to first test those variables ${\bf V}$ that are least probabilistically dependent on $V_i$, conditional on those subsets of variables that are most probabilistically dependent on $V_i$. Note that the pairs are first ordered according to the third heuristic of \cite{spirtes2000} and so the order of selection of the pairs is deterministic. Hence, the skeleton identification phase is independent of the order in which the variables are located in the dataset \citep{tsagris2019b}. MMHC's skeleton identification phase performs a variable selection process for each variable (call it the target variable, $V_i$), described as follows. A search for its statistically significantly associated variables $V_s$ is performed via unconditional statistical tests. The associations are stored and the variable with the highest association ($V_j$) is chosen; an edge is added between this $V_i$ and $V_j$ and all non statistically significant variables are excluded from further consideration. In the second step, all CI tests between the target variable and previously identified variables, conditional on the previously selected variable, are performed $\left( V_i\perp\!\!\!\perp V_m | V_j, \ \ m \neq i,j \right)$ and the non statistically significant variables are neglected.
The previously stored associations are updated; for each variable, the minimum between the old and the new association is stored. The variable with the highest association (Max-Min heuristic) is next selected. In subsequent steps, while the set of the selected variables increases, the conditioning set does not, as its cardinality is at most equal to $k$\footnote{This algorithm resembles the classical forward variable selection in statistics, with two distinct differences. First, at each step, non-significant variables are excluded from future searches. Secondly, instead of conditioning on all selected variables, the CI test for the next variable conditions upon all possible subsets, up to a pre-specified cardinality, of the already selected variables.}. Upon completion, a backward phase, in the same spirit as the forward one, applies to remove falsely detected variables. This variable selection process is repeated for all variables. The edges detected remain only if they were identified by all variables. If, for example, $V_j$ was found to be associated with $V_i$, but $V_i$ was not found to be associated with $V_j$, then no edge between $V_i$ and $V_j$ will be added. A slightly modified version of MMHC's skeleton identification phase is implemented in the \textit{R} package \textit{pchc}. The backward phase is not performed in order to make the algorithm faster. To distinguish between them, \textit{bnlearn}'s implementation will be denoted by MMHC-1 and \textit{pchc}'s implementation will be denoted by MMHC-2 hereafter. The orientation of the discovered edges takes place in the second, Hill Climbing (HC) scoring, phase of PCHC and MMHC and is the same phase employed by FEDHC as well. \section{The FEDHC BN learning algorithm} \label{fedhc} Similarly to PCHC and MMHC, the skeleton identification phase of FEDHC relies on a variable selection algorithm.
Thus, prior to introducing the FEDHC algorithm, the Forward Backward with Early Dropping (FBED) variable selection algorithm \citep{borboudakis2019} is briefly presented. \subsection{The FBED variable selection algorithm} In the classical forward selection algorithm, all available predictor variables are constantly used and their statistical significance is tested at each step. Assume that out of $10,000$ predictor variables only $10$ are selected. This implies that almost $10,000 \times 10$ regression models must be fitted and the same amount of statistical tests must be executed. The computational cost is tremendous, rendering this computationally expensive algorithm impractical and hence prohibitive. \cite{borboudakis2019} introduced the FBED algorithm as a speed-up modification of the traditional forward selection algorithm coupled with the backward selection algorithm \citep{draper1998}. FBED relies on the Early Dropping heuristic to speed up the forward selection. The heuristic drops the non statistically significant predictor variables at each step, thus removing them from further consideration, resulting in a dramatically cheaper algorithm, which is presented in Algorithm \ref{algorithm1}. \makeatletter \def\State\hskip-\ALG@thistlm{\State\hskip-\ALG@thistlm} \makeatother \begin{algorithm} \begin{algorithmic}[1] \State \textbf{Input}: A response variable $y$ and a set of $D$ predictor variables $\bf V$. \State Let ${\bf S}=\emptyset$ denote the set of selected variables. \State Perform all regression models of $y$ on each $V_i$, $i=1,\ldots,D$, $y \sim f(V_i)$, where $f$ denotes a function of $V_i$, e.g. a linear model $y=a+bV_i + e$, and retain only the statistically significant predictor variables ${\bf V}_{sig}$.
\State Choose $V_j$ from ${\bf V}_{sig}$ that has the highest association, add it to ${\bf S}$ and use that to perform all regression models of $y$ on $V_j$ and each $V_{\ell}$, $y\sim f(V_j,V_{\ell})$, where $\ell \in {\bf V}$, with $\ell \neq j$ and again retain only the statistically significant predictor variables, thus reducing $| {\bf V}_{sig}|$ and increasing $| {\bf S}|$. \State Repeat until no predictor variable is left, i.e. ${\bf V}_{sig}=\emptyset$. \State This process can be repeated $k$ times, using all neglected predictor variables, where $k$ is a pre-defined number, until $|{\bf S}|$ cannot further increase. \State Perform a backward selection phase attempting to remove the non statistically significant predictor variables. \State \textbf{Return} ${\bf S}$. \end{algorithmic} \caption{The FBED variable selection algorithm} \label{algorithm1} \end{algorithm} \subsection{Skeleton identification phase of the FEDHC algorithm} The skeleton identification phase of the FEDHC algorithm is the one presented in Algorithm \ref{algorithm2}, but it must be stressed that the backward phase of FBED is not performed so as to reduce the computational cost. The FBED algorithm (Algorithm \ref{algorithm1}) is applied to each variable (call it the target variable, $V_i$). This variable selection process is repeated for all variables. The edges detected remain only if they were identified by all variables. If, for example, $V_j$ was found to be associated with $V_i$, but $V_i$ was not found to be associated with $V_j$, then no edge between $V_i$ and $V_j$ will be added. \makeatletter \def\State\hskip-\ALG@thistlm{\State\hskip-\ALG@thistlm} \makeatother \begin{algorithm}[H] \centering \begin{algorithmic}[1] \State \textbf{Input}: Data set on a set of $D$ variables $\bf V$. \State Let the adjacency matrix $G$ be full of zeros.
\State \textbf{Repeat} for all variables $V_i$, $i=1,\ldots,D$ \State Perform the FBED algorithm in Algorithm \ref{algorithm1}, excluding the backward phase, and return \hskip 0.4cm ${\bf S}_i$. \State Set $G_{ij}=1$ for all $j \in {\bf S}_i$. \State \textbf{If} $G_{ij} \neq G_{ji}$ set $G_{ij}=G_{ji}=0$. \State \textbf{Return} $G$. \end{algorithmic} \caption{Skeleton identification phase of the FEDHC algorithm} \label{algorithm2} \end{algorithm} \subsection{Hill Climbing phase of the FEDHC algorithm} The first phase of FEDHC, MMHC and PCHC is to discover any possible edges between the nodes using CI tests. In the second phase, a search for the optimal DAG is performed, where edges turn into arrows or are deleted towards maximisation of a score metric. This scoring phase performs a greedy HC search in the space of BNs, commencing with an empty graph \citep{tsamardinos2006}. The edge deletion or direction reversal that leads to the largest increase in score, in the space of BNs\footnote{This implies that every time an edge removal or arrow reorientation is implemented, a check for cycles is performed. If cycles are created, the operation is cancelled regardless of whether it increases the score.}, is applied and the search continues recursively. The fundamental difference from standard greedy search is that the search is constrained to the orientation of the edges discovered by the skeleton identification phase\footnote{For more information see \cite{tsamardinos2006}.}. Tabu search is an iterative local search procedure adopted by \cite{tsamardinos2006} for this purpose. Its performance is enhanced by using a list where the last 100 structures explored are stored, while searching in the neighborhood of each solution. The search is also capable of escaping from local optima, in which normal local search techniques often get stuck.
Instead of applying the best local change, the best local change that results in a structure not on the list is performed in an attempt to escape local maxima \citep{tsamardinos2006}. This change may actually reduce the score. When a number of changes (10-15) occur without an increase in the maximum score ever encountered during search, the algorithm terminates. The overall best scoring structure is then returned. The Bayesian Information Criterion (BIC) \citep{schwarz1978} is a frequent score used for continuous data, while other options include the multivariate normal log-likelihood, the Akaike Information Criterion (AIC) and the Bayesian Gaussian equivalent\footnote{The term "\textit{equivalent}" refers to their attractive property of giving the same score to equivalent structures (Markov equivalent BNs) i.e., structures that are statistically indistinguishable \citep{tsamardinos2006}.} \citep{geiger1994} score. The Bayesian Dirichlet equivalent (BDE) \citep{buntine1991}, the BDe uniform score (BDeu) \citep{heckerman1995}, the multinomial log-likelihood score \citep{bouckaert1995} and the MDL score \citep{suzuki1993,lam1994} are four scoring options for discrete data. The combination of the FBED algorithm during the skeleton identification phase with the HC scoring method forms the FEDHC algorithm. Interestingly enough, the skeleton identification phase of FEDHC performs substantially fewer statistical tests than PCHC and MMHC. \subsection{Prior knowledge} \label{prior} All BN learning algorithms are agnostic of true relationships among the input data. It is customary though for practitioners and researchers to have prior knowledge of the necessary directions (forbidden or not) of some of the relationships among the variables. For instance, variables such as sex or age cannot be caused by any economic or demographic variables. 
Economic theory (or theory from any other field) can further assist in improving the quality of the fitted BN by imposing or forbidding directions among some variables. All the prior information can be inserted into the scoring phase of the aforementioned BN learning algorithms, leading to fewer errors and more realistic BNs. \subsection{Theoretical properties of FEDHC} The theoretical properties and guarantees of MMHC and PCHC can be found in \cite{tsamardinos2006} and \cite{tsagris2021}, respectively. As for FEDHC, while there is no theoretical guarantee of the skeleton identification phase of FEDHC, \cite{borboudakis2019} showed that running FBED with two repeats recovers the Markov blanket (MB) of the response variable if the joint distribution of the response and the predictor variables can be faithfully represented by a BN. When used for BN learning though, FBED need not be run more than once for each variable. In this case FBED, similarly to MMHC, will identify the children and parents of a variable $V_i$, but not the spouses of the children \citep{borboudakis2019}, as this is not necessary during the skeleton identification phase. When FBED is run on the children of the variable $V_i$ it will again identify the children's parents, which are the spouses of the variable $V_i$. Hence, upon completion, the FEDHC algorithm will have identified the MB of each variable. Additionally, the early dropping heuristic not only reduces the computational time but also mitigates the problem of multiple testing, in some sense \citep{borboudakis2019}. When FBED is run only once (as in the current situation), in the worst-case scenario, it is expected to select about $\alpha \cdot D$ variables (where $\alpha$ is the significance level) since all other variables will be dropped in the first (filtering) phase.
However, their simulation studies showed that FBED was selecting fewer false positives than expected, and the authors' recommendation is to reduce the number of runs to further limit the number of falsely selected variables, a strategy FEDHC follows by default. Similar to MMHC, FEDHC is a local learning algorithm, and hence during the HC phase the overall score is decomposed \citep{tsamardinos2006} exploiting the Markov property of BNs (\ref{markov}). The local learning has several advantages (see \cite{tsamardinos2006}) and the scores (BDe, BIC, etc.) are locally consistent \citep{chickering2002}. \subsection{Robustification of the FEDHC algorithm for continuous data} It is not uncommon for economic datasets to contain outliers, observations with values far from the rest of the data. Income is one example of a variable that typically contains outliers, but if outliers appear only in that variable their effect will be minor. The effect of outliers is propagated when they exist in more variables, and in order to mitigate their effect they must be identified in the multivariate space. If these outliers are not detected or not treated properly, BN learning algorithms will yield erroneous results. FEDHC will employ the Reweighted Minimum Covariance Determinant (RMCD) \citep{rousseeuw1985,rousseeuw1999} as a means to identify outliers and remove them\footnote{The reason why one cannot use the robust correlation matrix directly is because the independence property between two variables no longer holds true. The robust correlation between any two variables depends on the other variables, so adding or removing a variable modifies the correlation structure \citep{raymaekers2021}.}. The RMCD estimator is computed in two stages. In the first stage, a subset of $h$ observations ($n/2 \leq h < n$) is selected such that its covariance matrix has the smallest determinant, and a robust mean vector is also computed.
The second stage is a re-weighting scheme that increases the efficiency of the estimator, while preserving its high-breakdown properties. A weight $w_i=0$ is given to observations whose first-stage robust distance exceeds a threshold value; otherwise, the weight $w_i = 1$ ($i=1,\ldots,n$) is given. Using the re-weighted robust covariance matrix and mean vector, robust Mahalanobis distances are computed, $d_{i(RMCD)}^2=\left({\bf x}_i - \tilde{\pmb{\mu}}_{(RMCD)}\right)^T\tilde{\pmb{\Sigma}}^{-1}_{(RMCD)}\left({\bf x}_i - \tilde{\pmb{\mu}}_{(RMCD)}\right)$, and proper cut-off values are required to detect the outliers. Those cut-off values are based on the following accurate approximations \citep{cerioli2010,cerchiello2016} \begin{eqnarray*} d_{i(RMCD)}^2 \sim \left\lbrace \begin{array}{ll} \frac{\left(w-1\right)^2}{w}Be\left(\frac{D}{2}, \frac{w-D-1}{2}\right) & \text{if} \ \ w_i=1 \\ \frac{w+1}{w}\frac{(w-1)D}{w-D}F_{D,w-D} & \text{if} \ \ w_i=0, \end{array} \right. \end{eqnarray*} where $w=\sum_{i=1}^nw_i$, and $Be$ and $F$ denote the Beta and F distributions, respectively. The observations whose Mahalanobis distance $d_{i(RMCD)}^2$ exceeds the $97.5\%$ quantile of the relevant distribution ($Be$ or $F$) are considered to be outliers and are hence removed from the dataset. The remainder of the dataset, assumed to be outlier free, will be used by FEDHC to learn the BN. The default value for $h$ is $[(n+p+1)/2]$, where $[\cdot]$ denotes the integer part. This value was proven to have the highest breakdown point \citep{hubert2010}, but, on the other hand, low efficiency. Changing $h$ yields an estimator with lower robustness properties and increases the computational cost of the RMCD estimator. For these reasons, this is the default value used inside the robustification process of the FEDHC algorithm.
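To make the two-stage construction concrete, the re-weighting stage can be sketched in a few lines. This is an illustrative sketch only, not the implementation used by \textit{pchc}: it assumes the first-stage robust distances and the cut-off value are already available (in practice the Beta/F approximations above supply the cut-off), and it hard-codes two dimensions for brevity.

```python
# Sketch of the RMCD re-weighting stage (stage two). Assumes the
# first-stage robust distances d1 and a cutoff are given; 2-d data
# for simplicity. A real implementation would use a robust-statistics
# library for stage one and the Beta/F-based cutoffs for detection.

def reweighted_estimates(X, d1, cutoff):
    """Weights w_i = 1 if d1[i] <= cutoff else 0; returns the weights,
    the re-weighted mean vector and the re-weighted covariance matrix."""
    w = [1 if d <= cutoff else 0 for d in d1]
    m = sum(w)
    mu = [sum(w[i] * X[i][j] for i in range(len(X))) / m for j in range(2)]
    S = [[sum(w[i] * (X[i][a] - mu[a]) * (X[i][b] - mu[b])
              for i in range(len(X))) / (m - 1)
          for b in range(2)] for a in range(2)]
    return w, mu, S

def mahalanobis2(x, mu, S):
    """Squared Mahalanobis distance for 2-d data (explicit 2x2 inverse)."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))

# toy data: a tight cluster plus one extreme point
X = [[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0], [0.1, 0.2], [10.0, 10.0]]
d1 = [0.1, 0.2, 0.1, 0.2, 50.0]   # pretend first-stage robust distances
w, mu, S = reweighted_estimates(X, d1, cutoff=9.0)
print(w)  # the extreme point receives weight 0
```

Observations whose re-weighted distance `mahalanobis2(x, mu, S)` exceeds the quantile-based cut-off would then be discarded before FEDHC is run.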
The case of $n < p$ and $n \ll p$ (very high dimensional case) in general can be treated in a similar way, by replacing the RMCD estimator with the high dimensional MCD approach of \cite{ro2015}. \section{Monte Carlo simulation studies} \label{mc} Extensive experiments were conducted on simulated data to investigate the quality of estimation of FEDHC compared to PCHC and MMHC-2. MMHC-1 participated in the simulation studies only with categorical data and not with continuous data, because it is prohibitively expensive. The continuous data were generated by synthetic BNs that contained various numbers of nodes, $p = (20, 30, 50, 100)$, with an average of 3 and 5 neighbours (edges) for each variable. For each case, 50 random BNs were generated with Gaussian data and various sample sizes. Categorical data were generated utilising two famous (in the BN learning community) realistic BNs, and the sample sizes varied. The \textit{R} packages \textit{MXM} \citep{tsagris2019a} and \textit{bnlearn} were used for the BN generation, and the \textit{R} packages \textit{pchc} and \textit{bnlearn} were utilised to apply the FEDHC, PCHC, MMHC-2 and MMHC-1 algorithms, respectively. All simulations were implemented on a desktop computer with an Intel Core i5-9500 CPU at 3.00GHz, with 48 GB RAM and an SSD installed. The metrics of quality of the learned BNs were the structural Hamming distance (SHD) \citep{tsamardinos2006} of the estimated DAG from the true DAG\footnote{This is defined as the number of operations required to make the estimated graph equal to the true graph. Instead of the true DAG, the Markov equivalence graph of the true BN was used; that is, some edges have been un-oriented, as their direction cannot be statistically decided.
The transformation from the DAG to the Markov equivalence graph was carried out using Chickering's algorithm \citep{chickering1995}.}, the computational cost and the number of tests performed during the skeleton identification phase, and the total duration of the algorithm. The PCHC and MMHC-1 algorithms have been implemented in \textit{C++}, hence the comparison of the execution times is not really fair at the programming language level. FEDHC and MMHC-2 have been implemented in \textit{R} (skeleton identification phase) and \textit{C++} (scoring phase). \subsection{Synthetic BNs with continuous data} \label{datgen} The procedure used to generate data for $X$ is summarised in the steps below. Let $X$ be a variable in $G$ and $Pa(X)$ be the parents of $X$ in $G$. \begin{enumerate} \item Sample the coefficients $\beta$ of $f\left(Pa(X)\right)$ uniformly at random from $[-1,-0.1] \cup [0.1,1]$. \item In case $Pa(X)$ is empty, $X$ is sampled from the standard normal distribution. Otherwise, $X = f\left(Pa(X)\right) = \beta_0 + \sum_i \beta_i Pa_i(X) + \epsilon_X$, a linear function\footnote{In general this can represent any (non-linear) function.} of $Pa(X)$, where $\epsilon_X$ is generated from a standard normal distribution. \end{enumerate} The average number of connecting edges (neighbours) was set to 3 and 5. The higher the number of edges, the denser the network and the harder the inference becomes. The sample sizes considered were $n = (100, 500, 1000, 5000, 1\times 10^4, 3\times 10^4, 5\times 10^4, 1\times 10^5, 3\times 10^5, 5\times 10^5, 1\times 10^6, 3\times 10^6, 5\times 10^6)$. Figures \ref{synthetic_3a}, \ref{synthetic_3b}, \ref{synthetic_5a} and \ref{synthetic_5b} present the SHD and the number of CI tests performed by each algorithm for a range of sample sizes (in log-scale). With 3 neighbours on average per node, the differences in the SHD are rather small, yet FEDHC achieves lower values.
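The generation scheme above amounts to ancestral sampling from a Gaussian BN. A hedged sketch follows, using a toy three-node chain rather than the randomly generated BNs of the simulations; the function names and the fixed DAG are illustrative only.

```python
import random

def topological_order(parents):
    """Return the nodes of a DAG (given as a node -> parent-list dict)
    in an order where every parent precedes its children."""
    order, seen = [], set()
    def visit(v):
        if v in seen:
            return
        for p in parents[v]:
            visit(p)
        seen.add(v)
        order.append(v)
    for v in parents:
        visit(v)
    return order

def simulate_gaussian_bn(n, parents, seed=7):
    """Ancestral sampling: each node is a linear function of its parents
    plus standard-normal noise, with coefficients drawn uniformly at
    random from [-1, -0.1] U [0.1, 1], as in the generation scheme."""
    rng = random.Random(seed)
    def coef():
        b = rng.uniform(0.1, 1.0)
        return b if rng.random() < 0.5 else -b
    nodes = topological_order(parents)
    betas = {v: [coef() for _ in parents[v]] for v in nodes}
    data = {v: [] for v in nodes}
    for _ in range(n):
        row = {}
        for v in nodes:  # parents are always sampled before children
            mean = sum(b * row[p] for b, p in zip(betas[v], parents[v]))
            row[v] = mean + rng.gauss(0.0, 1.0)
        for v in nodes:
            data[v].append(row[v])
    return data

# toy DAG: X1 -> X2 -> X3
parents = {"X1": [], "X2": ["X1"], "X3": ["X2"]}
data = simulate_gaussian_bn(1000, parents)
print(len(data["X3"]))  # 1000
```

Denser DAGs (3 or 5 parents on average) only change the `parents` dictionary; the sampling loop is identical.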
With 5 neighbours on average though, the differences are more significant and increase with the sample size. As for the number of CI tests executed during the skeleton identification phase, FEDHC is the evident winner, as it executes up to 6 times fewer tests, regardless of the average number of neighbours. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_3_20.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_3_20.png} \\ (a) SHD vs log of sample size. & (b) Number of CI tests vs log of sample size. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_3_30.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_3_30.png} \\ (c) SHD vs log of sample size. & (d) Number of CI tests vs log of sample size. \\ \end{tabular} \caption{SHD and number of CI tests against log of sample size for 20 and 30 dimensions with \textbf{3 neighbours} on average. \label{synthetic_3a} } \end{figure} \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_3_50.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_3_50.png} \\ (a) SHD vs log of sample size. & (b) Number of CI tests vs log of sample size. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_3_100.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_3_100.png} \\ (c) SHD vs log of sample size. & (d) Number of CI tests vs log of sample size. \\ \end{tabular} \caption{SHD and number of CI tests against log of sample size for 50 and 100 dimensions with \textbf{3 neighbours} on average. \label{synthetic_3b} } \end{figure} \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_5_20.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_5_20.png} \\ (a) SHD vs log of sample size. & (b) Number of CI tests vs log of sample size.
\\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_5_30.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_5_30.png} \\ (c) SHD vs log of sample size. & (d) Number of CI tests vs log of sample size. \\ \end{tabular} \caption{SHD and number of CI tests against log of sample size for 20 and 30 dimensions with \textbf{5 neighbours} on average. \label{synthetic_5a} } \end{figure} \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_5_50.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_5_50.png} \\ (a) SHD vs log of sample size. & (b) Number of CI tests vs log of sample size. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_5_100.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_5_100.png} \\ (c) SHD vs log of sample size. & (d) Number of CI tests vs log of sample size. \\ \end{tabular} \caption{SHD and number of CI tests against log of sample size for various dimensions with \textbf{5 neighbours} on average. \label{synthetic_5b} } \end{figure} \subsection{Robustified FEDHC for continuous data} Examination of the robustified FEDHC proceeds along two axes of comparison: the outlier-free case and the case with outliers present. At first, the performances of the raw and the robustified FEDHC in the outlier-free case are evaluated. Figures \ref{robust_no_1} and \ref{robust_no_2} show that there is no loss in accuracy when using the robustified FEDHC over the raw FEDHC. Computationally speaking though, the raw FEDHC is significantly faster than the robustified FEDHC. For small sample sizes the cost of the robustified FEDHC can be 10 times higher than that of the raw FEDHC, while for large sample sizes it can be double or only 5\% higher. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{3_20_robust.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{5_30_robust.png} \\ (a) 3 neighbours. & (b) 5 neighbours.
\\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{3_30_robust.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{5_30_robust.png} \\ (c) 3 neighbours. & (d) 5 neighbours. \\ \end{tabular} \caption{Ratios of SHD and computational cost against log of sample size for various dimensions with \textbf{3 neighbours} and \textbf{5 neighbours} on average. The ratios depict the errors and computational cost of the raw FEDHC relatively to the robustified FEDHC with NO outliers. \label{robust_no_1} } \end{figure} \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{3_50_robust.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{5_50_robust.png} \\ (a) 3 neighbours. & (b) 5 neighbours. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{3_100_robust.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{5_100_robust.png} \\ (c) 3 neighbours. & (d) 5 neighbours. \end{tabular} \caption{Ratios of SHD and computational cost against log of sample size for various dimensions with \textbf{3 neighbours} and \textbf{5 neighbours} on average. The ratios depict the errors and computational cost of the raw FEDHC relatively to the robustified FEDHC with NO outliers. \label{robust_no_2} } \end{figure} The performances of the raw FEDHC and of the robustified FEDHC in the presence of extreme outliers are evaluated next. The BN generation scheme is the one described in Section \ref{datgen}, with the exception that 5\% of outlying observations were added. The sample sizes considered are smaller because, although FEDHC is computationally efficient, it becomes really slow in the presence of outliers. The results presented in Figures \ref{robust1} and \ref{robust2} evidently show the gain of using the robustified FEDHC over the raw FEDHC. The SHD of the raw FEDHC increases by anywhere from 100\% up to 700\% with 3 neighbours on average and 50 variables.
The duration of the raw FEDHC increases substantially\footnote{Similar patterns were observed for the duration of the skeleton learning phase and for the number of CI tests.}. The raw FEDHC becomes up to more than 200 times slower than the robustified version with hundreds of thousands of observations and 50 variables. This is attributed to the HC phase of the raw FEDHC, which consumes a tremendous amount of time; the outliers produce noise that increases the workload of the HC phase. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{robust_3_20.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{robust_5_20.png} \\ (a) 3 neighbours. & (b) 5 neighbours. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{robust_3_30.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{robust_5_30.png} \\ (c) 3 neighbours. & (d) 5 neighbours. \\ \end{tabular} \caption{Ratios of SHD and computational cost against log of sample size for 20 and 30 dimensions with \textbf{3 neighbours} and \textbf{5 neighbours} on average. The ratios depict the errors and computational cost of the raw FEDHC relatively to the robustified FEDHC with 5\% outliers. \label{robust1} } \end{figure} \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{robust_3_50.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{robust_5_50.png} \\ (a) 3 neighbours. & (b) 5 neighbours. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{robust_3_100.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{robust_5_100.png} \\ (c) 3 neighbours. & (d) 5 neighbours. \end{tabular} \caption{Ratios of SHD and computational cost against log of sample size for 50 and 100 dimensions with \textbf{3 neighbours} and \textbf{5 neighbours} on average. The ratios depict the errors and computational cost of the raw FEDHC relatively to the robustified FEDHC with 5\% outliers.
\label{robust2} } \end{figure} \subsection{Realistic BNs with categorical data} The $f\left(Pa(X)\right)$ function utilised in the continuous data case relies on the $\beta$ coefficients. The larger the magnitude of their values, the stronger the association between the edges becomes, and hence the easier they are to identify. For BNs with categorical data, one could apply the same generation technique and then discretise the simulated data. To avoid biased or optimistic estimates favoring one or the other method, two real BNs with categorical data were utilised to simulate data. These are a) the \textit{Insurance} BN, used for evaluating car insurance risks \citep{beinlich1989}, which consists of 27 variables (nodes) and 52 (directed) edges, and b) the \textit{Alarm} BN, designed to provide an alarm message system for patient monitoring, which consists of 37 variables and 45 (directed) edges. The \textit{R} package \textit{bnlearn} contains a few thousand categorical instantiations from these BNs, but for the purpose of the simulation studies more instantiations were generated using the same package. The sample sizes considered were $n=(1\times 10^4, 2\times 10^4, 5\times 10^4, 1\times 10^5, 2\times 10^5, 5\times 10^5, 1\times 10^6, 2\times 10^6, 5\times 10^6)$. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_insurance.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_insurance.png} \\ (a) SHD vs log of sample size. & (b) Number of CI tests vs log of sample size. \\ \includegraphics[scale = 0.38, trim = 70 0 0 0]{shd_alarm.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{ntests_alarm.png} \\ (c) SHD vs log of sample size. & (d) Number of CI tests vs log of sample size. \\ \end{tabular} \caption{SHD and number of CI tests against log of sample size with \textbf{categorical} data.
\label{realistic} } \end{figure} Figure \ref{realistic} shows the SHD and the number of CI tests executed by each algorithm against the sample size. MMHC-1 evidently has the poorest performance along both axes of comparison. Our implementation (MMHC-2) performs substantially better, but the overall winner is PCHC. FEDHC, on the other hand, performs better than MMHC-1, yet is only the second best option. \subsection{Scalability of FEDHC} \label{scalability} The computational cost of each algorithm was also measured and appears in Figure \ref{scalab} as a function of the sample size. The empirical slopes of all lines in Figure \ref{scalab} are nearly equal to 1, indicating that the scalability of FEDHC, PCHC, and MMHC-2 is linear in the sample size. Hence, the computational cost of all algorithms increases linearly with respect to the sample size: for any percentage-wise increase in the sample size, the time increases by the same percentage. The computational cost of MMHC-1 was not evaluated in the categorical data case because, similarly to the continuous data case, it was too high. It is surprising though that the computational cost of FEDHC is similar to that of PCHC and MMHC-2. In fact, the skeleton identification phase requires about the same amount of time, only 8 seconds with 5 million observations. The scoring phase is the most expensive phase of the algorithms, absorbing 73\%-99\% of the total computation time. Regarding FEDHC and MMHC-2, since the initial phase of both has been implemented in \textit{R}, this can be attributed to the fact that the calculations of the partial correlation in FEDHC are heavier than those in MMHC-2, because the conditioning set in the former can grow larger than the conditioning set in MMHC-2, which is always bounded by a pre-specified value $k$. Thus, MMHC-2 performs more, but computationally lighter, calculations than FEDHC.
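The "empirical slope near 1" claim can be checked by regressing log-time on log-$n$: a slope of 1 corresponds to linear scaling, a slope of 2 to quadratic, and so on. A small sketch follows; the timings are made up for illustration and are not the measurements reported in Figure \ref{scalab}.

```python
import math

def loglog_slope(ns, times):
    """Least-squares slope of log(time) on log(n); a value near 1
    indicates cost that is linear in the sample size."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in times]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# hypothetical timings that double when n doubles (purely illustrative)
ns = [10_000, 20_000, 40_000, 80_000]
times = [0.5, 1.0, 2.0, 4.0]
print(round(loglog_slope(ns, times), 3))  # 1.0
```

A quadratic-time algorithm would instead yield a slope near 2 on the same plot.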
\begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.38, trim = 70 0 0 0]{skeltime_3_50.png} & \includegraphics[scale = 0.38, trim = 50 0 0 0]{time_3_50.png} \\ \multicolumn{2}{c}{\textbf{Continuous data with 3 neighbours on average.}} \\ \includegraphics[scale = 0.38, trim = 50 10 0 0]{skeltime_5_50.png} & \includegraphics[scale = 0.38, trim = 50 10 0 0]{time_5_50.png} \\ \multicolumn{2}{c}{\textbf{Continuous data with 5 neighbours on average.}} \\ \includegraphics[scale = 0.38, trim = 70 10 0 0]{skeltime_insurance.png} & \includegraphics[scale = 0.38, trim = 50 10 0 0]{time_insurance.png} \\ \multicolumn{2}{c}{\textbf{Categorical data.}} \end{tabular} \caption{Scalability of the algorithms with respect to the sample size for some selected cases. The results for the other cases convey the same message and are thus not presented. The left column refers to the skeleton identification phase, whereas the right column refers to both phases. \label{scalab} } \end{figure} \section{Illustration of the algorithms on real economics data using the \textit{R} package \textit{pchc} } \label{real} The \textit{R} package \textit{pchc} is first presented, and then two examples with the datasets used in \cite{tsagris2021} illustrate the performance of FEDHC against its competitors, PCHC, MMHC-1 and MMHC-2. The advantages of BNs have already been discussed in \cite{tsagris2021}, and hence the examples focus on the comparison of FEDHC to PCHC, MMHC-1 and MMHC-2. \subsection{Expenditure data} The first example concerns a dataset with continuous variables containing information on the monthly credit card expenditure of individuals. It is the \textbf{Expenditure} dataset \citep{greene2003} and is publicly accessible via the \textit{R} package \textit{AER} \citep{aer2008}. The dataset contains information on 1,319 observations (10\% of the original data set) on the following 12 variables.
Whether the application for credit card was accepted or not (\textbf{Card}), the number of major derogatory reports (\textbf{Reports}), the age in years plus twelfths of a year (\textbf{Age}), the yearly income in \$10,000 (\textbf{Income}), the ratio of monthly credit card expenditure to yearly income (\textbf{Share}), the average monthly credit card expenditure (\textbf{Expenditure}), whether the person owns their home or they rent (\textbf{Owner}), whether the person is self employed or not (\textbf{Selfemp}), the number of dependents + 1 (\textbf{Dependents}), the number of months living at current address (\textbf{Months}), the number of major credit cards held (\textbf{Majorcards}) and the number of active credit accounts (\textbf{Active}). The \textit{R} package \textit{AER} contains the data, which must be loaded and processed for the algorithms to run. \\ \begin{verbatim} > library(AER) ## CreditCard are available > library(bnlearn) ## To run MMHC-1 > data(CreditCard) ## load the data > x <- CreditCard > colnames(x) <- c( "Card", "Reports", "Age", "Income", "Share", "Expenditure", + "Owner", "Selfemp", "Dependents", "Months", "Majorcards", "Active" ) ## Prepare the data > for (i in 1:12) x[, i] <- as.numeric(x[, i]) > x <- as.matrix(x) > x[, c(1, 7, 8)] <- x[, c(1, 7, 8)] - 1 ## Run all 4 algorithms > a1 <- bnlearn::mmhc( as.data.frame(x), restrict.args = + list(alpha = 0.05, test = "zf") ) > a2 <- pchc::mmhc(x, alpha = 0.05) > a3 <- pchc::pchc(x, alpha = 0.05) > a4 <- pchc::fedhc(x, alpha = 0.05) \end{verbatim} In order to plot the fitted BNs of each algorithm, the following commands were used. \begin{verbatim} > pchc::bnplot(a1) > pchc::bnplot(a2$dag) > pchc::bnplot(a3$dag) > pchc::bnplot(a4$dag) \end{verbatim} This example shows the practical usefulness of the BNs. Evidently, this small scale experiment shows that companies can customize their products according to the key factors that determine the consumers’ behaviour.
Instead of selecting one variable only, a researcher/practitioner can identify the relationships among all variables by estimating the causal mechanistic system that generated the data. The BN can further reveal information about the variables that are statistically significantly related. According to FEDHC (Figure \ref{fig_expenditure}(a)), the age of the individual affects their income, the number of months they have been living at their current address, whether they own their home or not, and the ratio of their monthly credit card expenditure to their yearly income. The only variables associated with the number of major derogatory reports (Reports) are whether the consumer's application for credit card was accepted or not (Card) and the number of active credit accounts (Active). In fact, these two variables are parents of Reports, as the arrows are directed towards it. A third advantage of BNs is that they provide a solution to the variable selection problem. The parents of the variable Majorcards (number of major credit cards held) are Card (whether the application for credit card was accepted or not) and Income (yearly income in \$10,000), its only child is Active (number of active credit accounts), and its only spouse (parent of Active) is Owner (whether the consumer owns their home). The collection of those parents, children and spouses forms the Majorcards' MB. That is, no other variable increases the information on the number of major credit cards held by the consumer. For any given variable, one can straightforwardly obtain (and visualise) its MB, which can be used for the construction of the appropriate linear regression model. Figure \ref{fig_expenditure} contains the BNs using both implementations of MMHC, the PCHC and the FEDHC algorithms fitted to the expenditure data, with the variables sorted in a topological order \citep{chickering1995}, a tree-like structure.
The BIC values of the BNs learned by MMHC-1 and MMHC-2 are equal to $-32171.75$ and $-32171.22$, respectively, while for PCHC and FEDHC they are both equal to $-32171.75$. This is an indication that all four algorithms produced BNs of nearly the same quality. On closer examination of the graphs, one can detect some differences between the algorithms. For instance, \textbf{Age} is directly related to \textbf{Active} according to PCHC and MMHC-2, but not according to FEDHC and MMHC-1. Further, all algorithms have identified \textbf{Owner} as the parent of \textbf{Income}, and not vice-versa. This is related to the prior knowledge discussed in Section \ref{prior} and will be examined in the next example with categorical data. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.48, trim = 70 0 0 0]{a4.png} & \includegraphics[scale = 0.48, trim = 20 0 0 0]{a3.png} \\ (a) FEDHC. & (b) PCHC. \\ \includegraphics[scale = 0.48, trim = 70 0 0 0]{a1.png} & \includegraphics[scale = 0.48, trim = 20 0 0 0]{a2.png} \\ (c) MMHC-1. & (d) MMHC-2. \end{tabular} \caption{\textbf{Expenditure data}. Estimated BNs using (a) FEDHC, (b) PCHC, (c) MMHC-1 and (d) MMHC-2. \label{fig_expenditure} } \end{figure} \subsection{Income data} The second example data set contains categorical variables and originates from an example in the book "The Elements of Statistical Learning" \citep{friedman2001}; it is publicly available from the \textit{R} package \textit{arules} \citep{arules2011}. It consists of $6,876$ instances (obtained from the original data set with $9,409$ instances, by removing observations with missing annual income) and a mixture of 13 categorical and continuous demographic variables. The continuous variables (age, education, income, years in bay area, number in household, and number of children) were discretised based on their median values, as suggested by the authors of the package.
\textbf{Income}: "\$0-\$40,000" or "\$40,000+", \textbf{Sex}: "male" or "female", \textbf{Marriage}: "married", "cohabitation", "divorced", "widowed" or "single", \textbf{Age}: "14-34" or "35+", \textbf{Education}: "college graduate" or "no college graduate", \textbf{Occupation}: "professional/managerial", "sales", "laborer", "clerical/service", "homemaker", "student", "military", "retired" or "unemployed", \textbf{Bay} (number of years in bay area): "1-9" or "10+", \textbf{No of people} (number of people living in the house): "1" or "2+", \textbf{Children}: "0" or "1+", \textbf{Rent}: "own", "rent" or "live with parents/family", \textbf{Type}: "house", "condominium", "apartment", "mobile home" or "other", \textbf{Ethnicity}: "American Indian", "Asian", "black", "east Indian", "hispanic", "white", "pacific islander" or "other" and \textbf{Language} (language spoken at home): "english", "spanish" or "other". The dataset is first accessed via the \textit{R} package \textit{arules} and is preprocessed as suggested in \textit{arules}.
\\ \begin{verbatim} > library(arules) > data(IncomeESL) ## remove incomplete cases > IncomeESL <- IncomeESL[complete.cases(IncomeESL), ] ## preparing the data set > IncomeESL[["income"]] <- factor((as.numeric(IncomeESL[["income"]]) > 6) + 1, + levels = 1 : 2 , labels = c("0-40,000", "40,000+")) > IncomeESL[["age"]] <- factor((as.numeric(IncomeESL[["age"]]) > 3) + 1, + levels = 1 : 2 , labels = c("14-34", "35+")) > IncomeESL[["education"]] <- factor((as.numeric(IncomeESL[["education"]]) > 4) + + 1, levels = 1 : 2 , labels = c("no college graduate", "college graduate")) > IncomeESL[["years in bay area"]] <- factor( + (as.numeric(IncomeESL[["years in bay area"]]) > 4) + 1, + levels = 1 : 2 , labels = c("1-9", "10+")) > IncomeESL[["number in household"]] <- factor( + (as.numeric(IncomeESL[["number in household"]]) > 3) + 1, + levels = 1 : 2 , labels = c("1", "2+")) > IncomeESL[["number of children"]] <- factor( + (as.numeric(IncomeESL[["number of children"]]) > 1) + 0, + levels = 0 : 1 , labels = c("0", "1+")) \end{verbatim} Some more steps are required prior to running the BN algorithms. \\ \begin{verbatim} > x <- IncomeESL > x <- x[, -8] > colnames(x) <- c( "Income", "Sex", "Marriage", "Age", "Education", "Occupation", + "Bay", "No of people", "Children", "Rent", "Type", "Ethnicity", "Language" ) > nam <- colnames(x) \end{verbatim} The importance of prior knowledge incorporation discussed in Section \ref{prior} becomes evident in this example. Prior knowledge can be added in the \textbf{blacklist} argument denoting the forbidden directions (arrows). 
\\ \begin{verbatim} > black <- matrix(nrow = 26, ncol = 2) > black <- as.data.frame(black) > for (i in 1:13) black[i, ] <- c(nam[i], nam[2]) > for (i in 1:13) black[13 + i, ] <- c(nam[i], nam[4]) > black <- black[-c(2, 17), ] > black <- rbind( black, c(nam[9], nam[3]) ) > black <- rbind( black, c(nam[3], nam[6]) ) > black <- rbind( black, c(nam[9], nam[6]) ) > black <- rbind( black, c(nam[6], nam[5]) ) > black <- rbind( black, c(nam[3], nam[1]) ) > black <- rbind( black, c(nam[1], nam[5]) ) > black <- rbind( black, c(nam[10], nam[1]) ) > black <- rbind( black, c(nam[10], nam[5]) ) > black <- rbind( black, c(nam[10], nam[6]) ) > black <- rbind( black, c(nam[13], nam[12]) ) > colnames(black) <- c("from", "to") \end{verbatim} Finally, the 4 BN algorithms are applied to the Income data. \\ \begin{verbatim} > b1 <- bnlearn::mmhc( x, blacklist = black, restrict.args = + list(alpha = 0.05, test = "mi") ) > b2 <- pchc::mmhc(x, method = "cat", alpha = 0.05, blacklist = black, + score = "bic") > b3 <- pchc::pchc(x, method = "cat", alpha = 0.05, blacklist = black, + score = "bic") > b4 <- pchc::fedhc(x, method = "cat", alpha = 0.05, blacklist = black, + score = "bic") \end{verbatim} Figure \ref{fig_income} presents the fitted BNs of the MMHC-1, MMHC-2, PCHC and FEDHC algorithms. There are some distinct differences between the algorithms. For instance, PCHC is the only algorithm that has not identified \textbf{Education} as the parent of \textbf{Bay}. Also, the BN learned by MMHC-2 is the densest one (it contains the most arrows), whereas PCHC learned the BN with the fewest arrows. This example further demonstrates the necessity of prior knowledge. BN learning algorithms fit a model to the data, ignoring the underlying truth and the relevant economic theory. Economic theory can be used as prior knowledge to help mitigate the errors and lead to more truthful BNs.
The exclusion of the \textbf{blacklist} argument (forbidden directions) would yield some irrational directions, for instance that occupation or age might affect sex, or that marriage affects age, simply because these directions could increase the score. Finally, BNs are related to synthetic population generation, where the data are usually categorical. This task requires the specification of the joint distribution of the data, and BNs accomplish this \citep{sun2015}. Based on the Markov condition (\ref{markov}), the joint distribution can be written down explicitly, allowing for synthetic population generation in a sequential order. One commences by generating values for education and sex. Using these two variables, values for occupation are generated. These values, along with income and age, can be used to generate values for the marital status, and so on. \begin{figure}[!ht] \centering \begin{tabular}{cc} \includegraphics[scale = 0.48, trim = 60 0 0 0]{b4.png} & \includegraphics[scale = 0.48, trim = 20 0 0 0]{b3.png} \\ (a) FEDHC. & (b) PCHC. \\ \includegraphics[scale = 0.48, trim = 60 0 0 0]{b1.png} & \includegraphics[scale = 0.48, trim = 20 0 0 0]{b2.png} \\ (c) MMHC-1. & (d) MMHC-2. \\ \end{tabular} \caption{\textbf{Income data}. Estimated BNs using (a) FEDHC, (b) PCHC, (c) MMHC-1 and (d) MMHC-2. \label{fig_income} } \end{figure} \section{Conclusions} \label{concl} This paper proposed to combine the first phase of the FBED variable selection algorithm with the HC scoring phase, leading to a new hybrid algorithm, termed FEDHC. Additionally, a new implementation of the MMHC algorithm was provided. Finally, the paper presented robustified (against outliers) versions of FEDHC, PCHC and MMHC. The robustified version of FEDHC was shown to be nearly 40 times faster than the raw version and yielded BNs of higher quality when outliers were present.
Simulation studies showed that, in terms of computational efficiency, FEDHC is comparable to PCHC, and, along with MMHC-2, FEDHC was able to fit BNs to continuous data with sample sizes on the order of hundreds of thousands in a few seconds and on the order of millions in a few minutes. It must be highlighted, though, that the skeleton identification phases of FEDHC and MMHC-1 have been implemented in \textit{R} and not in \textit{C++}. Additionally, FEDHC always executed significantly fewer CI tests than its competitors. Ultimately, in terms of accuracy, FEDHC outperformed its competitors with continuous data, and it was more accurate than or on par with MMHC-1 and MMHC-2 with categorical data, but less accurate than PCHC. The rationale of MMHC and PCHC is to perform variable selection for each node and then apply HC to the resulting network. In a similar spirit, \cite{meinshausen2006} used the LASSO for variable selection with the aim of constructing an undirected network. The HC phase could be incorporated in the graphical LASSO to learn the underlying BN. Broadly speaking, the combination of a network learning phase with a scoring phase can yield hybrid algorithms. Other modern hybrid methods for BN learning include \cite{kuipers2020} on hybrid structure learning and sampling. They combine constraint-based pruning with MCMC inference schemes (also to improve the overall search space) and find a combination that is relatively efficient with relatively good performance. The constraint-based part is interchangeable and could connect well with MMHC, PCHC, or FEDHC. FEDHC is not the first algorithm that has outperformed MMHC. Recent algorithms include PCHC \citep{tsagris2021}, the SP algorithm for Gaussian DAGs \citep{raskutti2018} and NOTEARS \citep{zheng2018}. The algorithms of \cite{zhang2012} and \cite{chalupka2018} were also shown to outperform MMHC in the presence of latent confounders, a setting not examined here.
The advantage of the latter two is that they employ non-parametric tests, such as kernel CI tests, thus allowing for non-linear relationships. Algorithms that detect non-linear relationships among the variables, such as those proposed by \cite{zhang2012} and \cite{chalupka2018}, were not covered by this paper. Further, our comparative analysis was only with MMHC \citep{tsamardinos2006} and PCHC \citep{tsagris2021} due to their close relationship with FEDHC. Future research includes a comparison of all algorithms along more directions. For instance, a) the effect of the Pearson and Spearman CI tests and the effect of the $X^2$ and $G^2$ CI tests, b) the effect of outliers, c) the effect of the scoring methods (Tabu search and HC), d) the effect of the average number of neighbours (network density), and e) the effect of the number of variables on the quality of the BN learned by either algorithm. These directions can be used to numerically evaluate the asymptotic properties of the BN learning algorithms with tens of millions of observations. Another interesting direction is the incorporation of fast non-linear CI tests, such as the distance correlation \citep{szekely2007,szekely2014,huo2016,shen2022}. The distance correlation could be utilized during the skeleton identification phase of FEDHC, mainly because FEDHC performs fewer CI tests than its competitors. \clearpage \section*{Appendix} \subsection*{A: Conditional independence tests} The type of CI test executed during the skeleton identification phase depends upon the nature of the data. Let $X$ and $Y$ be two random variables, and $\mathbf{Z}$ be a (possibly empty) set of random variables. Statistically speaking, $X$ and $Y$ are conditionally independent given $\mathbf{Z}$ $\left( X\perp\!\!\!\perp Y | {\bf Z} \right)$, if $P(X,Y|\mathbf{Z}) = P(X|\mathbf{Z}) \cdot P(Y|\mathbf{Z})$, and this holds for all values of $X$, $Y$ and $\mathbf{Z}$.
Equivalently, conditional independence of $X$ and $Y$ given $\mathbf{Z}$ implies $P(X|Y,\mathbf{Z}) = P(X|\mathbf{Z})$ and $P(Y|X,\mathbf{Z}) = P(Y|\mathbf{Z})$. \subsubsection*{Pearson correlation for continuous data} A frequently employed CI test for two continuous variables $X$ and $Y$ conditional on a set of variables ${\bf Z}$ is the partial correlation test \citep{baba2004}, which assumes linear relationships among the variables. The test statistic for the partial Pearson correlation is given by \begin{eqnarray} \label{pearson} T_p = \frac{1}{2}\left|\log{\frac{1+r_{X,Y|{\bf Z}}}{1-r_{X,Y|{\bf Z}}}} \right| \sqrt{n - |{\bf Z}| - 3}, \end{eqnarray} where $n$ is the sample size, $|{\bf Z}|$ denotes the number of conditioning variables and $r_{X,Y|{\bf Z}}$ is the partial Pearson correlation\footnote{The partial correlation is efficiently computed using the correlation matrix of $X$, $Y$ and ${\bf Z}$ \citep{baba2004}.} of $X$ and $Y$ conditioning on $\bf Z$\footnote{The implementation of the PC algorithm in the \textit{R} package \textit{Rfast} compares $T_p$ (\ref{pearson}) against a $t$ distribution with $n-|{\bf Z}|-3$ degrees of freedom, whereas the MMHC algorithm in the \textit{R} package \textit{bnlearn} compares $T_p$ against the standard normal distribution. The differences are evident in small sample sizes, but become negligible when the sample sizes are on the order of a few tens.}. When ${\bf Z}$ is empty ($|{\bf Z}| = 0$), the partial correlation reduces to the usual Pearson correlation coefficient. \subsubsection*{Spearman correlation for continuous data} The non-parametric alternative, which is assumed to be more robust to outliers, is the Spearman correlation coefficient. The Spearman correlation is equivalent to the Pearson correlation applied to the ranks of the variables. Its test statistic, though, is given by $T_s = T_p \times 1.029563$ \citep{fieller1957,fieller1961}.
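As an illustration, the test statistic (\ref{pearson}) is straightforward to compute once the (partial) correlation is available. The following is a minimal sketch in Python rather than the paper's \textit{R}; the function names are ours and not part of any package.

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    # Partial correlation of X and Y given a single conditioning variable Z,
    # computed from the pairwise Pearson correlations.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

def t_p(r, n, nz):
    # Test statistic of the partial Pearson correlation test:
    # T_p = 0.5 * |log((1 + r) / (1 - r))| * sqrt(n - |Z| - 3),
    # where nz = |Z| is the number of conditioning variables.
    return 0.5 * abs(math.log((1 + r) / (1 - r))) * math.sqrt(n - nz - 3)

# Unconditional case (|Z| = 0): r = 0.5 with n = 103 observations.
stat = t_p(0.5, 103, 0)
```

For the Spearman version one would apply the same functions to the ranks of the variables and multiply $T_p$ by the constant $1.029563$ quoted above.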
\subsubsection*{$G^2$ test of independence for categorical data} The $G^2$ test of independence of two categorical variables $X$ and $Y$ conditional on a set of variables ${\bf Z}$ is defined as \citep{agresti2002} \begin{eqnarray} \label{g2} G^2=2\sum_l\sum_{i, j}O_{ij|l}\log{\frac{O_{ij|l}}{E_{ij|l}}}, \end{eqnarray} where $O_{ij|l}$ are the observed frequencies of the $i$-th and $j$-th values of $X$ and $Y$ respectively for the $l$-th value of $\bf Z$. The $E_{ij|l}$ are their corresponding expected frequencies, computed by $E_{ij|l}=\frac{O_{i+|l}O_{+j|l}}{O_{++|l}}$, where $O_{i+|l} = \sum_{j}O_{ij|l}$, $O_{+j|l}=\sum_{i}O_{ij|l}$ and $O_{++|l}=n_l$. Under the conditional independence assumption, the $G^2$ test statistic follows the $\chi^2$ distribution with $(|X| - 1) (|Y| - 1) |{\bf Z}|$ degrees of freedom, where $|X|$ and $|Y|$ denote the number of values of $X$ and $Y$, and $|{\bf Z}|$ refers to the cardinality of ${\bf Z}$, the total number of values of ${\bf Z}$. \subsubsection*{$X^2$ test of independence for categorical data} Alternatively, one could use the Pearson $X^2$ test statistic $X^2= \sum_l\sum_{i, j}\frac{\left(O_{ij|l} - E_{ij|l}\right)^2}{E_{ij|l}}$, which has the same properties as the $G^2$ test statistic (\ref{g2}). The drawback of $X^2$ is that it cannot be computed when $E_{ij|l}=0$. On the contrary, $G^2$ can be computed in such cases since $\lim_{x\rightarrow 0}x\log x = 0$. For either aforementioned test, when ${\bf Z}$ is the empty set, both tests examine the unconditional association between the variables $X$ and $Y$\footnote{For a practical comparison between the two tests based on extensive simulation studies see \cite{alenazi2020}.}. \subsubsection*{Permutation based p-values} The aforementioned test statistics produce asymptotic p-values. In the case of small sample sizes, computationally intensive methods like permutations might be preferable.
With continuous variables, for instance, when testing for unconditional independence the idea is to permute the pairs multiple times and each time calculate the relevant test statistic. For the conditional independence of $X$ and $Y$ conditional on $\bf Z$, the partial correlation is computed from the residuals of two linear regression models, $X \sim {\bf Z}$ and $Y \sim {\bf Z}$. In this instance, the pairs of the residual vectors are permuted multiple times. With categorical variables, this approach is more complicated and care must be taken so as to retain the row and column totals of the resulting contingency tables. In either case, the p-value is then computed as the proportion of times the permuted test statistics exceed the observed test statistic computed using the original data. Permutation based techniques have been shown to improve the quality of BNs \citep{tsamardinos2010} in small-sample cases. On the contrary, the FEDHC algorithm aims at making inference on datasets with large sample sizes, for which asymptotic statistical tests are valid and reliable enough to produce correct decisions. \subsection*{B: Computational details of FEDHC} With continuous data, the correlation matrix is computed once and utilised throughout the skeleton identification phase. FEDHC returns the correlation matrix and the matrix of the p-values of all pairwise associations, which is useful in a second run of the algorithm with a different significance level. This is a significant advantage when BNs have to be fitted to large-scale datasets, as the correlation matrix can be given as an input to FEDHC to further reduce FEDHC's computational cost.
The partial correlation coefficient is given by \begin{eqnarray*} r_{X,Y|{\bf Z}}= \left\lbrace \begin{array}{cc} \frac{R_{X,Y} - R_{X,Z} R_{Y,Z}}{ \sqrt{ \left(1 - R_{X,Z}^2\right) \left(1 - R_{Y,Z}^2\right) }} & \text{if} \ \ |{\bf Z}|=1 \\ -\frac{ {\bf A}_{1,2} }{ \sqrt{{\bf A}_{1,1}{\bf A}_{2,2}} } & \text{if} \ \ |{\bf Z}| > 1 \end{array} \right\rbrace, \end{eqnarray*} where $R_{X,Y}$ is the correlation between variables $X$ and $Y$, while $R_{X,Z}$ and $R_{Y,Z}$ denote the correlations between $X$ \& $Z$ and $Y$ \& $Z$, respectively. ${\bf A}={\bf R}_{X,Y,{\bf Z}}^{-1}$ is the inverse of the sub correlation matrix ${\bf R}_{X,Y,{\bf Z}}$ of the variables $X, Y, {\bf Z}$, and ${\bf A}_{i,j}$ symbolises the element in the $i$-th row and $j$-th column of the matrix ${\bf A}$. The CI tests executed during the initial phase compute the logarithm of the p-value, instead of the p-value itself, to avoid the numerical overflows observed when a large test statistic produces a p-value equal to $0$. Additionally, the computational cost of FEDHC's first phase can be further reduced via parallel programming. It is also possible to store the p-values of each CI test for future reference. When a different significance level must be used, this will further decrease the associated computational cost of the skeleton identification phase in a second run. However, as was shown in section \ref{scalability}, the cost of this phase is really small (a few seconds) even for millions of observations. The largest portion of this phase's computational cost is attributed to the calculation of the correlation matrix, which can be passed into subsequent runs of the algorithm. Finally, \cite{tsagris2021} disregarded the potential of applying the PC-orientation rules \citep{spirtes1991,spirtes2000} prior to the scoring phase as a means of improving the performance of FEDHC and MMHC, and this is not pursued any further.
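The $|{\bf Z}|>1$ branch of the formula above can be sketched numerically via the inverse of the sub correlation matrix. A minimal Python illustration follows, assuming $X$ and $Y$ occupy the first two rows and columns of ${\bf R}_{X,Y,{\bf Z}}$; this is a sketch, not the package's actual implementation.

```python
import numpy as np

def partial_correlation(R, i=0, j=1):
    # r_{X,Y|Z} from the inverse of the (sub) correlation matrix:
    # r = -A_{1,2} / sqrt(A_{1,1} * A_{2,2}), with A = R^{-1},
    # where X and Y sit in rows/columns i and j of R.
    A = np.linalg.inv(R)
    return -A[i, j] / np.sqrt(A[i, i] * A[j, j])
```

For a single conditioning variable the result coincides with the recursive $|{\bf Z}|=1$ formula, which offers a quick sanity check of the implementation.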
\subsection*{C: The \textit{R} package \textit{pchc}} The package \textit{pchc} was first launched in \textit{R} in July 2020 and initially contained the PCHC algorithm. It now includes the FEDHC and MMHC-2 algorithms, functions for testing (un)conditional independence with continuous and categorical data, data generation, BN visualisation and utility functions. It imports the \textit{R} packages \textit{bnlearn}, \textit{Rfast} and the built-in package \textit{stats}. \textit{pchc} is distributed as part of the CRAN R package repository and is compatible with MacOS-X, Windows, Solaris and Linux operating systems. Once the package is installed and loaded \begin{verbatim}
> install.packages("pchc")
> library(pchc)
\end{verbatim} \noindent it is ready to use without an internet connection. The signature of the function \textbf{fedhc}, along with a short explanation of its arguments, is displayed below. \\ \begin{verbatim}
> fedhc(x, method = "pearson", alpha = 0.05, robust = FALSE, ini.stat = NULL,
+ R = NULL, restart = 10, score = "bic-g", blacklist = NULL, whitelist = NULL)
\end{verbatim} \begin{itemize} \item x: A numerical matrix with the variables. If you have a data.frame (i.e. categorical data), turn it into a matrix first. Note that for categorical data, the numbers must start from 0. No missing data are allowed. \item method: If you have continuous data, this must be "pearson" (default value), or "cat" if you have categorical data. With categorical data one has to make sure that the minimum value of each variable is zero. The function \textit{g2test} from the package \textit{Rfast} and the relevant functions work that way. \item alpha: The significance level for assessing the p-values. The default value is 0.05. \item robust: If outliers are to be removed prior to applying the FEDHC algorithm, this must be set to TRUE. \item ini.stat: If the initial test statistics (univariate associations) are available, they can be passed to this argument.
\item R: If the correlation matrix is available, pass it here. \item restart: An integer, the number of random restarts. The default value is 10. \item score: A character string, the label of the network score to be used in the algorithm. If none is specified, the default score is the Bayesian Information Criterion for continuous data sets. The available scores for continuous variables are: "bic-g" (default value), "loglik-g", "aic-g" or "bge". The available scores for categorical variables are: "bde", "loglik" or "bic". \item blacklist: A data frame with two columns (optionally labeled "from" and "to"), containing a set of forbidden directions. \item whitelist: A data frame with two columns (optionally labeled "from" and "to"), containing a set of must-add directions. \end{itemize} The output of the \textbf{fedhc} function is a list including: \begin{itemize} \item ini: A list including the output of the \textit{fedhc.skel} function. \item dag: A "bn" class output, a list including the outcome of the Hill-Climbing phase. See the package \textit{bnlearn} for more details. \item scoring: The highest score value observed during the scoring phase. \item runtime: The duration of the algorithm. \end{itemize} \clearpage
\section{Introduction} The formation of planar cellular structures has attracted considerable interest among scientists in general and physicists in particular, because cellular structures are ubiquitous in nature. Especially, the class of non-equilibrium systems with apparently disordered patterns has come under close scrutiny. Examples of cellular structures include acicular texture in martensite growth, tessellated pavement on ocean shores, agricultural land division according to ownership, grain texture in polycrystals, cell texture in biology, soap froths and so on \cite{ref.martensite, ref.polycrystal,ref.biocell,ref.soapfroths}. Most existing theoretical models which can mimic such systems are typically formed by tessellation, tiling, or subdivision of a plane into domains of contiguous and non-overlapping cells. For instance, the Voronoi lattice is formed by partitioning a plane into mutually exclusive convex polygons, while Apollonian packing is formed by tiling a plane into contiguous and non-overlapping disks, etc. These models have found widespread applications in physics and biology \cite{ref.model,ref.apollonian}. Significant progress has already been made in understanding various properties of both the Voronoi lattice and Apollonian packing. A fact worth mentioning is that most of the existing models for generating cellular structures have one thing in common: the number of neighbours of a cell never exceeds the number of sides it has. Another important property of the existing cellular structures is that they have a small mean coordination number, where it is almost impossible to find cells that have significantly more or fewer neighbours than the average number of neighbours. However, geologists and soil scientists observe an abundance of cellular patterns, ranging from topographical ground formations to underwater stone arrangements, where the number of neighbours may exceed the number of sides of the cell.
Typically, these structures emerge through evolution, and in the process each cell may acquire more neighbours than the number of sides the corresponding cell has. Such disordered structures can be of great interest in physics provided they have some global topological and geometrical properties, since they can mimic a kinetic disordered medium on which one can study percolation and random walk problems. \begin{figure}[ht] \begin{center} \includegraphics[width=10.5cm,height=9.75cm]{area_image.eps} \end{center} \caption{A snapshot of the weighted planar stochastic lattice containing $30001$ blocks. } \label{fig:fg1} \end{figure} Perhaps the square lattice is the simplest example of a cellular structure where every cell has the same size and the same coordination number. Its construction starts with an initiator, say a square of unit area, and a generator that divides it into four equal parts. In the next step and the steps thereafter, the generator is applied to all the available blocks, which eventually generates a square lattice. In this talk, we intend to address the following questions. Firstly, what if the generator is applied to only one of the available blocks at each step, picked preferentially with respect to the areas? Secondly, what if we use a modified generator that divides the initiator randomly into four blocks instead of four equal parts and apply it to only one of the available blocks at each step, again picked preferentially with respect to their respective areas \cite{ref.hassan_njp}? Our primary focus, however, will be on the latter case, which results in the tiling of the initiator into increasingly smaller mutually exclusive rectangular blocks. The process is so simple that even a few steps by hand on a piece of paper can lead to the conclusion that the blocks in the resulting lattice may have more neighbours than the number of sides of a cell.
We term the resulting structure a {\it weighted planar stochastic lattice} (WPSL), since spatial randomness is incorporated by the modified generator and time is incorporated by its sequential application. The definition of the model may appear too simple, but the results it offers are found to be far from simple. To illustrate the type of systems expected, we show a snapshot of the resulting weighted planar stochastic lattice taken during the evolution (see figure 1). We intend to investigate its topological and geometrical properties in an attempt to find some order in this seemingly disordered lattice. The model in question can also describe the kinetics of fragmentation in two dimensions, as it can be defined as follows \cite{ref.hassan,ref.krapivsky}. At each time step one may assume that a seed is nucleated randomly on the initiator, and hence the greater the area of a block, the higher the probability that it contains the seed. Upon nucleation, two orthogonal cracks parallel to the sides of the block are grown until intercepted by existing cracks, which divides the block randomly into four rectangular fragments. In reality, though, fragments are produced in the fracture of solid objects by the propagation of interacting cracks, resulting in fragments of arbitrary size and shape, which makes it a formidable mathematical problem. However, the present model can be considered as the minimal model which should be capable of capturing the essential generic aspects of the underlying mechanism. In addition, Ben-Naim and Krapivsky have noted that the WPSL can also describe the kinetics of martensite formation. A clear testament to this can be found if one compares figure 1 of our work with figure 2 of the work of Rao {\it et al.} in the context of martensite formation \cite{ref.martensite,ref.martensite_1}. Yet another application, albeit a little exotic, is the random search tree problem in computer science \cite{ref.majumdar}.
A truly disordered lattice, where the coordination number disorder and the block size disorder are introduced naturally, is perhaps long overdue. On the other hand, where there is disorder, physicists have a natural tendency to look for an order in it. To this end, we invoke the idea of multifractality to quantify the size disorder and the idea of scale-invariance to quantify the coordination number disorder. Firstly, we characterize the blocks of the lattice by their respective length $x$ and width $y$. Furthermore, we identify each block by labelling them as $i=1,2,...,N$ and the corresponding lengths and widths by $(x_1,y_1)$, $(x_2,y_2)$, $...$, $(x_N,y_N)$ respectively. We then show that the dynamics of the WPSL is governed by infinitely many conservation laws, namely the numerical value of $\sum_i^N x_i^{n-1}y_i^{4/n-1}$ remains the same regardless of the lattice size $N$ and the value of $n$, where $n=2$ corresponds to the conservation of the total area. Of the non-trivial conservation laws, we single out either $\sum_i^N y_i^3$ by setting $n=1$ or $\sum_i^N x_i^3$ by setting $n=4$, as we can use it to perform a multifractal analysis. For instance, we show that if the blocks are populated according to the cubic power of their own length (or width), then the distribution of the population in the WPSL emerges as a multifractal. On the other hand, if the blocks are characterized by the coordination number, or the number of neighbours $k$ with which a block has common sides, then we show that the coordination number distribution function $\rho(k)$, the probability that a block picked at random has coordination number $k$, has a power-law tail. A power-law distribution function is regarded as scale-free since it looks the same regardless of the scale we use to look at it. The organization of this talk is as follows. In section 2, we give the exact algorithm of the model.
In section 3, we give the geometric properties of the WPSL in an attempt to quantify the annealed size disorder, and at the same time we propose the kinetic square lattice in an attempt to look for a possible origin of multifractality. In section 4, the various topological properties of the WPSL and its dual are discussed in order to quantify the annealed coordination number disorder. Finally, section 5 gives a short summary of our results. \begin{figure}[ht] \begin{center} \includegraphics[width=13.5cm,height=6.5cm]{nwk_steps_1.eps} \end{center} \caption{Schematic illustration of the first few steps of the algorithm. } \label{fig:fg2} \end{figure} \section{Algorithm of the weighted planar stochastic lattice (WPSL)} Perhaps an exact algorithm can provide a better description of the model than the mere definition. In step one, the generator divides the initiator, say a square of unit area, randomly into four smaller blocks. The four newly created blocks are then labelled by their respective areas $a_1, a_2, a_3$ and $a_4$ in a clockwise fashion starting from the upper left block (see Fig. 2). In step two and thereafter, only one block is picked at each step with probability equal to its area, and it is then divided randomly into four blocks. In general, the $j$th step of the algorithm can be described as follows. \begin{itemize} \item[{\bf(i)}] Subdivide the interval $[0,1]$ into $(3j-2)$ subintervals of size $[0,a_1]$, $[a_1, a_1+a_2]$,$\ ...$, $[\sum_{i=1}^{3j-3} a_i,1]$, each of which represents the blocks labelled by their areas $a_1,a_2,...,a_{(3j-2)}$ respectively. \item[{\bf(ii)}] Generate a random number $R$ from the interval $[0,1]$ and find which of the $(3j-2)$ subintervals contains this $R$. The corresponding block it represents, say the $p$th block of area $a_p$, is picked.
\item[{\bf(iii)}] Calculate the length $x_p$ and the width $y_p$ of this block and keep note of the coordinate of the lower-left corner of the $p$th block, say $(x_{low}, y_{low})$. \item[{\bf(iv)}] Generate two random numbers $x_R$ and $y_R$ from $[0,x_p]$ and $[0,y_p]$ respectively, and hence the point $(x_{R}+x_{low},y_{R}+y_{low})$, mimicking the random nucleation of a seed in the block $p$. \item[{\bf(v)}] Draw two perpendicular lines through the point $(x_{R}+x_{low},y_{R}+y_{low})$ parallel to the sides of the $p$th block, mimicking orthogonal cracks parallel to the sides of the blocks which stop growing upon touching existing cracks, and divide it into four smaller blocks. The label $a_p$ is now redundant and hence can be reused. \item[{\bf(vi)}] Label the four newly created blocks by their areas $a_p$, $a_{(3j-1)}$, $a_{3j}$ and $a_{(3j+1)}$ respectively, in a clockwise fashion starting from the upper left corner. \item[{\bf(vii)}] Increase time by one unit and repeat the steps (i) - (vii) {\it ad infinitum}. \end{itemize} \section{Geometric properties of WPSL} In general, the distribution function $C(x,y;t)$, describing the blocks of the lattice by their length $x$ and width $y$, evolves according to the following kinetic equation \cite{ref.hassan,ref.krapivsky} \begin{eqnarray} \label{eq:1} {{\partial C(x,y;t)}\over{\partial t}}& =& -C(x,y;t)\int_0^x\int_0^y dx_1dy_1F(x_1,x-x_1,y_1,y-y_1) + \\ \nonumber & & 4\int_x^\infty\int_y^\infty C(x_1,y_1;t)F(x,x_1-x,y,y_1-y)dx_1dy_1, \end{eqnarray} where the kernel $F(x_1,x_2,y_1,y_2)$ determines the rules and the rate at which a block of sides $(x_1+x_2)$ and $(y_1+y_2)$ is divided into four smaller blocks whose sides are the arguments of the kernel. The first term on the right hand side of equation (\ref{eq:1}) represents the loss of blocks of sides $x$ and $y$ due to the nucleation of a crack seed on one such block, from which mutually perpendicular cracks are grown to divide it into four smaller blocks.
Similarly, the second term on the right hand side represents the gain of blocks of sides $x$ and $y$ due to the nucleation of a crack seed on a block of sides $x_1$ and $y_1$, ensuring that one of the four new blocks has sides $x$ and $y$. Let us now consider the case where the generator divides the initiator randomly into four smaller rectangles and applies it to only one of the available blocks thereafter, picked preferentially with respect to their areas. It effectively describes the random sequential nucleation of seeds with uniform probability on the initiator. Within the rate equation approach this can be ensured if one chooses the following kernel \begin{equation} \label{eq:2} F(x_1,x_2,y_1,y_2)=1. \end{equation} Substituting it into equation (\ref{eq:1}) we obtain \begin{equation} \label{eq:3} {{\partial C(x,y;t)}\over{\partial t}} = -xyC(x,y;t)+ 4\int_x^\infty\int_y^\infty C(x_1,y_1;t)dx_1dy_1. \end{equation} The coefficient $xy$ of $C(x,y;t)$ in the loss term implies that crack seeds are nucleated on the blocks preferentially with respect to their areas, which is consistent with the definition of our model. Incorporating the $2$-tuple Mellin transform given by \begin{equation} \label{eq4} M(m,n;t)=\int_0^\infty\int_0^\infty x^{m-1}y^{n-1}C(x,y;t)dxdy, \end{equation} in equation (\ref{eq:3}) we get \begin{equation} \label{eq:momenteq} {{dM(m,n;t)}\over{dt}}=\Big ( {{4}\over{mn}}-1\Big )M(m+1,n+1;t). \end{equation} Iterating equation (\ref{eq:momenteq}) to get all the derivatives of $M(m,n;t)$ and then substituting them into the Taylor series expansion of $M(m,n;t)$ about $t=0$, one can immediately write its solution in terms of the generalized hypergeometric function \cite{ref.hypergeometric} \begin{equation} \label{eq:5} M(m,n;t)= ~_2F_2\Big (a_+,a_-;m,n;-t\Big ), \end{equation} where $M(m,n;t)=M(n,m;t)$ for symmetry reasons and \begin{equation} \label{eq:6} a_{\pm} = {{m+n}\over{2}} \pm \Big [ \Big ({{m-n}\over{2}}\Big )^2+4 \Big ]^{{{1}\over{2}}}.
\end{equation} One can see that (i) $M(1,1;t)=1+3t$ is the total number of blocks $N(t)$ and (ii) $M(2,2;t)=1$ is the sum of the areas of all the blocks, which is obviously a conserved quantity. Both properties are again consistent with the definition of the WPSL depicted in the algorithm. The behaviour of $M(m,n;t)$ in the long-time limit is \begin{equation} \label{eq:aminus} M(m,n;t)\sim t^{-a_-}. \end{equation} Thus, in addition to the conservation of the total area, the system is also governed by infinitely many non-trivial conservation laws, since it implies \begin{equation} \label{eq:7} M(n,4/n;t)\sim {\rm constant} \hspace{0.25cm} \ \forall n. \end{equation} We used numerical simulation to verify equation (\ref{eq:7}), or its discrete counterpart $\sum_i^N x_i^{n-1}y_i^{4/n-1}$ if we label all the available blocks as $i=1,2,...,N$. We found that the analytical solution is in perfect agreement with the numerical simulation, which we performed based on the algorithm for the WPSL model (see figure 3). \begin{figure}[ht] \begin{center} \includegraphics[width=11.5cm,height=8.5cm]{moment.ps} \end{center} \caption{The plots of $\sum_i^N x_i^{n-1}y_i^{4/n-1}$ vs $N$ for $n=3,4,5$ are drawn using data collected from one realization. } \label{fig:fg3} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{x3.ps} \end{center} \caption{The plots of $\sum_i^N x_i^3$ vs $N$ for four different realizations show that the numerical value is different in every independent realization. } \label{fig:fg4} \end{figure} We now find it interesting to focus on the distribution function $n(x,t)=\int_0^\infty C(x,y,t)dy$ that describes the concentration of blocks which have length $x$ at time $t$, regardless of the size of their widths $y$. The $q$th moment of $n(x,t)$ is then defined as \begin{equation} \label{eq:8} M_q(t)=\int_0^\infty x^q n(x,t)dx.
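The numerical verification described above can be sketched in a few lines. The following Python fragment (a simplified stand-in for our simulation code, with names of our own choosing) implements the algorithm of section 2 and checks the two exact properties $N(t)=1+3t$ and $M(2,2;t)=\sum_i x_i y_i=1$:

```python
import random

def wpsl(steps, seed=1):
    # Minimal sketch of the WPSL algorithm: start from the unit square and,
    # at each time step, pick a block with probability proportional to its
    # area and divide it at a uniformly random point into four smaller blocks.
    random.seed(seed)
    blocks = [(1.0, 1.0)]          # (length x_i, width y_i) of each block
    for _ in range(steps):
        areas = [x * y for x, y in blocks]
        p = random.choices(range(len(blocks)), weights=areas)[0]
        x, y = blocks.pop(p)
        u, v = random.random(), random.random()   # random nucleation point
        blocks += [(u * x, v * y), (u * x, (1 - v) * y),
                   ((1 - u) * x, v * y), ((1 - u) * x, (1 - v) * y)]
    return blocks

blocks = wpsl(1000)   # after t steps: N = 1 + 3t blocks, total area 1
```

The sums $\sum_i^N x_i^{n-1}y_i^{4/n-1}$ for other values of $n$ can then be accumulated from the returned list, as in figures 3 and 4.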
\end{equation} Appreciating the fact that $M_q(t)=M(q+1,1;t)$ and using equation (\ref{eq:aminus}), one can immediately write its solution \begin{equation} \label{eq:qmoment} M_q(t)\sim t^{\{\sqrt{q^2+16}-(q+2)\}/2}. \end{equation} Note that for symmetry reasons it does not matter whether we consider the $q$th moments of $n(x,t)$ or those of $n(y,t)$, since we have $M(q+1,1;t)=M(1,q+1;t)$. According to equation (\ref{eq:qmoment}), the quantity $M_3(t)$, and hence $\sum_i^N x_i^3$ or $\sum_i^N y_i^3$, is a conserved quantity. However, although $M_3(t)$ remains constant against time in every independent realization, the exact numerical value is found to be different in every different realization (see figure 4). It clearly indicates a lack of self-averaging, or wild fluctuations. This is supported by our analytical solution, as we find that \begin{equation} \label{eq:9} <x^q>={{\int_0^\infty x^qn(x,t)dx}\over{\int_0^\infty n(x,t)dx}}\neq <x>^q= \Big ({{\int_0^\infty xn(x,t)dx}\over{\int_0^\infty n(x,t)dx}}\Big )^q, \end{equation} which suggests that a single length scale cannot characterize all the moments of the distribution function $n(x,t)$. \begin{figure}[ht] \begin{center} \includegraphics[width=8.5cm,height=10.5cm,angle=-90]{T-q.ps} \end{center} \caption{The plots of $\tau(q)$ vs $q$ show how its slope varies as a function of $q$. } \label{fig:fg5} \end{figure} Of all the conservation laws we find that $M_3(t)=\sum_ix_i^3$ is a special one, since we can use it as a multifractal measure consisting of members $p_i$, the fractions of the total measure $p_i=x_i^3/\sum_ix_i^3$, distributed on the geometric support WPSL. That is, we assume that the $i$th block is occupied with the cubic power of its own length $x_i$. The corresponding ``partition function'' of the multifractal formalism then is \cite{ref.stanley} \begin{equation} \label{eq:10} Z_q(t)=\sum_ip_i^q\sim M_{3q}(t).
\end{equation} Its solution can immediately be obtained from equation (\ref{eq:qmoment}) to give \begin{equation} \label{eq:11} Z_q(t)\sim t^{\{\sqrt{9q^2+16}-(3q+2)\}/2}. \end{equation} Using the square root of the mean area $\delta(t)=\sqrt{M(2,2;t)/M(1,1;t)}\sim t^{-1/2}$ as the yard-stick to express the partition function $Z_q$ gives the weighted number of squares $N(q,\delta)$ needed to cover the measure, which we find decays following a power law \begin{equation} \label{weightednumber} N(q,\delta)\sim \delta^{-\tau(q)}, \end{equation} where the mass exponent is \begin{equation} \label{massexponent} \tau(q)=\sqrt{9q^2+16}-(3q+2). \end{equation} The non-linear nature of $\tau(q)$, see figure 5 for instance, suggests that the gap exponent \begin{equation} \label{eq:12} \Delta=\tau(q)-\tau(q-1) \end{equation} is different for every $q$ value. It implies that we require an infinite hierarchy of exponents to specify how the moments of the probabilities $\{p\}$ scale with $\delta$. Now if we choose $q=0$ then it gives an estimate of the number of squares $N(0,\delta)=N(\delta)$ of side $\delta$ we need to cover the support on which the members of the population are distributed. We find that $N(\delta)$ scales as \begin{equation} \label{eq:13} N(q=0,\delta)\sim \delta^{-\tau(0)}, \end{equation} where $\tau(0)=2$ is the Hausdorff-Besicovitch dimension of the support \cite{ref.multifractal_1}. On the other hand, if we choose $q=1$ we have $Z_1=\sum_i p_i={\rm const.}$ and hence we must have $\tau(1)=0$. This is indeed the case according to equation (\ref{massexponent}). Therefore, $\tau(0)=2$ (the dimension of the support) and $\tau(1)=0$ (required by the normalization condition) are often considered as the first self-consistency check of the multifractal analysis.
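The two self-consistency checks are immediate from equation (\ref{massexponent}): $\tau(0)=\sqrt{16}-2=2$ and $\tau(1)=\sqrt{25}-5=0$, while the gap exponent $\Delta$ varies with $q$. A short numerical check (in Python, for illustration only):

```python
import math

def tau(q):
    # Mass exponent of the WPSL measure: tau(q) = sqrt(9 q^2 + 16) - (3 q + 2)
    return math.sqrt(9 * q**2 + 16) - (3 * q + 2)

def gap(q):
    # Gap exponent: Delta = tau(q) - tau(q - 1); non-constant for a multifractal
    return tau(q) - tau(q - 1)
```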
The role of $q$ in the partition function $Z_q$ can be best understood by appreciating the fact that large values of $q$ favour contributions to $Z_q$ from blocks with relatively high values of $p_i$, since $p_i^q\gg p_j^q$ for $p_i>p_j$ provided $q\gg 1$. On the other hand, $q\ll -1$ favours contributions to $Z_q$ from those blocks which are occupied with relatively low values of the measure $p_i$. For further elucidation we find it worthwhile to consider the slope of the $\tau(q)$ vs $q$ curve, which is given by \begin{equation} \label{eq:14} {{d\tau(q)}\over{dq}}=-\lim_{\delta \rightarrow 0}{{\sum_ip_i^q\ln p_i}\over{(\sum_ip_i^q)\ln \delta}}. \end{equation} Now the $\tau(q)$ vs $q$ curve has its maximum slope in the limit $q\rightarrow -\infty$ and hence we can write \begin{equation} \label{eq:15} {{d\tau(q)}\over{dq}}\Big |_{q\rightarrow -\infty}=-\alpha_{{\rm max}}. \end{equation} In this limit the right hand side of equation (\ref{eq:14}) is dominated by $p_{{\rm min}}$, the minimum of the $p_i$, in the sum, and hence we have \begin{equation} \label{eq:16} {{d\tau(q)}\over{dq}}\Big |_{q\rightarrow -\infty}=- \lim_{\delta \rightarrow 0}{{\ln p_{{\rm min}}}\over{\ln \delta}} =-\alpha_{{\rm max}}, \end{equation} and hence we get \begin{equation} p_{{\rm min}}\sim \delta^{\alpha_{{\rm max}}}. \end{equation} A similar argument in the limit $q \rightarrow \infty$ leads to the conclusion that the minimum slope of the $\tau(q)$ vs $q$ curve is given by \begin{equation} \label{eq:17} {{d\tau(q)}\over{dq}}\Big |_{q\rightarrow +\infty}=-\lim_{\delta \rightarrow 0}{{\ln p_{{\rm max}}}\over{\ln \delta}} =-\alpha_{{\rm min}}, \end{equation} where $p_{{\rm max}}$ is the largest value of $p$, which gives the minimum value of $\alpha$, and hence \begin{equation} p_{{\rm max}}\sim \delta^{\alpha_{{\rm min}}}.
\end{equation} So, in general we can write \begin{equation} \label{eq:18} {{d\tau(q)}\over{dq}}=-\alpha(q), \end{equation} where the exponent $\alpha(q)$ is known as the Lipschitz-H\"{o}lder exponent and \begin{equation} \label{eq:19} p\sim \delta^\alpha. \end{equation} \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{f_alpha.ps} \end{center} \caption{The $f(\alpha)$ spectrum. } \label{fig:fg6} \end{figure} We now perform the Legendre transformation of the mass exponent $\tau(q)$ by using the Lipschitz-H\"{o}lder exponent given in equation (\ref{eq:18}) as an independent variable to obtain the new function \begin{equation} \label{eq:21} f(\alpha)=q\alpha+\tau(q). \end{equation} Replacing $\tau(q)$ in equation (\ref{weightednumber}) in favour of $f(\alpha)$ we find that \begin{equation} \label{eq:22} N(q(\alpha),\delta) \sim \lim_{\delta\rightarrow 0}\delta^{q\alpha-f(\alpha)}. \end{equation} On the other hand, using $p\sim \delta^\alpha$ in the expression for the partition function $Z_q=\sum_ip^q$ and replacing the sum by an integral, while indexing the blocks by the continuous Lipschitz-H\"{o}lder exponent $\alpha$ with a weight $\rho(\alpha)$, we obtain \begin{equation} \label{eq:23} N(q(\alpha),\delta) \sim \int \rho(\alpha)d\alpha N(\alpha,\delta)\delta^{q\alpha}, \end{equation} where $N(\alpha,\delta)$ is the number of squares of side $\delta$ needed to cover the measure indexed by $\alpha$. Comparing Eqs. (\ref{eq:22}) and (\ref{eq:23}) we find \begin{equation} \label{eq:24} N(\alpha,\delta)\sim \delta^{-f(\alpha)}. \end{equation} It implies that a spectrum of spatially intertwined fractal dimensions \begin{equation} f(\alpha(q))={{16}\over{\sqrt{9q^2+16}}}-2, \end{equation} is needed to characterize the measure. That is, the size disorder of the blocks is multifractal in character, since the measure $\{p_\alpha\}$ is related to the size of the blocks.
That is, the distribution of $\{p_\alpha\}$ in the WPSL can be subdivided into a union of fractal subsets, each with fractal dimension $f(\alpha)\leq 2$, in which the measure $p_\alpha$ scales as $\delta^\alpha$. Note that $f(\alpha)$ is always concave in character (see figure 6), with a single maximum at $q=0$ which corresponds to the dimension of the WPSL with empty blocks. On the other hand, we can obtain the entropy $S(\delta)=-\sum_i p_i\ln p_i$ associated with the partition of the measure on the support (WPSL) by using the relation $\sum_i p_i^q\sim \delta^{-\tau(q)}$ in the definition of $S(\delta)$. A few steps of algebraic manipulation then reveal that $S(\delta)$ exhibits the scaling \begin{equation} S(\delta)=\ln\delta^{-\alpha(1)}, \end{equation} where the exponent $\alpha(1)={{6}\over{5}}$ is obtained from \begin{equation} \alpha(q)= -{{d\tau(q)}\over{dq}}, \end{equation} evaluated at $q=1$. It is interesting to note that $\alpha(1)$ is related to the generalized dimension $D_q$, also related to the R\'{e}nyi entropy $H_q(p)={{1}\over{q-1}}\ln \sum_i p_i^q$ in information theory, given by \begin{equation} \label{dq} D_q=\lim_{\delta\rightarrow 0} \Big [{{1}\over{q-1}}{{\ln \sum_i p_i^q}\over{\ln\delta}}\Big ]={{\tau(q)}\over{1-q}}, \end{equation} which is often used in the multifractal formalism as it too can provide an insightful interpretation. For instance, $D_0=\tau(0)$ is the dimension of the support, $D_1=\alpha(1)$ is the R\'{e}nyi information dimension and $D_2$ is known as the correlation dimension \cite{ref.procaccia,ref.renyi_entropy}. Multifractal analysis was initially proposed to treat turbulence but has since been successfully applied in a wide range of exciting fields of research. For instance, it has been found recently that the wild fluctuations of the wave functions at the Anderson and the quantum Hall transition can be best described by multifractality \cite{ref.mandelbrot,ref.anderson}.
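The chain $\tau(q)\rightarrow\alpha(q)\rightarrow f(\alpha)\rightarrow D_q$ can be verified numerically; in the sketch below the helper names are ours and the derivative is taken by a central finite difference rather than analytically:

```python
import math

def tau(q):
    # mass exponent tau(q) = sqrt(9q^2 + 16) - (3q + 2) from the text
    return math.sqrt(9 * q * q + 16) - (3 * q + 2)

def alpha(q, h=1e-6):
    # Lipschitz-Holder exponent alpha(q) = -d tau/dq (central difference)
    return -(tau(q + h) - tau(q - h)) / (2 * h)

def f(q):
    # Legendre transform f(alpha) = q*alpha + tau(q)
    return q * alpha(q) + tau(q)

def D(q):
    # generalized dimension D_q = tau(q)/(1-q); the q -> 1 limit is -tau'(1)
    return alpha(q) if abs(q - 1) < 1e-9 else tau(q) / (1 - q)

# closed form quoted in the text: f(alpha(q)) = 16/sqrt(9q^2 + 16) - 2
for q in (-2.0, 0.0, 1.0, 3.0):
    assert abs(f(q) - (16 / math.sqrt(9 * q * q + 16) - 2)) < 1e-5

print(round(D(0), 6))  # 2.0 -> dimension of the support
print(round(D(1), 6))  # 1.2 -> information dimension alpha(1) = 6/5
```

The Legendre transform reproduces the closed form of $f(\alpha(q))$, and the $q\rightarrow 1$ limit of $D_q$ returns $\alpha(1)=6/5$.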
In an attempt to understand the origin of multifractality we now consider the case where the generator divides the initiator into four equal blocks instead of randomly into four blocks. If the generator is thereafter applied over and over again to one of the available squares, picked preferentially with respect to area, the result is the kinetic square lattice (KSL). Within the rate equation approach it can be described by the kernel \begin{equation} F(x_1,x_2,y_1,y_2)=(x_1+x_2)(y_1+y_2)\delta(x_1-x_2)\delta(y_1-y_2), \end{equation} and the resulting rate equation can be obtained after substituting it in equation (\ref{eq:1}) to give \begin{equation} \label{eq1} {{\partial C(x,y;t)}\over{\partial t}} = -{{1}\over{4}}xyC(x,y;t)+ 4^2xyC(2x,2y;t). \end{equation} Incorporating equation (\ref{eq4}) in equation (\ref{eq1}) yields \begin{equation} \label{momenteq_2} {{dM(m,n;t)}\over{dt}}=-\Big ( {{1}\over{4}}-{{4}\over{2^{m+n}}}\Big )M(m+1,n+1;t). \end{equation} To obtain the solution of this equation in the long-time limit, we assume the following power-law asymptotic behaviour of $M(m,n;t)$ and write \begin{equation} M(m,n;t)\sim A(m,n)t^{\theta(m+n)}, \end{equation} with $\theta(4)=0$ since the total area, obtained by setting $m=n=2$, is an obvious conserved quantity. Using it in equation (\ref{momenteq_2}) yields the following difference equation \begin{equation} \theta(m+n+2)=\theta(m+n)-1. \end{equation} Iterating it subject to the condition $\theta(4)=0$ gives \begin{equation} M(m,n;t)\sim t^{-{{m+n-4}\over{2}}}.
\end{equation} It thus appears that, in addition to the conservation of the total area $M(2,2;t)$, the integrals $M(3,1;t)$ and $M(1,3;t)$ are also conserved. Interestingly, all three integrals $M(2,2;t)$, $M(3,1;t)$ and $M(1,3;t)$ effectively describe the same physical quantity, since all the blocks are square in shape and hence \begin{equation} \sum_{i=1}^N x_i^2=\sum_{i=1}^N y_i^2=\sum_{i=1}^N x_iy_i. \end{equation} Therefore, in reality the system obeys only one conservation law: conservation of the total area. We again look into the $q$th moment of $n(x,t)$ using equation (\ref{eq:8}) and, appreciating the fact that $M_q(t)$ is equal to $M(q+1,1;t)$ or $M(1,q+1;t)$, we immediately find that \begin{equation} \label{eq:ksl_moment_2} M_q(t)\sim t^{-{{q-2}\over{2}}}. \end{equation} Unlike in the previous case, where the exponent of the power-law solution of $M_q(t)$ is non-linear in $q$, here we have an exponent which is linear in $q$. It immediately implies that in the case of the kinetic square lattice \begin{equation} <x^q>=<x>^q, \end{equation} and hence a single length-scale is enough to characterize all the moments of $n(x,t)$. That is, the system now exhibits simple scaling instead of multiscaling. Like before, let us consider that each block is occupied with a fraction of the measure equal to the square of its own length, $p_i=x_i^2/\sum_i^Nx_i^2$, and hence the corresponding partition function is \begin{equation} Z_q=\sum_i^N p_i^q\sim M_{2q}(t). \end{equation} Using equation (\ref{eq:ksl_moment_2}) we can immediately write its solution \begin{equation} Z_q(t)\sim t^{-{{2q-2}\over{2}}}. \end{equation} Expressing it in terms of the square root of the average area $\delta\sim t^{-1/2}$ gives the weighted number of squares $N(q,\delta)$ of side $\delta$ needed to cover the measure, which has the following power-law solution \begin{equation} N(q,\delta)\sim \delta^{-(2-2q)}, \end{equation} where the mass exponent $\tau(q)$ is \begin{equation} \tau(q)=2-2q.
\end{equation} The Legendre transform of the mass exponent is a constant \begin{equation} f(\alpha)=2, \end{equation} and so is the generalized dimension \begin{equation} D_q=2. \end{equation} We thus find that if the generator divides the initiator into four equal squares, and we apply it thereafter sequentially, then the resulting lattice is no longer a multifractal. The reason is that the distribution of the population on the resulting support is in this case uniform. The two models therefore provide a unique opportunity to look for a possible origin of multifractality. The two models discussed in this talk differ only in the definition of the generator. When the generator divides the initiator randomly into four blocks and we apply it over and over again sequentially, we have multifractality, since the underlying mechanism is governed by a random multiplicative process. This is not, however, the case if the generator divides the initiator into four equal blocks and we apply it over and over again sequentially, since the resulting dynamics is governed by a deterministic multiplicative process instead. \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{area_N_vs_t.ps} \end{center} \caption{Shown is the growth of the number of blocks $N_k(t)$ with exactly $k=4,5,6,7$ neighbours as a function of time. } \label{fig:fg7} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{number_density_area.ps} \end{center} \caption{Fractions of the blocks $\rho_k(t)$ with $k=4,5,6,7$ neighbours are drawn as a function of time. } \label{fig:fg8} \end{figure} \section{Scale-free properties of WPSL} Defining each step of the algorithm as one time unit and imposing periodic boundary conditions in the simulation, we can immediately write an exact expression for the total number of blocks as a function of time: $N(t)=1+3t$.
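The exact count $N(t)=1+3t$ (each step removes one block and adds four) is easy to verify with a minimal Monte Carlo sketch of the area-preferential fragmentation; the splitting rule below (a uniformly random interior point) and all names are our assumptions for illustration:

```python
import random

def wpsl_step(blocks):
    """Pick a block with probability proportional to its area and split it
    into four rectangles at a uniformly random interior point."""
    idx = random.choices(range(len(blocks)),
                         weights=[x * y for x, y in blocks])[0]
    x, y = blocks.pop(idx)
    u, v = random.random(), random.random()
    blocks += [(u * x, v * y), ((1 - u) * x, v * y),
               (u * x, (1 - v) * y), ((1 - u) * x, (1 - v) * y)]

random.seed(42)
blocks = [(1.0, 1.0)]                    # initiator: the unit square
for t in range(1, 201):
    wpsl_step(blocks)
    assert len(blocks) == 1 + 3 * t      # N(t) = 1 + 3t exactly

# total area M(2,2;t) is conserved exactly, step by step
assert abs(sum(x * y for x, y in blocks) - 1.0) < 1e-9
```

The block count and the total area come out exactly as stated, independent of the random seed.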
However, unlike the square lattice, where every block shares its borders with exactly four neighbours, the blocks in the WPSL can share their borders with four or more blocks, although all the blocks have exactly four sides. In fact, we can characterize individual blocks of the WPSL by their coordination number, defined as the number of neighbours with which a block shares a border. We find that it is neither a constant, as in the square lattice, nor does it have a typical mean value, as in the Voronoi lattice. Instead, the coordination number that each block assumes in the WPSL is random and evolves with time. The coordination number disorder in the WPSL can, therefore, be regarded as of annealed type. Our data extracted from numerical simulation suggest that the number of blocks $N_k(t)$ which have coordination number $k$ (or $k$ neighbours) continues to grow linearly with time, $N_k(t)=m_k t$ (see figure 7). On the other hand, the total number of blocks $N(t)=\sum_k N_k(t)$ in the lattice also grows linearly with time, $N(t)=1+3t$, and hence $N(t)\sim 3t$ in the long-time limit. The ratio of the two quantities, $\rho_k(t)=N_k(t)/N(t)$, which describes the fraction of the total blocks having coordination number $k$, is $\rho_k(t)=m_k/3$ and is clearly independent of time {\it vis-a-vis} of the size of the lattice (see figure 8). It implies that we can take a lattice of sufficiently large size and study its properties without worrying about its exact size. \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{P_vs_K.ps} \end{center} \caption{Shown is the coordination number distribution function $P(k)$ vs $k$ which is also equivalent to the degree distribution of the DWPSL.
} \label{fig:fg9} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{ens2.ps} \end{center} \caption{Drawn is the plot of $\ln (P(k))$ against $\ln (k)$ for the DWPSL network, where the data points represent an average over $50$ independent realizations. The line has slope exactly equal to $5.66$, revealing a power-law degree distribution with exponent $\gamma=5.66$. } \label{fig:fg10} \end{figure} We now assume that the blocks in the WPSL are equivalent regardless of their sizes and then ask: What is the probability that a block, picked at random with equal {\it a priori} probability from all the available blocks of the lattice, has coordination number $k$? The answer to this question lies in collecting data for $\rho_t$ as a function of $k$ and plotting a normalized histogram, which effectively gives the coordination number distribution function $\rho_t(k)\equiv P(k)$, where the subscript $t$ indicates fixed time. We first plot $P(k)$ in figure 9 to have a glimpse of its behaviour as a function of $k$. A closer look into the data reveals that there exist scarce data points near the tail. The long tail with scarce data points turns into a noisy or fat tail when we plot $\ln P(k)$ vs $\ln (k)$. This is drawn in figure 10, where the data points represent an ensemble average over $50$ independent realizations, and it clearly suggests that fitting a straight line near the tail is possible. It implies that the coordination number distribution decays obeying the power law \begin{equation} \label{degreedistribution} P(k)\sim k^{-\gamma}, \end{equation} with a heavy or fat tail reflecting the scarce data points near the tail-end of figure 10. However, the noise in the tail-end complicates the process of identifying the range over which the power law holds and hence of estimating the exponent $\gamma$.
One way of reducing the noise at the tail-end is to plot the cumulative distribution $P(k^\prime \geq k)$, which is related to the degree distribution $P(k)$ via \begin{equation} P(k)=-{{dP(k^\prime\geq k)}\over{dk}}. \end{equation} We therefore plot $\ln (P(k^\prime >k))$ vs $\ln (k)$ in figure 11, using the same data as in figure 10, and find that the heavy tail smooths out naturally with no data obscured. The straight-line fit of figure 11 has a slope $\gamma-1=4.66$, which indicates that the degree distribution (figure 10) decays following a power law with exponent $\gamma=5.66$. A power-law distribution function is often regarded as scale-free since it looks the same at whatever scale we look at it. \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{ens_cum.ps} \end{center} \caption{Cumulative degree distribution $P(k^\prime>k)$ is shown using the same data as in figure 10. The dotted line with slope equal to $\gamma-1=4.66$ is drawn to guide our eyes. } \label{fig:fg11} \end{figure} We thus find that in the large-size limit the WPSL develops some order, in the sense that its annealed coordination number disorder is scale-free in character. This is in sharp contrast to the quenched coordination number disorder found in the Voronoi lattice, where it is almost impossible to find cells which have significantly more or fewer neighbours than the mean value $k=6$ \cite{ref.vd}. In fact, it has been shown that the coordination number distribution of the Voronoi lattice is Gaussian in character instead. The power-law coordination number distribution in the WPSL is reminiscent of the scale-free degree distribution of complex networks. The past decade has witnessed a surge of interest in the theory of complex networks, resulting in dramatic advances in the field, thanks to the seminal work of A.-L. Barab\'{a}si and his co-workers.
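The tail-smoothing trick above rests on a simple fact: for $P(k)\propto k^{-\gamma}$ the cumulative distribution falls off with exponent $\gamma-1$, which is how the slope $4.66$ of figure 11 translates into $\gamma=5.66$. A small numerical sketch (the grid and fit window are our choices):

```python
import numpy as np

gamma = 5.66
k = np.logspace(0, 6, 2001)       # k from 1 to 1e6, log-spaced
pk = k ** (-gamma)                # P(k) ~ k^-gamma

# CCDF P(k' >= k) by trapezoidal integration from k up to the cutoff
seg = 0.5 * (pk[1:] + pk[:-1]) * np.diff(k)
ccdf = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

# log-log slope over a window far below the upper cutoff: ~ -(gamma - 1)
lo, hi = 200, 800                 # k roughly between 4 and 250
slope = np.polyfit(np.log(k[lo:hi]), np.log(ccdf[lo:hi]), 1)[0]
print(round(-slope, 2))           # close to gamma - 1 = 4.66
```

Fitting the cumulative distribution and adding one to the slope is therefore a noise-robust way to estimate $\gamma$.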
It is worth mentioning that the dual of the WPSL, obtained by replacing each block with a node at its centre and the common border between blocks with an edge joining the two nodes (see figure 12, where we have illustrated the network aspect of the WPSL), is a network whose degree distribution coincides with the coordination number distribution of the WPSL. \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm,height=8.5cm]{nwk_steps.eps} \end{center} \caption{Nodes in the circle and their links with other nodes illustrate the network topology at step $5$ of Fig. 2. New nodes ($17$, $18$ and $19$) and links to be added, and links to be removed (broken lines), in step $6$ again of Fig. 2 are also shown to illustrate the dynamics. } \label{fig:fg12} \end{figure} The square lattice, on the other hand, is self-dual and its degree distribution is $P(k)=\delta(k-4)$. Further, it is interesting to point out that the exponent $\gamma=5.66$ is significantly higher than usually found in most real-life networks, where typically $2<\gamma\leq 3$. This suggests that in addition to the PA rule the network in question has to obey some constraints. For instance, nodes in the WPSL are spatially embedded in Euclidean space, and the links gained by the incoming nodes are constrained by the spatial location and the fitness parameter of the nodes. Owing to its unique dynamics this was not unexpected. Perhaps it is noteworthy to mention that the degree distribution of the electric power grid, whose nodes, like those of the WPSL, are also embedded in space, has been shown to exhibit a power law, but with exponent $\gamma_{{\rm power}}=4$ \cite{ref.barabasi}. The power-law degree distribution $P(k)$ has been found in many seemingly unrelated real-life networks. It implies that there must exist some common underlying mechanisms for which disparate systems behave in such a remarkably similar fashion \cite{ref.review}.
Barab\'{a}si and Albert argued that growth and the PA rule are the main essence behind the emergence of such power laws. Indeed, the DWPSL network too grows with time, but in sharp contrast to the BA model, where the network grows by the addition of one single node with $m$ edges per unit time, the DWPSL network grows by the addition of a group of three nodes which are already linked by two edges. It also differs in the way incoming nodes establish links with the existing nodes. To understand the growth mechanism of the DWPSL network let us look into the $j$th step of the algorithm. First, a node, say labeled $a_p$, is picked from the $(3j-2)$ nodes preferentially with respect to the fitness parameter of a node (i.e., according to their respective areas). Second, the node $a_p$ is connected with two new nodes $(3j-1)$ and $(3j+1)$ in order to establish their links with the existing network. At the same time, at least two or more links of $a_p$ with other nodes are removed (though the exact number depends on the number of neighbours $a_p$ already has) in favour of linking them among the three incoming nodes in a self-organized fashion. In the process, the degree $k_p$ of the node $a_p$ will either decrease (it may turn into a node with marginally low degree in case it was a highly connected node) or at best remain the same, but will never increase. It may therefore appear that the PA rule is not followed here. A closer look into the dynamics, however, reveals otherwise. It is interesting to note that an existing node gains links during the process only if one of its neighbours is picked, not the node itself. It implies that the more links (or the higher the degree) a node has, the higher its chance of gaining more links, since it can be reached in a larger number of ways. This essentially embodies the intuitive idea of the PA rule. Therefore, the DWPSL network can be seen to follow the preferential attachment rule, but in disguise.
\section{Summary} To summarize, we proposed a model that generates the weighted planar stochastic lattice (WPSL), which emerges through evolution. Unlike the regular lattice, where every block or cell has exactly the same coordination number, the WPSL is seemingly disordered both in terms of the coordination number and in terms of its block size distribution. One of our primary goals was to find some order in such a seemingly disordered system, since finding order is always an attractive proposition for physicists. To this end, our first attempt was to quantify the size disorder of the lattice, and we found several interesting results. One of the most interesting results is that $\sum_i^N x_i^{n-1} y_i^{4/n-1}$ remains a constant $\forall \ n$ regardless of the size of the lattice, where the blocks are labeled $i=1,2,...,N$ and $x_i$ and $y_i$ are their respective length and width. Yet another interesting fact is that the numerical values of the conserved quantities $\sum_i^N x_i^{n-1} y_i^{4/n-1}$ for a given value of $n$, except $n=2$, are never the same in different realizations, which is clearly an indication of wild fluctuation. Of the conservation laws, we found a special one, obtained by choosing either $n=1$ or $n=4$ to give $\sum_{i=1}^N y_i^3$ or $\sum_{i=1}^N x_i^3$, which can be picked as a measure. It implies that if the blocks are populated with a fraction of the measure equal to the cubic power of their respective length or width, then its distribution on the WPSL is multifractal in nature, a property that quantifies the wild fluctuation of the block size distribution of the WPSL. That is, the probability distribution of $x_i^3$ is multifractal in the sense that a single exponent is not enough to characterize its distribution on the support, namely the WPSL; instead we need a spectrum of exponents $f(\alpha)$.
We derived an exact expression for the $f(\alpha)$ spectrum and obtained the information dimension $D_1=6/5$, with which the entropy of the measure scales, $S(\delta)\sim \ln \delta^{-D_1}$. To look for a possible origin of multifractality we also studied the kinetic square lattice, which is actually the deterministic counterpart of the WPSL. It led to the conclusion that as soon as randomness is removed from the definition of the generator, the resulting lattice can no longer be quantified as multifractal; rather, it possesses all the properties of the square lattice, but only in the long-time limit. Our second attempt was to quantify the coordination number disorder. We have shown numerically that the coordination number disorder is scale-free in character, since the degree distribution of its dual (DWPSL), which is topologically identical to the network obtained by considering blocks of the WPSL as nodes and the common borders between blocks as links, exhibits a power law. In other words, the coordination number distribution function of the WPSL decays following a power law with exponent $\gamma=5.66$. However, the novelty of this network is that it grows by the addition of a group of already linked nodes, which then establish links with the existing nodes following the PA rule, though in disguise, in the sense that existing nodes gain links only if one of their neighbours is picked, not the node itself. Finally, such a multifractal lattice with scale-free coordination disorder can be of great interest as it has the potential to mimic a disordered medium on which one can study various physical phenomena like percolation and random walk problems. One may also study phase transitions and critical behaviour in such a multifractal scale-free lattice. Indeed, phase transitions and critical behaviour in complex networks have attracted much of our recent attention. The advantage of the WPSL over many known complex networks is that its nodes or blocks are embedded in spatial positions.
That is, one may place interacting particles on sites with a great many different numbers of neighbours, and therefore the results must differ from those of a system where sites are located on a regular lattice. We intend to work in these directions in our future endeavours. NIP gratefully acknowledges support from the Bose Centre for Advanced Study and Research in Natural Sciences.
\section{INTRODUCTION} Measurement of the correlation in intensity fluctuations of a light source gives access to the squared modulus of the complex degree of coherence. The pioneering experiments of Hanbury-Brown and Twiss (HBT) demonstrated that the modulus of the degree of coherence can be exploited to retrieve information about the morphology of astronomical objects \cite{HBT1957a}. This led to the construction of the Narrabri Stellar Intensity Interferometer (NSII) which was successfully used to measure 32 stellar diameters \cite{HB1974}. The advent of large Imaging Air-Cherenkov Telescope (IACT) arrays sparked a renewed interest in the stellar intensity interferometry (SII) technique \cite{SIIwIACT} and much recent work has been performed examining the ability to retrieve valuable astronomical observations with high angular resolution using a modern SII observatory. In particular, these include stellar imaging capabilities of IACT observatories \cite{paulthesis}, and laboratory setups using pseudo-thermal sources \cite{Dainis1,dainis2}. Other work on SII includes resolving the temporal coherence of a thermal source \cite{Tan1}, demonstration of the relative insensitivity to atmospheric turbulence \cite{Tan2}, investigations into temporal intensity interferometry using narrow-band emission lines from astrophysical sources \cite{Tan3}, and improving the obtainable SNR via the use of multi-channel intensity interferometry \cite{sascha}. \\ Since the intensity interferometry (II) technique measures the squared modulus of the degree of coherence, the phase information is lost, which complicates accurate image reconstruction. However, phase recovery is possible, given densely spaced coverage of the imaging plane, through both three-point correlations \cite{tripleproduct1} and Cauchy-Riemann algorithms \cite{paul2}. 
Practical implementations of modern SII have employed new technologies such as single photon detectors (SPDs) and high-speed digital data acquisition systems \cite{ASU2}. Initial measurements using some of these advancements were carried out using the Aquaeye+ and Iquaeye instruments, which showed tentative measurements of coherence for a stellar source \cite{aquaeye}. \\ In this paper, we present new techniques for measuring the spatial coherence of a laboratory thermal source using high-speed photo-detectors and digital electronics. The modular nature of the detector and data acquisition system allows for straightforward integration with existing observatories. Parallel polarizations clearly demonstrate photon bunching in time and space, whereas orthogonal polarizations eliminate coherence but reveal any additional correlation due to noise contamination. We show that correlation measurements in the orthogonal configuration, or when the detectors are separated at distances greater than the spatial coherence length of the source, can be used to correct for systematic noise due to spurious electronic correlations. \section{EXPERIMENTAL SETUP} \begin{figure} \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{labschematic_SVG_v5} \caption{\label{fig:labsetup} Schematic of the laboratory setup} \end{figure} A diagram of our laboratory system is shown in Fig. \ref{fig:labsetup}. Light from a mercury arc-lamp is collimated, passed through a 10$\,$nm narrow-band filter centered on the 435.8$\,$nm G spectral line of the Hg arc-lamp, and then refocused onto a spatial mask. The mask is either a single or double pinhole of various size configurations (typ. 200 - 300 micron diameter) simulating single and binary star systems. The output light passes through a long box (3$\,$m) and is split into two secondary beams via a 50/50 non-polarizing beam splitter. The light from each beam is then detected by super bi-alkali ($>$ 35\% Q.E.)
high-speed photo-multiplier tubes (PMT). The PMTs used in the laboratory are the same as those currently employed on the cameras of an IACT observatory, VERITAS \cite{pmtupgrade}. The light collecting areas of the detectors are limited by a circular aperture of 5$\,$mm diameter. It is noted that the PMT aperture is of comparable size to the spatial coherence length of the source. This is done to increase the amount of light throughput into the detector such that the necessary integration time needed to reach a desired sensitivity level is reduced. The effects of large detector areas have already been described in other work \cite{janvida} and are taken into account here. The PMTs are also enclosed in a brass tube to shield them from unwanted electro-magnetic radiation. Linear optical polarizers may be placed in front of each PMT and can be individually rotated to select parallel or orthogonal polarization between the detectors. In order to sample different regions of the spatial coherence curve, one of the PMTs is mounted on a RoboCylinder linear actuator, whose position is controlled via LabVIEW software to high accuracy. The positioning is integrated into our data acquisition system allowing for automated measurements at varying positions. The output cables from the PMTs are fed into a low noise high-speed ($>$ 200$\,$MHz) FEMTO trans-impedance preamplifier. The resulting signal is sent through 10$\,$ft of double shielded cable (RG-223) and then continuously digitized by an analog-to-digital converter (ADC) at a rate of 250$\,$MS/s using an AC-coupled National Instruments (NI) FlexRIO adapter module (NI-5761). We have successfully employed two different types of digital correlators, off-line and real-time. In the off-line correlator, the digitized data from each channel is scaled, truncated to 8-bits, and merged into a single continuous data stream by a Virtex-5 FPGA (PXIe7965R). The data stream is then recorded to a high speed (700$\,$MB/s) 12TB RAID disk. 
A software routine using LabVIEW can later be used to retrieve intensity correlations between channels as a function of the digital time lag, typically up to $\pm$ 1$\,\mu$s in steps of 4$\,$ns. Due to the large number of samples, the data is read in blocks of 512 samples. The convolution theorem gives the correlation between two signals as the inverse Fourier transform of the product of the Fourier transforms of the signals. This is implemented by use of the NI Multi-core Analysis and Sparse Matrix toolkit (MASM) cross-correlation virtual instrument (VI), which optimizes the computation by utilizing separate computing cores. The largest drawback of performing the correlation off-line is the computation time needed to analyze the data. Data is read into a buffer using a single computing core, and then correlated using the remaining cores. Depending on the number of samples in each data block, it takes on the order of an hour of computation time for every minute of data recorded. This is mainly due to the time required to perform the correlation via the Fourier method for each data block in the NI cross-correlation VI. Since the correlation can be easily parallelized, this could be remedied by using a super-computer with many ($>$1000) processing cores. The NI controller used has only 4 cores, limiting the maximum number of correlations performed at the same time. Some of the results presented herein were obtained by use of a real-time correlator using the Virtex-5 FPGA. In this implementation, the cross-correlation is computed using a multiply-accumulate algorithm with delay nodes to retrieve the correlation at various time lags, with the FPGA clock set at 125$\,$MHz. The standard deviation of the entire correlogram excluding the zero time-delay bin is recorded, and the time-streams of both channels are displayed on the LabVIEW front panel interface to allow visual inspection of the data.
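The convolution-theorem approach used by the off-line correlator can be sketched in a few lines; here NumPy stands in for the LabVIEW/MASM VI, and the function name and test signal are our own illustrative choices:

```python
import numpy as np

def xcorr_fft(a, b):
    """Cross-correlation via the convolution theorem: zero-pad, then take
    the inverse FFT of FFT(a) * conj(FFT(b)). Lag tau sits at index tau,
    with negative lags wrapped to the end of the array."""
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.conj(np.fft.rfft(b, n)), n)

rng = np.random.default_rng(0)
a = rng.normal(size=512)          # stand-in for one digitized channel
b = np.roll(a, 3)                 # second channel, delayed by 3 samples
c = xcorr_fft(a, b)

# the correlation peaks at lag -3, i.e. at index len(c) - 3
assert int(np.argmax(c)) == len(c) - 3
# agrees with the direct (time-domain) definition of the correlation
direct = np.roll(np.correlate(a, b, "full"), -(len(b) - 1))
assert np.allclose(c, direct)
```

For long records the FFT route reduces the cost per block from $O(N^2)$ to $O(N\log N)$, which is why both the off-line VI and the FPGA pipeline are organized around block-wise transforms.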
The FPGA clock for the algorithm is limited to 125$\,$MHz (maximum of 250$\,$MHz). This timing restriction reduces the signal-to-noise ratio (SNR) by a factor of $\sqrt{2}$ in comparison to the off-line correlation. However, the correlations are retrieved in real time, allowing for immediate inspection of results and iterative tests of the laboratory setup. In the future, a compromise between the off-line and FPGA methods can be achieved by first streaming the data to disk, and then using the FPGA to perform the correlations on the stored data. \section{EXPERIMENTAL OBSERVABLES AND DATA REDUCTION} \subsection{Review of II Measurements} The correlation between the AC-coupled amplified voltage signals, $J_1(t)$ and $J_2(t)$, from separated photo-detectors is \begin{equation} c(\tau) = \frac{1}{T_0} \int_{0}^{T_0} J_1(t-\tau) J_2(t) dt \end{equation} where $T_0$ is the total integration time of the correlator, and $\tau$ is the time delay between channels. Hanbury-Brown and Twiss showed \cite{HBT1957b} that the correlation $\bar c(0)$ for a linearly polarized, partially coherent source of finite angular size can be written as \begin{equation} \bar c(0) = 2e^2 A_1 A_2 \int_{0}^{\infty} |\Gamma (\nu,d)|^2 \, \alpha^2 (\nu) \, n^2 (\nu) d\nu \int_{0}^{\infty} |F(f)|^2 df \end{equation} where $A_1$ and $A_2$ are the light collection areas of the two detectors, $\alpha$ is the quantum efficiency, assumed to be the same for both channels, and $n$ is the spectral density of the source in units of photons sec$^{-1}$ Hz$^{-1}$ m$^{-2}$. $\Gamma$ is the coherence factor expected from the source and depends on the detector separation $d$. The term $F(f)$ represents the frequency response of the detectors and amplifiers. The optical bandwidth of the light, $\Delta \nu$, as set by filters in the optical system, is generally narrow enough that the quantum efficiency, spectral density, and coherence can be assumed to be constant over the optical bandwidth.
Additionally, for a rectangular bandpass the integral over the frequency response can be rewritten as $\int_{0}^{\infty} |F(f)|^2 df = |F_{max}|^2 \, \Delta f $, where $|F_{max}|$ is the effective gain in a single channel (assuming identical channels), and $\Delta f$ is the electronic bandwidth of the correlator, assuming that the gain is approximately constant over the electronic bandwidth. The correlation then becomes \begin{equation} \label{eqn:cbar} \bar c(0) = 2e^2 A_1 A_2 \alpha^2 n^2 |F_{max}|^2 \Delta \nu \Delta f |\Gamma (d)|^2 \end{equation} The ability to detect the coherence of the source is limited by shot noise fluctuations in each channel. Hanbury-Brown and Twiss showed that for identical channels the root mean square fluctuation in the correlator output due to shot noise is \begin{equation} \label{eqn:rmsnoise} \sigma = \sqrt{2} e^2 \alpha n \Delta \nu (A_1 A_2)^{\frac{1}{2}} |F_{max}|^2 \left( \frac{\Delta f}{T_0} \right)^{\frac{1}{2}}. \end{equation} To find the signal-to-noise ratio (SNR), we divide equation (\ref{eqn:cbar}) by equation (\ref{eqn:rmsnoise}), yielding \begin{equation} \label{eqn:SNR} {SNR} = \sqrt{2} (A_1 A_2)^{\frac{1}{2}} \, \alpha \, n \, | \Gamma (d)|^2 \sqrt{ \Delta f T_0}. \end{equation} The above equation represents an idealized form of the SNR. In this derivation, we assume point-like detectors that exhibit no dark current or after-pulsing. Furthermore, the above SNR does not include the contribution from stray light entering the detector, losses in the correlator, or pickup of additional noise in the data acquisition system. More complete treatments that include many of these additional considerations have already been performed \cite{HBT1957a,HBT1957b,janvida}.
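For orientation, the idealized SNR of equation (\ref{eqn:SNR}) can be evaluated numerically. In the sketch below, the function name and all parameter values (aperture, quantum efficiency, spectral density, bandwidth) are illustrative assumptions, not measured quantities from this work:

```python
import math

def snr(A1, A2, alpha, n, gamma_sq, delta_f, T0):
    """Idealized SNR: sqrt(2) * sqrt(A1*A2) * alpha * n * |Gamma|^2 * sqrt(df*T0)."""
    return (math.sqrt(2) * math.sqrt(A1 * A2) * alpha * n * gamma_sq
            * math.sqrt(delta_f * T0))

# Illustrative numbers only (assumptions, not values from this work):
A = math.pi * (2.5e-3) ** 2      # 5 mm diameter circular aperture -> area in m^2
s1 = snr(A, A, 0.25, 1.0e4, 1.0, 100e6, 60.0)
s2 = snr(A, A, 0.25, 1.0e4, 1.0, 100e6, 240.0)
# Quadrupling the integration time T0 doubles the SNR: s2 == 2 * s1.
```

The $\sqrt{\Delta f\, T_0}$ scaling is the reason long integrations and large electronic bandwidths are essential for intensity interferometry.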
\\ In our experiment, we digitize the voltage such that time is discretized, $J(t) \rightarrow J(t_i)$, making the observed ADC reading \begin{equation} K(t_i) = \left[ \frac{2^{n_b}}{V_r} J(t_i) \right] \end{equation} where $[\ \cdot\ ]$ represents rounding the value to the nearest integer, $V_r$ is the voltage range of the digitizer, and $n_b$ is the number of resolution bits. The observed digital correlation is then \begin{equation} c(t_k) = \frac{1}{T_0} \sum_{i=0}^{N} K_1(t_i - t_k) K_2(t_i) \Delta t , \end{equation} where $t_k$ is the discrete digital time delay, and $\Delta t$ is the sampling time of the ADC. \subsection{Correlated Noise Reduction - ON/OFF Analysis} When operating at the large bandwidths required by an intensity interferometry system, there is often the undesired influence of spurious correlated noise degrading the spatial coherence measurement for a given source. Noise sources are varied, ranging from electronic cross-talk between channels in the recording system to Cherenkov light in the atmosphere due to gamma-rays when observing stars. In the laboratory, a persistent noise source is attributed to radio-frequency (RF) pickup. This RF signal is simultaneously detected in both electronic channels, producing correlated noise. Regardless of the source, if the unwanted correlated noise is stable on operational timescales, it can be measured and removed. The exact effect of each noise source on the correlated signal must be examined on a case-by-case basis. In this section, a general way to identify and reduce correlated noise by subtraction is presented. In our application, the temporal behavior of the correlated signal, or correlogram, is monitored over small time-lag windows ($<1\,\mu s$) throughout the integration process. In the laboratory, total integration times are on the order of 5 - 20 minutes, but will be greater than one hour when observing stellar sources with telescopes.
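The digitization and discrete correlation above can be sketched as a self-contained NumPy illustration; the resolution, voltage range, and sampling time below are example values, not the settings of our digitizer:

```python
import numpy as np

def digitize(J, n_bits=8, v_r=2.0):
    """ADC reading K(t_i): round(2**n_b / V_r * J(t_i)) to the nearest integer."""
    return np.rint((2 ** n_bits / v_r) * np.asarray(J, dtype=float)).astype(int)

def digital_corr(K1, K2, lags, dt=4e-9):
    """c(t_k) = (1/T0) * sum_i K1(t_i - t_k) K2(t_i) * dt, with T0 = N * dt."""
    K1 = np.asarray(K1, dtype=float)
    K2 = np.asarray(K2, dtype=float)
    N = len(K1)
    T0 = N * dt
    out = []
    for k in lags:
        if k >= 0:
            s = np.dot(K1[:N - k], K2[k:])   # sum over K1[j] K2[j + k]
        else:
            s = np.dot(K1[-k:], K2[:N + k])  # negative lags shift the other way
        out.append(s * dt / T0)
    return np.array(out)
```

With these conventions the zero-lag value of the autocorrelation of a digitized stream is simply the mean of $K^2$.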
\\ The measured correlation as a function of the time delay is \begin{equation*} c(\tau) = < K_1( t ) K_2( t+ \tau )> \end{equation*} Typically, the sampling time of the digitizer is much longer than the coherence time of the light. In this case, the correlation attributed to the spatial coherence of the source will only appear in the zero time-lag bin, $\tau$ = 0. For time lags not equal to zero, the correlation should be distributed randomly according to the shot noise from photo-detection. \\ Additional noise is then written as an additive term in the ADC reading recorded for each channel at the digitizer input, \begin{equation*} K(t) = S(t) + N(t) \end{equation*} where $S(t)$ is the signal attributed to the source, which includes both the wave and shot noise components, and $N(t)$ is the noise introduced into the system. In general, the noise term, $N(t)$, may result from a combination of several noise sources. The resulting correlation is then \begin{equation*} \begin{aligned} c(\tau) = &<S_1(t)S_2(t + \tau)> + <N_1(t)N_2(t + \tau)> \\ &+ <S_1(t)N_2(t+\tau)> + <N_1(t)S_2(t+\tau)> \end{aligned} \end{equation*} The goal is then to identify and remove all of the above terms except for the correlation between $S_1$ and $S_2$. Now, it is necessary to consider at what stage in the measurement process the noise is introduced. For purely electronic noise, which occurs after photo-detection, the cross-terms between the signal in one channel and the noise in the other, known as cross-talk, can be ignored. In the laboratory, we observe that the cross-talk between the channels is negligible compared to the signal and noise correlations. The measured correlation can then be written as \begin{equation*} c(\tau) = <S_1(t) S_2(t+\tau) > + <N_1(t) N_2(t+\tau)> \end{equation*} where only the noise not correlated with the signal itself has been kept. The correlated noise appears as a purely additive term in the overall correlation.
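The additive structure of the correlated noise, and its removal by an OFF measurement, can be illustrated with a toy simulation. Here a common RF-like tone plays the role of $N(t)$ in both channels, and a shared fluctuation that is only present in the ON run plays the role of the zero-lag source coherence; all amplitudes and frequencies are illustrative assumptions, not laboratory values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, max_lag = 200_000, 32

def corr(a, b, max_lag):
    n = len(a)
    return np.array([np.dot(a[:n - k], b[k:]) / n for k in range(max_lag + 1)])

rf = 0.5 * np.sin(2 * np.pi * 0.013 * np.arange(N))  # common RF pickup N(t)

def run(coherent):
    # shared fluctuation g models the zero-lag source correlation S1-S2
    g = 0.3 * rng.standard_normal(N) if coherent else np.zeros(N)
    s1 = rng.standard_normal(N) + g + rf
    s2 = rng.standard_normal(N) + g + rf
    return corr(s1, s2, max_lag)

c_on, c_off = run(True), run(False)
resid = c_on - c_off  # the RF pickup cancels; the zero-lag peak survives
```

In the residual, the deterministic RF correlation present at all lags subtracts away, leaving a clear excess only in the zero time-lag bin, which is the behavior exploited by the ON/OFF analysis.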
\\ To remove the correlated noise, we perform a background measurement of the correlation which does not contain the signal attributed to the spatial coherence, but includes the noise contribution at the same level as in the desired correlation measurement. The final measurement is obtained as the residual between the ON observation, where source coherence is expected, and the OFF observation. A straightforward way to obtain OFF data in the laboratory is to measure the correlation for detector separations large enough that the contribution due to the coherence of the source is negligible. This makes the observed correlation \begin{equation*} \begin{aligned} c_{F} (\tau,T,d) = & < S_1(t) S_2(d_{on},t+\tau) > - <S_1(t+T) S_2(d_{off},t+T+\tau) > \\ &+ <N_1(t) N_2(d_{on},t+\tau)> - <N_1(t+T) N_2(d_{off},t+T+\tau)> \end{aligned} \end{equation*} where $T$ is the time difference between recordings of the ON and OFF runs, and $d_{on}$ and $d_{off}$ are the detector separations in the ON and OFF configurations. Given a circular source with angular diameter $\theta_d$, the detector separation for the background correlation must be greater than $1.22\lambda/\theta_d$ so that the coherence from the source is very small. Ideally, the noise sources do not significantly change between ON and OFF runs, such that the residual between the noise correlations tends to zero, leaving only the difference in signal correlations. To mitigate slow changes in the noise level between ON and OFF observations, we proceed with relatively rapid observation cycles with periods of no more than a few minutes. \\ \begin{figure}[t] \centering \includegraphics[width=\linewidth]{stdAnalysis_EPS_v2} \caption{The top panel shows the standard deviation of the correlogram, excluding the zero time-lag bin, as a function of the total integration time. In the bottom panel the same data is shown but multiplied by $\sqrt{T_0}$. The horizontal black dashed line shows the mean of the ON-OFF analysis.
For both the ON and OFF runs, the presence of spurious correlations causes the R.M.S. trend to deviate from the expected $\frac{1}{\sqrt{T_0}}$ behavior. In the case of the ON-OFF analysis the R.M.S. tends to follow the expected trend.} \label{fig:stdanalysis} \end{figure} To ensure that the noise subtractions are properly performed, the R.M.S. distribution over the entire correlogram excluding the zero time-lag is monitored against the expected trend of $\frac{1}{\sqrt{T_0}}$, where $T_0$ is the total integration time. Initially, the shot noise component dominates the R.M.S., but as the integration of the correlator proceeds, low-level noise correlations may be detected, which sets a limit on the minimum detectable R.M.S. When the noise correlation is significant, the R.M.S. trend deviates from $\frac{1}{\sqrt{T_0}}$. For proper noise subtraction, the residual between the ON and OFF correlations should follow the $\frac{1}{\sqrt{T_0}}$ trend. Figure \ref{fig:stdanalysis} shows a typical result in the laboratory for the R.M.S. trend. An ON and then an OFF run of 5 minutes each were taken sequentially. The integrated correlation for ON, OFF, and ON-OFF was recorded every second, and the R.M.S. was calculated for each measurement over the entirety of the integration time. The bottom panel displays the R.M.S. multiplied by $\sqrt{T_0}$, such that the expected value should fluctuate about a constant. For both the ON and OFF runs, the trend begins to deviate from the expectation after only 50 seconds of integration (when the noise is detected). However, the residual between the ON and OFF runs appears to be more stable, suggesting that the noise subtraction is being performed properly. Here, the normalization of the R.M.S. trend for the ON and OFF runs is different, which we attribute to varying levels in the light intensity and also the noise. \\ The residual observation can also be performed using parallel (ON) and orthogonal (OFF) polarization configurations between the detectors.
Light between orthogonal polarizations should show no coherence and thus can be used as a background, or OFF observation. This method provides an additional benefit since both parallel and orthogonal configurations can be observed simultaneously for a single detector separation. \section{RESULTS} \subsection{Validation of ON/OFF analysis} \begin{figure} \centering \includegraphics[width=\linewidth,height=0.6\linewidth]{residONOFF_EPS_v2} \caption{In the top panel sequential ON and OFF measurements of the correlogram over an integration time of 30 seconds each are overlaid. For small time lags ($\tau < 500\,$ns) the scatter between the measured correlation for different time lags increases significantly due to the presence of correlated noise. In the bottom panel, the residual between a total of 10 minutes each of ON and OFF data (comprised of 30 second sequential runs alternating between ON and OFF) is shown which reveals the zero time-lag correlation emanating from the spatial coherence of the source.} \label{fig:onoff_resid} \end{figure} The ON/OFF analysis was validated in the laboratory with the experimental setup shown in Fig. \ref{fig:labsetup} using the off-line correlator and without the use of polarizing filters. Fig. \ref{fig:onoff_resid} displays the correlogram both before and after obtaining the residual between ON and OFF runs. The ON region was chosen at zero baseline separation, and the OFF at a separation of 10 mm. Given the expected angular size of the source the first zero of the coherence function is reached at approximately 5.5$\,$mm. The subtraction of spurious noise reveals the coherence of the source at the zero time-lag bin. \subsection{Spatial Coherence Measurement} \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{spatialCoherenceFit_July11_EPS} \caption{The above image shows the measured correlation from the FPGA correlator as a function of the detector separation. 
The dotted blue line is a fit to the data assuming a uniform disk model for the light source. The wavelength and diameter of the source are held constant, while the normalization and center can vary. The black line is a similar fit, but includes the effects of the extended detector aperture.} \label{fig:spatialcoherence} \end{figure} To measure the spatial coherence, a LabVIEW routine was developed which integrated the actuator movement with the data acquisition. The source consisted of a circular pinhole approximately 300$\,\mu$m in diameter at a distance of 3.15$\,$m, with a central wavelength of $\lambda =$ 435$\,$nm. Correlations were recorded at each position in 5 minute segments for both an ON and a subsequent OFF run. A total of 6 ON positions, each separated by 1$\,$mm, were recorded about the zero baseline position. After each 5 minute integration, the mean of the correlogram, excluding the value at the zero time-lag bin, was subtracted from the entire correlogram. The residual between ON and OFF runs was then calculated. This process was repeated 4 times, yielding a total integration time of 20 minutes at each position. \\ The result of this procedure is shown in Fig. \ref{fig:spatialcoherence}. The uncertainty in each measurement was determined by the R.M.S. scatter for time lags away from zero. The dashed line represents a fit to the data obtained by modeling the source as a uniform disk with fixed wavelength and angular diameter. The zero baseline (or center position) and normalization are left as free parameters determined by the fit. The solid line includes the effects of the extended detector size \cite{janvida}. To include these effects in the fit, an initial model is generated by convolving the detector areas with the expected normalized spatial coherence. The resulting model was interpolated and then fit to the data in the same manner as the initial fit without the detector size effects.
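The uniform disk model used in these fits can be written down directly. The sketch below uses the pinhole geometry quoted in the text (300 um at 3.15 m, 435 nm) and checks that the first zero of the coherence lands near the 5.5 mm quoted earlier; detector-size effects are omitted here, and the function name is illustrative:

```python
import numpy as np
from scipy.special import j1

theta = 300e-6 / 3.15   # angular diameter of the pinhole [rad]
lam = 435e-9            # central wavelength [m]

def gamma2(d):
    """Uniform-disk spatial coherence |Gamma(d)|^2 = |2 J1(x)/x|^2,
    with x = pi * theta * d / lambda."""
    x = np.pi * theta * np.atleast_1d(np.asarray(d, dtype=float)) / lam
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return out

# First zero of the coherence pattern: d = 1.22 * lambda / theta (~5.6 mm here)
d_zero = 1.22 * lam / theta
```

In a fit to the measured correlations, this normalized curve would be scaled and shifted by the free normalization and center-position parameters described above.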
\\ A reduced $\chi ^2$ test was performed between the uniform disk model with detector size effects and the measured spatial correlation, finding $\chi ^2 /\nu = 0.83$, suggesting agreement between the data and the model. However, there are several properties of the source that are not taken into account here. Examination of the pinhole under a scanning electron microscope revealed irregularities in the diameter on the order of 5-10$\,\%$. Additionally, the angular brightness distribution may not be constant over the area of the pinhole, making the uniform disk model assumption not fully valid. \subsection{Correlation between orthogonal and parallel polarized thermal light} The experiment was set up using the polarizing filters in a parallel configuration in front of each detector. Real-time FPGA correlations for minimal detector separation were recorded for a period of 5 minutes. The filter was then manually rotated by 90 degrees to select the orthogonal configuration, and the correlation measurement was repeated. The results are shown in Fig. \ref{fig:polarizationResid}, which displays the correlogram both before and after the application of the ON/OFF subtraction. The noise subtraction between parallel and orthogonal polarizations offered an improvement of 59$\%$ in the SNR over the parallel-configuration measurements alone. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{parallelOrthogonal_EPS} \caption{Correlogram for the polarization tests performed in the laboratory. The top panel shows the result for both the parallel and orthogonal configurations and the bottom panel shows the residual along with $\pm 1 \sigma$ indicators.} \label{fig:polarizationResid} \end{figure} \section{SUMMARY AND OUTLOOK} The second-order coherence function of a thermal light source simulating stars was measured with a digital correlator. An ON/OFF analysis routine was developed, allowing for the removal of systematic spurious correlations due to unwanted noise pickup.
The routine consisted of either physically separating the detectors so that the coherence from the source is negligible, or using orthogonal polarizations, in order to measure a background. The main application of this work is toward a modern SII array using IACT arrays to observe stars. The system will be integrated into the StarBase-Utah \cite{starbase} observatory over the summer of 2017 for initial tests to verify operation on actual astronomical telescopes, and then scaled to the VERITAS array in the fall of 2017 and winter of 2017-2018. \\ Our group is actively working to improve some of the features presented in this paper. At the time these results were obtained, the maximum bandwidth of the FPGA correlator was limited due to timing restrictions. However, a modified algorithm has since been developed allowing for optimal operation of the FPGA correlator. Additionally, instead of performing the correlations in real time, the data can first be streamed to disk, with the correlations performed post-digitization. A benefit of streaming data to disk is that an arbitrary number of channels can be correlated, with computation time as the only limitation. This opens the possibility for correlations between selected polarization modes as well as multiple spectral channels. The obtainable SNR is then improved by the square root of the number of additional channels. Also, proper normalization of the correlation between runs for varying light levels has yet to be demonstrated. The true normalization depends on a number of factors, primarily the light intensity and gain variations. Within the short integration times used in the laboratory, the light intensity and gain can be expected to be constant; however, over the hour-long time scales needed for stellar observations, these changing parameters need to be accounted for. The normalization of the correlation for varying light intensity and gain fluctuations has already been studied by Hanbury-Brown and Twiss \cite{hbtbook}.
\\ For the integration into IACT telescopes there are several tasks to be demonstrated. First, to use the ON/OFF analysis it is necessary to measure the orthogonal and parallel polarizations of light simultaneously, both to remove the lasting effect of any transient noise sources and to reduce the total data collection time. A straightforward implementation of this is to use a polarizing beam splitter to separate the orthogonal polarizations. Each telescope will have its own data acquisition hardware, with the data brought together to a central processing unit post-digitization. This requires synchronization of the ADC modules to sub-nanosecond precision, which can already be accomplished using fiber optics and external clocks \cite{whiterabbit}. In the laboratory we have already achieved synchronization for closely spaced ($<$ 1$\,$m) but physically separated data acquisition modules using a central timing unit and coaxial cable connections. However, this capability still needs to be demonstrated over large ($>$ 100$\,$m) distances. \\ We have successfully measured the coherence of a thermal blackbody source in both time and space using a digital correlator. Small modifications to the current experimental setup would allow interferometric capabilities on large arrays of IACTs at very modest cost, and are currently being pursued. \\ \section*{ACKNOWLEDGEMENTS} This is an author's Accepted Manuscript of an article published by Taylor \& Francis in the Journal of Modern Optics on August 10, 2017, available online at: http://www.tandfonline.com/10.1080/09500340.2017.1360958. \\ The authors would like to dedicate this work to Micah Kohutek for his work in the laboratory, with regrets that he did not get to see this recent progress. We would like to also acknowledge Udara Abeysekara for helpful discussions and assistance in the setup of the linear actuator.
The authors gratefully acknowledge support for this work from the University of Utah and from National Science Foundation grants PHY151050 and PHY0960242. \newpage \bibliographystyle{tfp}
\section{Introduction} \label{Sec:intro} The smallness of the cosmological constant is one of the great mysteries of particle physics. In the late 1980's, Coleman proposed a solution \cite{Coleman:1988tj,Klebanov:1988eh} based on the effects of Euclidean wormholes~\cite{Giddings:1987cg} (see also Refs.~\cite{Kawai:2013wwa,Hebecker:2018ofv} for reviews). After summing over the wormholes, the low energy theory is described by an ensemble average over various coupling constants, including the cosmological constant. However, Coleman's original proposal suffers from problems such as those pointed out in Refs.~\cite{Fischler:1988ia,Polchinski:1988ua,Fischler:1989ka}. These problems seem to stem from the pathology of 4d Euclidean gravity associated with the conformal mode. To overcome this problem, a Lorentzian formulation of Coleman's mechanism was proposed and studied~\cite{Kawai:2011rj,Kawai:2011qb,Hamada:2014ofa,Hamada:2014xra,Hamada:2015dja}. On the other hand, significant progress has recently been made toward resolving the black hole information paradox~\cite{Penington:2019npb,Almheiri:2019psf}. At least in two dimensions, the replica wormhole~\cite{Penington:2019kki,Almheiri:2019qdq} plays an important role in reproducing the unitary Page curve of the black hole entropy (see Refs.~\cite{Almheiri:2020cfm,Raju:2020smc} for reviews). Given the importance of wormholes, it is interesting to revisit Coleman's mechanism in two dimensions. Indeed, 2d Euclidean quantum gravity on closed manifolds coupled to a matter field with central charge $c\leq1$ is well-defined. Its proper-time Hamiltonian can be regarded as a kind of field theory of noncritical strings.
Thus, the validity of Coleman's proposal can be clearly discussed.\footnote{The analysis of wormholes in the worldsheet theory of critical strings has been done in Ref.~\cite{Lyons:1991im}.} In this paper, we first show that the sum over topologies in 2d Euclidean gravity does not lead to an automatic tuning of the cosmological constant, by explicitly counting the number of random surfaces.\footnote{The fluctuation of the cosmological constant in 2d gravity is also considered in Ref.~\cite{Ambjorn:2021wdm}.} We argue that this remains true for a wide range of modifications of 2d Euclidean gravity based on the matrix model. Next, we consider 4d Lorentzian gravity and introduce an effective Hamiltonian of the multiverse consisting of the creation and annihilation operators of the mother and baby universes. The Hamiltonian is non-Hermitian due to the asymmetry between the creation of a mother universe from nothing and the annihilation of a mother universe into nothing. In this model, the Coleman mechanism is realized and the effective cosmological constant is tuned to almost zero. The paper is organized as follows. In Section~\ref{Sec:Euclidean_2d}, we first outline the path integral formulation of 2d Euclidean gravity (non-critical strings). In particular, we introduce the Hamiltonian formulation for 2d gravity coupled to $(2,q)$ minimal matter.\footnote{It includes the Jackiw-Teitelboim (JT) gravity as the limit $q \rightarrow \infty$~\cite{Teitelboim:1983ux,Jackiw:1984je}.} We then show that the effect of the microscopic baby universes is too small compared to the macroscopic topology changes to realize the Coleman mechanism. We then consider modifications of 2d gravity based on the matrix model, and discuss how the Coleman mechanism works in Lorentzian gravity. In Section~\ref{Sec:Lorentzian}, we consider Lorentzian gravity.
The processes of creation and annihilation of the mother and baby universes are investigated, and a non-Hermitian effective Hamiltonian describing a Lorentzian multiverse is introduced. We show that Coleman's idea is realized in this model. We also discuss the potential implications for phenomenology. \section{2d Euclidean quantum gravity with various topologies}\label{Sec:Euclidean_2d} In this section, we examine the possibility of obtaining ensemble averages of the coupling constants from the sum over topologies in two-dimensional gravity. Euclidean 2d gravity coupled to a matter field with central charge $c\leq1$ is well-defined, without suffering from the problem of the conformal mode.\footnote{This is related to the fact that the number of degrees of freedom is negative in 2d gravity.} It can be defined either by continuum theory \cite{Knizhnik:1988ak,David:1988hj,Distler:1988jt} or by dynamical triangulation \cite{David:1984tx,Kazakov:1985ds,Boulatov:1986jd,Ambjorn:1985az,Kazakov:1985ea}. In particular, all topologies can be summed using the matrix model~\cite{Brezin:1990rb,Douglas:1989ve,Gross:1989vs} (see also Refs.~\cite{Kawai:1991qv,Ginsparg:1993is,DiFrancesco:1993cyw,Nakayama:2004vk} for reviews). As is well known, 4d Euclidean gravity has difficulties due to the instability of the conformal mode. Also, whether a microscopic wormhole is more important than a macroscopic topology change depends on the dimension of spacetime. Nevertheless, the 2d Euclidean wormhole is a good clue for investigating the 4d Lorentzian multiverse, as we will see in the next section. \subsection{Formulations in continuum theory} In this subsection, we introduce two formalisms of 2d Euclidean gravity. \subsubsection{Hamiltonian formalism: Non-critical string field theory}\label{Sec:non-critical_string} \begin{figure} \centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (267.5,58.5) ..
controls (267,64.97) and (247.58,96) .. (223,96) .. controls (198.43,96) and (178.5,76.97) .. (178.5,53.5) ; \draw (178.5,53.5) .. controls (168.5,40) and (158.33,36) .. (140,30) ; \draw (178.5,30.5) .. controls (168.5,20) and (158.33,16) .. (140,10) ; \draw (178.5,30.5) .. controls (188.5,10) and (198.33,5) .. (200,2) .. controls (212,-10) and (216,-10) .. (220,-40) ; \draw (267.5,58.5) .. controls (270,52) and (275,51) .. (280,50) .. controls (300,50) and (320,48) .. (350,45) .. controls (400,35) and (350,33) .. (267.5,33.5) .. controls (260,20) and (250.33,10) .. (240,2) .. controls (237,-2) and (234,-20) .. (230,-40) ; \draw [shift={(-10,0)}] (215,20) .. controls (235,20) and (235,70) .. (215,70) ; \draw [shift={(-10,0)}] (225.5,25) .. controls (220,25) and (220,65) .. (225.5,65) ; \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=1] (310,49) .. controls (315,44) and (315,39) .. (310,34) .. controls (305,39) and (305, 44) .. (310,49); \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=1] (240,2) .. controls (227,-7) and (214,-7) .. (200,2) .. controls (214,11) and (227, 11) .. (240,2); \draw [shift={(0,-20)}] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=1] (150,53) .. controls (155,47) and (155,41) .. (150,34) .. controls (145,41) and (145,47) .. (150,53); \draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (222.5, 78.38) .. controls (222.5, 80.24) and (224.01, 81.75) .. (225.88, 81.75) .. controls (227.74, 81.75) and (229.25, 80.24) .. (229.25, 78.38) .. controls (229.25, 76.51) and (227.74, 75) .. (225.88, 75) .. controls (224.01, 75) and (222.5, 76.51) .. 
(222.5, 78.38) -- cycle ; \draw (270,80) node [anchor=north west][inner sep=0.75pt] {$\textcolor[rgb]{0.82,0.01,0.11}{V(P;D)}$}; \draw (210,78) node [anchor=north west][inner sep=0.75pt] {\textcolor[rgb]{0.82,0.01,0.11}{{\fontsize{8pt}{8pt}\selectfont $P$}}}; \draw (270,-10) node [anchor=north west][inner sep=0.75pt] {$\textcolor[rgb]{0.11,0.01,0.82}{S(P;D)}$}; \draw (245,55) node [anchor=north west][inner sep=0.75pt] [rotate=24] {\textcolor[rgb]{0.11,0.01,0.82}{{\fontsize{8pt}{8pt}\selectfont $D$}}}; \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ] (233,76) -- (303,42) ; \draw [shift={(305,40.5)}, rotate = 150] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw [shift={(232,76.5)}, rotate = -24] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ] (227,72) -- (227,12) ; \draw [shift={(227,8)}, rotate = 90] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw [shift={(227,72.5)}, rotate = -90] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ] (155,25) -- (220,76) ; \draw [shift={(155,25)}, rotate = 40] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. 
(10.93,3.29) ; \draw [shift={(220,76)}, rotate = 220] [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \end{tikzpicture} \caption{Illustration of $V(P;D)$ and $S(P;D)$. Here $V(P;D)$ is the set of points whose geodesic distance from $P$ is less than or equal to $D$. The boundary of $V(P;D)$ is denoted by $S(P;D)$. In the figure, $S(P;D)$ consists of three loops.} \label{Fig:proper_time} \end{figure} In Refs.~\cite{Ishibashi:1993nq, Fukuma:1993np}, a Hamiltonian formalism was proposed for 2d Euclidean gravity, in which the geodesic distance is considered as time (See Ref.~\cite{Kawai:1993cj} for the Hamiltonian in dynamical triangulation). This theory can be regarded as a string field theory, since the Hamiltonian describes the creation and annihilation of universes of spatial dimension 1. It can also be viewed as a two-dimensional third quantization theory~\cite{Giddings:1988wv}. This formalism is convenient to generalize to Lorentzian spacetime, which we will explore in Section~\ref{Sec:Lorentzian}. Let us consider 2d spacetime, and take an arbitrary point $P$ (See Fig. \ref{Fig:proper_time} for illustration). Then, the set of points $V(P;D)$ is defined as \begin{align} V(P;D) = \{ Q\in (\text{spacetime})| d(P,Q)\leq D\} \end{align} where $d(P,Q)$ is the geodesic distance between $P$ and $Q$. Let $S(P;D)$ be the boundary of $V(P;D)$. Next, we introduce operators $\psi^\dagger(\ell)$ and $\psi(\ell)$, which create and annihilate loops (1d spaces) of length $\ell$, respectively. They satisfy the relation, \begin{align} [\psi(\ell),\psi^\dagger(\ell')]=\delta(\ell-\ell')~, \end{align} and the vacuum (the absence of space) is defined as \begin{align} \psi(\ell)|0\rangle=\langle0|\psi^\dagger(\ell)=0. 
\end{align} For simplicity, we take the $(2,q)$ minimal model as the matter field.\footnote{The $(2,q)$ minimal model has $c = 13 - 3q - \frac{12}{q}$. For example, $q=1,3,\infty$ gives $c=-2, 0, -\infty,$ respectively. } In that case, there is no need to introduce any extra degrees of freedom other than $\ell$.\footnote{Non-critical string field theory for the minimal unitary series $(p,p+1)$ ($p=2,3,\cdots$) is given in Ref.~\cite{Ikehara:1994vx}.} Then, the state with $k$ loops of length $\ell_1,\cdots,\ell_k$ can be written as \begin{align} |\ell_1,\cdots,\ell_k\rangle = \psi^\dagger(\ell_1)\cdots\psi^\dagger(\ell_k)|0\rangle~, \end{align} and the state of the boundary surface $S(P;D)$ is represented by their superposition: \begin{align} |S(P;D)\rangle = \sum_{k=0}^\infty \int^\infty_0d\ell_1 \cdots \int^\infty_0\,d\ell_k c_k(\ell_1,\cdots,\ell_k)|\ell_1,\cdots,\ell_k\rangle~. \end{align} We can define a Hamiltonian~\cite{Kawai:1993cj,Ishibashi:1993nq} that describes the infinitesimal translation of the proper time $D$: \begin{align} \frac{d}{dD} |S(P;D)\rangle = - H_\mathrm{Euclid} |S(P;D)\rangle~, \label{Eq:Euclidean_Schroedinger} \end{align} where $H_\mathrm{Euclid}$ is given by \begin{align} H_\mathrm{Euclid} = &\int^\infty_0 d\ell_1 d\ell_2 \,\psi^\dagger(\ell_1) \psi^\dagger(\ell_2) \psi(\ell_1+\ell_2) + \int^\infty_0 d\ell_1 d\ell_2 \,\psi^\dagger(\ell_1+\ell_2) \psi(\ell_1) \psi(\ell_2) \nonumber\\ &+\int^\infty_0 d\ell \,\rho(\ell) \psi(\ell)~. \label{Eq:Euclidean_Hamiltonian}\end{align} The source function $\rho(\ell)$ is\footnote{To be precise, the source term is defined through the Laplace transformation.} \begin{align} \rho(\ell)= \begin{cases} \lambda\,\delta(\ell) \quad \text{for $(2,1)$ topological gravity ($c=-2$) \cite{Ishibashi:1993nq}}\\ \lambda\,\delta(\ell) + \delta''(\ell) \quad \text{for $(2,3)$ pure gravity ($c=0$) \cite{Ishibashi:1993nq}} \end{cases}.
\end{align} This function is related to the disk amplitude: \begin{align} \tilde{\rho}(\zeta)= \frac{\partial}{\partial\zeta}(\tilde{D}(\zeta))^2, \end{align} where $\tilde{D}(\zeta)$ is the Laplace transformation of the disk amplitude $D(\ell)$. The function $\rho$ corresponding to JT gravity \cite{Teitelboim:1983ux,Jackiw:1984je} can be obtained as follows. Its action is given by \begin{align} S_{JT}=-\frac{S_0}{2\pi}\left[ \frac{1}{2}\int_{\mathcal{M}}\sqrt{g}R + \int_{\partial\mathcal{M}} \sqrt{h} K \right] -\left[ \frac{1}{2}\int_{\mathcal{M}}\sqrt{g}\Phi(R+2)+\int_{\partial\mathcal{M}}\sqrt{h}\Phi(K-1)\right], \end{align} where $S_0$ is a constant, $K$ is the boundary extrinsic curvature, and $\Phi$ is the dilaton. We consider the case where $\mathcal{M}$ has the disk topology. From the variation of $\Phi$, we find that the metric is Euclidean $AdS_2$ ($EAdS_2$). In the Poincar\'e patch ($ds^2=(d\tau^2+dz^2)/z^2$), the solution for the dilaton is \begin{align} \Phi=\frac{2\pi\gamma}{z}~, \end{align} where $\gamma$ is a constant. Then $\tilde{D}$ is given by \cite{Saad:2019lba,Hirano:2021rzg} \begin{align} \tilde{D}_{JT}(\zeta)= e^{S_0}\frac{\gamma}{2\pi^2}\sinh\left(2\pi\sqrt{2\gamma \zeta}\right)~. \end{align} This leads to \begin{align} \tilde{\rho}_{JT}(\zeta)= \frac{\partial}{\partial\zeta}\tilde{D}_{JT}^2= e^{2S_0}\frac{\gamma^{5/2}}{4\pi^3}\sqrt{\frac{2}{\zeta}}\sinh\left(4\pi\sqrt{2\gamma \zeta}\right) =e^{2S_0}\frac{\gamma^2}{\pi^2}\sum_{n=1}^\infty \frac{(16\pi^2)^{n-1}(2\gamma)^n}{(2n-1)!}\zeta^{n-1}~, \end{align} from which we obtain \begin{align} \rho_{JT}(\ell)=e^{2S_0}\frac{\gamma^2}{\pi^2}\sum_{n=1}^\infty \frac{(16\pi^2)^{n-1}(2\gamma)^n}{(2n-1)!}\delta^{(n-1)}(\ell)~. \end{align} Here $\delta^{(n)}(\ell)$ is the $n$-th derivative of the Dirac delta function. Thus JT gravity is obtained in the limit $p=2, q\to\infty$ \cite{Saad:2019lba} (see also Refs.~\cite{Mertens:2019tcm,Mertens:2020hbs,Turiaci:2020fjj,Okuyama:2021eju,Gregori:2021tvs}).
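As a cross-check, the closed form of $\tilde{\rho}_{JT}(\zeta)$ can be compared numerically with its Taylor expansion, whose coefficient of $\zeta^{n-1}$ is $e^{2S_0}\frac{\gamma^2}{\pi^2}\frac{(16\pi^2)^{n-1}(2\gamma)^n}{(2n-1)!}$. The sketch below uses illustrative values of $\gamma$ and $\zeta$; the overall factor $e^{2S_0}$ cancels in the comparison and is set to $1$.

```python
import math

# Illustrative parameter values; e^{2 S_0} is set to 1 since it cancels.
gamma, zeta = 0.3, 0.05

# Closed form: rho~(zeta) = gamma^{5/2}/(4 pi^3) * sqrt(2/zeta) * sinh(4 pi sqrt(2 gamma zeta))
closed = gamma**2.5 / (4 * math.pi**3) * math.sqrt(2 / zeta) \
         * math.sinh(4 * math.pi * math.sqrt(2 * gamma * zeta))

# Series: (gamma^2 / pi^2) * sum_{n>=1} (16 pi^2)^{n-1} (2 gamma)^n / (2n-1)! * zeta^{n-1}
series = gamma**2 / math.pi**2 * sum(
    (16 * math.pi**2) ** (n - 1) * (2 * gamma) ** n
    / math.factorial(2 * n - 1) * zeta ** (n - 1)
    for n in range(1, 40)
)

print(abs(closed - series) / closed)  # agreement to machine precision
```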
Using Eq.~(\ref{Eq:Euclidean_Schroedinger}), we can evaluate the length distribution of the loops that constitute the boundary $S(P;D)$~\cite{Kawai:1993cj,Gubser:1993vx}. For example, in the case of $c=0$ ($q=3$), the expectation value of the number of loops with length from $L$ to $L+dL$ contained in $S(P;D)$ is given by \begin{align} &n(L;D)dL= \frac{3}{7\sqrt{\pi}D^2} \left(x^{-5/2}+\frac{1}{2}x^{-3/2}+\frac{14}{3}x^{1/2}\right) e^{-x}dL~, &&x:=\frac{L}{D^2}~. \end{align} This implies that a large number of small baby universes will be created. Nevertheless, we cannot conclude that the Coleman mechanism is realized, as we will discuss in Section~\ref{Sec:absence}. \subsubsection{Path Integral Formalism}\label{Sec:DDK} The Liouville action appears in the quantization of 2d gravity coupled to conformal matter~\cite{David:1988hj,Distler:1988jt} (See also Ref.~\cite{Knizhnik:1988ak}). In Section~\ref{Sec:absence}, we use this to examine the possibility of the Coleman mechanism in 2d Euclidean gravity. We start from the partition function, \begin{align} Z=\int \frac{\mathcal{D}g}{\text{vol(Diff)}}e^{-\frac{\mu_0}{2\pi}\int d^2x \sqrt{g}}Z_M[g]~. \end{align} Here $Z_M[g]$ is the partition function of a conformal field with central charge $c$ defined on the background metric $g_{\mu\nu}(x)$, vol(Diff) stands for the volume of the space of diffeomorphisms, and $\mu_0$ is the (bare) cosmological constant. The path measure $\mathcal{D}g$ is induced from the diffeomorphism-invariant norm: \begin{align} ||\delta g||^2 = \int d^2x \sqrt{g}g^{\mu\nu}g^{\lambda\rho}\delta g_{\mu\lambda}\delta g_{\nu\rho}~. \label{Eq:g_norm}\end{align} In the conformal gauge, the metric is parametrized as \begin{align} g_{\mu\nu}(x)=\hat{g}_{\mu\nu}(\tau,x)\,e^{\phi(x)}~, \label{Eq:decomposition}\end{align} where $\phi$ and $\tau$ are the conformal mode and moduli, respectively.
After this decomposition, the partition function becomes \begin{align} \int d\tau\,\mathcal{D}_1\phi\, \Delta_{FP}[\hat{g}e^\phi]Z_M[\hat{g}e^\phi]e^{-\frac{\mu_0}{2\pi}\int \sqrt{\hat{g}}e^\phi d^2x} =\int d\tau\,\mathcal{D}_1\phi\, \Delta_{FP}[\hat{g}]Z_M[\hat{g}]e^{\frac{c-26}{48\pi}S_L[\hat{g};\phi]-\frac{\mu_0}{2\pi}\int \sqrt{\hat{g}}e^\phi d^2x}~, \label{Eq:Z_conformal_gauge}\end{align} where $\Delta_{FP}[\hat{g}e^\phi]$ is the Faddeev-Popov determinant, and $S_L[\hat{g};\phi]$ is the (unrenormalized) Liouville action, \begin{align} S_L[\hat{g};\phi]=\int d^2x \sqrt{\hat{g}}\left(\frac{1}{2}\hat{g}^{\mu\nu}\partial_\mu\phi\partial_\nu\phi+ \hat{R}\phi\right)~. \end{align} $\mathcal{D}_1\phi$ is the measure induced from the norm \begin{align} ||\delta\phi||^2_1=\int d^2x \sqrt{\hat{g}}\,e^\phi(\delta\phi)^2~, \label{Eq:phi_norm_1}\end{align} which is derived from Eq.~\eqref{Eq:g_norm}. In the last equality of Eq.~\eqref{Eq:Z_conformal_gauge}, we have used \begin{align} &\Delta_{FP}[\hat{g}e^\phi]=\Delta_{FP}[\hat{g}]e^{-\frac{26}{48\pi}S_L[\hat{g};\phi]}, &&Z_M[\hat{g}e^\phi]=Z_M[\hat{g}] e^{\frac{c}{48\pi}S_L[\hat{g};\phi]}~. \end{align} Since $\mathcal{D}_1\phi$ is inconvenient because of $e^\phi$ in the norm \eqref{Eq:phi_norm_1}, we rewrite it in terms of the measure $\mathcal{D}_0\phi$ that is induced from the standard norm, \begin{align} ||\delta\phi||^2_0=\int d^2x \sqrt{\hat{g}}\,(\delta\phi)^2~. \end{align} Then the partition function becomes~\cite{David:1988hj,Distler:1988jt} \begin{align} Z=\int d\tau\,\mathcal{D}_0\phi\, \Delta_{FP}[\hat{g}]Z_M[\hat{g}]e^{-S[\phi;\hat{g}]}~, \label{Eq:Z_renormalized}\end{align} where $S[\phi;\hat{g}]$ is the (renormalized) Liouville action, \begin{align} S[\phi;\hat{g}]=\frac{1}{2\pi}\int d^2x\left(\partial\phi\bar{\partial}\phi+\frac{1}{4}Q\sqrt{\hat{g}}\hat{R}\phi+\mu_1\sqrt{\hat{g}}e^{\alpha\phi}\right)~.
\label{Eq:renormalized_action}\end{align} Here we have \begin{align} &Q=\sqrt{\frac{25-c}{3}}, &&\alpha=\frac{\sqrt{25-c}-\sqrt{1-c}}{\sqrt{12}}~. \label{Eq:Q_and_alpha}\end{align} Note that Eq.~(\ref{Eq:Z_renormalized}) reduces to the semi-classical Liouville theory in the limit $c\to-\infty$. \subsection{The absence of the Coleman mechanism}\label{Sec:absence} In this Section, we show that the Coleman mechanism does not work for 2d Euclidean gravity coupled to conformal matter with $c\leq1$. To do so, we compare the number of random surfaces with different topologies. The partition function of a 2d manifold with a given topology and area $A$ is given by \begin{align} Z(A) = \int \frac{\mathcal{D}g_{\mu\nu}}{\text{vol}\left(\text{Diff}\right)}Z_M[g_{\mu\nu}]\delta\left(\int d^2x\sqrt{\mathrm{det}\,g_{\mu\nu}}-A\right)~, \end{align} which can also be viewed as the number of random surfaces of area $A$. As reviewed in Section~\ref{Sec:DDK}, in the conformal gauge we have \begin{align} Z(A)=\int d\tau\int \mathcal{D}_0\phi\, Z_M[\hat{g}]e^{-S[\phi;\hat{g}]}\delta\left(\int \sqrt{\hat{g}}\,e^{\alpha\phi}d^2x-A\right)~. \end{align} Here we are interested in the string susceptibility $\Gamma$ defined by~\cite{Weingarten:1979gn,Eguchi:1982fe} \begin{align} Z(A)\sim A^{\Gamma-3}~, \end{align} which is a generalization of the central limit theorem for random walks, see Appendix~\ref{Sec:Random_Walk}. $\Gamma$ is obtained by a scaling argument as follows~\cite{David:1988hj,Distler:1988jt} (see Ref.~\cite{Knizhnik:1988ak} for the genus-zero case). By shifting $\phi$ as \begin{align} \phi\to\phi+\frac{\log A}{\alpha}~, \end{align} the measure $\mathcal{D}_0\phi$ is invariant while the action \eqref{Eq:renormalized_action} is shifted as \begin{align} S[\phi;\hat{g}]\to S[\phi;\hat{g}]+Q\chi\frac{\log A}{2\alpha}~. \end{align} Here $\chi$ is the Euler number of the 2d manifold.
The change of the delta function is \begin{align} \delta\left(\int \sqrt{\hat{g}}\,e^{\alpha\phi}d^2x-A\right)\to\frac{1}{A}\delta\left(\int \sqrt{\hat{g}}\,e^{\alpha\phi}d^2x-1\right)~. \end{align} Putting it all together, we obtain \begin{align} &Z(A) = \left.Z\right|_{A=1} A^{-\frac{\chi Q}{2\alpha}-1}= \left.Z\right|_{A=1} A^{-b\frac{\chi}{2}-1}~, &&b=\frac{25-c+\sqrt{(1-c)(25-c)}}{12}~. \label{Eq:scaling}\end{align} Note that, in the region $c\leq1$ where quantum gravity is well defined, $b$ is bounded from below: \begin{align} &b\geq 2, &&\text{for \quad$c\leq1$}~.\label{ineqb} \end{align} For $c>1$, $Z(A)$ becomes complex, which signals an instability of the spacetime against the formation of pinches \cite{Kawai:1983nq,Ambjorn:1985dn}. This can be used to discuss the magnitude of quantum fluctuations of spacetime due to baby universes in 2d Euclidean gravity. As an example, consider a situation in which spacetime is a macroscopic 2d sphere, and a tiny tubular wormhole is attached to it (see the left figure of Fig.~\ref{Fig:random_surface}). This represents the process of a tiny circular baby universe branching off from the circular mother universe and being absorbed back into the mother universe. Overall, the spacetime is a 2d torus. On the other hand, if the spacetime is a macroscopic 2d torus, it represents the process of a circular mother universe splitting into two macroscopic mother universes, which then merge back into a single mother universe (see the right figure of Fig.~\ref{Fig:random_surface}). If the former is not negligible compared to the latter, it can be regarded as a quantum correction due to microscopic fluctuations (baby universes) to the 2d spherical spacetime. Then, after integrating out the contributions of small wormholes, we obtain an effective field theory in which the cosmological constant appears as a dynamical parameter~\cite{Coleman:1988tj,Klebanov:1988eh}.
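As a quick sanity check (a numerical sketch, not part of the derivation), one can verify that $Q/\alpha$ computed from Eq.~\eqref{Eq:Q_and_alpha} reproduces the closed form of $b$ in Eq.~\eqref{Eq:scaling}, and that the bound \eqref{ineqb} holds for sample values of $c\leq1$:

```python
import math

def b_exponent(c):
    # b = Q/alpha with Q = sqrt((25-c)/3), alpha = (sqrt(25-c) - sqrt(1-c))/sqrt(12)
    Q = math.sqrt((25 - c) / 3)
    alpha = (math.sqrt(25 - c) - math.sqrt(1 - c)) / math.sqrt(12)
    return Q / alpha

def b_closed(c):
    # Closed form quoted in Eq. (scaling)
    return (25 - c + math.sqrt((1 - c) * (25 - c))) / 12

for c in [1.0, 0.0, -2.0, -10.0, -100.0]:
    assert abs(b_exponent(c) - b_closed(c)) < 1e-10
    assert b_closed(c) >= 2 - 1e-12   # the bound (ineqb) for c <= 1

print(b_closed(0.0))   # pure gravity: b = 5/2
```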
Unfortunately, as we will see below, this is not the case in the simple 2d Euclidean gravity. From Eq.~\eqref{Eq:scaling} with $\chi=0$, we obtain \begin{align} \left(\text{\# of random surfaces with area $A$ and topology $T^2$}\right) \sim A^{-1}~. \label{Eq:all_contribution}\end{align} This equation includes both the contribution of a macroscopic sphere with a microscopic wormhole and the contribution of a macroscopic torus. The former is estimated as follows. \begin{align} \left(\text{\# of random spheres with a microscopic wormhole}\right) \sim A^{-b-1}\cdot A^2 = A^{-b+1}~. \label{Eq:small_contribution}\end{align} Here $A^{-b-1}$ is the number of random spheres, and $A^2$ is the number of ways to attach the endpoints of the microscopic wormhole. From Eq.~(\ref{ineqb}), we see that $A^{-b+1}\ll A^{-1}$ for large $A$ and that the effect of the microscopic wormhole is negligibly small compared to the macroscopic topology change. In other words, there is no special mechanism to enhance the effect of small wormholes. This is due to the fact that the gravitational coupling is dimensionless in two dimensions ($\sim 1/\sqrt{c}$), and consequently, there is no intrinsic difference between small and large wormholes~\footnote{This is also pointed out in e.g. \cite{Polchinski:1989fn}.}. \begin{figure} \centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (28.5,53.5) .. controls (28.5,30.03) and (48.43,11) .. (73,11) .. controls (97.58,11) and (117.5,30.03) .. (117.5,53.5) .. controls (117.5,76.97) and (97.58,96) .. (73,96) .. controls (48.43,96) and (28.5,76.97) .. (28.5,53.5) -- cycle ; \draw (50,25.5) .. controls (64,-10.56) and (66,-10.69) .. (80.5,25.5) ; \draw (55,25.5) .. controls (64,-4.56) and (66,-4.69) .. (75.5,25.5) ; \draw (50,25.5) .. controls (52,23) and (53,23) .. (55,25.5) ; \draw (50,25.5) .. controls (52,28) and (53,28) .. (55,25.5) ; \draw (75.5,25.5) .. 
controls (77.5,23) and (78.5,23) .. (80.5,25.5) ; \draw (75.5,25.5) .. controls (77.5,28) and (78.5,28) .. (80.5,25.5) ; \draw (178.5,53.5) .. controls (178.5,30.03) and (198.43,11) .. (223,11) .. controls (247.58,11) and (267.5,30.03) .. (267.5,53.5) .. controls (267.5,76.97) and (247.58,96) .. (223,96) .. controls (198.43,96) and (178.5,76.97) .. (178.5,53.5) -- cycle ; \draw (200,45) .. controls (200,65) and (250,65) .. (250,45) ; \draw (205,55.5) .. controls (205,50) and (245,50) .. (245,55.5) ; \draw (135.5,50) node [anchor=north west][inner sep=0.75pt] [rotate=0] [align=left] {{\fontsize{18pt}{18pt}\selectfont $\ll$}} ; \draw (200,-30) node [anchor=north west][inner sep=0.75pt] {$\textcolor[rgb]{0.82,0.01,0.11}{\sim A^{-1}}$}; \draw (50,-30) node [anchor=north west][inner sep=0.75pt] {$\textcolor[rgb]{0.82,0.01,0.11}{\sim A^{-b+1}}$}; \end{tikzpicture} \caption{For fixed large area $A$, the number of random tori is much larger than the number of random spheres with a thin tube.} \label{Fig:random_surface} \end{figure} \subsection{Modification of the model} The Coleman mechanism may be achieved by modifying the model. To do so, we start with the large-$N$ limit of a matrix model. \begin{align} S= N\left( \frac{1}{2}\mathrm{tr}\phi^2- \frac{\lambda}{3}\mathrm{tr}\phi^3 \right), \end{align} where $\phi$ is an $N\times N$ Hermitian matrix. This describes 2d pure gravity ($c=0$) in the scaling limit; see Ref.~\cite{Francesco_1995} for a review and the references therein. One possible modification is to define the action as a polynomial of local actions~\cite{Hamada:2015dja}. Here we consider the simplest modification, namely adding a term corresponding to $\left(\int d^2x\sqrt{g}\right)^2$: \begin{align} S&= N\left( \frac{1}{2}\mathrm{tr}\phi^2- \frac{\lambda_0}{3}\mathrm{tr}\phi^3 \right) - \frac{1}{2}C \left(\frac{1}{3}\mathrm{tr}\phi^3\right)^2~.
\label{Eq:modified_action}\end{align} In fact, in terms of Feynman diagrams, the last term represents the insertion of a pair of $\phi^3$ vertices, which is just a discretization of $\left(\int d^2x\sqrt{g}\right)^2$. Then the partition function is formally evaluated as \begin{align} Z&= \int d\phi \exp\left( -N\left( \frac{1}{2}\mathrm{tr}\phi^2 - \frac{\lambda_0}{3}\mathrm{tr}\phi^3 \right) + \frac{1}{2}C \left(\frac{1}{3}\mathrm{tr}\phi^3\right)^2\right)\label{Eq:modified_Z} \\&=\int d\lambda \int d\phi \exp\left( -N\left( \frac{1}{2}\mathrm{tr}\phi^2 - \frac{\lambda+\lambda_0}{3}\mathrm{tr}\phi^3 \right) - \frac{N^2}{2C}\lambda^2\right) \\&=\int d\lambda \,Z_{\phi^3}(\lambda+\lambda_0) \exp\left(-\frac{N^2}{2C}\lambda^2\right) \\&=\int d\lambda \,Z_{\phi^3}(\lambda) \exp\left(-\frac{N^2}{2C}(\lambda-\lambda_0)^2\right)~, \end{align} where \begin{align} Z_{\phi^3}(\lambda)=\exp\left(Z_{\text{single}}(\lambda)\right):= \int d\phi \exp\left(-N\left(\frac{1}{2}\mathrm{tr}\phi^2- \frac{\lambda}{3}\mathrm{tr}\phi^3 \right)\right)~. \end{align} This fulfills Coleman's idea of considering ensembles of various coupling constants simultaneously: \begin{align} Z=\int d\lambda \,\exp\left(-\frac{N^2}{2C}(\lambda-\lambda_0)^2\right)\exp\left(Z_{\text{single}}(\lambda)\right) =:\int d\lambda w(\lambda) \exp\left(Z_{\text{single}}(\lambda)\right)~. \label{Eq:average}\end{align} Then $\lambda$ is fixed to the peak of the integrand. Unfortunately, however, this model is not well defined because the partition function \eqref{Eq:modified_Z} is divergent. This can be seen from both viewpoints: the sum over the area and the integration over the cosmological constant. We start with the former. In terms of Feynman diagrams, $A$ is the number of vertices. A single insertion of $C\left(\mathrm{tr}\,\phi^3\right)^2$ can be viewed as the insertion of two separate vertices, giving rise to a factor, \begin{align} kN^2C\times A^2~, \end{align} where $k$ is a positive $\mathcal{O}(1)$ constant.
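The step from the first to the second line of Eq.~\eqref{Eq:modified_Z} is a Gaussian (Hubbard-Stratonovich) linearization: with $x=\frac13\mathrm{tr}\,\phi^3$, it rests on the identity $e^{Cx^2/2}=\frac{N}{\sqrt{2\pi C}}\int d\lambda\, e^{N\lambda x-\frac{N^2}{2C}\lambda^2}$. A minimal numerical sketch with illustrative values of $N$, $C$, $x$:

```python
import math

# Check exp(C x^2 / 2) = N / sqrt(2 pi C) * Integral dlam exp(N lam x - N^2 lam^2 / (2C)).
# Illustrative values; x stands in for (1/3) tr phi^3.
N, C, x = 3.0, 0.5, 1.2

def integrand(lam):
    return math.exp(N * lam * x - N**2 * lam**2 / (2 * C))

# Trapezoidal integration over a window wide enough to cover the Gaussian peak at lam = C x / N.
lo, hi, steps = -5.0, 5.0, 200000
h = (hi - lo) / steps
integral = 0.5 * (integrand(lo) + integrand(hi)) + sum(integrand(lo + i * h) for i in range(1, steps))
integral *= h

lhs = math.exp(C * x**2 / 2)
rhs = N / math.sqrt(2 * math.pi * C) * integral
print(abs(lhs - rhs) / lhs)  # relative error of the identity check (small)
```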
By adding multiple insertions, we obtain the factor \begin{align} \exp\left(kN^2C\times A^2\right)~. \end{align} This indicates that the sum over $A$ does not converge because, when $C=0$, the partition function (the number of planar Feynman diagrams) is bounded by an exponential function of $A$. In the latter view, the partition function of a single universe $Z_\text{single}(\lambda)$ is not well defined when $\lambda$ is above a critical value. In the matrix model, $Z_\text{single}$ is given by a power series of $\lambda$ with positive coefficients, \begin{align} Z_\text{single}(\lambda) = N^2\left(a_0+a_1\lambda + a_2\lambda^2 +\cdots \right)= N^2\sum_{A=0}^{\infty} a_A \lambda^A~. \end{align} It has a finite radius of convergence $\lambda_c$, and the critical behavior near $\lambda_c$ is given by\footnote{$\log\lambda$ and $(\lambda_c-\lambda)/\lambda_c$ are regarded as the bare and renormalized cosmological constant, respectively.} \begin{align} \sim \mathrm{const.}N^2\left(\frac{\lambda_c-\lambda}{\lambda_c}\right)^{b}~. \end{align} The complete partition function is obtained by substituting $Z_\text{single}(\lambda)$ into Eq.~\eqref{Eq:average}, but it is divergent because the integration over $\lambda$ includes the region $\lambda>\lambda_c$. This consideration suggests the possibility of realizing the Coleman mechanism by considering a Lorentzian model such as \begin{align} Z_L:= \int d\lambda \exp\left(i N^2\frac{(\lambda-\lambda_0)^2}{2C}\right) \exp\left(Z_{\text{single}}(\lambda)\right) \end{align} instead of the Euclidean model. The remainder of this paper will explore this possibility. \begin{figure} \centering \tikzset{every picture/.style={line width=0.75pt}} \begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1] \draw (195,96) .. controls (198.43,96) and (185.5,76) .. (183,53.5) .. controls (182,46) and (182,38) .. (183,30.5) .. controls (187,10) and (191.33,5) .. (195,2); \draw (260,96) .. controls (260.58,96) and (263,64.97) .. (263.5,58.5) ..
controls (264,50) and (264,41.5) .. (262.5,33.5) .. controls (260,20) and (250.33,10) .. (240,2); \draw [shift={(-10,0)}] (225.5,25) .. controls (235,20) and (235,70) .. (225.5,65) ; \draw [shift={(-10,0)}] (225.5,25) .. controls (220,25) and (220,65) .. (225.5,65) ; \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=1] (240,2) .. controls (230,-5) and (205,-5) .. (195,2) .. controls (205,9) and (230,9) .. (240,2); \draw [color={rgb, 255:red, 0; green, 38; blue, 209 } ,draw opacity=1 ][line width=1] (260,96) .. controls (238.2,86) and (216.6,86) .. (196,96) .. controls (216.6,106) and (238.2,106) .. (260,96); \draw (183,38) node [anchor=north west][inner sep=0.75pt] {\textcolor[rgb]{0.82,0.01,0.11}{{\fontsize{12pt}{12pt}\selectfont $t_A$}}}; \draw (240,38) node [anchor=north west][inner sep=0.75pt] {\textcolor[rgb]{0.82,0.01,0.11}{{\fontsize{12pt}{12pt}\selectfont $t_B$}}}; \draw (130,48) node [anchor=north west][inner sep=0.75pt] [rotate=-270] {\textcolor[rgb]{0,0,0}{{\fontsize{12pt}{12pt}\selectfont time}}}; \draw [shift={(10,0)}, rotate = 0] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (227,65) -- (227,24) ; \draw [shift={(237,20)}, rotate = 90] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw [shift={(237,68)}, rotate = -90] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29); \draw [shift={(-25,0)}, rotate = 0] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (227,65) -- (227,24) ; \draw [shift={(202,20)}, rotate = 90] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. 
controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \draw [shift={(202,68)}, rotate = -90] [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29); \draw [shift={(-75,-20)}, rotate = 0] (227,115) -- (227,24) ; \draw [shift={(152,0)}, rotate = 90] [line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ; \end{tikzpicture} \caption{The topology change of the universe. The process in which one universe splits into two and then merges into one again is shown.} \label{Fig:splitting} \end{figure} Before concluding this section, a general point should be made about Lorentzian gravity. From Geroch's theorem~\cite{Geroch:1967fs}, a singularity exists when a topology change occurs. This creates an ambiguity about time in the separated universes. For example, consider the process of one universe splitting into two and merging back into one. In this case, the relationship between the time of each separated universe is not a priori clear ($t_A$ may or may not be equal to $t_B$ in Figure \ref{Fig:splitting}). This is in contrast to the case of Euclidean gravity, where a common proper time ($t_A=t_B$) should be chosen. \section{Coleman mechanism in Lorentzian multiverse}\label{Sec:Lorentzian} In this section, we propose a non-Hermitian Hamiltonian describing a Lorentzian multiverse and show that the Coleman mechanism actually works in such a model. We also discuss various phenomenological implications. \subsection{Lorentzian Model} We start with the Hamiltonian of the universe in the mini-superspace approximation of 4-dimensional Lorentzian gravity. 
\aln{ H_{\rm mini}^{}=-\frac{p_a^2}{2M_{\rm Pl}^2a}+\frac{a^3}{6}\lambda(a)~,\quad p_a^{}=M_{\rm Pl}^2a\dot{a}~,\label{mini-superspace Hamiltonian} } where $\lambda(a)$ is the energy density of the universe.\footnote{One can see that $H_{\rm mini}^{}=0$ corresponds to the Friedmann equation. } It is convenient to choose the volume $l=a^3/3$ as a dynamical variable instead of $a$. Eq.~(\ref{mini-superspace Hamiltonian}) is then rewritten as \aln{ H_{\rm mini}^{}=-l\frac{p_l^2}{2M_{\rm Pl}^2}+\frac{l}{2}\lambda(l) ~,\quad p_l^{}=\frac{M_{\rm Pl}^2\dot{l}}{l}~. } In the following, we set $M_{\rm Pl}^{}=1$. To describe the multiverse including baby universes, we consider the following second-quantized Hamiltonian: \aln{ \hat{H}_{}^{}&=\frac{1}{2}\int_0^\infty dl\hat{\psi}^\dagger(l)\left(\hbar^2\frac{d}{dl}l\frac{d}{dl}+l\lambda(l)+c\,l(\hat{a}+\hat{a}^\dagger)\right)\hat{\psi}(l) +\int_0^\infty dl\,l\rho^*(l)\hat{\psi}^\dagger(l)~, } where we impose the commutation relations \begin{align} [\hat{\psi}(l),\hat{\psi}^\dagger (l')]=\delta(l-l')~,\quad [\hat{a},\hat{a}^\dagger]=1~. \end{align} Here $\hat{a}$ and $\hat{a}^\dagger$ are the annihilation and creation operators of the baby universe, and $\hat{\psi}(l)$ and $\hat{\psi}^\dagger(l)$ are the annihilation and creation operators of the mother universe of volume $l$. Also $\hat{\psi}^\dagger(l)(\hat{a}+\hat{a}^\dagger)\hat{\psi}(l)$ represents the emission and absorption of the baby universe from the mother universe. We note that $\hat{q}=\hat{a}+\hat{a}^\dagger$ is a conserved quantity because it commutes with the Hamiltonian. We have not introduced terms such as the first and second terms on the right hand side of Eq.~(\ref{Eq:Euclidean_Hamiltonian}) representing the splitting and merging of the mother universes.
This is because in Lorentzian gravity, topology changes of the universe occur through tunneling, but the Euclidean action representing the tunneling barrier between the 3-dimensional macroscopic universes is macroscopically large, so the tunneling probability is strongly suppressed. The last term in $\hat{H}_{}^{}$ describes the process by which a small mother universe arises from nothing by the tunneling effect~\cite{Vilenkin:1982de} (see Fig.~\ref{fig:tunneling}). It is important to note that there is a term that creates the mother universe, but not a term that annihilates the mother universe. This is because if the matter field of the universe is highly excited, the overlap with the ground state is so small that the probability of the universe becoming nothing after a big crunch is expected to be almost zero.\footnote{ Such a contracting universe may bounce back and repeat the cycle due to some quantum gravity effects. } For this reason, it is natural to consider the non-Hermitian effective Hamiltonian of the multiverse. This is important in the discussion that follows. \begin{figure} \begin{center} \includegraphics[width=6cm]{tunneling.pdf} \end{center} \caption{Creation of a small mother universe from nothing. } \label{fig:tunneling} \end{figure} A general initial state is given by \aln{ |i\rangle=\int_{-\infty}^\infty dq f(q)|q\rangle \otimes |i,q\rangle_M^{}~, } where $|q\rangle$ is the eigenstate of $\hat{q}$ with eigenvalue $q$, and $|i,q\rangle_M^{}$ is the state of the mother multiverse. Then, the time evolution is given by \aln{e^{-i\hat{H}t/\hbar}|i\rangle&=\int_{-\infty}^\infty dqf(q) |q\rangle \otimes e^{-i\hat{H}_{q}^{}t/\hbar}|i,q\rangle_M^{} ~, \label{rhoM} } where \aln{ \hat{H}_{q}^{}&=\frac{1}{2}\int_0^\infty dl\hat{\psi}^\dagger(l)\left(\hbar^2\frac{d}{dl}l\frac{d}{dl}+l\lambda_{\rm eff}^{}(l)\right)\hat{\psi}(l) +\int_0^\infty dl\,l\rho^*(l)\hat{\psi}^\dagger(l)~, \label{q hamiltonian} \\ \lambda_{\rm eff}^{}(l)&=\lambda(l)+c\,q~.
} It is clear that $c\,q$ plays the role of the cosmological constant. In the following, we denote the constant piece of $\lambda_{\rm eff}^{}(l)$ as $\lambda$. Then the integral over $q$ in Eq.~(\ref{rhoM}) can be regarded as an ensemble average over the cosmological constants. Note that the generalization to other coupling constants is straightforward. In fact, if we want to realize an ensemble average over the coupling constant $g_i^{}$ corresponding to a local operator ${\cal O}_i^{}(x)$, we can simply introduce another baby universe via \aln{ (\hat{a}_i^{}+\hat{a}_i^\dagger)\int d^dx \,{\cal O}_i^{}(x)~, } where $\hat{q}_i^{}=\hat{a}_i^{}+\hat{a}_i^\dagger$ plays the role of $g_i^{}$. This has already been discussed in the original works~\cite{Coleman:1988tj,Coleman:1988cy}. From Eq.~(\ref{rhoM}), we see that the quantum state of the mother multiverse with cosmological constant $\lambda$ is given by\footnote{For simplicity, we do not write the subscript $M$, but $|\Psi_\lambda^{}(t)\rangle$ is the state of the mother multiverse.} \begin{align} \langle \lambda|e^{-i\hat{H}t/\hbar}|i\rangle=f(\lambda)e^{-i\hat{H}_\lambda^{}t/\hbar}|i,\lambda\rangle_M^{}=:f(\lambda)|\Psi_\lambda^{}(t)\rangle~. \label{Eq:psi_q}\end{align} In general, for non-unitary systems, there is no clear definition of probability a priori. However, it is natural to assume that the probability of observing the cosmological constant $\lambda$ is proportional to the norm of Eq.~\eqref{Eq:psi_q}, i.e. \aln{ P(t,\lambda)\propto|f(\lambda)|^2\langle \Psi_\lambda^{}(t)| \Psi_\lambda^{}(t)\rangle~. \label{P} } If the time evolution is unitary, this reduces to $|f(\lambda)|^2$. Then, the cosmological constant is simply determined by the initial wavefunction $f(\lambda)$. On the other hand, in the non-unitary model we are considering, the behavior of Eq.~(\ref{P}) is non-trivial.
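The decomposition in Eq.~(\ref{rhoM}) is simply the statement that a Hamiltonian built out of $\hat{q}=\hat{a}+\hat{a}^\dagger$ block-diagonalizes in the eigenbasis of $\hat{q}$, with each block carrying an effective coupling $q$. The following toy sketch (not the multiverse Hamiltonian itself; the matrices $A$ and $B$ are purely illustrative, and $\hat{q}$ is truncated to a single excitation, so its eigenvalues are $q=\pm1$) makes this concrete:

```python
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

def kron(X, Y):
    n, m = len(X), len(Y)
    return [[X[i // m][j // m] * Y[i % m][j % m] for j in range(n * m)] for i in range(n * m)]

A = [[1.0, 0.3], [0.3, -1.0]]   # "system" Hamiltonian (hypothetical)
B = [[0.5, 0.0], [0.0, -0.5]]   # operator multiplying q (plays the role of c*l)
Q = [[0.0, 1.0], [1.0, 0.0]]    # a + a^dagger, truncated to one excitation
I2 = [[1.0, 0.0], [0.0, 1.0]]

# Full Hamiltonian H = A (x) 1 + B (x) Q
H = [[kron(A, I2)[i][j] + kron(B, Q)[i][j] for j in range(4)] for i in range(4)]

# Eigenbasis of Q: columns (1,1)/sqrt2 (q=+1) and (1,-1)/sqrt2 (q=-1).
s = 1 / math.sqrt(2)
V = kron(I2, [[s, s], [s, -s]])
Ht = matmul(V, matmul(H, V))    # V is symmetric orthogonal, so V^T = V

# Ht consists of two 2x2 blocks A + B (q=+1) and A - B (q=-1) on indices {0,2}, {1,3}.
for i in range(2):
    for j in range(2):
        assert abs(Ht[2*i][2*j] - (A[i][j] + B[i][j])) < 1e-12
        assert abs(Ht[2*i+1][2*j+1] - (A[i][j] - B[i][j])) < 1e-12
        assert abs(Ht[2*i][2*j+1]) < 1e-12 and abs(Ht[2*i+1][2*j]) < 1e-12
print("block-diagonal in the q basis")
```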
The Heisenberg equation for $\hat{\psi}(l)$ is \aln{ i\hbar \frac{\partial \hat{\psi}}{\partial t}&=[\hat{\psi},\hat{H}_q] =\frac{1}{2}\left(\hbar^2\frac{d}{dl}l\frac{d}{dl}+l\lambda_{\rm eff}^{}(l)\right)\hat{\psi}+l\rho^*(l)~,\label{Heisenberg eq} } which looks like a Schr\"{o}dinger equation with a source term $\rho^*(l)$. The inhomogeneous term in this equation implies that universes of various sizes are generated per unit time, while the homogeneous part means that the generated universes expand toward infinite size.\footnote{One can interpret $-\lambda_{\rm eff}^{}(l)$ as the potential energy for the size of the universe.} Therefore, it is expected that the system reaches a stationary coherent state $|\Psi_{st}^{}\rangle$ after a sufficiently long time, so that $\psi_{st}^{}(l):=\langle\Psi_{st}|\hat{\psi}(l)|\Psi_{st}^{}\rangle$ satisfies \aln{ &\left(\hbar^2\frac{d}{dl}l\frac{d}{dl}+l\lambda_{\rm eff}^{}(l)\right)\psi_{st}^{}(l)+l\rho^*(l)=0~. \label{eq:equilibrium} } In fact, we can show that the following $|\Psi_{st}^{}\rangle$ satisfies $\hat{H}_\lambda^{}|\Psi_{st}^{}\rangle=0$ if Eq.~(\ref{eq:equilibrium}) is satisfied: \aln{ |\Psi_{st}^{}\rangle={\cal N}^{1/2}\exp\left(\int_0^\infty dl\hat{\psi}^{\dagger}(l)\psi_{st}^{}(l)\right)|0\rangle~, } where ${\cal N}$ is the normalization constant. Note that this stationary multiverse state corresponds to the multiverse partition function in the path-integral formulation~\cite{Coleman:1988tj,Coleman:1988cy,Kawai:2011rj,Kawai:2011qb,Hamada:2015dja}. In the present formulation, such a state emerges naturally from the non-Hermitian many-body Hamiltonian.
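Away from the source ($l>\epsilon$) and for constant $\lambda_{\rm eff}$, Eq.~(\ref{eq:equilibrium}) reduces to $\hbar^2\frac{d}{dl}\big(l\frac{d\psi}{dl}\big)+\lambda_{\rm eff}\,l\,\psi=0$, which is a Bessel equation whose regular solution is $J_0(\sqrt{\lambda_{\rm eff}}\,l/\hbar)$. A finite-difference sketch with illustrative parameter values (the series implementation of $J_0$ is adequate for moderate arguments):

```python
import math

hbar, lam = 1.0, 2.0   # illustrative values of hbar and the constant lambda_eff

def J0(x):
    # Bessel J_0 via its Taylor series: sum_m (-1)^m (x/2)^{2m} / (m!)^2
    term, total = 1.0, 1.0
    for m in range(1, 60):
        term *= -(x / 2) ** 2 / m**2
        total += term
    return total

def psi(l):
    return J0(math.sqrt(lam) * l / hbar)

def lhs(l, h=1e-4):
    # hbar^2 d/dl ( l dpsi/dl ) + l * lam * psi, via nested central differences
    def f(x):
        return x * (psi(x + h) - psi(x - h)) / (2 * h)
    return hbar**2 * (f(l + h) - f(l - h)) / (2 * h) + l * lam * psi(l)

for l in [0.5, 1.0, 2.5]:
    print(l, lhs(l))   # residuals consistent with zero
```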
The probability distribution of the cosmological constant is now given by \aln{ P_{st}^{}(\lambda)=|f(\lambda)|^{2}\langle \Psi_{st}|\Psi_{st}^{}\rangle =|f(\lambda)|^{2}{\cal N} \exp\left(\frac{1}{2}\int_0^\infty dl|\psi_{st}^{}(l)|^2\right)~.\label{equilibrium distribution} } As argued above, the stationary state is expected as a result of the balance between the inhomogeneous and homogeneous terms. Here we have implicitly assumed that the generated universes expand toward infinity. This is equivalent to the assumption that we consider an ensemble average of coupling constants where $\lambda_{\rm eff}^{}(l)\geq0$ is satisfied.\footnote{This may be viewed as an anthropic principle in a weak sense. We require that the universe can be large, though we do not require the formation of galaxies~\cite{Weinberg:2000yb}. } In the following, we consider a source term of the form \aln{ l\rho^*(l)=\nu\epsilon \delta(l-\epsilon)~, } which means that a mother universe of initial size $\epsilon$ is generated at a rate of $\nu$ per unit time. Moreover, we simply put $\nu\epsilon\rightarrow \nu$. We now summarize the WKB solution for $\psi_{st}^{}(l)$. By expanding $\psi_{st}^{}(l)$ as \aln{\psi_{st}^{}(l)=\exp\left(\frac{i}{\hbar}S_0^{}+iS_1^{}+\cdots\right)~, } we have \aln{ {\cal O}(\hbar^0)&:\ \left(\frac{dS_0^{}}{dl}\right)^2=\lambda_{\rm eff}^{}(l)~, \\ {\cal O}(\hbar^1)&:\ -2l\frac{dS_0^{}}{dl}\frac{dS_1^{}}{dl}+i\left(\frac{dS_0^{}}{dl}+l\frac{d^2S_0^{}}{dl^2}\right)=0~. } These are solved as \aln{S_0^{}(l)=\pm \int^l dl' \sqrt{\lambda_{\rm eff}^{}(l')}~,\quad S_1^{}(l)=\frac{i}{2}\left(\log \sqrt{\lambda_{\rm eff}^{}(l)}+\log l\right)~.
} Thus, the general WKB solution for $l>\epsilon$ is given by \aln{ \psi_{st}^{\rm WKB}(l)=\frac{A}{\sqrt{l}\,\lambda_{\rm eff}^{}(l)^{1/4}} e^{\frac{i}{\hbar}\int^l dl'\sqrt{\lambda_{\rm eff}^{}(l')}}+\frac{B}{\sqrt{l}\,\lambda_{\rm eff}^{}(l)^{1/4}}e^{-\frac{i}{\hbar}\int^l dl'\sqrt{\lambda_{\rm eff}^{}(l')}}~, \label{WKB solution} } where $A$ and $B$ are constants proportional to $\nu$ and are independent of $l$. In the following, we put $\hbar=1$. By substituting this into Eq.~\eqref{equilibrium distribution}, we obtain \aln{ P_{st}^{}(\lambda,\{g_i^{}\}) &={\cal N}|f(\lambda,\{g_i^{}\})|^2 \exp\left[\frac{1}{2}\int_0^{l_{\rm IR}^{}}\frac{d\log l}{\lambda_{\rm eff}^{}(l)^{1/2}} \left|A+B e^{-2i\int^l dl'\sqrt{\lambda_{\rm eff}^{}(l')}}\right|^2 \right]~, \label{equilibrium distribution 2} } where an IR cutoff $l_{\rm IR}$, the maximum size of the universe, is introduced, and we have added the other coupling constants, $\{g_i^{}\}$, to the argument of $P_{st}$ to emphasize that $\lambda_{\rm eff}$ may depend on $g_i^{}$. Since the integrand of this exponent is nonnegative and grows as $\lambda_{\rm eff}^{-1/2}$, the dominant contribution to the integral comes from the neighborhood of the minimum of $\lambda_{\rm eff}$, unless the factor with the absolute value happens to be small. In the remainder of this section, we will use Eq.~\eqref{equilibrium distribution 2} to compute the probability distributions of the cosmological constant and the other coupling constants under these assumptions. Here we consider the following two cases: \begin{itemize} \item Spatially flat universe with a cosmological constant $\lambda$ (Section~\ref{Sec:fine-tuning}). \item Universe with positive spatial curvature, a cosmological constant $\lambda$, and a radiation or matter component of energy density (Section~\ref{Sec:MEP}). \end{itemize} In the former, only the cosmological constant is taken into account.
In the latter, the coupling constants that affect the energy density are considered as well. By explicitly computing Eq.~\eqref{equilibrium distribution 2}, we show that in both cases the probability distribution of $\lambda$ has a sharp peak near zero. This implies that the cosmological constant is fine-tuned to zero. Furthermore, in the latter case, the probability distribution of $g_i^{}$ is maximized at the point where the energy density of radiation or matter in the late universe is maximized. We call this the maximum entropy principle~\cite{Kawai:2011rj,Kawai:2011qb,Hamada:2014ofa,Hamada:2014xra,Hamada:2015dja}, or the maximum matter principle. \subsection{Fine-tuning of the cosmological constant}\label{Sec:fine-tuning} We consider the spatially flat universe with the cosmological constant $\lambda$. The effective vacuum energy is just a constant: \begin{align} \lambda_{\rm eff}=\lambda~. \end{align} As discussed below Eq.~\eqref{equilibrium distribution}, we assume $\lambda$ is non-negative. The probability distribution Eq.~(\ref{equilibrium distribution 2}) becomes \aln{ P_{st}^{}(\lambda) &={\cal N} |f(\lambda)|^2\exp\left[\frac{1}{2}\int_0^{l_{\rm IR}^{}}\frac{d\log l}{\lambda^{1/2}}\left|A+B e^{-2i\int^l dl'\sqrt{\lambda}}\right|^2 \right] \nonumber\\ &\sim|f(\lambda)|^2\exp\left(\frac{\log l_{\rm IR}^{}}{\lambda^{1/2}}\right)~, \label{IR distribution} } which has a sharp peak at $\lambda=0$. Note that the constants $A$ and $B$ are not important, as they are independent of $l$. Here, we again impose the IR cutoff $l_{\rm IR}$ as a maximum size of the universe and assume that the initial wavefunction $f(\lambda)$ does not depend strongly on the parameters compared with the singular exponential factor. Therefore, the cosmological constant is fixed to zero. 
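To get a feel for how sharp this peak is, one can evaluate the exponential factor in Eq.~\eqref{IR distribution} numerically. The sketch below is our own illustration, not part of the original text; in particular the cutoff value $\log l_{\rm IR}^{}=100$ is an arbitrary choice. The log-probability grows like $\lambda^{-1/2}$ as $\lambda\downarrow 0$, so smaller values of $\lambda$ are favored by enormous exponential factors.

```python
import numpy as np

# Log of the exponential factor in the stationary distribution,
# log P ~ (log l_IR) / sqrt(lambda); the smooth prefactor |f|^2 is ignored.
log_l_IR = 100.0  # illustrative IR cutoff (assumption, not from the text)
lam = np.array([1e-1, 1e-2, 1e-3, 1e-4])
logP = log_l_IR / np.sqrt(lam)

for lv, lp in zip(lam, logP):
    print(f"lambda = {lv:.0e}  ->  log P ~ {lp:.1f}")
```

Decreasing $\lambda$ by two orders of magnitude raises the log-probability by one order of magnitude, i.e. the probability ratio between neighboring values of $\lambda$ is a double exponential, which is what "sharp peak at $\lambda=0$" means quantitatively.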
\subsection{Maximum entropy principle and maximum matter principle}\label{Sec:MEP} Next, let us consider a universe with a three-dimensional spherical topology in space whose energy density consists of the cosmological constant and a matter or radiation component. The energy density of the universe is \begin{align} \lambda_{\rm eff}^{}(l)= \begin{cases} \lambda+\dfrac{S}{l^{4/3}}-\dfrac{1}{Gl^{2/3}}\quad(\text{radiation})\\ \\ \lambda+\dfrac{M}{l}-\dfrac{1}{Gl^{2/3}}\quad(\text{matter}) \end{cases}, \end{align} where the first and second lines correspond to the radiation-dominated universe and the matter-dominated universe, respectively. For both cases, the first term is the cosmological constant and the last term is the contribution from the spatial curvature of the universe, where $8\pi G=M_{\rm Pl}^{-2}$. As for the second term, $S/l^{4/3}$ and $M/l$ are the radiation and matter energy densities, with $S$ and $M$ being the total entropy of the radiation and the total energy of the matter, respectively.\footnote{Strictly speaking, this definition of ``entropy'' is different from the usual definition of radiation entropy $S_{\rm rad}^{}\sim \rho_{\rm rad}^{3/4}a^{3}\propto T^3a^{3}$. In our case, we have $S\sim \rho_{\rm rad}a^4 \propto S_{\rm rad}^{4/3}$. } As discussed below Eq.~\eqref{equilibrium distribution}, we assume $\lambda_\mathrm{min}\geq0$. As we will see explicitly, the dominant contribution to the integral in the probability distribution~\eqref{equilibrium distribution 2} comes from the region around the minimum of $\lambda_{\rm eff}^{}$. 
To this end, we expand $\lambda_{\rm eff}^{}(l)$ around its minimum at $l=l_\mathrm{min}$ as \begin{align} \lambda_{\rm eff}^{}(l) = \lambda_\mathrm{min} + \lambda_\mathrm{min}^{(2)}(l-l_\mathrm{min})^2+\mathcal{O}((l-l_\mathrm{min})^3)~, \label{Eq:expansion}\end{align} where \begin{align} &\lambda_\mathrm{min}= \begin{cases} \lambda-\dfrac{1}{4SG^2} \\ \\ \lambda-\dfrac{4}{27G^3M^2} \end{cases}, && \lambda_\mathrm{min}^{(2)}= \begin{cases} \dfrac{1}{72G^5S^4} \quad(\text{radiation})\\ \\ \dfrac{256}{59049G^9M^8}\quad(\text{matter}) \end{cases}, \end{align} and \begin{align} l_\mathrm{min}= \begin{cases} (2GS)^{3/2} \quad (\text{radiation})\\ \left(3GM/2\right)^{3} \quad (\text{matter}) \end{cases}. \end{align} Then, we divide the integral in Eq.~\eqref{equilibrium distribution 2} into two parts: the region around $l=l_\mathrm{min}$ and the remaining contribution. The former is evaluated by substituting Eq.~\eqref{Eq:expansion} into the integral: \begin{align} P_{st}^{}(\lambda,\{g_i^{}\}) &\sim |f(\lambda,\{g_i^{}\})|^2\exp\left( \int^{2l_{\mathrm{min}}}_{l_{\mathrm{min}}/2} \frac{dl}{l_\mathrm{min}}\frac{1}{\sqrt{\lambda_\mathrm{min}+\lambda_\mathrm{min}^{(2)}\left(l_\mathrm{min}-l\right)^2}} + \text{(other)} \right) \nonumber\\ &\sim |f(\lambda,\{g_i^{}\})|^2\exp\left[ \frac{1}{l_{\mathrm{min}} \sqrt{\lambda_\mathrm{min}^{(2)}}} \log\left(\frac{l_\mathrm{min}^2\lambda_\mathrm{min}^{(2)}}{\lambda_\mathrm{min}}\right)+ \text{(other)}\right] \nonumber\\&\sim \begin{cases} \exp\left[ G \sqrt{S} \log\left(\dfrac{1}{\lambda_\mathrm{min}}\right) + \text{(other)} \right] \quad(\text{radiation})\\ \\ \exp\left[ G^{3/2} M \log\left(\dfrac{1}{\lambda_\mathrm{min}}\right) + \text{(other)} \right] \quad(\text{matter}) \end{cases}, \label{Eq:divergent_contribution}\end{align} where $A$ and $B$ are omitted as in Eq.~\eqref{IR distribution}. 
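As a cross-check, not part of the original derivation, the quoted values of $l_\mathrm{min}$, $\lambda_\mathrm{min}$ and $\lambda_\mathrm{min}^{(2)}$ can be verified symbolically; the sketch below substitutes arbitrary positive rational parameter values (our choice) and checks stationarity, the minimum value, and the quadratic coefficient for both the radiation and matter cases.

```python
import sympy as sp

l, lam, S, M, G = sp.symbols('l lam S M G', positive=True)

# (energy density, l_min, lambda_min, lambda_min^(2)) as quoted in the text
cases = {
    'radiation': (lam + S/l**sp.Rational(4, 3) - 1/(G*l**sp.Rational(2, 3)),
                  (2*G*S)**sp.Rational(3, 2),
                  lam - 1/(4*S*G**2),
                  1/(72*G**5*S**4)),
    'matter':    (lam + M/l - 1/(G*l**sp.Rational(2, 3)),
                  (sp.Rational(3, 2)*G*M)**3,
                  lam - sp.Rational(4, 27)/(G**3*M**2),
                  sp.Rational(256, 59049)/(G**9*M**8)),
}

# arbitrary positive parameter values, used only for the numerical check
nums = {G: sp.Rational(13, 10), S: sp.Rational(7, 10), M: sp.Rational(9, 10)}

for name, (rho, l_min, lam_min, lam2) in cases.items():
    checks = [
        sp.diff(rho, l).subs(l, l_min),              # stationarity at l_min
        rho.subs(l, l_min) - lam_min,                # value at the minimum
        sp.diff(rho, l, 2).subs(l, l_min)/2 - lam2,  # quadratic coefficient
    ]
    for expr in checks:
        assert abs(sp.N(expr.subs(nums), 30)) < 1e-15, name
print("expansion coefficients verified")
```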
We observe that, when $\lambda_\mathrm{min}=0$, the integral around $l=l_\mathrm{min}$ is divergent.\footnote{Note that the above divergence is interpreted as the lifetime of the universe in the path-integral formulation~\cite{Kawai:2011rj,Kawai:2011qb,Hamada:2014ofa,Hamada:2014xra,Hamada:2015dja}. } On the other hand, the other contributions to the integral are finite once the IR cutoff $l_{\rm IR}$ is introduced. Therefore, the probability is peaked at $\lambda=\lambda_c$, where $\lambda_\mathrm{min}=0$ is realized: \begin{align} \lambda_c = \begin{cases} \dfrac{1}{4SG^2} \quad(\text{radiation})\\ \\ \dfrac{4}{27G^3M^2} \quad(\text{matter}) \end{cases}. \end{align} Furthermore, the coupling constants other than $\lambda$ are tuned in such a way that Eq.~\eqref{Eq:divergent_contribution} is maximized. Therefore, the entropy of the radiation, $S$, and the energy of the matter, $M$, are maximized for the universe with radiation and matter, respectively. We call these the maximum entropy principle~\cite{Kawai:2011rj,Kawai:2011qb,Hamada:2014ofa,Hamada:2014xra,Hamada:2015dja} and the maximum matter principle. As a result, the fine-tuned cosmological constant, $\lambda_c$, becomes almost zero.\footnote{For the universe with radiation, assuming that $S/l^{4/3}$ equals the energy density of the cosmic microwave background, the fine-tuned cosmological constant, $\lambda_c$, is much smaller than the observed value. The explanation of the small but finite cosmological constant is beyond the scope of this paper.} Let us discuss two phenomenological implications of the maximum entropy principle. One is the flatness of the inflaton potential. Assuming instant reheating, we have \aln{ &S=\rho_{\rm inf}^{}a_{\rm end}^4=\rho_{\rm inf}^{}e^{4N}a_{\rm ini}^{4}~, \label{initial entropy} } where $\rho_{\rm inf}^{}$ is the vacuum energy of inflation, $N$ is the total e-folding number, and $a_{\rm end}^{}$ ($a_{\rm ini}^{}$) is the radius of the universe at the end (onset) of inflation. 
For a given value of $\rho_{\rm inf}^{}$, $S$ is an increasing function of $N$, which means that a flatter inflaton potential is preferred.\footnote{ Of course, we need to explain why the model with finite $N$ is realized in our universe. This may require a new idea. } It is noteworthy that the Higgs potential can actually have a saddle point around the Planck scale~\cite{Hamada:2013mya,Hamada:2014wna,Hamada:2021jls}, by tuning the top-quark mass. This could be viewed as one of the signatures of the maximum entropy principle. The other implication concerns the strength of first-order phase transitions. When the universe undergoes a first-order phase transition, the radiation entropy increases due to the release of latent energy. Suppose that a first-order phase transition happens at the time $t=t_*^{}$ (and temperature $T=T_*^{}$) and that all the latent energy $\Delta V$ is converted to radiation energy. Then, the entropy production is \aln{\delta S=\Delta V a^4(t_*^{})=\frac{\Delta V}{\rho_{\rm rad}^{}(T_*^{})}\rho_{\rm rad}^{}(T_*^{})a^4(t_*^{})=\alpha S_{\rm ini}^{}~, } where $\rho_{\rm rad}^{}(T_*^{})$ is the radiation energy density right before $T=T_*^{}$, $\alpha=\Delta V/\rho_{\rm rad}^{}(T_*^{})$ is the strength parameter of the first-order phase transition, and $S_{\rm ini}^{}$ is the entropy before the phase transition. One can see that $\delta S$ depends linearly on $\alpha$, which means that a strong first-order phase transition is preferred by the maximum entropy principle. The maximum matter principle may also have many implications for particle physics and cosmology, such as various dark matter scenarios, baryogenesis, primordial black holes, and so forth. \section{Conclusion} In this paper, we have studied the validity of Coleman's mechanism for fine-tuning problems in two-dimensional and four-dimensional quantum gravity theories. 
In two-dimensional Euclidean gravity, we have shown that the mechanism does not work because the effect of baby universes is too small. Matrix models can give alternative approaches to realize the mechanism, but their naive non-local modification also does not work since the partition function is divergent. As a concrete example for the realization of Coleman's mechanism, we have proposed a Lorentzian non-Hermitian model of the quantum universe. Such a non-Hermitian property was motivated by the physical intuition that the annihilation of a universe to nothing should be highly suppressed because of the matter fields. We have shown that the stationary distribution of coupling constants, Eq.~(\ref{equilibrium distribution 2}), has a very strong and non-trivial peak depending on the matter content of the universe. In the case of the spatially flat universe with a cosmological constant $\lambda$, the wave function has a peak at the point where $\lambda$ vanishes, and this resembles Coleman's original baby universe theory. In a more realistic universe, we have shown that the distribution has a strong peak at which the entropy or matter energy becomes maximum, and we call this the maximum entropy principle or the maximum matter principle. There are still many open questions that should be addressed: In this paper, we have omitted the kinetic Hamiltonian of the baby universe, i.e.\ $\hbar \omega \hat{a}^\dagger \hat{a}$, for simplicity, but its existence can change the whole dynamics significantly. Moreover, the assumption of non-Hermiticity of the model was not fully justified, and more rigorous reasoning is needed to explain it. We would like to study these issues in future investigations. \section*{Acknowledgements} We would like to thank Pablo Soler for useful discussions. The work of YH was supported by JSPS Overseas Research Fellowships. 
At the final stages of the work, YH was supported by MEXT Leading Initiative for Excellent Young Researchers Grant Number JPMXS0320210099. HK thanks Prof. Shin-Nan Yang and his family for their kind support through the Chin-Yu chair professorship. HK is partially supported by JSPS (Grants-in-Aid for Scientific Research Grants No. 20K03970 and 18H03708), by the Ministry of Science and Technology, R.O.C. (MOST 111-2811-M-002-016), and by National Taiwan University. K.K. would like to thank the Yukawa Institute for Theoretical Physics, Kyoto University, for support and hospitality during his stay under the long-term visiting program.
\section{Introduction} In his seminal work in the 1950s, Feller \cite{feller1, feller2} classified one-dimensional diffusion processes and their boundary behaviour on an interval $[a,b]$ with $-\infty\leq a<b\leq \infty$. Feller identified four types of boundaries of the domain. The definition of each is given in terms of combinations of two fundamental properties (or the absence thereof), namely accessibility, i.e. the point is reachable in finite time from within $(a,b)$, and enterability, i.e. the diffusion started at that point can enter $(a, b)$. The four types of boundary points are: regular, if it is both accessible and enterable; exit, if it is accessible but not enterable; entrance, if it is enterable but not accessible; natural, if it is neither accessible nor enterable. Feller's definitions and proofs are purely analytic, using Hille-Yosida theory to characterize all possible subdomains of $C([a,b])$, the space of continuous functions on $[a,b]$, for second order differential operators $\mathcal A :=\kappa \frac{d}{dx}+\frac{\sigma^2}{2} \frac{d^2}{dx^2}$ to generate a Feller semigroup. Feller's study can be recovered probabilistically using stochastic differential equations (SDEs) and excursion theory to construct so-called sticky boundary behavior; a historical summary can be found in \cite{peskir}. In the present article we will not discuss sticky behavior, so we focus on SDEs of the form \begin{align}\label{BB} dZ_t=\kappa(Z_t)\,{dt}+\sigma(Z_t)\,dB_t, \quad Z_0=z\in \mbox{\rm I\hspace{-0.02in}R}, \end{align} where $(B_t,t\geq 0)$ is a standard Brownian motion. A simple change of space allows one to simplify the degree of generality in the choice of $\kappa$. Indeed, transforming space with the so-called scale function allows a reduction of \eqref{BB} to the driftless SDE \begin{align}\label{B} dZ_t=\tilde\sigma(Z_t)\,dB_t,\quad Z_0=z\in\mbox{\rm I\hspace{-0.02in}R}, \end{align} on a new interval $(\tilde a, \tilde b)$. In the setting of the entire real line, i.e. 
$a=-\infty$ and $b=+\infty$, the notions of entrance (in applications also called coming down from infinity) and exit (explosion) for \eqref{B} become interesting, as they necessitate the range of the diffusion being infinite over an almost surely finite period of time, a property not seen for the Brownian motion alone. It is a standard property (random time-change of a recurrent process) that solutions to \eqref{B} cannot explode in finite time; hence, neither $+\infty$ nor $-\infty$ is accessible. This can also be verified by plugging into Feller's test for explosions; see for instance Karatzas and Shreve \cite{KaratzasShreve}, Section 5.5.C. On the other hand, depending on the growth of $\sigma$ at infinity, the infinite boundary points can be of entrance type. Feller's results for this scenario imply that $+\infty$ is an entrance boundary if and only if \begin{align}\label{Test} \int^{+\infty} x\,\sigma(x)^{-2}\,{dx}<\infty, \end{align} i.e. $\sigma$ grows slightly more than linearly at infinity. An analogous integral test at $-\infty$ characterizes when $-\infty$ is an entrance point.\smallskip In the present article we study a new type of boundary behaviour, namely simultaneous exit and entrance at $+\infty$ and $-\infty${\color{black}; see Figure \ref{fig1}}. The simultaneous infinite boundary point will be denoted by $\pm\infty$. We define entrance (resp. explosion at a finite random time $T$) from $\pm\infty$ to mean that almost surely $\liminf_{t\downarrow 0} Z_t=-\infty$ and $\limsup_{t\downarrow 0} Z_t=+\infty$ (resp. $\liminf_{t\uparrow T} Z_t=-\infty$ and $\limsup_{t\uparrow T} Z_t=+\infty$). Entrance and exit at $\pm\infty$ are forced by an alternation of increasingly large jumps that avoid compact sets in $\mbox{\rm I\hspace{-0.02in}R}$. \begin{figure}[h] \includegraphics[scale=0.5]{p4.pdf} \caption{Entrance from $\pm \infty$ and exit at $\pm\infty$} \label{fig1} \end{figure} We focus our study on so-called stable jump diffusions, i.e. 
stochastic differential equations \begin{align}\label{2} dZ_t=\sigma(Z_{t-})\,dX_t,\quad Z_0=z\in\mbox{\rm I\hspace{-0.02in}R}, \end{align} driven by a stable L\'evy process $ (X_t,t\geq 0)$ with index $\alpha\in (0,2)$ up to a (possibly infinite) explosion time. The boundary case $\alpha=2$ corresponds to the Brownian case studied by Feller. More precisely, we derive necessary and sufficient conditions on $\sigma$ so that (i) non-exploding solutions exist and (ii) the corresponding transition semigroup of $Z$ extends to an entrance point at `infinity' in an appropriate Fellerian way. \section{Main results} Before stating the results let us clarify our notation. A stable process is a L\'evy process with the additional property that, for all $c>0$ and $x\in\mathbb{R}$, \[ (cX_{c^{-\alpha}t}, t\geq 0) \text{ under } \mathbb{P}_x \text{ is equal in law to } (X_t, t\geq 0) \text{ under }\mathbb{P}_{cx}, \] where $(\mathbb{P}_x, x\in\mathbb{R})$ are the probabilities of $X$ and $\alpha\in(0,2)$. As a L\'evy process, a stable process is a Feller process and the semigroup of $X$ is entirely characterised by its characteristic exponent. More precisely, $\Psi(z): = -\log \mathbb{E}\left[ \mathrm{e}^{\mathrm{i}{z} X_{1}}\right]$ satisfies \begin{equation} \Psi(z) = |z|^{\alpha} \left( e^{\pi {\mathrm i} \alpha (\frac{1}{2}-\rho)} {\bf 1}_{\{z>0\}}+ e^{-\pi {\mathrm i} \alpha (\frac{1}{2}-\rho)} {\bf 1}_{\{z<0\}}\right), \quad z\in\mathbb{R}, \label{Psi_alpha_rho_parameterization_process} \end{equation} where we have reserved the special notation $\mathbb{P}$, with expectation operator $\mathbb{E}$, for the law of $X$ when issued from the origin and $\rho={\mathbb P}(X_1>0)$ is the positivity parameter. 
The L\'evy measure associated with $\Psi$ can be written in the form \begin{equation} \Pi({dx})/dx = \Gamma(1+\alpha) \frac{\sin(\pi \alpha \rho)}{\pi} \frac{1}{x^{1+\alpha}}\mathbf{1}_{(x>0)}+ \Gamma(1+\alpha) \frac{\sin(\pi \alpha\hat\rho)}{\pi}\frac{1}{|x|^{1+\alpha}}\mathbf{1}_{(x<0)}, \quad x\in\mathbb{R}, \label{jumpmeas} \end{equation} where $\hat\rho := 1-\rho$. In the case that $\alpha = 1$, we take $\rho = 1/2$, meaning that $X$ corresponds to the Cauchy process. If $X$ has only upwards (resp. downwards) jumps we say $X$ is spectrally positive (resp. negative). If $X$ has jumps in both directions we say $X$ is two-sided. A spectrally positive (resp. negative) stable process with $\alpha\in(0,1)$ is necessarily increasing (resp. decreasing). See for example the recent review Kyprianou \cite{KALEA} for more on this parametric classification of stable processes. \smallskip For a driving stable process $X$ on a filtered probability space $(\Omega, \mathcal A, \mathcal F_t,\P)$ and an initial value $z\in\mbox{\rm I\hspace{-0.02in}R}$, a stochastic process $Z$ on $(\Omega, \mathcal A,\P)$ is called a solution to \eqref{2} up to an explosion time if $Z$ is $\mathcal F_t$-adapted, has almost surely c\`adl\`ag sample paths and, with $T^n=\inf\{t: |Z_t|\geq n\}$, the stopped integral equation \begin{align}\label{2b} Z_{t\wedge T^n}=z+\int_0^{t \wedge T^n} \sigma(Z_{s-})\,dX_s,\quad t\geq 0, \end{align} is satisfied almost surely for all $n\in \mbox{\rm I\hspace{-0.02in}N}$. We denote by $T=\lim_{n\to \infty} T^n$ the (finite or infinite) explosion time of the solution. We note that with this notion, a solution $Z$ is a `local solution on $(-n,n)$' (in the sense of Zanzotto \cite{z2} or \cite{z}) for all $n\in\mbox{\rm I\hspace{-0.02in}N}$.\smallskip When $\alpha\in(1,2)$, the study of weak existence and uniqueness of solutions to \eqref{2} (resp. 
\eqref{2b}) in $\mbox{\rm I\hspace{-0.02in}R}$ is due to Zanzotto \cite{z}, complementing the classical Engelbert--Schmidt theory for one-dimensional Brownian SDEs (see Chapter 5.5 of \cite{KaratzasShreve}). In fact, the main difficulty is to understand existence and uniqueness for solutions at zeros of $\sigma$. The focus of the present article lies on finite time explosion and entrance from infinity, so we always work under the following simplifying assumption that avoids all difficulties in the interior of $\mbox{\rm I\hspace{-0.02in}R}$. \begin{assumption}\label{A} $\sigma$ is continuous and strictly positive. \end{assumption} Time-change techniques are a useful tool in the study of one-dimensional diffusions; see for instance Karatzas and Shreve \cite{KaratzasShreve}, Section 5.5.A. For stable SDEs time-change was the main tool in the study of Zanzotto \cite{z2}, \cite{z}. Under weaker assumptions than our Assumption \ref{A}, Zanzotto proved that for all $n\in \mbox{\rm I\hspace{-0.02in}N}$ there is a unique local solution on $(-n,n)$ so that $(Z_t, t\leq T^n)=(X_{\tau_t},t\leq T^n)$ in distribution, where $\tau_t = \inf\{s>0 : \int_0^s \sigma(X_u)^{-\alpha}du > t\}$. Note that continuity of $\sigma>0$ on $\mbox{\rm I\hspace{-0.02in}R}$ implies that $\sigma$ is bounded away from zero on all intervals $(-n,n)$. Local solutions have a simple consistency property. For $m>n$, a local solution on $(-m,m)$ stopped at $T^n$ is a local solution on $(-n,n)$. Hence, we immediately obtain the following time-change representation of (possibly exploding) solutions. \begin{prop}\label{pr} Suppose that $\sigma$ satisfies Assumption \ref{A} and $z\in\mbox{\rm I\hspace{-0.02in}R}$. 
Then there is a unique (possibly exploding) weak solution $Z$ to the SDE \eqref{2} and $Z$ can be expressed as time-change under $\P_z$ via \begin{align}\label{timechangesolution} Z_t:=X_{\tau_t},\quad t<T, \end{align} where \begin{align}\label{7} \tau_t = \inf\left\{s>0 : \int_0^s \sigma(X_u)^{-\alpha}du > t\right\} \end{align} and the finite or infinite explosion time is $T= \int_0^\infty \sigma(X_s)^{-\alpha}ds$. \end{prop} Henceforth, the law of the unique solution $Z$ as a process on $\mathbb{D}([0,\infty), \mathbb{R})$ will be denoted by $\emph{\rm P}_z$, $z\in\mathbb{R}$, where $\mathbb{D}([0,\infty), \mathbb{R})$ is the space of c\`adl\`ag paths mapping $[0,\infty)$ to $\mathbb{R}$, equipped with the Borel $\sigma$-algebra induced by the Skorokhod topology. \textcolor{black}{We call a finite-time explosion a Feller explosion if the explosion time $T$ is weakly continuous in the Skorokhod topology with respect to the initial condition on $\mathbb{R}$, and $T$ converges weakly to zero as $|x|\to\infty$.} \smallskip In the following two sections we present and discuss tests for \textcolor{black}{Feller} explosion and \textcolor{black}{Feller} entrance from infinity. All proofs are based solely on the time-change representation \eqref{timechangesolution} for solutions of the jump diffusion \eqref{2}; no further SDE calculus is used. \textcolor{black}{The main focus of our constructions lies on entrance from infinity.} \subsection{(Non-)Explosion of stable jump diffusions}\label{S1} In theory, the question of finite time explosion could be resolved immediately from \eqref{timechangesolution} and \eqref{7} by studying finiteness vs. infiniteness of the so-called perpetual integral $\int_0^\infty \sigma(X_s)^{-\alpha}ds$ for the stable process $X$. This is trivial for $\alpha\geq 1$ and the Brownian case due to (set-)recurrence of $X$. 
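The time-change recipe of Proposition \ref{pr} is straightforward to implement on a discretized driver path. The sketch below is our own illustration (function names and the trapezoidal discretization are assumptions, not from the text): it computes the additive functional $A(s)=\int_0^s\sigma(X_u)^{-\alpha}du$ along a grid and reads off $Z_t=X_{\tau_t}$ with $\tau_t$ the generalized inverse of $A$. For constant $\sigma\equiv c$ one should recover the pure scaling $\tau_t=c^{\alpha}t$, which serves as a sanity check.

```python
import numpy as np

def time_changed_path(X, dt, sigma, alpha):
    """Discrete version of Z_t = X_{tau_t} from the time-change representation.

    X     : samples of the driver on the grid 0, dt, 2*dt, ...
    sigma : vectorized coefficient, assumed continuous and strictly positive
    Returns the clock A and a function evaluating Z at a given t.
    """
    integrand = sigma(X) ** (-alpha)
    # trapezoidal approximation of A(s) = int_0^s sigma(X_u)^{-alpha} du
    A = np.concatenate(([0.0], np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * dt)))

    def Z(t):
        # tau_t = inf{s : A(s) > t}; searchsorted finds the first such grid index
        idx = np.searchsorted(A, t, side="right")
        if idx >= len(X):
            raise ValueError("t lies beyond the accumulated clock (explosion horizon)")
        return X[idx]

    return A, Z

# sanity check with a deterministic 'driver' X_s = s and sigma = 2:
# then A(s) = 2^{-alpha} s, so Z_t = tau_t = 2^{alpha} t
s = np.arange(0.0, 10.0, 0.01)
A, Z = time_changed_path(s, 0.01, lambda x: 2.0 + 0.0 * x, 1.5)
print(Z(1.0))  # approximately 2 ** 1.5 = 2.828...
```

With a simulated stable path in place of the deterministic driver, a finite last entry of $A$ relative to the horizon is the discrete shadow of the perpetual integral $T=\int_0^\infty\sigma(X_s)^{-\alpha}ds$ being finite.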
For $\alpha\in(0,1)$ the transience of $X$ implies that finiteness of $\int_0^\infty \sigma(X_s)^{-\alpha}ds$ will depend on the growth of $\sigma$ at infinity. Except for a general 0-1 law for perpetual integrals of L\'evy processes, which implies that finite time explosion is an event of probability $0$ or $1$ (see Lemma 5 of \cite{DK}; the stronger assumptions of \cite{DK} are not needed for the 0-1 law), we are not aware of a sufficiently general result for perpetual integrals that is helpful in this respect. Our first main theorem gives necessary and sufficient conditions for \textcolor{black}{Feller explosion} of stable SDEs \eqref{2} and identifies the infinite almost sure limit at the explosion time, which is either $+\infty$, $-\infty$ or $\pm\infty$. Divergence of the solution $Z$ to $\pm\infty$ at the explosion time means $\limsup_{t\uparrow T} Z_t=+\infty$ and $\liminf_{t\uparrow T} Z_t=-\infty$ almost surely.\smallskip In terms of other work in close proximity to our own, we are only aware of the recent article of Li \cite{Pei} on continuous-state polynomial branching processes, which coincides with our Theorem \ref{zthrm} below for polynomial coefficients $\sigma(x)=x^\theta$ and a spectrally positive driving stable process. However, the use of technology for branching processes excludes generalizations of that article to two-sided jumps. \smallskip In {\color{black}Table 1}, a tick stands for \textcolor{black}{Feller} explosion to the corresponding infinite boundary point, and a cross for almost sure non-explosion. We use the symbols $\uparrow$, $\downarrow$ and $\uparrow \& \downarrow$ to indicate the direction of jumps of the driving stable process. The table is complemented with a final row ($\alpha=2$) representing Feller's test for explosions for Brownian SDEs. \begin{theorem}\label{zthrm} Suppose that $\sigma$ satisfies Assumption \ref{A} and let \begin{align*} I^{\sigma,\alpha}(A) = \int_A \sigma(x)^{-\alpha}|x|^{\alpha -1}d x. 
\end{align*} Then {\color{black}Table 1} exhaustively summarises \textcolor{black}{Feller} explosion for the SDE \eqref{2} issued from any $z\in \mbox{\rm I\hspace{-0.02in}R}$, depending only on $\alpha, \sigma$ and the directions of jumps of the stable driving L\'evy process. \begin{footnotesize} \begin{table}[h!] \label{table1} \caption{\rm Necessary and sufficient conditions for exit at infinite boundary points} \hspace{-0.4cm} \begin{tabular}{|c|c | l| l| l|} \hline $\alpha$ &\scriptsize{Jumps}&$+\infty$&$-\infty$&$\pm\infty$\\ \hline\hline &only $\downarrow$& \ding{55} & \cmark \scriptsize{ iff } $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R}_-)<\infty$ &\ding{55} \\ $<1$&only $\uparrow$& \cmark \scriptsize{ iff } $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R}_+)<\infty $ &\ding{55}&\ding{55} \\ &$\uparrow \& \downarrow$&\ding{55} &\ding{55} &\cmark \scriptsize{ iff } $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R})<\infty$\\ \hline\hline $\scriptsize{=1}$&$\uparrow \& \downarrow$&\ding{55} &\ding{55} &\ding{55} \\ \hline\hline &only $\downarrow$&\ding{55} & \ding{55}&\ding{55} \\ $>1$&only $\uparrow$&\ding{55}&\ding{55} &\ding{55} \\ & $\uparrow \& \downarrow$&\ding{55}&\ding{55}& \ding{55} \\ \hline \hline $=2$&\text{none}&\ding{55}&\ding{55}& \ding{55} \\ \hline \end{tabular} \end{table} \end{footnotesize} \end{theorem} \subsection{Entrance from infinity}\label{S2} After the characterization of infinite exit points we continue with the characterization of entrance from infinity. By analogy to the three types of infinite boundary points for explosion, we distinguish entrance points $+\infty$, $-\infty$ and $\pm\infty$. Alternating entrance from $\pm\infty$ is a new phenomenon. A rigorous formulation will be given in terms of semigroup extensions under which trajectories enter continuously from infinity. Although in the spirit of Feller's work our construction is completely different. 
Feller constructed semigroups through the Hille-Yosida theorem (which gives a Markov process through the Riesz representation theorem), whereas we give explicit probabilistic constructions and then prove that the corresponding semigroup is Feller. It is not clear if the Hille-Yosida approach for diffusions can be extended to jump diffusions, as it involves the need to understand the resolvent equations $(\mathcal A-\lambda I)f=g$. In the diffusive case these are ordinary differential equations and can be solved using the variation of constants formula. For jump diffusions the resolvent equations are integro-differential equations for which explicit solutions are not available.\smallskip Suppose that $S$ is a locally compact metrizable topological space. We write $C_b(S)$ for the space of bounded continuous functions mapping $S$ to $\mbox{\rm I\hspace{-0.02in}R}$. Then $C_b(S)$ is a Banach space with the supremum norm $||\cdot||$. \begin{defn}\rm \label{Fellersemigroupdef} A $C_b$-Feller semigroup is a collection of linear operators $\mathcal{P}=(\mathcal{P}_t, t\geq 0)$ mapping $C_b(S)$ into $C_b(S)$ satisfying \begin{itemize} \item[(i)] $\mathcal{P}_t 1\leq 1$ for all $t\geq 0$ (contraction), \item[(ii)] $\mathcal{P}_t f\geq 0$ for all $f\geq 0$ and $t\geq 0$ (positivity), \item[(iii)] $\mathcal{P}_0={\rm id}$ and $\mathcal{P}_{t+s}=\mathcal{P}_t \mathcal{P}_s$ for all $t,s\geq 0$ (semigroup), \item[(iv)] $\lim_{t\to 0}\mathcal{P}_t f(x)=f({x})$ for all $f\in C_b(S)$ and ${x}\in S$ (weak continuity). \end{itemize} Additionally, $\mathcal P$ is called conservative (or not killed) if \begin{itemize} \item[(i')] $\mathcal{P}_t 1=1$ for all $t\geq 0$. \end{itemize} \end{defn} Semigroups are the natural language with which to describe the transitions of a Markov process. 
A (possibly killed) Markov process $(Y_t,t\geq 0)$ on $S$ with cemetery state $\Delta\notin S$ is a collection $(\texttt{P}_{y},{y}\in S)$ of probability laws on the c\`adl\`ag trajectories $\mathbb{D}([0,\infty), S\cup \{\Delta\})$, mapping $[0,\infty)$ to $S\cup \{\Delta\}$, equipped with the Borel $\sigma$-algebra induced by the Skorokhod topology, such that the canonical process $Y_t(\omega):=\omega_t$, $t\geq 0$, is absorbed at $\Delta$ and satisfies \[ \texttt{E}_{y}[f(Y_t)\,|\sigma(Y_u,u\leq s)]=\texttt{E}_{y}[f(Y_t)\,|\sigma(Y_s)],\qquad \texttt{P}_{y}\text{-almost surely}, \] for all ${y}\in S$, $0\leq s\leq t$ and $f\in C_b(S)$. If $\mathcal P$ is conservative, then the killing time is infinite almost surely. If we define from a Markov process $(\texttt{P}_{y},{y}\in S)$ the so-called transition operators \begin{equation} \mathcal{P}_{t} f({y}): = \texttt{E}_{y}[f(Y_t)], \qquad t\geq 0, {y}\in S, f\in C_b(S), \label{semigroupMarkov} \end{equation} then conditions (i)-(iii) hold. However, it is not necessarily the case that $\mathcal P_tf$ is continuous and (iv) holds. Conversely, for a given Feller semigroup $\mathcal{P}$ there is a (possibly killed) strong Markov process $(\texttt{P}_x: x\in S)$ on $S$ with transition semigroup $\mathcal{P}$ in the sense of \eqref{semigroupMarkov}. In that case we refer to $Y$ as a (conservative) Feller process. We refer the reader for instance to Chapter 17 of Kallenberg \cite{Kallenberg} for a full account of the theory.\smallskip The main finding of this article is that there are three types of infinite entrance boundaries in the presence of jumps. 
In this respect, let us denote \begin{align} \overline{\mbox{\rm I\hspace{-0.02in}R}} := \mathbb{R}\cup\{\infty\}, \quad \underline{\mbox{\rm I\hspace{-0.02in}R}} : = \mathbb{R}\cup\{-\infty\}\quad \text{and }\quad\underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}} : = \mathbb{R}\cup\{\pm\infty\} \label{3R} \end{align} with the usual extensions of the Euclidean topology, i.e. the smallest topology containing all open sets of $\mathbb{R}$ and sets \begin{align}\label{4R} (c,+\infty] \text{ for } \overline \mbox{\rm I\hspace{-0.02in}R},\quad [-\infty,c)\text{ for }\underline \mbox{\rm I\hspace{-0.02in}R} \quad \text{ and }\quad [-\infty, c)\cup (d,+\infty]\text{ for }\underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}}. \end{align} Note that all these sets are metrizable as they are homeomorphic to intervals. It will later play a role that in this way $\underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}}$ is the one-point compactification of $\mbox{\rm I\hspace{-0.02in}R}$. \begin{defn}\rm\label{Fellerdef} We say that $+\infty$ is a (continuous) entrance point for a Feller process $({\texttt P}_x:{x\in \mbox{\rm I\hspace{-0.02in}R}})$ if there is an extension $({\texttt P}_x:{x\in \overline \mbox{\rm I\hspace{-0.02in}R}})$ on the Skorokhod space, continuous in the initial position with respect to the Skorokhod topology, so that \begin{itemize} \item[(i)] the point $+\infty$ is not accessible under ${\texttt P}_x$ for all $x\in\mbox{\rm I\hspace{-0.02in}R}$, \item[(ii)] the corresponding transition semigroup $\mathcal P$ is Feller on $C_b(\overline{\mbox{\rm I\hspace{-0.02in}R}})$, \item[(iii)] there is continuous entrance in the sense that ${\texttt P}_{+\infty}(\lim_{t\downarrow 0} Y_t=+\infty)=1$. \end{itemize} Analogously, we define entrance from $-\infty$ as an extension to $C_b(\underline{\mbox{\rm I\hspace{-0.02in}R}})$ and entrance from $\pm\infty$ as an extension to $C_b(\underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}})=C(\underline{\overline{\mbox{\rm 
I\hspace{-0.02in}R}}})$. \end{defn} Our next result extends Feller's characterization of infinite (continuous) entrance points to stable jump diffusions. In {\color{black} Table 2} below a tick stands for entrance from the corresponding infinite boundary point, a cross for no entrance point. We use the symbols $\uparrow$, $\downarrow$ and $\uparrow \& \downarrow$ to indicate the direction of jumps of the driving stable process. The table is complemented with a final row representing Feller's criterion for entrance from infinity in the Brownian case. We also note that, when the driving noise only has positive jumps, the necessary and sufficient condition in the table is a special form of the one given by (1.25) in \cite{Pei}, where some other equivalent conditions are also given. \begin{theorem}\label{main} Suppose that $\sigma$ satisfies Assumption \ref{A} and let \begin{align*} I^{\sigma,\alpha}(A) = \int_A \sigma(x)^{-\alpha}|x|^{\alpha -1}d x \quad\text{ and } \quad I^{\sigma,1}= \int_\mbox{\rm I\hspace{-0.02in}R} \sigma(x)^{-1}\log |x|d x. \end{align*} Then {\color{black} Table 2} exhaustively summarizes entrance points at infinity depending only on $\alpha, \sigma$ and the directions of jumps of the stable driving L\'evy process. \begin{footnotesize} \begin{table}[h!] 
\caption{\rm Necessary and sufficient conditions for entrance from infinite boundary points}\label{table2} \hspace{-0.4cm} \begin{tabular}{|c| c ||l| c || l|c || l|c |} \hline $\alpha$ &\scriptsize{Jumps}&$+\infty$&\scriptsize{Proof}& $-\infty$&\scriptsize{Proof}&$\pm\infty$&\scriptsize{Proof}\\ \hline\hline &only $\downarrow$& \ding{55} &\tiny{(\S \ref{proof6})}& \ding{55} & \tiny{(\S \ref{proof8})} &\ding{55} &\tiny{(\S \ref{proof6})}\\ $<1$&only $\uparrow$& \ding{55} & \tiny{(\S \ref{proof4.5})} &\ding{55} & \tiny{(\S \ref{proof4})}&\ding{55} &\tiny{(\S \ref{proof4})}\\ &$\uparrow \& \downarrow$&\ding{55}& \tiny{(\S \ref{proof0})}&\ding{55} &\tiny{(\S \ref{proof1})}&\ding{55} &\tiny{(\S \ref{proof2})}\\ \hline\hline $\scriptsize{=1}$&$\uparrow \& \downarrow$&\ding{55} &\tiny{(\S \ref{proof1})}&\ding{55} &\tiny{(\S \ref{proof1})}&\cmark\,\scriptsize{ iff $ I^{\sigma,1}<\infty$} &\tiny{(\S \ref{a=1})} \\ \hline\hline &only $\downarrow$&\ding{55} &\tiny{(\S \ref{proof6})}& \cmark\,\scriptsize{ iff $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R}_-)<\infty$ }&\tiny{(\S \ref{proof5})} &\ding{55} &\tiny{(\S \ref{proof6})}\\ $>1$&only $\uparrow$&\cmark\, \scriptsize{iff $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R}_+)<\infty$}& \tiny{(\S \ref{proof5})} &\ding{55} &\tiny{(\S \ref{proof4})}&\ding{55} & \tiny{(\S \ref{proof4})}\\ & $\uparrow \& \downarrow$&\ding{55}& \tiny{(\S \ref{proof0})}&\ding{55}& \tiny{(\S \ref{proof1})}& \cmark\,\scriptsize{ iff $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R})<\infty$} &\tiny{(\S \ref{proof3})} \\ \hline\hline \scriptsize{$=2$}\text{ }&\text{ none }& \cmark\, \scriptsize{iff $I^{\sigma,2}(\mbox{\rm I\hspace{-0.02in}R}_+)<\infty$} &\text{ Feller }&\cmark\, \scriptsize{iff $I^{\sigma,2}(\mbox{\rm I\hspace{-0.02in}R}_-)<\infty$}& \text{ Feller } &\ding{55} \quad\quad\quad\quad\quad\quad\quad\text{ }&\text{ ------ }\\ \hline \end{tabular} \end{table} \end{footnotesize} \end{theorem} Without loss of generality, throughout the article 
we will study entrance from infinity for the SDE \eqref{2} killed upon first hitting the origin, denoted by $Z^\dagger$. The time-change representation from Proposition \ref{pr} holds unchanged, replacing the stable process $X$ by the stable process $X^\dagger$ killed at the origin. The additional killing is crucial to apply stochastic potential theory (killing makes solutions transient) but does not restrict the generality of our results for the following reasons. \smallskip (i) If $\alpha\leq 1$, then solutions almost surely do not hit the origin, hence, no killing occurs. This is a consequence of the time-change representation \eqref{timechangesolution} and the fact that points are polar for stable processes with $\alpha\leq 1$. \smallskip (ii) If $\alpha>1$, then solutions to \eqref{2} might be killed at zero in finite time. For all initial conditions, solutions are weakly unique, non-explosive and known to be $C_b$-Feller on $\mbox{\rm I\hspace{-0.02in}R}$ (see for instance van Casteren \cite{Cas}, but note that the statement there is stronger than what is proved, and ${\mathcal P}_tf$ does not necessarily vanish at infinity). To construct a Markov process without killing at $0$ from the killed solution, one proceeds as follows. Take the killed process up to the killing time and thereafter `glue' on a new unkilled solution started from $Z_0=0$. The reader should keep in mind that constructing Markov processes by gluing two processes is far from easy and the literature is limited. For continuous processes we refer the reader to Nagasawa \cite{N2}; the gluing results needed for the present article can be found for instance in Werner \cite{Flo}, Theorem 1.6. The process obtained by gluing is not only Markov but also Feller, which follows directly from an inspection of its resolvent operator. 
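The integral tests of Theorem \ref{main} can be explored numerically. The following sketch assumes the illustrative coefficient $\sigma(x)=(1+|x|)^{\beta}$ (our choice, not from the article), for which the integrand $\sigma(x)^{-\alpha}|x|^{\alpha-1}$ behaves like $|x|^{\alpha-1-\alpha\beta}$ at infinity, so $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R}_+)$ is finite precisely when $\beta>1$; truncated integrals make the dichotomy visible.

```python
# Numerical exploration of the integral test in the main theorem, assuming the
# illustrative coefficient sigma(x) = (1+|x|)**beta.  The integrand
# sigma(x)^(-alpha) * |x|^(alpha-1) behaves like |x|^(alpha-1-alpha*beta) at
# infinity, so the integral over R_+ is finite precisely when beta > 1.

def tail_integral(alpha, beta, upper, n=100000):
    # crude midpoint rule for int_1^upper (1+x)^(-alpha*beta) * x^(alpha-1) dx
    h = (upper - 1.0) / n
    total = 0.0
    for i in range(n):
        x = 1.0 + (i + 0.5) * h
        total += (1.0 + x) ** (-alpha * beta) * x ** (alpha - 1.0) * h
    return total

alpha = 1.5
conv = [tail_integral(alpha, 2.0, u) for u in (1e2, 1e4)]   # beta = 2 > 1
div = [tail_integral(alpha, 0.5, u) for u in (1e2, 1e4)]    # beta = 1/2 < 1
assert conv[1] - conv[0] < 0.05 * conv[0]   # truncations stabilise: entrance
assert div[1] > 5.0 * div[0]                # truncations keep growing: no entrance
```

For $\beta>1$ the truncated integrals stabilise, consistent with a tick in Table 2; for $\beta<1$ they diverge, consistent with a cross.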
\subsection{Proof strategy for entrance from infinity} {\it (i) Direct arguments using the time-change representation \eqref{timechangesolution}.} That there is no entrance from infinity in the impossible cases is argued as follows. If the stable process itself `diverges at fixed levels' for large starting conditions (i.e. does not hit intervals or has diverging overshoots), then the time-change in the representation of Proposition \ref{pr} cannot prevent solutions to the SDE \eqref{2} from having the same property. Such arguments explain all the crosses in the tables. \smallskip {\it (ii) Spatial-inversion.} To construct the semigroup extensions at infinity we proceed similarly in all cases. In Section \ref{sec_transforms} we prove an extension of the so-called Riesz--Bogdan--\.{Z}ak transformation, which we then use to write solutions to \eqref{2} as a time-change of the spatial inversion $x\mapsto 1/x$ of a certain $h$-transformed process $\hat X^\circ$. As such, starting the SDE from infinity is equivalent to starting $\hat X^\circ$ from $0$ and insisting on a suitably well-behaved time-change. For the first of these two, we can reduce the entrance of the auxiliary process $\hat X^\circ$ at 0 to recent results on self-similar Markov processes, while the time-change can be controlled using recent explicit potential formulas.\smallskip {\it (iii) Time-reversal.} If infinity is an entrance point, we show (using the strong Markov property as a consequence of the Feller property) that the point $0$ (for $\alpha>1$), resp. the interval $(-1,1)$ (for $\alpha=1$), is hit in finite time. For $\alpha>1$ we use time-reversal to derive the integral test from a perpetual integral of a well-behaved L\'evy process. For $\alpha=1$ we apply an extended version of a transience result due to Getoor; see the Appendix. \subsection{Proof strategy for explosion} All recurrent cases are easily dealt with since the explosion time $T= \int_0^\infty \sigma(X_s)^{-\alpha}ds$ is obviously infinite almost surely if $X$ is recurrent. 
Since explosion is equivalent to entrance from $0$ of the space-inverted time-reversal (which can be identified as a time-change of the stable process itself), in the transient case $\alpha\in(0,1)$ we can work with a transience argument for $X$. \section{Self-similar Markov processes and stable processes}\label{background} The techniques we use to prove Theorem \ref{main} make significant use of the fact that the driving stable L\'evy process in the SDE \eqref{2} is also a self-similar Markov process. We will use many facts and results that have only very recently been developed in the field of self-similar Markov processes. As our desire is to keep the article mostly self-contained, we devote this section to a brief overview of the recent results that are needed. In particular, we look at how the theory of self-similar Markov processes plays into the setting of stable L\'evy processes. \smallskip The reader will quickly realise that there are many different types of processes that are involved in our analysis, not least in this section. For this reason, we include, as an annex at the end of this article, a glossary of mathematical symbols. \subsection{Positive self-similar Markov processes}\label{pssMpsect} A regular strong Markov family ${P}_z$, $z> 0$, with c\`adl\`ag paths on the state space $(0,\infty)$, with $0$ being an absorbing cemetery state, is called a positive self-similar Markov process of index $\alpha>0$ (briefly pssMp) if the scaling property holds \begin{align}\label{self_sim} \text{The law of $(c\mathcal{X}_{c^{-\alpha}t},t\geq 0)$ under ${P}_z$ is ${P}_{cz}$}, \end{align} for all $z, c>0$. The analysis of positive self-similar Markov processes is fundamentally based on the seminal work of Lamperti \cite{L72} (see also Chapter 13 of \cite{Kbook} for an overview). 
Lamperti's result gives a bijection between the class of pssMps and the class of L\'evy processes, possibly killed at an independent exponential time with cemetery state $-\infty$, such that, under ${P}_z$, $z>0$, \begin{align} \label{pssMpLamperti} \mathcal{X}_t=\exp(\xi_{\varphi_t}),\qquad t\leq I_\infty : = \int_0^\infty \exp(\alpha\xi_u)du, \end{align} where $\varphi_t=\inf\{s>0: \int_0^s \exp(\alpha \xi_u)du >t \}$ and the L\'evy process $\xi$ is started at $\log z$. \smallskip It is a consequence of the Lamperti representation \eqref{pssMpLamperti} that pssMps can be split into conservative and non-conservative regimes. If $\zeta$ denotes the first hitting time of 0 by $\mathcal{X}$, then \begin{align}\label{999} \begin{split} \quad\,\,\,{P}_z(\zeta <\infty)=1\text{ for all }z>0\quad &\Longleftrightarrow\quad\xi \text{ drifts to }-\infty\text{ or is killed},\\ \quad\,\,\,{P}_z(\zeta<\infty)=0\text{ for all }z>0\quad&\Longleftrightarrow\quad\xi\text{ drifts to }+\infty \text{ or oscillates}. \end{split} \end{align} The dichotomy \eqref{999} can be used to decide whether a pssMp is transient or recurrent by examining the corresponding L\'evy process $\xi$. We will see this methodology employed in later sections.\smallskip For the present article, we shall need one of several continuations of Lamperti's work. For the conservative case ${P}_z(\zeta<\infty)=0$, an important question to ask is: When is it possible to treat $0$ as an entrance point? More precisely, one asks for a Feller extension $({P}_z,z\geq 0)$ of $({P}_z,z> 0)$. It was shown incrementally in Bertoin and Yor \cite{BY02}, Caballero and Chaumont \cite{CC}, Chaumont et al. \cite{CKPR} and also in Bertoin and Savov \cite{BS} that, if the ascending ladder height process of $\xi$ is non-lattice, $0$ is an entrance point for $\mathcal{X}$ if and only if the overshoot distribution of $\xi$ over asymptotically large levels converges. 
That is to say, if $(\mathbf{P}_x, x\in\mbox{\rm I\hspace{-0.02in}R})$ are the distributions of $\xi$ and $\bf P=\bf P_0$, then $0$ is an entrance point for $\mathcal{X}$ if and only if \begin{align}\label{C2} \lim_{x\uparrow \infty} {\bf P}(\xi_{\varsigma^+_x}-x\in dy), \qquad y\geq 0, \end{align} exists in the sense of weak convergence, where $\varsigma^+_x:=\inf\{t>0:\xi_t\geq x\}$. If \eqref{C2} holds, then one says the L\'evy process $\xi$ has stationary overshoots. The probabilistic condition of stationary overshoots is complicated to verify directly but has an explicit analytic counterpart in terms of the L\'evy triplet (see for instance Chapter 7 of \cite{Kbook}). In this paper, when encountering the need to verify stationary overshoots as such, we will do so directly. \subsection{Real-valued self-similar Markov processes}\label{sec:rssMp} A real self-similar Markov process (rssMp) extends the notion of a pssMp in that the requirement that the process be positive is dropped, allowing the exploration of $\mathbb{R}$ until absorption at the cemetery state 0 (if at all). Significant effort has been invested in the last few years to extend the theory of pssMps to the setting of $\mathbb{R}$. The description below is the culmination of the work in \cite{GV, VG, Kiu, Chy-Lam} with more recent clarity given in Chaumont et al. \cite{CPR}, Kuznetsov et al. \cite{KKPW} and Dereich et al. \cite{DDK}. 
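The stationary-overshoots condition \eqref{C2} can be illustrated with the simplest possible example: for a random walk with standard exponential jumps, memorylessness makes the overshoot over any level exactly Exp(1), so the limit in \eqref{C2} trivially exists. The Monte Carlo sketch below is entirely our own toy setup (seed and sample size are arbitrary).

```python
# Toy illustration of stationary overshoots (C2): a random walk with Exp(1)
# jumps has, by memorylessness, an overshoot over ANY level that is exactly
# Exp(1), so the limiting overshoot law trivially exists.  Our own toy setup.
import random

def overshoot(level, rng):
    s = 0.0
    while s <= level:
        s += rng.expovariate(1.0)
    return s - level

rng = random.Random(42)
n = 10000
means = [sum(overshoot(level, rng) for _ in range(n)) / n
         for level in (5.0, 50.0)]   # the overshoot law should not depend on the level
for m in means:
    assert 0.9 < m < 1.1             # Exp(1) has mean 1; loose Monte Carlo tolerance
```

For a general L\'evy process the overshoot law does depend on the level, and \eqref{C2} asks precisely that it settles down as the level grows.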
\smallskip Analogously to Lamperti's representation, for a real self-similar Markov process $\mathcal{X}$ there is a Markov additive process $((\xi_t,J_t), t\geq 0)$ on $\mbox{\rm I\hspace{-0.02in}R}\times \{-1,1\}$ such that \begin{align}\label{LK} \mathcal{X}_t = J_{\varphi_t}\exp\bigl( \xi_{\varphi_t}\bigr) ,\quad t\leq I_\infty : = \int_0^\infty e^{\alpha\xi_s}ds, \end{align} where $\varphi_t=\inf\{s>0 : \int_0^s \exp(\alpha \xi_u)du >t\}$ and $(\xi_0,J_0)=(\log |z|,[z])$ with $$ [z]=\begin{cases} 1 &\mbox{ if } z>0, \\-1 &\mbox{ if }z<0.\end{cases} $$ The representation \eqref{LK} is known as the Lamperti--Kiu transform. Here, by Markov additive process (MAP), we mean the regular strong Markov process with probabilities ${\bf P}_{x,i}$, $x\in\mathbb{R}$, $i\in\{-1,1\}$, such that $(J_t, t\geq 0)$ is a continuous time Markov chain on $\{-1,1\}$ (called the modulating chain) and, for any $i\in \{-1,1\}$ and $s,t\geq 0$, \begin{align*} &\text{given }\{J_t=i\},\text{ the pair }(\xi_{t+s}-\xi_t,J_{t+s})_{s\geq 0} \text{ is independent of the past}\\%\mathcal F_t\\ &\qquad\text{ and has the same distribution as }(\xi_s, J_s)_{s\geq 0}\text{ under }{\bf P}_{0,i}. \end{align*} If the MAP is killed, then $\xi$ is sent to the cemetery state $\{-\infty\}$. All background results for MAPs that relate to the present article can be found in the Appendix of Dereich et al. \cite{DDK}.\smallskip The mechanism behind the Lamperti--Kiu representation is thus simple. The modulation $J$ governs the sign and, on intervals of time for which there is no change in sign, the Lamperti--Kiu representation effectively plays the role of the Lamperti representation of a pssMp. In a sense, the MAP formalism gives a concatenation of signed Lamperti representations between times of sign change. 
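The time changes in \eqref{pssMpLamperti} and \eqref{LK} can be made concrete with a deterministic toy example: taking $\xi_u=\log z+cu$, the clock $\int_0^s e^{\alpha\xi_u}du$ inverts in closed form, and the resulting path $z(1+\alpha c t z^{-\alpha})^{1/\alpha}$ satisfies the scaling property \eqref{self_sim} exactly. The parameter values below are illustrative.

```python
# Deterministic sketch of the Lamperti representation (pssMpLamperti): for the
# toy choice xi_u = log(z) + c*u the clock is
# int_0^s e^(alpha*xi_u) du = z^alpha (e^(alpha*c*s) - 1)/(alpha*c), which can
# be inverted in closed form, giving X_t = z (1 + alpha*c*t*z^(-alpha))^(1/alpha).
import math

alpha, c = 1.5, 0.7   # illustrative values

def lamperti_X(z, t):
    phi = math.log(1.0 + alpha * c * t * z ** (-alpha)) / (alpha * c)  # phi_t
    return z * math.exp(c * phi)                                       # exp(xi_{phi_t})

# The time-changed construction agrees with the closed form ...
for t in (0.0, 0.5, 2.0, 10.0):
    assert abs(lamperti_X(1.0, t) - (1.0 + alpha * c * t) ** (1.0 / alpha)) < 1e-12

# ... and the scaling property (self_sim) holds exactly:
# k * X_{k^{-alpha} t} started from z equals X_t started from k*z.
k, z, t = 2.0, 1.3, 3.0
assert abs(k * lamperti_X(z, k ** (-alpha) * t) - lamperti_X(k * z, t)) < 1e-12
```

With a genuine L\'evy (or MAP) path in place of the linear drift, the clock must be computed numerically, but the same inversion applies path by path.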
\begin{rem}\rm\label{rssMpispssMp} Typically one can assume the Markov chain $J$ to be irreducible, as otherwise the corresponding self-similar Markov process switches signs at most once and can therefore be treated using the theory of pssMps. \end{rem} Analogously to L\'evy processes, one knows that an unkilled MAP $(\xi,J)$ either drifts to $+\infty$ (i.e. $\lim_{t\uparrow\infty}\xi_t=+\infty$), drifts to $-\infty$ (i.e. $\lim_{t\uparrow\infty}\xi_t=-\infty$) or oscillates (i.e. $\liminf_{t\uparrow\infty}\xi_t=-\infty$ and $\limsup_{t\uparrow\infty}\xi_t=+\infty$), in the almost sure sense. Moreover, just as in the case of pssMps, a simple 0-1 law holds for rssMps, distinguishing the case of conservative processes from non-conservative processes. We have \begin{align*} {P}_z(\zeta<\infty)=1\text{ for all }z\neq 0\quad&\Longleftrightarrow\quad(\xi,J) \text{ drifts to }-\infty \text{ or is killed,} \\{P}_z(\zeta<\infty)=0\text{ for all }z\neq 0\quad&\Longleftrightarrow\quad (\xi,J)\text{ drifts to }+\infty \text{ or oscillates}, \end{align*} where $\zeta=\inf\{t>0: \mathcal{X}_t=0\}$. Generalizing the results for pssMps, the existence of 0 as an entrance point was addressed by Dereich et al. \cite{DDK}. It was shown that a necessary and sufficient condition, in terms of the underlying MAP, for the existence of a Feller extension $(\P_z,z\in \mbox{\rm I\hspace{-0.02in}R})$ of $(\P_z,z\neq 0)$ under which trajectories leave $0$ continuously is weak convergence of the overshoots; that is, \begin{align}\label{239} \lim_{a\to+\infty} {\bf P}_{0,i}&(\xi_{\varsigma_a^+}-a \in dy, J_{\varsigma_a^+} = j), \qquad y\geq 0, i,j\in\{-1,1\}, \end{align} exists in the sense of weak convergence independently of $i\in \{-1,1\}$ and is non-degenerate, where $\varsigma_a^+=\inf\{t>0:\xi_t\geq a\}$. Just as in the pssMp setting, this can be thought of as a natural condition for similar reasons. 
As for L\'evy processes, there is an analytic condition for \eqref{239} in terms of the generalized triplet for MAPs, see Theorem 5 of \cite{DDK}. \subsection{Stable processes and their path functionals as rssMp}\label{sec:stablerssMp} Stable processes and certain types of conditioned stable processes are linked to the theory of self-similar Markov processes. A little care is needed since the definition of a real self-similar Markov process given above asks for $0$ to be absorbing. \begin{itemize} \item[\bf (1)] To discuss stable processes in the light of self-similarity we should remind ourselves of the accessibility of the single point set $\{0\}$, see for instance Chapter 7 of \cite{Kbook}. If we set $\tau^{\{0\}} = \inf\{t>0: X_t = 0\}$, then, for all $x\neq0$, \begin{align*} \mathbb{P}_x\big(\tau^{\{0\}}<\infty \big) = \begin{cases} 0&\text{ if } \alpha\in (0,1]\\ 1&\text{ if } \alpha\in (1,2) \end{cases}. \end{align*} In other words, $\{0\}$ is polar if and only if $\alpha\leq 1$. In response to this observation, it is $(X^\dagger_t , t\geq 0)$ which conforms to our definition of a positive or real self-similar Markov process, where \begin{equation} X^\dagger_t : = X_t\mathbf{1}_{(t<\tau^{\{0\}})},\qquad t\geq 0. \label{Xdagger} \end{equation} This is clearly the case when $0$ is polar as $X^\dagger = X$. However, when 0 is not polar, a little more care is needed in order to verify the scaling property. Indeed, suppose for the moment that we write $(X^{(x)}_t, t\geq 0) $, $x\neq 0$, to indicate the initial value of the process, i.e. $X^{(x)}_0 = x$. Then, for $c>0$, \begin{align*} \tau^{\{0\}}&=\inf\{t>0: X^{(x)}_t =0\} \\ &=c^{-\alpha}\inf\{c^{\alpha}t>0: cX^{(x)}_{c^{-\alpha} c^{\alpha}t} =0\} \\ & =:c^{-\alpha}\inf\{s>0: \tilde{X}^{(cx)}_{s} =0\}\\ &=:c^{-\alpha}\tilde{\tau}^{\{0\}}, \end{align*} where $\tilde{X}^{(cx)}_{s}: = cX^{(x)}_{c^{-\alpha}s}$, $s\geq 0$, is equal in law to $X^{(cx)}_s$, $s\geq 0$. 
With this in hand, we now easily verify that, for $c>0$, \[ cX^{(x)}_{c^{-\alpha}t}\mathbf{1}_{(c^{-\alpha}t<\tau^{\{0\}})} = \tilde{X}^{(cx)}_{t}\mathbf{1}_{(t< \tilde{\tau}^{\{0\}})}, \qquad t\geq 0, \] and, as such, the right-hand side is equal in law to $(X^\dagger, \mathbb{P}_{cx})$. \smallskip From Section \ref{sec:rssMp} we see that there is a family of MAPs corresponding to the family of killed stable processes through the Lamperti--Kiu representation. A characterisation of this family $(\xi, J)$ was uncovered in Chaumont et al. \cite{CPR} (see also Kuznetsov et al. \cite{KKPW}). From the characterisation it can be deduced that \begin{itemize} \item $(\xi,J)$ drifts to $+\infty$ if $\alpha\in(0,1)$ and $\xi$ is not the negative of a subordinator, \item $(\xi, J)$ drifts to $-\infty$ if $\alpha\in(0,1)$ and $\xi$ is the negative of a subordinator, \item $(\xi, J)$ oscillates if $\alpha = 1$, \item $(\xi, J)$ drifts to $-\infty$ if $\alpha\in(1,2)$. \end{itemize} This path behaviour is consistent through the Lamperti--Kiu representation with the fact that, as a Markov process, a stable L\'evy process \begin{itemize} \item is transient when $\alpha \in(0,1)$, in which case 0 is polar, hence, $\lim_{t\to\infty}|X_t| = \infty$, \item is recurrent when $\alpha =1$ but points are polar, hence, $\limsup_{t\to\infty}|X_t| =\infty$ and $\liminf_{t\to\infty}|X_t| = 0$, \item almost surely hits zero when $\alpha\in(1,2)$ and $\lim_{t\to\tau^{\{0\} }}|X_t| = 0$. \end{itemize} See for instance the discussion around Theorems 7.4 and 7.5 in \cite{Kbook} for these facts. \item[\bf (2)] More examples of rssMps that can be derived from stable processes emerge through special kinds of conditioning. We confine this remark to the setting of two-sided jumps. When $\alpha\in(0,1)$, it was shown in Kyprianou et al. 
\cite{KRS} that, for $x\neq 0$, $A\in\mathcal{F}_t: = \sigma(X_s, s\leq t)$ and each $a>0$, \begin{align}\label{attract} \mathbb{P}^{\circ}_x\big(A{\color{black}\,\cap\,\{t<\tau^{(-a,a)}\}}\big)=\lim_{\varepsilon\to 0}\mathbb{P}_x\big(A{\color{black}\,\cap\,\{t<\tau^{(-a,a)}\}}\,\big|\,\tau^{(-\varepsilon,\varepsilon)}<\infty\big), \end{align} where $\tau^{(-a,a)}=\inf\{t>0: |X_t|<a \}$, defines a consistent family of probability laws such that $(X, \mathbb{P}^{\circ}_x)$, $x\neq 0$, defines a rssMp, referred to as the stable process conditioned to continuously absorb at the origin. Additionally, they show that, irrespective of the point of issue, the absorption time is almost surely finite and that $\P^\circ$ is an $h$-transform of $\P$ via \begin{equation}\label{updownCOM} \left.\frac{{\rm d}\mathbb{P}^\circ_x}{{\rm d}\mathbb{P}_x}\right|_{\mathcal{F}_t} = \frac{h(X_t)}{h(x)}, \qquad t\geq 0, x\in\mathbb{R}\backslash\{0\}, \end{equation} where \begin{align}\label{h} h(x): = -\Gamma(1-\alpha) \left(\dfrac{\sin(\pi\alpha\hat\rho)}{\pi}\mathbf{1}_{(x\geq 0)}+ \dfrac{\sin(\pi\alpha\rho)}{\pi}\mathbf{1}_{(x<0)}\right) |x|^{\alpha-1}, \quad x\in\mathbb{R}. \end{align} Moreover, when $\alpha\in(1,2)$, it was also shown in Kyprianou et al. \cite{KRS} as well as in Chaumont et al. \cite{CPR} that, for $x\neq 0$ and $A\in\mathcal{F}_t$, \begin{align}\label{avoid} \mathbb{P}^{\circ}_x(A) =\lim_{a\to \infty}\mathbb{P}_x\big( A{\color{black}\,\cap\,\{t<\tau^{(-a,a)^{\rm c}}\}}\,\big|\,\tau^{(-a,a)^{\rm c}}<\tau^{\{0\}}\big), \end{align} where $\tau^{(-a,a)^c}=\inf\{t>0: |X_t|\geq a \}$, also defines a consistent family of probability laws such that $(X, \mathbb{P}^{\circ}_x)$, $x\neq 0$, defines a rssMp referred to as the stable process conditioned to avoid the origin. Moreover, the absolute continuity \eqref{updownCOM} is still valid, albeit with $X$ replaced by $X^\dagger$. 
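As a numerical sanity check on the prefactor in \eqref{h}: in the spectrally negative case $\rho=1/\alpha$ one has $\alpha\hat\rho=\alpha-1$, and the reflection formula $\Gamma(\alpha)\Gamma(1-\alpha)=\pi/\sin(\pi\alpha)$ reduces $-\Gamma(1-\alpha)\sin(\pi\alpha\hat\rho)/\pi$ to $1/\Gamma(\alpha)$ (the simplification that reappears in \eqref{speconesidedh} below). The floating-point check below uses sample values of $\alpha\in(1,2)$.

```python
# Check of the prefactor in (h) for the spectrally negative case rho = 1/alpha:
# then alpha*rho_hat = alpha - 1 and, by the reflection formula
# Gamma(alpha) * Gamma(1-alpha) = pi / sin(pi*alpha), the prefactor
# -Gamma(1-alpha) * sin(pi*alpha*rho_hat)/pi equals 1/Gamma(alpha).
import math

for alpha in (1.2, 1.5, 1.8):        # sample values in (1, 2)
    rho_hat = 1.0 - 1.0 / alpha      # so that alpha * rho_hat = alpha - 1
    pref = -math.gamma(1.0 - alpha) * math.sin(math.pi * alpha * rho_hat) / math.pi
    assert abs(pref - 1.0 / math.gamma(alpha)) < 1e-12
```

Note that `math.gamma` accepts negative non-integer arguments, so $\Gamma(1-\alpha)$ for $\alpha\in(1,2)$ is evaluated directly.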
\smallskip It is a straightforward exercise to show that expectations of the form $\mathbb{E}^\circ_x[f(cX_{c^{-\alpha}s}, s\leq t)]$, where $f$ is bounded and measurable, transform to $\mathbb{E}^\circ_{cx}[f(X_s, s\leq t)]$ thanks to the shape of the $h$-transform and the inherent scaling of the stable process. Said another way, the process $(X, \mathbb{P}^\circ_x)$, $x\neq 0$, is a rssMp. \smallskip The reader may be left wondering whether either of these two conditionings applies when $\alpha = 1$ even though the $h$-transform becomes trivial. Clearly, conditioning to avoid the origin is meaningless as 0 is inaccessible for the Cauchy process. It also turns out that conditioning the Cauchy process to continuously absorb at the origin cannot be made sense of. In this way the Cauchy process asserts itself again as a distinguished intermediary case in the class of stable processes. \end{itemize} Amongst the above examples of rssMps, i.e. the stable process $X$ killed on hitting the origin, the stable process conditioned to continuously absorb at the origin and the stable process conditioned to avoid the origin, we can examine the existence of $0$ as an entrance point. Clearly, when $\alpha \in (0,1]$, we already know that $X$ issued from 0 is well defined (it never hits 0 again). Moreover, when $\alpha\in(1,2)$, the process $X$ is instantaneously absorbed at 0 when issued there and, hence, the origin cannot serve as an entrance point. When $\alpha\in(0,1)$, it is also clear that 0 cannot serve as an entrance point for the stable process conditioned to continuously absorb at the origin; cf. Kyprianou \cite{KALEA}. However, when $\alpha\in(1,2)$, it is meaningful to check whether $0$ is an entrance point in the sense that there is a Feller extension $\P^\circ_x$, $x\in \mbox{\rm I\hspace{-0.02in}R}$, of $\P^\circ_x$, $x\neq 0$. 
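The scaling argument just sketched rests on the homogeneity $h(cx)=c^{\alpha-1}h(x)$, $c>0$, of the function in \eqref{h}, which the following snippet checks numerically for sample values of $\alpha$ and $\rho$ (our own choices).

```python
# The scaling of the h-transform rests on the homogeneity
# h(c*x) = c^(alpha-1) * h(x), c > 0, of the function in (h).
# alpha and rho below are sample values, not from the article.
import math

alpha, rho = 1.5, 0.4
rho_hat = 1.0 - rho

def h(x):
    s = math.sin(math.pi * alpha * rho_hat) if x >= 0 else math.sin(math.pi * alpha * rho)
    return -math.gamma(1.0 - alpha) * (s / math.pi) * abs(x) ** (alpha - 1.0)

for x in (-2.0, -0.3, 0.7, 5.0):
    for c in (0.5, 3.0):
        assert abs(h(c * x) - c ** (alpha - 1.0) * h(x)) < 1e-12 * abs(h(x))
```

The homogeneity degree $\alpha-1$ matches the self-similarity index through the change of variables $t\mapsto c^{-\alpha}t$, $x\mapsto cx$.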
\begin{lemma}\label{zeroenter} If $ \alpha\in(1,2)$ and the stable process has two-sided jumps, then $0$ is an entrance point for $\mathbb{P}^\circ_x$, $x\neq 0$. \end{lemma} \begin{proof} As remarked above, the conditioned processes are also rssMps, so we can apply the results on entrance from $0$ for rssMps (see Section \ref{sec:rssMp} and the necessary and sufficient condition \eqref{239} in particular). From the Lamperti--Kiu representation there is a corresponding MAP that we denote by $(\xi^\circ,J^\circ)$ for which we need to check convergence of overshoots \eqref{239}. One could either try to appeal to the analytic condition of Dereich et al. \cite{DDK} for the convergence of overshoots or invoke known overshoot formulas for stable processes. \smallskip To carry out the second option, we note that, due to the Lamperti--Kiu representation, the range of $J^\circ_t\exp(\xi^\circ_t)$, $t\geq 0$, agrees with that of the conditioned process. Therefore, using the absolute continuity relation \eqref{updownCOM}, \eqref{239} is equivalent to the existence of the weak limit \begin{equation}\label{othercheck} \lim_{|x|\to0}\mathbb{P}^\circ_x(X_{\tau^{(-1,1)^c}}\in dy)=\lim_{|x|\to0}\frac{h(y)}{h(x)}\mathbb{P}_x\big(X_{\tau^{(-1,1)^c}}\in dy, \tau^{(-1,1)^c}<\tau^{\{0\}}\big), \qquad |y|\geq 1, \end{equation} in the sense of weak convergence, where $\tau^{(-1,1)^c} = \inf\{t>0:| X_t|\geq1\}$. Fortunately, there are fluctuation identities known in explicit form in the existing literature, which enable us to deal with the right-hand side of \eqref{othercheck} directly. Indeed, in this case, we may appeal to Corollary 2 of Kyprianou \cite{deep1}, which tells us, e.g. 
when $y>1$ and $x\in(0,1)$, for all $\alpha\in(0,2)$, \begin{align}\label{2sideexit} & \mathbb{P}_x\big(X_{\tau^{(-1,1)^c}}\in dy, \tau^{(-1,1)^c}<\tau^{\{0\}}\big)/dy\notag\\ &=\frac{\sin(\pi\alpha\rho)}{\pi}(1+x)^{\alpha\hat{\rho}}(1-x)^{\alpha\rho}(1+y)^{-\alpha\hat{\rho}}(y-1)^{-\alpha\rho}( y-x)^{-1}\notag\\ &\quad-c_\alpha \frac{\sin(\pi\alpha\rho)}{\pi}(1+y)^{-\alpha\hat{\rho}}(y-1)^{-\alpha\rho} y^{-1}x^{\alpha-1} \int_1^{1/x} (t-1)^{\alpha\rho-1} (t+1)^{\alpha\hat{\rho}-1}\, d t, \end{align} where $c_\alpha = \max\{(\alpha-1),0\}$. Recalling the definition of $h$, we use the above identity together with L'H\^opital's rule to deduce that the right-hand side of \eqref{othercheck} exists. The details for these and other combinations of $x$ and $y$ are left to the reader (see also Remark 6 in Profeta and Simon \cite{PS}). Hence, overshoots of $(\xi^\circ, J^\circ)$ converge and the theory of rssMps implies the claim. \end{proof} To complete this section, we recall a remarkable result which gives a pathwise connection between $X$ and the conditioned processes given in \eqref{attract} and \eqref{avoid}. In the following result, which is due to Bogdan and \.Zak \cite{BZ}, we write $\hat{\mathbb{P}}_{x}$ for the law of $-X$ under $\P_x$. Under $\hat{\P}_x$ the canonical process is again a stable process, the so-called dual stable process. \begin{theorem}[Riesz--Bogdan--\.{Z}ak transform]\label{RBZthrm} Suppose that $X$ under $\mathbb{P}_x$ has two-sided jumps and \begin{align} \eta_t = \inf\left\{s>0 : \int_0^s |X^\dagger_u|^{-2\alpha}{\rm d}u >t\right\},\quad t\geq 0. \label{firsteta} \end{align} Then, for all $x\neq 0$, the law of $(1/X^\dagger_{\eta_t})_{t\geq 0}$ under $\hat{\mathbb{P}}_{x}$ is $\mathbb{P}_{1/x}^\circ$. \end{theorem} In words, this theorem gives a pathwise link (spatial inversion and time-change) between the killed stable process $X$ and the conditioned process ($h$-transform). 
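On a discretized path, the Riesz--Bogdan--\.Zak transform amounts to accumulating the clock $\int_0^s|X^\dagger_u|^{-2\alpha}du$, inverting it on the grid, and inverting space. The following grid sketch is our own discretization; the constant test path is chosen because there $\eta_t=c^{2\alpha}t$ is available in closed form.

```python
# Grid sketch of the Riesz--Bogdan--Zak time change (firsteta): accumulate the
# clock int_0^s |X_u|^(-2*alpha) du by a Riemann sum, invert it on the grid,
# and invert space.  Grid, path and parameters are illustrative choices.

def rbz_path(path, alpha, dt):
    clock = [0.0]
    for x in path[:-1]:
        clock.append(clock[-1] + abs(x) ** (-2.0 * alpha) * dt)
    out = []
    for k in range(len(path)):
        t = k * dt
        j = max(i for i, s in enumerate(clock) if s <= t)  # eta_t on the grid
        out.append(1.0 / path[j])
    return out

# Constant path X = c: the clock runs at rate c^(-2*alpha), so eta_t = c^(2*alpha)*t
# and the transformed path is constant 1/c, matching the closed form.
c, alpha, dt = 2.0, 1.5, 0.5
out = rbz_path([c] * 8, alpha, dt)
assert all(abs(v - 1.0 / c) < 1e-12 for v in out)
```

For a genuine stable path the same two steps apply, with the clock computed by quadrature along the simulated trajectory.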
\subsection{Stable processes and their path functionals as pssMp}\label{pssmpexamples} Stable processes are also naturally linked to pssMps by looking at different functionals of $X$. Three pertinent cases in point are the censored stable process, the radial process and the stable process conditioned to stay positive. \begin{itemize} \item[\bf (1)] For the (positive) censored stable process, define the occupation time of $(0,\infty)$, \begin{align*} A_t = \int_0^t \mathbf{1}_{(X^\dagger_s > 0)} \, d s, \end{align*} and let $\gamma_t = \inf\{ s \ge 0 : A_s > t \}$ be its right-continuous inverse. The process $(X^\dagger_{\gamma_t})_{t\geq 0}$ is what is understood to be the (positive) censored stable process. In words, this is the process formed by erasing the negative components of $X^\dagger$ and shunting together the resulting sections of trajectory so that the temporal gaps are closed. The L\'evy process that underlies its Lamperti representation, say $\xi^{>}$, was found in Theorem 5.5 of Kyprianou et al. \cite{KPW}; up to a multiplicative constant, its characteristic exponent has the form \begin{equation} \Psi^{>}(z)= \frac{\Gamma(\alpha\rho - \iu{z})}{\Gamma(-\iu{z})} \frac{\Gamma(1 - \alpha\rho + \iu{z})}{\Gamma(1 - \alpha + \iu{z})} , \qquad z\in\mathbb{R}. \label{censoredpsi} \end{equation} Note that here we use the convention that $\Psi^{>}(z) = -t^{-1}\log\mathbf{E}^>[\exp({\rm i }z\xi^>_t)]$, $t>0$, and we consistently use this arrangement when citing characteristic exponents of other L\'evy processes. It is not difficult to imagine that one may also consider the analogue of this process when we censor away the positive components of $X$. In that case, the roles of $\rho$ and $\hat{\rho}$ are exchanged on the right-hand side of \eqref{censoredpsi}. \smallskip It is also worthy of note at this point that the censoring procedure of $X^\dagger$ leading to a pssMp is not specific to the stable case. 
Indeed, any rssMp can be censored in the same way and will still result in a pssMp. (We leave it as an exercise to verify this fact; the proof is essentially the same as in the stable setting, see Kyprianou et al. \cite{KPW}.) We will see such an example later in this exposition. \medskip \item[\bf (2)] The radial process of $X$ is nothing more than $|X|$. In general, $|X|$ is not a Markov process as one needs to know the sign of $X$ to determine its increments. However, when $X$ is symmetric, that is to say $\rho = 1/2$, then $|X|$ is Markovian. The same is true of $|X^\dagger|$ since $X = X^\dagger$. Moreover, $|X^\dagger|$ is also a pssMp. The latter can be deduced from symmetry and the Lamperti--Kiu transformation \eqref{LK}; see the discussion in Chapter 13 of \cite{Kbook}. The associated L\'evy process, $ {\xi^{|\cdot|}}$, that underlies the Lamperti transform has characteristic exponent given by \begin{equation} \Psi^{|\cdot|}(z) =\frac{\Gamma(\frac{1}{2}(-{\rm i}z +1 ))}{\Gamma(-\frac{1}{2}{\rm i}z)}\frac{\Gamma(\frac{1}{2}({\rm i}z +1))}{\Gamma(\frac{1}{2}{\rm i}z)}, \qquad z\in\mathbb{R}, \label{a} \end{equation} up to a multiplicative constant. See Caballero et al. \cite{CPP} for further details. \medskip \item[\bf (3)] The stable process conditioned to stay positive is only of interest for our purposes when $X$ does not have monotone paths. Introduced in Chaumont \cite{C96}, it arises from the limiting procedure (which is indeed valid as a definition for any L\'evy process conditioned to stay positive) \begin{equation} \mathbb{P}_x^\uparrow(A): = \lim_{q\downarrow0}\mathbb{P}_x\big(A, \, t<q^{-1} \mathbf{e}\big| X_s \geq 0,\, s\leq q^{-1}\mathbf{e}\big) \label{CTSP} \end{equation} for $A\in\mathcal{F}_t: = \sigma(X_s, s\leq t)$, where $\mathbf{e}$ is an independent and exponentially distributed random variable with unit rate; see also Chaumont and Doney \cite{CD}. 
This defines a new family of probabilities on $\mathbb{D}(\mathbb{R}_+,\mathbb{R}_+)$ and the resulting process $(X, \mathbb{P}^\uparrow_x)$, $x>0$, is what we call the stable process conditioned to stay positive. \smallskip It turns out that the family $\mathbb{P}^\uparrow_x$, $x>0$, is absolutely continuous with respect to $\mathbb{P}_x$, $x>0$, on $(\mathcal{F}_t, t\geq 0)$ via the $h$-transform relation \begin{equation} \left.\frac{{\rm d}\mathbb{P}^\uparrow_x}{{\rm d}\mathbb{P}_x}\right|_{\mathcal{F}_t} = \frac{X_t^{\alpha\hat\rho}}{x^{\alpha\hat\rho}}\mathbf{1}_{(t<\tau^{(-\infty,0)})}, \qquad t\geq 0, x>0, \label{CSP} \end{equation} where $\tau^{(-\infty,0)} = \inf\{t>0: X_t <0\}$. Note that when $X$ is spectrally negative, the $h$-function in the above $h$-transform is precisely the one given in \eqref{h}, that is \begin{equation} h(x) =-\Gamma(1-\alpha) \frac{\sin(\pi(\alpha-1))}{\pi}x^{\alpha-1} = \frac{1}{\Gamma(\alpha)}x^{\alpha-1}, \qquad x\geq 0, \label{speconesidedh} \end{equation} on account of the fact that $\rho = 1/\alpha$ for spectrally negative stable processes. \smallskip Similarly to the conditioned process from the previous section, stable processes conditioned to stay positive are self-similar. The L\'evy process $\xi^\uparrow$ that underpins the Lamperti transform was computed in Caballero and Chaumont \cite{CC} (see also Section 13.4.2 of Kyprianou \cite{Kbook}), and takes the form \begin{equation} \Psi^\uparrow(z)=\frac{\Gamma(\alpha\rho -{\rm i} {z})}{\Gamma(-{\rm i}{z})} \frac{\Gamma(1+{\rm i}{z} + \alpha\hat{\rho})}{\Gamma(1+{\rm i}{z}) }, \qquad z\in\mbox{\rm I\hspace{-0.02in}R}. \label{Psiuparrow} \end{equation} They also proved that $\xi^\uparrow$ drifts to $+\infty$ so that, according to \eqref{999}, $0$ is polar for the stable process conditioned to stay positive.\smallskip \item[{\bf (4)}] Finally, we consider the setting where $\alpha\in(0,1)$ and $X$ has monotone paths. Conditioning an ascending (resp. 
descending) stable subordinator to stay positive (resp. negative) is an uninteresting concept. It is more interesting, however, to condition an ascending (resp. descending) stable subordinator to approach the origin continuously from below (resp. above). \smallskip This was treated by Chaumont \cite{C96} and Kyprianou et al. \cite{KRSe}, where it was shown that for all $x>b >0$, \[ \P^{\circ}_x(A, t< \tau^-_b): = \lim_{\varepsilon\downarrow0}\P_x(A, t< \tau^-_b \,|\, X_{\tau^-_0-}\leq \varepsilon), \qquad t\geq 0, A\in\mathcal{F}_t, \] is well-defined such that, for $x>0$, \begin{equation} \left. \frac{{\rm d}\P^{\circ}_x}{{\rm d}\P_x}\right|_{\mathcal F_t} = \frac{X_t^{\alpha-1}}{x^{\alpha-1}}\mathbbm{1}_{\{X_t \geq 0\}}. \label{subCOM} \end{equation} In the Lamperti representation of $(X^\circ, \mathbb{P}_x)$, $x\geq 0$, it was also shown by \cite{KRSe} that $\xi$ is the negative of a subordinator, so that its Laplace exponent is given by \[ -\frac{1}{t}\log\mathbf{E}_x[e^{\lambda \xi_t}]=\frac{\Gamma(\alpha+\lambda )}{\Gamma(\lambda)}, \qquad \lambda \geq 0. \] \end{itemize} \smallskip Similarly to the discussion on conditioned stable processes in Section \ref{sec:stablerssMp}, we may ask whether $0$ is an entrance point for the process conditioned to stay positive, in the sense that there is a Feller extension $\P_x^\uparrow$, $x> 0$, allowing the meaningful inclusion of $\mathbb{P}^\uparrow_0$. \begin{lemma}\label{zeroenter2} If $ \alpha\in(1,2)$, then $0$ is an entrance point for $\P_x^\uparrow$, $x>0$. \end{lemma} \begin{proof} The proof is almost the same as the proof of Lemma \ref{zeroenter}.
Analogously to \eqref{othercheck}, we may appeal to \eqref{C2} and the Lamperti transform \eqref{pssMpLamperti} to deduce that a necessary and sufficient condition for $0$ to be an entrance point is that the righthand side of \begin{equation} \lim_{x\downarrow0}\mathbb{P}^\uparrow_x(X_{\tau^{(1,\infty)}} \in dy)=\lim_{x\downarrow0}\frac{y^{\alpha\hat{\rho}}}{x^{\alpha\hat{\rho}}}\mathbb{P}_x(X_{\tau^{(1,\infty)}}\in dy, \,\tau^{(1,\infty)}< \tau^{(-\infty,0)}) \label{check21} \end{equation} exists weakly. Similarly to \eqref{othercheck}, we can verify this directly by appealing to already known explicit fluctuation identities. In this case, we need the two-sided exit problem, which was solved by Rogozin \cite{Rog}. For example, under the regime $\alpha\in(1,2)$ when $0<\alpha\rho<1$ (which includes the case of spectral positivity), \begin{align}\label{1stcase} \begin{split} &\quad\mathbb{P}_x(X_{\tau^{(1,\infty)}} \in d y ; \,\tau^{(1,\infty)}< \tau^{(-\infty,0)})\\ &= \frac{\sin(\pi\alpha \rho)}{\pi} (1- x)^{\alpha\rho}x^{\alpha\hat{\rho}} (y-1)^{-\alpha\rho}y^{-\alpha\hat{\rho}}(y-x)^{-1}d y, \qquad y>1, \end{split} \end{align} and when $\rho = 1/\alpha$ (which is the case of spectral negativity), then necessarily $X_{\tau^{(1,\infty)}} =1$ and \begin{align}\label{2ndcase} \begin{split} &\quad\mathbb{P}_x(X_{\tau^{(1,\infty)}} =1; \,\tau^{(1,\infty)}< \tau^{(-\infty,0)})\\ &= 1- \frac{\sin(\pi\alpha \hat{\rho})}{\pi} (1- x)^{\alpha\hat{\rho}}x^{\alpha\rho}\int_1^\infty (y-1)^{-\alpha\hat{\rho}}y^{-\alpha\rho}(y-x)^{-1}d y. \end{split} \end{align} The limiting computation in \eqref{check21} is now trivial to verify using \eqref{1stcase} and, with a little care, straightforward to verify using \eqref{2ndcase} as well.
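To make the first of these computations explicit (a sketch; the second case is handled similarly, with a dominated convergence argument for the integral term): substituting \eqref{1stcase} into \eqref{check21}, the factors $x^{\alpha\hat{\rho}}$ and $y^{\alpha\hat{\rho}}$ cancel and, letting $x\downarrow 0$, \begin{align*} \lim_{x\downarrow0}\mathbb{P}^\uparrow_x(X_{\tau^{(1,\infty)}} \in dy) = \frac{\sin(\pi\alpha \rho)}{\pi}\, (y-1)^{-\alpha\rho}\,y^{-1}\, d y, \qquad y>1. \end{align*} The righthand side is a bona fide probability measure on $(1,\infty)$: substituting $v = y-1$ and using the beta integral $\int_0^\infty v^{-\alpha\rho}(1+v)^{-1}\,dv = \pi/\sin(\pi\alpha\rho)$, valid for $0<\alpha\rho<1$, the total mass equals one.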
\end{proof} In a similar spirit to the previous section, we complete this section by providing another remarkable pathwise transformation of the process $X$, connecting it to its conditioned version $\mathbb{P}^\uparrow_x$, $x>0$, but only in the case that $X$ is spectrally positive and $\alpha\in(1,2)$. As before, we write $\hat{\mathbb{P}}_x$, $x\neq0$ for the probabilities of $-X$. \begin{theorem}[Chaumont]\label{CRBZ} Suppose that $X$ is spectrally positive with $\alpha\in(1,2)$ and define \begin{equation} \eta_t = \inf\left\{s>0 : \int_0^s (X_u^\dagger)^{-2\alpha}{\rm d}u >t\right\},\quad t\leq \int_0^\infty (X_u^\dagger)^{-2\alpha}{\rm d}u. \label{gamma_t} \end{equation} For all $x>0$, the law of $(1/{X}^\dagger_{\eta_t})_{t\geq 0}$ under ${\mathbb{P}}_{x}$ is $\hat{\mathbb{P}}_{1/x}^\uparrow$. \end{theorem} \begin{proof}Strictly speaking, this result is a special case of Theorem 2.4.1 in Chaumont \cite{Loicnotes}, which demonstrates a more general result of this kind for pssMp. Indeed, suppose $X$ is a pssMp with associated L\'evy process $\xi$ via the Lamperti transform, and, in the same respect, $\hat{X}$ is the pssMp associated to the L\'evy process $-\xi$. Then Theorem 2.4.1 of \cite{Loicnotes} states that $\hat{X}$, when issued from $y>0$ is equal in law to $(1/{X}_{\gamma_t}, t\geq 0)$ when issued from $1/y$, where the endogenous time-change $\gamma_t$ is structured as in \eqref{gamma_t}. 
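For orientation, here is a sketch of where the time-change \eqref{gamma_t} comes from in this general pssMp setting (the notation $A$, $\hat{A}$ is introduced for this paragraph only). Under the Lamperti normalisation $X_{A(u)} = \exp(\xi_u)$, where $A(u)=\int_0^u e^{\alpha\xi_s}\,ds$, inverting the path gives $1/X_{A(u)} = \exp(-\xi_u)$, so the candidate for $\hat{X}$ is $1/X$ run with the clock of $-\xi$, namely $\hat{A}(u)=\int_0^u e^{-\alpha\xi_s}\,ds$. The composition $\gamma = A\circ\hat{A}^{-1}$ then satisfies, by the chain rule, \[ \frac{d\gamma_t}{dt} = \left.\frac{e^{\alpha\xi_{u}}}{e^{-\alpha\xi_{u}}}\right|_{u = \hat{A}^{-1}(t)} = e^{2\alpha\xi_{\hat{A}^{-1}(t)}} = X_{\gamma_t}^{2\alpha}, \] whose inverse is precisely the time-change structured as in \eqref{gamma_t}.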
\smallskip The special case we are concerned with here makes use of the observation from Caballero and Chaumont \cite{CC} (see also Section 13.4.2 of Kyprianou \cite{Kbook}) that, if $X^\dagger$ is the spectrally positive stable process killed on hitting the origin with $\alpha\in(1,2)$, then its L\'evy process, say $\xi^\dagger$, underlying the Lamperti transform has characteristic exponent satisfying \begin{equation} \Psi^\dagger(z) = {\rm i}z \frac{\Gamma(\alpha-{\rm i}{z})}{\Gamma(1-{\rm i}{z})} ,\qquad {z}\in\mathbb{R}, \label{xipsi} \end{equation} up to a multiplicative constant. One easily computes from this exponent that the mean of this L\'evy process at time 1 is equal to $ -{\rm i} \Psi^{\dagger\prime}(0) = -\Gamma(\alpha)$, which is strictly negative, accounting for the almost sure hitting of the origin by $X$ as one would expect. On the other hand, from \eqref{Psiuparrow} the L\'evy process underlying $(X, \hat{\mathbb{P}}^\uparrow_x)$, $x>0$, via the Lamperti transform, say $\hat{\xi}^\uparrow$ takes the form \begin{equation} \hat{\Psi}^\uparrow(z)= {\rm i}z\frac{\Gamma(\alpha+{\rm i}{z})}{\Gamma(1+{\rm i}{z})},\qquad {z}\in\mathbb{R}, \label{xiuparrowpsi} \end{equation} up to a multiplicative constant. Recall that spectral positivity of $(X, \mathbb{P}_x)$, $x\in\mathbb{R}$, means that $\hat{\rho} = 1/\alpha$ and hence in the context of deriving \eqref{xiuparrowpsi}, where $\hat X$ is used, we have $\rho = 1/\alpha$. Note now that $-{\rm i} \Psi^{\uparrow\prime}(0)=\Gamma(\alpha)$, which is strictly positive and accounts for the fact that $(X, \hat{\mathbb{P}}^\uparrow_x)$ is transient to $+\infty$. The statement of Theorem \ref{CRBZ} now follows by comparing these two exponents and recalling that the law of a L\'evy process is entirely determined by its characteristic exponent and, moreover, that the law of a pssMp is entirely determined by its underlying L\'evy process (via the Lamperti transform). 
\end{proof} \section{Fundamental transformations}\label{sec_transforms} In this section we consider combinations of classical transformations (change of measure, change of space, random change of time) related to the SDE \eqref{2}, resp. the time-change representation \eqref{timechangesolution}. These will be crucial in the main part of the proof to apply results for stable L\'evy processes and self-similar Markov processes. \subsection{Time-space inversions} Before we state and prove an extension of the Riesz--Bogdan--\.Zak transformation (Theorem \ref{RBZthrm} above), we recall a simple lemma on time-changes, which is essentially a re-wording of Theorem 1.1 in Chapter 6 of Ethier and Kurtz \cite{EthKur} and the discussion preceding it; see also Proposition 3.5 of K\"uhner and Schnurr \cite{KSc}. \begin{lemma}\label{lemm} Suppose $(Y_t , t\geq 0)$ is a c\`adl\`ag trajectory, $f\geq 0$ continuous and \begin{align*} t_0=\inf\big\{t\geq 0: f(Y_t)=0\big\}\quad \text{and}\quad t_1=\inf\left\{t\geq 0: \int_0^t \frac{1}{f(Y_u)}\,du=\infty\right\}. \end{align*} If $t_0=t_1$, then the integral equation $\upsilon_t=\int_0^t f(Y_{\upsilon_s})\,ds$ has a unique solution which is of the form \begin{align*} \upsilon_t=\inf\left\{s\ge 0: \int_0^s \frac{1}{f(Y_u)}\,du>t\right\}\wedge t_0,\quad t\geq 0. \end{align*} \end{lemma} In what follows we set \[ \beta(x) = \sigma(1/x)^{-\alpha}|x|^{-2\alpha}, \qquad x\in\mathbb{R}\backslash\{0\} \] and prove an extension of the Riesz--Bogdan--\.Zak Theorem. \begin{prop} \label{prop} Assume the stable process $X$ with distribution $\mathbb{P}_x$, $x\in\mathbb{R}$, has two-sided jumps and $\sigma>0$ is continuous. \smallskip (i) Define the time-space transformation \begin{align} \label{RBZ} {Z}^\dagger_t=\frac{1}{\hat{X}^\circ_{\theta_t}}, \qquad t< \int_0^\infty \beta(\hat{X}^\circ_u){\,d u}, \end{align} where \[ \theta_t =\inf\left\{s> 0 : \int_0^s\beta(\hat{X}^\circ_u){\,d u}>t\right\}.
\] If $\hat{X}^\circ$ has law $\hat{\mathbb{P}}^\circ_{1/x}$, $x\neq 0$, then $Z^\dagger$ is the time-changed process \eqref{timechangesolution} under $\mathbb{P}_x$ killed at the origin. \smallskip (ii) Define the time-space transformation \begin{align} X^\circ_t=\frac{1}{\hat{Z}^\dagger_{\vartheta_t}}, \quad t\geq 0, \label{XfromZ} \end{align} where \[ \vartheta_t = \inf\left\{s>0 : \int_0^s\frac{1}{\beta(1/\hat{Z}^\dagger_u)}{\,d u} >t\right\}. \] If $\hat{Z}^\dagger$ is the time-changed process from \eqref{timechangesolution} under $\hat{\mathbb{P}}_x$, $x\neq 0$, killed at the origin, then the law of $X^\circ$ is $\mathbb{P}^\circ_{1/x}$. \end{prop} Keeping in mind that the time-change from \eqref{timechangesolution} gives the unique (possibly exploding) weak solution to the SDE \eqref{2}, the proposition tells us how to transform solutions to the SDE via spatial inversion and time-change into an $h$-transform of the driving stable process, and vice versa. Later on, this will be applied as follows: in order to understand solutions started at infinity, one can equivalently understand the $h$-process started from zero in combination with the behavior of the time-change. Since the $h$-process is a self-similar Markov process, the behavior at zero has been understood in recent years, so it will all boil down to understanding the time-change. \begin{proof}[Proof of Proposition \ref{prop}] (i) The Riesz--Bogdan--\.Zak Theorem (Theorem \ref{RBZthrm}) states that under $\P_x$ the transformation $\hat{X}^\circ_t=1/X^\dagger_{\eta_t}$, $t\geq 0$, has law $\hat{\mathbb{P}}_{1/x}^\circ$ with the time-change $\eta_t = \inf\left\{s>0 : \int_0^s |X^\dagger_u|^{-2\alpha}{\rm d}u >t\right\}$. Next, according to the statement, we time-change $1/\hat{X}^\circ_t=X^\dagger_{\eta_t}$ with $\theta$ and show that $1/\hat{X}^\circ_{\theta_t}=X^\dagger_{\eta\circ\theta_t}$ satisfies the time-change relation \eqref{timechangesolution}.
To do so, let us consider the concatenation $\eta\circ \theta$ written in terms of $X^\dagger$. Using the chain rule gives \begin{align*} \frac{d \eta_t}{dt} = |X^\dagger_{\eta_t}|^{2\alpha}\quad \text{and}\quad \frac{d \theta_t}{dt} =1/\beta(1/X^\dagger_{\eta\circ{\theta_t}})=|X^\dagger_{\eta\circ {\theta_t}}|^{-2\alpha}\sigma(X^\dagger_{\eta\circ \theta_t})^{\alpha}, \end{align*} and hence, \begin{align}\label{5} \frac{d\eta\circ\theta_t}{dt} = \left.\frac{d\eta_s}{ds }\right|_{s = \theta_t} \frac{d\theta_t}{dt}=\sigma(X^\dagger_{\eta\circ\theta_t})^{\alpha}. \end{align} Defining $\gamma_t=\eta\circ \theta_t$, we note that $\gamma$ satisfies the pathwise equation \begin{equation} \gamma_t=\int_0^t\sigma(X^\dagger_{\gamma_u})^\alpha\,du. \label{gamma} \end{equation} Applying Lemma \ref{lemm} to $X^\dagger$ with $\gamma$ playing the role of $\upsilon$ and $f (x)= \sigma(x)^{\alpha}$, we see that, trivially, $t_0 = t_1 = \infty$ and \eqref{gamma} has a unique solution almost surely given by \begin{align*} \gamma_t = \inf\left\{s>0 : \int_0^s\sigma(X^\dagger_{u})^{-\alpha} du>t\right\},\quad t\geq 0. \end{align*} Plugging in, we see that $1/\hat{X}^\circ_{\theta_t}=X^\dagger_{\eta\circ \theta_t}=X^\dagger_{\gamma_t}$ for $t\geq 0$ and the righthand side obviously satisfies the claim. \smallskip (ii) By assumption $\hat Z^\dagger_t=\hat X^\dagger_{\hat \tau_t}$, where the killed dual process $\hat X^\dagger=-X^\dagger$ has probabilities $\hat \P_x$, $x\in\mathbb{R}$, and $\hat \tau_t = \inf\{s>0 : \int_0^s \sigma(\hat{X}^\dagger_u)^{-\alpha} \, d u>t\}$, $t\geq 0$. Now note that $X_t^\circ=1/\hat Z^\dagger_{\vartheta_t}=1/ \hat{X}^\dagger_{\hat\tau\circ \vartheta_t}$. If we can show that, almost surely under $\hat \P_x$, $\hat\tau\circ \vartheta=\hat\eta$, where $\hat\eta_t = \inf\{s>0 : \int_0^s |\hat X^\dagger_u|^{-2\alpha}\, du >t\}$, then the proof is complete due to the Riesz--Bogdan--\.Zak theorem.
To this end, as in part (i), we get from the chain rule \begin{align}\label{5b} \frac{d\hat\tau\circ\vartheta_t}{dt} = \left.\frac{d\hat\tau_s}{ds }\right|_{s = \vartheta_t} \frac{d\vartheta_t}{dt}=\sigma(\hat X^\dagger_{\hat\tau\circ\vartheta_t})^\alpha\beta(1/\hat X^\dagger_{\hat\tau\circ\vartheta_t})=|\hat X^\dagger_{\hat\tau\circ \vartheta_t}|^{2\alpha}, \end{align} which implies that $\hat\tau\circ\vartheta$ solves $\hat \tau\circ \vartheta_t=\int_0^t|\hat X^\dagger_{\hat\tau\circ\vartheta_u}|^{2\alpha}\,du$. This is the same equation that $\hat\eta$ satisfies. Our proof is complete as soon as we show that the equation that both $\hat\tau\circ\vartheta$ and $\hat\eta$ solve has an almost surely unique solution. We do this by applying Lemma \ref{lemm} again, but this time to $\hat{X}^\dagger$ with $\hat\tau\circ\vartheta$ playing the role of $\upsilon$ and with $f(x) = |x|^{2\alpha}$. The conditions of the Lemma are straightforward to verify, noting in particular from the Riesz--Bogdan--\.Zak transform that $t_0 = t_1= \tau^{\{0\}}$. \end{proof} In a similar way we can apply Chaumont's transformation for spectrally one-sided processes, cf. Theorem \ref{CRBZ}, to obtain the proposition below. On account of its similarity to the one above, we omit the proof. \begin{prop}\label{BZuparrow} Suppose that $X$ is a spectrally positive stable process with distribution $\mathbb{P}_x$, $x\in\mathbb{R}$, and assume that $\sigma>0$ is continuous. \smallskip (i) Define the time-space transformation \begin{align} {Z}^\dagger_t=\frac{1}{\hat{X}^\uparrow_{\theta_t}}, \qquad t< \int_0^\infty \beta(\hat{X}^\uparrow_u){\,d u}, \label{RBZb} \end{align} where \[ \theta_t =\inf\left\{s> 0 : \int_0^s\beta(\hat{X}^\uparrow_u){\,d u}>t\right\}. \] If $\hat{X}^\uparrow$ has probabilities $\hat{\mathbb{P}}^\uparrow_{1/x}$, $x> 0$, then $Z^\dagger$ is the time-changed process \eqref{timechangesolution} under $\mathbb{P}_x$ killed at the origin.
\smallskip (ii) Define the time-space transformation \begin{align} X^\uparrow_t=\frac{1}{\hat{Z}^\dagger_{\vartheta_t}}, \qquad t\geq 0, \label{XfromZb} \end{align} where \[ \vartheta_t = \inf\left\{s>0 : \int_0^s\frac{1}{\beta(1/\hat{Z}^\dagger_u)}{\,d u} >t\right\}. \] If $\hat{Z}^\dagger$ is the time-changed process from \eqref{timechangesolution} under $\hat{\mathbb{P}}_x$, $x> 0$, killed at the origin, then the law of $X^\uparrow$ is $\mathbb{P}^\uparrow_{1/x}$. \end{prop} Similarly to the discussion below Proposition \ref{prop}, we will use the proposition to reduce the behavior of solutions to the SDE \eqref{2} driven by a one-sided L\'evy process to the behavior at zero of self-similar Markov processes and the time-change. The situation is easier here, as we only need self-similar Markov processes with positive trajectories, for which the theory is more classical. \subsection{Time-reversal} \label{timereversesubsec} It was already part of Feller's \cite{feller1, feller2} analytic treatment of diffusion processes that an entrance point of a diffusion can be related to an exit point of an $h$-transformed diffusion. The general structure behind this was revealed by Hunt \cite{Hunt3, Hunt1and2}, who showed how to relate time-reversal and $h$-transforms for Markov processes. Hunt's discrete time arguments were extended to continuous time by Nagasawa. For our purposes, only the results of Section 3 (reversal of Markov processes at $L$-times) of Nagasawa \cite{N} are of importance. Even though all the theory involved is very old, the application to the boundary behavior of the SDE \eqref{2} is only possible due to explicit potential formulas for killed stable processes developed in the past few years. \smallskip Let us first recall some definitions.
Suppose that $Y = (Y_t, t\leq \zeta)$ with probabilities ${\rm\texttt{P}}_x$, $x\in\mathbb{R}$, is a regular Markov process on (a subset of) $\mathbb{R}$ with cemetery state $\Delta$ and killing time $\zeta=\inf\{t>0: Y_t = \Delta\}$. Let us denote by $\mathcal{P}: = (\mathcal{P}_t, t\geq 0)$ the associated semigroup and we will write ${\texttt P}_\nu = \int_{\mathbb{R}}\nu(da){\texttt P}_a$, for any probability measure $\nu$ on the state space of $Y$. \smallskip Suppose that $\mathcal{G}$ is the $\sigma$-algebra generated by $Y$ and write $\mathcal{G}({\texttt P}_\nu)$ for its completion by the null sets of ${\texttt P}_\nu$. Moreover, write $\overline{\mathcal G} =\bigcap_{\nu} \mathcal{G}({\texttt P}_\nu)$, where the intersection is taken over all probability measures on the state space of $Y$, excluding the cemetery state. A finite random time $\texttt{k}$ is called an $L$-time (generalized last exit time) if \begin{itemize} \item[(i)] $\texttt{k}\leq \zeta$ and $\texttt{k}$ is measurable in $\overline{\mathcal G}$, \item[(ii)] $\{s<\texttt{k}(\omega)-t\}=\{s<\texttt{k}(\omega_t)\}$ for all $t,s\geq 0$. \end{itemize} Theorem 3.5 of Nagasawa \cite{N} shows that, under suitable assumptions on the Markov process, $L$-times form a family of `good times' at which the pathwise time-reversal $\stackrel{_\leftarrow}{Y}_t:=Y_{(\texttt{k}-t)-}, t\in [0,\texttt{k}],$ is again a Markov process. The most important examples of $L$-times are killing times and last hitting times. To ease the reading, let us state precisely the three main conditions of Nagasawa's duality theorem, one of which is redundant in our setting (and we indicate as much below).
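As a quick sanity check on the definition (a standard observation, recorded here for convenience), the killing time $\zeta$ itself is an $L$-time: condition (i) holds trivially and, writing $\omega_t$ for the path shifted by $t$, one has $\zeta(\omega_t)=(\zeta(\omega)-t)\vee 0$, so that for $s,t\geq 0$, \[ \{s<\zeta(\omega)-t\} = \{s<\zeta(\omega_t)\}. \] Last hitting times, say $\sup\{t\geq 0: Y_t\in B\}$ for Borel $B$ (when almost surely finite), satisfy the same shift identity for the same reason.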
\smallskip \smallskip \textbf{(A.3.1)} The potential measure $G_Y(a,\cdot)$ associated to $\mathcal{P}$, defined by the relation \begin{equation} \int_\mathbb{R}f(x)G_Y(a,d x) = \int_0^\infty \mathcal{P}_t[f](a)d t={\texttt E}_a\left[\int_0^\infty f(Y_t)\,dt\right], \label{GY} \end{equation} for bounded and measurable $f$ on $\mathbb{R}$, is a $\sigma$-finite measure. For a $\sigma$-finite measure $\nu$, if we put \begin{align}\label{a1} \mu(A)=\int G_Y(a,A)\, \nu(da)\quad \text{ for }A\in \mathcal B(\mbox{\rm I\hspace{-0.02in}R}), \end{align} then there exists a Markov transition semigroup, say $\hat{\mathcal{P}}: = (\hat{\mathcal P}_t, t\geq 0)$, satisfying \begin{align} \int \mathcal{P}_t[f](x) g(x)\, \mu(dx)=\int f(x) \hat{\mathcal P}_t [g](x)\,\mu(dx),\quad t\geq 0, \label{weakdualtity} \end{align} for bounded, measurable and compactly supported test-functions $f, g$.\smallskip In other words, (A.3.1) asks for the semigroup $\mathcal P$ to be {\it in weak duality to a semigroup} $\hat{\mathcal P}$ {\it with respect to the measure} $\mu$ taking the form \eqref{a1}.\smallskip {\bf (A.3.2)} Nagasawa's second condition pertains to the finiteness of the semigroup $\mathcal{P}$ and its associated resolvents when randomised by the initial distribution $\nu$, which in his most general setting need not be a probability measure. However, this condition is redundant in our setting as we always consider the initial distribution $\nu$ to be a probability measure. Hence we don't dwell on this condition any further.
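As a simple illustration of \eqref{weakdualtity} (not needed in the sequel, but a useful sanity check): if $Y$ has symmetric transition densities, $p_t(x,y)=p_t(y,x)$, as is the case for Brownian motion or a symmetric stable process, then Fubini's theorem gives \[ \int \mathcal{P}_t[f](x)\, g(x)\, dx = \iint f(y)\,p_t(x,y)\,g(x)\,dy\,dx = \int f(x)\, \mathcal{P}_t[g](x)\, dx, \] so $\mathcal P$ is in weak duality with itself with respect to Lebesgue measure. For the $h$-transformed and time-changed processes considered in this section the semigroups are not symmetric, and the duality measure takes the more elaborate form \eqref{a1}.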
\smallskip \textbf{(A.3.3)} For any continuous test-function $f\in C_0(\mbox{\rm I\hspace{-0.02in}R})$, the space of continuous and compactly supported functions, $\mathcal{P}_t[f](a)$ is right-continuous in $t$ for all $a\in \mbox{\rm I\hspace{-0.02in}R}$ and, for $q> 0$, $G_{\hat Y}^{(q)}[f](\stackrel{_\leftarrow}{Y}_t)$ is right-continuous in $t$, where, for bounded and measurable $f$ on $\mathbb{R}$, \[ {G}_{\hat Y}^{(q)}[f](a) =\int_0^\infty e^{-qt}\hat{\mathcal{P}}_t[f](a)d t,\qquad a\in\mathbb{R}\] is the $q$-potential associated to $\hat{\mathcal P}$. \smallskip \smallskip Nagasawa's duality theorem, Theorem 3.5 of \cite{N}, now reads as follows. \begin{theorem}[Nagasawa's duality theorem]\label{Ndual} Suppose that assumptions {\rm{\bf (A.3.1)} } and {\rm{\bf (A.3.3)}} hold. For the given starting probability distribution $\nu$ in {\rm{\bf (A.3.1)} } and any $L$-time $\emph{\texttt{k}}$, the time-reversed process $\stackrel{_\leftarrow}{Y}$ under $\emph{\texttt P}_\nu$ is a time-homogeneous Markov process with transition probabilities \begin{align} \emph{\texttt{P}}_\nu(\stackrel{_\leftarrow}{Y}_t \in A\,|\stackrel{_\leftarrow}{Y}_r, 0<r< s)=\emph{\texttt{P}}_\nu(\stackrel{_\leftarrow}{Y}_t \in A\,|\stackrel{_\leftarrow}{Y}_s)={p}_{\hat{Y}}(t-s,\stackrel{_\leftarrow}{Y}_s,A),\quad \emph{\texttt{P}}_\nu\text{-almost surely}, \end{align} for all $0<s<t$ and Borel $A$ in $\mathbb{R}$, where ${p}_{\hat{Y}}(u, x, A)$, $u\geq 0$, $x\in\mathbb{R}$, is the transition measure associated to the semigroup $\hat{\mathcal P}$. \end{theorem} We will apply Nagasawa's duality theorem to different processes obtained from solutions to the SDE \eqref{2} by killing in different sets, which leads to different processes obtained through time-reversal at $L$-times. \begin{itemize} \item[(i)] Proposition \ref{nag}: two-sided jumps, $\alpha>1$, killed at the origin.
\item[(ii)] Proposition \ref{nag2}: positive jumps, $\alpha>1$, killed at the origin, which is the same as killing at the negative half-line because of the positive jumps. \item[(iii)] Proposition \ref{nagAx}: two-sided jumps, $\alpha\in(0,1)$, no killing but explosion. \end{itemize} Proofs will be similar, in the sense that, to verify (A.3.1), explicit computations with different killed potential measures will be necessary.\smallskip Here is the first application of Nagasawa's duality theorem. \begin{prop}\label{nag} Consider a stable process with $\alpha\in (1,2)$ and two-sided jumps. Suppose that ${{\hat{X}^\circ}}$ has probabilities $\hat{\mathbb{P}}^\circ_x$, $x\in \mathbb{R}$, defined by the change of measure \eqref{updownCOM} albeit with respect to the dual $\hat{\mathbb{P}}_x$, $x\in\mathbb{R}$, and the entrance point $0$ from Lemma \ref{zeroenter}. Define $\hat{Z}^\circ_t = {{\hat{X}^\circ}}_{\iota_t}$, $t\geq 0$, where the time-change $\iota$ is given by \begin{align}\label{tauhat} \iota_t = \inf\left\{s>0 : \int_0^s \sigma({{\hat{X}^\circ}}_u)^{-\alpha}du > t\right\}, \qquad t<\int_0^\infty \sigma({{\hat{X}^\circ}}_s)^{-\alpha}ds. \end{align} Suppose that $Z^\dagger$ is the (non-exploding) process from \eqref{timechangesolution} killed on hitting the origin. Then \begin{align}\label{claim} Z^\dagger\text{ and } \hat{Z}^\circ \text{ are in weak duality on }\mbox{\rm I\hspace{-0.02in}R}\backslash \{0\}\text{ with respect to } \mu(dx)=\sigma(x)^{-\alpha}h(x)dx, \end{align} with $h$ defined in \eqref{h}. Moreover: \begin{itemize} \item[(i)] The time-reversed process $\hat{Z}^\circ_{(\mbox{\rm $\texttt{k}$} - t)-}$, $t\leq \mbox{\rm $\texttt{k}$}$, under $\hat{\mathbb P}^\circ_0$ is a time-homogeneous Markov process with transition semigroup which agrees with that of $Z^\dagger$, where $\mbox{\rm $\texttt{k}$}$ is any almost surely finite $L$-time for $\hat{Z}^\circ$.
\item[(ii)] If $\pm\infty$ is an entrance point for $Z$, then the time-reversed process $Z^\dagger_{(\mbox{\rm $\texttt{k}$}-t)-}$, $t\leq \mbox{\rm $\texttt{k}$}$, under $\emph{\rm P}_{\pm\infty}$ is a time-homogeneous Markov process with transition semigroup which agrees with that of $\hat{Z}^\circ$, where $\mbox{\rm $\texttt{k}$}$ is any almost surely finite $L$-time for $Z^\dagger$. \end{itemize} \end{prop} \begin{proof}We break the proof of \eqref{claim} into several steps. \smallskip Step 1: At the heart of our proof is weak duality for L\'evy processes killed on hitting sets (in our case, the singleton $\{0\}$) with respect to Lebesgue measure. Theorem II.1.5 of Bertoin \cite{bertoin} gives us Hunt's classical duality relation \begin{align}\label{classicdual} p_{X^\dagger}(t,y, {dz}){dy} =p_{{\hat X}^\dagger}(t,z, {dy}){dz},\qquad y,z\in\mathbb{R}\text{ and }t\geq 0, \end{align} where $p_{X^\dagger}$ is the transition kernel associated to the transition semigroup of the stable process killed at $0$ and $p_{{\hat X}^\dagger}$ is the transition kernel associated to the dual stable process killed at $0$.\smallskip Step 2: Defining $m(dx)=h(x)dx$ and combining \eqref{classicdual} and \eqref{updownCOM} with the general formula `$p^h(t,x,dy)={h(y)} p(t,x,dy)/{h(x)}$' for transition kernels of $h$-transformed Markov processes, we obtain \begin{align}\label{verifiessufficetocheck} p_{X^\dagger}(t,y, {dz})m({dy}) &= \frac{h(y)}{h(z)}p_{X^\dagger}(t,y, {dz})h(z){dy} \notag\\ &=\frac{h(y)}{h(z)}p_{{\hat X}^\dagger}(t,z, {dy})h(z){dz}\notag\\ &= p_{{\hat{X}^\circ}}(t,z, {dy})m(dz) \end{align} for $y,z\in\mathbb{R}$ and $t\geq 0$. Here $p_{\hat{X}^\circ}(t,z, dy)$ denotes the transition kernel associated to the transition semigroup of ${\hat{X}^\circ}$.
Hence, the transition kernels of $X^\dagger$ and ${\hat{X}^\circ}$ are in weak duality on $\mbox{\rm I\hspace{-0.02in}R}$ with respect to $m(dx)$.\smallskip Step 3: The claim \eqref{claim} can now be deduced from general theory for random time-changes. Theorem 4.5 of Walsh \cite{W} states that two Markov processes in weak duality remain so when time-changed by the inverse of the same additive functional. The new duality measure is what is known as the Revuz measure of the additive functional with respect to the former duality measure (definition given shortly below). To apply Walsh's result, recall from the definitions that $Z^\dagger$ (resp. $\hat{Z}^\circ$) are time-changes of $X^\dagger$ (resp. $\hat{X}^\circ$) with the inverse of the additive functional \begin{align}\label{kk} A_t(\omega)= \int_0^t \sigma(\omega_s)^{-\alpha}ds, \qquad t\geq 0, \end{align} on the path space $\mathbb{D}([0,\infty), \mathbb{R})$. Theorem 4.5 of Walsh \cite{W} implies that $Z^\dagger$ and $\hat{Z}^\circ$ are in weak duality with respect to the Revuz measure $\mu$ defined by \begin{align} \int_{\mathbb R}f(x)\mu(dx) = \lim_{t\downarrow0}\int_{\mathbb{R}}m(dz)\mathbb{E}_z\left[ \frac{1}{t}\int_0^t f(X^\dagger_s) dA_s\right] \label{assured} \end{align} for $f\geq 0 $ bounded and measurable.
In order to identify $\mu$, given that the limit \eqref{assured} is assured in Walsh \cite{W}, we can check with the help of Fubini's Theorem, for continuous and compactly supported $f\geq0$, that \begin{align} \int_{\mathbb R}f(x)\mu(dx) & = \lim_{t\downarrow0} \int_{\mathbb{R}}m(dz)\int_{\mathbb{R}} f(x)\sigma(x)^{-\alpha}\frac{1}{t}\int_0^t p_{X^\dagger}(s, z, dx)ds \notag\\ & = \lim_{t\downarrow0} \frac{1}{t}\int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}}m(dz)p_{X^\dagger}(s, z, dx)f(x)\sigma(x)^{-\alpha} \,ds \notag\\ &=\lim_{t\downarrow0} \frac{1}{t}\int_0^t \int_{\mathbb{R}}\int_{\mathbb{R}}m(dx)p_{\hat{X}^\circ}(s, x, dz)f(x)\sigma(x)^{-\alpha} \,ds\notag\\ &=\int_{\mathbb{R}}f(x)\sigma(x)^{-\alpha}h(x)dx, \label{walsh} \end{align} where in the third equality we used duality from Step 2 and in the fourth equality we use the fact that $\hat{X}^\circ$ is a conservative process and so $\int_\mbox{\rm I\hspace{-0.02in}R} p_{\hat{X}^\circ}(s, x, dz) =1$. This finishes the proof of the claim \eqref{claim}.\smallskip To prove the time-reversal statements (i) and (ii) we check the conditions of Nagasawa's Theorem \ref{Ndual} appealing to the duality established in \eqref{claim}.\smallskip (i) In order that (A.3.1) holds in the present setting we need to verify that \begin{align*} \mu(dy) = \int_{\mathbb{R}}\nu({dx})G_{{{\hat{Z}^\circ}}}(x, {dy})\quad \text{ on } \mathcal B(\mbox{\rm I\hspace{-0.02in}R}), \end{align*} where $\nu= \delta_0$ and $G_{{\hat{Z}^\circ}}(x, {dy})$ is the potential measure of ${\hat{Z}^\circ}$ on $\mathcal B(\mbox{\rm I\hspace{-0.02in}R})$ and $\mu$ is the duality measure from \eqref{claim}. To this end, we first calculate the potential measure of ${\hat{X}^\circ}$, denoted by $ G_{\hat{X}^\circ}(0,dy)$. We use for the second equality the very last (unmarked) formula in Section 4.4 of Kuznetsov et al. 
\cite{KKPW}, Fubini's theorem and substitution to calculate, for bounded and measurable $f\geq 0$, \begin{align}\label{hilf2} G_{{{\hat{X}^\circ}}}[f](0)&=\int_0^\infty \hat{\mathbb{E}}^\circ_0\big[f({\hat{X}^\circ}_t)\big]\,dt\notag\\ &=\Gamma(-\alpha)\frac{\sin(\alpha \pi \rho)}{\pi}\int_0^\infty {\bf E}_1\big[I_\infty^{-1}f(-(t/{I_\infty})^{1/\alpha})\big]\,dt\notag\\ &\quad +\Gamma(-\alpha)\frac{\sin(\alpha \pi\hat \rho)}{\pi}\int_0^\infty {\bf E}_{-1}\big[I_\infty^{-1}f((t/{I_\infty})^{1/\alpha})\big]\,dt\notag\\ &=\Gamma(-\alpha)\frac{\sin(\alpha \pi \rho)}{\pi}\int_0^\infty f(-u^{1/\alpha})\,du +\Gamma(-\alpha)\frac{\sin(\alpha \pi\hat \rho)}{\pi}\int_0^\infty f(u^{1/\alpha})\,du\notag\\ &=\int_\mbox{\rm I\hspace{-0.02in}R} f(x) h(x)\,dx, \end{align} with $h$ from \eqref{h} and $I_\infty := \int_0^\infty e^{\alpha\xi_s}ds$ for the underlying MAP $(\xi,J)$, see Section \ref{sec:rssMp}. It follows that $G_{\hat{X}^\circ}(0,dy) = h(y){dy}$ on $\mathcal B(\mbox{\rm I\hspace{-0.02in}R})$. Since by definition ${\hat{Z}^\circ}$ is a time-change of $\hat{X}^\circ$, from the above we can easily compute the potential measure of $\hat{Z}^\circ$ issued from $0$ by change of variables and the explicit form of $\iota$, \begin{align}\label{nupot} G_{{{\hat{Z}^\circ}}}[f](0) &=\hat{\mathbb{E}}^\circ_0\left[\int_0^\infty f({\hat{X}^\circ}_{\iota_t})\,dt\right]\notag\\ &=\hat{\mathbb{E}}^\circ_0\left[\int_0^\infty f({\hat{X}^\circ}_t)\sigma({\hat{X}^\circ}_t)^{-\alpha}\,dt\right]\notag\\ &=G_{{{\hat{X}^\circ}}}[f\sigma^{-\alpha}](0)\notag\\ &=\int_\mbox{\rm I\hspace{-0.02in}R} f(x)\sigma(x)^{-\alpha} h(x)\,dx\notag\\ &=\int_\mbox{\rm I\hspace{-0.02in}R} f(x)\mu(dx), \end{align} for bounded and measurable $f\geq 0$. Hence, we obtain that $\mu(dy)=\int_{\mathbb{R}}\delta_0(dx)G_{{\hat{Z}^\circ}}(x, {dy})$ as claimed. Combined with \eqref{claim} we verified assumption \textbf{(A.3.1)} in the current context. 
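The change of variables used in the second equality of \eqref{nupot} is worth spelling out: since $\iota$ is the right-continuous inverse of $s\mapsto \int_0^s \sigma({\hat{X}^\circ}_u)^{-\alpha}\,du$, the substitution $s=\iota_t$, i.e. $t=\int_0^s \sigma({\hat{X}^\circ}_u)^{-\alpha}\,du$, gives ${\rm d}t = \sigma({\hat{X}^\circ}_s)^{-\alpha}\,{\rm d}s$ and hence \[ \int_0^\infty f({\hat{X}^\circ}_{\iota_t})\,dt = \int_0^\infty f({\hat{X}^\circ}_s)\,\sigma({\hat{X}^\circ}_s)^{-\alpha}\,ds. \] The same substitution is also the mechanism producing the factor $\sigma^{-\alpha}$ in the Revuz measure computation \eqref{walsh}.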
The remaining conditions of Nagasawa's Theorem \ref{Ndual} are trivially fulfilled since all processes involved have c\`adl\`ag trajectories. Therefore part (i) of the proposition now follows from Nagasawa's duality theorem.\smallskip \medskip (ii) We first show that \begin{align}\label{and} \mu(dy) = \int_{\mathbb{R}}\nu({dx})G_{Z^\dagger}(x, {dy}) \quad \text{ on } \mathcal B(\mbox{\rm I\hspace{-0.02in}R}), \end{align} where $\nu= \delta_{\pm\infty}$, $G_{Z^\dagger}(x, {dy})$ is the potential measure of $Z^\dagger$ on $\mathcal B(\mbox{\rm I\hspace{-0.02in}R})$ and $\mu$ is the duality measure from \eqref{claim}. To do this, let us first prove that \begin{align}\label{hilf} G_{Z^\dagger}(\pm\infty, A) = \lim_{|x|\to\infty}G_{Z^\dagger}(x,A),\quad \forall A\in \mathcal B(\mbox{\rm I\hspace{-0.02in}R}), \end{align} and then compute the righthand side explicitly. For a bounded Borel set $A\subset[-{L},{L}]$ and $|x|>{L}$ or $x=\pm\infty$, we have by the strong Markov property that \begin{align*} G_{Z^\dagger}(x,A) = \int_{-{L}}^{L} G_{Z^\dagger}(z, A){\rm P}_x(Z_{T^{(-{L},{L})}}\in dz), \end{align*} where $T^{(-{L},{L})}=\inf\{t\geq 0: |Z_t|\leq {L}\}$. As we have assumed that ${\pm\infty}$ is an entrance point for $Z$, we also have the weak convergence \begin{align}\label{hg} {\rm P}_{\pm\infty}(Z_{T^{(-{L},{L})}}\in dz) =\lim_{|x|\to\infty}{\rm P}_{x}(Z_{T^{(-{L},{L})}}\in dz)\quad \text{ on }\mathcal B(\mbox{\rm I\hspace{-0.02in}R}). \end{align} See for example Chapter 13 of \cite{Whitt}, using the regularity of stable processes. The claim \eqref{hilf} now follows from the weak convergence \eqref{hg} if $z\mapsto G_{Z^\dagger}(z, A)$ is bounded and continuous on $[-{L},{L}]$. The boundedness and continuity come from the explicit form of $G_{X^\dagger}$.
The latter is given in Theorem II.4.3 of Kyprianou \cite{KALEA}, who proved that $G_{X^\dagger}(x,{dy})$ has a density, say $g_{X^\dagger}(x,y)$, such that \begin{align} g_{X^\dagger}(x,y) &=-\frac{\Gamma(1-\alpha)}{\pi^2}\left(|y|^{\alpha-1}s(y) - |y-x|^{\alpha-1} s(y-x) +|x|^{\alpha-1}s(-x)\right), \label{daggerpot} \end{align} where $ s(x) = \sin(\pi\alpha\rho)\mathbf{1}_{(x>0)} + \sin(\pi\alpha\hat{\rho})\mathbf{1}_{(x<0)}$. It follows that, \begin{align} G_{Z^\dagger}(x,A)&={\rm E}_x\left[\int_0^\infty \mathbf{1}_A(Z^\dagger_s)\,ds\right]\notag\\ &=\mathbb{E}_x\left[\int_0^\infty \sigma(X^\dagger_s)^{-\alpha} \mathbf{1}_A(X^\dagger_s)\,ds\right]\label{nextblock}\\ &=G_{X^\dagger}[\sigma^{-\alpha} \mathbf{1}_A](x)\notag\\ &=-\frac{\Gamma(1-\alpha)}{\pi^2}\int_A\left(|y|^{\alpha-1}s(y) - |y-x|^{\alpha-1} s(y-x) +|x|^{\alpha-1}s(-x)\right)\sigma(y)^{-\alpha}d y.\notag \end{align} Using the time-change in \eqref{timechangesolution} with killing at the origin in the potential measure of $Z^\dagger$ together with \eqref{nextblock} and the Riesz--Bogdan--\.Zak transform in Theorem \ref{RBZthrm}, we have, for any bounded open set $A$, \begin{align}\label{check1} G_{Z^\dagger}(\pm\infty,A) &= \lim_{|x|\to\infty}\mathbb{E}_x\left[\int_0^\infty \mathbf{1}_A(X^\dagger_t)\sigma(X^\dagger_t)^{-\alpha}|X^\dagger_t|^{2\alpha} |X^\dagger_t|^{-2\alpha}dt\right]\notag\\ &= \lim_{|x|\to\infty}\mathbb{E}_x\left[\int_0^\infty \mathbf{1}_A(X^\dagger_{\eta_s})\sigma(X^\dagger_{\eta_s})^{-\alpha}|X^\dagger_{\eta_s}|^{2\alpha} ds\right]\notag\\ &=\lim_{|x|\to\infty}\hat{\mathbb{E}}^\circ_{1/x}\left[\int_0^\infty \mathbf{1}_A(1/X_{s})\sigma(1/X_{s})^{-\alpha}|X_{s}|^{-2\alpha} ds\right]\notag\\ &= G_{\hat{X}^\circ}[g](0), \end{align} where $g(x) = \mathbf{1}_A(1/x)\sigma(1/x)^{-\alpha}|x|^{-2\alpha}$. 
The righthand side was already computed in \eqref{hilf2} as \begin{align}\label{check2} G_{\hat{X}^\circ}[g](0) &= \int_\mathbb{R}\mathbf{1}_A(1/x)\sigma(1/x)^{-\alpha}|x|^{-2\alpha}h(x){dx}\notag\\ &= \int_A\sigma(z)^{-\alpha}|z|^{2(\alpha-1)}h(1/z){dz}\notag\\ &=\int_A\sigma(z)^{-\alpha}h(z){dz}, \end{align} where in the final equality we used the explicit form of $h$ in \eqref{h} to check that $|z|^{2(\alpha-1)}h(1/z) = h(z)$, for $z\neq 0$. Putting \eqref{check1} and \eqref{check2} together gives us \[ \int_{\mathbb{R}}\nu({dx})G_{Z^\dagger}(x, A) = G_{Z^\dagger}(\pm\infty,A) = \int_{A} \sigma(z)^{-\alpha}h(z){dz}=\mu(A), \] which is \eqref{and}. The final step consists in deducing from \eqref{and} and Nagasawa's duality theorem the statement of (ii). We note that duality \eqref{claim} of the semigroups was proved in $\mbox{\rm I\hspace{-0.02in}R}\backslash \{0\}$ only, but we apply Nagasawa's theorem to $\underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}} \backslash \{0\}$. This is justified by extending the duality measure $\mu$ with zero mass at the additional state. All conditions hold trivially with this extension. The claim of part (ii) now follows from Theorem \ref{Ndual} as in (i). \end{proof} The next proposition offers an analogous result to the first one, but now with respect to entrance from $+\infty$ (resp. $-\infty$) in the spectrally positive (resp. negative) setting. Accordingly, the $h$-transformed process that is involved in the proposition is taken as the stable process conditioned to be positive (resp. negative) which was defined in \eqref{CSP}. \begin{prop}\label{nag2} Suppose that $X$ is a spectrally positive stable process with $\alpha\in (1,2)$. Suppose that ${{\hat{X}^\uparrow}}$ has probabilities $\hat{\mathbb{P}}^\uparrow_x$, $x\geq 0$. 
Define $\hat{Z}^\uparrow_t = {{\hat{X}^\uparrow}}_{\iota_t}$, $t\geq 0$, where the time-change $\iota$ is given by \begin{align}\label{tauhatb} \iota_t = \inf\left\{s>0 : \int_0^s \sigma({{\hat{X}^\uparrow}}_u)^{-\alpha}du > t\right\}, \qquad t< \int_0^\infty \sigma({{\hat{X}^\uparrow}}_u)^{-\alpha}du. \end{align} Suppose that $Z^\dagger$ is the process \eqref{timechangesolution} killed on hitting the origin. Then \begin{align}\label{claimb} Z^\dagger\text{ and } \hat{Z}^\uparrow \text{ are in weak duality with respect to } \mu(dx)=\sigma(x)^{-\alpha}h(x)dx\text{ on }[0,\infty), \end{align} where $h$ is given by \eqref{speconesidedh}. Moreover: \begin{itemize} \item[(i)] The time-reversed process $\hat{Z}^\uparrow_{(\mbox{\rm $\texttt{k}$} - t)-}$, $t\leq \mbox{\rm $\texttt{k}$}$, with $\hat{Z}^\uparrow_0 = 0$, is a time-homogeneous Markov process with transition semigroup which agrees with that of $Z$, where $\mbox{\rm $\texttt{k}$}$ is any almost surely finite $L$-time for $\hat{Z}^\uparrow$. \item[(ii)] If $+\infty$ is an entrance point for $Z^\dagger$, then the time reversed process $Z^\dagger_{(\mbox{\rm $\texttt{k}$}-t)-}$, $t\leq \mbox{\rm $\texttt{k}$}$, with $Z^\dagger_0 = +\infty$ is a time-homogeneous Markov process with transition semigroup which agrees with that of $\hat{Z}^\uparrow$, where $\mbox{\rm $\texttt{k}$}$ is any almost surely finite $L$-time for $Z$. \end{itemize} \end{prop} \begin{proof} The proof is similar to that of Proposition \ref{nag} but needs adjustment of the involved $h$-transforms. Consequently, different (more classical) results from fluctuation theory are needed. \smallskip We first deal with \eqref{claimb}. The analogues to Step 1 and Step 2 of the proof of Proposition \ref{nag} are due to Theorem 1 of Bertoin and Savov \cite{BS}. That is to say, the transition measures of $X^\dagger$ and $\hat X^\uparrow$ are in weak duality with respect to $m(dx)=h(x)dx =\Gamma(\alpha)^{-1}x^{\alpha-1}dx$. 
The analogue of Step 3 in the proof of Proposition \ref{nag} is the same here and hence \eqref{claimb} is verified without appealing to any other further results from fluctuation theory.\smallskip (i) In order that (A.3.1) holds in the present setting we need to verify that \begin{align*} \mu(dy) = \int_{[0,\infty)} \nu(da)G_{{{\hat{Z}^\uparrow}}}(a,dy)\quad \text{ on } \mathcal B(\mbox{\rm I\hspace{-0.02in}R}_+), \end{align*} where $\nu= \delta_0$ and $G_{{{\hat{Z}^\uparrow}}}$ is the potential measure of $\hat{Z}^\uparrow$ and $\mu$ is the duality measure from \eqref{claimb}. We first calculate the potential measure $G_{\hat{X}^\uparrow}(0,dy)$ of ${\hat{X}^\uparrow}$ on $\mathcal B(\mbox{\rm I\hspace{-0.02in}R})$. Using the expression for the entrance law of $X^\uparrow$ in Theorem 1 of Bertoin and Yor \cite{BY02} (see also Remark 4 of Chaumont et al. \cite{CKPR}), Fubini's theorem and substitution, we calculate, for bounded and measurable $f\geq 0$, \begin{align}\label{hilf2b} G_{{{\hat{X}^\uparrow}}}[f](0)&=\int_0^\infty \hat{\mathbb{E}}^\uparrow_0[f({\hat{X}^\uparrow}_t)]\,dt\notag\\ &=\frac{1}{\alpha {\bf E}[\xi^\uparrow_1]}\int_0^\infty {\bf E}\left[\hat{I}_\infty^{-1}f(-(t/{\hat{I}_\infty})^{1/\alpha})\right]\,dt\notag\\ &=\frac{1}{\Gamma(\alpha)}\int_\mbox{\rm I\hspace{-0.02in}R} f(x)x^{\alpha-1}\,dx\notag\\ &=\int_\mbox{\rm I\hspace{-0.02in}R} f(x)h(x)dx \end{align} with $h$ from \eqref{h} and $\hat{I}_\infty = \int_0^\infty e^{\alpha\hat{\xi}^\uparrow_s}ds$, where the L\'evy process $({\hat{\xi}}^\uparrow, {\bf P})$ is characterised by its exponent in \eqref{xiuparrowpsi}. Thus, $G_{{{\hat{X}^\uparrow}}}(0,dx)= h(x){dx}$, $x\in\mbox{\rm I\hspace{-0.02in}R}$. 
\smallskip Since ${\hat{Z}^\uparrow}$ is a time-change of $\hat{X}^\uparrow$, from the above we can easily compute the potential measure by change of variables, namely \begin{align}\label{nupot} G_{{{\hat{Z}^\uparrow}}}(0, A) &=\hat{\mathbb{E}}^\uparrow_0\left[\int_0^\infty \mathbf{1}_A({\hat{X}^\uparrow}_{\iota_t})\,dt\right]\notag\\ &=\hat{\mathbb{E}}^\uparrow_0\left[\int_0^\infty \mathbf{1}_A({\hat{X}^\uparrow}_t)\sigma({\hat{X}^\uparrow}_t)^{-\alpha}\,dt\right]\notag\\ &=\int_A \sigma(x)^{-\alpha}h(x)\,dx,\notag\\ &=\mu(A) \end{align} for bounded and open sets $A$. Hence, we obtain that $\int_{\mathbb{R}}\nu(dx)G_{{\hat{Z}^\uparrow}}(x, {dy})=\mu(dy)$ which is the condition \textbf{(A.3.1)} of Theorem \ref{Ndual}. Noting again that the other conditions of Nagasawa are trivially fulfilled since all processes involved have c\`adl\`ag trajectories, the proof is now complete.\smallskip (ii) In order that (A.3.1) holds in the present setting we need to verify that \begin{align*} \mu(dy) = \int_{\mathbb{R}}\nu({dx})G_{Z^\dagger}(x, {dy}) \quad \text{ on } \mathcal B(\mbox{\rm I\hspace{-0.02in}R}_+), \end{align*} where $\nu= \delta_{+\infty}$ and $G_{Z^\dagger}(x, {dy})$ is the potential measure of $Z^\dagger$ on $\mathcal B(\mbox{\rm I\hspace{-0.02in}R}_+)$ and $\mu$ is the duality measure from \eqref{claimb}. As in the proof of Proposition \ref{nag} (ii), it is straightforward to show that \begin{align}\label{hilfb} G_{Z^\dagger}(+\infty, A) = \lim_{x\to+\infty}G_{Z^\dagger}(x,A),\quad \forall A\in \mathcal B(\mbox{\rm I\hspace{-0.02in}R}_+). \end{align} Indeed, for $x>{L}$ or $x=+\infty$ and $A\subset[0,{L}]$ we have (recall that $X$ is assumed to only creep downwards) by the strong Markov property that \begin{align*} G_{Z^\dagger}(x,A) = \int_{0}^{L}{\rm P}_x(Z_{T^{(-\infty,L]}}\in dz)G_{Z^\dagger}(z, A)= G_{Z^\dagger}({L},A), \end{align*} where $T^{(-\infty,L]}=\inf\{t\geq 0: Z_t\leq {L}\}=\inf\{t\geq 0: Z_t= {L}\}$. The claim \eqref{hilfb} now follows. 
Now we combine \eqref{hilfb}, the time-change in Theorem \ref{zthrm} and Chaumont's transform in Theorem \ref{CRBZ}, to get, for bounded and open sets $A$, \begin{align*} G_{Z^\dagger}(+\infty, A) &= \lim_{x\to\infty}\mathbb{E}_x\left[\int_0^\infty \mathbf{1}_A(X^\dagger_t)(X_t^\dagger)^{2\alpha} (X^\dagger_t)^{-2\alpha}\sigma(X^\dagger_t)^{-\alpha}dt\right]\\ &= \lim_{x\to\infty}\mathbb{E}_x\left[\int_0^\infty \mathbf{1}_A(X^\dagger_{\gamma_s})(X^\dagger_{\gamma_s})^{2\alpha} \sigma(X^\dagger_{\gamma_s})^{-\alpha}ds\right]\\ &=\lim_{x\to\infty}\hat{\mathbb{E}}^\uparrow_{1/x}\left[\int_0^\infty \mathbf{1}_A(1/X_{s})X_{s}^{-2\alpha}\sigma(1/X_s)^{-\alpha} ds\right]\\ &= G_{\hat{X}^\uparrow}[g](0), \end{align*} where $g(x) = \mathbf{1}_A(1/x)x^{-2\alpha}\sigma(1/x)^{-\alpha}$ and (as above) the continuity at the origin of $G_{\hat{X}^\uparrow}$ is a consequence of $0$ being an entrance boundary for $\hat X^\uparrow$, see Lemma \ref{zeroenter2}. The righthand side was already computed in \eqref{hilf2b}. Plugging it into the righthand side above gives us for bounded $A\in\mathcal{B}(\mbox{\rm I\hspace{-0.02in}R})$, \begin{align*} G_{Z^\dagger}(+\infty, A) &= \int \mathbf 1_A(1/x)x^{-2\alpha}\sigma(1/x)^{-\alpha}h(x){dx}\\ &= \int \mathbf 1_A(z)z^{2(\alpha-1)} \sigma(z)^{-\alpha}h(1/z){dz}\\ &=\int_A \sigma(z)^{-\alpha}h(z){dz}, \end{align*} where in the final equality we used the explicit form of $h$ to obtain $z^{2(\alpha-1)}h(1/z) = h(z)$ for $z\neq 0$. We can now conclude that $G_{Z^\dagger}(+\infty, dy) = \mu(dy)$ on $\mathbb{R}_+$ which is the condition \textbf{(A.3.1)} of Theorem \ref{Ndual}. The claim in part (ii) now follows from Theorem \ref{Ndual} of Nagasawa as before with the same slight adjustment mentioned in the final paragraph at the end of the proof of Proposition \ref{nag}. 
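For completeness, the identity $z^{2(\alpha-1)}h(1/z) = h(z)$ used in the final equality can be checked in one line from the explicit one-sided form $h(x)=\Gamma(\alpha)^{-1}x^{\alpha-1}$ recorded earlier in this proof: \begin{align*} z^{2(\alpha-1)}h(1/z) = z^{2(\alpha-1)}\,\frac{z^{-(\alpha-1)}}{\Gamma(\alpha)} = \frac{z^{\alpha-1}}{\Gamma(\alpha)} = h(z), \qquad z>0. \end{align*}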
\end{proof} \section{Entrance from infinity, the impossible cases}\label{impossible} This first section of the main proof gathers the cases where entrance from infinity is impossible irrespective of $\sigma$, i.e. a cross appears in the table of Theorem \ref{main}. Recall that entrance means that the boundary point is enterable but not an exit. All proofs are indirect and based on the triviality of certain limiting hitting distributions (overshoots, inshoots) of stable processes for which explicit formulas are available.\smallskip Recall that for $x\in \mbox{\rm I\hspace{-0.02in}R}$, ${\rm P}_x$ denotes the law of the unique weak solution to the SDE \eqref{2} issued from $x$, $\P_x$ denotes the law of the stable process issued from $x\in\mbox{\rm I\hspace{-0.02in}R}$ and ${\rm P}_x$ can be expressed via the time-change \eqref{timechangesolution} in terms of $\P_x$. To study ${\rm P}$ for infinite entrance points we use the strong Markov property (a consequence of the Feller assumption) at first hitting times and then use Proposition \ref{pr} to obtain formulas in terms of the stable process. First hitting distributions under ${\rm P}_x$ are identical to those under $\P_x$ as the time-change does not influence the jump sizes. Note that, since $\sigma>0$ is assumed continuous, $\sigma$ is bounded away from zero on all compact sets. Hence, the time-change in \eqref{timechangesolution} does not level off in $\mbox{\rm I\hspace{-0.02in}R}$ so that solutions to the SDE \eqref{2} visit the same sets as the driving stable process. \subsection{Entrance from $+\infty$, two-sided jumps, $\alpha\in(0,2)$}\label{proof0} In this first proof we show that divergence of overshoots for stable L\'evy processes implies that under ${\rm P}_{+\infty}$ trajectories would jump instantaneously from $+\infty$ to $-\infty$, which contradicts continuous entry. 
We consider $\overline{\mbox{\rm I\hspace{-0.02in}R}}=(-\infty,+\infty]$ and assume $({\rm P}_x,{x\in\bar \mbox{\rm I\hspace{-0.02in}R}})$ is a Feller extension of $({\rm P}_x, x\in\mbox{\rm I\hspace{-0.02in}R})$, satisfying ${\rm P}_{+\infty}(\lim_{t\downarrow 0} Z_t=+\infty)=1$. Recall that the Feller property of the extension implies the strong Markov property, which we apply to the first hitting times $T^{(-\infty,L]}=\inf\{t\geq 0: Z_t\leq {L}\}$ for ${L}\in \mbox{\rm I\hspace{-0.02in}R}$. \smallskip Using the time-change representation \eqref{timechangesolution}, we find that, for all ${{L}}\in \mbox{\rm I\hspace{-0.02in}N}$ and compact sets $A\subset \mbox{\rm I\hspace{-0.02in}R}$, \begin{align}\label{ab} \lim_{z\to+\infty}{\rm P}_z(Z_{T^{{(-\infty,L]}}}\in A)= \lim_{z\to+\infty}\P_z(X_{\tau^{(-\infty,{{L}}]}}\in A)=0, \end{align} where $\tau^{(-\infty,{{L}}]} = \inf\{t\geq 0: X_t\leq {L}\}$ and we have used that the ranges of $Z$ and $X$ agree and the fact that $X$ has no stationary overshoots (recall the discussion around \eqref{C2}). This last claim can be verified directly by recalling the classical result which states that \begin{align}\label{leif} \P_z({{L}}-X_{\tau^{(-\infty,{{L}}]}}\leq y) = \frac{\sin(\pi\alpha\hat{\rho})}{\pi}\int_0^{y/(z-{{L}})}t^{-\alpha\hat{\rho}}(1+t)^{-1}dt, \qquad z> {{L}}. \end{align} See for example Equation (2) of Rogozin \cite{Rog} for the above formula. For bounded and measurable $A$, define the auxiliary function \begin{align*} f(z)=\begin{cases} {\rm P}_z(Z_{T^{{(-\infty,L]}}}\in A)&\text{ if }z\in\mbox{\rm I\hspace{-0.02in}R},\\ 0&\text{ if }z=+\infty, \end{cases} \end{align*} so that $0\leq f\leq 1$. Thanks to \eqref{ab} and the explicit overshoot distribution \eqref{leif}, $f$ is continuous on $\overline{\mbox{\rm I\hspace{-0.02in}R}}$. Hence, for every $\epsilon>0$, there is some ${{L}}$ so that $0\leq f(z)\leq \epsilon$ for all $z>{{L}}$. 
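Let us spell out the elementary estimate behind the limit in \eqref{ab}. Writing $\beta\in(0,1)$ for the exponent appearing in the integrand of \eqref{leif}, we have, for fixed $y>0$ and $z>{{L}}$, \begin{align*} \int_0^{y/(z-{{L}})}t^{-\beta}(1+t)^{-1}dt \leq \int_0^{y/(z-{{L}})}t^{-\beta}dt = \frac{1}{1-\beta}\left(\frac{y}{z-{{L}}}\right)^{1-\beta} \xrightarrow[z\to+\infty]{} 0, \end{align*} so that the overshoot below the level ${{L}}$ diverges as the starting point tends to $+\infty$ and no fixed compact set can carry mass in the limit.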
Applying the strong Markov property at $T^{(-\infty,L']}$ for ${{L}'}>{{L}}$ gives \begin{align*} {\rm P}_{+\infty}(Z_{T^{(-\infty,L]}}\in A) &=\lim_{{{L}'}\to +\infty} \int {\rm P}_y(Z_{T^{(-\infty,L]}}\in A)\, {\rm P}_{+\infty} (Z_{T^{(-\infty,L']}}\in dy)\\ &=\lim_{{{L}'}\to +\infty}\left( \int_{y>{{L}}} f(y)\, {\rm P}_{+\infty} (Z_{T^{(-\infty,L']}}\in dy)+\int_{y\leq {{L}}} f(y)\, {\rm P}_{+\infty} (Z_{T^{(-\infty,L']}}\in dy)\right)\\ &\leq \epsilon+\lim_{{{L}'}\to +\infty}{\rm P}_{+\infty} (Z_{T^{(-\infty,L']}}\leq {{L}})\\ &=\epsilon, \end{align*} where the final equality follows since trajectories enter from infinity continuously by assumption. Hence, ${\rm P}_{+\infty}(Z_{T^{(-\infty,L]}}\in A)=0$ for every ${L}$ and every bounded and measurable subset $A$ of $\mbox{\rm I\hspace{-0.02in}R}$, which implies that under ${\rm P}_{+\infty}$ no compact subset of $\mbox{\rm I\hspace{-0.02in}R}$ is visited. \subsection{Entrance from $-\infty$, two-sided jumps, $\alpha\in(0,2)$}\label{proof1} The proof follows the same lines as before with $\underline{\mbox{\rm I\hspace{-0.02in}R}}=[-\infty,+\infty)$, replacing $T^{(-\infty,L]}$ by $T^{[L,\infty)}=\inf\{t\geq 0: Z_t\geq {L}\}$ and using, for all ${{L}}\in \mbox{\rm I\hspace{-0.02in}N}$ and $A\subset [-{{L}},{{L}}]$, the continuous function \begin{align*} f(z)=\begin{cases} {\rm P}_z(Z_{T^{[L,\infty)}}\in A)&\text{ if }z\in\mbox{\rm I\hspace{-0.02in}R},\\ 0&\text{ if }z=-\infty, \end{cases} \end{align*} with analogous formulas forcing an instantaneous jump from $-\infty$ to $+\infty$. \subsection{Entrance from $\pm \infty$, two-sided jumps, $\alpha\in(0,1)$}\label{proof2} The proof follows the same idea as in Section \ref{proof0} replacing overshoots by `inshoots' into compact intervals and then using that transience of stable processes for $\alpha\in(0,1)$ does not allow arbitrary compact sets to be reached from infinity. 
\smallskip The differences are the use of first hitting times $T^{(-{L},{L})}=\inf\{t\geq 0: Z_t\in (-{L},{L})\}$, the auxiliary function \begin{align*} f(z)=\begin{cases} {\rm P}_z(Z_{T^{(-L,L)}}\in A)&\text{ if }z\in\mbox{\rm I\hspace{-0.02in}R},\\ 0&\text{ if }|z|=\pm\infty, \end{cases} \end{align*} on $\overline{\underline \mbox{\rm I\hspace{-0.02in}R}}$ and the argument for continuity of $f$. Here, $f$ is continuous in the interior of $\overline{\underline \mbox{\rm I\hspace{-0.02in}R}}$ due to the explicit form of ${\rm P}_z(Z_{T^{(-L,L)}}\in A) = \mathbb{P}_z(X_{\tau^{(-{{L}},{{L}})}}\in A) = \mathbb{P}_{z/{{L}}}(X_{\tau^{(-1,1)}}\in A/{{L}})$ given in Theorem 1.1 of Kyprianou et al. \cite{KPW}. Specifically, it says that, for $\alpha\in(0,1)$, \begin{align} &{\mathbb P}_{x}\bigl(X_{\tau^{(-1,1)}} \in d y\bigr) = \frac{\sin(\pi\alpha\hat{\rho})}{\pi} (1+x)^{\alpha\rho}(1+y)^{-\alpha\rho} (x-1)^{\alpha\hat{\rho}} (1-y)^{-\alpha\hat{\rho}} (x-y)^{-1} d y. \label{striplowalpha} \end{align} Continuity of $f$ at $\pm\infty$ is due to the transience of stable L\'evy processes for $\alpha\in(0,1)$, so that, using again the time-change representation \eqref{timechangesolution}, $\lim_{|z|\to\infty} {\rm P}_z(Z_{T^{(-L,L)}}\in A)=0$ which implies that under ${\rm P}_{\pm\infty}$ no compact subset of $\mbox{\rm I\hspace{-0.02in}R}$ is visited. \subsection{Entrance from $\pm \infty$ or $-\infty$, spectrally positive jumps, $\alpha\in(0,2)$}\label{proof4} First note that spectral positivity excludes the case that $\alpha = 1$ (which is necessarily symmetric). We therefore only need to deal with the cases $\alpha\in(0,1)\cup(1,2)$. \smallskip On account of the fact that we know the law of the overshoot of $X$ into $({L},\infty)$, see e.g. again Rogozin \cite{Rog}, we can apply a similar argument to the one in \eqref{ab} and deduce that $ \lim_{z\to-\infty}{\rm P}_z(Z_{T^{[L,\infty)}}\in A)=0 $ for all compact sets $A$ so that entrance from $-\infty$ is impossible. 
\smallskip Next, we consider the limit of ${\rm P}_z(Z_{T^{(-L,L)} }\in A)$ as $|z|\to\infty$ for all compact sets $A$. When $\alpha\in(0,1)$, the process $X$ is a subordinator and hence the paths of $Z$ are monotone increasing. Therefore the aforesaid limit does not exist. On the other hand, when $\alpha>1$, we can appeal to the spectrally positive analogue of \eqref{striplowalpha}, see Proposition 1.3 of \cite{KPW} or \cite{Port67}. This tells us that, for $z<-1$, \begin{align*} {\mathbb P}_z(X_{\tau^{(-1,1)}} \in d y)&= \frac{\sin \pi(\alpha-1)}{\pi} (|z|-1)^{\alpha-1} (1+y)^{1-\alpha} (|z|+y)^{-1} \mathrm{d} y \\ & \quad +\delta_{-1}(\mathrm{d} y) \frac{\sin \pi(\alpha-1)}{\pi} \int_0^{\frac{|z|-1}{|z|+1}} t^{\alpha-2} (1-t)^{1-\alpha} \, d t, \end{align*} and ${\mathbb P}_z(X_{\tau^{(-1,1)}} =1) = 1$ for $z>1$ (positive jumps). With the help of scaling, it is therefore clear that limits of ${\rm P}_z(Z_{T^{(-L,L)}}\in A) = {\mathbb P}_z(X_{\tau^{(-{L},{L})}} \in A)$ do not exist. Indeed, one need only compare the probabilities ${\rm P}_z(Z_{T^{(-L,L)}}={L}) $ as $z\to\infty$ and $z\to-\infty$. \subsection{Entrance from $\pm \infty$ or $+\infty$, spectrally negative jumps, $\alpha\in(0,2)$}\label{proof6} The proof is analogous to the one above. \subsection{Entrance from $+\infty$, spectrally positive jumps, $\alpha\in(0,1)$}\label{proof4.5} By virtue of the increasing nature of the paths in this setting, entrance at $+\infty$ is trivially impossible. \subsection{Entrance from $-\infty$, spectrally negative jumps, $\alpha\in(0,1)$}\label{proof8} By virtue of the decreasing nature of the paths in this setting, entrance at $-\infty$ is trivially impossible. \section{Entrance from $\pm\infty$, two-sided jumps, $\alpha\in(1,2)$}\label{proof3} In this section we discuss the main arguments of the article for which we have seen significant preparation in the earlier sections. 
Proofs of Section \ref{proof5} go along the same lines.\smallskip We break the proof into necessity and sufficiency of the integral test \begin{align}\label{integral} I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R})=\int_\mbox{\rm I\hspace{-0.02in}R} \sigma(x)^{-\alpha}|x|^{\alpha-1}\,{dx}<\infty \end{align} for $\pm\infty$ as an entrance point.\smallskip \textit{Idea for necessity:} Suppose solutions enter from infinity. Since for $\alpha>1$ solutions will hit the origin almost surely (as they are time-changes of the stable process which hits points) we can time-reverse at the first hitting time of $0${\color{black}; see Figure \ref{fig2}}. From Proposition \ref{nag} we know the dynamics of the reversed process. It is a time-change under $\hat{\mathbb{P}}^\circ_{0}$, the stable process conditioned to avoid $0$. Since the conditioned process itself is conservative, necessarily the time-change \eqref{tauhat} needs to explode under $\hat{\mathbb{P}}^\circ_{0}$. Recall $\hat{\mathbb{P}}^\circ_{0}$ is well-defined due to Lemma \ref{zeroenter}. Hence, we obtain the necessity of an almost surely finite perpetual integral under $\hat{\mathbb{P}}^\circ_{0}$. Since the conditioned process is a self-similar Markov process we can employ the Lamperti representation for the positive part and the negative part (alternatively the Lamperti--Kiu transformation to the entire process) to get two almost surely finite perpetual integrals over two L\'evy processes with positive finite means and local times. For such perpetual integrals we can employ the results of \cite{DK} to obtain an integral test which gives \eqref{integral}.\smallskip \begin{figure}[h] \includegraphics[scale=0.5]{ooand0.pdf} \caption{Time-reversing SDE entering at $\pm\infty$ to give time-change of $h$-transform $\hat{X}^\circ$ entering at $0$.} \label{fig2} \end{figure} \textit{Proof of necessity:} Let us assume $\pm\infty$ is an entrance point for the SDE \eqref{2} in the sense of Definition \ref{Fellerdef}. 
Necessarily we must have under $\emph{\rm P}_{\pm\infty}$ that $T^{(-{L},{L})} = \inf\{t>0: |Z_t|<{L}\}<\infty$ with positive probability for some ${L} >0$ and that this probability tends to 1 as ${L}\to\infty$. From Proposition \ref{pr} and the recurrence of stable processes for $\alpha\in (1,2)$, we also know that $\zeta=\inf\{t>0: Z_t = 0\}$ is almost surely finite when $Z$ is issued from any point in $\mbox{\rm I\hspace{-0.02in}R}$ (this uses the assumption that $\sigma$ is positive and continuous, hence, the time-change cannot level off in $\mbox{\rm I\hspace{-0.02in}R}$). It follows by the strong Markov property that the first hitting time of zero $\zeta$ is finite almost surely under $\emph{\rm P}_{\pm\infty}$. We also note that $\zeta$ is an $L$-time for the SDE killed at $0$. Hence, we will consider the time-reversal under $\emph{\rm P}_{\pm\infty}$ from $\texttt{k} =\zeta$. \smallskip As we have assumed that $\pm\infty$ is an entrance point for $\emph{\rm P}_{\pm\infty}$, Proposition \ref{nag} (ii) tells us that $\pm\infty$ is accessible in an almost surely finite time for $\hat{Z}^\circ$, where $\hat{Z}^\circ_ t = {{\hat{X}^\circ}}_{\iota_t}$ with $\hat{Z}^\circ_0 = 0$. The conservative process ${{\hat{X}^\circ}}$ has probabilities $\hat{\mathbb{P}}^\circ_x$, $x\in \mathbb{R}$, and the time-change $\iota$ is given by \begin{align*} \iota_t = \inf\left\{s>0 : \int_0^s \sigma({{\hat{X}^\circ}}_u)^{-\alpha}du > t\right\}, \qquad t<\int_0^\infty \sigma({{\hat{X}^\circ}}_u)^{-\alpha}du. \end{align*} The finite-time accessibility of $\pm\infty$ for $\hat{Z}^\circ$ and the fact that $\hat{X}^\circ$ is a conservative process imply that the time-change has to explode in finite time or, equivalently, \begin{align}\label{prove this} \int_0^\infty \sigma({{\hat{X}^\circ}}_s)^{-\alpha}ds<\infty \end{align} almost surely under $\hat{\mathbb{P}}^\circ_{0}$. 
The first exit time of $\hat{X}^\circ$ from $(-\varepsilon,\varepsilon)$, for any $\varepsilon>0$, occurs before $\hat{X}^\circ$ reaches $\pm\infty$. Moreover, appealing to \eqref{2sideexit} in combination with the $h$-transform that defines $\hat{\mathbb{P}}^\circ_{0}$, it is clear that the law of the overshoot of $\hat{\mathbb{P}}^\circ_{0}$ outside of $(-\varepsilon, \varepsilon)$ is absolutely continuous with respect to Lebesgue measure. Hence, it follows that \eqref{prove this} is almost surely finite under $\hat{\mathbb{P}}^\circ_{x}$, for Lebesgue almost every $x\in\mathbb{R}$. In what follows we fix two such points, one with $x>0$ and one with $x<0$.\smallskip To show that the necessary almost sure finiteness in \eqref{prove this} implies the finiteness of the integral test \eqref{integral}, we need to introduce a path transformation of $\hat{X}^\circ$. We note that in the spirit of the example in Section \ref{pssmpexamples}, we can censor out the negative parts of the path of $\hat{X}^\circ$ to create a positive self-similar Markov process, say $\hat{X}^{\circ\hspace{-1pt}>}$. That is to say \begin{align} \hat{X}^{\circ\hspace{-1pt}>}_t = \hat{X}^\circ_{\hat{\gamma}^\circ_t},\quad \text{ with } \hat{\gamma}^\circ_t = \inf\left\{s>0 : \int_0^s \mathbf{1}_{(\hat{X}^\circ_u<0)}du>t\right\}. \label{Ycen} \end{align} Let us write $\hat{\xi}^{\circ\hspace{-1pt}>}$ for the L\'evy process appearing in Lamperti's representation \eqref{pssMpLamperti} of $\hat{X}^{\circ\hspace{-1pt}>}$. 
The finiteness of \eqref{prove this} implies the almost sure finiteness of the integrals \begin{align}\label{censored} \int_0^\infty \sigma({{\hat{X}^\circ}}_t)^{-\alpha}\mathbf{1}_{({{\hat{X}^\circ}}_t>0)}{dt} &= \int_0^\infty \sigma({{\hat{X}^{\circ\hspace{-1pt}>}}}_s)^{-\alpha} {ds}\notag\\ &=\int_0^\infty \sigma( e^{\hat{\xi}^{\circ\hspace{-1pt}>}_{\hat\varphi_u}})^{-\alpha} {du}\notag\\ &=\int_0^\infty \sigma( e^{\hat{\xi}^{\circ\hspace{-1pt}>}_{v}})^{-\alpha} e^{\alpha\hat{\xi}^{\circ\hspace{-1pt}>}_{v}}{dv}. \end{align} To the (almost surely finite) righthand side we will apply \cite{DK} to obtain the integral test \eqref{integral}. The result of \cite{DK} that we apply states the following: If $\xi$ is a L\'evy process with local times and finite positive mean, then \begin{align*} \texttt{P}\left(\int_0^\infty f(\xi_s)\,ds<\infty\right)=1\quad \Longleftrightarrow \quad \int_0^\infty f(x)\,dx<\infty. \end{align*} We will now check that $\hat{\xi}^{\circ\hspace{-1pt}>}$ has local times (equivalently: $\hat{\xi}^{\circ\hspace{-1pt}>}$ hits points, compare for instance Theorem 7.12 of \cite{Kbook} and Theorem V.1 of \cite{bertoin}) and finite positive mean.\smallskip (i) {\it Local times}. Note that, for the stable process, as $\alpha\in(1,2)$, we have $\hat{\mathbb{P}}_x(\tau^{\{y\}}<\infty)=1$ for all $x,y\in\mbox{\rm I\hspace{-0.02in}R}$, where $\tau^{\{y\}}= \inf\{t>0: X_t = y\}$. It follows from \eqref{updownCOM} (albeit with $X$ replaced by $X^\dagger$) that $\hat{\mathbb{P}}^\circ_x(\hat{\tau}_\circ^{\{y\}}<\infty)>0$ for all $x,y\in\mbox{\rm I\hspace{-0.02in}R}$, where $\hat{\tau}_\circ^{\{y\}} = \inf\{t>0: \hat{X}^\circ_t = y\}$. But then the censored processes hit points (same range) and also the L\'evy processes through the Lamperti transformation hit points (exponential change of space, time-change irrelevant). Hence, $\hat{\xi}^{\circ\hspace{-1pt}>}$ has local times.\smallskip (ii) {\it Finite positive mean}. 
We can derive the characteristic exponent of $\hat{\xi}^{\circ\hspace{-1pt}>}$ from the characteristic exponent of, say $\hat\xi^{>}$, the L\'evy process that lies behind the stable process $\hat{X}^\dagger$, which has been negatively censored. Indeed, from \eqref{censoredpsi}, its characteristic exponent takes the form \begin{equation} \hat{\Psi}^{>}(z) = \frac{\Gamma(\alpha\hat{\rho} - \iu{z})}{\Gamma(-\iu{z})} \frac{\Gamma(1 - \alpha\hat{\rho} + \iu{z})}{\Gamma(1 - \alpha + \iu{z})}, \qquad z\in \mbox{\rm I\hspace{-0.02in}R}. \label{censoredpsi2} \end{equation} On account of the fact that, for each fixed $t\geq 0$, $\omega\mapsto \inf\left\{s>0 : \int_0^s \mathbf{1}_{(\omega_u<0)}du>t\right\}$ is an almost surely finite stopping time under $\hat{\mathbb{P}}_x$, $x\neq 0$, as is the time-change in the Lamperti transform \eqref{pssMpLamperti} for the process $\hat{X}^{\circ\hspace{-1pt}>}$, the Doob $h$-transform that defines $\hat{X}^\circ$ is tantamount to an Esscher transform (exponential change of measure) on $\hat\xi^{>}$. In particular, note that $\Psi_{\hat\xi^{>}}(-\iu(\alpha-1))=0$ and $\exp((\alpha-1)\hat\xi^{>}_t)$, $t\geq0$, is a $\hat{\mathbb{P}}$-martingale. It follows that the characteristic exponent of $\hat{\xi}^{\circ\hspace{-1pt}>}$ takes the form \begin{equation} \hat{\Psi}^{\circ\hspace{-1pt}>}(z) = \frac{\Gamma(1-\alpha\rho - \iu{z})}{\Gamma(1-\alpha -\iu{z})} \frac{\Gamma(\alpha\rho + \iu{z})}{\Gamma(\iu{z})} , \qquad z\in \mbox{\rm I\hspace{-0.02in}R}. \label{hypexp} \end{equation} By computing $-{\rm i} \Psi'_{\hat{\xi}^{\circ\hspace{-1pt}>}}(0)$ we can verify directly that the mean of $\hat{\xi}^{\circ\hspace{-1pt}>}_1$ is finite.\smallskip With local times and finite positive mean we apply Theorem 1 of \cite{DK}, for which the starting value of $\hat{\xi}^{\circ\hspace{-1pt}>}$ is irrelevant. 
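To sketch the verification of finiteness (modulo the sign convention used for characteristic exponents), expand \eqref{hypexp} around $z=0$: since $1/\Gamma(\iu{z})=\iu{z} + O(z^2)$, \begin{align*} \hat{\Psi}^{\circ\hspace{-1pt}>}(z) = \frac{\Gamma(1-\alpha\rho)\Gamma(\alpha\rho)}{\Gamma(1-\alpha)}\,\iu{z} + O(z^2) = \frac{\pi}{\sin(\pi\alpha\rho)\Gamma(1-\alpha)}\,\iu{z} + O(z^2), \end{align*} where we used the reflection formula $\Gamma(1-\alpha\rho)\Gamma(\alpha\rho)=\pi/\sin(\pi\alpha\rho)$. In particular, $|{\rm E}[\hat{\xi}^{\circ\hspace{-1pt}>}_1]| = \pi/\left(\sin(\pi\alpha\rho)|\Gamma(1-\alpha)|\right)$, which is finite since $\alpha\rho\in(0,1)$ and $\alpha\in(1,2)$.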
This tells us that \[ \int_0^\infty \sigma( e^{\hat{\xi}^{\circ\hspace{-1pt}>}_{v}})^{-\alpha} e^{\alpha\hat{\xi}^{\circ\hspace{-1pt}>}_{v}}{dv}<\infty\,\,\,\text{a.s.} \quad\Longleftrightarrow\quad \int_0^\infty \sigma( e^y)^{-\alpha} e^{\alpha y}{dy} = \int_1^\infty \sigma(x)^{-\alpha} x^{\alpha-1}dx<\infty. \] The analogous argument in which we censor away the positive parts of $\hat{X}^\circ$ (the negative of this censored process is a pssMp) shows that \[ \int_0^\infty \sigma({{\hat{X}^\circ}}_t)^{-\alpha}\mathbf{1}_{({{\hat{X}^\circ}}_t<0)}{dt}<\infty\,\,\,\text{a.s.} \quad\Longleftrightarrow \quad\int_{-\infty}^0\sigma(x)^{-\alpha}|x|^{\alpha-1}{dx}<\infty. \] We thus conclude that \[ \int_0^\infty \sigma({{\hat{X}^\circ}}_t)^{-\alpha}{dt} <\infty \,\,\,\text{a.s.}\quad \Longleftrightarrow \quad\int_{|x|>1}\sigma(x)^{-\alpha}|x|^{\alpha-1}{dx}<\infty. \] The integral test \eqref{integral} thus follows from \eqref{prove this}. \subsection*{Idea for sufficiency} From Proposition \ref{prop} we know that the SDE started from $x$ with law $\emph{\rm P}_{x}$ can be built under spatial-inversion ($x\mapsto 1/x$) as a time-change of the $h$-transformed (conditioned) process $\hat{X}^\circ$ started in $1/x$. The natural guess is to construct $\emph{\rm P}_{\pm\infty}$ as spatial inversion of the same time-change of $\hat{X}^\circ$ started from $0$. Two facts need to be established: the limit law $\hat{\mathbb{P}}^\circ_0=\lim_{x\to 0}\hat{\mathbb{P}}^\circ_x$ needs to be well-defined and the time-change needs to be well-defined under $\hat{\mathbb{P}}^\circ_0$. The first follows from \cite{DDK} as explained in Section \ref{sec:rssMp}, the latter by computing the expectation of the time-change, which leads to the integral test \eqref{integral}. Finally, we show that the semigroup extension defined like this is indeed a Feller extension of $(\emph{\rm P}_{x}:x\in\mbox{\rm I\hspace{-0.02in}R})$ to $\overline{\underline \mbox{\rm I\hspace{-0.02in}R}}$. 
Since $\emph{\rm P}_{\pm\infty}$ is constructed explicitly through space-inversion and time-change from $\hat{\mathbb{P}}^\circ_0$, under which trajectories leave $0$ continuously, we see immediately that under $\emph{\rm P}_{\pm\infty}$ paths almost surely start from infinity continuously{\color{black}; see Figure \ref{fig3}}. \begin{figure}[h] \includegraphics[scale=0.5]{0andoo.pdf} \caption{Space inversion and time-change of $h$-transform entrance law $\hat{\mathbb{P}}^\circ_0$ to give SDE started at $\pm\infty$.} \label{fig3} \end{figure} \subsection*{Proof of sufficiency} Suppose the integral test \eqref{integral} is satisfied. We first use \eqref{integral} to prove that \begin{align}\label{abc} \hat{\mathbb{E}}^\circ_0\left[ \int_0^\infty\beta(\hat{X}^\circ_u){du}\right] <\infty \end{align} with $\beta(x)=\sigma(1/x)^{-\alpha}|x|^{-2\alpha}$ for $x\neq 0$. \smallskip Recall that, when $\hat{X}^\circ$ is negatively censored as in \eqref{Ycen}, the resulting positive self-similar Markov process, thanks to the type of underlying L\'evy process described in \eqref{hypexp}, leaves the origin instantaneously and does not hit it again; see for example the discussion in \cite{CKPR}. A similar statement holds when $\hat{X}^\circ$ is positively censored. It follows that under $\hat{\mathbb{P}}^\circ_0$, the origin is left instantaneously and $0$ is not hit again; thus the integral is well-defined but possibly infinite.\smallskip To prove \eqref{abc}, note that, for each fixed $t>0$, $\omega\mapsto \int_0^t \beta(\omega_s){\rm d}s$ is a continuous functional in the Skorohod topology. 
Using that $x\mapsto \hat{\mathbb P}^\circ_x$ is weakly continuous, Fatou's Lemma and $\beta \geq 0$, we first get \begin{align}\label{NAnnoying} \hat{\mathbb{E}}^\circ_0\left[ \int_0^t\beta(\hat{X}^\circ_u){du}\right]\leq\lim_{|x|\to0}\hat{\mathbb{E}}^\circ_x\left[ \int_0^t\beta(\hat{X}^\circ_u){du}\right]\leq\lim_{|x|\to0}\hat{\mathbb{E}}^\circ_x\left[ \int_0^\infty\beta(\hat{X}^\circ_u){du}\right] \end{align} for all $t\geq 0$. Hence, to prove \eqref{abc} we show that the righthand side of \eqref{NAnnoying} is finite. Recalling that $\hat{X}^\circ$ is an $h$-transform of $X^\dagger$, using $\hat{h}$ defined as in \eqref{h} albeit with the roles of $\rho$ and $\hat\rho$ interchanged, and the general $h$-transform formula for potential measures `$G^h(x,dy)={h(y)}G(x,dy)/{h(x)} $' yields \begin{align}\label{takelimitx} \begin{split} \hat{\mathbb{E}}^\circ_x\left[ \int_0^\infty\beta(\hat{X}^\circ_u){du}\right] & = \int_{\mathbb{R}}G_{\hat{X}^\circ}(x,{dy})\sigma(1/y)^{-\alpha}|y|^{-2\alpha}\\ &=\int_{\mathbb{R}}G_{\hat{X}^\dagger}(x,{dy})\frac{\hat{h}(y)}{\hat{h}(x)}\sigma(1/y)^{-\alpha}|y|^{-2\alpha}. \end{split} \end{align} In order to take the limit in \eqref{takelimitx} as $|x|\to0$, we can appeal to the expression for $G_{X^\dagger}(x,{dy})$. Recall from \eqref{daggerpot} that $G_{X^\dagger}(x,{dy})$ has a density \begin{align} g_{X^\dagger}(x,y) &=-\frac{\Gamma(1-\alpha)}{\pi^2}\left(|y|^{\alpha-1}s(y) - |y-x|^{\alpha-1} s(y-x) +|x|^{\alpha-1}s(-x)\right), \label{daggerpot2} \end{align} where $ s(x) = \sin(\pi\alpha\rho)\mathbf{1}_{(x>0)} + \sin(\pi\alpha\hat{\rho})\mathbf{1}_{(x<0)}$. It was also noted there that, following classical potential theory (see also Theorem 6.5 of \cite{G}), \begin{align} \frac{|y|^{\alpha-1}s(y)- |y-x|^{\alpha-1} s(y-x) +|x|^{\alpha-1}s(-x)}{|y|^{\alpha-1}(s(y)+ s(-y))} =\frac{g_{X^\dagger}(x,y)}{g_{X^\dagger}(y,y)} =\mathbb{P}_x(\tau^{\{y\}}<\tau^{\{0\}})\leq 1, \label{proba} \end{align} for $\tau^{\{y\}} = \inf\{t>0: X_t = y\}$. 
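For the reader's convenience, the quoted formula for potential measures under an $h$-transform follows in one line from the defining change of measure on each fixed time horizon: for bounded measurable $f\geq 0$, \begin{align*} G_{\hat{X}^\circ}[f](x) = \int_0^\infty \hat{\mathbb{E}}^\circ_x\left[f(\hat{X}^\circ_t)\right]dt = \frac{1}{\hat{h}(x)}\int_0^\infty \hat{\mathbb{E}}_x\left[\hat{h}(\hat{X}^\dagger_t)f(\hat{X}^\dagger_t)\right]dt = \frac{1}{\hat{h}(x)}\int_{\mathbb{R}}G_{\hat{X}^\dagger}(x,{dy})\hat{h}(y)f(y). \end{align*}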
Using the assumption that \begin{align*} \int_{\mbox{\rm I\hspace{-0.02in}R}}\sigma(1/y)^{-\alpha}|y|^{-\alpha -1}{dy}= \int_{\mbox{\rm I\hspace{-0.02in}R}}\sigma(z)^{-\alpha}|z|^{\alpha-1}{dz}<\infty \end{align*} and $\alpha\in(1,2)$ together with \eqref{daggerpot2} and \eqref{proba}, we compute, with a floating unimportant constant $C$, which can take different values in each line, \begin{align*} & \lim_{|x|\to 0}-\frac{\pi^2}{\Gamma(1-\alpha)}\hat{\mathbb{E}}^\circ_x\left[ \int_0^\infty\beta(\hat{X}^\circ_u){du}\right] \\ &= \lim_{|x|\to0}-\frac{\pi^2}{\Gamma(1-\alpha)}\int_{\mathbb{R}}G_{X^\dagger}(x,{dy})\frac{h(y)}{h(x)}\sigma(1/y)^{-\alpha}|y|^{-2\alpha}\notag\\ &=\lim_{|x|\to0}\int_{\mbox{\rm I\hspace{-0.02in}R}} \frac{s(-y)\left(|y|^{\alpha-1}s(y) - |y-x|^{\alpha-1} s(y-x) +|x|^{\alpha-1}s(-x)\right)}{s(-x)|x|^{\alpha -1}}\frac{\sigma(1/y)^{-\alpha}}{|y|^{\alpha + 1}}{dy}\notag\\ &\leq C \int_{\mbox{\rm I\hspace{-0.02in}R}}\lim_{|x|\to0} \frac{\left(|y|^{\alpha-1}s(y) - |y-x|^{\alpha-1} s(y-x) +|x|^{\alpha-1}s(-x)\right)}{|x|^{\alpha -1}}\frac{\sigma(1/y)^{-\alpha}}{|y|^{\alpha+1}}{dy}\notag\\ &\leq C\int_{\mbox{\rm I\hspace{-0.02in}R}}\lim_{|x|\to0}|x|^{2-\alpha}{\rm sign}(x)\frac{ \left(|y|^{\alpha-1} - |y-x|^{\alpha-1} \right)}{x}\frac{1}{|y|^{\alpha+1}}\sigma(1/y)^{-\alpha}{dy}\notag\\ &\quad+C\int_{\mbox{\rm I\hspace{-0.02in}R}}\frac{1}{|y|^{\alpha+1}}\sigma(1/y)^{-\alpha}{dy}\notag\\ &=C\int_{\mbox{\rm I\hspace{-0.02in}R}}{|z|^{\alpha-1}}\sigma(z)^{-\alpha}{dz}\notag\\ &<\infty, \end{align*} where in the first inequality we have used dominated convergence in combination with \eqref{proba}, and the fact that the right-hand side was assumed to be finite. Hence, \eqref{abc} is verified.\medskip Now we come to the crucial step. We write down explicitly the process that plays the role of the SDE \eqref{2} started from infinity. First, note that \eqref{abc} implies that $\hat{\mathbb{P}}^\circ_0$-almost surely $\int_0^\infty\beta(\hat{X}^\circ_u){du}<\infty$.
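For the reader's convenience, the substitution behind the first display of this computation can be spelled out. For $y>0$ (the case $y<0$ is symmetric), set $z=1/y$, so that ${d}y=-z^{-2}\,{d}z$ and

```latex
\sigma(1/y)^{-\alpha}\,|y|^{-\alpha-1}\,{d}y
  \;=\; \sigma(z)^{-\alpha}\,|z|^{\alpha+1}\,z^{-2}\,{d}z
  \;=\; \sigma(z)^{-\alpha}\,|z|^{\alpha-1}\,{d}z,
```

the sign being absorbed by the reversal of the orientation of integration.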
In turn, this implies that the time-change $(\theta_t, t\geq 0)$ in Proposition \ref{prop} (i) explodes in finite time. Moreover, on account of the fact that $(\hat{X}^\circ,\hat{\mathbb{P}}_0^\circ)$ is well-defined, cf. Lemma \ref{zeroenter}, the space-time transformation \begin{align} {Z}^\dagger_t=\frac{1}{\hat{X}^\circ_{\theta_t}}, \qquad t< \int_0^\infty \beta(\hat{X}^\circ_u){\,d u}, \label{RBZ4} \end{align} where \[ \theta_t =\inf\left\{s> 0 : \int_0^s\beta(\hat{X}^\circ_u){\,d u}>t\right\} \] is well-defined under $\hat{\mathbb{P}}^\circ_0$. \smallskip Given the conclusion of Proposition \ref{prop} (i), it thus follows that we have constructed a candidate for the Feller extension of $({\rm P}_z, z\in \mbox{\rm I\hspace{-0.02in}R})$ with ${\rm P}_{\pm\infty}$ defined by \eqref{RBZ4} under $\hat{\mathbb P}^\circ_0$. Note that trajectories enter instantaneously with alternations between $+\infty$ and $-\infty$ as trajectories under $\hat{\mathbb P}^\circ_0$ leave $0$ instantaneously with alternations of sign. We still need to verify, for the extension at $\pm\infty$, the weak continuity of $({\rm P}_z, z\in \underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}})$ in the Skorokhod topology and the Feller property. Note that the latter means that, for continuous and bounded $f$ on $\underline{\overline{\mbox{\rm I\hspace{-0.02in}R}}}$, we need \begin{align} \lim_{|x|\to \infty}{\rm E}_x[f(Z^\dagger_t)] ={\rm E}_{\pm\infty}[f(Z^\dagger_t)]\qquad \text{ and }\qquad \lim_{t\to 0} {\rm E}_{\pm\infty}[f(Z^\dagger_t)] = f(\pm\infty). \label{Felleratpminf} \end{align} Proposition \ref{prop} and the definition of ${\rm P}_{\pm\infty}$ allow us to equivalently write \eqref{Felleratpminf} as \begin{equation} \lim_{|x|\to \infty}\hat{\mathbb{E}}^\circ_x[f(1/\hat{X}^\circ_{\theta_t})]= \hat{\mathbb{E}}^\circ_0[f(1/\hat{X}^\circ_{\theta_t})]\qquad \text{ and }\qquad \lim_{t\to 0} \hat{\mathbb{E}}^\circ_0[f(1/\hat{X}^\circ_{\theta_t})] = f(\pm\infty).
\label{Felleratpminf2} \end{equation} Thanks to \eqref{abc} and the continuity of composition, first hitting times and $\theta$ with respect to the Skorokhod topology for sufficiently regular processes, cf. Chapter 13 of Whitt \cite{Wh}, the weak continuity and the Feller property follow from the Skorokhod continuity of $X^\circ$ from Lemma \ref{zeroenter}. \section{Entrance from $+\infty$, spectrally positive, $\alpha\in(1,2)$}\label{proof5} The entire proof is along the lines of the previous section, except that we work with the duality relation explored in Proposition \ref{nag2}, replacing $\hat{\mathbb{P}}^\circ_0$ by $\hat{\mathbb{P}}^\uparrow_0$, in order to show the sufficiency and necessity of the condition \begin{align}\label{SPLPintegral} I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R}_+)=\int_0^\infty\sigma(x)^{-\alpha}x^{\alpha-1}{dx}<\infty. \end{align} The proof for entrance from $-\infty$ in the spectrally negative regime is analogous. \subsection*{Proof of Necessity} Suppose that $+\infty$ is an entrance point. Then the duality of $\hat{Z}^\uparrow$ and $Z^\dagger$ in Proposition \ref{nag2} means that, by time reversing $Z$ from its first hitting of the origin, $+\infty$ must be accessible for $\hat{Z}^\uparrow$. Reasoning in a similar way to the `necessity' part of the proof in Section \ref{proof3}, we must have that \begin{align} \int_0^\infty \sigma({{\hat{X}^\uparrow}}_s)^{-\alpha}ds<\infty \label{prove this2} \end{align} almost surely under $\hat{{\mathbb P}}^\uparrow_{x}$, for $x\geq 0$. Recalling that ${\hat{X}^\uparrow}$ is a positive self-similar Markov process, we use Lamperti's representation to rewrite \eqref{prove this2} as a perpetual integral of the L\'evy process $\hat{\xi}^\uparrow$ discussed at the end of Section \ref{pssmpexamples}. The L\'evy process hits points (because $X$ and hence $X^\uparrow$ do) and thus has local times.
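The dichotomy behind this step, namely that for a L\'evy process with local times and finite positive mean the perpetual integral $\int_0^\infty f(\xi_s)\,ds$ is almost surely finite precisely when $\int^\infty f(x)\,dx<\infty$, can be illustrated numerically; the discretised random walk below is only a toy stand-in for the L\'evy process, not a simulation of $\hat{\xi}^\uparrow$ itself.

```python
# Toy illustration of the perpetual-integral criterion: positive drift plus
# f(x) = e^{-x} (which is integrable at +infinity) makes the running integral
# of f along the path stabilise.
import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.01, 200_000
increments = 0.5 * dt + np.sqrt(dt) * rng.standard_normal(n)  # drift 1/2 > 0
xi = np.cumsum(increments)                                    # discretised path
running = np.cumsum(np.exp(-xi)) * dt  # approximates int_0^t exp(-xi_s) ds
```

The tail contribution `running[-1] - running[n // 2]` is numerically negligible, mirroring the almost sure finiteness of the perpetual integral.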
The L\'evy process also has finite positive mean, as can be seen similarly to the proof in Section \ref{proof3} using the characteristic exponent \eqref{xiuparrowpsi}. Hence, Theorem 1 of \cite{DK} is applicable to deduce via change of variables that \eqref{prove this2} holds if and only if \eqref{SPLPintegral} holds. \medskip \subsection*{Proof of Sufficiency} We are again guided by the sufficiency argument in Section \ref{proof3}. We appeal to the representation in Proposition \ref{BZuparrow} to provide a candidate for ${\rm P}_{+\infty}$ built from $1/\hat{X}^\uparrow_{\theta_t}$, $t\geq 0$, under $\hat{\mathbb{P}}^\uparrow_0$ from Lemma \ref{zeroenter2}. For this to work we need to ensure that $\int_0^\infty\beta(\hat{X}^\uparrow_u){du}<\infty$, $\hat{\mathbb{P}}^\uparrow_0$-almost surely. As in Section \ref{proof3} this will be achieved by proving \begin{align*} \hat{\mathbb{E}}^\uparrow_0\left[ \int_0^\infty\beta(\hat{X}^\uparrow_u){du}\right] <\infty. \end{align*} To this end, let us write $G_{\hat{X}^{\ddagger}}(x, dy)$, $x,y>0$, for the potential measure of $\hat X$ killed on entering $(-\infty,0)$. Appealing to Corollary 8.8 and Exercise 8.2 of \cite{Kbook}, one can show that, up to a multiplicative constant, \begin{equation} G_{\hat{X}^\ddagger}(x,{dy}) =\left( x^{\alpha - 1} - (x-y)^{\alpha-1}\mathbf{1}_{(x\geq y)}\right)d y, \qquad x,y\geq 0.
\label{Gdd} \end{equation} Thus, we have that, up to a multiplicative constant on the left-hand side, \begin{align*} \hat{\mathbb{E}}^\uparrow_x\left[ \int_0^\infty\beta(\hat{X}^\uparrow_u){du}\right] &=\int_{0}^\infty G_{\hat{X}^\ddagger}(x,{dy})\frac{h(y)}{h(x)}\sigma(1/y)^{-\alpha}y^{-2\alpha}\\ &=\int_{0}^\infty \sigma(1/y)^{-\alpha}y^{-\alpha-1}dy -\int_0^x \frac{(x-y)^{\alpha-1}}{x^{\alpha-1}} \sigma(1/y)^{-\alpha}y^{-\alpha-1}dy \end{align*} so that, thanks to Fubini's Theorem and Fatou's Lemma, \begin{align*} \hat{\mathbb{E}}^\uparrow_0\left[ \int_0^\infty\beta(\hat{X}^\uparrow_u){du}\right]&\leq \int_0^\infty\lim_{x\downarrow0} \hat{\mathbb{E}}^\uparrow_x\left[\beta(\hat{X}^\uparrow_u)\right] {du}\\ & = \int_{0}^\infty \sigma(1/y)^{-\alpha}y^{-\alpha-1}dy \\ &= \int_{0}^\infty \sigma(z)^{-\alpha}z^{\alpha-1}dz\\ &<\infty \end{align*} as required. \smallskip Now we come to the construction. We write down explicitly the process that plays the role of the SDE \eqref{2} started from infinity. Since we proved that $\int_0^\infty\beta(\hat{X}^\uparrow_u){du}$ is almost surely finite under $\hat{\mathbb{P}}^\uparrow_0$, the time-change $\theta_t$, $t\geq 0$, in Proposition \ref{BZuparrow} explodes in finite time. Moreover, on account of the fact that $(\hat{X}^\uparrow,\hat{\mathbb{P}}^\uparrow_0)$ is well-defined, cf. Lemma \ref{zeroenter2}, the space-time transformation \begin{align}\label{RBZ5} {Z}^\dagger_t=\frac{1}{\hat{X}^\uparrow_{\theta_t}}, \qquad t< \int_0^\infty \beta(\hat{X}^\uparrow_u){\,d u}, \end{align} where \[ \theta_t =\inf\left\{s> 0 : \int_0^s\beta(\hat{X}^\uparrow_u){\,d u}>t\right\} \] is well-defined under $\hat{\mathbb{P}}^\uparrow_0$. Volkonskii's Theorem, Corollary to Theorem 2 of \cite{Vol}, ensures that the right-hand side of \eqref{RBZ5} is a strong Markov process.
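The space-time transformation \eqref{RBZ5} is easy to sketch on a discretised toy path; everything below (the driving path and the coefficient $\sigma$) is illustrative only and does not simulate the conditioned stable process.

```python
# Discretised sketch of Z_t = 1/X_{theta_t}, with theta the right-inverse of
# the additive functional s -> int_0^s beta(X_u) du and
# beta(x) = sigma(1/x)^{-alpha} |x|^{-2 alpha}.
import numpy as np

rng = np.random.default_rng(3)
dt = 1e-3
path = 0.1 + np.abs(np.cumsum(np.sqrt(dt) * rng.standard_normal(50_000)))  # toy positive path

alpha = 1.5
sigma = lambda v: 1.0 + v * v  # hypothetical coefficient
beta = sigma(1.0 / path) ** (-alpha) * path ** (-2 * alpha)

clock = np.cumsum(beta) * dt  # grid values of s -> int_0^s beta(path_u) du

def theta(t):
    # first grid index at which the additive functional exceeds t
    return int(np.searchsorted(clock, t, side="right"))

times = np.linspace(0.0, 0.9 * clock[-1], 25)
Z = np.array([1.0 / path[min(theta(t), len(path) - 1)] for t in times])
```

The monotonicity of the discretised `theta` reflects that the additive functional is strictly increasing, so the time-changed process is well-defined up to the explosion of the clock.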
Given the conclusion of Proposition \ref{BZuparrow} (i), it thus follows that we have constructed a candidate for the Feller extension of $({\rm P}_z, z\in \mbox{\rm I\hspace{-0.02in}R})$ with ${\rm P}_{+\infty}$ defined by \eqref{RBZ5} under $\hat{\mathbb P}^\uparrow_0$. Note that trajectories come down from $+\infty$ continuously as trajectories under $\hat{\mathbb P}^\uparrow_0$ leave zero continuously and are non-negative.\smallskip Checking for the Feller property of $Z^\dagger$ when entering at $+\infty$, we again follow the reasoning in Section \ref{proof3} and appeal to the representation in Proposition \ref{BZuparrow} to conclude that it suffices to check that for continuous and bounded $f$ on $[0,\infty]$ \[ \lim_{|x|\to \infty}{\rm E}_x[f(Z^\dagger_t)] = \hat{\mathbb{E}}^\uparrow_0[f(1/\hat{X}^\uparrow_{\theta_t})]\qquad \text{ and }\qquad \lim_{t\to 0}{\rm E}_{+\infty}[f(Z^\dagger_t)]=\lim_{t\to 0} \hat{\mathbb{E}}^\uparrow_0[f(1/\hat{X}^\uparrow_{\theta_t})] = f(+\infty). \] As in Section \ref{proof3}, this follows as a consequence of the Feller property of $\hat{X}^\uparrow$ at $0$, Lemma \ref{zeroenter2}. The Skorokhod continuity of $({\rm P}_x , x\in\overline{\mbox{\rm I\hspace{-0.02in}R}})$ also follows in an easy and similar manner to the proof at the very end of Section \ref{proof3}. \section{Entrance from $\pm\infty$, $\alpha=1$}\label{a=1} Now we come to the more delicate case of $\alpha=1$. The sufficiency proof is similar to the ones before, but the proof of necessity must be different. There are two reasons why additional arguments are needed. Since the Cauchy process does not hit points (has no local times), the time-reversal from points does not work unchanged and the 0-1 law for perpetual integrals of \cite{DK} is not applicable. To circumvent these difficulties we develop a different approach here, built upon a general theory of transience for Markov processes highlighted by Getoor \cite{G}.
The general result for transient Markov processes we will use is developed in the Appendix to avoid distraction from the job at hand in this section. \subsection*{Proof of necessity} We start with an auxiliary lemma, which will be used as part of the proof of necessity thereafter. We need to compute the potential measure of the extension killed upon first entry to $(-1,1)$. This is a consequence of recent work on killed stable processes given in the lemma below, which is stated under the additional assumptions of the necessary part of the proof of entrance from $\pm\infty$ with $\alpha = 1$. \begin{lemma}\label{resolvent} Suppose that $Z^\star$ is the unique solution to the SDE \eqref{2} (resp. the extension to infinity) killed upon first entry into $(-1,1)$. Then the potential measure is \begin{align*} G_{Z^\star}(x,dy)=\begin{cases} \sigma(y)^{-1}\frac{1}{\pi}\big(\log(|y|+(y^2-1)^{1/2})\big)\,dy& \text{if }x=\pm\infty\\ \sigma(y)^{-1}\frac{1}{\pi}\big(\log(|\frac{1-xy}{x-y}|+((\frac{1-xy}{x-y})^2-1)^{1/2})\big)\,dy& \text{if }x\in\mbox{\rm I\hspace{-0.02in}R}\backslash (-1,1), \end{cases} \end{align*} for $|y|\geq 1$. \end{lemma} \begin{proof} The formula for $x\in \mbox{\rm I\hspace{-0.02in}R}\backslash (-1,1)$ follows from Theorem B of Profeta and Simon \cite{PS} or Theorem II.3.3 of Kyprianou \cite{KALEA} (the potential density of the killed Cauchy process); the factor $\sigma^{-1}$ comes from the time-change and substitution in the time-integral defining the potential measure.\smallskip For $x=\pm\infty$, we can use the assumed Skorokhod continuity as in \eqref{hg} and reason as in \eqref{check1} with $x\notin A$ to deduce that $G_{Z^\star}(\pm\infty,A) =\lim_{|x|\to\infty}G_{Z^\star}(x,A)$, for all bounded Borel sets in $\overline{\underline{\mathbb{R}}}\backslash(-1,1)$. In turn this gives the statement of the lemma.
\end{proof} To finish the proof of necessity of Theorem \ref{main} in the case $\alpha = 1$, recall that $Z^\star$ is the unique solution to the SDE \eqref{2} killed upon first entry into $(-1,1)$. We first check the assumptions of Proposition \ref{P1} for $Z^\star$. If $x\in \mbox{\rm I\hspace{-0.02in}R}\backslash (-1,1)$, then $Z^\star$ hits $(-1,1)$ in finite time because of the time-change representation from Proposition \ref{pr}, the (set)recurrence of the Cauchy process and the assumption that $\sigma>0$ is continuous (and thus locally bounded away from zero). Hence, ${\rm P}_x(\zeta^\star<\infty)=1$ for $x\in \mbox{\rm I\hspace{-0.02in}R}\backslash (-1,1)$, where $\zeta^\star$ is the lifetime of $Z^\star$. For $x=\pm\infty$ we apply \eqref{hg}, which is equally valid for $\alpha = 1$, to deduce that ${\rm P}_{\pm\infty}(\zeta^\star<\infty)=1$, by set recurrence. Since, by definition, ${\rm E}_x[f(\zeta^\star)]={\rm E}_x[f(T^{(-1,1)})]$, where $T^{(-1,1)}= \inf\{t>0: Z_t\in(-1,1)\}$, the continuity of $x\mapsto {\rm E}_x[f(\zeta^\star)]$ for $f$ bounded continuous follows from the assumed weak continuity in the Skorokhod topology of the extension of $Z$ and Chapter 13 of \cite{Wh}. Applying Proposition \ref{P1} in the appendix, we obtain $G_{Z^\star}(\pm\infty,K)<\infty$ for all compact $K$. Choosing $K=\overline{\underline{ \mbox{\rm I\hspace{-0.02in}R}}}\backslash (-1,1)$, we have from Lemma \ref{resolvent} \begin{align*} \int_{(-1,1)^c}\sigma(y)^{-1}\frac{1}{\pi}\big(\log(|y|+(y^2-1)^{1/2})\big)\,dy<\infty \end{align*} from which the integral test \begin{align*} I^{\sigma,1}= \int_\mbox{\rm I\hspace{-0.02in}R} \sigma(y)^{-1}\log |y|d y<\infty \end{align*} follows because $\sigma$ is bounded away from $0$ on compacts.
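The last step, trading the integrand of Lemma \ref{resolvent} for $\log|y|$, rests on the elementary facts that $\log(|y|+\sqrt{y^2-1})=\mathrm{arccosh}(|y|)$ and that this quantity grows like $\log|y|$ at infinity; a quick numerical confirmation (illustrative only):

```python
# The integrand log(|y| + (y^2-1)^{1/2}) equals arccosh(|y|) and is
# asymptotically log|y|, so against sigma(y)^{-1} dy the two integrals are
# finite or infinite together.
import math

ys = [2.0, 10.0, 1e2, 1e4, 1e8]
ratios = [math.log(y + math.sqrt(y * y - 1.0)) / math.log(y) for y in ys]
identity_gap = max(abs(math.log(y + math.sqrt(y * y - 1.0)) - math.acosh(y))
                   for y in ys)
```

The ratios decrease monotonically towards $1$, in line with $\mathrm{arccosh}(y)\leq\log(2y)$.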
\subsection*{Proof of sufficiency} We want to prove that the condition \begin{equation} I^{\sigma,1}=\int_\mbox{\rm I\hspace{-0.02in}R} \sigma(x)^{-1}\log |x| dx<\infty \label{NS} \end{equation} implies that $\pm\infty$ is an entrance point. The construction is identical to the one in Section \ref{proof3} but simpler, as the $h$-function for $\alpha=1$ becomes $h=1$, so that $\hat{X}^\circ = X$. Specifically, we relate via Proposition \ref{prop} the entrance of $Z$ at $\pm\infty$ to the entrance of the Cauchy process at $0$. Note that the Cauchy process leaves zero continuously and never returns. In analogy to the final paragraphs of Section \ref{proof3}, the guess for the limiting law will be \begin{align}\label{ddd} {Z}_t=\frac{1}{X_{\theta_t}}, \quad t\geq 0, \end{align} under $\P_0$, where \[ \theta_t =\inf\left\{s> 0 : \int_0^s\beta(X_u){\,d u}>t\right\}. \] In Section \ref{proof3}, to show that $\theta$ is well-defined for all $t\geq 0$, we proved that $\int_0^t\beta(\hat{X}^\circ_u){\,d u}<\infty$ almost surely by checking $\hat{\mathbb{E}}^\circ_0\left[ \int_0^\infty\beta(\hat{X}^\circ_u){du}\right] <\infty$ in \eqref{abc}. Controlling the integral up to $t$ by the integral up to $\infty$ is too coarse here, as the latter is infinite due to the (set)recurrence of the Cauchy process. What we do instead is to show that $\int_0^{\tau^{(-a,a)^c}}\beta(X_u){\,d u}<\infty$ almost surely for all $a>0$. Since $\lim_{a\to\infty}\tau^{(-a,a)^c}=\infty$ almost surely, as a consequence we obtain $\int_0^t\beta(X_u){\,d u}<\infty$ almost surely.\smallskip As in Section \ref{proof3}, to verify $\int_0^{\tau^{(-a,a)^c}}\beta(X_u){\,d u}<\infty$ almost surely, we prove finiteness of the expectation under $\mathbb{P}_0$. To this end, we consider only $a=1$ for notational convenience and write $(X^\bullet_t , t<\tau^{(-1,1)^c})$ for the process $X$ killed on exiting $(-1,1)$.
Recalling that $G_{X^\bullet}$ denotes its potential measure, we compute \begin{align*} \mathbb{E}_0\left[ \int_0^{\tau^{(-1,1)^c}} \beta(X_u)\, {d u}\right]&=\int_{-1}^1\beta(y)G_{X^\bullet}(0,dy)\\ &=\frac{1}{\Gamma(\alpha/2)^2}\int_{-1}^1\beta(y)\int_1^{1/|y|} (s^2-1)^{-1/2} ds \,dy\\ &\leq \frac{1}{\Gamma(\alpha/2)^2}\int_{-1}^1\sigma(1/y)^{-1}|y|^{-2}\left(\log 2-\log|y|\right) dy\\ &=\frac{1}{\Gamma(\alpha/2)^2}\int_{|z|\geq 1}\sigma(z)^{-1}\left(\log 2+\log|z|\right) dz\\ &<\infty, \end{align*} where we have used $\int_1^{u}(s^2-1)^{-1/2}ds=\log\big(u+\sqrt{u^2-1}\big)\leq \log(2u)$, taken advantage of the explicit form of $G_{X^\bullet}$ (see for example Blumenthal et al. \cite{BGR}) and, in the final step, used the integral test $I^{\sigma,1}<\infty$ together with the fact that $\sigma$ is bounded away from $0$ on compacts.\smallskip The rest of the sufficiency proof goes along the arguments of Section \ref{proof3} with the guessed limit \eqref{ddd} under $\P_0$. Using the above to see that the time-change in \eqref{ddd} is well-defined, the argument is as in Section \ref{proof3}. \section{Explosion}\label{sec6} We only give the arguments for two-sided jumps; the one-sided cases are modifications, just as Section \ref{proof5} is a modification of Section \ref{proof3}, e.g. by replacing $X^\circ$ by $X^\uparrow$. \subsection*{Non-explosion for $\alpha\geq 1$} Recall from Proposition \ref{pr} that for initial condition $x\in\mbox{\rm I\hspace{-0.02in}R}$, under the stable law $\P_x$, the time-change $Z_t:=X_{\tau_t}$ is the unique solution to the SDE \eqref{2} up to the killing time $T= \int_0^\infty \sigma(X_s)^{-\alpha}ds$, which is a perpetual integral. To show that solutions do not explode, we only need to verify that $\P_x(T=\infty)=1$. But this is a direct consequence of the (set)recurrence of stable processes for $\alpha\geq 1$. \subsection*{Explosion and non-explosion for $\alpha\in(0,1)$} Just as in the argument for $\alpha\geq 1$, a 0-1 law $\P_x(T<\infty)\in \{0,1\}$ for the perpetual integral $T=\int_0^\infty \sigma(X_s)^{-\alpha}ds$ depending on $\alpha$ and $\sigma$ would be sufficient to settle the remainder of the proof.
The 0-1 law for perpetual integrals is not hard to prove (see Lemma 5 of \cite{DK}) but we cannot provide a direct characterization of $\alpha$ and $\sigma$ that leads to respective probabilities of $0$ or $1$. Instead, we appeal again to our understanding of how the expectation of the perpetual integral serves as an equivalent marker of almost sure convergence. In the `sufficient' direction, this is straightforward; in the `necessary' direction, we will again use our variant of Getoor's characterisation of transience, given in Proposition \ref{P1} of the Appendix. \subsection*{Necessity} The main idea here will be to use a mixture of space inversion together with time reversal to convert the event of explosion into an event of entrance for a familiar transient process that lives on $\mbox{\rm I\hspace{-0.02in}R}$ (Proposition \ref{nagAx} below). As such, the latter will allow us to invoke Proposition \ref{P1}, whose conclusion can be reinterpreted as ensuring the desired integral test holds. \smallskip Recall from Section \ref{background} that, when $\alpha\in(0,1)$, the stable process does not hit points (hence, $X=X^\dagger$) and its Doob $h$-transform using $h$ from \eqref{h} corresponds to conditioning the process to be continuously absorbed (in finite time) at the origin. For the next proposition recall that $\beta (x) = \sigma(1/x)^{-\alpha}|x|^{-2\alpha}$ for $x\neq 0$. \begin{prop}\label{nagAx} Suppose that $\alpha\in(0,1)$, the stable process $X$ has two-sided jumps and the solution $Z$ to \eqref{2} explodes for all points of issue. Under $\mathbb{P}_x, x\in\mbox{\rm I\hspace{-0.02in}R}$, define $V_t = X_{\iota_t}$ for $t<\int_0^\infty \beta(X_s)ds$, where \begin{align}\label{iota} \iota_t = \inf\left\{s>0 : \int_0^s \beta(X_u)du > t\right\} \end{align} and let $\hat{V}^\circ_ t ={Z^{-1}_t}$ for $t<T:=\int_0^\infty \sigma(X_s)^{-\alpha}ds$.
Then \begin{equation} \label{third} V\text{ is in weak duality with }\hat{V}^\circ \text{ with respect to }\mu(dx)=\beta(x)h(x)dx, \end{equation} where $h$ is given by \eqref{h}. Moreover, when $Z$ is issued from the origin, the time-reversal $(\hat{V}^\circ_{(T-t)-}, t\leq T)$ is a time-homogeneous Markov process with transition probabilities which agree with those of $V$ started in $0$. \end{prop} \begin{proof} The proof is similar in spirit to that of Proposition \ref{nag} so we only highlight the main points. \smallskip Proposition \ref{prop} tells us that $\hat{V}^\circ_ t=Z^{-1}_t = {{\hat{X}^\circ}}_{\theta_t}$, $t< \int_0^\infty \beta({{\hat{X}^\circ}}_s)ds$, where the time-change $(\theta_t, t\geq 0)$ is given by \begin{align}\label{tauhatAx} \theta_t = \inf\left\{s>0 : \int_0^s \beta({{\hat{X}^\circ}}_u)du > t\right\}. \end{align} Since $Z$ is assumed to explode at the finite time $T$, $\hat{X}^\circ_{\theta_\cdot}$ is absorbed at $T$. \smallskip The proof of the weak duality \eqref{third} follows by the use of Revuz measures, as in the proof of \eqref{claim}, as soon as we can show that $\hat{X}^\circ$ and $X$ are in weak duality with respect to $ h(x)dx$. This was already shown, however, in \eqref{verifiessufficetocheck}. \smallskip For the final part, we note that $\hat{V}^\circ= {{\hat{X}^\circ}}_{\theta_\cdot}$ is a Markov process that hits the origin at the explosion time $T$ of $Z$. As before, we want to apply Nagasawa's Duality Theorem \ref{Ndual}. As usual, the verification of {\bf (A.3.3)} is straightforward (appealing to dominated convergence). Taking account of \eqref{third}, to verify {\bf (A.3.1)}, we are required to check that, for all bounded and measurable $f$ which is compactly supported in the domain $\overline{\underline\mbox{\rm I\hspace{-0.02in}R}}\backslash\{0\}$ of $\hat{V}^\circ$, \begin{align} {\rm E}_0\left[\int_0^T f(1/Z_t)dt\right]=\int_\mbox{\rm I\hspace{-0.02in}R} f(x)\beta(x)h(x)dx.
\label{showthisformu} \end{align} Writing $G_X$ for the potential measure of $X$, we have $G_X(0,dx) = h(x)dx$; see e.g. Theorem I.1.4 in Kyprianou \cite{KALEA}. We may thus write \begin{align} {\rm E}_0\left[\int_0^T f(1/Z_t)dt\right]&= \mathbb{E}_0\left[ \int_0^\infty f(1/X_t) \sigma(X_t)^{-\alpha}dt\right] \notag\\ &= \int_\mbox{\rm I\hspace{-0.02in}R} f(1/x) \sigma(x)^{-\alpha}h(x) dx\notag\\ & = \int_\mbox{\rm I\hspace{-0.02in}R} f(y) \sigma(1/y)^{-\alpha} h(1/y) y^{-2}dy\notag\\ &=\int_\mbox{\rm I\hspace{-0.02in}R} f(y) \sigma(1/y)^{-\alpha} h(y)|y|^{-2\alpha}dy\notag\\ &=\int_\mbox{\rm I\hspace{-0.02in}R} f(y) \beta(y)h(y)dy, \label{compactfimportant} \end{align} where, just as in \eqref{check2}, we have used that $h(1/y)|y|^{-2} = h(y)|y|^{-2\alpha}$. Note in particular that the compact support in $\overline{\underline\mbox{\rm I\hspace{-0.02in}R}}\backslash\{0\}$ of $f$ ensures that the right-hand side of \eqref{compactfimportant} is finite. The requirement \eqref{showthisformu} thus holds and hence the proof is complete. \end{proof} Let us now return to the proof of necessity for the case $\alpha\in(0,1)$ in Theorem \ref{zthrm} for which we aim to use Proposition \ref{P1}. Recall the notation $(X^\bullet_t , t<\tau^{(-1,1)^c})$ for the stable process $X$ killed on first exiting $(-1,1)$. Accordingly $ X^\bullet_{\iota_\cdot} $ denotes the process $V = X_{\iota_\cdot}$ killed on first exiting $(-1,1)$. Let us denote the killing time by $\zeta^\bullet$ and note that $\zeta^\bullet= \int_0^{\tau^{(-1,1)^c}}\beta(X_s)ds$. When $X^\bullet_{\iota_\cdot}$ is issued from a point $x\neq 0$, the aforementioned integral representation of $\zeta^\bullet$ and the fact that $|X^\bullet_{\iota_\cdot}|$ is almost surely bounded away from the origin and 1 imply that $\zeta^\bullet$ is almost surely finite. For $x=0$ the almost sure finiteness of $\zeta^\bullet$ is a consequence of the assumed explosion of $Z$ and the time-reversal statement in Proposition \ref{nagAx}.
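The identity $h(1/y)|y|^{-2}=h(y)|y|^{-2\alpha}$ used in the third equality of \eqref{compactfimportant} is a one-line power-law computation once one recalls that $h$ in \eqref{h} has the form $h(y)=c({\rm sign}(y))\,|y|^{\alpha-1}$ for a directional constant $c$ (cf.\ the density \eqref{daggerpot2}); since ${\rm sign}(1/y)={\rm sign}(y)$, the constant is common to both sides and

```latex
h(1/y)\,|y|^{-2}
  \;=\; c({\rm sign}(y))\,|y|^{1-\alpha}\,|y|^{-2}
  \;=\; c({\rm sign}(y))\,|y|^{\alpha-1}\,|y|^{-2\alpha}
  \;=\; h(y)\,|y|^{-2\alpha}.
```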
In total, the assumed explosion implies $\P_x(\zeta^\bullet\in (0,\infty))=1$ for all $x\in (-1,1)$, which is property (a) of Proposition \ref{P1}. \smallskip {\color{black} Property (b) of Proposition \ref{P1} requires the weak continuity of $\zeta^\bullet$. Weak continuity is clear when the point of issue is away from the origin, as the trajectory of $X$ is bounded away from the origin; recall that the integral $ \int_0^{\tau^{(-1,1)^c}}\beta(X_s)ds$ (which equals $\zeta^\bullet$) has an integrand that explodes as $|X_s|\to0$. Weak continuity of $\zeta^\bullet$ at zero is a more complicated issue but, fundamentally, is a consequence of the assumed Skorokhod continuity of the explosion time $T$. \smallskip To see why, we use duality, $h$-transforms and dominated convergence. First, note that the converse to the duality and spatial inversion in Proposition \ref{nagAx} (analogously to Propositions \ref{nag} and \ref{nag2}) is that, if we take the process $V= X_{\iota}$ issued from $x\in (-1,1), x\neq 0,$ and time reverse it from its last passage out of $(-1,1)$, say $\ell^{(-1,1)}$, the resulting process is equal in law to the process $\hat{V}^{\circ,(x)}$, defined as $1/Z^{\circ,(1/x)}$, where $Z^{\circ,(1/x)}$ is the Doob $h$-transform of $X$ with the $h$-function $y\mapsto h(y-1/x)$ on $\mathbb{R}$, where $h$ is given by \eqref{h} (i.e. $X$ conditioned to hit $1/x$ continuously), and time changed in the same way as \eqref{timechangesolution}. The initial condition of $Z^{\circ,(1/x)}$ is $\varpi_x(dy): =\mathbb{P}_x(1/X_{\ell^{(-1,1)}-}\in dy)$, $y\in(-1,1)^c$. Reasoning similarly to that of Step 1 of the proof of Proposition \ref{nag} shows that $X$ and $\hat{X}^\circ$ are in weak duality and we can also identify $ \mathbb{E}_x[f(X_{\ell^{(-1,1)}-}) ]=\lim_{|y|\to\infty} \hat{\mathbb{E}}_y[f({X}_{\tau^{(-1,1)}})\hat{h}(X_{\tau^{(-1,1)}}-x)/\hat{h}(y-x) ] $; see similar calculations in \cite{KV}.
It follows from the explicit formula \eqref{striplowalpha} that $\varpi_x$ is absolutely continuous with respect to the Lebesgue measure, for each $x\in (-1,1)$, as well as that $(\varpi_x, x\in(-1,1))$ forms a weakly continuous family of measures. We will use these preparatory remarks to prove $$\lim_{|x|\to0}\mathbb{P}_x(t<\zeta^\bullet)=\mathbb{P}_{0}(t<\zeta^\bullet), \quad t\geq 0.$$ Define \[ H_x(y, t):= \mathbb{E}_{y}\left[\mathbf{1}_{(t<T)} {|xX_{\tau_t}-1|^{\alpha -1} }{|xy-1|^{1-\alpha}}\right],\quad y\in (-1,1)^c, x\in(-1,1), t>0, \] so that, due to the duality and spatial inversion mentioned above, $$\mathbb{P}_x(t<\zeta^\bullet) = \int_{(-1,1)^c} H_x(y,t)\,\varpi_x(dy).$$ In order to deal with the limit of $\mathbb{P}_x(t<\zeta^\bullet)$ for $|x|\to 0$, we first prove that \begin{equation} \lim_{|x|\to 0}\int_{(-1,1)^c} H_x(y,t)\,\varpi_x(dy)= \lim_{|x|\to0} \int_{(-1,1)^c} H_0(y,t)\, \varpi_x(dy), \label{varpi1} \end{equation} and then use weak continuity of the measures $(\varpi_x, x\in(-1,1))$ and continuity of $H_0$ to complete the argument. Note that the Doob $h$-transform in the definition of $H_x(y,t)$ is applied at the almost surely finite stopping times $(\tau_t, t\geq 0)$, which remains a martingale transform e.g. by Theorem III.3.4 of \cite{JacodShiryaev}.\smallskip Let us start by proving \eqref{varpi1}. As an $h$-transform, $H_x(y, t)$ is a probability and hence bounded in $[0,1]$. To verify \eqref{varpi1} we show $\lim_{|x|\to0}\sup_{ |y|\in[1, N]}|H_x(y, t)- H_0(y, t)| = 0$ for any $N>1$, which then allows us to replace $H_x$ by $H_0$ in \eqref{varpi1}.
To this end, using the spatial homogeneity of $(X, \mathbb{P})$, we can choose $\delta>0$ sufficiently small such that, for given $\varepsilon>0$, \begin{align} &\quad \sup_{ |y|\in[1, N]}|H_x(y,t)- H_0(y,t)| \notag\\ &= \sup_{ |y|\in[1, N]}\left|\mathbb{E}_{y}\left[\mathbf{1}_{(t<T)} \frac{|xX_{\tau_t}-1|^{\alpha -1} }{|xy-1|^{\alpha-1}}\right]- \mathbb{P}_y(t<T)\right|\notag\\ &\leq \mathbb{E}_{0}\left[\sup_{ |y|\in[1, N]}\mathbf{1}_{(t<T^{(y)}, \, \inf_{s\geq 0}|y+X_{s}|>\delta)}\left| \frac{|x + (xX_{\tau^y_t}/y)-(1/y)|^{\alpha -1} }{|x-(1/y)|^{\alpha-1}}-1\right|\right]\notag\\ &\quad +\mathbb{E}_{0}\left[\sup_{ |y|\in[1, N]}\mathbf{1}_{(t<T^{(y)}, \, \inf_{s\geq 0}|y+X_{s}|\leq \delta)}\left| \frac{|x + (xX_{\tau^y_t}/y)-(1/y)|^{\alpha -1} }{|x-(1/y)|^{\alpha-1}}-1\right|\right], \label{delta} \end{align} where $T^{(y)}= \int_0^\infty \sigma(y+X_u)^{-\alpha}du$ and $\tau^y_t = \inf\{s>0: \int_0^s \sigma(y+X_u)^{-\alpha}du> t\}$. Note that the continuity of $\sigma$ and the restriction $|y|\in[1,N]$ ensure that $\underline{c}_Nt \leq \tau^y_t\leq \overline{c}_Nt $ for constants $\underline{c}_N, \overline{c}_N$ depending on $N$. Next, we note that, for each fixed $u>0$, Doob's martingale inequality and the fact that $X$ is known to have absolute moments of all orders in $(-1,\alpha)$, ensure that, for $p>1$ sufficiently close to 1, fixed $u>0$ and $z\in\mathbb{R}$, $ \mathbb{E}_0[\sup_{s\leq u}|z+X_s|^{p(\alpha-1)}]\leq c_p \mathbb{E}_0[|z+X_u|^{p(\alpha-1)}]<\infty$, for some unimportant constant $c_p\in(0,\infty)$. As a consequence, when $x\in[-1/(2N),1/(2N)]$ and $|y|\in[1,N]$, there are constants $b_1^N$ and $b_2^N$ such that \begin{align*} \mathbf{1}_{(t<T^{(y)})}\left| \frac{|x + (xX_{\tau^y_t}/y)-(1/y)|^{\alpha -1} }{|x-(1/y)|^{\alpha-1}}-1\right| &\leq b_1^N \textstyle{ \sup_{s\leq b_2^N t}|X_s|^{\alpha -1} } + 1.
\end{align*} For the first summand on the right-hand side of \eqref{delta}, we may now appeal to dominated convergence and take limits as $|x|\to0$ inside the expectation, noting that the term between the modulus signs in the previous display tends to zero. The second summand of the right-hand side of \eqref{delta} vanishes as $\delta\to 0$, again by dominated convergence. The desired $\lim_{|x|\to0}\sup_{ |y|\in[1, N]}|H_x(y,t )- H_0(y,t )| = 0$ now follows.\smallskip To both verify and identify the limit in \eqref{varpi1}, we now note that the just-proved uniform continuity of $H_x(y,t ) $ implies that, for a given choice of $\varepsilon$, by taking $N$ sufficiently large such that $\varpi_0([-N,N]^c)<\varepsilon$, \begin{align} &\limsup_{|x|\to0}\left|\int_{(-1,1)^c}H_x(y,t )\, \varpi_x(dy) -\int_{(-1,1)^c} H_0(y,t)\, \varpi_x(dy)\right|\notag \\ &<\limsup_{|x|\to0}\varepsilon\varpi_x([-N,N]) + 2\varpi_x([-N,N]^c)\notag\\ &\leq \varepsilon + 2\varpi_0([-N,N]^c) <3\varepsilon. \label{almostthere} \end{align} Hence, \eqref{varpi1} is proved. To compute the right-hand side of \eqref{varpi1} we need continuity of $y\mapsto H_0(y,t)=\P_y(T>t)$ for all $t\geq 0$ fixed, which is a consequence of the weak convergence assumption if the explosion time $T$ has no atoms. A variant of Proposition \ref{nagAx} states that the SDE started from $y$ and reversed from explosion is equal in law to the stable process $X$ issued at the origin and conditioned to hit $1/y$ via an $h$-transform using \eqref{h}, with the time change $\iota$ in \eqref{iota}. It follows that, for $y\neq 0$, \begin{equation} \P_y(T>t)= \mathbb{E}_0[\mathbf{1}_{(\iota_t<\infty)} |y X_{\iota_t} -1|^{\alpha-1}].
\label{noatoms} \end{equation} Dominated convergence (recall $X$ has absolute moments in $(-1,\alpha)$) together with quasi-left/right-continuity of $X$ and the fact that $( \iota_t, t\geq 0)$ is a continuous additive functional ensures that $\P_y(T>t)$ has no discontinuities for any $y\neq 0$, $t>0$. Hence, from \eqref{varpi1}, the continuity of $H_0$ and the weak continuity of $(\varpi_x, x\in(-1,1))$, we have $$\lim_{|x|\to0}\mathbb{P}_x(t<\zeta^\bullet) =\lim_{|x|\to0}\int_{(-1,1)^c}H_0(y,t )\,\varpi_x(dy) = \mathbb{P}_{\varpi_0}(t<T) = \mathbb{P}_{0}(t<\zeta^\bullet),\quad t\geq 0,$$ where the final equality follows from the duality of $V$ and $1/Z$ from Proposition \ref{nagAx}. Portmanteau's Theorem now ensures that we have the desired weak convergence in property (b) of Proposition \ref{P1}. } \smallskip Both conditions of Proposition \ref{P1} are thus met and hence, as a conclusion of that proposition, we deduce that, for all $0<\varepsilon<1$, \begin{align} \infty>\mathbb{E}_{0}\left[\int_0^\infty \mathbf{1}_{(|X^\bullet_{\iota_s}|\leq \varepsilon)} ds\right] &= \mathbb{E}_{0}\left[\int_0^{\tau^{(-1,1)^c}} \mathbf{1}_{(|X_{u}|\leq \varepsilon)} \beta(X_u)du\right] =\int_{[-\varepsilon, \varepsilon]} \beta(x)G_{X^\bullet}(0,dx). \label{mustbefinite} \end{align} From Theorem II.2.3 in \cite{KALEA}, equivalently Theorem B of \cite{PS}, it is known that, for $\alpha\in(0,1)$, $G_{X^\bullet}(0,dx)$ has a density which is asymptotically equivalent at $0$ to a constant multiple of $h$. From \eqref{mustbefinite} we thus have that \begin{equation} \int_{[-\varepsilon, \varepsilon]} \beta(x)h(x)dx<\infty
\label{mustbefinite2} \end{equation} Changing variables as in \eqref{compactfimportant} gives the desired integral test $I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R})= \int_\mbox{\rm I\hspace{-0.02in}R}\sigma(y)^{-\alpha} |y|^{\alpha-1}dy<\infty.$ \subsection*{Sufficiency} First note that \begin{align}\label{potential} \mathbb{E}_x\left[\int_{0}^\infty \sigma(X_t)^{-\alpha}dt\right] = \int_\mathbb{R} \sigma(y)^{-\alpha}G_X(x,dy)=\int_\mathbb{R} \sigma(y)^{-\alpha}h(x-y)dy, \end{align} where, as before, $G_X$ is the potential measure of $X$ and $h$ is the free potential density of $X$ given in \eqref{h}. The right-hand side is finite for all $x\in \mbox{\rm I\hspace{-0.02in}R}$ if and only if \begin{align}\label{SDEexists} I^{\sigma,\alpha}(\mbox{\rm I\hspace{-0.02in}R})= \int_\mbox{\rm I\hspace{-0.02in}R}\sigma(y)^{-\alpha} |y|^{\alpha-1}dy<\infty. \end{align} Hence, if the assumed integral test holds, then the perpetual integral $T=\int_{0}^\infty \sigma(X_t)^{-\alpha}dt$ has finite expectation and is thus finite $\P_x$-almost surely. Proposition \ref{pr} implies that for all initial conditions the unique solution to the SDE \eqref{2} almost surely explodes in finite time. {\color{black} As soon as we know that $T$ is almost surely finite, identity \eqref{noatoms} ensures there is Feller explosion.}
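To make the integral test concrete, the following is a small numerical sketch (the choices $\sigma(y)=1+y^2$ and $\alpha=1/2$ are purely illustrative, and the helper name \texttt{I\_test} is ours):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5  # hypothetical stability index in (0, 1)

def I_test(sigma):
    """Evaluate I^{sigma,alpha}(R) = int sigma(y)^{-alpha} |y|^{alpha-1} dy
    for an even sigma, splitting at y = 1 to isolate the singularity at 0."""
    f = lambda y: sigma(y) ** (-alpha) * y ** (alpha - 1.0)
    inner, _ = quad(f, 0.0, 1.0)      # integrable singularity y^{alpha-1} at 0
    outer, _ = quad(f, 1.0, np.inf)   # finiteness at infinity decides explosion
    return 2.0 * (inner + outer)      # even integrand: double the half-line

# sigma(y) = 1 + y^2 grows fast enough that the test is finite,
# so the corresponding SDE explodes almost surely;
# the exact value here is Gamma(1/4)^2 / sqrt(pi) ~ 7.416
finite_case = I_test(lambda y: 1.0 + y ** 2)
```

For $\sigma\equiv 1$, by contrast, the integrand $|y|^{\alpha-1}$ fails to be integrable at infinity, consistent with the fact that the stable process itself does not explode.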
\section{Introduction} \label{Sec:Intro} Superconducting circuits \cite{Bouchiat_Quantum_1998, Nakamura_Coherent_1999, Friedman_Quantum_2000, Makhlin_Quantum-State_2001, Van_Quantum_2000, Martinis_Rabi_2002, Blais_Cavity_2004, Wallraff_Strong_2004, Koch_Charge_2007, Majer_Coupling_2007} provide a promising platform for quantum computation \cite{Shor_Algorithms_1994, Divincenzo_Quantum_1995, Barenco_Elementary_1995, Steane_Quantum_1998, Shor_Quantum_1998, Nielsen_Quantum_2002}. To ensure fault-tolerant quantum computation \cite{Shor_Fault_1996, Gottesman_Theory_1998, Kitaev_Fault_2003, Raussendorf_Fault_2007}, the underlying single- and two-qubit gates must satisfy certain error thresholds \cite{Gambetta_Building_2017}. Proposals for two-qubit gates fall into two major categories depending on the requirement for dynamic flux tunability. All-microwave architectures \cite{Paraoanu_Microwave_2006, Rigetti_Fully_2010, Chow_Microwave_2013, Cross_Optimized_2015} employ microwave pulses to entangle qubits, which is especially useful for fixed-frequency qubits such as the transmon \cite{Koch_Charge_2007}. The Resonator-Induced Phase (RIP) gate \cite{Cross_Optimized_2015, Paik_Experimental_2016, Puri_High-Fidelity_2016} exploits an effective dynamic $ZZ$ (controlled-phase) interaction activated by a microwave pulse applied to a mediating bus resonator coupled to the qubits. The effective $ZZ$ rate is proportional to the resonator photon number and qubit-resonator dispersive couplings and inversely proportional to the resonator-drive detuning \cite{Cross_Optimized_2015, Puri_High-Fidelity_2016}. A theoretical proposal for the RIP gate was introduced in Ref.~\cite{Cross_Optimized_2015} as a promising multi-qubit controlled-phase gate. The most notable advantage of the RIP gate is its forgiving qubit frequency requirements.
The RIP gate was experimentally demonstrated to entangle two fixed-frequency transmon qubits at qubit-qubit detunings ranging from 380 MHz to 1.8 GHz \cite{Paik_Experimental_2016}, with fidelity per Clifford \cite{Knill_RB_2008, Magesan_Scalable_2011} ranging from $93\%$ to $97\%$ (estimated gate fidelity $96\%$ to $98\%$). One unexplained experimental observation was that the measured fidelity showed improvement at smaller resonator-drive detuning. This did not agree with the detuning dependence expected for measurement-induced dephasing, which should cause more error at small detuning. Moreover, this was not consistent with the leakage predictions of the dispersive Kerr model for the RIP gate \cite{Cross_Optimized_2015}. \begin{figure*}[t!] \centering \subfloat[\label{subfig:Model-AbInitio}]{% \includegraphics[scale=0.275]{RIPSchematicAbInitio.png}% } \subfloat[\label{subfig:Model-KM}]{% \includegraphics[scale=0.280]{RIPSchematicKM.png}% } \subfloat[\label{subfig:Model-TLM}]{% \includegraphics[scale=0.280]{RIPSchematicTLM.png}% } \caption{RIP gate under three levels of abstraction: (a) an ab-initio model based on the Josephson nonlinearity, (b) a multilevel Kerr model similar to Ref.~\cite{Cross_Optimized_2015} (see Appendix~\ref{App:KM}), and (c) a dispersive JC model similar to Ref.~\cite{Puri_High-Fidelity_2016} (see Appendix~\ref{App:TLM}). When referring to the phenomenological models, we adopt the same level of precision as introduced by the original studies~\cite{Cross_Optimized_2015, Puri_High-Fidelity_2016} as a point of comparison.
For instance, \text{only} in the ab-initio model do we consider a RIP drive that acts on the \textit{bare} (shown with a bar) resonator mode, although such a modification and more could be made to improve the former models.} \label{fig:Model-RIPDifferentModels} \end{figure*} In this work, we characterize the RIP gate operation over a broad span of system parameters for two transmon qubits coupled to a linear resonator and propose experimentally relevant choices for optimal design. To this aim, we introduce an ab-initio model, based on the Josephson nonlinearity, that accounts for high-excitation states of the qubits. We show that the ab-initio analysis agrees with, and extends, the findings of previous studies based on multilevel Kerr \cite{Cross_Optimized_2015, Paik_Experimental_2016} and dispersive Jaynes-Cummings (JC) \cite{Puri_High-Fidelity_2016} models. Our study is focused on the coherent dynamics of the RIP gate, with an emphasis on the effective gate interactions, and on leakage as the main source of error. To derive an effective RIP Hamiltonian \cite{Maricq_Application_1982, Grozdanov_Quantum_1988, Rahav_Effective_2003, Mirrahimi_Modeling_2010, Goldman_Periodically_2014, Wang_Photon_2020, Magesan_Effective_2020, Malekakhlagh_First-Principles_2020}, we employ time-dependent Schrieffer-Wolff Perturbation Theory (SWPT) \cite{Schrieffer_Relation_1966, Bravyi_Schrieffer_2011, Bukov_Schrieffer_2016, Malekakhlagh_Lifetime_2020, Petrescu_Lifetime_2020, Magesan_Effective_2020, Malekakhlagh_First-Principles_2020}. We make comparisons between the effective gate parameters based on the JC, Kerr and ab-initio models and analyze the validity and breakdown of each model.
Our analysis of leakage reveals various distinct mechanisms, which we categorize by the modes involved: the resonator alone (residual photons) \cite{Cross_Optimized_2015, Paik_Experimental_2016}, the two qubits (qubit-qubit), similar to the cross-resonance (CR) gate \cite{Tripathi_Operation_2019, Malekakhlagh_First-Principles_2020, Hertzberg_Laser_2021, Zhang_High_2020}, one qubit and the resonator (qubit-resonator), similar to the standard readout scheme \cite{Sank_Measurement-Induced_2016, Lescanne_Escape_2019, Verney_Structural_2019}, and both qubits and the resonator (three-body). Revisiting the residual photons, we demonstrate order-of-magnitude improvement via DRAG \cite{Motzoi_Simple_2009, Gambetta_Analytic_2011, Malekakhlagh_Mitigating_2021} on the resonator. Qubit leakage, however, cannot be correctly captured by the phenomenological dispersive models \cite{Cross_Optimized_2015, Puri_High-Fidelity_2016}. This is due to the diagonal form of interactions with respect to the qubit subspace and the two-level assumption. Therefore, using the ab-initio model becomes essential. We present extensive ab-initio simulation results for qubit leakage and identify the leakage clusters in terms of collisions between high-excitation qubit states and computational states with high photon number, generalizing Ref.~\cite{Sank_Measurement-Induced_2016}. In particular, we show that qubit-resonator leakage involves very high-excitation qubit states (5th--9th), whereas three-body leakage also involves medium-excitation states (2nd and above). These findings impose stringent trade-offs on parameter allocation for the RIP gate. As a general remedy for achieving both weaker leakage amplitude and lower collision density, without compromising the effective RIP interaction, we propose to use (i) very weakly anharmonic transmons, with anharmonicity reduced down to approximately -200 MHz, and (ii) a large qubit frequency of the order of 6 GHz. The paper is organized as follows.
In Sec.~\ref{Sec:Model}, starting from the Josephson nonlinearity, we introduce an ab-initio model for the RIP gate and compare it with previous studies \cite{Cross_Optimized_2015, Puri_High-Fidelity_2016}. In Sec.~\ref{Sec:EffHam}, we demonstrate our analytical method, based on time-dependent SWPT, for deriving an effective Hamiltonian for the RIP gate. We then make a comparison of the effective gate parameters ($IZ$, $ZI$ and $ZZ$) between the considered models. Section~\ref{Sec:Leak} provides a characterization of leakage related to frequency collisions based on numerical simulation of the full dynamics. Lastly, in Sec.~\ref{Sec:Tkwys}, we summarize our findings towards system parameter optimization. There are ten appendices that will be referred to throughout the paper. In Appendices~\ref{App:TLM} and~\ref{App:KM}, using time-dependent SWPT, we provide the derivation of an effective gate Hamiltonian based on the dispersive JC \cite{Puri_High-Fidelity_2016} and the multilevel Kerr \cite{Cross_Optimized_2015} models, respectively. Appendix~\ref{App:NormModeHam} discusses the normal mode representation of the ab-initio Hamiltonian using a canonical (Bogoliubov) transformation. Appendix~\ref{App:DispTrans} provides the details of a displacement transformation and the effective dynamics for the resonator mode. In Appendix~\ref{App:ResRes}, we analyze the resonator response to basic RIP pulse shapes and provide a \textit{classical} characterization of resonator leakage. Appendix~\ref{App:MLM} presents the derivation of an approximate ab-initio model via normal ordering of the nonlinear interactions. In Appendix~\ref{App:EffRIPHam}, we derive an effective RIP gate Hamiltonian based on the approximate ab-initio model of Appendix~\ref{App:MLM}. Appendix~\ref{App:LeakMech} discusses leakage mechanisms based on a three-level toy model. In Appendix~\ref{App:FidITOLeak}, we characterize the average gate error due to leakage. 
Lastly, Appendix~\ref{App:NumMet} gives a brief summary of the numerical methods for leakage simulation. \section{Model} \label{Sec:Model} For comparison, we consider several starting Hamiltonian models for the RIP gate and discuss how the underlying assumptions may limit precise analysis of certain aspects of the gate operation. To this aim, we analyze the RIP gate under the following levels of abstraction: (i) an \textit{exact} ab-initio model for our numerical simulations, (ii) an \textit{approximate} ab-initio model obtained by normal mode expansion of the former, (iii) a multilevel Kerr model similar to the one introduced in Ref.~\cite{Cross_Optimized_2015} and (iv) a dispersive JC model similar to the one in Ref.~\cite{Puri_High-Fidelity_2016} (see Fig.~\ref{fig:Model-RIPDifferentModels}). In the exact ab-initio model, we account for the Josephson nonlinearity of each qubit. The system and the drive Hamiltonian in the lab frame can be expressed in a \textit{unitless} quadrature form as \cite{Koch_Charge_2007, Malekakhlagh_NonMarkovian_2016, Malekakhlagh_Origin_2016, Malekakhlagh_Cutoff-Free_2017, Didier_Analytical_2018, Malekakhlagh_Lifetime_2020, Petrescu_Lifetime_2020} \begin{subequations} \begin{align} \begin{split} \hat{\mathcal{H}}_s &=\sum\limits_{j=a,b}\frac{\bar{\omega}_j}{4}\left[(\hat{\bar{y}}_j-y_{gj})^2-\frac{2}{\epsilon_j}\cos(\sqrt{\epsilon_j}\hat{\bar{x}}_j)\right]\\ &+\frac{\bar{\omega}_c}{4}\left(\hat{\bar{x}}_c^2+\hat{\bar{y}}_c^2\right)+\sum\limits_{j=a,b}g_j\hat{\bar{y}}_j\hat{\bar{y}}_c \;, \end{split} \label{eqn:Model-Starting Hs}\\ \hat{\mathcal{H}}_d(t)&=-[\Omega_{cx}(t)\cos(\omega_d t)+\Omega_{cy}(t)\sin(\omega_d t)]\hat{\bar{y}}_c \;, \label{eqn:Model-Starting Hd} \end{align} \end{subequations} where the qubit modes, the resonator mode and the drive are labeled as $a$, $b$, $c$ and $d$, respectively. 
The bar notation is used for quantities in the \textit{bare} frame, to distinguish from the \textit{normal} mode frame quantities that will be denoted with no bar. Moreover, $\bar{\omega}_j\equiv \sqrt{8E_{Cj}E_{Jj}}$ is the harmonic frequency, $\epsilon_j \equiv \sqrt{2E_{Cj}/E_{Jj}}=\varphi_{\text{j,ZPF}}^2$ is a \textit{small} unitless anharmonicity measure and $y_{gj}\equiv n_{gj}/n_{\text{j,ZPF}}$ is the unitless gate charge for qubit $j=a, b$. The qubits are capacitively coupled to a common bus resonator with strengths $g_a$ and $g_b$, respectively. The flux (phase) and charge (number) quadratures are written in a unitless form as $\hat{\bar{x}}_k \equiv \hat{\bar{k}}+\hat{\bar{k}}^{\dag}$ and $\hat{\bar{y}}_k \equiv -i(\hat{\bar{k}}-\hat{\bar{k}}^{\dag})$ for $k=a, b, c$. The drive acts capacitively on the resonator mode with time-dependent pulse amplitudes $\Omega_{cx}(t)$ and $\Omega_{cy}(t)$, and carrier frequency $\omega_d$. We take $\Omega_{cy}(t)$ as the main RIP pulse, while $\Omega_{cx}(t)$ provides an additional degree of freedom used for DRAG \cite{Motzoi_Simple_2009, Gambetta_Analytic_2011, Malekakhlagh_Mitigating_2021}. Hamiltonian~(\ref{eqn:Model-Starting Hs})--(\ref{eqn:Model-Starting Hd}) serves as our point of reference and is used for the full numerical simulation. For our analytical calculations, starting from Eqs.~(\ref{eqn:Model-Starting Hs})--(\ref{eqn:Model-Starting Hd}), we derive an approximate ab-initio model by first solving for the normal modes up to the harmonic theory, and then expanding the nonlinearity in the normal mode frame \cite{Malekakhlagh_Lifetime_2020,Petrescu_Lifetime_2020} similar to the black-box quantization \cite{Nigg_BlackBox_2012} and energy participation ratio \cite{Minev_EPR_2020} methods. Normal mode expansion converges faster when the transmon qubits are weakly anharmonic ($\epsilon \lessapprox 0.2$) and also when the effective interactions are implemented \textit{dispersively}, i.e.
large detuning between the qubits and the resonator, as well as between the qubits themselves, compared to the qubit anharmonicity. This is an alternative to \textit{few-quantum-level} models commonly used for systems operating in the straddling regime like the CR gate \cite{Magesan_Effective_2020, Tripathi_Operation_2019, Malekakhlagh_First-Principles_2020}. The normal mode representation of the RIP gate Hamiltonian reads \begin{subequations} \begin{align} \begin{split} \hat{\mathcal{H}}_s &=\sum\limits_{k=a,b,c}\tilde{\omega}_k\hat{k}^{\dag}\hat{k}+\sum\limits_{j=a,b}\sum\limits_{n=2}^{\infty} \frac{\bar{\omega}_j}{2}(-\epsilon_j)^{n-1}\\ &\times \frac{\left[\left(\sum\limits_{k=a,b,c}u_{jk}\hat{k}\right)+\text{H.c.}\right]^{2n}}{(2n)!} \;, \end{split} \label{eqn:Model-NormMode Hs}\\ \begin{split} \hat{\mathcal{H}}_d(t) &= [\Omega_{cx}(t)\cos(\omega_d t)+\Omega_{cy}(t)\sin(\omega_d t)]\\ &\times i[\sum\limits_{k=a,b,c}v_{ck}\hat{k}-\text{H.c.}] \;, \end{split} \label{eqn:Model-NormMode Hd} \end{align} \end{subequations} where $u_{jk}$ and $v_{jk}$, $j,k=a,b,c$, are the flux and charge hybridization coefficients that relate the corresponding bare and normal mode quadratures and are derived via a canonical (Bogoliubov) transformation \cite{Jellal_Two_2005, Merdaci_Entanglement_2020, Malekakhlagh_Lifetime_2020} (see Appendix~\ref{App:NormModeHam}). Moreover, the normal mode \textit{harmonic} frequencies are shown as $\tilde{\omega}_k$ in order to distinguish them from the renormalized normal mode frequencies (no bar) that contain the static (Lamb) corrections. Equations~(\ref{eqn:Model-NormMode Hs})--(\ref{eqn:Model-NormMode Hd}) demonstrate a rich variety of possible mixing, both at the linear level, through charge hybridization, and at the nonlinear level, through flux hybridization. Equation~(\ref{eqn:Model-NormMode Hd}) shows that the RIP drive that is \textit{ideally} supposed to populate the resonator mode will also act on the normal qubit modes through charge hybridization.
For typical RIP parameters, the cross-drive on the qubits can be $10$\% of the intended resonator drive. Furthermore, flux hybridization in Eq.~(\ref{eqn:Model-NormMode Hs}) leads to numerous nonlinear processes even up to the quartic expansion. To handle such complexity, we employ SNEG \cite{Zitko_Sneg_2011} in Mathematica, which allows symbolic manipulation of bosonic operators, and derive an approximate ab-initio model for the RIP gate (see Appendix~\ref{App:MLM}). To compare with previous studies, we also consider starting models similar to the ones introduced in Refs.~\cite{Puri_High-Fidelity_2016,Cross_Optimized_2015} (see Fig.~\ref{fig:Model-RIPDifferentModels}). Reference~\cite{Cross_Optimized_2015} models the transmon qubits as weakly anharmonic Kerr oscillators as \begin{align} \begin{split} \hat{\mathcal{H}}_{s,\text{Kerr}} &=\sum\limits_{j=a,b}\left[\omega_j \hat{n}_j+\frac{\alpha_j}{2}\hat{n}_j(\hat{n}_j-1)\right]\\ &+\omega_c\hat{n}_c+\sum\limits_{j,k=a,b,c\atop j> k}2\chi_{jk}\hat{n}_j\hat{n}_k \;, \end{split} \label{eqn:Model-Def of H_s,Kerr} \end{align} with the anharmonicity and dispersive couplings denoted by $\alpha_j$ and $2\chi_{jk}$ for $j,k=a,b,c$. The dispersive JC model \cite{Puri_High-Fidelity_2016} is similar to the Kerr model, but with two-level qubits as \begin{align} \hat{\mathcal{H}}_{s,\text{JC}}=\sum\limits_{j=a,b}\frac{\omega_{j}}{2}\hat{\sigma}_j^{z}+\omega_c \hat{n}_c+\sum\limits_{j=a,b} \chi_{jc}\hat{n}_c\hat{\sigma}_j^z \;, \label{eqn:Model-Def of H_s,JC} \end{align} where $\hat{\sigma}_j^z\equiv \ket{1_j}\bra{1_j}-\ket{0_j}\bra{0_j}$. Moreover, in both studies, it is assumed that the drive acts only on the \textit{normal} resonator mode as \begin{align} \hat{\mathcal{H}}_d(t)=\frac{1}{2}\left[\Omega_c^*(t)\hat{c}e^{i\omega_d t}+\Omega_c(t)\hat{c}^{\dag}e^{-i\omega_d t}\right] \;, \label{eqn:Model-Def of H_d} \end{align} where $\Omega_c(t)\equiv \Omega_{cy}(t)-i \Omega_{cx}(t)$.
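Since Eq.~(\ref{eqn:Model-Def of H_s,Kerr}) is diagonal in the multimode Fock basis, its gate-relevant content can be sketched in a few lines of code. The snippet below (all parameter values are hypothetical placeholders, and the helper name \texttt{E} is ours) reads off the Fock-state energies and checks that the static $ZZ$ combination on the computational subspace reduces to the qubit-qubit cross-Kerr $2\chi_{ab}$:

```python
# hypothetical Kerr-model parameters (units of 2*pi*MHz); the Hamiltonian is
# diagonal, so all computational-state energies can be read off directly
w   = {"a": 5391.7, "b": 6115.6, "c": 7000.0}   # mode frequencies omega_k
anh = {"a": -255.0, "b": -275.0, "c": 0.0}      # anharmonicities alpha_j
chi = {("a", "c"): -2.43, ("b", "c"): -2.46, ("a", "b"): 0.05}  # chi_jk

def E(na, nb, nc):
    """Eigenenergy of H_s,Kerr for the Fock state |na, nb, nc>."""
    n = {"a": na, "b": nb, "c": nc}
    val = sum(w[k] * n[k] + anh[k] / 2.0 * n[k] * (n[k] - 1) for k in n)
    val += sum(2.0 * c * n[j] * n[k] for (j, k), c in chi.items())
    return val

# static ZZ combination with the resonator in vacuum: only the qubit-qubit
# cross-Kerr 2*chi_ab survives the sum
zz_static = E(1, 1, 0) - E(1, 0, 0) - E(0, 1, 0) + E(0, 0, 0)
```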
Equations~(\ref{eqn:Model-Def of H_s,Kerr})--(\ref{eqn:Model-Def of H_d}) have been modified with respect to the original studies to be consistent with our convention of denoting the full dispersive shift by $2\chi$ and the drive amplitude by $\Omega_c(t)$. In terms of capturing the dynamic $ZZ$ (RIP) interaction, we find in the following that the dispersive JC model is valid in the parameter regime $2\chi_{ac},2\chi_{bc} \ll |\Delta_{cd}| \ll |\Delta_{ad}|, |\Delta_{bd}|$, the multilevel Kerr model in $2\chi_{ac},2\chi_{bc}<|\Delta_{cd}| \ll |\Delta_{ad}|, |\Delta_{bd}|$ and the approximate ab-initio model in a broader resonator-drive detuning range of $2\chi_{ac},2\chi_{bc} < |\Delta_{cd}| < |\Delta_{ad}|, |\Delta_{bd}|$ (see Sec.~\ref{Sec:EffHam}). In terms of capturing qubit leakage, however, the dispersive JC model cannot be used due to its two-level construction. The multilevel Kerr model is also unable to correctly predict qubit leakage for two reasons. Firstly, the starting Hamiltonian is \textit{diagonal} with respect to the qubits, preventing transitions out of the computational subspace. This can in principle be improved by adding phenomenological direct drive terms on the qubit modes. However, more importantly, Kerr estimates for transmon eigenenergies become less valid for high-excitation states, for which we observe considerable leakage due to frequency collisions. Therefore, using the exact ab-initio model is necessary in the characterization of qubit leakage discussed in Sec.~\ref{Sec:Leak}. \section{Effective RIP gate Hamiltonian} \label{Sec:EffHam} In Cross et al.~\cite{Cross_Optimized_2015}, the gate operation was described within a Lindblad formalism by modeling the qubits as multilevel Kerr oscillators, and analytical estimates for the effective $ZZ$ rate were derived using the generalized P-representation \cite{Drummond_Generalised_1980}.
Here, we analyze effective RIP interactions via SWPT and make a comparison between the aforementioned starting models. The effective RIP Hamiltonian takes the form \begin{align} \hat{\mathcal{H}}_{\text{RIP,eff}}(t) \equiv \omega_{iz}(t)\frac{\hat{I}\hat{Z}}{2}+\omega_{zi}(t)\frac{\hat{Z}\hat{I}}{2}+\omega_{zz}(t)\frac{\hat{Z}\hat{Z}}{2} \;, \label{eqn:EffHam-Def of H_RIP,eff(t)} \end{align} with a dynamic frequency shift for each qubit along with a two-qubit $ZZ$ interaction which consists of \textit{static} and \textit{dynamic} contributions. Our analytical method implements a series of unitary transformations from the lab frame to the \textit{effective} diagonal frame of Eq.~(\ref{eqn:EffHam-Def of H_RIP,eff(t)}). In Sec.~\ref{Subsec:SWPT}, we discuss a generic derivation of the effective Hamiltonian using time-dependent SWPT. In Sec.~\ref{Subsec:Kerr}, we apply it to the multilevel Kerr model, as a simpler case that captures the essential mechanism for the effective $ZZ$ interaction. Section~\ref{Subsec:Compare} compares the effective Hamiltonians found by applying the approach to all three models. Furthermore, detailed derivations of the effective RIP Hamiltonians based on the dispersive JC, the multilevel Kerr and the ab-initio models can be found in Appendices~\ref{App:TLM}, \ref{App:KM} and~\ref{App:EffRIPHam}, respectively. \subsection{Effective Hamiltonian via SWPT} \label{Subsec:SWPT} To arrive at the effective Hamiltonian introduced in Eq.~(\ref{eqn:EffHam-Def of H_RIP,eff(t)}), we devise a unitary transformation $\hat{U}_{\text{diag}}(t)$ that maps the lab frame to the diagonal frame as \begin{align} \hat{\mathcal{H}}_{I,\text{eff}}(t)\equiv \hat{U}_{\text{diag}}^{\dag}(t)\left[\hat{\mathcal{H}}_{s}+\hat{\mathcal{H}}_{d}(t)-i\partial_t\right]\hat{U}_{\text{diag}}(t) \;. 
\label{eqn:EffHam-Def of H_I,eff(t)} \end{align} It can be decomposed into intermediate unitary transformations as \begin{align} \hat{U}_{\text{diag}}(t) \equiv \hat{D}[d_c(t)] \hat{U}_0(t) \hat{U}_{\text{SW}}(t) \;, \label{eqn:EffHam-U_diag decomp} \end{align} where $\hat{D}[d_c(t)]$, $\hat{U}_0(t)$ and $\hat{U}_{\text{SW}}(t)$ denote a coherent displacement transformation of the resonator mode, transformation to the interaction frame, and finally a SW transformation, respectively. In the following, we discuss each transformation in more detail. A typical RIP drive can populate the resonator mode with several photons. The resonator response can therefore be effectively described in terms of classical (coherent) and quantum fluctuation parts. Formally, this is achieved by the displacement transformation \begin{align} \hat{D}[d_c(t)]\equiv e^{d_c(t)\hat{c}^{\dag}-d_c^*(t)\hat{c}} \;, \label{eqn:EffHam-Def of D[d_c]} \end{align} where $\hat{D}^{\dag}[d_c(t)]\hat{c}\hat{D}[d_c(t)]=\hat{c}+d_c(t)$. We then set $d_c(t)$ to cancel out terms linear in $\hat{c}$ and $\hat{c}^{\dag}$ in the displaced frame Hamiltonian. Up to the quartic expansion of the Josephson nonlinearity, this is equivalent to the response of a classical Kerr oscillator to the input RIP pulse. Under the rotating-wave approximation, one finds \begin{align} \dot{\eta}_c(t)+i\Delta_{cd}\eta_c(t)+i\alpha_c|\eta_c(t)|^2\eta_c(t)=-\frac{i}{2}\Omega_{c}(t)\;, \label{eqn:EffHam-Cond for eta_c(t)} \end{align} where $\eta_c(t)\equiv d_c(t)e^{i\omega_d t}$ is the slowly-varying response amplitude, $\Delta_{cd}\equiv \omega_c -\omega_d$ is the resonator-drive detuning and $\alpha_c$ is the effective anharmonicity for the resonator [see Appendices~\ref{App:TLM} and~\ref{App:DispTrans} for the derivations based on the JC ($\alpha_c=0$) and the ab-initio models, respectively]. 
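As a numerical illustration of Eq.~(\ref{eqn:EffHam-Cond for eta_c(t)}), the sketch below integrates the classical Kerr response for a flat-top pulse with a smooth Gaussian turn-on. All parameter values are hypothetical, chosen so that the linear steady state carries $(\Omega_0/2\Delta_{cd})^2 = 9$ photons; because the turn-on is adiabatic on the scale $1/\Delta_{cd}$, the resonator ends up close to this quasi-steady state, slightly shifted by the Kerr term:

```python
import numpy as np
from scipy.integrate import solve_ivp

Delta_cd = 2 * np.pi * 50.0    # resonator-drive detuning (rad/us), hypothetical
alpha_c  = 2 * np.pi * 0.02    # effective resonator anharmonicity, hypothetical
Omega0   = 2 * np.pi * 300.0   # flat-top amplitude of Omega_c(t)
tau_r    = 0.05                # Gaussian turn-on time (us)

def Omega(t):
    # flat-top pulse with a smooth Gaussian turn-on
    return Omega0 * (1.0 - np.exp(-t ** 2 / (2.0 * tau_r ** 2)))

def rhs(t, y):
    # Eq. for the slowly varying amplitude eta_c, split into real/imag parts
    eta = y[0] + 1j * y[1]
    d_eta = -1j * Delta_cd * eta - 1j * alpha_c * abs(eta) ** 2 * eta - 0.5j * Omega(t)
    return [d_eta.real, d_eta.imag]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0], max_step=1e-3, rtol=1e-8)
n_photons = sol.y[0, -1] ** 2 + sol.y[1, -1] ** 2
# tracks (Omega0 / 2 Delta_cd)^2 = 9, mildly reduced by the Kerr-shifted detuning
```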
In Appendix~\ref{App:ResRes}, based on Eq.~(\ref{eqn:EffHam-Cond for eta_c(t)}), we have analyzed the resonator response and leakage in detail. Importantly, we show that using DRAG \cite{Motzoi_Simple_2009, Gambetta_Analytic_2011, Malekakhlagh_Mitigating_2021} can be helpful in suppressing the residual photons \cite{Cross_Optimized_2015} (see Sec.~\ref{Sec:Leak} and Appendix~\ref{App:ResRes}). Next, in the displaced frame, we split the Hamiltonian into \textit{bare} and \textit{interaction} contributions as $\hat{\mathcal{H}}_0+ \hat{\mathcal{H}}_{\text{int}}(t)$. The interaction frame Hamiltonian is then found by the unitary transformation $\hat{U}_0\equiv \exp(-i\hat{\mathcal{H}}_{0} t)$ as $\hat{\mathcal{H}}_{I}(t) \equiv \hat{U}_0^{\dag}(t) \hat{\mathcal{H}}_{\text{int}}(t)\hat{U}_0(t)$. We note that there is flexibility in defining what bare and interaction contributions are. In particular, in Ref.~\cite{Puri_High-Fidelity_2016}, cross-Kerr terms were \textit{not} kept as bare contribution, which is justified for the explored parameter regime $|\Delta_{cd}|\gg |2\chi_{ac}|,|2\chi_{bc}|$. When referring to the dispersive JC model, we follow the same approximation as a point of comparison (see Appendix~\ref{App:TLM}). In Ref.~\cite{Cross_Optimized_2015} and the ab-initio model, cross-Kerr interaction terms are kept in the bare Hamiltonian allowing for more precise modeling of the effective gate parameters for resonator-drive detunings comparable to the dispersive shift, i.e. $|\Delta_{cd}| \sim |2\chi_{ac}|, |2\chi_{bc}|$. We then apply time-dependent SWPT to diagonalize the interaction frame Hamiltonian. 
The SW transformation is in principle a unitary transformation of the form, \begin{align} \hat{U}_{\text{SW}}(t)=e^{-i\hat{G}(t)} \;, \label{Eq:ResRes-Def Of U_SW} \end{align} where we expand the generator $\hat{G}(t)$ and the resulting effective Hamiltonian up to an arbitrary order in the interaction \cite{Gambetta_Analytic_2011, Magesan_Effective_2020, Malekakhlagh_Lifetime_2020, Petrescu_Lifetime_2020, Malekakhlagh_First-Principles_2020, Petrescu_Accurate_2021, Malekakhlagh_Mitigating_2021}. Here, we implement the perturbation up to the second order. The first order perturbation can be summarized as \begin{subequations} \begin{align} &\hat{\mathcal{H}}_{I,\text{eff}}^{(1)}(t)=\mathcal{S}\left(\hat{\mathcal{H}}_{I}(t)\right) \;, \label{eqn:EffHam-H_I,eff^(1) Cond}\\ &\dot{\hat{G}}_1(t)=\mathcal{N}\left(\hat{\mathcal{H}}_{I}(t)\right) \;, \label{eqn:EffHam-G1 Cond} \end{align} while the second order reads (see Appendix~C of Ref.~\cite{Malekakhlagh_First-Principles_2020}) \begin{align} &\hat{\mathcal{H}}_{I,\text{eff}}^{(2)}(t)=\mathcal{S}\Big(i[\hat{G}_1(t),\hat{\mathcal{H}}_I(t)]-\frac{i}{2}[\hat{G}_1(t),\dot{\hat{G}}_1(t)]\Big) \;, \label{eqn:EffHam-H_I,eff^(2) Cond}\\ &\dot{\hat{G}}_2(t)= \mathcal{N}\Big(i[\hat{G}_1(t),\hat{\mathcal{H}}_I(t)]-\frac{i}{2}[\hat{G}_1(t),\dot{\hat{G}}_1(t)]\Big) \;. \label{eqn:EffHam-G2 Cond} \end{align} \end{subequations} In Eqs.~(\ref{eqn:EffHam-H_I,eff^(1) Cond})--(\ref{eqn:EffHam-G2 Cond}), $\mathcal{S}(\bullet)$ and $\mathcal{N}(\bullet)$ denote the diagonal and off-diagonal parts of an operator. In summary, at each order in perturbation, we keep the diagonal contributions in the effective Hamiltonian and solve for a non-trivial SW generator that removes the off-diagonal part. For instance, first order off-diagonal terms in Eq.~(\ref{eqn:EffHam-G1 Cond}) can produce diagonal contributions through 2nd order mixings that appear in terms of commutators in Eq.~(\ref{eqn:EffHam-H_I,eff^(2) Cond}). 
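As a minimal illustration of this bookkeeping (a two-level toy with hypothetical numbers, not the RIP system itself), the sketch below applies Eqs.~(\ref{eqn:EffHam-H_I,eff^(1) Cond})--(\ref{eqn:EffHam-G2 Cond}) to an off-resonant drive in the interaction frame. Here $\hat{\mathcal{H}}_I(t)$ is purely off-diagonal, so the first-order effective Hamiltonian vanishes, $\hat{G}_1(t)$ has a closed-form oscillatory antiderivative (integration constant dropped), and the second order reproduces the familiar ac Stark shift $\pm V^2/\Delta$:

```python
import numpy as np

# two-level toy: off-resonant drive in the interaction frame, hypothetical values
V, Delta, t = 0.8, 5.0, 0.37
sp = np.array([[0, 0], [1, 0]], complex)   # sigma_+ = |1><0|
sm = sp.conj().T                           # sigma_-
sz = np.diag([-1.0, 1.0])                  # |1><1| - |0><0|

S = lambda M: np.diag(np.diag(M))          # diagonal part  S(.)
N = lambda M: M - S(M)                     # off-diagonal part  N(.)

HI  = V * (np.exp(1j * Delta * t) * sp + np.exp(-1j * Delta * t) * sm)
# oscillatory antiderivative of dG1/dt = N(H_I); here N(H_I) = H_I
G1  = (V / (1j * Delta)) * (np.exp(1j * Delta * t) * sp - np.exp(-1j * Delta * t) * sm)
G1d = HI

# first order: S(H_I) = 0; second order: diagonal part of the commutator expression
H2 = S(1j * (G1 @ HI - HI @ G1) - 0.5j * (G1 @ G1d - G1d @ G1))
# expected: the ac Stark shift (V^2 / Delta) * (|1><1| - |0><0|)
```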
Given the effective Hamiltonian in an extended Hilbert space, we read off the relevant gate parameters as \begin{subequations} \begin{align} &\omega_{iz}(t)\equiv \frac{1}{2}\text{Tr}\left\{\left(\hat{I}_a \otimes \hat{Z}_b \otimes \ket{0_c} \bra{0_c} \right)\cdot \hat{\mathcal{H}}_{I,\text{eff}}(t)\right\} \;, \label{eqn:EffHam-Def of w_iz}\\ &\omega_{zi}(t)\equiv \frac{1}{2}\text{Tr}\left\{\left(\hat{Z}_a \otimes \hat{I}_b \otimes \ket{0_c} \bra{0_c} \right)\cdot \hat{\mathcal{H}}_{I,\text{eff}}(t)\right\} \;, \label{eqn:EffHam-Def of w_zi}\\ &\omega_{zz}(t)\equiv \frac{1}{2}\text{Tr}\left\{\left(\hat{Z}_a \otimes \hat{Z}_b \otimes \ket{0_c} \bra{0_c} \right)\cdot \hat{\mathcal{H}}_{I,\text{eff}}(t)\right\}, \label{eqn:EffHam-Def of w_zz} \end{align} \end{subequations} where $\hat{Z}\equiv \ket{0}\bra{0}-\ket{1}\bra{1}$. Note the distinction between the \textit{physical} Pauli operator $\hat{\sigma}^z$ of Eq.~(\ref{eqn:Model-Def of H_s,JC}) and the \textit{logical} Pauli operator $\hat{Z}$ of Eqs.~(\ref{eqn:EffHam-Def of w_iz})--(\ref{eqn:EffHam-Def of w_zz}), related by $\hat{Z}=-\hat{\sigma}^z$. The effective Hamiltonian is defined in the displaced frame, and hence the zero-photon subspace in Eqs.~(\ref{eqn:EffHam-Def of w_iz})--(\ref{eqn:EffHam-Def of w_zz}) corresponds to no excitations beyond the coherent photon number $|\eta_c(t)|^2$. In experimental results, it is more common to report the \textit{full} $ZZ$ rate, which is twice the value quoted in this paper. In order to calibrate a controlled-phase gate of rotation angle $\theta_{zz}$, we need to set $\int_{0}^{\tau}dt'\omega_{zz}(t') = \theta_{zz}$, where $\omega_{zz}(t)=\omega_{zz}^{(0)}+\omega_{zz}^{(2)}(t)+O(\hat{\mathcal{H}}_{I}^4)$.
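The projections in Eqs.~(\ref{eqn:EffHam-Def of w_iz})--(\ref{eqn:EffHam-Def of w_zz}) can be sketched directly with Kronecker products. In the toy example below (all rates are hypothetical), an effective Hamiltonian assembled in the zero-photon subspace, plus a spurious term in the one-photon subspace, is fed through the trace formulas, which recover the intended rates and ignore the spurious term:

```python
import numpy as np

I2 = np.eye(2)
Z  = np.diag([1.0, -1.0])                 # Z = |0><0| - |1><1| (logical convention)
P0 = np.zeros((3, 3)); P0[0, 0] = 1.0     # |0_c><0_c| for a 3-level resonator truncation
P1 = np.zeros((3, 3)); P1[1, 1] = 1.0     # one-photon projector

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

# hypothetical effective rates in the zero-photon subspace, plus an arbitrary
# one-photon term that the zero-photon projectors must ignore
nu_iz, nu_zi, nu_zz = -1.7, -2.3, 0.41
H_eff = (nu_iz / 2) * kron3(I2, Z, P0) + (nu_zi / 2) * kron3(Z, I2, P0) \
      + (nu_zz / 2) * kron3(Z, Z, P0) + 0.9 * kron3(Z, Z, P1)

w_iz = 0.5 * np.trace(kron3(I2, Z, P0) @ H_eff)
w_zi = 0.5 * np.trace(kron3(Z, I2, P0) @ H_eff)
w_zz = 0.5 * np.trace(kron3(Z, Z, P0) @ H_eff)
# recovers (nu_iz, nu_zi, nu_zz); the Pauli strings are trace-orthogonal
```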
In particular, $\theta_{zz}=\pi/2$ is equivalent to CNOT up to single-qubit Hadamard and $Z$ rotations as \begin{subequations} \begin{align} &\quad \quad \hat{U}_{\text{CZ}}=\exp \left[-i\frac{\pi}{4} \left(\hat{I}\hat{I}-\hat{I}\hat{Z}-\hat{Z}\hat{I}+\hat{Z}\hat{Z} \right)\right] \;, \label{eqn:EffHam-U_cz ITO U_zz} \\ & \quad \quad \hat{U}_{\text{CX}}=\hat{I}\hat{H} \cdot \hat{U}_{\text{CZ}} \cdot \hat{I}\hat{H} \;, \label{eqn:EffHam-U_cx ITO U_cz} \end{align} \end{subequations} with $\hat{U}_{\text{CZ}}$, $\hat{U}_{\text{CX}}$ and $\hat{H}$ denoting the controlled-$Z$, controlled-$X$ and Hadamard operations. The procedure outlined in this section can be extended to a Lindblad master equation to arrive at an effective $ZZ$ rate that accounts for the resonator decay rate $\kappa_c$ \cite{Cross_Optimized_2015}. The result can be obtained by replacing $\omega_c\rightarrow \omega_c-i\kappa_c/2$ and reading off the real part of Eq.~(\ref{eqn:EffHam-Def of w_zz}) (see Cross et al.~\cite{Cross_Optimized_2015} for more detail). \subsection{Kerr model} \label{Subsec:Kerr} We next analyze the effective Hamiltonian and gate parameters based on the multilevel Kerr model (see Appendix~\ref{App:KM} for derivation). The perturbative description of the effective Hamiltonian is valid when $|\chi_{jc} \eta_c(t)| < |\Delta_{cd}|$ for $j=a,b$. The lowest order correction to the effective Hamiltonian reads \begin{align} \hat{\mathcal{H}}_{I,\text{eff}}^{(1)}(t)=2\chi_{ac}|\eta_c(t)|^2\hat{n}_a+2\chi_{bc}|\eta_c(t)|^2\hat{n}_b \;, \label{eqn:EffHam-H_I,eff^(1) Sol} \end{align} consisting of a dynamic frequency shift equal to $2\chi$ per resonator photon number for each qubit. 
Up to the second order, we find \begin{subequations} \begin{align} \begin{split} \hat{\mathcal{H}}_{I,\text{eff}}^{(2)}(t)=&-8\chi_{ac}\chi_{bc}\hat{\mathcal{A}}_{\eta}(t)\hat{n}_a\hat{n}_b\\ &-4\chi_{ac}^2\hat{\mathcal{A}}_{\eta}(t)\hat{n}_a^2\\ &-4\chi_{bc}^2\hat{\mathcal{A}}_{\eta}(t)\hat{n}_b^2 \;,\\ \end{split} \label{eqn:EffHam-H_I,eff^(2) Sol} \end{align} which contains an effective number-number interaction as well as an anharmonic shift of the qubit eigenfrequencies. The drive dependence of the effective interaction can be compactly written in terms of the response function \begin{align} &\hat{\mathcal{A}}_{\eta}(t) \equiv \Im\left\{\int^{t}dt'\eta_c(t)\eta_c^*(t')e^{i\hat{\Delta}_{cd}(t-t')}\right\} \;, \label{eqn:EffHam-Def of A_eta(na,nb)}\\ &\hat{\Delta}_{cd} \equiv \Delta_{cd}+2\chi_{ac}\hat{n}_a+2\chi_{bc}\hat{n}_b \;, \label{eqn:EffHam-Def of hat(Delta)_cd} \end{align} \end{subequations} where the operator-valued detuning $\hat{\Delta}_{cd}$ encodes the qubit-state-dependent phase for the resonator (see Fig.~\ref{fig:EffHam-KerrModelFreqSchematic}). In the adiabatic limit, for which the spectral content of the drive has negligible overlap with the resonator-drive detuning $\Delta_{cd}$, we can approximate $\hat{\mathcal{A}}_{\eta}(t)$ as \begin{align} \begin{split} \hat{\mathcal{A}}_{\eta} (t)=\frac{|\eta_c(t)|^2}{\hat{\Delta}_{cd}}+\frac{\Im\left\{\eta_c(t)\dot{\eta}_c^*(t)\right\}}{\hat{\Delta}_{cd}^2}+O\left(\frac{\eta_c(t)\ddot{\eta}_c^*(t)}{\hat{\Delta}_{cd}^3}\right), \end{split} \label{eqn:EffHam-Adiabatic A_eta} \end{align} where the first and the second terms are known as the \textit{dynamic} and \textit{geometric} \cite{Pechal_geometric_2012, Bohm_Geometric_2013, Cross_Optimized_2015} contributions. The geometric correction becomes pertinent for relatively fast pulses with spectral widths comparable to the resonator-drive detuning. 
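The leading term of this expansion is easy to check numerically for a scalar detuning (i.e. replacing $\hat{\Delta}_{cd}$ by a number) and a slow real Gaussian pulse, for which the geometric term vanishes at the pulse peak. All numbers below are hypothetical, in arbitrary units with $\Delta_{cd}\tau\gg1$:

```python
import numpy as np

# scalar stand-in for the operator-valued detuning, slow real Gaussian pulse
Delta, tau, dt = 1.0, 50.0, 0.02
t_grid = np.arange(-300.0, 0.0001, dt)          # integrate up to the pulse peak t = 0
eta = np.exp(-t_grid ** 2 / (2.0 * tau ** 2))   # eta_c(t'), real

t, eta_t = t_grid[-1], eta[-1]
# exact response: A(t) = Im int_{-inf}^{t} eta(t) eta*(t') e^{i Delta (t - t')} dt'
A_exact = dt * np.sum(eta_t * eta * np.sin(Delta * (t - t_grid)))
# leading adiabatic (dynamic) term; the geometric term Im{eta deta*/dt}/Delta^2
# vanishes here because eta is real
A_adiab = eta_t ** 2 / Delta
```

The two values agree to within $O\big(1/(\Delta_{cd}\tau)^2\big)$, as the expansion suggests.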
Because such pulses lead to significant resonator leakage (see Appendix~E), we consider only the leading dynamic contribution which is proportional to the photon number $|\eta_c(t)|^2$. \begin{figure} \centering \includegraphics[scale=0.40]{KerrModelFreqSchematic.png} \caption{Schematic of the four (computational) qubit-state-dependent frequencies for the resonator based on the Kerr model.} \label{fig:EffHam-KerrModelFreqSchematic} \end{figure} Using Eqs.~(\ref{eqn:EffHam-Def of w_iz})--(\ref{eqn:EffHam-Def of w_zz}), we read off the gate parameters by projecting the effective Hamiltonian in Eqs.~(\ref{eqn:EffHam-H_I,eff^(1) Sol}) and~(\ref{eqn:EffHam-H_I,eff^(2) Sol}) onto the computational subspace. The lowest order Hamiltonian contains dynamic frequency shifts for the qubits as \begin{subequations} \begin{align} &\omega_{iz}^{(1)}(t) = -2\chi_{bc}|\eta_c(t)|^2 \;, \label{eqn:EffHam-w_iz^(1)}\\ &\omega_{zi}^{(1)}(t) = -2\chi_{ac}|\eta_c(t)|^2 \;. \label{eqn:EffHam-w_zi^(1)} \end{align} \end{subequations} The second order dynamic frequency shifts read \begin{subequations} \begin{align} \begin{split} \omega_{iz}^{(2)}(t)=\frac{1}{2}\left[\frac{2\chi_{bc}(\Delta_{cd}+4\chi_{bc})}{\Delta_{cd}+2\chi_{bc}}-\frac{\Delta_{cd}^2}{\Delta_{cd}+2\chi_{ac}} \right.\\ \left.+\frac{\Delta_{cd}^2}{\Delta_{cd}+2(\chi_{bc}+\chi_{ac})}\right]|\eta_c(t)|^2 \;, \end{split} \label{eqn:EffHam-w_iz^(2)} \end{align} \begin{align} \begin{split} \omega_{zi}^{(2)}(t)=\frac{1}{2}\left[\frac{2\chi_{ac}(\Delta_{cd}+4\chi_{ac})}{\Delta_{cd}+2\chi_{ac}}-\frac{\Delta_{cd}^2}{\Delta_{cd}+2\chi_{bc}} \right.\\ \left.+\frac{\Delta_{cd}^2}{\Delta_{cd}+2(\chi_{ac}+\chi_{bc})}\right]|\eta_c(t)|^2 \;. \end{split} \label{eqn:EffHam-w_zi^(2)} \end{align} \begin{figure}[t!] 
\centering \includegraphics[scale=0.220]{IZComparisonTenResPhoton.pdf} \includegraphics[scale=0.220]{IZComparisonTenResPhotonWideRange.pdf}\\ \includegraphics[scale=0.220]{ZIComparisonTenResPhoton.pdf} \includegraphics[scale=0.220]{ZIComparisonTenResPhotonWideRange.pdf}\\ \includegraphics[scale=0.225]{ZZComparisonTenResPhoton.pdf} \includegraphics[scale=0.220]{ZZComparisonTenResPhotonWideRange.pdf} \caption{Comparison between the phenomenological and the approximate ab-initio models under adiabatic approximation as a function of $\Delta_{cd}$ and maximum photon number $|\eta_c(t)|^2\rightarrow |\eta_{c,\text{ss}}|^2\approx(\Omega_c/2\Delta_{cd})^2=10$. For the ab-initio model, system parameters are $E_{Ja}/2\pi=14250$, $E_{Ca}/2\pi=255$ ($\bar{\omega}_a/2\pi \approx 5391.660$), $E_{Jb}/2\pi=17000$, $E_{Cb}/2\pi=275$ ($\bar{\omega}_b/2\pi \approx 6115.550$), $\bar{\omega}_c/2\pi =7000$, with couplings $g_a/2\pi=150$ and $g_b/2\pi=85$, which up to the sextic expansion yields $2\chi_{ac}/2\pi \approx -4.863$ and $2\chi_{bc}/2\pi\approx -4.910$ (all in MHz). Gate charge is set to zero for simplicity. Phenomenological models adopt the same dispersive shifts and normal mode frequencies. The left column shows small--medium resonator-drive detuning where the RIP gate is typically implemented. 
The right column shows the same traces plotted over a wider detuning range to demonstrate that the ab-initio theory captures the renormalization of the gate parameters due to additional interaction forms involving qubit resonances (see Appendix~\ref{App:EffRIPHam}).} \label{fig:EffHam-GateParamsComparison} \end{figure} Furthermore, there is a \textit{dynamic} $ZZ$ term corresponding to the \textit{intended} RIP interaction, \begin{align} \omega_{zz}^{(2)}(t) =\frac{-4\chi_{ac}\chi_{bc}(\Delta_{cd}+\chi_{ac}+\chi_{bc})\Delta_{cd}|\eta_c(t)|^2}{(\Delta_{cd}+2\chi_{ac})(\Delta_{cd}+2\chi_{bc})[\Delta_{cd}+2(\chi_{ac}+\chi_{bc})]}, \label{eqn:EffHam-w_zz^(2)} \end{align} \end{subequations} which is proportional to the resonator photon number and the dispersive shift for each qubit. There are four distinct poles at $\Delta_{cd}\approx 0$ [hidden in $\eta_c(t)$ in the adiabatic limit], $\Delta_{cd}=-2\chi_{ac}$, $\Delta_{cd}=-2\chi_{bc}$ and $\Delta_{cd}=-2(\chi_{ac}+\chi_{bc})$ corresponding to resonance between the drive frequency and the resonator frequency for the computational states $\ket{0_a0_b}$, $\ket{1_a 0_b}$, $\ket{0_a 1_b}$ and $\ket{1_a 1_b}$, respectively (see Fig.~\ref{fig:EffHam-KerrModelFreqSchematic}). In the limit where $\chi_{ac}=\chi_{bc}=\chi$, we recover the expression quoted in Refs.~\cite{Cross_Optimized_2015, Paik_Experimental_2016} as $-4\chi^2 \Delta_{cd}|\eta_c(t)|^2/[(\Delta_{cd}+2\chi)(\Delta_{cd}+4\chi)]$. Moreover, if the detuning is much larger than the dispersive shifts, we arrive at the simpler expression $-4\chi^2 |\eta_c(t)|^2/\Delta_{cd}$, in agreement with the dispersive JC model \cite{Puri_High-Fidelity_2016}. Note that the RIP interaction sits on top of a static $ZZ$ rate that is captured only by the \textit{multilevel} models (Kerr and ab-initio).
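The limiting forms of the dynamic $ZZ$ rate in Eq.~(\ref{eqn:EffHam-w_zz^(2)}) quoted above can be verified directly in a few lines; the parameter values in the sketch below are illustrative rather than taken from a specific device:

```python
# Sketch of the second-order dynamic ZZ rate, Eq. (w_zz^(2)). Here chi_ac,
# chi_bc are half the full dispersive shifts 2*chi, and n = |eta_c|^2 is the
# resonator photon number. Units are arbitrary but consistent; the values
# below are illustrative.

def omega_zz2(delta_cd, chi_ac, chi_bc, n):
    num = -4 * chi_ac * chi_bc * (delta_cd + chi_ac + chi_bc) * delta_cd * n
    den = ((delta_cd + 2 * chi_ac) * (delta_cd + 2 * chi_bc)
           * (delta_cd + 2 * (chi_ac + chi_bc)))
    return num / den

def omega_zz2_equal_chi(delta_cd, chi, n):
    # Equal-dispersive-shift limit quoted in the text
    return -4 * chi**2 * delta_cd * n / ((delta_cd + 2 * chi) * (delta_cd + 4 * chi))

chi, delta, n = -2.5, -50.0, 10.0
print(omega_zz2(delta, chi, chi, n), omega_zz2_equal_chi(delta, chi, n))

# Large-detuning limit: -4 chi^2 n / delta (dispersive JC model)
delta_large = -5000.0
print(omega_zz2(delta_large, chi, chi, n), -4 * chi**2 * n / delta_large)
```

The equal-$\chi$ limit matches identically, and the large-detuning value approaches the dispersive-JC expression as the ratio of dispersive shift to detuning shrinks.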
In terms of an effective qubit-qubit exchange interaction \begin{subequations} \begin{align} J \approx \left[\frac{(\bar{\Sigma}_{ab}-2\bar{\omega}_c)}{2\bar{\Delta}_{ac}\bar{\Delta}_{bc}}-\frac{(\bar{\Sigma}_{ab}+2\bar{\omega}_c)}{2\bar{\Sigma}_{ac}\bar{\Sigma}_{bc}}\right]g_ag_b\;, \label{eqn:EffHam-Def of J} \end{align} with $\bar{\Sigma}_{jk}\equiv \bar{\omega}_j+\bar{\omega}_k$ and $\bar{\Delta}_{jk}\equiv \bar{\omega}_j-\bar{\omega}_k$, the multilevel Kerr model \cite{Cross_Optimized_2015, Magesan_Effective_2020} predicts \begin{align} \omega_{zz}^{(0)} \approx \frac{J^2}{\Delta_{ab}-\alpha_b}-\frac{J^2}{\Delta_{ab}+\alpha_a}\;. \label{eqn:EffHam-w_zz^(0)} \end{align} \end{subequations} The static $ZZ$ interaction reduces the on/off contrast of the $ZZ$ gate and can be detrimental to most gate operations. However, there are various methods for its suppression: (i) large qubit-qubit detuning compared to qubit anharmonicity [see Eq.~(\ref{eqn:EffHam-w_zz^(0)})], (ii) fast tunable couplers \cite{Chen_Qubit_2014}, (iii) destructive interference between multiple couplers \cite{Mundada_Suppression_2019, Kandala_Demonstration_2020}, (iv) combining qubits with opposite anharmonicity \cite{Ku_Suppression_2020}, and (v) dynamic Stark tones (siZZle) \cite{Wei_Quantum_2021, Mitchell_Hardware_2021}. In particular, method (iii) has shown significant improvement, reducing the static $ZZ$ down to 0.1 kHz for a test RIP device consisting of six qubits \cite{Kumph_Novel_APS2021}. \subsection{Comparison and discussion} \label{Subsec:Compare} We next study the RIP gate parameters in more detail and make a comparison between the various starting models introduced in Sec.~\ref{Sec:Model}. Figure~\ref{fig:EffHam-GateParamsComparison} presents the effective RIP gate parameters as a function of resonator-drive detuning for fixed photon number.
In particular, in terms of agreement for the \textit{dynamic} $ZZ$ rate, we recognize three regions of operation depending on the relation between the resonator-drive detuning $\Delta_{cd}$, qubit-drive detunings $\Delta_{ad},\Delta_{bd}$ and the dispersive shifts $2\chi_{ac}$ and $2\chi_{bc}$: \begin{itemize} \item [(i)] Small detuning ($|\Delta_{cd}|\sim~2|\chi_{ac}|, \ 2|\chi_{bc}|$) --- The ab-initio and the Kerr \cite{Cross_Optimized_2015} models agree well and both predict a local maximum for the $ZZ$ rate for $\Delta_{cd}<0$ [Fig.~\ref{fig:EffHam-GateParamsComparison}(e)]. The local maximum is a result of having multiple relatively close poles at $\Delta_{cd}=-2\chi_{ac},-2\chi_{bc}$ and $\Delta_{cd}=-2\chi_{ac}-2\chi_{bc}$. \item [(ii)] Medium detuning ($2|\chi_{ac}|, \ 2|\chi_{bc}|\ll |\Delta_{cd}| \ll |\Delta_{ad}|,|\Delta_{bd}|$) --- RIP drive frequency is closer to the resonator than to the qubit frequencies. Moreover, resonator-drive detuning is sufficiently larger than the dispersive shifts such that the qubit-state dependence of the poles is less noticeable. Consequently, all models agree. \item [(iii)] Large detuning ($ 2|\chi_{ac}|, \ 2|\chi_{bc}|\ll |\Delta_{cd}|\sim |\Delta_{ad}|,|\Delta_{bd}|$) --- Resonator-drive and qubit-drive detunings are comparable. Therefore, there are additional processes, beyond dispersive JC and Kerr interactions, that renormalize the gate parameters (see Appendices~\ref{App:MLM} and~\ref{App:EffRIPHam}). This region is not necessarily relevant to RIP gate implementation but is a natural extension of our comparison which quantifies the validity of the phenomenological models [Fig.~\ref{fig:EffHam-GateParamsComparison}(f)].\\ \end{itemize} In terms of the RIP interaction, the ab-initio model agrees well with the former phenomenological models in their \textit{intended} resonator-drive detuning regimes. 
However, according to Fig.~\ref{fig:EffHam-GateParamsComparison}, we observe a large deviation between the dynamic frequency shifts predicted by the ab-initio theory and those predicted by the phenomenological models in all regions of operation. We find the source of this deviation to be contributions of the form $\hat{\mathcal{H}}_{\text{qd}}(t) \equiv \sum_{j=a,b} [\lambda_j(t)\hat{j}e^{i\omega_d t}+\text{H.c.}]$ that act as \textit{direct} drive terms on the qubits. Here, $\lambda_j(t)$ denotes the \textit{effective} drive amplitude on qubit $j=a,b$ and contains contributions from \textit{linear} charge hybridization and nonlinear mode mixing through flux hybridization (see Appendices~\ref{App:MLM} and \ref{App:EffRIPHam}). In summary, larger dispersive shifts, stronger RIP drive amplitude and smaller resonator-drive detuning all contribute to a stronger $ZZ$ interaction. However, we also have to account for the trade-off between a strong dynamic $ZZ$ rate and the corresponding unwanted increase in both resonator and qubit leakage. Our analysis of leakage reveals further restrictions on the choice of qubit frequency, anharmonicity and RIP drive parameters, which we discuss in Sec.~\ref{Sec:Leak} and Appendix~\ref{App:ResRes}. \section{Leakage} \label{Sec:Leak} \begin{figure}[t!] \centering \includegraphics[scale=0.35]{SchematicUniverseOfCollisions.png} \caption{Universe of collisions involving two transmon qubits that are connected via a common bus resonator. The possibilities can be summarized as (i) qubit-qubit collisions \cite{Magesan_Effective_2020, Malekakhlagh_First-Principles_2020, Hertzberg_Effects_2020}, (ii) qubit-resonator collisions \cite{Sank_Measurement-Induced_2016} and (iii) three-body collisions. Symbol ``$\sim$'' denotes collision (degeneracy) between system states.} \label{fig:Leak-CollisionUniverse} \end{figure} \begin{figure}[t!]
\centering \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq5p14GHzChi5p57MHz.png} \includegraphics[scale=0.144]{2DSweepAnyLeakWithDragFreq5p14GHzChi5p57MHz.png}\\ \includegraphics[scale=0.144]{2DSweepQuLeakNoDragFreq5p14GHzChi5p57MHz.png} \includegraphics[scale=0.144]{2DSweepQuLeakWithDragFreq5p14GHzChi5p57MHz.png}\\ \includegraphics[scale=0.144]{2DSweepResLeakNoDragFreq5p14GHzChi5p57MHz.png} \includegraphics[scale=0.144]{2DSweepResLeakWithDragFreq5p14GHzChi5p57MHz.png}\\ \caption{Qubit-resonator case --- Overall, qubit and resonator leakages for $\omega_a/2\pi \approx 5140$ MHz, $\omega_c/2\pi \approx 6971$ MHz, $2\chi_{ac}/2\pi \approx -5.57$ MHz, fixed maximum photon number $[\Omega_c/(2\Delta_{cd})]^2\approx 16$ and fixed gate charge set to $0.37$ as a function of qubit anharmonicity and resonator-drive detuning. The initial state is set as $\ket{\Psi(0)}=(1/\sqrt{2})(\ket{0_a}+\ket{1_a})\otimes \ket{0_c}$ to allow leakage from both qubit states in a single run. The results here are for $\omega_d>\omega_c$. We confirmed numerically that $\omega_d<\omega_c$ leads to more resonator leakage, consistent with additional $2\chi$-shifted poles in Eq.~(\ref{eqn:EffHam-Def of hat(Delta)_cd}) and Fig.~\ref{fig:EffHam-KerrModelFreqSchematic}. The resonator is driven with a nested cosine pulse [Eq.~(\ref{eqn:Leak-Def of NCPulse})] of $\tau=200$ ns. The left (right) column presents results without (with) DRAG on the resonator given by Eqs.~(\ref{eqn:Leak-Def of Omcx})--(\ref{eqn:Leak-Def of Omcy}).} \label{fig:Leak-2DSweepWithDRAG} \end{figure} RIP gate leakage can be separated into a background leakage, also called residual photons, due to the non-adiabaticity of the drive pulse and the resonator, and leakage to specific system states due to frequency collisions.
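The first of these contributions admits a simple classical illustration. Treating the driven resonator as a classical mode and ignoring the qubit (simplifications made purely for illustration; the $\sin^2$ envelope and parameter values below are hypothetical), the residual population at the end of the pulse is invariant under doubling $\Delta_{cd}$ and $\Omega_c$ while halving the pulse time:

```python
import numpy as np

# Classical single-mode sketch of the background (residual-photon) leakage:
# the coherent resonator amplitude obeys alpha' = -i*Delta*alpha - i*(Omega/2)*P(t).
# The qubit is ignored and a smooth sin^2 envelope is used; both are
# illustrative simplifications, not the paper's ab-initio model.

def residual_photons(delta, omega, tau, steps=20000):
    envelope = lambda t: np.sin(np.pi * t / tau) ** 2
    rhs = lambda t, a: -1j * delta * a - 1j * (omega / 2.0) * envelope(t)
    a, h = 0.0 + 0.0j, tau / steps
    for k in range(steps):                     # fixed-step RK4
        t = k * h
        k1 = rhs(t, a)
        k2 = rhs(t + h / 2, a + h * k1 / 2)
        k3 = rhs(t + h / 2, a + h * k2 / 2)
        k4 = rhs(t + h, a + h * k3)
        a += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return abs(a) ** 2                         # residual photons at t = tau

delta, omega, tau = 2 * np.pi * 0.05, 2 * np.pi * 0.5, 100.0  # illustrative units
r_slow = residual_photons(delta, omega, tau)
r_fast = residual_photons(2 * delta, 2 * omega, tau / 2)      # twice as fast
print(r_slow, r_fast)
```

The two residuals coincide because the rescaling leaves both $\Delta_{cd}\tau$ and the peak photon number $(\Omega_c/2\Delta_{cd})^2$ unchanged.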
The background leakage is comparatively easy to understand and control, as its ratio to the intended photon number depends primarily on the overlap of the pulse's spectrum with the resonator frequency via the collective quantity $\Delta_{cd} \tau$, with $\tau$ being the pulse (rise) time. Therefore, to keep the background leakage constant while making the pulse twice as fast, the most basic solution is to double $\Delta_{cd}$ and $\Omega_c$. A more involved control scheme for mitigating the background leakage, however, is to filter the pulse spectrum at $\Delta_{cd}$. We show that adding a DRAG pulse to the resonator works as an effective filter (see also Appendix~\ref{App:ResRes}). Focusing next on the leakage due to frequency collisions, three categories arise depending on which circuit elements participate in the exchange of excitations (see Fig.~\ref{fig:Leak-CollisionUniverse}): (i) Qubit-qubit collisions \cite{Magesan_Effective_2020, Malekakhlagh_First-Principles_2020, Hertzberg_Effects_2020}, (ii) qubit-resonator collisions \cite{Sank_Measurement-Induced_2016}, and (iii) three-body collisions. Qubit-qubit collisions occur in the zero (fixed)-photon subspace of the resonator when the qubits' detuning is approximately an integer multiple of the qubits' anharmonicity. Having the qubit-qubit detuning away from the straddling regime minimizes the leakage due to such inter-qubit collisions. Qubit-resonator collisions occur between high- and low-excitation states of one of the qubits and the resonator while the other qubit is in a fixed state. More involved three-body collisions are also possible where both qubits and the resonator participate. \begin{figure}[t!]
\centering \includegraphics[scale=0.35]{ResLeakageDet50.png}\\ \includegraphics[scale=0.35]{QubitLeakageDet50.png} \caption{Qubit-resonator case --- (a) Resonator leakage, (b) qubit leakage for the same parameters as in Fig.~\ref{fig:Leak-2DSweepWithDRAG}, constant detuning $\Delta_{cd}/2\pi=-50$ MHz and varying photon number.} \label{fig:Leak-QuResLeakage} \end{figure} \begin{figure}[t!] \centering \includegraphics[scale=0.35]{AllLeakageDet50MaxPhoton16.png}\\ \includegraphics[scale=0.35]{MostLeakedStates.png} \caption{Qubit-resonator case --- (a) Resonator, qubit and total leakage, (b) decomposition of leakage into qubit$\otimes$resonator eigenstates for the same parameters as in Fig.~\ref{fig:Leak-QuResLeakage} and 16 maximum photons. The \textit{dominant} leakage clusters in (b) correspond to specific high-excitation qubit states and variant photon number. They can be understood in terms of the frequency collisions in Fig.~\ref{fig:Leak-HighLowStateCollision} and Table~\ref{tab:Leak-ReadoutCollisions}.} \label{fig:Leak-MostLeakedStates} \end{figure} \subsection{Qubit-resonator leakage} \label{SubSec:READLikeLeak} We first consider a single transmon qubit coupled to a driven resonator which serves as a simpler version of the RIP gate that exhibits the essential leakage mechanisms for the qubit-resonator category. In particular, we observe leakage clusters to certain high-excitation qubit states and associate them with collisions between high-excitation and computational states of the qubit with different photon number. This analysis has implications beyond the RIP gate and specifically for optimal design of qubit readout. Similar drive-induced collisions have been studied in Ref.~\cite{Sank_Measurement-Induced_2016}. Here, by sweeping qubit parameters, we categorize a series of collisions that extends Ref.~\cite{Sank_Measurement-Induced_2016} (see Table~\ref{tab:Leak-ReadoutCollisions}). 
For numerical simulation of the qubit-resonator case, we adopt the exact ab-initio Hamiltonian~(\ref{eqn:Model-Starting Hs})--(\ref{eqn:Model-Starting Hd}) with just one transmon qubit. To correctly model leakage to high-excitation states, we keep 10 transmon and 48 resonator eigenstates. The technical details of the numerical simulations are described in Appendix~\ref{App:NumMet}. For experimentally relevant comparisons, we numerically search for the ab-initio system parameters in Eqs.~(\ref{eqn:Model-Starting Hs})--(\ref{eqn:Model-Starting Hd}) that keep certain relevant quantities fixed, while sweeping others. Given that the ab-initio and the pulse parameter space is large [8D (12D) for the single (two) qubit cases], our results are presented in terms of continuous sweeps only in select quantities such as qubit anharmonicity and resonator-drive detuning, with approximately constant cuts in other quantities such as the qubit frequency, resonator frequency, resonator photon number and dispersive shift. We drive the resonator with the pulse shape \begin{align} P_{\text{nc}}(t;\tau) \equiv \frac{1}{2}\left\{\cos\left[\pi\cos\left(\pi\frac{t}{\tau}\right)\right]+1\right\} \;, \label{eqn:Leak-Def of NCPulse} \end{align} which we call a nested cosine of duration $\tau$. Compared to a Gaussian pulse, the nested cosine pulse leads to reduced resonator leakage due to its smoother ramps \cite{Cross_Optimized_2015} (see Appendix~\ref{App:ResRes}). We then compute the overall leakage as the residual occupation of system states at the \textit{end} of the pulse. More specifically, resonator leakage is computed as the probability of finding the system with a non-zero photon number summed over all possible qubit states, and qubit leakage is found as the probability of finding the qubit outside the computational subspace summed over all possible resonator states. \begin{figure}[t!]
\centering \includegraphics[scale=0.290]{HighLowStateCollisionState6and0.png}\\ \includegraphics[scale=0.290]{HighLowStateCollisionState8and0.png} \caption{Qubit-resonator case --- Examples of collisions between high-excitation and computational states for the same system parameters as in Figs.~\ref{fig:Leak-QuResLeakage}--\ref{fig:Leak-MostLeakedStates} \textit{without drive}. States are labeled by the maximum overlap with the corresponding uncoupled case, hence avoided crossings appear as \textit{fictitious} jumps. (a) Collisions between state $\ket{6_a,0_c}$ and computational states $\ket{0_a,4_c}$ (similar to Ref.~\cite{Sank_Measurement-Induced_2016}) and $\ket{1_a,3_c}$. (b) Collisions between state $\ket{8_a,0_c}$ and computational states $\ket{0_a,5_c}$ and $\ket{1_a,4_c}$. Note the possibility of \textit{multiple} collisions between the \textit{same} states due to e.g. bending of the 8th excited state at large anharmonicity [see Eq.~(\ref{eqn:Leak-WhyStatesBend})]. This is in agreement with observing \textit{two} leakage clusters to the 6th excited state and \textit{three} to the 8th excited state in Fig.~\ref{fig:Leak-MostLeakedStates}(b).} \label{fig:Leak-HighLowStateCollision} \end{figure} We first analyze the case where the qubit and resonator frequencies are approximately fixed at $\omega_a/2\pi \approx 5140$~MHz and $\omega_c/2\pi \approx 6971$~MHz and the qubit-resonator dispersive coupling at $2\chi_{ac}/2\pi \approx -5.57$~MHz. Results for other qubit frequencies and dispersive shifts are also shown at the end of this section. Figure~\ref{fig:Leak-2DSweepWithDRAG} shows the overall, resonator and qubit leakage for 16 photons as a 2D sweep of detuning and qubit anharmonicity, while comparing between pulses with and without DRAG to suppress drive at the resonator frequency. 
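The filtering action of DRAG can be previewed numerically: adding a quadrature proportional to the envelope derivative, scaled by $1/\Delta_D$, nulls the Fourier component of the nested cosine envelope of Eq.~(\ref{eqn:Leak-Def of NCPulse}) exactly at the detuning $\Delta_D$. A sketch with illustrative pulse parameters:

```python
import numpy as np

# Spectral effect of a DRAG (derivative) quadrature on the nested cosine
# envelope: the combined complex envelope P + i*(dP/dt)/Delta_D has zero
# Fourier weight exactly at the detuning Delta_D. Parameters are illustrative.

def trapezoid(y, x):
    dx = x[1] - x[0]                 # uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def P_nc(t, tau):                    # nested cosine envelope
    return 0.5 * (np.cos(np.pi * np.cos(np.pi * t / tau)) + 1.0)

def dP_nc(t, tau):                   # its analytic time derivative
    return (np.pi**2 / (2 * tau)) * np.sin(np.pi * np.cos(np.pi * t / tau)) \
        * np.sin(np.pi * t / tau)

tau = 20.0                           # pulse duration (illustrative units)
Delta_D = 2 * np.pi / tau            # detuning at which the spectrum is filtered
t = np.linspace(0.0, tau, 20001)

plain = trapezoid(P_nc(t, tau) * np.exp(-1j * Delta_D * t), t)
drag = trapezoid((P_nc(t, tau) + 1j * dP_nc(t, tau) / Delta_D)
                 * np.exp(-1j * Delta_D * t), t)
print(abs(plain), abs(drag))         # DRAG nulls the component at Delta_D
```

The cancellation follows from integration by parts, using the fact that the nested cosine envelope vanishes at both pulse edges.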
The DRAG pulse has the generic form \begin{subequations} \begin{align} &\Omega_{cy}(t)=\Omega_c P_{\text{nc}}(t;\tau) \;, \label{eqn:Leak-Def of Omcx}\\ &\Omega_{cx}(t)=(1/ \Delta_D)\Omega_c\dot{P}_{\text{nc}}(t;\tau) \;. \label{eqn:Leak-Def of Omcy} \end{align} \end{subequations} Setting the DRAG coefficient to $\Delta_D=\Delta_{cd}$ suppresses the pulse spectrum at the resonator-drive detuning (see Appendix~\ref{App:ResRes}). This is confirmed in Fig.~\ref{fig:Leak-2DSweepWithDRAG}, where the background leakage is reduced by at least one order of magnitude at small $\Delta_{cd}$. Besides the resonator leakage at small $\Delta_{cd}$, there are leakage clusters as well as sweet intervals with suppressed leakage as a function of qubit anharmonicity. We observe that the qubit-resonator leakage amplitude is generally suppressed at smaller qubit anharmonicity. This can be understood by noting that the nonlinear interactions connecting the underlying states grow with powers of $E_C$. For the chosen parameters, in particular, setting the detuning to be larger than 30 MHz and the anharmonicity smaller than -200 MHz keeps leakage below the desired threshold of $10^{-5}$. Figure~\ref{fig:Leak-QuResLeakage} shows the resonator and the qubit leakage for constant detuning $\Delta_{cd}/2\pi \approx -50$~MHz and varying resonator photon number. Both types of leakage are \textit{universally} increased at stronger drive. A decomposition of the overall leakage in terms of individual system states in Fig.~\ref{fig:Leak-MostLeakedStates} reveals that the dominant clusters can be associated with specific high-excitation qubit states and variant photon number. In particular, we observe considerable leakage to the 5th--9th excited states of the transmon qubit. Furthermore, clusters to a particular high-excitation qubit state can appear multiple times (twice for the 6th and 7th and three times for the 8th and 9th). \begin{table*}[t!]
\begin{tabular}{|c|c|c|c|} \hline States & Condition (Kerr) & Kerr estimate of $\alpha_a/2\pi$ (MHz) & Numerical estimate of $\alpha_a/2\pi$ (MHz)\\ \hline $\ket{5_a,n_c} \sim \ket{0_a,n_c+3}$ & $5\omega_a +10\alpha_a \approx 3\omega_c-10n_c\chi_{ac}$ & $-478.700+2.785 n_c$ & $-355.213+0.609 n_c$ \\ \hline $\ket{6_a,n_c} \sim \ket{0_a,n_c+4}$ & $6\omega_a+15\alpha_a \approx 4\omega_c-12n_c\chi_{ac}$ & $-197.067+2.228n_c$ & $-177.230+1.292 n_c$\\ \hline $\ket{6_a,n_c} \sim \ket{1_a,n_c+3}$ & $5\omega_a+15\alpha_a \approx 3\omega_c-2(5n_c-3)\chi_{ac}$ & $-320.247+1.857 n_c$ & $-272.136-0.981 n_c$\\ \hline $\ket{7_a,n_c} \sim \ket{0_a,n_c+4}$ & $7\omega_a+21\alpha_a\approx 4\omega_c-14n_c\chi_{ac}$ & $-385.524+1.857 n_c$ & $-272.892-0.423 n_c$\\ \hline $\ket{7_a,n_c} \sim \ket{1_a,n_c+4}$ & $6\omega_a+21\alpha_a \approx 4\omega_c - 4(3n_c-2)\chi_{ac}$ & $-141.823+1.591 n_c $ & $-129.007+0.868 n_c$\\ \hline $\ket{8_a,n_c} \sim \ket{0_a,n_c+5}$ & $8\omega_a+28\alpha_a \approx 5\omega_c-16 n_c \chi_{ac}$ & $-223.750+1.591 n_c$ & $-185.549+0.853 n_c$\\ \hline $\ket{8_a,n_c} \sim \ket{1_a,n_c+4}$ & $7\omega_a +28\alpha_a \approx 4\omega_c -2(7n_c-4)\chi_{ac}$ & $-289.939+1.393n_c$ & \makecell{$ -235.336+1.096 n_c$ \\ $\bullet -313.738 +0.708 n_c$}\\ \hline $\ket{9_a,n_c} \sim \ket{0_a,n_c+5}$ & $9\omega_a +36\alpha_a \approx 5\omega_c -18n_c\chi_{ac}$ & $-316.806+1.393n_c$ & \makecell{$-227.839+0.807 n_c$ \\ $\bullet -280.415 +1.549 n_c$}\\ \hline $\ket{9_a,n_c} \sim \ket{1_a,n_c+5}$ & $8 \omega_a +36\alpha_a \approx 5 \omega_c -2(8n_c-5)\chi_{ac}$ & $-174.801 + 1.238 n_c$ & $-147.268+1.549 n_c$\\ \hline \end{tabular} \caption{Examples of qubit-resonator frequency collisions between the high-excitation and computational subspaces observed in Figs.~\ref{fig:Leak-2DSweepWithDRAG}--\ref{fig:Leak-MostLeakedStates}.
The leftmost column shows the colliding quantum states (qubit $\otimes$ resonator), the second provides a collision condition based on the \textit{undriven} Kerr spectrum, and the third and the fourth provide experimental estimates for qubit anharmonicity assuming the same parameters as in Figs.~\ref{fig:Leak-QuResLeakage} and~\ref{fig:Leak-MostLeakedStates} and based on Kerr and exact ab-initio models, respectively. The Kerr model approximately captures the order by which these clusters happen, but the estimate is less valid especially for collisions involving higher qubit states and occurring at larger anharmonicity. In particular, the ab-initio analysis reveals the possibility of \textit{additional} collisions between the same states [shown with a bullet, see e.g. Fig.~\ref{fig:Leak-HighLowStateCollision}(b)].} \label{tab:Leak-ReadoutCollisions} \end{table*} Analytical modeling of leakage requires advanced time-dependent methods such as SWPT or Magnus expansion \cite{Magnus_Exponential_1954, Blanes_Magnus_2009, Blanes_Pedagogical_2010}. It is in principle possible to use time-dependent SWPT to compute leakage rates. However, generally, SWPT is more suitable for computing effective (resonant) rates, while leakage rates are more easily found via Magnus. For the RIP gate, the two methods are connected via \begin{align} \hat{U}_{I}(t,0) =\hat{U}_{\text{diag}}(t)\hat{U}_{I,\text{eff}}(t,0)\hat{U}^{\dag}_{\text{diag}}(0) \;, \label{eqn:Leak-UI=Ud*UIeff*Ud'} \end{align} where $\hat{U}_{I}(t,0)$ and $\hat{U}_{I,\text{eff}}(t,0)$ are the overall and the effective time-evolution operators, and $\hat{U}_{\text{diag}}(t)$ is the mapping similar to Eq.~(\ref{eqn:EffHam-U_diag decomp}). For a physical process to cause leakage, it \textit{must} be off-diagonal with respect to the computational eigenstates. Hence, in SWPT, the information about leakage is encoded indirectly through $\hat{U}_{\text{diag}}(t)$. 
In Magnus, however, we perform an expansion \textit{directly} on $\hat{U}_{I}(t,0)$ (see Appendix~\ref{App:LeakMech}). Regardless of the method, the contribution from each physical process appears as the Fourier transform of the underlying time-dependent interaction rate evaluated at the corresponding system transition frequency. Therefore, leakage can be characterized given the following information: (i) system transition frequencies, (ii) interaction matrix elements (connectivity between the eigenstates), and (iii) spectral content of the interaction (pulse shape). Once the pulse spectrum has a non-negligible overlap with a particular system transition frequency, we expect an increase in the leakage. Even though capturing the precise leakage amplitude is a difficult task, estimating where leakage occurs in the parameter space is feasible based on the frequency collisions in the system spectrum. Consequently, regions in system parameter space where a computational state becomes \textit{degenerate} with a high-excitation qubit state provide a bridge for leakage given that such states are connected directly or indirectly by the underlying interaction. Figure~\ref{fig:Leak-HighLowStateCollision} provides two examples of such collisions as a function of qubit anharmonicity. Firstly, state $\ket{6_a,0_c}$ collides with states $\ket{0_a,4_c}$ and $\ket{1_a,3_c}$ at $\alpha_a /2\pi\approx -177.230$ MHz and $\alpha_a /2\pi \approx -272.136$ MHz, respectively, explaining the \textit{two} observed leakage clusters in Fig.~\ref{fig:Leak-MostLeakedStates}(b). The clustering is due to additional collisions with increasing photon number, e.g. between $\ket{6_a,1_c}$ and $\ket{0_a,5_c}$ etc (see Table~\ref{tab:Leak-ReadoutCollisions}). 
Secondly, state $\ket{8_a,0_c}$ collides \textit{once} with state $\ket{0_a,5_c}$ at $\alpha_a/2\pi\approx -185.549$ MHz and \textit{twice} with state $\ket{1_a,4_c}$ at $\alpha_a /2\pi\approx -235.336$ MHz and $\alpha_a/2\pi\approx -313.738$ MHz, in agreement with the \textit{three} leakage clusters in Fig.~\ref{fig:Leak-MostLeakedStates}(b). Such repeated collisions between the same states \textit{cannot} be predicted by the Kerr model which is \textit{linear} in qubit anharmonicity. It is a signature that the sextic, and possibly higher order, expansion of the Josephson potential is relevant at larger anharmonicity as (assuming $g_a=0$ and $n_{ga}=0$) \begin{align} \begin{split} \hat{\mathcal{H}}_{qa}&=(8E_{Ca}E_{Ja})^{1/2}\hat{a}^{\dag}\hat{a}-\frac{E_{Ca}}{12} \left(\hat{a}+\hat{a}^{\dag}\right)^{4}\\ &+\frac{(2E_{Ca}^3/E_{Ja})^{1/2}}{360} \left(\hat{a}+\hat{a}^{\dag}\right)^{6}+O\left(\frac{E_{Ca}^2}{E_{Ja}}\right) \;. \end{split} \label{eqn:Leak-WhyStatesBend} \end{align} Moreover, the observation that the high- and low-excitation qubit states cross, as opposed to undergoing an anti-crossing, reveals that such states are not coupled via nonlinearity but rather through an \textit{unwanted} projection of the RIP drive over the corresponding transition. A summary of the observed qubit-resonator collisions as well as a comparison between ab-initio and Kerr predictions is given in Table~\ref{tab:Leak-ReadoutCollisions}. \begin{figure}[t!] \centering \includegraphics[scale=0.196]{2DGateChargeAnharm_OvLeakWithDrag_Freq5p14GHzChi5p57MHz.png} \includegraphics[scale=0.205]{MaxGateCharge_OvLeakWithDrag_Freq5p14GHzChi5p57MHz.png} \caption{Qubit-resonator case --- Dependence of overall leakage with DRAG on the gate charge for the same parameters as in Fig.~\ref{fig:Leak-QuResLeakage}, resonator-drive detuning set to -50 MHz, and maximum photon number set to 16. (a) 2D sweep of overall leakage as a function of gate charge and anharmonicity. 
The result is \textit{approximately} periodic and symmetric under change of sign for the gate charge, hence the approximately unique interval of [0,0.5] is shown. The white dashed line shows the value of gate charge at 0.37 used in Figs.~\ref{fig:Leak-2DSweepWithDRAG}--\ref{fig:Leak-HighLowStateCollision}. (b) Maximum overall leakage across the gate charge, which accounts for the worst case scenario at each anharmonicity value.} \label{fig:Leak-2DSweepDepOnGateCharge} \end{figure} Given the correspondence between qubit-resonator leakage clusters and frequency collisions involving \textit{high-excitation} qubit states, it is also crucial to quantify and optimize the dependence of leakage on gate charge for two main reasons. Firstly, higher-excitation eigenenergies of the transmon qubit depend more strongly on the gate charge \cite{Koch_Charge_2007}, causing non-negligible shifts of the leakage clusters in the parameter space. Secondly, and more importantly, the gate charge is neither controllable nor predictable in the experiment. A more realistic measure is then the maximum leakage over one period of gate charge for each parameter set. Figure~\ref{fig:Leak-2DSweepDepOnGateCharge} shows the dependence of qubit-resonator overall leakage on the qubit anharmonicity and the gate charge, over the approximately unique interval of $n_{ga}\in [0,0.5]$ \footnote{Spectrum of an isolated transmon, based on $\hat{\mathcal{H}}_a=4E_{Ca}(\hat{n}_a-n_{ga})^2-E_{Ja}\cos(\hat{\phi}_a)$, is periodic with respect to the gate charge $n_{ga}$ \cite{Koch_Charge_2007}. However, a charge-charge coupling of the form $g_{a}\hat{n}_a\hat{n}_c$ to a resonator mode breaks such a translational symmetry. Our numerical simulations show that for experimentally relevant values of qubit-resonator coupling (full dispersive shift of the order of -5 MHz), the deviation in the spectrum and also the corresponding deviation in the qubit-resonator leakage is small under $n_{ga} \rightarrow n_{ga}+1$. 
Moreover, the result is approximately symmetric with respect to $n_{ga} \rightarrow -n_{ga}$. Therefore, in Fig.~\ref{fig:Leak-2DSweepDepOnGateCharge}, we have presented the dependence of leakage on the approximately unique interval of $n_{ga}\in [0,0.5]$}. We observe that smaller anharmonicity, approximately below -200 MHz, leads to significantly less leakage cluster density, less leakage amplitude and less dependence of leakage on the gate charge. The results so far were based on fixed $\omega_a/2\pi \approx 5140$, $\omega_c/2\pi \approx 6971$ and $2\chi_{ac}/2\pi \approx -5.57$ MHz. According to Table~\ref{tab:Leak-ReadoutCollisions}, the collision conditions also depend strongly on these parameters. In particular, if the qubit frequency is increased, there is \textit{less} chance of qubit-resonator collisions in the anharmonicity range relevant for experiment. For instance, consider the $\ket{6_a,0_c} \sim \ket{1_a,3_c}$ collision, with approximate Kerr condition $6\omega_a+15\alpha_a \approx \omega_a+3\omega_c+6\chi_{ac}$. Setting $\omega_a/2\pi \approx 6000$ MHz, keeping other parameters the same, pushes the collision to $\alpha_a/2\pi \approx -606.914$ MHz, away from the transmon regime. This behavior holds for qubit-resonator collisions in general. Moreover, the dispersive shift $2\chi_{ac}$ determines the number-splitting span in each cluster. Hence there is a trade-off between large $2\chi_{ac}$, desired for a large dynamic $ZZ$, and the width of collision-free anharmonicity intervals. Figure~\ref{fig:Leak-2DSweepDepOnFreqAndChi} compares the qubit-resonator leakage of three distinct qubit frequencies $\omega_a /2\pi\approx 4750$, $5140$, $6000$ MHz and two dispersive shifts $2\chi_{ac} /2\pi \approx -5.57$, $-2.79$ MHz, and confirms the above-mentioned trends.
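The Kerr collision conditions of Table~\ref{tab:Leak-ReadoutCollisions} are simple enough to evaluate directly; a sketch that solves two of them for $\alpha_a$ reproduces both the Kerr-estimate column of the table and the frequency shift of the $\ket{6_a,0_c} \sim \ket{1_a,3_c}$ collision discussed above:

```python
# Kerr-model collision conditions (Table ReadoutCollisions) solved for the
# qubit anharmonicity alpha_a; all frequencies are omega/2pi values in MHz.
w_c, chi_ac = 6971.0, -5.57 / 2

def alpha_6a_0nc4(w_a, n_c):
    # |6_a, n_c> ~ |0_a, n_c+4>:  6 w_a + 15 alpha_a = 4 w_c - 12 n_c chi_ac
    return (4 * w_c - 6 * w_a - 12 * n_c * chi_ac) / 15

def alpha_6a_1nc3(w_a, n_c):
    # |6_a, n_c> ~ |1_a, n_c+3>:  5 w_a + 15 alpha_a = 3 w_c - 2(5 n_c - 3) chi_ac
    return (3 * w_c - 5 * w_a - 2 * (5 * n_c - 3) * chi_ac) / 15

print(alpha_6a_0nc4(5140.0, 0))   # ~ -197.07 MHz, as in the table
print(alpha_6a_1nc3(5140.0, 0))   # ~ -320.25 MHz, as in the table
print(alpha_6a_1nc3(6000.0, 0))   # ~ -606.91 MHz: pushed out of the transmon regime
```

The same estimates drift from the ab-initio values at larger anharmonicity (rightmost column of the table), where the Kerr model's linear treatment of the anharmonicity breaks down.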
Importantly, based on Figs.~\ref{fig:Leak-2DSweepDepOnFreqAndChi}(e)--\ref{fig:Leak-2DSweepDepOnFreqAndChi}(f), working with the 6000 MHz frequency qubit removes most of the leakage clusters from the considered anharmonicity range and mitigates the amplitude of the remaining ones. \begin{figure}[t!] \centering \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq4p75GHzChi5p57MHz.png} \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq4p75GHzChi2p79MHz.png}\\ \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq5p14GHzChi5p57MHzV2.png} \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq5p14GHzChi2p79MHz.png}\\ \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq6p00GHzChi5p57MHz.png} \includegraphics[scale=0.144]{2DSweepAnyLeakNoDragFreq6p00GHzChi2p79MHz.png} \caption{Qubit-resonator case --- Overall leakage with DRAG for three different qubit frequencies $\omega_a /2\pi \approx 4750$ [(a), (b)], $5140$ [(c), (d)], $6000$ [(e), (f)] MHz, and two distinct dispersive shifts $2\chi_{ac} /2\pi \approx -5.57$ MHz (left column) and $-2.79$ MHz (right column), with maximum photon number set to 16. Other parameters are the same as Fig.~\ref{fig:Leak-2DSweepWithDRAG}.} \label{fig:Leak-2DSweepDepOnFreqAndChi} \end{figure} \subsection{Three-body leakage} \label{SubSec:RIPLikeLeak} The main physics of qubit-resonator collisions and leakage can in principle be extended to understand more complex three-body collisions. Compared to the qubit-resonator case, which results in leakage to relatively high-excitation qubit states (5th--9th), having both qubits participating in exchange interactions allows for leakage to low-excitation qubit states as well. For instance, a single excitation of the low-frequency qubit and one resonator photon can provide the energy to leak to the 2nd excited state of the high-frequency qubit, i.e. a collision of the form $\ket{1_a,0_b,1_c} \sim \ket{0_a,2_b,0_c}$. 
For example, this collision can be satisfied if $\omega_a /2\pi \approx 4750$, $\omega_b /2\pi \approx 6000$, $\omega_c /2\pi \approx 6971$ and $\alpha_b /2\pi \approx -279$ MHz. We refer to this frequency configuration as the high-low RIP pair. \begin{table*}[t!] \begin{tabular}{|c|c|c|} \hline States & Condition (Kerr) & Instance\\ \hline\hline $\ket{3_a,0_b,n_c} \sim \ket{0_a,1_b,n_c+1}$ & $3\omega_a+3\alpha_a+6n_c\chi_{ac} \approx \omega_b+\omega_c+2(n_c+1)\chi_{bc}$ & high-low\\ \hline $\ket{4_a,0_b,n_c} \sim \ket{0_a,1_b,n_c+2}$ & $4\omega_a+6\alpha_a+8n_c\chi_{ac} \approx \omega_b+2\omega_c+2(n_c+2)\chi_{bc}$ & high-high\\ \hline $\ket{5_a,0_b,n_c} \sim \ket{0_a,1_b,n_c+2}$ & $5\omega_a +10\alpha_a +10 n_c \chi_{ac}\approx \omega_b + 2 \omega_c +2(n_c+2)\chi_{bc}$ & high-low\\ \hline $\ket{5_a,0_b,n_c} \sim \ket{1_a,1_b,n_c+1}$ & $4 \omega_a +10\alpha_a +10 n_c \chi_{ac} \approx \omega_b + \omega_c +2(n_c+1)(\chi_{ac}+\chi_{bc})$ & high-low\\ \hline $\ket{6_a,0_b,n_c} \sim \ket{1_a,1_b,n_c+2}$ & $5 \omega_a +15\alpha_a +12 n_c \chi_{ac} \approx \omega_b + 2 \omega_c +2(n_c+2)(\chi_{ac}+\chi_{bc})$ & high-low\\ \hline $\ket{7_a,0_b,n_c} \sim \ket{1_a,1_b,n_c+2}$ & $6\omega_a+21\alpha_{a}+14n_c\chi_{ac}\approx \omega_b + 2\omega_c +2(n_c+2)(\chi_{ac}+\chi_{bc})$ & high-low\\ \hline $\ket{7_a,0_b,n_c} \sim \ket{1_a,1_b,n_c+3}$ & $6\omega_a+21\alpha_{a}+14n_c\chi_{ac}\approx \omega_b + 3\omega_c +2(n_c+3)(\chi_{ac}+\chi_{bc})$ & high-high\\ \hline $\ket{8_a,0_b,n_c} \sim \ket{1_a,1_b,n_c+2}$ & $7\omega_a+28\alpha_{a}+16n_c\chi_{ac}\approx \omega_b + 2 \omega_c +2(n_c+2)(\chi_{ac}+\chi_{bc})$ & high-low\\ \hline \hline $\ket{0_a,2_b,n_c} \sim \ket{1_a,0_b,n_c+1}$ & $2\omega_b+\alpha_b+4n_c\chi_{bc} \approx \omega_a+\omega_c+2(n_c+1)\chi_{ac}$ & high-low\\ \hline $\ket{0_a,5_b,n_c} \sim \ket{1_a,1_b,n_c+2}$ & $4\omega_b+10\alpha_b+10n_c\chi_{bc}\approx \omega_a+2\omega_c+2(n_c+2)(\chi_{ac}+\chi_{bc})$ & high-high\\ \hline $\ket{0_a,7_b,n_c} \sim 
\ket{1_a,1_b,n_c+3}$ & $6\omega_b+21\alpha_b+14n_c\chi_{bc}\approx \omega_a+3\omega_c+2(n_c+3)(\chi_{ac}+\chi_{bc})$ & high-high\\ \hline\hline $\ket{2_a,3_b,n_c} \sim \ket{1_a,0_b,n_c+3}$ & $\omega_a+\alpha_a+3\omega_b+3\alpha_b + 4n_c\chi_{ac}+6n_c\chi_{bc}\approx 3\omega_c+2(n_c+3)\chi_{ac}$ & \text{high-low}\\ \hline $\ket{2_a,5_b,n_c} \sim \ket{1_a,0_b,n_c+4}$ & $\omega_a+\alpha_a+5\omega_b+10\alpha_b + 4n_c\chi_{ac}+10n_c\chi_{bc}\approx 4\omega_c+2(n_c+4)\chi_{ac}$ & \text{high-low}\\ \hline $\ket{3_a,1_b,n_c} \sim \ket{1_a,0_b,n_c+2}$ & $2\omega_a+3\alpha_a+\omega_b+ 6n_c\chi_{ac}+2n_c\chi_{bc}\approx 2\omega_c+2(n_c+2)\chi_{ac}$ & \text{high-low}\\ \hline $\ket{3_a,2_b,n_c} \sim \ket{1_a,1_b,n_c+2}$ & $2\omega_a+3\alpha_a+\omega_b+\alpha_b + 6n_c\chi_{ac}+4n_c\chi_{bc}\approx 2\omega_c+2(n_c+2)(\chi_{ac}+\chi_{bc})$ & \text{high-low}\\ \hline $\ket{5_a,2_b,n_c} \sim \ket{1_a,1_b,n_c+3}$ & $4\omega_a+10\alpha_a+\omega_b+\alpha_b + 10n_c\chi_{ac}+4n_c\chi_{bc}\approx 3\omega_c+2(n_c+3)(\chi_{ac}+\chi_{bc})$ & \text{high-low}\\ \hline \end{tabular} \caption{Examples of dominant three-body collisions observed in Figs.~\ref{fig:RIPLikeLeak-HighLowPair}--\ref{fig:RIPLikeLeak-HighHighPair}. The first column describes the colliding states, the second provides an \textit{approximate} collision condition based on the Kerr model, and the last column shows in which frequency configuration the collision was observed. The list here is derived from Figs.~\ref{fig:RIPLikeLeak-HighLowPair}--\ref{fig:RIPLikeLeak-HighHighPair} with a leakage cut-off of $10^{-5}$ for each collision assuming 4 maximum resonator photons. In principle, however, there are numerous possibilities.} \label{tab:Leak-3bdCollisions} \end{table*} \begin{figure}[h!] 
\centering \includegraphics[scale=0.213]{HighLowPair_AvgOvLeak_DepOnDet.png}\\ \includegraphics[scale=0.213]{HighLowPair_OvLeak_DecompToInitState.png}\\ \includegraphics[scale=0.1625]{HighLowPair_TransitionsFrom00.png} \includegraphics[scale=0.1625]{HighLowPair_TransitionsFrom01.png}\\ \includegraphics[scale=0.1625]{HighLowPair_TransitionsFrom10.png} \includegraphics[scale=0.1625]{HighLowPair_TransitionsFrom11.png} \caption{Two-qubit simulation of leakage for the high-low pair with $\omega_a/2\pi\approx 4750$, $\omega_b/2\pi \approx 6000$, $\omega_c/2\pi\approx 6971$, $2\chi_{ac} /2\pi\approx 2\chi_{bc}/2\pi\approx -5.57$ MHz using a $200$ ns nested cosine pulse equivalent to 4 photons at the pulse maximum. (a) Overall leakage with DRAG, averaged over the four initial two-qubit states, $\ket{0_a,0_b,0_c}$, $\ket{0_a,1_b,0_c}$, $\ket{1_a,0_b,0_c}$ and $\ket{1_a,1_b,0_c}$, as a function of $\alpha_a\approx \alpha_b$ and $\Delta_{cd}$. (b) Overall leakage for each initial computational state at fixed $\Delta_{cd}/2\pi=-50$ MHz. (c)--(f) Individual leakage transition probabilities starting from the four computational states, respectively. Compared to the qubit-resonator simulations of Sec.~\ref{SubSec:READLikeLeak}, 10 energy eigenstates and 22 resonator states are kept.} \label{fig:RIPLikeLeak-HighLowPair} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.213]{HighHighPair_AvgOvLeak_DepOnDet.png}\\ \includegraphics[scale=0.213]{HighHighPair_OvLeak_DecompToInitState.png}\\ \includegraphics[scale=0.160]{HighHighPair_TransitionsFrom00.png} \includegraphics[scale=0.160]{HighHighPair_TransitionsFrom01.png}\\ \includegraphics[scale=0.160]{HighHighPair_TransitionsFrom10.png} \includegraphics[scale=0.160]{HighHighPair_TransitionsFrom11.png} \caption{Two-qubit simulation of leakage for the high-high pair with $\omega_a/2\pi\approx 5750$, $\omega_b/2\pi \approx 6250$ MHz and other parameters the same as Fig.~\ref{fig:RIPLikeLeak-HighLowPair}. 
Compared to the high-low configuration, we observe a significant suppression of leakage, especially for weak anharmonicity, $|\alpha_a| /2\pi \approx |\alpha_b| /2\pi \lessapprox 200$ MHz.} \label{fig:RIPLikeLeak-HighHighPair} \end{figure} We then compare the high-low configuration given above to an improved RIP pair. Based on the preceding discussion of qubit-resonator leakage (Sec.~\ref{SubSec:READLikeLeak} and Fig.~\ref{fig:Leak-2DSweepDepOnFreqAndChi}), both qubit frequencies should be set as close to the resonator frequency as is allowed by other constraints on the parameters. Since one of the advantages of the RIP gate is a wider range of allowed qubit frequencies, we choose to keep the qubits outside the straddling regime. When qubits are designed to operate in the straddling regime, the design, subject to sample-to-sample variation in fabrication, is more susceptible to qubit-qubit collisions of the form $\ket{1_a,1_b,n_c}\sim \ket{2_a,0_b,n_c}$ and $\ket{1_a,1_b,n_c}\sim \ket{0_a,2_b,n_c}$, with approximate collision conditions $\Delta_{ab}\approx \alpha_b$ and $\Delta_{ab}\approx -\alpha_a$. Moreover, since our model does not include a cancellation coupler, small qubit-qubit detuning leads to a large static $ZZ$, reducing the on/off ratio of the RIP gate. Therefore, in contrast to the high-low configuration given above, we pick a second ``high-high'' configuration with $\omega_a /2\pi \approx 5750$, $\omega_b /2\pi \approx 6250$, and $\omega_c /2\pi \approx 6971$ MHz, corresponding to a $500$ MHz qubit-qubit detuning. We find that the high-high pair reduces three-body leakage compared to the high-low configuration while also having low qubit-resonator leakage. For the two-qubit simulation, the ab-initio system parameters were selected to produce the qubit and resonator frequency values of the high-high and high-low scenarios while fixing the qubit-resonator dispersive shifts at $2\chi_{ac} /2\pi\approx 2\chi_{bc} /2\pi \approx -5.57$~MHz. 
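The straddling-regime criterion and the associated qubit-qubit collision conditions reduce to simple arithmetic. A minimal check for the high-high pair (frequencies in MHz; the 50 MHz proximity margin is an illustrative choice of ours, not a value from the text):

```python
# Approximate qubit-qubit collision conditions near the straddling regime:
#   |1a,1b,nc> ~ |2a,0b,nc>  when  Delta_ab ≈ alpha_b
#   |1a,1b,nc> ~ |0a,2b,nc>  when  Delta_ab ≈ -alpha_a
w_a, w_b = 5750.0, 6250.0    # high-high pair (MHz)
alpha_a = alpha_b = -200.0   # target anharmonicity (MHz)
delta_ab = w_a - w_b         # -500 MHz qubit-qubit detuning

MARGIN = 50.0  # illustrative proximity margin (MHz), an assumption
near_collision_20 = abs(delta_ab - alpha_b) < MARGIN   # |1a,1b> ~ |2a,0b>
near_collision_02 = abs(delta_ab + alpha_a) < MARGIN   # |1a,1b> ~ |0a,2b>
# The straddling regime would require |Delta_ab| < |alpha|; here 500 > 200.
outside_straddling = abs(delta_ab) > abs(alpha_a)
```

For the chosen 500 MHz detuning both collision conditions are far from being satisfied, consistent with keeping the qubits outside the straddling regime.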
Such parameters were found for a range of qubit anharmonicities from -500 to -100~MHz, with the anharmonicity of the two qubits kept approximately equal, $\alpha_a \approx\alpha_b$, for each case. The pulse parameters were set to produce a maximum resonator photon number of 4 over multiple traces of constant resonator-drive detuning. Compared to the single-qubit simulations of Sec.~\ref{SubSec:READLikeLeak}, these simulations used 22 resonator states and 10 energy eigenstates for each qubit. Figures~\ref{fig:RIPLikeLeak-HighLowPair} and~\ref{fig:RIPLikeLeak-HighHighPair} provide the leakage analysis for the high-low and high-high RIP pairs, respectively. We note that in such two-qubit simulations all leakage categories, as described in Fig.~\ref{fig:Leak-CollisionUniverse}, can be driven in principle. Therefore, to better understand each case, the observed leakage peaks in panel (a) are decomposed into separate transitions in terms of initial computational states in panel (b) and final leaked states in panels (c)--(f). We find that the high-high allocation produces regions of overall leakage below $10^{-6}$, especially at weak qubit anharmonicity, $|\alpha|/2\pi \lesssim 200$ MHz. However, for the high-low case, the clusters are spread over the considered anharmonicity range and no discernible suppression is observed at weaker anharmonicity. This is in part due to the strong qubit-resonator collisions for the low-frequency qubit, as also observed in Fig.~\ref{fig:Leak-2DSweepDepOnFreqAndChi}(a), but decomposition of leakage into individual transitions in panels (c)--(f) also reveals a series of three-body leakage transitions in which both qubits participate in the excitation exchange. In particular, the high-low case suffers from both a higher three-body leakage cluster density and stronger peaks compared to the high-high case. Table~\ref{tab:Leak-3bdCollisions} provides examples of dominant three-body collisions corresponding to the two cases. 
For example, below -250~MHz anharmonicity, the dominant leakage transitions for the high-low case satisfy $\ket{0_a,2_b,n_c} \sim \ket{1_a,0_b,n_c+1}$. \begin{figure}[t!] \centering \includegraphics[scale=0.215]{1QSimComparedTo1QTransIn2QSim_HighLowPair.png}\\ \includegraphics[scale=0.215]{2QSimComperdTo2QTransPlus1QubitTrans.png} \caption{Comparison between single- and two-qubit simulations for the high-low pair with resonator-drive detuning set to -30 MHz and 4 maximum photons --- (a) Individual qubit-resonator leakage for each qubit compared to the qubit-resonator leakage obtained from the two-qubit simulation. (b) Overall leakage obtained from the two-qubit simulation compared to the sum of qubit-resonator leakage obtained from the single-qubit simulation and three-body leakage obtained from the two-qubit simulation.} \label{fig:Leak-JustifyingLeakCategories} \end{figure} To validate our characterization of the RIP gate leakage in terms of distinct categories, we show that qubit-resonator leakage computed from the single-qubit simulation in Sec.~\ref{SubSec:READLikeLeak} agrees well with the two-qubit simulation here. In particular, for the high-low pair, Fig.~\ref{fig:Leak-JustifyingLeakCategories}(a) demonstrates a good agreement between the qubit-resonator leakage from \textit{individual} single-qubit simulations and the qubit-resonator leakage from the two-qubit simulation. Furthermore, Fig.~\ref{fig:Leak-JustifyingLeakCategories}(b) shows that adding the three-body leakage from the two-qubit simulation to the qubit-resonator leakage from separate single-qubit simulations recovers the overall leakage in the two-qubit simulation. For this analysis, the two-qubit simulation results were split into qubit-resonator and three-body leakage by looking at whether the final leaked state involved a transition of one or both of the qubits. 
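The splitting of the simulated leakage into categories reduces to checking which subsystems change occupation between the initial and final states. A minimal sketch of this bookkeeping (function name and labels are ours):

```python
# Classify a leakage transition |n_a, n_b, n_c> -> |m_a, m_b, m_c> by which
# subsystems change occupation: both qubits -> three-body; exactly one
# qubit -> qubit-resonator; resonator only -> background photon leakage.
def leakage_category(init, final):
    a_changed = init[0] != final[0]
    b_changed = init[1] != final[1]
    if a_changed and b_changed:
        return "three-body"
    if a_changed or b_changed:
        return "qubit-resonator"
    return "background"

# Example: the three-body collision |1_a,0_b,1_c> ~ |0_a,2_b,0_c>
category = leakage_category((1, 0, 1), (0, 2, 0))  # "three-body"
```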
These observations confirm that, when choosing system parameters to avoid RIP gate leakage, the different leakage categories can be considered independently, since the presence of the 2nd qubit does not noticeably change the nature of the single-qubit transitions. In summary, our analysis of leakage has important implications in terms of optimal parameter allocation for the RIP gate. Although the background leakage can be controlled dynamically, i.e. through modifying $\omega_d$, $\tau$ and DRAG, the qubit-resonator and three-body leakage clusters result from ill-chosen static system parameters. It can be shown that average gate fidelity is limited by average leakage \cite{Wood_Quantification_2018} (see Appendix~\ref{App:FidITOLeak}). Therefore, improving fidelity requires suppressing leakage. \section{Takeaways for control and design} \label{Sec:Tkwys} In this section, we summarize our findings and provide a set of instructions for optimal parameter allocation. The discussion is based on the ab-initio model in Eqs.~(\ref{eqn:Model-Starting Hs})--(\ref{eqn:Model-Starting Hd}), for which there are 12 independent system and pulse parameters: $E_{Ca}$, $E_{Cb}$, $E_{Ja}$, $E_{Jb}$, $n_{ga}$, $n_{gb}$, $g_{a}$, $g_{b}$, $\omega_c$, $\omega_d$, $\Omega_c$ and $\tau$. The nested cosine pulse shape in Eq.~(\ref{eqn:Leak-Def of NCPulse}) is fully characterized by the gate time $\tau$, although in principle more pulse parameters could be introduced. In what follows, we describe distinct interdependencies between certain subsets of parameters which need to be taken into account for RIP device design. In principle, these subsets are \textit{not} completely independent; however, dissecting the problem into simpler few-parameter dependencies facilitates the search. We sort the following conditions from the most to the least trivial: \begin{itemize} \item[(i)] \textit{Resonator frequency $\omega_c$} --- The measurement equipment sets the choice of $\omega_c$, typically at around 7000 MHz. 
\item[(ii)] \textit{Drive frequency $\omega_d$ and gate time $\tau$} --- Driving above the resonator, i.e. $\omega_d>\omega_c$, leads to less background leakage, while for $\omega_d<\omega_c$ there is excessive leakage due to the other $2\chi$-shifted resonator poles [Fig.~\ref{fig:EffHam-KerrModelFreqSchematic} and Eq.~(\ref{eqn:EffHam-Def of hat(Delta)_cd})]. The pulse-resonator spectral overlap and the resulting background leakage are approximately determined by the product $\Delta_{cd} \tau$ (see Appendix~\ref{App:ResRes}). The leakage clusters tend to fan out at larger detuning (Fig.~\ref{fig:Leak-2DSweepWithDRAG}), hence $\Delta_{cd}$ should be as small as allowed by the background leakage threshold. Filtering the pulse at $\Delta_{cd}$ in general, and applying DRAG in particular [Eqs.~(\ref{eqn:Leak-Def of Omcx})--(\ref{eqn:Leak-Def of Omcy})], are beneficial in reducing $\Delta_{cd}$ for fixed leakage. Based on the simulation with DRAG, $|\Delta_{cd}/2\pi|\gtrapprox 30$ MHz is a reasonable choice for $\tau\approx 200$ ns. \item[(iii)] \textit{Charging energy $E_C$ and Josephson energy $E_J$} --- Qubit frequency and anharmonicity are related to $E_C$ and $E_J$, up to $O(E_C^2/E_J)$ corrections, as \cite{Koch_Charge_2007, Didier_Analytical_2018} (assuming $g=0$ and $n_g=0$) \begin{subequations} \begin{align} &\quad\quad \omega \approx (8E_JE_C)^{1/2}-E_C-\frac{1}{4}\left(\frac{2E_C}{E_J}\right)^{1/2} E_C \;, \label{eqn:Tkwys-FreqITOEcEJ}\\ &\quad\quad \alpha \approx -E_C -\frac{9}{16}\left(\frac{2E_C}{E_J}\right)^{1/2} E_C \;. \label{eqn:Tkwys-AnhITOEcEJ} \end{align} \end{subequations} A few observations are in order based in part on our simulation results. First, operating in the transmon limit reduces the set of possible collisions by reducing the dependence of the spectrum on the gate offset charge \cite{Koch_Charge_2007}. 
Second, both qubit-resonator and three-body leakage \textit{amplitudes} are universally reduced at smaller qubit anharmonicity [Figs.~\ref{fig:Leak-2DSweepWithDRAG} and~\ref{fig:RIPLikeLeak-HighHighPair}]. Third, smaller anharmonicity also leads to less eigenenergy crowding and fewer collisions (compare the collision density at -400 to -200 MHz in Fig.~\ref{fig:Leak-2DSweepWithDRAG}). Fourth, a larger qubit frequency pushes the qubit-resonator collisions to occur at larger anharmonicity values outside the range relevant to experiment (fewer collisions for $\omega /2\pi \approx 6000$~MHz compared to $\omega /2\pi \approx 5140$~MHz in Fig.~\ref{fig:Leak-2DSweepDepOnFreqAndChi}). All in all, it is beneficial to work with very weakly anharmonic qubits, with anharmonicity of the order of -200 MHz. With this choice, $\omega /2\pi \approx 5140$ and $6000$ MHz correspond to $E_J/E_C \approx 103.29$ and $136.71$, respectively. Lastly, we note that smaller anharmonicity, i.e. going from the -340 MHz common in CR architectures to -200 MHz, can in principle enhance the $\ket{1_a}\rightarrow \ket{2_a}$ leakage during single-qubit gate operations. However, this is \textit{not} a limiting factor, since single-qubit leakage can be mitigated via a combination of DRAG \cite{Motzoi_Simple_2009, Gambetta_Analytic_2011} and a \textit{slightly} longer single-qubit gate time (approximately 35 ns instead of 20 ns). \item[(iv)] \textit{Qubit-resonator coupling $g$} --- The effective dispersive coupling is approximately given by \cite{Koch_Charge_2007} \begin{align} \chi_{jc} \approx \frac{\alpha_j}{\bar{\Delta}_{jc}+\alpha_j}\frac{g_j^2}{\bar{\Delta}_{jc}} \;, \label{eqn:Tkwys-Expr for chi} \end{align} for $j=a,b$, and the dynamic $ZZ$ behaves as $\omega_{zz}^{(2)}\propto \chi_{ac}\chi_{bc}$ [Eq.~(\ref{eqn:EffHam-w_zz^(2)})]. 
Working with very weakly anharmonic transmons suppresses $\chi_{jc}$, unless we compensate by keeping $g_j^2 \alpha_j$ constant or by reducing $\bar{\Delta}_{jc}$. There are, however, a few trade-offs associated with large $g_j$. First, the number splitting \cite{Gambetta_Qubit-photon_2006} of leakage clusters is proportional to $\chi_{jc}$ (Tables~\ref{tab:Leak-ReadoutCollisions}--\ref{tab:Leak-3bdCollisions}); hence strong coupling leaves behind a narrower collision-free anharmonicity range (Fig.~\ref{fig:Leak-2DSweepDepOnFreqAndChi}). Second, generally speaking, large $g_j$ leads to cross-talk between the pair of qubits in the gate as well as between these qubits and other spectator qubits (beyond the scope of this paper). The static $ZZ$, for instance, grows as $g_a^2g_b^2$ [Eqs.~(\ref{eqn:EffHam-Def of J})--(\ref{eqn:EffHam-w_zz^(0)})], which can be suppressed using multi-path interference couplers \cite{Mundada_Suppression_2019, Kandala_Demonstration_2020, Kumph_Novel_APS2021}. Third, although beyond the scope of our analysis, the Purcell rate \cite{Purcell_Resonance_1946, Purcell_Spontaneous_1995, Houck_Controlling_2008, Malekakhlagh_Cutoff-Free_2017} and the single-qubit measurement-induced dephasing \cite{Blais_Cavity_2004, Gambetta_Qubit-photon_2006, Puri_High-Fidelity_2016} are also enhanced at strong coupling, approximately as \begin{subequations} \begin{align} &\gamma_{Pj} \approx \left(\frac{g_j}{\bar{\Delta}_{jc}}\right)^2 \kappa_c \;, \label{eqn:Tkwys-Expr for gamma_P} \\ &\gamma_{\phi j}(t)\approx \frac{2\chi_{jc}^2}{\Delta_{cd}^2+\chi_{jc}^2+(\kappa_c/2)^2}|\eta_c(t)|^2\kappa_c \;. \label{eqn:Tkwys-Expr for gamma_phi} \end{align} \end{subequations} Reference~\cite{Puri_High-Fidelity_2016} demonstrated the suppression of $\gamma_{\phi j}(t)$ via mode-squeezing. 
\item[(v)] \textit{Drive amplitude $\Omega_c$} --- The drive amplitude sets the resonator photon number $|\eta_c(t)|^2$ [Eq.~(\ref{eqn:EffHam-Cond for eta_c(t)}) and Appendix~\ref{App:ResRes}], and the dynamic $ZZ$ rate is proportional to the photon number as $\omega_{zz}^{(2)}(t)\propto |\eta_c(t)|^2$ [Eq.~(\ref{eqn:EffHam-w_zz^(2)})]. On the other hand, most leakage clusters grow \textit{super-linearly} with photon number. Therefore, the maximum drive amplitude is \textit{limited} by the leakage threshold. Moreover, to calibrate a controlled-phase gate, there is also an interplay with items (ii) and (iv) due to the fixed rotation angle given by $\int_{0}^{\tau}dt'\omega_{zz}^{(2)}(t') = \theta_{zz}$. \end{itemize} Having $2\chi_{ac}/2\pi \approx 2\chi_{bc}/2\pi\approx -5.57$ MHz at $\alpha_a/2\pi\approx\alpha_b/2\pi \approx -200$ MHz for the high-high pair requires $g_a/2\pi \approx 143.69$ and $g_b/2\pi\approx 92.13$ MHz. Using the nested cosine pulse, and choosing $\Delta_{cd}/2\pi=-30$ MHz with 10 maximum photons, i.e. $[\Omega_c/(2\Delta_{cd})]^2\approx 10$, we can tune a CNOT-equivalent operation ($\theta_{zz}=\pi/2$) with a total gate time of $\tau\approx 155$ ns. The corresponding Purcell and \textit{pulse-averaged} measurement-induced dephasing rates read $\gamma_{Pa}/2\pi \approx 0.096$, $\gamma_{Pb}/2\pi \approx 0.114$, $\gamma_{\phi a}/2\pi \approx \gamma_{\phi b}/2\pi \approx 0.303$ kHz, where $\gamma_{\phi j}\equiv (1/\tau)\int_{0}^{\tau}dt' \gamma_{\phi j}(t')$. The coherence limit on the average two-qubit error due to each mechanism is estimated as $\bar{E}_{\gamma_{\phi}}\approx \sum_{j=a,b}(2/5)[(\gamma_{\phi j}/2\pi)\tau]\approx 3.74\times 10^{-5}$ and $\bar{E}_{\gamma_{P}} \approx \sum_{j=a,b}(1/5)[(\gamma_{Pj}/2\pi)\tau]\approx 6.51\times 10^{-6}$. Assuming longitudinal relaxation times $T_{1a}=T_{1b}\approx 100$ $\mu$s, the overall average incoherent error is estimated as $\bar{E}_{\text{incoh}}\approx 1.28\times 10^{-3}$. 
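The quoted design numbers can be sanity-checked numerically. The sketch below (ours; frequencies in MHz, rates in Hz, and the bisection bracket is an assumption) inverts the approximate $\omega(E_C,E_J)$ and $\alpha(E_C,E_J)$ relations of item (iii) to recover the $E_J/E_C$ ratios at $\alpha/2\pi = -200$ MHz, and evaluates the coherence-limited error estimates from the quoted rates:

```python
import math

# Invert the approximate transmon relations at fixed anharmonicity:
#   alpha ≈ -E_C - (9/16) sqrt(2 E_C / E_J) E_C
#   omega ≈ sqrt(8 E_J E_C) - E_C - (1/4) sqrt(2 E_C / E_J) E_C
def freq_from_ratio(r, alpha=-200.0):
    """Qubit frequency (MHz) for a given ratio r = E_J/E_C at fixed alpha."""
    s = math.sqrt(2.0 / r)
    e_c = -alpha / (1.0 + (9.0 / 16.0) * s)   # from the alpha relation
    return e_c * (math.sqrt(8.0 * r) - 1.0 - 0.25 * s)

def ratio_for_freq(w_target, alpha=-200.0, lo=20.0, hi=400.0):
    """Bisection on r; omega grows monotonically with r at fixed alpha."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if freq_from_ratio(mid, alpha) < w_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r_5140 = ratio_for_freq(5140.0)   # ~103.3
r_6000 = ratio_for_freq(6000.0)   # ~136.7

# Coherence-limited average-error estimates from the quoted rates and gate time:
tau = 155e-9                        # gate time, s
gamma_p = (0.096e3, 0.114e3)        # Purcell rates / 2pi, Hz
gamma_phi = (0.303e3, 0.303e3)      # pulse-averaged dephasing rates / 2pi, Hz
err_phi = sum((2.0 / 5.0) * g * tau for g in gamma_phi)     # ~3.75e-5
err_purcell = sum((1.0 / 5.0) * g * tau for g in gamma_p)   # ~6.5e-6
```

The recovered ratios match the quoted $E_J/E_C \approx 103.29$ and $136.71$, and the error sums reproduce $\bar{E}_{\gamma_{\phi}}\approx 3.74\times 10^{-5}$ and $\bar{E}_{\gamma_{P}}\approx 6.51\times 10^{-6}$ to within rounding.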
\section{Conclusion} \label{Sec:Conc} In this work, we presented an ab-initio analysis of the RIP gate dynamics, through which we characterized qubit leakage due to a series of unwanted transitions. The physics behind such transitions cannot be correctly analyzed using the dispersive JC or Kerr models, since they are by construction diagonal with respect to the subspace of the qubits. Our ab-initio theory suggests that the qubit leakage can be reduced by using very weakly anharmonic transmon qubits with $E_J/E_C > 100$, compared to the $E_J/E_C \approx 50$ \cite{Koch_Charge_2007} that is common for CR architectures \cite{Sheldon_Procedure_2016, Sundaresan_Reducing_2020, Jurcevic_Demonstration_2021}. In particular, we achieve this limit by simultaneously increasing the qubit frequency and decreasing the anharmonicity compared to the state-of-the-art operating point of 5 GHz and -340 MHz for CR gates \cite{Hertzberg_Laser_2021, Jurcevic_Demonstration_2021}. Weaker anharmonicity mitigates the leakage amplitude and density, as well as their dependence on gate charge, while a larger qubit frequency pushes the underlying collisions to larger negative anharmonicity, away from the experimentally relevant range. We demonstrated the advantage of such parameter allocation for a RIP pair with qubit and resonator frequencies at 5.75, 6.25, and 7.00 GHz, respectively (Fig.~\ref{fig:RIPLikeLeak-HighHighPair}). Despite focusing on the RIP gate operation, we note that our analysis of leakage and frequency collisions has immediate application to similar setups in which weakly anharmonic superconducting qubits are coupled to linear resonators. Prominent examples are dispersive readout \cite{Boissonneault_Nonlinear_2008, Boissonneault_Dispersive_2009, Minev_Catch_2019, Petrescu_Lifetime_2020, Hanai_Intrinsic_2021} and Kerr-cat qubits \cite{Mirrahimi_Dynamically_2014, Leghtas_Confining_2015, Grimm_Stabilization_2020}. 
Although the leakage strength depends on the specifics of the control and measurement scheme in each case, the unwanted transitions outlined in this work can also be driven in these setups. \section{Acknowledgements} \label{Sec:Acknow} We appreciate helpful discussions with the IBM Quantum team, especially Lev Bishop, Oliver Dial, Aaron Finck, Jay Gambetta, Abhinav Kandala, Muir Kumph, Easwar Magesan, David McKay, James Raftery, Seth Merkel, Zlatko Minev, Matthias Steffen and Ted Thorbeck. We acknowledge the work of IBM Research Hybrid Cloud services, especially Kenny Tran, which substantially facilitated our extensive numerical analyses.
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{M}obile applications (apps) have now become the most popular way of accessing the Internet as well as performing daily tasks, e.g., reading, shopping, banking, and chatting~\cite{web:mobileDesktop}. Different from traditional desktop applications, mobile apps are typically developed under time-to-market pressure while facing fierce competition --- over 3.8 million Android apps and 2 million iPhone apps are striving to gain users on Google Play and Apple App Store, the two primary mobile app markets~\cite{web:appNumber}. Additionally, a large number of mobile apps still suffer from functional bugs~\cite{fan18, fan18efficiently}, security vulnerabilities~\cite{chen2018ausera,chen2019ausera, chen2018mobile}, and a lack of market competitiveness. Therefore, for app developers and companies, it is crucial to perform extensive competitive analysis by reviewing existing apps with similar purposes~\cite{guo2017automated,arbon2014app,web:competitorAnalysis,fox2017mobile}. This analysis helps understand the competitors' strengths and weaknesses, and reduces market risks before development. Specifically, it identifies common app features, design choices, and potential customers. Moreover, researching similar apps also helps developers gain more insight into the actual implementation, given that delivering commercial apps can be time-consuming and expensive~\cite{web:appCost}. Besides, from the {perspective} of app testers, the goal is to capture more useful features for testing purposes, such as app logic, functionalities, and version changes. However, to the best of our knowledge, existing reverse engineering tools can only provide partial features, such as the configuration file (i.e., AndroidManifest.xml) and Java {source} files, to analysts directly~\cite{arnatovich2018comparison}. 
To achieve the aforementioned tasks such as competitive analysis, a freelance developer or a product manager (PM) in a tech company has to download the apps from markets, install them on mobile devices, and use them back-and-forth to identify what they are interested in{~\cite{guo2017automated,arbon2014app,fox2017mobile,web:competitorAnalysis}}. However, such manual exploration can be painstaking and ineffective. For example, if a tech company plans to develop a social media app, over 200 similar apps on Google Play will be under review. It is overwhelming to manually analyze them --- register accounts, feed specific inputs if required, and record necessary information (e.g., what are the main features, how are the app pages connected). Additionally, commercial apps can be too complex for all of their functionalities to be uncovered manually in a reasonable time~\cite{azim2013targeted}. For UI/UX designers, the same exploration problem remains when they want to get inspiration from similar apps' design. In addition, the large number of user interface (UI) screens within an app also makes it difficult for designers to understand the relation and navigation between pages. For developers who want to get inspiration from similar apps, it is difficult to link the UI screens with the corresponding implementation code --- the code can be scattered across layout files as well as a large piece of functional code. For app testers who want to understand existing apps in depth from multiple aspects, such as app logic, functionalities, and version changes, in order to design test cases or testing strategies, it is difficult to obtain all the useful features at the same time with existing reverse engineering tools, such as {ApkTool}~\cite{apktool} and {Androguard}~\cite{androguard}. 
\begin{figure} \center \includegraphics[width=0.425\textwidth]{img/demo.png} \caption{The storyboard diagram of an Android app} \label{fig:demo} \end{figure} Inspired by the concept of \emph{\textbf{storyboard}}\footnote{``Storyboard'' was developed at Walt Disney Productions; it comprises a sequence of drawings, typically with some directions and dialogues, representing the shots planned for a movie or television production.} in the movie industry~\cite{finch1995art}, we intend to generate the storyboard of an Android app to visualize its key behaviors and rich features. Specifically, we use activities (i.e., UI screens) to characterize the ``scenes'' in the storyboard, since activities convey the intuitive impression of an app as full-screen windows and are the most frequently used components for user interactions~\cite{activity}. Fig.~\ref{fig:demo} shows the storyboard diagram of \emph{Facebook} (one of the most popular social media apps), which includes the activity transition graph (ATG) with UI pages, the detailed layout code, independent UI components, the functional code of each activity (\emph{Activity Code}), and the method call relations within each activity (\emph{Method Hierarchy}). Based on this storyboard, PMs can review a number of apps in a short period of time and propose more competitive features for their own app.\footnote{The main purpose is to help PMs, developers, designers, and testers understand and get inspiration from existing apps, instead of directly distributing any part of the code for developing apps for commercial purposes.} UI designers can obtain the most relevant UI pages for reference, and developers can directly refer to the related code to improve development efficiency. Meanwhile, app testers can understand the main logic, functionalities, and version changes to generate test cases. However, generating storyboards is challenging. 
\revise{First, the ATG is usually incomplete, with low activity coverage, due to the limitations of static analysis tools such as A3E~\cite{azim2013targeted}, IC3~\cite{octeau2015composite}, and Gator~\cite{gator}.} Second, to render all UI pages, a pure static approach may miss parts of UIs that are dynamically rendered and reduce UI similarity compared with real pages, whereas existing pure dynamic approaches~\cite{monkey, su2017guided, chen2018ui, chen2019codegeneration} can only reach limited activities in the app, especially for those requiring login. Third, obfuscated activity names lack the semantics of the corresponding functionalities, making the storyboard hard to understand. In our previous conference version~\cite{chen2019storydroid}, to overcome these challenges, we proposed a system (named \textbf{StoryDroid}) to automatically generate the storyboards of apps in three main phases: (1) \emph{Activity transition extraction}, which extracts the ATG from the apks, especially the transitions in fragments~\cite{fragment} (components of Activity) and inner classes~\cite{inner}, making the ATG more complete. (2) \emph{Static UI page rendering}, which first extracts the dynamic components (if any) for each UI page and embeds them into the corresponding static layout. It then renders each UI page statically based on the static layout files. (3) \emph{Semantic name inferring}, which infers semantic names for the obfuscated activity names by comparing the layout hierarchy with the ones in our database.\rThree{\footnote{\rThree{According to a pilot study on 1,000 randomly selected activity names, we found that \textit{few activity names} lack semantics in the experimental dataset. Therefore, to make this version more compact, we do not elaborate further on \textit{semantic name inferring}.}}} \revise{However, there are still some limitations in {{StoryDroid}}\xspace~\cite{chen2019storydroid}, which motivates this extended journal version. 
(1) The completeness of the ATG is still not satisfactory (below 70\% activity coverage on average), especially for the closed-source apps (below 60\%), due to the limitations of pure static analysis such as decompilation errors and dynamically loaded components. (2) Some of the pages rendered by the pure static method have a big visual difference compared with the real pages (Fig.~\ref{fig:two_versions} (a)) \rTwo{even though} it achieves \textasciitilde80\% similarity on average. More importantly, not all the dynamic/hybrid layout code can be transferred to static layout code, causing unexpected errors such as rendering failures of user-defined components, third-party dependency errors, and resource file errors, which directly leads to a low success rate of page rendering (\textasciitilde 55\% launch ratio on average).} \revise{The above issues significantly reduce the usability of storyboards in practice.} \revise{However, it is non-trivial to overcome the above limitations, because it is challenging to further improve (1) the completeness of the ATG by a pure static method alone, since it is hard to handle various types of activity startups or address the limitations caused by code reverse engineering~\cite{octeau2015composite, gator, chen2019storydroid,azim2013targeted}; and (2) the capability of static UI page rendering, since it cannot transfer various types of dynamic components and can hardly render UI pages of closed-source apps due to compilation failures. To address the limitations of the pure static method in {{StoryDroid}}\xspace~\cite{chen2019storydroid}, we propose a \textit{hybrid approach} named \textbf{{{StoryDistiller}}\xspace}, which combines static and dynamic methods to distill and generate storyboards for Android apps more effectively, and further helps different stakeholders explore and review apps.
Consequently, in this paper, we make a substantial effort to upgrade the generation capability of storyboards for apps in the following technical aspects:} \begin{itemize} \item \revise{In terms of the \textit{Activity transition extraction}, we leverage \textit{Dynamic UI component exploration} to dynamically augment the transition graph extracted by the pure static method in {{StoryDroid}}\xspace. Consequently, StoryDistiller combines the advantages of static and dynamic methods, with an over 20\% increase in activity transition pairs and a more than 10\% improvement in activity coverage.} \item \revise{As for the \textit{Dynamic UI page rendering}, we leverage {static data-flow analysis} to extract the inter-component communication (ICC) data transferred across different activities. Based on it, {{StoryDistiller}}\xspace can render UI pages dynamically with a high successful launch ratio (\textbf{\textasciitilde80\%} vs. \textasciitilde 55\% in {{StoryDroid}}\xspace on average) and can address the low page similarity of the static rendering method\rThree{\footnote{\rThree{For the pure static method in {{StoryDroid}}\xspace, rendering a page is based on the static layout files, or the transferred layout files for dynamic/hybrid layouts, and no other parameters (such as the ICC data used in {{StoryDistiller}}\xspace) are needed.}}} used in {{StoryDroid}}\xspace (\textbf{\textasciitilde95\%} vs. \textasciitilde80\% on average).} \item \revise{{{StoryDistiller}}\xspace provides a web service to visualize the storyboards with rich features and enhance the usability of {{StoryDistiller}}\xspace.
Thanks to the capability of {{StoryDistiller}}\xspace and a large-scale dataset of apps, we are able to build a large and multi-dimensional dataset with different kinds of data to enable different follow-up research directions.} \end{itemize} \revise{Specifically, in this extension, we evaluate {{StoryDistiller}}\xspace on 150 apps (75 open-source and 75 closed-source apps) from the following two aspects}: effectiveness evaluation of each phase of the proposed method and usefulness evaluation of the visualization outputs as a web service. \revise{The experimental results show that (1) for activity transitions, {{StoryDistiller}}\xspace outperforms the existing static methods such as IC3, Gator, and {{StoryDroid}}\xspace (7.8, 10.0, and 18.2 vs. \textbf{23.3} transition pairs on average); for activity coverage, {{StoryDistiller}}\xspace also performs the best compared with the above three static methods (38.7\%, 33.7\%, and 69.6\% vs. \textbf{77.5\%} on average) and the dynamic method (i.e., Stoat) (36.3\% vs. \textbf{77.5\%}). (2) {{StoryDistiller}}\xspace achieves an around \textbf{80\%} activity launch ratio per app on average on the 150 selected apps, while {{StoryDroid}}\xspace only launches about 55\% of activities due to the limitations of the pure static rendering method. Moreover, our rendered UI pages clearly show the actual functionalities of the activities compared with the ones obtained by manual exploration, and achieve over 95\% UI similarity. In addition, the user study shows that, with the help of {{StoryDistiller}}\xspace, activity coverage improves significantly when exploring and reviewing apps compared with exploration without {{StoryDistiller}}\xspace.} In summary, we make the following main contributions: \begin{itemize} \item This research work aims to automatically generate the storyboards of Android apps.
It assists app development teams, including PMs, designers, developers, and app testers, in quickly gaining a clear overview of similar apps and targeting different tasks such as app exploration and app review. \item \revise{We leverage a hybrid approach to extract a comprehensive ATG for Android apps, and render UI pages dynamically with high UI similarity compared with the real ones.} \item \revise{We propose a novel method to render UI pages by obtaining the required ICC data for launching each activity, minimizing unexpected errors when rendering UI pages (Algorithm~\ref{algo:iccdata} in \S~\ref{subsec:dynamic}).} \item Our comprehensive experiments demonstrate not only the effectiveness of the generated storyboards, but also the usefulness of our StoryDistiller with the extracted rich features for assisting app review and analysis. \item To enhance the usability of {{StoryDistiller}}\xspace, we visualize the storyboards with all rich features through a web service (Fig.~\ref{fig:service}). We also construct {a multi-dimensional} dataset with different kinds of features based on {{StoryDistiller}}\xspace and enable several follow-up research directions, such as extracting commonalities across apps, recommending UI design and code, and guiding app testing. We will gradually release these datasets to enable different research applications~\cite{storydistiller}. \item \rTwo{We released the code of {{StoryDistiller}}\xspace on GitHub for the community to facilitate follow-up work: \url{https://github.com/tjusenchen/storydistiller}} \end{itemize} \section{Motivating Scenario}\label{sec:motivation} We detail the typical app review process{~\cite{devprocess,typical,guo2017automated,arbon2014app,fox2017mobile}} with our {{StoryDistiller}}\xspace for Android apps in terms of the different roles in a development team. Eve is a PM at an IT company. Her team plans to develop an Android social app.
In order to improve the competitiveness of the designed app, she searches for hundreds of similar apps (e.g., Facebook, Instagram, and Twitter) based on input keywords (e.g., social and chat) on the Google Play Store. She then inputs all of the URLs of these apps into {{StoryDistiller}}\xspace, which automatically downloads all of them with the Google Play API~\cite{api}. {{StoryDistiller}}\xspace further generates the storyboard (e.g., Fig.~\ref{fig:demo}) of all these apps and displays them to Eve for an overview. By observing these storyboards together, she easily understands the storyline of these apps, and spots the common features among them, such as registering, searching, settings, user profiles, and posting. Based on these common features, Eve comes up with some unique features that can distinguish her team's app from existing ones. Alice, as a UI/UX designer, needs to design the UI pages according to Eve's requirements. With our {{StoryDistiller}}\xspace, she can easily get not only a clear overview of the UI design style of related apps, but also the interaction relations among different screens within an app. Then, Alice can develop the UI and user interaction of her app inspired by others' apps~\cite{web:designer1, web:designer2}. Bob is an Android developer who needs to develop the corresponding app based on Alice's UI design. Given the existing app that Alice referred to for her UI design, he can also consult that app with the help of our {{StoryDistiller}}\xspace. By clicking the UI screen of each activity in the storyboard, {{StoryDistiller}}\xspace returns the corresponding UI implementation code, regardless of whether it is implemented with pure static code, dynamic code, or a hybrid of both. To implement his own UI design, he can refer to the implemented code and customize it based on his requirements. That development process is much faster than starting from scratch. In addition, Bob may also be interested in certain functionality within a certain app.
By using {{StoryDistiller}}\xspace, he can easily locate the logic code. Mallory is an Android tester who has to test the corresponding app based on Bob's implementation. By exploring {{StoryDistiller}}\xspace, she can understand the main logic and functionalities to generate test cases. For apps with multiple versions, {{StoryDistiller}}\xspace is able to identify the UI components that have been modified between different app versions. Therefore, she can also reuse most of the test cases, since different versions of a single app share many common functionalities. Reusing test cases is useful for improving the efficiency of app testing. \section{Preliminaries}\label{sec:background} In this section, we briefly introduce the concepts of Android Activity and Fragment, and the mechanism of inter-component communication (ICC). \subsection{Android Activity and Fragment} There are four types of components in Android apps (i.e., Activity, Service, Broadcast Receiver, and Content Provider). Activity~\cite{activity} and Fragment~\cite{fragment} render the user interface and are the visible parts of apps. Activity is a fundamental component for drawing the screens that users can interact with. A Fragment represents a portion of the UI in an activity, contributing its own UI to that activity. A Fragment always depends on an Activity and cannot exist independently. A Fragment can also be reused in multiple activities, and an activity may contain multiple fragments based on the screen size, with which we can create multi-panel UIs that adapt to mobile devices with different screen sizes. \revise{Service is another important component of Android that is used to perform operations in the background, such as playing music and handling network transactions.
It does not have any UI.} \subsection{Inter-component Communication}\label{sec:background-icc} When an app intends to perform inter-component communication (ICC), e.g., start a new activity $B$ or connect to other apps from the current activity $A$, it needs to create an ``Intent'' object describing the task. If other data/messages need to be transferred from activity $A$ to activity $B$, parameters such as the action, category, and extra parameters can be stored in the ``Intent'' object or in the ``Bundle'' class and transferred to activity $B$ for a successful launch. When activity $B$ receives it, it can parse the data inside and use them to render the UI screen or conduct other transactions. If activity $B$ does not receive the necessary data, or does not receive it in the proper form, it cannot be rendered successfully, sometimes even causing an app crash, usually a ``NullPointerException''. Note that an activity can also be started by other activities via Fragments and inner classes~\cite{inner}. As shown in Table~\ref{tbl:iccdata}, the ICC data transferred between components are classified into two categories: \textit{primitive attributes} and \textit{extra parameters}. The primitive attributes are usually stated in the intent-filter element of an activity in the AndroidManifest.xml file, indicating that only the intents with specific attributes can launch the activity, such as the actions to be performed (e.g., {android.intent.action.VIEW}), the URI data to be operated on (e.g., {vnd.android.cursor.dir/vnd.google.note}), and special flags associated with the ``Intent''. Primitive attributes can also be declared in the Java files.
The extra parameters are usually declared in the Java files in an Intent object or Bundle class, indicating the data transferring to the target activity, which is also the necessary data to launch the target activity, in the form of <$key$, $type$> pairs where $key$ is a String \rTwo{indicating the parameter name} and $type$ indicates the data type of the value. \rTwo{For example, if an activity requires a specific ``pid'' (e.g., pid = 2) to be successfully launched, then the $key$ refers to the parameter name ``pid'', and the $type$ refers to data type of 2 (i.e., Integer).} \begin{figure*} \centering \includegraphics[width=1\textwidth]{img/overview.png} \caption{\revise{\rTwo{Overview of {{StoryDistiller}}\xspace}}} \label{fig:workflow} \end{figure*} \begin{table}[t] \centering \caption{\revised{Data types transferred in ICC}} \label{tbl:iccdata} \scalebox{0.95}{\revised{\begin{tabular}{c|c|l} \hline \textbf{Category} & \textbf{SubCategory} & \textbf{Data Type/Description} \\ \hline \multirow{4}{*}{\textit{\begin{tabular}[c]{@{}c@{}}Primitive \\ Attributes\end{tabular}}} & Action & String \\ \cline{2-3} & Category & Set\textless{}String\textgreater{} \\ \cline{2-3} & Data & String \\ \cline{2-3} & Type & String \\ \hline \multirow{3}{*}{\textit{\begin{tabular}[c]{@{}c@{}}Extra \\ Parameters\end{tabular}}} & Basic & \begin{tabular}[c]{@{}l@{}}\textless{}$key, type$\textgreater pair, \rTwo{where $key$ refers to the}\\ \rTwo{parameter name, and $type$ indicates the} \\\rTwo{data type of the value (e.g., Integer, String).} \end{tabular} \\ \cline{2-3} & Bundle & \begin{tabular}[c]{@{}l@{}}Set of \textless{}$key, type$\textgreater pairs, each of which\\ is a basic extra parameter.\end{tabular} \\ \hline \end{tabular}}} \end{table} \section{Our Hybrid Approach ({{StoryDistiller}}\xspace)}\label{sec:approach} {{StoryDistiller}}\xspace takes an \emph{apk} as input and outputs the visualized storyboard (${S}$) with rich features for the app. 
Fig.~\ref{fig:workflow} shows the overview of our hybrid approach (named {{StoryDistiller}}\xspace): \revise{(1) First of all, {{StoryDistiller}}\xspace instruments the \emph{apk} so that activities can be launched by third parties. (2) \textit{Static extraction} includes \textit{ATG extraction}, which leverages static program analysis to obtain a relatively complete ATG. Meanwhile, the required {ICC data} (i.e., the Activity launching parameters shown in Table~\ref{tbl:iccdata}) are extracted through control- and data-flow analysis (refer to Section \ref{subsubsec:icc}). (3) \textit{Dynamic UI page rendering} launches the activities registered in the app one by one with the extracted ICC parameters. Meanwhile, it can also augment the ATG through \textit{dynamic UI component exploration}. After that, we obtain a comprehensive ATG with rendered UI pages. (4) Moreover, the other rich features, such as layout code, Activity code, UI components, and call graphs, are collected.} (5) {{StoryDistiller}}\xspace then visualizes the storyboard of the app with all the extracted features in a webpage. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{img/atg} \caption{Activity transitions involving fragments and inner classes} \label{fig:atg} \end{figure} \subsection{APK Instrumentation}\label{subsec:instrumentation} \revise{In terms of the \textit{APK instrumentation}, we {first decompile the target apk and set} ``exported=true'' in the AndroidManifest.xml file for each activity to enable the launching process by third parties. We then repackage it into a new installable APK file and sign it to ensure its usability. Note that the repackaged apps are only used for experimental purposes, and all the experiments are conducted in a controlled environment.
The repackaged apps will not be released for commercial use.} \subsection{Static Extraction}\label{subsec:static} \revise{Static extraction mainly contains two steps: ATG extraction, and ICC data extraction for the dynamic UI page rendering in the next phase.} \subsubsection{{ATG extraction}} \revise{Activity transitions in fragments and inner classes are representative and widely used in real-world apps. According to our study on 150 randomly selected real apps (75 open-source and 75 closed-source apps used in RQ1 (\S~\ref{subsec:rq1})), we find that 44 apps use Activity transitions in fragments and 84 apps use Activity transitions in inner classes.} Before extracting activity transitions in inner classes and fragments, we illustrate the transitions in them. Fig.~\ref{fig:atg} (a) is the sub-ATG of \emph{Vespucci}~\cite{vespucci}, a map editor. First, activity {Main} starts PrefEditor, in which PrefEditorFragment is started; PrefEditorFragment further starts AdvancedPrefEditor. Specifically, as shown in Listing~\ref{fig:fragment}, fragments can be added to an activity in two ways: (1) by invoking fragment modification API calls, e.g., ``replace()'' and ``add()'', and then leveraging ``FragmentTransaction.commit()'' (lines 3-4) to start the fragment; (2) by using ``setAdapter'' (line 7) to display the fragment in a certain view (e.g., ViewPager). The started PrefEditorFragment then starts a new activity (i.e., AdvancedPrefEditor). Fig.~\ref{fig:atg} (b) shows the sub-ATG of ADSdroid, where SearchPanel uses an inner class SearchByPartName to handle time-consuming operations, as shown in Listing~\ref{fig:inner}. After finishing the task, it starts an activity PartList by invoking ``startActivity()'' (line 4). In this example, our goal is to extract the activity transitions Main$\rightarrow$PrefEditor, PrefEditor$\rightarrow$AdvancedPrefEditor, and SearchPanel$\rightarrow$PartList. Algorithm~\ref{algo:static} details the extraction of the ATG.
Specifically, it takes as input an ${apk}$, and outputs the activity transition graph (${atg}$). We first initialize $atg$ as an empty set (line 1), which gradually accumulates the activity transitions. We then generate the call graph ($cg$) of the given ${apk}$. For each method ($m$) in each class ($c$), if there exists an activity transition, we first get the target activity ($callee\_act$) by analyzing the data in the \emph{Intent} \rTwo{via getTargetAct()} (lines 4-8). \rTwo{Specifically, for each explicit activity transition, the target activity is indicated in the Intent object, where the intent either declares the callee activity explicitly or refers to it through a previously defined variable or another form of indirection. We first analyze which Intent constructor it invokes (Intent has various constructors receiving different kinds/numbers of parameters), and then track the parameter that indicates the target activity by data-flow analysis. Finally, we can obtain the target activity ($callee\_act$).} If the method ($m$) is in an inner class, we regard the outer class as the activity that starts the target activity and add the transition to $atg$ (lines 9-11). Taking Fig.~\ref{fig:atg}~(b) as an example, we add an edge SearchPanel$\rightarrow$PartList to $atg$. \begin{lstlisting}[language=Java,caption={Simplified code snippet of Fragment},label={fig:fragment}] public class PrefEditor{... //Using replace/add PrefEditorFragment pref = new PrefEditorFragment(); FragmentTransaction.replace(R.id.content,pref); FragmentTransaction.commit(); } public class PrefEditor{... // Using setAdapter ViewPager.setAdapter(getSupportFragmentManager(), new PrefEditorFragment()); } \end{lstlisting} \begin{lstlisting}[language=Java,caption={Simplified code snippet of Inner Class},label={fig:inner}] public class SearchPanel{... private class SearchByPartName extends AsyncTask<>{...
Intent intent = new Intent(SearchPanel.this, PartList.class); startActivity(intent); } } \end{lstlisting} If $m$ is in a fragment, we construct the relation between the fragment ($caller\_frag$) and the target component (lines 12-13). \rTwo{Specifically, for each activity transition, we first locate the class $c$ that starts a new activity according to $m$, and then check its super class. If it extends a fragment, we set $caller\_frag = c$. In fact, in terms of extracting the target activities of explicit transitions, there is no difference between activities started by an activity and those started by a fragment.} Note that this relation \rTwo{between the caller fragment and the target activity} does not represent the actual component transition; we optimize it by identifying the activities that start the fragment in lines 18-21. \rTwo{Specifically, to identify the activities that bind a specific fragment, we investigate the different types of methods that bind activities to the corresponding fragments, where fragments are operated on (e.g., removed, added, replaced, or set as an adapter) using specific APIs, and we track these APIs to identify the activity corresponding to a specific fragment.} After that, we update $atg$ by merging fragment relations to construct the actual activity transitions (line 22). For example, as shown in Fig.~\ref{fig:atg}~(a), we first obtain the relations PrefEditorFragment$\rightarrow$AdvancedPrefEditor and PrefEditor$\rightarrow$PrefEditorFragment, and then merge them into PrefEditor$\rightarrow$AdvancedPrefEditor to represent the actual activity transition. For a method $m$ that is neither in an inner class nor in a fragment, we backward-traverse $cg$ starting from $m$ to obtain all the activities that start the target activity ($callee\_act$), and then add them to $atg$ (lines 14-17).
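The fragment-relation merging described above (lines 18-22 of Algorithm~\ref{algo:static}) can be sketched as follows. This is only an illustrative simplification in Python: the helper names and the flat edge representation are ours, and the actual implementation operates on Soot call graphs rather than name pairs.

```python
def merge_fragment_edges(edges, is_fragment):
    """Turn fragment-mediated pairs into actual activity transitions:
    if activity A hosts fragment F and F starts activity B, emit A -> B."""
    # Activities that host (start/bind) each fragment.
    hosts = {}
    for caller, callee in edges:
        if is_fragment(callee):
            hosts.setdefault(callee, set()).add(caller)
    atg = set()
    for caller, callee in edges:
        if is_fragment(callee):
            continue  # fragment nodes are folded into their host activities
        if is_fragment(caller):
            for act in hosts.get(caller, ()):
                atg.add((act, callee))
        else:
            atg.add((caller, callee))
    return atg

# Edges from Fig.~\ref{fig:atg}: Main -> PrefEditor,
# PrefEditor -> PrefEditorFragment, PrefEditorFragment -> AdvancedPrefEditor,
# SearchPanel -> PartList (inner class already mapped to its outer activity).
raw = {
    ("Main", "PrefEditor"),
    ("PrefEditor", "PrefEditorFragment"),
    ("PrefEditorFragment", "AdvancedPrefEditor"),
    ("SearchPanel", "PartList"),
}
atg = merge_fragment_edges(raw, {"PrefEditorFragment"}.__contains__)
```

On this example the sketch yields exactly the three actual transitions Main$\rightarrow$PrefEditor, PrefEditor$\rightarrow$AdvancedPrefEditor, and SearchPanel$\rightarrow$PartList.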
\begin{algorithm2e}[t] \setcounter{AlgoLine}{0} \caption{Static ATG Extraction} \label{algo:static} \DontPrintSemicolon \SetCommentSty{mycommfont} \KwIn{$apk$} \KwOut{$atg$: Activity transition graph, including Activity and Service} $atg$ $\leftarrow \emptyset$ \; $cg$ $\leftarrow$ getCallGraph($apk$) \; $all\_classes$ $\gets$ getAllClasses($apk$) \; \ForEach{$c$ $\in$ $all\_classes$}{ $methods$ $\gets$ getClassMethods($c$) \; \ForEach{$m$ $\in$ $methods$}{ \If{hasActivityTransition($m$)}{ $callee\_act$ $\gets$ getTargetAct($m$)\; \If{isInnerClass($c$)}{ $caller\_act$ $\gets$ outerClass($c$) \; $atg$.addPair($caller\_act$, $callee\_act$)\; } \ElseIf{isInFragment($m$)}{ $atg$.addPair($caller\_frag$, $callee\_act$)\; } \Else{ $caller\_acts$ $\gets$ getCallerAct($m$, $cg$)\; \ForEach{$act$ $\in$ $caller\_acts$}{ $act$.addPair($act$, $callee\_act$)\; } } } \tcp*[h]{Optimize $atg$}\; \If{startFragment($m$)}{ $caller\_acts$ $\gets$ getCallerAct($caller\_frag$)\; \ForEach{$act$ $\in$ $caller\_acts$}{ $atg$.addPair($act$, $callee\_frag$)\; } updateATGIfNeeded($atg$) } } } \Return{$atg$} \end{algorithm2e} \SetKwInput{KwInput}{Input} \SetKwInput{KwInput}{Input} \SetKw{Let}{let} \SetKw{Continue}{continue} \subsubsection{{ICC data extraction}}\label{subsubsec:icc} As aforementioned in \S~\ref{sec:background-icc}, to successfully launch an activity, data that are required to render the target UI page should be provided, including the \emph{primitive attributes} and \emph{extra parameters} listed in Table~\ref{tbl:iccdata}. Algorithm~\ref{algo:iccdata} details the extraction process of ICC data. \rThree{We highlight that the data-flow analysis for ICC data extraction is one of the core phases in {{StoryDistiller}}\xspace, which obviously improves the ability of UI page rendering (c.f. \S~4.3). } As shown in Algorithm~\ref{algo:iccdata}, it takes an \textit{apk} as input, and outputs the ICC data required to launch each activity. 
Specifically, we first obtain the call graph, all class instances, and the AndroidManifest.xml file by decompiling the apk, and the output $icc\_data$ is initialized as an empty set (Lines 1-4). We then traverse the classes to identify activities. For each activity $act$, we use the function \texttt{getParameters()} to obtain the required parameters (including \emph{primitive attributes} and \emph{extra parameters}) for launching the activity (Lines 9-31). As for the \textbf{primitive attributes}, we obtain them (if any) from the manifest file by parsing the corresponding fields, such as ``action'' and ``category'', and then save them in $para$ (Lines 11-12). Sometimes primitive attributes are also declared in the source code, and the extraction method is similar to that of extra parameters. \begin{algorithm2e}[t] \setcounter{AlgoLine}{0} \caption{ICC Data Extraction} \label{algo:iccdata} \DontPrintSemicolon \SetCommentSty{mycommfont} \KwIn{$apk$} \KwOut{$icc\_data$ <$act$, $para$>: ICC data of each activity for Activity launching} $icc\_data$ $\leftarrow \emptyset$ \; $cg$ $\leftarrow$ getCallGraph($apk$) \; $all\_classes$ $\gets$ getAllClasses($apk$) \; $mani \gets$ getManifest($apk$)\; \ForEach{$c$ $\in$ $all\_classes$}{ \If{isActivity($c$)}{ \tcp*[h]{Get parameters for activity $c$}\; $para \gets$ getParameters($c$, $cg$, $mani$) \; $icc\_data$ = $icc\_data$ $\bigcup$ <$c$, $para$>\; } } \SetKwFunction{FMain}{getParameters} \SetKwProg{Fn}{Function}{:}{} \Fn{\FMain{$act$, $cg$, $mani$}}{ $para$ $\gets \emptyset$ \; \tcp*[h]{Get primitive attributes from manifest}\; $attr, value \gets$ getPrimitiveAttr($c$, $mani$)\; $para \gets para \bigcup$ <$attr$, $value$> \; \tcp*[h]{Get extra parameters from source code}\; $methods_{lc}$ $\gets$ getLifecycleCallbacks($act$) \; \ForEach{$m$ $\in$ $methods_{lc}$}{ $type$, $key$ $\gets null$ \; $para \gets$ getExtras($m$, $para$); } \KwRet $para$\; } \SetKwFunction{FMain}{getExtras} \SetKwProg{Fn}{Function}{:}{} \Fn{\FMain{$m$,
$para$}}{ \If{hasExtraParameters($m$)}{ $extras \gets$ getAllExtras($m$)\; \ForEach{$e \in extras$ }{ $key$ $\gets$ getKey($e$)\; $type$ $\gets$ getValueType($e$)\; $para \gets$ $para \bigcup$ <$key$, $type$> \; } } \Else{ \revise{$m_{callee}$ $\gets$ getCalleeMethod($m$, $cg$)\;} \While{\revise{$m_{callee}\neq null$}}{ \revise{$para \gets$ getExtras($m_{callee}$, $para$)\;} \revise{$m_{callee}$} $\gets$ getCalleeMethod(\revise{$m_{callee}$}, $cg$)\; } } \KwRet $para$\; } \Return $icc\_data$ \end{algorithm2e} As for the \textbf{extra parameter} extraction, we first identify the methods related to the activity lifecycle (denoted by $methods_{lc}$), such as \textit{onCreate}() and \textit{onStart}(), since extra parameters in these methods are related to page rendering. For each lifecycle callback (i.e., method) $m$, if it invokes specific APIs (e.g., \textit{getStringExtra}, \textit{getBundle}) to get the ICC extra data from the previous activity, we obtain the key of each extra parameter through backward data-flow analysis and its value type based on the corresponding APIs. We then save them in $para$ (Lines 19-24). \rTwo{Specifically, the \textit{key} serves to retrieve the attached data transferred from the source activity to the target activity; it is therefore usually a constant string and can be directly extracted from the code according to the specific APIs. As for the \textit{value type}, we can infer it from the type-specific APIs such as getStringExtra and getBooleanExtra. For example, for btd = getIntent().getStringExtra(``returnKey1''), the key we obtain is ``returnKey1'' and the value type is ``String'', so the ICC data is saved as $<$returnKey1, String$>$; we will provide a string value for the key returnKey1 at runtime to launch the target activity.
} In some cases, the extra parameters are not directly declared in the lifecycle methods, but in the methods that the lifecycle methods invoke, which would also affect the UI rendering if not provided with proper parameters. To tackle this situation, we first obtain the methods that the lifecycle callbacks invoke according to the call graph $cg$ (Line \revise{26}), and iteratively explore each method to obtain the potential extra parameters with their keys and value types by invoking the $getExtras$ function (Lines \revise{28-29}). After obtaining the parameters for activity $act$, we store the activity together with its required parameters in $icc\_data$ for further UI page rendering and exploration. \subsection{Dynamic UI Page Rendering}\label{subsec:dynamic} \revise{Dynamic UI page rendering mainly contains two steps: \textit{UI page rendering}, which {launches each activity} dynamically based on the extracted ICC data; and \textit{UI component exploration}, which {augments the static ATG by exploring all interactive components of each activity to identify more activity transitions together with UI pages.}} \rThree{Note that the static UI page rendering method used in {{StoryDroid}}\xspace, which leverages layout code transformation, may lead to a big visual difference between the rendered pages and the real pages, as in Fig.~9 (a); the reasons are explained in \S~5.2.2. However, there are no such limitations in StoryDistiller, which uses dynamic UI page rendering with the ICC data extracted by the data-flow analysis.} \subsubsection{UI page rendering}\label{subsec:rendering} After generating the activity transitions between different pages, we now aim to render the corresponding UI pages by exploring each activity of the app. Our goal is to render/explore as many UI pages as possible to visualize the transitions between activities.
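As a rough illustration of the <$key$, $type$> recovery described above, the pattern scan below approximates what is extracted from decompiled sources. It is only a stand-in for the backward data-flow analysis actually performed: the API-to-type table covers just a small subset of the real extra-accessor APIs, and the Java snippet is hypothetical.

```python
import re

# Subset of Intent extra-accessor APIs and the value types they imply (assumed).
EXTRA_APIS = {
    "getStringExtra": "String",
    "getIntExtra": "Integer",
    "getBooleanExtra": "Boolean",
    "getFloatExtra": "Float",
}

def extract_extras(java_source):
    """Recover <key, type> pairs from calls such as
    getIntent().getStringExtra("returnKey1") -> ("returnKey1", "String")."""
    pairs = set()
    for api, vtype in EXTRA_APIS.items():
        for match in re.finditer(rf'{api}\(\s*"([^"]+)"', java_source):
            pairs.add((match.group(1), vtype))
    return pairs

# Hypothetical decompiled lifecycle callback.
snippet = '''
protected void onCreate(Bundle savedInstanceState) {
    String btd = getIntent().getStringExtra("returnKey1");
    int pid = getIntent().getIntExtra("pid", 0);
}
'''
extras = extract_extras(snippet)
```

A pure textual scan like this fails precisely where the keys are held in constants or passed through helper methods, which is why the data-flow analysis over the call graph is needed in practice.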
To the best of our knowledge, there exist two methods for rendering/exploring the UI pages: (1) Dynamic app testing tools such as {Monkey}~\cite{monkey} and {Stoat}~\cite{su2017guided}, which aim to explore as many UI pages as possible by dynamically running the apps to detect more bugs; however, they are demonstrated to achieve only \textasciitilde35\% activity coverage, which is far from representing the complete relations between activities. (2) Static UI page rendering. Chen et al.~\cite{chen2019storydroid} proposed to render UI pages by first converting dynamic/hybrid layouts to static layouts, since they found that 62.3\% of apps construct their UI pages by adopting dynamic/hybrid layouts, and then developing a dummy app to launch each activity with the help of the static layouts. However, the rendering largely relies on the layout conversion process, causing incomplete or erroneous rendering of the UI pages if the conversion process is incomplete. \revise{To this end, we propose to render and launch activities dynamically with the help of the extracted ICC data and the Android toolkit, and take screenshots accordingly.} This approach has several advantages over the existing methods: (1) It does not need to generate test cases to run the app like dynamic app testing, but directly launches all activities one by one, which addresses the limitation that the test cases may not reach all the activities successfully; (2) It considers the data transferred from the previous activity that is essential for rendering the current activity, which alleviates the limitation of the improper conversion process in {{StoryDroid}}\xspace~\cite{chen2019storydroid}. The details of the rendering process are as follows. \revise{For each activity, if it requires parameters to launch, we provide it with a random dummy value according to its required data types (e.g., String/Integer/Boolean).
\rTwo{As for the dummy values, we extract the data defined in the layout files when exploring apps, and randomly choose values from them for different data types.} In this way, we can append all the parameters needed for activity launching. \rTwo{For example, if the extracted parameters of one Activity are <``userid'', Integer> and <``username'', String>, we will use the command ``\texttt{adb shell am start -n pkg/pkg.activityname --ei userid 2 --es username Alice}'' to launch the current Activity, where \texttt{--ei} and \texttt{--es} indicate that the data types of the parameters are Integer and String, respectively. More required extra parameters can be appended. For other data types, there are also corresponding flags, such as \texttt{--ez} for Boolean and \texttt{--ef} for float.} Besides, to eliminate side effects between different activities during launching, we provide a fresh state for each activity by force-stopping the previously launched ones.} For activities that fail to launch due to app crashes or required permissions, we dump the layout hierarchy of the current activity and analyze it to check whether it contains keywords (e.g., ``has stopped'' and ``keeps stopping'' for app crashes, ``ALLOW'' and ``DENY'' for permission requests). When the app crashes, we stop the app and reset it to the original state (i.e., a fresh state for another activity to launch). When the app requests a permission from the user, we automatically grant it so that the UI page renders normally. Note that the activity that is actually launched may differ from the one intended. For example, we intend to launch an activity called ``{NewsDetailActivity}''; however, this activity requires user credentials (e.g., user name and password). Without valid user credentials, it would jump to the ``sign in'' or ``sign up'' page. Thus the actually launched activity would be ``signInActivity''.
Considering such situations, to avoid assigning incorrect activity names to the launched UI pages, we obtain the currently launched activity by retrieving the top activity from the back stack through the Android running system. This strategy also addresses the code obfuscation problem on activity names, and is more reliable than the solution proposed in {{StoryDroid}}\xspace~\cite{chen2019storydroid}, i.e., inferring semantic names based on layout tree similarity. \subsubsection{UI component exploration} \revise{Although the completeness of our ATG is much better than that of existing static methods such as IC3 and dynamic methods such as Stoat, according to the comparison experiments in StoryDroid~\cite{chen2019storydroid}, some {important} activity transitions are still missing due to the limitations of the purely static method. In this paper, we propose to explore the interactive components on each page and augment the ATG. Specifically, when a UI page is rendered successfully (\S~\ref{subsec:rendering}), {{StoryDistiller}}\xspace follows two steps to conduct dynamic UI component exploration.} \revise{ Firstly, we parse the layout code of each rendered activity and extract each \textit{interactive component} (e.g., \textit{ImageButton}, \textit{Button}, and \textit{clickable TextView}) together with its attributes, including the UI component id, component description, etc. Secondly, we trigger each interactive component on the rendered activity by using UIAutomator~\cite{uiautomator}. If the behavior triggers the launching of another activity and the transition is not included in the current ATG, we add the newly explored transition pair to the ATG.} By leveraging this hybrid ATG construction approach, we are able to obtain a more complete ATG for the demonstration of storyboards.
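As an illustration, the launch-command construction and top-activity retrieval described above can be sketched as follows. The package, activity, and parameter names are hypothetical; the \texttt{--es}/\texttt{--ei}/\texttt{--ez}/\texttt{--ef} flags are the standard typed-extra options of \texttt{am start}:

```python
# Sketch of assembling the "adb shell am start" launch command described
# above. Package, activity, and parameter names are hypothetical; the
# flags map Python types to am's typed-extra options.
ADB_FLAGS = {str: "--es", int: "--ei", bool: "--ez", float: "--ef"}

def build_launch_cmd(pkg, activity, extras):
    """Build an adb command that launches pkg/activity with typed extras."""
    cmd = ["adb", "shell", "am", "start", "-n", f"{pkg}/{activity}"]
    for key, value in extras.items():
        flag = ADB_FLAGS[type(value)]  # exact type lookup: bool maps to --ez
        val = str(value).lower() if isinstance(value, bool) else str(value)
        cmd += [flag, key, val]
    return " ".join(cmd)

def top_activity_cmd():
    # The launched activity may differ from the intended one; the actual
    # top activity can be read from the back stack dump, e.g. via:
    return "adb shell dumpsys activity activities"

# Reproduces the example command from the text:
cmd = build_launch_cmd("pkg", "pkg.activityname",
                       {"userid": 2, "username": "Alice"})
```

Parsing the `dumpsys` output for the top of the back stack then yields the actual activity name, regardless of obfuscation.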
\subsection{Rich Feature Extraction and Implementation} \subsubsection{Feature Extraction} To visualize the storyboards of Android apps with all rich features, we highlight the extracted rich features for different software engineering tasks. Specifically, as shown in Fig.~\ref{fig:workflow}, we extract {8} kinds of features, including the ATG, UI pages, activity names, layout code, activity code, call graphs, and UI components with their attributes. Among them, the ATG, UI pages, activity names, and call graphs are extracted in \S~\ref{subsec:static}-\S~\ref{subsec:dynamic} to achieve specific tasks. For activity code, we extract the corresponding code by decompiling the APK file with a reverse-engineering tool. For layout code, we obtain it when rendering UI pages by dumping the layout of the current activity, which is the actual layout of the launched activity. For UI components and their attributes, we first identify the boundary and the attributes of each component (e.g., ``Button'' and ``EditText'') from the layout code, and crop each component according to its boundary. \begin{figure} \center \includegraphics[width=0.5\textwidth]{img/web.jpg} \caption{Web service of {{StoryDistiller}}\xspace} \label{fig:service} \end{figure} \subsubsection{Implementation} We implement {{StoryDistiller}}\xspace as an automated tool, written in 4K lines of Java code and 3K lines of Python code. {{StoryDistiller}}\xspace is built on top of several off-the-shelf tools: IC3, jadx~\cite{jadx}, and Soot~\cite{soot}. We extend the Soot framework to extract the inputs of UI page rendering, such as the ATG and ICC data, and to obtain call graphs from APKs. Activity transition extraction is built on IC3 to obtain a comparatively complete ATG. jadx is used to decompile APKs to obtain the source code of Android apps. ApkTool (v2.4.1)~\cite{apktool} is used to repackage the APK to implement the instrumentation.
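The ApkTool-based repackaging step could look like the following sketch. File and keystore names are hypothetical, and the instrumentation itself (StoryDistiller-specific edits to the decoded files) is omitted between decode and build:

```python
# Sketch of an ApkTool repackaging pipeline for instrumentation.
# File/keystore names are hypothetical; the actual instrumentation of the
# decoded smali/resources happens between the decode and build steps.
def repackage_cmds(apk="app.apk", workdir="app_decoded", out="app_instr.apk"):
    return [
        f"apktool d {apk} -o {workdir}",              # decode APK to smali + resources
        # ... instrument the decoded files here ...
        f"apktool b {workdir} -o {out}",              # rebuild the modified APK
        f"apksigner sign --ks debug.keystore {out}",  # re-sign before installing
        f"adb install -r {out}",                      # install on the emulator
    ]
```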
\rTwo{We dump the actual activity names for each UI page from the console through the activity back stack~\cite{component_stack}. For the few cases where activity names lack semantics and users need an inferred activity name, the method proposed in StoryDroid~\cite{storydroid} can also be applied.} The Android emulator used (Nexus 5X) runs on Genymotion (v3.0.0) with Android 8.0, 4G RAM, and a 1920$*$1080 resolution. We use data-driven documents (D3)~\cite{d3} to visualize {{StoryDistiller}}\xspace's results, which provides a data-driven visualization technique based on HTML, JavaScript, and CSS. As shown in Fig.~\ref{fig:service}, the visualization~\cite{chen2019storydroid} contains 5 parts: (1) the ATG with activity names and corresponding UI pages; (2) the layout code of each UI page; (3) the functional code of each activity; (4) the components of each UI page with corresponding attributes, such as label and size; (5) the method call relations within each activity. \section{Evaluation of {{StoryDistiller}}\xspace} \label{sec:effectiveness_eval} In this section, we evaluate the effectiveness and the usefulness of {{StoryDistiller}}\xspace based on the following three research questions: \noindent {\bf RQ1:} \revise{Can {{StoryDistiller}}\xspace extract a more complete ATG in terms of more transitions and higher activity coverage compared with existing ATG exploration tools (i.e., {IC3}~\cite{octeau2015composite}, Gator~\cite{gator}, Stoat~\cite{su2017guided}, and StoryDroid~\cite{chen2019storydroid})?} \noindent {\bf RQ2:} \revise{Can {{StoryDistiller}}\xspace render more UI pages with higher UI similarity compared with StoryDroid?} \noindent {\bf RQ3:} \revise{Can {{StoryDistiller}}\xspace help explore and review the functionalities of Android apps effectively and efficiently?} \subsection{\revise{RQ1: Effectiveness of Hybrid ATG Extraction}}\label{subsec:rq1} \subsubsection{Setup} \revise{To investigate the capability of constructing ATG, we
randomly download 75 apps from the Google Play Store (closed-source apps) and 75 apps from F-Droid~\cite{fdroid} (open-source apps) as subjects to demonstrate the effectiveness of ATG extraction on real-world apps. We compare {{StoryDistiller}}\xspace with four existing ATG exploration tools, including three static methods (i.e., IC3~\cite{octeau2013effective}, Gator~\cite{gator}, and StoryDroid~\cite{chen2019storydroid}) and one dynamic method, i.e., Stoat~\cite{su2017guided}, which has been demonstrated to be more effective at app exploration than other tools such as {Monkey}~\cite{monkey}. For some closed-source apps, IC3 and Gator take more than one hour to extract the ATG, probably due to some internal errors; therefore, we set a timeout of 30 minutes for each app, which is sufficient to explore most of the apps. For Stoat, we run each app for 30 minutes. As for the evaluation metrics, we use the number of \textit{activity transition pairs} and the \textit{activity coverage} to demonstrate the performance of each tool. ``Activity coverage'' is computed as the number of unique activities in the ATG over the total number of activities declared in the app.} \subsubsection{Results of RQ1} \begin{figure} \center \includegraphics[width=0.45\textwidth]{img/pairs_cmp.png} \caption{Comparison of transition pairs} \label{fig:pairs_cmp} \end{figure} \noindent \textbf{Activity transition pairs.} \revise{Fig.~\ref{fig:pairs_cmp} shows each tool's ability to extract activity transition pairs. It can be seen that {{StoryDistiller}}\xspace outperforms the other three static tools for both open-source apps and closed-source apps. More specifically, {{StoryDistiller}}\xspace is able to extract 15.2 and 31.4 transition pairs on average for each open-source app and each closed-source app, respectively. Compared with {{StoryDroid}}\xspace, {{StoryDistiller}}\xspace extracts over 20\% more transition pairs, which benefits from the proposed dynamic UI component exploration.
Compared with IC3 and Gator, {{StoryDistiller}}\xspace extracts more than twice as many transition pairs (7.78 for IC3, 9.96 for Gator vs. \textbf{23.30}) on all these selected apps.} \revise{As for {{StoryDistiller}}\xspace, it can extract activity transitions with respect to all the features such as fragments, inner classes, and callbacks. Since we extract transitions by tracking the particular APIs (e.g., {StartActivity}, {StartActivityForResult}, and {StartActivityIfNeeded}) that start new activities and leveraging data-flow analysis, the extracted transitions are more accurate. \revise{To investigate the contribution of fragments and inner classes to the ATG, we record the number of apps that use a fragment or an inner class to start new activities. The result shows that 44 apps use fragments and 84 apps use inner classes to start new activities, indicating the popularity of these two constructs for building activity transitions in real scenarios.} Besides, the dynamic UI component exploration can also augment the ATG. Even so, {{StoryDistiller}}\xspace may still miss some transitions due to the limitations of the underlying tools, such as decompilation failures or extraction errors for certain classes. Besides, developers may define their own methods to start new activities instead of using the default patterns (e.g., the open-source app \textit{c.geo}~\cite{geo}), so some activity transition pairs cannot be identified and extracted. Similarly, intent overloading~\cite{overloading} would also lead to missing activity transitions (e.g., \textit{FBReader: Favorite Book Reader}~\cite{fbreader}). To some extent, dynamic UI component exploration alleviates this problem and augments the transitions effectively.
Overall, the results shown in Fig.~\ref{fig:pairs_cmp} demonstrate the effectiveness of {{StoryDistiller}}\xspace in extracting activity transitions over other existing tools.} \revise{Compared with IC3, {{StoryDistiller}}\xspace has advantages in handling inner classes, fragments, and callbacks when extracting activity transitions, which has been evaluated in {{StoryDroid}}\xspace~\cite{chen2019storydroid}. However, according to our investigation, for intent overloading with complex parameters, IC3 can statically extract partial activity transitions. Therefore, to obtain a comparatively complete ATG and maximize the activity coverage, we implemented {{StoryDistiller}}\xspace by integrating the transition results of IC3.} \begin{figure} \center \includegraphics[width=0.45\textwidth]{img/coverage_cmp.png} \caption{Comparison of activity coverage} \label{fig:coverage_cmp} \end{figure} \noindent \textbf{Activity coverage.} Fig.~\ref{fig:coverage_cmp} depicts the activity coverage results of each tool. Compared with the dynamic method, on average, \revise{{{StoryDistiller}}\xspace outperforms Stoat in terms of activity coverage, achieving 88.5\% (vs. 43.2\%) and 66.4\% (vs. 29.4\%) coverage on open-source apps and closed-source apps, respectively. In addition, {{StoryDistiller}}\xspace costs much less time (i.e., 8.50 minutes on average) to extract and render the activities than Stoat (i.e., 30 minutes). The time cost includes the apk instrumentation and UI page rendering. As for the comparison with static methods, the performance trend is similar to that of activity transition pairs: {{StoryDistiller}}\xspace still outperforms the other tools, achieving nearly 80\% coverage on average.
Compared with {{StoryDroid}}\xspace, {{StoryDistiller}}\xspace improves activity coverage by over 10\%.} {{StoryDistiller}}\xspace does not cover all the activities of some apps for the following reasons: (1) The limitations of reverse-engineering techniques: some classes and methods cannot be decompiled from APKs, causing failures in activity transition extraction and lowering coverage. This situation is more severe in closed-source apps due to packing~\cite{packers} and code obfuscation techniques~\cite{proguard,dasho}. (2) Dead activities (activities with no transitions), such as unused legacy code and testing code in apps. We also investigate the reasons why dynamic exploration tools such as {Stoat} achieve low activity coverage: (1) Login requirement. For example, {Stoat} fails to explore {Santander}, a banking app that requires login with a password or fingerprint. (2) Lack of specific events. For example, {Open Training} is a fitness-training app in which fitness plans are created by swiping across the screen. However, {Stoat} does not support such events, resulting in low coverage. \begin{tcolorbox}[size=title,opacityfill=0.1,breakable] \revise{\textbf{Answer to RQ1.} {{StoryDistiller}}\xspace outperforms the static methods (e.g., IC3, Gator, and {{StoryDroid}}\xspace) in terms of activity transition pairs (\textbf{23.3} vs. 7.8 in IC3, 10.0 in Gator, and 18.2 in {{StoryDroid}}\xspace), and the dynamic method (e.g., Stoat) in terms of activity coverage (\textbf{77.5\%} vs. 36.3\% in Stoat).
Therefore, {{StoryDistiller}}\xspace is able to obtain a more complete activity transition graph compared with existing tools.} \end{tcolorbox} \subsection{RQ2: Effectiveness of UI Page Rendering}\label{subsec:rq2} \subsubsection{Setup} To investigate the effectiveness of UI page rendering, \revise{we compare {{StoryDistiller}}\xspace (dynamic method) and {{StoryDroid}}\xspace (static method) in terms of the ratio of rendered pages and the UI similarity of the rendered pages, using the 150 Android apps from RQ1.} Specifically, (1) we first investigate the ratio of UI pages (activities) that are successfully launched in each app, denoted by $LaunchR$. \[ LaunchR_i=\frac{N_{i}^{Launched\_act}}{N_{i}^{All\_act}} \times 100\% \] where $N_{i}^{Launched\_act}$ indicates the number of successfully launched activities and $N_{i}^{All\_act}$ the number of activities declared in the AndroidManifest.xml file of the $i^{th}$ app. (2) We further investigate whether the functionalities of the launched pages are clearly displayed, i.e., whether users can easily and clearly understand the functionality through the UI pages. \revise{To do so, we compute the visual similarity between the rendered UI pages and the real UI pages to demonstrate the rendering ability of {{StoryDistiller}}\xspace in practice. Note that the real UI pages are obtained by Monkey~\cite{monkey}.} \begin{figure} \center \includegraphics[width=0.315\textwidth]{img/cmp-launchratio.png} \caption{Comparison of the launch ratio of activities between {{StoryDroid}}\xspace and {{StoryDistiller}}\xspace} \label{fig:cmp-ratio} \end{figure} \begin{figure*} \center \includegraphics[width=0.95\textwidth]{img/good_cases.pdf} \caption{\revised{Examples of successfully rendered UI pages with diverse components}} \label{fig:good_cases} \end{figure*} \subsubsection{Results of RQ2}\label{subsub:RQ2} \noindent \textbf{Launch ratio.} Fig.~\ref{fig:cmp-ratio} shows the distribution of the launch ratio of each app.
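For concreteness, the two ratio metrics used in RQ1 and RQ2 (activity coverage and $LaunchR$) amount to the following computation; the activity names and counts below are hypothetical:

```python
# Simplified computation of the two evaluation metrics:
# activity coverage (RQ1) and launch ratio (RQ2). Inputs are hypothetical.
def activity_coverage(atg_activities, declared_activities):
    """Unique activities in the ATG over all activities declared in the manifest (%)."""
    return 100.0 * len(set(atg_activities)) / len(declared_activities)

def launch_ratio(n_launched, n_declared):
    """Successfully launched activities over all declared activities (%)."""
    return 100.0 * n_launched / n_declared

cov = activity_coverage({"Main", "Login", "News"},
                        ["Main", "Login", "News", "Debug"])  # 75.0
ratio = launch_ratio(8, 10)  # 80.0
```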
\revise{Note that since {{StoryDroid}}\xspace fails to launch activities due to apk compilation failures for most of the closed-source apps (it succeeds for only 6 out of 75 closed-source apps), to avoid introducing bias from such a small dataset, we only show its result on open-source apps in the box plot. The reasons for such a low launch ratio on closed-source apps are also described in this section.} We can see that, on average, over 80\% of activities (i.e., \rTwo{82.37\% for open-source apps and 80.14\% for closed-source apps, as shown in Table~\ref{tbl:ratio}}) can be launched successfully by {{StoryDistiller}}\xspace on our dataset; the remaining ones encounter crashes when being launched, usually caused by ``NullPointerException'' or ``ClassNotFoundException''. \revise{Besides, we further investigate the contribution of ICC data to activity launching: with the help of the ICC data extracted by {{StoryDistiller}}\xspace, an additional 37.69\% of activities for closed-source apps (29.48\% for open-source apps) are launched successfully.} \revise{\rTwo{In contrast, {{StoryDroid}}\xspace only launches 55\% of activities for open-source apps, as shown in Table~\ref{tbl:ratio}}.} Fig.~\ref{fig:cmp-ratio} also indicates that apps from Google Play are more likely to have lower launch ratios, i.e., more cases at the bottom. \rTwo{Almost all open-source apps (\rThree{i.e., 73 apps}) have launch ratios over 50\%, and the lowest launch ratio is 30\% in our dataset, as shown in Table~\ref{tbl:ratio}}. However, for closed-source apps, there are some cases (\rThree{i.e., 9 apps}) whose launch ratios are below 50\%.
\rTwo{Specifically, the lowest launch ratio is 28.27\%, as shown in Table~\ref{tbl:ratio}.} \revise{A possible reason may be the more complex functionalities in closed-source apps, which is evidenced by the number of transition pairs in Fig.~\ref{fig:cmp-ratio} (a).} \begin{table}[t]\footnotesize \centering \caption{\rTwo{Comparison of the launch ratio of activities (average, minimum, and maximum) between StoryDroid and StoryDistiller}} \rTwo{\begin{tabular}{c|c|c|c|c} \hline \multicolumn{2}{c|}{\textbf{Method}} & \begin{tabular}[c]{@{}c@{}}Static\\ (StoryDroid)\end{tabular} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Dynamic\\ (StoryDistiller)\end{tabular}} \\ \hline \multicolumn{2}{c|}{\textbf{Sources}} & F-Droid & F-Droid & Google Play \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}\textbf{Launch}\\ \textbf{Ratio}\end{tabular}} & Avg. & 55\% & 82.37\% & 80.14\% \\ \cline{2-5} & Min. & 13.64\% & 30\% & 28.27\% \\\cline{2-5} & Max. & 100\% & 100\% & 100\% \\ \hline \end{tabular} } \label{tbl:ratio} \end{table} \revise{The main reason that {{StoryDroid}}\xspace fails to launch activities for most of the closed-source apps is the apk compilation failures listed below, all of which are mitigated by {{StoryDistiller}}\xspace.} (1) \revise{\textit{Due to missing necessary configuration files}.} {{StoryDroid}}\xspace supports rendering UI pages for open-source apps because it requires the configuration file of the project (e.g., \emph{build.gradle}\rThree{\footnote{\rThree{A \emph{build.gradle} file will be generated when creating a new Android project through Android Studio. We take this file as the default \emph{build.gradle} when closed-source apps render the UI pages in {{StoryDroid}}\xspace.}}}), which includes the necessary library dependencies and other configurations; however, the configuration file only exists in the source code and cannot be obtained even by decompiling the apk files.
(2) \revise{\textit{Due to user-defined components~\cite{user_defined_component}, complex grammar representations, and resource file errors (e.g., XML layout files), etc.}} Even for open-source apps, it is still difficult for {{StoryDroid}}\xspace to handle user-defined components and complex grammar representations (e.g., syntactic sugar~\cite{syntactic_sugar}) with its static method, causing rendering failures for these UIs. \revise{Last but not least, errors caused by resource files sometimes occur when building the dummy app.} These limitations make {{StoryDroid}}\xspace ineffective for many apps in our dataset. \vspace{1mm} \noindent \textbf{Visual similarity.} \revise{We compare the visual similarity between the real pages and the UI pages rendered by {{StoryDistiller}}\xspace and {{StoryDroid}}\xspace to demonstrate the quality improvement of the rendered UI pages, based on the 150 apps from RQ1. We obtain real pages by leveraging Google Monkey~\cite{monkey} to dynamically explore UI pages and take screenshots, and select the activities that appear in both the real and rendered sets by their activity names. We use two widely-used similarity metrics~\cite{nguyen2015reverse,chen2019storydroid,chen2019guisquatting}, mean absolute error (MAE) and mean squared error (MSE), to measure the visual similarity.} \revise{The result shows that {{StoryDroid}}\xspace only achieves about 80\% UI similarity, while {{StoryDistiller}}\xspace achieves 96.5\% and 91.6\% UI similarity in terms of MAE and MSE, respectively.} Fig.~\ref{fig:good_cases} shows some real examples rendered by {{StoryDistiller}}\xspace. We can see that {{StoryDistiller}}\xspace can render UI pages with various types of components, such as RadioButton and ListView. Even for UI pages using complex design structures or themes, multiple components, self-defined components, multiple images, or rich page colors, {{StoryDistiller}}\xspace still performs well in most cases.
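The two similarity metrics above can be sketched as follows on flattened grayscale pixel arrays (real comparisons operate on full-resolution screenshots); reporting similarity as $1-$ normalized error is an assumption made for illustration:

```python
# Simplified sketch of MAE/MSE-based visual similarity on grayscale pixel
# arrays. Reporting similarity as 1 - normalized error is an assumption
# for illustration; pixel values are in [0, 255].
def mae_similarity(img_a, img_b, max_val=255.0):
    err = sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)
    return 1.0 - err / max_val

def mse_similarity(img_a, img_b, max_val=255.0):
    err = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    return 1.0 - err / (max_val ** 2)

s = mae_similarity([10, 20, 30], [10, 20, 30])  # identical pages -> 1.0
```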
Compared with {{StoryDistiller}}\xspace, {{StoryDroid}}\xspace only uses testing data to replace the real data for components such as ListView and GridView, which decreases the UI similarity compared with the real UIs. As shown in Fig.~\ref{fig:two_versions}, {{StoryDroid}}\xspace cannot render such complex design structures or themes due to the lack of data dependencies, losing some main functionalities. For example, Fig.~\ref{fig:two_versions} (a), rendered by {{StoryDroid}}\xspace, shows ``No hosts created yet'' without showing the main structure of the UI page, due to the lack of history data for the EditHostActivity. In contrast, Fig.~\ref{fig:two_versions} (b), rendered by {{StoryDistiller}}\xspace, displays all functionalities dynamically, even the save button (on the top right), with the real theme. Similarly, Fig.~\ref{fig:two_versions} (c) cannot be rendered like Fig.~\ref{fig:two_versions} (d) to demonstrate the real functionalities. \begin{figure} \center \includegraphics[width=0.48\textwidth]{img/comp_two_versions.pdf} \caption{Examples of the same UI pages rendered by {{{StoryDroid}}\xspace} and {{StoryDistiller}}\xspace} \label{fig:two_versions} \end{figure} As we investigated, errors sometimes still occur in the UI pages rendered by {{StoryDistiller}}\xspace due to data loss in practice. We summarize six types as follows and show some real cases in Fig.~\ref{fig:bad_cases}. (1) \textit{Remote server data}. Fig.~\ref{fig:bad_cases} (a) shows an activity named ``PaylistActivity''; however, the detailed pay list information is lost since it is stored on the remote server. (2) \textit{Local database data}. Fig.~\ref{fig:bad_cases} (b) shows the profile of a user without detailed data, since the data should be loaded from the local SQLite database. (3) \textit{Unauthorized access to webpages}.
As shown in Fig.~\ref{fig:bad_cases} (c), a WebView page showing the terms and conditions fails to load due to unauthorized access. (4) \textit{Hardware support.} Fig.~\ref{fig:bad_cases} (d) is an app relying on hardware support (i.e., NFC); however, we conduct the UI page rendering on an Android emulator without the required hardware. (5) \textit{Login authentication.} Fig.~\ref{fig:bad_cases} (e) fails to be rendered due to login authentication: only users with valid authentication information can access the page, as indicated by the activity name. (6) \textit{Long loading time.} Fig.~\ref{fig:bad_cases} (f) is a map app; due to inadequate rendering time, the map is not rendered completely. \revised{Although some specific data is not loaded or rendered successfully, the rendered information together with the activity names is still enough for users to understand the functionality of these pages. For instance, for the cases in Fig.~\ref{fig:bad_cases} (a) (b) (f), we can still grasp the core logic of the activities. For the other three cases (i.e., Fig.~\ref{fig:bad_cases} (c) (d) (e)), the activity names contain rich semantics, which can help users understand the core logic.} \begin{tcolorbox}[size=title,opacityfill=0.1,breakable] \revise{\textbf{Answer to RQ2.} {{StoryDistiller}}\xspace achieves a \textasciitilde80\% launch ratio of activities for each app on average, which is much better than {{StoryDroid}}\xspace with only a \textasciitilde55\% launch ratio when rendering UI pages on our dataset.
Moreover, the UI pages rendered by {{StoryDistiller}}\xspace achieve a higher UI similarity than those of {{StoryDroid}}\xspace (\textasciitilde80\% vs. \textbf{\textasciitilde95\%}).} \end{tcolorbox} \begin{figure} \centering \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=0.95\textwidth]{img/userstudy/usage.png} \caption{Years of Android device usage} \label{fig:usage} \end{subfigure}% \hfill \begin{subfigure}[t]{0.23\textwidth} \centering \includegraphics[width=0.95\textwidth]{img/userstudy/topic.png} \caption{Years of conducting Android-related work} \label{fig:topic} \end{subfigure} \caption{Distribution of participants} \label{fig:distribution} \end{figure} \begin{figure*} \center \includegraphics[width=0.95\textwidth]{img/bad_cases.pdf} \caption{\revised{Examples of rendered pages with clear functionalities but with some data loss} } \label{fig:bad_cases} \end{figure*} \subsection{Usefulness Evaluation of {{StoryDistiller}}\xspace}\label{sec:usefulness_eval} Apart from the effectiveness evaluations, we further conduct a user study to demonstrate the usefulness of {{StoryDistiller}}\xspace. Our goal is to check whether {{StoryDistiller}}\xspace can help explore and \revised{review} the functionalities of apps effectively {and efficiently.} \noindent{\bf Dataset of user study.} We randomly select 4 apps (i.e., {Bitcoin}, {Bankdroid}, {ConnectBot}, and {Vespucci}) with different numbers of activities (12-15 activities) from 2 categories (i.e., finance and tool), all hosted on the Google Play Store. Each category contains two apps, and we ask participants to explore each app to finish the assigned tasks. \noindent{\bf Participant recruitment.} \revise{We recruit 12 people, including 2 professors, 2 postdocs, and 6 Ph.D. students from our university and 2 industry staff from local companies, to participate in the experiment via word-of-mouth.
All of the recruited participants have used Android devices for more than one year and have participated in Android-related research topics. The detailed distribution is shown in Fig.~\ref{fig:distribution}.} None of them had used these apps before. \revise{They come from different countries: 1 from the USA, 4 from China, 4 from European countries (e.g., Spain, Germany), and 3 from Singapore.} The participants received a \$10 shopping coupon as compensation for their time. \noindent{\bf Experiment procedures.} We installed the 4 apps on an Android device (Nexus 5 with Android 8.0). The experiment started with a brief introduction to the tasks. We explained and went through all the features we wanted them to use within the apps, and asked each participant to explore and review the 4 apps separately to finish the tasks below. Note that for each category, each participant explored one app with {{StoryDistiller}}\xspace and the other without {{StoryDistiller}}\xspace. To avoid potential bias, the order of app categories and the order of using or not using {{StoryDistiller}}\xspace were rotated based on a Latin square~\cite{winer1962statistical}. This setup ensures that each app is explored by multiple participants with different development experience. We told each participant to complete the following task with the given apps: manually explore as many functionalities of the apps as possible in 10 minutes, which is far longer than the typical average app session (71.56 seconds)~\cite{bohmer2011falling}, and understand the app functionalities with {{StoryDistiller}}\xspace. After the exploration, participants were asked to rate their satisfactoriness with the exploration (on a 5-point Likert scale, with 1 being least satisfied and 5 being most satisfied). All participants carried out the experiments independently, without any discussion with each other. After performing the task, they were required to write some comments about our tool.
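The Latin-square counterbalancing mentioned above can be sketched as follows; the condition labels (app/tool pairings) are illustrative:

```python
# Sketch of a cyclic Latin-square rotation used to counterbalance app order
# and tool condition across participants. Condition labels are illustrative.
def latin_square(conditions):
    """Each condition appears exactly once per row (participant) and per column (slot)."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

orders = latin_square(["Finance+tool", "Finance-tool", "Tool+tool", "Tool-tool"])
# Row i gives the exploration order for participant group i.
```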
\begin{table}\footnotesize \caption{User study results of app exploration and review. $*$ denotes $p$ $<$ 0.01 and $**$ denotes $p$ $<$ 0.05.} \centering \begin{tabular}{l c c} \hline {\bf Metrics} & \begin{tabular}[c]{@{}c@{}}{\bf Manual} {\bf Exploration}\end{tabular} & \begin{tabular}[c]{@{}c@{}}{\bf StoryDistiller}\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}{\bf Avg. Time (min)}\end{tabular} & 5.47 & 2.85$^*$ \\ \begin{tabular}[c]{@{}c@{}}{\bf Avg. Coverage}\end{tabular} & 39.06\% & 88.30\%$^*$ \\ {\bf Satisfactoriness (1-5)} & 3.99 & 4.48$^*$$^*$ \\ \hline \end{tabular} \label{tbl:study1} \end{table} \vspace{1mm} \noindent{\bf Experiment results.} As displayed in Table~\ref{tbl:study1}, the average activity coverage of manual exploration is quite low (i.e., 39.06\%), showing the difficulty of exploring app functionalities thoroughly by manual exploration. However, the participants' satisfactoriness with the completeness of their exploration is high (i.e., 3.99 on average). This indicates that development teams are sometimes not aware that they miss many features when exploring others' apps. Such blind confidence and neglect may further negatively influence their strategy or decisions when developing their own apps. \revise{Compared with manual exploration, {{StoryDistiller}}\xspace achieves over 2 times the activity coverage (\textbf{88.30\%} vs. 86.50\% in {{StoryDroid}}\xspace) with less time cost (\textbf{2.85 minutes} on average vs. \revise{2.5 minutes in {{StoryDroid}}\xspace}) to help understand the app functionalities. According to the \rTwo{participants}' feedback, the average satisfactoriness of {{StoryDistiller}}\xspace is \textbf{4.48} (vs.
4.40 in {{StoryDroid}}\xspace), which represents the usefulness of helping participants explore and understand app functionalities.} To assess the significance of the differences between working without and with {{StoryDistiller}}\xspace, we carry out the Mann-Whitney U test~\cite{utest}, which is designed for small samples. The results in Table~\ref{tbl:study1} are significant with $p$-value $<$ 0.01 or $p$-value $<$ 0.05. \section{Dataset and Possible Applications}\label{sec:rq3} \revise{As aforementioned, {{StoryDistiller}}\xspace is a fundamental tool that constructs a multi-dimensional dataset (e.g., app storyboards and UI components). Such a rich dataset can be used to expand the horizon of current mobile app research. In this section, we discuss several application scenarios leveraging this dataset.} \subsection{UI Design Recommendation and Layout Code Generation} Developing the GUI of a mobile application involves two steps, i.e., UI design and implementation. Designing a UI focuses on proper user interaction and visual effects, while implementing a UI focuses on making the UI work as designed with the proper layouts and widgets of a GUI framework. For the tasks of UI design recommendation~\cite{bernal2019guigle} and layout code generation~\cite{chen2018ui}, our dataset provides a large set of diverse UI pages, as well as the corresponding layout code. The diversity of the collected data depends on {{StoryDistiller}}\xspace's ability to thoroughly explore apps' UI pages. Additionally, it is crucial to provide real UI pages for the UI design recommendation task. Based on the results of \emph{ATG extraction} (\S~\ref{subsec:rq1}) and \emph{UI page rendering} (\S~\ref{subsec:rq2}), {{StoryDistiller}}\xspace is able to obtain high activity coverage compared with dynamic testing tools and a high success rate of UI page rendering. Moreover, the rendered UI pages are almost the same as the real ones that users would observe.
\revise{The UI pages with attributes in our dataset can assist both UI designers and developers.} Such a dataset bridges the gap across abstract activities (text), UI pages (images), and detailed layout code (i.e., activity $\rightarrow$ UI page $\rightarrow$ layout code) so that they can be searched as a whole. Thanks to this mapping relation, UI/UX designers can directly use keywords (e.g., ``Login'' and ``Search'') to search for UI images by matching the activity names of the UIs in our dataset. The retrieved images can be used to inspire their own UI designs. UI developers can also benefit from searching our dataset for UI implementation. In another application scenario, given a UI design image from designers, developers can search for similar UIs in our dataset by computing image similarity. As each UI page in our dataset is also associated with its run-time UI code, developers can choose the most related UI page from the candidate list and then customize the UI code for their own purpose to implement the given UI design. Additionally, by training a neural machine translator, one can translate a UI design image into a GUI skeleton. Chen et al.~\cite{chen2018ui} collected their training data using the dynamic testing tool {Stoat}. However, according to the experimental results of ATG generation, we find that {{StoryDistiller}}\xspace covers 2 times more activities than {Stoat} in less time. Consequently, the diversity of the training data used in~\cite{chen2018ui} is limited, whereas our constructed dataset of UI pages is more comprehensive, with diverse UI designs. \subsection{UI Component Recommendation} UI component sharing provides an opportunity to learn about GUI designs, gain design inspiration, and understand design trends~\cite{chen2019gallery}.
To enable the recommendation task of GUI components, \revise{our dataset collects a large number of separate UI components (e.g., ``Button'') together with their attributes and the corresponding back-end design code.} Based on them, we highlight some typical tasks or potential application scenarios as follows. (1) Alice aims to design a social media app and wants to decide the style of its buttons so that they fit the theme of such an app. With the constructed dataset, she can search through hundreds of buttons to get inspiration. From the candidates returned by the dataset, she can choose the most attractive one as the final choice for her own app. (2) Apart from the style of a UI component, its size and color are also provided to Alice. She may observe, for example, that social media apps usually use larger buttons with bright colors. (3) Based on the results of multiple searches, Alice may also learn the design trend of UI components in one app category, which is likewise helpful for developing apps in specific categories. \subsection{Code Search} When developers implement their own apps, aiming to maintain a competitive edge in the market, they usually attempt to get inspiration from similar components (e.g., Activities) implemented in other apps, because components with the same semantic name have a great probability of having similar logic and architectures (e.g., method hierarchy). \revise{To enable such a code search task, our constructed dataset also collects the logic code with Activity names.} Firstly, we divide the apps based on their app categories, such as finance, social media, and news, since apps in the same category contain more common features. Secondly, we group the activities if they have the same semantic activity name, such as LoginActivity, RegActivity, AboutActivity, and EditActivity. For example, Bob is a junior app developer.
For the login activity, he may only implement the basic logic, i.e., collect users' inputs and validate whether the inputs are consistent with the information stored in the server or the database. With the help of our constructed dataset, he can search for similar implementations by the same Activity name, i.e., \textit{LoginActivity} or just \textit{Login}. After searching, he would note that he should also validate the input format before submitting the users' inputs, which is a typical specification. In this way, logic code under the same name can help developers improve the quality of their own apps and implement more interesting features. \subsection{{{StoryDistiller}}\xspace for App Testing} \noindent \textbf{App GUI testing.} \revise{According to many previous studies~\cite{choudhary2015automated,zeng2016automated}, most dynamic GUI testing tools for apps, such as Monkey and Stoat, achieve only about 40\% activity coverage, mainly due to the lack of proper user inputs satisfying complex constraints. Thanks to the relatively complete ATG constructed by {{StoryDistiller}}\xspace, we can leverage it to explore more activities and enhance the exploration capability of transition-based dynamic testing tools. For example, when apps are under test with Monkey, we can identify the transitions that are never explored by Monkey by comparing the transitions and covered activities. For the uncovered transitions, based on our ATG, we can directly launch the target activities and make the testing tool start exploring from this new state (using its own exploration strategy) to cover more states and detect more bugs.} \vspace{1mm} \noindent \textbf{App regression testing.} Reusing test cases is useful to improve the efficiency of regression testing for Android apps~\cite{rothermel2001prioritizing}. \revise{{{StoryDistiller}}\xspace can help guide app regression testing by identifying the ATG and UI components that have been modified.
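One way such modified UI components could be identified is by diffing the stored layout code of two app versions per activity. The sketch below uses Python's stdlib difflib on toy layout strings; the activity names and layout snippets are hypothetical, not taken from the dataset.

```python
import difflib

def modified_activities(old_layouts, new_layouts):
    """Map activity -> unified diff of its layout code between versions.
    An empty diff means the page is unchanged and its test cases can be
    reused as-is; a non-empty diff flags tests to update."""
    changed = {}
    for activity, new_code in new_layouts.items():
        old_code = old_layouts.get(activity, "")
        diff = list(difflib.unified_diff(
            old_code.splitlines(), new_code.splitlines(), lineterm=""))
        if diff:
            changed[activity] = diff
    return changed

# toy layout code for two versions of the same app
v1 = {"MainActivity": "<LinearLayout>\n<Button id='ok'/>\n</LinearLayout>",
      "AboutActivity": "<TextView/>"}
v2 = {"MainActivity": "<LinearLayout>\n<Button id='ok'/>\n<Button id='share'/>\n</LinearLayout>",
      "AboutActivity": "<TextView/>"}
changed = modified_activities(v1, v2)
```

Here only the activities appearing in `changed` would need their regression test cases revisited; the rest can be reused unchanged.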
Note that different versions of a single app share many common functionalities, which means most of the UI pages in the newer version are the same as in the previous version. The ATGs of different versions can easily be used to identify the common functionalities. Meanwhile, {{StoryDistiller}}\xspace stores the mapping relation between each UI page and the corresponding layout code; therefore, analyzers can obtain the modified UI components by analyzing the differences in layout code, and further update the test cases accordingly.} In this scenario, most of the test cases can be reused, and the modified components can be identified effectively to guide test case updates for regression testing. \section{Limitations}\label{sec:limitation} In this section, we discuss the limitations of {{StoryDistiller}}\xspace. \noindent \textbf{Incomplete features due to the underlying tools.} The inputs of UI page rendering are extracted by static analysis based on {Soot}, but some files fail to be transformed, so the call graphs can still be incomplete. As for closed-source apps, {jadx} is used to decompile the apk to Java code. However, some Java files fail to be decompiled, which affects the analysis results of UI page rendering. According to our observation, however, these cases rarely appear in real apps. Besides, as the activities spawned by other components (e.g., Broadcast Receivers) can only be dynamically loaded, our static-analysis based approach cannot deal with them. \vspace{1mm} \noindent \textbf{Failures in UI page rendering.} Although {{StoryDistiller}}\xspace achieves a launch ratio of \textasciitilde80\% of activities for each app on average, some UI pages still cannot be rendered successfully due to several kinds of errors.
(1) Some activities require valid authentication information to launch; that is, they check whether the current state holds valid authentication (e.g., a successful login) before rendering the page. If such an activity is launched without valid authentication, it may redirect to the sign-in or sign-up page. This scenario is an open challenge in Android app testing: unless testers provide the login information beforehand to enable the login process, the app cannot continue to explore the pages that require valid authentication. Thus, {{StoryDistiller}}\xspace fails to render these kinds of activities. (2) Although we provide the required ICC data as activity launching parameters, some activities still need to load other required data from local storage (e.g., SharedPreferences, SQLite databases) or remote servers. {{StoryDistiller}}\xspace cannot provide this kind of required data so far, causing failures when launching such activities. \vspace{1mm} \noindent \textbf{\rTwo{Incomplete activity presentations due to fragments.}} \rTwo{As aforementioned, an activity may have multiple fragments in practice. It is possible to declare in the static layout file of an activity that it contains fragments (i.e., static binding), in which case the fragments are treated as views when rendering the activity. Alternatively, developers can bind the fragments of an activity (e.g., add, delete, and replace them) at runtime (i.e., dynamic binding). The current version of StoryDistiller records only one UI page per activity, with its static fragments. If the current activity uses the static binding method to bind fragments, StoryDistiller can leverage the proposed hybrid method to render the activity with its fragments.
However, if the fragments are integrated into the activity at runtime, triggered by users or specific operations, StoryDistiller cannot yet record the changes of different fragments within one activity.} \section{Related Work}\label{sec:relatedwork} \noindent{\bf Assist Android development}. The GUI provides a visual bridge between apps and users, through which they interact with each other. Developing the GUI of a mobile app involves two separate but related activities: designing the UI and implementing it. To assist UI implementation, Nguyen and Csallner~\cite{nguyen2015reverse} reverse-engineer UI screenshots using image processing techniques. More powerful deep-learning based algorithms~\cite{chen2018ui, beltramelli2018pix2code, moran2018machine} have further been proposed to leverage the existing big data of Android apps. Retrieval-based methods~\cite{reiss2018seeking, behrang2018guifetch} are also used to develop user interfaces. Reiss~\cite{reiss2018seeking} parses a sketch into structured queries to search related UIs of Java-based desktop software in a database. Different from these UI implementation studies, our study focuses on the generation of app storyboards, which contain not only the UI code but also the transitions among the UIs. In addition, the UI code generated in prior work~\cite{nguyen2015reverse, chen2018ui, beltramelli2018pix2code, moran2018machine} is all static layout code, which conflicts with our observation in Section~\ref{sec:background} that developers often write Java code to dynamically render the UI. In our work, we provide developers with the original UI code (static, dynamic, or hybrid) for each screen. Such real code makes it easier for developers to customize the UIs to their own needs. Apart from UI implementation, some studies also recommend UI designs~\cite{chen2019gallery} and explore discrepancies between a UI design and its implementation.
Moran et al.~\cite{moran2018automated} check whether a UI implementation violates the original UI design by comparing image similarity with computer vision techniques. They further detect and summarize GUI changes in evolving mobile apps. They rely on dynamically running apps to collect UI screenshots, which is time-consuming and leads to low coverage of the app. In contrast, our method can extract most UI pages of an app statically, so it can complement these studies for related tasks. {GUIfetch}~\cite{behrang2018guifetch} adapts Reiss's method~\cite{reiss2018seeking} to Android app UI search by considering the transitions between UIs. It can also extract UI screenshots with the corresponding transitions, but our work differs from theirs in two aspects. First, their model can only deal with open-source apps, while ours can also reverse-engineer closed-source apps, hence offering more generality and flexibility. Second, {GUIfetch} is much more heavy-weight than our static-analysis based approach, as it relies on both static analysis for UI code extraction and dynamic analysis for transition extraction. In addition, dynamically running the app usually cannot cover all screens \revised{(as with Stoat)}, leading to a loss of information. \noindent{\bf Assist app comprehension by reverse engineering.} In the reverse engineering of Android apps, researchers rely on state-of-the-art tools (e.g., {Apktool}~\cite{apktool}, {Androguard}~\cite{androguard}, {dex2jar}~\cite{dex2jar}, {Soot}~\cite{soot}) to decompile an {APK} into intermediate languages (e.g., {smali}, {jimple}) or Java code. Android reverse engineering is usually used to understand and analyze apps~\cite{understand}. It can also be used to extract features for Android malware detection~\cite{chen2016stormdroid}. However, reverse engineering only provides basic functionality for code review.
Different from general reverse engineering with plain decompiled code, our work extracts more abstract representations, i.e., the storyboard of each app, to give an overview of app functionalities and the mappings between UI pages and the corresponding layout code. Such a storyboard can directly help product managers and designers with no technical expertise to understand competitor apps. \noindent{\bf Assist Android app analysis.} Many static analysis techniques~\cite{azim2013targeted,octeau2013effective,octeau2015composite,arzt2014flowdroid,li2015iccta,fan18,fan18efficiently,chen2018mobile,chen2018ausera} have been proposed for Android apps. A$^3$E provides two strategies, targeted and depth-first exploration, for systematic testing of Android apps~\cite{azim2013targeted}. It also extracts static activity transition graphs for automatically generated test cases. Unlike this testing-oriented goal, we extract activity transition graphs to identify and systematically explore the storyboards of Android apps. Epicc is the first work to extract component communication~\cite{octeau2013effective}; it determines most Intent attributes for component matching. {ICC}~\cite{octeau2015composite} significantly outperforms Epicc in extracting inter-component communication by utilizing a solver for MVC problems based on the proposed {COAL} language. FlowDroid~\cite{arzt2014flowdroid} and IccTA~\cite{li2015iccta} extract call graphs based on {Soot} for data-flow analysis, detecting data leakage and malicious behaviors~\cite{chen2016stormdroid, fan2016poster,chen2016towards,chen2018automated,chen2019poison,fakeapp, chen2018ausera, chen2018mobile, chen2019ausera}. Liu et al.~\cite{liu2016understanding} utilized program analysis to understand the patterns that cause functional and nonfunctional issues, and proposed a static analysis tool to detect the two most common patterns of wake lock misuse.
Wei et al.~\cite{wei2017oasis} combined program analysis and NLP techniques to prioritize Lint warnings by leveraging app user reviews. Dong et al.~\cite{dongicse2020} proposed time-travel testing for Android apps, which can transition back to previously explored states when needed. Wei et al.~\cite{wei2019pivot} proposed an approach that automatically learns, from existing Android apps, API-device correlations of compatibility issues induced by fragmentation. \rTwo{Yan et al.~\cite{yanicse2020} proposed multi-entry testing for Android apps by analyzing the constraints for launching an activity; the solved constraints are used to launch the activity through a third-party app. They did not focus on ATG construction; instead, they focused on the construction of Activity Launching Models (ALMs) by a static method (i.e., starting with a coarse-grained ATG, as mentioned in their paper). By contrast, extracting a relatively comprehensive ATG is one of the most important goals of our work, {and we not only statically extract the transitions between activities, fragments, and inner classes, but also dynamically augment the ATG to construct a comprehensive graph}. In terms of dynamic exploration, their goal is to adjust the weights of their Activity Launching Contexts (ALCs) dynamically to explore apps and find bugs {by leveraging their constructed Activity Launching Model}. Instead, our goal is to augment the transition graph extracted by the purely static method of previous work~\cite{chen2019storydroid} {by traversing the actionable components in the UI page to explore as many transitions as possible}.
As for activity launching, their method requires building a dummy app to launch activities, due to the limitations of launching via adb, while our work addresses this problem by instrumenting the app, so that activities can be launched directly from the console via adb instead of through a dummy app as in~\cite{yanicse2020}.} Compared with them, we provide another novel solution to assist Android app testing, i.e., revealing the relations between different components together with rich attributes to help understand the semantics and functionality of apps. \section{Conclusion}\label{sec:conclusion} In this paper, we propose {{StoryDistiller}}\xspace, a system to distill visualized storyboards of Android apps with rich features by extracting a relatively complete ATG and rendering UI pages dynamically with the help of the extracted ICC data. Such a storyboard benefits different roles (i.e., PMs, UI designers, developers, and testers) in the app development and analysis process. The extensive experiments and the user study demonstrate the effectiveness and usefulness of {{StoryDistiller}}\xspace. Based on the outputs of {{StoryDistiller}}\xspace, we constructed different kinds of large-scale datasets to bridge the gap across app activities (descriptive text), UI pages (images), and implementation code (source code). In the future, we will further explore these potential applications, and also extend our approach to other platforms, such as iOS apps and desktop software, for more general usage. \section*{Acknowledgments} {We appreciate all the reviewers for their valuable comments. This work was partially supported by the National Natural Science Foundation of China (No. 62102284, 62102197).} \bibliographystyle{IEEEtran}
\section{Main text} Due to Brownian motion, it is impossible to keep a molecule within the detection volume for an extended period of time. Nanofluidics became popular as a way to circumvent this problem, with wide scientific applications \cite{Eijkel2005, Pennathur2005, Kameoka2006, Suman2009}. With the advancement of nanotechnologies, nanofluidic devices comprising arrays of nanochannels with diameters of less than a hundred nanometres have become significant for bioanalytical diagnostics applications, such as DNA optical mapping \cite{Persson2010a, Kim2011, Bashir2011, Wang2015} and single virus and nanoparticle detection \cite{Novotny2010, Hawkins2010, Cleland2011, Merkoci2011, Sanli2015, Sanli2016}. Nanochannels are useful tools allowing systematic studies of entities from single molecules to viruses over long periods of time. They induce 1D fluidic confinement by suppressing thermal motion in two directions. The primary problem is to establish a simple and efficient way of nanofabricating 1D nanofluidic channels. Different methods describing the fabrication processes of such nanofluidic devices have been published \cite{Persson2007, Bien2012, Park2010, Tegenfeldt2010, Han2005}. However, the majority of them are technically challenging, costly, and not easily applicable for high-throughput production. Hence, we present a cost-effective and simple nanofabrication technique based on electron beam lithography (EBL) and shadow-angle-electron-beam-deposition (SAEBD). The nanochannels were used to detect the flow of photoluminescent nanomaterials, such as small DNA molecules labelled with single organic fluorophores, carbon nanodots (CNDs) \cite{Ghosh2014}, and pure organic fluorophores. Using two-focus fluorescence correlation spectroscopy (2fFCS) \cite{Eigen1999, Joerg2007, Joerg2008, Joerg2010}, we recorded the transits of single molecules or nanodots through the nanochannels and quantitatively analysed their flow velocities.
Here, the detection regions were two diffraction-limited focal spots with a lateral displacement of 400 nm. To ensure 1D transport between the two foci, the diameter of the nanochannels should be smaller than the focal diameters. The nanochannels presented here have diameters ranging from 30 nm to 100 nm, which are smaller than the focal diameter. The process of creating enclosed nanochannels involves the nanofabrication of open nanochannels using EBL and reactive ion etching (RIE), followed by enclosing them using SAEBD. The ballistic path of electron-beam (e-beam) assisted evaporation \cite{Becker2013} is the principle behind SAEBD. When a straight beam of depositing material hits an open nano-trench at a shallow incident angle, no deposition occurs in the shadowed region \cite{Nakayama2007, Donald2010}. Such deposition can enclose a large number of parallel nanochannels (depending on the e-beam diameter), leaving the shadowed region as the fluidic path of interest. The process is unaffected by nanometre-sized leftover residues of the e-beam resists and does not require an atomically clean surface, unlike any wafer bonding process \cite{Ghosh2014-1}. For optical reasons, the final nanochannels were prepared using pure silicon dioxide. Being an insulator, silica undergoes surface charging under the e-beam, which affects EBL and SEM. The EBL- and SEM-induced surface charges were compensated using a thin film of gold. Here we show the SAEBD process using silicon, to obtain high resolution electron microscopy images of the intermediate steps. Figure \ref{fig:1}\textbf{a} shows a schematic flow-chart of creating nanochannels on [100] silicon wafers. The width of the nano-trenches (i.e. the final width of the enclosed nanochannels) can be tuned via the e-beam exposure of the positive e-beam resist during EBL. Nano-trenches of different widths were created, ranging from 30 nm to 100 nm. In Figure \ref{fig:1}\textbf{b} we observe nano-trenches of two such widths, 65 nm and 100 nm.
The nanolithographed e-beam resist acted as a mask for RIE to etch the final nano-trenches into the silicon. The depth of the nano-trenches, as measured by AFM, was 35 nm to 40 nm. \begin{figure}[!ht] \includegraphics[scale=0.51]{ncFig1.pdf} \caption{\textbf{Nanochannel fabrication using SAEBD.} \textbf{a.} Fabrication of nano-trenches on silicon with EBL and RIE. \textbf{b.} SEM of silicon nano-trenches with 62 nm and 100 nm width. \textbf{c.} SAEBD at angle $\theta$. \textbf{d.} Shadows of the electron beam -- arrows represent angular e-beam evaporation. \textbf{e.} Schematic of high-angle deposition and \textbf{f.} SEM of deposition at $80^\circ$. \textbf{g.} Schematic of low-angle deposition and \textbf{h.} SEM of deposition at $45^\circ$. \textbf{i.} FIB cross-section of the enclosed nanochannels. Two layers of platinum were used to protect the nanochannel from high-energy ions. \textbf{j.} Magnified view of the enclosed nanochannel, where Si-1 is the silicon substrate on which the nano-trenches were fabricated, Ti-2 is the 50 nm thick titanium layer deposited by SAEBD, and Pt-3 and Pt-4 are platinum layers deposited inside the FIB. \textbf{k.} Further magnified view of the nanochannel, where `1' represents silicon (Si-1) and `2' represents titanium (Ti-2). \label{fig:1}} \end{figure} In the next step, SAEBD was used to enclose the nano-trenches and create enclosed nanochannels. Figure \ref{fig:1}\textbf{c} schematically explains the concept of the SAEBD process. A high-energy e-beam (bent with a magnet) sublimates the material, which deposits onto the substrate. Figure \ref{fig:1}\textbf{d} schematically explains the role of the deposition angle ($\theta$). The shadowed region is created by the angular deposition; this region is unexposed to the depositing material. The deposited region and the shadowed region are colour-coded red and yellow, respectively. In the time-evolution schematic, SAEBD gradually encloses the nano-trenches, leaving a void.
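The shadowing geometry in Figure 1d can be captured by a simple ballistic model. Assuming the deposition angle $\theta$ is measured from the substrate plane (so $90^\circ$ is normal incidence) and the incoming flux travels in straight lines, a trench edge of depth $d$ shadows a region of length roughly $d/\tan\theta$, so lowering $\theta$ lengthens the shadow. The numbers in the sketch below are illustrative only.

```python
import math

def shadow_length(depth_nm, theta_deg):
    """Length of the region shadowed by a trench edge of the given depth
    when material arrives at angle theta above the substrate plane
    (simple straight-line ballistic model)."""
    return depth_nm / math.tan(math.radians(theta_deg))

# 40 nm deep trench, comparable to the AFM-measured depth above
steep = shadow_length(40, 80)    # near-normal incidence: short shadow
shallow = shadow_length(40, 45)  # at 45 deg the shadow equals the depth
grazing = shadow_length(40, 10)  # grazing incidence: very long shadow
```

This reflects the qualitative trend in the text: high-angle deposition ($80^\circ$) barely shadows the trench, while $45^\circ$ and lower angles shadow a region at least as wide as the trench is deep, which is what keeps the fluidic path open while the top is sealed.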
By decreasing $\theta$, one can increase the hypotenuse. At an acute angle close to $0^\circ$, the hypotenuse becomes nearly parallel to the base, as shown in Figure \ref{fig:1}\textbf{e}-\textbf{h}. Figure \ref{fig:1}\textbf{e} schematically represents a high-angle deposition, which was realised at $80^\circ$ in Figure \ref{fig:1}\textbf{f}. Here, an array of 5 mm long nano-trenches was cross-sectioned using a wafer sawing instrument to observe the intermediate steps while performing SAEBD. Figure \ref{fig:1}\textbf{g} schematically represents a low-angle deposition, which was realised at $45^\circ$ in Figure \ref{fig:1}\textbf{h}. Designing the deposition stage is difficult for an acute angle close to $0^\circ$. Nevertheless, satisfactory results were obtained using $\theta = 45^\circ$, as shown in Figure \ref{fig:1}\textbf{i}-\textbf{k}. 60 nm of titanium was deposited on the open nano-trenches at an angle of $45^\circ$ with a deposition rate of 1~\AA/s at a pressure of $2\times 10^{-6}$ mbar. We predict that at an angle of $\sim 30^\circ$ one can prepare a high-quality flat edge. We used FIB to cross-section the produced nanochannels for characterisation. To avoid ion-beam induced damage, the top part of the region to be milled was protected with metallic thin-film layers. We deposited two thin films on the top surface of the enclosed nanochannels. Figure \ref{fig:1}\textbf{i}-\textbf{k} show SEM images of the milled regions from low to high magnification. In Figure \ref{fig:1}\textbf{j}, the first layer (Si-1) is the silicon substrate on which the nano-trenches were fabricated. The second layer is the titanium (Ti-2) layer SAEBD-ed at $45^\circ$. The third and fourth layers, Pt-3 and Pt-4, are platinum layers of 100 nm and 450 nm thickness, respectively; they acted as protective layers to avoid FIB-induced damage. Pt-3 and Pt-4 were deposited using the FIB and have no relation to the SAEBD process.
In Figure \ref{fig:1}\textbf{k}, we observe the magnified cross-section of an enclosed nanochannel. Here, `1' and `2' refer to the Si-1 and Ti-2 layers, respectively. As expected, the non-conformal SAEBD growth of the titanium thin film produces a well-defined flat enclosure of the nanochannel. Depositing 60 nm of titanium by SAEBD at a $45^\circ$ angle should produce a vertical thickness of 51 nm. In Figure \ref{fig:1}\textbf{j}, the vertical thickness of the titanium film is 47.8 nm. Considering the uncertainty of a few nanometres in the deposition thickness, this agrees well with our expectation. After this proof-of-principle experiment with titanium, fused-silica based nanofluidic devices were prepared. Here, a 5 nm gold thin film was sputter-coated onto the silica wafer prior to spin-coating the e-beam resist. Monte-Carlo simulations were performed to optimise the gold thin-film thickness in order to reduce the charging effect of silica under EBL. The fused-silica nano-trenches were enclosed by SAEBD using silicon dioxide at $45^\circ$. \hspace{2mm} The design of the silica-based nanofluidic device used to perform single-molecule experiments is schematically illustrated in Figure \ref{fig:2}\textbf{a}. It has two reservoirs, with a gradual decrease of length scale from milli-scale to microscale and, finally, the nanometre-sized channels. The two milli-scale reservoirs were used as inlet and outlet. They were sandblasted into the silica wafer using 70 ${\upmu}$m silica particles prior to the nanofabrication process. The microscale reservoirs, with the same depth as the nanochannels and micrometre widths, were etched with RIE. In Figure \ref{fig:2}\textbf{a}, the white regions and blue regions are the milli-scale and microscale reservoirs, respectively (Supporting Information Figures S1 and S2). The red stripes correspond to the nanochannels. They are connected to the microscale reservoirs. An SEM image of these silica nanochannels is also shown in the right inset of Figure \ref{fig:2}\textbf{a}.
We filled these nanochannels with a solution of fluorescent probes, which were unidirectionally transported by electro-osmotic flow \cite{Zare1988, Hu1998, Zhang2000, Suman2007}, as shown in Figure \ref{fig:2}\textbf{b}. After filling one reservoir with fluorescent molecules (diluted in PBS buffer; see Methods, electro-osmotic flow), capillary forces transported the fluid to the other reservoir. A relaxation time of 30 s was allowed to avoid the development of trapped air bubbles, and only then was the second reservoir filled. Two 100 ${\upmu}$m thick platinum electrodes were immersed into the PBS-filled reservoirs (Supporting Information Figure S1), and an electric field was applied along the nanochannel (Supporting Information Figures S1 and S4 -- electrode integration with the nanofluidic device). In Figure \ref{fig:2}\textbf{c}, single Alexa-647 fluorophore molecules (purchased from Thermo Fisher, Massachusetts, USA) are horizontally lined up in all the parallel nanochannels. The pixel-to-photon-count profile plot shows an average SNR of 90. This image was captured with a wide-field optical microscope by exciting the molecules with a 640 nm continuous-wave laser (Coherent Laser Systems GmbH, G\"ottingen, Germany). Besides the FIB and SEM evidence, this also indicates that the nanochannels are properly enclosed: no cross-talk of single molecules is observed between nanochannels \cite{Ghosh2014-1}. \begin{figure}[!ht] \begin{center} \includegraphics[scale= 0.42]{ncFig2.pdf} \end{center} \caption{\textbf{Nanofluidic device in a 2fFCS setup.} \textbf{a.} Schematic top view of the nanofluidic device together with an SEM image of real nanochannels (scale bar is 30 ${\upmu}$m). \textbf{b.} Schematic side view of the complete experimental setup, where an electric field is applied through the two reservoirs along the nanochannels using platinum electrodes. The two foci (of 2fFCS) were aligned with a nanochannel using a 100$\times$ 1.49 NA oil immersion objective lens.
Two different linearly polarised, pulsed interleaved lasers were used for excitation in the two foci. The emission from flowing single molecules was detected with two APDs. \textbf{c.} A wide-field image frame showing the presence of single molecules (scale bar is 8 ${\upmu}$m).\label{fig:2}} \end{figure} Fluid flow inside a nanochannel is 1D when $D_f \gg d_{nc}$, where $D_f$ and $d_{nc}$ are the diameters of the detection region and the nanochannels, respectively (Figure \ref{fig:2}\textbf{a}). The degree of confinement is acceptable as long as the molecule is forced to remain within the detection volume. This condition is satisfied because, in our case, $D_f$ is 300 nm to 500 nm (excitation-wavelength dependent) and $d_{nc}$ is 30 nm to 100 nm. The in-times and out-times of flowing single molecules within the detection volumes were correlated using the two APDs of 2fFCS (Figure \ref{fig:2}\textbf{b}) \cite{Joerg2007}. The two diffraction-limited foci of 2fFCS were excited with 40 MHz pulsed interleaved excitation, and partially overlapped with each other. They were generated using a Nomarski prism, a 100$\times$ 1.49 NA oil immersion objective, and two lasers linearly polarised perpendicular to each other. An \textit{x-y-z} piezo-scanner was used to acquire scan images and point measurements of photon counts. To restrict unwanted surface adsorption of single molecules, we used fluorescent molecules carrying the same charge as the nanochannel walls. Pure silica is negatively charged above its isoelectric point (pH(I) = 2) \cite{Kosmulski2001, Raghavan2009}. A buffer of pH 8.5 was used to increase the negative charge on the walls of the silica nanochannels. Under this condition, negatively charged molecules could flow without being adsorbed. We chose carbon nanodots (CNDs) as the first probe of interest. They are $<$2 nm in size \cite{Ghosh2014} and negatively charged (see Supporting Information Figure S5).
A confocal scan image was recorded in the \textit{x-y} plane while CNDs were flowing inside the nanochannels. In the schematic top view of the nanochannels in Figure \ref{fig:4}\textbf{a}, the dark lines are the nanochannels. They were filled with carbon nanodots (Figure \ref{fig:4}\textbf{b}). The pixel size ($p_a$) of the image is 320 nm, with a dwell time ($\delta t$) of 5 ms. To record the focused \textit{z} plane in Figure \ref{fig:4}\textbf{b}, we performed a \textit{y-z} scan and moved to the position where the count rate was maximal. The scheme of the \textit{y-z} piezo-scan is shown in Figure \ref{fig:4}\textbf{c}. Height-wise cross-sections of several periodic point spread functions (PSFs) were recorded from the periodic nanochannels. When single CNDs flow in unidirectionally, jittery (fluctuating) fluorescence signals are recorded, as observed in the periodic PSFs -- Figure \ref{fig:4}\textbf{d}, \textbf{e}, and \textbf{f}. Ascending concentrations of aqueous CND solutions were used. In Figure \ref{fig:4}\textbf{d} to \textbf{g}, the volume percentages of CNDs in water relative to the stock solution are $1\%, 2\%, 5\%$, and $50\%$, respectively. Here, $p_a$ is 100 nm and $\delta t$ is 2 ms. The electric field applied to induce electro-osmotic flow during the measurement was 15 V/mm. As the concentration of CNDs increased, the fluctuations of the photon signal decreased due to the increasing rate of CND arrivals in the focal volume (Figure \ref{fig:4}\textbf{d}-\textbf{g}). \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.7]{ncFig4.pdf} \caption {\textbf{1D Flow of CNDs.} \textbf{a.} Schematic top view of the nanochannels, along which an (\textit{x-y}) scan was performed. \textbf{b.} Confocal scan image of nanochannels filled with CNDs. \textbf{c.} Schematic cross-sectional side view of the nanochannels, along which (\textit{y-z}) scans were performed.
The dashed arrow represents the optical excitation path -- directed from the immersion oil of the objective lens to the nanofluidic device. \textbf{d.}, \textbf{e.}, and \textbf{f.} \textit{y-z} scan images of nanochannels with increasing concentrations of CNDs flowing through the nanochannels. \textbf{g}. High concentration of CNDs. All horizontal scale bars denote 2 $\upmu$m. The vertical colour bar denotes normalised photon counts from 0.0 (lowest) to 1.0 (highest). \label{fig:4}} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=1.4]{ncFig5.pdf} \caption{\textbf{1D Flow of 48bp DNA.} \textbf{a.} 2fFCS correlation plots of the photon counts from the two APDs. Here, the applied electric field was 220 $V/mm$. The data were fitted with the 1D Fokker-Planck equation, yielding a diffusion coefficient of $1.51\times 10^{-7}$ $cm^2/s$. \textbf{b.} Plot of velocity versus applied voltage along the nanochannel, together with a linear fit. \textbf{c.} Burst size distribution of single-molecule transits fitted with two Poissonian distributions. The inset shows the total time trace binned with 5 ms; the dashed grey region is shown in \textbf{d}. \textbf{d.} A part of the time trace showing single photon bursts due to single DNA molecules transiting through the focus. \label{fig:5}} \end{center} \end{figure} Time-correlated analysis of the photon counts was carried out to investigate the flow velocities of single molecules. These measurements were carried out at the highest-count points of the \textit{y-z} scanned PSFs. Prior to every 2fFCS measurement, a \textit{y-z} scan was taken as shown in Figure \ref{fig:5}.
To substantiate that our method is not restricted to a particular fluorescent probe, we performed 2fFCS of the Atto 488 dye (see supporting information Figure S6 for a control diffusion measurement) and of 48 base-pair DNA labelled with a single Alexa-647 fluorophore (purchased from IBA GmbH, G\"ottingen, Germany). The electrolytic aqueous buffer solution used for the electro-osmotic flow is detailed in the Methods section. We measured the flow velocities of single DNA molecules inside a 1D nanochannel over a range of applied electric fields from 27 V/mm to 300 V/mm (see supporting information Figure S7). These measurements were performed inside a nanochannel with $d_{nc}$ of 30 nm. The correlated photon counts provide two autocorrelations for the two foci, and one forward and one backward cross-correlation between them \cite{Joerg2008, Joerg2010}. A difference between the two cross-correlations should be observed for a unidirectional flow profile, as shown earlier by Arbour and Enderlein \cite{Joerg2010}. The correlated data points from 2fFCS are fitted with the Fokker-Planck equation \cite{Risken1984} for 1D electro-osmotic flow. Exemplary correlated photon counts of a 1D flow measurement performed at 220 V/mm are shown in Figure \ref{fig:5}\textbf{a}. Here, the sky-blue and red fitted curves are the two cross-correlations, and the other two curves are the autocorrelations. The diffusion coefficient and flow velocity from the curve fitting are $1.51 \times 10^{-7}$ $\mathrm{cm^2/s}$ and -207 $\upmu $m/s, respectively. The negative value of the flow velocity indicates the direction of the flow, which can be reversed by changing the polarity of the applied electric field. The linear relationship between electro-osmotic flow velocity and applied electric field is plotted in Figure \ref{fig:5}\textbf{b}. The linear fit has an $r^2$ value of $0.992$.
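The linear velocity-versus-field fit can be sketched as follows. The data points below are illustrative placeholders (not the measured dataset); for a purely electro-osmotic flow $v = \mu_{eo} E$, so the fitted slope plays the role of an effective electro-osmotic mobility.

```python
# Sketch of the linear fit in Figure 5b, with assumed example data.
# Fields in V/mm, velocities in um/s; only the procedure is meant
# to mirror the text, not the numerical values.
import numpy as np

E = np.array([27.0, 60.0, 110.0, 160.0, 220.0, 300.0])
v = np.array([-25.0, -56.0, -104.0, -151.0, -207.0, -282.0])

slope, intercept = np.polyfit(E, v, 1)  # linear fit v = slope*E + intercept
v_fit = slope * E + intercept
# coefficient of determination r^2
r2 = 1.0 - np.sum((v - v_fit) ** 2) / np.sum((v - v.mean()) ** 2)

print(f"effective mobility = {slope:.3f} (um/s)/(V/mm), r^2 = {r2:.4f}")
```

The negative slope encodes the flow direction, consistent with the sign convention of the measured -207 $\upmu$m/s point at 220 V/mm.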
To confirm that the photons used in 2fFCS originated from single molecules rather than aggregates, the photon burst sizes of molecular transits were analysed \cite{Joerg1998}. The burst size distribution (BSD) of the complete time trace of a flow measurement is shown in Figure \ref{fig:5}\textbf{c}. The complete time trace used in the BSD analysis is shown in the inset. We fit the BSD with two Poisson distributions. The first peak is due to single-molecule bursts, and the second is due to multiple-molecule events, as observed by Enderlein et al. \cite{Joerg1998}. Multiple-molecule events occur when more than one molecule transits through the focus in such close succession that the detectors cannot resolve the temporal separation between the molecules. Figure \ref{fig:5}\textbf{d} shows a magnified region of the 5 ms binned time trace in the inset of Figure \ref{fig:5}\textbf{c}, where the interval from 0 to 600 ms is highlighted with a dashed grey box. We clearly observe single-molecule bursts in Figure \ref{fig:5}\textbf{d}. In summary, this letter presented an efficient method (SAEBD) for nanofabricating nanochannels and their use in single-molecule nanofluidics to confine thermal fluctuations. The nanofabrication process demonstrated here is simple and high-throughput for large-scale production of nanofluidic chips. The process is not material-restricted, unlike oxidation- and bonding-based techniques. Single fluorophore molecules, CNDs, and DNA molecules were studied inside the nanochannels. Their 1D flow inside the nanochannels was achieved using electro-osmosis. Wide-field and confocal microscopy were used for qualitative analysis of the 1D flow, and the 2fFCS method was used to analyse it quantitatively. A broad range of velocities was observed by varying the electric field applied along the nanochannels. The BSD analysis confirms that the observed transits were mainly due to single emitters.
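The two-Poissonian BSD fit can be sketched as below. The burst-size histogram is modelled as a weighted sum of two Poisson distributions, one for single-molecule bursts and one for unresolved multi-molecule events; the weights, means, and synthetic "data" here are assumptions for illustration, not the measured distribution.

```python
# Sketch of a two-Poissonian fit to a burst size distribution.
# Synthetic ideal data: 80% single-molecule bursts (mean 8 photons)
# and 20% double events (mean 16 photons) -- assumed values.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import poisson

def two_poisson(k, a1, mu1, a2, mu2):
    # abs() keeps the Poisson means positive while the optimizer explores
    return a1 * poisson.pmf(k, abs(mu1)) + a2 * poisson.pmf(k, abs(mu2))

k = np.arange(0, 40)
bsd = two_poisson(k, 0.8, 8.0, 0.2, 16.0)  # ideal histogram, no noise

popt, _ = curve_fit(two_poisson, k, bsd, p0=[0.5, 6.0, 0.5, 20.0])
a1, mu1, a2, mu2 = popt
print(f"peak 1: weight {a1:.2f}, mean {abs(mu1):.1f} photons")
print(f"peak 2: weight {a2:.2f}, mean {abs(mu2):.1f} photons")
```

In practice the second peak's weight relative to the first indicates how often transits overlap, which is what the text uses to argue that most bursts are single emitters.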
All the experiments were performed inside nanochannels of 30 nm $\times$ 35 nm cross-section. This simple and high-throughput approach to fabricating nanochannels paves the way towards detecting the early onset of disease at the single-molecule level. In the future, trapping nanoscale objects of less than 2 nm in size for long residence times should also be feasible using these nanochannels. Biomolecular interactions such as DNA dynamics, protein aggregation, and the structural biology of molecules under physiological conditions can also be studied at the single-molecule level using SAEBD-based nanofluidic devices. \subsection*{Methods} \textbf{Nanofabrication:} A rigorous cleaning of the surface of interest is required to eliminate unwanted organic or dust-particle contamination. The silicon wafers were ultrasonicated with acetone prior to piranha cleaning at 120$^\circ$C for 15 min. The piranha-cleaned wafers were then rinsed with de-ionised water. The wafers were heated at 180$^\circ$C for 10 min to ensure that all water molecules had evaporated. The cleaning protocol was carried out inside a class 100 cleanroom. Prior to spin-coating the wafers with e-beam resist, an upright optical microscope with a 100$\times$ objective lens was used to inspect the silicon surface. Fused silica wafers were cleaned with RCA1 reagents instead of piranha. The pristine wafers were spin-coated with the e-beam resist PMMA-A2 (poly(methyl methacrylate) from MicroChem Corp., Newton, United States) at 2000 rpm for 60 s. The spin-coated wafers were then baked at 180$^\circ$C for 90 s, producing a uniform 100 nm thick PMMA film on the wafers. EBL was carried out on the PMMA-coated wafers using the fixed-e-beam-moving-stage condition \cite{Raith2009}. In the first experiments, where silicon wafers were used, the nanochannels were 2 mm long and 30 nm - 100 nm wide. The final fused-silica-based nanofluidic device contained 100 -- 150 $\upmu$m long nanochannels.
To avoid overlapping periodic PSFs, the distance between every two nanochannels was 2 -- 3 $\upmu$m. The EBL-exposed PMMA was monomerised and removed by immersing the wafers into MIBK:IPA (methyl isobutyl ketone dissolved in isopropanol, purchased from MicroChem Corp., Newton, United States) developer at -20$^\circ$C for 30 s. After 30 s, the wafers were immersed in isopropanol to stop over-development and dried under a nitrogen stream. Nanometre-scale openings were formed, revealing the silicon or silica where the PMMA was removed by the developer. The remaining undeveloped PMMA regions acted as a mask where no etching took place during RIE. The reactive plasma was created at high vacuum. This process flow is schematically described in Figure \ref{fig:1}\textbf{a}. After the RIE treatment, the left-over PMMA (mask region) was removed with acetone, piranha, and oxygen RIE. \textbf{SAEBD:} SAEBD was performed using a Leybold Univex 350 e-beam evaporation system purchased from Oerlikon Leybold Vacuum GmbH, Cologne, Germany. The vacuum pressure inside the chamber during evaporation was $\sim\!1\times10^{-6}$ mbar. The dicing system used to cross-section the silicon wafers was a Disco Dad 320 purchased from Disco Corporation, Tokyo, Japan. The diamond blade used in the system was a ZH05-SD2000-N1-70 (Disco Corporation, Tokyo, Japan). \textbf{FIB:} The FIB used here is a Nova NanoLab 600 DualBeam purchased from FEI Company, Hillsboro, USA. The protective platinum layer was deposited at two separate growth rates. At the interface of Ti-2 we used a 1 pA current for the deposition to achieve an uninterrupted, atomically sharp interface. The topmost surface (Pt-4) was grown at a fast deposition rate because it is the first surface to interact with the FIB, dissipating the majority of the damaging ionic bombardment. To avoid any damage at the region of interest, the ion beam was created with a 1 pA current at a system vacuum of $\sim\!1.5\times10^{-6}$ mbar.
\textbf{2fFCS setup:} We used the commercial confocal system MicroTime 200 (PicoQuant, Berlin, Germany) for scanning our samples and performing 2fFCS. The details of the setup are included in the supporting information. \textbf{Buffer:} Electro-osmotic flow was performed using 300 pM of 48 base-pair DNA (11 nm long) tagged with Alexa647, diluted in Tris buffer with 30\% glycerol at pH 8.5. \\ \subsection*{Supporting Information} The supporting information contains the description of integrating the nanochannels with electrodes, the design of the nanofluidic device, the cation-$\pi$ interaction behaviour of the carbon nanodots, the 2fFCS-measured 1D diffusion of single Atto 488 fluorophores, and the electro-osmotic flow of single DNA molecules at different electric fields. \subsection*{Acknowledgement} S.G. thanks the International Max Planck Research School for Physics of Biological and Complex Systems and the Ministry of Science and Culture (Lower Saxony) for awarding an Excellence Stipend/MWK Ph.D. scholarship. The work presented here was carried out in the research group of Prof. J\"org Enderlein. The authors thank the University of G\"ottingen for the internal funding associated with the Third Institute of Physics for this project. Lastly, the authors are extremely grateful to Prof. Enderlein for providing a great deal of critical scientific advice. \subsection*{Authors Contribution} S.G. wrote the manuscript, conceptualised the idea of the nanofabrication, prepared the nanofluidic device, and performed all the measurements and numerical fitting presented in this paper. S.G. and N.K. performed initial measurements together (not presented here). N.K. modified the fitting routine provided by Prof. J\"org Enderlein and I.G. I.G. supervised the project and provided all necessary scientific advice. All the authors have read and made corrections to the manuscript.
\section{Introduction} In the conventional quark model\cite{Jaffe:1976ig,GellMann:1964nj}, hadrons generally have two kinds of structures: a meson consisting of a quark and an antiquark, and a baryon consisting of three quarks. However, quantum chromodynamics (QCD) allows the existence of hadrons with structures different from these two, such as tetraquarks, hadronic molecules, pentaquarks, hybrids and so on\cite{Chen:2016qju,2017-Lebed-p143-194,2018-Guo-p15004-15004,Liu:2019zoy}. A compact tetraquark is composed of a diquark and an antidiquark, bound by the color force among the quarks and antiquarks. The light tetraquarks have been widely studied via different theoretical methods\cite{Chen:2007xr,Zhang:2006xp,Prelovsek:2008rf,Wallbott:2018lrl}. For the heavy quark sector, the hidden-charm/bottom $Q\bar{Q}q\bar{q}$ tetraquarks have been extensively investigated to interpret some of the observed XYZ states in various methods, such as the constituent quark models\cite{PhysRevD.98.094015,PhysRevD.94.014016,PhysRevD.94.074007}, meson exchange and scattering methods\cite{Liu:2017mrh,Ortega:2018cnm,Liu:2016kqx}, QCD sum rules\cite{2010-Nielsen-p41-83,2010-Chen-p105018-105018,2011-Chen-p34010-34010,2015-Chen-p54002-54002}, chromomagnetic interaction models\cite{Cui:2006mp,PhysRevD.79.077502}, etc. The doubly heavy tetraquark states $QQ\bar{q}\bar{q}$ have been studied to investigate the stability of tetraquarks\cite{Navarra:2007yw,2013-Du-p14003-14003}. In Refs.~\cite{PhysRevD.89.054037,PhysRevD.99.014006,PhysRevD.99.054505}, the open-flavor heavy $bc\bar{q}\bar{q}$ tetraquark states have also been investigated; the results suggest that their masses may lie below the corresponding two-meson thresholds. In addition, such tetraquarks cannot decay via annihilation channels and thus would be very stable, with narrow widths.
Compared to the above tetraquark configurations, the fully open-flavor tetraquarks $bc\bar{s}\bar{q}$ and $sc\bar{q}\bar{b}$ ($q=u, d$) are more exotic since they contain four valence quarks with totally different flavors. However, studies of these tetraquarks have attracted much less interest to date. In Ref. \cite{Cui:2006mp}, the authors studied the masses of $qs\bar{c}\bar{b}$ and $qc\bar{s}\bar{b}$ tetraquarks by using the color-magnetic interaction with flavor symmetry breaking corrections. Their results show that the masses of the $qs\bar{c}\bar{b}$ and $qc\bar{s}\bar{b}$ tetraquark states are about 7.1 GeV and 7.2 GeV, which are lower than the corresponding two-meson S-wave thresholds. In the heavy quark symmetry, the mass of the $bc\bar{q}\bar{s}$ tetraquark state with $J^P=1^+$ was also evaluated to be around 7445 MeV~\cite{Eichten:2017ffp}, which is about 163 MeV above the $D\bar B_s^\ast$ threshold and thus allows such a decay channel via the strong interaction. These conflicting results from different phenomenological models motivate further theoretical studies of the existence of these fully open-flavor tetraquark states. In this paper, we study the mass spectra of the fully open-flavor $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquarks in the method of QCD sum rules \cite{Reinders:1984sr,Shifman:1978bx}. This paper is organized as follows. In Sec.~\Rmnum{2}, we construct the interpolating tetraquark currents of the $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ systems with $J^{P}=0^{+},1^{+}$, respectively. In Sec.~\Rmnum{3}, we evaluate the correlation functions and spectral densities for these interpolating currents. The spectral densities are listed in the appendix because of their complicated form. We extract the masses of these tetraquarks by performing the QCD sum rule analyses in Sec.~\Rmnum{4}. The last section is a brief summary.
\section{Interpolating currents for the $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquark systems} In this section, we construct the interpolating currents for the $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquarks with $J^{P}=0^{+},1^{+}$. In general, there are five independent diquark fields, $q_{a}^{T} C \gamma_{5} q_{b},~ q_{a}^{T} C q_{b},~ q_{a}^{T} C \gamma_{\mu} \gamma_{5} q_{b},~ q_{a}^{T} C \gamma_{\mu} q_{b},\text{ and } q_{a}^{T} C \sigma_{\mu \nu} q_{b}$, where $q$ stands for a quark field, $a,b$ are color indices, $C$ denotes the charge conjugation operator, and $T$ represents the transpose of the quark field. The operators $q_{a}^{T} C \gamma_{5} q_{b}$ and $q_{a}^{T} C \gamma_{\mu} q_{b}$ are S-wave, while $q_{a}^{T} C q_{b}$ and $q_{a}^{T} C \gamma_{\mu} \gamma_{5} q_{b}$ are P-wave. The operator $q_{a}^{T} C \sigma_{\mu \nu} q_{b}$ contains both S-wave and P-wave components. To study the lowest-lying $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquark states, we use only the S-wave diquark and corresponding antidiquark fields to construct the tetraquark interpolating currents with quantum numbers $J^{P}=0^{+},1^{+}$. For the $bc\bar{q}\bar{s}$ system, the scalar currents with $J^{P}=0^{+}$ can be written as \begin{equation} \begin{aligned} J_{1} &=b_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{s}_{b}^{T}+\bar{q}_{b} \gamma_{5} C \bar{s}_{a}^{T}\right)\, ,\\ J_{2} &=b_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{s}_{b}^{T}-\bar{q}_{b} \gamma_{5} C \bar{s}_{a}^{T}\right)\, , \\ J_{3} &=b_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{s}_{b}^{T}+\bar{q}_{b} \gamma^{\mu} C \bar{s}_{a}^{T}\right)\, , \\ J_{4} &=b_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{s}_{b}^{T}-\bar{q}_{b} \gamma^{\mu} C \bar{s}_{a}^{T}\right)\, , \label{scalarcurrents_bcqs} \end{aligned} \end{equation} in which $q$ is a light quark field (up or down).
The color structures of the currents $J_{1}$ and $J_{3}$ are symmetric, $\left[\mathbf{6}_{\mathbf{c}}\right]_{b c} \otimes\left[\overline{\mathbf{6}}_{\mathbf{c}}\right]_{\bar{q} \bar{s}}$, while those of $J_{2}$ and $J_{4}$ are antisymmetric, $\left[\overline{\mathbf{3}}_{\mathbf{c}}\right]_{b c}\otimes\left[\mathbf{3}_{\mathbf{c}}\right]_{\bar{q} \bar{s}}$. The axial-vector currents with $J^{P}=1^{+}$ can be written as \begin{equation} \begin{aligned} J_{1\mu} &=b_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{s}_{b}^{T}+\bar{q}_{b} \gamma_{5} C \bar{s}_{a}^{T}\right)\, , \\ J_{2\mu} &=b_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{s}_{b}^{T}-\bar{q}_{b} \gamma_{5} C \bar{s}_{a}^{T}\right)\, , \\ J_{3\mu} &=b_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{s}_{b}^{T}+\bar{q}_{b} \gamma^{\mu} C \bar{s}_{a}^{T}\right)\, , \\ J_{4\mu} &=b_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{s}_{b}^{T}-\bar{q}_{b} \gamma^{\mu} C \bar{s}_{a}^{T}\right)\, , \label{axialvectorcurrents_bcqs} \end{aligned} \end{equation} where the currents $J_{1\mu}$ and $J_{3\mu}$ are color symmetric while $J_{2\mu}$ and $J_{4\mu}$ are color antisymmetric.
For the $sc\bar{q}\bar{b}$ system, the currents with $J^{P}=0^{+}$ are \begin{equation} \begin{aligned} \eta_{1} &=s_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{b}_{b}^{T}+\bar{q}_{b} \gamma_{5} C \bar{b}_{a}^{T}\right)\, , \\ \eta_{2} &=s_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{b}_{b}^{T}-\bar{q}_{b} \gamma_{5} C \bar{b}_{a}^{T}\right)\, , \\ \eta_{3} &=s_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{b}_{b}^{T}+\bar{q}_{b} \gamma^{\mu} C \bar{b}_{a}^{T}\right)\, , \\ \eta_{4} &=s_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{b}_{b}^{T}-\bar{q}_{b} \gamma^{\mu} C \bar{b}_{a}^{T}\right)\, , \label{scalarcurrents_scqb} \end{aligned} \end{equation} where the currents $\eta_{1}$ and $\eta_{3}$ are color symmetric with $\left[\mathbf{6}_{\mathbf{c}}\right]_{s c} \otimes\left[\overline{\mathbf{6}}_{\mathbf{c}}\right]_{\bar{q} \bar{b}}$, while the $\eta_{2}$ and $\eta_{4}$ are color antisymmetric with $\left[\overline{\mathbf{3}}_{\mathbf{c}}\right]_{s c}\otimes\left[\mathbf{3}_{\mathbf{c}}\right]_{\bar{q} \bar{b}}$. The currents with $J^{P}=1^{+}$ are \begin{equation} \begin{aligned} \eta_{1\mu} &=s_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{b}_{b}^{T}+\bar{q}_{b} \gamma_{5} C \bar{b}_{a}^{T}\right)\, , \\ \eta_{2\mu} &=s_{a}^{T} C \gamma_{\mu} c_{b}\left(\bar{q}_{a} \gamma_{5} C \bar{b}_{b}^{T}-\bar{q}_{b} \gamma_{5} C \bar{b}_{a}^{T}\right)\, , \\ \eta_{3\mu} &=s_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{b}_{b}^{T}+\bar{q}_{b} \gamma^{\mu} C \bar{b}_{a}^{T}\right)\, , \\ \eta_{4\mu} &=s_{a}^{T} C \gamma_{5} c_{b}\left(\bar{q}_{a} \gamma^{\mu} C \bar{b}_{b}^{T}-\bar{q}_{b} \gamma^{\mu} C \bar{b}_{a}^{T}\right)\, , \label{axialvectorcurrents_scqb} \end{aligned} \end{equation} in which the currents $\eta_{1\mu}$ and $\eta_{3\mu}$ are color symmetric while the $\eta_{2\mu}$ and $\eta_{4\mu}$ are color antisymmetric. 
\section{QCD sum rules} In this section, we investigate the two-point correlation functions of the above scalar and axial-vector interpolating currents. For the scalar currents, the correlation function can be written as \begin{equation} \begin{aligned} \Pi\left(p^{2}\right)&=i \int d^{4} x e^{i p \cdot x}\left\langle 0\left|T\left[J(x) J^{\dagger}(0)\right]\right| 0\right\rangle\, , \end{aligned} \end{equation} and for the axial-vector current \begin{equation} \begin{aligned} \Pi_{\mu \nu}\left(p^{2}\right) =i \int d^{4} x e^{i p \cdot x}\left\langle 0\left|T\left[J_{\mu}(x) J_{\nu}^{\dagger}(0)\right]\right| 0\right\rangle\, . \label{CF_AV} \end{aligned} \end{equation} The correlation function $\Pi_{\mu\nu} (p^{2})$ in Eq.~\eqref{CF_AV} can be expressed as \begin{equation} \Pi_{\mu \nu}\left(p^{2}\right)=\left(\frac{p_{\mu} p_{\nu}}{p^{2}}-g_{\mu \nu}\right) \Pi_{1}\left(p^{2}\right)+\frac{p_{\mu} p_{\nu}}{p^{2}}\Pi_{0}\left(p^{2}\right)\, , \end{equation} where $\Pi_{0}\left(p^{2}\right)$ and $\Pi_{1}\left(p^{2}\right)$ are the scalar and vector current polarization functions related to the spin-0 and spin-1 intermediate states, respectively. At the hadron level, the correlation function can be written through the dispersion relation \begin{equation} \Pi\left(p^{2}\right)=\frac{\left(p^{2}\right)^{N}}{\pi} \int_{(m_{b}+m_{c})^{2}}^{\infty} \frac{\operatorname{Im} \Pi(s)}{s^{N}\left(s-p^{2}-i \epsilon\right)} d s+\sum_{n=0}^{N-1} b_{n}\left(p^{2}\right)^{n}\, , \end{equation} in which $b_n$ is the subtraction constant. In QCD sum rules, the imaginary part of the correlation function is defined as the spectral function \begin{equation} \rho (s)=\frac{1}{\pi} \text{Im}\Pi(s)=f_{H}^{2}\delta(s-m_{H}^{2})+\text{QCD continuum and higher states}\, , \end{equation} where the ``pole plus continuum parametrization" is adopted. 
The parameters $f_{H}$ and $m_{H}$ are the coupling constant and the mass of the lowest-lying hadronic resonance $H$, respectively, \begin{equation} \begin{aligned}\langle 0|J| H\rangle &= f_{H}\, , \\ \left\langle 0\left|J_{\mu}\right| H\right\rangle &= f_{H} \epsilon_{\mu} \end{aligned} \end{equation} with the polarization vector $\epsilon_\mu$. At the quark-gluon level, we can evaluate the correlation function $\Pi(p^{2})$ and the spectral density $\rho(s)$ using the method of operator product expansion (OPE). To calculate the Wilson coefficients, we use the light quark propagator in coordinate space and the heavy quark propagator in momentum space \begin{equation} \begin{aligned} i S_{q}^{a b}(x)=& \frac{i \delta^{a b}}{2 \pi^{2} x^{4}} \hat{x} +\frac{i}{32 \pi^{2}} \frac{\lambda_{a b}^{n}}{2} g_{s} G_{\mu \nu}^{n} \frac{1}{x^{2}}\left(\sigma^{\mu \nu} \hat{x}+\hat{x} \sigma^{\mu \nu}\right) -\frac{\delta^{a b} x^{2}}{12}\left\langle\bar{q} g_{s} \sigma \cdot G q\right\rangle -\frac{m_{q} \delta^{a b}}{4 \pi^{2} x^{2}} \\ &+\frac{i \delta^{a b} m_{q}\langle\bar{q} q\rangle}{48} \hat{x} -\frac{i m_{q}\left\langle\bar{q} g_{s} \sigma \cdot G q\right\rangle \delta^{a b} x^{2} \hat{x}}{1152}\, , \\ i S_{Q}^{a b}(p)=& \frac{i \delta^{a b}}{\hat{p}-m_{Q}} +\frac{i}{4} g_{s} \frac{\lambda_{a b}^{n}}{2} G_{\mu \nu}^{n} \frac{\sigma^{\mu \nu}\left(\hat{p}+m_{Q}\right)+\left(\hat{p}+m_{Q}\right) \sigma^{\mu \nu}}{\left(p^{2}-m_{Q}^{2}\right)^{2}} +\frac{i \delta^{a b}}{12}\left\langle g_{s}^{2} G G\right\rangle m_{Q} \frac{p^{2}+m_{Q} \hat{p}}{(p^{2}-m_{Q}^{2})^{4}}\, , \end{aligned} \end{equation} where $q$ represents a $u$, $d$ or $s$ quark and $Q$ represents a $c$ or $b$ quark. The superscripts $a, b$ are color indices and $\hat{x}=x^{\mu}\gamma_{\mu},\hat{p}=p^{\mu}\gamma_{\mu}$. In this work, we evaluate the Wilson coefficients up to dimension-eight condensates at leading order in $\alpha_s$.
To improve the convergence of the OPE series and suppress the contributions from the continuum and higher states, one performs the Borel transformation of the correlation function at both the hadron and quark-gluon levels. The QCD sum rules are then established as \begin{equation} \mathcal{L}_{k}\left(s_{0}, M_{B}^{2}\right)=f_{H}^{2} m_{H}^{2 k} e^{-m_{H}^{2} / M_{B}^{2}}=\int_{(m_{b}+m_{c})^{2}}^{s_{0}} d s e^{-s / M_{B}^{2}} \rho(s) s^{k}\, , \label{Lk} \end{equation} where $M_B$ is the Borel mass introduced via the Borel transformation and $s_0$ is the continuum threshold. The lowest-lying hadron mass can thus be extracted via the following expression \begin{equation} \begin{aligned} m_{H}\left(s_{0}, M_{B}^{2}\right)=&\sqrt{\frac{\mathcal{L}_{1}\left(s_{0}, M_{B}^{2}\right)}{\mathcal{L}_{0}\left(s_{0}, M_{B}^{2}\right)}}\, . \end{aligned} \end{equation} \section{Numerical analysis} In this section, we perform the QCD sum rule analyses for the $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquarks.
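For reference, the Borel transformation invoked in Eq.~(\ref{Lk}) is the standard one; this explicit definition is a textbook formula and is not spelled out in the text itself:
\begin{equation}
\hat{\mathcal{B}}\left[\Pi\left(p^{2}\right)\right]=\lim_{\substack{-p^{2},\, n \to \infty \\ -p^{2}/n = M_{B}^{2}}} \frac{\left(-p^{2}\right)^{n+1}}{n !}\left(\frac{d}{d p^{2}}\right)^{n} \Pi\left(p^{2}\right)\, ,
\end{equation}
which removes the subtraction terms $b_{n}\left(p^{2}\right)^{n}$ of the dispersion relation and exponentially suppresses the continuum region of the spectral integral.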
We use the following values of quark masses and condensates\cite{Jamin:2001zr,Jamin:1998ra,Khodjamirian:2011ub,Tanabashi:2018oca,PhysRevD.99.054505} \begin{equation} \begin{array}{l} {m_{u}(2 \mathrm{GeV})=(2.2_{-0.4}^{+0.5} ) \mathrm{MeV}}\ , \vspace{1ex} \\ {m_{d}(2 \mathrm{GeV})=(4.7_{-0.3}^{+0.5}) \mathrm{MeV}}\, ,\vspace{1ex} \\ {m_{q}(2 \mathrm{GeV})=(3.5_{-0.2}^{+0.5}) \mathrm{MeV}}\, ,\vspace{1ex} \\ {m_{s}(2 \mathrm{GeV})=(95_{-3}^{+9}) \mathrm{MeV}}\, ,\vspace{1ex} \\ {m_{c}\left(m_{c}\right)=(1.275 _{-0.035}^{+0.025}) \mathrm{GeV}}\, , \vspace{1ex} \\ {m_{b}\left(m_{b}\right)=(4.18 _{-0.03}^{+0.04}) \mathrm{GeV}}\, , \vspace{1ex} \\ \langle\bar{q} q\rangle=-(0.23 \pm 0.03)^{3} \mathrm{GeV}^{3}\, , \vspace{1ex} \\ {\left\langle\bar{q} g_{s} \sigma \cdot G q\right\rangle=- M_{0}^{2}\langle\bar{q} q\rangle}\, ,\vspace{1ex} \\ { M_{0}^{2}=(0.8 \pm 0.2) \mathrm{GeV}^{2}}\, , \vspace{1ex} \\ {\langle\bar{s} s\rangle /\langle\bar{q} q\rangle= 0.8 \pm 0.1}\, , \vspace{1ex} \\ {\left\langle g_{s}^{2} G G\right\rangle= (0.48\pm0.14) \mathrm{GeV}^{4}}\, , \end{array} \end{equation} in which the masses of $u,d,s$ are the current quark masses in the $\overline{MS}$ scheme at a scale $\mu = 2$ GeV. We consider the scale dependence of the charm and bottom quark masses at the leading order \begin{equation} \begin{aligned} m_{c}(\mu) &=\bar{m}_{c}\left(\frac{\alpha_{s}(\mu)}{\alpha_{s}\left(\bar{m}_{c}\right)}\right)^{12 / 25}\, , \\ m_{b}(\mu) &=\bar{m}_{b}\left(\frac{\alpha_{s}(\mu)}{\alpha_{s}\left(\bar{m}_{b}\right)}\right)^{12 / 23}\, , \end{aligned} \end{equation} where \begin{equation} \alpha_{s}(\mu)=\frac{\alpha_{s}\left(M_{\tau}\right)}{1+\frac{25 \alpha_{s}\left(M_{\tau}\right)}{12 \pi} \log \left(\frac{\mu^{2}}{M_{\tau}^{2}}\right)}, \quad \alpha_{s}\left(M_{\tau}\right)=0.33 \end{equation} is determined by evolution from the $\tau$ mass using PDG values. 
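The leading-order running quoted above is straightforward to evaluate numerically. The sketch below implements exactly the formulas in the text ($\alpha_s$ evolved from $\alpha_s(M_\tau)=0.33$, exponents $12/25$ and $12/23$ for the charm and bottom masses); the choice of evaluation scale $\mu = 2$ GeV is ours, for illustration.

```python
# Leading-order running of alpha_s and the heavy quark masses,
# following the formulas quoted in the text.
import math

M_TAU = 1.777       # GeV, tau mass
ALPHA_TAU = 0.33    # alpha_s(M_tau) from the text
MC_BAR = 1.275      # GeV, m_c(m_c)
MB_BAR = 4.18       # GeV, m_b(m_b)

def alpha_s(mu):
    """One-loop running coupling evolved from the tau mass."""
    return ALPHA_TAU / (1.0 + 25.0 * ALPHA_TAU / (12.0 * math.pi)
                        * math.log(mu**2 / M_TAU**2))

def m_c(mu):
    return MC_BAR * (alpha_s(mu) / alpha_s(MC_BAR)) ** (12.0 / 25.0)

def m_b(mu):
    return MB_BAR * (alpha_s(mu) / alpha_s(MB_BAR)) ** (12.0 / 23.0)

print(f"alpha_s(2 GeV) = {alpha_s(2.0):.3f}")
print(f"m_c(2 GeV) = {m_c(2.0):.3f} GeV, m_b(2 GeV) = {m_b(2.0):.3f} GeV")
```

As a sanity check, $m_c(\mu)$ reduces to $\bar{m}_c$ at $\mu=\bar{m}_c$, and both masses decrease as the scale increases.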
To obtain a stable sum rule, the working regions, i.e., the continuum threshold $s_{0}$ and the Borel mass $M_{B}^{2}$, should also be determined. The threshold $s_{0}$ can be fixed by minimizing the variation of the hadronic mass $m_{H}$ with the Borel mass $M_{B}^{2}$. The Borel mass $M_{B}^{2}$ can be obtained by requiring OPE convergence, which gives the lower bound on $M_{B}^{2}$, and a sufficient pole contribution, which gives the upper bound on $M_{B}^{2}$. Specific details of these procedures will be shown later. The pole contribution is defined as \begin{equation} \mathrm{PC}\left(s_{0}, M_{B}^{2}\right)=\frac{\mathcal{L}_{0}\left(s_{0}, M_{B}^{2}\right)}{\mathcal{L}_{0}\left(\infty, M_{B}^{2}\right)}\, , \end{equation} in which $\mathcal{L}_{0}$ is defined in Eq.~(\ref{Lk}). \subsection{$bc\bar{q}\bar{s}$ systems} We first perform the QCD sum rule analyses for the $bc\bar{q}\bar{s}$ tetraquarks. The spectral densities for the interpolating currents in Eqs.~\eqref{scalarcurrents_bcqs}--\eqref{axialvectorcurrents_bcqs} are calculated and listed in appendix~\ref{spectral densities}. For any interpolating current in the $bc\bar{q}\bar{s}$ system, the contributions from the quark condensate $\langle\bar{q}q\rangle$ and the quark-gluon mixed condensate $\langle\bar{q}g_{s}\sigma\cdot Gq\rangle$ are numerically small since they are proportional to the quark masses $m_{q}$ and $m_{s}$. The dominant nonperturbative contribution to the correlation function comes from the four-quark condensate $\langle\bar{q}q\rangle\langle\bar{s}s\rangle$. In Fig.~\ref{FigratioJ1}, we take the scalar interpolating current $J_{1}$ as an example to present the contributions to the correlation function from the perturbative term and the various condensates. To extract the output parameters, the Borel mass $M_{B}^{2}$ should be large enough to guarantee the convergence of the OPE series.
Here, we require that the four-quark condensate contribution be less than one-fifth of the perturbative term. In Fig.~\ref{FigratioJ1}, we can see that the convergence of the OPE series is ensured for $M_{B}^{2}\geq 5.4~\text{GeV}^{2}$. \begin{figure}[h] \centering \includegraphics[width=10cm]{FigRatioJ.pdf}\\ \caption{OPE convergence for the current $J_{1}$ in the $J^{P}= 0^{+}$ $bc\bar{q}\bar{s}$ system.} \label{FigratioJ1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm]{FigMBJ.pdf}\quad \includegraphics[width=8.5cm]{FigstaJ.pdf}\\ \caption{Variation of $m_{H}$ with $s_{0}$ and $M_{B}^{2}$ for the current $J_{1}$ in the $J^{P}= 0^{+}$ $bc\bar{q}\bar{s}$ system.} \label{FigMBandstaJ1} \end{figure} To get the upper bound on $M_{B}^{2}$, we need to fix the value of $s_{0}$ first. As mentioned before, the output hadron mass $m_{H}$ should be insensitive to $M_{B}^{2}$. In Fig.~\ref{FigMBandstaJ1}, we show the variations of $m_{H}$ with the threshold $s_{0}$ and the Borel mass $M_{B}^{2}$. The variation of $m_{H}$ with $M_{B}^{2}$ is minimized around $s_{0}\sim 58~\text{GeV}^{2}$, which results in the working region $56\leq s_0\leq 60~\text{GeV}^{2}$ for the scalar current $J_{1}$. Using this value of $s_{0}$, the upper bound on $M_{B}^{2}$ is obtained by requiring that the pole contribution be larger than 30\%. Finally, the working region of the Borel parameter for the scalar current $J_{1}$ is determined to be $5.4\leq M_{B}^{2}\leq 5.8~\text{GeV}^{2}$. We show the Borel curves in these regions in Fig.~\ref{FigMBandstaJ1} and extract the hadron mass as $m_H=7.17\pm0.11$ GeV. The errors come from the continuum threshold $s_{0}$, the condensates $\langle\bar{q}q\rangle$ and $\langle\bar{q}g_{s}\sigma\cdot Gq\rangle$, and the heavy quark masses $m_{b}$ and $m_{c}$. The errors from the Borel mass and the gluon condensate are small enough to be neglected.
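The extraction procedure can be illustrated with a toy numerical sketch: given some model spectral density $\rho(s)$ (the simple polynomial below is an assumption for illustration only, not one of the paper's spectral densities), the Borel moments $\mathcal{L}_k$ of Eq.~(\ref{Lk}) are integrated numerically, the mass follows from $m_H=\sqrt{\mathcal{L}_1/\mathcal{L}_0}$, and the pole contribution from $\mathcal{L}_0(s_0)/\mathcal{L}_0(\infty)$.

```python
# Toy sum-rule extraction: numerical Borel moments of a model rho(s).
# The spectral density here is illustrative, so the printed mass and
# pole contribution should not be compared with the paper's tables.
import numpy as np

S_MIN = (4.18 + 1.275) ** 2  # (m_b + m_c)^2 in GeV^2, the threshold

def rho(s):
    """Toy spectral density rising from threshold (illustrative only)."""
    return (s - S_MIN) ** 2 / (4.0 * np.pi) ** 2

def L(k, s0, MB2, n=20001):
    """Borel moment L_k: trapezoidal integral of exp(-s/MB2)*rho(s)*s^k."""
    s = np.linspace(S_MIN, s0, n)
    f = np.exp(-s / MB2) * rho(s) * s ** k
    return np.sum((f[:-1] + f[1:]) / 2.0) * (s[1] - s[0])

s0, MB2 = 58.0, 5.6  # GeV^2, values typical of the working regions above
mH = np.sqrt(L(1, s0, MB2) / L(0, s0, MB2))
pc = L(0, s0, MB2) / L(0, 400.0, MB2)  # 400 GeV^2 approximates infinity
print(f"m_H = {mH:.2f} GeV, pole contribution = {100 * pc:.0f}%")
```

In the actual analysis the same two ratios are scanned over $s_0$ and $M_B^2$ to locate the stability plateau and the Borel window described above.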
For all other interpolating currents in Eqs.~\eqref{scalarcurrents_bcqs}--\eqref{axialvectorcurrents_bcqs}, we perform similar analyses and obtain the suitable working regions for the threshold $s_{0}$ and Borel mass $M_{B}^{2}$, the output hadron masses, and the pole contributions. We collect the numerical results in Table~\ref{resultone} for the scalar $bc\bar{q}\bar{s}$ tetraquarks and Table~\ref{resulttwo} for the axial-vector $bc\bar{q}\bar{s}$ tetraquarks. The last columns give the $S$-wave two-meson $\bar{B}_{s}D$ and $\bar{B}_{s}^\ast D$ thresholds for these possible tetraquark states. It is shown that both the scalar and axial-vector $bc\bar{q}\bar{s}$ tetraquarks lie below the corresponding two-meson thresholds, implying their stability against strong decays. \begin{table}[h] \caption{The continuum threshold, Borel window, hadron mass and pole contribution for the $bc\bar{q}\bar{s}$ system with $J^{P} = 0^{+}$.} \begin{center} \label{resultone} \begin{ruledtabular} \begin{tabular}{cccccc } Current& $s_{0}$(\text{GeV$^{2}$}) & $M_{B}^{2}$(\text{GeV$^{2}$}) &$ m_{H}$(\text{GeV}) &PC(\%) & \tabincell{c}{Two-meson threshold(GeV)} \\ \hline $J_{1}$ & 58$\pm$2 & 5.4$\sim$ 5.8 & $7.17\pm0.11$ & 33.1 & \vspace{1ex} \\ $J_{2}$ & 56$\pm$2 & 5.4$\sim$ 5.6 & $7.04\pm0.13$ & 31.7 & 7.24 \vspace{1ex} \\ $J_{3}$ & 57$\pm$2 & 5.4$\sim$ 5.6 & $7.12\pm0.15$ & 32.7 &($\bar{B}_{s}D$) \vspace{1ex} \\ $J_{4}$ & 57$\pm$2 & 5.3$\sim$ 5.5 & $7.11\pm0.20$ & 32.4 &\\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \begin{table}[th] \begin{center} \caption{The continuum threshold, Borel window, hadron mass and pole contribution for the $bc\bar{q}\bar{s}$ system with $J^{P} = 1^{+}$.} \label{resulttwo} \begin{ruledtabular} \begin{tabular}{cccccc} Current& $s_{0}$(\text{GeV$^{2}$}) & $M_{B}^{2}$(\text{GeV$^{2}$}) & $m_{H}$(\text{GeV}) & PC(\%) & \tabincell{c}{Two-meson threshold(GeV)} \\ \hline $J_{1\mu}$ & 58$\pm$2 & 5.4$\sim$ 5.8 & $7.19\pm0.12$ & 33.1 & \vspace{1ex} \\
$J_{2\mu}$ & 57$\pm$2 & 5.3$\sim$ 5.7 & $7.10\pm0.11$ & 34.3 & 7.28 \vspace{1ex}\\ $J_{3\mu}$ & 57$\pm$2 & 5.4$\sim$ 5.7 & $7.10\pm0.14$ & 33.9 & ($\bar{B}_{s}^{*}D$) \vspace{1ex}\\ $J_{4\mu}$ & 58$\pm$2 & 5.4$\sim$ 5.9 & $7.16\pm0.13$ & 33.7 & \\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \subsection{$sc\bar{q}\bar{b}$ systems} For the $sc\bar{q}\bar{b}$ systems, we calculate and list the correlation functions and spectral densities in Appendix~\ref{spectral densities} for all interpolating currents in Eqs.~\eqref{scalarcurrents_scqb}--\eqref{axialvectorcurrents_scqb}. Compared to the $bc\bar{q}\bar{s}$ system, the behavior of the OPE series is very different, as shown in Fig.~\ref{Figratioeta1} (for the scalar current $\eta_{1}$), where the contributions from the quark condensate $\langle\bar{q}q\rangle$ and the quark-gluon mixed condensate $\langle\bar{q}g_{s}\sigma\cdot Gq\rangle$ are dominant while the contribution from the four-quark condensate $\langle\bar{q}q\rangle^{2}$ is relatively small. This difference is due to the presence of the $m_Q\langle\bar{q}q\rangle$ and $m_Q\langle\bar{q}g_{s}\sigma\cdot Gq\rangle$ terms in the $sc\bar{q}\bar{b}$ system, which are proportional to the heavy quark mass and absent in the OPE series of the $bc\bar{q}\bar{s}$ system. 
\begin{figure}[h] \centering \includegraphics[width=10cm]{FigRatioeta.pdf}\\ \caption{The OPE behavior for the current $\eta_{1}$ in the $J^{P}= 0^{+}$ $sc\bar{q}\bar{b}$ system.} \label{Figratioeta1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=8.5cm]{FigMBeta.pdf}\quad \includegraphics[width=8.5cm]{Figstaeta.pdf}\\ \caption{Variation of $m_{H}$ with $s_{0}$ and $M_{B}^{2}$ corresponding to the current $\eta_{1}$ in the $J^{P}= 0^{+}$ $sc\bar{q}\bar{b}$ system.} \label{FigMBandstaeta1} \end{figure} For the interpolating current $\eta_{1}$ with $J^P=0^+$, we perform the numerical analysis and find that the suitable working regions for the continuum threshold and Borel parameter are $55\leq s_0\leq 59~\text{GeV}^{2}$ and $7.6\leq M_B^2\leq 7.9~\text{GeV}^{2}$, respectively. We show the variations of $m_H$ with the threshold $s_{0}$ and the Borel mass $M_{B}^{2}$ in Fig.~\ref{FigMBandstaeta1}. Accordingly, the hadron mass can be extracted in these parameter regions. Among all interpolating currents in the $sc\bar{q}\bar{b}$ systems, we can establish reliable mass sum rules only for $\eta_{1},\eta_{3},\eta_{4},\eta_{1\mu}$ and $\eta_{3\mu}$. We list the numerical results for the threshold $s_{0}$, Borel mass $M_{B}^{2}$, output masses, pole contributions and the two-meson thresholds in Table~\ref{resultthree} for the scalar $sc\bar{q}\bar{b}$ system and Table~\ref{resultfour} for the axial-vector channel. In Table~\ref{resultthree}, the extracted masses for the scalar $sc\bar{q}\bar{b}$ tetraquarks are slightly below the $B_{s}D$ threshold but above the $B_{c}^{+}K$ threshold. However, the numerical results in Table~\ref{resultfour} show that the axial-vector $sc\bar{q}\bar{b}$ tetraquarks lie below both the $B_{s}^{*}D$ and $B_{c}^{+}K^{*}$ thresholds. \begin{table}[h!] 
\begin{center} \caption{The continuum threshold, Borel window, mass and pole contribution for the $sc\bar{q}\bar{b}$ system with $J^{P} = 0^{+}$.} \label{resultthree} \begin{ruledtabular} \begin{tabular}{cccccc} Current &$s_{0}$(\text{GeV$^{2}$}) & $M_{B}^{2}$(\text{GeV$^{2}$}) & $m_{H}$(\text{GeV}) & PC(\%) & \tabincell{c}{Two-meson threshold(GeV)} \\ \hline $\eta_{1}$ & 55$\pm$2 & 7.6$\sim$ 7.9 & $7.11\pm0.11$ & 10.7 & \vspace{1ex}\\ $\eta_{2}$ & -- & -- & -- & -- &6.77($B_{c}^{+}K$) \vspace{1ex}\\ $\eta_{3}$ & 54$\pm$2 & 6.4$\sim$ 7.3 & $7.02\pm0.12$ & 12.7 &7.24($B_{s}D$) \vspace{1ex}\\ $\eta_{4}$ & 56$\pm$2 & 6.4$\sim$ 7.7 & $7.13\pm0.12$ & 14.6 &\\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \begin{table}[h!] \caption{The continuum threshold, Borel window, mass and pole contribution for the $sc\bar{q}\bar{b}$ system with $J^{P} = 1^{+}$.} \label{resultfour} \begin{center} \begin{ruledtabular} \begin{tabular}{cccccc} Current & $s_{0}$(\text{GeV$^{2}$}) & $M_{B}^{2}$(\text{GeV$^{2}$}) & $m_{H}$(\text{GeV}) & PC(\%) & \tabincell{c}{Two-meson threshold(GeV)} \\ \hline $\eta_{1\mu}$ & 55$\pm$2 & 7.7$\sim$ 7.9 & $7.10\pm0.12$ & 11.2 & \vspace{1ex}\\ $\eta_{2\mu}$ & -- & -- & -- & -- &7.17($B_{c}^{+}K^{*}$)\vspace{1ex}\\ $\eta_{3\mu}$ & 54$\pm$2 & 7.6$\sim$ 7.8 & $7.04\pm0.12$ & 10.4 &7.28($B_{s}^{*}D$) \vspace{1ex}\\ $\eta_{4\mu}$ & -- & -- &-- & -- &\\ \end{tabular} \end{ruledtabular} \end{center} \end{table} \section{Conclusion} We have investigated the mass spectra for the fully open-flavored $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquark states in the framework of QCD sum rules. We construct the interpolating tetraquark currents with $J^{P}=0^{+}$ and $1^{+}$ and calculate their two-point correlation functions and spectral densities up to dimension eight condensates at the leading order of $\alpha_s$. 
For the $bc\bar{q}\bar{s}$ tetraquark states, we find that the quark condensate $\langle\bar{q}q\rangle$ and quark-gluon mixed condensate $\langle\bar{q}g_{s}\sigma\cdot Gq\rangle$ terms are proportional to the light quark mass and thus numerically small. The dominant nonperturbative contribution to the correlation function comes from the four-quark condensate $\langle\bar{q}q\rangle\langle\bar{s}s\rangle$. The OPE series are very different for the $sc\bar{q}\bar{b}$ tetraquark systems, where the quark condensate and quark-gluon mixed condensate provide more important contributions than the four-quark condensate. This difference leads to distinct behavior of the mass sum rules between the $bc\bar{q}\bar{s}$ and $sc\bar{q}\bar{b}$ tetraquark systems. After the numerical analyses, we extract masses around $7.1-7.2$ GeV for both the scalar and axial-vector $bc\bar{q}\bar{s}$ tetraquark states, and around $7.0-7.1$ GeV for the $sc\bar{q}\bar{b}$ tetraquarks. These results show that the masses of the $bc\bar{q}\bar{s}$ tetraquark states are below the $\bar{B}_{s}D$ and $\bar{B}_{s}^{*}D$ two-meson $S$-wave thresholds, consistent with the results from the color-magnetic interaction model~\cite{Cui:2006mp}. For the axial-vector $sc\bar{q}\bar{b}$ tetraquarks, the masses are also lower than the two-meson thresholds of the $B_{c}^{+}K^{*}$ and $B_{s}^{*}D$ modes. These results indicate that the two-meson strong decay modes are kinematically forbidden for these possible tetraquark states; they can only decay via the weak interaction if they do exist. For the scalar $sc\bar{q}\bar{b}$ tetraquark states, the decay to the $B_{s}D$ final state is also forbidden, but the $B_{c}^{+}K$ decay mode is allowed due to their slightly higher masses. However, such a decay will be difficult to observe because of the low production rate of the $B_c^+$ meson. These stable tetraquark states may be found at Belle~II and LHCb in the future. 
\section*{ACKNOWLEDGMENTS} This project is supported in part by the Chinese National Youth Thousand Talents Program.
\section{Introduction} Direct detection and spectroscopy of extra-solar planets (hereafter exo-planets) is expected to be one of the essential methods for understanding how planetary systems are born, how they evolve, and, ultimately, for identifying biological signatures on these planets. The enormous contrast in luminosity between a central star and an associated planet has been a critical difficulty in the direct observation of exo-planets. For instance, if the solar system were observed from the outside, the contrast in luminosity between the Sun and the Earth would be 10$^{-10}$ at visible wavelengths and 10$^{-6}$ in the mid-infrared wavelength region \citep{Traub2002}. Therefore, the number of exo-planets detected directly is much smaller than the number detected by other methods (\cite{Mayor1995}; \cite{Charbonneau2000}), although the first direct observations have finally been achieved (\cite{Marois2008}; \cite{Kalas2008}). The coronagraph, which was first developed for solar observation (\cite{Lyot1939}), is a special optical system designed to reduce this contrast. A coronagraph designed to achieve a very high dynamic range therefore has the potential to further extend the possibilities for the direct observation of exo-planets. The performance of a coronagraph is degraded by obscuration in the telescope pupil, which arises from the secondary mirror, its support structure, and the gaps between segmented mirrors. For example, in the case of a checkerboard coronagraph (\cite{Vanderbei2004}), which is a type of binary pupil mask coronagraph, the inner working angle ($IWA$) without and with obscuration by the secondary mirror is 3$\lambda/D$ and 7$\lambda/D$, respectively, where $\lambda$ is the wavelength and $D$ is the diameter of the aperture (\cite{Tanaka2006}; \cite{Enya2007}). 
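The contrast figures quoted above can be reproduced at the order-of-magnitude level with elementary photometry. The sketch below is illustrative, not taken from \citet{Traub2002}: it assumes a Lambertian Earth with albedo 0.3 for the visible (reflected-light) case and blackbody emission at 288\,K and 5778\,K for the mid-infrared (thermal) case.

```python
import math

HC_OVER_K = 0.014388  # hc/k_B in m*K
R_SUN, R_EARTH, AU = 6.96e8, 6.371e6, 1.496e11  # meters

def planck_ratio(wavelength, t_planet, t_star):
    # Ratio of Planck functions B_lambda(T_p)/B_lambda(T_s) at one wavelength.
    x_p = HC_OVER_K / (wavelength * t_planet)
    x_s = HC_OVER_K / (wavelength * t_star)
    return (math.exp(x_s) - 1.0) / (math.exp(x_p) - 1.0)

# Visible: reflected light, contrast ~ (albedo/4) * (R_p / a)^2.
c_vis = 0.3 / 4.0 * (R_EARTH / AU) ** 2

# Mid-infrared (10 um): thermal emission, contrast ~ B ratio * (R_p / R_s)^2.
c_mir = planck_ratio(10e-6, 288.0, 5778.0) * (R_EARTH / R_SUN) ** 2

print(f"{c_vis:.1e} {c_mir:.1e}")
```

The reflected-light estimate lands near $10^{-10}$ and the thermal estimate within an order of magnitude of $10^{-6}$, which is why the mid-infrared is the gentler regime for planet detection.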
In fact, off-axis telescopes are considered for proposed future space missions specializing in coronagraphy, e.g., TPF-C (\cite{Traub2006}), SEE-COAST (\cite{Schneider2006}), and PECO (\cite{Guyon2009}), in order to avoid pupil obscuration. Various coronagraphs have therefore been presented for pupils without obscuration (e.g., the summary in \cite{Guyon2006}). If the influence of pupil obscuration on the coronagraph design can be reduced, the value of general-purpose on-axis telescopes (currently working telescopes and those under construction) as platforms for a coronagraph becomes much higher. This paper presents a concept for solutions that realize a coronagraph for a segmented pupil by employing a binary shaped pupil mask. \section{Concept of Multi-barcode Mask Solution} Among the various current coronagraphic methods, coronagraphs using binary shaped pupil masks have some advantages in principle. Essentially, the ability of a binary pupil mask coronagraph to produce a high contrast point spread function (PSF) is achromatic (except for the scaling of the PSF size with wavelength), and such a coronagraph is somewhat less sensitive to telescope pointing error than other types (e.g., \cite{Lyot1939}). Another important property of binary pupil mask coronagraphs is that they use only part of the pupil as the transmissive area of the mask. In this work, we employ this property to obtain coronagraph design solutions with binary masks which ``skip over'' the obscured part of the pupil. Fig.\,\ref{fig1} shows an example of a solution for a segmented telescope pupil. In this case, the pupil is obscured by the secondary mirror and four off-center support structures, similar to the pupils of the SUBARU and GEMINI telescopes, as shown in the top panel of Fig.\,\ref{fig1}. The bottom left panel shows the coronagraphic PSF expected from the pupil mask. Two dark regions, DR1 and DR2, are produced in the PSF. 
It must be noted that the principle of this coronagraph is essentially the same as that of the one-dimensional coronagraph with a barcode mask presented by Kasdin et al. (2005), while the length of the barcode mask in the vertical direction in Fig.\,\ref{fig1} is finite, and the barcode is split into two sets (i.e., a double barcode mask), above and below the obstruction created by the secondary mirror. This solution has coronagraphic power only in the horizontal direction. LOQO, a software package presented by Vanderbei (1990), was used to optimize each barcode mask. The $IWA$, the outer working angle ($OWA$), and the contrast ($C_0$) required for the optimization of the one-dimensional coronagraph were 3.0$\lambda/D$, 16$\lambda/D$, and better than $10^{-5}$, respectively. In this work, the throughput is simply defined as the ratio of the transmissive area of the pupil with the pupil mask to that without it, i.e., the fraction of the pupil shown in white in the figure. The throughput for the solution shown in Fig.\,\ref{fig1} is 24\%. The values of $IWA$, $OWA$, $C_0$, and the throughput are summarized in Table\,\ref{table1}, together with the values of the other solutions described below. The central obstruction relevant to this coronagraph is determined by the width of the support structure, which is much smaller than the diameter of the obstruction due to the secondary mirror. As a result, a smaller $IWA$ is realized. This solution has no coronagraphic power in the vertical direction; along this direction, the intensity of the PSF decreases following the diffraction pattern of a rectangular aperture, and there is no $OWA$. Therefore, the contrast $C$, defined as the intensity ratio between the core of the PSF and each position in the dark region, is not constant ($C$ is better than, or equal to, $C_0$). 
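The way a binary mask shapes the one-dimensional PSF can be illustrated with a direct Fourier sum. The slot pattern below is an arbitrary placeholder, not a LOQO-optimized design, so it does not reach the $10^{-5}$ dark regions quoted above; it only shows how the squared modulus of the aperture's Fourier transform is evaluated for a 1-D binary pupil:

```python
import cmath, math

# 1-D binary pupil: transmissive slots (1) separated by opaque gaps (0).
# Slot positions are an arbitrary illustration, not an optimized solution.
N = 400
pupil = [0.0] * N
for a, b in [(40, 95), (120, 160), (185, 240), (265, 305), (330, 385)]:
    for i in range(a, b):
        pupil[i] = 1.0

def psf_1d(pupil, ntheta=201, umax=0.05):
    """Normalized |Fourier transform|^2 of the aperture on a grid of
    spatial frequencies u in [-umax, umax] (cycles per sample)."""
    out = []
    for k in range(ntheta):
        u = -umax + 2.0 * umax * k / (ntheta - 1)
        amp = sum(p * cmath.exp(-2j * math.pi * u * i)
                  for i, p in enumerate(pupil))
        out.append(abs(amp) ** 2)
    peak = max(out)
    return [v / peak for v in out]

p = psf_1d(pupil)
center = len(p) // 2
print(p[center])  # -> 1.0 (peak at zero frequency for a nonnegative pupil)
```

Optimizing the slot edges so that this quantity stays below $C_0$ between $IWA$ and $OWA$ is exactly the linear-programming problem solved by LOQO for each barcode.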
\section{Solutions for Various Pupils} \subsection{Pupil Resulting from Hexagonal Mirrors} The top of Fig.\,\ref{fig2} shows a solution using an off-centered four-barcode mask applied to a segmented telescope pupil consisting of hexagonal mirrors, with obstructions created by a secondary mirror and its support structure. This type of telescope design is adopted for the James Webb Space Telescope (JWST). $IWA$, $OWA$, and $C_0$ in this solution are 3.5$\lambda/D_{\textsl{hex}}$, 19$\lambda/D_{\textsl{hex}}$, and $10^{-5.0}$, respectively, in which $D_{\textsl{hex}}$ is defined as shown in Fig.\,\ref{fig2}. The throughput of the solution is 24\%. Off-centering the barcode mask gives rise to a peculiar shape of the core of the PSF, while the dark regions DR1 and DR2 produced in this solution are similar to those obtained when the barcode mask is not off-centered. JWST carries coronagraphs in two instruments, the Near-Infrared Camera (NIRCAM) (\cite{Rieke2005}; \cite{Green2005}) and the Mid-Infrared Instrument (MIRI) (\cite{Boccaletti2005}), in which Lyot-type coronagraphs and coronagraphs using four-quadrant phase masks will be used. In these coronagraphs, the PSF is impaired by a complex diffraction pattern, especially by six bright tails extending radially from the core of the PSF as a result of the segmentation of the pupil. Consequently, the discovery angle of these coronagraphs is reduced, particularly at positions close to the core of the PSF. These coronagraphs use devices set at the focal plane in order to realize a high contrast image (i.e., an occulting mask or a four-quadrant phase mask), so that they are, in principle, sensitive to telescope pointing error and have a limited working bandwidth. If a binary pupil mask coronagraph can be used instead, these limits are essentially relaxed and the discovery angle of the coronagraphic image can be improved. 
On the other hand, a solution using a binary pupil mask, as shown in Fig.\,\ref{fig2}, imposes a constraint on the $OWA$. In order to obtain the best coronagraph design for each mission, it is essential to estimate the expected observational performance from both the instrument specification and scientific simulation. \subsection{Pupil with Central Obscuration and On-axis Spiders} The bottom of Fig.\,\ref{fig2} shows a solution provided by a double barcode mask, applied to an on-axis telescope pupil with obscuration by the secondary mirror and its four on-axis support structures. This is an example of a solution obtained from an optimization presuming a central obstruction of the barcode mask. It should be noted that the central obstruction in this case is set by the width of the support structure (not by the diameter of the secondary mirror). $IWA$, $OWA$, and $C_0$ in this solution are 3.4$\lambda/D_{\textsl{hex}}$, 15$\lambda/D_{\textsl{hex}}$, and $10^{-5.0}$, respectively. The throughput of the solution is 15\%. $C$ and $IWA$ of this solution satisfy the requirements for a mid-infrared coronagraph (\cite{Enya2010}) for the Space Infrared Telescope for Cosmology and Astrophysics (SPICA) (\cite{Nakagawa2009}). SPICA will carry an on-axis Ritchey-Chretien telescope with a 3\,m class diameter aperture, and it is planned to be launched in 2018. The use of a binary-shaped pupil mask is considered the baseline solution for the SPICA coronagraph because of its achromatic performance, its robustness against telescope pointing error caused by vibration of the cryo-coolers and other mechanisms, and its feasibility. \subsection{Further Variations} Fig.\,\ref{fig3} shows further variations of solutions consisting of multiple barcode masks. Mask-4 is a solution for a telescope pupil which is the same as the pupil in the case of Mask-2. The four barcode masks used in Mask-4 are common to those of Mask-2. 
In addition, four segments of the pupil, located to the left and right of the central obscuration, are used in order to improve the throughput. $IWA$, $OWA$, and $C_0$ in this solution are 3.9$\lambda/D_{\textsl{hex}}$, 14$\lambda/D_{\textsl{hex}}$, and $10^{-5.0}$, respectively. The throughput of this solution is 27\%. The optimization of these newly used segments was carried out with the constraint of central obscuration, as in the case of Mask-3. In comparison with Mask-2, this solution can be regarded as an example in which $C_0$ is maintained but $IWA$ and $OWA$ are compromised as a result of the trade-off. Mask-5 is a solution for a telescope pupil which is the same as the pupil in the case of Mask-3. Eight barcode masks were employed in order to extend the effective area used by the masks. The optimization of each mask was carried out with constraints from the central obscuration caused by the secondary mirror or its support structure. $IWA$, $OWA$, and $C_0$ in this solution are 3.3$\lambda/D$, 10$\lambda/D$, and $10^{-5.3}$, respectively. The throughput of this solution is 30\%, a large improvement over the case of Mask-3. In comparison with Mask-3, this solution can be regarded as an example in which $IWA$ is maintained but $C_0$ and $OWA$ are compromised as a result of the trade-off. Further improvement in performance is possible if the telescope design takes account of the use of a multi-barcode pupil mask coronagraph. Mask-6 and Mask-7 are solutions presuming segmented rectangular mirrors. The throughput of these solutions is 50\%, the highest of the solutions presented in this paper. For Mask-6, $IWA$, $OWA$, and $C_0$ are 3.6$\lambda/D_{\textsl{rect}}$, 11$\lambda/D_{\textsl{rect}}$, and $10^{-6.0}$, respectively, in which $D_{\textsl{rect}}$ is defined as shown in Fig.\,\ref{fig3}. 
Mask-7 provides $IWA$=2.5$\lambda/D_{\textsl{rect}}$, significantly better than Mask-6, while $C_0$ and $OWA$ are common to Mask-6 and Mask-7. \section{Discussion and Summary} If rotation of the pupil mask, with coronagraphic imaging before and after the rotation, is possible, the total discovery angle is improved. Fig.\,\ref{fig4} shows the concept of rotating the pupil mask by 90 degrees, applied to the solution shown in Fig.\,\ref{fig2}. As a result of the double coronagraphic imaging before and after the mask rotation, most of the influence of the diffraction tails in the coronagraphic PSF is removed, and a continuous 360$^{\circ}$ discovery angle is provided in total. The improved discovery angle makes it possible to observe companions of a central star even if the companions are buried in the diffraction tails of the original PSF. The mask rotation technique can be especially useful for SPICA, in which the roll of the telescope (\cite{Trauger2007}) is strongly constrained by the thermal system design required to realize a cryogenic infrared telescope satellite utilizing radiative cooling. A binary shaped pupil mask has also been used to support a Phase Induced Amplitude Apodization (PIAA) coronagraph (\cite{Guyon2010}). With such a hybrid coronagraph it is, in principle, possible to realize a smaller $IWA$ and higher throughput than with a coronagraph employing only a binary shaped pupil mask. The hybrid coronagraph currently presented by \citet{Guyon2010} includes circular apodization produced by PIAA and a binary shaped mask consisting of concentric rings (\cite{Vanderbei2003}); the coronagraphic power is therefore along the radial direction in the PSF and is not one-dimensional. We would like to point out the potential of combining barcode masks with a one-dimensional PIAA in order to realize a one-dimensional hybrid coronagraph applicable to segmented telescope pupils like those shown in this paper. 
In general, the size of a telescope aperture is limited by various factors. For instance, the size of the rocket fairing constrains off-axis space telescopes with seamless mirrors, proposed especially for the observation of exo-planets. If a segmented pupil becomes more useful for the coronagraphic observation of exo-planets, general-purpose on-axis telescopes with segmented pupils become more valuable as coronagraph platforms. In the case of ground-based telescopes, the current largest class of telescopes (e.g., VLT, KECK, GEMINI, SUBARU, and so on) can be good targets for the application of a one-dimensional coronagraph using a binary shaped mask. These telescopes have begun the direct detection of giant, young planetary objects in the near-infrared (\cite{Chauvin2004}; \cite{Marois2008}; \cite{Lagrange2010}; \cite{Thalmann2010}). Giant ground-based telescopes of the future (e.g., TMT, EELT) will extend these observations in spatial resolution and sensitivity. Space telescopes have further potential, especially for observation in the mid-infrared wavelength region. The contrast provided by several of the solutions presented in this paper is $\sim10^{-6}$, which is the contrast needed for the observation of mature terrestrial exo-planets in the mid-infrared (\cite{Traub2002}). This fact suggests that a mid-infrared coronagraph on a giant telescope consisting of segmented mirrors has the potential for a terrestrial planet search in the future, where there is a critical spectral feature, O$_3$\,(9.8$\mu$m), considered to be a biomarker in the atmospheres of terrestrial planets. In contrast, a coronagraphic search for terrestrial planets at visible wavelengths requires observation with a much higher contrast, $10^{-10}$ (\cite{Traub2002}), which none of the masks in this paper are able to reach. 
Proposed off-axis telescopes with a seamless pupil, e.g., TPF-C (\cite{Traub2006}), SEE-COAST (\cite{Schneider2006}), and PECO (\cite{Guyon2009}), might be a more reasonable route to such ultimate contrast than larger telescopes with a segmented pupil. This paper has presented one-dimensional coronagraphic solutions for segmented telescope pupils by applying a type of binary shaped mask, the multiple barcode mask, which ``skips over'' the obscured part of the pupil. These coronagraphs have the general advantages of binary pupil mask coronagraphs, i.e., low susceptibility to telescope pointing errors and weak constraints on the bandwidth. Furthermore, the multi-barcode mask coronagraph provides a large discovery angle and a small $IWA$, even for a pupil with a large central obstruction. We suggest that the concept of these solutions can facilitate the use of large telescopes having segmented pupils as platforms for advanced coronagraphs. \bigskip We deeply thank the pioneers of the barcode mask, particularly N. J. Kasdin and R. J. Vanderbei, with the greatest respect. This work is supported by the Japan Society for the Promotion of Science and the Ministry of Education, Culture, Sports, Science and Technology of Japan. We would like to express special gratitude to S. Tanaka, even after the change of his field.
\section{Introduction} Let $K$ be a field and $R=K[x_1, \dots, x_n]$ be the polynomial ring in $n$ variables over $K$ with each $\deg~x_i=1$. If $u=x_1^{a_1}\cdots x_n^{a_n}$ is a monomial of $R$, then we denote the support of $u$ by $supp(u)=\{\,x_i~|~a_i\neq 0\}$. For a monomial ideal $I\subseteq R$, we write $G(I)$ for the set of its unique minimal monomial generators. We call a monomial ideal $I$ a \emph{matroidal ideal} if each member of $G(I)$ is square-free (i.e., $I$ is reduced) and the following exchange condition is satisfied: for any $u=x_1^{a_1}\cdots x_n^{a_n}, v=x_1^{b_1}\cdots x_n^{b_n}\in G(I)$, if $a_i>b_i$ for some $i$, then there exists some $j$ with $a_j<b_j$ such that $x_ju/x_i \in G(I)$.\footnote{$I$ is called a \emph{polymatroidal ideal} when the square-free assumption is not required (see \cite{hh}).} In other words, the set $\mathcal{B}(I) =\{\,supp(u)\,|\,u\in G(I)\,\}$ satisfies the following exchange condition: \begin{list}{(B)} \item If $B_1$ and $B_2$ are elements of $\mathcal{B}(I)$ and $x \in B_1 - B_2$, then there is an element $y\in B_2 - B_1$ such that $(B_1-\{x\})\cup \{y\} \in \mathcal{B}(I)$. \end{list} \medskip \noindent It follows from \cite[Theorem~1.2.3]{ox} that there is a ``matroid'' having $\mathcal{B}(I)$ as its collection of bases (maximal independent sets). Since each maximal independent set of a matroid has the same cardinality (see \cite[Lemma~1.2.4]{ox}), each monomial $u\in G(I)$ must be of the same degree, say $d$, and we call this number $d$ the degree of the matroidal ideal $I$. Matroid theory is one of the most fascinating research areas in combinatorics, with many links to graphs, lattices, codes, and projective geometry. For the interested reader, we refer to the textbooks \cite{ox} and \cite{we}. In this paper, we focus on some arithmetic properties of matroidal ideals. It is known that a matroidal ideal has linear quotients (cf. \cite[Theorem~5.2]{ch}). 
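The exchange condition (B) is easy to test mechanically. As a hypothetical sanity check (not part of the paper's arguments), the sketch below verifies it for the bases of the uniform matroid — the supports of the generators of the degree-$d$ square-free Veronese ideal in $n$ variables — and shows that an arbitrary non-matroidal family fails:

```python
from itertools import combinations

def veronese_bases(n, d):
    # Supports of the generators of the degree-d square-free Veronese ideal,
    # i.e. the bases of the uniform matroid U_{d,n}: all d-subsets.
    return [frozenset(c) for c in combinations(range(1, n + 1), d)]

def satisfies_exchange(bases):
    # Basis-exchange condition (B): for B1, B2 and x in B1 - B2 there is
    # y in B2 - B1 with (B1 - {x}) | {y} again a basis.
    bset = set(bases)
    return all(any((B1 - {x}) | {y} in bset for y in B2 - B1)
               for B1 in bases for B2 in bases for x in B1 - B2)

print(satisfies_exchange(veronese_bases(5, 3)))                    # -> True
print(satisfies_exchange([frozenset({1, 2}), frozenset({3, 4})]))  # -> False
```

For the Veronese family every exchange lands on another $d$-subset, so (B) holds trivially; the two disjoint pairs $\{1,2\},\{3,4\}$ admit no exchange at all.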
We first discuss the linear quotient index $q(I)$ of a matroidal ideal in Section 2 and obtain the following result. \begin{thm1} Let $I$ be a matroidal ideal of degree $d$ in the polynomial ring $R=K[x_1, \dots, x_n]$ with $supp(I)=\{x_1,\dots,x_n\}$. Then $q(I)=n-d$. \end{thm1} \noindent With this result and the fact \cite[Corollary~1.6]{ht}, we obtain that the projective dimension of a matroidal ideal is $pd_R(I)=n-d$. An ideal is \emph{unmixed} if all its prime divisors are of the same height; Cohen-Macaulay ideals have this property. In Section 3, we discuss unmixed matroidal ideals and find the following relation between the height and the degree of a matroidal ideal. (For the ``$*$'' product of ideals, see Definition 2.1.) \begin{thm2} \label{maintheo2} Let $I \subseteq K[x_1, \dots, x_n]$ be an unmixed matroidal ideal of degree $d$ with $supp(I)=\{x_1,\dots,x_n\}$ and $n\geq 2$; then $h+d-1\leq n\leq hd$, where $h$ is the height of $I$. In particular, $n=h+d-1$ if and only if $I$ is square-free Veronese; and $n=hd$ if and only if $I=J_1*J_2* \cdots *J_d$, where each $J_i$ is generated by $h$ distinct variables. \end{thm2} For an ideal $I$, the minimal number of elements which generate $I$ up to radical is called the \emph{arithmetical rank} of the ideal and is denoted by $ara\, I$. When this numerical invariant equals the height of $I$, we say that $I$ is a set-theoretic complete intersection. We discuss the relation between the arithmetical rank $ara\,I$ and the linear quotient index $q(I)$ of a matroidal ideal in the final section. The main result we obtain is as follows. \begin{thm3}\label{maintheo4} Let $I$ be a matroidal ideal of degree $d$ of a polynomial ring $R=K[x_1, \dots, x_n]$ and supp$(I)=\{x_1, \dots, x_n \}$. 
Then $ara~I =q(I)+1$ if one of the following conditions holds: \begin{description} \item{(i)} $I$ is square-free Veronese; \item{(ii)} $I=J_1 J_2 \cdots J_d$ such that each $J_i$ is generated by $h$ distinct variables; \item{(iii)} $d=2$, \end{description} where $h$ is the height of $I$. \end{thm3} \noindent As a consequence of the above theorem, we have the following corollary. \begin{cor} Let $I\subseteq K[x_1,\dots,x_n]$ be a matroidal ideal such that supp$(I)=\{x_1, \dots, x_n \}$. Then $I$ is Cohen-Macaulay if and only if it is a set-theoretic complete intersection. \end{cor} \section{Linear quotients and matroidal ideals} Throughout, $R=K[x_1, \dots, x_n]$ is the polynomial ring in $n$ variables over a field $K$. By saying that an ideal $I$ is Cohen-Macaulay, we mean that the quotient ring $R/I$ is Cohen-Macaulay. For a monomial ideal $I$, we define the support of $I$ to be the set $supp(I)={\bigcup}_{u\in G(I)} supp(u)$. In this section, we discuss the linear quotient index $q(I)$ of a matroidal ideal. We first recall the following definition. \begin{defi} We say that a monomial ideal $I\subseteq R$ has linear quotients if there is an ordering $u_1, \dots, u_s$ of the monomials belonging to $G(I)$ with deg$~u_1\leq $deg$~u_2\leq \cdots \leq $deg$~u_s$ such that, for each $2\leq j\leq s$, the colon ideal $\langle u_1, u_2, \dots, u_{j-1}\rangle:u_j$ is generated by a subset of $\{x_1, \dots, x_n\}$. \end{defi} Let $I$ be a monomial ideal with linear quotients with respect to the ordering $u_1, \dots, u_s$ of the monomials belonging to $G(I)$. We write $q_j (I)$ for the number of variables required to generate the colon ideal $\langle u_1, u_2, \dots, u_{j-1}\rangle : u_j$, and set $q(I)=\max\{ q_j(I)~|~2\leq j\leq s\}$. 
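For square-free monomial ideals the colon ideals above are easy to compute: identifying a square-free monomial with its support, the minimal generators of $\langle u_1,\dots,u_{j-1}\rangle:u_j$ correspond to the minimal sets among $supp(u_i)-supp(u_j)$, $i<j$. The hypothetical sketch below computes $q(I)$ this way for a square-free Veronese ideal with its generators in lexicographic order, illustrating the equality $q(I)=n-d$ stated in the introduction:

```python
from itertools import combinations

def colon_min_gens(earlier, u):
    # Minimal generators of <earlier> : u for square-free monomials,
    # with each monomial identified with its support (a frozenset).
    quotients = [v - u for v in earlier]
    return {frozenset(q) for q in quotients
            if not any(r < q for r in quotients)}

def q_index(gens):
    # q(I) for the given ordering; the assertion checks linear quotients.
    qmax = 0
    for j in range(1, len(gens)):
        mins = colon_min_gens(gens[:j], gens[j])
        assert all(len(m) == 1 for m in mins), "colon ideal not variable-generated"
        qmax = max(qmax, len(mins))
    return qmax

n, d = 5, 3
gens = [frozenset(c) for c in combinations(range(1, n + 1), d)]  # lex order
print(q_index(gens))  # -> 2 = n - d
```

The internal assertion never fires here because, in lexicographic order, every quotient $supp(u_i)-supp(u_j)$ contains a single-variable quotient coming from an earlier generator.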
From the fact \cite[Corollary~1.6]{ht} that the length of the minimal free resolution of $R/I$ over $R$ is equal to $q(I)+1$, we see that the index $q(I)$ is independent of the particular choice of an ordering which gives linear quotients. Moreover, by the Auslander-Buchsbaum formula, we have $depth\,R/I=n-q(I)-1$. It then follows from the equality $\dim R/I=n-ht(I)$ that a monomial ideal $I$ with linear quotients satisfies $ht(I)\leq q(I)+1$ and is Cohen-Macaulay if and only if $ht(I)=q(I)+1$. We summarize the above as the following proposition. \begin{prop}\label{qI} Let $I$ be a monomial ideal of $R$ with linear quotients. Then $ht(I)\leq q(I)+1$; and $I$ is Cohen-Macaulay if and only if $\,ht(I)=q(I)+1$. \end{prop} As stated in the introduction, it is known that matroidal ideals have linear quotients. Therefore, all the above discussion applies to matroidal ideals. Next, we introduce two lemmas which will be useful later. In the sequel, we say that $I$ is a matroidal ideal of $K[x_1,\dots,x_n]$ if $supp(I)=\{x_1,\dots,x_n\}$. \begin{lemma} \label{keylem3} Let $I \subseteq K[x_1, \dots, x_n]$ be a matroidal ideal of degree $d$, and let $x$ and $y$ be variables in $R$ such that $xy\nmid u$ for any $u\in G(I)$. If $xf\in G(I)$ for some monomial $f$ of degree $d-1$, then $yf\in G(I)$. \end{lemma} \begin{proof} Write $f=x_1\cdots x_{d-1}$. The assertion is clear if $d=2$, so we may assume that $d\geq 3$. Let $g=y_1 \cdots y_{d-1}$ be a monomial in $R$ different from $f$ such that $yg\in G(I)$ and $|supp(f)\cap supp(g)|$ is maximal. We may assume that $y_i=x_i$ for $i=1, \dots, k$. Suppose that $k\leq d-2$. Then by the definition of a matroidal ideal there are integers $i, j\geq k+1$ such that $\frac{yg}{y_j}x_i\in I$, which contradicts the choice of $g$. Therefore, $f=g$ and the assertion holds. \end{proof} \begin{lemma} \label{keylem2} Let $I \subseteq K[x_1, \dots, x_n]$ be a matroidal ideal of degree $d$. 
If there are $d+1$ distinct variables $\{y, y_1, \dots, y_d\}\subseteq \{x_1, \dots, x_n\}$ such that $f=y_1\cdots y_d\in I$, then there exists an integer $i$ such that $\frac{f}{y_i} y\in I$. \end{lemma} \begin{proof} The assertion is clear if $d\leq 2$, so we may assume that $d\geq 3$. Let $g=z_1 \cdots z_d$ be a monomial in $I$ different from $f$ such that $y\in supp(g)$ and $|supp(f)\cap supp(g)|$ is maximal. We may assume that $z_i=y_i$ for $i=1, \dots, k$ and $z_d=y$. Suppose that $k\leq d-2$. Then by the definition of a matroidal ideal there are integers $i, j\geq k+1$ such that $\frac{g}{z_j}y_i\in I$, which contradicts the choice of $g$. Therefore, $k=d-1$ and the assertion holds. \end{proof} \begin{theo} \label{maintheo3} Let $I$ be a matroidal ideal of degree $d$ of the polynomial ring $R=K[x_1, \dots, x_n]$ with $supp(I)=\{x_1,\dots,x_n\}$. Then $q(I)=n-d$. \end{theo} \begin{proof} Since $I$ has linear quotients, there is an ordering $u_1, \dots, u_s$ of the monomials belonging to $G(I)$ such that, for each $2\leq j\leq s$, the colon ideal $\langle u_1, u_2, \dots, u_{j-1}\rangle:u_j$ is generated by a subset of $\{x_1, \dots, x_n\}$.\\ To show the assertion, it is enough to show that \begin{equation} \label{eq1} \langle u_1, u_2, \dots, u_{j-1}\rangle:u_j\subseteq \langle\{x_1, \dots, x_n\}-supp(u_j)\rangle\end{equation} for each $2\leq j\leq s$ and $$\langle u_1, u_2, \dots, u_{s-1}\rangle:u_s=\langle\{x_1, \dots, x_n\}-supp(u_s)\rangle.$$ Write $u_j=x_{i_1}\cdots x_{i_d}$. If $x_{i_t}\in \langle u_1, u_2, \dots, u_{j-1}\rangle:u_j$ for some $t\leq d$, then $u_j\in \langle u_1, u_2, \dots, u_{j-1}\rangle$ since $\langle u_1, u_2, \dots, u_{j-1}\rangle$ is a square-free monomial ideal, a contradiction. Thus, (\ref{eq1}) holds. By (\ref{eq1}), to finish the proof, it suffices to show that $y\in \langle u_1, u_2, \dots, u_{s-1}\rangle:u_s$ if $y\notin supp(u_s)$. However, this follows from Lemma~\ref{keylem2} with $u_s=y_1\cdots y_d$. 
\end{proof} \begin{coll} \label{pd} Let $I$ be a matroidal ideal of degree $d$ of the polynomial ring $R=K[x_1, \dots, x_n]$. Then the projective dimension of the ideal $I$ over $R$ is $pd_R(I)=n-d$. \end{coll} \begin{proof} Since the length of the minimal free resolution of $R/I$ over $R$ is $q(I)+1$ (see \cite[Corollary~1.6]{ht}), we obtain that $pd_R(I)=pd_R(R/I)-1=q(I)=n-d$. \end{proof} \section{Unmixed matroidal ideals} An ideal is \emph{unmixed} if all its prime divisors are of the same height. In particular, every Cohen-Macaulay ideal is unmixed. In this section, we give characterizations of an unmixed matroidal ideal in terms of its height, degree, and the number of variables. We first recall one special class of matroidal ideals, the square-free Veronese ideals. \begin{example} The square-free Veronese ideal of degree $d$ in the variables $\{x_1, \dots, x_n \}$ is the ideal generated by all square-free monomials in $\{x_1, \dots, x_n \}$ of degree $d$. It is easy to see that the square-free Veronese ideals are matroidal and unmixed. In particular, from \cite[Theorem~4.2]{hh} one sees that the square-free Veronese ideals are the only Cohen-Macaulay matroidal ideals. \end{example} We now give a characterization of matroidal ideals of degree $2$. \begin{theo} \label{keyprop} Let $I\subseteq K[x_1, \dots, x_n]$ be a matroidal ideal of degree $2$ with $supp(I)=\{x_1,\dots,x_n\}$. Then there are subsets $S_1, \dots, S_m$ of $\{x_1, \dots, x_n\}$ such that the following hold: \begin{description} \item{(i)} $m\geq 2$ and $|S_i|\geq 1$ for each $i$; \item{(ii)} $S_i\cap S_j=\emptyset$ if $i\neq j$ and $\bigcup_{i=1}^m S_i=\{x_1, \dots,x_n\}$; \item{(iii)} if $x\in S_i$, $y\in S_j$ for $i\neq j$, then $xy\in G(I)$; \item{(iv)} if $x, y\in S_i$ for some $i$, then $xy\notin G(I)$. \end{description} Moreover, let $P_i$ be the prime ideal generated by the set $\{x_1,\dots,x_n\}-S_i$ for each $i$. 
Then $P_1\cap P_2\cap \cdots \cap P_m$ gives the primary decomposition of $I$. \end{theo} \begin{proof} Let $t-1=|\{x_i~|~i\neq 1, x_ix_1\notin G(I)\}|$; then $1\leq t\leq n-1$. Without loss of generality, we may assume that $x_1x_i\notin G(I)$ if $i=2, \dots, t$ and $x_1x_i\in G(I)$ if $i=t+1, \dots, n$. We first show the following two statements: \\ (a) $x_ix_j\in G(I)$ if $i\leq t$ and $j\geq t+1$; \\ (b) $x_ix_j\notin G(I)$ if $i, j\leq t$. To show that (a) holds, suppose on the contrary that $x_ix_j\notin G(I)$ for some $i\leq t$ and $j\geq t+1$. Since $x_i\in supp(I)$, there is a variable $x_k$ such that $x_ix_k\in G(I)$. Moreover, $x_1x_j, x_ix_k\in G(I)$ and the fact that $I$ is matroidal imply that either $x_ix_1$ or $x_ix_j$ is in $G(I)$, a contradiction. Thus (a) holds. For (b), suppose on the contrary that $x_ix_j\in G(I)$ for some $i,j\leq t$. Since $x_1x_n, x_ix_j\in G(I)$, it follows from the exchange property of matroidal ideals that either $x_1x_i$ or $x_1x_j$ belongs to $G(I)$, a contradiction. Thus (b) holds. \par Let $S_1=\{x_1, \dots, x_t\}$. Observe that $\{x_ix_j~|~i\leq t,~and~j\geq t+1\}$ is a subset of $G(I)$. If $G(I)=\{x_ix_j~|~i\leq t,~and~j\geq t+1\}$, then we set $S_2=\{x_{t+1}, \dots, x_n\}$ and we are done. Therefore, we may assume that there are $j,k\geq t+1$ such that $x_jx_k\in G(I)$. Let $I'$ be the monomial ideal in $K[x_{t+1},\dots,x_n]$ generated by the set $G(I)-\{x_ix_j~|~i\leq t,~and~j\geq t+1\}$. Then $supp(I')\subseteq \{x_{t+1}, \dots, x_n\}$. In fact, $supp(I')=\{x_{t+1}, \dots, x_n\}$. For if not, then there is a variable $x_l$ with $l\geq t+1$ such that $x_l\notin supp(I')$. Since $x_lx_1, x_jx_k\in G(I)$ and $I$ is matroidal, either $x_lx_j$ or $x_lx_k$ is in $G(I)$. Therefore, either $x_lx_j$ or $x_lx_k$ is in $G(I')$, a contradiction. We note that $I'$ is a matroidal ideal of degree $2$ of the polynomial ring $K[x_{t+1}, \dots, x_n]$. Thus, the assertion follows by induction. 
For each $i$, let $P_i$ be the prime ideal generated by the set $\{x_1,\dots,x_n\}-S_i$. By the properties of the $S_i$, it is easy to see that $P_i=I:y$ for every $y\in S_i$. Therefore each $P_i$ is an associated prime ideal of $I$. Let $w\in P_1\cap\dots \cap P_m$; then $w\cdot y\in I$ for every $y\in \bigcup_{i=1}^{m} S_i$. It follows that $(I:w)\supseteq\langle x_1,\dots,x_n\rangle$. Since $I$ is radical, $I$ has no embedded prime ideals. Therefore $I:w=R$, for otherwise $\langle x_1,\dots,x_n\rangle$ would be an embedded prime of $I$; and so $w\in I$. Hence, $P_1\cap\dots \cap P_m=I$; and this completes the proof. \end{proof} From the above theorem we see that the $S_i\,'s$ are uniquely determined. Moreover, if $I$ is unmixed then we have that $|S_i|=|S_j|=n-ht(I)$ for all $i,j$. Therefore we have the following corollary. \begin{coll}\label{d=2} Let $I\subseteq K[x_1, \dots, x_n]$ be an unmixed matroidal ideal of degree $2$; then one has $\frac{n}{2}\leq ht(I)\leq n-1 $. In particular, $ht(I)=n-1$ if and only if $I$ is square-free Veronese; and $ht(I)=\frac{n}{2}$ if and only if $n$ is even and $I=I_1*I_2$ such that each $I_i$ is generated by $\frac{n}{2}$ distinct variables. \end{coll} \begin{proof} It is obvious that $ht(I)\leq n-1$, and the equality holds when $m=n$ and $|S_i|=1$ for all $i$, i.e., $I$ is square-free Veronese. On the other hand, since $|S_1|+|S_2|=2(n-ht(I))\leq \sum_{i=1}^m|S_i|=n$, we obtain that $n\leq 2\,ht(I)$. This equality holds when $m=2$, and in this case $I=I_1*I_2$ such that each $I_i$ is generated by $\frac{n}{2}$ distinct variables. \end{proof} Here, we connect matroidal ideals with graphs. Observe that if $I\subseteq K[x_1, \dots, x_n]$ is a square-free monomial ideal of degree $2$ then $I$ defines a simple graph $G$ with vertex set $\{x_1, \dots, x_n\}$ and edge set $\{x_ix_j~|~x_ix_j\in I\}$. If this is the case, we also say that $I$ is the defining ideal of $G$. The following corollary is a consequence of Theorem~\ref{keyprop}. 
\begin{coll} \label{coll1} Let $I$ be a matroidal ideal of degree $2$ of a polynomial ring $R=K[x_1, \dots, x_n]$. If $I$ is the defining ideal of a simple graph $G$, then there are positive integers $t_1, \dots, t_m$ such that $n=t_1+ \cdots +t_m$ and $G=K_{t_1, t_2, \dots, t_m}$. In particular, if $I$ is unmixed, then $G=K_{t, t, \dots, t}$. \end{coll} \begin{example} Let $G$ be a graph defined by a matroidal ideal of degree $2$ of the polynomial ring $R=K[x_1, \dots, x_6]$. If $G$ is unmixed, then by Corollary~\ref{coll1}, $G=K_6$ or $K_{3,3}$ or $K_{2,2,2}$. \end{example} Next, we proceed to state and prove the main result of this section, which gives a characterization of unmixed matroidal ideals of degree $d$ in a polynomial ring $R=K[x_1, \dots, x_n]$. \begin{theo} \label{maintheo2} Let $I$ be a matroidal ideal of degree $d$ of a polynomial ring $R=K[x_1, \dots, x_n]$, where $n\geq 2$. If $I$ is unmixed, then $h+d-1\leq n\leq hd$, where $h$ is the height of $I$. In particular, $n=h+d-1$ if and only if $I$ is square-free Veronese; and $n=hd$ if and only if $I=J_1*J_2* \cdots *J_d$, where each $J_i$ is generated by $h$ distinct variables. \end{theo} \begin{proof} Observe first that the assertion holds if $d=1$, since in this case $I=\langle x_1,\dots, x_n\rangle$ and $n=h$. Therefore we assume that $d\geq 2$. We proceed by induction on $d$. If $d=2$, then this is the content of Corollary~\ref{d=2}. Thus we assume now that $d\geq 3$. For $i=1, \dots, n$, let $S_i=\{\frac{u}{x_i}~|~u\in G(I),~and~x_i\mid u\}$ and let $I_i$ be the ideal generated by $S_i$. Then $I_i$ is a matroidal ideal of degree $d-1$ with $supp(I_i)\subseteq \{x_1, \dots, \hat{x_i}, \dots, x_n\}$ and $$I=\sum_{i=1}^n x_iI_i.$$ We next show that each $I_i$ is unmixed; we give the argument for $I_1$, the other cases being similar. 
Let $P_1, \dots, P_r$ be the minimal primes of $I$ that contain $x_1$ and $Q_1, \dots, Q_s$ be the minimal primes of $I$ that do not contain $x_1$; then $$I=P_1\cap \cdots \cap P_r\cap Q_1\cap \cdots \cap Q_s$$ is a minimal primary decomposition of $I$. Therefore $$x_1I_1\subseteq \langle x_1\rangle\cap I \subseteq \langle x_1\rangle\cap Q_1\cap \cdots \cap Q_s = \langle x_1\rangle\cdot (Q_1\cap \cdots \cap Q_s)\subseteq x_1I_1$$ as $\langle x_1\rangle\cdot (Q_1\cap \cdots \cap Q_s)\subseteq I$. Hence $I_1=Q_1\cap \cdots \cap Q_s$, which is unmixed as $ht(Q_j)=h$ for all $j$. \par To obtain the inequality $n\geq h+d-1$, let $t_i=|supp(I_i)|$ for $i=1, \dots, n$; then $t_i\leq n-1$. Since $I_i$ is an unmixed matroidal ideal of degree $d-1$ and of height $h$, $t_i\geq h+(d-1)-1$ by induction. So $n\geq h+d-1$ as $t_i\leq n-1$. It is clear that if $I$ is square-free Veronese then $n=h+d-1$. Conversely, if $n=h+d-1$, then $t_i=n-1=h+(d-1)-1$, so that $I_i$ is square-free Veronese by induction; it follows that $I$ is square-free Veronese as $I=\sum_{i=1}^n x_iI_i$. \par To obtain $n\leq hd$, let $T=\{x_i~|~i\neq 1, x_1x_i\mid u,~ \text{for~some}~u\in G(I) \}\subseteq supp(I_1)\subseteq \{x_2, \dots, x_n\}$. For $i=1, \dots, r$, choose $f_i\in Q_1\cap \cdots \cap Q_s-P_i$. Let $y\in \{x_2, \dots, x_n\}-T$; then $x_1f_i\in I$, so that $yf_i\in I\subseteq P_i$ by Lemma~\ref{keylem3}; since $f_i\notin P_i$, it follows that $y\in P_i$ for every $i$. Therefore $h=ht(P_i)\geq 1+n-1-|T|=n-t$, where $t=|T|$. Now, $I_1$ is an unmixed matroidal ideal of degree $d-1$ and of height $h$. By induction $h(d-1)\geq |supp(I_1)|\geq t\geq n-h$. Therefore, $hd\geq n$. It is clear that if $I=J_1*J_2* \cdots *J_d$ such that each $J_i$ is generated by $h$ distinct variables, then $n=hd$. Conversely, if $n=hd$, then $P_i$ is generated by the set $\{x_1, \dots, x_n\}-T$, so that $r=1$ and $I=P_1*(Q_1\cap \cdots \cap Q_s)$. The assertion then follows by induction. 
\end{proof} \section{Arithmetical rank of a matroidal ideal} The goal of this section is to study the {\it arithmetical rank} of a matroidal ideal. For this we recall the definition of {\it arithmetical rank} as follows.\par Let $R$ be a Noetherian ring and $I$ be an ideal of $R$. We say that the elements $x_1, \dots, x_m\in R$ {\it generate $I$ up to radical} if $\sqrt{(x_1, \dots, x_m)}=\sqrt{I}$. The minimal number $m$ with this property is called the {\it arithmetical rank of $I$}, denoted by $ara~I$. If $\mu(I)$ is the minimal number of generators for $I$ and $ht(I)$ is the height of $I$, then it is known that $$ht(I)\leq ara~I\leq \mu(I).$$ $I$ is called a {\it set-theoretic complete intersection} if $ht(I)=ara~I$. The following results will be used later. \begin{lemma} \label{svlem}\cite{sv} Let $P$ be a finite subset of a ring $R$. Let $P_0, \dots, P_r$ be subsets of $P$ such that \begin{description} \item{(i)} $\bigcup_{i=0}^r P_i=P$; \item{(ii)} $P_0$ has exactly one element; \item{(iii)} if $p$ and $p'$ are different elements of $P_i$ ($0<i\leq r$), there is an integer $i'$ with $0\leq i'<i$ and an element in $P_{i'}$ which divides $pp'$. \end{description} If $q_i=\sum_{p\in P_i} p$, then $$\sqrt{P}=\sqrt{(q_0, \dots, q_r)}.$$ \end{lemma} \begin{lemma} \label{*lem} Let $I$ and $J$ be two monomial ideals of $R=K[x_1, \dots, x_n]$ such that $supp(I)\cap supp(J)=\emptyset$. Suppose that $ara~I=u$ and $ara~J=v$. Then $ara(I*J)=u+v-1$. \end{lemma} \begin{proof} See \cite[Theorem~1]{sv}, for example. \end{proof} \begin{lemma} \label{qlem} Let $I$ be a matroidal ideal of degree $d$ of a polynomial ring $R=K[x_1, \dots, x_n]$. If $ara~I\leq q(I)+1$, then $ara~I=q(I)+1$. \end{lemma} \begin{proof} From \cite{l}, we know that for $I$ the following holds: $$pd_R~R/I\leq ara~I.$$ On the other hand, by \cite[Corollary~1.6]{ht}, $$pd_R~R/I=q(I)+1.$$ Thus, the assertion holds. 
\end{proof} \begin{theo} \label{maintheo4} Let $I$ be a matroidal ideal of degree $d$ of a polynomial ring $R=K[x_1, \dots, x_n]$ and supp$(I)=\{x_1, \dots, x_n \}$. Then $ara~I =q(I)+1$ if one of the following holds: \begin{description} \item{(i)} $I$ is square-free Veronese; \item{(ii)} $I=J_1*J_2* \cdots *J_d$ such that each $J_i$ is generated by $h$ distinct variables; \item{(iii)} $d=2$, \end{description} where $h$ is the height of $I$. \end{theo} \begin{proof} By Lemma~\ref{qlem} and Theorem~\ref{maintheo3}, it is enough to show that $ara~I\leq n-d+1$.\\ (i) Suppose that $I$ is square-free Veronese. Let $S_i$ be the set of all square-free monomials of degree $d$ in the variables $x_1, x_2, \dots, x_i$; then $|S_d|=1$ and $S_d\subset S_{d+1}\subset \cdots \subset S_n$. Let $P_0=S_d$ and $P_i=S_{d+i}-S_{d+i-1}$ for $i=1, \dots, n-d$; then it is easy to check that $P=\bigcup_{i=0}^{n-d} P_i$ satisfies the assumptions of Lemma~\ref{svlem} and $P=G(I)$. Thus, $ara~I\leq n-d+1$. \\ (ii) Since $ara~J_i=h$, $ara~I\leq hd-d+1=n-d+1$ follows by Lemma~\ref{*lem}.\\ (iii) By Theorem~\ref{keyprop}, we can divide the set $\{x_1, \dots, x_n\}$ into subsets: \\ $\{x_1, \dots, x_{t_1}\}, \{x_{t_1+1}, \dots, x_{t_1+t_2}\}, \dots, \{x_{t_1+\cdots +t_{m-1}+1}, \dots, x_n\}$ such that $n=t_1+ \cdots +t_m$ and $x_ix_j\in I$ if and only if $i\leq t_1+\cdots +t_l<j$ for some positive integer $l$. 
We may further assume that $t_m\leq t_{m-1}\leq \cdots \leq t_1$ and arrange the generators of $I$ as follows: $$ \begin{array}{llllllll} x_1x_{t_1+1} & \cdots & \cdots & \cdots & \cdots & \cdots & x_1x_n\\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ x_{t_1}x_{t_1+1} & \cdots & \cdots & \cdots & \cdots & \cdots & x_{t_1}x_n\\ x_{t_1+1}x_{t_1+t_2+1} & \cdots & \cdots & \cdots & x_{t_1+1}x_n & & \\ \cdots & \cdots & \cdots & \cdots & \cdots & & \\ x_{t_1+t_2}x_{t_1+t_2+1} & \cdots & \cdots & \cdots & x_{t_1+t_2}x_n & & \\ \cdots & \cdots & \cdots & \cdots & & & \\ \cdots & \cdots & \cdots & \cdots & & & \\ x_{n-t_m-t_{m-1}+1}x_{n-t_m+1} & \cdots & x_{n-t_m-t_{m-1}+1}x_n & & & & \\ \cdots & \cdots & \cdots & & & & \\ x_{n-t_m}x_{n-t_m+1} & \cdots & x_{n-t_m}x_n & & & & \end{array} $$ From the above figure we can construct a $(t_1+\cdots +t_{m-1})\times (t_2+ \cdots +t_m)$ matrix $A=[y_{ij}]$ with entries in $I$ as follows: For every positive integer $i\leq t_1+\cdots +t_{m-1}$, there is a unique nonnegative integer $k\leq m-2$ such that $t_1+\cdots +t_k+1\leq i\leq t_1+\cdots +t_{k+1}$. Then $y_{ij}=x_ix_{t_1+\cdots +t_{k+1}+j}$ if $1\leq j\leq t_{k+2}+\cdots +t_m$ and $y_{ij}=0$ otherwise. Observe that $A$ has the following properties: \\ (a) If $y_{ij}\in G(I)$, then $y_{ii'}\in G(I)$ whenever $i'\leq j$. \\ (b) If $y_{ij}=0$, then $y_{ii'}=0$ whenever $i'\geq j$.\\ (c) $y_{ij}=0$ whenever $i+j\geq n+1$.\\ (d) Every generator of $G(I)$ is an entry of $A$.\\ Now let $P_0=\{x_1x_{t_1+1}\}$ and $P_1=\{x_1x_{t_1+2}, x_2x_{t_1+1} \}$. In general, for $0\leq l<\infty$, let $$P_l=\{y_{ij}\in G(I)~|~i+j=l+2\}. $$ Then by (c), $P_l=\emptyset$ if $l\geq n-1$. Therefore by (d), $G(I)=\bigcup_{l=0}^{\infty} P_l=\bigcup_{l=0}^{n-2} P_l$ and $|P_0|=1$. Thus, it remains to check that condition (iii) of Lemma~\ref{svlem} holds. To see this, let $y_{ij}, y_{i'j'}\in P_l$ for some $l\geq 1$. We may assume that $i<i'$. 
To finish the proof, we need to discuss the following two cases: \\ Case~1. $x_i$ and $x_{i'}$ are independent, i.e., $x_ix_{i'}\notin G(I)$. In this case, let $k$ be the integer such that $t_1+\cdots +t_k+1\leq i<i' \leq t_1+\cdots +t_{k+1}$; then $y_{ij}=x_ix_{t_1+\cdots +t_{k+1}+l+2-i}$ and $y_{i'j'}=x_{i'}x_{t_1+\cdots +t_{k+1}+l+2-i'}$. Since $l+2-i'<l+2-i$, we see that $j'<j$; it follows by (a) that $y_{ij'}\in G(I)$. Moreover, $y_{ij'}\in P_{l'}$ for some $l'<l$ and $y_{ij'}$ divides $y_{ij}\cdot y_{i'j'}$; the assertion follows. \\ Case~2. $x_i$ and $x_{i'}$ are dependent, i.e., $x_ix_{i'}\in G(I)$. In this case, there are two integers $k<k'$ such that $t_1+\cdots +t_k+1\leq i\leq t_1+\cdots +t_{k+1}$ and $t_1+\cdots +t_{k'}+1\leq i' \leq t_1+\cdots +t_{k'+1}$. Since $t_1+\cdots +t_{k+1}<i'<n$, $1\leq i'-(t_1+\cdots +t_{k+1})\leq t_{k+2}+\cdots +t_m$. Thus $$\begin{array}{ccl} x_ix_{i'} & = & x_ix_{t_1+\cdots +t_{k+1}+i'-(t_1+\cdots +t_{k+1})} \\ & = & y_{i,i'-(t_1+\cdots +t_{k+1})} \\ & \in & P_{l'}\end{array} ,$$ where $l'=i+i'-(t_1+\cdots +t_{k+1})-2$. Observe that $t_1+\cdots +t_{k+1}\geq i$, so $l'\leq i'-2$; and $i'+j'=l+2$ implies $i'-2<l$. We get $l'<l$. Since $x_ix_{i'}$ divides $y_{ij}\cdot y_{i'j'}$, the assertion follows. \end{proof} \smallskip The following corollary is a direct consequence of Proposition~\ref{qI} and the above theorem. \begin{coll}\label{ci} Let $I\subseteq K[x_1,\dots,x_n]$ be a matroidal ideal such that supp$(I)=\{x_1, \dots, x_n \}$. Then $I$ is Cohen-Macaulay if and only if it is a set-theoretic complete intersection. \end{coll} In view of Theorem~\ref{maintheo4}, we propose the following conjecture. \medskip \noindent{\bf Conjecture:} Let $I$ be a matroidal ideal of degree $d$ of a polynomial ring $R=K[x_1, \dots, x_n]$. Then $ara~I =n-d+1$.
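As a computational aside (illustrative only; all function names are our own), the conclusion $q(I)=n-d$ of Theorem~\ref{maintheo3} can be checked on small examples. The following Python sketch verifies the exchange property for a square-free Veronese ideal and then computes $q(I)$ from the lexicographic linear-quotients ordering of its generators:

```python
from itertools import combinations

def is_matroidal(gens):
    """Exchange property: for all u, v in G(I) and each i in u - v,
    some j in v - u satisfies (u - {i}) | {j} in G(I)."""
    G = set(gens)
    return all(any((u - {i}) | {j} in G for j in v - u)
               for u in G for v in G for i in u - v)

def linear_quotients_index(gens):
    """Given an ordered list of square-free generators (as frozensets of
    variable indices), check the linear-quotients condition and return
    q(I) = max over j of the number of variables generating the colon
    ideal <u_1, ..., u_{j-1}> : u_j."""
    q = 0
    for j in range(1, len(gens)):
        u, prior = gens[j], gens[:j]
        # variables v with x_v * u_j in <u_1, ..., u_{j-1}>
        variables = {next(iter(p - u)) for p in prior if len(p - u) == 1}
        # linear quotients: every colon generator p - u meets `variables`
        assert all((p - u) & variables for p in prior)
        q = max(q, len(variables))
    return q

# square-free Veronese ideal of degree d = 2 in n = 5 variables;
# the lexicographic order of the generators gives linear quotients
n, d = 5, 2
veronese = [frozenset(c) for c in combinations(range(1, n + 1), d)]
assert is_matroidal(veronese)
print(linear_quotients_index(veronese))  # n - d = 3
```

For $n=5$, $d=2$ the script recovers $q(I)=n-d=3$, in agreement with Theorem~\ref{maintheo3} and with $pd_R(I)=n-d$ from Corollary~\ref{pd}.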
\section*{End Notes} \newpage \end{document} \section{Introduction} A ``startup'' has many variant definitions; up until this date there is no consensus on a standard one. Santisteban and Mauricio \cite{j8_santisteban2021critical} synthesized many popular definitions, and discovered some common labels such as ``new'', ``small'', ``rapid growth'', and ``high risk'', where ``small'' is often approximated by limited financial funds and human resources \cite{j22_skawinska2020success}. Much of the literature, e.g.~\cite{blank2013lean}, associates startups with disruptive innovation and high scalability. As a result, \begin{quote} ``{\it A startup is a dynamic, flexible, high risk, and recently established company that typically represents a reproducible and scalable business model. It provides innovative products and/or services, and has limited financial funds and human resources.}'' \cite{j8_santisteban2021critical,j22_skawinska2020success,blank2013lean} \end{quote} Since startups stimulate growth, generate jobs and tax revenues, and promote many other socioeconomically beneficial factors \cite{acs2007entrepreneurship}, they are commonly regarded as powerful engines for economic and social development, especially after economic, environmental, and epidemic crises such as COVID-19\footnote{Coronavirus disease 2019 (COVID-19) is a contagious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first case was identified in 2019.} \cite{c1_zhang2021scalable}. As startups continue to develop, they increasingly rely on external funds (as opposed to internal funds from founders and co-founders), from either domestic or foreign capital markets, to unlock a high rate of growth that usually corresponds to a ``hockey stick'' growth curve (i.e. a straight line on a log scale) \cite{marmer2011startup}. 
Startups may receive funds from multiple sources like Venture Capital (VC) and debt financing; to date, the dominant source has been VC. As an industry, VC seeks opportunities to invest in startups with great potential (in the sense of financial returns) to grow and successfully exit. The risk-return trade-off tells us that the potential return rises with a corresponding increase in risk\footnote{Statistics revealing the high risk of funding startups: on average, only around 60\% of startups survive for more than 3 years after being founded \cite{hyytinen2015does}; the top 2\% of VC funds receive 95\% of the returns in the entire industry \cite{j26_bai2021startup}; VC typically has only a 10\% rate of achieving an ROI (return on investment) of 100\% or more \cite{shane2012importance,t3_Unal2019Machine}.}. As a consequence, VC firms usually strive to mitigate this risk by improving their 1) {\it deal sourcing}\footnote{Deal sourcing is the process by which investors identify investment opportunities.} and screening and 2) {\it value-add} process \cite{teten2013lower}. In this survey, we will focus on the published work around the former approach, i.e. finding the {\it startup unicorn}\footnote{ Unicorn and near-unicorn startups are private, venture-backed firms with a valuation of at least \$500 million at some point \cite{chernenko2021mutual}. } as accurately as possible during the deal sourcing phase. Finding the unicorn among candidate startups is a complex task with great uncertainty because of many factors, such as vague and frequently changing business ideas, the absence of a proof-of-concept prototype, and the lack of organic revenue. This creates a low-information situation, where VC firms often have to make investment decisions based on insufficient information (e.g. lack of financial data) \cite{c9_dellermann2021finding}. 
Therefore a VC's deal sourcing process has traditionally been manual and empirical, leaving estimations of the ROI (return on investment) heavily dependent on the human investors' decisions. As pointed out in \cite{cumming2010local}, human investors are inherently biased and intuition alone cannot consistently drive good decisions. A better approach should leverage big data to \begin{itemize} \item debias the decisions, so that the individual investment decision made for a particular startup is expected to drive lower risk and higher ROI; \item enable automation, so that more startups can be evaluated without requiring extra time. \end{itemize} To that end, over the past two decades, data-driven approaches have been dominating the research around {\it startup success prediction} (i.e. identifying startups that eventually turn into unicorns). However, the majority is analytical and statistical as opposed to ML (machine learning) approaches. Conventional statistical work (e.g. \cite{lussier2001crossnational,davila2003venture,j19_hochberg2007whom,j34_nahata2008venture,lussier2010three,samila2011venture,puri2012life,nanda2013investment,okrah2018exploring,islam2018signaling,t7_saini2018picking,prohorovs2019startup,j33_malmstrom2020they,gompers2020venture,j30_kaiser2020value,rj1_pasayat2020factors,diaz2021econometric,j8_santisteban2021critical,t10_melnychuk2021approved}) mostly starts with defining some hypotheses\footnote{A hypothesis often posits a certain impact of some factors on startup success. For example, ``the founder's past entrepreneurial experience influences the likelihood of success \cite{diaz2021econometric}''.}, followed by testing them using statistical tools; the outcome of such work is often a set of conclusions about correlation and/or causality between some factors and the success likelihood of startups. 
\begin{figure}[!t] \centerline{\includegraphics[width=.7\linewidth]{./figures/ml-approach.pdf}} \vspace{-8pt} \caption{\label{fig:ml-approach}\textbf{High-level overview of ML (machine learning) based startup sourcing}\newline The ML model is trained to approximate a function $f(\cdot)$ so that the input data $\mathbf{x}$ describing a startup can be mapped to an output variable $y$ indicating the recommended investment propensity that can be either discrete (good vs. bad) or continuous (success probability).} \end{figure} In conventional statistical research, good research hypotheses need to be simple, concise, precise, and testable; and most importantly, they should be grounded in past knowledge, gained from the literature review or from theory \cite{williamson2002research}. Therefore, it is not an easy task to come up with good hypotheses. Over the last few years, researchers have started investigating the possibility of performing {\it hypothesis mining} from data using ML algorithms to avoid manually defining hypotheses upfront. Hypothesis mining aims to summarize (instead of manually define) hypotheses by carrying out explainability analysis (cf. Section~\ref{sec:explainability}) on the trained ML models \cite{w5_guerzoni2019survival}. For example, with a labeled (i.e. knowing which startups eventually become unicorns) dataset containing many attributes for many companies, one can directly start off with training an ML model to predict unicorns (i.e. the prediction target) using the entire dataset (all companies and attributes). By explaining and quantifying how the change of certain attributes would change the prediction target, one may distil hypotheses that describe the relation between the attributes in scope and the prediction target. In comparison to exploratory data analysis, hypothesis mining is a much more structured procedure that trains an ML model using the entire dataset at hand. 
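To make the hypothesis-mining idea concrete, here is a minimal, self-contained Python sketch on purely synthetic data (the attribute names, the decision-stump model, and all numbers are our own illustrative assumptions, not taken from the cited studies). A simple model is fitted to labeled startups, and permutation importance -- the drop in held-out accuracy when a single attribute is shuffled -- points to the attribute from which a hypothesis could be distilled:

```python
import random

random.seed(7)

# Synthetic startups: attribute 0 ("serial founder", binary) largely
# determines the success label; attributes 1 and 2 are pure noise.
def make_data(m):
    X, y = [], []
    for _ in range(m):
        serial = 1.0 if random.random() < 0.5 else 0.0
        X.append([serial, random.random(), random.random()])
        y.append(int(serial) if random.random() < 0.9 else 1 - int(serial))
    return X, y

def accuracy(X, y, stump):
    f, t = stump
    return sum(int(r[f] >= t) == lbl for r, lbl in zip(X, y)) / len(y)

def fit_stump(X, y):
    """Pick the (feature, threshold) whose rule x[f] >= t best fits y."""
    candidates = [(f, t) for f in range(len(X[0])) for t in {r[f] for r in X}]
    return max(candidates, key=lambda ft: accuracy(X, y, ft))

def permutation_importance(X, y, stump):
    """Accuracy drop on held-out data after shuffling one attribute."""
    base = accuracy(X, y, stump)
    drops = []
    for f in range(len(X[0])):
        col = [r[f] for r in X]
        random.shuffle(col)
        Xp = [r[:f] + [v] + r[f + 1:] for r, v in zip(X, col)]
        drops.append(base - accuracy(Xp, y, stump))
    return drops

X_tr, y_tr = make_data(300)
X_te, y_te = make_data(200)
stump = fit_stump(X_tr, y_tr)
drops = permutation_importance(X_te, y_te, stump)
# The mined "hypothesis": the attribute whose perturbation hurts the
# predictions the most is the serial-founder flag (attribute 0).
print(stump[0], [round(x, 2) for x in drops])
```

On this synthetic data the stump selects attribute 0, and only shuffling that attribute degrades accuracy; in a hypothesis-mining workflow this observation would be distilled into a hypothesis relating past entrepreneurial experience to success.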
As illustrated in Figure~\ref{fig:ml-approach}, the ML-based approaches \cite{c5_xiang2012supervised,j36_rouhani2013erp,j27_liang2016predicting,zhong2016or,krishna2016predicting,bohm2017business,zhong2018startup,t5_bento2018predicting,arroyo2019assessment,j15_shin2019network,t6_unal2019searching,w5_guerzoni2019survival,li2020prediction,sadatrasoul2020hybrid,j17_bonaventura2020predicting,kipkogei2021tree,veloso2020predicting,cavicchioli2021learning,zbikowski2021machine,t8_kamal2021modeling,singhal2022data} require practitioners to define the input data $\mathbf{x}$ and annotation $y$ (labeling good or bad investment according to some criteria) before training a model $f(\cdot)$ that maps $\mathbf{x}$ to $y$, i.e. $y=f(\mathbf{x})$. There are already a few survey papers \cite{rj1_pasayat2020factors,bargagli2021supervised} about ML-based work. \begin{figure}[t!] \centering \subcaptionbox{\footnotesize ANN with one hidden layer.\label{fig:non-dl-ann}} {\includegraphics[height=4cm]{figures/non-dl-ann.pdf}} \hspace{1.0cm} \subcaptionbox{\footnotesize ANN with more than one hidden layer.\label{fig:dl-ann}} {\includegraphics[height=4cm]{figures/dl-ann.pdf}} \caption{\label{fig:anns}\textbf{DL (deep learning) utilizes ANNs (artificial neural networks) with at least two hidden layers; thus (a) is not considered as a DL model in this work.}\newline The input data $\mathbf{x}$ is fed into the input layer before flowing through the hidden layers. The output layer generates the final prediction $y$. The connections (fully or partly connected) between the adjacent layers carry trainable weights.} \end{figure} With the rapid growth of dataset size and diversity (origin and modality), traditional ML models\footnote{Traditional ML models that are frequently applied: decision tree (e.g. \cite{arroyo2019assessment}), random forest (e.g. \cite{krishna2016predicting}), logistic regression (e.g. \cite{kipkogei2021tree}), gradient boosting (e.g. 
\cite{zbikowski2021machine}), SVM (support vector machine, e.g. \cite{j27_liang2016predicting}), $k$-means clustering (e.g. \cite{cavicchioli2021learning}), Bayesian network (e.g. \cite{c5_xiang2012supervised}).} sometimes struggle to directly and fully fit the big-and-raw dataset due to lack of model {\it capacity} and {\it expressivity}\footnote{\label{footnote:capacity-expressivity}{\it Expressivity} describes the classes of functions a model can approximate, while {\it capacity} measures how much ``brute force'' ability the model has to fit the data.}. Most recently, DL (deep learning) algorithms have caught the eyes of an increasing number of researchers hunting for unicorns. DL, by definition, represents a subset of ML methods \cite{lecun2015deep}, and is implemented (entirely or partly) with ANNs (artificial neural networks) that utilize at least two hidden layers of neurons as shown in Figure~\ref{fig:dl-ann}. The {\it capacity} of DL can be controlled by the number of neurons (width) and layers (depth) \cite{goodfellow2016deep}. Deep ANNs are exponentially {\it expressive} with respect to their depth \cite{raghu2017expressive}, making ANNs universal function approximators\footnote{Simply speaking, an ANN containing a sufficient number of neurons in the hidden layer(s) can approximate almost any known function.} \cite{hornik1989multilayer}. As a well-known international investment firm practicing data-driven approaches to find startup unicorns, we strive to \begin{itemize} \item obtain a thorough and in-depth understanding of the methodologies for startup evaluation using DL, and \item distil important and actionable learning for practitioners in this domain. 
\end{itemize} To achieve these goals, we carry out a comprehensive literature survey on using DL to evaluate startups (mostly for investment deal sourcing) \cite{c23_kim2017does,c20_lee2018content,c22_yu2018prediction,c3_sharchilev2018web,w3_gastaud2019varying,c21_cheng2019success,j46_kim2020recommendation,c6_ghassemi2020automated,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,c9_dellermann2021finding,c15_ferrati2021deep,c19_garkavenko2021valuation,c16_chen2021trend,j2_ross2021capitalvx,c1_zhang2021scalable,j26_bai2021startup,j37_kinne2021predicting,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w1_lyu2021graph,j16_allu2022predicting,j9_tang2022deep,b1_ang2022using,j47_wu2022estimating,w6_garkavenko2022you}\footnote{The literature includes 29 peer-reviewed English papers/theses addressing startup success prediction using DL methods. We do not apply any restrictions on publication years, geo-location of study, or publication type. The papers/theses are sourced using a combination of three approaches: (1) recommendation by investment professionals and researchers, (2) keyword searches in Google Search (\protect\url{google.com}), Google Scholar (\protect\url{scholar.google.com}), IEEE (\protect\url{ieee.org}), ACM (\protect\url{acm.org}), Scopus (\protect\url{scopus.com}), Wiley (\protect\url{wiley.com}), Springer (\protect\url{springer.com}) and Web of Science (\protect\url{clarivate.com}); (3) cross reference among papers/theses.}. To the best of our knowledge to this date, our work is the first of its kind. According to our high-level synthesis, most DL-based approaches comprise nine consecutive key tasks listed below. \begin{itemize} \item {\it Define the prediction problem}, i.e. what specific question do we expect the model to answer? \item {\it Define the startup success criteria}, so that the data can be annotated accordingly to train the model. 
\item {\it Gather the data} to be used as model input; it is inevitable to make decisions around the source, category, modality and size of the data. \item {\it Process the data}, e.g. normalize, augment, debias, balance and densify the data to drive better performance of DL models. \item {\it Split the data}, i.e. divide the dataset into non-overlapping training, evaluation and testing sets. \item {\it Select the DL model variants}: there are many DL model variants, so one can only pick a limited number of variants for experimentation. \item {\it Evaluate the trained DL model}: the performance of each trained (using the training set) DL model is evaluated over the evaluation set to find the best hyper-parameters. \item {\it Explain the predictions from the trained DL model}: DL models are commonly regarded as ``black-box'' models, hence explainability is required to promote transparency, build trust, and capture feedback. \item {\it Deploy the trained DL model}: in applied scenarios, the trained DL model with satisfying test results will be productized\footnote{Model productization, a.k.a. model deployment, is the procedure by which practitioners integrate a machine learning model into an existing production environment to make practical business decisions based on input data. It is one of the last stages in the DL engineering life cycle and can be one of the most cumbersome.} to answer investors' questions on demand. \end{itemize} In the rest of this paper, we will consecutively discuss the key aspects of accomplishing these nine tasks (corresponding to the nine sections that follow). In each section, we present a literature synthesis and practical learnings to facilitate a successful application of DL to ``unicorn hunting''. The practical learnings are synthesized from both the literature and our industrial experiences. For the sake of clarity, the headline of each section is made concrete to reflect the key learning. 
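As a minimal illustration of the data-splitting task in the list above (the function name and split fractions are our own assumptions, not prescribed by the surveyed papers), the following Python sketch carves a dataset into non-overlapping training, evaluation and testing sets; for investment data a chronological split is often preferable, so that the model is never evaluated on startups that predate its training examples:

```python
import random

def split_dataset(startups, eval_frac=0.15, test_frac=0.15, seed=42):
    """Shuffle once with a fixed seed, then carve out non-overlapping
    train/evaluation/test sets."""
    items = list(startups)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_frac)
    n_eval = int(len(items) * eval_frac)
    test = items[:n_test]
    evaluation = items[n_test:n_test + n_eval]
    train = items[n_test + n_eval:]
    return train, evaluation, test

train, evaluation, test = split_dataset(range(1000))
print(len(train), len(evaluation), len(test))  # 700 150 150
```

The fixed seed keeps the split reproducible across training runs, which matters when comparing DL model variants on the same evaluation set.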
\section{Avoid Predicting Success and Compatibility Simultaneously} \label{sec:success-compatibility} The DL model is generally expected to suggest whether funding a certain startup is likely to fulfill the investment goal (cf. Figure~\ref{fig:ml-approach}). As a matter of fact, the common investment goal usually embodies two components: \begin{itemize} \item Success: the startup-in-scope eventually achieves a {\it successful outcome} after the decision of investing; and the different ways of defining that {\it successful outcome} will be introduced in Section~\ref{sec:success_criteria}. \item Compatibility: the startup matches the preference of the investment intention, which originates from many practical requirements such as geographical emphasis, sector focus, portfolio conflict, investment mandate, and exit opportunities \cite{j46_kim2020recommendation}. \end{itemize} The vast majority of the surveyed research does not explicitly distinguish predicting the success and compatibility; they either implicitly address both simultaneously (e.g.~\cite{c3_sharchilev2018web}), or simply ignore the compatibility part (e.g.~\cite{c15_ferrati2021deep}). In practice, VC investors are restricted to funding startups that meet the compatibility requirement. Ideally, we wish to split the output in Figure~\ref{fig:ml-approach} into two outputs as shown in Figure~\ref{fig:dl-approach-split-output}. \begin{figure}[t!] 
\centering \subcaptionbox{\footnotesize Ideal DL model outputs.\label{fig:dl-approach-split-output}} {\includegraphics[height=2.07cm]{figures/dl-approach-split-output.pdf}} \subcaptionbox{\footnotesize Apply compatibility filtering before prediction.\label{fig:dl-approach-filter}} {\includegraphics[height=2.07cm]{figures/dl-approach-filter.pdf}} \caption{\label{fig:dl-approach-success-compatibility}\textbf{Two ways of addressing the probability of startup success and compatibility.}\newline Most DL-based works do not explicitly consider startup success and compatibility at the same time. Two feasible solutions are presented here. We recommend solution (b) over (a) due to its simplicity, flexibility, and closer approximation to real use cases.} \end{figure} However, training one single DL model to predict both success and compatibility is more challenging than predicting merely one of them. Moreover, when engineering an applied solution\footnote{There are a few VC funds that try to develop and commercialize DL-based startup evaluation models, such as the Motherbrain platform from EQT Ventures (\url{eqtventures.com}) \cite{b2_corea2019ai}.} to facilitate VC operations, we observe that the compatibility definition is prone to change when the time, context, or actual user changes. Inspired by \cite{j46_kim2020recommendation,c16_chen2021trend,j47_wu2022estimating}, we propose to perform {\it compatibility filtering} before the DL model prediction, as demonstrated in Figure~\ref{fig:dl-approach-filter}. The compatibility filtering essentially removes the startups that are regarded as incompatible with the preference of investment professionals or the fund specifications, resulting in a set of ``feasible'' startups for consideration. This is mostly achieved by defining some filters using factors like geo-location (e.g. EU and US), business sector (e.g. Fintech and Biotech), customer focus (e.g. B2B/B2C/B2B2C)\footnote{B2B: business-to-business. B2C: business-to-consumer.
B2B2C: business-to-business-to-consumer, where businesses access customers through a third party.}, and development stage (e.g. approximated by the total funding received by the startup in scope). Applying compatibility filtering before the DL model (cf.~Figure~\ref{fig:dl-approach-filter}) simplifies the DL model, makes it flexible to use, and adapts the model better to investment preferences. An additional benefit of doing so is that a model trained on a subset of successful startups relevant to a given VC fund will, by nature, produce more relevant predictions for that VC fund. \section{Clearly Define the Success Criteria of Startups} \label{sec:success_criteria} Identifying potential unicorns relies on accurate prediction of startup success. So far there is no universally agreed definition of ``true success''; most of the existing definitions commonly focus on ``growth'', which can be measured from different perspectives like revenue, employees, and valuation, to name a few. We summarize the adopted definitions from the reviewed literature \cite{c23_kim2017does,c20_lee2018content,c22_yu2018prediction,c3_sharchilev2018web,w3_gastaud2019varying,c21_cheng2019success,j46_kim2020recommendation,c6_ghassemi2020automated,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,c9_dellermann2021finding,c15_ferrati2021deep,c19_garkavenko2021valuation,c16_chen2021trend,j2_ross2021capitalvx,c1_zhang2021scalable,j26_bai2021startup,j37_kinne2021predicting,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w1_lyu2021graph,j16_allu2022predicting,j9_tang2022deep,b1_ang2022using,j47_wu2022estimating,w6_garkavenko2022you} in Figure~\ref{fig:success-criteria-dist-mixture}, showing each criterion's popularity among researchers. All {\it success criteria} are quantities measured over a predefined duration from the time point of evaluation. \begin{figure}[t!]
\centering \subcaptionbox{\footnotesize Distribution of the adopted startup success criteria.\label{fig:success-criteria-dist}} {\includegraphics[height=6cm]{figures/success-criteria-dist.pdf}} \hspace{2cm} \subcaptionbox{\footnotesize Criteria mixture.\label{fig:success-criteria-mixture}} {\includegraphics[height=6cm]{figures/success-criteria-mixture.pdf}} \caption{\label{fig:success-criteria-dist-mixture}\textbf{Summary of the adopted criteria to evaluate startup success.}\newline (a) shows the percentage of each success criterion sorted by their occurrences. (b) shows the percentage of combining different numbers of criteria together.} \end{figure} \begin{enumerate} \item {\bf Fulfill the preset fundraising goal} \cite{c20_lee2018content,c22_yu2018prediction,c21_cheng2019success,j41_yeh2020machine,j42_srinivasan2020ensemble,j43_shi2021leveraging,j44_kaminski2020predicting,j47_wu2022estimating,j9_tang2022deep}: the goal (the expected amount of money) of the fundraising campaign or plan is reached or surpassed, which is common among crowdfunding projects. Readers should be cautious not to confuse this with the fundraising goals of investors. \item {\bf Future funding} \cite{c3_sharchilev2018web,w3_gastaud2019varying,c16_chen2021trend,j2_ross2021capitalvx,t1_stahl2021leveraging,w2_yin2021solving,w6_garkavenko2022you}: any future funding raised above a low-bar amount. \item {\bf Acquired} \cite{b1_ang2022using,c15_ferrati2021deep,j2_ross2021capitalvx,j46_kim2020recommendation,w1_lyu2021graph,w2_yin2021solving}: another company purchases and takes over the operations and assets of the startup. \item {\bf IPO} (initial public offering) \cite{b1_ang2022using,c15_ferrati2021deep,j2_ross2021capitalvx,w1_lyu2021graph,w2_yin2021solving}: the startup offers shares to the public in a new stock issuance for the first time; an IPO allows the company to raise equity capital from public investors.
\item {\bf Series A} \cite{c1_zhang2021scalable,c9_dellermann2021finding}: the startup receives the first VC funding round after the seed and angel rounds. \item {\bf $N$-year survival} \cite{c6_ghassemi2020automated,j2_ross2021capitalvx}: the firm is still operating after $N$ years. \item {\bf Experts' view} \cite{j26_bai2021startup,j37_kinne2021predicting}: the quantified review from human experts. \item {\bf Upround} \cite{b1_ang2022using}: the (post-money) valuation after a future funding round is higher than the current valuation. \item {\bf VC-backed} \cite{c19_garkavenko2021valuation}: the startup is funded by one or more VC firms. \item {\bf Total raised funding} \cite{c23_kim2017does}: the accumulated amount of funding received (the higher the better), which is often used as a regression target. \item {\bf Competition nomination} \cite{c6_ghassemi2020automated}: the idea of the startup wins (or is nominated by the committee in) an entrepreneurial competition. \item {\bf Team growth} \cite{t2_horn2021deep}: whether the team size has experienced fast growth, such as ``{\small \it a minimum of $x$\% increase from at least 10 initial employees}''. \item {\bf Third-party score} \cite{j16_allu2022predicting}: some data sources provide certain firm evaluation scores, such as the ``trend score'' from Crunchbase\footnote{\url{www.crunchbase.com}}. \end{enumerate} While the first 12 criteria are intuitively sound, we question the effectiveness of the last criterion, which takes 3rd-party (algorithmic) scores as ground truth to train the DL model, because doing so is guaranteed to yield a model inferior to the (often unknown) 3rd-party method.
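To make the notion of a success criterion concrete, the snippet below sketches a binary label for the ``future funding'' criterion (criterion 2 above); the three-year horizon and the \$100k low-bar amount are hypothetical choices for illustration only.

```python
from datetime import date

def label_future_funding(rounds, eval_date, horizon_years=3, low_bar=100_000):
    """Return 1 if the startup raises any round >= `low_bar` (USD) within
    `horizon_years` after `eval_date`, else 0.  `rounds` is a list of
    (round_date, amount_usd) tuples; assumes eval_date is not Feb 29."""
    cutoff = date(eval_date.year + horizon_years, eval_date.month, eval_date.day)
    return int(any(eval_date < d <= cutoff and amount >= low_bar
                   for d, amount in rounds))

rounds = [(date(2018, 3, 1), 500_000), (date(2021, 6, 1), 2_000_000)]
label_future_funding(rounds, date(2019, 1, 1))  # → 1: the 2021 round qualifies
```

The same shape of labeling function applies to most of the criteria above; only the event being tested (acquisition, IPO, series A, survival, etc.) changes.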
Additionally, there are no finance-based success criteria\footnote{A few ML-based (instead of DL-based) works \cite{lussier2001crossnational,lussier2010three} have investigated using finance-based success criteria.} adopted in the DL-based work, which is a consequence of missing rich operating data \cite{gompers2020venture} before exiting the startup phase and entering the {\it growth phase} \cite{j22_skawinska2020success}. Although the definition of a successful startup has many versions, for investors it is relatively straightforward: a profitable exit, often in the form of acquisition or IPO, which incurs a high ROI \cite{b1_ang2022using}. In practice, however, short-term events like funding rounds have a higher adoption rate among investment professionals than longer-term acquisition/IPO events; the reason is twofold: 1) acquisition/IPO is extremely scarce, as very few startups achieve these milestones; and 2) it occurs very late in a startup's trajectory, hence potentially weakening the correlation between early data and late success \cite{t1_stahl2021leveraging}. Finally, the choice of success criteria also depends on whose perspective we take. \begin{itemize} \item {\bf Investor}'s perspective: the investors' view of success, which typically includes high-ROI exit events such as acquisition and IPO \cite{w3_gastaud2019varying,b1_ang2022using}. \item {\bf Founder}'s perspective: what results do founders regard as a successful entrepreneurial outcome? Examples of this kind are competition nomination, series A, and profitable operation \cite{prohorovs2019startup,b1_ang2022using}. \item {\bf Policy maker}'s perspective: the view of the entities (governments or authorities) that set the plans pursued by a government or business; the policy maker usually considers broad socio-economic impact, such as job creation and patentability \cite{j30_kaiser2020value,t2_horn2021deep}.
\end{itemize} In most cases, different success criteria do not conflict with each other, implying the possibility of combining multiple criteria; but this kind of {\it criteria mixture} is still under-investigated, as illustrated in Figure~\ref{fig:success-criteria-mixture}. Generally speaking, one can combine multiple criteria with logical \verb|AND| operators (e.g. \cite{w2_yin2021solving,b1_ang2022using}), or use each criterion separately in a {\it multi-task training} (cf. Section~\ref{sec:model-selection}) setup \cite{j43_shi2021leveraging}. It has been theoretically proven that multi-task training can reduce the risk of overfitting\footnote{Overfitting happens when a DL model has an overly high capacity so that it is able to fit the entire dataset (including the noise); hence the model performs poorly on unseen data.} \cite{baxter1997bayesian}. Based on the discussions above, we highlight some key recommendations from an investor's perspective when defining the success criteria of startups. \begin{itemize} \item Start with interviewing the users and stakeholders of your model to find out their definition of startup success. \item Avoid adopting success criteria that are mostly valued by non-investors like founders and policy makers. \item Whenever possible, assemble the success criteria that lead to more annotated (labeled) data, allowing bigger model capacity. \item Prioritize short-term events over longer-term ones for the sake of a stronger data-label correlation. \item Consider experimenting with a mixture of criteria to incorporate more perspectives and possibly prevent overfitting of the DL model. \end{itemize} \section{Use Multi-modal, Unstructured, Free and Extrinsic Data} DL models need data input to make predictions. Before we start gathering input data for the model, we might benefit from understanding what input(s) humans use to make decisions.
When investment professionals (i.e.~humans) try to forecast the success of early stage startups, they make use of two cognitive modes: {\it intuitive} and {\it analytical}. The {\it intuitive mode} is characterized by processing ``soft'' signals (e.g. innovativeness and personality of the entrepreneur) that are mostly {\it qualitative}; humans are still the ``gold standard'' for this mode \cite{baer2014gold}. The {\it analytical mode}, on the other hand, deals with ``hard'' facts (e.g. industry and team size) that are often {\it quantitative} \cite{c9_dellermann2021finding}. The majority of the works we reviewed \cite{c23_kim2017does,c20_lee2018content,c22_yu2018prediction,c3_sharchilev2018web,w3_gastaud2019varying,c21_cheng2019success,j46_kim2020recommendation,c6_ghassemi2020automated,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,c9_dellermann2021finding,c15_ferrati2021deep,c19_garkavenko2021valuation,c16_chen2021trend,j2_ross2021capitalvx,c1_zhang2021scalable,j26_bai2021startup,j37_kinne2021predicting,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w1_lyu2021graph,j16_allu2022predicting,j9_tang2022deep,b1_ang2022using,j47_wu2022estimating,w6_garkavenko2022you} incorporate both modes into the model input, but they have to quantify the ``soft'' information via either approximation or questionnaire. Data is often fed into DL models in the form of {\it features}. A feature (a.k.a. ``factor'' in the scope of financial research) is an individual measurable property or characteristic of a phenomenon, which is sometimes aggregated from raw data.
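To illustrate how a feature is aggregated from raw data, the hypothetical sketch below turns a raw list of funding rounds into three tabular features of the kind surveyed below; the field names and the feature choice are our own:

```python
from datetime import date

def funding_features(rounds, as_of):
    """Aggregate raw funding rounds, given as (round_date, amount_usd)
    tuples, into a small tabular feature dictionary."""
    past = [(d, a) for d, a in rounds if d <= as_of]  # no look-ahead leakage
    return {
        "n_rounds": len(past),
        "total_raised_usd": sum(a for _, a in past),
        "days_since_latest_round":
            (as_of - max(d for d, _ in past)).days if past else None,
    }

rounds = [(date(2018, 3, 1), 500_000), (date(2020, 6, 1), 2_000_000)]
funding_features(rounds, date(2021, 1, 1))
# → {'n_rounds': 2, 'total_raised_usd': 2500000, 'days_since_latest_round': 214}
```

Note the filter on `as_of`: when computing features at a historical evaluation date, only events known at that date may be aggregated, otherwise the label leaks into the input.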
When we try to map out the large number of features used in \cite{c23_kim2017does,c20_lee2018content,c22_yu2018prediction,c3_sharchilev2018web,w3_gastaud2019varying,c21_cheng2019success,j46_kim2020recommendation,c6_ghassemi2020automated,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,c9_dellermann2021finding,c15_ferrati2021deep,c19_garkavenko2021valuation,c16_chen2021trend,j2_ross2021capitalvx,c1_zhang2021scalable,j26_bai2021startup,j37_kinne2021predicting,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w1_lyu2021graph,j16_allu2022predicting,j9_tang2022deep,b1_ang2022using,j47_wu2022estimating,w6_garkavenko2022you}, we found that features tend to cluster into different categories, describing different aspects of the startup in scope. We identified 15 natural categories (termed {\it funding}, {\it product/service}, {\it meta information}, {\it founder/owner}, {\it team}, {\it investor}, {\it web}, {\it context}, {\it connection}, {\it operation/planning}, {\it IP and R\&D}, {\it customer}, {\it financial}, {\it M\&A}\footnote{M\&A (merger and acquisition) refers to a business transaction in which the ownership of companies (or their operating units) are transferred to or consolidated with another company.} and {\it data}) and visualize their adoption percentage in Figure~\ref{fig:data-category-modality-dist}. We hereby walk through each category following the order of popularity. \subsection{A detailed walk-through of each data category} \label{sec:walkthrough-data-category} Historical {\bf funding} is direct evidence of recognition from other early investors, thus it is the most popular category in the literature. 
Frequently seen features in this category are {\small \it total number of funding rounds and total amount raised} \cite{c3_sharchilev2018web,w3_gastaud2019varying,c9_dellermann2021finding,j2_ross2021capitalvx,t1_stahl2021leveraging,t2_horn2021deep,w1_lyu2021graph,w2_yin2021solving,j16_allu2022predicting,b1_ang2022using}, {\small \it funding types (e.g. angel, series A/B/C, debt financing, etc.)} \cite{c3_sharchilev2018web,c9_dellermann2021finding,c19_garkavenko2021valuation,j2_ross2021capitalvx,t1_stahl2021leveraging,w3_gastaud2019varying,j41_yeh2020machine,b1_ang2022using}, {\small \it elapsed time since latest funding} \cite{c3_sharchilev2018web,w3_gastaud2019varying,c19_garkavenko2021valuation,t1_stahl2021leveraging,b1_ang2022using,w6_garkavenko2022you}, {\small \it size and type of the latest funding} \cite{w3_gastaud2019varying,j2_ross2021capitalvx,b1_ang2022using,w6_garkavenko2022you}, {\small \it size and type of seed funding} \cite{c9_dellermann2021finding,j26_bai2021startup,w1_lyu2021graph}, {\small \it average per-round statistics} \cite{c19_garkavenko2021valuation,b1_ang2022using,w6_garkavenko2022you}, {\small \it average time between consecutive rounds} \cite{c3_sharchilev2018web,c19_garkavenko2021valuation,j2_ross2021capitalvx}, {\small \it the raw time-series of funding rounds} \cite{c16_chen2021trend,t1_stahl2021leveraging,t2_horn2021deep}, {\small \it accumulated amount for different funding types} \cite{c3_sharchilev2018web, j2_ross2021capitalvx}, {\small \it amount raised from VC} \cite{c9_dellermann2021finding,j2_ross2021capitalvx}, and {\small \it the actual received amount of money} \cite{c3_sharchilev2018web,w6_garkavenko2022you}. 
There are also some less common factors: {\small \it max/min round} \cite{w6_garkavenko2022you}, {\small \it raised amount in different currencies}, {\small \it information of the undisclosed rounds}, {\small \it elapsed time from seed funding} \cite{c3_sharchilev2018web}, {\small \it post-money valuation of rounds} \cite{c19_garkavenko2021valuation}, and {\small \it alliance via M\&A} \cite{j2_ross2021capitalvx}. \begin{figure}[t!] \centering \subcaptionbox{\footnotesize Distribution of the category of input data.\label{fig:data-category-dist}} {\includegraphics[height=6cm]{figures/data-category-dist.pdf}} \hspace{1.5cm} \subcaptionbox{\footnotesize Data Modality.\label{fig:data-modality-dist}} {\includegraphics[height=6cm]{figures/data-modality-dist.pdf}} \caption{\label{fig:data-category-modality-dist}\textbf{Summary of the used categories of input data by surveyed work.}\newline (a) shows the percentage of each data category (detailed in Section~\ref{sec:walkthrough-data-category}) sorted by their occurrences. (b) shows a snapshot (as of the date when this paper was written) of the utilized data modalities: numerical, categorical, text, graph, time-series, image, video and audio.} \end{figure} The core value that early startups have to offer is reflected in the {\bf product/service} they aim to create, which makes this category of data widely adopted.
The top-3 features are {\small \it industry/sector/sub-sector} \cite{c3_sharchilev2018web,c6_ghassemi2020automated,c9_dellermann2021finding,c22_yu2018prediction,j2_ross2021capitalvx,j42_srinivasan2020ensemble,j43_shi2021leveraging,j47_wu2022estimating,t1_stahl2021leveraging,w1_lyu2021graph,b1_ang2022using}, {\small \it textual description} \cite{c6_ghassemi2020automated,c16_chen2021trend,c20_lee2018content,c21_cheng2019success,c23_kim2017does,j37_kinne2021predicting,j44_kaminski2020predicting,j46_kim2020recommendation,j47_wu2022estimating}, and {\small \it project specification on crowdfunding platforms} \cite{c21_cheng2019success,c22_yu2018prediction,c23_kim2017does,j41_yeh2020machine,j42_srinivasan2020ensemble,j43_shi2021leveraging,j47_wu2022estimating}. DL models are also well suited for learning representations from {\small \it image, video and audio} \cite{c21_cheng2019success,j9_tang2022deep,j42_srinivasan2020ensemble,j43_shi2021leveraging,j44_kaminski2020predicting}. The rest of the features describing the product/service include {\small \it technology maturity} \cite{c9_dellermann2021finding,j16_allu2022predicting}, {\small \it customer focus (e.g. B2B/B2C/B2B2C)} \cite{c9_dellermann2021finding,t1_stahl2021leveraging}, {\small \it time to market} \cite{c9_dellermann2021finding,c3_sharchilev2018web}, {\small \it novelty and differentiation} \cite{c9_dellermann2021finding,j26_bai2021startup}, {\small \it quality measure (e.g. simplicity and usability)}, {\small \it market penetration and traction} \cite{j26_bai2021startup}, {\small \it business scalability}, {\small \it business models (e.g. subscription centric, freemium, cross selling, hidden revenue, no frills, layer player)} \cite{c9_dellermann2021finding}, {\small \it the number of product varieties} \cite{c3_sharchilev2018web}, {\small \it textual product review and comment} \cite{c20_lee2018content}, and {\small \it idea rating by experts} \cite{c6_ghassemi2020automated}.
{\bf Meta information} refers to the general attributes of startups, which seldom change after the creation/registration of the firm. Most works use the factors {\small \it elapsed time since founded}, {\small \it textual description}, and {\small \it geographical location} \cite{c3_sharchilev2018web,c9_dellermann2021finding,c16_chen2021trend,c19_garkavenko2021valuation,c22_yu2018prediction,j2_ross2021capitalvx,j9_tang2022deep,j42_srinivasan2020ensemble,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w1_lyu2021graph,w2_yin2021solving,w3_gastaud2019varying,w6_garkavenko2022you,b1_ang2022using}. Whether a startup {\small \it has a Facebook/Linkedin/Twitter account} \cite{c9_dellermann2021finding,c23_kim2017does,j2_ross2021capitalvx,j43_shi2021leveraging,w6_garkavenko2022you} is also a common factor. Other less frequently seen factors include {\small \it domain name or homepage URL} \cite{c23_kim2017does,j2_ross2021capitalvx,j42_srinivasan2020ensemble}, {\small \it company name and aliases} \cite{j2_ross2021capitalvx,j42_srinivasan2020ensemble}, {\small \it office count and age} \cite{c3_sharchilev2018web,w6_garkavenko2022you}, {\small \it registered address}, {\small \it current status (e.g. operating, closed, zombie)}, {\small \it official email and phone number} \cite{j2_ross2021capitalvx}, and {\small \it incubator or accelerator support} \cite{c9_dellermann2021finding}. The traits of a startup's {\bf founder/owner} (i.e. the entrepreneur or the founding team) are so important that founders with relevant management experience can improve the company's performance \cite{b2_corea2019ai,ewens2018founder}. The attributes of founding teams and the individuals that comprise them contribute to their short-term success and longer-term survival \cite{c6_ghassemi2020automated}, and are also generally available from many data sources and entrepreneurial competitions.
The {\small \it founding team size (number of co-founders)} and {\small \it founders' (successful) founding/industry experience} \cite{c3_sharchilev2018web,c9_dellermann2021finding,c19_garkavenko2021valuation,j2_ross2021capitalvx,j26_bai2021startup,j41_yeh2020machine,j42_srinivasan2020ensemble,j43_shi2021leveraging,j46_kim2020recommendation,w2_yin2021solving,w3_gastaud2019varying} are most widely used, followed by {\small \it founder IDs from data sources} \cite{c3_sharchilev2018web,j41_yeh2020machine,j42_srinivasan2020ensemble}, {\small \it gender/ethnicity} \cite{c3_sharchilev2018web,j2_ross2021capitalvx,w1_lyu2021graph,j30_kaiser2020value}, and {\small \it social capital} \footnote{Social capital is a positive product of human interactions, which comprises two aspects: bonding (intra group) and bridging (inter groups). Nowadays, it is increasingly represented by activities on social media and applications \cite{j43_shi2021leveraging}.} \cite{j42_srinivasan2020ensemble,j43_shi2021leveraging}. Additionally, some researchers \cite{c6_ghassemi2020automated,j26_bai2021startup,t5_bento2018predicting,rj1_pasayat2020factors} try to quantify the {\small \it founders' skill (e.g. leadership, research, development, product management, sales, law, consulting, finance, marketing, creativity and investment)}, and use it as model input. The rest of the factors in this category include {\small \it years between graduation and founding}, {\small \it education institute and major} \cite{c6_ghassemi2020automated,b2_corea2019ai,t5_bento2018predicting}, {\small \it founders' biography and photo} \cite{j42_srinivasan2020ensemble,c23_kim2017does}, and finally indications of founders' {\small \it entrepreneurial vision} \cite{c9_dellermann2021finding}, {\small \it capability of work (dedication)} \cite{j26_bai2021startup}, and {\small \it 3rd-party score} \cite{c3_sharchilev2018web,j43_shi2021leveraging}. 
Complementary to founder data, {\bf team} related factors are used in many research papers. The common factors are {\small \it team size of all or different functions} \cite{c9_dellermann2021finding,c19_garkavenko2021valuation,j2_ross2021capitalvx,j46_kim2020recommendation,w6_garkavenko2022you,b1_ang2022using}, {\small \it the time-series of team size} \cite{t1_stahl2021leveraging,t2_horn2021deep}, {\small \it statistics of new hires or leavers} \cite{c3_sharchilev2018web,c19_garkavenko2021valuation}, {\small \it completeness and capability of managers} \cite{c19_garkavenko2021valuation,j26_bai2021startup}, and {\small \it team composition (e.g. diversity and gender)} \cite{c3_sharchilev2018web,j2_ross2021capitalvx}. The less common ones are {\small \it time of involvement}, {\small \it board member statistics}, {\small \it person IDs from data sources} \cite{c3_sharchilev2018web}, {\small \it vocational skill and experience} \cite{c19_garkavenko2021valuation}, {\small \it technical team size and quality}, {\small \it employees from renowned organizations} \cite{c16_chen2021trend}, {\small \it educational degrees of employees} \cite{j2_ross2021capitalvx}, {\small \it team constellation} \cite{c9_dellermann2021finding}, {\small \it balance/empowerment/competence of project team} \cite{j41_yeh2020machine}, and {\small \it 3rd-party team score} \cite{c6_ghassemi2020automated}. Closely related to funding data, statistics of the existing {\bf investor}(s) provide useful information about the startup. We observe that the {\small \it number of total/distinct investors} \cite{c3_sharchilev2018web,c15_ferrati2021deep,c16_chen2021trend,c22_yu2018prediction,j2_ross2021capitalvx,j46_kim2020recommendation,w3_gastaud2019varying,b1_ang2022using} is the most heavily used factor in this category. The authors of \cite{c3_sharchilev2018web,c15_ferrati2021deep,t1_stahl2021leveraging,w2_yin2021solving} try to {\small \it rank investors by their reputation, experience, and IPO/M\&A performance}.
Moreover, the concept of the {\small \it VC syndicate (e.g. advantage, diversity, centrality and foreignness of VC)} is also investigated in \cite{j19_hochberg2007whom,j15_shin2019network,w3_gastaud2019varying,j34_nahata2008venture}. In DL-based work, the {\small \it share and involvement time of each investor} \cite{c3_sharchilev2018web} is also used. The {\bf web} category embodies information from web pages about the startup. The literature has mentioned several factors: {\small \it rank/count/duration/bounce rate (aggregated or time-series) of website visits} \cite{c3_sharchilev2018web,c9_dellermann2021finding,t1_stahl2021leveraging,t2_horn2021deep,w6_garkavenko2022you}, {\small \it news count} \cite{c3_sharchilev2018web,c19_garkavenko2021valuation,w2_yin2021solving,w3_gastaud2019varying}, {\small \it topics of news/articles} \cite{c3_sharchilev2018web,j46_kim2020recommendation,w6_garkavenko2022you}, {\small \it Twitter statistics (e.g. followers, tweets and sentiment)} \cite{c9_dellermann2021finding,c19_garkavenko2021valuation,w6_garkavenko2022you}, and {\small \it count of websites, pages and domain names that mentioned the firm} \cite{c3_sharchilev2018web,c9_dellermann2021finding,w6_garkavenko2022you}. \begin{figure}[t!] \centering \subcaptionbox{\footnotesize Connection among entities. \label{fig:connection-illustration}} {\includegraphics[height=4.5cm]{figures/connection-illustration.pdf}} \hfill \subcaptionbox{\footnotesize An example of a company-person-investor graph.\label{fig:connection-example}} {\includegraphics[height=4.5cm]{figures/connection-example.pdf}} \caption{\label{fig:connection-illustration-example}\textbf{Illustration of the connection data category, where a graph can be constructed.}\newline The graph comprises nodes (denoting company/person/investor) and edges (representing investing/employment/founding relations between nodes).
To understand the example graph in (b), refer to the color coding in (a) as a legend.} \end{figure} So far, we have only touched upon the factors that are intrinsic to the startup in consideration, but more and more researchers have realized the importance of extrinsic factors\footnote{While intrinsic factors act from within a company, extrinsic factors wield their influence from the outside. The former can be controlled (to some extent) by the startup-in-scope, but the latter cannot, since they represent external contexts that may be (but are not limited to) competitive, environmental, cultural, economic and tax-based.}. In this paper, we put each extrinsic factor into one of two categories: {\bf context} and {\bf connection}. The top-3 {\bf context} features are {\small \it the number of direct competitors} \cite{c2_pasayat2021evolutionary,c3_sharchilev2018web,c5_xiang2012supervised,c9_dellermann2021finding,j16_allu2022predicting,j26_bai2021startup,j46_kim2020recommendation,w3_gastaud2019varying}, {\small \it funding raised by competitors} \cite{t1_stahl2021leveraging,w3_gastaud2019varying}, and {\small \it per-industry prosperity of the hosting geo-location} \cite{w2_yin2021solving,w3_gastaud2019varying}. Besides these, there are other context factors, such as {\small \it market/industry size and growth rate}, {\small \it exchange rate}, {\small \it inflation level}, {\small \it governmental regulation}, {\small \it tax policy} \cite{j16_allu2022predicting}, {\small \it sector performance}, {\small \it country/state economy} \cite{j2_ross2021capitalvx}, {\small \it financing environment} \cite{w2_yin2021solving}, and {\small \it current month/week} \cite{t2_horn2021deep}. The {\bf connection} features are usually extracted from a graph (cf. Figure~\ref{fig:connection-example}) that encodes connections between different entities: startup, person and investor, as illustrated in Figure~\ref{fig:connection-illustration}.
The DL approaches \cite{c1_zhang2021scalable,c16_chen2021trend,w1_lyu2021graph,w3_gastaud2019varying} often directly feed the graph into ANNs, while ML-based methods (e.g. \cite{j27_liang2016predicting,j17_bonaventura2020predicting,j19_hochberg2007whom}) always pre-calculate some graph features, such as betweenness/closeness/degree centrality, shortest paths, common neighbors, etc. {\bf Operation/planning} typically involves operational matters such as sales, localization (e.g. \cite{c2_pasayat2021evolutionary,rj1_pasayat2020factors}), marketing (e.g. \cite{j26_bai2021startup,j33_malmstrom2020they}), supply chain \cite{rj1_pasayat2020factors,rj2_song2008success}, digitization \cite{j36_rouhani2013erp}, advisory \cite{c5_xiang2012supervised}, company culture (e.g. \cite{j8_santisteban2021critical,w5_guerzoni2019survival}) and legal regulation \cite{j36_rouhani2013erp,j22_skawinska2020success}. However, the DL-based methods have only a few mentions of factors in this category: {\small \it planned revenue model} \cite{c9_dellermann2021finding,j16_allu2022predicting,j26_bai2021startup}, {\small \it global exposure and internationalization} \cite{c3_sharchilev2018web}, {\small \it market positioning and go-to-market strategy} \cite{j26_bai2021startup}, and {\small \it technological surveillance} \cite{j16_allu2022predicting}. For early startups, IP (intellectual property) and R\&D (research and development) are two of the aspects that are examined by investors. As a result, we make {\bf IP and R\&D} its own category (out of the operational factors), which contains {\small \it number of patents}, {\small \it patent growth}, {\small \it patent category} \cite{c15_ferrati2021deep,j2_ross2021capitalvx,j37_kinne2021predicting,j46_kim2020recommendation} and {\small \it university partnership} \cite{c9_dellermann2021finding}. 
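The graph features named above (e.g. degree centrality and common neighbors) are straightforward to pre-calculate. In practice a library such as networkx would typically be used; the minimal standard-library sketch below, on a toy company-person-investor graph with invented names, shows the idea:

```python
from collections import defaultdict

# Toy undirected graph in the spirit of the company-person-investor figure;
# all node names are made up for illustration.
edges = [("company_A", "founder_X"), ("company_A", "investor_V"),
         ("company_B", "founder_X"), ("company_B", "investor_V"),
         ("company_B", "investor_W")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def degree_centrality(node):
    """Fraction of all other nodes this node is directly connected to."""
    return len(adj[node]) / (len(adj) - 1)

def common_neighbors(u, v):
    """Shared neighbors, e.g. investors backing both companies."""
    return adj[u] & adj[v]

common_neighbors("company_A", "company_B")  # → {'founder_X', 'investor_V'}
```

Such pre-calculated scalar features feed classical ML models, whereas the DL approaches cited above can consume the adjacency structure directly.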
The {\bf customer}, {\bf financial} and {\bf M\&A} data are, most of the time, publicly unavailable, which resonates with their scarce occurrences in Figure~\ref{fig:data-category-dist}. For the {\bf customer} category, we have seen {\small \it customer satisfaction/loyalty} \cite{c16_chen2021trend} and {\small \it the number of pilot customers} \cite{c9_dellermann2021finding}. There is only one {\bf financial} factor, {\small \it revenue and turnover}, used in \cite{j46_kim2020recommendation}. In the {\bf M\&A} category, \cite{j2_ross2021capitalvx} calculated {\small \it the number of acquisitions} as an input feature to their model. Finally, the {\bf data} category contains statistics that are specific to the chosen data source, such as {\small \it the total number of events/records} \cite{j46_kim2020recommendation} in Crunchbase. \subsection{Several noticeable trends in data selection} The surveyed literature reflects several trends concerning the selection of input data for DL models. We summarize these trends as guidance for investment practitioners. \begin{itemize} \item {\bf Single-modal$\rightarrow$multi-modal}\footnote{Modality refers to the way in which data is generated or presented, and a work is characterized as multi-modal when it includes multiple modalities such as text and image.}: although the {\small \it tabular (aggregated numerical/categorical data)} form still dominates, we see other emerging data modalities: {\small \it text}, {\small \it graph}, {\small \it time-series}, {\small \it image}, {\small \it video} and {\small \it audio}. The relative adoption of different modalities is shown in Figure~\ref{fig:data-modality-dist}. In particular, a few recent works \cite{j43_shi2021leveraging,j44_kaminski2020predicting,c21_cheng2019success,t1_stahl2021leveraging,t2_horn2021deep} have looked into combining multiple input modalities (i.e. multi-modal).
\item {\bf Structured(aggregated)$\rightarrow$unstructured(raw)}\footnote{Unstructured data, such as images and time-series, is a collection of varied types that maintain their native and original form, while structured data is aggregated from original (raw) data and is usually stored in a tabular form.}: the modalities excluding ``tabular'' in Figure~\ref{fig:data-modality-dist} are all unstructured, and they are becoming increasingly important as a complement to the structured data (e.g. \cite{t1_stahl2021leveraging,t2_horn2021deep,w1_lyu2021graph,w3_gastaud2019varying,c16_chen2021trend}), or as a standalone input to the model (e.g. \cite{c1_zhang2021scalable,j9_tang2022deep}). Since raw, unstructured data often has a large scale and contains an intact-yet-noisy signal, it may deliver superior performance as long as a proper DL approach is applied \cite{w6_garkavenko2022you}. \begin{figure}[t!] \centering \subcaptionbox{\footnotesize Distribution of used data sources (sorted).\label{fig:datasource-dist}} {\includegraphics[height=6.2cm]{figures/datasource-dist.pdf}} \hfill \subcaptionbox{\footnotesize Dataset size distribution\label{fig:dataset-scale}} {\includegraphics[height=6.2cm]{figures/dataset-scale.pdf}} \caption{\label{fig:dataset-dist-scale}\textbf{Statistics about datasets: (a) occurrences and (b) the number of samples.}\newline In (a), {\bf paid} data sources are Crunchbase (\protect\url{crunchbase.com}), Pitchbook (\protect\url{pitchbook.com}), Tianyacha (\protect\url{tianyancha.com}), Linkedin (\protect\url{linkedin.com}), Mattermark (\protect\url{mattermark.com}), Dealroom (\protect\url{dealroom.co}); {\bf free} sources are Kickstarter/Indiegogo/scraping (\protect\url{webrobots.io}), Twitter API (\protect\url{developer.twitter.com}), search engines (e.g.
\protect\url{google.com}), USPTO (United States Patent and Trademark Office: \protect\url{uspto.gov}), Facebook (only the pages about startups); {\bf proprietary} data are usually only accessible from investment firms (in ``Other'' category), governmental/administrative departments or surveys/questionnaires. In (b), we plot the distribution (probability density), histogram, median, mean, maximum (776,273) and minimum (100) of dataset size (i.e. the number of records).} \end{figure} \item {\bf Proprietary$\rightarrow$paid$\rightarrow$free}: all data sources utilized in DL-based methods are sorted in Figure~\ref{fig:datasource-dist} according to their occurrences. The traditional proprietary sources are no longer favored due to their limitations in scale and shareability. Paid data sources (e.g. Crunchbase and Pitchbook) are still very popular, because they are mostly quite affordable and well organized. However, neither paid nor proprietary data is up-to-date or fine-grained, which contributes to the increasing adoption of free data sources, such as web page scraping \cite{w6_garkavenko2022you}. \item {\bf Gather at least 35k samples}: to understand how many samples (often correlated with the number of companies) researchers use for training their DL models, we plot the distribution/histogram of dataset size in Figure~\ref{fig:dataset-scale}. It shows a median and average size of 35,621 and 107,694 respectively. It has been discussed previously that DL models generally require more data to match their large capacity$^7$, thus we recommend collecting at least 35k samples. \item {\bf Intrinsic(independent)$\rightarrow$extrinsic(contextual)}: classically, most elements driving investors’ decisions would seem to be only {\it independent and intrinsic}$^{18}$ to the startup in scope, most notably at the expense of {\it extrinsic and contextualized}$^{18}$ features \cite{w3_gastaud2019varying}.
The community has realized this, and steers towards using more {\bf context} and {\bf connection} (cf. Figure~\ref{fig:connection-illustration-example}) data to complement the intrinsic features. \end{itemize} \section{Address the Problems of Data Imbalance and Sparsity} During the process of preparing the training data for DL models, one is almost certain to encounter two problems: data {\it imbalance} and {\it sparsity}. We therefore discuss the causes of these two problems and present some effective solutions. \begin{figure}[!t] \centerline{\includegraphics[width=\linewidth]{./figures/data-imbalance.pdf}} \caption{\label{fig:data-imbalance}\textbf{Re-balance the dataset by augmenting the minority samples.}\newline For the sake of visualization, we assume the dataset only has two dimensions (feature 1 and 2). The minority group (e.g. successful startups) is indicated with red color in (a). SMOTE randomly samples new points {\bf on} the lines between existing minority samples, while AdaSyn allows sampling of new points {\bf off} those lines from a Gaussian distribution.} \end{figure} \subsection{Balance the dataset with augmentation or PU-learning} The purpose of clearly defining the startup success criteria (Section~\ref{sec:success_criteria}) is to assign a label (either positive/successful or negative/unsuccessful) to each sample/record in the dataset. Since successful startups are inherently rare$^2$, the number of positive samples [e.g. the red points in Figure~\ref{fig:data-imbalance}(a)] is significantly smaller than that of the negative ones. Another common form of data imbalance concerns the drastically different population sizes of different groups (by sector, customer focus, country, etc.), for example, a dataset that contains mostly fintech companies mixed with only a few biotech companies [cf. the red points in Figure~\ref{fig:data-imbalance}(a)].
Without special treatment, DL models will exhibit bias towards the majority class, and in extreme cases, may ignore the minority class altogether \cite{johnson2019survey}. The literature \cite{w2_yin2021solving,j2_ross2021capitalvx,t3_Unal2019Machine,t5_bento2018predicting,t6_unal2019searching,w5_guerzoni2019survival,c15_ferrati2021deep,t8_kamal2021modeling,w6_garkavenko2022you} mentions three approaches to ``rebalance'' an imbalanced dataset. \begin{itemize} \item {\bf SMOTE} (synthetic minority oversampling technique) \cite{chawla2002smote,w2_yin2021solving,j2_ross2021capitalvx,t3_Unal2019Machine,t5_bento2018predicting,t6_unal2019searching,w5_guerzoni2019survival}: As demonstrated in Figure~\ref{fig:data-imbalance}(b), random samples are drawn along the lines connecting pairs of minority samples. \item {\bf AdaSyn} (adaptive synthetic) \cite{he2008adasyn,c15_ferrati2021deep,t3_Unal2019Machine,t6_unal2019searching,t8_kamal2021modeling}: Different from SMOTE, AdaSyn allows the newly sampled points [the red circles filled with gray color in Figure~\ref{fig:data-imbalance}(c)] to deviate (obeying a Gaussian distribution) from the lines between existing minority samples. \item {\bf PU-learning} (positive unlabeled learning) \cite{kiryo2017positive,w6_garkavenko2022you}: Instead of generating additional synthetic samples, PU-learning modifies the loss function [cf. Figure~\ref{fig:data-split-train-process}(b)] so that it 1) allows unlabeled samples to participate in the training, and 2) focuses more on the minority samples throughout the optimization.
\end{itemize} \begin{figure}[!t] \centerline{\includegraphics[width=\linewidth]{./figures/data-split-train-process.pdf}} \vspace{5pt} \caption{\label{fig:data-split-train-process}\textbf{High-level illustration of (a) dataset splitting and (b) training procedure.}\newline The entire dataset $\mathbf{x}$ should be randomly divided into three splits: $\mathbf{x}_\text{train}$, $\mathbf{x}_\text{eval}$ and $\mathbf{x}_\text{test}$. The training subset $\mathbf{x}_\text{train}$ is used for training the DL model, where a loss is calculated to measure the prediction error (the deviation from the ground-truth label). The loss will guide the optimization of the model parameters.} \vspace{10pt} \end{figure} It is worth mentioning that SMOTE and AdaSyn can only augment tabular (numeric) features. More advanced techniques are required to augment unstructured raw data (e.g. text and images). For instance, text can be augmented with synonym replacement, insertion, swap, deletion \cite{wei_zou_2019_eda}, and summarization \cite{yao2017recent}. \subsection{Densify sparse input with simple imputation techniques} Sparsity and density describe the extent to which the features/factors (columns in Figure~\ref{fig:dense-vs-sparse}) of a dataset are ``empty'' (sparse) or filled with information (dense), where ``empty'' can also mean zero-valued. Since early-stage startup companies do not have much data available to the public, the resulting dataset (for startup success prediction) is typically sparse, as exemplified in Figure~\ref{fig:dense-vs-sparse}(b). While some ML algorithms are sparse-aware (e.g. XGBoost) \cite{w2_yin2021solving}, DL models (ANNs) rely on spatial or sequential correlations to learn, and highly sparse input breaks these correlations. As a result, ANNs are not sparsity agnostic.
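The SMOTE interpolation idea for tabular features can be sketched in a few lines of hand-rolled Python (the 2-D minority points are hypothetical; full implementations of SMOTE and AdaSyn, including nearest-neighbor selection, are available in the imbalanced-learn package).

```python
import random

# Hand-rolled sketch of the SMOTE idea (not the full algorithm):
# a synthetic minority sample is drawn ON the segment between two
# existing minority points; AdaSyn would additionally add Gaussian
# jitter so new points may fall OFF that segment.
random.seed(0)

minority = [(1.0, 2.0), (1.5, 2.5), (2.0, 1.8)]  # hypothetical 2-D features

def smote_sample(a, b):
    t = random.random()  # interpolation factor in [0, 1)
    return tuple(x + t * (y - x) for x, y in zip(a, b))

synthetic = [smote_sample(minority[0], minority[1]) for _ in range(5)]
```

Every synthetic point lies coordinate-wise between its two parent points, which keeps the augmented minority class inside its original feature region.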
\begin{figure}[!t] \centerline{\includegraphics[width=0.8\linewidth]{./figures/dense-vs-sparse.pdf}} \vspace{-3pt} \caption{\label{fig:dense-vs-sparse}\textbf{An illustration of dense data vs. sparse data.}\newline A row denotes an individual sample, data record or company, while a column represents a certain feature or factor. A white-colored cell indicates a missing or zero value.} \end{figure} A straightforward way to avoid sparse input is to remove every sample/record (row in Figure~\ref{fig:dense-vs-sparse}) or feature/factor (column in Figure~\ref{fig:dense-vs-sparse}) that contains ``empty'' cell(s). But the drastically reduced dataset scale prohibits this approach in practice. To match the dataset scale with model capacity, we need to keep as many rows and/or columns (Figure~\ref{fig:dense-vs-sparse}) as possible. Imputation is widely utilized to achieve that goal. Although there exist complex imputation methods based on DL models like autoencoders, there is no guarantee (e.g. \cite{jager2021benchmark}) that they outperform simpler ones. Instead, the literature suggests using simple methods, such as {\it mean/mode/zero imputation} \cite{t1_stahl2021leveraging,t5_bento2018predicting}, {\it latest-value imputation} \cite{t2_horn2021deep}, {\it Soft-Impute} \cite{mazumder2010spectral,t7_saini2018picking}, and {\it $k$-NN ($k$ nearest neighbor) imputation} \cite{w2_yin2021solving}. \section{Split the Dataset with an Investor-Centric View} \label{sec:dataset-split} Splitting the dataset is a mandatory step before training any ML/DL model, yet it is often discussed very lightly (sometimes even neglected) in the literature on startup success prediction. It is generally recommended to divide the dataset into non-overlapping {\it training} ($\mathbf{x}_\text{train}$), {\it evaluation} ($\mathbf{x}_\text{eval}$) and {\it test} ($\mathbf{x}_\text{test}$) subsets, as shown in Figure~\ref{fig:data-split-train-process}(a).
The model will be trained solely on the training set, during which the {\it model parameters} will be optimized. But as illustrated in the left part of Figure~\ref{fig:data-split-train-process}(b), the DL model also has {\it hyper-parameters}\footnote{Hyper-parameters are parameters controlling the learning process, hence also indirectly determining the values of model parameters. After the model is trained, hyper-parameters do not participate in model inference, thus they are not a part of the model parameters.} to be tuned in a process called hyper-parameter search. In the simplest form, the training is run $N$ times with different hyper-parameters, resulting in $N$ trained models, each of which is evaluated on $\mathbf{x}_\text{eval}$. The best performing model on $\mathbf{x}_\text{eval}$ should be used for further testing and production. The test set $\mathbf{x}_\text{test}$ should not be exposed to the hyper-parameter tuning process; it is only used to report the performance of the chosen model. Section~\ref{sec:eval} (Figure~\ref{fig:eval-test-process}) can be referred to for more details of the evaluation and testing processes. \subsection{Company-centric vs. investor-centric} For predicting the success of startups, the appropriate way to split the dataset is not as straightforward as it is in ML/DL research for other domains. We visualize a minimal example in Figure~\ref{fig:data-split-demo} to facilitate our discussion; there are three startups (A, B and C) founded at different dates over the timeline. According to some predefined success criteria (Section~\ref{sec:success_criteria}), A and B are labeled as positive (i.e. promising investing targets: $y^{(\text{A})}=y^{(\text{B})}=1$) some time after they are founded. The majority become unfavourable to VC firms (e.g. the label of C is $y^{(\text{C})}=0$) if they show no sign of success some years after their founding dates.
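The hyper-parameter search loop described above can be sketched as follows; the ``model'' is a toy threshold rule standing in for a trained DL model, and all data points and candidate hyper-parameter values are hypothetical.

```python
# Minimal sketch of hyper-parameter search: train N models with
# different hyper-parameters, evaluate each on x_eval, keep the best.
x_train = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]   # (score, label), hypothetical
x_eval  = [(0.3, 0), (0.7, 1), (0.8, 1)]

def train(threshold):
    # A real DL model would optimize its parameters on x_train here;
    # the threshold plays the role of a hyper-parameter.
    return lambda x: int(x >= threshold)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

candidates = [0.1, 0.5, 0.95]                # N = 3 hyper-parameter settings
models = [(train(t), t) for t in candidates]
best_model, best_t = max(models, key=lambda m: accuracy(m[0], x_eval))
```

Only after `best_model` is fixed by its performance on `x_eval` would it be scored once on the held-out test set.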
\begin{figure}[!t] \centerline{\includegraphics[width=\linewidth]{./figures/data-split-demo.pdf}} \vspace{-3pt} \caption{\label{fig:data-split-demo}\textbf{Visualization of (investor-centric) dataset split using three example startups.}\newline A startup (A/B/C) obtains a positive/negative label on a certain date, before which several feature snapshot dates are picked. For each snapshot date, a sample is computed using data available up till that date. In this manner, A generates three $\langle$sample-label$\rangle$ pairs: $\langle\mathbf{x}_1^{(\text{A})}, y_1^{(\text{A})}\rangle$, $\langle\mathbf{x}_2^{(\text{A})}, y_2^{(\text{A})}\rangle$ and $\langle\mathbf{x}_3^{(\text{A})}, y_3^{(\text{A})}\rangle$; B produces two pairs: $\langle\mathbf{x}_1^{(\text{B})}, y_1^{(\text{B})}\rangle$ and $\langle\mathbf{x}_2^{(\text{B})}, y_2^{(\text{B})}\rangle$; C also spawns two pairs: $\langle\mathbf{x}_1^{(\text{C})}, y_1^{(\text{C})}\rangle$ and $\langle\mathbf{x}_2^{(\text{C})}, y_2^{(\text{C})}\rangle$. The global timeline is fragmented into training, evaluation and test periods. The period to which the label (in the $\langle$sample-label$\rangle$ pairs) belongs determines the particular training/evaluation/test set each pair goes to.} \end{figure} With a {\bf company-centric view}, one could choose some event types (e.g. seed and pre-A rounds), the dates of which are called {\it feature snapshot dates}. We can then compute one sample using data before each snapshot date. As shown in Figure~\ref{fig:data-split-demo}, there are three snapshot dates on the timeline of startup A, leading to three samples (i.e. $\mathbf{x}_1^{(\text{A})}$, $\mathbf{x}_2^{(\text{A})}$ and $\mathbf{x}_3^{(\text{A})}$) that are all labeled positive (i.e. $y_1^{(\text{A})}=y_2^{(\text{A})}=y_3^{(\text{A})}=1$).
In a sense, A is augmented by generating three $\langle$sample-label$\rangle$ pairs: $\langle\mathbf{x}_1^{(\text{A})}, y_1^{(\text{A})}\rangle$, $\langle\mathbf{x}_2^{(\text{A})}, y_2^{(\text{A})}\rangle$ and $\langle\mathbf{x}_3^{(\text{A})}, y_3^{(\text{A})}\rangle$. Similarly, B and C produce another four pairs: $\langle\mathbf{x}_1^{(\text{B})}, y_1^{(\text{B})}\rangle$, $\langle\mathbf{x}_2^{(\text{B})}, y_2^{(\text{B})}\rangle$, $\langle\mathbf{x}_1^{(\text{C})}, y_1^{(\text{C})}\rangle$ and $\langle\mathbf{x}_2^{(\text{C})}, y_2^{(\text{C})}\rangle$. The company-centric split \cite{b1_ang2022using,c19_garkavenko2021valuation,c20_lee2018content,c22_yu2018prediction,c9_dellermann2021finding,j41_yeh2020machine,j42_srinivasan2020ensemble,j43_shi2021leveraging,j44_kaminski2020predicting,j46_kim2020recommendation,w3_gastaud2019varying} randomly allocates those pairs to one of the sets (training, evaluation or test). With an {\bf investor-centric view} \cite{c15_ferrati2021deep,c16_chen2021trend,c21_cheng2019success,j47_wu2022estimating,w1_lyu2021graph,w2_yin2021solving,w6_garkavenko2022you}, the feature snapshot dates are randomly sampled (before the corresponding label date), therefore they do not represent any event(s). More importantly, the global timeline (from the earliest startup founding date to now) is fragmented into three periods, i.e. training, evaluation and test periods, as illustrated in Figure~\ref{fig:data-split-demo}. For a startup, the period to which its label belongs determines the dataset split it goes to. Using this rule, we can see (cf. Figure~\ref{fig:data-split-demo}) that the three $\langle$sample-label$\rangle$ pairs from A should go to the training set; the two pairs from B belong to the test set; and lastly, the two pairs from C will head to the evaluation set.
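The investor-centric assignment rule (the period containing a startup's label date decides where all of its pairs go) can be sketched as follows; the period boundaries and label dates are hypothetical but mirror the A/B/C example above.

```python
from datetime import date

# Investor-centric split: the period containing a startup's LABEL date
# decides which subset ALL of its <sample, label> pairs go to.
# Period boundaries below are hypothetical.
train_end = date(2018, 1, 1)
eval_end = date(2020, 1, 1)

label_dates = {"A": date(2017, 6, 1),   # label falls in the training period
               "C": date(2019, 3, 1),   # label falls in the evaluation period
               "B": date(2021, 9, 1)}   # label falls in the test period

def split_of(label_date):
    if label_date < train_end:
        return "train"
    if label_date < eval_end:
        return "eval"
    return "test"

splits = {name: split_of(d) for name, d in label_dates.items()}
```

Because assignment is per startup rather than per pair, no startup ever contributes samples to two different subsets, which prevents leakage between training and test.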
The {\bf investor-centric view is generally preferred}, because it better resembles the real-world scenario of how investment professionals predict the success of startup candidates \cite{c3_sharchilev2018web}. \subsection{Understand the data generation process} When assembling the samples (i.e. $\mathbf{x}_i^{(\cdot)}$ in Figure~\ref{fig:data-split-demo}) using data up till the snapshot dates, one should make sure that no future information is leaked into $\mathbf{x}_i^{(\cdot)}$. This requires an in-depth understanding of not only the data itself ({\it know-what}) but also the data generation process ({\it know-how}), which we found is seldom addressed in the literature. We hereby give a concrete example out of many: a startup in the dataset has an annual revenue data point (from BvD\footnote{Bureau van Dijk: \url{www.bvdinfo.com}}) with a timestamp 2020-12-31; but this data point should be ignored when predicting on 2021-06-01. The reason is that fiscal reports (the source of revenue data) often have a delay of more than 12 months, making the 2020-12-31 data point unavailable until (at the earliest) 2021-12-31. Without carefully examining the information-leakage risk of features/factors from the perspective of the data generation process, the model may fail catastrophically in production. \section{Model Selection: Occam's Razor and No-Free-Lunch} \label{sec:model-selection} \begin{figure}[t!]
\centering \subcaptionbox{\footnotesize Distribution of DL model category.\label{fig:dl-model-category}} {\includegraphics[height=3.2cm]{figures/dl-model-category.pdf}} \hfill \subcaptionbox{\footnotesize Distribution of hyper-architecture\label{fig:model-hyper-architecture}} {\includegraphics[height=3.2cm]{figures/model-hyper-architecture.pdf}} \caption{\label{fig:model-category-architecture}\textbf{The adopted DL model categories and hyper-architectures.}\newline The main DL model categories (backbones) are ANN-based (ANN/DNN/MLP), recurrent based (LSTM/GRU), deep attention \cite{vaswani2017attention}, graph based (GNN/GCN/GAT) and convolution based (CNN). These common models can be used either directly (off-the-shelf) or as basic building blocks to form different hyper-architectures: combinatory, ensemble or sequential.} \end{figure} Based on the experimental results from \cite{c23_kim2017does,c20_lee2018content,c22_yu2018prediction,c3_sharchilev2018web,w3_gastaud2019varying,c21_cheng2019success,j46_kim2020recommendation,c6_ghassemi2020automated,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,c9_dellermann2021finding,c15_ferrati2021deep,c19_garkavenko2021valuation,c16_chen2021trend,j2_ross2021capitalvx,c1_zhang2021scalable,j26_bai2021startup,j37_kinne2021predicting,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w1_lyu2021graph,j16_allu2022predicting,j9_tang2022deep,b1_ang2022using,j47_wu2022estimating,w6_garkavenko2022you}, this section intends to provide insights into the choice of the best-performing DL {\it model category} and {\it hyper-architecture}.
The model category refers to the basic backbone of the DL model, which includes ANN/DNN/MLP\footnote{In this paper, ANN, DNN (deep neural network) and MLP (multi-layer perceptron) all refer to a neural network with at least two hidden layers, as introduced in Section~\ref{fig:dl-ann}.} \cite{b1_ang2022using,c15_ferrati2021deep,c16_chen2021trend,c22_yu2018prediction,c3_sharchilev2018web,c6_ghassemi2020automated,c9_dellermann2021finding,j2_ross2021capitalvx,j26_bai2021startup,j37_kinne2021predicting,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,j47_wu2022estimating,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w6_garkavenko2022you}, LSTM (long short term memory) \cite{c15_ferrati2021deep,j16_allu2022predicting,j43_shi2021leveraging,j47_wu2022estimating,t2_horn2021deep,w1_lyu2021graph}, GRU (gated recurrent unit) \cite{c16_chen2021trend,c20_lee2018content,t1_stahl2021leveraging}, GNN (graph neural network) \cite{c1_zhang2021scalable,j47_wu2022estimating}, GCN (graph convolutional network) \cite{w3_gastaud2019varying,c16_chen2021trend}, GAT (graph attention network) \cite{w1_lyu2021graph,j47_wu2022estimating}, CNN (convolutional neural network) \cite{c15_ferrati2021deep,c21_cheng2019success,c23_kim2017does,j42_srinivasan2020ensemble,j43_shi2021leveraging} and deep attention \cite{vaswani2017attention,j9_tang2022deep,c16_chen2021trend,c20_lee2018content}. In Figure~\ref{fig:dl-model-category}, we group similar model categories (i.e. LSTM/GRU and GNN/GCN/GAT) together. It can be seen that over 40\% of the surveyed papers adopt an ANN/DNN/MLP backbone due to its wide applicability to many data types. LSTM/GRU dominates almost all cases where time-series are used. Deep attention and graph based models (GNN/GCN/GAT) have a rising trend of adoption due to the increasing use of text and graph input modalities. Lastly, images and videos are the least used (cf.
Figure~\ref{fig:data-modality-dist}), leading to an adoption rate of only around 10\% for CNN. Oftentimes, as shown in Figure~\ref{fig:model-hyper-architecture}, off-the-shelf models (i.e. specific implementations of a certain model category in Figure~\ref{fig:dl-model-category}) are used in research such as \cite{b1_ang2022using,c19_garkavenko2021valuation,c22_yu2018prediction,c6_ghassemi2020automated,j16_allu2022predicting,j26_bai2021startup,j37_kinne2021predicting,w6_garkavenko2022you,w1_lyu2021graph}. Some works propose new architectures using existing (off-the-shelf) models as building blocks. The particular way to ``glue'' these basic blocks together is termed {\it hyper-architecture}, which has three forms (cf. Figure~\ref{fig:hyper-architecture}): \begin{itemize} \item Combinatory \cite{c1_zhang2021scalable,c15_ferrati2021deep,c16_chen2021trend,c20_lee2018content,c21_cheng2019success,c23_kim2017does,j43_shi2021leveraging,j47_wu2022estimating,t1_stahl2021leveraging,t2_horn2021deep}: the basic building blocks interact with (sometimes even embody) one another to achieve the same goal collaboratively; \item Ensemble \cite{c3_sharchilev2018web,c9_dellermann2021finding,j2_ross2021capitalvx,j41_yeh2020machine}: each block still works independently, and their results will be aggregated to produce the final output; \item Sequential (step-wise) \cite{j44_kaminski2020predicting,j46_kim2020recommendation}: a block can only start working when the previous block completes. \end{itemize} \begin{figure}[!t] \centerline{\includegraphics[width=0.95\linewidth]{./figures/hyper-architecture.pdf}} \vspace{-3pt} \caption{\label{fig:hyper-architecture}\textbf{High-level illustration of different hyper-architectures.}\newline The off-the-shelf basic models are used as building blocks of different hyper-architectures in literature, where ``combinatory'' is the most complex and ``sequential'' is the simplest.
} \end{figure} From the literature, it is almost impossible to draw a solid conclusion about which DL model works best, because each work is often unique in terms of data sources, data modalities, data splitting, data processing, feature engineering, startup success criteria, evaluation metrics, and so on \cite{rj1_pasayat2020factors,rj2_song2008success}. Unfortunately, the situation that each practitioner faces will continue to differ. According to the {\bf No-Free-Lunch} ({\bf NFL}) theory \cite{wolpert1997no}, there is no model that works best in every situation: the model assumptions might fit one situation yet fail to hold true for another. As a result, searching for the optimal model for a particular setup (mainly the data and success definition) is important. However, it is simply infeasible to attempt every model category and hyper-architecture. A common practice is to evaluate multiple models (cf. Section~\ref{sec:eval}) and pick the best-performing one, which is also reflected in the literature \cite{c23_kim2017does,c20_lee2018content,c22_yu2018prediction,c3_sharchilev2018web,w3_gastaud2019varying,c21_cheng2019success,j46_kim2020recommendation,c6_ghassemi2020automated,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,c9_dellermann2021finding,c15_ferrati2021deep,c19_garkavenko2021valuation,c16_chen2021trend,j2_ross2021capitalvx,c1_zhang2021scalable,j26_bai2021startup,j37_kinne2021predicting,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w2_yin2021solving,w1_lyu2021graph,j16_allu2022predicting,j9_tang2022deep,b1_ang2022using,j47_wu2022estimating,w6_garkavenko2022you}. Moreover, the data modality can sometimes imply the range of feasible DL models; for example, LSTM/GRU is best suited to handle time-series data.
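Of the hyper-architectures above, the ensemble form is the simplest to sketch: each block works independently and the results are aggregated (here by plain averaging; the three blocks are hypothetical stand-ins for trained models over different modalities).

```python
# Sketch of the "ensemble" hyper-architecture: independent blocks each
# produce a success probability, and their outputs are aggregated.
def block_tabular(x):   # e.g. an MLP over tabular features (stand-in)
    return 0.7

def block_text(x):      # e.g. a deep-attention model over text (stand-in)
    return 0.5

def block_graph(x):     # e.g. a GNN over the investment network (stand-in)
    return 0.6

def ensemble(x, blocks):
    # Simple average; weighted voting or a meta-learner are alternatives.
    return sum(b(x) for b in blocks) / len(blocks)

p = ensemble("startup_A", [block_tabular, block_text, block_graph])
```

A combinatory hyper-architecture would instead let the blocks exchange intermediate representations, while a sequential one would feed each block's output into the next.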
Finally, during this selection process, one should respect the principle of {\bf Occam's Razor} \cite{blumer1987occam}, which implies that one should prioritize the model with the least complexity and the best explainability. This principle explains Figure~\ref{fig:model-category-architecture}, where the simple off-the-shelf ANN/DNN/MLP model dominates. Some works (e.g. \cite{w6_garkavenko2022you}) conclude that DL models do not necessarily yield better results than much simpler ML methods. As a consequence, we recommend starting with a simple suitable model (even a purely random one), and then progressively increasing the model complexity until the pre-allocated resources (e.g. man hours and GPU/CPU quota) are exhausted. \section{Evaluate Model with Precision-First and Simulation Mindset} \label{sec:eval} The decision of productizing any trained model is often made by looking at the evaluation results. To achieve this goal, some {\it evaluation metrics}, as shown in Figure~\ref{fig:eval-test-process}, are employed to measure the quality of predictions (i.e. $y$) by comparing to the ground-truth labels (i.e. $\hat{y}$). The metric values computed over the evaluation set (i.e. $\mathbf{x}_{\text{eval}}$) are used to determine which model (among many trained using different hyper-parameters) will be deployed for production eventually. This process also fulfills the objective of searching for optimal hyper-parameters (i.e. hyper-parameter search). It has been discussed previously, in Section~\ref{sec:dataset-split}, that the evaluation metrics for the test set (i.e. $\mathbf{x}_{\text{test}}$) are merely reported as an indication of the model's generalization capability\footnote{Generalization capability describes a model's ability to adapt properly to new, previously unseen data, drawn from the same distribution as the one used to train the model.}.
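The core comparison between predictions and ground-truth labels can be sketched in a few lines; the toy labels below are hypothetical (note this paper's convention that $y$ denotes the prediction and $\hat{y}$ the ground truth; the code uses the more common `y_pred`/`y_true` names).

```python
# Sketch of computing precision, recall and F1 from binary labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # ground truth (hypothetical)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions (hypothetical)

tp = sum(p == 1 and t == 1 for p, t in zip(y_pred, y_true))  # true positives
fp = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))  # false positives
fn = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))  # false negatives

precision = tp / (tp + fp)   # fraction of predicted positives that are correct
recall = tp / (tp + fn)      # fraction of actual positives that are found
f1 = 2 * precision * recall / (precision + recall)
```

In practice, libraries such as scikit-learn provide these metrics (and ROC-AUC, NDCG, etc.) off the shelf; the point of the sketch is the definitions.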
\begin{figure}[!t] \centerline{\includegraphics[width=0.9\linewidth]{./figures/eval-test-process.pdf}} \caption{\label{fig:eval-test-process}\textbf{The Evaluation (hyper-parameter search) and Testing process.}\newline The evaluation/test data is fed into the trained model (hence the asterisk ``*'' beside the ``model parameters''), producing success prediction $y$. The evaluation metric estimates the prediction quality by comparing to the ground-truth labels $\hat{y}$. The calculated metric values from the evaluation set ($\mathbf{x}_{\text{eval}}$) are used to guide selecting the best hyper-parameters.} \end{figure} The evaluation metrics adopted in the DL literature include (ordered by their occurrences as shown in Figure~\ref{fig:eval-metrics-dist}) {\it precision} \cite{c1_zhang2021scalable,c15_ferrati2021deep,c21_cheng2019success,j2_ross2021capitalvx,j37_kinne2021predicting,j42_srinivasan2020ensemble,j43_shi2021leveraging,j44_kaminski2020predicting,j9_tang2022deep,t1_stahl2021leveraging,t2_horn2021deep,w1_lyu2021graph,w2_yin2021solving,w3_gastaud2019varying,c3_sharchilev2018web}, {\it recall} \cite{c15_ferrati2021deep,c21_cheng2019success,j2_ross2021capitalvx,j37_kinne2021predicting,j42_srinivasan2020ensemble,j43_shi2021leveraging,j44_kaminski2020predicting,j9_tang2022deep,t2_horn2021deep,w2_yin2021solving,w3_gastaud2019varying}, {\it F1 score} \cite{c1_zhang2021scalable,c15_ferrati2021deep,c21_cheng2019success,j37_kinne2021predicting,j42_srinivasan2020ensemble,j43_shi2021leveraging,j44_kaminski2020predicting,j9_tang2022deep,w2_yin2021solving,w3_gastaud2019varying}, {\it ROC-AUC} ({\it area under the receiver operating characteristics}) \cite{c1_zhang2021scalable,c21_cheng2019success,c22_yu2018prediction,c3_sharchilev2018web,c6_ghassemi2020automated,j42_srinivasan2020ensemble,j43_shi2021leveraging,t1_stahl2021leveraging,t2_horn2021deep,w3_gastaud2019varying,w6_garkavenko2022you}, {\it accuracy} 
\cite{c1_zhang2021scalable,c20_lee2018content,c22_yu2018prediction,c9_dellermann2021finding,j2_ross2021capitalvx,j26_bai2021startup,j41_yeh2020machine,j42_srinivasan2020ensemble,j44_kaminski2020predicting,j9_tang2022deep}, {\it FPR} ({\it false-positive rate}) \cite{c6_ghassemi2020automated,t2_horn2021deep,w2_yin2021solving,w6_garkavenko2022you}, {\it TPR} ({\it true-positive rate}) \cite{c6_ghassemi2020automated,t2_horn2021deep,w6_garkavenko2022you}, {\it hit rate} \cite{j16_allu2022predicting,c16_chen2021trend}, {\it NDCG} ({\it normalized discounted cumulative gain}) \cite{c16_chen2021trend,j16_allu2022predicting}, {\it portfolio simulation} \cite{w2_yin2021solving,j2_ross2021capitalvx}, {\it RMSE} ({\it root mean square deviation}) \cite{c19_garkavenko2021valuation,j47_wu2022estimating}, {\it AUPR} ({\it area under the precision-recall curve}) \cite{c1_zhang2021scalable}, {\it average precision} \cite{w1_lyu2021graph}, {\it confusion matrix} \cite{j2_ross2021capitalvx}, {\it F0.1 score} \cite{c3_sharchilev2018web}, {\it MAE} ({\it mean absolute error}) \cite{j47_wu2022estimating}, {\it MCC} ({\it Matthews correlation coefficient}) \cite{c9_dellermann2021finding}, {\it PR curve} ({\it precision-recall curve}) \cite{t1_stahl2021leveraging}, {\it $R^2$ or ``R squared''} \cite{c19_garkavenko2021valuation}, and {\it sensitivity/specificity} \cite{rj2_song2008success}. To measure prediction quality, these evaluation metrics are formally similar to the loss functions used in training [cf. Figure~\ref{fig:data-split-train-process}(b)], therefore some evaluation metrics (e.g. RMSE) can be used as loss functions too. Most trained models are expected to act as a decision-support system for VC deal sourcing. Realistically, human investors are only able to assess a limited number of startups. Further, because of fund size limitations, investors can only fund a very small fraction of startups \cite{t1_stahl2021leveraging}.
As a result, the {\bf evaluation metric should aim for high-precision (corresponding to high-certainty and low-recall)}\footnote{In the scope of VC deal sourcing, high-precision means the rate of ``correct'' predictions within the top-$N$ list (i.e. precision@$N$) should be high. According to the typical precision-recall curve, precision tends to be higher for smaller $N$; yet recall suffers from a small value of $N$.} \cite{c3_sharchilev2018web,t1_stahl2021leveraging}, which explains the popularity of {\it precision}, {\it true/false-positive rate}, {\it hit rate} and {\it F0.1 score} in Figure~\ref{fig:eval-metrics-dist}. \begin{figure}[!t] \centerline{\includegraphics[width=\linewidth]{./figures/eval-metrics-dist.pdf}} \vspace{-3pt} \caption{\label{fig:eval-metrics-dist}\textbf{The distribution of adopted evaluation metrics in the surveyed DL literature.}\newline All metrics are quantitative indications of model performance. The notation ``@N'' implies the corresponding metric is calculated over a top-$N$ list. Precision, hit rate, and F0.1 score are popular metrics with a focus on high-precision. Portfolio simulation is particularly well suited to the domain of startup success prediction, while the others are general-purpose metrics for evaluating ML/DL models.} \end{figure} There are four key questions to answer concerning any model trained to facilitate VC deal sourcing: {\bf Q1} What is the expected success ratio (or ROI) of the portfolio (with different sizes) constructed according to model predictions? {\bf Q2} How will the model-driven portfolio perform in relation to the historical records of renowned investment firms? {\bf Q3} Is the model significantly superior to a brainless random policy? {\bf Q4} How far does the model fall behind a theoretical perfect portfolio with 100\% success ratio? Answering all questions simultaneously using any single general-purpose ML/DL metric is challenging and sometimes far-fetched.
To that end, some recent works \cite{j2_ross2021capitalvx,w2_yin2021solving} (though still far from wide adoption according to Figure~\ref{fig:eval-metrics-dist}) have emerged proposing to {\bf evaluate via portfolio simulations}. Recall that in Section~\ref{sec:dataset-split}, we recommended the investor-centric dataset split illustrated in Figure~\ref{fig:data-split-demo}. With that split, we have the trained models predict the conditional success probability of each startup in the evaluation/test set, using the last date of the training period as the feature snapshot date. Then, we construct an investment portfolio of size $k$ by selecting the top-$k$ startups with the highest predicted probabilities. As an indication of portfolio performance, we count the number of startups that eventually obtain a positive label. The portfolio size $k$ should be varied, so that we can plot one performance curve (the three colored curves in Figure~\ref{fig:portfolio-sim}) for each model. To answer {\bf Q1}, a steeper curve corresponds to a better model. The performance of a perfect model is a diagonal line, implying all portfolio startups will be successful. To address {\bf Q4}, one just needs to measure the angular distance to the diagonal. The simplest possible model is a random policy, whose performance is represented by the flattest straight line in Figure~\ref{fig:portfolio-sim}; the angular distance between this ``random'' line and any model's curve answers {\bf Q3}. Finally, the historical fund performance of investment firms can be easily plotted as individual points, whose vertical distances to the models' curves give insights for {\bf Q2}. In practice, investment firms are more constrained than the simulation suggests: they cannot invest in arbitrary startups for many reasons, such as founder preferences, portfolio conflicts and investment mandates. This constraint becomes more prominent when investors compete to invest in startups with great success potential.
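The simulation loop described above can be sketched in a few lines of plain Python; the model predictions and ground-truth outcomes below are synthetic stand-ins, not data from any surveyed work:

```python
import random

def simulate_portfolios(pred_probs, outcomes, sizes):
    """For each portfolio size k, select the top-k startups by predicted
    success probability and count how many eventually succeed."""
    order = sorted(range(len(pred_probs)), key=lambda i: pred_probs[i], reverse=True)
    ranked = [outcomes[i] for i in order]  # outcomes sorted by model confidence
    return {k: sum(ranked[:k]) for k in sizes}

random.seed(0)
probs = [random.random() for _ in range(200)]         # stand-in model outputs
outcomes = [int(random.random() < p) for p in probs]  # synthetic ground truth
curve = simulate_portfolios(probs, outcomes, [20, 40, 60, 80, 100, 120])
# Reference lines: a perfect model yields {k: k}; a random policy grows at
# roughly mean(outcomes) * k.
```

Plotting `curve` for several models against the perfect diagonal and the random baseline reproduces the layout of Figure~\ref{fig:portfolio-sim}.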
\begin{figure}[!t] \centerline{\includegraphics[width=0.7\linewidth]{./figures/portfolio-sim.png}} \vspace{-3pt} \caption{\label{fig:portfolio-sim}\textbf{Evaluate model performance using simulated portfolios of different sizes.}\newline The trained DL model is used to form portfolios of different sizes $k\in \{20, 40, 60, 80, 100, 120\}$ (x-axis); the number of eventually successful startups is plotted against the corresponding $k$, resulting in a model performance curve (i.e. the three colored curves). The perfect and random cases are also plotted as straight lines showing the upper and lower performance boundaries, respectively. The historical performance of real-world investment firms can be plotted as standalone points (red circles) for comparison. This chart is adapted from \cite{w2_yin2021solving}.} \end{figure} \section{Resort to Model-Agnostic and Instance-Level Explainability} \label{sec:explainability} ANNs typically have thousands (often millions) of parameters, leading to extremely complicated nonlinear relationships between the input features/factors and output predictions. As a result, DL models suffer from the criticism that they are mysterious black boxes, as opposed to white-box ML models like logistic regression. However, explaining why a DL model comes up with certain predictions for startups (especially diverging predictions for comparable startups) is crucial for investment professionals \cite{j2_ross2021capitalvx}. This requirement has been driving the application of Explainable AI (XAI) \cite{gade2019explainable} solutions in the field of startup success forecasting. Practically, XAI helps to {\it increase trust in DL models} \cite{w2_yin2021solving}, {\it enable hypotheses-mining} \cite{w5_guerzoni2019survival}, {\it simplify model troubleshooting}, and {\it bust potential AI potholes} like biases.
Specifically, XAI aims to quantify the importance of every input feature/factor across all observations ({\bf global-level} explainability), or for one specific observation ({\bf instance-level} explainability) in the data. For {\bf global-level} explainability, the contribution of any input feature is easily understood in the likes of regression models, whose coefficients are directly associated with features: an increase of a feature by one unit increases the outcome by the amount of the corresponding coefficient. But this approach does not apply to DL models, since there is no one-to-one relationship between model parameters and input features. One solution is to simply aggregate the learned weights ({\it weights aggregation}) associated with a feature to approximate the impact of that feature \cite{j47_wu2022estimating}. For example, to measure the impact of feature $x_1$ in Figure~\ref{fig:dl-ann}, we may average all four weights connected to it. This weight-aggregation solution quickly fails with polarized weights or overly deep ANNs. {\it Ablation analysis} \cite{w6_garkavenko2022you}, instead, generalizes better to various scenarios: it investigates the model performance after removing certain input feature(s) to understand the contribution of the removed feature(s) to the overall system. While feature importance here can be interpreted globally, it is not specific to any particular startup. \begin{figure}[!t] \centerline{\includegraphics[width=0.9\linewidth]{./figures/lime.png}} \vspace{-3pt} \caption{\label{fig:lime}\textbf{LIME quantifies instance-level feature importance by examining local space.}\newline For simplicity, we assume each startup has merely two features (i.e. Feature 1 and 2). The positive (successful) and negative (unsuccessful) startups are color coded differently, and separated by a squiggly classification boundary.
To explain the prediction for a startup (represented by a red polygon), LIME examines the local space around this startup. In this example, the local separation boundary (linear approximation) turns out to be rather steep (i.e. a slope larger than 1.0), implying a much bigger impact from Feature 1. } \end{figure} The {\bf instance-level} explainability is a little less straightforward due to the black-box nature of DL models; but it is perhaps the most valuable problem to address, since it provides fine-grained insight into why each individual prediction is made. It should also help explain how the influence of a feature may vary with different observations that are fed to a DL model \cite{j2_ross2021capitalvx}. Given the drastically different DL model categories and hyper-architectures in Figure~\ref{fig:model-category-architecture}, model-agnostic approaches are called for. Our survey suggests two popular methods: \begin{itemize} \item {\bf LIME} (Local Interpretable Model-agnostic Explanations) \cite{ribeiro2016should,j2_ross2021capitalvx}, as illustrated in Figure~\ref{fig:lime}, examines the feature space local to an observation, and then applies a locally interpretable linear function $g(\mathbf{x})$ to approximate the model's black-box function $f(\mathbf{x})$. \item {\bf SHAP} (SHapley Additive exPlanations) \cite{j2_ross2021capitalvx,w2_yin2021solving,w3_gastaud2019varying,w6_garkavenko2022you}, as na\"ively shown in Figure~\ref{fig:shap}, measures the contribution of a certain feature $\mathbf{x}_1$ by permuting the other features. In each permutation, the model outputs one value with $\mathbf{x}_1$ (in the input), and another without $\mathbf{x}_1$; the difference between these two values represents the contribution for that permutation. Aggregating the contributions from all permutations gives the estimate for $\mathbf{x}_1$.
The authors of SHAP \cite{lundberg2017unified} propose a way to approximate this permutation procedure so that it scales to a large number of input features. \end{itemize} SHAP is fast and deterministic, making it generally more attractive to researchers. However, neither LIME nor SHAP is perfectly suited to explain extremely unstructured and high-dimensional modalities (e.g. texts and graphs in Figure~\ref{fig:data-modality-dist}). This is why novel and tailored methods have recently been proposed for explaining specific feature modalities such as text \cite{janizek2021explaining} and graph \cite{ying2019gnnexplainer}. \begin{figure}[!t] \centerline{\includegraphics[width=0.9\linewidth]{./figures/shap.pdf}} \vspace{-3pt} \caption{\label{fig:shap}\textbf{A na\"ive illustration of how SHAP thinks about instance-level explainability.}\newline For simplicity, we assume that each startup is represented by a feature vector $\mathbf{x}$ with three features $x_1$, $x_2$ and $x_3$. The model's prediction $f(\mathbf{x})$ can be viewed as a compound of contributions from $x_1$, $x_2$ and $x_3$. To measure the contribution of $x_1$, permutations of the other features ($x_2$ and $x_3$) are created. For each permutation, the output difference between the two cases (with and without $x_1$ in the input) is calculated. Aggregating these differences over all permutations gives the final estimate for $x_1$. SHAP \cite{lundberg2017unified} proposes a way to efficiently approximate this permutation procedure so that it scales to high-dimensional input.
} \end{figure} \section{Keep Humans in the Loop of Model Serving} \label{sec:human-in-the-loop} The end of model development is just the beginning of paying back the technical debt\footnote{Technical debt is a metaphor introduced by Ward Cunningham in 1992 to help reason about the long-term costs incurred by moving quickly in software engineering \cite{sculley2015hidden}.} \cite{sculley2015hidden} incurred from model productization, during which humans (i.e.~the consumers of the model predictions) play a critical role in perceiving the predictions and providing feedback. Because of this, a few works \cite{c9_dellermann2021finding,j26_bai2021startup,t10_melnychuk2021approved} have attempted to shed light on this topic. We hereby briefly touch upon the three most important aspects: \begin{itemize} \item {\bf Collective intelligence}: humans and DL models each have advantages over one another and can make a unique contribution to informed investment decisions. Human decisions are intuitive, subjective, and sometimes bias-prone due to bounded rationality and mental shortcuts \cite{hoenig2015quality}. However, humans also have highly distilled domain knowledge that enables them to recognize and interpret very rare information. This leads to outcomes that are difficult to predict and would rather represent outliers in ML/DL models \cite{blattberg1990database}. To that end, collective intelligence is suggested (by e.g. \cite{c9_dellermann2021finding,blohm2016rate,quinn2011human}) to complement the model prediction by assessing unknowable risk that cannot be explained by the data's prior distribution. \item {\bf Performance monitoring}: the model performance in production might be drastically different from evaluation/testing results (cf. Section~\ref{sec:eval}) due to the drift of input data and users' contexts. As a result, monitoring users' feedback on prediction quality in a timely (if not real-time) manner is mandatory.
For example, on a weekly basis, each user (investment professional) is asked to assess a few startups (e.g. 5 to 10) that are highly likely to succeed according to the model. Since precision is prioritized over other metrics (cf. Figure~\ref{fig:eval-metrics-dist}), we can monitor weekly precision as a gatekeeper of model quality. \item {\bf Model iteration}: degradation of the monitored metrics (e.g. precision) is expected at a certain time point, as discussed in some industrial research such as \cite{cao2020debiasing}. It is mostly caused by either drift of the data distribution or change of user context. While data drift can be corrected by retraining the model periodically on recent data, user context is much harder to compensate for. Here, user context refers to a user's way of defining the criteria of startup success or failure; perturbations of such definitions are hard to capture with the additional heuristics discussed in Section~\ref{sec:success_criteria}. However, this problem may be alleviated if we retrain the model using new assessments (both positive and negative) from users as additional labels. \end{itemize} \section{Conclusion}\label{s4} Finding the rare unicorn startups is inherently challenging, and hence often regarded as the holy grail for early-stage investors like Venture Capital (VC) firms. Among the data-driven approaches to help reach that goal, deep learning (DL) has recently attracted more and more attention due to its superior model capacity and expressivity. However, there has not been any comprehensive synthesis of the existing DL-based research. This leaves many practitioners uninformed, and also vulnerable to the pitfalls hidden in nine key tasks -- problem scoping, success definition, data gathering, data processing, data split, model selection, model evaluation, model explanation, and model productization. As a result, we carry out this literature synthesis on DL-based methods with a practical lens.
The key statistics and learnings are presented from nine perspectives corresponding to the aforementioned nine tasks. The authors' outlook on DL adoption in startup success prediction is threefold: (1) more easy-to-use software tools will be developed to promote good practices and lower the barrier to entry; (2) the majority of the available data is unlabeled and small-scale, hence more data- and label-efficient DL models will be proposed; (3) data privacy and model security will gain more emphasis in the coming years.
\section*{Acknowledgements} The authors thank Elisa Kreiss for helpful discussions. We also thank the anonymous reviewers and meta-reviewers for their helpful feedback. This research was supported in part by DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI (AI2). \section{Knowledge Models of Negated Events} \label{sec:model-comet} \textsc{Anion}{} can be used as training data for commonsense models to make inferences about negated events. Here, we recap COMET \cite{bosselut2019comet}, a commonsense knowledge model, and evaluate how training knowledge models on \textsc{Anion}{} affects their ability to hypothesize commonsense knowledge for negated and oppositional events. \subsection{Setup} \label{ssec:model-comet:setup} Commonsense transformers (COMET) are generative knowledge models that learn to hypothesize commonsense inferences by training on examples from a knowledge graph. Specifically, COMET receives knowledge tuples in $\{h,r,t\}$ form during training, where $h$ is a head entity, $r$ is a relation type, and $t$ is a tail entity. The model is trained to maximize the conditional log-likelihood of predicting the tokens of the tail entity $t$ given the tokens of the head entity $h$ and relation $r$: \vspace*{-1mm} \begin{equation} \mathcal{L}_{G} = - \sum \log{P(t|h, r)} \label{eq:comet-loss} \end{equation} \noindent In \textsc{Atomic}{} and \textsc{Anion}{}, $h$ corresponds to events, such as ``X has a nightmare,'' $t$ corresponds to commonsense inferences about those events, such as ``X wakes up,'' and $r$ corresponds to commonsense inference types, such as ``As a result, X does...''.
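The objective in Eq.~(\ref{eq:comet-loss}) sums the token-level negative log-likelihood of the tail entity. A toy sketch of that computation for a single tuple; the per-token probabilities below are invented for illustration, not produced by COMET:

```python
import math

def tuple_loss(tail_token_probs):
    """Toy version of the COMET objective for one (h, r, t) tuple:
    negative log-likelihood of the tail tokens, where tail_token_probs[i]
    stands for P(t_i | h, r, t_<i) as scored by the model."""
    return -sum(math.log(p) for p in tail_token_probs)

# Hypothetical per-token probabilities for the tail "X wakes up" given the
# head "X has a nightmare" and the relation xEffect.
loss = tuple_loss([0.5, 0.25, 0.8])  # = -ln(0.5 * 0.25 * 0.8) = ln(10)
```

Training minimizes the average of this quantity over all tuples in the knowledge graph.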
Following \citet{bosselut2019comet} and \citet{sap2019atomic}, for each event and relation type in \textsc{Atomic}{}, 10 candidate inferences are decoded from COMET using beam search with $b$=10. \subsection{Experiments} \label{ssec:model-comet:exps} As oppositional instances remain challenging to knowledge models such as COMET, we evaluate how \textsc{Anion}{} can be used to augment the type of examples seen by COMET during training. \begin{table}[t] \small \renewcommand{\arraystretch}{1.1} \resizebox{\linewidth}{!}{ \centering \begin{tabular}{l|lrrrr} \textbf{Eval Set} & \textbf{Train Set} & \textbf{PPL $\downarrow$} & \textbf{BL2 $\uparrow$} & \textbf{P@10 $\uparrow$} \\ \midrule \multirow{2}{*}{\textsc{Atomic}} & \textsc{Atomic} & 9.30 & \textbf{14.18} & \textbf{55.18} \\ & \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{9.28} & 14.05 & *53.61 \\ \midrule \multirow{2}{*}{\makecell[tl]{\textsc{Anion}-L}} & \textsc{Atomic} & 10.87 & 10.86 & 35.84 \\ & \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{9.08} & \textbf{11.96} & \textbf{**45.42} \\ \midrule \multirow{2}{*}{\makecell[tl]{\textsc{Anion}-S}}& \textsc{Atomic} & 11.69 & 12.07 & 36.89 \\ & \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{9.80} & \textbf{13.22} & \textbf{**46.88} \\ \midrule \multirow{2}{*}{\makecell[tl]{\textsc{Anion}-C}} & \textsc{Atomic} & 12.02 & 14.32 & 46.70 \\ & \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{11.20} & \textbf{14.64} & \textbf{**50.65} \\ \bottomrule \hline \end{tabular} } \caption{Evaluations of COMET models trained on \textsc{Atomic}{} and \textsc{Anion}{} KGs. Training on examples of negated events leads to large improvements in the quality of generated inferences with minimal dropoff in the quality of inferences for affirmative events.
Single (*) and double asterisks (**) indicate significance at p<0.05 and p<0.01, respectively.} \label{tab:comet-neg-result-main} \end{table} \paragraph{Evaluation Metrics} Following \citet{bosselut2019comet}, we evaluate the quality of generated inferences using BLEU-2 \cite{bleu-2003} as an automatic evaluation. We also compute the perplexity of models on their reference generations. For the human evaluation, we employ human judges from MTurk to identify whether generated commonsense inferences are plausible. We randomly sample 100 events from the original \textsc{Atomic}{} test set along with their negated counterparts from \textsc{Anion}. For each event, we present every decoded inference to five crowdworkers and ask them to identify whether the inference is plausible given the event. For each model trained on a different combination of \textsc{Atomic}{} and \textsc{Anion}{} (\textit{i.e.}, \textsc{Anion}-L, \textsc{Anion}-S, \textsc{Anion}-C), we evaluate the same events for comparison. We calculate Precision @ 10{} (P@10{}) across these human ratings, \textit{i.e.}, the average number of correct options per event-relation prompt. Specifically, we average the results from 45K ratings to compute the final human score (100 events $\times$ 9 relations $\times$ 10 options $\times$ 5 annotators). The pairwise agreement score of human evaluation is 63.6, which is on par with other similar commonsense reasoning annotation tasks \cite{rashkin-etal-2016-connotation}. \paragraph{Does negated event training improve commonsense inference for negated situations?} We train a COMET model on the events from \textsc{Atomic}{} (\textit{i.e.}, COMET-\textsc{Atomic}{}), and another on the examples from both \textsc{Atomic}{} and \textsc{Anion}{} (\textit{i.e.}, COMET-\textsc{Full}{}). The combined dataset is shuffled so that the original and negated examples are uniformly mixed during training. We report our comparison of these two models in Table~\ref{tab:comet-neg-result-main}. 
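The P@10{} human metric described above (the average number of plausible options per event-relation prompt) can be sketched as follows; the ratings below are invented for illustration, not drawn from the actual MTurk study:

```python
def precision_at_k(ratings_per_prompt):
    """Average fraction of generated options judged plausible per prompt,
    scaled to a percentage (the form of P@10 reported in the tables)."""
    fractions = [sum(r) / len(r) for r in ratings_per_prompt]
    return 100 * sum(fractions) / len(fractions)

# Two hypothetical prompts, each with 10 decoded inferences labeled 1 if a
# majority of the 5 annotators found them plausible, else 0.
ratings = [
    [1, 1, 0, 1, 0, 1, 1, 0, 0, 1],  # 6/10 plausible
    [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 3/10 plausible
]
score = precision_at_k(ratings)
```

In the actual evaluation, this average is taken over 100 events $\times$ 9 relations, with each option's label determined by 5 annotators.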
The performance of the original COMET model trained only on the \textsc{Atomic}{} knowledge graph drops significantly across all types of oppositional instances. Most surprisingly, a drop in performance is also observed on commonsense contradictions (\textsc{Anion}-C), which have no explicit negation cues. However, commonsense contradiction events can often be richer in content (see Table~\ref{tab:atomic-commonsense-contradiction-pair}), making them more challenging for knowledge models. Meanwhile, training on all negated examples in the \textsc{Anion}{} knowledge graph produces significant improvements across all negation categories (\textsc{Anion}-\{L,S,C\}), though we do observe a slight drop in human ratings on the examples from the original \textsc{Atomic}{} test set. \paragraph{Does negated event training deteriorate commonsense inference of affirmative situations?} We note in Table~\ref{tab:comet-neg-result-main} that training on \textsc{Atomic}{} + \textsc{Anion}{} hurts inference performance on the original \textsc{Atomic}{} evaluation set. To analyze why COMET-\textsc{Full}{} does not improve on this set of examples, we perform a case study on inferences generated by COMET-\textsc{Atomic}{} and COMET-\textsc{Full}{} under the same event and relation prompt, and note two qualitative patterns. First, we observe that COMET-\textsc{Full}{} tends to generate inferences that are less generic, but that may require additional implicit context.
For example, for the event ``X is really sad'' and the relation \textit{xEffect} (\textit{i.e.}, the effect of the event on X), COMET-\textsc{Atomic}{} generates inferences such as ``cries,'' ``gets depressed'' and ``takes medication.'' Conversely, COMET-\textsc{Full}{} generates context-specific inferences such as ``thinks about the past'' and ``thinks about what they did,'' which, while plausible in some context, may be less straightforward when evaluated broadly (not all feelings of sadness lead to reflection on the past or one's own actions). Second, we find an overall improvement for certain compositional events in \textsc{Atomic}{} that contain conjunction words: ``and'' or ``but.'' On these examples, COMET-\textsc{Full}{} outperforms COMET-\textsc{Atomic}{} with 12.41 and 12.22 BLEU-2 scores respectively. For example, for the event ``X is hot and humid'' and the relation \textit{xEffect}, COMET-\textsc{Atomic}{}’s generation includes correct inferences, such as ``to take a shower,'' ``to cool down,'' ``to drink some water,'' ``to go outside,'' and incorrect inferences, such as ``to turn on the heat'' and ``to drink a hot tea.'' COMET-\textsc{Full}{} generates all of COMET-\textsc{Atomic}{}’s correct inferences, but none of the incorrect inferences, demonstrating that training COMET jointly on \textsc{Atomic}{} and \textsc{Anion}{} can help avoid incorrect inferences involving commonsense mismatch in more compositional situations. In summary, the ability to generate richer, contextual inferences for COMET-\textsc{Full}{} is beneficial when handling complex events, but may not be necessary for many of the simple events in \textsc{Atomic}{}, and may backfire when subtler inferences are made. 
\begin{table}[t] \small \centering \begin{tabular}{l|lrrrr} \textbf{Eval Set} & \textbf{Train Set} & \textbf{PPL $\downarrow$} & \textbf{BL2 $\uparrow$} & \textbf{P@10 $\uparrow$} \\ \midrule \multirow{4}{*}{\textsc{Atomic}} & \textsc{Atomic} & 9.30 & 14.18 & 55.18 \\ & + \textsc{Anion}-L & \textbf{9.27} & \textbf{14.20} & \textbf{**58.11} \\ & + \textsc{Anion}-S & 9.30 & 14.09 & 55.74 \\ & + \textsc{Anion}-C & 9.29 & 14.10 & **52.22 \\ \midrule \multirow{4}{*}{\makecell[tl]{\textsc{Anion}-L}} & \textsc{Atomic} & 10.87 & 10.86 & 35.84 \\ & + \textsc{Anion}-L & \textbf{9.28} & \textbf{11.94} & \textbf{**44.94} \\ & + \textsc{Anion}-S & 9.93 & 11.29 & **44.01 \\ & + \textsc{Anion}-C & 10.34 & 11.04 & **42.33 \\ \midrule \multirow{4}{*}{\makecell[tl]{\textsc{Anion}-S}}& \textsc{Atomic} & 11.69 & 12.07 & 36.89 \\ & + \textsc{Anion}-L & 10.69 & 12.69 & **42.38 \\ & + \textsc{Anion}-S & \textbf{10.23} & \textbf{12.79} & \textbf{**45.50} \\ & + \textsc{Anion}-C & 10.95 & 12.35 & **41.76 \\ \midrule \multirow{4}{*}{\makecell[tl]{\textsc{Anion}-C}} & \textsc{Atomic} & 12.02 & 14.32 & 46.70 \\ & + \textsc{Anion}-L & 11.72 & 14.43 & 47.78 \\ & + \textsc{Anion}-S & 11.67 & 14.34 & 46.09 \\ & + \textsc{Anion}-C & \textbf{11.50} & \textbf{14.58} & \textbf{**48.79} \\ \bottomrule \hline \end{tabular} \caption{Ablation results of models trained and evaluated on different portions of \textsc{Anion}{}. The best result on each subset of \textsc{Anion}{} comes from training on similar examples. The model trained on negated events from \textsc{Anion}-L performs the best at generating inferences for the original \textsc{Atomic}{} events. 
Double asterisks (**) indicate significance at p<0.01.} \label{tab:comet-neg-result-ablation} \end{table} \paragraph{Which variety of negated events are most crucial to include in training sets?} As ablations, we train additional models using different subsets of \textsc{Anion}{}: logical~negations (\textsc{Atomic}{} + \textsc{Anion}-L), semi-logical~negations (\textsc{Atomic}{} + \textsc{Anion}-S), and commonsense contradictions (\textsc{Atomic}{} + \textsc{Anion}-C). These ablations evaluate whether knowledge models can adapt to certain types of negation more efficiently with additional data. In Table \ref{tab:comet-neg-result-ablation}, we show that training with examples of each negation type improves performance on the evaluation set related to that negation type. Interestingly, though, training on certain types of negation examples can also yield benefits downstream on other negation types. For example, training on commonsense contradictions (\textsc{Anion}-C) provides a clear benefit when evaluating on semi-logically negated events (\textsc{Anion}-S) as opposed to merely training on \textsc{Atomic}. Notably, the knowledge model trained with logically negated examples (\textsc{Atomic}{} + \textsc{Anion}-L) outperforms the model trained only on \textsc{Atomic}{} on all test sets. \section{Conclusion} \label{sec:conclusion} We present the first comprehensive study on commonsense implications of negations and contradictions. To expand commonsense resources for the challenge of negation modeling, we introduce \textsc{Anion}, a large scale commonsense knowledge graph for negated and contradicted events. We use \textsc{Anion}{} to train commonsense knowledge models and demonstrate that it effectively enriches machine commonsense inference capabilities around negation. Lastly, we propose a negation discriminator capable of identifying logical flaws in commonsense inferences. 
By combining the model trained on \textsc{Anion}{} with the negation discriminator, we achieve a further performance boost. \section{Discussion} \label{sec:discussion} \begin{table}[t] \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering \begin{tabular}{l|r|rrrr} \multicolumn{2}{c|}{\diagbox[width=\widthof{xxxxxxxxxx}]{\textbf{Eval}}{\textbf{Disc}}} & \textbf{L} & \textbf{S} & \textbf{C} & \textbf{LSC} \\ \midrule \multirowcell{3}{\textsc{Atomic}} & all & 54.16 & 54.49 & 55.03 & 55.68 \\ & valid & \textbf{54.20} & \textbf{54.64} & \textbf{55.71} & \textbf{**57.58} \\ & \%iprv & 0.08 & 0.28 & 1.23 & \underline{3.41} \\ \midrule \multirowcell{3}{\textsc{Anion}-L} & all & 46.54 & 46.26 & 46.15 & 46.39 \\ & valid & \textbf{**50.71} & \textbf{**48.36} & \textbf{46.16} & \textbf{**49.85} \\ & \%iprv & \underline{8.98} & 4.54 & 0.03 & 7.45 \\ \midrule \multirowcell{3}{\textsc{Anion}-S} & all & 46.90 & 47.73 & 47.47 & 47.53 \\ & valid & \textbf{47.14} & \textbf{**50.42} & \textbf{48.20} & \textbf{**50.62} \\ & \%iprv & 0.51 & 5.65 & 1.55 & \underline{6.50} \\ \midrule \multirowcell{3}{\textsc{Anion}-C} & all & 50.80 & 51.29 & 51.28 & 51.83 \\ & valid & \textbf{50.94} & \textbf{51.52} & \textbf{52.65} & \textbf{*53.91} \\ & \%iprv & 0.28 & 0.45 & 2.67 & \underline{4.02} \\ \midrule \end{tabular} \vspace*{-2mm} \caption{P@\{\# valid\}{} scores of the \textit{all} and \textit{valid} sets determined by the \textbf{L}, \textbf{S}, \textbf{C} and \textbf{LSC} discriminators. Generations are from COMET-\textsc{Full}{} . Single (*) and double asterisks (**) indicate significance at p<0.05 and p<0.01, respectively. iprv\% is the improvement of the \textit{valid} over the \textit{all} set. \underline{Underlines} indicate the highest iprv\% across discriminators.} \label{tab:o+l+s+p-p@len(valid)} \end{table} \paragraph{Are learning-based and discriminator-based approaches complementary? 
} We apply our discriminators to the generations of the COMET model trained on \textsc{Anion}{}. In Table \ref{tab:o+l+s+p-p@len(valid)}, we see that the \textbf{LSC} discriminator, when applied to generations of COMET trained on \textsc{Anion}{}, achieves significant improvements over all evaluation sets, including the original events. The full evaluation of the P@\{\# valid\}{} and P@3{} scores of applying different discriminators to generations of COMET trained on different data over all evaluation sets is shown in Tables \ref{tab:full-p@len(valid)} and \ref{tab:full-p@3} in Appendix \ref{sec:appendix}. \begin{table} \small \renewcommand{\arraystretch}{0.8} \centering \begin{tabular}{l|l|rrr} \textbf{Beam Size} & \textbf{Set} & \textbf{$\#$\cmark} & \textbf{$\#$total} & \textbf{P@$\#$total} \\ \toprule \multirow{2}{*}{\textbf{10}} & all & 3.6 & 10.0 & 35.84 \\ & valid & 2.1 & 4.4 & 45.59 \\ \cmidrule{0-1} \multirow{2}{*}{\textbf{25}} & all & 8.1 & 25.0 & 32.29 \\ & valid & \textbf{4.3} & 10.5 & \textbf{38.18} \\ \bottomrule \end{tabular} \caption{Number of correct generations from applying the \textbf{LSC} discriminator to generations of COMET-\textsc{Atomic}{} for beam size of 10 and 25 for logical negation events. \textit{$\#$\cmark} is the number of correct options. \textit{$\#$total} is the number of options in each set.} \label{tab:valid-option-yield} \end{table} \vspace*{2mm} \noindent \textbf{Can discriminators be used to more aggressively generate inferences?} While applying discriminators to generated inferences yields a \textit{valid} subset with higher accuracy, we are left with fewer correct inferences in total. Thus, we investigate the efficiency of using discriminators to expand the number of inferences generated. We decode inferences from COMET with beam size 25, and then apply the discriminator to this larger candidate set.
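The decode-then-filter recipe can be sketched in a few lines; the candidate strings, the discriminator rule, and the gold set below are all hypothetical stand-ins for COMET generations and the learned LSC discriminator:

```python
def filter_valid(candidates, discriminator):
    """Keep only the generated options the discriminator accepts as valid."""
    return [c for c in candidates if discriminator(c)]

def precision(options, gold):
    """Fraction of options that appear in the gold (correct) set."""
    return sum(o in gold for o in options) / len(options)

# A toy beam of candidate inferences for a negated event, with a toy
# discriminator that rejects affirmative-style options.
beam = ["sad", "happy", "cries", "smiles", "upset"]
gold = {"sad", "cries", "upset"}
valid = filter_valid(beam, lambda c: c not in {"happy", "smiles"})
# precision(beam, gold) = 0.6, while precision(valid, gold) = 1.0: the
# filtered set is smaller but more accurate, mirroring the trade-off above.
```

With a larger beam, the same filter recovers more correct options at comparable or better precision, which is the effect measured in Table \ref{tab:valid-option-yield}.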
Table \ref{tab:valid-option-yield} shows that for logical negation, the \textit{valid} set of beam 25 has higher accuracy and more correct options than the \textit{all} set of beam 10. Thus, when we have a larger and potentially noisier set of candidates, applying the negation discriminator yields a set of options that have higher quality than using all the candidates from a smaller set of initial generations. \section{\textsc{Anion}: Commonsense Inferences of Oppositions and Negations} \label{sec:resource} To provide a rich resource of commonsense inferences for opposition and negation events, we design \textsc{Anion}. Using the same schema as the \textsc{Atomic}{} knowledge graph \citep{sap2019atomic}, we initialize 22,483 negated forms paired with original \textsc{Atomic}{} events and crowdsource 627,042 new inferences for these negated events. Consistent with \textsc{Atomic}{}, \textsc{Anion}{} is constructed using English formulations of events and inferences. We briefly recap \textsc{Atomic}{} and describe the construction of \textsc{Anion}{} below. \paragraph{\textsc{Atomic}{} Background} The \textsc{Atomic}{} knowledge graph contains $\sim$24K base events (\textit{e.g.}, ``X plays the piano'') with 877K accompanying social commonsense inferences (\textit{e.g.}, ``Before, X needs to buy a piano.'') along nine dimensions (\textit{e.g.}, \textit{xNeed}). The full description of \textsc{Atomic}{} relation types can be found in Table \ref{tab:atomic-rel-pattern} in the Appendix. \subsection{Overview of \textsc{Anion}{} Construction} Our knowledge construction pipeline consists of two steps. First, we collect negated and contradictive events by deriving oppositions of events in \textsc{Atomic}{}.
Inspired by the distinction made between negation contributed by semantic assertion (explicit negation) and non-asserted content (implicit negation) \cite{XIANG201671}, we define three varieties of negated events: logical negations, semi-logical negations, and commonsense contradictions, which we describe in detail below. Logical and semi-logical negations were heuristically formulated from \textsc{Atomic}{} events. Commonsense contradiction events were crowdsourced from Amazon Mechanical Turk (MTurk). Negated events in \textsc{Anion}{} are assigned to the same data split as the corresponding affirmative event from which they are derived (\textit{e.g.}, negated events for \textsc{Atomic}{} training set events are found in the \textsc{Anion}{} training set). Once a list of negated events is compiled, we crowdsource inferences of these new events on MTurk using similar annotation templates as \citet{sap2019atomic}. We design qualifying tasks to filter out unreliable workers and screen their answers manually for quality control purposes. \paragraph{Logical Negation} We define logical negation events as events with the negation cue \textit{not} added to their original formulation (\textit{e.g.}, ``X does not play the piano''). However, different positions of the \textit{not} modifier in a clause can result in different \textit{negation scopes}, which can alter the semantics of the event \cite{councill-etal-2010-great}. To be consistent, we systematically insert \textit{not} after the subject of the event clause. If necessary, we change verb forms and add auxiliary words (\textit{e.g.}, do, does, did, is, was, can, could, would, should, may, might). For quality control, we have human workers validate each logically negated event form and exclude events that annotators identify as uninterpretable or awkwardly worded. For each created event, we then collect the same nine dimensions of inferences as defined in \textsc{Atomic}{}.
In total, we collected 8,285 logically negated events with 225K corresponding inferences (as shown in Table \ref{tab:atomic-neg-data-stats}). Appendix~\ref{sec:appendix:data-collection-details} provides further details of the compilation of logical negation events. \paragraph{Semi-logical Negation} We define semi-logical negation using explicit cues other than \textit{not}. We categorize these negation cues (words or phrases) into four subtypes: affixes (\textit{e.g.}, legal/illegal), single-word cues (\textit{e.g.}, never), multi-word cues (\textit{e.g.}, no longer), and negative verbs (\textit{e.g.}, refuse). See Table \ref{tab:negation-cue-examples} for examples. We create semi-logical negation events by heuristically adding these cues at different positions in \textsc{Atomic}{} events. As with logical negation events, we avoid grammatically incorrect or semantically awkward events by removing auto-generated instances of low quality. The final set of data includes 5,019 semi-logical negation events. We then crowdsource a total of 138K inferences for these new events. Appendix~\ref{sec:appendix:data-collection-details} provides further details of the compilation of semi-logical negation events. \input{tables/atomic-contradiction-example-table} \paragraph{Commonsense Contradiction} We formulate commonsense contradiction as contradictory statements without negation cues. Commonsense contradiction events are not identifiable as negations on their own, but demonstrate reversed semantic or pragmatic meaning when paired with their affirmative counterparts (\textit{e.g.}, ``X eats a hamburger'' vs. ``X eats a salad''). To obtain commonsense contradictions, we crowdsource two oppositional events for each \textsc{Atomic}{} event, excluding events with blank placeholders representing generic objects, resulting in 40K new commonsense contradiction events. For 9,179 of these events, we crowdsource an additional 262K commonsense inferences.
Appendix~\ref{sec:appendix:data-collection-details} provides further details of the crowdsourcing of commonsense contradiction events. \section{Introduction} \label{sec:intro} Humans reason about underlying causes and effects of events described in text. For example, in Figure \ref{fig:example_pos_neg_inference}, the event ``X wears a mask'' is associated with many causal inferences such as ``X is seen as responsible,'' or ``Others get protected.'' Hypothesizing and reasoning about commonsense inferences is used for understanding complex situations encountered in everyday life \cite{sap2019socialiqa, bisk2019piqa, bhagavatula2020abductive, Sakaguchi2020WINOGRANDEAA}. This ability eludes AI systems, and has motivated the design of a wealth of commonsense knowledge resources, such as Cyc \cite{Lenat1995Cyc}, ConceptNet \cite{speer2018conceptnet}, and \textsc{Atomic}{} \citep{sap2019atomic, hwang2020cometatomic}, to provide structured reasoning capabilities to AI systems \cite{lin2019kagnet, feng2020scalable}. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/example_pos_neg_inference.pdf} \caption{Commonsense inferences for the event ``X wears a mask,'' its logical negation and commonsense contradiction events, and their associated inferences.} \label{fig:example_pos_neg_inference} \end{figure} However, reasoning about negated observations remains a challenge \cite{hossain2020its}. While negation is often considered a poorer form of meaning than affirmation\footnote{Following \citet{sep-negation}, we classify declarative expressions as affirmations or negations/contradictions based on whether they affirm or deny an action or object.} \cite{ackrill1963aristotle, sep-negation}, negated statements can still imply expressive commonsense inferences. In Figure~\ref{fig:example_pos_neg_inference}, the negated event ``X doesn't wear a mask'' is connected to rich commonsense inferences, despite describing the \textit{absence} of action.
However, negated observations are rarely found in commonsense knowledge resources. For example, negated examples make up only $\sim$3\% of examples in the ConceptNet knowledge graph \citep{li-etal-2016-commonsense}. This scarcity poses downstream issues for systems that must understand negated situations. Commonsense knowledge models \citep{bosselut2019comet, hwang2020cometatomic} trained on resources of largely affirmative instances struggle particularly with negation examples. Their ability to hypothesize inferences for negated events is 35\% lower than for affirmative events (\S\ref{ssec:model-comet:exps}). Furthermore, since negated statements are asymmetrically mentioned in text compared to affirmative statements \citep{jowett1892dialogues,sep-negation}, large-scale pretraining does not implicitly learn negation scoping \citep{kim-etal-2019-probing}. As a result, when presented with negated concepts, pretrained neural language models (PLMs) often exhibit the same associations as affirmative statements \cite{kassner-etal-2020-pretrained}. Motivated by these observations, our work focuses on improving the ability of knowledge models to make commonsense inferences about events that convey denial, rejection or contradiction of actions. We define our contributions as follows. First, we crowdsource a new large-scale resource, \textbf{A}rray of commonse\textbf{N}se \textbf{I}nferences for \textbf{O}ppositions and \textbf{N}egations (\textsc{Anion}{}), which contains inferences for different types of negated events. This new resource can be used to train knowledge models on commonsense inferences associated with the absence of actions. Second, we propose a new class of negation discriminators that can be applied to generated commonsense inferences. These discriminators partition inferences based on logical consistency, thereby mitigating the effects of common affirmative associations that violate negation constraints.
Discriminators are trained using contrastive samples from paired affirmative and negated events in \textsc{Anion}. Finally, we conduct an empirical study of both of these techniques and show that using training- and discriminator-based approaches for modeling negation cuts the performance difference between affirmative and negated events by 73--85\% depending on the negation variety. \section{Related Work} \label{sec:related} \paragraph{Commonsense Knowledge Models} Recent approaches have investigated how to distill commonsense knowledge from deep neural language models. \citet{li-etal-2016-commonsense} assign quality scores to novel tuples in ConceptNet to distinguish true knowledge from false knowledge. However, they investigate how to use neural models to validate proposed knowledge rather than generating new knowledge to expand the KB. \citet{saito-etal-2018-commonsense} propose a joint learning method that incorporates generated knowledge tuples to augment their KB completion model, rather than to enrich the commonsense KB. \citet{jastrzebski-etal-2018-commonsense} extend these approaches to automatically mine commonsense knowledge from Wikipedia. \citet{cui2020does} propose attention-based methods to analyze the commonsense knowledge encoded in BERT and its contribution to model predictions, finding that attention heads capture the structured commonsense knowledge in ConceptNet and that fine-tuning teaches BERT to use this knowledge in its higher layers. \citet{davison-etal-2019-commonsense} develop a method for generating commonsense knowledge with a large pretrained bidirectional language model: by transforming relational tuples into masked sentences, they rank a tuple's validity by the estimated pointwise mutual information between the two entities.
Because their approach does not update the weights of the bidirectional model, it is not biased by the coverage of any one commonsense knowledge base: although it performs worse on a held-out test set than models explicitly trained on the corresponding training set, it outperforms those methods when mining commonsense knowledge from new sources, suggesting that unsupervised techniques may generalize better than current supervised approaches. \citet{klein-nabi-2020-contrastive} propose a self-supervised method for pronoun disambiguation and the Winograd Schema Challenge that exploits so-called ``trigger'' words, which are responsible for flipping the answer in pronoun disambiguation; they construct pairwise contrastive auxiliary predictions with a mutual-exclusive loss regularized by a contrastive margin on top of BERT, and show that this alleviates the limitations of current supervised approaches to commonsense reasoning. \paragraph{Natural Language Inference} \citet{warstadt-etal-2019-investigating} investigate what pretrained models such as BERT learn about negative polarity items and the licensing environments that govern them. \section*{Ethical Considerations} \subsection*{\textsc{Anion}{} Language Choice and Implications} We select English as the base language of \textsc{Anion}{} so that our resource may be directly linked with the original \textsc{Atomic}{} knowledge graph. We acknowledge, however, that resources in English are more likely to reflect the mindsets and behaviors of English speakers. Furthermore, and in our case specifically, our annotators were primarily from the US.
Consequently, this language choice biases the content of the knowledge graph toward North American perspectives, which affects what models trained on these resources would learn about social norms \citep{acharya2020atlas}. Future work may also include other languages and cultures to make the \textsc{Anion}{} resource more culturally and ideologically inclusive. \subsection*{Crowdworker Recruitment, Quality Control and Remuneration} We recruit crowdworkers from MTurk who are located within the US with HIT approval rates higher than 98\%. To ensure high-quality task completions, we post pilot batches and manually examine tens of thousands of responses to identify users who provide high-quality annotations. We select 834 qualified users for the formal data collection and human evaluation tasks. Since the entire study spans multiple months, we regularly sample responses to re-examine their quality during the formal study, and remove HITs from crowdworkers who provide declining-quality responses over time. We are particularly cautious about the human evaluation tasks, so even with qualified users, we still comprehensively examine tens of thousands of human evaluation tasks by grouping HITs per user, and look at their responses together to identify potential spamming behaviors and inconsistencies. For the data collection and human evaluation tasks, we aim to compensate crowdworkers with an average of \$15 per hour. To ensure a fair payment, we first post a pilot task to estimate the average time cost of a specific task, and pay users at a high rate in this round to avoid underpayment during the pilot study. We then calculate a new payment rate from the pilot task such that approximately 75\% of the HITs from the pilot round would have been paid more than \$15 per hour at the adjusted rate. We then adopt this new rate for the formal study. We repeat the above procedure of determining payment periodically during the study to ensure the crowdworkers are consistently well-paid.
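The rate-adjustment arithmetic described above can be sketched as follows, with illustrative numbers rather than the actual pilot data: the per-HIT payment is set from an upper quantile of pilot completion times, so that roughly 75\% of HITs clear the \$15-per-hour target.

```python
# Sketch of the payment-rate adjustment described above (illustrative
# numbers, not the actual pilot data). If a HIT pays `p` dollars and takes
# `t` seconds, the hourly rate is p * 3600 / t. To have ~75% of HITs exceed
# the target hourly rate, `p` is set from the 75th percentile of pilot
# completion times (faster HITs then earn more than the target).

def adjusted_payment(pilot_times_sec, target_hourly=15.0, quantile=0.75):
    times = sorted(pilot_times_sec)
    # nearest-rank index of the `quantile`-th completion time
    idx = min(len(times) - 1, int(quantile * len(times)))
    t_q = times[idx]
    return target_hourly * t_q / 3600.0

# e.g., if 75% of pilot HITs finished within 120 seconds:
pay = adjusted_payment([60, 90, 100, 120, 200])
```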
\section{Commonsense Knowledge Model of Negated Events} \label{sec:model} In this section, we first introduce COMET \cite{bosselut2019comet}, the base commonsense knowledge generation model with which we study commonsense negation reasoning, along with a negation discriminator submodule used to identify logically plausible COMET generations. \subsection{COMET} \label{sec:model-comet} \textbf{COM}mons\textbf{E}nse \textbf{T}ransformers (COMET) are generative commonsense knowledge models that learn to adapt a language model by training on a seed set of knowledge tuples. Specifically, COMET consumes knowledge tuples in $\{h,r,t\}$ form during training, where $h$ is a head entity, $r$ is a relation type, and $t$ is a tail entity. In \textsc{Atomic}{}, head entities correspond to events, such as ``PersonX has a nightmare,'' and tail entities correspond to commonsense inferences about those events, such as ``As a result, PersonX wakes up.'' The relation types are various dimensions of commonsense inferences that can be made about the events. During generation, COMET is given $h$ and $r$ to predict $t$. \paragraph{Input} We use the same input token setup for training as in \citet{bosselut2019comet}. A knowledge triple $\{h,r,t\}$ is represented as a concatenated sequence with tokens of each element in the tuple: $X = \{X^{h}, X^{r}, X^{t}\}$ where $X^{h} = \{x_{0}^{h},...,x_{|h|}^{h}\}$ are the tokens comprising the event, $X^{r} = \{x_{0}^{r},...,x_{|r|}^{r}\}$ are the tokens comprising the relation, and $X^{t} = \{x_{0}^{t},...,x_{|t|}^{t}\}$ are the tokens comprising the commonsense inference.
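The concatenation $X = \{X^{h}, X^{r}, X^{t}\}$ can be sketched as below. The special-token spelling (``<xEffect>'', ``<END>'') and the whitespace tokenizer are placeholders for illustration; the actual model uses GPT2's BPE vocabulary with added relation tokens.

```python
# Sketch of the tuple-to-sequence formatting described above, following the
# COMET input scheme X = {X^h, X^r, X^t}. Token spellings are hypothetical.

def format_tuple(head: str, relation: str, tail: str) -> list:
    """Concatenate head, relation and tail tokens into one training sequence."""
    x_h = head.split()              # X^h: event tokens
    x_r = ["<" + relation + ">"]    # X^r: a single special relation token
    x_t = tail.split() + ["<END>"]  # X^t: inference tokens plus end marker
    return x_h + x_r + x_t

seq = format_tuple("PersonX has a nightmare", "xEffect", "PersonX wakes up")
```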
\paragraph{Loss Function} We train COMET to maximize the conditional log-likelihood of predicting the tokens of the commonsense inference, $X^{t}$: \begin{equation} \mathcal{L}_{C} = - \sum_{s=|h|+|r|}^{|h|+|r|+|t|} \log{P(x_{s}|x_{<s})} \label{eq:comet-loss} \end{equation} \noindent where $|h|$, $|r|$, and $|t|$ are the lengths of the event, the relation type, and the inference, respectively. \paragraph{Initialization} Similar to \citet{bosselut2020dynamic}, we initialize the parameters of COMET to those of the 345M-parameter GPT2 model (GPT2-M) from \citet{Radford2019LanguageMA}. Special tokens that represent relation types (e.g., ``xIntent'') are added to the vocabulary and initialized via sampling from the normal distribution. \paragraph{Hyperparameters} Following \citet{bosselut2019comet}, we use a dropout rate of 0.1 and GeLU \cite{hendrycks2020gaussian} units as activation functions. During training, we use the Adam optimizer \cite{kingma2017adam} with a batch size of 64. For COMET models trained on different subsets of the \textsc{Atomic}{} dataset, we adopt a maximum learning rate of 6.25e-5 with a warmup period of 0.002 times the total number of minibatches customized for each model, which decays linearly until finishing training. We train separate COMET models on the original data (\textbf{O}), the original and logical negation data (\textbf{O+L}), the original and semi-logical negation data (\textbf{O+S}), the original and commonsense contradiction data (\textbf{O+C}), and the overall dataset (\textbf{O+L+S+C}), for 21k, 25k, 24k, 24k and 29k minibatches respectively, and apply early stopping for all models. The rest of the hyperparameters are the same as those of GPT2-M in \citet{Radford2019LanguageMA} implemented via the publicly available HuggingFace API\footnote{https://huggingface.co/transformers/}.
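A minimal sketch of the loss in Eq.~\ref{eq:comet-loss}: the negative log-likelihood is summed only over the tail positions, so the model is conditioned on, but not penalized for, the head and relation tokens. Here \texttt{token\_logprobs} stands in for the per-position $\log P(x_{s}|x_{<s})$ produced by the language model.

```python
import math

# Sketch of the COMET loss: sum -log P(x_s | x_<s) over the inference
# tokens X^t only, skipping the |h| + |r| conditioning positions.
# `token_logprobs` is a stand-in for the language model's outputs.

def comet_loss(token_logprobs, len_h, len_r):
    """Negative log-likelihood restricted to the tail-entity positions."""
    start = len_h + len_r  # first tail position
    return -sum(token_logprobs[start:])

# Toy example: a 5-token sequence with |h| = 2, |r| = 1, |t| = 2,
# each token assigned probability 0.5 by the model.
lp = [math.log(0.5)] * 5
loss = comet_loss(lp, len_h=2, len_r=1)
```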
\section{Negation Discriminator for Inconsistent Inferences} \label{sec:appendix:train-discriminator} The negation discriminator is designed to identify plausible generations of negated events. We fine-tune RoBERTa\textsubscript{BASE} \cite{liu2019roberta} as a binary classifier with automatically constructed logically valid and invalid commonsense knowledge triples in the \textsc{Atomic}{} form. \paragraph{Data} The logically valid and invalid data used to train the negation discriminator is automatically constructed from the original \textsc{Atomic}{} data and its negation counterparts. Positive training samples comprise the original \textsc{Atomic}{} data and the newly crowdsourced negation data. To construct negative training samples, we leverage the concept of common and contrast sets by swapping the contrastive differences between paired original and negation events. Specifically, we replace conditions of original events with distinct conditions of their negation counterparts and vice versa. For example, to construct a negative example of \textit{$\{$``PersonX passes the exam'', xNeed, ``to study hard''$\}$}, we replace the condition object with ``to miss classes all the time,'' which is a condition of the corresponding logical negation event ``PersonX does not pass the exam'' under the same relation. To balance the overall dataset, we sample the same proportion of positive and negative knowledge triples for original and negation events, respectively. Since the boundary between commonsense inferences of negated event pairs is not clear-cut, conditions in the human-annotated contrast sets may fall into the common set as well, so not all training data has a gold label. However, previous work demonstrates that introducing a moderate degree of noise can improve training outcomes \cite{Luo2014DeepLW}. The four versions of negation discriminators and their respective training data are shown in Table \ref{tab:discriminator-data-stats}.
\begin{table} \small \centering \begin{tabular}{l|c|c} \toprule \textbf{Discriminator} & \textbf{Data Subset} & \textbf{$\#$Trn} \\ \midrule Logical (L) & \textsc{Anion}-L & 324,843 \\ Semi-logical (S) & \textsc{Anion}-S & 194,732 \\ Commonsense Contradiction (C) & \textsc{Anion}-C & 276,272 \\ All Oppositional Data (LSC) & \textsc{Anion} & 795,845 \\ \bottomrule \end{tabular} \caption{Statistics of data used to train negation discriminators.} \label{tab:discriminator-data-stats-app} \end{table} \paragraph{Input} As input to the discriminator model, we design sentence patterns that express relation types in natural language and fill out the patterned sentences with events and conditions before encoding them (e.g., ``PersonX addresses a talk. As a result, PersonX wants to convince others.''). Relations and their corresponding patterned sentences are listed in Table \ref{tab:discriminator-pattern} in Appendix \ref{sec:appendix}. In a pilot study, adopting patterned sentences proved more effective than concatenating the components of knowledge triples. \paragraph{Loss Function} The negation discriminator is trained to minimize the binary cross-entropy loss for each example: \begin{equation} \mathcal{L}_{D} = - \left[ y \cdot \log{P(y)} + (1-y) \cdot \log{(1-P(y))} \right] \label{eq:disc-loss} \end{equation} \noindent where $y$ is the label for an input (i.e., logically valid or invalid). \paragraph{Hyperparameters} Parameters are initialized with the trained weights of the RoBERTa-base model in \citet{liu2019roberta}. During training, we use the Adam optimizer \cite{kingma2017adam} and train the model with a batch size of 64. We adopt a maximum learning rate of 4.5e-5 with a warmup period of 10 minibatches. We train \textbf{L}, \textbf{S}, \textbf{C}, and \textbf{LSC} for 25K, 14K, 21K and 6K minibatches respectively, and apply early stopping for all models.
We use a probability threshold of 0.7 to determine whether an input knowledge triple to the discriminator is plausible, based on a pilot study on the development sets. The rest of the hyperparameters are the same as those of RoBERTa-base \cite{liu2019roberta} implemented via the publicly available HuggingFace API\footnote{https://huggingface.co/transformers/}. \section{Discriminating Inconsistent Inferences} \label{sec:model-discriminator} While training on examples of negated events helps knowledge models generate commonsense inferences for these event types, there is still a large gap compared to their performance on affirmative events. To address this discrepancy, we introduce a discriminator-based approach for distinguishing inconsistent inferences of negated events. Our inference discriminator learns to identify plausible and invalid inferences of events by training on contrastive samples from \textsc{Atomic}{} and \textsc{Anion}{}. \subsection{Experimental Setup} \label{ssec:disc:setup} We fine-tune the RoBERTa-base model \citep{liu2019roberta} as a binary classifier to identify whether a given knowledge tuple $\{h,r,t\}$ is logically valid. The model is trained on paired original and negated events as described below. Such training examples inject implicit commonsense nuances that differ between oppositional events to teach the discriminator to identify logical pitfalls. Training details for discriminators can be found in Appendix~\ref{sec:appendix:train-discriminator}. \paragraph{Data} \label{ssec:disc:data} The paired events used to train the negation discriminator are automatically constructed from the \textsc{Atomic}{} and \textsc{Anion}{} knowledge graphs. Positive examples can be constructed by sampling tuples from each knowledge graph. To construct negative training samples, we introduce the concept of \textit{common} and \textit{contrast} sets among inferences of events and their oppositions.
Common and contrast sets capture the fact that commonsense inferences are not necessarily negated in the same manner as their corresponding events. While certain inferences of an event stand in opposition to those of its negated counterpart, others may be common to both. For the events ``X eats a cheeseburger'' and ``X eats a salad,'' an inference such as ``X is hungry'' might be common to both events, while inferences such as ``X is unhealthy'' or ``X is healthy'' would be viewed as contrastive. Specifically, we take a pair of head events from \textsc{Atomic}{} and \textsc{Anion}{} and their respective sets of tail inferences for a shared relation type. We define the common set of these inferences as the intersection of the two sets of tail inferences connected to each head event, using exact string matching. The contrast set is formed by the distinct tail inferences connected to the two head events. Logically valid (\textit{i.e.}, positive) training examples consist of knowledge tuples from \textsc{Atomic}{} and \textsc{Anion}{}. Logically invalid (\textit{i.e.}, negative) training examples are formed by swapping the contrast-set inferences between paired original and negated events.\footnote{We note that annotations in \textsc{Atomic}{} and \textsc{Anion}{} are finite (\textit{i.e.}, not covering the full space of possible commonsense inferences about events). As a result, it is possible that in a more expansive annotation, elements of the contrast sets would in fact be part of the common set of an event and its negation. For the purpose of this work, however, contrast sets were an efficient way of acquiring high-quality semantically negative examples for training discriminators.} To balance the training set, we sample the same number of positive and negative tuples for original and negation events. Statistics of the resulting training sets are in Table \ref{tab:discriminator-data-stats}.
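The common/contrast-set construction and the negative-example swap can be sketched as follows, using the worked example above; the inference lists are illustrative.

```python
# Sketch of the common/contrast-set construction described above.
# Inference strings are matched exactly; `orig_tails` and `neg_tails` are
# the tail inferences of a paired affirmative and negated event under one
# relation type.

def contrast_sets(orig_tails, neg_tails):
    common = set(orig_tails) & set(neg_tails)   # shared inferences
    contrast_orig = set(orig_tails) - common    # distinct to the original
    contrast_neg = set(neg_tails) - common      # distinct to the negation
    return common, contrast_orig, contrast_neg

def negative_examples(head, relation, contrast_other):
    """Invalid tuples: a head paired with its opposition's contrast set."""
    return [(head, relation, t) for t in sorted(contrast_other)]

common, c_orig, c_neg = contrast_sets(
    ["to study hard", "to enroll"],                 # "X passes the exam"
    ["to miss classes all the time", "to enroll"],  # "X does not pass ..."
)
neg = negative_examples("PersonX passes the exam", "xNeed", c_neg)
```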
\begin{table}[t] \small \centering \begin{tabular}{l|c|c} \toprule \textbf{Discriminator} & \textbf{Train Set} & \textbf{Size} \\ \midrule Logical Negation \textbf{(L)} & \textsc{Anion}-L & 324,843 \\ Semi-logical Negation \textbf{(S)} & \textsc{Anion}-S & 194,732 \\ Commonsense Contradiction \textbf{(C)} & \textsc{Anion}-C & 276,272 \\ All Oppositional Data \textbf{(LSC)} & \textsc{Anion} & 795,845 \\ \bottomrule \end{tabular} \caption{Statistics of data used to train negation discriminators.} \label{tab:discriminator-data-stats} \end{table} \subsection{Experiments} \label{ssec:disc_exps} Using different portions of \textsc{Anion}{} for training yields four unique discriminators (\textit{i.e.}, \textbf{L}, \textbf{S}, \textbf{C} and \textbf{LSC}) that we apply to commonsense inferences generated by COMET. The discriminators classify each option as either logically \textit{valid} or \textit{invalid}, partitioning the candidates into two sets, which we evaluate with human judgements. As a baseline, we also record the precision of not using a discriminator, which assumes all generated inferences are valid candidates (\textit{i.e.}, the \textit{all} set). \paragraph{Metrics} We evaluate and compare the quality of the \textit{all}, \textit{valid} and \textit{invalid} sets using BLEU-2 and the same human evaluation as in \S \ref{sec:model-comet}. The \textit{all} set contains the full set of 10 candidates, while the \textit{valid} and \textit{invalid} sets have varying numbers of elements depending on how discriminators classify them, summing to 10. To compute statistical significance between the \textit{valid} and \textit{all} sets, we use a permutation test with 100K permutations. Details are provided in Appendix~\ref{app:permutation}.
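A paired permutation test of the kind used here can be sketched as follows. The exact test statistic is specified in the appendix; this sketch assumes per-example score differences between the two sets and uses far fewer permutations than the 100K reported above.

```python
import random

# Sketch of a paired (sign-flip) permutation test between per-example
# scores of the `valid` and `all` sets. Each permutation randomly swaps
# which set an example's scores are attributed to, building a null
# distribution for the mean difference.

def permutation_test(valid_scores, all_scores, n_perm=1000, seed=0):
    rng = random.Random(seed)
    observed = sum(v - a for v, a in zip(valid_scores, all_scores))
    extreme = 0
    for _ in range(n_perm):
        diff = sum((v - a) if rng.random() < 0.5 else (a - v)
                   for v, a in zip(valid_scores, all_scores))
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_perm  # two-sided p-value
```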
\begin{table} \small \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering \begin{tabular}{l|lrrr} \textbf{Eval Set} & & \textbf{$\#$} & \textbf{BL2$\uparrow$} & \textbf{P@\textit{k}$\uparrow$} \\ \midrule \multirowcell{3}{\textsc{Atomic}}& all & 10.0 & 14.18 & 55.18 \\ & valid & 6.3 & \textbf{14.24} & \textbf{59.07} \\ & invalid & 3.7 & 13.93 & 44.10 \\ \midrule \multirowcell{3}{\textsc{Anion}-L} & all & 10.0 & 10.86 & 35.84 \\ & valid & 5.6 & \textbf{11.33} & \textbf{45.59} \\ & invalid & 4.4 & 10.13 & 25.96 \\ \midrule \multirowcell{3}{\textsc{Anion}-S} & all & 10.0 & 12.07 & 36.89 \\ & valid & 6.3 & \textbf{12.63} & \textbf{44.93} \\ & invalid & 3.7 & 11.32 & 27.83 \\ \midrule \multirowcell{3}{\textsc{Anion}-C} & all & 10.0 & 14.32 & 46.70 \\ & valid & 5.9 & \textbf{14.78} & \textbf{51.45} \\ & invalid & 4.1 & 13.56 & 37.33 \\ \bottomrule \end{tabular} \caption{The evaluation of the \textit{all}, \textit{valid} and \textit{invalid} sets of inferences generated by COMET-\textsc{Atomic}{} as partitioned by the \textbf{LSC} discriminator. \textbf{P@\textit{k}} corresponds to the human-rated precision of a set. $k$ is the number of elements in the \textit{all}, \textit{valid}, or \textit{invalid} set. For the \textit{valid} set, higher \textbf{P@\textit{k}} is better (\textit{i.e.}, more valid inferences are being partitioned). For the \textit{invalid} set, lower \textbf{P@\textit{k}} is better (\textit{i.e.}, fewer valid inferences are being included).} \label{tab:comet-disc-eval-per-example} \end{table} \begin{table}[t!]
\small \centering \begin{tabular}{l|l|lc} \toprule \textbf{Event + Rel} & \textbf{Generation} & \textbf{V} & \textbf{P}\\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not skate \\around\\\textbf{\textit{xAttr}}}} & athletic & \xmark & \xmark \\ & careless & \xmark & \xmark \\ & lazy & \cmark & \cmark \\ & uncoordinated & \cmark & \cmark \\ & unskilled & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not sit \\behind Y\\\textbf{\textit{xIntent}}}} & to be alone & \cmark & \cmark \\ & to be left alone & \cmark & \cmark \\ & to avoid Y & \cmark & \cmark \\ & to sit & \xmark & \xmark \\ & to wait & \cmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not look \\angry\\\textbf{\textit{xNeed}}}} & to calm down & \xmark & \cmark \\ & to watch a movie & \cmark & \xmark \\ & to have been provoked & \xmark & \xmark \\ & to not be angry & \cmark & \cmark \\ & to be calm & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X refuses \\to hear a \\scary noise\\\textbf{\textit{xWant}}}} & to run away & \xmark & \xmark \\ & to go to sleep & \cmark & \cmark \\ & to be safe & \cmark & \cmark \\ & to keep quiet & \cmark & \cmark \\ & to avoid the noise & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X never \\brings Y into \\conflicts\\\textbf{\textit{oWant}}}} & to avoid X & \xmark & \xmark \\ & to be left alone & \xmark & \cmark \\ & to thank X & \cmark & \cmark \\ & to fight back & \xmark & \xmark \\ & to avoid conflict & \xmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X scarcely\\gets sunburned\\ \\\textbf{\textit{xReact}}}} & burned & \xmark & \xmark\\ & hurt & \xmark & \xmark\\ & sick & \xmark & \xmark\\ & sad & \xmark & \xmark\\ & satisfied & \cmark & \cmark\\ \midrule \multirow{5}{*}{\makecell[tl]{X under no\\ circumstances\\forgets Y's wallet\\\textbf{\textit{oReact}}}} & upset & \xmark & \xmark \\ & sad & \xmark & \xmark \\ & angry & \xmark & \xmark \\ & thankful & \cmark & \cmark \\ & grateful & \cmark & 
\cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X has trouble \\with advertising \\X's business\\\textbf{\textit{xEffect}}}} & loses money & \cmark & \cmark \\ & loses clients & \cmark & \cmark \\ & gets fired & \cmark & \cmark \\ & gets sued & \xmark & \xmark \\ & cries & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X puts Y \\out of mind \\ \\\textbf{\textit{oEffect}}}} & has a better day & \xmark & \xmark \\ & becomes sad & \cmark & \cmark \\ & cries & \cmark & \cmark \\ & is grateful towards X & \xmark & \xmark \\ & feels better & \xmark & \xmark \\ \bottomrule \end{tabular} \vspace*{-1mm} \caption{Inferences of randomly selected \textsc{Anion}{} events generated by COMET-\textsc{Atomic}{}. The top 5 options for each event are classified as \textit{valid} or \textit{invalid} by the \textbf{LSC} discriminator. \textbf{V} indicates whether an option is classified as \textit{valid}; \textbf{P} indicates whether an option is plausible as judged by humans.} \vspace*{-2mm} \label{tab:discriminator-generations-all} \end{table}
The discriminator is notably good at identifying invalid inferences wrongly carried over from the corresponding affirmative events (\textit{e.g.}, ``athletic'' and ``careless'' for the event ``X does not skate around'' under the relation \textit{xAttr}). However, this analysis leaves open the possibility that we are generating too many inferences for each event, but that the decoder could rank correct inferences higher among the full set of generated candidates. To evaluate this possibility, we count the number of elements in the \textit{valid} sets for each example and only keep the same number of the top-scoring elements from the \textit{all} set (scored using generation perplexity). In Table~\ref{tab:o-p@len(valid)}, we see that the average precision score for the pruned \textit{all} sets (P@\{\# valid\}) still underperforms the precision of their corresponding \textit{valid} sets. \begin{table}[t] \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering \begin{tabular}{l|r|rrrr} \multicolumn{2}{c|}{\diagbox[width=\widthof{xxxxxxxxxx}]{\textbf{Eval}}{\textbf{Disc}}} & \textbf{L} & \textbf{S} & \textbf{C} & \textbf{LSC} \\ \midrule \multirowcell{3}{\textsc{Atomic}} & all & \textbf{55.69} & 55.93 & 56.94 & 58.30 \\ & valid & 55.65 & \textbf{56.18} & \textbf{57.26} & \textbf{59.07} \\ & \%iprv & -0.07 & 0.44 & 0.57 & \underline{1.32} \\ \midrule \multirowcell{3}{\textsc{Anion}-L} & all & 39.46 & 37.85 & 36.43 & 39.45 \\ & valid & \textbf{**46.39} & \textbf{**41.93} & \textbf{37.54} & \textbf{**45.59} \\ & \%iprv & \underline{17.55} & 10.78 & 3.03 & 15.57 \\ \midrule \multirowcell{3}{\textsc{Anion}-S} & all & 37.13 & 39.29 & 37.72 & 38.55 \\ & valid & \textbf{37.48} & \textbf{**44.58} & \textbf{39.03} & \textbf{**44.93} \\ & \%iprv & 0.96 & 13.47 & 3.45 & \underline{16.56} \\ \midrule \multirowcell{3}{\textsc{Anion}-C} & all & \textbf{46.92} & 47.32 & 48.26 & 48.81 \\ & valid & 46.83 & \textbf{47.68} & \textbf{48.79} & \textbf{*51.45} \\ & \%iprv &
-0.20 & 0.75 & 1.09 & \underline{5.40} \\ \bottomrule \end{tabular} \vspace*{-2mm} \caption{P@\{\# valid\}{} scores of the \textit{all} and \textit{valid} sets determined by the \textbf{L}, \textbf{S}, \textbf{C} and \textbf{LSC} discriminators. Generations are from COMET-\textsc{Atomic}. Double asterisks (**) indicate significance at $p<0.01$ and single asterisks (*) at $p<0.05$. iprv\% is the improvement of the \textit{valid} over the \textit{all} set. \underline{Underlines} show the highest iprv\% across discriminators.} \label{tab:o-p@len(valid)} \end{table} \paragraph{Which negation categories are most important to provide a discriminator for?} To examine the generalization effects of each negation type, we also train discriminators on a single negation subset of \textsc{Anion}{} examples (\textit{i.e.}, \textbf{L}, \textbf{S}, \textbf{C}) and compare the P@\{\# valid\}{} score of the \textit{all} and \textit{valid} sets. Results in Table \ref{tab:o-p@len(valid)} indicate that each discriminator is best for identifying valid inferences for the types of events on which it was trained. The \textbf{L}, \textbf{S}, and \textbf{C} discriminators all achieve improvements when partitioning events similar to their training. However, the \textbf{LSC} discriminator trained on all negation forms shows the largest \textit{valid} set improvement across all discriminators on \textsc{Atomic}{}, \textsc{Anion}-S, and \textsc{Anion}-C. On \textsc{Anion}-L, the \textbf{LSC} discriminator still yields a significantly improved \textit{valid} set. \section{Experiments} \label{sec:exps} We evaluate two methods for improving COMET's ability to predict inferences for negated events: fine-tuning COMET on negation data and applying negation discriminators to filter out logically inconsistent generations.
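The P@\{\# valid\} comparison above, which keeps only as many perplexity-ranked candidates from the \textit{all} set as the discriminator retained in the \textit{valid} set, can be sketched per event-relation prompt as follows; the candidate scores here are illustrative.

```python
# Sketch of the P@{# valid} pruning comparison: keep the k lowest-perplexity
# candidates from the full beam (lower perplexity = higher model confidence),
# where k is the size of the discriminator's `valid` set, then measure the
# fraction judged plausible by human raters.

def precision_at_valid(candidates, k):
    """candidates: (perplexity, is_plausible) pairs for one prompt."""
    top_k = sorted(candidates, key=lambda c: c[0])[:k]
    return sum(1 for _, ok in top_k if ok) / k

# Toy example: the discriminator kept 3 candidates, so we score the
# 3 lowest-perplexity generations from a 5-element beam.
p = precision_at_valid(
    [(5.1, True), (6.0, False), (7.2, True), (9.8, True), (12.4, False)],
    k=3,
)
```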
\subsection{COMET with Negation Data} \label{sec:exp-comet-neg-data}
\begin{table} \small \renewcommand{\arraystretch}{1.1} \resizebox{\linewidth}{!}{ \centering
\begin{tabular}{l|lrrr}
\textbf{Eval} & \textbf{Trn Set} & \textbf{PPL $\downarrow$} & \textbf{BLEU $\uparrow$} & \textbf{P@10 $\uparrow$} \\ \midrule
\multirow{2}{*}{\textsc{Atomic}} & \textsc{Atomic} & 9.30 & \textbf{14.18} & \textbf{55.18} \\
& \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{9.28} & 14.05 & 53.61 \\ \midrule
\multirow{2}{*}{\makecell[tl]{\textsc{Anion}-L}} & \textsc{Atomic} & 10.87 & 10.86 & 35.84 \\
& \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{9.08} & \textbf{11.96} & \textbf{45.42} \\ \midrule
\multirow{2}{*}{\makecell[tl]{\textsc{Anion}-S}} & \textsc{Atomic} & 11.69 & 12.07 & 36.89 \\
& \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{9.80} & \textbf{13.22} & \textbf{46.88} \\ \midrule
\multirow{2}{*}{\makecell[tl]{\textsc{Anion}-P}} & \textsc{Atomic} & 12.02 & 14.32 & 46.70 \\
& \textsc{Atomic}{} + \textsc{Anion}{} & \textbf{11.20} & \textbf{14.64} & \textbf{50.65} \\ \bottomrule
\end{tabular} }
\caption{Evaluations of COMET models trained on different combinations of the \textsc{Atomic}{} and \textsc{Anion}{} knowledge graphs. Training on examples of negated events leads to large improvements in the quality of generated inferences with minimal drop-off in the quality of inferences for positive polarity events.}
\label{tab:comet-neg-result-main}
\end{table}
\paragraph{Data and baseline} We first train COMET on the combined data, i.e., the original \textsc{Atomic}{} data together with the logical, semi-logical, and pragmatic negation data (\textbf{O+L+S+P}), to expand the commonsense KB along both affirmative and negated dimensions. The combined dataset is shuffled so that the original and negation data are uniformly mixed during training and evaluation. As a baseline, we also train COMET solely on the original data (\textbf{O}).
As an ablation study, we also train COMET on the original data combined with each negation type alone: logical negation (\textbf{O+L}), semi-logical negation (\textbf{O+S}), and pragmatic negation (\textbf{O+P}). \paragraph{Metrics} Following \citet{bosselut2019comet}, we evaluate the quality of COMET generations with BLEU-2~\cite{bleu-2003} as an automatic evaluation metric. We also report the perplexity of the models on their reference generations. For the human evaluation, we employ judges from Amazon Mechanical Turk (MTurk) to identify whether the commonsense inferences generated by the various COMET models, given events and typed relations, are plausible. We compile a list of original events from the \textsc{Atomic}{} original test set whose logical, semi-logical, and pragmatic negation counterparts are included in the corresponding test sets, and randomly sample 100 such original events, together with their negated counterparts, for the human evaluation. Following \citet{bosselut2019comet} and \citet{sap2019atomic}, for each event and relation type, 10 options are generated with beam search. We present the full beam for each event-relation prompt to at least six distinct crowdworkers and ask them to select all plausible options. We conduct comprehensive pre- and post-evaluation screening of the users and the completed tasks to ensure objective, high-quality evaluations. Besides qualifying users before the tasks, we also remove evaluation tasks that were not carefully completed (e.g., tasks from users who select all or no options across the hundreds of tasks they perform). To ensure comparable evaluation sets with a balanced number of examples for each prompt across models, we randomly sample at most five evaluations from the screened evaluation set for each event-relation prompt.
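The screening and subsampling steps above can be sketched as follows (a minimal illustration; the data layout, function names, and the all-or-nothing worker filter are our own simplifying assumptions, not the exact implementation used):

```python
import random
from collections import defaultdict

MAX_PER_PROMPT = 5  # keep at most five evaluations per event-relation prompt


def screen_workers(annotations):
    """Drop workers who selected all or no options on every task they completed.

    `annotations` maps (worker, prompt) -> set of selected option indices
    out of a 10-option beam.
    """
    per_worker = defaultdict(list)
    for (worker, _prompt), selected in annotations.items():
        per_worker[worker].append(len(selected))
    degenerate = {w for w, counts in per_worker.items()
                  if all(c == 0 for c in counts) or all(c == 10 for c in counts)}
    return {key: sel for key, sel in annotations.items() if key[0] not in degenerate}


def subsample(annotations, rng):
    """Keep at most MAX_PER_PROMPT screened annotations per prompt."""
    per_prompt = defaultdict(list)
    for (worker, prompt), selected in annotations.items():
        per_prompt[prompt].append((worker, selected))
    kept = {}
    for prompt, items in per_prompt.items():
        if len(items) > MAX_PER_PROMPT:
            items = rng.sample(items, MAX_PER_PROMPT)
        for worker, selected in items:
            kept[(worker, prompt)] = selected
    return kept
```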
For each model (\textbf{O}, \textbf{O+L}, \textbf{O+S}, \textbf{O+P}, \textbf{O+L+S+P}) and each evaluation set (\textbf{O}, \textbf{L}, \textbf{S}, \textbf{P}), at most 45K (mean = 43,374, $\sigma$ = 1,848) annotations are used to compute the final human evaluation score (100 events $\times$ 9 relations $\times$ 10 options $\times$ at most 5 annotations). We calculate the average precision at 10 (P@10), i.e., the average number of correct options per event-relation prompt, for each model's generations across the different evaluation sets. The full assessment of the COMET model trained on the \textsc{Atomic}{} original and negation data is reported in Table \ref{tab:comet-neg-result-main}, and the ablation results are reported in Table \ref{tab:comet-neg-result-ablation}.
\begin{table} \small \centering
\begin{tabular}{l|lrrr}
\textbf{Eval} & \textbf{Trn Set} & \textbf{PPL $\downarrow$} & \textbf{BLEU $\uparrow$} & \textbf{P@10 $\uparrow$} \\ \midrule
\multirow{4}{*}{\textsc{Atomic}} & \textsc{Atomic} & 9.30 & 14.18 & 55.18 \\
& + \textsc{Anion}-L & \textbf{9.27} & \textbf{14.20} & \textbf{58.11} \\
& + \textsc{Anion}-S & 9.30 & 14.09 & 55.74 \\
& + \textsc{Anion}-P & 9.29 & 14.10 & 52.22 \\ \midrule
\multirow{4}{*}{\makecell[tl]{\textsc{Anion}-L}} & \textsc{Atomic} & 10.87 & 10.86 & 35.84 \\
& + \textsc{Anion}-L & \textbf{9.28} & \textbf{11.94} & \textbf{44.94} \\
& + \textsc{Anion}-S & 9.93 & 11.29 & 44.01 \\
& + \textsc{Anion}-P & 10.34 & 11.04 & 42.33 \\ \midrule
\multirow{4}{*}{\makecell[tl]{\textsc{Anion}-S}} & \textsc{Atomic} & 11.69 & 12.07 & 36.89 \\
& + \textsc{Anion}-L & 10.69 & 12.69 & 42.38 \\
& + \textsc{Anion}-S & \textbf{10.23} & \textbf{12.79} & \textbf{45.50} \\
& + \textsc{Anion}-P & 10.95 & 12.35 & 41.76 \\ \midrule
\multirow{4}{*}{\makecell[tl]{\textsc{Anion}-P}} & \textsc{Atomic} & 12.02 & 14.32 & 46.70 \\
& + \textsc{Anion}-L & 11.72 & 14.43 & 47.78 \\
& + \textsc{Anion}-S & 11.67 & 14.34 & 46.09 \\
& + \textsc{Anion}-P & \textbf{11.50} & \textbf{14.58} & \textbf{48.79} \\ \bottomrule
\end{tabular}
\caption{Ablation results of knowledge models trained and evaluated on different portions of the \textsc{Anion}{} knowledge graph of negated events. The best result on each subset of \textsc{Anion}{} comes from training on similar examples. The model trained on negated events from \textsc{Anion}-L performs the best at generating inferences for the original \textsc{Atomic}{} events.}
\label{tab:comet-neg-result-ablation}
\end{table}
\paragraph{Results} While the original COMET model can produce correct generations with a P@10 of 55.18 on the original \textsc{Atomic}{} test set, its performance on logical and semi-logical negation events drops substantially, by approximately 35\%, to 35.84 and 36.89 P@10, respectively. Although pragmatic negation events share similar sentence structures with original events and contain no explicit negation cues, they are human-crafted and thus tend to be more creative and diverse than the corpus-extracted events in the original \textsc{Atomic}{} dataset. Consequently, the performance on pragmatic negation events also drops, by 15.4\%, relative to the original task. When COMET is trained on both the original and negation data (\textbf{O+L+S+P}), as shown in Table \ref{tab:comet-neg-result-main}, its performance increases by 26.7\% on logical negation events, 27.1\% on semi-logical negation events, and 8.5\% on pragmatic negation events, while maintaining approximately the same level of performance on the original events under human evaluation. The BLEU-2 scores show trends similar to the human evaluation scores, indicating that the newly proposed \textsc{Anion}{} negation data effectively improves COMET's ability to make commonsense inferences over negated events, as reflected by both human and automatic evaluation metrics.
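For concreteness, the P@10 metric and the relative changes quoted in these results can be computed as follows (a minimal sketch over hypothetical human judgments; P@10 is reported as a percentage of the 10-option beam):

```python
def precision_at_10(judgments):
    """P@10: average share of plausible options per event-relation prompt,
    in percent.

    `judgments` maps each prompt to a list of booleans, one per beam option
    (True = the option was judged plausible by annotators).
    """
    shares = [100.0 * sum(opts) / len(opts) for opts in judgments.values()]
    return sum(shares) / len(shares)


def relative_change(new, old):
    """Relative change in percent, e.g. the ~35% drop discussed above."""
    return 100.0 * (new - old) / old
```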
When we separate the effects of different negation types, Table \ref{tab:comet-neg-result-ablation} shows that while each negation type improves COMET's performance most effectively on its own type of data, the learning outcomes also transfer between negation types. Notably, COMET trained on \textbf{O+L} approaches the performance boost of COMET trained on \textbf{O+L+S+P} across all negation types. Moreover, it also slightly improves on the original events, which is not observed for COMET trained on \textbf{O+L+S+P}. Such improvements across the original events and the different negation types indicate that simply raising the model's awareness of the negation cue ``not'' induces more complex reasoning involving negation, which boosts the model's commonsense reasoning ability overall. \subsection{Negation Discriminator} \paragraph{Baselines} As described in Section \ref{sec:model-discriminator}, we fine-tune RoBERTa\textsubscript{BASE} with automatically constructed positive and negative data derived from logical negation, semi-logical negation, pragmatic negation, and the combination of all three, to obtain four negation discriminators (i.e., \textbf{L}, \textbf{S}, \textbf{P} and \textbf{LSP}). When applied to a set of candidates generated by COMET, a negation discriminator classifies each option as either logically \textit{valid} or \textit{invalid}, partitioning the overall set of candidates into \textit{valid} and \textit{invalid} sets. We employ COMET generations without applying discriminators (i.e., the \textit{all} set) as baselines. \paragraph{Metrics} We evaluate and compare the quality of the \textit{all}, \textit{valid}, and \textit{invalid} sets automatically with BLEU-2, and with accuracy, precision, recall, and F1 scores based on human evaluation.
To assess the quality of a discriminator, we use the same human evaluation scores as in Section \ref{sec:exp-comet-neg-data}, but calculate accuracies (i.e., the percentage of correct options among all candidates) separately for each of the \textit{all}, \textit{valid}, and \textit{invalid} sets. The \textit{all} set contains the full beam of 10 elements, while the \textit{valid} and \textit{invalid} sets have varying numbers of elements, summing to 10, depending on how the discriminators classify the candidates. We also report the precision, recall, and F1 scores for the \textit{all}, \textit{valid}, and \textit{invalid} sets to assess the negation discriminators' ability to identify logic flaws in commonsense reasoning. In addition, when we compare the \textit{valid} set to the full beam, since the two sets contain different numbers of options, it is not immediately obvious whether the \textit{valid} set is actually a better subset of the full beam than one obtained by directly extracting the same number of elements from the top of the beam. Therefore, we compute precision at the length of the \textit{valid} set, P@len(valid), by taking the same number of options as in the \textit{valid} set from the top of the ranked \textit{all} set and calculating the accuracy of this subset. To compare P@len(valid) for the \textit{all} and \textit{valid} sets, we use a permutation test\footnote{http://rasbt.github.io/mlxtend/} with 100k permutations to assess statistical significance.
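The P@len(valid) comparison can be sketched as follows (a minimal illustration; the `ppl`, `valid`, and `correct` fields are hypothetical names for the generation perplexity, the discriminator decision, and the human judgment):

```python
def precision(options):
    """Share of human-judged-correct options, in percent."""
    return 100.0 * sum(o["correct"] for o in options) / len(options) if options else 0.0


def p_at_len_valid(beam):
    """Compare the discriminator's valid set against an equally sized slice
    taken from the top of the beam ranked by generation perplexity
    (lower perplexity = better rank).

    Returns (precision of top-of-beam slice, precision of valid set).
    """
    valid = [o for o in beam if o["valid"]]
    ranked = sorted(beam, key=lambda o: o["ppl"])
    top_k = ranked[:len(valid)]
    return precision(top_k), precision(valid)
```

If the discriminator adds value, the second number exceeds the first at the same set size.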
\paragraph{Results}
\begin{table} \small \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering
\begin{tabular}{l|lrr|rrr}
\textbf{Eval Set} & & \textbf{BL2$\uparrow$} & \textbf{Acc$\uparrow$} & \textbf{Prec$\uparrow$} & \textbf{Rec$\uparrow$} & \textbf{F1$\uparrow$} \\ \midrule
\multirowcell{3}{\textsc{Atomic}} & all & 14.18 & 55.18 & - & 1.00 & - \\
& valid & \textbf{14.24} & \textbf{59.66} & 59.66 & 67.73 & 63.44 \\
& invalid & 13.93 & 47.67 & 52.33 & 43.61 & 47.57 \\ \midrule
\multirowcell{3}{\textsc{Anion}-L} & all & 10.86 & 35.84 & - & 1.00 & - \\
& valid & \textbf{11.33} & \textbf{47.98} & 47.98 & 58.90 & 52.88 \\
& invalid & 10.13 & 26.30 & 73.70 & 64.33 & 68.70 \\ \midrule
\multirowcell{3}{\textsc{Anion}-S} & all & 12.07 & 36.89 & - & 1.00 & - \\
& valid & \textbf{12.63} & \textbf{47.78} & 47.78 & 54.77 & 51.04 \\
& invalid & 11.32 & 28.91 & 71.09 & 65.02 & 67.92 \\ \midrule
\multirowcell{3}{\textsc{Anion}-P} & all & 14.32 & 46.70 & - & 1.00 & - \\
& valid & \textbf{14.78} & \textbf{52.07} & 52.07 & 63.25 & 57.12 \\
& invalid & 13.56 & 39.66 & 60.34 & 48.98 & 54.07 \\ \bottomrule
\end{tabular}
\caption{BLEU-2 (BL2) and human evaluation scores (accuracy, precision, recall, and F1) of the \textit{all}, \textit{valid}, and \textit{invalid} sets obtained by applying the \textbf{LSP} discriminator to generations of COMET trained on \textsc{Atomic}{}.}
\label{tab:comet-disc-eval}
\end{table}
\begin{table}[t] \small \centering
\begin{tabular}{l|l|lc} \toprule
\textbf{Event + Rel} & \textbf{Generation} & \textbf{V} & \textbf{C}\\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not skate \\around\\\textbf{\textit{xAttr}}}} & athletic & \xmark & \xmark \\ & careless & \xmark & \xmark \\ & lazy & \cmark & \cmark \\ & uncoordinated & \cmark & \cmark \\ & unskilled & \cmark & \cmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not sit \\behind Y\\\textbf{\textit{xIntent}}}} & to be alone & \cmark & \cmark \\ & to be left alone & \cmark & \cmark \\ & to avoid Y & \cmark & \cmark \\ & to sit & \xmark & \xmark \\ & to wait & \cmark & \xmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not look \\angry\\\textbf{\textit{xNeed}}}} & to calm down &
\cmark & \cmark \\ & to watch a movie & \xmark & \xmark \\ & to have been provoked & \xmark & \xmark \\ & to not be angry & \xmark & \cmark \\ & to be calm & \cmark & \cmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not rent \\an apartment\\\textbf{\textit{xWant}}}} & to save money & \cmark & \cmark \\ & to get a job & \cmark & \xmark \\ & to pay rent & \xmark & \xmark \\ & to move in & \xmark & \xmark \\ & to get a new apartment & \xmark & \xmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X is \\not offered \\the job\\\textbf{\textit{oWant}}}} & to hire X & \xmark & \xmark \\ & to fire X & \cmark & \cmark \\ & to hire someone else & \cmark & \cmark \\ & to accept the job & \xmark & \xmark \\ & to hire them & \xmark & \xmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X does\\not buy \\a snack\\\textbf{\textit{xReact}}}} & satisfied & \xmark & \xmark\\ & hungry & \cmark & \cmark\\ & satiated & \xmark & \xmark\\ & full & \xmark & \xmark\\ & guilty & \cmark & \xmark\\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not bring Y\\into conflict\\\textbf{\textit{oReact}}}} & relieved & \cmark & \cmark \\ & sad & \xmark & \xmark \\ & satisfied & \cmark & \cmark \\ & grateful & \cmark & \cmark \\ & angry & \xmark & \xmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not learn \\new things\\\textbf{\textit{xEffect}}}} & gains knowledge & \xmark & \xmark \\ & becomes lazy & \cmark & \cmark \\ & gets bored & \cmark & \cmark \\ & becomes ignorant & \cmark & \cmark \\ & cries & \cmark & \cmark \\ \midrule
\multirow{5}{*}{\makecell[tl]{X does \\not put Y \\in mind\\\textbf{\textit{oEffect}}}} & becomes confused & \xmark & \cmark \\ & does not think about X & \cmark & \cmark \\ & Y thinks about X & \xmark & \xmark \\ & Y is not remembered & \cmark & \cmark \\ & cries & \xmark & \cmark \\ \bottomrule
\end{tabular}
\caption{Randomly selected generations of the original COMET model for logical negation events.
The top 5 options are classified as either \textit{valid} or \textit{invalid} by the \textbf{LSP} discriminator. \textbf{V}alid indicates whether an option is classified as \textit{valid} by the \textbf{LSP} discriminator. \textbf{C}orrect indicates whether an option is judged plausible by humans.} \label{tab:discriminator-generations-logical} \end{table} The BLEU-2 and human evaluation accuracy scores in Table \ref{tab:comet-disc-eval} demonstrate that the overall negation discriminator trained on logical, semi-logical, and pragmatic negation data (\textbf{LSP}) can extract a subset (i.e., the \textit{valid} set) of candidates from the full beam with higher average accuracy for all types of negation events as well as original events, while leaving lower-quality options out (i.e., the \textit{invalid} set). For the logical and semi-logical negation evaluation sets, the recall scores indicate that the discriminator successfully rules out around 65\% of the incorrect candidates from the full beam. Furthermore, about 55\% of the correct candidates are selected by the \textbf{LSP} discriminator. For the original and pragmatic negation event types, for which the logical implication is subtle, about 65\% of correct options and 45\% of incorrect options are successfully identified by the \textbf{LSP} discriminator. Table \ref{tab:discriminator-generations-logical} shows examples of \textit{valid} and \textit{invalid} candidates for logical negation events as specified by the \textbf{LSP} discriminator.
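The set-level precision/recall/F1 scores discussed here can be derived as follows (a minimal sketch; for the \textit{valid} set the positives are human-judged-correct options, while for the \textit{invalid} set the positives are incorrect ones):

```python
def partition_scores(options):
    """Precision/recall/F1 of the valid and invalid sets.

    `options` is a list of dicts with boolean keys:
      `valid`   -- the discriminator's decision
      `correct` -- the human judgment
    For the valid set the positive class is `correct == True`;
    for the invalid set it is `correct == False`.
    """
    def prf(selected, positive):
        sel = [o for o in options if o["valid"] == selected]
        pos = [o for o in options if o["correct"] == positive]
        hits = [o for o in sel if o["correct"] == positive]
        prec = len(hits) / len(sel) if sel else 0.0
        rec = len(hits) / len(pos) if pos else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        return prec, rec, f1

    return {"valid": prf(True, True), "invalid": prf(False, False)}
```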
\begin{table} \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering
\begin{tabular}{l|r|rrrr}
\multicolumn{2}{c|}{\diagbox[width=\widthof{xxxxxxxxxx}]{\textbf{Eval}}{\textbf{Disc}}} & \textbf{L} & \textbf{S} & \textbf{P} & \textbf{LSP} \\ \midrule
\multirowcell{3}{\textsc{Atomic}} & all & \textbf{55.69} & 55.93 & 56.94 & 58.30 \\
& valid & 55.65 & \textbf{56.18} & \textbf{57.26} & \textbf{59.07} \\
& \%iprv & -0.07 & 0.44 & 0.57 & \underline{1.32} \\ \midrule
\multirowcell{3}{\textsc{Anion}-L} & all & 39.46 & 37.85 & 36.43 & 39.45 \\
& valid & \textbf{**46.39} & \textbf{**41.93} & \textbf{37.54} & \textbf{**45.59} \\
& \%iprv & \underline{17.55} & 10.78 & 3.03 & 15.57 \\ \midrule
\multirowcell{3}{\textsc{Anion}-S} & all & 37.13 & 39.29 & 37.72 & 38.55 \\
& valid & \textbf{37.48} & \textbf{**44.58} & \textbf{39.03} & \textbf{**44.93} \\
& \%iprv & 0.96 & 13.47 & 3.45 & \underline{16.56} \\ \midrule
\multirowcell{3}{\textsc{Anion}-P} & all & \textbf{46.92} & 47.32 & 48.26 & 48.81 \\
& valid & 46.83 & \textbf{47.68} & \textbf{48.79} & \textbf{**51.44} \\
& \%iprv & -0.20 & 0.75 & 1.09 & \underline{5.40} \\ \bottomrule
\end{tabular}
\caption{For generations of COMET trained on \textsc{Atomic}, the P@len(valid) scores of the \textit{all} and \textit{valid} sets determined by the \textbf{L}, \textbf{S}, \textbf{P}, and \textbf{LSP} discriminators with respect to the original and negation evaluation sets. Double asterisks (**) indicate significance at p < 0.01. \%iprv is the percentage improvement of the \textit{valid} set over the \textit{all} set.}
\label{tab:o-p@len(valid)-o}
\end{table}
To examine the specific contribution of each negation type towards the different evaluation sets, we also look at discriminators trained on a single type of negation data (i.e., \textbf{L}, \textbf{S}, \textbf{P}) and compare the P@len(valid) scores of the \textit{all} and \textit{valid} sets. Table \ref{tab:o-p@len(valid)-o} shows that the \textbf{L} discriminator achieves a significant improvement in P@len(valid) on the logical negation evaluation set at p < 0.01. The \textbf{S} discriminator achieves significant improvements not only on semi-logical negation events but also on logical negation events at p < 0.01, indicating a transfer of learning outcomes between negation types. Although the \textbf{P} discriminator does not achieve a significant improvement in P@len(valid) on any evaluation set, the \textbf{LSP} discriminator, which combines pragmatic negation with logical and semi-logical negation during training, achieves significant improvements on all three types of negation events at p < 0.01. The results in Table \ref{tab:o-p@len(valid)-o} indicate that we can apply negation discriminators, particularly the more well-rounded \textbf{LSP} discriminator, as filters to extract from the raw generations of the original COMET model a subset of options with better accuracy for negated event queries.
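The significance test used above can be sketched as a paired sign-flip permutation test (a simplified reimplementation for illustration; the experiments use the mlxtend package with 100k permutations):

```python
import random


def paired_permutation_test(scores_a, scores_b, n_perm=100_000, seed=0):
    """Two-sided paired permutation (sign-flip) test.

    `scores_a` and `scores_b` are per-prompt precision scores for the two
    sets being compared (e.g., the valid set vs. the truncated all set).
    Returns the estimated p-value.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs) / len(diffs))
    hits = 0
    for _ in range(n_perm):
        # randomly swap which set each paired score is attributed to
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped) / len(flipped)) >= observed:
            hits += 1
    return hits / n_perm
```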
\begin{table} \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering
\begin{tabular}{l|r|rrrr}
\multicolumn{2}{c|}{\diagbox[width=\widthof{xxxxxxxxxx}]{\textbf{Eval}}{\textbf{Disc}}} & \textbf{L} & \textbf{S} & \textbf{P} & \textbf{LSP} \\ \midrule
\multirowcell{3}{\textsc{Atomic}} & all & 54.16 & 54.49 & 55.03 & 55.68 \\
& valid & \textbf{54.20} & \textbf{54.64} & \textbf{55.71} & \textbf{**57.58} \\
& \%iprv & 0.08 & 0.28 & 1.23 & \underline{3.41} \\ \midrule
\multirowcell{3}{\textsc{Anion}-L} & all & 46.54 & 46.26 & 46.15 & 46.39 \\
& valid & \textbf{**50.71} & \textbf{**48.36} & \textbf{46.16} & \textbf{**49.85} \\
& \%iprv & \underline{8.98} & 4.54 & 0.03 & 7.45 \\ \midrule
\multirowcell{3}{\textsc{Anion}-S} & all & 46.90 & 47.73 & 47.47 & 47.53 \\
& valid & \textbf{47.14} & \textbf{**50.42} & \textbf{48.20} & \textbf{**50.62} \\
& \%iprv & 0.51 & 5.65 & 1.55 & \underline{6.50} \\ \midrule
\multirowcell{3}{\textsc{Anion}-P} & all & 50.80 & 51.29 & 51.28 & 51.83 \\
& valid & \textbf{50.94} & \textbf{51.52} & \textbf{*52.65} & \textbf{**53.91} \\
& \%iprv & 0.28 & 0.45 & 2.67 & \underline{4.02} \\ \bottomrule
\end{tabular}
\caption{For generations of COMET trained on \textbf{O+L+S+P}, the P@len(valid) scores of the \textit{all} and \textit{valid} sets determined by the \textbf{L}, \textbf{S}, \textbf{P}, and \textbf{LSP} discriminators with respect to the original and negation evaluation sets. Single (*) and double (**) asterisks indicate significance at p < 0.05 and p < 0.01, respectively. \%iprv is the percentage improvement of the \textit{valid} set over the \textit{all} set.}
\label{tab:o+l+s+p-p@len(valid)}
\end{table}
Furthermore, we apply the negation discriminators to the COMET model trained on negation data and achieve further improvements in the quality of the selected output set, as shown in Table \ref{tab:o+l+s+p-p@len(valid)}.
By applying the \textbf{LSP} discriminator to generations of COMET model trained on \textbf{O+L+S+P}, it achieves significant improvements over all evaluation sets, including the original events at p < 0.01. The \textbf{P} discriminator also achieves significant improvement with respect to pragmatic negation events at P < 0.05, which is not observed when applying it to the generations of COMET trained on \textbf{O}. The full evaluation of the P@len(valid) and P@3 scores of applying different discriminators to generations of COMET trained on different data over all evaluation sets are shown in Table \ref{tab:full-p@len(valid)} and \ref{tab:full-p@3} in Appendix \ref{sec:appendix}. \section{Experiment} \label{sec:exps} We evaluate two methods of improving COMET's ability of predicting around events with negation cues: fine-tuning COMET on negation data or/and applying logic discriminators to separate logically valid and invalid generations. \subsection{COMET with Negation Data} \label{sec:exp-comet-neg-data} \paragraph{Baselines} As described in Section \ref{sec:model-comet}, we combine ATOMIC original with logical negation (\textbf{O+Ln}), semi-logical negation (\textbf{O+Sn}) and both logical and semi-logical negation data (\textbf{O+Ln+Sn}) for training COMET. As a baseline, we also train COMET only on the original data (\textbf{O}). \paragraph{Metrics} Following \citet{bosselut2019comet}, we evaluate the quality of COMET generations with BLEU-2\cite{bleu-2003} as an automatic evaluation metric. We also include the perplexity of models on their reference generations. For the human evaluation, we employ human judges from Amazon Mechanical Turk (AMT) to identify whether generated ATOMIC commonsense conditions given events and typed relations of various COMET models are adequately plausible. We compiled a list of original events from the ATOMIC original test set with both their logical and semi-logical negation counterparts included in their test set. 
We randomly sampled 100 such original events with corresponding logical and semi-logical events to conduct human evaluation. Following \citet{bosselut2019comet} and \citet{sap2019atomic}, for each event and relation type, 10 options of the inferred condition are generated with beam search. We present the full beam to five distinct crowdworkers and ask them to select all plausible options. For each model (\textbf{O}, \textbf{O+Ln}, \textbf{O+Sn}, \textbf{O+Ln+Sn}), 45K annotations are collected for each of the original, logical negation and semi-logical negation test sets (100 events $\times$ 9 relations $\times$ 10 options $\times$ 5 annotations), resulting in 135K annotations per model. We calculate the average precision at 10 (P@10), the average number of correct generations per dimension, for generations of each model across different evaluation sets. The full assessment of different versions of COMET models is reported in Table \ref{tab:comet-neg-auto-eval}. \begin{table} \small \renewcommand{\arraystretch}{1.1} \centering \begin{tabular}{l|lrrrr} \textbf{Eval} & \textbf{Trn Set} & \textbf{PPL $\downarrow$} & \textbf{BLEU $\uparrow$} & \textbf{P@10 $\uparrow$} \\ \midrule \multirow{4}{*}{original} & O & 9.30 & 14.18 & 53.22 \\ & O+L & \textbf{9.27} & \textbf{14.20} & \textbf{55.60} \\ & O+S & 9.30 & 14.09 & 48.35 \\ & O+P & 9.29 & & 52.16 \\ & O+L+S & 9.29 & 14.12 & 49.40 \\ & O+L+S+P & 9.28 & & 51.94 \\ \midrule \multirow{4}{*}{\makecell[tl]{logical}} & O & 10.87 & 10.86 & 33.21 \\ & O+L & 9.28 & 11.94 & 46.69 \\ & O+S & 9.93 & 11.29 & 40.18 \\ & O+P & 10.34 & & 43.18 \\ & O+L+S & 9.17 & 12.00 & 41.51 \\ & O+L+S+P & \textbf{9.08} & & \textbf{51.55} \\ \midrule \multirow{4}{*}{\makecell[tl]{semi}}& O & 11.69 & 12.07 & 35.15 \\ & O+L & 10.69 & 12.69 & 42.13 \\ & O+S & 10.23 & 12.79 & 41.75 \\ & O+P & 10.95 & & 45.84 \\ & O+L+S & 9.93 & 13.23 & 35.68 \\ & O+L+S+P & \textbf{9.80} & & \textbf{53.02} \\ \midrule \multirow{4}{*}{\makecell[tl]{prag}} & O & 12.02 & & 
52.12 \\ & O+L & 11.72 & & \textbf{53.16} \\ & O+S & 11.67 & & 49.84 \\ & O+P & 11.50 & & 37.41 \\ & O+L+S & 11.48 & & 48.15 \\ & O+L+S+P & \textbf{11.20} & & 41.53 \\ \bottomrule \hline \end{tabular} \caption{Evaluations of COMET models trained on different combinations of the original and negation ATOMIC data.} \label{tab:comet-neg-auto-eval} \end{table} \paragraph{Results} While the original COMET model is capable of producing correct generations with P@10 of 53.22, it generates less plausible predictions for the logical and semi-logical negation events, with P@10 of 33.21 and 35.15 respectively. However, by adding either or both types of negation events into the training data, COMET is able to make better predictions about both logical and semi-logical negation events. Note that although COMET with \textbf{O+Ln} does not have the best automatic scores with respect to negation evaluation sets, it produces generation with the highest quality by increasing P@10 over baselines by 40.6\% and 19.9\% for logical and semi-logical negation respectively. Whereas \textbf{O+Ln+Sn}, which has the lowest perplexity and highest BLEU-2 scores over negation evaluation sets, achieves poorer-quality generations reflected by lower P@10. Such discrepancy is a reaffirmation of the observation that automatic metrics have limited ability of representing the overall quality of generative language models. It is also noticeable that training on logical negation also increases P@10 for the semi-logical negation, and vice versa, indicating that COMET successfully distill knowledge from logical and semi-logical negation data and can transfer the learning outcomes between negation types. However, when combining logical with semi-logical negation in \textbf{O+Ln+Sn}, the learning outcomes do not transfer well over neither negation type. 
\subsection{Discriminator} \begin{table} \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering \begin{tabular}{l|l|ccc|ccc} \multicolumn{2}{l|}{} & \multicolumn{3}{c|}{\textbf{BLEU-2}} & \multicolumn{3}{c}{\textbf{HE}} \\ \midrule \multicolumn{2}{c|}{\diagbox[width=\widthof{xxxxxxxxxx}]{\textbf{Eval}}{\textbf{Dis}}} & \textbf{L} & \textbf{S} & \textbf{LS} & \textbf{L} & \textbf{S} & \textbf{LS} \\ \midrule \multirowcell{3}{O} & all & 14.18 & \textbf{14.18} & 14.18 & 53.22 & 53.22 & 53.22\\ & vld & \textbf{14.21} & \textbf{14.18} & \textbf{14.25} & \textbf{53.70} & \textbf{54.18} & \textbf{54.26} \\ & inv & 12.75 & 13.81 & 12.56 & 42.62 & 43.13 & 41.12 \\ \midrule \multirowcell{3}{Ln} & all & 10.86 & 10.86 & 10.86 & 33.21 & 33.21 & 33.21 \\ & vld & \textbf{11.44} & \textbf{11.26} & \textbf{11.26} & \textbf{44.61} & \textbf{40.71} & \textbf{44.54} \\ & inv & 10.03 & 10.17 & 10.11 & 23.97 & 23.69 & 24.04 \\ \midrule \multirowcell{3}{Sn} & all & 12.07 & 12.07 & 12.07 & 35.15 & 35.15 & 35.15 \\ & vld & \textbf{12.12} & \textbf{12.48} & \textbf{12.33} & \textbf{35.62} & \textbf{44.91} & \textbf{45.28} \\ & inv & 11.26 & 11.45 & 11.41 & 25.99 & 27.42 & 27.12 \\ \midrule \multirowcell{3}{Ln\\+Sn} & all & 11.30 & 11.30 & 11.30 & - & - & - \\ & vld & \textbf{11.87} & \textbf{11.69} & \textbf{11.67} & - & - & - \\ & inv & 10.12 & 10.64 & 10.56 & - & - & - \\ \bottomrule \end{tabular} \caption{Average BLEU-2 scores of beam search generations of the COMET model trained on ATOMIC original over various evaluation sets, classified into \textit{valid} and \textit{invalid} sets by discriminators.} \label{tab:comet-disc-bleu2} \end{table} \begin{table} \small \renewcommand{\arraystretch}{1.1} \centering \begin{tabular}{l|l|lll} \toprule \multicolumn{2}{l|}{\diagbox[width=\widthof{xxxxxxxxxxxxxxxx}]{\textbf{Eval}}{\textbf{Disc}}} & \textbf{LD} & \textbf{SD} & \textbf{LSD} \\ \midrule \multirowcell{3}{original} & all & \textbf{61.95} & \textbf{64.44} & 
\textbf{63.08} \\ & valid & 61.71 & 63.91 & 62.90 \\ & invalid & 50.50 & 53.60 & 54.49 \\ \midrule \multirowcell{3}{logical} & all & 42.64 & 41.84 & 41.72 \\ & valid & \textbf{51.97*} & \textbf{46.70*} & \textbf{49.78*} \\ & invalid & 30.46 & 30.90 & 28.74 \\ \midrule \multirowcell{3}{semi} & all & 41.21 & 42.13 & 39.32 \\ & valid & \textbf{48.30*} & \textbf{48.33*} & \textbf{46.07*} \\ & invalid & 29.30 & 33.57 & 27.70 \\ \midrule \end{tabular} \caption{Precision at 3 (\textit{P@3}) for the \textit{all}, \textit{valid} and \textit{invalid} sets obtained by applying discriminators to generations of the COMET model trained on ATOMIC original across three evaluation sets. Every \textit{valid} and \textit{all} sets outperform the corresponding \textit{invalid} set significantly at $p < 0.01$. Asterisks mark \textit{valid} sets that outperform their corresponding \textit{all} set at $p < 0.01$.} \label{tab:rerank-human-eval} \end{table} \begin{table} \small \centering \begin{tabular}{l|l|lc} \toprule \textbf{Event + Rel} & \textbf{Generation} & \textbf{V} & \textbf{C}\\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not skate \\around\\\textbf{\textbf{xAttr}}}} & athletic & \xmark & \cmark \\ & careless & \xmark & \cmark \\ & lazy & \cmark & \cmark \\ & uncoordinated & \cmark & \cmark \\ & unskilled & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not sit \\behind Y\\\textbf{\textbf{xIntent}}}} & to be alone & \cmark & \cmark \\ & to be left alone & \cmark & \cmark \\ & to avoid Y & \cmark & \cmark \\ & to sit & \xmark & \cmark \\ & to wait & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not look \\angry\\\textbf{\textit{xNeed}}}} & to calm down & \cmark & \cmark \\ & to watch a movie & \xmark & \cmark \\ & to have been provoked & \xmark & \cmark \\ & to not be angry & \xmark & \xmark \\ & to be calm & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X refuses \\to hear a \\scary noise\\\textbf{\textit{xWant}}}} & to 
run away & \xmark & \xmark \\ & to go to sleep & \cmark & \cmark \\ & to be safe & \cmark & \cmark \\ & to keep quiet & \cmark & \cmark \\ & to avoid the noise & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X never \\brings Y into \\conflicts\\\textbf{\textit{oWant}}}} & to avoid X & \xmark & \xmark \\ & to be left alone & \xmark & \cmark \\ & to thank X & \cmark & \cmark \\ & to fight back & \xmark & \xmark \\ & to avoid conflict & \xmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X scarcely\\gets sunburned\\ \\\textbf{\textit{xReact}}}} & burned & \xmark & \xmark\\ & hurt & \xmark & \xmark\\ & sick & \xmark & \xmark\\ & sad & \xmark & \xmark\\ & satisfied & \cmark & \cmark\\ \midrule \end{tabular} \caption{Randomly selected generations from the \textbf{ATOMIC logical negation} set. The top 5 options are classified as either \textit{valid} or \textit{invalid} by the \textbf{logical+semi discriminator}. \textbf{V} indicates whether the generation is classified as \textit{valid} by the discriminator. 
\textbf{C} indicates whether options are correctly classified by human judges.} \label{tab:discriminator-generations-logical} \end{table} \begin{table*} \small \renewcommand{\arraystretch}{1.1} \centering \begin{tabular}{l|l|rrr|rrr|rrr} \multicolumn{2}{l|}{Discriminator} & \multicolumn{3}{c|}{\textbf{Logical}} & \multicolumn{3}{c|}{\textbf{Semi}} & \multicolumn{3}{c}{\textbf{Logical+Semi}} \\ \midrule \multicolumn{2}{l|}{Eval Set} & \multicolumn{1}{r}{\textbf{Prec}} & \multicolumn{1}{r}{\textbf{Rec}} & \multicolumn{1}{r|}{\textbf{F1}} & \multicolumn{1}{r}{\textbf{Prec}} & \multicolumn{1}{r}{\textbf{Rec}} & \multicolumn{1}{r|}{\textbf{F1}} & \multicolumn{1}{r}{\textbf{Prec}} & \multicolumn{1}{r}{\textbf{Rec}} & \multicolumn{1}{r}{\textbf{F1}} \\ \midrule \multirowcell{2}{O} & valid & 53.70 & 96.53 & 69.01 & 54.18 & 92.91 & 68.45 & 54.26 & 93.87 & 68.77 \\ & invalid & 57.38 & 5.32 & 9.73 & 56.87 & 10.63 & 17.91 & 58.88 & 9.98 & 17.07 \\ \midrule \multirowcell{2}{Ln} & valid & 44.61 & 60.11 & 51.22 & 40.71 & 68.56 & 51.09 & 44.54 & 59.98 & 51.12 \\ & invalid & 76.03 & 62.90 & 68.84 & 76.31 & 50.36 & 60.68 & 75.96 & 62.87 & 68.80 \\ \midrule \multirowcell{2}{Sn} & valid & 35.62 & 96.43 & 52.02 & 44.91 & 56.48 & 50.03 & 45.28 & 56.97 & 50.46 \\ & invalid & 74.01 & 5.50 & 10.24 & 72.58 & 62.44 & 67.13 & 72.88 & 62.69 & 67.40 \\ \midrule \end{tabular} \caption{Precision, recall and F1 scores for the \textit{valid} and \textit{invalid} sets obtained by applying various discriminators to the generations of the COMET model trained on ATOMIC original.} \label{tab:partition-f1} \end{table*} \paragraph{Baselines} As described in Section \ref{sec:model-discriminator}, we fine-tune RoBERTa\textsubscript{BASE} with logical negation, semi-logical negation and the combination of both to obtain three negation discriminators (i.e., \textbf{L}, \textbf{S}, \textbf{LS}).
A negation discriminator, applied to a set of candidates generated by COMET, classifies each option as either logically \textit{valid} or \textit{invalid}, partitioning the overall set of candidates into \textit{valid} and \textit{invalid} sets. We employ COMET generations without applying discriminators (i.e., the \textit{all} set) as baselines. \paragraph{Metrics} We evaluate and compare the quality of the \textit{all}, \textit{valid} and \textit{invalid} sets automatically with BLEU-2 and with accuracy, precision, recall and F1 scores based on the human evaluation. To assess the quality of a discriminator, we use the same human evaluation scores as in Section \ref{sec:exp-comet-neg-data} but calculate accuracies (i.e., the percentage of correct options among all candidates) separately for each of the \textit{all}, \textit{valid} and \textit{invalid} sets. The \textit{all} set contains the full beam with 10 elements, while the \textit{valid} and \textit{invalid} sets have varying numbers of elements depending on how discriminators classify them, summing to 10. In addition, for each evaluation set and each applied discriminator, per relation, we randomly select a subset of 25 events that have at least three options in both the \textit{valid} and \textit{invalid} sets and compare their P@3, calculated as the percentage of correct options among the top three options. We also report the precision, recall and F1 scores for the \textit{valid} and \textit{invalid} sets to assess the discriminators' ability to identify logical flaws in commonsense reasoning. Finally, while applying the discriminator to the full beam yields a \textit{valid} subset with higher accuracy, it leaves fewer correct options in total. Thus we also investigate the efficiency of using discriminators to expand the commonsense KB. We use the COMET model to generate beams with size 25 and then apply the discriminator to the full beam.
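To make the metrics concrete, the set-level accuracy and P@3 computations described above can be sketched as follows (a minimal illustration with toy, human-labeled candidates; the helper names and data are ours, not from the released code):

```python
# Sketch of the set-level metrics described above (illustrative only).
# Each candidate: (generation, correct_by_human, valid_by_discriminator).

def accuracy(candidates):
    """Percentage of human-judged correct options in a set."""
    return 100.0 * sum(c[1] for c in candidates) / len(candidates)

def precision_at_k(candidates, k=3):
    """Percentage of correct options among the top-k beam-ordered options."""
    top = candidates[:k]
    return 100.0 * sum(c[1] for c in top) / len(top)

beam = [  # beam-ordered COMET candidates for one (event, relation) query
    ("to be alone", True, True),
    ("to sit", False, False),
    ("to avoid Y", True, True),
    ("to wait", True, True),
]
all_set = beam
valid_set = [c for c in beam if c[2]]        # kept by the discriminator
invalid_set = [c for c in beam if not c[2]]  # ruled out

print(accuracy(all_set))          # 75.0
print(precision_at_k(valid_set))  # 100.0: filtering lifts P@3 over the raw beam
```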
We collect human annotations for the same 100 events but with generations of beam size 25. We compare the number of correct candidates and the accuracy of the \textit{valid} set of beam 25 to the \textit{all} set of beam 10 in Table \ref{tab:valid-option-yield}. \paragraph{Results} The BLEU-2 and human evaluation accuracy scores in Table \ref{tab:comet-disc-bleu2} demonstrate that all types of negation discriminators (\textbf{L}, \textbf{S} and \textbf{LS}) are able to extract a subset of candidates with higher average quality than the set directly generated by the COMET model, for both negation and original events. While discriminators trained only on logical or semi-logical negation are specialized in their corresponding type of data, they also achieve higher accuracy for the \textit{valid} set of other types of negation and the original data by small margins, demonstrating a transfer learning effect. In particular, the discriminator trained on both logical and semi-logical negations improves the accuracy of generations for logical and semi-logical negation events by 34.1\% and 28.8\% respectively in Table \ref{tab:comet-disc-bleu2}. In Table \ref{tab:partition-f1}, recall scores indicate that around 62\% of the incorrect candidates from the full beam are successfully ruled out, while about 60\% of the correct candidates are selected by the logical and semi-logical discriminator, for both the logical and semi-logical evaluation sets. Table \ref{tab:discriminator-generations-logical} shows examples of \textit{valid} and \textit{invalid} candidates for logical negation events as specified by the logical and semi-logical discriminator.
When we restrict examples to prompts with at least three elements in both the \textit{valid} and \textit{invalid} sets, the top three elements in the \textit{valid} set significantly outperform the top three options from the original beam at $p < 0.01$ for all negation discriminators and across both logical and semi-logical evaluation sets. However, for original events, such improvement does not hold. This indicates that if a query event is known to be in the negation form, then we can apply the discriminators as a reranking mechanism on the original beam, removing less plausible options to obtain better candidate sets. \begin{table} \small \renewcommand{\arraystretch}{1.1} \centering \begin{tabular}{l|lll} \multicolumn{1}{l|}{Eval} & \multicolumn{1}{c}{\textbf{O}} & \multicolumn{1}{c}{\textbf{Ln}} & \multicolumn{1}{c}{\textbf{Sn}} \\ \midrule baseline & 53.22 & 33.21 & 35.15 \\ \midrule all & 55.60 & 46.69 & 42.13 \\ valid & \textbf{56.45} & \textbf{54.82} & \textbf{51.02} \\ invalid & 45.78 & 33.48 & 31.75 \\ \end{tabular} \caption{Human evaluation scores for applying the \textbf{LS} discriminator to beam search generations of COMET trained on \textbf{O+Ln}. \textbf{Bold} numbers indicate the set with the highest score among \textit{all}, \textit{valid} and \textit{invalid} sets.} \label{tab:comet-ol-dis-ls-human-eval} \end{table} Furthermore, we apply negation discriminators to the COMET model trained on negation data and achieve further improvement in the quality of the selected output set. Table \ref{tab:comet-ol-dis-ls-human-eval} shows that by applying the \textit{LS} discriminator to generations of the \textit{O+Ln} version of the COMET model, the accuracy score of the \textit{valid} set gains 6\%, 65\% and 45\% relative improvement over the original, logical and semi-logical negation baseline sets respectively, which are the raw beams of the COMET model trained on \textit{O}.
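The reranking mechanism suggested above amounts to filtering the beam by the discriminator's validity probability while preserving beam order. A sketch (the `disc_prob` callable and the toy scores are stand-ins for the fine-tuned RoBERTa discriminator; 0.7 is the probability threshold reported in the appendix):

```python
VALID_THRESHOLD = 0.7  # discriminator probability threshold from the appendix

def rerank_beam(beam, disc_prob, k=3):
    """Keep beam order, but drop options the discriminator deems invalid.

    beam: candidate strings in beam-search order.
    disc_prob: callable mapping a candidate to P(valid); here a stand-in
    for the fine-tuned RoBERTa negation discriminator.
    """
    valid = [c for c in beam if disc_prob(c) >= VALID_THRESHOLD]
    return valid[:k]

# Toy stand-in scores for the event "X does not skate around" (xAttr):
scores = {"athletic": 0.1, "careless": 0.3, "lazy": 0.9,
          "uncoordinated": 0.85, "unskilled": 0.8}
beam = ["athletic", "careless", "lazy", "uncoordinated", "unskilled"]
print(rerank_beam(beam, scores.get))  # ['lazy', 'uncoordinated', 'unskilled']
```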
\begin{table*} \small \renewcommand{\arraystretch}{0.8} \centering \begin{tabular}{l|l|rrr|rrr|rrr} \multicolumn{2}{l|}{\textbf{Evaluation Set}} & \multicolumn{3}{c|}{\textbf{Original}} & \multicolumn{3}{c|}{\textbf{Logical}} & \multicolumn{3}{c}{\textbf{Semi}} \\ \midrule \textbf{Beam} & \textbf{Set} & \textbf{$\#$\cmark} & \textbf{$\#$total} & \textbf{$\%$\cmark} & \textbf{$\#$\cmark} & \textbf{$\#$total} & \textbf{$\%$\cmark} & \textbf{$\#$\cmark} & \textbf{$\#$total} & \textbf{$\%$\cmark} \\ \midrule \multirow{2}{*}{\textbf{10}} & all & 5.32 & 10.00 & 53.22 & 3.32 & 10.00 & 33.21 & 3.52 & 10.00 & 35.15 \\ & valid & 5.00 & 9.21 & 54.26 & 1.99 & 4.47 & 44.54 & 2.00 & 4.42 & 45.28 \\ \cmidrule{0-1} \multirow{2}{*}{\textbf{25}} & all & 12.24 & 25.00 & 48.98 & 7.86 & 25.00 & 31.44 & 7.45 & 25.00 & 29.78 \\ & valid & 11.35 & 22.86 & 49.65 & \textbf{4.20} & 10.69 & \textbf{39.29} & \textbf{4.04} & 10.98 & \textbf{36.77} \\ \bottomrule \end{tabular} \caption{Yields of correct generations of applying the logical+semi discriminator (\textbf{LSD}) to generations of COMET trained on ATOMIC original for beam sizes of 10 and 25. \textit{$\#$\cmark} is the number of correct options yielded. \textit{$\#$total} is the total number of options in each set. \textit{$\%$\cmark} is the percentage correctness.} \label{tab:valid-option-yield} \end{table*} Finally, regarding the efficiency of using discriminators to expand the commonsense KB, Table \ref{tab:valid-option-yield} shows that for both logical and semi-logical types, the \textit{valid} set of beam 25 has higher accuracy and more correct options than the \textit{all} set of beam 10. This indicates that when we have a larger and potentially noisier set of candidates, applying the negation discriminator gives us a set of options with both higher quality and a higher yield of correct options than using the full beam with the smaller beam size of 10.
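The yield comparison above reduces to checking two quantities per configuration, e.g. with the logical-negation numbers from Table \ref{tab:valid-option-yield} (a toy arithmetic check, not analysis code from the paper):

```python
# Per-event averages from the logical-negation column of the yield table:
# (#correct, #total) for the beam-10 "all" set vs. the
# discriminator-filtered beam-25 "valid" set.
beam10_all = (3.32, 10.00)
beam25_valid = (4.20, 10.69)

def yield_and_accuracy(correct, total):
    return correct, 100.0 * correct / total  # correct-option yield, accuracy (%)

c10, a10 = yield_and_accuracy(*beam10_all)
c25, a25 = yield_and_accuracy(*beam25_valid)

# Beam-25 valid wins on BOTH criteria: more correct options and higher accuracy.
assert c25 > c10 and a25 > a10
print(round(a10, 2), round(a25, 2))  # 33.2 39.29
```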
\section{Commonsense Negation} \label{sec:negation_background} \paragraph{Negation in Language} In \textit{Categories} and \textit{De Interpretatione}, Aristotle classifies declarative statements into affirmation and negation, which respectively affirm or deny observations about an event \citep{ackrill1963aristotle}. Despite this seeming simplicity, natural language often expresses negation in complex and subtle ways, using diverse syntactic, semantic and pragmatic formulations \cite{sep-negation}. For example, syntactically, different negation determiners (\textit{i.e.}, negation cues) such as \textit{no}, \textit{few} and \textit{only} result in distinct explicit and implicit negative perceptions \cite{XIANG201671}. Despite their diversity, however, negated language expressions are much less likely to appear in text than affirmative statements \cite{reitan-etal-2015-negation}. Consequently, PLMs, which rely on large-scale textual corpora as training data, are prone to decreased performance when confronted with negated constructions. In machine translation, for example, the presence of negation may heavily affect the quality of produced translations \cite{fancellu-webber-2015-translating-negation,hossain2020its}. In factual knowledge understanding tasks, PLMs memorize positive and negative sentences seen during training, but generalize more poorly to unseen negated instances \cite{kassner-schutze-2020-negated}. \input{tables/atomic-neg-cue-example-table} \paragraph{Negation in Commonsense Reasoning} Understanding negation and oppositional expressions is critical for reasoning about commonsense knowledge, particularly in counterfactual scenarios \citep{qin-etal-2019-counterfactual}. However, negation is rarely explicitly modeled in NLP studies on commonsense reasoning. As a result, in many NLP tasks, these models experience a performance drop when presented with examples exhibiting negated characteristics.
As a case study, the \textsc{Atomic}{} \cite{sap2019atomic} knowledge graph encodes social commonsense knowledge about event pre-conditions, event post-conditions, and static attributes in the form of natural language \textit{if-then} rules. However, despite the fact that \textsc{Atomic}{} provides a rich set of seed events, it comprises an unbalanced set of affirmative events (97.9\%) and negated events (2.1\%). As a result, when systems link to \textsc{Atomic}{} to retrieve relevant social commonsense inferences, they are likely to recover inferences of affirmative events even when searching for negated instances. Furthermore, knowledge models that use this resource (\textit{e.g.}, COMET; \citealp{bosselut2019comet}) are unlikely to learn implicit differences between inferences of affirmative and negated events. When given negated events, these models often produce associations of counterpart affirmative events. For example, for the negated event, ``X opposes racism,'' COMET infers ``X intends to be a racist,'' an association of the affirmative statement, ``X supports racism.'' At the heart of this problem is that inferring commonsense knowledge about negations often requires implicit reasoning. In factual knowledge reasoning, applying logical rules over statements can be effective for handling negative queries \cite{asai-hajishirzi-2020-logic,ren2020beta}. However, directly manipulating affirmative forms with logic-guided rules may fail for commonsense reasoning: the boundary of commonsense inferences between affirmative and negated statements is not always wholly contrastive. Many inferences can be relevant to both forms. The events ``X puts the potato in the oven'' and ``X doesn't put the potato in the oven'' could both have an associated inference: ``X wants to make dinner.'' The affirmative event clearly implies this inference.
For the negated event to be worth mentioning on its own \citep{grice1975logic}, an implicit complementary event (\textit{e.g.}, ``X puts the potato in the microwave'') would likely hold, which might validate the inference w.r.t. the negated event. To model the defeasibility of commonsense reasoning \citep{Pratt1994,rudinger-etal-2020-thinking}, modeling both common and contrastive inferences of negated forms is necessary. \input{tables/data-statistics-table} \section{Appendices} \label{sec:appendix} \subsection{\textsc{Anion}{} Data Collection Details} \label{sec:appendix:data-collection-details} \paragraph{Heuristic of Creating Logical and Semi-logical Negation Events} For logical negation, with the majority of the original events being simple sentences with one predicate, our general rule of thumb is to negate the original event at the sentence level. Specifically, for each original event, we first identify each token's part-of-speech (POS) tag via the NLTK toolkit\footnote{\url{https://www.nltk.org}}. Then, we insert the negation cue \textit{not} after the subject of each sentence, in the majority of cases the entity ``PersonX,'' with a few exceptions such as ``PersonX's'' and ``PersonX and PersonY.'' To ensure the grammatical correctness of the heuristically generated logical negation events, we add appropriate auxiliary verbs (\textit{e.g.}, do, does, did, is, was, can, could, would, should, may, might) in accordance with the tenses (\textit{e.g.}, present, past, future) of the original events. Since NLTK's POS parser fails to recognize some verbs that have both noun and verb usage (\textit{e.g.}, ``waters'' the plant, ``supports'' her argument), we curate a list of such dual-use words and map them manually. Also, while converting the original events to their logical negation counterparts, we revise grammar mistakes from \textsc{Atomic}{} and exclude awkward expressions as much as possible.
In addition, to make the negation forms sound more natural, we replace the modifier ``some'' with ``any'' during conversion (\textit{e.g.}, ``PersonX buys some shoes'' is converted to ``PersonX doesn't buy any shoes''). For the minority of compound events with clauses or complex sentence structures, we disregard them to ensure data quality. For semi-logical negation events, we curate a list of semi-logical negation cues besides \textit{not} from various sources\footnote{\url{https://dictionary.cambridge.org/us/grammar/british-grammar/negation_2}} \cite{councill-etal-2010-great, hossain2020its, kim-etal-2019-probing} and categorize them into four types: affixes, single-word cues, multi-word cues and negative verbs (Table \ref{tab:negation-cue-examples}). We identify appropriate rules for inserting each semi-logical negation cue into simple base events from \textsc{Atomic}{} consisting of a subject and a predicate. We apply the rules to original events from \textsc{Atomic}{} and randomly select at least 200 automatically generated semi-logical negation events per negation cue for manual screening by the first author to avoid misplacement of negation cues and awkward expressions. In the end, we were able to identify 5,019 high-quality semi-logical negation events originating from \textsc{Atomic}{}. As a final quality-control step for the constructed logical and semi-logical events, after obtaining the crowdsourced inferences for each event, we remove all events that annotators comment as ``unclear,'' ``doesn't make sense'' or ``grammatically wrong.'' \paragraph{Crowdsourcing of Commonsense Contradiction Events} For collecting commonsense contradiction events, we present an original \textsc{Atomic}{} event to the annotators and ask them to formulate corresponding opposite events. We exclude \textsc{Atomic}{} events with placeholders (representing generic objects) to capture semantic and pragmatic subtlety.
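The cue-insertion heuristic for logical negation described earlier in this appendix can be sketched as below (the tiny tense table, lemma map and helper name are our own illustrative stand-ins for the NLTK-based pipeline, which additionally handles the curated dual-use words):

```python
# Sketch of the rule-based logical negation described above.
# The real pipeline relies on NLTK POS tags; here we assume a simple
# "PersonX <verb> <rest>" event and a tiny hand-made tense table.

AUX_BY_TAG = {"VBZ": "doesn't", "VBD": "didn't", "VBP": "don't"}
LEMMA = {"buys": "buy", "waters": "water", "supports": "support"}

def negate_event(subject, verb, tag, rest):
    aux = AUX_BY_TAG[tag]               # auxiliary matching the verb's tense
    base = LEMMA.get(verb, verb)        # bare verb form after the auxiliary
    rest = rest.replace("some", "any")  # "some" -> "any" under negation
    return f"{subject} {aux} {base} {rest}".strip()

print(negate_event("PersonX", "buys", "VBZ", "some shoes"))
# PersonX doesn't buy any shoes
```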
In the MTurk task, we present annotators with detailed instructions for formulating the opposite events (\textit{e.g.}, avoid using negative words as much as possible, use complete sentences, follow grammar rules) and concrete examples as references. Figure \ref{fig:commonsense_contradiction_amt_hit} shows details of the MTurk task. Although we explicitly instruct annotators to avoid using negation cues, there are still some exceptions. Therefore, after the compilation of all commonsense contradiction events, we remove ones that contain any explicit negation cues to make sure the categorization is clean. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/commonsense_contradiction_amt_hit.pdf} \caption{Snippet of the annotation task used to collect commonsense contradiction events.} \label{fig:commonsense_contradiction_amt_hit} \end{figure*} \paragraph{Crowdsourcing of \textsc{Anion}{} Event Inferences} For the collection of \textsc{Anion}{} event inferences, we adopt the MTurk templates used by the original \textsc{Atomic}{} data collection\footnote{\url{https://homes.cs.washington.edu/~msap/atomic/mTurkFiles/}}. As with logical and semi-logical events, we remove all inferences of events that annotators comment as ``unclear,'' ``doesn't make sense'' or ``grammatically wrong.'' \subsection{Training Details of COMET Models} \paragraph{Input} A knowledge tuple $\{h,r,t\}$ is represented as a concatenated sequence of the tokens of each element in the tuple: $X = \{X^{h}, X^{r}, X^{t}\}$, where $X^{h} = \{x_{0}^{h},...,x_{|h|}^{h}\}$ are the tokens comprising the event, $X^{r} = \{x_{0}^{r},...,x_{|r|}^{r}\}$ are the tokens comprising the relation, and $X^{t} = \{x_{0}^{t},...,x_{|t|}^{t}\}$ are the tokens comprising the commonsense inference. \paragraph{Initialization} Similar to \citet{bosselut2020dynamic}, we initialize the parameters of COMET with the weights of the 345M-parameter GPT2 model (GPT2-M) from \citet{Radford2019LanguageMA}.
Special tokens that represent relation types (\textit{e.g.}, \textit{xIntent}) are added to the vocabulary and initialized by sampling from the normal distribution. \paragraph{Hyperparameters} Following \citet{bosselut2019comet}, we use a dropout rate of 0.1 and GeLU \cite{hendrycks2020gaussian} units as activation functions. During training, we use the Adam optimizer \cite{kingma2017adam} with a batch size of 64. For COMET models trained on different subsets of the \textsc{Atomic}{} and \textsc{Anion}{} datasets, we adopt a maximum learning rate of 6.25e-5 with a warmup period of 0.002 times the total number of minibatches, customized for each model, which decays linearly until the end of training. We train separate COMET models on the original data (\textsc{Atomic}), the original and logical negation data (\textsc{Atomic}{} + \textsc{Anion}-L), the original and semi-logical negation data (\textsc{Atomic}{} + \textsc{Anion}-S), the original and commonsense contradiction data (\textsc{Atomic}{} + \textsc{Anion}-C), and the overall dataset (\textsc{Atomic}{} + \textsc{Anion}), for 21K, 25K, 24K, 24K and 29K minibatches respectively, and apply early stopping for all models. The rest of the hyperparameters are the same as those of GPT2-M in \citet{Radford2019LanguageMA}, implemented via the publicly available HuggingFace API\footnote{\url{https://huggingface.co/transformers/}}. All models are fine-tuned and evaluated on a single NVIDIA QUADRO RTX 8000 GPU for six to twelve hours depending on the complexity of the experimental setup. \subsection{Training Details of Negation Discriminator} \label{sec:appendix:train-discriminator} \paragraph{Input} As input to the discriminator model, we design sentence patterns that express relation types in natural language and fill out the patterned sentences with events and conditions before encoding them (\textit{e.g.}, ``PersonX addresses a talk. As a result, PersonX wants to convince others.'').
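For example, assembling the patterned-sentence input for a knowledge tuple can be sketched as follows (the pattern strings follow the relation patterns in Table \ref{tab:atomic-rel-pattern}; the function name and dictionary layout are ours):

```python
# Sketch: turning a knowledge tuple {h, r, t} into the patterned sentence
# fed to the RoBERTa discriminator (patterns follow the relation table).
PATTERNS = {
    "xIntent": "{h}. Because PersonX wanted {t}.",
    "xNeed":   "{h}. Before, PersonX needed {t}.",
    "xAttr":   "{h}. PersonX is seen as {t}.",
    "xWant":   "{h}. As a result, PersonX wants {t}.",
    "xReact":  "{h}. As a result, PersonX feels {t}.",
}

def to_patterned_sentence(h, r, t):
    return PATTERNS[r].format(h=h, t=t)

print(to_patterned_sentence("PersonX addresses a talk", "xWant",
                            "to convince others"))
# PersonX addresses a talk. As a result, PersonX wants to convince others.
```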
Relations and their corresponding patterned sentences are listed in Table \ref{tab:atomic-rel-pattern}. In a pilot study, adopting patterned sentences was found to be a more effective approach than concatenating the components of knowledge tuples. \paragraph{Loss Function} The negation discriminator is trained to minimize the binary cross-entropy loss: \vspace*{-2mm} \begin{equation} \mathcal{L}_{D} = -\,y \cdot \log{P(y)} - (1-y) \cdot \log{(1-P(y))} \label{eq:disc-loss} \end{equation} \noindent where $y$ is the label for an input (\textit{i.e.}, logically valid or invalid). \paragraph{Hyperparameters} Parameters are initialized with the trained weights of the RoBERTa-base model of \citet{liu2019roberta}. During training, we use the Adam optimizer \cite{kingma2017adam} and train the model with a batch size of 64. We adopt a maximum learning rate of 4.5e-5 with a warmup period of 10 minibatches. We train the \textbf{L}, \textbf{S}, \textbf{C} and \textbf{LSC} discriminators for 25K, 14K, 21K and 6K minibatches respectively, and apply early stopping for all models. We use a probability threshold of 0.7 to determine whether an input knowledge tuple is plausible, based on a pilot study on the development sets. The rest of the hyperparameters are the same as those of RoBERTa-base \cite{liu2019roberta}, implemented via the publicly available HuggingFace API\footnote{\url{https://huggingface.co/transformers/}}. All models are fine-tuned and evaluated on a single NVIDIA QUADRO RTX 8000 GPU for four to six hours depending on the different experimental setups. \begin{table}[t] \small \centering \begin{tabular}{l|l} \textbf{Relation} & \textbf{Patterned sentences} \\ \midrule \textit{xIntent} & $\{h\}$. Because PersonX wanted $\{t\}$.\\ \textit{xNeed} & $\{h\}$. Before, PersonX needed $\{t\}$. \\ \textit{xAttr} & $\{h\}$. PersonX is seen as $\{t\}$. \\ \textit{xWant} & $\{h\}$. As a result, PersonX wants $\{t\}$. \\ \textit{oWant} & $\{h\}$.
As a result, others want $\{t\}$. \\ \textit{xEffect} & $\{h\}$. As a result, PersonX then $\{t\}$. \\ \textit{oEffect} & $\{h\}$. As a result, others then $\{t\}$. \\ \textit{xReact} & $\{h\}$. As a result, PersonX feels $\{t\}$. \\ \textit{oReact} & $\{h\}$. As a result, others feel $\{t\}$. \\ \bottomrule \end{tabular} \caption{Patterned sentences representing relation types in \textsc{Atomic}{}, used to construct inputs for training negation discriminators.} \label{tab:atomic-rel-pattern} \end{table} \begin{table*} \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering \begin{tabular}{l|r|rrr|rrr|rrr|rrr} \multicolumn{2}{l|}{\textbf{Eval}} & \multicolumn{3}{c|}{\textbf{\textsc{Atomic}}} & \multicolumn{3}{c|}{\textbf{\textsc{Anion}-L}} & \multicolumn{3}{c|}{\textbf{\textsc{Anion}-S}} & \multicolumn{3}{c}{\textbf{\textsc{Anion}-C}} \\ \toprule \textbf{Trn} & \textbf{Dis} & all & valid & iprv\% & all & valid & iprv\% & all & valid & iprv\% & all & valid & iprv\% \\ \midrule \multirow{4}{*}{\textsc{Atomic}} & L & \textbf{55.69} & 55.65 & -0.07 & 39.46 & \textbf{**46.39} & 17.55 & 37.13 & \textbf{37.48} & 0.96 & \textbf{46.92} & 46.83 & -0.20 \\ & S & 55.93 & \textbf{56.18} & 0.44 & 37.85 & \textbf{**41.93} & 10.78 & 39.29 & \textbf{**44.58} & 13.47 & 47.32 & \textbf{47.68} & 0.75 \\ & C & 56.94 & \textbf{57.26} & 0.57 & 36.43 & \textbf{37.54} & 3.03 & 37.72 & \textbf{39.03} & 3.45 & 48.26 & \textbf{48.79} & 1.09 \\ & LSC & 58.30 & \textbf{59.07} & 1.32 & 39.45 & \textbf{**45.59} & 15.57 & 38.55 & \textbf{**44.93} & 16.56 & 48.81 & \textbf{*51.44} & 5.40 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}-L}} & L & 58.62 & \textbf{58.72} & 0.16 & 46.05 & \textbf{**51.19} & 11.16 & 42.42 & \textbf{42.89} & 1.11 & \textbf{47.98} & 47.97 & -0.02 \\ & S & 58.93 & \textbf{59.31} & 0.64 & 45.90 & \textbf{**49.00} & 6.77 & 44.22 & \textbf{**47.59} & 7.64 & 48.10 & \textbf{48.69} & 1.24 \\ & C & 59.63 & \textbf{60.07}
& 0.73 & 45.88 & \textbf{46.23} & 0.76 & 43.40 & \textbf{43.74} & 0.79 & 48.81 & \textbf{49.84} & 2.12 \\ & LSC & 60.83 & \textbf{62.49} & 2.74 & 45.96 & \textbf{**50.19} & 9.20 & 44.61 & \textbf{**48.30} & 8.27 & 49.73 & \textbf{*51.97} & 4.51 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}-S}} & L & \textbf{56.37} & 56.35 & -0.05 & 44.77 & \textbf{**51.76} & 15.60 & 45.58 & \textbf{45.87} & 0.63 & 46.24 & \textbf{46.29} & 0.11 \\ & S & 56.60 & \textbf{56.66} & 0.11 & 44.39 & \textbf{**47.42} & 6.83 & 46.07 & \textbf{**48.32} & 4.89 & 46.62 & \textbf{47.17} & 1.19 \\ & C & 57.46 & \textbf{57.60} & 0.23 & 44.46 & \textbf{45.39} & 2.07 & 45.81 & \textbf{47.15} & 2.93 & 47.38 & \textbf{48.83} & 3.06 \\ & LSC & 58.74 & \textbf{*60.39} & 2.81 & 44.94 & \textbf{**49.88} & 10.98 & 46.08 & \textbf{**48.67} & 5.62 & 48.56 & \textbf{**51.22} & 5.46 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}-C}} & L & 52.72 & \textbf{52.73} & 0.02 & 43.45 & \textbf{**49.62} & 14.20 & 41.83 & \textbf{41.88} & 0.12 & 48.93 & \textbf{48.97} & 0.07 \\ & S & 52.93 & \textbf{53.33} & 0.76 & 42.66 & \textbf{**46.40} & 8.75 & 42.57 & \textbf{**46.40} & 8.98 & 49.18 & \textbf{49.49} & 0.62 \\ & C & 53.70 & \textbf{54.07} & 0.69 & 42.83 & \textbf{43.26} & 1.00 & 42.25 & \textbf{42.70} & 1.07 & 49.30 & \textbf{*50.97} & 3.38 \\ & LSC & 54.38 & \textbf{55.74} & 2.49 & 44.17 & \textbf{**48.84} & 10.58 & 42.37 & \textbf{**46.22} & 9.10 & 50.07 & \textbf{**52.80} & 5.46 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}}} & L & 54.16 & \textbf{54.20} & 0.08 & 46.54 & \textbf{**50.71} & 8.98 & 46.90 & \textbf{47.14} & 0.51 & 50.80 & \textbf{50.94} & 0.28 \\ & S & 54.49 & \textbf{54.64} & 0.28 & 46.26 & \textbf{**48.36} & 4.54 & 47.73 & \textbf{**50.42} & 5.65 & 51.29 & \textbf{51.52} & 0.45 \\ & C & 55.03 & \textbf{55.71} & 1.23 & 46.15 & \textbf{46.16} & 0.03 & 47.47 & \textbf{48.20} & 1.55 & 51.28 & 
\textbf{52.65} & 2.67 \\ & LSC & 55.68 & \textbf{**57.58} & 3.41 & 46.39 & \textbf{**49.85} & 7.45 & 47.53 & \textbf{**50.62} & 6.50 & 51.83 & \textbf{*53.91} & 4.02 \\ \bottomrule \end{tabular} \caption{For generations of COMET models trained on different subsets of \textsc{Atomic}{} and \textsc{Anion}{}, the Precision @ \{\# valid\}{} scores of the \textit{all} and \textit{valid} sets determined by \textbf{L}, \textbf{S}, \textbf{C} and \textbf{LSC} discriminators with respect to the original and negation evaluation sets. The single (*) and double asterisks (**) indicate significance at p<0.05 and p<0.01 respectively. iprv\% is the percentage improvement of the \textit{valid} set over the \textit{all} set.} \label{tab:full-p@len(valid)} \end{table*} \begin{table*} \footnotesize \renewcommand{\arraystretch}{1} \setlength{\tabcolsep}{4.5pt} \centering \begin{tabular}{r|r|rrr|rrr|rrr|rrr} \multicolumn{2}{l|}{\textbf{Eval}} & \multicolumn{3}{c|}{\textbf{\textsc{Atomic}}} & \multicolumn{3}{c|}{\textbf{\textsc{Anion}-L}} & \multicolumn{3}{c|}{\textbf{\textsc{Anion}-S}} & \multicolumn{3}{c}{\textbf{\textsc{Anion}-C}} \\ \toprule \textbf{Trn} & \textbf{Dis} & all & valid & iprv\% & all & valid & iprv\% & all & valid & iprv\% & all & valid & iprv\% \\ \midrule \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}}} & L & 59.41 & \textbf{59.65} & 0.40 & 44.92 & \textbf{**49.95} & 11.20 & 39.47 & \textbf{39.94} & 1.21 & 50.77 & \textbf{50.91} & 0.27 \\ & S & 59.48 & \textbf{60.14} & 1.12 & 42.88 & \textbf{**46.24} & 7.83 & 45.27 & \textbf{**49.25} & 8.81 & 51.22 & \textbf{51.84} & 1.21 \\ & C & 59.89 & \textbf{60.89} & 1.66 & 39.20 & \textbf{40.28} & 2.75 & 40.07 & \textbf{41.40} & 3.32 & 51.77 & \textbf{52.88} & 2.15 \\ & LSC & 61.37 & \textbf{63.12} & 2.85 & 46.00 & \textbf{**50.34} & 9.44 & 46.15 & \textbf{**50.23} & 8.85 & 53.29 & \textbf{55.24} & 3.65 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}-L}} & L & 61.33 & \textbf{61.57} & 0.39 & 
51.47 & \textbf{**56.16} & 9.11 & 45.73 & \textbf{46.04} & 0.68 & 51.40 & \textbf{51.57} & 0.33 \\ & S & 61.13 & \textbf{62.05} & 1.51 & 50.12 & \textbf{**53.40} & 6.54 & 50.84 & \textbf{**54.09} & 6.40 & 51.89 & \textbf{52.91} & 1.96 \\ & C & 61.48 & \textbf{62.96} & 2.40 & 48.23 & \textbf{49.06} & 1.72 & 46.42 & \textbf{46.99} & 1.22 & 52.31 & \textbf{53.67} & 2.61 \\ & LSC & 63.66 & \textbf{*65.85} & 3.44 & 51.90 & \textbf{**56.26} & 8.40 & 51.12 & \textbf{**54.59} & 6.78 & 53.97 & \textbf{56.15} & 4.04 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}-S}} & L & 60.25 & \textbf{60.65} & 0.67 & 48.11 & \textbf{**54.45} & 13.18 & 45.97 & \textbf{46.35} & 0.81 & 50.82 & \textbf{50.89} & 0.15 \\ & S & 60.23 & \textbf{60.85} & 1.03 & 46.48 & \textbf{**49.14} & 5.72 & 47.58 & \textbf{**50.29} & 5.70 & 51.11 & \textbf{52.00} & 1.74 \\ & C & 60.43 & \textbf{61.28} & 1.40 & 44.61 & \textbf{46.31} & 3.80 & 46.21 & \textbf{**48.78} & 5.58 & 51.72 & \textbf{53.25} & 2.95 \\ & LSC & 62.22 & \textbf{*64.44} & 3.58 & 47.63 & \textbf{**51.12} & 7.32 & 48.24 & \textbf{*50.70} & 5.11 & 53.51 & \textbf{*56.04} & 4.74 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}-C}} & L & 54.36 & \textbf{54.80} & 0.81 & 46.25 & \textbf{**51.57} & 11.51 & 42.81 & \textbf{43.13} & 0.76 & 50.71 & \textbf{50.81} & 0.20 \\ & S & 54.50 & \textbf{55.75} & 2.29 & 45.59 & \textbf{*48.11} & 5.53 & 45.40 & \textbf{**48.50} & 6.83 & 51.00 & \textbf{51.78} & 1.52 \\ & C & 54.43 & \textbf{55.50} & 1.97 & 42.61 & \textbf{43.26} & 1.53 & 43.11 & \textbf{44.13} & 2.38 & 51.44 & \textbf{*53.46} & 3.93 \\ & LSC & 55.68 & \textbf{*57.91} & 4.01 & 47.11 & \textbf{**51.25} & 8.80 & 45.75 & \textbf{**49.03} & 7.18 & 52.44 & \textbf{**55.68} & 6.18 \\ \cmidrule{1-2} \multirow{4}{*}{\makecell[tl]{\textsc{Atomic}+\\\textsc{Anion}}} & L & 56.63 & \textbf{57.11} & 0.85 & 50.39 & \textbf{**54.52} & 8.20 & 47.92 & \textbf{48.27} & 0.73 & 53.11 & \textbf{53.41} & 
0.56 \\ & S & 56.53 & \textbf{57.42} & 1.57 & 48.92 & \textbf{**52.07} & 6.44 & 49.21 & \textbf{**52.51} & 6.72 & 53.10 & \textbf{53.67} & 1.09 \\ & C & 56.40 & \textbf{57.64} & 2.21 & 47.96 & \textbf{48.30} & 0.70 & 48.16 & \textbf{50.00} & 3.82 & 53.90 & \textbf{55.48} & 2.94 \\ & LSC & 58.27 & \textbf{60.53} & 3.87 & 50.25 & \textbf{**54.27} & 8.02 & 49.85 & \textbf{**53.09} & 6.50 & 54.50 & \textbf{**57.66} & 5.79 \\ \bottomrule \end{tabular} \caption{For generations of COMET models trained on different subsets of \textsc{Atomic}{} and \textsc{Anion}{}, the Precision @ 3{} scores of the \textit{all} and \textit{valid} sets determined by the \textbf{L}, \textbf{S}, \textbf{C} and \textbf{LSC} discriminators with respect to the original and negation evaluation sets. The single (*) and double asterisks (**) indicate significance at $p < 0.05$ and $p < 0.01$ respectively. iprv\% is the percentage improvement of the \textit{valid} set over the \textit{all} set.} \label{tab:full-p@3} \end{table*} \begin{table}[t!]
\small \centering \begin{tabular}{l|l|lc} \toprule \textbf{Event + Rel} & \textbf{Generation} & \textbf{V} & \textbf{P}\\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not skate \\around\\\textbf{\textit{xAttr}}}} & athletic & \xmark & \xmark \\ & careless & \xmark & \xmark \\ & lazy & \cmark & \cmark \\ & uncoordinated & \cmark & \cmark \\ & unskilled & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not sit \\behind Y\\\textbf{\textit{xIntent}}}} & to be alone & \cmark & \cmark \\ & to be left alone & \cmark & \cmark \\ & to avoid Y & \cmark & \cmark \\ & to sit & \xmark & \xmark \\ & to wait & \cmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not look \\angry\\\textbf{\textit{xNeed}}}} & to calm down & \xmark & \cmark \\ & to watch a movie & \cmark & \xmark \\ & to have been provoked & \xmark & \xmark \\ & to not be angry & \cmark & \cmark \\ & to be calm & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not rent \\an apartment\\\textbf{\textit{xWant}}}} & to save money & \cmark & \cmark \\ & to get a job & \cmark & \xmark \\ & to pay rent & \xmark & \xmark \\ & to move in & \xmark & \xmark \\ & to get a new apartment & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X is \\not offered \\the job\\\textbf{\textit{oWant}}}} & to hire X & \xmark & \xmark \\ & to fire X & \cmark & \cmark \\ & to hire someone else & \cmark & \cmark \\ & to accept the job & \xmark & \xmark \\ & to hire them & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does\\not buy \\a snack\\\textbf{\textit{xReact}}}} & satisfied & \xmark & \xmark\\ & hungry & \cmark & \cmark\\ & satiated & \xmark & \xmark\\ & full & \xmark & \xmark\\ & guilty & \cmark & \xmark\\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not bring Y\\into conflict\\\textbf{\textit{oReact}}}} & relieved & \cmark & \cmark \\ & sad & \xmark & \xmark \\ & satisfied & \cmark & \cmark \\ & grateful & \cmark & \cmark \\ & angry & \xmark & \xmark \\ 
\midrule \multirow{5}{*}{\makecell[tl]{X does \\not learn \\new things\\\textbf{\textit{xEffect}}}} & gains knowledge & \xmark & \xmark \\ & becomes lazy & \cmark & \cmark \\ & gets bored & \cmark & \cmark \\ & becomes ignorant & \xmark & \cmark \\ & cries & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X does \\not put Y \\in mind\\\textbf{\textit{oEffect}}}} & becomes confused & \xmark & \xmark \\ & does not think about X & \cmark & \cmark \\ & Y thinks about X & \xmark & \xmark \\ & Y is not remembered & \cmark & \cmark \\ & cries & \xmark & \cmark \\ \bottomrule \end{tabular} \caption{Randomly selected generations of the original COMET model regarding logical negation events in \textsc{Anion}{}-L. The top 5 options are classified as either \textit{valid} or \textit{invalid} by the \textbf{LSC} discriminator. \textbf{V} indicates whether an option is classified as \textit{valid} by the \textbf{LSC} discriminator. \textbf{P} indicates whether an option is plausible as judged by humans.} \label{tab:discriminator-generations-logical} \end{table} \begin{table}[t!]
\small \centering \begin{tabular}{l|l|lc} \toprule \textbf{Event + Rel} & \textbf{Generation} & \textbf{V} & \textbf{P}\\ \midrule \multirow{5}{*}{\makecell[tl]{X hardly ever\\increases X's\\knowledge\\\textbf{\textit{xAttr}}}} & intelligent & \xmark & \xmark \\ & determined & \xmark & \xmark \\ & studious & \xmark & \xmark \\ & lazy & \cmark & \cmark \\ & dedicated & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X avoids \\skating \\around\\\textbf{\textit{xIntent}}}} & to have fun & \xmark & \xmark \\ & to be safe & \cmark & \cmark \\ & to stay home & \cmark & \cmark \\ & to stay in shape & \xmark & \xmark \\ & to get fit & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X not at all \\wants to learn \\karate\\\textbf{\textit{xNeed}}}} & learn karate & \xmark & \xmark \\ & to not like it & \cmark & \cmark \\ & to avoid it & \cmark & \cmark \\ & to be lazy & \cmark & \cmark \\ & to find a teacher & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X refuses \\to hear a \\scary noise\\\textbf{\textit{xWant}}}} & to run away & \xmark & \xmark \\ & to go to sleep & \cmark & \cmark \\ & to be safe & \cmark & \cmark \\ & to keep quiet & \cmark & \cmark \\ & to avoid the noise & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X never \\brings Y into \\conflicts\\\textbf{\textit{oWant}}}} & to avoid X & \xmark & \xmark \\ & to be left alone & \xmark & \cmark \\ & to thank X & \cmark & \cmark \\ & to fight back & \xmark & \xmark \\ & to avoid conflict & \xmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X scarcely\\gets sunburned\\ \\\textbf{\textit{xReact}}}} & burned & \xmark & \xmark\\ & hurt & \xmark & \xmark\\ & sick & \xmark & \xmark\\ & sad & \xmark & \xmark\\ & satisfied & \cmark & \cmark\\ \midrule \multirow{5}{*}{\makecell[tl]{X under no\\ circumstances\\forgets Y's wallet\\\textbf{\textit{oReact}}}} & upset & \xmark & \xmark \\ & sad & \xmark & \xmark \\ & angry & \xmark & \xmark \\ & thankful & \cmark & \cmark \\ 
& grateful & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X has trouble \\with advertising \\X's business\\\textbf{\textit{xEffect}}}} & loses money & \cmark & \cmark \\ & loses clients & \cmark & \cmark \\ & gets fired & \cmark & \cmark \\ & gets sued & \xmark & \xmark \\ & cries & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X fails to \\make it through \\the day\\\textbf{\textit{oEffect}}}} & loses a friend & \cmark & \cmark \\ & worries about X & \cmark & \cmark \\ & worried & \cmark & \cmark \\ & want them to do better & \cmark & \cmark \\ & cries & \cmark & \xmark \\ \bottomrule \end{tabular} \caption{Randomly selected generations of the original COMET model regarding semi-logical negation events from \textsc{Anion}{}-S. The top 5 options are classified as either \textit{valid} or \textit{invalid} by the \textbf{LSC} discriminator. \textbf{V} indicates whether an option is classified as \textit{valid} by the \textbf{LSC} discriminator. \textbf{P} indicates whether an option is plausible as judged by humans.} \label{tab:discriminator-generations-semi-logical} \end{table} \begin{table}[t!]
\small \centering \begin{tabular}{l|l|lc} \toprule \textbf{Event + Rel} & \textbf{Generation} & \textbf{V} & \textbf{P}\\ \midrule \multirow{5}{*}{\makecell[tl]{X keeps \\the old one \\ \\\textbf{\textit{xAttr}}}} & determined & \xmark & \xmark \\ & careful & \xmark & \xmark \\ & loyal & \cmark & \cmark \\ & tired & \xmark & \xmark \\ & caring & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X pays twice \\the retail price \\for a rare item\\\textbf{\textit{xIntent}}}} & to own something rare & \cmark & \cmark \\ & to buy something rare & \cmark & \cmark \\ & to purchase something rare & \cmark & \cmark \\ & to obtain something & \xmark & \xmark \\ & to acquire something & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X is a \\complete \\loner\\\textbf{\textit{xNeed}}}} & to avoid people & \cmark & \cmark \\ & to be lonely & \cmark & \cmark \\ & to start a relationship & \xmark & \xmark \\ & to stay quiet & \cmark & \cmark \\ & to get into trouble & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X is \\miserable \\playing \\\textbf{\textit{xWant}}}} & to get better & \cmark & \cmark \\ & to take a break & \xmark & \cmark \\ & to go home & \xmark & \xmark \\ & to go to sleep & \xmark & \xmark \\ & to cry & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X is picky \\about \\everything\\\textbf{\textit{oWant}}}} & to ignore X & \cmark & \cmark \\ & to avoid X & \cmark & \cmark \\ & to talk to X & \xmark & \xmark \\ & to help X & \xmark & \xmark \\ & to make X feel better & \cmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X resigns\\himself \\ \\\textbf{\textit{xReact}}}} & relieved & \cmark & \cmark\\ & relaxed & \cmark & \xmark\\ & satisfied & \xmark & \xmark\\ & accomplished & \xmark & \xmark\\ & sad & \cmark & \cmark\\ \midrule \multirow{5}{*}{\makecell[tl]{X gives away \\X's laptop\\ \\\textbf{\textit{oReact}}}} & grateful & \cmark & \cmark \\ & thankful & \cmark & \cmark \\ & upset & \xmark & \xmark \\ & sad & 
\xmark & \xmark \\ & surprised & \cmark & \cmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X goes \\home \\ \\\textbf{\textit{xEffect}}}} & relaxes & \cmark & \cmark \\ & goes to sleep & \cmark & \cmark \\ & is greeted by family & \xmark & \cmark \\ & gets rest & \cmark & \cmark \\ & gets tired & \xmark & \xmark \\ \midrule \multirow{5}{*}{\makecell[tl]{X puts Y \\out of mind \\ \\\textbf{\textit{oEffect}}}} & has a better day & \xmark & \xmark \\ & becomes sad & \cmark & \cmark \\ & cries & \cmark & \cmark \\ & becomes grateful towards X & \xmark & \xmark \\ & feels better & \xmark & \xmark \\ \bottomrule \end{tabular} \caption{Randomly selected generations of the original COMET model regarding commonsense contradiction events from \textsc{Anion}{}-C. The top 5 options are classified as either \textit{valid} or \textit{invalid} by the \textbf{LSC} discriminator. \textbf{V} indicates whether an option is classified as \textit{valid} by the \textbf{LSC} discriminator. \textbf{P} indicates whether an option is plausible as judged by humans.} \label{tab:discriminator-generations-pragmatic} \end{table} \subsection{Statistical Significance Testing} \label{app:permutation} To compare P@\{\# valid\}{} for the \textit{all} and \textit{valid} sets, we use a Permutation Test\footnote{http://rasbt.github.io/mlxtend/} with 1,000 permutations to test for statistical significance. For multiple comparisons, we use the Bonferroni method \cite{Haynes2013} to correct significance thresholds. \subsection{Quality Check for the Human Evaluation} We conduct comprehensive pre- and post-evaluation screening on the users and the tasks being completed to ensure the objectivity and high quality of the evaluations. Besides qualifying users during pilot batches, we double-check and remove evaluation tasks that are not carefully conducted (\textit{e.g.}, tasks done by users who select all/no options for all hundreds of tasks that they perform).
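The difference-of-means permutation test used in the significance-testing subsection can be sketched in a few lines of Python (an illustrative re-implementation; function name and seeding are ours — in practice the mlxtend routine linked in the footnote provides this functionality):

```python
import random

def permutation_test(x, y, n_perm=1000, seed=0):
    """Two-sided permutation test on the difference of means between
    samples x and y; returns an approximate p-value (sketch only)."""
    rng = random.Random(seed)
    observed = sum(x) / len(x) - sum(y) / len(y)
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:len(x)]) / len(x)
                - sum(pooled[len(x):]) / len(y))
        # count permutations at least as extreme as the observed statistic
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm
```

Identical samples yield $p=1$, while clearly separated samples yield $p\approx 0$.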
Figure \ref{fig:human_eval_mturk_hit} shows a snippet of the human evaluation MTurk task. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/human_eval_mturk_hit.pdf} \caption{Snippet of the human evaluation task used to evaluate model generated tail inferences.} \label{fig:human_eval_mturk_hit} \end{figure*}
\section{Derivation of the accretion efficiencies} \label{sec:appendix} \subsection{Settling regime -- planar limit (2D)} \label{sec:appendixset} \subsubsection{General expression} \label{sec:appendixge} We use physical, order-of-magnitude arguments to derive the scaling relations of the pebble accretion efficiency (see also \cite{Ormel2010,Ormel2012,Lambrechts2012,Guillot2014,Morbidelli2015,Ida2016}). These expressions are complemented by prefactors which we obtain from the numerical simulations. There are two key requirements for pebble accretion (see \cite{Ormel2017} for a review): \begin{enumerate} \item During a pebble-planet encounter, the time a pebble settles onto the planet is shorter than its encounter time, $t_{\rm set} < t_{\rm enc}$, such that the pebble is able to hit the planet. \item The stopping time is shorter than the encounter time, $t_{\rm s} < t_{\rm enc}$. Otherwise, the gas drag is not important during this encounter. \end{enumerate} \begin{figure}[t] \includegraphics[width=9cm,height=7cm,keepaspectratio]{appendix0.pdf} \caption{ Sketch illustrating the pebble-planet encounter in a local frame where the pebble is co-moving with the central planet. The approach velocity is given by the relative velocity between these two objects ($\Delta v$). The perturbed and unperturbed trajectories are represented by the blue solid and dashed lines. The important timescales are: the pebble-planet encounter time $t_{\rm enc} = b/\Delta v$, and the settling time $t_{\rm set} = b/v_{\rm set}$, where $v_{\rm set}$ is the sedimentation velocity attained when the planet's gravity equals the gas drag. } \label{fig:appendix} \end{figure} The failure of either criterion implies that accretion is not in the settling regime. In \fg{appendix}, the settling velocity of a pebble is obtained by balancing the planet's gravitational force with the gas drag force, $v_{\rm set} = (GM_{\rm p}/b^2) t_{\rm s}$, where $b$ is the impact parameter.
The settling time is defined as $t_{\rm set} = b/v_{\rm set}$, while the encounter time is approximated as $t_{\rm enc} \simeq b/\Delta v$, where $\Delta v$ is the relative velocity between the planet and the pebble. Therefore, from criterion (1) the accretion radius of the planet in the settling regime, $b_{\rm set}$, can be written as \begin{equation} b_{\rm set} \sim \sqrt{\frac{GM_{\rm p} t_{\rm s}}{\Delta v}}. \label{eq:reff1} \end{equation} Equivalently, criterion (1) can also be obtained from the gravitational deflection time $t_{\rm g}$ \citep{Perets2011,Lambrechts2012,Baruteau2016}. For settling, $t_{\rm g}$ must be shorter than the stopping time, $t_{\rm g} < t_{\rm s}$. Here $t_{\rm g}= \Delta v/(GM_{\rm p}/b^2)$ is the time for deflecting the approach velocity $\Delta v$ by the gravity of the planet. In the coplanar (2D) accretion case, the mass flux of pebbles accreted by the planet is \begin{equation} \dot M_{\rm PA,2D} \sim 2 b_{\rm set} \Delta v \Sigma_{\rm P} \sim 2 \sqrt{GM_{\rm p} t_{\rm s} \Delta v} \Sigma_{\rm P}. \label{eq:mdotpeb2d} \end{equation} By definition, the pebble accretion efficiency for a planet on a circular orbit is then \begin{equation} \varepsilon_{\rm 0,2D} = \frac{\dot M_{\rm PA}}{\dot M_{\rm disk}} \sim \frac{ \sqrt{GM_{\rm p} t_{\rm s} \Delta v} (1 + \tau_{\rm s}^2) }{2 \pi \tau_{\rm s} \eta v_{\rm K} r }. \label{eq:eff0} \end{equation} Rewritten in terms of dimensionless quantities, this expression becomes \begin{equation} \varepsilon_{\rm 0,2D} = A_{\rm 2D} \sqrt{\frac{q_{\rm p} }{ \tau_{\rm s} \eta^2} \left( \frac{\Delta v}{v_{\rm K}} \right) }, \label{eq:oneefficiency0a} \end{equation} where $A_\mathrm{2D} =0.32$ is a fitting factor. In the above expression, the factor $(1 + \tau_{\rm s}^2)$ has been omitted since we focus on pebbles with $ \tau_{\rm s} <1$.
Here we have used the dimensionless quantities \begin{equation} q_{\rm p} \equiv \frac{M_{ \rm p}}{M_\star}= 3\times 10^{-6} \left(\frac{M_{\rm p}}{1 \ \rm M_\oplus} \right); \ \eta \equiv \frac{v_{\rm hw}}{v_{\rm K}} = 10^{-3} \left(\frac{v_{\rm hw}}{30 \rm \ m/s}\right). \end{equation} Since the pebble accretion criterion (2) breaks down when the encounter time is shorter than the stopping time, a transition velocity $v_\ast$ can be calculated from $t_{\rm enc} = t_{\rm s}$: \begin{equation} v_{\ast} = (q_{\rm p} / \tau_{\rm s})^{1/3} v_{\rm K}. \label{eq:vtrans} \end{equation} When $\Delta v$ approaches $v_\ast$, the accretion transitions from the settling regime to the ballistic regime. We adopt an exponential decay function to fit such a transition\footnote{The form of the modulation factor is very similar to \citet{Ormel2010}, \citet{Ormel2012}, and \citet{Visser2016}, but the numerical factors are slightly different from these works.} \begin{equation} f_{\rm set} = \exp[ -0.5(\Delta v/v_{\ast})^{2} ]. \end{equation} When $\Delta v \ll v_{\ast} $, accretion operates in the settling regime and $f_{\rm set} = 1$; on the other hand, when $\Delta v \gg v_{\ast} $, accretion is in the ballistic regime and $f_{\rm set}=0$. Therefore, the general expression for the accretion efficiency in the settling regime reads \begin{equation} \varepsilon_{\rm set,2D} = A_{\rm 2D} \sqrt{\frac{q_{\rm p} }{ \tau_{\rm s} \eta^2} \left( \frac{\Delta v}{v_{\rm K}} \right) } f_\mathrm{set}. \label{eq:efficiency1} \end{equation} \subsubsection{Circular \& Eccentric cases} \label{sec:appendixecc} For a planet on a circular orbit, its relative velocity $\Delta v$ is in principle the sum of the headwind velocity, $v_{\rm hw} \simeq \eta v_{\rm K}$, and the Keplerian shear velocity between the planet and the pebble, $ v_{\rm sh} \sim \Omega_{\rm K} b_{\rm set}$ (\fg{frames}).
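As a numerical illustration of the settling-regime efficiency \eq{efficiency1} together with the $f_{\rm set}$ modulation (a sketch; the function name and argument conventions are ours, and all quantities are dimensionless):

```python
import math

A_2D = 0.32  # fitted prefactor from the simulations

def eps_set_2d(q_p, tau_s, eta, dv_over_vk):
    """2D settling-regime pebble accretion efficiency; q_p = M_p/M_star,
    tau_s the dimensionless stopping time, eta the headwind prefactor,
    dv_over_vk = Delta v / v_K."""
    v_star = (q_p / tau_s) ** (1.0 / 3.0)                # v*/v_K
    f_set = math.exp(-0.5 * (dv_over_vk / v_star) ** 2)  # settling-to-ballistic cutoff
    return A_2D * math.sqrt(q_p / (tau_s * eta**2) * dv_over_vk) * f_set
```

For an Earth-mass planet ($q_{\rm p}=3\times10^{-6}$) with $\tau_{\rm s}=0.05$, $\eta=10^{-3}$, and a headwind-dominated $\Delta v \simeq \eta v_{\rm K}$, this gives $\varepsilon_{\rm set,2D}\approx 0.08$.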
In the literature, pebble accretion in the circular ($e_{\rm p}=0$) limit is classified into two regimes depending on which contribution dominates $\Delta v$: the headwind regime (Bondi regime) or the shear regime (Hill regime) \citep{Lambrechts2012,Guillot2014}. Note that the shear velocity ($v_{\rm sh} \propto b_{\rm set}$) increases with $ \tau_{\rm s}$ and $M_{\rm p}$. Therefore, for a given pebble size and disk properties (same $ \tau_{\rm s}$ and $v_{\rm hw}$), planetesimals and low-mass planets tend to be in the headwind regime, whereas massive planets would be in the shear regime. Equivalently, expressed in terms of $ \tau_{\rm s}$, accretion of small-$ \tau_{\rm s}$ pebbles takes place in the headwind regime whereas accretion of large-$ \tau_{\rm s}$ pebbles will be in the shear regime (as seen from \fg{diff}). When $v_{\rm hw} \simeq v_{\rm sh}$ (or $\varepsilon_{\rm 0,hw} \simeq \varepsilon_{\rm 0,sh} $), a transition mass ratio for the above two regimes is defined as $q_{\rm hw/sh} = \eta^3/ \tau_{\rm s}$. We construct an expression for the relative velocity $\Delta v$ to be used in \eq{eff0}, which combines both the headwind and the shear regimes. For a planet on a circular orbit, we define $\Delta v = v_{\rm cir}$ with \begin{equation} v_{\rm cir} = \left[1 + a_{\rm hw/sh} \left(\frac{q_{\rm p}}{q_{\rm hw/sh}}\right) \right]^{-1} v_{\rm hw} + v_{\rm sh}, \label{eq:vcir2} \end{equation} where $v_{\rm hw} = \eta v_{\rm K}$ and $v_{\rm sh} =0.52 (q_{\rm p} \tau_{\rm s})^{1/3} v_{\rm K}$. Numerically, we fit $a_{\rm hw/sh}= 5.7$. \Eq{vcir2} provides a smooth transition at the boundary, ensuring that $ v_{\rm cir}= v_{\rm hw}$ for $q_{\rm p} \ll q_{\rm hw/sh}$ and $ v_{\rm cir}= v_{\rm sh}$ for $q_{\rm p} \gg q_{\rm hw/sh}$. When a planet is on an eccentric orbit, the relative velocity in addition includes an eccentricity contribution due to the elliptic orbit of the planet.
Here $v_{\rm ecc}$ is the eccentric velocity of the planet relative to its circular Keplerian value. Therefore, in \eq{efficiency1} \begin{equation} \Delta v = \max (v_{\rm cir}, v_{\rm ecc}). \end{equation} We fit $v_{\rm ecc} = 0.76 e_{\rm p} v_{\rm K}$ from simulations, consistent with the analysis of \cite{Guillot2014}. When the eccentricity is large enough that $v_{\rm ecc}> v_{\rm cir}$, the accretion is in the eccentricity regime instead of the shear/headwind regime. \subsection{Settling regime -- 3D limit} For completeness, we also include the expression for the 3D accretion efficiency here. The mass flux of pebbles accreted by the planet is \begin{equation} \dot M_{\rm set,3D} = b_{\rm set}^2 \Delta v \rho_{\rm P} \sim GM_{\rm p} t_{\rm s} \rho_{\rm P}, \label{eq:mdotpeb3d} \end{equation} where $\rho_{\rm P} = \Sigma_{\rm P}/( \sqrt{2 \pi}r_{\rm p} h_{\rm P})$ is the volume density of the pebbles and $h_{\rm P}$ is the aspect ratio of the pebble layer. The expression for the pebble accretion efficiency in the 3D limit reads \begin{equation} \varepsilon_\mathrm{set,3D} = A_{\rm 3D} \frac{q_{\rm p}}{\eta h_{\rm P}} f_\mathrm{set}^2, \end{equation} where $A_{\rm 3D}$ will be calculated in Paper II. Note that in the expression for $\varepsilon_\mathrm{set,3D}$, $\Delta v$ only appears in $f_\mathrm{set}$. \subsection{Ballistic regime} In the ballistic regime, the accretion radius is significantly reduced due to the lack of gas drag, resulting in a drop of the accretion efficiency (see \fg{default}). The ballistic accretion radius now reads \citep{Safronov1972} \begin{equation} b_{\rm bal} = R_{\rm p} \sqrt{\left( \frac{v_{\rm esc}}{\Delta v }\right)^2 +1 }. \label{eq:reffbl} \end{equation} The accretion efficiency in the $2$D ballistic regime becomes \begin{equation} \varepsilon_{\rm bal,2D} = \frac{\Delta v b_{\rm bal} }{ 2\pi r_{\rm p} \tau_{\rm s} \eta v_{\rm K} } = \frac{ R_{\rm p} \sqrt{ {v_{\rm esc}}^2 + {\Delta v}^2 } }{ 2\pi r_{\rm p} \tau_{\rm s} \eta v_{\rm K} }.
\label{eq:effbl01} \end{equation} We note that when $v_{\ast} \lesssim \Delta v \lesssim v_{\rm esc}$, the accretion is in the gravitational focusing regime and $\varepsilon_{\rm bal,2D}$ is independent of the eccentricity. On the other hand, when $\Delta v\gtrsim v_{\rm esc}$, the accretion is in the geometric regime. The efficiency then increases with the eccentricity and $b_{\rm bal}$ is reduced to the physical radius of the planet \citep{Guillot2014}. We rewrite the above formula including the $(1-f_{\rm set})$ term as \begin{equation} \varepsilon_{\rm bal,2D} = \frac{ R_{\rm p} } { 2\pi \tau_{\rm s} \eta r_{\rm p} } \sqrt{ \frac{2 q_{\rm p} r_{\rm p}}{R_{\rm p } } +\left( \frac{\Delta v}{v_{\rm K}} \right)^{2} } \left(1 - f_{\rm set}\right). \label{eq:effbl02} \end{equation} In the 3D limit, we have \begin{equation} \varepsilon_{\rm bal,3D} = \frac{1} {4 \sqrt{2\pi} \eta \tau_{\rm s} h_{\rm P}} \left( 2q_{\rm p} \frac{v_{\rm K}}{\Delta v} \frac{R_{\rm p}}{r_{\rm p}} +\frac{R_{\rm p}^2}{r_{\rm p}^2} \frac{\Delta v}{v_{\rm K}} \right) \left( 1 -f_\mathrm{set}^2 \right). \end{equation} For the notations of masses, velocities, timescales, etc., see Table A.1.
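The 2D ballistic efficiency \eq{effbl02} can likewise be evaluated numerically (a sketch; the function name and the dimensionless argument $R_{\rm p}/r_{\rm p}$ convention are ours):

```python
import math

def eps_bal_2d(q_p, tau_s, eta, dv_over_vk, Rp_over_rp):
    """2D ballistic-regime pebble accretion efficiency; all arguments
    dimensionless, with Rp_over_rp = R_p / r_p."""
    v_star = (q_p / tau_s) ** (1.0 / 3.0)          # settling/ballistic transition
    f_set = math.exp(-0.5 * (dv_over_vk / v_star) ** 2)
    vesc2 = 2.0 * q_p / Rp_over_rp                 # (v_esc / v_K)^2 = 2 q_p r_p / R_p
    return (Rp_over_rp / (2.0 * math.pi * tau_s * eta)
            * math.sqrt(vesc2 + dv_over_vk**2) * (1.0 - f_set))
```

In the settling limit ($\Delta v \ll v_\ast$) the $(1-f_{\rm set})$ factor suppresses this branch, while for $\Delta v \gg v_\ast$ it carries the full efficiency, as intended.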
\begin{table*} \caption{List of notations} \centering \begin{tabular}{lp{10cm}} \hline \hline Symbol & Description \\ \hline $\varepsilon_{\rm 3D}$ or $\varepsilon_{\rm 2D}$ & 3D or 2D pebble accretion efficiency (probability of capture) \\ $\varepsilon_0$ & Pebble accretion efficiency for planets on circular orbits \\ $\varepsilon_{\rm set}$ & Pebble accretion efficiency in the settling regime \\ $\varepsilon_{\rm bal}$ & Pebble accretion efficiency in the ballistic regime \\ $A_{\rm 2D}$ & Prefactor of pebble accretion efficiency in 2D\\ $a_{\rm m}$ & Acceleration for type I migration \\ $a_{\rm e}$ & Acceleration for the eccentricity damping \\ $a_\mathrm{p}$ & Planet semimajor axis \\ $b$ & Impact parameter in the local frame\\ $b_{\rm set}$ & Accretion radius in the settling regime \\ $b_{\rm bal}$ & Accretion radius in the ballistic regime \\ $e_\mathrm{p}$ & Planet eccentricity \\ $e_\mathrm{eq}$ & Equilibrium eccentricity at resonance \\ $\Omega_{\rm p}$ & Keplerian angular velocity at planet's location\\ $\eta$ & Headwind prefactor \\ $\Sigma_{\rm gas}$ & Gas surface density \\ $\Sigma_{\rm P}$ & Pebble surface density \\ $\rho_{\rm gas}$ & Gas volume density \\ $\rho_{\rm P}$ & Pebble volume density \\ $f_{\rm set}$ & Attenuation factor \\ $H_\mathrm{gas}$ & Gas disk scaleheight \\ $H_\mathrm{P}$ & Pebble disk scaleheight \\ $h_\mathrm{gas}$ & Gas disk aspect ratio \\ $h_\mathrm{P}$ & Pebble disk aspect ratio \\ $M_\mathrm{p}$ & Planet mass \\ $M_\mathrm{p,iso}$ & Pebble isolation mass \\ $M_\mathrm{\star}$ & Star mass \\ $N_\mathrm{hit}$ & Number of pebbles hit onto the planet in the global simulation \\ $N_\mathrm{tot}$ & Total number of pebbles initially given in the global simulation \\ $P$ & Gas disk pressure \\ $q_\mathrm{p}$ & Mass ratio between the planet and the star \\ $R_\mathrm{H}$ & Hill radius of the planet \\ $R_\mathrm{Bondi}$ & Bondi radius of the planet \\ $R_\mathrm{p}$ & Planet physical radius \\ $r_\mathrm{p}$ & Distance between the
planet and the central star \\ $r$ & Distance between the pebble and the central star \\ $t_{\rm s}$ & Stopping time of the pebble\\ $t_{\rm syn}$ & Synodical time of the pebble \\ $\tau_{\rm m}$ & Type I migration timescale \\ $\tau_{\rm e}$ & Eccentricity damping timescale \\ $\tau_{\rm s}$ & Dimensionless stopping time ($\tau_{\rm s}=t_\mathrm{s}\Omega_{\rm K}$) \\ $v_{\rm esc}$ & Planet escape velocity \\ $v_{\ast}$ & Transition velocity from the settling regime to the ballistic regime \\ $v_{\rm r}$ & Radial drift velocity of the pebble \\ $v_{\rm \phi}$ & Azimuthal velocity of the pebble \\ $v_{\rm hw}$ & Headwind velocity of disk gas \\ $v_{\rm sh}$ & Keplerian shear velocity between the planet and the pebble \\ $v_{\rm ecc}$ & Relative velocity between a planet on an eccentric and a circular orbit \\ $v_{\rm cir}$ & Relative velocity between the pebble and the planet on a circular orbit \\ $v_{\rm K}$ & Keplerian velocity at planet's location\\ $\dot M_{\rm PA}$ & Pebble mass accretion rate onto the planet \\ $\dot M_{\rm disk}$ & Pebble mass flux in the disk \\ $\Delta r_{\rm p}$ & Distance between the pebble and the planet \\ $\Delta v$ & Relative velocity between the pebble and the planet \\ \hline \end{tabular} \end{table*} \section{Application: formation of a secondary planet} \label{sec:application} In this section we envision an already-formed primary planet (a gas giant) and a planetary embryo for the future secondary planet. We discuss how secondary planet formation is aided by the higher pebble accretion efficiency at resonance locations, where eccentricities are excited by the primary giant planet. Based on observational occurrence-rate statistics, $10\%$--$20\%$ of solar-type systems contain gas giants \citep{Cumming2008,Mayor2011}. The formation of a gas giant requires a solid core mass that exceeds $10 \ \rm M_\oplus$ \citep{Pollack1996} before the gas disk is depleted.
Disk migration is an efficient way to transport embryos to construct such cores, but it also depends on detailed disk models \citep{Cossou2014,coleman2014,Liu2015}. On the other hand, (sub)millimeter dust observations suggest the existence of massive pebble reservoirs ($\sim 100 \ \rm M_\oplus$) in the outer regions of disks during the gas-rich phase \citep{Ricci2010,Andrews2013,Ansdell2016}. In the pebble accretion scenario the inward drift of these pebbles provides the building blocks to form massive planets \citep{Lambrechts2014,Bitsch2015b}. Here we only focus on the pebble accretion scenario. Once the core mass reaches $10 \ \rm M_\oplus$, the gas accretion enters a runaway mode \citep{Pollack1996}. As a result, the surrounding embryos and planetesimals are strongly perturbed by the sudden increase of the planet's gravity \citep{Zhou2007b,Raymond2017}. These bodies could be scattered outward during close encounters. They subsequently migrate inward either due to type I migration (embryos) or aerodynamic gas drag (planetesimals), and could be captured into the $2$:$1$ mean motion resonance with this giant planet \citep{Zhou2007b}. Here, we only focus on the largest embryo among these bodies because it has the highest chance to grow into a secondary planet. The embryo's eccentricity will be excited at the resonant location. Meanwhile, it can accrete pebbles that drift from the outer part of the disk. With the pebble accretion prescription of \se{expression}, we conduct N-body simulations to follow the embryo's orbital and mass evolution. We explore how massive the pebble disk must be to produce a secondary planet. We restrict our problem to a quiescent (low-turbulence) disk such that pebbles settle into the disk midplane. The accretion is thus in the $2$D regime (the pebble scale height $H_{\rm P}$ is smaller than the impact radius $b_{\rm set}$). The influence of disk turbulence and the prescription of $3$D pebble accretion will be presented in Paper II.
We use the pebble generation model derived by \cite{Lambrechts2014}. The adopted gas surface density and disk aspect ratio are \citep{Lambrechts2014} \begin{equation} \Sigma_{\rm gas} = 500 r_{\rm AU}^{-1} {\rm \ gcm^{-2}}, \ h_{\rm gas} = 0.033 r_{\rm AU}^{1/4}, \end{equation} where $r_{\rm AU}= r/(1 \rm \ AU)$, the headwind prefactor is $\eta =1.5 \times 10^{-3} r_{\rm AU}^{1/2}$, and the dust-to-gas ratio is $1\%$. The formation of the first gas giant may already take a few Myr, depending on the opacity in its envelope \citep{Movshovitz2010}. Thus, $\dot M_{\rm disk} = 5 \times 10^{-5} \ \rm M_\oplus/yr$ is the pebble mass flux estimated from \cite{Lambrechts2014} (assuming $t=5 \times 10^6$ yr in their Eq. (14)). The pebble aerodynamic size is $ \tau_{\rm s} = 0.05$ based on their Eqs. (20) and (25). The above $ \tau_{\rm s}$ is consistent with advanced dust coagulation models \citep{Birnstiel2012} as well as disk observations \citep{Perez2015,Tazzari2016}. A gas giant of $1 \ M_{\rm J}$ is initialized at $1 \ \rm AU$. We ignore the migration of the gas giant, but do consider the eccentricity damping by the disk gas, which operates on a timescale of $\tau_{\rm e} = 10^{3} \ \rm yr$ \citep{Bitsch2013}. Since the Jupiter-mass planet is much more massive than the embryo, the eccentricity of the gas giant remains low even without any gas damping. We find that the specific choice of $\tau_{\rm e}$ for the giant planet does not affect our simulation results. A $0.1 \ \rm M_\oplus$ embryo is initialized at a period ratio of $2.1$ (just exterior to the $2$:$1$ resonance) with the inner giant planet. We consider two scenarios: (i) a resonant case where the embryo experiences type I migration and is captured in resonance; and (ii) a non-resonant case without type I migration but including eccentricity damping for comparison.
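For reference, the adopted disk profiles can be encoded directly (a sketch; the function name is ours):

```python
def disk_profile(r_au):
    """Gas surface density [g cm^-2], aspect ratio, and headwind
    prefactor of the adopted disk model, as functions of r in AU."""
    sigma_gas = 500.0 / r_au          # Sigma_gas = 500 r_AU^-1 g cm^-2
    h_gas = 0.033 * r_au**0.25        # h_gas = 0.033 r_AU^(1/4)
    eta = 1.5e-3 * r_au**0.5          # eta = 1.5e-3 r_AU^(1/2)
    return sigma_gas, h_gas, eta
```

At $1 \ \rm AU$ this returns $\Sigma_{\rm gas}=500 \ \rm g\,cm^{-2}$, $h_{\rm gas}=0.033$, and $\eta=1.5\times 10^{-3}$.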
For the resonant case, we implement the additional accelerations of the planet-disk interaction and eccentricity damping into the N-body code based on \citet{Papaloizou2000} and \citet{Cresswell2008}: $\bm{a}_{\rm m} = - \bm{v_{\rm p}}/ \tau_{\rm m}$ and $\bm{a}_{\rm e} = - 2 (\bm{v_{\rm p}} \bm\cdot \bm{r_{\rm p}}) \bm{r_{\rm p}} /(r_{\rm p}^2 \tau_{\rm e})$. The embryo's migration timescale $\tau_{\rm m}$ and the eccentricity damping timescale $\tau_{\rm e}$ are adopted from Eqs. (11) and (13) of \cite{Cresswell2008}. These expressions take into account the supersonic regime when $e_{\rm p} \gtrsim h_{\rm gas}$. Thus, for the resonant case we expect that the embryo will migrate and get trapped into the $2$:$1$ resonance, while for the comparison case the embryo will remain on its original orbit. It should be noted that we do not consider the inclinations of the two planets. Since the inclination damping timescale is much shorter than the migration timescale \citep{Bitsch2011}, their inclinations would soon be damped down, restricting both planets to coplanar orbits. Thus, the above simplification is appropriate. The gas giant also opens a gap in the disk gas with a typical width of the Hill radius. The depletion of the gas near the planet also changes the gas pressure gradient, resulting in the truncation of drifting pebbles. Therefore, a wider dust gap can be formed exterior to the gas gap. Hydrodynamic simulations show that the width of the dust gap produced by a Jupiter-mass planet is less than the distance between the giant planet and its $2$:$1$ resonance \citep{Zhu2012}. Therefore, embryos at the $2$:$1$ resonance can still accrete pebbles that drift from the outer region of the disk. \footnote{ In a disk of very low viscosity, the gap opened by the gas giant may be wider than its $2$:$1$ resonance. Our model discussed in this section would not apply in that case.
} \begin{figure}[t] \includegraphics[scale=0.5, angle=0]{fsp.pdf} \caption{Eccentricity (a), pebble accretion efficiency (b), and mass of the embryo (c) vs. the total mass of drifting pebbles (lower x axis) and time (upper x axis). The black line corresponds to the resonant case with type I migration and resonance trapping, while the red line represents the non-resonant comparison case, in which the embryo, without type I migration, moves on a nearly circular orbit. } \label{fig:fsp} \end{figure} \fg{fsp} shows a) the eccentricity, b) the pebble accretion efficiency, and c) the mass growth of the embryo as functions of the total mass of drifting pebbles (lower x-axis) and time (upper x-axis). For the resonant case, the embryo gets trapped into the $2$:$1$ resonance, where the eccentricity is gradually excited (black line in \fg{fsp}a). Meanwhile, the gas damps the eccentricity and circularizes the orbit. An equilibrium eccentricity is achieved by balancing gas damping and resonant excitation. The equilibrium eccentricity ($e_{\rm eq}$) can be analytically derived by solving Lagrange's planetary equations with migration and eccentricity damping (see details in \cite{Teyssandier2014}). Due to the high mass ratio between the giant planet and the embryo, from their Eq. (46) $e_{\rm eq}$ can be simplified as $e_{\rm eq} \simeq \sqrt{\tau_{\rm e}/\tau_{\rm m}} \simeq h_{\rm gas}$. In \fg{fsp}a we indeed find $e_{\rm eq} \simeq 0.035$, consistent with the above analysis. However, the embryo in the comparison case without migration stays on a nearly circular orbit ($e_{\rm p} < 4 \times 10^{-3}$; red line in \fg{fsp}a). Its eccentricity is non-zero due to weak, secular perturbations from the inner gas giant. We find in \fg{fsp}b that after $3\times 10^{4}$ yr the pebble accretion efficiency in the resonant case is higher than in the non-resonant comparison case.
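The migration and damping accelerations quoted above, $\bm{a}_{\rm m} = - \bm{v}_{\rm p}/ \tau_{\rm m}$ and $\bm{a}_{\rm e} = - 2 (\bm{v}_{\rm p} \cdot \bm{r}_{\rm p}) \bm{r}_{\rm p} /(r_{\rm p}^2 \tau_{\rm e})$, can be sketched for a coplanar (2D) integration as follows (an illustrative helper; the function name and plain-tuple vector handling are ours):

```python
def damping_accel(v_p, r_p, tau_m, tau_e):
    """Type I migration and eccentricity damping accelerations in the
    Papaloizou & Larwood / Cresswell & Nelson form, for 2D vectors
    given as (x, y) tuples; returns (a_m, a_e)."""
    a_m = tuple(-v / tau_m for v in v_p)                 # a_m = -v_p / tau_m
    vr = v_p[0] * r_p[0] + v_p[1] * r_p[1]               # v_p . r_p
    r2 = r_p[0] ** 2 + r_p[1] ** 2                       # r_p^2
    a_e = tuple(-2.0 * vr * x / (r2 * tau_e) for x in r_p)
    return a_m, a_e
```

On a circular orbit, where $\bm{v}_{\rm p} \perp \bm{r}_{\rm p}$, the damping term $\bm{a}_{\rm e}$ vanishes, as expected, since only the radial velocity component is damped.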
As already seen in \fg{default}, this is because the moderate eccentricity attained in the $2$:$1$ resonance facilitates a higher pebble accretion rate, leading to a more massive planet. Because of its higher mass, the resonant embryo then accretes pebbles more efficiently, promoting an even higher pebble accretion efficiency. That is why in \fg{fsp}b both efficiencies increase with time, but the efficiency in the resonant case (black) remains higher than in the comparison case (red). As a result, in \fg{fsp}c the resonant embryo grows more rapidly than the non-resonant one, and this mass difference increases with time. In \fg{fsp}c we also see that the final mass of the embryo depends on the total amount of pebbles available in the outer disk. Since the formation of the giant planet already consumes large amounts of pebbles, the amount of pebbles left behind to feed the secondary planet should be limited. For instance, if the total mass of drifting pebbles beyond the planet's orbit is limited to $20 \ \rm M_\oplus$, the embryo at resonance can grow to a $3 \ \rm M_\oplus$ super-Earth, whereas the embryo on a non-resonant orbit will only reach $ 1 \ \rm M_\oplus$. Similarly, if the total pebble mass is limited to $40 \ \rm M_\oplus$, the embryo in resonance can grow into a planet of $11 \ \rm M_\oplus$, whereas the non-resonant embryo only attains $3 \ \rm M_\oplus$. For the resonant case, then, the massive core is able to initiate rapid gas accretion and grow into a giant planet, whereas for the non-resonant case, the slow growth of the embryo results in a final super-Earth-mass planet. To conclude, an embryo at resonance accretes pebbles more efficiently and grows faster than neighboring embryos that move on nearly circular orbits. A secondary planet therefore preferentially forms at the resonance location.
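The quoted value $e_{\rm eq}\simeq 0.035$ can be checked against the $e_{\rm eq}\simeq h_{\rm gas}$ estimate using the adopted aspect-ratio profile $h_{\rm gas}=0.033\,r_{\rm AU}^{1/4}$ (a two-line sketch, assuming the giant sits at $1 \ \rm AU$):

```python
# e_eq ~ sqrt(tau_e/tau_m) ~ h_gas, evaluated at the 2:1 resonance
a_res_au = 2.0 ** (2.0 / 3.0)      # a/a_giant for a 2:1 period ratio (Kepler's third law)
e_eq = 0.033 * a_res_au ** 0.25    # h_gas = 0.033 r_AU^(1/4) at the resonance
```

This gives $e_{\rm eq}\approx 0.037$, close to the simulated equilibrium value of $0.035$.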
Radial velocity surveys show that multiple gas giants pile up at the $2$:$1$ resonance \citep{Wright2011}, supporting our hypothesis. \section{Circular pebble accretion} In this section we only consider a planet on a circular orbit ($e =0$). The simulations comparing the different methods are presented in \se{compare}, and the analytical fitting expression for the pebble accretion efficiency is given in \se{analy}. \subsection{Comparison between different approaches} \label{sec:compare} We conduct simulations with three different approaches and compare the resulting pebble accretion efficiencies. These approaches are the local frame integration \citep[Local;][]{Ormel2010}, the global frame direct integration (Global-direct), and the global frame hybrid method that combines direct integration with the strong coupling approximation (Global-hybrid). Three planet masses with varying $ \tau_{\rm s}$ have been explored. We focus on the pebble accretion (settling) cases, where typically $10^{-3} \lesssim \tau_{\rm s} < 1$, but for completeness of the method comparison the cases of $ \tau_{\rm s} \gtrsim 1$ (ballistic regime) have also been investigated. The strong coupling approximation (SCA) is applied to particles with $ \tau_{\rm s} \leqslant 10^{-2}$. In \fg{diff}, we verify that the results of the hybrid method match those of the direct method well for these low-$ \tau_{\rm s}$ particles. Moreover, the Global-hybrid algorithm significantly reduces the computational time: the hybrid integration for a $0.1 \ \rm M_\oplus$ planet accreting particles of $ \tau_{\rm s} =10^{-2}$ is $10$ times faster than the direct integration, and $50$ times faster for $ \tau_{\rm s} =10^{-3}$. We also find in \fg{diff} that the results from the local and the global frame are generally in good agreement for the low- and intermediate-mass planets ($10^{-3} \ \rm M_\oplus$ and $0.1 \ \rm M_\oplus$).
Only for the massive planet case ($10 \ \rm M_\oplus$) are the efficiencies obtained from the local simulations higher than those from the global simulations. This difference is mainly caused by two limitations of the local approach. First, within the local framework there is a risk of counting particles multiple times. There are two possible ways of accretion when particles drift inward and across the orbit of the planet: some of them enter the Hill sphere and directly hit the planet, while others just fly by during the encounter but remain within the co-orbital region and re-encounter the planet after a synodical time ($t_{\rm syn} \sim \Omega_{\rm p}^{-1} r_{\rm p }/\Delta r_{\rm co}$, where $\Delta r_{\rm co}$ is the width of the co-orbital region of the planet). We initially place particles on both sides of the planet in the local frame simulations. The problem with this initialization is that it may count the same particle twice, in particular when the radial drift of the particle within a synodical time is small compared to the effective accretion radius ($r_{\rm eff}$), i.e., $v_{\rm r} t_{\rm syn} < r_{\rm eff}$. We estimate that for planets beyond a few Earth masses the efficiency may be overestimated due to this multiple counting effect. The other reason is the linearization in the local frame. In the local approach, we ignore the high order terms of $(\Delta r_{\rm p}/r_{\rm p})^n$ ($n \geq2$) in the particle's equation of motion, owing to the fact that $\Delta r_{\rm p} \ll r_{\rm p}$, where $\Delta r_{\rm p}$ is the distance between the planet and the particle. But the integration domain (the maximum $\Delta r_{\rm p}$ we consider) is larger for a more massive planet. The condition $\Delta r_{\rm p} \ll r_{\rm p}$ then breaks down, and this first order approximation becomes inappropriate for massive planets. To conclude, both effects are biased to large $M_{\rm p}$, and as a result the local results overestimate $\epsilon$ in the high mass branch.
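The multiple-counting criterion can be sketched numerically. The co-orbital width ($\Delta r_{\rm co} \sim 2.5\,R_{\rm H}$) and the effective accretion radius ($r_{\rm eff} \sim R_{\rm H}$, at most appropriate in the shear regime) adopted below are order-of-magnitude assumptions for illustration, not calibrated values from this paper:

```python
def double_counting_risk(q, tau_s=0.1, eta=1e-3):
    """True when a particle drifts less than r_eff per synodical time: v_r*t_syn < r_eff.

    Units: r_p = Omega_p = v_K = 1.  Assumed, illustrative choices:
    co-orbital width 2.5 R_H and accretion radius ~R_H (shear regime).
    """
    R_H = (q / 3.0) ** (1.0 / 3.0)              # Hill radius for a_p = 1
    t_syn = 1.0 / (2.5 * R_H)                   # t_syn ~ (1/Omega_p) * r_p / dr_co
    v_r = 2.0 * tau_s * eta / (1.0 + tau_s**2)  # radial drift speed (v_K = 1)
    return v_r * t_syn < R_H
```

With these choices the criterion flags the $10 \ \rm M_\oplus$ case but not the $10^{-3} \ \rm M_\oplus$ case, in line with the estimate above that planets beyond a few Earth masses are affected.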
We also find in \fg{diff} that the local and the global simulations fail to match each other when $ \tau_{\rm s}$ becomes large. The cases of $ \tau_{\rm s} =10$ are shown, for which the accretion is in the ballistic regime. The two approaches give very different results, and the mismatch increases with the planet mass. In this situation, the particle's eccentricity is excited by the planet during the encounter, and eccentricity damping by gas drag takes much longer than the encounter time ($\simeq$ orbital time). However, only (unperturbed) circular pebble orbits are assumed in the local frame simulations, which is no longer appropriate at such large $ \tau_{\rm s}$. The global simulations do not suffer from this problem, since they trace the full motion of the particles. For the $M_{\rm p} = 10 \ \rm M_\oplus $ and $ \tau_{\rm s} =10$ case, the global simulation gives zero efficiency. This is because the particles behave like planetesimals: they get trapped into resonance with the planet. In summary, the global and the local frame are mostly consistent with each other, but the global simulation accurately treats the cases of high planet mass ($M_{\rm p} \gtrsim 1-10 \ \rm M_\oplus$) and large particle size ($ \tau_{\rm s} \gtrsim 1 $). Moreover, extending the global approach to planets on eccentric orbits is straightforward and physically transparent, which is not the case for the local approach. \subsection{Analytical expression} \label{sec:analy} We find in \fg{diff} that the accretion efficiency is a decreasing function of the pebble size and an increasing function of the planet mass. A given planet accretes small pebbles more efficiently than large ones.
Likewise, for pebbles of a given size, a more massive planet accretes more efficiently than a less massive one. Here we use a physical order-of-magnitude argument to explain these correlations; the final expression is obtained by combining the analytical scalings with the simulation results. There are two key requirements for pebble accretion \citep[see][]{Ormel2017}: \begin{enumerate} \item The time for the pebble to settle onto the planet is shorter than the encounter time, $t_{\rm set} < t_{\rm enc}$, so that the pebble is able to settle onto the planet before the flyby. This criterion can equivalently be stated as the gravitational deflection time being shorter than the gas-pebble coupling time, $t_{\rm g} < t_{\rm s}$. \item The gas-pebble coupling time is shorter than the pebble-planet encounter time, $t_{\rm s} < t_{\rm enc}$. Otherwise, the gas drag is unimportant during the encounter. \end{enumerate} The failure of either criterion indicates that the accretion is not in the settling regime. When the gravitational acceleration balances the gas drag, the settling velocity of a particle approaching the planet is $v_{\rm set} = (GM_{\rm p}/r_{\rm eff}^2) t_{\rm s}$, where $r_{\rm eff}$ is the effective pebble accretion radius, and the settling time is $t_{\rm set} \simeq r_{\rm eff}/v_{\rm set}$. The bypass timescale during an encounter is $t_{\rm enc} \simeq r_{\rm eff}/\Delta v$, where $\Delta v$ is the relative velocity between the planet and the pebble. Based on criterion (1), $r_{\rm eff}$ can be written as \begin{equation} r_{\rm eff} \sim \sqrt{\frac{GM_p t_s}{\Delta v}}. \label{eq:reff1} \end{equation} For a planet on a circular orbit, the relative velocity is either the headwind velocity of the disk gas, $v_{\rm hw} \simeq \eta v_{\rm K}$, or the Keplerian shear velocity between the planet and the pebble, $ v_{\rm sh} \sim \Omega_{\rm K} r_{\rm eff}$. Circular pebble accretion can therefore naturally be classified into two regimes according to the dominant velocity: the headwind regime (Bondi regime) or the shear regime (Hill regime).
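These scalings can be evaluated directly. The following sketch works in units $G = M_\star = a_{\rm p} = 1$ (so $\Omega_{\rm K} = v_{\rm K} = 1$ at the planet) and omits order-unity prefactors, which is an assumption of this illustration rather than the calibrated fit derived below:

```python
def accretion_radius(q, tau_s, eta=1e-3):
    """Order-of-magnitude r_eff from Eq. (reff1), prefactors of order unity omitted.

    Headwind branch: dv = v_hw = eta (v_K = 1).  Shear branch: dv = Omega*r_eff,
    which solved self-consistently gives r_eff = (q*tau_s)**(1/3).
    """
    r_hw = (q * tau_s / eta) ** 0.5
    r_sh = (q * tau_s) ** (1.0 / 3.0)
    # the larger relative velocity sets the regime; with Omega = 1, v_sh = r_sh
    if r_sh > eta:
        return "shear", r_sh
    return "headwind", r_hw
```

Setting $v_{\rm sh} = v_{\rm hw}$ in this sketch gives a transition at $q \sim \eta^3/\tau_{\rm s}$, consistent (up to the numerically obtained prefactor) with the $q_{\rm hw/sh}$ derived below.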
In the coplanar (2D) accretion case, the mass flux of pebbles accreted by the planet is given by \begin{equation} \dot M_{\rm PA} = 2 r_{\rm eff} \Delta v \Sigma_{\rm P} \sim 2 \sqrt{GM_p t_s \Delta v} \Sigma_{\rm P}. \label{eq:mdotpeb} \end{equation} According to the definition, the pebble accretion efficiency for a planet on a circular orbit is then \begin{equation} \epsilon_0 = \frac{\dot M_{\rm PA}}{\dot M_{\rm disk}} \sim \frac{ \sqrt{GM_p t_{\rm s} \Delta v} (1 + \tau_{\rm s}^2) }{2 \pi \tau_{\rm s} \eta v_{\rm K} r }. \end{equation} Based on our numerical results, we adjust the prefactors in these expressions and further distinguish between the headwind and the shear regimes: \begin{equation} \epsilon_0 = \begin{cases} \epsilon_{\rm 0,hw} = 0.30 q^{1/2} \eta^{-1/2} \tau_s^{-1/2} & \mbox{when } q \leq q_{\rm hw/sh }; \\ \epsilon_{\rm 0,sh}= 0.25 q^{2/3} \eta^{-1} \tau_s^{-1/3} & \mbox{when } q> q_{\rm hw/sh}. \end{cases} \label{eq:efficiency0} \end{equation} We have used dimensionless quantities to express the efficiency, where $q \equiv M_{ \rm p}/M_{\star}= 3\times 10^{-6} (M_{\rm p} /1 \ \rm M_\oplus)$ is the planet-to-star mass ratio and $\eta \equiv v_{\rm hw}/v_{\rm K} = 10^{-3} (v_{\rm hw}/30 \rm \ m/s)$ is the headwind prefactor. The $(1+ \tau_{\rm s}^{2})$ term is neglected since we focus on particles with $ \tau_{\rm s}<1$. Because the shear velocity increases with the planet mass, we expect low-mass planets to be in the headwind regime and massive planets in the shear regime. The transition mass ratio ($q_{\rm hw/sh}$) between these two regimes can be estimated by setting $v_{\rm hw}= v_{\rm sh}$. Including a numerically obtained prefactor, we find $ q_{\rm hw/sh} = 3 \eta^3 \tau_{\rm s}^{-1}$. Finally, we adopt a uniform expression for the two regimes and enforce $\epsilon_0 \leqslant 1$. The accretion efficiency then reads \begin{equation} \epsilon_0 = \min \left[C \epsilon_{\rm 0,hw} + ( 1 -C)\epsilon_{\rm 0,sh},1\right],
\end{equation} where $C=\exp \left[-0.68(q/q_{\rm hw/sh})^{5}\right]$. This provides a sharp transition at $q_{\rm hw/sh}$ and ensures both $ \epsilon_0= \epsilon_{\rm 0,hw}$ for $q<q_{\rm hw/sh}$ and $ \epsilon_0= \epsilon_{\rm 0,sh}$ for $q>q_{\rm hw/sh}$. In \fg{diff}, we find that the analytical expression (black lines) agrees with the simulation results quite well in both regimes. One exception is for pebbles with large Stokes numbers ($ \tau_{\rm s} \gtrsim 1$), for which the accretion is no longer in the settling regime. For the high mass cases, the analytical expression matches the global simulations better than the local simulations, as explained in \se{compare}. \begin{figure*}[tbh!] \includegraphics[scale=1.0, angle=0]{diffapproach.pdf} \caption{Pebble accretion efficiency as a function of the dimensionless stopping time $ \tau_{\rm s}$ for three planet masses, obtained with the local, the global-direct, and the global-hybrid methods (see also Table~\ref{tab:tab1}); the black lines show the analytical fit of \eq{efficiency0}. } \label{fig:diff} \end{figure*} \begin{table*} \centering \caption{Pebble accretion efficiency comparison between different approaches.
The planet is in a circular orbit, and the disk headwind velocity is set to $30 \rm \ m/s$.} \begin{tabular}{llllllllllll} \hline \hline $M_{\rm p } $ ($ \rm M_{\oplus}$) & $ \tau_{\rm s}$ & \multicolumn{3}{c}{Pebble accretion efficiency $\epsilon_{\rm PA}$}\\ & & Local & Global-direct & Global-hybrid \\ \hline \hline $10^{-3}$ & $10^{-3} $ & $1.73 \times 10^{-2}$ & & $1.73 \times 10^{-2}$ \\ $10^{-3}$ & $10^{-2.5} $ & $ 9.63 \times 10^{-3}$ & & $ 9.65 \times 10^{-3}$ \\ $10^{-3}$ & $10^{-2} $ & $ 5.31 \times 10^{-3}$ &$5.31 \times 10^{-3}$ &$ 5.35 \times 10^{-3}$ \\ $10^{-3}$ & $10^{-1.5} $ & $2.71 \times 10^{-3}$ & $2.71 \times 10^{-3}$ & $2.71 \times 10^{-3}$ \\ $10^{-3}$ & $10^{-1} $ & $1.43 \times 10^{-3}$ & $1.45 \times 10^{-3}$ & $1.45 \times 10^{-3}$ \\ $10^{-3}$ & $10^{-0.5} $ & $8.9 \times 10^{-4}$ & $8.8 \times 10^{-4}$ & $8.8 \times 10^{-4}$ \\ $10^{-3}$ & $0.5 $ & $8 \times 10^{-4}$ & $8 \times 10^{-4}$ & $8 \times 10^{-4}$ \\ $10^{-3}$ & $1 $ & $8 \times 10^{-4}$ &$8 \times 10^{-4}$ & $8 \times 10^{-4}$ \\ $10^{-3}$ & $10^{0.5} $ & $ 6.5 \times 10^{-4}$ &$6.4 \times 10^{-4}$ & $6.4 \times 10^{-4}$ \\ $10^{-3}$ & $10 $ & $6.9 \times 10^{-4}$ &$6.7 \times 10^{-4}$ & $6.7 \times 10^{-4}$ \\ \hline $0.1$ & $10^{-3} $& $0.132$ & $0.129$ & $0.129$ \\ $0.1$ & $10^{-2.5} $& $7.89 \times 10^{-2}$ & $7.84 \times 10^{-2}$ & $7.84 \times 10^{-2}$ \\ $0.1$ &$10^{-2} $ & $5.10\times 10^{-2}$ & $5.10 \times 10^{-2}$ & $5.10 \times 10^{-2}$ \\ $0.1$ &$10^{-1.5} $ & $3.42 \times 10^{-2}$ & $3.42 \times 10^{-2}$ & $3.42 \times 10^{-2}$ \\ $0.1$ & $10^{-1} $ & $2.29 \times 10^{-2}$ &$2.30 \times 10^{-2}$ & $2.30 \times 10^{-2}$\\ $0.1$ & $10^{-0.5} $ & $1.49 \times 10^{-2}$ &$1.5 \times 10^{-2}$ & $1.5 \times 10^{-2}$ \\ $0.1$ & $0.5 $ & $1.27 \times 10^{-2}$ &$1.27 \times 10^{-2}$ & $1.27 \times 10^{-2}$ \\ $0.1$ & $1 $ & $7.85 \times 10^{-3}$ &$7.80 \times 10^{-3}$ & $7.80 \times 10^{-3}$ \\ $0.1$ & $10^{0.5} $ & $9.83 \times 10^{-3}$ &$9.65 \times 10^{-3}$ &
$9.65 \times 10^{-3}$ \\ $0.1$ & $10$ & $2.67\times 10^{-2}$ &$1.37 \times 10^{-2}$ & $1.37 \times 10^{-2}$ \\ \hline $10$ & $10^{-3} $& $2.83$ & $1.0$ & $1.0$ \\ $10$ & $10^{-2.5} $& $1.93$ & $0.96$ & $0.95$ \\ $10$ & $10^{-2} $& $1.25$ & $0.75$ & $0.75$ \\ $10$ &$10^{-1.5} $ & $0.83$ &$0.57$ & $0.57$ \\ $10$ &$10^{-1} $ & $0.53$ &$0.45$ & $0.45$ \\ $10$ &$10^{-0.5} $ & $0.331$ &$0.284$ & $0.284$ \\ $10$ &$0.5 $ & $0.27$ &$0.24$ & $0.24$ \\ $10$ & $1 $ & $0.23$ &$0.13$ & $0.13$ \\ $10$ & $10^{0.5} $ & $0.386$ &$0.195$ & $0.195$ \\ $10$ &$10$ & $0.58$ &$0$ &$0$ \\ \hline \hline \end{tabular} \label{tab:tab1} \end{table*} \section{Conclusions} \label{sec:conclusion} Pebble accretion is an important mechanism that drives planet growth through the accretion of millimeter- to centimeter-sized particles in gas-rich disks. Formed in the outer regions of disks, pebbles drift inward due to the aerodynamic drag from the disk gas. When pebbles cross the orbit of a planet, a fraction of them will be accreted by the planet. In this paper, we have calculated the efficiency of pebble accretion in the $2$D limit by conducting a series of numerical integrations of the pebble's equation of motion in both the local (co-moving) frame and the global (heliocentric) frame. The key findings of this study are the following: \begin{enumerate} \item The results of the local and global methods are generally consistent. However, the global method simulates the pebble-planet interaction more accurately, because it accounts for curvature effects and properly models the dynamics of the pebble. Since the equations of motion are linearized in the local frame, the local method tends to overestimate $\varepsilon$ when the planet is more massive than a few Earth masses, or when the aerodynamic size of the pebble is larger than $1$ (\se{compare}). \item We find that the 2D efficiency ($\varepsilon_\mathrm{2D}$) is a function of the planet eccentricity.
A planet accretes pebbles at a higher efficiency once the eccentricity-induced relative velocity exceeds the relative velocity obtained for a planet on a circular orbit ($v_{\rm ecc}> v_{\rm cir}$). The pebble accretion efficiency then increases with eccentricity. However, $\varepsilon_\mathrm{2D}$ drops quickly when the eccentricity becomes too large for encounters to satisfy the settling conditions, and the accretion transitions to the ballistic regime. Planets with moderate eccentricities ($e_{\rm p} \sim10^{-2} - 0.1$) accrete pebbles at rates a factor of $3-5$ higher than planets on circular orbits (\se{ecc}). \item We have obtained a recipe for the pebble accretion efficiency $\varepsilon_\mathrm{2D}$ as a function of the planet eccentricity ($e_{\rm p}$), the mass ratio of the planet to the star ($q_{\rm p}$), the disk headwind prefactor ($\eta$), and the aerodynamic size of the pebble ($ \tau_{\rm s}$). Consistent with previous work, the efficiency increases with the planet-to-star mass ratio, and decreases with both the headwind velocity and the pebble size. An analytical fit expression for $\varepsilon_\mathrm{2D} (e_{\rm p}, q_{\rm p}, \eta, \tau_{\rm s})$ is derived from our simulations (\se{expression}). Such a recipe can be readily implemented into N-body codes to study the long-term growth and evolution of planetary systems. \item In the 2D limit, embryos trapped in resonance and on eccentric orbits grow faster than those on circular orbits. Secondary planet formation therefore occurs preferentially at resonances with the first giant planet. \end{enumerate} In this work we have focused on the 2D limit where all pebbles are in the midplane. However, disk turbulence may lift small pebbles from the midplane. In addition, the inclinations of the embryos/planetesimals can be excited by their mutual gravitational interactions.
Under these circumstances, the pebble accretion efficiency is not $\varepsilon_\mathrm{2D}$, but rather involves effects related to the planet's inclination, the pebble accretion radius, and the scale height of the pebble layer \citep{Ormel2012,Morbidelli2015,Levison2015a,Levison2015b,Xu2017}. These 3D effects will be addressed in the subsequent paper (Paper II). \section{Introduction} \label{introduction} In protoplanetary disks, micron-sized dust grains grow by coagulation \citep{Weidenschilling1993,Dominik2007,Johansen2014}. Typically, dust coagulation can be divided into two phases: an initial growth phase, in which particles grow from micron-sized dust grains into much larger pebbles; and a drift phase, in which particles are transported to the inner disk regions. Aerodynamically, a particle is characterized by a dimensionless stopping time $ \tau_{\rm s}$ (often referred to as the Stokes number). Dust grains have $ \tau_{\rm s} \ll 10^{-3}$ and are strongly coupled to the gas. Particles that have become aerodynamically larger ($ \tau_{\rm s}\sim10^{-2}-1$), on the other hand, decouple from the gas and drift inward \citep{Weidenschilling1977a}. The radial drift effectively limits the size of the particles. Other processes that limit the size occur once the particles' relative velocities reach the bouncing or fragmentation thresholds \citep{Guttler2010}. These drifting particles are referred to as pebbles; their size depends on the material strength and the disk properties but typically lies in the mm$-$cm range \citep{Brauer2008,Birnstiel2010}. Because the growth timescale of pebbles increases with the disk radius, the dust evolution proceeds in an inside-out manner. The interior, planet-forming regions are therefore constantly supplied by pebbles that drift in from the outer regions of disks \citep{Birnstiel2012,Lambrechts2014,Sato2016,Krijt2016}.
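The inward transport described above follows the standard drift law $|v_{\rm r}| = 2 \tau_{\rm s} v_{\rm hw}/(1+\tau_{\rm s}^2)$ \citep{Weidenschilling1977a}; a minimal sketch (the fiducial headwind speed of $30\ \rm m/s$ matches the value adopted later in this paper):

```python
def v_drift(tau_s, v_hw=30.0):
    """Radial drift speed in m/s: slow for well-coupled grains, fastest at tau_s = 1."""
    return 2.0 * tau_s * v_hw / (1.0 + tau_s**2)
```

Grains with $\tau_{\rm s} \ll 10^{-3}$ barely drift, while $\tau_{\rm s} \sim 10^{-2}-1$ pebbles drift at up to the headwind speed, which is why the interior regions are continuously supplied from the outer disk.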
Observationally, the existence of pebble-sized particles in disks is inferred from optically thin emission at sub-mm and radio wavelengths. Particles that have grown to sizes larger than the observing wavelength tend to emit as a gray body. Therefore, a millimeter spectral index that is reduced compared to the interstellar medium (ISM) value is a signature of the existence of pebble-sized particles \citep{Natta2004,Draine2006,Perez2015,Tazzari2016}. Assuming a temperature (profile) and a millimeter opacity (quantities that are rather uncertain), the total mass of the pebble reservoir can be calculated \citep{Williams2011}. Typical values for the pebble disk mass range from a few to hundreds of Earth masses \citep{Ricci2010,Andrews2013,Ansdell2016,Barenfeld2016,Pascucci2016}. \changed{However, the observed pebble mass may not represent the full reservoir of solid material in the disk. For instance, depending on the rapidity of the planet formation process, some of the erstwhile pebbles may already have been locked up in larger bodies, invisible to millimeter disk observations.} Indeed, \cite{Ansdell2017} find lower average dust masses for the older $\sigma$ Orionis cluster. When drifting pebbles cross the orbit of large bodies (planetesimals, planetary embryos, or planets), a fraction of them can be accreted through the combined effects of the planet's gravitational attraction and gas drag \citep{Ormel2010,Lambrechts2012}. This process is known as pebble accretion (see \cite{Ormel2017,Johansen2017} for recent reviews). Depending on the importance of gas drag during the pebble-planet encounter, the accretion can be classified into two regimes: settling and ballistic. In the classical, planetesimal-driven planet formation paradigm \citep{Safronov1972,Lissauer1987,Ida2004a}, accretion takes place entirely through ballistic interactions. In the ballistic regime, gas drag is unimportant during the encounters and accretion relies on hitting the surface.
On the other hand, in the settling regime, pebbles settle onto the planet at their terminal velocity (the velocity where the planet's gravity balances gas drag). The accretion rate does not depend on the planet's physical radius; only the mass of the planet matters. For large planets accreting $ \tau_{\rm s}\sim1$ particles, the accretion cross section can be as large as the planet's Hill sphere. The large accretion cross sections and continuous supply of pebbles from the outer disk may, at first sight, offer ideal conditions to produce planets. However, there is one catch: the planet may not accrete all pebbles, simply because they drift too fast or they are stirred to a large height. The viability of pebble accretion as a planet formation process therefore depends not only on the (large) accretion cross sections, but also on the disk conditions, i.e., whether pebbles drift fast and how settled they are. Specifically, we define the pebble accretion efficiency ($\varepsilon$) as the number of pebbles that are accreted over the total number of pebbles that the disk supplies. In terms of mass fluxes, the definition reads \footnote{This definition is the same as the filtering factor/efficiency of \changed{\cite{Guillot2014} and \cite{Lambrechts2014}}.} \begin{equation} \varepsilon \equiv \frac{\dot{M}_\mathrm{PA}}{\dot{M}_\mathrm{disk}}, \end{equation} where $\dot{M}_\mathrm{PA}$ is the pebble accretion rate on the planet and $\dot{M}_\mathrm{disk}$ is the mass flux of pebbles through the disk. Put simply, $\varepsilon$ is the probability that a pebble is accreted by the planet. A high value of $\varepsilon$ indicates that pebble accretion is efficient. For example, if $\varepsilon=1$ (the highest value possible) and the disk contains a total $10 \ \rm M_\oplus$ in pebbles, an initial $1 \ \rm M_\oplus$ planet will grow to $11 \ \rm M_\oplus$, which is large enough to trigger giant planet formation \citep{Pollack1996}. 
On the other hand, when only one in a thousand pebbles is accreted ($\varepsilon=10^{-3}$), the gained planetary mass will be $10^{-2} \ \rm M_\oplus$, meaning that planet growth has essentially stalled. The efficiency $\varepsilon$ is a \textit{local} quantity: it can be computed entirely from the conditions at the location of the planet. Nevertheless, as illustrated by the example, $\varepsilon$ carries global significance, since it directly states the total amount of pebbles a disk needs to contain in order to grow planets, \changed{i.e., how efficiently the pebble mass can be converted into planet mass}. The goal of our paper series is to obtain a general recipe for the pebble accretion efficiency $\varepsilon$, i.e., to quantify how $\varepsilon$ depends on planet properties (e.g., the planet mass, eccentricity, and inclination), pebble properties (the pebble's aerodynamical size $ \tau_{\rm s}$), and disk properties (temperature and turbulence strength). In this work (hereafter Paper I) only the planar limit is considered, where we assume that the pebbles have settled to the disk midplane, or more generally, that the pebble accretion radius exceeds the scale height of the pebble layer. We elucidate the role of the planet eccentricity on the pebble accretion efficiency in this 2D limit ($\varepsilon_{\rm 2D}$). In the subsequent paper (Paper II; \cite{Ormel2018}), we calculate the 3D pebble accretion efficiency ($\varepsilon_\mathrm{3D}$) by investigating the roles of the planet inclination and the disk turbulence. The prescriptions obtained in these studies can be implemented in N-body codes to study the formation and long-term dynamical evolution of planetary systems. In the literature, most theoretical and numerical works consider pebble accretion for planets on circular orbits \citep{Ormel2010,Lambrechts2012,Lambrechts2014,Guillot2014,Morbidelli2015,Bitsch2015b,Levison2015a,Ida2016,Matsumura2017,Picogna2018}.
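The arithmetic in the worked examples above can also be inverted into a budget requirement: the pebble reservoir needed for a given amount of growth is $\Delta M/\varepsilon$. A trivial sketch of this bookkeeping (the function name is ours, for illustration):

```python
def pebbles_needed(delta_m, eps):
    """Pebble mass (Earth masses) the disk must supply to grow a planet by delta_m."""
    return delta_m / eps

# Growing a planet by 10 M_Earth at eps = 1 requires 10 M_Earth of pebbles, while
# at eps = 1e-3 the same growth would require 10^4 M_Earth -- far beyond the
# observed pebble reservoirs of a few to hundreds of Earth masses (Sect. 1).
```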
The relative velocity between the pebble and the planet changes when the planet is on an eccentric orbit, and therefore the accretion efficiency can change as well. During planet formation, planets can acquire moderate eccentricities through mechanisms such as planet-planet scattering, secular and mean motion resonances \citep{Lee2002,Raymond2006,Zhou2007b,Zheng2017}. It is therefore relevant to investigate how the accretion efficiency depends on the planet eccentricity. \cite{Johansen2015} already considered the eccentric situation, focusing on planetesimals with relatively low eccentricities in the local (co-moving) frame. In this paper, we will also conduct numerical orbital integrations in the global (heliocentric) frame, which is especially appropriate for planets with high eccentricities. The paper is structured as follows. In \se{method}, we outline two approaches to calculate $\varepsilon_{\rm 2D}$ by considering the equation of motion in the local frame and the global frame, respectively. Results are presented in \se{results}. We compare the results of the above two approaches (\se{compare}), and investigate how $\varepsilon_{\rm 2D}$ depends on the properties of the planet, the pebble, and the disk (\se{ecc}). We also provide analytical fit expressions for $\varepsilon_{\rm 2D}$ (\se{expression}). In \se{application}, we apply our results by assessing how fast a secondary planet can grow from a planetary embryo in the presence of an already-formed giant planet. We summarize our key findings in \se{conclusion}. The derivation of the pebble accretion efficiency expressions and a list of notations are given in Appendix A. \section{Method} \label{sec:method} In this section, we present two ways to calculate the pebble accretion efficiency. One approach is to treat the pebble's motion with respect to the planet in a non-inertial, local frame (\se{local}). The alternative approach is to consider it in a global frame centred on the star (\se{global}).
Three numerical methods based on these two approaches are described in \se{integration}. \subsection{Local frame} \label{sec:local} \begin{figure*}[tbh!] \includegraphics[scale=0.65, angle=0]{local.pdf} \includegraphics[scale=0.65, angle=0]{global.pdf} \caption{ Sketch of pebble accretion in the two different frames. a) Local frame: the co-moving frame is centred on the planet, with the x-axis pointing radially away from the star and the y-axis following the orbital direction of the planet. Grey arrows indicate the gas velocity of the shearing flow, and the blue arrow shows the trajectory of a pebble with impact distance $b$ along the x-axis and velocity $v_{\rm y}(b)$ along the y-axis. b) Global frame: the planet orbits the central star with a semi-major axis $a_{\rm p}$. The black arrow depicts the planet's motion and the blue arrows illustrate the trajectories of pebbles that cross the orbit of the planet. } \label{fig:frames} \end{figure*} In the literature, \cite{Ormel2010} and \cite{Lambrechts2012} adopt a local frame approach to investigate the pebble-planet interaction. As shown in \fg{frames}a, the coordinate system is a comoving frame centred on and rotating with the planet. In the shearing box approximation, the pebble's motion is linearized with respect to the planet (Eq. (7) in \cite{Ormel2010}).
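For reference, the linearized equations integrated in this local approach take the familiar Hill (shearing-sheet) form. The following is a sketch in one common sign convention, not a reproduction of the exact form of Eq. (7) in \cite{Ormel2010}:
\begin{align}
\ddot{x} &= 2\Omega_{\rm p}\,\dot{y} + 3\Omega_{\rm p}^{2}\,x + F_{{\rm p},x} - \frac{\dot{x} - v_{{\rm gas},x}}{t_{\rm s}}, \nonumber \\
\ddot{y} &= -2\Omega_{\rm p}\,\dot{x} + F_{{\rm p},y} - \frac{\dot{y} - v_{{\rm gas},y}}{t_{\rm s}},
\end{align}
where $\bm{F}_{\rm p}$ is the gravitational acceleration from the planet and $\bm{v}_{\rm gas} = (0,\, -\tfrac{3}{2}\Omega_{\rm p}x - v_{\rm hw})$ is the gas flow (Keplerian shear plus headwind) seen in the local frame.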
In the $2$D limit, the pebble accretion efficiency in the local frame is given by \citep{Guillot2014,Ormel2017} \begin{equation} \varepsilon = \frac{\dot{M}_\mathrm{PA}}{\dot{M}_\mathrm{disk}} = \frac{\int |v_{\rm y} (b)| \bar{H}(b) \Sigma_{\rm P} {\rm d} b }{2\pi r_{\rm p} v_{\rm r} \Sigma_{\rm P} }, \label{eq:eta-local} \end{equation} where $b$ is the pebble's impact distance measured from the $x$-axis, $v_{\rm y} (b)$ is the pebble's velocity perpendicular to $b$ (\fg{frames}a), and $\bar{H}(b)$ is the Heaviside step function: $\bar{H}(b) = 1$ when the pebble with an impact distance $b$ hits the planet, and $\bar{H}(b)= 0$ otherwise. The total pebble mass flux is given by $\dot{M}_\mathrm{disk} = 2\pi r_{\rm p} v_{\rm r} \Sigma_{\rm P}$, where $v_{\rm r}$ is the radial drift velocity of the pebble and $\Sigma_{\rm P}$ is the pebble surface density. All of the above quantities refer to values at the planet's location ($r_{\rm p}$). In \eq{eta-local}, the efficiency is thus the ratio of the mass flux of accreted pebbles to the total pebble mass flux crossing the orbit of the planet. \subsection{ Global frame} \label{sec:global} Here we introduce a new method to calculate the pebble accretion efficiency $\varepsilon$. In the global frame centred on the star, we integrate the equations of motion to obtain the orbital evolution of the planet as well as of the pebbles. As pebbles drift in from the regions exterior to the planet and bypass its orbit, the accretion efficiency is simply obtained by counting the fraction of pebbles that hit the planet.
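The quadrature in \eq{eta-local} is evaluated numerically from the integrated trajectories. The following sketch shows only the bookkeeping; the hit flags would come from the orbit integrations and are invented here purely for illustration:

```python
from math import pi

def eps_local(b, v_y, hit, r_p, v_r):
    """Trapezoidal evaluation of Eq. (eta-local): eps = Int |v_y(b)| H(b) db / (2 pi r_p v_r).

    b: sorted impact distances; v_y: approach speeds v_y(b); hit: booleans,
    True where the trajectory with impact parameter b collides with the planet.
    """
    f = [abs(vy) if h else 0.0 for vy, h in zip(v_y, hit)]
    integral = sum(0.5 * (f[i] + f[i + 1]) * (b[i + 1] - b[i])
                   for i in range(len(b) - 1))
    return integral / (2.0 * pi * r_p * abs(v_r))
```

For a hit window of width $w$ and a roughly constant approach speed, this reduces to $\varepsilon \approx |v_{\rm y}|\, w/(2\pi r_{\rm p} v_{\rm r})$.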
The equation of motion for the pebble in heliocentric coordinates is \begin{multline} \frac{\mathrm{d}^2 \bm{ r }}{\mathrm{d} t^2} = \bm{F}_{\star} + \bm{F}_{\rm planet} + \bm{F}_{\rm drag}\\ = - GM_{\star} \frac{ \bm{r} }{r^3} + GM_{ \mathrm{p}} \left( \frac{\bm{r}_{ \rm p} - \bm{ r}}{ | \bm{r}_{\rm p} - \bm{ r} |^3 } -\frac{\bm{r}_{\rm p}}{r_{\mathrm p}^3} \right) -\frac{ \bm{v} - \bm{ v}_{\rm gas}}{t_{\rm s}}, \label{eq:eq-pebble} \end{multline} where $G$ is the gravitational constant, $\bm{r} = (x,y,z)$ and $\bm{r}_{\rm p} = (x_{\rm p},y_{\rm p},z_{\rm p})$ are the positions of the pebble and the planet with respect to the central star, $M_{\star}$ and $M_{\rm p }$ are the masses of the central star and the planet, $\bm{v} = (v_{\rm x},v_{\rm y},v_{\rm z})$ is the velocity vector of the pebble, $\bm{v}_{\rm gas } = (v_{\rm K}-v_{\rm hw})\bm{e}_{\rm \phi}$ is the azimuthal gas velocity at the pebble's location $r$, and $t_{\rm s}$ is the pebble's stopping time. It should be noted that $v_{\rm hw} = \eta v_{\rm K} $ is the headwind velocity, which measures the deviation between the Keplerian velocity ($v_{\rm K}$) and the azimuthal gas velocity, where $\eta = -\frac{1}{2} \left( \frac{H_{\rm gas}}{r} \right)^{2} \frac{\partial \ln P}{\partial \ln r}$ is the headwind prefactor, $H_{\rm gas}$ is the disk scale height, and $P$ is the gas pressure at the planet's location $r_{\rm p}$. In \eq{eq-pebble}, the total force (per unit mass) on the pebble comprises the gravitational forces from the central star and the planet, and the drag force from the disk gas. The pebble is treated as a body with zero gravitational mass. The planet only feels the gravity of the central star, and its equation of motion is \begin{equation} \frac{\mathrm{d}^2 \bm{ r}_{\rm p} }{\mathrm{d} t^2} = - \frac{ GM_{\star} \bm{r}_{\rm p} }{r_{\rm p}^3}.
\label{eq:eq-planet} \end{equation} Thus, we can obtain the orbital motions of the planet and the pebble from \eqs{eq-pebble}{eq-planet}. Integrating \eq{eq-pebble} in the global frame is computationally expensive, in particular when the pebble is strongly coupled to the gas (small $t_{\rm s}$), resulting in a tiny relative velocity ($\Delta \bm{v} = \bm{v}- \bm{v}_{\rm gas}$). In such cases, the pebble tends to move at its terminal velocity, for which the gas drag balances the gravitational forces. The relative acceleration $\frac{\mathrm{d}^2}{\mathrm{d}t^2} (\bm{r}-\bm{r}_{\rm gas})$ then vanishes, and the pebble's velocity can be approximated as \begin{equation} \bm{v} \approx \bm{v}_{\rm gas} + t_{\rm s} \left( \bm{F}_{\star} + \bm{F}_{\rm planet} + \Omega_{\rm gas}^2 \bm{r} \right), \label{eq:sc} \end{equation} where $-\Omega_{\rm gas}^2 \bm{r}$ is the acceleration of the circularly orbiting gas. We refer to this approximation as the strong coupling approximation (SCA) \changed{ \citep{Johansen2005}}. It is valid when the pebble and the gas are well coupled ($\bm{v}\approx \bm{v}_{\rm gas}$), and it breaks down when the perturbation from the planet is significant or when $ \tau_{\rm s}$ becomes large. In \eq{sc} the velocity can be calculated explicitly, whereas in \eq{eq-pebble} it has to be solved from a differential equation. Therefore, the SCA greatly simplifies the calculation of the velocity compared to directly integrating the equation of motion. The efficiency in the global frame is simply \begin{equation} \varepsilon_{\rm 2D} = \frac{N_{\rm hit}}{N_{\rm tot}}, \label{eq:eta-global} \end{equation} where $N_{\rm tot}$ is the total number of pebbles that cross the orbit of the planet, and $N_{\rm hit}$ is the number of pebbles accreted by the planet. \subsection{Three numerical methods} \label{sec:integration} We employ three methods to calculate the pebble accretion efficiency: \begin{enumerate} \item \textit{Local} -- direct integration of the equation of motion \citep{Ormel2010} in the local frame.
The pebble accretion efficiency $\varepsilon_{\rm 2D}$ is obtained from \eq{eta-local}; \item \textit{Global direct} -- direct integration of the equation of motion in the global frame (\eq{eq-pebble}); $\varepsilon_{\rm 2D}$ is calculated directly from the fraction of pebbles that hit the planet (\eq{eta-global}); \item \textit{Global hybrid} -- the SCA (\eq{sc}) is applied for $|r_{\rm p}- r| > 2 R_{\rm H}$; otherwise we switch to direct integration (\eq{eq-pebble}). The efficiency $\varepsilon_{\rm 2D}$ is obtained in the same way as in the global direct method. The Hill radius is defined as $R_{\rm H} \equiv (M_{\rm p}/3M_{\star})^{1/3}a_{\rm p}$, where $a_{\rm p}$ is the planet's semi-major axis. \end{enumerate} \changed{We assume that the inclination of the planet $i_{\rm p}$ is zero. More generally, the inclination can be neglected when it is much smaller than the scale height of the pebble disk. In this work we only consider the $2$D planar limit, where the planet and pebbles have all settled into the disk midplane. 3D effects (planet inclination and disk turbulence) will be modeled in detail in paper II}. The local method can be used for a planet on a circular orbit ($e_{\rm p}=0$). To model a planet on an eccentric orbit in the local frame, the elliptic motion of the planet needs to be accounted for, see e.g., Eqs. (15) and (16) in the supplementary material of \cite{Johansen2015}. This is nevertheless only a first-order approximation, valid for relatively low eccentricities; the local frame is not a good approach when the eccentricity is not small. In contrast, the global approach makes it conceptually straightforward to conduct simulations with the planet moving on an eccentric orbit. In this paper, the local method is restricted to planets on circular orbits, while the global method is applied to planets on both circular and eccentric orbits.
\changed{In the local simulation, the pebbles' initial $x$ locations ($x_0$) range from $-5 R_{\rm H}$ to $5 R_{\rm H}$, with an interval of $10^{-4} R_{\rm H}$, and $y_0$ is initialized at either $-40R_{\rm H}$ or $40R_{\rm H}$. The initial radial and azimuthal velocities of the pebble are $ v_{\rm x} = -2 \tau_{\rm s} v_{\rm hw}/(1 + \tau_{\rm s}^2 )$ and $ v_{\rm y} = 1.5\, \Omega_{\rm p} x_0 - v_{\rm hw} /(1 + \tau_{\rm s}^2)$ \citep{Weidenschilling1977a}, where $ \tau_{\rm s} \equiv t_{\rm s}\Omega_{\rm p}$ is the dimensionless stopping time (Stokes number). The simulation terminates when the pebble leaves the domain of the shearing box ($|y| > 1.05 |y_0|$), or when the separation between the pebble and the planet becomes smaller than the planet radius $R_{\rm p}$, indicating a collision. } In the global simulation, the planet is initialized on a circular ($e_{\rm p}=0$) orbit (\se{compare}) or on an eccentric orbit ($e_{\rm p}>0$) with a random true anomaly $\theta$ (\se{ecc}). \changed{ For circular cases, $N_{\rm tot}$ pebbles are uniformly distributed along a ring exterior to the planet's orbit at an initial distance $r_{0} = a_{\rm p}(1 + e_{\rm p}) + 5 R_{\rm H}$. However, the orbital phase of the planet affects the accretion efficiency when the planet is on an eccentric orbit. In such a situation ($e_{\rm p}>0$), pebbles are distributed not only azimuthally but also radially, from $r_{0}$ to $r_0 + 2\pi v_{\rm r} /\Omega_{\rm K}$. There are $\sqrt{N_{\rm tot}}$ grid points in both the azimuthal and radial directions, i.e., $N_{\rm tot}$ in total.} The initial radial and azimuthal velocities of the pebble are $ v_{\rm r} = -2 \tau_{\rm s} v_{\rm hw}/(1 + \tau_{\rm s}^2 )$ and $ v_{\rm \phi} = v_{\rm K} - v_{\rm hw} /(1 + \tau_{\rm s}^2)$. The simulation terminates when the pebble drifts interior to the orbit of the planet, where $r < a_{\rm p}(1 - e_{\rm p}) - R_{\rm H}$, or collides with the planet, where $|r - r_{\rm p}| < R_{\rm p}$. We set $a_{\rm p} = 1 \ \rm AU$ for all simulations in this paper.
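A toy version of the global set-up can be written down in a few lines. The sketch below works in code units with $GM_{\star} = r_0 = 1$, adopts $\tau_{\rm s} = 10^{-2}$ and $v_{\rm hw} = 2\times10^{-3}\,v_{\rm K}$ as illustrative assumptions, omits the planet for brevity, and uses SciPy's adaptive Runge-Kutta integrator as a stand-in for our scheme; it initializes the pebble with the equilibrium drift velocities quoted above:

```python
import numpy as np
from scipy.integrate import solve_ivp

GM, TAU_S, V_HW = 1.0, 1e-2, 2e-3  # code units; V_HW ~ eta*v_K is an assumption

def rhs(t, y):
    """2D heliocentric equation of motion of a pebble (cf. Eq. eq-pebble),
    with the planet omitted for brevity: stellar gravity plus drag toward
    a circular, sub-Keplerian gas flow."""
    pos, vel = y[:2], y[2:]
    r = np.hypot(*pos)
    a_star = -GM * pos / r**3
    v_k = np.sqrt(GM / r)
    e_phi = np.array([-pos[1], pos[0]]) / r  # azimuthal unit vector
    v_gas = (v_k - V_HW) * e_phi
    t_s = TAU_S / np.sqrt(GM / r**3)         # tau_s = t_s * Omega
    return np.concatenate([vel, a_star - (vel - v_gas) / t_s])

# initial conditions: the equilibrium drift velocities quoted above
v_r0 = -2.0 * TAU_S * V_HW / (1.0 + TAU_S**2)
v_phi0 = np.sqrt(GM) - V_HW / (1.0 + TAU_S**2)
y0 = [1.0, 0.0, v_r0, v_phi0]

# adaptive Runge-Kutta with a tight relative tolerance
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-8, atol=1e-12)
r_final = np.hypot(sol.y[0, -1], sol.y[1, -1])  # pebble slowly drifts inward
```

Over the integration span the pebble maintains its steady inward drift, as expected for an unperturbed particle.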
For both methods, the equations are numerically integrated with a Runge-Kutta-Fehlberg variable-timestep scheme \citep{Fehlberg1969}. The relative error tolerance is set to $10^{-8}$ to ensure numerical convergence. \section{Results} \label{sec:results} Results comparing the different methods for planets on circular orbits are presented in \se{compare}, and global simulations for planets on eccentric orbits are conducted in \se{ecc}. The analytical fit is given in \se{expression}. \subsection{Comparison between different approaches} \label{sec:compare} We conduct simulations with the three different methods (\se{integration}) for planets on circular orbits ($e_{\rm p}=0$) and compare the obtained pebble accretion efficiencies. We assume that the planet density is $3 \ \rm g\, cm^{-3}$ and that the central star has a mass of $1 \ \rm M_\odot$. Three planet masses with varying $ \tau_{\rm s}$ have been explored. \Fg{diff} shows the pebble accretion efficiency as a function of $ \tau_{\rm s}$ and $M_{\rm p}$. The symbols and colors depict the results from the three different methods, and the black lines represent our analytical fit expression (to be discussed in \se{expression}). For the global method, the dots and error bars represent the mean values from the simulations and the Poisson counting errors ($\Delta \varepsilon = \sqrt{N_{\rm hit}}/N_{\rm tot}$), respectively. Hereafter the efficiency refers to its mean value from the simulations. We focus on the pebble accretion (settling) regime, $10^{-3} \lesssim \tau_{\rm s} < 1$. For an additional method comparison, however, the cases of $ \tau_{\rm s} \gtrsim 1$ have also been investigated with the local and the global direct methods, in which the gravity of the central star also becomes important during the pebble-planet encounter (referred to as the three-body regime in \cite{Ormel2010}). The strong coupling approximation (SCA) works well for small-$\tau_{\rm s}$ pebbles, for which the gas-pebble coupling time is short.
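For reference, the mean efficiency and its quoted Poisson counting error can be computed with a trivial helper that simply fixes the convention used throughout:

```python
import numpy as np

def efficiency_with_error(n_hit, n_tot):
    """Mean accretion efficiency and its Poisson counting error:
    eps = N_hit / N_tot,  d_eps = sqrt(N_hit) / N_tot."""
    return n_hit / n_tot, np.sqrt(n_hit) / n_tot

# e.g. 100 hits out of 2000 pebbles
eps, d_eps = efficiency_with_error(100, 2000)
```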
We do not apply the global hybrid method for the $ \tau_{\rm s} >1$ cases. It can be seen in \fg{diff} that the hybrid method (orange circles) matches well with the direct method (blue triangles) when $ \tau_{\rm s} \lesssim 0.1$. Moreover, the hybrid algorithm significantly reduces the computational time. For a $0.1 \ \rm M_\oplus$ planet accreting pebbles of $ \tau_{\rm s} =10^{-2}$, the hybrid integration is $40$ times faster than the direct integration; it becomes $200$ times faster when $ \tau_{\rm s}$ is $10^{-3}$. We also see in \fg{diff} that the results from the local method (red squares) and the global methods are in good agreement with each other for the low- and intermediate-mass planets ($10^{-3} \ \rm M_\oplus$ and $0.1 \ \rm M_\oplus$). Only for the massive planet case ($10 \ \rm M_\oplus$) are the local efficiencies clearly higher than the global ones. There are two reasons for the difference in $\varepsilon_\mathrm{2D}$ between the two methods in the high-mass planet case. First, the local method risks counting the same accreted pebble multiple times. In general, pebbles can be accreted by the planet in two ways: (i) immediate accretion upon the first penetration of the planet's Hill sphere; and (ii) secondary accretion a synodical time after the first close encounter, where $t_{\rm syn} \sim \Omega_{\rm p}^{-1} r_{\rm p }/\Delta r_{\rm co}$ and $\Delta r_{\rm co}$ is the width of the co-orbital region of the planet. In the local frame, we initially place pebbles at both sides of the planet (\ses{local}{integration}). The problem with this setup, however, is that it may amount to simulating the same pebble twice. The risk of double counting arises in particular when the radial drift of the pebble within a synodical time is small compared to the accretion radius, $v_{\rm r} t_{\rm syn} < b_{\rm set}$.
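This double-counting condition can be checked with a rough, order-of-magnitude helper. The headwind-dominated form of $b_{\rm set}$ and the free parameter standing in for $\Delta r_{\rm co}$ below are illustrative assumptions, not the criteria used in our analysis:

```python
import numpy as np

def is_double_counted(q_p, tau_s, eta, dr_co):
    """Rough check of the double-counting condition v_r * t_syn < b_set.
    Units: lengths in r_p, velocities in v_K, times in 1/Omega_p.
    dr_co (co-orbital width in units of r_p) is left as a free parameter,
    and b_set ~ sqrt(q_p * tau_s / eta) assumes the headwind regime."""
    v_r = 2.0 * tau_s * eta / (1.0 + tau_s**2)  # radial drift speed
    t_syn = 1.0 / dr_co                          # t_syn ~ Omega^-1 r_p / dr_co
    b_set = np.sqrt(q_p * tau_s / eta)
    return v_r * t_syn < b_set
```

Evaluated this way, the condition is satisfied for a high planet-to-star mass ratio but not for a very low one, consistent with the mass bias discussed here.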
It follows that the pebble accretion efficiency of planets more massive than a few Earth masses is overestimated due to this multiple-counting effect. The other reason is the linearization of the equations of motion. In the local method, we ignore the high-order terms $(\Delta r_{\rm p}/r_{\rm p})^n$ ($n \geq2$) in the pebble's equation of motion by virtue of the approximation that $\Delta r_{\rm p} \ll r_{\rm p}$, where $\Delta r_{\rm p}$ is the distance between the planet and the pebble. The integration domain (the maximum $\Delta r_{\rm p}$ we consider in the local frame) is larger for a more massive planet. Thus, the condition $\Delta r_{\rm p} \ll r_{\rm p}$ breaks down and the first-order linear approximation becomes inappropriate for very massive planets. To conclude, both effects grow with planet mass, and therefore the local results overestimate $\varepsilon$ in the high-mass planet case. We also see in \fg{diff} that the local and the global simulations are inconsistent when $ \tau_{\rm s}$ becomes larger than $1$. The two methods give very different results, and this mismatch increases with the planet mass. The pebble's eccentricity can be excited by the planet during the encounter. For pebbles with large stopping times, the eccentricity damping timescale set by the gas drag is much longer than the encounter time ($\simeq$ orbital time), so these pebbles remain at moderate eccentricities during pebble-planet interactions. However, in the local method pebbles are initialized on unperturbed (zero-eccentricity) trajectories, which is not valid when $ \tau_{\rm s}$ is large. The global method, on the other hand, does not suffer from this effect, since it integrates these pebbles for many complete orbits. These differences become more extreme for larger stopping times and higher planet masses.
For the $M_{\rm p} = 10 \ \rm M_\oplus $ and $ \tau_{\rm s} =10$ case, $\varepsilon_{\rm 2D}$ evaluates to zero in the global simulation, because the pebbles get trapped in resonance with the planet. Of course, resonance trapping cannot be captured in a local framework. In addition, in \fg{diff} the analytical fit expression derived in \se{expression} matches well with both methods when the pebble accretion is in the settling regime ($10^{-3} \lesssim \tau_{\rm s} < 1$). We find that the efficiency increases with the mass of the planet and decreases with the size of the accreted pebbles. The efficiency increases because massive planets have a larger impact cross section (\eq{reff1}), while small pebbles, because of their slow radial drift, are more likely to encounter the planet. In summary, the results of the global methods and the local method are mostly in good agreement with each other in the settling regime. Although computationally more expensive, the global methods are intuitive, and more accurate for higher planet masses ($M_{\rm p}$ larger than a few Earth masses) and large pebbles ($ \tau_{\rm s} \gtrsim 1 $). The global method is arguably the better approach to use, in particular for configurations such as resonance trapping and, more generally, for eccentric orbits. In the following subsection, we only use the global method to calculate the accretion efficiency. \subsection{Eccentric pebble accretion} \label{sec:ecc} In \se{default}, we describe the default run for eccentric pebble accretion. In \se{para}, a parameter study is conducted to explore the dependence of the pebble accretion efficiency on the planet mass ($M_{\rm p}$), the stellar mass ($M_{\star}$), the headwind velocity ($v_{\rm hw}$) and the dimensionless stopping time ($ \tau_{\rm s}$) as functions of the planet eccentricity. \begin{figure}[tb] \includegraphics[scale=0.5, angle=0]{default.pdf} \caption{Pebble accretion efficiency vs planet eccentricity for the default run: $M_{\rm p} = 0.1 \ \rm M_\oplus$, $M_{\star} = 1 \ \rm M_\odot $, $ v_{\rm hw} = 30 \rm \ m \ s^{-1}$, and $ \tau_{\rm s} = 10^{-2}$. Eccentricities have been explored from $e_{\rm p}=6\times 10^{-5}$ to $0.2$. The dots give the mean values, and the error bars indicate the Poisson counting errors. The solid line represents the analytical fit. The efficiency peaks around an eccentricity of $0.03$, where it is four times the efficiency of the nearly circular case ($e_{\rm p} \simeq 0$). } \label{fig:default} \end{figure} \subsubsection{Default run} \label{sec:default} The parameters of the default model are: $M_{\rm p} = 0.1 \ \rm M_\oplus $, $M_{\star} = 1 \ \rm M_\odot $, $ v_{\rm hw} = 30 \rm \ m \ s^{-1}$ and $ \tau_{\rm s} = 10^{-2}$. \Fg{default} shows how the pebble accretion efficiency varies with the planet eccentricity.
We conduct simulations with $18$ different eccentricities, logarithmically spaced between $6\times 10^{-5}$ and $0.2$. For each eccentricity, we perform ten simulations with randomly varied initial planet phase angles $\theta$. In this case we set the total number of pebbles to $N_{\rm tot}=200$; $N_{\rm hit}$ is counted from the simulations, and the pebble accretion efficiency is $N_{ \rm hit}/N_{\rm tot}$. The black line is the analytical fit, which will be described in \se{expression}. We find that the pebble accretion efficiency varies with eccentricity. When the planet is on a nearly circular orbit ($e_{\rm p} \lesssim 10^{-3}$), the efficiency remains at $0.05$. As the eccentricity increases, the efficiency first rises and then decreases. It attains a maximum value of $\varepsilon_{\rm 2D} \simeq 0.18$ when the eccentricity approaches $0.03$. Beyond that, the efficiency quickly drops to $0.07$ by $e_{\rm p} \simeq 0.1$. Clearly, the planet's eccentricity plays an important role in determining the pebble accretion efficiency. At its peak, the efficiency of a planet on an eccentric orbit ($\varepsilon_{\rm 2D} (e_{\rm p})$) is higher than that of a planet on a circular orbit by a factor of $4$. \changed{ By balancing the settling time and the encounter time, it can be shown that the pebble accretion radius is \citep{Ormel2010,Lambrechts2012} \begin{equation} b_{\rm set} \sim \sqrt{\frac{GM_{\rm p} t_{\rm s}}{\Delta v}}, \label{eq:reff} \end{equation} where $\Delta v$ is the relative velocity between the planet and the pebble (see \se{appendix})}. \changed{When a planet is on a circular orbit, the relative velocity between the pebble and the planet is dominated by either the Keplerian shear or the headwind velocity. When a planet is on an eccentric orbit, the relative velocity in addition includes an eccentricity contribution due to the elliptic motion of the planet, which increases with the planet eccentricity.
Therefore, when $e_{\rm p}$ is tiny, $\Delta v$ is very close to the circular value and $\varepsilon_{\rm 2D}$ remains constant. The rise of $\varepsilon_{\rm 2D}$ is due to an increase in $\Delta v$ when the eccentric velocity ($\sim$$e_{\rm p} v_{\rm K}$) starts to dominate the relative velocity: the flux of pebbles encountering the planet increases. Physically, in the 2D limit, a planet on a more eccentric orbit sweeps up more pebbles due to a larger pebble feeding zone, resulting in a higher efficiency. } The rapid drop of $\varepsilon_{\rm 2D}$ at higher eccentricities ($e_{\rm p} \gtrsim 0.04$), however, indicates that the settling interactions fail. \changed{The reason is that the duration of the planet-pebble encounter decreases as $\Delta v$ (eccentricity) increases. When the planet-pebble encounter is short, the change of the pebble's velocity due to the gas drag is modest. } Therefore, the accretion transitions from the settling regime to the ballistic regime, where the gas drag effect is negligible and the accretion radius reduces to the planet's physical radius (gravitational focusing is unimportant at these high eccentricities). Thus, the accretion efficiency declines significantly during this transition. \begin{figure*}[tbh!] \includegraphics[scale=0.48, angle=0]{angle1.pdf} \includegraphics[scale=0.48, angle=0]{angle2.pdf} \caption{\changed{ Relative velocity between the planet and pebbles (dark blue line) and number of accreted pebbles (light blue bars) as a function of the planet's orbital phase angle. Perihelion is at $0 ^\circ$ and aphelion is at $180 ^\circ$. The planet eccentricity is $e_{\rm p} = 0.007$ in the left panel and $e_{\rm p} = 0.03$ in the right panel. The other parameters are the same for the two panels: $M_{\rm p} = 0.01 \ \rm M_\oplus $, $M_{\star} = 1 \ \rm M_\odot $, $ \tau_{\rm s} = 10^{-2}$, $v_{\rm hw} = 60 \ \rm m/s$ and $N_{\rm tot} =300\times300$.
The black dashed line represents the transition velocity from the settling to the ballistic accretion, $v=v_\ast$ (\eq{vstar}).} } \label{fig:angle} \end{figure*} \changed{In order to analyse this issue in detail, we investigate at which location (planet's orbital phase angle) pebbles are captured by the planet. The capture condition is defined as the separation between the pebble and the planet falling below the accretion radius ($ |r-r_{\rm p} |< b_{\rm set}$). To isolate the effect, we focus on a low-mass planet that is initially in the headwind regime, and therefore adopt: $M_{\rm p} = 0.01 \ \rm M_\oplus $, $M_{\star} = 1 \ \rm M_\odot $, $ \tau_{\rm s} = 10^{-2}$, $v_{\rm hw} = 60 \ \rm m/s$. We artificially set the planet radius $10$ times smaller than its normal value to eliminate any ballistic accretion. \Fg{angle} plots the relative velocity between the planet and the pebble (left y-axis) and the number of accreted pebbles (right y-axis) as functions of the planet's phase angle for $e_{\rm p} = 7 \times 10^{-3}$ (\fg{angle}a) and $e_{\rm p} = 3 \times 10^{-2} $ (\fg{angle}b). The radial and azimuthal velocities of a planet on an eccentric orbit are $v_{\rm K} (r_{\rm p}) e_{\rm p} \sin\theta /\sqrt{1-e_{\rm p}^2} $ and $v_{\rm K} (r_{\rm p})(1 + e_{\rm p} \cos\theta) /\sqrt{1-e_{\rm p}^2} $, where $\theta$ is the true anomaly of the planet \citep{Murray1999}. Assuming that the orbit of the pebble is not affected by the planet, its radial and azimuthal velocities are $-2 v_{\rm hw} \tau_{\rm s} /(1+ \tau_{\rm s}^2) $ and $v_{\rm K}- v_{\rm hw}/(1 + \tau_{\rm s}^2)$. Therefore, the relative velocity between the planet and the pebble can be calculated analytically, as shown by the blue lines of \fg{angle}. } \changed{We find in \fg{angle} that the number of accreted pebbles correlates with $\Delta v$. In the low-eccentricity case (\fg{angle}a), more pebbles are accreted where $\Delta v$ is higher.
For instance, when the pebble moves on a sub-Keplerian orbit, it has the lowest relative velocity with respect to the planet at aphelion ($180 ^\circ$). We find that at this location the planet accretes the fewest pebbles. Since $b_{\rm set} \propto \Delta v^{-1/2}$ in \eq{reff}, the accretion radius is largest at aphelion. However, the accretion rate correlates with $b_{\rm set} \Delta v$ in the 2D limit and with $b_{\rm set}^2 \Delta v$ in the 3D limit \citep{Johansen2015,Morbidelli2015}. Therefore, $\varepsilon_{\rm 2D}$ increases as $\Delta v^{1/2}$, and fewer pebbles are accreted at aphelion than elsewhere. } \changed{ On the other hand, when the eccentricity is high, the pebble accretion rate displays the opposite dependence on the phase angle (\fg{angle}b), i.e., more pebbles are accreted at the lowest $\Delta v$ (aphelion). This is because in this high-eccentricity case, $\Delta v$ becomes too high for settling. Thus, only pebbles with relatively low $\Delta v$ can be accreted by the planet. The transition velocity from the settling to the ballistic accretion is marked by the black dashed line in \fg{angle} (\eq{vstar}). } \subsubsection{Parameter study} \label{sec:para} To investigate the effects of different parameters on the pebble accretion efficiency, we conduct a parameter study by varying the planet mass ($M_{\rm p}$), the stellar mass ($M_{\star}$), the headwind velocity ($v_{\rm hw}$) and the dimensionless stopping time ($ \tau_{\rm s}$). We explore four planet masses ($10^{-3} \ \rm M_\oplus, 10^{-2} \ \rm M_\oplus, 10^{-1} \ \rm M_\oplus, 1 \ \rm M_\oplus $), three stellar masses ($3 \ \rm M_\odot , 1 \ \rm M_\odot , 0.3 \ \rm M_\odot $), three headwind velocities ($15\ \rm m \ s^{-1}, 30\ \rm m \ s^{-1}, 60\ \rm m \ s^{-1}$) and three dimensionless stopping times ($10^{-3}, 10^{-2},10^{-1}$). Nine of these simulations are illustrated in \Tb{tab1} and \fg{para}.
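The two ingredients of this discussion, the settling radius of \eq{reff} and the analytic $\Delta v(\theta)$ curves of \fg{angle}, can be sketched in nondimensional form (with $v_{\rm K} = r_{\rm p} = 1$, and the pebble assumed unperturbed by the planet):

```python
import numpy as np

def b_settle(q_p, tau_s, dv):
    """Settling accretion radius (Eq. reff), nondimensionalized:
    b_set/r_p ~ sqrt(q_p * tau_s / (dv/v_K)); note b_set ~ dv^(-1/2)."""
    return np.sqrt(q_p * tau_s / dv)

def delta_v(theta, e_p, tau_s, v_hw):
    """Planet-pebble relative speed vs. true anomaly theta (the dark
    curves of Fig. angle), in units of v_K at the planet's location."""
    s = np.sqrt(1.0 - e_p**2)
    vr_pl = e_p * np.sin(theta) / s                  # planet, radial
    vphi_pl = (1.0 + e_p * np.cos(theta)) / s        # planet, azimuthal
    vr_peb = -2.0 * tau_s * v_hw / (1.0 + tau_s**2)  # pebble drift
    vphi_peb = 1.0 - v_hw / (1.0 + tau_s**2)         # sub-Keplerian pebble
    return np.hypot(vr_pl - vr_peb, vphi_pl - vphi_peb)
```

For a sub-Keplerian pebble the relative speed is lowest at aphelion ($\theta=\pi$), and the accretion radius scales as $\Delta v^{-1/2}$, matching the trends discussed above.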
\subsubsection*{Planet mass} \Fg{para}a shows the efficiency as a function of the eccentricity for three different planet masses; the other parameters are the same as the default values. The default case ($M_{\rm p}= 0.1 \ \rm M_\oplus$), the massive planet case ($M_{\rm p}= 1 \ \rm M_\oplus$), and the less massive planet case ($M_{\rm p}= 0.01 \ \rm M_\oplus$) are shown in black, dark red and light red, respectively. The three cases exhibit a similar trend with eccentricity: the efficiency curves are initially flat, then gradually increase with $e_{\rm p}$, and finally drop to lower values. For the massive planet case, the efficiency is $0.22$ when the planet is on a circular orbit, and it attains a maximum of $0.59$ when $e_{\rm p}=0.05$. For the less massive planet case, the efficiency of a planet on a circular orbit is $0.013$ and the maximum efficiency is $0.04$ when $e_{\rm p}$ is close to $0.01$. Comparing the three cases, we find that (i) the efficiency increases with the planet mass for all eccentricities; more massive planets accrete pebbles more efficiently (the same as in the circular case in \se{compare}). (ii) The transition eccentricity, where the efficiency attains its maximum, increases with the planet mass. This is because the transition occurs when the relative velocity $\Delta v$ is so high that the accretion passes from the settling to the ballistic regime (see \se{appendixset}). Since a more massive planet has a stronger gravitational attraction, which enables it to accrete more eccentric pebbles, the transition velocity increases with the planet mass (\eq{vstar}). Consequently, the transition eccentricity also increases with the planet mass. \subsubsection*{Stellar mass} \Fg{para}b illustrates the dependence of the efficiency on the stellar mass, with the rest of the parameters identical to the default case.
The default case ($M_{\star} =1 \ \rm M_\odot $) is shown in black, and the massive giant star case ($M_{\star}= 3 \ \rm M_\odot $) and the low-mass M dwarf star case ($M_{\star}= 0.3 \ \rm M_\odot $) are given in light purple and dark purple, respectively. For the massive star, the efficiency is $0.026$ when the planet is on a circular orbit, and it attains a maximum of $0.09$ when $e_{\rm p}=0.03$. For the less massive star, the efficiency for a planet on a circular orbit is $0.1$ and the maximum efficiency is $0.34$ when $e_{\rm p} = 0.04$. Comparing the three cases, we find that (i) the efficiency decreases with the stellar mass, and (ii) the transition eccentricity decreases with the stellar mass. Both effects can be explained by the fact that a planet around a less massive star has a larger Hill sphere and can therefore accrete more pebbles. \begin{figure*}[tbh!] \includegraphics[scale=0.5, angle=0]{paramp.pdf} \includegraphics[scale=0.5, angle=0]{params.pdf} \includegraphics[scale=0.5, angle=0]{parahw.pdf} \includegraphics[scale=0.5, angle=0]{parats.pdf} \caption{ Pebble accretion efficiency vs planet eccentricity. The default run is shown in black, with $M_{\rm p} = 0.1 \ \rm M_\oplus $, $M_{\star} = 1 \ \rm M_\odot $, $v_{\rm hw} = 30 \rm \ m \ s^{-1}$ and $ \tau_{\rm s} = 10^{-2}$. In each panel one parameter varies while the other three are kept the same. Upper left panel: three different planet masses, $M_{\rm p} = 1 \ \rm M_\oplus$, $M_{\rm p} = 0.1 \ \rm M_\oplus$ and $M_{\rm p} = 0.01 \ \rm M_\oplus$; upper right panel: three different stellar masses, $M_{\star} = 3 \ \rm M_\odot $, $ M_{\star} = 1 \ \rm M_\odot $ and $M_{\star} = 0.3 \ \rm M_\odot $; lower left: three different headwind velocities, $v_{\rm hw} = 15 \rm \ m \ s^{-1}$, $v_{\rm hw} = 30 \rm \ m \ s^{-1}$ and $v_{\rm hw} = 60 \rm \ m \ s^{-1}$; lower right: three dimensionless stopping times, $ \tau_{\rm s} = 10^{-3} $, $ \tau_{\rm s}=10^{-2}$ and $ \tau_{\rm s}=10^{-1}$.
The dots give the mean values, and the error bars indicate the Poisson counting errors. The solid line is the analytical fit of \eq{efficiency}. The efficiency increases with $M_{\rm p }$, and decreases with $M_{\star}$, $v_{\rm hw}$ and $ \tau_{\rm s}$.} \label{fig:para} \end{figure*} \subsubsection*{Headwind velocity} \Fg{para}c plots three cases with different headwind velocities. The low-speed ($v_{\rm hw}= 15 \ \rm m \ s^{-1}$) and high-speed ($v_{\rm hw}= 60 \ \rm m \ s^{-1}$) cases are shown in dark blue and light blue, respectively; the efficiencies obtained from the default run ($v_{\rm hw} =30 \ \rm m \ s^{-1}$) are shown in black. For the low headwind case, the efficiency of pebble accretion is $0.10$ when the planet is on a circular orbit and the maximum efficiency is $0.33$. For the higher headwind case, the efficiency when the planet is on a circular orbit is $0.03$, and the maximum efficiency is $0.08$. We conclude that (i) the efficiency decreases with the headwind velocity. Because the headwind sets the radial drift velocity of the pebble, a lower headwind speed results in a slower radial drift, and the planet is therefore able to accrete more pebbles. Furthermore, (ii) the eccentricity at which the maximum efficiency is attained is independent of the headwind velocity. This is because, as the eccentricity increases, $\Delta v$ becomes dominated by the eccentric velocity rather than the headwind velocity. Therefore, the transition velocity between the settling and the ballistic regimes is independent of the headwind velocity (\eq{vstar}). \subsubsection*{Dimensionless stopping time} Finally, three cases with different $ \tau_{\rm s}$ are shown in \fg{para}d. The default case ($ \tau_{\rm s} =10^{-2}$) is shown in black, while small pebbles ($ \tau_{\rm s} =10^{-3}$) and large pebbles ($ \tau_{\rm s} =10^{-1}$) are shown in dark green and light green, respectively.
For $ \tau_{\rm s}= 10^{-3}$, the efficiency of pebble accretion is $0.13$ when the planet is on a circular orbit. When $e_{\rm p} \simeq 0.06$, the efficiency attains a maximum value of $0.60$. For $ \tau_{\rm s}= 10^{-1}$, the efficiency of a planet on a circular orbit is $0.02$, and the maximum efficiency is close to $0.04$ when $e_{\rm p}= 0.01$. It is clear that the planet is more efficient at accreting small pebbles due to their slow radial drift, which is consistent with the results of circular pebble accretion in \se{compare}. In addition, we find that the ratio of the maximum $\varepsilon_{\rm 2D}$ to the $\varepsilon_{\rm 2D}$ at $e_{\rm p} \simeq 0$ is high for $ \tau_{\rm s}=10^{-3}$ and low for $ \tau_{\rm s}=10^{-1}$. The reason is that the settling interactions extend to higher $e_{\rm p}$ when $ \tau_{\rm s}$ is small. Therefore, the transition to the ballistic regime occurs at higher $e_{\rm p}$ and the peak efficiency receives a stronger boost for small $ \tau_{\rm s}$. To summarize, the pebble accretion efficiency $\varepsilon_{\rm 2D}$ is an increasing function of the planet-to-star mass ratio ($q_{\rm p} = M_{\rm p}/M_{\star}$), and a decreasing function of the headwind velocity ($v_{\rm hw}$) and the dimensionless stopping time of the pebble ($ \tau_{\rm s}$). The efficiency is independent of eccentricity when the eccentricity is relatively small ($e_{\rm p} \lesssim 10^{-3}$), gradually increases with eccentricity when it is moderate ($e_{\rm p} \simeq 10^{-2}$), and drops quickly when the eccentricity becomes relatively high ($e_{\rm p} \gtrsim 0.1$).
\begin{table*} \centering \caption{Model set-up of the parameter study (the planet density is $3 \ \rm g\, cm^{-3}$)} \begin{tabular}{llllll} \hline \hline Run & $M_{\rm p }$ &$M_{\star }$ & $v_{\rm hw}$ & $ \tau_{\rm s}$ & $N_{\rm tot}$ \\ & ($ \rm M_{\oplus}$) & ($ \rm M_\odot $) & ($\rm m \ s^{-1}$) & & \\ \hline \#1 default & $0.1 $ &$1$ & 30 & $10^{-2}$ & \changed{$45\times45$ } \\ \#2 massive planet & $1 $ &$1$ & 30 & $10^{-2}$ & \changed{$25\times 25$} \\ \#3 low-mass planet & $0.01 $ & $1$ & 30 & $10^{-2}$ & \changed{$80 \times 80$} \\ \#4 massive star & $0.1$ &$3$ & 30 & $10^{-2}$ & \changed{$ 64\times 64 $} \\ \#5 low-mass star & $0.1 $ & $0.3$ & 30 & $10^{-2}$ & \changed{$ 32 \times 32$ } \\ \#6 low headwind velocity & $0.1 $ &$1$ & 15 & $10^{-2}$ &\changed{$32\times 32$ } \\ \#7 high headwind velocity & $0.1 $ & $1$ & 60 & $10^{-2}$ &\changed{$64\times 64$} \\ \#8 small stopping time & $0.1 $ & $1$ & 30 & $10^{-3}$ & \changed{$25\times 25$} \\ \#9 large stopping time & $0.1 $ &$1$ & 30 & $10^{-1}$ & \changed{$ 80\times 80$} \\ \hline \hline \end{tabular} \label{tab:tab1} \end{table*} \subsection{Analytical fit expression} \label{sec:expression} \changed{In this subsection we present analytical fitting formulas; their detailed derivations are described at length in the appendix.} We find that the obtained accretion efficiency in the settling regime is well fitted by \begin{equation} \varepsilon_\mathrm{set,2D} = \changed{0.32} \sqrt{\frac{q_{\rm p} }{ \tau_{\rm s} \eta^2} \left( \frac{\Delta v}{v_{\rm K}} \right) } f_\mathrm{set}, \label{eq:eps2d-set} \end{equation} where $q_{\rm p} $ is the mass ratio of the planet to the central star. The relative velocity between the planet and the pebble ($\Delta v$) is \begin{equation} \Delta v = {\rm max} \left(v_{\rm cir},v_{\rm ecc} \right). \end{equation} Here $v_{\rm cir}$ is the relative velocity between a planet on a circular orbit and the pebble.
We adopt an expression for $v_{\rm cir}$ that combines the headwind and the shear regimes, \begin{equation} v_{\rm cir} = \left[1 + \changed{5.7} \left(\frac{q_{\rm p}}{q_{\rm hw/sh}}\right) \right]^{-1} v_{\rm hw} + v_{\rm sh}, \label{eq:vcir} \end{equation} where $v_{\rm hw} = \eta v_{\rm K}$, $v_{\rm sh} =\changed{0.52} (q_{\rm p} \tau_{\rm s})^{1/3} v_{\rm K}$, and $q_{\rm hw/sh} = \eta^3/ \tau_{\rm s}$ is the transition mass ratio between the two regimes. \changed{The velocity of the eccentric planet relative to its circular Keplerian value is $v_{\rm ecc}$,} for which we fit \begin{equation} v_{\rm ecc} = \changed{0.76} e_{\rm p} v_{\rm K}. \label{eq:vecc} \end{equation} In the epicycle approximation, it can be shown that the velocity relative to a circular orbit ranges from a minimum of $e_{\rm p} v_{\rm K}/2$ to a maximum of $e_{\rm p} v_{\rm K}$ \citep[e.g.][]{Johansen2015}. Our numerical coefficient of \changed{$0.76$} falls between these limits. When $\eta>e_{\rm p}$, $\Delta v$ is dominated by the headwind, which is also consistent with the analytical expression of \cite{Johansen2015}. When $ \Delta v $ becomes too large, the accretion is no longer in the settling regime. We adopt an exponential function $f_\mathrm{set}$ to express the decay of $ \varepsilon_\mathrm{set,2D}$, \begin{equation} f_\mathrm{set} = \exp{\left[-0.5 \left( \frac{\Delta v}{v_{\ast}} \right)^{2}\right]}, \label{eq:fset} \end{equation} where \begin{equation} v_{\ast} = ( q_{\rm p} / \tau_{\rm s})^{1/3} v_{\rm K} \label{eq:vstar} \end{equation} is the transition velocity from the settling to the ballistic regime \changed{(see derivations in \se{appendixset})}. The accretion efficiency in the ballistic regime is \begin{equation} \varepsilon_{\rm bal,2D} = \frac{ R_{\rm p}} { 2\pi \tau_{\rm s} \eta r_{\rm p} }\sqrt{ \frac{2 q_{\rm p} r_{\rm p} }{R_{\rm p}} + \left( \frac{ \Delta v }{v_{\rm K} } \right)^2 } \left(1 -f_{\rm set}\right).
\label{eq:effbl0} \end{equation} It is important to note that $\varepsilon_{\rm set,2D}$ is independent of $r_{\rm p}$ and $R_{\rm p}$, whereas $\varepsilon_{\rm bal,2D}$ depends on the ratio of these two quantities, $R_{\rm p}/r_{\rm p}$. The total accretion efficiency is \begin{equation} \varepsilon_{\rm 2D} = \varepsilon_{\rm set,2D} + \varepsilon_{\rm bal,2D}. \end{equation} The above recipe for $\varepsilon_{\rm 2D}$ is calculated from a rate expression. It is therefore not guaranteed that the accretion probability $\varepsilon_\mathrm{2D}$ remains no larger than unity ($\varepsilon_{\rm 2D}\leqslant 1$). The situation is analogous to radioactive decay, where, for a decay rate $\lambda$, the probability of decaying after a time $t$ is $P=1-\exp(-\lambda t)$. We can therefore correct for this effect simply by redefining $\tilde{\varepsilon}_{\rm 2D} = 1 - \exp(-\varepsilon_{\rm 2D})$. Clearly, in such an expression, $\tilde{\varepsilon}_{\rm 2D} \simeq \varepsilon_{\rm 2D}$ when $ \varepsilon_{\rm 2D}\ll1$ and $\tilde{\varepsilon}_{\rm 2D} \rightarrow 1$ when $ \varepsilon_{\rm 2D} \gg 1$. Accounting for this probabilistic nature of $\varepsilon$, we therefore apply \begin{equation} \varepsilon_{\rm 2D} \rightarrow 1 - \exp(-\varepsilon_{\rm 2D}), \label{eq:efficiency} \end{equation} which ensures that $\varepsilon_{\rm 2D} \leqslant 1$. In \fg{diff} and \fg{para}, we find that this analytical fit expression (\eq{efficiency}) agrees quite well with the simulations, for planets on circular as well as on eccentric orbits. \changed{ \cite{Lambrechts2014} also calculate the efficiency when the planet is on a circular orbit.
In the shear-dominated regime, \eq{eps2d-set} can be written as \begin{equation} \begin{split} \varepsilon_{\rm sh} &= 0.1 \left( \frac{M_{\rm p}}{M_{\oplus}} \right)^{2/3} \left( \frac{\eta}{10^{-3}} \right)^{-1} \left( \frac{\tau_{\rm s}}{0.1} \right)^{-1/3} \\ & \simeq 0.022 \left( \frac{M_{\rm p}}{M_{\oplus}} \right)^{2/3} \left( \frac{\tau_{\rm s}}{0.1} \right)^{-1/3} \left( \frac{r}{10 \ \rm AU} \right)^{-1/2}. \end{split} \label{eq:efficiency2} \end{equation} The second expression adopts the same disk model as \cite{Lambrechts2014}. Comparing \eq{efficiency2} with Eq.~(33) of \cite{Lambrechts2014}, we find that the scaling relations are identical, while our prefactor is $35 \%$ lower than theirs.} In the headwind-dominated regime, \eq{eps2d-set} can be written as \begin{equation} \begin{split} \varepsilon_{\rm hw} &= 0.055 \left( \frac{M_{\rm p}}{M_{\oplus}} \right)^{1/2} \left( \frac{\eta}{10^{-3}} \right)^{-1/2} \left( \frac{\tau_{\rm s}}{0.1} \right)^{-1/2}. \end{split} \label{eq:efficiency_hw} \end{equation} The transition mass from the headwind to the shear regime is \begin{equation} M_{\rm hw/sh} = 0.03 \left( \frac{\eta}{10^{-3}} \right)^{3} \left( \frac{\tau_{\rm s}}{0.1} \right)^{-1}M_{\oplus}. \end{equation} In the eccentricity-dominated regime, \eq{eps2d-set} can be written as (neglecting $f_{\rm set}$) \begin{equation} \begin{split} \varepsilon_{\rm ecc} &= 0.055 \left( \frac{e_{\rm p}}{10^{-3}} \right)^{1/2} \left( \frac{M_{\rm p}}{M_{\oplus}} \right)^{1/2} \left( \frac{\eta}{10^{-3}} \right)^{-1}\left( \frac{\tau_{\rm s}}{0.1} \right)^{-1/2}. \end{split} \label{eq:efficiency_e} \end{equation} We find that in this regime $\varepsilon_{\rm set,2D}$ increases as $e_{\rm p}^{1/2}$.
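For reference, the full fitting recipe above (\eq{eps2d-set} through \eq{efficiency}) can be evaluated with a few lines of code. The Python sketch below is our own illustration, not part of the original derivation; the value $\eta = 10^{-3}$ adopted for the default run and the variable names are assumptions made for this example, and the ballistic term is optional through the ratio $R_{\rm p}/r_{\rm p}$.

```python
import math

def eps_2d(q_p, tau_s, eta, e_p, Rp_over_rp=0.0):
    """2D pebble accretion efficiency following the fitting recipe above.

    q_p        : planet-to-star mass ratio
    tau_s      : dimensionless stopping time of the pebble
    eta        : headwind parameter, v_hw = eta * v_K
    e_p        : planet eccentricity
    Rp_over_rp : planet radius over orbital distance (enters only the
                 ballistic term; 0 recovers the pure settling limit)
    All velocities below are expressed in units of v_K.
    """
    # circular-orbit relative velocity, bridging headwind and shear regimes
    q_hw_sh = eta**3 / tau_s                      # transition mass ratio
    v_cir = eta / (1.0 + 5.7 * q_p / q_hw_sh) + 0.52 * (q_p * tau_s)**(1/3)
    # eccentric velocity fit; 0.76 lies between the epicyclic extremes
    # of e_p/2 and e_p (in units of v_K)
    v_ecc = 0.76 * e_p
    dv = max(v_cir, v_ecc)

    # settling-to-ballistic transition
    v_star = (q_p / tau_s)**(1/3)
    f_set = math.exp(-0.5 * (dv / v_star)**2)

    eps_set = 0.32 * math.sqrt(q_p * dv / (tau_s * eta**2)) * f_set
    eps_bal = 0.0
    if Rp_over_rp > 0.0:
        eps_bal = (Rp_over_rp / (2.0 * math.pi * tau_s * eta)
                   * math.sqrt(2.0 * q_p / Rp_over_rp + dv**2)
                   * (1.0 - f_set))
    eps = eps_set + eps_bal
    return 1.0 - math.exp(-eps)                   # probabilistic correction

# default run #1: M_p = 0.1 M_earth around 1 M_sun (q_p ~ 3e-7),
# with an assumed eta = 1e-3
q_p, eta = 3.0e-7, 1.0e-3
print(eps_2d(q_p, 1e-3, eta, 0.0))   # tau_s = 1e-3: close to the quoted 0.13
print(eps_2d(q_p, 1e-1, eta, 0.0))   # tau_s = 1e-1: close to the quoted 0.02
print(eps_2d(q_p, 1e-2, eta, 0.01))  # a moderate e_p boosts the efficiency
```

With these inputs the sketch reproduces the trends quoted at the start of this section: higher efficiency for smaller $\tau_{\rm s}$, and a boost at moderate eccentricity.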
\subsection{Neglected effects} In deriving the above expressions, we have neglected several effects: \begin{enumerate} \item Evaporation/melting of pebbles around planetesimals travelling at supersonic velocities (relevant for $e_{\rm p} >h_{\rm gas}$). One caveat is that planetesimals on eccentric orbits can produce bow shocks as they move supersonically through the disk gas (i.e., $e_{\rm p} > h_{\rm gas}$, \cite{Morris2012}). The surrounding gas temperature is raised due to energy deposition at the shock front. \changed{Therefore, pebbles are likely to be sublimated/melted before they settle onto the planetesimal, depending on the gas density and pressure (e.g., Fig. 6 of \cite{Brouwers2017}), the planetesimal's mass, and the pebble's size and composition \citep{Hood1998,Miura2005}.} A detailed treatment is beyond the scope of this paper; here we do not solve for the thermal balance of the pebble. Nevertheless, the halting of mass growth due to pebble evaporation is most relevant for low-mass planetesimals.
For high-mass planets capable of accreting the primordial disk gas, evaporating pebbles may not reach the surfaces of the cores, but still enrich their envelopes \citep{Venturini2016,Alibert2017,Brouwers2017}. \item Aerodynamic deflection \citep{Guillot2014,Johansen2015,Visser2016}, relevant for very low-mass planetesimals and small-$ \tau_{\rm s}$ pebbles. When the planet mass is very low, its impact radius reduces to the physical radius. However, when the stopping time of the pebble is short compared to the crossing time of the planetesimal ($\sim R_{\rm p}/v_{\rm hw}$), the pebble will follow the gas streamlines and avoid accretion. This occurs for typical planetesimal sizes $R_{\rm p} \lesssim 100 \ \rm km$ and $ \tau_{\rm s} \lesssim 10^{-3}$. \item Pre-planetary atmosphere formation, relevant for $b_{\rm set} <R_\mathrm{\rm Bondi}$. Once the planet's surface escape velocity becomes larger than the local sound speed, the planet will start to accrete the disk gas, creating a pre-planetary atmosphere. The gas density in the planetary atmosphere is higher than that of the surrounding disk gas. When the pebble enters the planetary atmosphere, the gas drag becomes larger and the accretion cross section also increases due to this enhanced drag. However, for pebble accretion, the accretion cross section ($b_{\rm set} \simeq \tau_{\rm s}^{1/3}R_{\rm H}$ in the shear regime) can be as large as the planet's Hill sphere. The radius of the planetary atmosphere cannot exceed the Bondi radius, $R_{\rm Bondi} = G M_{\rm p}/c_{\rm s}^2$, where $c_{\rm s}$ is the sound speed at the planet's location. As long as $R_{\rm Bondi} < b_{\rm set}$ (or $q_{\rm p}<2.3\tau_{\rm s}^{1/2}h_\mathrm{gas}^3$ in dimensionless units), it is therefore appropriate to neglect the atmospheric enhancement of pebble accretion. For a planet accreting $ \tau_{\rm s} = 10^{-2}$ pebbles in a disk with a typical scale height $h_{\rm gas} = 0.05$, this condition is satisfied as long as the planet is less massive than $10 \ \rm M_\oplus$.
In addition, we assume that the gas moves on an unperturbed sub-Keplerian orbit. In reality the planet will perturb the gas flow. A natural scale on which these effects appear is again the planet's Bondi radius \citep{Ormel2015}. Therefore, the effect of the flow pattern on pebble accretion is also minor when $R_{\rm Bondi} < b_{\rm set}$. \item Pebble isolation mass. When the planet is massive enough to strongly perturb the disk gas, a gap forms in the vicinity of the planet, which inverts the local gas pressure gradient. This process halts the inward drift of pebbles and therefore terminates pebble accretion. This planet mass is defined as the pebble isolation mass ($M_{\rm p, iso}$). Approximating it by the gap-opening mass \citep{Lin1993}, $M_{\rm p, iso}$ is around $20 \ \rm M_\oplus$ for a solar-mass star ($M_{\rm p, iso} \sim h_{\rm gas}^3 M_{\star}$, \cite{Lambrechts2014b,Bitsch2018}). In our study, we consider planet masses in the range $10^{-3} \ \rm M_\oplus \lesssim M_{\rm p} \lesssim 10 \ \rm M_\oplus < M_\mathrm{p,iso}$. \end{enumerate}
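As a quick numerical check on the pre-planetary atmosphere criterion above, the dimensionless condition $q_{\rm p} < 2.3\,\tau_{\rm s}^{1/2} h_{\rm gas}^3$ can be converted into a maximum planet mass. The short Python sketch below is our own; the conversion $1\,{\rm M}_\odot \approx 3.33\times10^{5}\,{\rm M}_\oplus$ is the only input beyond the text.

```python
# Threshold planet mass below which the pre-planetary atmosphere can be
# neglected for pebble accretion: R_Bondi < b_set, i.e.
# q_p < 2.3 * tau_s**0.5 * h_gas**3 (the dimensionless form quoted above).
MSUN_IN_MEARTH = 3.33e5  # 1 solar mass in Earth masses

def max_mass_no_atmosphere(tau_s, h_gas, m_star_msun=1.0):
    q_max = 2.3 * tau_s**0.5 * h_gas**3
    return q_max * m_star_msun * MSUN_IN_MEARTH  # in Earth masses

print(max_mass_no_atmosphere(1e-2, 0.05))  # ~10 M_earth, as stated in the text
```

For $\tau_{\rm s}=10^{-2}$ and $h_{\rm gas}=0.05$ this indeed gives a limit of about $10\,\rm M_\oplus$, consistent with the statement in item (iii).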
\section{Introduction} Approximately two thirds of the available observing time with the Parkes radio telescope is currently dedicated to searching for and studying pulsars. Numerous pulsars are observed over many years, enabling studies of the pulsars themselves, theories of gravity, the interstellar medium and many other phenomena. Traditionally, pulsars were analysed individually. The pulsar timing method, in which the pulse arrival times are compared with predictions for those arrival times, is used to obtain accurate measurements of each pulsar's pulse, astrometric and binary parameters. The resulting post-fit timing residuals indicate unmodelled physical effects that affect the pulsar signal. Some of these, such as intrinsic instabilities in the pulsar rotation, will be specific to a given pulsar. If the post-fit timing residuals for all pulsars are identical then the cause must be an Earth-based phenomenon. Processing the data sets with an imprecise knowledge of the solar system planetary masses will affect some pulsars, but not others (depending, for instance, on the ecliptic latitude of the pulsar). A supermassive black hole binary system emitting gravitational waves will induce timing residuals dependent upon the angle between the pulsar, Earth and gravitational-wave source. Therefore, by identifying the angular distribution of the correlations it is possible to disentangle many of these phenomena. This leads to the concept of a Pulsar Timing Array (see, e.g., Foster \& Backer 1990) in which a large number of millisecond pulsars are observed, their post-fit timing residuals determined and a search is made for correlated timing residuals. Since the year 2005, the Parkes Pulsar Timing Array (PPTA) project team has been observing a sample of pulsars in order to form a PTA data set.
The major scientific aims of the project are to: \begin{enumerate} \item make a direct detection of ultra-low-frequency gravitational waves, \item improve the Solar System planetary ephemeris, \item develop a pulsar-based time scale. \end{enumerate} As described in this review article, significant progress has been made towards all these goals. A recent detailed summary of the PPTA project has been published in Manchester et al. (2013). Here we provide a brief review of the project as a whole, before describing recent developments that were not covered in the earlier publication. In \S\ref{sec:history} we provide a brief history of the PPTA project. In \S\ref{sec:telescope} we describe the observations. \S\ref{sec:toad} explains how we process the raw data sets to form pulse times-of-arrival and from them obtain timing residuals. Our recent science results, current research and future plans are reviewed in the final three sections. \section{History of the Parkes Pulsar Timing Array}\label{sec:history} A history of the PPTA project has been presented in Hobbs et al. (2012a). Here we provide a brief summary. The first request for observations with the Parkes telescope for the PPTA was submitted in late 2003 and the first high-quality observations were obtained in March 2005. At that time the basic concept of a PTA was understood, but the necessary timing precision and the total data span required to produce scientifically valuable data were only roughly known. The initial sample of pulsars was based on the analysis of Jenet et al. (2005), who demonstrated that an isotropic, stochastic gravitational wave background with the amplitude predicted from the available theoretical calculations could be detected if $\sim 20$ pulsars were timed weekly over a period of five years with an rms timing residual of $\sim$100\,ns. As data were collected, a few pulsars were dropped from the sample and, as new discoveries were made, new pulsars were added. Jenet et al.
(2006) published the first major result from the PPTA. That work provided, at the time, the most stringent upper bound on the existence of a gravitational wave background, but only made use of a small subset of pulsars. The algorithm developed in 2006 could only be applied to pulsar timing residuals that were ``white''. However, ``red'' (low-frequency) noise was already detectable in many of the data sets. You et al. (2007a,b) showed that much of this red noise could be attributed to dispersion measure variations from the interstellar medium and/or the solar wind. This work highlighted the necessity for an observing system that provides sufficient frequency coverage to enable correction for these dispersion measure variations. The first analysis of the sensitivity of a PTA to individual, continuous sources of gravitational waves was presented by Yardley et al. (2010) using PPTA data sets. This led to a sky-averaged constraint on the merger rate of nearby ($z < 0.6$) black hole binaries with a chirp mass of $10^{10}$\,M$_\odot$ of less than one merger every seven years. In the year 2010, progress was also made towards the second scientific aim of the PPTA project. Champion et al. (2010) developed algorithms that allowed errors in the masses of known solar system planets to be identified and this work led to the most precise published estimate for the mass of the Jovian system, $9.547921(2) \times 10^{-4}$\,M$_\odot$.
\begin{table} \caption{Parameters of the current observing systems for the Parkes Pulsar Timing Array}\label{tb:parkes} \begin{center} \begin{tabular}{ll} \hline Parameter & Value \\ \hline Telescope diameter & 64\,m \\ 10\,cm observing band & 2588 -- 3612\,MHz\\ 20\,cm observing band & 1241 -- 1497\,MHz\\ 50\,cm observing band & 700 -- 764\,MHz \\ 10\,cm system equivalent flux density & 50\,Jy \\ 20\,cm system equivalent flux density & 36\,Jy\\ 50\,cm system equivalent flux density & 62\,Jy \\ Incoherent digital filter bank systems & PDFB3, PDFB4 \\ Coherent de-dispersion systems & APSR, CASPSR \\ Typical observing time & 1\,hr\\ \hline \end{tabular} \end{center} \end{table} Yardley et al. (2011) attempted to make a detection of gravitational waves with the PPTA data sets. Their algorithms were able to detect simulated gravitational waves, but their work showed that our observations were consistent with the hypothesis that no gravitational wave background is present in the data. However, their algorithm is not effective in the presence of significant red noise in the pulsar timing residuals. This led to the work of Coles et al. (2011), which described how pulsar data sets should be analysed when affected by red noise. The third of our major project aims was achieved by Hobbs et al. (2012b), who developed the first pulsar-based time scale that had a precision comparable to terrestrial time standards. This work demonstrated that, for existing data sets, the atomic timescales were sufficient for our purposes, but with improved data sets it is expected that pulsar-based time scales will become more important. Even after the work of You et al. (2007a), the effects of dispersion measure variations were still not fully dealt with. Keith et al. (2012) developed a new algorithm for the measurement and removal of dispersion measure variations and this is now being routinely applied in our data analysis. Ravi et al.
(2012) improved our prediction of the expected amplitude of a gravitational wave background. Interestingly, their predictions (based on the Millennium simulation) are ruled out by a new PPTA upper bound on the gravitational wave background (Shannon et al., submitted). The ramifications of the Shannon et al. limit on the predictions of the expected gravitational wave signal are still being considered. \section{The telescope and details of observations}\label{sec:telescope} All observations are obtained using the 64-m Parkes radio telescope situated in New South Wales, Australia. Since the year 2005, observations have been made in sessions at 2--3 week intervals. In each observing session each pulsar is observed at least once in the 20\,cm observing band and once with a dual-frequency 10/50\,cm receiver. It is often possible within an observing session to obtain more than one observation of each pulsar (particularly for pulsars such as PSR J0437$-$4715 that are situated out of the Galactic plane). Before each observation a calibration source is recorded to enable subsequent polarisation calibration. Observations of Hydra A are made at least once during each observing session in order to calibrate the flux density of the pulsar observations. We currently record the data using four backend instruments: PDFB3, PDFB4, CASPSR and APSR. CASPSR and APSR are coherent dedispersion systems whereas the digital filter bank systems (PDFB3 and PDFB4) do not coherently dedisperse the data. The current parameters of the observing system are given in Table~\ref{tb:parkes}. Further details about the backend instrumentation are given in Manchester et al. (2013). The sample of pulsars evolves as new pulsars are discovered. In Table~\ref{tb:pulsars} we list the pulsars that have been observed since Jan 2012 along with their pulse periods, dispersion measures and orbital periods.
We also list the number of observations during this time obtained in the 20\,cm and in the 10/50\,cm observing bands (note that 26 observing sessions have been carried out during this time with the first around 5th Jan 2012 and the last 12th April 2013). Many of the pulsars scintillate strongly. Typical observing durations are 1\,hr, but if the pulsar is weak in one observation we usually stop the observation, move to another pulsar and then, later in the observing session, return to observe the initial pulsar again. In Table~\ref{tb:pulsars} we list the minimum and median arrival time uncertainties for each observing band. \begin{table} \caption{Parkes Pulsar Timing Array pulsar sample and summary of data sets since Jan. 1 2012.}\label{tb:pulsars} \begin{footnotesize} \begin{tabular}{lllllllllllllllllll} \hline PSR J & P & DM & ${\rm P}_b$ & \multicolumn{3}{c}{10\,cm} & \multicolumn{3}{c}{20\,cm} & \multicolumn{3}{c}{50\,cm} \\ & & & & N$_{\rm pts}$ & $\sigma_{\rm min}$ & $\sigma_{\rm med.}$ & N$_{\rm pts}$ & $\sigma_{\rm min}$ & $\sigma_{\rm med.}$ & N$_{\rm pts}$ & $\sigma_{\rm min}$ & $\sigma_{\rm med.}$\\ & (ms)& (cm$^{-3}$pc)& (days)& & ($\mu$s) & ($\mu$s)& & ($\mu$s) & ($\mu$s)& & ($\mu$s) & ($\mu$s)\\ \hline J0437$-$4715 & 5.757 & 2.64 & 5.74 & 49 & 0.027 & 0.031 & 208 & 0.028 & 0.038 & 73 & 0.051 & 0.121 \\ J0613$-$0200 & 3.062 & 38.78 & 1.20 & 31 & 1.586 & 2.213 & 47 & 0.397 & 0.729 & 34 & 0.294 & 0.484 \\ J0711$-$6830 & 5.491 & 18.41 & --- & 35 & 0.964 & 3.932 & 53 & 0.503 & 2.072 & 36 & 0.476 & 2.330 \\ J1017$-$7156 & 2.339 & 94.23 & 6.51 & 45 & 0.503 & 2.465 & 60 & 0.122 & 0.331 & 45 & 0.258 & 0.509 \\ J1022$+$1001 & 16.453 & 10.25 & 7.81 & 38 & 0.560 & 1.261 & 48 & 0.121 & 0.789 & 43 & 0.396 & 2.150 \\ \\ J1024$-$0719 & 5.162 & 6.49 & --- & 39 & 1.720 & 3.996 & 47 & 0.216 & 0.163 & 40 & 0.451 & 4.142 \\ J1045$-$4509 & 7.474 & 58.17 & 4.08 & 29 & 4.196 & 6.825 & 36 & 1.072 & 1.629 & 29 & 1.729 & 2.165 \\ J1600$-$3053 & 3.598 & 52.33 & 14.35 & 28 & 
0.381 & 0.612 & 40 & 0.209 & 0.273 & 34 & 1.193 & 1.433 \\ J1603$-$7202 & 14.842 & 38.05 & 6.301 & 28 & 1.234 & 8.498 & 40 & 0.447 & 0.892 & 31 & 0.815 & 1.850 \\ J1643$-$1224 & 4.622 & 62.41 & 147.02 & 26 & 1.079 & 1.385 & 36 & 0.469 & 0.597 & 26 & 1.169 & 1.269 \\ \\ J1713$+$0747 & 4.570 & 15.99 & 67.83 & 28 & 0.060 & 0.258 & 41 & 0.021 & 0.110 & 30 & 0.337 & 0.762 \\ J1730$-$2304 & 8.123 & 9.62 & --- & 29 & 0.581 & 2.701 & 31 & 0.320 & 0.895 & 30 & 0.676 & 1.533 \\ J1744$-$1134 & 4.075 & 3.14 & --- & 33 & 0.122 & 0.677 & 40 & 0.073 & 0.319 & 33 & 0.134 & 0.574 \\ J1824$-$2452A & 3.054 & 120.50 & --- & 18 & 0.673 & 1.102 & 28 & 0.168 & 0.255 & 20 & 0.481 & 0.589 \\ J1857$+$0943 & 5.362 & 13.30 & 12.33 & 26 & 0.679 & 2.815 & 34 & 0.368 & 1.037 & 26 & 1.679 & 2.454 \\ \\ J1909$-$3744 & 2.947 & 10.39 & 1.53 & 34 & 0.040 & 0.141 & 59 & 0.010 & 0.072 & 40 & 0.056 & 0.180 \\ J1939$+$2134 & 1.558 & 71.04 & --- & 28 & 0.079 & 0.205 & 37 & 0.015 & 0.035 & 31 & 0.044 & 0.059\\ J2124$-$3358 & 4.931 & 4.60 & --- & 31 & 4.709 & 7.475 & 40 & 0.697 & 1.849 & 33 & 0.519 & 2.979 \\ J2129$-$5721 & 3.726 & 31.85 & 6.63 & 32 & 4.437 & 25.942 & 45 & 0.261 & 1.813 & 33 & 0.321 & 1.204 \\ J2145$-$0750 & 16.052 & 9.00 & 6.84 & 31 & 0.503 & 1.132 & 40 & 0.086 & 0.542 & 35 & 0.345 & 1.125 \\ \\ J2241$-$5236 & 2.187 & 11.41 & 0.15 & 36 & 0.321 & 0.595 & 49 & 0.050 & 0.154 & 38 & 0.033 & 0.220 \\ \hline \end{tabular} \end{footnotesize} \end{table} \section{Forming pulse arrival times and timing residuals}\label{sec:toad} The ``raw" data as obtained from the observing system are available for download from the Parkes pulsar data archive (http://data.csiro.au; Hobbs et al. 2011). An automated pipeline based on the PSRCHIVE software suite (Hotan et al. 2004) runs the following routines on the raw data (details are provided in Manchester et al. 
2013): \begin{itemize} \item The edges of the observing band are removed, along with any identified radio-frequency interference. \item The best available timing model for the pulsar is installed into the data file. \item Polarisation and flux calibration routines are applied. \item The data are averaged in time, frequency and polarisation to produce a single profile for each observation.\footnote{For PSR~J0437$-$4715 we form and subsequently analyse the invariant interval profile.} \end{itemize} The pulse arrival time is then calculated by cross-correlating, in the frequency domain, a noise-free, analytic template with the calibrated observation. Timing residuals are formed using the \textsc{tempo2} (Hobbs, Edwards \& Manchester 2006) software package. Each arrival time is referred to Terrestrial Time as realised by the Bureau International des Poids et Mesures (BIPM2012)\footnote{\url{www.bipm.org}} and we make use of the Jet Propulsion Laboratory (JPL) DE421 planetary ephemeris\footnote{\url{http://tmo.jpl.nasa.gov/progress_report/42-178/178C.pdf}}. The pulsar models are based on those of Verbiest et al. (2009) and we correct for dispersion measure variations as described by Keith et al. (2012). In order to produce data sets with the longest possible data spans we have, where possible, combined the PPTA data (which start in the year 2005) with earlier Parkes observations of the same pulsars. These earlier data sets have been described by Verbiest et al. (2008, 2009). These extra data allow us to extend our data sets back to 1995. The earlier data have poorer observing cadence and timing precision than the more recent data, but the most significant problem is that they were obtained only in the 20\,cm observing band. This restricts the precision with which the dispersion measure variations can be measured and corrected. Over time our data sets have been improving as new systems (both front-end receivers and back-end instruments) have been commissioned.
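The frequency-domain template matching used to derive arrival times can be illustrated with a toy example. The following Python/numpy sketch is purely illustrative -- it is not the PSRCHIVE implementation -- and simulates recovering a known pulse-phase shift by cross-correlating a noisy profile with a noise-free template:

```python
import numpy as np

rng = np.random.default_rng(1)
nbin = 1024

# noise-free analytic template: a Gaussian pulse profile
phase = np.arange(nbin) / nbin
template = np.exp(-0.5 * ((phase - 0.3) / 0.01) ** 2)

# simulated observation: the template rotated by a known number of
# phase bins, plus radiometer-like white noise
true_shift = 100
profile = np.roll(template, true_shift) + 1e-3 * rng.standard_normal(nbin)

# cross-correlate in the frequency domain; the lag of the correlation
# peak estimates the pulse phase offset, which a timing pipeline would
# convert to a time of arrival via the pulse period
spec = np.fft.rfft(profile) * np.conj(np.fft.rfft(template))
xcorr = np.fft.irfft(spec, nbin)
est_shift = int(np.argmax(xcorr))
print(est_shift)  # close to true_shift = 100
```

In practice the real analysis interpolates the correlation peak to a small fraction of a phase bin, which is what makes sub-microsecond timing possible.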
To demonstrate this improvement we show, in Figure~\ref{fg:median_toa}, the median ToA uncertainty for data obtained in a given year for PSRs J0437$-$4715, J1713$+$0747 and J1744$-$1134. Prior to the solid vertical line the observations were from the earlier Parkes observing program. After the vertical line the data were obtained for the PPTA project. The timing precision continued to improve until around 2009, at which time we commissioned our current suite of instrumentation. \begin{figure} \begin{center} \includegraphics[width=8cm,angle=-90]{median_toa.ps} \end{center} \caption{Median ToA uncertainty per year for PSRs J0437$-$4715 (dashed), J1713$+$0747 (dotted) and J1744$-$1134 (solid).}\label{fg:median_toa} \end{figure} Our extended data sets are described in Table~\ref{tb:extended} and the timing residuals in the 20\,cm observing band are shown in Figure~\ref{fg:longest}. The table lists the pulsar name, data span, total number of observations and, to give an indication of the variation in the data set, the weighted rms timing residual measured in the 20\,cm observing band. We note that the weighted rms timing residual is often a misleading statistic. Some pulsars scintillate strongly and so the weighted rms residual can be dominated by a few measurements (which also dominate the least-squares-fitting procedure). The rms residual is also affected by red noise in the data set and therefore depends upon data span. Finally, the rms residual is affected by arbitrary phase jumps that often need to be included in the timing model fit to account for time offsets between different instruments. Details on determining these jumps are given in Manchester et al. (2013). \begin{table} \begin{center} \caption{The extended Parkes Pulsar Timing Array data sets. $\sigma_{\rm 20cm}$ is the weighted rms timing residual in the 20\,cm observing band.}\label{tb:extended} \begin{tabular}{llllll} \hline PSR J & First obs. & Last obs.
& T$_{\rm span}$ & N$_{\rm obs}$ & $\sigma_{\rm 20cm}$\\ & (MJD) & (MJD) & (yr) & & ($\mu$s) \\ \hline J0437$-$4715 & 50191 & 56397 & 17.0 & 5160 & 0.223 \\ J0613$-$0200 & 51527 & 56395 & 13.3 & 799 & 1.177 \\ J0711$-$6830 & 49374 & 56396 & 19.2 & 729 & 1.307 \\ J1017$-$7156 & 55456 & 56395 & 2.6 & 282 & 0.684 \\ J1022$+$1001 & 52650 & 56395 & 10.3 & 755 & 1.996 \\ \\ J1024$-$0719 & 50118 & 56395 & 17.2 & 626 & 4.622 \\ J1045$-$4509 & 49406 & 56396 & 19.1 & 714 & 4.150 \\ J1600$-$3053 & 52302 & 56396 & 11.2 & 754 & 0.797 \\ J1603$-$7202 & 50026 & 56396 & 17.4 & 590 & 2.046 \\ J1643$-$1224 & 49422 & 56396 & 19.1 & 581 & 2.146 \\ \\ J1713$+$0747 & 49422 & 56396 & 19.1 & 683 & 0.399 \\ J1730$-$2304 & 49422 & 56396 & 19.1 & 514 & 1.905 \\ J1732$-$5049 & 52647 & 55725 & 8.4 & 226 & 2.246 \\ J1744$-$1134 & 49729 & 56395 & 18.2 & 637 & 0.575 \\ J1824$-$2452A & 53519 & 56395 & 7.9 & 401 & 3.661 \\ \\ J1857$+$0943 & 53087 & 56395 & 9.1 & 416 & 0.876 \\ J1909$-$3744 & 52618 & 56396 & 10.3 & 1189 & 0.186 \\ J1939$+$2134 & 49957 & 56395 & 17.6 & 489 & 4.347 \\ J2124$-$3358 & 49490 & 56395 & 18.9 & 713 & 2.677 \\ J2129$-$5721 & 49987 & 56396 & 17.5 & 576 & 1.110 \\ \\ J2145$-$0750 & 49518 & 56396 & 18.8 & 751& 1.066 \\ J2241$-$5236 & 55235 & 56396 & 3.2 & 292 & 0.320 \\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \includegraphics[width=8cm]{longDset.ps} \end{center} \caption{Extended 20\,cm PPTA data sets. Each panel, representing each pulsar in the PPTA sample, is scaled independently. The value listed under the pulsar's name indicates the residual range (i.e., the highest residual minus the lowest).}\label{fg:longest} \end{figure} \section{Current research} Members of the PPTA team are working on all aspects of the project - from improving the instrumentation at Parkes to developing the algorithms required for us to achieve the main aims of the project. Here we first describe improvements being made in the observing system. 
We then highlight current research related to the three main project goals. \subsection{Improving the PPTA data sets} \begin{figure} \begin{minipage}{8cm} \includegraphics[width=5cm,angle=-90]{1022.ps} \end{minipage} \begin{minipage}{8cm} \includegraphics[width=5cm,angle=-90]{profiles.ps} \end{minipage} \caption{(left panel) Timing residuals for PSR~J1022$+$1001 in the 20\,cm observing band. Two observations separated by two days are indicated by the labels (A) and (B). (right panel) The profiles for the observations indicated.}\label{fg:1022} \end{figure} \begin{figure} \begin{minipage}{7cm} \includegraphics[width=7cm]{redNoise.ps} \end{minipage} \begin{minipage}{7cm} \includegraphics[width=6cm,angle=-90]{1603.ps} \end{minipage} \caption{(left panel) A sample of 20\,cm data sets that have been corrected for dispersion measure variations, but are still significantly affected by low frequency noise. (right panel) The timing residuals in the 50\,cm band for PSR~J1603$-$7202.}\label{fg:redNoise} \end{figure} The initial intention for the PPTA project was to obtain timing residuals for around 20 pulsars with rms timing residuals of $\sim$100\,ns. This has not been achieved. Many of the pulsars are relatively weak and, with the available observation times and receiver systems, we are only able to achieve timing precisions between $\sim$500\,ns and $\sim$1\,$\mu$s. However, some of our data sets are affected by unexplained white- and red-noise processes. Removing the cause of such excess noise would significantly improve these data sets. In this section we describe the current status of research into such excess noise. Oslowski et al. (2011) analysed 25 hours of observations of PSR J0437$-$4715 and showed that the timing residuals had a standard deviation around four times the expected value given the arrival time uncertainties.
This was explained as being caused by the intrinsic variability of the pulse shape and they showed that, in the 20\,cm band, timing precision better than 30--40\,ns in a 1 hour observation is highly unlikely, regardless of future improvements in telescope sensitivity. In Table~\ref{tb:pulsars}, PSR~J1022$+$1001 has a median error bar size in the 20\,cm band of 0.8\,$\mu$s. The timing residuals show no excess red noise and yet the weighted rms timing residual is 1.2\,$\mu$s and the unweighted rms is 2.4\,$\mu$s (Figure~\ref{fg:1022}). In the figure, two observations are labelled as (A) and (B). They are separated in time by only two days, but the residuals are $\sim$\,7\,$\mu$s apart. The reason for this is shown in the right-hand panel of Figure~\ref{fg:1022}. The profiles in the two observations differ. In (A) the leading pulse component is weaker than the trailing component. In (B) the reverse is observed. The reason for this variability is not yet understood, but is likely to be caused by calibration errors (see van Straten et al. 2013), intrinsic pulse shape variability (Kramer et al. 1999) and/or scintillation and pulse-shape evolution across the band (Ramachandran \& Kramer 2003). Many of our pulsar data sets indicate the presence of an unmodelled red noise process. Manchester et al. (2013) report that approximately half of the pulsar sample have a significant $\ddot{\nu}$ measurement. The most extreme cases are shown in Figure~\ref{fg:redNoise} where we show the timing residuals after correction for dispersion measure variations. If this red noise has similar properties to the timing noise studied by Lyne et al. (2010) then it may be possible to correct it by searching for correlations with pulse shape variations. To date, we have not been successful in our attempts to do this. The timing residuals for PSR~J1603$-$7202 show another unmodelled process (right panel in Figure~\ref{fg:redNoise}).
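For scale, a change $\Delta$DM in dispersion measure adds a frequency-dependent delay $\Delta t = D\,\Delta\mathrm{DM}/f^{2}$, with $D \simeq 4.149$\,ms\,GHz$^{2}$\,cm$^{3}$\,pc$^{-1}$. A quick sketch of the sizes involved (the band centre frequencies below are nominal assumptions):

```python
# Cold-plasma dispersion delay: dt = D * DM / f^2,
# with D ~ 4.149 ms GHz^2 cm^3 pc^-1 (standard dispersion constant).
D_MS = 4.149

def dm_delay_us(delta_dm, freq_ghz):
    """Delay in microseconds from a DM offset delta_dm (cm^-3 pc)
    at observing frequency freq_ghz (GHz)."""
    return 1e3 * D_MS * delta_dm / freq_ghz**2

# A discrete DM step of 2e-3 cm^-3 pc, evaluated in the 20 cm
# (~1.4 GHz) and 50 cm (~0.7 GHz) observing bands:
print(round(dm_delay_us(2e-3, 1.4), 2))  # 4.23 (microseconds)
print(round(dm_delay_us(2e-3, 0.7), 2))  # 16.93 (microseconds)
```

Delays of a few microseconds are large compared with the sub-microsecond precision of the best data sets, hence the importance of the multi-band DM corrections.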
The excess that lasts for around 250\,d during the year 2006 has been explained by Keith et al. (2013) as a discrete change in the dispersion measure of $\sim 2 \times 10^{-3}$\,cm$^{-3}$pc possibly caused by an extreme scattering event. Improving our calibration procedures and attempting to correct for red timing noise will significantly improve our current data sets. However, it is also necessary to continue to improve our hardware systems. We are currently proposing a wide-band receiver system that will cover the band from 0.7 to 4\,GHz. The entire band will be directly digitised in the focus cabin and processed by a GPU-based processor. It is hoped that such a system will be commissioned within two years. \subsection{Gravitational wave detection} \begin{figure} \begin{minipage}{7cm} \includegraphics[width=6cm,angle=0]{burstEvent1.ps} \end{minipage} \begin{minipage}{7cm} \includegraphics[width=6cm,angle=-90]{burstEvent2.ps} \end{minipage} \caption{The left-hand panel shows the simulated timing residuals for five pulsars that are affected by a gravitational wave burst. The functional form of the burst in the two polarisation states A$_+$ and A$_\times$ is shown as the solid lines in the right-hand panel, together with the \textsc{tempo2} fit for A$_+(t)$ and A$_\times(t)$ (points with error bars).}\label{fg:burst} \end{figure} Recent theoretical calculations suggest that the detectable gravitational wave signal is unlikely to be an isotropic, stochastic gravitational wave background (see, for instance, Ravi et al. 2012). We therefore require gravitational-wave detection algorithms that are sensitive to backgrounds, individual continuous wave sources, evolving sources, burst events or memory events. We have recently attempted to simplify the search for these various types of wave, by noting that the pulsars in the PPTA act as individual elements of a giant gravitational wave telescope.
By suitably weighting the timing residuals from each pulsar we can ``point'' this gravitational wave telescope in a particular direction in the sky and hence obtain the functional form of the gravitational wave signal from that direction. We have recently updated the \textsc{tempo2} software package to produce time series for the two gravitational wave polarisation states ($A_+(t)$ and $A_\times(t)$) from a specified sky direction. Separate algorithms can then be applied to search the time series corresponding to different sky positions for the signatures of burst events, individual sources or memory events. As an example we show, in the left-hand panel of Figure~\ref{fg:burst}, five simulated data sets that contain an unrealistic, but instructional, gravitational wave burst event. In the right-hand panel of Figure~\ref{fg:burst} we show the resulting $A_+(t)$ and $A_\times(t)$ time series (data points with errors) that clearly recover the simulated burst (solid lines). The resulting fit is not perfect as it is impossible to measure the linear or quadratic component of a burst event that lasts longer than the data span. This is because the pulsars' intrinsic pulse periods and time derivatives are unknown. The $A_+(t)$ and $A_\times(t)$ time series are therefore constrained within the fit not to include a quadratic polynomial. The method described above can be used to search for individual sources of gravitational waves. However, this method is not suitable for a gravitational wave background. We are therefore improving the algorithm described by Yardley et al. (2011). Our algorithm is still being developed, but is based on forming cross-power spectra for each pair of pulsars using the Cholesky method for dealing with steep, red noise (Coles et al. 2011). These cross-power spectra are used to form the covariance between two data sets.
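The angular signature these covariances are tested against is the Hellings \& Downs (1983) curve, $\zeta(\theta)=\frac{3}{2}x\ln x-\frac{x}{4}+\frac{1}{2}$ with $x=(1-\cos\theta)/2$, which can be transcribed directly:

```python
import numpy as np

def hellings_downs(theta):
    """Expected correlation induced by an isotropic GW background
    between two pulsars separated by angle theta (radians), in the
    normalisation where zeta -> 0.5 as theta -> 0 (distinct pulsars)."""
    x = (1.0 - np.cos(theta)) / 2.0
    x = np.clip(x, 1e-300, None)  # avoid log(0) at theta = 0
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5

print(hellings_downs(np.pi))      # 0.25 for antipodal pulsars
print(hellings_downs(np.pi / 2))  # ~ -0.14: quadrupolar anticorrelation
```

The weak anticorrelation near $90^\circ$ separation is the quadrupolar fingerprint that distinguishes a gravitational wave background from clock or ephemeris errors.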
The correlations between pairs of pulsars, as a function of their angular separation, are expected to follow the prediction of Hellings \& Downs (1983). Our algorithm is still work in progress, but we have successfully applied our method to the International Pulsar Timing Array data challenge\footnote{\url{http://www.ipta4gw.org/?page_id=214}}. \subsection{Improving the solar system ephemeris} \begin{figure} \begin{center} \begin{minipage}{7cm} \includegraphics[width=6cm,angle=0]{ephemError.ps} \end{minipage} \begin{minipage}{7cm} \includegraphics[width=6cm,angle=-90]{dxdydz.ps} \end{minipage} \end{center} \caption{The left-hand panel shows the induced timing residuals due to an incorrect mass of Jupiter of $\Delta M_J = 10^{-7}M_\odot$. The right-hand panel displays the offset in the observatory-SSB position in X, Y and Z as a function of time.}\label{fg:dxdydz} \end{figure} Champion et al. (2010) searched for the signatures of incorrect mass estimates of the planetary systems in our solar system. This method relies on knowledge of the relative positions as a function of time of the planetary system, the Earth and the solar system barycentre (SSB). It is therefore not applicable to searching for unknown masses in the solar system. We have therefore updated the \textsc{tempo2} software to measure offsets in the estimate of the Earth's position with respect to the SSB using a PTA data set. The initial estimate of the Earth's position with respect to the SSB is obtained from a planetary ephemeris. Any error, or omission, in that ephemeris will therefore lead to an incorrect estimate of the Earth-SSB vector. Inspection of time series of the error in the three components of the Earth-SSB vector allows unknown masses to be identified. As an example we simulate data sets that include an error in the mass of the Jovian system of $\Delta M_J = 10^{-7}M_\odot$. This mass error could easily be measured using the Champion et al.
(2010) approach, but here we make no assumption about the nature of the error in the planetary ephemeris. We assume that five pulsars have been observed since 1990. The output of the \textsc{tempo2} global fit provides the offset of the Earth-SSB vector from the planetary ephemeris prediction (Figure~\ref{fg:dxdydz}). These offsets could subsequently be searched to identify the orbital period of the unknown mass and the orbital angle with respect to the ecliptic plane, thereby determining the position and orbit of the unknown object. This procedure can be generalised and used to determine the position of any telescope at any position in the solar system. We have recently used PPTA data to show how this method can be used to navigate a spacecraft through the solar system (Deng et al. submitted). \subsection{Pulsar-based time standards} In Hobbs et al. (2012b) we describe new updates to \textsc{tempo2} that allow the signal common to all pulsars to be identified given a PTA data set. This method was applied to the PPTA data sets and we recovered the known offsets between the world's best time standards: International Atomic Time and Terrestrial Time as realised by the Bureau International des Poids et Mesures (BIPM). One major issue with our method is that we assume that the noise in the data sets has the same statistical form throughout the observations. However, as described in Manchester et al. (2013), we are unable to correct our earliest data sets for dispersion measure variations. The timing residuals for most pulsars are therefore affected, for the earliest observations, by dispersion measure variations and, for the most recent observations, by other noise processes such as intrinsic pulsar timing noise. We are currently enhancing the Coles et al. (2011) Cholesky routines to account for non-stationary noise processes and will use the new routines to improve our pulsar-based time standard.
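The essence of the Cholesky approach of Coles et al. (2011) is to whiten the residuals and the fit's design matrix with a factorisation of the noise covariance before least-squares fitting. A toy sketch (the exponential covariance below is an illustrative assumption, not the PPTA noise model):

```python
import numpy as np

def cholesky_gls(residuals, cov, design):
    """Generalised least squares: whiten data and design matrix with
    the Cholesky factor of the noise covariance, then perform ordinary
    least squares on the whitened quantities."""
    L = np.linalg.cholesky(cov)
    rw = np.linalg.solve(L, residuals)   # whitened data
    Mw = np.linalg.solve(L, design)      # whitened design matrix
    coeffs, *_ = np.linalg.lstsq(Mw, rw, rcond=None)
    return coeffs

# Toy example: recover a slope of 2.0 buried in correlated (red) noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
cov = np.exp(-np.abs(t[:, None] - t[None, :]))  # illustrative covariance
noise = 0.1 * (np.linalg.cholesky(cov) @ rng.normal(size=50))
residuals = 2.0 * t + noise
design = np.column_stack([np.ones_like(t), t])
print(cholesky_gls(residuals, cov, design))  # slope estimate near 2.0
```

Because the fit sees statistically independent, unit-variance points after whitening, parameter estimates and their uncertainties are unbiased even in the presence of steep red noise, which is why the same machinery underlies both the timing fits and the cross-power spectra described above.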
\section{Outreach} Some of the PPTA observations are carried out by high school students as part of the PULSE@Parkes education program (Hobbs et al. 2009). This program provides high school students with two hours of observing per month. So far more than 1000 students from Australia, Japan, the USA, the Netherlands, England and Wales have been part of the program. \section{Conclusion} The Parkes Pulsar Timing Array is continuing to obtain high quality pulsar timing observations of $\sim$20 millisecond pulsars. These data sets are leading to exciting new research on topics ranging from studying the solar wind, to searching for bursts of gravitational waves, to navigating spacecraft through the solar system. The data sets are also being combined with observations from Europe and North America to form the International Pulsar Timing Array data sets. With further development of the hardware systems at Parkes it is likely that these observations will continue to contribute to timing array data sets into the Square Kilometre Array era. \section*{References}
\section{Introduction} For sixty years, Magnetic Confinement Fusion (MCF) has been one of the most important technological challenges for producing domestic energy. Indeed, this worldwide project involves physicists, engineers and mathematicians in order to understand and reproduce on Earth the fusion reaction occurring in the Sun. One of the most famous examples of this work programme is the ITER project located in Cadarache (France), which attempts to produce a fusion plasma in a tokamak reactor by confining it thanks to a strong external magnetic field. Besides the technological aspects of MCF, it has become necessary over the past thirty years to conduct a rigorous study of the behaviour of such a plasma, and this work takes the form of the derivation of mathematical models and of high precision numerical experiments. \\ \indent In the present paper, we focus on the Vlasov equation in the presence of an external magnetic field with an amplitude of the same order as $\epsilon^{-1} \gg 1$ and on its limit regime as $\epsilon \to 0$. Such an equation has been the main subject of many previous works: indeed, many results about the mathematical justifications of Guiding-Center and Finite Larmor Radius limit regimes have been established by Bostan \cite{Bostan_2007,Bostan_2010}, Fr\'enod \& Sonnendr\"ucker \cite{Two-scale_expansion,Homogenization_VP,Finite_Larmor_radius}, Fr\'enod \& Mouton \cite{Frenod-Mouton,PhD_Mouton}, Golse \& Saint-Raymond \cite{Golse_1,Golse_2} and Han-Kwan \cite{Han-Kwan_3,Han-Kwan_2,Han-Kwan}. Most of these results are based on the use of two-scale convergence and homogenization techniques (see Allaire \cite{Allaire} and Nguetseng \cite{NGuetseng}) or compactness methods. These mathematical studies made it possible to validate and reinforce the tokamak plasma models presented by Littlejohn, Lee \textit{et al.}, Dubin \textit{et al.} or Brizard \textit{et al.} (see \cite{Littlejohn}, \cite{Lee, Lee_2}, \cite{Dubin}, \cite{Brizard_PhD, Hahm-Brizard}).
\\ \indent The linear Vlasov equations we are focused on in the present paper are the following: \begin{equation} \label{Vlasov_GCeps_intro} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f_{\epsilon}+\mathbf{v}\cdot\nabla_{\mathbf{x}}f_{\epsilon} + \left(\mathbf{E}_{\epsilon} + \mathbf{v} \times \mathbf{B}_{\epsilon} + \cfrac{\mathbf{v} \times \bm{\beta}_{\epsilon}}{\epsilon} \right) \cdot \nabla_{\mathbf{v}}f_{\epsilon} = 0 \, , \\ f_{\epsilon}(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation} \begin{equation}\label{Vlasov_FLReps_intro} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f_{\epsilon} + \cfrac{\mathbf{v}_{\perp}}{\epsilon}\cdot\nabla_{\mathbf{x}_{\perp}}f_{\epsilon} +v_{||}\,{\partial}_{x_{||}}f_{\epsilon}+ \left(\mathbf{E}_{\epsilon} + \mathbf{v} \times \mathbf{B}_{\epsilon} + \cfrac{\mathbf{v} \times \bm{\mathcal{M}}}{\epsilon} \right) \cdot \nabla_{\mathbf{v}}f_{\epsilon} = 0 \, , \\ f_{\epsilon}(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation} \begin{equation}\label{Vlasov_axibeam_intro} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f_{\epsilon}(t,r,v_{r}) + \cfrac{v_{r}}{\epsilon}\,{\partial}_{r}f_{\epsilon}(t,r,v_{r}) + \left(E_{\epsilon}(t,r)-\cfrac{r}{\epsilon}\right)\,{\partial}_{v_{r}}f_{\epsilon}(t,r,v_{r}) = 0 \, , \\ f_{\epsilon}(t=0,r,v_{r}) = f^{0}(r,v_{r}) \, . \end{array} \right. \end{equation} \indent The first two equations arise in kinetic models for magnetic fusion plasmas.
In such a context, $f_{\epsilon} = f_{\epsilon}(t,\mathbf{x},\mathbf{v})$ is the distribution function that describes the evolution of the plasma in the phase space, $\mathbf{E}_{\epsilon} = \mathbf{E}_{\epsilon}(t,\mathbf{x})$ and $\mathbf{B}_{\epsilon}=\mathbf{B}_{\epsilon}(t,\mathbf{x})$ are the external electric and magnetic fields that are applied on the plasma, $\bm{\beta}_{\epsilon} = \bm{\beta}_{\epsilon}(t,\mathbf{x})$ is a given vector function assumed to oscillate in time with $\mathcal{O}(\epsilon^{-1})$ order frequency, and $\bm{\mathcal{M}}$ is a fixed unit vector in ${\mathbb R}^{3}$ which allows us to define, for any $\mathbf{v} \in {\mathbb R}^{3}$, $v_{||} = \bm{\mathcal{M}} \cdot \mathbf{v}$ and $\mathbf{v}_{\perp} = \mathbf{v}-v_{||}\,\bm{\mathcal{M}}$. Finally, $t$, $\mathbf{x}$ and $\mathbf{v}$ stand for the time, position and velocity variables. Both equations \eqref{Vlasov_GCeps_intro} and \eqref{Vlasov_FLReps_intro} can be derived from the collisionless Vlasov equation \begin{equation*}\label{Vlasov} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f + \mathbf{v} \cdot \nabla_{\mathbf{x}}f + \frac{q}{m}\left(\mathbf{E}+\mathbf{v}\times \mathbf{B}\right) \cdot \nabla_{\mathbf{v}} f = 0 \, , \\ f(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation*} where a rescaling is considered: $\epsilon$ stands for the ratio between the characteristic gyro-period of the particles and the characteristic duration of the experiment, as well as for the ratio between the electric force amplitude and the magnetic force amplitude.
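For the reader's convenience, this rescaling can be sketched as follows (the choice of characteristic scales below is one possible normalization; see \cite{Homogenization_VP} for the complete derivation). Denoting by $\bar{t}$, $\bar{x}$, $\bar{v}$ the characteristic time, length and velocity, with $\bar{x} = \bar{t}\,\bar{v}$, and by $\bar{E}$, $\bar{B}$ the characteristic field amplitudes, and writing $t = \bar{t}\,t'$, $\mathbf{x} = \bar{x}\,\mathbf{x}'$, $\mathbf{v} = \bar{v}\,\mathbf{v}'$, $\mathbf{E} = \bar{E}\,\mathbf{E}'$, $\mathbf{B} = \bar{B}\,\mathbf{B}'$, the dimensionless form of the Vlasov equation reads
\begin{equation*}
{\partial}_{t'}f' + \mathbf{v}'\cdot\nabla_{\mathbf{x}'}f'
+ \cfrac{q\bar{E}\,\bar{t}}{m\bar{v}}\,\mathbf{E}'\cdot\nabla_{\mathbf{v}'}f'
+ \cfrac{q\bar{B}\,\bar{t}}{m}\,\left(\mathbf{v}'\times\mathbf{B}'\right)\cdot\nabla_{\mathbf{v}'}f' = 0 \, ,
\end{equation*}
and the regime described above corresponds to
\begin{equation*}
\cfrac{q\bar{E}\,\bar{t}}{m\bar{v}} = \mathcal{O}(1) \, , \qquad
\cfrac{q\bar{B}\,\bar{t}}{m} = \cfrac{1}{\epsilon} \, ,
\end{equation*}
the second relation expressing that the characteristic gyro-period $m/(q\bar{B})$ is of order $\epsilon\,\bar{t}$; splitting the rescaled magnetic field into a strong part carried by $\bm{\beta}_{\epsilon}/\epsilon$ and an $\mathcal{O}(1)$ part $\mathbf{B}_{\epsilon}$ then leads to \eqref{Vlasov_GCeps_intro}.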
In order to differentiate the derivation of \eqref{Vlasov_GCeps_intro} from \eqref{Vlasov_FLReps_intro}, we assume that the characteristic Larmor radius is of the same order as $\epsilon$ compared with any characteristic length to obtain \eqref{Vlasov_GCeps_intro}, whereas we assume that it is of the same order as $\epsilon$ compared with the characteristic length in $\bm{\mathcal{M}}$-direction and of the same order as the characteristic length in the orthogonal plane to $\bm{\mathcal{M}}$ for obtaining \eqref{Vlasov_FLReps_intro}. The details of such derivations can be found in \cite{Homogenization_VP} for \eqref{Vlasov_GCeps_intro} and in \cite{Finite_Larmor_radius} for \eqref{Vlasov_FLReps_intro}. \\ \indent Equation \eqref{Vlasov_axibeam_intro} can be encountered in the context of an axisymmetric charged particle beam subjected to an external electric field that oscillates at high frequency. In this context, $f_{\epsilon} = f_{\epsilon}(t,r,v_{r})$ is the distribution function of the particles that are submitted to the focusing electric field $E_{\epsilon}(t,r)-\frac{r}{\epsilon}$; here $t$, $r$ and $v_{r}$ stand for the pseudo-time, radial position and radial velocity variables. Such an equation can be derived from the paraxial approximation of the Vlasov equation given by \begin{equation*}\label{Vlasov_paraxial} \left\{ \begin{array}{l} {\partial}_{z}f + \cfrac{\mathbf{v}}{v_{z}}\cdot \nabla_{\mathbf{x}}f + \cfrac{q}{\gamma_{z}m v_{z}}\, \left( \cfrac{\mathbf{E}}{\gamma_{z}^{2}} - H_{0}\mathbf{x} \right) \cdot \nabla_{\mathbf{v}}f = 0 \, , \\ f(\mathbf{x},z=0,\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right.
\end{equation*} where $(\mathbf{x},z) \in {\mathbb R}^{2}\times {\mathbb R}_{+}$ is the position variable, $\mathbf{v} \in {\mathbb R}^{2}$ is the velocity variable in the perpendicular plane to $z$-direction, $f = f(\mathbf{x},z,\mathbf{v})$ is the distribution function of the particles with constant longitudinal velocity $v_{z}$, $\gamma_{z}$ is the time dilatation coefficient associated to $v_{z}$, $\mathbf{E}$ is the self-consistent electric field, and $H_{0}$ is a positive constant tension. Such a Vlasov equation can be derived from the stationary Vlasov-Maxwell system (see \cite{Degond-Raviart,Filbet-Sonnen}). To obtain \eqref{Vlasov_axibeam_intro}, we assume that the beam is long and thin so that the ratio $\epsilon$ between the characteristic radius of the beam and its characteristic length in the propagation direction is small, we assume that the angular momentum is equal to zero at the beam source $z=0$, and we consider polar coordinates in $\mathbf{x}$ and $\mathbf{v}$. Details of this derivation can be found in \cite{PIC-two-scale, Mouton_2009}. \\ \indent The main goal of the present paper is to study the two-scale asymptotic behaviour of the distribution function $f_{\epsilon}$ when $\epsilon$ converges to 0 for each Vlasov equation \eqref{Vlasov_GCeps_intro}, \eqref{Vlasov_FLReps_intro} and \eqref{Vlasov_axibeam_intro}. Some papers already provide two-scale convergence results for each of these models. Indeed, in \cite{Homogenization_VP}, Fr\'enod \& Sonnendr\"ucker studied the two-scale convergence of the solution $f_{\epsilon}$ of \eqref{Vlasov_GCeps_intro} as $\epsilon \to 0$: they proved that the sequence $(f_{\epsilon})_{\epsilon\,>\,0}$ admits a 0-th order two-scale limit $F_{0}$ when $\epsilon$ converges to 0 in the case where $\mathbf{B}_{\epsilon} = 0$ and $\bm{\beta}_{\epsilon}$ is an $\epsilon$-independent uniform vector in space and time.
In \cite{Finite_Larmor_radius}, they established a similar 0-th order two-scale convergence result for the solution $f_{\epsilon}$ of the model \eqref{Vlasov_FLReps_intro} in the 4D+time case where the model does not depend on $x_{||}$ nor $v_{||}$ and where $\mathbf{B}_{\epsilon} = 0$. Furthermore, in \cite{Two-scale_expansion}, the authors established a $k$-th order two-scale convergence result for the solution $f_{\epsilon}$ of the 6D+time equation \eqref{Vlasov_FLReps_intro} with $k \in {\mathbb N}$ arbitrarily chosen: in this paper, the external electric field $\mathbf{E}_{\epsilon}$ is assumed to be independent of $\epsilon$ and $\mathbf{B}_{\epsilon} = 0$. The authors prove that $f_{\epsilon}$ two-scale converges at $k$-th order to a profile $F_{k}$ thanks to a recursive procedure on a generic singularly perturbed convection equation. Some two-scale convergence results have also been established for the solution of \eqref{Vlasov_axibeam_intro}. Indeed, in \cite{PIC-two-scale}, the authors established a 0-th order two-scale convergence by proving that $f_{\epsilon}$ two-scale converges to a profile $F_{0}$ as $\epsilon$ tends to 0. In \cite{Frenod-Gutnic-Hirstoaga}, this result is extended to the first order. Indeed, introducing $f_{\epsilon,1}$ defined as \begin{equation*} f_{\epsilon,1}(t,r,v_{r}) = \cfrac{1}{\epsilon}\,\left( f_{\epsilon}(t,r,v_{r})-F_{0}\left(t,\cfrac{t}{\epsilon},r,v_{r}\right)\right) \, , \end{equation*} the authors proved that the sequence $(f_{\epsilon,1})_{\epsilon\,>\,0}$ two-scale converges to a profile $F_{1}$ and provided a limit system satisfied by $F_{1}$ by assuming that $E_{\epsilon}(t,r) = E_{0}(t,\frac{t}{\epsilon},r) + \epsilon\,E_{1}(t,\frac{t}{\epsilon},r)$ with $\epsilon$-independent functions $E_{0}$ and $E_{1}$.
\\ \indent The aim of the present document is to generalize the two-scale convergence results on \eqref{Vlasov_GCeps_intro}-\eqref{Vlasov_FLReps_intro}-\eqref{Vlasov_axibeam_intro} presented in \cite{Frenod-Gutnic-Hirstoaga, Two-scale_expansion, PIC-two-scale, Homogenization_VP, Finite_Larmor_radius}. More precisely, we aim to generalize the two-scale convergence results on \eqref{Vlasov_GCeps_intro} to the $k$-th order and with a non-zero $\mathbf{B}_{\epsilon}$ and a varying $\bm{\beta}_{\epsilon}$. Our goal is also to generalize the results on \eqref{Vlasov_FLReps_intro} established in \cite{Two-scale_expansion} to the case with non-zero $\epsilon$-dependent external fields $\mathbf{E}_{\epsilon}$ and $\mathbf{B}_{\epsilon}$. Finally, we aim to extend the results from \cite{Frenod-Gutnic-Hirstoaga} to $k$-th order two-scale convergence. For this, we consider the following generic singularly perturbed convection equation that includes the linear Vlasov equations \eqref{Vlasov_GCeps_intro}, \eqref{Vlasov_FLReps_intro} and \eqref{Vlasov_axibeam_intro}: \begin{equation}\label{convection} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}u_{\epsilon}(t,\mathbf{x}) + \mathbf{A}_{\epsilon}(t,\mathbf{x})\cdot \nabla_{\mathbf{x}}u_{\epsilon}(t,\mathbf{x}) + \cfrac{1}{\epsilon} \, \mathbf{L}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\cdot \nabla_{\mathbf{x}}u_{\epsilon}(t,\mathbf{x}) = 0 \, , \\ u_{\epsilon}(t=0,\mathbf{x}) = u^{0}(\mathbf{x}) \, , \end{array} \right. \end{equation} where $t \in [0,T]$ and $\mathbf{x} \in {\mathbb R}^{n}$ ($n \in {\mathbb N}^{*}$) are the variables ($T > 0$ is fixed), $\mathbf{A}_{\epsilon}:[0,T] \times {\mathbb R}^{n} \to {\mathbb R}^{n}$ and $\mathbf{L}:[0,T]\times{\mathbb R}\times{\mathbb R}^{n} \to {\mathbb R}^{n}$ are given vector functions and the solution quantity is $u_{\epsilon}:[0,T]\times{\mathbb R}^{n} \to {\mathbb R}$.
We fix $\theta > 0$ and we assume that $\mathbf{A}_{\epsilon}$ and $\mathbf{L}$ are divergence-free in $\mathbf{x}$-direction and that $\mathbf{L}$ is $\theta$-periodic in $\tau$-direction, \textit{i.e.} \begin{align*} \forall\,(t,\tau,\mathbf{x}) \in [0,T]\times{\mathbb R}\times{\mathbb R}^{n} \, , \qquad &\nabla_{\mathbf{x}} \cdot \mathbf{A}_{\epsilon}(t,\mathbf{x}) = \nabla_{\mathbf{x}} \cdot \mathbf{L}(t,\tau,\mathbf{x}) = 0 \, , \\ \forall\,(t,\tau,\mathbf{x}) \in [0,T]\times{\mathbb R}\times{\mathbb R}^{n} \, , \qquad &\mathbf{L}(t,\tau+\theta,\mathbf{x}) = \mathbf{L}(t,\tau,\mathbf{x}) \, . \end{align*} We also assume that, for any fixed $\epsilon > 0$, the initial data $u^{0}$ and the vector functions $\mathbf{A}_{\epsilon}$ and $\mathbf{L}$ satisfy the minimal required smoothness properties for ensuring the existence and uniqueness of the solution $u_{\epsilon}$ of \eqref{convection}. This generic model is close to the convection equation studied in \cite{Two-scale_expansion}. Indeed, in that paper, the convection term $\mathbf{A}_{\epsilon}$ does not depend on $\epsilon$ and $\mathbf{L}$ only depends on $t$ and $\mathbf{x}$. \\ \indent Thus, the present paper is organized as follows: in Section 2, we present a two-scale convergence theorem for the generic convection equation \eqref{convection}, then we use it to extend in a straightforward way the existing two-scale convergence results for the solution of each Vlasov equation \eqref{Vlasov_GCeps_intro}, \eqref{Vlasov_FLReps_intro} and \eqref{Vlasov_axibeam_intro}. In the following section, we describe the proof of the two-scale convergence theorem for \eqref{convection}. In the last section, we discuss some perspectives for future work. \section{Two-scale convergence results} In this section, we present the main results of the present paper.
After recalling some definitions and notations that will be used along the paper, we first present a 0-th order two-scale convergence result for the solution $u_{\epsilon}$ of the generic convection equation \eqref{convection}. Secondly, we detail the required hypotheses for reaching the $k$-th order two-scale convergence of $u_{\epsilon}$, then the result itself. Finally, we adapt these results for \eqref{convection} to each linear Vlasov equation \eqref{Vlasov_GCeps_intro}, \eqref{Vlasov_FLReps_intro} and \eqref{Vlasov_axibeam_intro}. \subsection{Notations and definitions} \indent Before going further and presenting the main results, we introduce some notations and definitions. Considering a fixed $\theta > 0$, we define for any $p \in[1,+\infty]$ the space $L_{\#}^{p}(0,\theta)$ as the space of functions $f : {\mathbb R} \to {\mathbb R}$ that are $\theta$-periodic and such that $f_{|_{[0,\theta]}} \in L^{p}(0,\theta)$. In the same spirit, ${\mathcal C}_{\#}(0,\theta)$ stands for the subspace of ${\mathcal C}({\mathbb R})$ consisting of $\theta$-periodic functions, equipped with the norm induced by ${\mathcal C}({\mathbb R})$. With these notations in hand, we recall the definition of two-scale convergence as it has been introduced by Allaire \cite{Allaire} and Nguetseng \cite{NGuetseng} and a useful two-scale convergence criterion: \\ \begin{definition} Let $X$ be a separable Banach space, $X'$ its topological dual space, and $\langle \cdot , \cdot \rangle_{X,X'}$ the duality bracket associated to $X$ and $X'$.
Considering fixed $q \in [1,+\infty[$, $T > 0$, and $q'$ such that $\frac{1}{q}+\frac{1}{q'} = 1$, a sequence $(u_{\epsilon})_{\epsilon\,>\,0} \subset L^{q'}(0,T;X')$ two-scale converges to a function $U \in L^{q'}\left(0,T;L_{\#}^{q'}(0,\theta;X')\right)$ if, for any test function $\psi \in L^{q}\left(0,T;{\mathcal C}_{\#}\left(0,\theta;X\right)\right)$, we have \begin{equation*} \lim_{\epsilon\,\to\,0} \int_{0}^{T} \left\langle u_{\epsilon}(t), \psi\left(t,\cfrac{t}{\epsilon}\right)\right\rangle_{X,X'} \, dt = \int_{0}^{T} \int_{0}^{\theta} \left\langle U(t,\tau), \psi\left(t,\tau\right)\right\rangle_{X,X'} \, d\tau\,dt \, . \end{equation*} \end{definition} \begin{theorem}[Allaire \cite{Allaire}]\label{TSCV_Allaire} If a sequence $(u_{\epsilon})_{\epsilon\,>\,0} \subset L^{q'}(0,T;X')$ is bounded independently of $\epsilon$, there exists a profile $U \in L^{q'}\left(0,T;L_{\#}^{q'}(0,\theta;X')\right)$ such that, up to the extraction of a subsequence \begin{equation*} u_{\epsilon} \, \longrightarrow \, U \qquad \textit{two-scale in $L^{q'}\left(0,T;L_{\#}^{q'}(0,\theta;X')\right)$.} \end{equation*} Furthermore, the so-called two-scale limit $U$ of $u_{\epsilon}$ is closely linked to the weak-* limit of $(u_{\epsilon})_{\epsilon\,>\,0}$ in $L^{q'}(0,T;X')$. Indeed this function denoted by $u$ satisfies \begin{equation*} u(t) = \cfrac{1}{\theta} \, \int_{0}^{\theta} U(t,\tau) \, d\tau \, . \end{equation*} \end{theorem} \subsection{The singularly perturbed convection equation} For any $(t,\sigma,\mathbf{x}) \in [0,T]\times{\mathbb R}\times{\mathbb R}^{n}$ fixed, we consider the following differential system \begin{equation*} \left\{ \begin{array}{rcl} {\partial}_{\tau}\mathbf{X}(\tau) &=& \mathbf{L}\left(t,\tau,\mathbf{X}(\tau)\right) \, , \\ \mathbf{X}(\sigma) &=& \mathbf{x} \, , \end{array} \right. \end{equation*} where the unknown is the vector function $\tau \mapsto \mathbf{X}(\tau)$. 
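In the axisymmetric beam case \eqref{Vlasov_axibeam_intro}, where $\mathbf{x} = (r,v_{r})$ and the stiff convection field is $\mathbf{L}(t,\tau,(r,v_{r})) = (v_{r},-r)$, this characteristic flow is explicit: it is the rotation
\begin{equation*}
\mathbf{X}(\tau;(r,v_{r}),t;\sigma) = \begin{pmatrix} r\,\cos(\tau-\sigma) + v_{r}\,\sin(\tau-\sigma) \\ -r\,\sin(\tau-\sigma) + v_{r}\,\cos(\tau-\sigma) \end{pmatrix} \, ,
\end{equation*}
which is $2\pi$-periodic in $\tau$, so that $\theta = 2\pi$ in this case.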
We assume from now on that this system admits a unique solution in the class of $\theta$-periodic functions in $\tau$-direction and we denote this solution by $\tau \mapsto \mathbf{X}(\tau;\mathbf{x},t;\sigma)$. \\ \indent In the following lines, we aim to write the following expansion of $u_{\epsilon}$ \begin{equation} \label{expansion} u_{\epsilon}(t,\mathbf{x}) = \sum_{k\,=\,0}^{+\infty} \epsilon^{k}\,U_{k}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, , \end{equation} and to characterize successively the terms $U_{k}$ of this expansion. \subsubsection{0-th order convergence} \label{TSCV_U0} The first main result is the two-scale convergence of $(u_{\epsilon})_{\epsilon\,>\,0}$ to a profile $U_{0} = U_{0}(t,\tau,\mathbf{x})$. For this purpose, we consider some hypotheses derived from those required for proving Theorem 1.5 of \cite{Finite_Larmor_radius}: \begin{hypothesis}\label{hyp_U0} Fixing $p \in \, ]1,+\infty[$, $q > 1$ and $q'$ such that $\frac{1}{p}+\frac{1}{q'} < 1$ and $\frac{1}{q'} = \max(\frac{1}{q}-\frac{1}{n},0)$, we assume that \begin{itemize} \item $u^{0} \in L^{p}({\mathbb R}^{n})$, \item $(\mathbf{A}_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $\left(L^{\infty}\left(0,T;W^{1,q}(K)\right)\right)^{n}$ for any compact subset $K \subset {\mathbb R}^{n}$, \item $\mathbf{L}$ is smooth enough in order to ensure that, for any compact subset $K \subset {\mathbb R}^{n}$, \begin{itemize} \item $\mathbf{L}$ is in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{n}$, \item $(t,\tau,\mathbf{x}) \mapsto {\partial}_{t}\mathbf{X}(\tau;\mathbf{x},t;0)$ is in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{n}$, \item $(t,\tau,\mathbf{x}) \mapsto \nabla_{\mathbf{x}}\mathbf{X}(\tau;\mathbf{x},t;0)$ is in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{\infty}(K)\right)\right)\right)^{n^{2}}$.
\end{itemize} \end{itemize} \end{hypothesis} As an immediate consequence, we can write, up to a subsequence and for any compact $K \subset {\mathbb R}^{n}$, \begin{equation*} \mathbf{A}_{\epsilon}\, \longrightarrow \, \bm{\mathcal{A}}_{0} = \bm{\mathcal{A}}_{0}(t,\tau,\mathbf{x}) \qquad \textnormal{two-scale in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{n}$.} \end{equation*} Assuming that the profile $\bm{\mathcal{A}}_{0}$ is somehow known, we introduce $\bm{\alpha}_{0}$ and $\tilde{\mathbf{a}}_{0}$ as \begin{equation*} \bm{\alpha}_{0}(t,\tau,\mathbf{x}) = \left((\nabla_{\mathbf{x}}\mathbf{X})(\tau;\mathbf{x},t;0)\right)^{-1} \,\left( \bm{\mathcal{A}}_{0}\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) - ({\partial}_{t}\mathbf{X})(\tau;\mathbf{x},t;0)\right) \, , \end{equation*} and \begin{equation*} \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) = \cfrac{1}{\theta}\int_{0}^{\theta} \bm{\alpha}_{0}(t,\tau,\mathbf{x}) \, d\tau \, . \end{equation*} With these hypotheses and definitions, we can characterize the 0-th order term $U_{0}$ in the expansion \eqref{expansion}: \begin{theorem}\label{def_U0} Assume that Hypotheses \ref{hyp_U0} are satisfied and that the sequence $(u_{\epsilon})_{\epsilon\,>\,0}$ is bounded in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ independently of $\epsilon$.
Then, up to a subsequence, $u_{\epsilon}$ two-scale converges to the profile $U_{0} = U_{0}(t,\tau,\mathbf{x})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$ defined by \begin{equation}\label{link_U0V0} U_{0}(t,\tau,\mathbf{x}) = V_{0}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, , \end{equation} where $V_{0} = V_{0}(t,\mathbf{x}) \in L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ satisfies \begin{equation}\label{eq_V0} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}V_{0}(t,\mathbf{x}) + \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}V_{0}(t,\mathbf{x}) = 0 \, , \\ V_{0}(t=0,\mathbf{x}) = u^{0}(\mathbf{x}) \, . \end{array} \right. \end{equation} \end{theorem} \begin{theorem} \label{eq_U0} $U_{0}$ satisfies the following equation: \begin{equation*} {\partial}_{t}U_{0}(t,\tau,\mathbf{x}) + \mathbf{a}_{0}(t,\tau,\mathbf{x}) \cdot \nabla_{\mathbf{x}}U_{0}(t,\tau,\mathbf{x}) = 0 \, , \end{equation*} with $\mathbf{a}_{0}$ defined by \begin{equation*} \begin{split} \mathbf{a}_{0}(t,\tau,\mathbf{x}) = \left((\nabla_{\mathbf{x}}\mathbf{X})(-\tau;\mathbf{x},t;0)\right)^{-1}\left(\tilde{\mathbf{a}}_{0}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) - ({\partial}_{t}\mathbf{X})(-\tau;\mathbf{x},t;0) \right) \, . \end{split} \end{equation*} \end{theorem} \subsubsection{Two-scale convergence at $k$-th order} \label{TSCV_Uk} We fix $k \in {\mathbb N}^{*}$ and we aim to identify the $k$-th term of the expansion \eqref{expansion}. 
Before stating the result, we need additional assumptions besides Hypotheses \ref{hyp_U0}: \begin{hypothesis} \label{Hilbert_Aeps} Defining the sequence $(\mathbf{A}_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \left\{ \begin{array}{ll} \mathbf{A}_{\epsilon,i}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(\mathbf{A}_{\epsilon,i-1}(t,\mathbf{x}) - \bm{\mathcal{A}}_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right) \, , & \forall\,i=1,\dots,k \, , \\ \mathbf{A}_{\epsilon,0}(t,\mathbf{x}) = \mathbf{A}_{\epsilon}(t,\mathbf{x}) \, , \end{array} \right. \end{equation*} we assume that, for all $i=0,\dots,k$, $(\mathbf{A}_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converges to the profile $\bm{\mathcal{A}}_{i} = \bm{\mathcal{A}}_{i}(t,\tau,\mathbf{x})$ in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{n}$ for any compact subset $K \subset {\mathbb R}^{n}$. \end{hypothesis} \begin{hypothesis} \label{Hilbert_ueps_kth} Defining the sequence $(u_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \label{def_uepsi} \left\{ \begin{array}{ll} u_{\epsilon,i}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(u_{\epsilon,i-1}(t,\mathbf{x}) - U_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right) \, , & \forall\, i=1,\dots,k-1 \, , \\ u_{\epsilon,0}(t,\mathbf{x}) = u_{\epsilon}(t,\mathbf{x}) \, , \end{array} \right. \end{equation*} we assume that, for all $i=0,\dots,k-1$ and up to a subsequence, $(u_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converges to the profile $U_{i} = U_{i}(t,\tau,\mathbf{x})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$. 
\end{hypothesis} Under these hypotheses, we define $\bm{\alpha}_{i}$, $\tilde{\mathbf{a}}_{i}$ and $\mathbf{a}_{i}$ as \begin{equation*} \bm{\alpha}_{i}(t,\tau,\mathbf{x}) = \left((\nabla_{\mathbf{x}}\mathbf{X})(\tau;\mathbf{x},t;0)\right)^{-1}\,\bm{\mathcal{A}}_{i}\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) \, , \end{equation*} \begin{equation*} \tilde{\mathbf{a}}_{i}(t,\mathbf{x}) = \cfrac{1}{\theta}\, \int_{0}^{\theta}\bm{\alpha}_{i}(t,\tau,\mathbf{x}) \, d\tau \, , \end{equation*} \begin{equation*} \begin{split} \mathbf{a}_{i}(t,\tau,\mathbf{x}) &= \cfrac{1}{\theta}\,\int_{0}^{\theta} \left((\nabla_{\mathbf{x}}\mathbf{X})(\sigma-\tau;\mathbf{x},t;0)\right)^{-1}\,\bm{\mathcal{A}}_{i}\left(t,\sigma,\mathbf{X}(\sigma-\tau;\mathbf{x},t;0)\right) \, d\sigma \\ &= \cfrac{1}{\theta}\, \int_{0}^{\theta} \left((\nabla_{\mathbf{x}}\mathbf{X})(-\tau;\mathbf{x},t;0)\right)^{-1}\,\bm{\alpha}_{i}\left(t,\sigma,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, d\sigma \, , \end{split} \end{equation*} for all $i=1,\dots,k-1$. 
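The second equality in the definition of $\mathbf{a}_{i}$ rests on the group property of the flow $\tau \mapsto \mathbf{X}(\tau;\cdot,t;0)$. As a numerical sanity check (a sketch with hypothetical data: the autonomous rotation flow $\mathbf{X}(\tau;\mathbf{x},t;0) = R(\tau)\,\mathbf{x}$ in ${\mathbb R}^{2}$, for which $\nabla_{\mathbf{x}}\mathbf{X}(\tau) = R(\tau)$ and ${\partial}_{t}\mathbf{X} = 0$, together with an arbitrary profile $\bm{\mathcal{A}}_{i}$), one can verify that the two integrands, and hence the two averages, coincide:

```python
import numpy as np

def R(s):
    # flow gradient for L(x) = (x2, -x1): X(s; x, t; 0) = R(s) x, 2*pi-periodic
    return np.array([[np.cos(s), np.sin(s)], [-np.sin(s), np.cos(s)]])

def A(t, s, x):
    # an arbitrary smooth profile A_i (hypothetical choice for the test)
    return np.array([np.sin(s) + x[0]**2, np.cos(2*s) * x[1]])

t, tau = 0.3, 1.1
x = np.array([0.5, -0.2])
sig = np.linspace(0.0, 2*np.pi, 400, endpoint=False)

# first form of a_i: integrand (grad X(sigma - tau))^{-1} A(t, sigma, X(sigma - tau; x))
f1 = np.array([np.linalg.inv(R(s - tau)) @ A(t, s, R(s - tau) @ x) for s in sig])

# second form: (grad X(-tau))^{-1} alpha_i(t, sigma, X(-tau; x)),
# with alpha_i(t, sigma, y) = (grad X(sigma))^{-1} A(t, sigma, X(sigma; y))
y = R(-tau) @ x
f2 = np.array([np.linalg.inv(R(-tau)) @ (np.linalg.inv(R(s)) @ A(t, s, R(s) @ y))
               for s in sig])

assert np.allclose(f1, f2)                             # the integrands agree pointwise
assert np.allclose(f1.mean(axis=0), f2.mean(axis=0))   # hence the averages a_i agree
```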
We also define recursively the functions $W_{1},\dots,W_{k}$ and $R_{1},\dots,R_{k-1}$ as \begin{equation} \label{def_Wi} W_{i}(t,\tau,\mathbf{x}) = \int_{0}^{\tau} \left(\sum_{j\,=\,0}^{i-1}(\mathbf{a}_{j}-\bm{\mathcal{A}}_{j})\cdot\nabla_{\mathbf{x}}U_{i-1-j} - R_{i-1}\right)\left(t,\sigma,\mathbf{X}(\sigma;\mathbf{x},t;0)\right) \, d\sigma \, , \end{equation} \begin{equation} \label{def_Ri} \begin{split} R_{i}(t,\tau,\mathbf{x}) &= ({\partial}_{t}W_{i})\left(t,\tau,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) -\cfrac{1}{\theta}\int_{0}^{\theta}({\partial}_{t}W_{i})\left(t,\sigma,\mathbf{X}(-\tau;\mathbf{x},t;0)\right)\, d\sigma \\ &\qquad + \sum_{j\,=\,0}^{i}\left(\tilde{\mathbf{a}}_{j}\cdot\nabla_{\mathbf{x}}W_{i-j}\right)\left(t,\tau,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \\ &\qquad - \cfrac{1}{\theta}\sum_{j\,=\,0}^{i} \int_{0}^{\theta} \left(\bm{\alpha}_{j}\cdot\nabla_{\mathbf{x}}W_{i-j}\right) \left(t,\sigma,\mathbf{X}(-\tau;\mathbf{x},t;0)\right)\,d\sigma \, , \end{split} \end{equation} with the convention $W_{0} = R_{0} = 0$. With these notations and assumptions in hand, we can now state a two-scale convergence result at $k$-th order by proceeding recursively: \begin{theorem} \label{CV_Uk} We define $s' > 0$ such that $\frac{1}{s'} = 1-\frac{1}{q}-\frac{1}{r}$ with $r \in [1,\frac{nq}{n-q}[$ and we define the functional space $X^{s'}(K) = \left(W^{1,q}(K)\right)' \cup \left(W^{1,s'}(K)\right)'$. We assume that Hypotheses \ref{hyp_U0}-\ref{Hilbert_Aeps}-\ref{Hilbert_ueps_kth} are satisfied and that, for any compact $K \subset {\mathbb R}^{n}$, \begin{itemize} \item $W_{k}$ is in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}(K)\right)\right)$, \item ${\partial}_{t}W_{k}$ is in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$, \item $R_{k-1}$ is in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$.
\end{itemize} Then, if the sequence $(u_{\epsilon,k})_{\epsilon\,>\,0}$ defined by \begin{equation*} u_{\epsilon,k}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(u_{\epsilon,k-1}(t,\mathbf{x})-U_{k-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\right)\,, \end{equation*} is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$, $u_{\epsilon,k}$ two-scale converges to the profile $U_{k}$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$ characterized as follows: \begin{equation} U_{k}(t,\tau,\mathbf{x}) = V_{k}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) + W_{k}\left(t,\tau,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, , \end{equation} where $W_{k} = W_{k}(t,\tau,\mathbf{x})$ is defined in \eqref{def_Wi} and where $V_{k} = V_{k}(t,\mathbf{x})$ satisfies \begin{equation} \label{eq_Vk} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}V_{k}(t,\mathbf{x}) &+ \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}V_{k}(t,\mathbf{x}) \\ &= -\cfrac{1}{\theta}\,\int_{0}^{\theta} ({\partial}_{t}W_{k}+\bm{\alpha}_{0}\cdot\nabla_{\mathbf{x}}W_{k})(t,\sigma,\mathbf{x}) \, d\sigma \\ &\qquad - \sum_{i\,=\,1}^{k}\Bigg[ \cfrac{1}{\theta}\,\int_{0}^{\theta} \bm{\alpha}_{i}(t,\sigma,\mathbf{x}) \cdot \left[ \nabla_{\mathbf{x}}V_{k-i}(t,\mathbf{x})+\nabla_{\mathbf{x}}W_{k-i}(t,\sigma,\mathbf{x})\right] \, d\sigma \Bigg] \, , \end{split} \\ V_{k}(t=0,\mathbf{x}) = 0 \, . \end{array} \right. \end{equation} \end{theorem} \begin{theorem} \label{cor_eq_Uk} $U_{k}$ satisfies the following equation: \begin{equation*} \begin{split} {\partial}_{t}U_{k}(t,\tau,\mathbf{x}) + \mathbf{a}_{0}(t,\tau,\mathbf{x}) \cdot & \nabla_{\mathbf{x}}U_{k}(t,\tau,\mathbf{x}) = R_{k}(t,\tau,\mathbf{x}) - \sum_{i\,=\,1}^{k}\mathbf{a}_{i}(t,\tau,\mathbf{x})\cdot\nabla_{\mathbf{x}}U_{k-i}(t,\tau,\mathbf{x}) \, , \end{split} \end{equation*} where $R_{k}$ is obtained from the definition \eqref{def_Ri} extended to the case $i=k$.
\end{theorem} We can remark that the statements of Theorems \ref{CV_Uk} and \ref{cor_eq_Uk} can be viewed as improvements of the results of \cite{Two-scale_expansion}. Indeed, assuming that $\mathbf{A}_{\epsilon}$ does not depend on $\epsilon$ and that $\mathbf{L}$ only depends on $t$ and $\mathbf{x}$ leads to \begin{equation*} \mathbf{A}_{\epsilon,i}(t,\mathbf{x}) = \left\{ \begin{array}{ll} \mathbf{A}(t,\mathbf{x}) \, , & \textnormal{if $i = 0$,} \\ 0 \, , & \textnormal{else,} \end{array} \right. \end{equation*} so $\bm{\alpha}_{i} = \mathbf{a}_{i} = \tilde{\mathbf{a}}_{i} = 0$ for any $i > 0$. Consequently, the expressions of $R_{i}$ and $W_{i}$ reduce to \begin{equation*} \begin{split} R_{i}(t,\tau,\mathbf{x}) &= {\partial}_{t}U_{i}(t,\tau,\mathbf{x}) + \mathbf{a}_{0}(t,\tau,\mathbf{x}) \cdot \nabla_{\mathbf{x}}U_{i}(t,\tau,\mathbf{x}) \, , \end{split} \end{equation*} \begin{equation*} W_{i}(t,\tau,\mathbf{x}) = \int_{0}^{\tau} \left({\partial}_{t}U_{i-1}+\mathbf{A}\cdot\nabla_{\mathbf{x}}U_{i-1}\right)\left(t,\sigma,\mathbf{X}(\sigma;\mathbf{x},t;0)\right) \, d\sigma \, . \end{equation*} Finally, the transport equation for $V_{i}$ reduces to \begin{equation*} {\partial}_{t}V_{i}(t,\mathbf{x}) + \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}V_{i}(t,\mathbf{x}) = -\cfrac{1}{\theta}\,\int_{0}^{\theta} ({\partial}_{t}W_{i}+\bm{\alpha}_{0}\cdot\nabla_{\mathbf{x}}W_{i})(t,\sigma,\mathbf{x}) \, d\sigma \, , \end{equation*} for any $i \geq 0$.
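The 0-th order characterization can also be checked symbolically in a simple scalar case (a hypothetical example with $n = 1$, $\mathbf{L}(t,\tau,x) = \cos\tau$, so that $\mathbf{X}(\tau;x,t;0) = x + \sin\tau$, $\nabla_{x}\mathbf{X} = 1$, ${\partial}_{t}\mathbf{X} = 0$, and $\bm{\mathcal{A}}_{0}(t,\tau,x) = x$): then $\tilde{\mathbf{a}}_{0}(t,x) = x$, $V_{0}(t,x) = u^{0}(x\,e^{-t})$ solves \eqref{eq_V0}, $\mathbf{a}_{0}(t,\tau,x) = x - \sin\tau$, and $U_{0}$ given by \eqref{link_U0V0} solves the transport equation of Theorem \ref{eq_U0}:

```python
import sympy as sp

t, tau, x = sp.symbols('t tau x')
u0 = sp.Function('u0')                    # arbitrary initial datum

# V0 solves d_t V0 + x d_x V0 = 0 with V0(0, x) = u0(x)  (equation (eq_V0))
V0 = u0(x * sp.exp(-t))
assert sp.simplify(sp.diff(V0, t) + x * sp.diff(V0, x)) == 0

# a_0(t, tau, x) = a~_0(t, X(-tau; x, t; 0)) = x - sin(tau)
a0 = x - sp.sin(tau)
# U0(t, tau, x) = V0(t, X(-tau; x, t; 0))  (relation (link_U0V0))
U0 = V0.subs(x, x - sp.sin(tau))
# U0 solves the transport equation of Theorem (eq_U0)
assert sp.simplify(sp.diff(U0, t) + a0 * sp.diff(U0, x)) == 0
```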
\subsection{Application to the Guiding-Center regime} We now apply the results above to the case of the following linear Vlasov equation: \begin{equation*} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f_{\epsilon}+\mathbf{v}\cdot\nabla_{\mathbf{x}}f_{\epsilon} + \left(\mathbf{E}_{\epsilon} + \mathbf{v} \times \mathbf{B}_{\epsilon} + \cfrac{\mathbf{v} \times \bm{\beta}_{\epsilon}}{\epsilon} \right) \cdot \nabla_{\mathbf{v}}f_{\epsilon} = 0 \, , \\ f_{\epsilon}(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, . \end{array} \right. \end{equation*} In this equation, $t \in [0,T]$ is the dimensionless time variable, $\mathbf{x} \in {\mathbb R}^{3}$ is the dimensionless space variable, $\mathbf{v} \in {\mathbb R}^{3}$ is the dimensionless velocity variable, $f_{\epsilon} = f_{\epsilon}(t,\mathbf{x},\mathbf{v}) \in {\mathbb R}$ is the unknown distribution function, $\mathbf{E}_{\epsilon} = \mathbf{E}_{\epsilon}(t,\mathbf{x}) \in {\mathbb R}^{3}$ and $\mathbf{B}_{\epsilon} = \mathbf{B}_{\epsilon}(t,\mathbf{x}) \in {\mathbb R}^{3}$ are the given electric and magnetic fields, and $f^{0} = f^{0}(\mathbf{x},\mathbf{v}) \in {\mathbb R}$ is the given initial distribution. Finally, we assume from now on that the vector function $\bm{\beta}_{\epsilon}$ is of the form \begin{equation*} \bm{\beta}_{\epsilon}(t,\mathbf{x}) = \bm{\beta}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, , \end{equation*} where $\bm{\beta} : [0,T] \times {\mathbb R} \times {\mathbb R}^{3} \to {\mathbb R}^{3}$ is a given function assumed to be $\theta$-periodic and continuous in $\tau$ with $\theta > 0$ fixed. \\ \indent Before going further, we introduce additional objects linked to $\bm{\beta}$. First, let $\tilde{\bm{\beta}}$ be defined such that ${\partial}_{\tau}\tilde{\bm{\beta}} = \bm{\beta}$ and $\tilde{\bm{\beta}}(t,0,\mathbf{x}) = 0$ for any $t,\mathbf{x}$.
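For the constant field $\bm{\beta} = \mathbf{e}_{z}$ (the case revisited at the end of this subsection), one gets $\tilde{\bm{\beta}}(t,\tau,\mathbf{x}) = \tau\,\mathbf{e}_{z}$, and the linear map $\mathbf{v} \mapsto \mathbf{v} \times \tilde{\bm{\beta}}$ is represented by $\tau B$ with a constant skew-symmetric matrix $B$, whose exponential is a rotation about the $z$-axis. A quick numerical check of these two facts (a sketch using scipy's matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

tau = 0.8
# B v = v x e_z  =>  B = [[0, 1, 0], [-1, 0, 0], [0, 0, 0]]
B = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
v = np.array([0.3, -1.2, 0.7])
assert np.allclose(B @ v, np.cross(v, np.array([0.0, 0.0, 1.0])))

# exp(tau * B) is the rotation matrix about the z-axis by the angle -tau
R = expm(tau * B)
R_expected = np.array([[np.cos(tau),  np.sin(tau), 0.0],
                       [-np.sin(tau), np.cos(tau), 0.0],
                       [0.0,          0.0,         1.0]])
assert np.allclose(R, R_expected)
# one-parameter group property of the exponential (B commutes with itself)
assert np.allclose(R, expm(B) @ expm((tau - 1.0) * B))
```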
Second, we define the matrix $\tilde{\mathfrak{B}} = \tilde{\mathfrak{B}}(t,\tau,\mathbf{x})$ such that $\tilde{\mathfrak{B}}(t,\tau,\mathbf{x})\, \mathbf{v} = \mathbf{v} \times \tilde{\bm{\beta}}(t,\tau,\mathbf{x})$ for any $t,\tau,\mathbf{x},\mathbf{v}$. Finally, we define $\mathcal{R} = \mathcal{R}(t,\tau,\mathbf{x}) = \exp\left(\tilde{\mathfrak{B}}(t,\tau,\mathbf{x})\right)$. \\ \indent We fix $q > 3/2$ and we consider the following hypotheses: \begin{hypothesis}\label{hyp_beta_GC} We assume that the function $\mathcal{R}$ satisfies \begin{itemize} \item $\mathcal{R}$ is $\theta$-periodic in the $\tau$-direction, \item $\mathcal{R} \in \left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,\infty}(K)\right)\right)\right)^{3\times 3}$ for any compact subset $K \subset {\mathbb R}^{3}$, \item ${\partial}_{t}\mathcal{R} \in \left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{3\times 3}$ for any compact subset $K \subset {\mathbb R}^{3}$. \end{itemize} \end{hypothesis} Consequently, adding sufficient hypotheses on $f^{0}$, $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$ allows us to establish a 0-th order two-scale convergence result: \begin{theorem} \label{TSCV_F0_GC} We assume that Hypotheses \ref{hyp_beta_GC} are satisfied, that $f^{0} \in L^{2}({\mathbb R}^{6})$, and that both sequences $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$ are bounded in $\left(L^{\infty}\left(0,T;W^{1,q}(K)\right)\right)^{3}$ independently of $\epsilon$, for any compact subset $K \subset {\mathbb R}^{3}$.
We denote by $\bm{\mathcal{E}}_{0} = \bm{\mathcal{E}}_{0}(t,\tau,\mathbf{x})$ and $\bm{\mathcal{B}}_{0} = \bm{\mathcal{B}}_{0}(t,\tau,\mathbf{x})$ the respective two-scale limits of $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$ in the space $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{3}$ and we define $\bm{\mathcal{L}}_{0}$ as \begin{equation*} \bm{\mathcal{L}}_{0}(t,\tau,\mathbf{x},\mathbf{v}) = \bm{\mathcal{E}}_{0}(t,\tau,\mathbf{x}) + \mathbf{v} \times \bm{\mathcal{B}}_{0}(t,\tau,\mathbf{x}) \, . \end{equation*} Then, $(f_{\epsilon})_{\epsilon\,>\,0}$ is bounded in $L^{\infty}\left(0,T;L^{2}({\mathbb R}^{6})\right)$ independently of $\epsilon$ and, up to the extraction of a subsequence, two-scale converges to the profile $F_{0} = F_{0}(t,\tau,\mathbf{x},\mathbf{v})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{2}({\mathbb R}^{6})\right)\right)$. Furthermore, $F_{0}$ is characterized by \begin{equation} F_{0}(t,\tau,\mathbf{x},\mathbf{v}) = G_{0}\left(t,\mathbf{x},\mathcal{R}(t,-\tau,\mathbf{x})\,\mathbf{v}\right) \, , \end{equation} with $G_{0} = G_{0}(t,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{6})\right)$ the solution of \begin{equation}\label{def_F0_GC} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}G_{0}(t,\mathbf{x},\mathbf{v}) &+ \left(\mathcal{J}_{1}(t,\mathbf{x})\,\mathbf{v}\right)\cdot\nabla_{\mathbf{x}}G_{0}(t,\mathbf{x},\mathbf{v}) + \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{v}}G_{0}(t,\mathbf{x},\mathbf{v}) = 0 \, , \end{split} \\ G_{0}(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right.
\end{equation} where $\mathcal{J}_{1}(t,\mathbf{x})$ and $\mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v})$ are defined by \begin{align*} \mathcal{J}_{1}(t,\mathbf{x}) &= \cfrac{1}{\theta}\,\int_{0}^{\theta} \mathcal{R}(t,\tau,\mathbf{x}) \, d\tau \, , \\ \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) &= \cfrac{1}{\theta}\int_{0}^{\theta} J_{2}(\bm{\mathcal{L}}_{0})(t,\tau,\mathbf{x},\mathbf{v}) \, d\tau \, , \end{align*} with $J_{2}$ defined as \begin{equation*} \begin{split} \hspace{-0.5em}J_{2}(\bm{\mathcal{L}}_{0})(t,\tau,\mathbf{x},\mathbf{v}) = \mathcal{R}(t,\tau,\mathbf{x})^{-1} \Big[ -{\partial}_{t}\mathcal{R}(t,\tau,\mathbf{x}) \mathbf{v} &- \big(\nabla_{\mathbf{x}}\mathcal{R}(t,\tau,\mathbf{x})\,\mathbf{v}\big) \mathcal{R}(t,\tau,\mathbf{x})\,\mathbf{v} \\ &+ \bm{\mathcal{L}}_{0}\left(t,\tau,\mathbf{x},\mathcal{R}(t,\tau,\mathbf{x})\mathbf{v}\right) \Big] \, . \end{split} \end{equation*} \end{theorem} We can remark here that the results of Theorem \ref{TSCV_F0_GC} are consistent with the Guiding-Center model presented in \cite{Homogenization_VP}. Indeed, taking $\bm{\beta} = \mathbf{e}_{z}$ leads to the matrix \begin{equation*} \mathcal{R}(t,\tau,\mathbf{x}) = \left( \begin{array}{ccc} \cos\tau & \sin\tau & 0 \\ -\sin\tau & \cos\tau & 0 \\ 0 & 0 & 1 \end{array} \right) \, , \end{equation*} which is $2\pi$-periodic in $\tau$. Consequently, assuming that $\mathbf{E}_{\epsilon}$ and $\mathbf{B}_{\epsilon}$ converge strongly in $\left(L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{3})\right)\right)^{3}$ to $\mathbf{E}$ and $\mathbf{B}$ respectively, we have \begin{equation*} \mathcal{J}_{1}(t,\mathbf{x})\mathbf{v} = v_{z}\,\mathbf{e}_{z} \, , \end{equation*} and \begin{equation*} \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) = E_{z}(t,\mathbf{x})\,\mathbf{e}_{z} + \mathbf{v} \times \left(B_{z}(t,\mathbf{x})\,\mathbf{e}_{z}\right) \, .
\end{equation*} \indent In order to characterize the higher order terms, it is necessary to add several assumptions. We fix an integer $k > 0$ and we consider the following hypotheses for $(f_{\epsilon})_{\epsilon\,>\,0}$, $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$: \begin{hypothesis} \label{hyp_EiBi_GC} Defining recursively the sequences $(\mathbf{E}_{\epsilon,i})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \begin{split} &\left\{ \begin{array}{ll} \mathbf{E}_{\epsilon,i}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(\mathbf{E}_{\epsilon,i-1}(t,\mathbf{x}) - \bm{\mathcal{E}}_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right) \, , & \forall\,i=1,\dots,k \, , \\ \mathbf{E}_{\epsilon,0}(t,\mathbf{x}) = \mathbf{E}_{\epsilon}(t,\mathbf{x}) \, , \end{array} \right. \\ &\left\{ \begin{array}{ll} \mathbf{B}_{\epsilon,i}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(\mathbf{B}_{\epsilon,i-1}(t,\mathbf{x}) - \bm{\mathcal{B}}_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right) \, , & \forall\,i=1,\dots,k \, , \\ \mathbf{B}_{\epsilon,0}(t,\mathbf{x}) = \mathbf{B}_{\epsilon}(t,\mathbf{x}) \, , \end{array} \right. \end{split} \end{equation*} we assume that, for all $i=0,\dots,k$, $(\mathbf{E}_{\epsilon,i})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converge to $\bm{\mathcal{E}}_{i} = \bm{\mathcal{E}}_{i}(t,\tau,\mathbf{x})$ and $\bm{\mathcal{B}}_{i} = \bm{\mathcal{B}}_{i}(t,\tau,\mathbf{x})$ respectively in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;W^{1,q}(K)\right)\right)\right)^{3}$ for any compact subset $K \subset {\mathbb R}^{3}$. 
\end{hypothesis} \begin{hypothesis} \label{hyp_fi_GC} Defining recursively the sequence $(f_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \left\{ \begin{array}{ll} f_{\epsilon,i}(t,\mathbf{x},\mathbf{v}) = \cfrac{1}{\epsilon}\,\left(f_{\epsilon,i-1}(t,\mathbf{x},\mathbf{v}) - F_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x},\mathbf{v}\right)\right) \, , & \forall\,i=1,\dots,k-1\, , \\ f_{\epsilon,0}(t,\mathbf{x},\mathbf{v}) = f_{\epsilon}(t,\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation*} we assume that, up to a subsequence, the sequence $(f_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converges to the profile $F_{i} = F_{i}(t,\tau,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{2}({\mathbb R}^{6})\right)\right)$ for all $i=0,\dots,k-1$. \end{hypothesis} We now define $\bm{\mathcal{L}}_{i}$ for $i=0,\dots,k$ such that \begin{equation*} \bm{\mathcal{L}}_{i}(t,\tau,\mathbf{x},\mathbf{v}) = \bm{\mathcal{E}}_{i}(t,\tau,\mathbf{x}) + \mathbf{v} \times \bm{\mathcal{B}}_{i}(t,\tau,\mathbf{x}) \, . 
\end{equation*} We introduce $W_{0},\dots,W_{k}$ such that $W_{0} = 0$ and, for any $i=1,\dots,k$, \begin{equation} \label{def_Wk_GC} \begin{split} W_{i}(t,\tau,\mathbf{x},\mathbf{v}) &= \int_{0}^{\tau} \left( \begin{array}{c} \left(\mathcal{J}_{1}(t,\mathbf{x})-\mathcal{R}(t,\sigma,\mathbf{x})\right)\mathbf{v} \\ \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) - J_{2}(\bm{\mathcal{L}}_{0})(t,\sigma,\mathbf{x},\mathbf{v}) \end{array} \right) \\ &\qquad \qquad \qquad \cdot \left( \begin{array}{c} \nabla_{\mathbf{x}}G_{i-1}(t,\mathbf{x},\mathbf{v}) + \nabla_{\mathbf{x}}W_{i-1}(t,\sigma,\mathbf{x},\mathbf{v}) \\ \nabla_{\mathbf{v}}G_{i-1}(t,\mathbf{x},\mathbf{v}) + \nabla_{\mathbf{v}}W_{i-1}(t,\sigma,\mathbf{x},\mathbf{v}) \end{array} \right) \, d\sigma \\ &\quad + \sum_{j\,=\,1}^{i-1} \int_{0}^{\tau} \Big[ \left[ \mathcal{J}_{3}(\bm{\mathcal{L}}_{j})(t,\mathbf{x},\mathbf{v}) - \mathcal{R}(t,\sigma,\mathbf{x})^{-1} \bm{\mathcal{L}}_{j}\left(t,\sigma,\mathbf{x},\mathcal{R}(t,\sigma,\mathbf{x})\mathbf{v}\right) \right] \\ &\qquad \qquad \qquad \cdot \left[ \nabla_{\mathbf{v}}G_{i-j-1}(t,\mathbf{x},\mathbf{v}) + \nabla_{\mathbf{v}}W_{i-j-1}(t,\sigma,\mathbf{x},\mathbf{v})\right] \Big] \, d\sigma \\ &\quad - \int_{0}^{\tau} \Bigg[ {\partial}_{t}W_{i-1}(t,\sigma,\mathbf{x},\mathbf{v}) - \cfrac{1}{\theta}\int_{0}^{\theta} {\partial}_{t}W_{i-1}(t,\zeta,\mathbf{x},\mathbf{v})\, d\zeta \Bigg]\, d\sigma \, , \end{split} \end{equation} with $\mathcal{J}_{3}(\bm{\mathcal{L}}_{j})$ defined by \begin{equation*} \mathcal{J}_{3}(\bm{\mathcal{L}}_{j})(t,\mathbf{x},\mathbf{v}) = \cfrac{1}{\theta}\,\int_{0}^{\theta} \mathcal{R}(t,\tau,\mathbf{x})^{-1} \, \bm{\mathcal{L}}_{j}(t,\tau,\mathcal{R}(t,\tau,\mathbf{x})\,\mathbf{v}) \, d\tau \, , \end{equation*} and where $G_{0},\dots,G_{k-1}$ are linked with $F_{0},\dots,F_{k-1}$ and $W_{0},\dots,W_{k-1}$ thanks to the relation \begin{equation*} F_{i}(t,\tau,\mathbf{x},\mathbf{v}) = 
G_{i}\left(t,\mathbf{x},\mathcal{R}(t,-\tau,\mathbf{x})\,\mathbf{v}\right) + W_{i}\left(t,\tau,\mathbf{x},\mathcal{R}(t,-\tau,\mathbf{x})\,\mathbf{v}\right) \, . \end{equation*} With these notations, we can establish a two-scale convergence result at the $k$-th order: \begin{theorem} We assume that the hypotheses of Theorem \ref{TSCV_F0_GC} as well as Hypotheses \ref{hyp_EiBi_GC} and \ref{hyp_fi_GC} are satisfied. We introduce $R_{k-1}$ as follows: \begin{equation} \begin{split} R_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) &= {\partial}_{t}F_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) + \left(\mathcal{J}_{1}(t,\mathbf{x})\,\mathbf{v}\right) \cdot \nabla_{\mathbf{x}}F_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) \\ &\quad + \left[\cfrac{1}{\theta}\int_{0}^{\theta} \mathcal{R}(t,\sigma,\mathbf{x})^{-1}\big[ -{\partial}_{t}\mathcal{R}(t,\sigma,\mathbf{x})\mathbf{v} - (\nabla_{\mathbf{x}}\mathcal{R}(t,\sigma,\mathbf{x})\mathbf{v})\mathcal{R}(t,\sigma,\mathbf{x})\,\mathbf{v} \big] \, d\sigma\right] \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \cdot \nabla_{\mathbf{v}}F_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) \\ &\quad+ \sum_{i=0}^{k-1} \left[\cfrac{1}{\theta} \int_{0}^{\theta}\mathcal{R}(t,\sigma,\mathbf{x})^{-1}\bm{\mathcal{L}}_{i}\left(t,\sigma+\tau,\mathbf{x},\mathcal{R}(t,\sigma,\mathbf{x})\mathbf{v}\right) \, d\sigma \right] \cdot \nabla_{\mathbf{v}}F_{k-1-i}(t,\tau,\mathbf{x},\mathbf{v}) \, . \end{split} \end{equation} In addition, taking $s'$ such that $\frac{1}{s'} = 1 - \frac{1}{q} - \frac{1}{r}$ with $r \in [1,\frac{6q}{6-q}[$ and defining $X^{s'}(K) = \left(W^{1,q}(K)\right)' \cup \left(W^{1,s'}(K)\right)'$, we assume that, for any compact subset $K \subset {\mathbb R}^{6}$, \begin{itemize} \item $W_{k} \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{2}(K)\right)\right)$, \item ${\partial}_{t}W_{k},R_{k-1} \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$.
\end{itemize} Then, if the sequence $(f_{\epsilon,k})_{\epsilon\,>\,0}$ defined by \begin{equation*} f_{\epsilon,k}(t,\mathbf{x},\mathbf{v}) = \cfrac{1}{\epsilon}\,\left(f_{\epsilon,k-1}(t,\mathbf{x},\mathbf{v}) - F_{k-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x},\mathbf{v}\right)\right)\, , \end{equation*} is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{6})\right)$, it two-scale converges to the profile $F_{k} = F_{k}(t,\tau,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{2}({\mathbb R}^{6})\right)\right)$ up to the extraction of a subsequence. Furthermore, $F_{k}$ is fully characterized by \begin{equation} F_{k}(t,\tau,\mathbf{x},\mathbf{v}) = G_{k}\left(t,\mathbf{x},\mathcal{R}(t,-\tau,\mathbf{x})\,\mathbf{v}\right) + W_{k}\left(t,\tau,\mathbf{x},\mathcal{R}(t,-\tau,\mathbf{x})\,\mathbf{v}\right) \, , \end{equation} where $W_{k}$ is defined in \eqref{def_Wk_GC} and where $G_{k} = G_{k}(t,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{6})\right)$ is the solution of \begin{equation} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}G_{k}&(t,\mathbf{x},\mathbf{v}) + \left(\mathcal{J}_{1}(t,\mathbf{x})\mathbf{v}\right) \cdot \nabla_{\mathbf{x}}G_{k}(t,\mathbf{x},\mathbf{v}) + \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{v}} G_{k}(t,\mathbf{x},\mathbf{v}) \\ &= -\cfrac{1}{\theta}\int_{0}^{\theta} \left[{\partial}_{t}W_{k}(t,\tau,\mathbf{x},\mathbf{v}) + \left(\mathcal{R}(t,\tau,\mathbf{x})\mathbf{v}\right)\cdot \nabla_{\mathbf{x}}W_{k}(t,\tau,\mathbf{x},\mathbf{v}) \right] \, d\tau \\ &\quad -\cfrac{1}{\theta}\int_{0}^{\theta} J_{2}(\bm{\mathcal{L}}_{0})(t,\tau,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{v}}W_{k}(t,\tau,\mathbf{x},\mathbf{v})\, d\tau \\ &\quad - \cfrac{1}{\theta}\sum_{i=0}^{k}\int_{0}^{\theta} \left[ 
\mathcal{R}(t,\tau,\mathbf{x})^{-1}\bm{\mathcal{L}}_{i}\left(t,\tau,\mathbf{x},\mathcal{R}(t,\tau,\mathbf{x})\mathbf{v}\right) \right] \cdot \nabla_{\mathbf{v}}W_{k-i}(t,\tau,\mathbf{x},\mathbf{v}) \, d\tau \\ &\quad - \sum_{i=1}^{k} \mathcal{J}_{3}(\bm{\mathcal{L}}_{i})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{v}}G_{k-i}(t,\mathbf{x},\mathbf{v}) \, , \end{split} \\ G_{k}(t=0,\mathbf{x},\mathbf{v}) = 0 \, . \end{array} \right. \end{equation} \end{theorem} \subsection{Finite Larmor Radius regime} We now focus on the following linear equation: \begin{equation*} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f_{\epsilon} + \cfrac{\mathbf{v}_{\perp}}{\epsilon}\cdot\nabla_{\mathbf{x}_{\perp}}f_{\epsilon} + v_{||}\,{\partial}_{x_{||}}f_{\epsilon} + \left(\mathbf{E}_{\epsilon} + \mathbf{v} \times \mathbf{B}_{\epsilon} + \cfrac{\mathbf{v} \times \bm{\mathcal{M}}}{\epsilon} \right) \cdot \nabla_{\mathbf{v}}f_{\epsilon} = 0 \, , \\ f_{\epsilon}(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation*} in which $(\mathbf{x},\mathbf{v}) \in {\mathbb R}^{3} \times {\mathbb R}^{3}$, $t \in [0,T]$, $f_{\epsilon} = f_{\epsilon}(t,\mathbf{x},\mathbf{v}) \in {\mathbb R}$ is the unknown distribution function, $\mathbf{E}_{\epsilon} = \mathbf{E}_{\epsilon}(t,\mathbf{x}) \in {\mathbb R}^{3}$ is the external electric field, $f^{0} = f^{0}(\mathbf{x},\mathbf{v})$ is the initial distribution function, and $\bm{\mathcal{M}} = \mathbf{e}_{z} \in {\mathbb R}^{3}$ and $\mathbf{B}_{\epsilon}=\mathbf{B}_{\epsilon}(t,\mathbf{x}) \in {\mathbb R}^{3}$ constitute the external magnetic field.
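The singular terms $\frac{\mathbf{v}_{\perp}}{\epsilon}\cdot\nabla_{\mathbf{x}_{\perp}} + \frac{\mathbf{v}\times\bm{\mathcal{M}}}{\epsilon}\cdot\nabla_{\mathbf{v}}$ generate the fast characteristic flow $(\mathbf{x},\mathbf{v}) \mapsto \left(\mathbf{x}+\mathcal{R}_{1}(\tau)\,\mathbf{v},\,\mathcal{R}_{2}(\tau)\,\mathbf{v}\right)$, with the matrices $\mathcal{R}_{1}$, $\mathcal{R}_{2}$ written explicitly in Theorem \ref{TSCV0_FLR} below. This can be checked numerically (a sketch, integrating $\mathbf{x}' = \mathbf{v}_{\perp}$, $\mathbf{v}' = \mathbf{v}\times\mathbf{e}_{z}$ with RK4):

```python
import numpy as np

def rhs(y):
    # fast characteristics: x' = v_perp, v' = v x e_z
    x, v = y[:3], y[3:]
    return np.concatenate(([v[0], v[1], 0.0], np.cross(v, [0.0, 0.0, 1.0])))

def rk4(y, tau, n=4000):
    # classical RK4 integration of y' = rhs(y) over [0, tau]
    h = tau / n
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + h/2*k1); k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return y

tau = 1.3
R1 = np.array([[np.sin(tau), 1-np.cos(tau), 0], [np.cos(tau)-1, np.sin(tau), 0], [0, 0, 0]])
R2 = np.array([[np.cos(tau), np.sin(tau), 0], [-np.sin(tau), np.cos(tau), 0], [0, 0, 1]])

x0 = np.array([0.1, -0.4, 0.9]); v0 = np.array([1.0, 0.5, -0.7])
y = rk4(np.concatenate((x0, v0)), tau)
assert np.allclose(y[:3], x0 + R1 @ v0)   # x(tau) = x0 + R1(tau) v0
assert np.allclose(y[3:], R2 @ v0)        # v(tau) = R2(tau) v0
```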
\\ \indent Thanks to well-chosen hypotheses on $f^{0}$ and the sequences $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$, it is possible to establish a 0-th order two-scale convergence result: \begin{theorem}\label{TSCV0_FLR} We assume that $f^{0} \in L^{2}({\mathbb R}^{6})$ and that $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$ are bounded independently of $\epsilon$ in $\left(L^{\infty}\left(0,T;W^{1,q}(K)\right) \right)^{3}$ for $q > 3/2$ and for any compact subset $K \subset {\mathbb R}^{3}$. \\ We denote by $\bm{\mathcal{E}}_{0} = \bm{\mathcal{E}}_{0}(t,\tau,\mathbf{x})$ and $\bm{\mathcal{B}}_{0} = \bm{\mathcal{B}}_{0}(t,\tau,\mathbf{x})$ the respective two-scale limits of $(\mathbf{E}_{\epsilon})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon})_{\epsilon\,>\,0}$ in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;W^{1,q}(K)\right)\right)\right)^{3}$ and we introduce the vector function $\bm{\mathcal{L}}_{0}$ defined by \begin{equation*} \bm{\mathcal{L}}_{0}(t,\tau,\mathbf{x},\mathbf{v}) = \bm{\mathcal{E}}_{0}(t,\tau,\mathbf{x}) + \mathbf{v} \times \bm{\mathcal{B}}_{0}(t,\tau,\mathbf{x}) \, .
\end{equation*} Up to a subsequence, $f_{\epsilon}$ two-scale converges to the profile $F_{0} = F_{0}(t,\tau,\mathbf{x},\mathbf{v})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}({\mathbb R}^{6})\right)\right)$ and $F_{0}$ is fully characterized by \begin{equation} F_{0}(t,\tau,\mathbf{x},\mathbf{v}) = G_{0}\left(t,\mathbf{x}+\mathcal{R}_{1}(-\tau)\,\mathbf{v}, \mathcal{R}_{2}(-\tau)\,\mathbf{v}\right) \, , \end{equation} where $\mathcal{R}_{1}$, $\mathcal{R}_{2}$ and $G_{0} = G_{0}(t,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{6})\right)$ satisfy \begin{equation*} \label{def_matR} \mathcal{R}_{1}(\tau) = \left( \begin{array}{ccc} \sin\tau & 1-\cos\tau & 0 \\ \cos\tau-1 & \sin\tau & 0 \\ 0 & 0 & 0 \end{array} \right) \, , \quad \mathcal{R}_{2}(\tau) = \left( \begin{array}{ccc} \cos\tau & \sin\tau & 0 \\ -\sin\tau & \cos\tau & 0 \\ 0 & 0 & 1 \end{array} \right) \, , \end{equation*} \begin{equation}\label{FLR_0th} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}G_{0}(t,\mathbf{x},\mathbf{v}) + v_{||}\,{\partial}_{x_{||}}G_{0}(t,\mathbf{x},\mathbf{v}) &+ \mathcal{J}_{1}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{x}}G_{0}(t,\mathbf{x},\mathbf{v}) \\ &+ \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{v}}G_{0}(t,\mathbf{x},\mathbf{v}) = 0 \, , \end{split} \\ G_{0}(t=0,\mathbf{x},\mathbf{v}) = f^{0}(\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation} with $\mathcal{J}_{1}(\bm{\mathcal{L}}_{0})$ and $\mathcal{J}_{2}(\bm{\mathcal{L}}_{0})$ defined by \begin{equation*} \mathcal{J}_{i}(\bm{\mathcal{L}}_{0}) = \cfrac{1}{2\pi}\,\int_{0}^{2\pi}\mathcal{R}_{i}(-\tau) \,\bm{\mathcal{L}}_{0}\left(t,\tau,\mathbf{x}+\mathcal{R}_{1}(\tau)\,\mathbf{v}, \mathcal{R}_{2}(\tau)\,\mathbf{v}\right) d\tau \, . 
\end{equation*} \end{theorem} To obtain higher-order two-scale convergence terms, we first fix $k \in {\mathbb N}^{*}$ and assume that the electric and magnetic fields satisfy the following hypotheses: \begin{hypothesis} \label{hyp_EiBi_FLR} Defining recursively the sequences $(\mathbf{E}_{\epsilon,i})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \begin{split} &\left\{ \begin{array}{ll} \mathbf{E}_{\epsilon,i}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(\mathbf{E}_{\epsilon,i-1}(t,\mathbf{x}) - \bm{\mathcal{E}}_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right) \, , & \forall\,i=1,\dots,k \, , \\ \mathbf{E}_{\epsilon,0}(t,\mathbf{x}) = \mathbf{E}_{\epsilon}(t,\mathbf{x}) \, , \end{array} \right. \\ &\left\{ \begin{array}{ll} \mathbf{B}_{\epsilon,i}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,\left(\mathbf{B}_{\epsilon,i-1}(t,\mathbf{x}) - \bm{\mathcal{B}}_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right) \, , & \forall\,i=1,\dots,k \, , \\ \mathbf{B}_{\epsilon,0}(t,\mathbf{x}) = \mathbf{B}_{\epsilon}(t,\mathbf{x}) \, , \end{array} \right. \end{split} \end{equation*} we assume that, for all $i=0,\dots,k$ and up to the extraction of a subsequence, $(\mathbf{E}_{\epsilon,i})_{\epsilon\,>\,0}$ and $(\mathbf{B}_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converge to the profiles $\bm{\mathcal{E}}_{i} = \bm{\mathcal{E}}_{i}(t,\tau,\mathbf{x})$ and $\bm{\mathcal{B}}_{i} = \bm{\mathcal{B}}_{i}(t,\tau,\mathbf{x})$ respectively in $\left(L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;W^{1,q}(K)\right)\right)\right)^{3}$ for any compact subset $K \subset {\mathbb R}^{3}$.
\end{hypothesis} \begin{hypothesis} \label{hyp_fi_FLR} Defining recursively the sequence $(f_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \left\{ \begin{array}{ll} f_{\epsilon,i}(t,\mathbf{x},\mathbf{v}) = \cfrac{1}{\epsilon}\,\left(f_{\epsilon,i-1}(t,\mathbf{x},\mathbf{v}) - F_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x},\mathbf{v}\right)\right) \, , & \forall\,i=1,\dots,k-1\, , \\ f_{\epsilon,0}(t,\mathbf{x},\mathbf{v}) = f_{\epsilon}(t,\mathbf{x},\mathbf{v}) \, , \end{array} \right. \end{equation*} we assume that, up to a subsequence, the sequence $(f_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converges to the profile $F_{i} = F_{i}(t,\tau,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}({\mathbb R}^{6})\right)\right)$ for all $i=0,\dots,k-1$. \end{hypothesis} Hence, defining $\bm{\mathcal{L}}_{i}$ as \begin{equation*} \bm{\mathcal{L}}_{i}(t,\tau,\mathbf{x},\mathbf{v}) = \bm{\mathcal{E}}_{i}(t,\tau, \mathbf{x}) + \mathbf{v} \times \bm{\mathcal{B}}_{i}(t,\tau,\mathbf{x}) \, , \end{equation*} for all $i=0,\dots,k$, we define recursively the functions $W_{0},\dots,W_{k}$ such that $W_{0} = 0$ and, for any $i > 0$, \begin{equation} \label{def_Wi_FLR} \begin{split} W_{i}(t,\tau,\mathbf{x},\mathbf{v}) &= \sum_{j\,=\,0}^{i-1}\int_{0}^{\tau} \left( \begin{array}{c} \mathcal{J}_{1}(\bm{\mathcal{L}}_{j})(t,\mathbf{x},\mathbf{v})-\mathcal{R}_{1}(-\sigma)\,\bm{\mathcal{L}}_{j}\left(t,\sigma,\mathbf{x}+\mathcal{R}_{1}(\sigma)\,\mathbf{v}, \mathcal{R}_{2}(\sigma)\,\mathbf{v} \right) \\ \mathcal{J}_{2}(\bm{\mathcal{L}}_{j})(t,\mathbf{x},\mathbf{v})-\mathcal{R}_{2}(-\sigma)\,\bm{\mathcal{L}}_{j}\left(t,\sigma,\mathbf{x}+\mathcal{R}_{1}(\sigma)\,\mathbf{v}, \mathcal{R}_{2}(\sigma)\,\mathbf{v} \right) \end{array} \right) \\ &\qquad \qquad \qquad \qquad \cdot \left( \begin{array}{c} \nabla_{\mathbf{x}}G_{i-1-j}(t,\mathbf{x},\mathbf{v}) + \nabla_{\mathbf{x}}W_{i-1-j}(t,\sigma,\mathbf{x},\mathbf{v}) \\ 
\nabla_{\mathbf{v}}G_{i-1-j}(t,\mathbf{x},\mathbf{v}) + \nabla_{\mathbf{v}}W_{i-1-j}(t,\sigma,\mathbf{x},\mathbf{v}) \end{array} \right) \, d\sigma \\ &\quad - \int_{0}^{\tau} \left[ {\partial}_{t}W_{i-1}(t,\sigma,\mathbf{x},\mathbf{v}) - \cfrac{1}{2\pi}\,\int_{0}^{2\pi}{\partial}_{t}W_{i-1}(t,\zeta,\mathbf{x},\mathbf{v}) \, d\zeta \right] \, d\sigma \end{split} \end{equation} where, for $i = 0,\dots,k-1$, $G_{i}$ is defined on $[0,T]\times{\mathbb R}^{6}$ thanks to the relation \begin{equation*} \begin{split} F_{i}(t,\tau,\mathbf{x},\mathbf{v}) &= G_{i}\left(t,\mathbf{x}+\mathcal{R}_{1}(-\tau)\,\mathbf{v}, \mathcal{R}_{2}(-\tau)\,\mathbf{v} \right) + W_{i}\left(t,\tau,\mathbf{x}+\mathcal{R}_{1}(-\tau)\,\mathbf{v}, \mathcal{R}_{2}(-\tau)\,\mathbf{v} \right) \, . \end{split} \end{equation*} Hence we have the following result for obtaining the $k$-th order term $F_{k}$: \begin{theorem}\label{TSCVk_FLR} We assume that the hypotheses of Theorem \ref{TSCV0_FLR} and Hypotheses \ref{hyp_EiBi_FLR} and \ref{hyp_fi_FLR} are satisfied for a fixed $k \in {\mathbb N}^{*}$, and we introduce the function $R_{k-1}$ defined by \begin{equation} \begin{split} R_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) &= {\partial}_{t}F_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) + v_{||}\,{\partial}_{x_{||}}F_{k-1}(t,\tau,\mathbf{x},\mathbf{v}) \\ &\quad + \cfrac{1}{2\pi} \sum_{i=0}^{k-1} \Bigg[ \int_{0}^{2\pi} \left( \begin{array}{c} \mathcal{R}_{1}(-\sigma)\bm{\mathcal{L}}_{i}\left(t,\sigma+\tau,\mathbf{x}+\mathcal{R}_{1}(\sigma)\mathbf{v}, \mathcal{R}_{2}(\sigma)\mathbf{v}\right) \\ \mathcal{R}_{2}(-\sigma)\bm{\mathcal{L}}_{i}\left(t,\sigma+\tau,\mathbf{x}+\mathcal{R}_{1}(\sigma)\mathbf{v}, \mathcal{R}_{2}(\sigma)\mathbf{v}\right) \end{array} \right) d\sigma \\ &\qquad \qquad \qquad \qquad \qquad \cdot \left( \begin{array}{c} \nabla_{\mathbf{x}}F_{k-1-i}(t,\tau,\mathbf{x},\mathbf{v}) \\ \nabla_{\mathbf{v}}F_{k-1-i}(t,\tau,\mathbf{x},\mathbf{v}) \end{array} \right) \Bigg] \, . 
\end{split} \end{equation} In addition, taking $s'$ such that $\frac{1}{s'} = 1 - \frac{1}{q} - \frac{1}{r}$ with $r \in [1,\frac{6q}{6-q}[$ and defining $X^{s'}(K) = \left(W^{1,q}(K)\right)' \cup \left(W^{1,s'}(K)\right)'$, we assume that, for any compact subset $K \subset {\mathbb R}^{6}$, \begin{itemize} \item $W_{k} \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}(K)\right)\right)$, \item ${\partial}_{t}W_{k},R_{k-1} \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;X^{s'}(K)\right)\right)$. \end{itemize} Then, if the sequence $(f_{\epsilon,k})_{\epsilon\,>\,0}$ defined by \begin{equation*} f_{\epsilon,k}(t,\mathbf{x},\mathbf{v}) = \cfrac{1}{\epsilon}\,\left(f_{\epsilon,k-1}(t,\mathbf{x},\mathbf{v}) - F_{k-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x},\mathbf{v}\right)\right)\, , \end{equation*} is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{6})\right)$, it two-scale converges to the profile $F_{k} = F_{k}(t,\tau,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}({\mathbb R}^{6})\right)\right)$ up to the extraction of a subsequence.
Furthermore, $F_{k}$ is fully characterized by \begin{equation} \begin{split} F_{k}(t,\tau,\mathbf{x},\mathbf{v}) &= G_{k}\left(t,\mathbf{x}+\mathcal{R}_{1}(-\tau)\,\mathbf{v}, \mathcal{R}_{2}(-\tau)\,\mathbf{v} \right) + W_{k}\left(t,\tau,\mathbf{x}+\mathcal{R}_{1}(-\tau)\,\mathbf{v}, \mathcal{R}_{2}(-\tau)\,\mathbf{v} \right) \, , \end{split} \end{equation} where $W_{k}$ is defined in \eqref{def_Wi_FLR} and where $G_{k} = G_{k}(t,\mathbf{x},\mathbf{v}) \in L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{6})\right)$ is the solution of \begin{equation} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}&G_{k}(t,\mathbf{x},\mathbf{v}) + v_{||}\,{\partial}_{x_{||}}G_{k}(t,\mathbf{x},\mathbf{v}) \\ &\qquad \qquad + \mathcal{J}_{1}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{x}}G_{k}(t,\mathbf{x},\mathbf{v}) + \mathcal{J}_{2}(\bm{\mathcal{L}}_{0})(t,\mathbf{x},\mathbf{v}) \cdot \nabla_{\mathbf{v}}G_{k}(t,\mathbf{x},\mathbf{v}) \\ &= -\cfrac{1}{2\pi}\,\int_{0}^{2\pi}\left[ {\partial}_{t}W_{k}(t,\tau,\mathbf{x},\mathbf{v}) + v_{||}\,{\partial}_{x_{||}}W_{k}(t,\tau,\mathbf{x},\mathbf{v}) \right] \, d\tau\\ &\quad - \cfrac{1}{2\pi}\sum_{i\,=\,0}^{k}\int_{0}^{2\pi} \Big[ \left( \begin{array}{c} \mathcal{R}_{1}(-\tau) \,\bm{\mathcal{L}}_{i}\left(t,\tau,\mathbf{x}+\mathcal{R}_{1}(\tau)\,\mathbf{v}, \mathcal{R}_{2}(\tau)\,\mathbf{v}\right) \\ \mathcal{R}_{2}(-\tau) \,\bm{\mathcal{L}}_{i}\left(t,\tau,\mathbf{x}+\mathcal{R}_{1}(\tau)\,\mathbf{v}, \mathcal{R}_{2}(\tau)\,\mathbf{v}\right) \end{array} \right) \\ &\qquad \qquad \qquad \qquad \cdot \left( \begin{array}{c} \nabla_{\mathbf{x}}W_{k-i}(t,\tau,\mathbf{x},\mathbf{v}) \\ \nabla_{\mathbf{v}}W_{k-i}(t,\tau,\mathbf{x},\mathbf{v}) \end{array} \right) \Big] \, d\tau \\ &\quad - \sum_{i\,=\,1}^{k} \left( \begin{array}{c} \mathcal{J}_{1}(\bm{\mathcal{L}}_{i})(t,\mathbf{x},\mathbf{v}) \\ \mathcal{J}_{2}(\bm{\mathcal{L}}_{i})(t,\mathbf{x},\mathbf{v}) \end{array} \right) \cdot \left( \begin{array}{c} \nabla_{\mathbf{x}}G_{k-i}(t,\mathbf{x},\mathbf{v}) \\ \nabla_{\mathbf{v}}G_{k-i}(t,\mathbf{x},\mathbf{v}) \end{array} \right) \, , \end{split} \\ G_{k}(t=0,\mathbf{x},\mathbf{v}) = 0 \, . \end{array} \right. \end{equation} \end{theorem} \subsection{Application to axisymmetric charged particle beams} In this last example, we focus on the following axisymmetric linear Vlasov equation: \begin{equation*} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}f_{\epsilon}(t,r,v_{r}) + \cfrac{v_{r}}{\epsilon}\,{\partial}_{r}f_{\epsilon}(t,r,v_{r}) + \left(E_{\epsilon}(t,r)-\cfrac{r}{\epsilon}\right)\,{\partial}_{v_{r}}f_{\epsilon}(t,r,v_{r}) = 0 \, , \\ f_{\epsilon}(t=0,r,v_{r}) = f^{0}(r,v_{r}) \, . \end{array} \right. \end{equation*} In this system, $f_{\epsilon} = f_{\epsilon}(t,r,v_{r})$ is the unknown distribution function of the particles, $E_{\epsilon} = E_{\epsilon}(t,r)$ is the radial component of the external electric field, and the variables $(t,r,v_{r}) \in [0,T]\times {\mathbb R} \times {\mathbb R}$ stand for the time variable and the radial position and velocity variables, with the convention $f_{\epsilon}(t,r,v_{r}) = f_{\epsilon}(t,-r,-v_{r})$, $E_{\epsilon}(t,r) = -E_{\epsilon}(t,-r)$ (see \cite{Frenod-Gutnic-Hirstoaga,PIC-two-scale,Mouton_2009} for details). \\ \indent The two-scale convergence of $f_{\epsilon}$ at 0-th order has been studied by Fr\'enod, Sonnendr\"ucker and Salvarani in \cite{PIC-two-scale} in a richer context. We recall here this result: \begin{theorem}[Fr\'enod, Sonnendr\"ucker, Salvarani \cite{PIC-two-scale}]\label{TSCV_F0_axibeam} We assume that the initial distribution $f^{0}$ is positive on ${\mathbb R}^{2}$ and that $f^{0} \in L^{1}({\mathbb R}^{2};rdrdv_{r}) \cap L^{2}({\mathbb R}^{2};rdrdv_{r})$.
We also assume that the sequence $(E_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in the space $L^{\infty}\left(0,T;W^{1,3/2}(K;rdr)\right)$ for any $K \subset {\mathbb R}$ compact. Then, up to the extraction of a subsequence, $f_{\epsilon}$ two-scale converges to the profile $F_{0} = F_{0}(t,\tau,r,v_{r})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}({\mathbb R}^{2};rdrdv_{r})\right)\right)$ and $E_{\epsilon}$ two-scale converges to $\mathcal{E}_{0} = \mathcal{E}_{0}(t,\tau,r)$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;W^{1,3/2}(K;rdr)\right)\right)$ for any $K \subset {\mathbb R}$ compact, with $F_{0}$ defined by \begin{equation} F_{0}(t,\tau,r,v_{r}) = G_{0}(t,r\cos\tau-v_{r}\sin\tau,r\sin\tau + v_{r}\cos\tau) \, , \end{equation} with $G_{0} = G_{0}(t,r,v_{r}) \in L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{2};rdrdv_{r})\right)$ the solution of \begin{equation} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}G_{0} + \mathcal{J}_{1}(\mathcal{E}_{0})\,{\partial}_{r}G_{0} + \mathcal{J}_{2}(\mathcal{E}_{0})\,{\partial}_{v_{r}}G_{0} = 0 \, , \\ G_{0}(t=0,r,v_{r}) = f^{0}(r,v_{r}) \, , \end{array} \right. \end{equation} where \begin{align} \mathcal{J}_{1}(\mathcal{E}_{0})(t,r,v_{r}) &= -\cfrac{1}{2\pi}\,\int_{0}^{2\pi} \sin(\tau) \, \mathcal{E}_{0}(t,\tau,r\cos\tau+v_{r}\sin\tau) \, d\tau \, , \\ \mathcal{J}_{2}(\mathcal{E}_{0})(t,r,v_{r}) &= \cfrac{1}{2\pi}\,\int_{0}^{2\pi} \cos(\tau) \, \mathcal{E}_{0}(t,\tau,r\cos\tau+v_{r}\sin\tau) \, d\tau \, . \end{align} \end{theorem} In order to establish higher order two-scale convergence results, it is necessary to add some hypotheses on the external electric field $E_{\epsilon}$.
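The characterization in Theorem \ref{TSCV_F0_axibeam} rests on the fact that the pull-back $G_{0}(t,r\cos\tau-v_{r}\sin\tau,r\sin\tau+v_{r}\cos\tau)$ is annihilated by the fast transport operator ${\partial}_{\tau}+v_{r}\,{\partial}_{r}-r\,{\partial}_{v_{r}}$ coming from the stiff terms of the Vlasov equation. This cancellation can be checked numerically; the sketch below uses central differences and a hypothetical smooth profile $G_{0}$ (the Gaussian-type expression is an arbitrary illustrative choice, not taken from the text).

```python
import math

def G0(t, r, v):
    # hypothetical smooth profile G_0(t, r, v_r), chosen only for illustration
    return math.exp(-(r**2 + v**2)) * (1.0 + 0.3*r*v + 0.1*t)

def F0(t, tau, r, v):
    # pull-back along the fast rotation:
    # F_0(t,tau,r,v_r) = G_0(t, r cos(tau) - v_r sin(tau), r sin(tau) + v_r cos(tau))
    c, s = math.cos(tau), math.sin(tau)
    return G0(t, r*c - v*s, r*s + v*c)

def fast_residual(t, tau, r, v, h=1e-5):
    # central differences for d_tau F_0 + v_r d_r F_0 - r d_{v_r} F_0
    d_tau = (F0(t, tau + h, r, v) - F0(t, tau - h, r, v)) / (2*h)
    d_r   = (F0(t, tau, r + h, v) - F0(t, tau, r - h, v)) / (2*h)
    d_v   = (F0(t, tau, r, v + h) - F0(t, tau, r, v - h)) / (2*h)
    return d_tau + v*d_r - r*d_v

print(abs(fast_residual(0.5, 1.2, 0.7, -0.4)))  # ~0 up to finite-difference error
```

Whatever profile $G_{0}$ is chosen, the residual vanishes up to discretization error, which is why the constraint equation in $\tau$ determines $F_{0}$ only up to a function of the filtered variables.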
As in the previous paragraphs, we consider a fixed integer $k > 0$ and we formalize these hypotheses as follows: \begin{hypothesis}\label{hyp_Ei_axibeam} Defining recursively the sequence $(E_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \left\{ \begin{array}{ll} E_{\epsilon,i}(t,r) = \cfrac{1}{\epsilon}\,\left(E_{\epsilon,i-1}(t,r)-\mathcal{E}_{i-1}\left(t,\cfrac{t}{\epsilon},r\right)\right) \, , &\forall\,i=1,\dots,k \, , \\ E_{\epsilon,0}(t,r) = E_{\epsilon}(t,r) \, , & \end{array} \right. \end{equation*} we assume that, for all $i=0,\dots,k$, $(E_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converges to the profile $\mathcal{E}_{i} = \mathcal{E}_{i}(t,\tau,r)$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;W^{1,3/2}(K;rdr)\right)\right)$ for any $K \subset {\mathbb R}$ compact. \end{hypothesis} We also add some hypotheses about the two-scale convergence of $f_{\epsilon}$ at $i$-th order for $i=0,\dots,k-1$: \begin{hypothesis}\label{hyp_fi_axibeam} Defining recursively the sequence $(f_{\epsilon,i})_{\epsilon\,>\,0}$ as \begin{equation*} \left\{ \begin{array}{ll} f_{\epsilon,i}(t,r,v_{r}) = \cfrac{1}{\epsilon}\,\left(f_{\epsilon,i-1}(t,r,v_{r})-F_{i-1}\left(t,\cfrac{t}{\epsilon},r,v_{r}\right)\right) \, , &\forall\,i=1,\dots,k-1 \, , \\ f_{\epsilon,0}(t,r,v_{r}) = f_{\epsilon}(t,r,v_{r}) \, , & \end{array} \right. \end{equation*} we assume that, up to the extraction of a subsequence, $(f_{\epsilon,i})_{\epsilon\,>\,0}$ two-scale converges to $F_{i} = F_{i}(t,\tau,r,v_{r})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}({\mathbb R}^{2};rdrdv_{r})\right)\right)$ for $i=0,\dots,k-1$.
\\ \end{hypothesis} Hence we can define recursively $W_{0},\dots,W_{k}$ by setting $W_{0} = 0$ and, for any $i > 0$, \begin{equation} \label{def_Wi_axibeam} \begin{split} W_{i}(t,\tau,r,v_{r}) &= \int_{0}^{\tau} \Bigg[\sum_{j=0}^{i-1} \left( \begin{array}{c} \mathcal{J}_{1}(\mathcal{E}_{j})(t,r,v_{r}) + \sin(\sigma)\,\mathcal{E}_{j}(t,\sigma,r\cos\sigma+v_{r}\sin\sigma) \\ \mathcal{J}_{2}(\mathcal{E}_{j})(t,r,v_{r}) - \cos(\sigma)\,\mathcal{E}_{j}(t,\sigma,r\cos\sigma+v_{r}\sin\sigma) \end{array} \right) \\ &\qquad \qquad \qquad \qquad \cdot \left( \begin{array}{c} {\partial}_{r}G_{i-1-j}(t,r,v_{r}) + {\partial}_{r}W_{i-1-j}(t,\sigma,r,v_{r}) \\ {\partial}_{v_{r}}G_{i-1-j}(t,r,v_{r}) + {\partial}_{v_{r}}W_{i-1-j}(t,\sigma,r,v_{r}) \end{array} \right) \\ &\qquad \qquad - {\partial}_{t}W_{i-1}(t,\sigma,r,v_{r}) + \cfrac{1}{2\pi}\,\int_{0}^{2\pi} {\partial}_{t}W_{i-1}(t,\zeta,r,v_{r})\,d\zeta \Bigg] \, d\sigma \, , \end{split} \end{equation} where $G_{0},\dots,G_{k-1}$ are linked to $F_{0},\dots,F_{k-1}$ by the relations \begin{equation*} \begin{split} F_{i}(t,\tau,r,v_{r}) &= G_{i}(t,r\cos\tau-v_{r}\sin\tau,r\sin\tau + v_{r}\cos\tau) + W_{i}(t,\tau,r\cos\tau-v_{r}\sin\tau,r\sin\tau + v_{r}\cos\tau) \, . \end{split} \end{equation*} We finally introduce the function $R_{k-1}$ defined by \begin{equation} \begin{split} R_{k-1}(t,\tau,r,v_{r}) = {\partial}_{t}F_{k-1}(t,\tau,r,v_{r}) + \sum_{j=0}^{k-1} \Bigg[ \cfrac{1}{2\pi}\int_{0}^{2\pi} \left( \begin{array}{c} -\sin(\sigma)\mathcal{E}_{j}(t,\sigma+\tau,r\cos\sigma+v_{r}\sin\sigma) \\ \cos(\sigma)\mathcal{E}_{j}(t,\sigma+\tau,r\cos\sigma+v_{r}\sin\sigma) \end{array} \right) d\sigma \\ \cdot \left( \begin{array}{c} {\partial}_{r}F_{k-1-j}(t,\tau,r,v_{r}) \\ {\partial}_{v_{r}}F_{k-1-j}(t,\tau,r,v_{r}) \end{array} \right) \Bigg] \, .
\end{split} \end{equation} Hence we can extend the main result of \cite{Frenod-Gutnic-Hirstoaga} to the $k$-th order: \begin{theorem}\label{TSCV_Fk_axibeam} We assume that the hypotheses of Theorem \ref{TSCV_F0_axibeam} and Hypotheses \ref{hyp_Ei_axibeam} and \ref{hyp_fi_axibeam} are satisfied for a fixed $k \in {\mathbb N}^{*}$. In addition, taking $s'$ such that $\frac{1}{s'} = 1 - \frac{1}{q} - \frac{1}{r}$ with $r \in [1,\frac{2q}{2-q}[$ and defining $X^{s'}(K;rdrdv_{r}) = \left(W^{1,q}(K;rdrdv_{r})\right)' \cup \left(W^{1,s'}(K;rdrdv_{r})\right)'$, we assume that, for any compact subset $K \subset {\mathbb R}^{2}$, \begin{itemize} \item $W_{k} \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}(K;rdrdv_{r})\right)\right)$, \item ${\partial}_{t}W_{k},R_{k-1} \in L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;X^{s'}(K;rdrdv_{r})\right)\right)$. \end{itemize} Then, if the sequence $(f_{\epsilon,k})_{\epsilon\,>\,0}$ defined by \begin{equation*} f_{\epsilon,k}(t,r,v_{r}) = \cfrac{1}{\epsilon}\,\left(f_{\epsilon,k-1}(t,r,v_{r})-F_{k-1}\left(t,\cfrac{t}{\epsilon},r,v_{r}\right)\right) \, , \end{equation*} is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{2};rdrdv_{r})\right)$, it two-scale converges to the profile $F_{k}=F_{k}(t,\tau,r,v_{r})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,2\pi;L^{2}({\mathbb R}^{2};rdrdv_{r})\right)\right)$ with \begin{equation} \begin{split} F_{k}(t,\tau,r,v_{r}) &= G_{k}(t,r\cos\tau-v_{r}\sin\tau,r\sin\tau + v_{r}\cos\tau) \\ &\qquad + W_{k}(t,\tau,r\cos\tau-v_{r}\sin\tau,r\sin\tau + v_{r}\cos\tau) \, , \end{split} \end{equation} where $W_{k}$ is defined in \eqref{def_Wi_axibeam} and where $G_{k} = G_{k}(t,r,v_{r}) \in L^{\infty}\left(0,T;L_{loc}^{2}({\mathbb R}^{2};rdrdv_{r})\right)$ is the solution of \begin{equation}\label{def_Gk_axibeam} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}&G_{k}(t,r,v_{r}) +
\mathcal{J}_{1}(\mathcal{E}_{0})(t,r,v_{r})\,{\partial}_{r}G_{k}(t,r,v_{r}) + \mathcal{J}_{2}(\mathcal{E}_{0})(t,r,v_{r})\,{\partial}_{v_{r}}G_{k}(t,r,v_{r}) \\ &= -\cfrac{1}{2\pi}\int_{0}^{2\pi} {\partial}_{t}W_{k}(t,\tau,r,v_{r}) \, d\tau \\ &\quad +\cfrac{1}{2\pi} \sum_{i\,=\,0}^{k}\int_{0}^{2\pi} \sin(\tau)\mathcal{E}_{i}(t,\tau,r\cos\tau+v_{r}\sin\tau)\,{\partial}_{r}W_{k-i}(t,\tau,r,v_{r}) \, d\tau \\ &\quad -\cfrac{1}{2\pi} \sum_{i\,=\,0}^{k}\int_{0}^{2\pi} \cos(\tau)\mathcal{E}_{i}(t,\tau,r\cos\tau+v_{r}\sin\tau)\,{\partial}_{v_{r}}W_{k-i}(t,\tau,r,v_{r}) \, d\tau \\ &\quad - \sum_{i\,=\,1}^{k} \mathcal{J}_{1}(\mathcal{E}_{i})(t,r,v_{r})\,{\partial}_{r}G_{k-i}(t,r,v_{r}) - \sum_{i\,=\,1}^{k} \mathcal{J}_{2}(\mathcal{E}_{i})(t,r,v_{r})\,{\partial}_{v_{r}}G_{k-i}(t,r,v_{r}) \, , \end{split} \\ G_{k}(t=0,r,v_{r}) = 0 \, . \end{array} \right. \end{equation} \end{theorem} \section{Characterization of each $U_{k}$} \indent In this section, we aim to prove the two-scale convergence results presented in Theorems \ref{def_U0}, \ref{eq_U0}, \ref{CV_Uk} and \ref{cor_eq_Uk}. For this purpose, we choose to detail the proofs for a generic equation of the form \begin{equation}\label{eq_geps_generic} \hspace{-0.3em}\left\{ \begin{array}{l} \displaystyle {\partial}_{t}g_{\epsilon}(t,\mathbf{x}) + \mathbf{A}_{\epsilon}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}g_{\epsilon}(t,\mathbf{x}) + \cfrac{1}{\epsilon}\,\mathbf{L}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \cdot \nabla_{\mathbf{x}}g_{\epsilon}(t,\mathbf{x}) = \cfrac{1}{\epsilon}\,f_{\epsilon}(t,\mathbf{x}) \, , \\ g_{\epsilon}(t=0,\mathbf{x}) = g^{0}(\mathbf{x}) \, , \end{array} \right. \end{equation} in which $f_{\epsilon}$, $\mathbf{A}_{\epsilon}$ and $\mathbf{L}$ are known and where $g_{\epsilon}$ is the unknown.
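In \eqref{eq_geps_generic}, the stiff term $\frac{1}{\epsilon}\,\mathbf{L}\cdot\nabla_{\mathbf{x}}$ generates a fast flow $\mathbf{X}$, and the proofs below filter it out by composing the unknown with this flow. The toy computation below (all fields are hypothetical choices, not taken from the text) integrates characteristics $\dot{X} = A(X) + \frac{1}{\epsilon}L(X)$ for the stiff rotation $L(x) = (x_{2},-x_{1})$ and shows that, once the fast rotation is removed, the endpoint is almost independent of $\epsilon$:

```python
import math

def rk4_step(f, t, y, dt):
    # one classical Runge-Kutta step for a 2-D system
    k1 = f(t, y)
    k2 = f(t + dt/2, [y[i] + dt/2*k1[i] for i in range(2)])
    k3 = f(t + dt/2, [y[i] + dt/2*k2[i] for i in range(2)])
    k4 = f(t + dt,   [y[i] + dt*k3[i]   for i in range(2)])
    return [y[i] + dt/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]

def filtered_endpoint(eps, T=1.0, steps=20000):
    # characteristics dX/dt = A(X) + (1/eps) L(X), with a hypothetical slow field A
    # and the stiff rotation L(x) = (x2, -x1)
    def f(t, x):
        a = [0.1, 0.05*x[0]]
        return [a[0] + x[1]/eps, a[1] - x[0]/eps]
    x, dt = [1.0, 0.0], T/steps
    for k in range(steps):
        x = rk4_step(f, k*dt, x, dt)
    # remove the fast rotation of angle T/eps (discrete analogue of the
    # change of unknown h_eps used below)
    c, s = math.cos(T/eps), math.sin(T/eps)
    return [c*x[0] - s*x[1], s*x[0] + c*x[1]]

y1, y2 = filtered_endpoint(1e-2), filtered_endpoint(5e-3)
print(max(abs(y1[i] - y2[i]) for i in range(2)))  # small: the filtered flow is slow
```

The unfiltered endpoints for the two values of $\epsilon$ sit at completely different phases of the fast rotation, while the filtered ones agree up to $O(\epsilon)$; this is the discrete counterpart of the change of unknown used in the proof below.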
This section is structured as follows: first, we detail some two-scale convergence results for the model \eqref{eq_geps_generic} under some well-chosen hypotheses on $\mathbf{A}_{\epsilon}$, $\mathbf{L}$ and $f_{\epsilon}$. Then we apply these results to the equations satisfied by each $u_{\epsilon,i}$ recursively defined through Hypothesis \ref{def_uepsi}. \subsection{Two-scale convergence of $g_{\epsilon}$} We aim to establish some two-scale convergence results for the sequence $(g_{\epsilon})_{\epsilon\,>\,0}$. These results are detailed in the following theorem: \begin{theorem}\label{TSCV_g} We consider $s' > 0$ such that $\frac{1}{s'} = 1-\frac{1}{q}-\frac{1}{r}$ with $r \in [1,\frac{nq}{n-q}[$ and, for any compact subset $K \subset {\mathbb R}^{n}$, we define $X^{s'}(K) = \left(W_{0}^{1,s'}(K)\right)' \cup \left(W_{0}^{1,q}(K)\right)'$. We assume that $\mathbf{A}_{\epsilon}$ and $\mathbf{L}$ satisfy Hypotheses \ref{hyp_U0} and that $g^{0}$ and $(f_{\epsilon})_{\epsilon\,>\,0}$ have the following properties: \begin{itemize} \item $g^{0} \in L^{p}({\mathbb R}^{n})$, \item $f_{\epsilon}$ is bounded independently of $\epsilon$ in $W^{1,\infty}\left(0,T;X^{s'}(K)\right)$ and admits $F = F(t,\tau,\mathbf{x})$ as a two-scale limit in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$, \item $F$ satisfies \begin{equation} \label{Speriodic} \forall\, (t,\mathbf{x}) \, , \qquad \int_{0}^{\theta} F\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right)\, d\tau = 0 \, , \end{equation} \item The sequence $(f_{\epsilon,1})_{\epsilon\,>\,0}$ defined by \begin{equation*} f_{\epsilon,1}(t,\mathbf{x}) = \frac{1}{\epsilon}\, \left(f_{\epsilon}(t,\mathbf{x})-F\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\right) \, , \end{equation*} is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;X^{s'}(K)\right)$ and two-scale converges to the profile $F_{1} =
F_{1}(t,\tau,\mathbf{x})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$, \item Defining the function $S = S(t,\tau,\mathbf{x})$ as \begin{equation*} S(t,\tau,\mathbf{x}) = \int_{0}^{\tau} F\left(t,\sigma,\mathbf{X}(\sigma;\mathbf{x},t;0)\right)\, d\sigma \, , \end{equation*} we assume that $S$ lies in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L_{loc}^{p}({\mathbb R}^{n})\right)\right)$ and that ${\partial}_{t}S$ lies in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$. \end{itemize} If $(g_{\epsilon})_{\epsilon\,>\,0}$ is bounded in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$, it admits a two-scale limit $G = G(t,\tau,\mathbf{x})$ in the space $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$ and $G$ is characterized by the relation \begin{equation} G(t,\tau,\mathbf{x}) = H\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) + S\left(t,\tau,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, , \end{equation} where $H = H(t,\mathbf{x}) \in L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ satisfies \begin{equation}\label{eq_H} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}H&(t,\mathbf{x}) + \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}H(t,\mathbf{x}) \\ &= \cfrac{1}{\theta}\, \int_{0}^{\theta} F_{1}\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) \, d\tau -\cfrac{1}{\theta}\,\int_{0}^{\theta} \left[{\partial}_{t}S(t,\tau,\mathbf{x}) - \bm{\alpha}_{0}(t,\tau,\mathbf{x})\cdot\nabla_{\mathbf{x}}S(t,\tau,\mathbf{x})\right] d\tau \, , \end{split} \\ H(t=0,\mathbf{x}) = g^{0}(\mathbf{x}) \, . \end{array} \right.
\end{equation} \end{theorem} \begin{proof} Since $(g_{\epsilon})_{\epsilon\,>\,0}$ is assumed to be bounded in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ independently of $\epsilon$, it admits a two-scale limit $G = G(t,\tau,\mathbf{x})$ in the functional space $L^{\infty}\left(0,T;L_{\#}^{p}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$. In the same spirit as \cite{Finite_Larmor_radius}, the next step of the proof consists in finding an equation linking the first-order derivative of $G$ in $\tau$ to the derivatives of $G$ in the $\mathbf{x}$-direction. For this purpose, we consider a test function $\psi = \psi(t,\tau,\mathbf{x})$ defined on $[0,T] \times {\mathbb R} \times {\mathbb R}^{n}$, $\theta$-periodic in the $\tau$-direction and with compact support $K \subset {\mathbb R}^{n}$ in the $\mathbf{x}$-direction. We multiply \eqref{eq_geps_generic} by $\psi(t,\frac{t}{\epsilon},\mathbf{x})$ and we integrate the result in $t$ and $\mathbf{x}$. Some integrations by parts give \begin{equation*} \begin{split} \int_{0}^{T} \int_{K} g_{\epsilon}(t,\mathbf{x}) \, \Bigg[ &({\partial}_{t}\psi)\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) + \cfrac{1}{\epsilon}\, ({\partial}_{\tau}\psi)\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) + \mathbf{A}_{\epsilon}(t,\mathbf{x}) \cdot (\nabla_{\mathbf{x}}\psi)\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \\ &+ \cfrac{1}{\epsilon}\,\mathbf{L}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\cdot (\nabla_{\mathbf{x}}\psi)\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \Bigg] \, d\mathbf{x} \, dt \\ &= -\cfrac{1}{\epsilon}\, \int_{0}^{T} \int_{K} f_{\epsilon}(t,\mathbf{x}) \, \psi\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, d\mathbf{x}\, dt + \int_{K} g^{0}(\mathbf{x})\,\psi(0,0,\mathbf{x}) \, d\mathbf{x} \, . \end{split} \end{equation*} Thanks to the considered assumptions for $g^{0}$, $\mathbf{A}_{\epsilon}$, $\mathbf{L}$ and $f_{\epsilon}$, we can multiply by $\epsilon$ and pass to the limit $\epsilon \to 0$.
This gives \begin{equation*} \begin{split} \int_{0}^{\theta} \int_{0}^{T} \int_{K} G(t,\tau,\mathbf{x}) \, \Bigg[ {\partial}_{\tau}\psi(t,\tau,\mathbf{x}) + &\mathbf{L}(t,\tau,\mathbf{x})\cdot \nabla_{\mathbf{x}}\psi(t,\tau,\mathbf{x}) \Bigg] \, d\mathbf{x} \, dt \, d\tau \\ &= -\int_{0}^{\theta}\int_{0}^{T} \int_{K} F(t,\tau,\mathbf{x}) \, \psi(t,\tau,\mathbf{x}) \, d\mathbf{x}\, dt \, d\tau \, . \end{split} \end{equation*} This means that $G$ satisfies the following equation in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L_{loc}^{p}({\mathbb R}^{n})\right)\right)$: \begin{equation*} {\partial}_{\tau}G(t,\tau,\mathbf{x}) + \mathbf{L}(t,\tau,\mathbf{x}) \cdot \nabla_{\mathbf{x}}G(t,\tau,\mathbf{x}) = F(t,\tau,\mathbf{x}) \, . \end{equation*} According to Lemma 2.1 from \cite{Two-scale_expansion} and thanks to the hypothesis \eqref{Speriodic}, we can write $G$ as follows \begin{equation*} G(t,\tau,\mathbf{x}) = H\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) + \int_{0}^{\tau} F\left(t,\sigma,\mathbf{X}(\sigma-\tau;\mathbf{x},t;0)\right) \, d\sigma \, , \end{equation*} with $H = H(t,\mathbf{x}) \in L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$. \\ \indent The next step consists in proving that $H$ satisfies \eqref{eq_H}. For this purpose, we introduce the sequence $(h_{\epsilon})_{\epsilon\,>\,0}$ defined as \begin{equation} \label{link_gh} g_{\epsilon}(t,\mathbf{x}) = h_{\epsilon}\left(t,\mathbf{X}(-\cfrac{t}{\epsilon};\mathbf{x},t;0)\right) + \int_{0}^{t/\epsilon} F\left(t,\sigma,\mathbf{X}(\sigma-\cfrac{t}{\epsilon};\mathbf{x},t;0)\right) \, d\sigma \, .
\end{equation} Substituting this relation into \eqref{eq_geps_generic} gives \begin{equation} \label{eq_heps_eq} \left\{ \hspace{-0.4em} \begin{array}{l} \begin{split} &{\partial}_{t}h_{\epsilon}(t,\mathbf{x}) + \tilde{\mathbf{A}}_{\epsilon}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}h_{\epsilon}(t,\mathbf{x}) \\ &\hspace{0.2em}= f_{\epsilon,1}\left(t,\mathbf{X}\left(\cfrac{t}{\epsilon};\mathbf{x},t;0\right)\right) - ({\partial}_{t}S)\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) - \tilde{\mathbf{A}}_{\epsilon}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}S\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) , \end{split} \\ h_{\epsilon}(t=0,\mathbf{x}) = g^{0}(\mathbf{x}) \, , \end{array} \right. \end{equation} where $\tilde{\mathbf{A}}_{\epsilon}$ is linked to $\mathbf{A}_{\epsilon}$ through the following relation: \begin{equation*} \tilde{\mathbf{A}}_{\epsilon}(t,\mathbf{x}) = \left((\nabla_{\mathbf{x}}\mathbf{X})\left(\cfrac{t}{\epsilon};\mathbf{x},t;0\right)\right)^{-1} \left( \mathbf{A}_{\epsilon}\left(t,\mathbf{X}\left(\cfrac{t}{\epsilon};\mathbf{x},t;0\right)\right)-({\partial}_{t}\mathbf{X})\left(\cfrac{t}{\epsilon};\mathbf{x},t;0\right) \right) \, . \end{equation*} From the definition of $h_{\epsilon}$ provided by \eqref{link_gh} and the hypotheses made for $F$ and $(g_{\epsilon})_{\epsilon\,>\,0}$, we can write \begin{equation*} \forall\, t, \qquad \left\|h_{\epsilon}(t,\cdot)\right\|_{L^{p}(K)} \leq \left\|g_{\epsilon}(t,\cdot)\right\|_{L^{p}(K)} + \theta\, \left\|F(t,\cdot,\cdot)\right\|_{L_{\#}^{\infty}\left(0,\theta;L^{p}(K)\right)} \, , \end{equation*} for all compact subset $K \subset {\mathbb R}^{n}$ so we deduce that the sequence $(h_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ and, up to a subsequence, two-scale converges to $H$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$.
Indeed, if we consider a test function $\psi = \psi(t,\tau,\mathbf{x})$ on $[0,T] \times {\mathbb R} \times {\mathbb R}^{n}$ being $\theta$-periodic in $\tau$ direction and with compact support $K \subset {\mathbb R}^{n}$ in $\mathbf{x}$-direction, we have \begin{equation*} \begin{split} &\lim_{\epsilon \to 0} \int_{0}^{T}\int_{{\mathbb R}^{n}} h_{\epsilon}(t,\mathbf{x}) \, \psi\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, d\mathbf{x}\, dt \\ &\hspace{0.6em}= \lim_{\epsilon \to 0} \int_{0}^{T}\int_{{\mathbb R}^{n}} \left[g_{\epsilon}\left(t,\mathbf{X}\left(\cfrac{t}{\epsilon};\mathbf{x},t;0\right)\right) - S\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \right] \psi\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, d\mathbf{x}\, dt \\ &\hspace{0.6em}= \lim_{\epsilon \to 0} \int_{0}^{T}\int_{{\mathbb R}^{n}} \left[ g_{\epsilon}(t,\mathbf{x})\, \psi\left(t,\cfrac{t}{\epsilon},\mathbf{X}\left(-\cfrac{t}{\epsilon};\mathbf{x},t;0\right)\right) - S\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \psi\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\right] \, d\mathbf{x}\, dt \\ &\hspace{0.6em}= \cfrac{1}{\theta} \, \int_{0}^{\theta} \int_{0}^{T}\int_{{\mathbb R}^{n}} \left[ G(t,\tau,\mathbf{x}) \, \psi\left(t,\tau,\mathbf{X}\left(-\tau;\mathbf{x},t;0\right)\right) - S\left(t,\tau,\mathbf{x}\right) \psi\left(t,\tau,\mathbf{x}\right)\right] \, d\mathbf{x}\, dt \, d\tau \\ &\hspace{0.6em}= \cfrac{1}{\theta} \, \int_{0}^{\theta} \int_{0}^{T}\int_{{\mathbb R}^{n}} \left[ G\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) - S(t,\tau,\mathbf{x})\right] \psi(t,\tau,\mathbf{x}) \, d\mathbf{x}\, dt \, d\tau \\ &\hspace{0.6em}= \cfrac{1}{\theta} \, \int_{0}^{\theta} \int_{0}^{T}\int_{{\mathbb R}^{n}} H(t,\mathbf{x}) \, \psi(t,\tau,\mathbf{x}) \, d\mathbf{x}\, dt \, d\tau \, . \end{split} \end{equation*} Consequently, $h_{\epsilon}$ weakly-* converges to $H$ in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ according to Theorem \ref{TSCV_Allaire}. 
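The computation above is an instance of the two-scale pairing of Theorem \ref{TSCV_Allaire}: testing a sequence against $\psi(t,\frac{t}{\epsilon},\mathbf{x})$ and letting $\epsilon \to 0$ produces the pairing of the two-scale limit with $\psi$, averaged in $\tau$. A scalar toy computation (with hypothetical profiles $h$ and $\psi$, not taken from the text) makes this visible:

```python
import math

# hypothetical slow sequence (no fast oscillation) paired with a 2*pi-periodic test function
h   = lambda t: t * math.exp(-t)
psi = lambda t, tau: 1.0 + math.cos(tau)

def pairing(eps, n=200000):
    # midpoint rule for the oscillatory pairing: integral over (0,1) of h(t) psi(t, t/eps)
    dt = 1.0 / n
    return sum(h((k + 0.5) * dt) * psi((k + 0.5) * dt, (k + 0.5) * dt / eps)
               for k in range(n)) * dt

# two-scale limit predicted by the theory: the tau-average of psi is 1, so the
# pairing tends to the integral of t*exp(-t) over (0,1), i.e. 1 - 2/e
print(abs(pairing(1e-3) - (1.0 - 2.0 / math.e)))  # small: the fast cosine averages out
```

Since $h$ carries no fast oscillation, its two-scale limit is independent of $\tau$ and the oscillatory part of $\psi$ averages out, exactly as for $h_{\epsilon}$ above.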
However, as in \cite{Finite_Larmor_radius}, we are able to obtain a strong convergence result for $h_{\epsilon}$ in a well-chosen functional space: \begin{lemma} \label{strongCV_heps} For any compact subset $K \subset {\mathbb R}^{n}$, the sequence $(h_{\epsilon})_{\epsilon\,>\,0}$ strongly converges to $H$ in $L^{\infty}\left(0,T;\left(W_{0}^{1,q}(K)\right)'\right)$. \end{lemma} \begin{proof} The procedure closely follows the proof of Lemma 4.1 of \cite{Finite_Larmor_radius}. Indeed, from the assumptions made for the sequences $(g_{\epsilon})_{\epsilon\,>\,0}$, $(\mathbf{A}_{\epsilon})_{\epsilon\,>\,0}$, $\mathbf{L}$, $(f_{\epsilon})_{\epsilon\,>\,0}$ and $(f_{\epsilon,1})_{\epsilon\,>\,0}$, we consider a compact subset $K$ of ${\mathbb R}^{n}$ and we successively prove that \begin{itemize} \item $(\tilde{\mathbf{A}}_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $\left(L^{\infty}\left(0,T;W^{1,q}(K)\right)\right)^{n}$ and satisfies $\nabla_{\mathbf{x}} \cdot \tilde{\mathbf{A}}_{\epsilon} = 0$, \item $(\tilde{\mathbf{A}}_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $\left(L^{\infty}\left(0,T;L^{r}(K)\right)\right)^{n}$ for any $r \in [1,\frac{nq}{n-q}[$, \item $(\tilde{\mathbf{A}}_{\epsilon}\,h_{\epsilon})_{\epsilon\,>\,0}$ and $(\tilde{\mathbf{A}}_{\epsilon}\,S(\cdot,\frac{\cdot}{\epsilon},\cdot))_{\epsilon\,>\,0}$ are bounded independently of $\epsilon$ in the space $\left(L^{\infty}\left(0,T;L^{s}(K)\right)\right)^{n}$ with $s$ satisfying $\frac{1}{s} = \frac{1}{q}+\frac{1}{r}$, \item $\left(\nabla_{\mathbf{x}} \cdot (\tilde{\mathbf{A}}_{\epsilon}\,h_{\epsilon})\right)_{\epsilon\,>\,0}$ and $\left(\nabla_{\mathbf{x}} \cdot (\tilde{\mathbf{A}}_{\epsilon}\,S(\cdot,\frac{\cdot}{\epsilon},\cdot))\right)_{\epsilon\,>\,0}$ are bounded in the space $\left(L^{\infty}\left(0,T;\left(W_{0}^{1,s'}(K)\right)'\right)\right)^{n}$ and consequently in $\left(L^{\infty}\left(0,T;X^{s'}(K)\right)\right)^{n}$ independently
of $\epsilon$. \end{itemize} In addition to these results, we deduce from the hypotheses on $F$ that ${\partial}_{t}S$ is in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}(K)\right)\right)$. At this point, we distinguish two cases according to the value of $s'$ relative to $q$: \begin{enumerate} \item Assume that $s' > q$. This leads to the continuous embedding $\left(L^{q}(K)\right)' \subset \left(L^{s'}(K)\right)'$ and, consequently, to the continuous embedding $\left(W_{0}^{1,q}(K)\right)' \subset \left(W_{0}^{1,s'}(K)\right)'$, so $X^{s'}(K) = \left(W_{0}^{1,s'}(K)\right)'$. On the other hand, Rellich's theorem gives the compact embedding $L^{p}(K) \subset \left(W_{0}^{1,q}(K)\right)'$. Hence, ${\partial}_{t}S$ and $f_{\epsilon,1}$ lie in $L^{\infty}\left(0,T;\left(W_{0}^{1,s'}(K)\right)'\right)$, and the sequence $(f_{\epsilon,1})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in this space. Finally, we can write that $(h_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in the following space: \begin{equation*} \mathcal{U} = \left\{ h \in L^{\infty}\left(0,T;L^{p}(K)\right) \, : \, {\partial}_{t}h \in L^{\infty}\left(0,T;\left(W_{0}^{1,s'}(K)\right)'\right) \right\} \, . \end{equation*} Aubin-Lions' lemma indicates that $\mathcal{U}$ is compactly embedded in the space $L^{\infty}\left(0,T;\left(W_{0}^{1,q}(K)\right)'\right)$, so $h_{\epsilon}$ weakly-* converges to $H$ in $L^{\infty}\left(0,T;L^{p}(K)\right)$ and strongly converges to $H$ in $L^{\infty}\left(0,T;\left(W_{0}^{1,q}(K)\right)'\right)$. \item Assume that $s' \leq q$.
As a consequence, we have the continuous embedding $\left(W_{0}^{1,s'}(K)\right)' \subset \left(W_{0}^{1,q}(K)\right)'$, $X^{s'}(K) = \left(W_{0}^{1,q}(K)\right)'$ and the compact embedding $L^{p}(K) \subset \left(W_{0}^{1,q}(K)\right)'$, which ensures that $({\partial}_{t}h_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;\left(W_{0}^{1,q}(K)\right)'\right)$ and that $(h_{\epsilon})_{\epsilon\,>\,0}$ is bounded in the functional space $\mathcal{U}$ defined by \begin{equation*} \mathcal{U} = \left\{ h \in L^{\infty}\left(0,T;L^{p}(K)\right) \, : \, {\partial}_{t}h \in L^{\infty}\left(0,T;\left(W_{0}^{1,q}(K)\right)'\right) \right\} \, . \end{equation*} Applying Aubin-Lions' lemma finally allows us to conclude that the weak-* convergence of $h_{\epsilon}$ to $H$ in $L^{\infty}\left(0,T;L^{p}(K)\right)$ is a strong convergence in the space $L^{\infty}\left(0,T;\left(W_{0}^{1,q}(K)\right)'\right)$. \end{enumerate} \end{proof} In order to conclude the proof of Theorem \ref{TSCV_g}, we now consider a test function $\psi = \psi(t,\mathbf{x})$ on $[0,T] \times {\mathbb R}^{n}$ with compact support $K \subset {\mathbb R}^{n}$ in the $\mathbf{x}$-direction.
Multiplying \eqref{eq_heps_eq} by $\psi(t,\mathbf{x})$ and integrating the result in $t$ and $\mathbf{x}$, we obtain \begin{equation*} \begin{split} -\int_{0}^{T}& \int_{{\mathbb R}^{n}} h_{\epsilon}(t,\mathbf{x}) \, \left[ {\partial}_{t}\psi(t,\mathbf{x}) + \tilde{\mathbf{A}}_{\epsilon}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}\psi(t,\mathbf{x}) \right] \, d\mathbf{x} \, dt - \int_{{\mathbb R}^{n}} g^{0}(\mathbf{x}) \, \psi(0,\mathbf{x}) \, d\mathbf{x} \\ &= \int_{0}^{T} \int_{{\mathbb R}^{n}} f_{\epsilon,1}(t,\mathbf{x}) \, \psi\left(t,\mathbf{X}\left(-\cfrac{t}{\epsilon};\mathbf{x},t;0\right)\right) \, d\mathbf{x} \, dt \\ &\quad - \int_{0}^{T} \int_{{\mathbb R}^{n}} \left[ ({\partial}_{t}S)\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\, \psi(t,\mathbf{x}) - S\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, \tilde{\mathbf{A}}_{\epsilon}(t,\mathbf{x})\cdot\nabla_{\mathbf{x}}\psi(t,\mathbf{x}) \right] \, d\mathbf{x}\, dt \, . \end{split} \end{equation*} Thanks to Lemma \ref{strongCV_heps} and to the hypotheses formulated on $\mathbf{A}_{\epsilon}$, $f_{\epsilon,1}$ and $S$, we can pass to the limit as $\epsilon$ converges to 0: \begin{equation*} \begin{split} \int_{0}^{T} \int_{{\mathbb R}^{n}} &H(t,\mathbf{x}) \left[ {\partial}_{t}\psi(t,\mathbf{x}) + \left[\cfrac{1}{\theta}\int_{0}^{\theta}\bm{\alpha}_{0}(t,\tau,\mathbf{x}) \, d\tau \right] \cdot \nabla_{\mathbf{x}}\psi(t,\mathbf{x}) \right] d\mathbf{x} \, dt + \int_{{\mathbb R}^{n}} g^{0}(\mathbf{x}) \, \psi(0,\mathbf{x}) \, d\mathbf{x} \\ &= -\cfrac{1}{\theta}\, \int_{0}^{T} \int_{{\mathbb R}^{n}} \int_{0}^{\theta}F_{1}(t,\tau,\mathbf{x}) \, \psi\left(t,\mathbf{X}\left(-\tau;\mathbf{x},t;0\right)\right) \, d\tau\, d\mathbf{x} \, dt \\ &\qquad + \int_{0}^{T} \int_{{\mathbb R}^{n}} \left[\cfrac{1}{\theta}\,\int_{0}^{\theta}{\partial}_{t}S\left(t,\tau,\mathbf{x}\right) \, d\tau\right] \, \psi(t,\mathbf{x}) \, d\mathbf{x}\, dt \\ &\qquad - \int_{0}^{T}\int_{{\mathbb R}^{n}}
\left[\cfrac{1}{\theta}\,\int_{0}^{\theta} S\left(t,\tau,\mathbf{x}\right) \, \bm{\alpha}_{0}(t,\tau,\mathbf{x})\, d\tau \right] \cdot\nabla_{\mathbf{x}}\psi(t,\mathbf{x}) \, d\mathbf{x}\, dt \, . \end{split} \end{equation*} This corresponds to the variational formulation of \eqref{eq_H} in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$. \end{proof} \subsection{Identification of each $U_{k}$} With Theorem \ref{TSCV_g} in hand, we can apply it to identify each term $U_{k}$ of the expansion \eqref{expansion}. To obtain the equations for $U_{0}$, we simply apply this theorem with the source term $f_{\epsilon} = 0$ on $[0,T] \times {\mathbb R}^{n}$ for each $\epsilon \geq 0$. As a consequence, assuming that $(u_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ in addition to Hypotheses \ref{hyp_U0} is sufficient to get the two-scale convergence of $u_{\epsilon}$ to the profile $U_{0} = U_{0}(t,\tau,\mathbf{x})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$ entirely characterized by \begin{equation*} U_{0}(t,\tau,\mathbf{x}) = V_{0}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, , \end{equation*} where $V_{0} = V_{0}(t,\mathbf{x}) \in L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ satisfies \begin{equation*} \left\{ \begin{array}{l} \displaystyle {\partial}_{t}V_{0}(t,\mathbf{x}) + \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}V_{0}(t,\mathbf{x}) = 0 \, , \\ V_{0}(t=0,\mathbf{x}) = u^{0}(\mathbf{x}) \, . \end{array} \right. \end{equation*} This is the conclusion of Theorem \ref{def_U0}.
To reach the results of Theorem \ref{eq_U0}, we differentiate the relation \eqref{link_U0V0} with respect to $\mathbf{x}$ and $t$ and obtain \begin{equation*} \nabla_{\mathbf{x}}U_{0}(t,\tau,\mathbf{x}) = \left((\nabla_{\mathbf{x}}\mathbf{X})(-\tau;\mathbf{x},t;0)\right)^{T}\, (\nabla_{\mathbf{x}}V_{0})\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, , \end{equation*} and \begin{equation*} \begin{split} {\partial}_{t}U_{0}(t,\tau,\mathbf{x}) &= ({\partial}_{t}V_{0})\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) + {\partial}_{t}\mathbf{X}(-\tau;\mathbf{x},t;0) \cdot (\nabla_{\mathbf{x}}V_{0})\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \\ &= \left[{\partial}_{t}\mathbf{X}(-\tau;\mathbf{x},t;0)-\tilde{\mathbf{a}}_{0}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \right] \cdot (\nabla_{\mathbf{x}}V_{0})\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \\ &= \left[\left((\nabla_{\mathbf{x}}\mathbf{X})(-\tau;\mathbf{x},t;0)\right)^{-1}\left[{\partial}_{t}\mathbf{X}(-\tau;\mathbf{x},t;0)-\tilde{\mathbf{a}}_{0}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \right] \right]\cdot \nabla_{\mathbf{x}}U_{0}(t,\tau,\mathbf{x}) \, . \end{split} \end{equation*} \indent To identify the higher order terms, more calculations are needed. First, we consider a fixed integer $k \in {\mathbb N}^{*}$ and we assume that Hypotheses \ref{Hilbert_Aeps} and \ref{Hilbert_ueps_kth} are satisfied at step $k$ and that the results of Theorem \ref{CV_Uk} are true for $i=0,\dots,k-1$, meaning that $U_{0},\dots,U_{k-1}$ are fully characterized. These assumptions allow us to define $\bm{\alpha}_{i}$, $\tilde{\mathbf{a}}_{i}$, $\mathbf{a}_{i}$, $W_{i}$ and $R_{i}$ for any $i=0,\dots,k$, as suggested in paragraph \ref{TSCV_Uk}.
Then we can write an evolution equation for $u_{\epsilon,i}$ for any $i = 1,\dots,k$ by a recurrence procedure: this equation reads \begin{equation*} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}u_{\epsilon,i}(t,\mathbf{x}) &+ \mathbf{A}_{\epsilon}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}u_{\epsilon,i}(t,\mathbf{x}) + \cfrac{1}{\epsilon}\,\mathbf{L}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \cdot \nabla_{\mathbf{x}}u_{\epsilon,i}(t,\mathbf{x}) \\ &= \displaystyle \cfrac{1}{\epsilon}\,\sum_{j\,=\,0}^{i-1}\left(\mathbf{a}_{j}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) - \mathbf{A}_{\epsilon,j}(t,\mathbf{x}) \right) \cdot \nabla_{\mathbf{x}}U_{i-1-j}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) - \cfrac{1}{\epsilon}\,R_{i-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, , \end{split} \\ u_{\epsilon,i}(t=0,\mathbf{x}) = 0 \, , \end{array} \right. \end{equation*} for any $i > 0$. \\ \indent As a consequence, we aim to apply Theorem \ref{TSCV_g} with $f_{\epsilon}$ defined by \begin{equation*} f_{\epsilon}(t,\mathbf{x}) = \sum_{i\,=\,0}^{k-1}\left[ \mathbf{a}_{i}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)-\mathbf{A}_{\epsilon,i}(t,\mathbf{x})\right] \cdot \nabla_{\mathbf{x}}U_{k-1-i}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) - R_{k-1}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, . \end{equation*} First, we have to verify that the sequence $(f_{\epsilon})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;X^{s'}(K)\right)$ for any compact subset $K \subset {\mathbb R}^{n}$. For this purpose, we first remark that Hypotheses \ref{hyp_U0} and \ref{Hilbert_Aeps} imply that there exists a constant $C = C(K) > 0$ such that \begin{equation*} \left\|\mathbf{a}_{i}\left(t,\cfrac{t}{\epsilon},\cdot\right)-\mathbf{A}_{\epsilon,i}(t,\cdot)\right\|_{W^{1,q}(K)} \leq C(K) \, , \end{equation*} for any $t \in [0,T]$ and $\epsilon > 0$.
Hence, following the same methodology as in the proof of Lemma \ref{strongCV_heps} and assuming that $R_{k-1}$ is in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$ leads to the existence of a constant $C' = C'(K) > 0$ such that \begin{equation*} \|f_{\epsilon}(t,\cdot)\|_{X^{s'}(K)} \leq C'(K) \, , \end{equation*} for any $\epsilon > 0$ and $t \in [0,T]$, where the norm $\|\cdot\|_{X^{s'}(K)}$ is either the usual norm on $\left(W_{0}^{1,q}(K)\right)'$ or $\left(W_{0}^{1,s'}(K)\right)'$ according to the sign of $s'-q$. \\ \indent This result indicates that $f_{\epsilon}$ two-scale converges to the profile $F = F(t,\tau,\mathbf{x})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$ characterized by \begin{equation*} F(t,\tau,\mathbf{x}) = \sum_{i\,=\,0}^{k-1}\left[ \mathbf{a}_{i}(t,\tau,\mathbf{x})-\bm{\mathcal{A}}_{i}(t,\tau,\mathbf{x})\right] \cdot \nabla_{\mathbf{x}}U_{k-1-i}(t,\tau,\mathbf{x}) - R_{k-1}(t,\tau,\mathbf{x}) \, . \end{equation*} The next step consists in proving that $W_{k}$ defined by \begin{equation*} W_{k}(t,\tau,\mathbf{x}) = S(t,\tau,\mathbf{x}) = \int_{0}^{\tau} F\left(t,\sigma,\mathbf{X}(\sigma;\mathbf{x},t;0)\right) \, d\sigma \, , \end{equation*} is such that \begin{equation*} \begin{array}{rcl} W_{k} & \in & L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}(K)\right)\right) \, , \\ {\partial}_{t}W_{k} & \in & L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta; X^{s'}(K)\right)\right) \, , \\ W_{k}(t,\theta,\mathbf{x}) & = & 0 \, , \quad \forall\, t,\mathbf{x} \, . \end{array} \end{equation*} The first two points follow from the hypotheses on $W_{k}$ added in the statement of Theorem \ref{CV_Uk}. The last point can be proved by using the definition of $R_{k-1}$.
Indeed, we have \begin{equation} \label{Wktheta} \begin{split} W_{k}(t,\theta,\mathbf{x}) &= \int_{0}^{\theta} \left(\sum_{i\,=\,0}^{k-1}\left[ \mathbf{a}_{i}-\bm{\mathcal{A}}_{i}\right] \cdot \nabla_{\mathbf{x}}U_{k-1-i} - R_{k-1}\right)\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right)\, d\tau \, , \end{split} \end{equation} with \begin{equation} \label{eq_Rkm1} \begin{split} R_{k-1}\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) &= {\partial}_{t}W_{k-1}(t,\tau,\mathbf{x}) + \tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}W_{k-1}(t,\tau,\mathbf{x}) \\ &\quad - \cfrac{1}{\theta}\,\int_{0}^{\theta}({\partial}_{t}W_{k-1}+\bm{\alpha}_{0}\cdot\nabla_{\mathbf{x}}W_{k-1})(t,\sigma,\mathbf{x}) \, d\sigma \\ &\quad + \sum_{i\,=\,1}^{k-1}\Bigg[ \cfrac{1}{\theta}\,\int_{0}^{\theta} \bm{\alpha}_{i}(t,\sigma,\mathbf{x}) \cdot \left[ \nabla_{\mathbf{x}}W_{k-1-i}(t,\tau,\mathbf{x}) - \nabla_{\mathbf{x}}W_{k-1-i}(t,\sigma,\mathbf{x}) \right] d\sigma \Bigg] \, , \end{split} \end{equation} and \begin{equation} \label{eq_aiAinablaUkmim1} \begin{split} &\left(\big[\mathbf{a}_{i} - \bm{\mathcal{A}}_{i} \big] \cdot \nabla_{\mathbf{x}}U_{k-1-i} \right) \left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) \\ &\qquad = \left[(\nabla_{\mathbf{x}}\mathbf{X})(\tau;\mathbf{x},t;0) \left[ \tilde{\mathbf{a}}_{i}(t,\mathbf{x}) - \bm{\alpha}_{i}(t,\tau,\mathbf{x}) \right] \right] \cdot (\nabla_{\mathbf{x}}U_{k-1-i})\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) \\ &\qquad = \left[ \tilde{\mathbf{a}}_{i}(t,\mathbf{x}) - \bm{\alpha}_{i}(t,\tau,\mathbf{x}) \right] \cdot \left( \nabla_{\mathbf{x}}V_{k-1-i}(t,\mathbf{x}) + \nabla_{\mathbf{x}}W_{k-1-i}(t,\tau,\mathbf{x}) \right) \, , \end{split} \end{equation} for any $i=0,\dots,k-1$. 
Hence, using the links between $\tilde{\mathbf{a}}_{i}$, $\bm{\alpha}_{i}$ and $\bm{\mathcal{A}}_{i}$, we inject \eqref{eq_Rkm1} and \eqref{eq_aiAinablaUkmim1} into \eqref{Wktheta} to obtain a new formulation for $W_{k}$: \begin{equation*} \begin{split} W_{k}(t,\tau,\mathbf{x}) &= \sum_{i\,=\,0}^{k-1} \left[\int_{0}^{\tau} \left(\tilde{\mathbf{a}}_{i}(t,\mathbf{x})-\bm{\alpha}_{i}(t,\sigma,\mathbf{x})\right) \, d\sigma \right] \cdot \nabla_{\mathbf{x}}V_{k-1-i}(t,\mathbf{x}) \\ &\quad + \sum_{i\,=\,0}^{k-1} \int_{0}^{\tau} \Bigg[ \cfrac{1}{\theta}\,\int_{0}^{\theta} \bm{\alpha}_{i}(t,\xi,\mathbf{x})\cdot \nabla_{\mathbf{x}}W_{k-1-i}(t,\xi,\mathbf{x}) \, d\xi - \bm{\alpha}_{i}(t,\sigma,\mathbf{x})\cdot \nabla_{\mathbf{x}}W_{k-1-i}(t,\sigma,\mathbf{x}) \Bigg] \, d\sigma \\ &\quad + \int_{0}^{\tau} \Bigg[\cfrac{1}{\theta}\,\int_{0}^{\theta}{\partial}_{t}W_{k-1}(t,\zeta,\mathbf{x})\, d\zeta - {\partial}_{t}W_{k-1}(t,\sigma,\mathbf{x}) \Bigg]\, d\sigma \, . \end{split} \end{equation*} It is now straightforward that $W_{k}(t,\theta,\mathbf{x}) = 0$ for any $t$ and any $\mathbf{x}$. \\ \indent The last property we have to verify to complete the proof of Theorem \ref{CV_Uk} is that the sequence $(f_{\epsilon,1})_{\epsilon\,>\,0}$ defined by \begin{equation*} \begin{split} f_{\epsilon,1}(t,\mathbf{x}) &= \cfrac{1}{\epsilon}\,\left(f_{\epsilon}(t,\mathbf{x})-F\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)\right) \\ &= \cfrac{1}{\epsilon}\,\sum_{i\,=\,0}^{k-1}\left[ \bm{\mathcal{A}}_{i}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right)-\mathbf{A}_{\epsilon,i}(t,\mathbf{x})\right]\,\cdot \nabla_{\mathbf{x}}U_{k-1-i}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, , \end{split} \end{equation*} is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$ for any compact subset $K \subset {\mathbb R}^{n}$.
To obtain this result, we remark that $f_{\epsilon,1}$ can be written as \begin{equation*} f_{\epsilon,1}(t,\mathbf{x}) = -\sum_{i\,=\,1}^{k}\mathbf{A}_{\epsilon,i}(t,\mathbf{x})\cdot\nabla_{\mathbf{x}}U_{k-i}\left(t,\cfrac{t}{\epsilon},\mathbf{x}\right) \, , \end{equation*} thanks to Hypothesis \ref{Hilbert_Aeps}. Consequently, this sequence admits the profile $F_{1} = F_{1}(t,\tau,\mathbf{x})$ defined as \begin{equation*} F_{1}(t,\tau,\mathbf{x}) = -\sum_{i\,=\,1}^{k}\bm{\mathcal{A}}_{i}(t,\tau,\mathbf{x})\cdot\nabla_{\mathbf{x}}U_{k-i}(t,\tau,\mathbf{x}) \, , \end{equation*} as a two-scale limit in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;X^{s'}(K)\right)\right)$. \\ \indent We end the proof of Theorem \ref{CV_Uk} by assuming that $(u_{\epsilon,k})_{\epsilon\,>\,0}$ is bounded independently of $\epsilon$ in $L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$: we deduce that $u_{\epsilon,k}$ two-scale converges to the profile $U_{k} = U_{k}(t,\tau,\mathbf{x})$ in $L^{\infty}\left(0,T;L_{\#}^{\infty}\left(0,\theta;L^{p}({\mathbb R}^{n})\right)\right)$ defined by \begin{equation*} U_{k}(t,\tau,\mathbf{x}) = V_{k}\left(t,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) + W_{k}\left(t,\tau,\mathbf{X}(-\tau;\mathbf{x},t;0)\right) \, , \end{equation*} where $V_{k} = V_{k}(t,\mathbf{x}) \in L^{\infty}\left(0,T;L_{loc}^{p}({\mathbb R}^{n})\right)$ satisfies \begin{equation*} \left\{ \begin{array}{l} \begin{split} {\partial}_{t}V_{k}(t,\mathbf{x}) + &\tilde{\mathbf{a}}_{0}(t,\mathbf{x}) \cdot \nabla_{\mathbf{x}}V_{k}(t,\mathbf{x}) \\ &= -\cfrac{1}{\theta} \int_{0}^{\theta} \left[ \sum_{i\,=\,1}^{k}(\bm{\mathcal{A}}_{i}\cdot\nabla_{\mathbf{x}}U_{k-i})\left(t,\tau,\mathbf{X}(\tau;\mathbf{x},t;0)\right) \right] \, d\tau \\ &\qquad -\cfrac{1}{\theta} \int_{0}^{\theta} \left[ {\partial}_{t}W_{k}(t,\tau,\mathbf{x}) + \bm{\alpha}_{0}(t,\tau,\mathbf{x})\cdot\nabla_{\mathbf{x}}W_{k}(t,\tau,\mathbf{x}) \right] \, d\tau \, , \end{split} \\ V_{k}(t=0,\mathbf{x}) = \left\{
\begin{array}{ll} u^{0}(\mathbf{x}) \, , & \textnormal{if $k=0$,} \\ 0 & \textnormal{else.} \end{array} \right. \end{array} \right. \end{equation*} \section{Conclusions and perspectives} We have proposed two-scale convergence results for a particular kind of convection equation in which part of the convection term is singularly perturbed. These results can be viewed as an improvement of the calculations done by Fr\'enod, Raviart and Sonnendr\"ucker in \cite{Two-scale_expansion}, since the properties of the convection terms $\mathbf{A}_{\epsilon}$ and $\mathbf{L}$ are less restrictive: indeed, in the present paper, the two-scale convergence can be proved with an $\epsilon$-dependent $\mathbf{A}_{\epsilon}$ and with $\mathbf{L}$ depending on $\epsilon$ in some particular sense. Along with these results, we have described the list of hypotheses required on $(\mathbf{A}_{\epsilon})_{\epsilon\,>\,0}$ and $\mathbf{L}$ to reach the $k$-th order of two-scale convergence for $(u_{\epsilon})_{\epsilon\,>\,0}$. Finally, we have applied these new results to three different rescaled linear Vlasov equations that can be considered in the context of MCF or charged particle beams. The limit systems that have been obtained consolidate the existing results and complete them by proposing $k$-th order two-scale limit models. \\ \indent From a numerical point of view, this new information can be used to enrich the two-scale numerical methods that are currently based on the resolution of the 0-th order limit model: in particular, the limit model presented in Theorem \ref{TSCV_F0_axibeam} is discretized to approximate the solution of \eqref{Vlasov_axibeam_intro}, but these numerical experiments are relevant only for $\epsilon \ll 1$ (see \cite{PIC-two-scale,Mouton_2009}).
Combining this approach with the numerical resolution of higher order two-scale limit models like \eqref{def_Gk_axibeam} may provide relevant numerical results for values of $\epsilon$ that are farther from 0.
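To make the averaging mechanism behind these limit models concrete, here is a minimal toy example (our own illustration, independent of the models above), written for a scalar ODE with an oscillating coefficient of zero mean:

```latex
Consider ${\partial}_{t}x_{\epsilon}(t) = \cos\left(\frac{t}{\epsilon}\right)x_{\epsilon}(t)$ with
$x_{\epsilon}(0) = x_{0}$. The exact solution is
\begin{equation*}
x_{\epsilon}(t) = x_{0}\,\exp\left(\epsilon\,\sin\left(\frac{t}{\epsilon}\right)\right) \, ,
\end{equation*}
so that $\left|x_{\epsilon}(t)-x_{0}\right| \leq |x_{0}|\left(e^{\epsilon}-1\right)$ uniformly
in $t$. Hence $x_{\epsilon}$ converges to the solution of the averaged equation
${\partial}_{t}x = 0$, whose coefficient is the mean of $\cos(\tau)$ over one period, in the
same way as the oscillating field is replaced by its $\tau$-average in the limit models.
```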
\subsection{Neuromorphic processors} \label{ssec:cmos} \textbf{\emph{TrueNorth.}} TrueNorth~\cite{Merolla2014} is IBM's fully digital neuromorphic chip with one million neurons arranged in a tiled array of 4096 neurosynaptic cores enabling \emph{massive parallel processing}. Each core contains~13kB of \emph{local SRAM memory} that keeps the neuron and synapse states along with the axonal delays and information on the fan-out destination. Each core implements 256 Leaky Integrate-and-Fire (LIF) neurons by time-multiplexing, and the chip's 256 million synapses are designed in the form of SRAM memory. Each core can support up to 256 fan-in and fan-out, and this connectivity can be configured such that a neuron in any core can communicate its spikes to any other neuron in any other core. \\ Thanks to its \emph{event-driven} operation, the co-location of memory and processing units in each core, and the use of low-leakage silicon CMOS technology, TrueNorth can perform 46 billion synaptic operations per second (SOPS) per watt for real-time operation, with 26 pJ per synaptic event. Its power density of 20 mW/cm$^2$ is about three orders of magnitude smaller than that of typical CPUs. \textbf{\emph{SpiNNaker.}} The SpiNNaker machine~\cite{Furber2014}, designed by the University of Manchester, is a custom-designed ASIC based on a \emph{massively parallel architecture} conceived to efficiently simulate large spiking neural networks. It consists of ARM968 processing cores arranged in a 2D array, into which the precise details of the neurons and their dynamics can be programmed. Although the processing cores are synchronous microprocessors, the \emph{event-based} aspect of SpiNNaker is apparent in its message-handling paradigm: a message (event) delivered to a core generates a request to be processed. The communications infrastructure between these nodes is specially optimized to carry very large numbers of very small packets, as is optimal for spiking neurons.
\\ A second generation of SpiNNaker was designed by the Technical University of Dresden~\cite{mayr2019spinnaker}. SpiNNaker2 continues the line of dedicated digital neuromorphic chips for brain simulation, increasing the simulation capacity by a factor $>10$ while staying in the same power budget (i.e. 10x better power efficiency). The full-scale SpiNNaker2 consists of 10 million ARM cores distributed across 70000 chips in 10 server racks. This system takes advantage of an advanced 22nm FDSOI technology node with Adaptive Body Biasing, enabling reliable and ultra-low power processing. It also incorporates numerical accelerators for the most common operations. \textbf{\emph{Loihi.}} Loihi~\cite{Davies_etal18} is Intel's many-core neuromorphic chip incorporating on-line learning, designed in 14\,nm FinFET technology. The chip supports about 130000 neurons and 130 million synapses distributed in 128 cores. Spikes are transported between the cores in the chip as packetized messages over an asynchronous network on chip. It includes three embedded x86 processors and provides a very flexible learning engine on which diverse online learning algorithms, such as \ac{STDP} and various three-factor and trace-based learning rules, can be implemented. The chip also provides hierarchical connectivity, dendritic compartments, and synaptic delays as features that can enrich a spiking neural network. The synaptic weights are stored in local SRAM memory and their bit precision can vary between 1 and 9 bits. All logic in the chip is digital, functionally deterministic, and implemented in an asynchronous bundled-data design style. \textbf{\emph{DYNAP-SE.}} DYNAP-SE implements a multi-core neuromorphic processor with a scalable architecture, fabricated using a standard 0.18 $\mu m$ CMOS technology~\cite{Moradi_etal17}. It is a full-custom asynchronous mixed-signal processor, with a fully asynchronous inter-core and inter-chip hierarchical routing architecture.
Each core comprises 256 adaptive exponential integrate-and-fire (AEI\&F) neurons, for a total of 1k neurons per chip. Each neuron has a Content Addressable Memory (CAM) block containing 64 addresses, representing the pre-synaptic neurons that the neuron is subscribed to. Rich synaptic dynamics are implemented on the chip by using \ac{DPI} circuits~\cite{Bartolozzi_Indiveri2007}. These circuits produce EPSCs and IPSCs (Excitatory/Inhibitory Post-Synaptic Currents), with time constants that can range from a few $\mu s$ to hundreds of $ms$. The analog circuits are operated in the sub-threshold domain, thus minimizing the dynamic power consumption and enabling implementations of neural and synaptic behaviors with biologically plausible temporal dynamics. The asynchronous CAMs on the synapses are used to store the tags of the source neuron addresses connected to them, while the SRAM cells are used to program the address of the destination core/chip that the neuron targets. \textbf{\emph{ODIN/MorphIC.}} The ODIN (Online-learning DIgital spiking Neuromorphic) processor occupies an area of only 0.086mm$^2$ in 28nm FDSOI CMOS \cite{frenkel20180}. It consists of a single neurosynaptic core with 256 neurons and 256$^2$ synapses. Each neuron can be configured to phenomenologically reproduce the 20 Izhikevich behaviors of spiking neurons \cite{Izhikevich2004}. The synapses embed a 3-bit weight and a mapping table bit that allows enabling or disabling Spike-Driven Synaptic Plasticity (SDSP) locally \cite{brader2007learning}, thus allowing for the exploration of both off-chip training and on-chip online learning setups. \\ MorphIC is a quad-core digital neuromorphic processor with 2k LIF neurons and more than 2M synapses in 65nm CMOS \cite{frenkel201965}. MorphIC was designed for high-density large-scale integration in multi-chip setups.
The four 512-neuron crossbar cores are connected with a hierarchical routing infrastructure that enables neuron fan-in and fan-out values of 1k and 2k, respectively. The synapses are binary and can be either programmed with offline-trained weights or trained online with a stochastic version of SDSP. \subsection{Biomedical signal processing on neuromorphic hardware} \label{sec:neuproc} Table~\ref{tab:neurochips} summarizes the neuromorphic processors described previously and the biomedical signal processing applications in which they were used. These works show promising results for always-on embedded biomedical systems. The first chip presented in this table is DYNAP-SE, used to implement \acp{SNN} for the classification or detection of \ac{EMG}~\cite{donati2018processing, donati2019discrimination} and \ac{ECG}~\cite{Bauer_etal19, Corradi_etal19} and to implement a simple spiking perceptron as part of a design to detect \ac{HFO} in human intracranial \ac{EEG}~\cite{Sharifshazileh_etal19}. In particular, in~\cite{donati2018processing,Bauer_etal19} a spiking \ac{RNN} is deployed for \ac{ECG}/\ac{EMG} signal separation to facilitate the classification with a linear read-out. An \ac{SVM} and a linear least-squares approximation are used in the read-out layer in \cite{Bauer_etal19} and \cite{Corradi_etal19}, reaching overall accuracies of $91\%$ and $95\%$ for anomaly detection, respectively. In \cite{donati2018processing}, the state property of the spiking \ac{RNN} on \ac{EMG} was investigated for different hand gestures. In~\cite{donati2019discrimination} the performance of a feedforward \ac{SNN} and a hardware-friendly spiking learning algorithm for hand gesture recognition using surface \ac{EMG} was investigated and compared to traditional machine learning approaches, such as \ac{SVM}.
Results show that applying \ac{SVM} on the spiking output of the hidden layer achieved a classification rate of $84\%$, while the spiking learning method achieved $74\%$ with a power consumption of about $0.05~mW$. The consumption was compared to state-of-the-art embedded systems, showing that the proposed spiking network is two orders of magnitude more power efficient~\cite{benatti2015versatile, montagna2018pulp}. Recently, the benchmark hand-gesture classification was processed and compared on two other digital neuromorphic platforms, i.e. Loihi and ODIN/MorphIC~\cite{frenkel20180, frenkel201965}. A spiking \ac{CNN} was implemented on Loihi and a spiking \ac{MLP} was implemented on ODIN/MorphIC~\cite{Ceolini_etal20}. Because of the properties of the neuromorphic chips, Loihi implemented a late fusion combining the output of the spiking \ac{CNN} for vision with that of the spiking \ac{MLP} for \ac{EMG} signals, while on the ODIN/MorphIC hardware the two spiking \acp{MLP} were fused in the last layer. The comparison with the embedded GPU was performed in terms of accuracy, power consumption, and latency, showing that the neuromorphic chips are able to achieve the same accuracy with a significantly smaller energy-delay product, being 30x and 600x more efficient for Loihi and ODIN/MorphIC, respectively~\cite{Ceolini_etal20}.
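For reference, the energy-delay product used in this comparison is simply the product of the energy spent per inference and its latency; the sketch below uses purely hypothetical numbers (not those reported in~\cite{Ceolini_etal20}) to show how the relative efficiency is computed:

```python
def energy_delay_product(energy_j, latency_s):
    """Energy-delay product (EDP): a joint figure of merit that penalizes
    solutions that are energy-hungry, slow, or both."""
    return energy_j * latency_s

# Hypothetical illustration -- the numbers below are made up:
gpu_edp = energy_delay_product(30e-3, 10e-3)   # 30 mJ per inference, 10 ms latency
chip_edp = energy_delay_product(1e-3, 1e-3)    # 1 mJ per inference, 1 ms latency
gain = gpu_edp / chip_edp                      # relative EDP advantage of the chip
```

A platform thus improves its EDP both by spending less energy and by answering faster, which is why event-driven chips score well on this metric.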
\begin{table*}[t] \small \centering \caption{Summary of neuromorphic platforms and biomedical applications} \vspace{0.2cm} \label{tab:neurochips} \renewcommand{\arraystretch}{1.2} \begin{tabular} {>{\centering\arraybackslash} m{3.7cm} *1{ |>{\centering\arraybackslash} m{2.5cm}} *1{ |>{\centering\arraybackslash} m{2.5cm}} *1{| >{\centering\arraybackslash} m{2cm}} *1{| >{\centering\arraybackslash} m{2cm}} *1{| >{\centering\arraybackslash} m{2.5cm}} } \hline \toprule \textbf{Neuromorphic Chip} & \textbf{DYNAP-SE} & \textbf{SpiNNaker} & \textbf{Loihi} & \textbf{TrueNorth} & \textbf{ODIN} \\ \hline \hline \textbf{CMOS Technology} & 180nm & ARM968, 130 nm & 14nm FinFET & 28nm & 28 nm FDSOI \\ \hline \textbf{Implementation} & Mixed-signal & Digital & Digital ASIC & Digital ASIC & Digital ASIC \\ \hline \textbf{Energy per SOP} & 17 pJ @ 1.8V & Peak power 1W per chip & 23.6 pJ @ 0.75V & 26 pJ @ 0.775V & 12.7 pJ @ 0.55V \\ \hline \textbf{Size} & 38.5 $mm^2$ & 102 $mm^2$ & 60 $mm^2$ & 0.093 $mm^2$ (core) & 0.086 $mm^2$ \\ \hline \textbf{On-chip learning} & No & Yes (configurable) & Yes (configurable) & No & Yes (SDSP)\\ \hline \textbf{Applications} & \ac{EMG}, \ac{ECG}, \ac{HFO} & \ac{EMG} and \ac{EEG} & \ac{EMG} & \ac{EEG} and \ac{LFP} & \ac{EMG} \\ \hline \end{tabular} \end{table*} \subsection{Encoding} \label{ssec:encoding} In \acp{SNN} a single spike by itself does not carry any information; it is the number and the timing of the spikes produced by a neuron that matter. Just as their biological counterparts, silicon neurons in neuromorphic devices produce spike trains at a rate that is proportional to their input current. At the input side, synapse circuits integrate the spikes they receive to produce analog currents, with temporal dynamics and time constants that can be made equivalent to their biological counterparts. The sum of all the positive (excitatory) and negative (inhibitory) synaptic currents afferent to the neuron is then injected into the neuron.
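The rate coding just described can be sketched with a minimal discrete-time LIF model (a generic textbook simplification with arbitrary units, not the neuron circuit of any specific chip): the spike count grows with the input current, while sub-threshold inputs produce no output.

```python
def lif_spike_count(i_in, t_sim=1.0, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Forward-Euler simulation of tau * dv/dt = -v + i_in, with a spike
    emitted and the membrane reset whenever v crosses the threshold v_th.
    Returns the number of spikes fired during t_sim seconds."""
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / dt)):
        v += (dt / tau) * (-v + i_in)
        if v >= v_th:
            spikes += 1
            v = v_reset
    return spikes
```

With these units, a sub-threshold current (e.g. 0.5) never fires, while stronger currents fire at increasing rates, which is the monotone current-to-rate mapping the encoding relies on.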
To provide biomedical signals to the synapses of the \ac{SNN} input layer, it is necessary to first convert them into spikes. A common way to do this is to use a delta-modulator circuit~\cite{Corradi_etal15,Sharifshazileh_etal19} functionally equivalent to the one used in the Dynamic Vision Sensor (DVS)~\cite{Lichtsteiner_etal2008}. This circuit, in practice, is an ADC that produces two asynchronous digital pulse outputs (UP or DOWN) for every biosignal channel in the input. The UP (DOWN) spikes are generated every time the difference between the current and previous value exceeds a pre-defined threshold, and the sign of the difference determines whether the spike is produced on the UP or DOWN channel. This approach was used to convert \ac{EMG} signals, in mixed-signal neuromorphic chips~\cite{donati2018processing, donati2019discrimination} and in digital ones~\cite{Behrenbeck_etal19, Ceolini_etal20}, \ac{ECG} signals~\cite{Corradi_etal19, Bauer_etal19}, and \ac{EEG} and \ac{HFO} ones~\cite{Corradi_etal15, Sharifshazileh_etal19}. \\ \subsection{Adaptation in neuromorphic processors} Local adaptation is an important aspect in extreme edge computing, especially when it comes to wearable devices. The current methods for training networks for biomedical signals rely on large datasets collected from different patients. However, when it comes to biological data, there is no ``one size fits all'': each patient and person has their own unique biological signature. Therefore, the field of Personalized Medicine (PM) has gained a lot of attention in the past few years, and the online on-edge adaptation feature of neuromorphic chips can be a game changer for PM. As was discussed in Section \ref{sec:bio-algorithms}, much effort has been devoted to designing spike-based online learning algorithms that can be implemented on neuromorphic chips.
Examples of today's state of the art for on-chip learning are Intel's Loihi \cite{Davies_etal18}, the DynapSEL and ROLLS chips from UZH/ETHZ~\cite{qiao2016scaling, qiao2015reconfigurable}, BrainScaleS from Heidelberg \cite{Schemmel2010} and ODIN from UC Louvain \cite{frenkel20180}. Intel's Loihi includes a learning engine which can implement different learning rules such as simple pairwise STDP, triplet STDP, reinforcement learning with synaptic tag assignments, or any three-factor learning rule. DynapSEL, ROLLS and ODIN implement SDSP, also known as the Fusi learning rule, a semi-supervised learning rule that can support both unsupervised clustering applications and supervised learning with labels for shallow networks \cite{brader2007learning}. The BrainScaleS chip implements the STDP rule. Moreover, SpiNNaker 1 and 2 \cite{Furber2013,mayr2019spinnaker} can implement a wide variety of on-chip learning algorithms, since their designs make use of ARM microcontrollers providing great configurability to users. \\ \subsection{Open challenges} Generally, implementing on-chip online learning is challenging for two core reasons: locality of the weight update and weight storage. \textbf{\emph{Locality.}} The learning information for updating the weights of any on-chip network should be locally available to the synapse, since otherwise this information would have to be ``routed'' to the synapse by wires, which would take a significant amount of chip area. The simplest form of learning which satisfies this requirement is Hebbian learning, which has been implemented on a variety of neuromorphic chips in the form of unsupervised/semi-supervised learning \cite{frenkel20180,qiao2015reconfigurable,Schemmel2010,qiao2016scaling}. However, Hebbian-based algorithms are limited in the tasks they can learn and, to the best of our knowledge, no large-scale task has been demonstrated using this rule.
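For concreteness, the pairwise STDP rule mentioned above can be written as a simple local update kernel for one pre/post spike pair (a generic textbook form with illustrative parameters, not the implementation of any of the chips above):

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pairwise STDP: a pre-spike shortly before a post-spike strengthens
    the synapse (LTP), the reverse order weakens it (LTD), and the change
    decays exponentially with the time gap between the two spikes."""
    dt = t_post - t_pre
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)    # causal pair -> potentiation
    return -a_minus * math.exp(dt / tau)       # anti-causal pair -> depression
```

Note that both spike times are available at the synapse itself, which is exactly the locality property that makes Hebbian-type rules hardware friendly.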
Since gradient descent-based algorithms such as \ac{Backprop} have had great success in deep learning, more and more spike-based error \ac{Backprop} rules are being developed, as discussed in Section \ref{sec:bio-algorithms}. These learning algorithms have recently been custom-designed in the form of a spike-based delta rule, the backbone of the \ac{Backprop} algorithm. For example, a single-layer implementation of the delta rule has been designed in \cite{payvand_indiveri_19} and employed for \ac{EMG} classification \cite{donati2019discrimination}. Expanding this to multi-layer networks involves non-local weight updates, which limits its on-chip implementation. Making the \ac{Backprop} algorithm local is a topic of on-going research, as discussed in Section \ref{sec:bio-algorithms}. Recently, a multi-layer perceptron error-triggered learning architecture has been proposed to overcome the non-locality of multi-layer networks, solving the spatial credit assignment problem on chip \cite{payvand2019error,Payvand_etal_2020_errormlp}. \textbf{\emph{Weight storage.}} The ideal weight storage for online on-chip learning should have the following properties: (i) non-volatility, to keep the state of the learnt weights even when the power is off, reducing the time and energy cost of reloading the weights onto the chip; (ii) linear update, which allows the state of the memory to change linearly with the calculated update; and (iii) analog states, which allow full precision for the weights. Non-volatile memristive devices are promising candidates for weight storage, and there is a large body of work combining CMOS technology with memristive devices to get the best of both worlds. In the next Section we provide a thorough review of the state of the art of emerging memory devices and of the efforts to integrate and use them in conjunction with neuromorphic chips.
\section{Introduction} \label{sec:intro} \input{intro.tex} \section{Wearable sensors} \label{sec:wearable} \input{wearable.tex} \section{Models for biologically plausible continual learning} \label{sec:models} \input{models.tex} \section{Signal processing for wearable devices on neuromorphic chip} \label{sec:cmos} \input{cmos.tex} \section{Memristive devices and computing} \label{sec:memristive} \input{memristive.tex} \section{Discussion and Conclusions} \label{sec:discussion} \input{discussion.tex} \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} All the Authors equally contributed to the manuscript, actively participating in the discussions and in the writing. The main contributors for each Section are as follows: X.L. and H.H. -- wearable sensors; D.K. -- biologically plausible models; M.P. and E.D. -- signal processing and neuromorphic computing; E.C. and W.W. -- memristive devices. E.C. led and coordinated the cooperative writing and all discussions. \section*{Funding} This work was partially supported by the UK EPSRC under grant EP/R511705/1. E.C. and M.P. acknowledge funding by the European Union's Horizon 2020 research and innovation programme under grant agreement No 871737. \section*{Acknowledgments} The Authors would like to thank Prof. Thomas Mikolajick and Dr. Stefan Slesazeck for useful discussions on ferroelectric and memristive devices. \subsection{Conventional and wearable memristive devices} \label{subsec:memdevices} Memristive devices, as the name suggests, are devices that can change and memorize their resistance state. They are usually two-terminal devices; however, they can be implemented with various physical mechanisms, resulting in many different forms, e.g.
resistive random access memory (RRAM, Fig.~\ref{fig:mem_device}a and ~\ref{fig:mem_device}b) (\cite{Ielmini2018}), phase change memory (PCM, Fig.~\ref{fig:mem_device}c) (\cite{Zhang2019}), magnetic random access memory (MRAM, Fig.~\ref{fig:mem_device}d and Fig.~\ref{fig:mem_device}e) (\cite{Miron2011}), ferroelectric tunneling junction (FTJ, Fig.~\ref{fig:mem_device}f) (\cite{Wen2013}), etc. The resistance memory of these devices can mimic the memory effect of the basic components of the biological neural system, while the resistance change can mimic the plasticity of biological synapses. Thanks to the simplicity of their two-terminal configuration and their scalability to the nanoscale, they are inherently suitable for the hardware implementation of brain-inspired computation materializing artificial neural networks, i.e. neuromorphic computation (\cite{Jo2010,Wang2016c}). This notion, in recent years, has incited wide investigation of the various memristive devices and of their applications in neural network learning and recognition, or, in short, memristive learning (\cite{Ohno2011,Kuzum2012,Yang2013,Alibart2013,Eryilmaz2014,Ambrogio2018}). Memristive learning can enable energy-efficient and low-latency information processing within compact systems, abandoning the conventional von Neumann architecture. Among other benefits, this will also make it possible to process information where it is acquired, i.e. within sensors, and reduce the bandwidth needed to transfer sensor data to data centers, accelerating the coming of the Internet-of-Things (IoT) era. Table \ref{tab:memdev} summarizes the key features of the main memristive device technologies for neuromorphic / wearable applications in terms of cell area, electrical characteristics, main advantages and challenges. It is worth noticing that some figures of merit in this context differ radically from standard memory requirements.
Indeed, while in the memory scenario higher read currents enable faster reading speed, in neuromorphic applications currents as low as possible are preferred, since the current is a limiting factor for the neurons' fan-out. Similarly, SET and RESET times should be as fast as possible in memory applications, while in our applications this requirement can be relaxed thanks to the lower operating frequency of the neurons (20\,Hz to 100\,Hz). Moreover, the number of achievable conductance levels has to be increased (\cite{ielmini2020AdvIntellSys}). Some non-idealities that are usually detrimental for memory applications, for instance the stochasticity of switching parameters, can even be beneficial for neural networks. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{Figures/figure_meristive_device} \caption{Memristive devices for neuromorphic computing. (a) Interface type RRAM device; (b) Filamentary RRAM device; (c) Phase change memory device; (d) MRAM device with in-plane spin polarization; (e) MRAM device with perpendicular spin polarization; (f) FTJ device. } \label{fig:mem_device} \end{figure} \begin{table*}[t] \small \centering \caption{Key features of non-volatile memristive devices.} \label{tab:memdev} \vspace{0.2cm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|>{\centering\arraybackslash} m{2.7cm} *1{ |>{\centering\arraybackslash} m{3.2cm}} *1{ |>{\centering\arraybackslash} m{3.2cm}} *1{| >{\centering\arraybackslash} m{3.2cm}} *1{| >{\centering\arraybackslash} m{3.2cm}|} } \toprule \hline & \textbf{RRAM} & \textbf{PCM} & \textbf{MRAM} & \textbf{FTJ} \\ \hline \hline \textit{Cell area [min.
feature size]} & $4F^2$ \cite{irds2020} & $4F^2$ \cite{irds2020} & $9F^2$ (\cite{rho2017ISSCC}) & $4F^2$ \cite{irds2020} \\ \hline \textit{Retention} & $>$10 years (\cite{goux2014VLSI}) & $>$10 years (\cite{cheng2012IEDM}) & $>$10 years (\cite{golonzka2018IEDM}) & $>$10 years (\cite{udayakumar2013IMW}) \\ \hline \textit{Endurance} & $10^{12}$ (\cite{kim2011VLSI,lee2011NatMat}) & $10^{11}$ (\cite{kim2010VLSI}) & $10^{12}$ (\cite{saida2017TED}) & $>10^{15}$ (\cite{udayakumar2013IMW}) \\ \hline \multirow{2}{*}{\textit{SET / RESET time}} & 100\,ps (\cite{torrezan2011Nanotech}) & $>$100\,ns, 10\,ns & 20\,ns (\cite{jan2018VLSI}) & 30\,ns, 30\,ns \\ & 85\,ps (\cite{choi2016AdvFunctMat}) & (\cite{irds2020}) & 3\,ns (\cite{kitagawa2012IEDM}) & (\cite{francois2019IEDM}) \\ \hline \textit{Read current} & 100\,pA (\cite{luo2016Nanoscale}) & 25\,$\mu$A (\cite{desandre2010ISSCC}) & 20\,$\mu$A (\cite{kitagawa2012IEDM}) & 0.8\,nA (\cite{bruno2016AdvElectrMat}, device diameter 300\,nm) \\ \hline \textit{Write energy per bit} & 20\,fJ (\cite{kang2015NanoEnergy}) & $\sim$100\,fJ (\cite{xiong2011Science}) & 90\,fJ (\cite{kitagawa2012IEDM}) & $<$10\,fJ (\cite{francois2019IEDM}) \\ \hline \textit{Main features} & Scalability, speed, low energy & Scalability, multilevel, low voltage & Endurance, low power & Endurance, low power, speed \\ \hline \textit{Challenges} & Variability & RESET current, temperature stability, resistance drift & Density, scalability, variability & Scalability \\ \hline \end{tabular} \end{table*} In addition to the commonly discussed non-volatile type of memristive switching, RRAM devices can also show volatile behavior, which usually occurs when active materials such as silver or copper are used as electrodes.
The relatively long retention time of the volatile behavior (tens of milliseconds to seconds) is similar to the timescale of short-term memory, and was naturally proposed to mimic the short-term memory effect of biological synapses (\cite{ZhongruiWang2017,WeiWang2019ted2,covi2019ICECS}). Although most research on memristive devices is carried out on rigid silicon substrates, the simple structure of memristive devices can also be realized on flexible substrates (\cite{Shi2020}), which opens new interesting possibilities for realizing local computation within wearable devices (\cite{Shang2017,Dang2019}). \subsection{Memristive devices for neuromorphic computing} \label{subsec:memcomputing} \subsubsection{Memristive neural components} \label{subsec:memcomponents} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{Figures/figure_meristive_components} \caption{Memristive devices as synapses or neurons for neuromorphic computing. (a)-(c) Memristive devices acting as threshold devices for the firing function of biological neurons (\cite{Mehonic2016front}, reproduced under the CC BY license). (d) Conceptual illustration of a memristive device as an artificial synapse for brain-like neuromorphic computing (\cite{WeiWang2018sa}, reproduced under the CC BY-NC license). } \label{fig:mem_comp} \end{figure} As mentioned in Section~\ref{subsec:memdevices}, the primary function of memristive devices is as synaptic devices implementing the memory and plasticity of biological synapses. However, there is increasing interest in utilizing these devices to implement nanoscale artificial neurons. On the neuron side, the gradual internal state change of the memristive device and its consequent abrupt switching closely mimic the integrate-and-fire behavior of biological neurons (\cite{Mehonic2016front,Tuma2016,suresh2019ICECS}, Fig.~\ref{fig:mem_comp}a-c).
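The integrate-and-fire behavior just mentioned can be condensed into a toy behavioral model (the leak, step, and threshold values are illustrative, not fitted to any physical device):

```python
def memristive_neuron(inputs, threshold=1.0, leak=0.02, step=0.1):
    """Sketch of a memristive integrate-and-fire neuron: each input
    pulse gradually raises an internal state variable; when the state
    crosses the switching threshold the device 'fires' (abrupt
    switching) and the state is reset."""
    state, spikes = 0.0, []
    for t, x in enumerate(inputs):
        state = max(0.0, state - leak) + step * x   # leaky integration
        if state >= threshold:
            spikes.append(t)
            state = 0.0                             # reset after switching
    return spikes

# A constant pulse train produces a regular output spike pattern.
spikes = memristive_neuron([1] * 30)
```

The gradual accumulation followed by an abrupt reset is the behavioral signature that the cited works exploit in real devices.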
Due to their simple structure and nanometer-level scalability, memristive neurons can be much more compact than current CMOS neurons, which might consist of a current sensor, an analog-to-digital converter (ADC), a digital-to-analog converter (DAC), and capacitors, all of which are expensive to implement in current CMOS technology in terms of area and/or power consumption (\cite{kwon2018JAP}). The implementation of memristive neurons will also enable fully memristive neuromorphic computing (\cite{wang2018NatElectr}), which promises further increases in the integration density of neuromorphic computing hardware. On the synaptic side, the key feature of biological synapses is their plasticity, i.e. tunable weight, which can be generally implemented by resistance or conductance modification in memristive devices (Fig.~\ref{fig:mem_comp}d). Fundamental learning rules based on \ac{STDP} have already been widely explored (\cite{Kuzum2012,WangZhongqiang2015,covi2016FrontNeurosci,mulaosmanovic2017VLSI,covi2018JPhysD}). Spatial spiking pattern recognition (\cite{Pedretti2017}), spiking coincidence detection (\cite{Prezioso2018,Sebastian2017}), and spatio-temporal correlation (\cite{WeiWang2018sa,WeiWang2019fd}) have been reported recently. Synaptic metaplasticity, such as paired-pulse facilitation, can also be achieved via various device operation mechanisms (\cite{Zhu2017,ZhongruiWang2017,Wu2018}).
\\ \subsubsection{Memristive neural network architectures} There are generally two approaches for a hardware neuromorphic system implementing memristive devices as synapses: (i) deep learning accelerators, accelerating artificial neural network computation with multiple layers and error back-propagation, as well as its variations, such as convolutional and recurrent neural networks; (ii) brain-like computing, attempting to closely mimic the behaviors of the biological neural system, such as spike representation (Fig.~\ref{fig:mem_comp}d) and collective decision-making behavior. In the deep learning accelerator approach, on-line training places more requirements on the memristive synapses. For instance, linear and symmetrical weight update is crucial for on-line training (\cite{Burr2015,Ambrogio2018}), while off-line training relaxes this constraint, since the synaptic weights can be programmed into the memristive devices with fine tuning and iterative verification (\cite{Yao2020}). Collective decision making is an important feature of brain computing, which requires high parallelism and, consequently, low-current devices. For instance, this feature is essential for Hopfield neural networks (\cite{Hopfield1982}), cellular neural networks (\cite{Duan2015}), and coupled oscillators (\cite{romera2018Nature}). In a Hopfield neural network, the system automatically evolves toward its energy minima, yielding the functionality of an associative memory. The use of Hopfield-like recurrent neural networks (RNNs) with memristive devices has already been successfully demonstrated in a variety of tasks (\cite{milo2017IEDM,WangYanghao2020}). As an example of a memristive-based coupled oscillator network, \cite{ignatov2017SciAdv} used a network of self-sustained van der Pol oscillators coupled with oxide-based memristive devices to investigate the temporal binding problem, a well-known issue in the field of cognitive neuroscience.
In this experiment, the network is able to emulate an optical illusion which shows two patterns depending on the influence of attention. This means that the network is able to select relevant information from a pool of inputs, as in the case of a system collecting signals from multiple sensors. \vspace{6mm} \subsubsection{Applications of memristive neural networks} At present, memristive technology has been mainly used in relatively simple networks with Hebbian-based learning algorithms. However, more recently, systems capable of solving different tasks, such as speech recognition (\cite{park2015SciRep}), and exploring different architectures and learning algorithms are being investigated. In particular, the benefits of exploiting sparsity, mentioned in Section~\ref{sec:pruning}, are demonstrated for feature extraction and image classification in networks trained with stochastic gradient descent and winner-take-all learning algorithms (\cite{sheridan2016TNNLS}), as well as in hierarchical temporal memory, which does not need training (\cite{krestinskaya2018AnICSigProc}). In recent years, memristive devices have been used in applications closer to biology, enabling hybrid biological-artificial systems (\cite{serb2020SciRep}) and investigating biomedical applications, ranging from speech and emotion recognition (\cite{saleh2015CISDA}) to biosignal (\cite{kudithipudi2016FrontNeurosci}) and medical image (\cite{zhu2017Neurocomp}) processing. Finally, an interesting application is that of memristive biosensors, which \cite{tzouvadaki2018ISCAS} used to implement a system for cancer diagnostics. The innovative use of memristive properties was demonstrated in hardware and opens the way to a broader use of memristive technology where sensors and computing co-exist in the same system or, possibly, in the same device.
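The associative-memory behavior of the Hopfield networks discussed above can be captured in a toy sketch (Hebbian one-shot storage plus asynchronous threshold updates; the pattern sizes are illustrative and unrelated to any specific memristive implementation):

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian storage: sum of outer products with zero diagonal."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def hopfield_recall(w, state, sweeps=20):
    """Asynchronous updates descend the network energy until a stored
    pattern (an energy minimum) is reached."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1.0 if w[i] @ state >= 0 else -1.0
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]], dtype=float)
w = hopfield_train(patterns)
noisy = patterns[0].copy()
noisy[0] *= -1                      # corrupt one bit
recalled = hopfield_recall(w, noisy)
```

Starting from the corrupted pattern, the update dynamics relax back to the stored one, which is the associative-memory functionality exploited in the memristive demonstrations cited above.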
\subsection{Open challenges and future work} \label{subsec:memchallenge} \subsubsection{Device non-idealities} Implementation of mainstream deep learning algorithms with the \ac{Backprop} learning rule and memristive synapses imposes some requirements on the memristive device, including a linear current-voltage relation for reading, analog conductance tuning, linear and symmetric weight update, long retention time, high endurance, etc. (\cite{Gokmen2016}). However, no single device can fulfill all these requirements simultaneously. Various techniques have been proposed to compensate for the device non-idealities. For instance, to compensate for the non-linear current-voltage relation for reading, a fixed read voltage with variable pulse width or pulse number can be used for synaptic weight reading, with the readout represented by the charge accumulated in the output nodes (\cite{Cai2019}). Linear and symmetric weight update is crucial for accurate online learning of a memristive multilayer neural network with the \ac{Backprop} learning rule (\cite{Burr2015}). However, PCM devices usually only show gradual switching in the set direction (weight potentiation), while RRAM devices show gradual switching in the reset direction (weight depression). To achieve linear and symmetric weight update, a differential pair of two such devices is usually used. For a differential pair of two PCM devices, potentiation is achieved by applying set pulses to the positive part and depression by applying set pulses to the negative part, so that gradual weight update can be achieved in both directions. To further enhance the linearity of the weight update, a minor conductance pair consisting of capacitors can be used for frequent but smaller weight updates, which are periodically transferred to the major pair (\cite{Ambrogio2018}).
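The differential-pair programming scheme just described can be sketched as follows (conductance values, step size, and bounds are hypothetical, chosen only for illustration):

```python
def program_differential_pair(g_pos, g_neg, delta_w, g_step=1e-6, g_max=1e-4):
    """Weight w = g_pos - g_neg. Potentiation applies set pulses to
    g_pos, depression applies set pulses to g_neg, so both signs of
    the update use only the gradual switching direction of the device."""
    pulses = int(round(abs(delta_w) / g_step))   # quantize into pulses
    if delta_w > 0:
        g_pos = min(g_pos + pulses * g_step, g_max)
    else:
        g_neg = min(g_neg + pulses * g_step, g_max)
    return g_pos, g_neg

g_pos, g_neg = 2e-5, 2e-5            # weight starts at 0 (balanced pair)
g_pos, g_neg = program_differential_pair(g_pos, g_neg, +5e-6)  # potentiate
g_pos, g_neg = program_differential_pair(g_pos, g_neg, -3e-6)  # depress
w = g_pos - g_neg
```

Since both conductances only ever increase, a practical scheme also needs an occasional re-balancing (RESET) step once either device approaches `g_max`, which is omitted here for brevity.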
Another option to improve device linearity is to limit the device dynamic range to a region far from saturation, where the weight update is linear \cite{woo2016EDL,wang2016Nanoscale}. In addition to mitigating the non-idealities of memristive devices, more and more research efforts are made to exploit these non-idealities for brain-like computation. For instance, the stochasticity or read noise of a memristive device can be used for probabilistic computation in restricted Boltzmann machines (\cite{Mahmoodi2019}), or to escape local minima in a Hopfield neural network (\cite{Cai2020}). Ag-filament-based resistive switching devices show short retention times and fast switching dynamics, and were thus proposed for reservoir computing (\cite{Midya2019a}) and spatiotemporal computing (\cite{WeiWang2019ted2}) to process time-encoded information. \subsubsection{Co-integration of hybrid CMOS-memristive neuromorphic systems} The main steps to be taken to exploit the full potential of an \ac{ASIC} for an end-to-end processing system go through the integration of memristive devices and sensors with CMOS technology. Indeed, the works presented so far are based either on simulations or on real device data, or on memristive chips interfaced with standard digital hardware. Although the integration of CMOS technology has been demonstrated for non-volatile resistive switching devices already at a commercial level (\cite{scharlotta2014IIRW,hayakawa2015VLSI}), the design of co-integrated memristive-based neuromorphic processors is still under development. We envisage a three-phase process to achieve a fully integrated system.
The first phase is the co-integration of non-volatile memristive devices with peripheral circuits (\cite{hirtzlin2020FrontNeurosci}) to implement logic and multiply-and-accumulate (MAC) operations (\cite{chen2019NatElectr}), which reached maturity with the demonstration of a fully co-integrated \ac{SNN} with analog neurons and memristive synapses (\cite{valentian2019IEDM}). The second phase is the co-integration of different technologies. Although this approach results in higher fabrication costs, it presents several advantages in terms of system performance: the system can be more compact and potentially more power-efficient. In particular, the co-integration of non-volatile and volatile memristive devices can lead to a fully memristive approach. As an example, \cite{wang2018NatElectr} exploited volatile memristive devices to emulate stochastic neurons and non-volatile memristive devices to store the synaptic weights on the same chip, thus demonstrating the feasibility and the advantages of the dual-technology co-integration process. Eventually, the final step to be taken in the development of a dedicated \ac{ASIC} for wearable edge computing is the co-integration of sensors with memristive-based systems. \cite{shulaker2017Nature} tackled this challenge by designing and fabricating a gas sensing system capable of gas classification. The system uses RRAM arrays as memory and carbon nanotube field-effect transistors (CNFETs) for computation and gas sensing, both monolithically 3D-integrated on CMOS circuits, which carry out computation and allow memory access. \subsubsection{Learning with memristive devices} Adaptability is a feature of paramount importance in smart wearable devices, which need to be able to learn the unique features of their user. This calls for the implementation of lifelong learning paradigms, i.e. the ability to continuously learn new features from experience.
Typically, a network has a limited memory capacity dependent on the network size and architecture. Once the maximum number of experiences is recorded, newly learned features will erase old ones, thus originating the phenomenon of catastrophic forgetting. The problem of an efficient implementation of continual learning has been thoroughly investigated (\cite{parisi2018arxiv}). In the current scenario, a dichotomy exists between backprop-based \ac{ANN}s, which have very high accuracy but a limited memory capacity, and brain-inspired \ac{SNN}s, which feature higher memory capacity thanks to their higher flexibility, but at the cost of lower accuracy. Models used to overcome forgetting are described in Section~\ref{sec:model-challenge}. The use of memristive devices in such networks is still an open point. It is possible that memristive devices will be beneficial in increasing the network capacity (\cite{brivio2018Nanotech}) at no extra computational cost thanks to their slow approach to the boundaries (\cite{Frascaroli2018}), but so far this topic is still quite unexplored. An interesting approach is proposed by \cite{munozmartin2019JESSDCC}, where the key strengths of supervised convolutional \ac{ANN}s, unsupervised \ac{SNN}s, and memristive devices are combined in a single system. The results indicate that this approach is robust against catastrophic forgetting, whilst reaching 93\% accuracy when tested with both trained and non-trained classes. \subsection{Brain-inspired learning algorithms for neuromorphic hardware} \label{sec:bio-algorithms} Today, the dominant method for training artificial neural networks is the error \ac{Backprop} algorithm \cite{rumelhart1986learning}, which provides an efficient and scalable solution to adapting the network parameters to a set of training data. \ac{Backprop} is an iterative, gradient-based, supervised learning algorithm that operates in three phases.
First, a given input activation is propagated through the network to generate the output based on the current set of parameters. Then, the mismatch between the generated outputs and target values is computed using a loss function, and propagated backwards through the network architecture to compute suitable weight changes. Finally, the network parameters are updated to reduce the loss. We will not go into the details behind \ac{Backprop} here; see \cite{schmidhuber2015deep} for an excellent review and historical survey of the development of the algorithm. The problem of porting \ac{Backprop} to neuromorphic hardware stems from a well-known shortcoming of the algorithm known as \emph{locking} -- the weights of a network can only be updated after a full forward propagation of the data through the network, followed by loss evaluation, and finally the back-propagation of error gradients \cite{czarnecki2017understanding}. Locking prevents an efficient implementation of \ac{Backprop} on online distributed architectures. Also, \ac{Backprop} is not well suited for spiking neural networks, which have non-differentiable output functions. These problems have been recently addressed in brain-inspired variants of the \ac{Backprop} algorithm. \vspace{4mm} \subsubsection{Brain-inspired alternatives to error backpropagation} In recent years a number of methods have been proposed to approximate the gradient computation performed by \ac{Backprop} in order to prevent locking (see \cite{richards2019deep} for a recent review). \cite{lillicrap2016random,samadi2017deep} proposed to replace the non-local error back-propagating term of the \ac{Backprop} algorithm by sending the loss through a fixed feedback network with random weights that are excluded from training. In this approach, named \emph{random feedback alignment}, the back-propagating error signal acts as a local feedback to each synapse, similar to a reward signal in reinforcement learning.
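A minimal numerical sketch of random feedback alignment on a toy regression problem (network sizes, initialization scales, and learning rate are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-layer network; B is a fixed random matrix that replaces the
# transpose of W2 used by exact backprop for the hidden-layer error.
n_in, n_hid, n_out = 8, 16, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.1, (n_hid, n_out))   # fixed feedback weights, never trained

def fa_step(x, target, lr=0.01):
    """One training step with random feedback alignment."""
    global W1, W2
    h = np.tanh(W1 @ x)                  # forward pass
    y = W2 @ h
    e = y - target                       # output error
    dh = (B @ e) * (1.0 - h ** 2)        # feedback through B, not W2.T
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
    return float(e @ e)

x = rng.normal(0, 1, n_in)
target = np.array([1.0, -1.0])
losses = [fa_step(x, target) for _ in range(300)]
```

Despite the feedback weights being random and frozen, the loss decreases: the forward weights adapt so that the feedback signal becomes useful, which is the alignment effect described in the text.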
The fixed random feedback network de-correlates the error signals, providing individual feedback to each synapse. Lillicrap et al. showed that this simple approach already provides a viable approximation to the exact \ac{Backprop} algorithm and performs well for practical machine learning problems of moderate size. In \cite{neftci2017event} an event-based version of random feedback alignment, well suited for neuromorphic hardware, was introduced. This approach was further generalized in \cite{payvand2019error} to include a larger class of algorithms that use error feedback signals. An efficient model for learning complex sequences in spiking neural networks, named \emph{Superspike}, was introduced in \cite{zenke2018superspike}. The model also uses a learning rule that is modulated by error feedback signals and locally minimizes the mismatch between the network output and a target spike train. To overcome the problem of non-differentiable outputs, Superspike uses a surrogate gradient approach that replaces the infinitely steep spike events with a finite auxiliary function at the time points of network spike events \cite{hinton2012neural,bengio2013estimating}. As in random feedback alignment, learning signals are communicated to the synapses via a feedback network with fixed weights. Using this approach, Zenke and colleagues demonstrated efficient learning of complex sequences in spiking networks. \vspace{4mm} Another approach to approximating \ac{Backprop} in spiking neural networks uses an anatomical detail of cortical neurons. \cite{sacramento2017dendritic} introduced a biologically inspired two-compartment neuron model that approximates the error backpropagation algorithm by minimizing a local dendritic prediction error. \cite{goltz2019fast} port learning by \ac{Backprop} to neuromorphic hardware by incorporating dynamics with finite time constants and by optimizing the backward pass with respect to substrate variability.
They demonstrate the algorithm on the BrainScaleS analog neuromorphic architecture. \vspace{8mm} \subsubsection{Brain-inspired alternatives to backpropagation through time} Recurrent neural network (RNN) architectures often show superior learning results for tasks that involve a temporal dimension, which is frequently the case for edge computing applications. Porting learning algorithms for RNNs is therefore of utmost importance for efficient machine learning on the edge. Backpropagation through time (BPTT) -- the standard RNN learning method used in most GPU implementations -- unfolds the network in time and keeps this extended structure in memory to propagate information forward and backward, which poses a severe challenge to the power and area constraints of edge computing. Recent theoretical results \cite{bellec2018long,bellec2019_eprop} show that the power of BPTT can be brought to biologically inspired spiking neural networks (SNNs), while at the same time the unfolding can be avoided by an approximation that operates only forward in time, enabling \emph{online, always-on} learning. This algorithm operates at every synapse in parallel and incrementally updates the synaptic weights. As for random feedback alignment and Superspike discussed above, the weight update depends only on three factors, where the first two are determined by the states of the two related input/output neurons, and the third is given by synapse-specific feedback conveying the mismatch between the target and the actual output (see Fig.~\ref{fig:models}a for an illustration). The temporal gap between these factors is bridged by an \emph{eligibility trace} describing a transient synaptic dynamic. Eligibility traces have been theoretically predicted for a long time \cite{williams1992simple,izhikevich2007solving}, and have also recently been observed experimentally in the brain \cite{yagishita2014critical,he2015distinct,brzosko2015retroactive,bittner2017behavioral}.
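The three-factor scheme with an eligibility trace can be condensed into a toy update rule for a single scalar synapse (a deliberately simplified caricature of e-prop with illustrative constants, not the full algorithm):

```python
def three_factor_update(w, pre, post, error, trace, lr=0.01, decay=0.9):
    """Factors 1 and 2 (local pre/post activity) feed a decaying
    eligibility trace; factor 3 (a top-down error signal that may
    arrive later) converts the trace into a weight change."""
    trace = decay * trace + pre * post   # local eligibility trace
    w -= lr * error * trace              # gated by the error signal
    return w, trace

w, trace = 0.5, 0.0
# Coincident pre/post activity at t=0; the error signal arrives
# only three steps later, yet still credits that earlier activity.
activity = [(1.0, 1.0), (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
errors = [0.0, 0.0, 0.0, 1.0]
for (pre, post), err in zip(activity, errors):
    w, trace = three_factor_update(w, pre, post, err, trace)
```

The weight change at the final step is proportional to the decayed trace of the earlier coincidence, which is how the trace bridges the temporal gap between the factors without any unfolding in time.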
\vspace{8mm} \subsection{Efficient learning under stringent memory constraints} \label{sec:pruning} The amount of available resources in neuromorphic systems is kept low to increase energy efficiency. Memory elements are especially impactful on the energy budget. Therefore, algorithms are needed that make efficient use of the available memory resources. The largest amount of memory in a network is usually consumed by the synaptic weights. Since, in practice, the weights of many connections in a network converge to values close to zero, several methods have been proposed to reduce the memory footprint of machine learning algorithms by exploiting sparsity in the network connectivity. We will discuss here two types of algorithms: (1) those based on \emph{pruning connections after learning}, and (2) \emph{online} learning with \emph{sparse} networks. \subsubsection{Pruning} Many approaches to exploiting sparsity in learning algorithms focus on pruning the network after training (see \cite{gale2019state} for a recent review). Simple methods rely on pruning by magnitude, eliminating the weakest (closest to zero) weights in the network \cite{strom1997sparse,collins2014memory,han2015learning}. Some methods based on this idea have reported impressive sparsity rates of over 95\% for standard machine learning benchmarks with negligible performance loss \cite{guo2016dynamic,zhu2017prune}. Other methods are based on theoretical motivations and classical sparsification and regularization techniques \cite{molchanov2017variational,louizos2017learning,ullrich2017soft}, and also reach high compression rates. \cite{dai2019nest} proposed a method to iteratively grow and prune a network in order to generate a compact yet precise solution.
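Pruning by magnitude, the simplest of the approaches above, amounts to a few lines (an illustrative sketch; the sparsity level is chosen arbitrarily):

```python
import numpy as np

def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest
    absolute value; return the pruned weights and the binary mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(0, 1, (64, 64))
w_pruned, mask = prune_by_magnitude(w, sparsity=0.95)
```

In a deployment setting only the surviving weights and the mask (or an index list) need to be stored, which is the memory saving the cited methods exploit.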
Dai and colleagues provide a detailed comparison with state-of-the-art dense networks and other pruning methods, reaching sparsity above 99\% for the LeNet-5 benchmark.\\ \subsubsection{Online learning in sparse networks} A number of authors also introduced methods that work directly with sparse networks during training, which is often the more interesting case for neuromorphic applications with online training. \cite{bellec2017deep} introduced an algorithm for online stochastic rewiring in deep neural networks that works with a fixed number of synaptic connections throughout learning. The algorithm showed close to state-of-the-art performance at up to 98\% sparsity. Sparse evolutionary training (SET) \cite{mocanu2018scalable} introduced a heuristic approach that prunes the smallest weights and regrows new weights in random locations. Dynamic Sparse Reparameterization \cite{mostafa2019parameter} introduced a prune-redistribute-regrowth cycle, demonstrating compelling performance levels also for very deep neural network architectures. \cite{lee2018snip} introduced a single-shot pruning algorithm that yields sparse networks based on a saliency criterion prior to the actual training. \cite{dettmers2019sparse} introduced a refined method for online pruning and redistribution that surpasses the previous methods in terms of sparsity and learning performance. \vspace{4mm} \subsection{Open challenges and future work} \label{sec:model-challenge} As outlined above, edge computing poses quite specific challenges to learning algorithms that are substantially different from the requirements of classical applications. Some of the algorithms outlined above have already been successfully ported to neuromorphic hardware. For example, the e-prop algorithm of \cite{bellec2018long} has been implemented on the SpiNNaker~2 chip, yielding an additional energy reduction of two orders of magnitude compared to an X86 implementation \cite{liu2018memory}.
See the next Section~\ref{sec:cmos} for more details on available neuromorphic hardware and its applications. In the remainder of this section we highlight open challenges that remain to be solved for efficient learning in edge computing applications. In addition to the stringent memory and power constraints, learning at the edge also has to function in an online scenario where data arrive in a continuous stream. Some dedicated hardware resources, such as the memristive devices discussed in Section~\ref{sec:memristive}, may also show high levels of intrinsic variability, so the learning algorithm should be robust against these noise sources. In this section we discuss recent advances in this line of research and provide food for thought on how these specific challenges can be approached in future work. \vspace{4mm} \subsubsection{Fault-tolerant robust learning algorithms for neuromorphic devices} Here we review recent advances in using inspiration from biology to make learning algorithms robust against device variability. Several authors have suggested that device noise and variability should not be seen as a nuisance, but rather can serve as a computational resource for network simulation and learning algorithms (see \cite{maass2014noise} for a thorough discussion). \cite{pecevski2016learning} have shown that variability in neuronal outputs can be exploited to learn complex statistical dependencies between sensory stimuli. The stochastic behavior of the neurons is used in this model to compute probabilistic inference, while biologically motivated learning rules that only require local information at the synapses are used to update the synaptic weights. A theoretical analysis of the model shows that the spiking network performs a Markov chain Monte Carlo sampling process that allows the network to 'reason' about statistical problems.
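The sampling view described above can be illustrated with a toy example. The sketch below is a generic Gibbs-style neural sampler, not the exact model of \cite{pecevski2016learning}: binary stochastic units fire with a probability given by the sigmoid of their membrane potential, and for symmetric weights the visited network states are samples from a Boltzmann distribution.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neural_sampling(W, b, n_steps=5000, rng=None):
    """Gibbs-style neural sampling: each binary unit fires stochastically
    with a probability given by the sigmoid of its membrane potential.
    For a symmetric weight matrix W (zero diagonal) and biases b, the
    stationary distribution of the sampled states is the Boltzmann
    distribution p(z) ~ exp(z^T W z / 2 + b^T z)."""
    rng = np.random.default_rng(rng)
    n = b.size
    z = rng.integers(0, 2, size=n)            # random initial spike state
    samples = np.empty((n_steps, n))
    for t in range(n_steps):
        for i in range(n):                    # update one unit at a time
            u = W[i] @ z + b[i]               # membrane potential of unit i
            z[i] = rng.random() < sigmoid(u)  # stochastic firing
        samples[t] = z
    return samples
```

With a strong positive coupling between two units, for example, the sampled states of the two units agree most of the time, reflecting the learned statistical dependency.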
This idea is taken one step further in \cite{neftci2015unsupervised}, which shows that the variability of synaptic transmission can also be used for stochastic computing. The intrinsic noise of synaptic release is used to drive a sampling process. This model can be implemented in an event-based fashion and was benchmarked on the MNIST digit classification task, where it achieved $95.6\%$ accuracy. In \cite{kappel2015network} it was shown that the variability of learning rules and weight parameters gives rise to a biologically plausible model of online learning. The intrinsic noise of synaptic weight changes drives a sampling process that can be used to exploit redundancies in the task solution space (see Fig.~\ref{fig:models}b for an illustration). This model was applied to unsupervised learning in spiking neural networks and to closed-loop reinforcement learning problems \cite{kappel2018dynamic, kaiser2019embodied}. In \cite{yan2019efficient} this model was also ported to the SpiNNaker~2 neuromorphic many-core system. \vspace{5mm} \subsubsection{Biologically motivated mechanisms to combat forgetting in always-on learning scenarios} Neuromorphic systems often operate in an environment where they are permanently on and learning from a continuous stream of data. This mode of operation is quite different from most other machine learning applications, which work with hand-labeled batches of training data. Always-on learning on a system with limited resources inevitably leads to situations where the system reaches the limits of its memory capacity and thus starts forgetting previously learned sensory experiences. Inspiration for overcoming the forgetting of relevant information comes from biology: the mammalian brain seems to combat forgetting by actively protecting previously acquired knowledge in neocortical circuits \cite{cichon2015branch,pan2009survey,hayashi2015labelling,YangETAL:09,yang2014sleep}.
When a new skill is acquired, a subset of synapses is strengthened, stabilized and persists despite the subsequent learning of other tasks \cite{YangETAL:09}. A theoretical treatment of the forgetting problem was given in the \emph{cascade model} of Fusi and colleagues \cite{fusi2005cascade,benna2016computational}. They showed that training an increasing number of patterns into a single neural network unavoidably leads to catastrophic forgetting: attempting to store further patterns interferes with all previously learned ones, effectively wiping out the information stored in the network. The cascade model proposed to overcome this problem uses multiple parameters per synapse that are linked through a cascade of local interactions. This cascade of parameters selectively slows down weight changes, thereby stabilizing synapses when required and effectively combating forgetting. A related model that uses multiple parameters per synapse to combat forgetting was introduced in \cite{KirkpatrickETAL:17} (see also \cite{huszar2018note} for a recently introduced variation of the model). They used a Bayesian approach that infers a prior distribution over parameter values at each synapse. Synapses that stabilize during learning (converge to a fixed solution) are considered relevant for subsequent learning, and Bayesian priors help to maintain their values (see Fig.~\ref{fig:models}c for an illustration). \vspace{5mm} \subsubsection{Biologically motivated mechanisms to enhance transfer and sensor fusion} Distributed computing architectures at the edge need to make decisions by integrating information from different sensors and sensor modalities, and they should be able to make the best use of the sensory information across a wide range of tasks. It is clearly not very efficient to learn from scratch when confronted with a new task.
Therefore, to boost the performance of edge computing, we consider two aspects of transferring information to new situations: transfer of knowledge between sensors (\emph{sensor fusion}), which has been treated in Section~\ref{sec:sensor-fusion}, and transfer of knowledge between multiple different tasks (\emph{transfer learning}). \emph{Transfer learning} denotes the improvement of learning in a new task through the use of knowledge from a related task that has already been learned previously \cite{caruana1997multitask,torrey2010transfer}. This contrasts with most of today's machine learning applications, which focus on one very specific task. In transfer learning, when a new task is learned, knowledge from previous skills can be reused without interfering with them. For example, the ability to perform a tennis swing can be transferred to playing ping pong, while maintaining the ability to do both sports. The literature on transfer learning is extensive, and many different strategies have been developed depending on the relationship between the task domains (see \cite{weiss2016survey} and \cite{lu2015transfer} for systematic reviews). In machine learning, a number of approaches have been applied to a wide range of problems, including classification of images \cite{long2017deep,duan2012learning,kulis2011you,zhu2011heterogeneous}, text \cite{wang2011heterogeneous, zhou2014heterogeneous, prettenhofer2010cross, zhou2014hybrid} or human activity \cite{harel2010learning}. A very general approach to learning across multiple domains is followed in the \emph{learning to learn} framework of \cite{schmidhuber1992learning,schmidhuber1993neural}. Their model features networks that are able to modify their own weights through the network activity; these networks can therefore tinker with their own processing properties.
This approach has been taken to its most extreme form, where a network learns to implement an optimization algorithm by itself \cite{andrychowicz2016learning}. This model consists of an outer-loop learning network (\emph{the optimizer}) that controls the parameters of an inner-loop network (\emph{the optimizee}). The training algorithm of the inner-loop network works on single tasks that are presented sequentially, whereas the outer-loop learner operates across tasks and can acquire strategies to transfer knowledge. This learning-to-learn framework was recently applied to SNNs to obtain properties of LSTM networks and use them to solve complex sequence learning tasks \cite{bellec2018long}. In \cite{bohnstingl2019neuromorphic} the learning-to-learn framework was also applied to a neuromorphic hardware platform. \section*{Brainstorming} - How do we want to sell it? - Wearable devices: exact definition! - What is our definition of edge? - What is neuromorphic, really? - There are wearable devices (one sentence to describe them), they are important for... and they have these limitations... Do they do machine learning? Do they process data from sensors? How? How good are they? Which are the challenges (resolution of the sensors, power consumption, signal-to-noise ratio, monitoring abilities / adaptation)? - In general, the story should focus on one topic at a time, where current challenges are illustrated and possible solutions introduce the next topic. - At the end, we need a general discussion which brings everything together. - Better sensors, better processing, better algorithms (ML needs feature extraction; DL is better at this, but does not lend itself to online learning), learning, neuromorphic, encoding, limits of neuromorphic and memristive devices. \section*{Questions (David)} \begin{itemize} \item Should we target a specific application, e.g. autonomous driving? \item Title?
\item related review: \textit{Edge Computing: A Promising Framework for Real-Time Fault Diagnosis and Dynamic Control of Rotating Machines Using Multi-Sensor Data} \citep{qian2019edge}. \end{itemize} \subsection*{Concept} \begin{itemize} \item Low power \item adaptation to patient (scenario) \item edge computing \end{itemize} \section*{Questions (Xiangpeng \& Hadi)} \begin{itemize} \item We agree to have more specific topic! Wearable Device? \item Title can be: "Neuromorphic Edge Computing for Wearable Devices" \item related review: O. Krestinskaya, A. P. James and L. O. Chua, "Neuromemristive Circuits for Edge Computing: A Review," in IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 1, pp. 4-23, Jan. 2020. https://ieeexplore.ieee.org/document/8667457 \end{itemize} \section*{Questions (Erika)} \begin{itemize} \item Wearable devices are fine for me. \item Title: \textit{Neuromorphic Edge Computing \textbf{with Memristive Devices} for Wearable Devices} ;-) \item I don't think there are related reviews, I only found this one up to now: \textit{Novel electronics for flexible and neuromorphic computing} \cite{lee2018AdvFunctMat}, but it's on flexible substrates.
\end{itemize} \section{Introduction (all)} \begin{itemize} \item Motivation of multi-sensor fusion \item Local edge computing to boost sensor fusion performance \item Real time learning for smooth interaction with environment \item Importance of biologically plausible algorithms \item Hardware-based neuromorphic approaches \item Memristive devices \end{itemize} \section{Introduction (all*)} \begin{itemize} \item Importance of Edge computing for IoT (all) \item Sensors and sensory fusion (Hadi and Xiangpeng) \item State of the art for processing sensory information (low power accelerators) (Melika) \item The next step: Ultra low power smart sensors that adapt to the environment (all) \item Adaptation: Continual learning (David) \item State of the art on CMOS neuromorphic hardware (Melika and Erika) \item Memristive devices (Erika) \item Ideal substrate: Mixed-signal hybrid CMOS-memristor chips (Melika and Erika) \end{itemize} \section{Sensors and techniques for sensor fusion (meLAB)} \begin{itemize} \item Overview on wearable sensors \item Large-scale sensor network \item Overview of standard architectures for sensor data processing \end{itemize} \section{Models for biologically plausible continual learning (G{\"o}ttingen)} \begin{itemize} \item Algorithms for low power machine learning. \item Algorithms that cope with high variability in neuromorphic devices. \item -- Supervised / unsupervised learning \item -- Biological models \item -- Short and long-term synaptic plasticity \item -- Hebbian learning and STDP \item -- Beyond STDP (reward modulated learning, continuous learning, and others) \end{itemize} \subsection*{continual learning} Maybe better to discuss the two problems of sensor fusion and continual learning. Link each of them to how biology is solving them and how they are approached in state-of-the-art machine learning methods and neuromorphic devices.
two problems: \begin{itemize} \item learning from multiple sensors ('spatial integration') \item transfer of knowledge between tasks, catastrophic forgetting ('temporal integration'). \end{itemize} \subsubsection*{Learning from multiple sensors simultaneously} sensor fusion: application in autonomous driving: \cite{sallab2017deep}. Deep network for sensor fusion with event-based implementation \cite{neil2016effective}. \subsubsection*{transfer of knowledge between tasks} transfer learning: \cite{caruana1997multitask,torrey2010transfer} learn on data from different domains - able to transfer knowledge from one domain to the other \cite{pan2009survey,weiss2016survey,thrun1998learning}. When a new task is learned, knowledge from previous skills can be reused without interfering with them. E.g. the ability to perform a tennis swing can be transferred to playing ping pong, while maintaining the ability to do both sports. Evidence in biology: The mammalian brain seems to achieve transfer learning by protecting previously acquired knowledge in neocortical circuits \cite{cichon2015branch,pan2009survey,hayashi2015labelling,YangETAL:09,yang2014sleep}. When a new skill is acquired, a subset of excitatory synapses is strengthened and persists despite the subsequent learning of other tasks \cite{YangETAL:09}. In machine learning a number of approaches have been applied to a wide range of problems \cite{shimodaira2000improving,long2017deep,huh2016makes,lu2015transfer}, including classification of images \cite{duan2012learning,kulis2011you,zhu2011heterogeneous}, text \cite{wang2011heterogeneous, zhou2014heterogeneous, prettenhofer2010cross, zhou2014hybrid} or human activity \cite{harel2010learning}. A theoretically sound and systematic approach for transfer learning is based on hierarchical Bayesian models \cite{schwaighofer2005learning}.
A related model, inspired by the cascade model: \cite{KirkpatrickETAL:17} (see also \cite{huszar2018note} for a recently introduced variation of the model). Transfer learning was also applied to spiking networks: \cite{bellec2018long}. \section{Memristive devices and memristive computing (Polimi)} \begin{itemize} \item Non-volatile devices (PCM, ReRAM, FeRAM STT, MTJ) \item Volatile devices (ReRAM, CBRAM, others) \item Plasticity and learning with memristive synaptic devices \item Neurons implementation with memristive devices \item Memristive neuromorphic computing for sensor fusion \end{itemize} Memristive devices, as the name suggests, are devices which can change and memorize their resistance states. They are usually two-terminal devices; however, they can be implemented with various physical mechanisms, resulting in versatile forms, e.g. resistive random access memory (RRAM) (Ielmini and Wong 2018), phase change memory (PCM) (Zhang et al. 2019), magnetic random access memory (MRAM) (Miron et al. 2011), ferroelectric random access memory (FeRAM) (Wen et al. 2013), etc. The resistance memory of these devices can mimic the memory effect of the basic components of the biological neural system, while the resistance change can mimic the plasticity of biological synapses. Thanks to their simple two-terminal configuration and scalability to the nanoscale, they are inherently suitable for the hardware implementation of brain-inspired computation materializing an artificial neural network, i.e. neuromorphic computation (Jo et al. 2010; Wang et al. 2016). This notion has, in recent years, incited wide investigations of various memristive devices and of their applications in neural network learning and recognition, or, in short, memristive learning (Ohno et al. 2011; Kuzum et al. 2012; Yang, Strukov, and Stewart 2013; Alibart, Zamanidoost, and Strukov 2013; Eryilmaz et al. 2014; Ambrogio et al. 2018).
Memristive learning can enable energy-efficient and low-latency information processing within systems of reduced size, abandoning the conventional von Neumann architecture. Among other benefits, this will also make it possible to process information where it is acquired, i.e. within sensors, and reduce the bandwidth needed for transferring the sensor data to data centers, accelerating the coming of the era of the internet of things (IoT). \subsection{Memristive devices} The resistance states of memristive devices can be changed by applying external stimuli to these devices. These devices can be classified into several categories depending on the mechanism of the resistance-state change and on the type of external stimulus: RRAM, PCM, MRAM, FeRAM, and some others. We will give a brief introduction to all these devices, and will take the RRAM device as an example for the synaptic applications and memristive learning in the following sections. \subsubsection{RRAM} RRAM devices have a sandwich structure, i.e., a switching layer between two electrode layers, and change their resistance states through the migration of cations or anions in the switching layer driven by external voltage or current stimuli (Fig. X1a) (Yang et al. 2008; Wong et al. 2012). Under electrical stimuli, the migration of cations or anions can form a filamentary conductive path inducing a soft breakdown of the device, and the device simultaneously undergoes a transition from the high resistance state (HRS) to the low resistance state (LRS). The soft breakdown means that this transition can be reversed by opposite electrical stimuli (bipolar RRAM devices) or by a thermal-dissipation-induced break of the filamentary conductive path (unipolar RRAM devices) (Ielmini 2016).
It is also possible that the applied electric field drives the defects (cations or anions) in a uniform way, so that the interface between the low-conductivity and high-conductivity parts of the switching layer moves, resulting in a resistance change of the device (Strukov et al. 2008). In addition to the commonly considered non-volatile type of memristive switching, RRAM devices can also show volatile behavior, which usually happens when silver or copper electrodes are used. This kind of volatile device is also named a diffusive memristor (Zhongrui Wang, NM, 2017). The volatile behavior is due to the surface-tension effect of the nanoscale metallic filament connecting the top and bottom electrodes in the on state (Wei, IEDM 2018; Wei Wang, NC, 2019). This volatile behavior was originally proposed to be used for selector devices (Ming Wang, AM, 2018); however, this application is hindered by the relatively long retention/relaxation time. The retention time of the volatile behavior was then found to be similar to the timescale of short-term memory, and it was naturally proposed to mimic the short-term memory effect of biological synapses (Zhongrui Wang, 2017; Wei Wang, TED, 2019; Erika, ICECS, 2019). \subsubsection{PCM} In PCM devices, the resistance switching is achieved by changing the phase (crystalline or amorphous state) of the switching material, for instance Ge$_2$Sb$_2$Te$_5$ (GST), through the annealing effect of Joule heating by an applied current (Tuma et al. 2016). Heating the material to a moderate temperature and annealing it results in crystallization, while heating it above the melting point followed by rapid quenching leads to amorphization (Suri et al. 2011). The crystalline and amorphous states correspond to the low resistance state and the high resistance state, respectively (Ielmini and Zhang 2007). To maximize the Joule heating effect, one of the electrodes is usually designed in a pillar or vertical-thin-film shape.
The crystallized or amorphized region usually has a spherical shape; together with the pillar electrode, the phase-change region thus forms a mushroom-type shape, a signature of PCM devices (Boybat et al. 2018; Roberto Carboni and Ielmini 2019), as shown in (Fig. X1b). \subsubsection{MRAM} The essential building block of an MRAM device is the magnetic tunnel junction (MTJ) (Chappert, Fert, and Van Dau 2007), with a thin insulating layer sandwiched between two ferromagnetic layers, one of which is a pinned layer with fixed magnetic polarization and the other a free layer with tunable magnetic polarization. If the free layer is tuned to be parallel to the pinned layer, electrons can easily tunnel through the thin insulating layer and the device shows the LRS, while if the free layer is tuned to be anti-parallel to the pinned layer, the device is in the HRS. The tuning of the free-layer polarization can be achieved by the spin-transfer torque (STT) or spin-orbit torque (SOT) effects (Borders et al. 2017; Locatelli, Cros, and Grollier 2014). Since changing the magnetic polarization of the free layer does not involve atomic migration or structural change, MRAM devices are projected to have higher endurance ($> 10^{14}$ cycles) and faster switching speed ($< 1$ ns) (R. Carboni et al. 2016) than RRAM and PCM devices. However, the resistance window is usually only 2 to 3 (Yuasa et al. 2004), much smaller than that of RRAM and PCM devices. \subsubsection{FeRAM} The FeRAM device also has a sandwich structure, with a ferroelectric layer between two electrodes. The ferroelectric layer can retain a positive or negative permanent remnant polarization, representing the two different memory states of the device, upon application of external positive or negative voltage biases on the two electrodes, respectively (Wen et al. 2013). Different from the former three types, the memory states of a FeRAM device cannot be detected by directly measuring its resistance.
In practice, one can embed the ferroelectric layer in a transistor, replacing the conventional dielectric layer, so that the resistance of the channel reflects the polarization state of the ferroelectric layer (Si et al. 2018). \subsubsection{Mott memories} [Papers: Review paper: Shriram Ramanathan; Qi Liu, etc.] \subsubsection{Ion transport memory} [Papers: Y. van de Burgt; Wei Lu (Xiaojian Zhu); (Qing Wan; Chen Ge; Jia Huang)] \subsection{Memristive building blocks for neuromorphic computing} \subsubsection{Memristive synapses} [Papers: to be collected] \subsubsection{Memristive neurons} [Papers: Mott, PCM(IBM); Xumeng Zhang;] \subsection{Memristive neuromorphic computing} \subsubsection{Memristive artificial neural network} [Papers: JJ Yang; Demitr Strukov; Wei Lu; Burr/A. Sebastian (PCM)]. Device linearity; linear and analog weight update (for error backpropagation). \subsubsection{STDP and unsupervised learning / spiking neural network} [Papers: Ielmini (Stefano, Zhongqiang, Giacomo); Shimeng Yu; CNRS/CEA; A. Sebastian (PCM)] \subsubsection{Spatiotemporal neural network} [Papers: A. Sebastian (PCM); Wei] \subsubsection{Hopfield neural network} [Papers: Chicca; Milo; Wei Lu (Fuxi Cai); Oscillation network (Martin Zeigler, Damien Querlioz)] \subsection{Mixed-signal hybrid CMOS-memristor chips} The main advantages of using memristive devices in neuromorphic systems are: \begin{itemize} \item Memristive devices are scalable and low power (area and energy saving). \item Different memristive devices have different time-scale dynamics; therefore we can exploit the intrinsic physical dynamics of memristive devices to emulate short- and long-term dynamics in real time. \end{itemize} Memristive devices suffer from variability (both cycle-to-cycle and device-to-device) and they are intrinsically stochastic.
However, CMOS systems can be designed to exploit these characteristics of the devices (bug or feature) \cite{suri2013TED,suri2015TNano,al-shedivat2015JETCAS,payvand2018ISCAS,payvand2019FarDisc}. The cointegration of memristive devices and neuromorphic circuits is challenging because it needs not only advanced technological skills, but also a device-centric approach to both the network architecture design and the development of the learning algorithms. \cite{hirtzlin2020FrontNeurosci} presented a digital biologically plausible Binarized Neural Network, but the cointegrated part is only the peripheral circuitry, there is no neuromorphic processor. \cite{shulaker2017Nature} demonstrated a computing system with RRAM arrays (memory), CNFET (computation and gas sensor), and CMOS (computation and memory access) 3D monolithically integrated. The system is capable of single feature gas classification. An off-chip classifier is used to classify multiple gases. \cite{chen2019NatElectr} integrated memristive non-volatile devices on a CMOS substrate to enable in-memory computing for AI edge processors, but this is not a neuromorphic processor. It implements two or three input logic operations and multiply-and-accumulate operations for binary-based convolutional neural network operations (note: 65 nm CMOS process). It can achieve an inference accuracy of 98.8\% on MNIST dataset. \cite{wang2018NatElectr} showed a fully memristive neural network with unsupervised learning. Volatile memristive devices are used to emulate stochastic neurons, whereas non-volatile memristive devices are used as synapses. Unsupervised learning is demonstrated on a convolutional neural network for pattern classification. \cite{valentian2019IEDM} presented a fully cointegrated Spiking Neural Network with analog neurons and memristive synapses on a 130 nm test chip. Classification accuracy up to 85.6\% (MNIST). Live demo successful. 
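The in-memory multiply-and-accumulate operation exploited by the chips discussed in this section can be captured in an idealized form. The sketch below assumes ideal, linear devices and ignores wire resistance and device variability:

```python
import numpy as np

def crossbar_mac(G, v_in):
    """Idealized memristive crossbar multiply-accumulate: input voltages
    are applied on the rows, each device passes a current I = G_ij * V_i
    (Ohm's law), and Kirchhoff's current law sums the currents along each
    column.  The output current vector is therefore a matrix-vector
    product computed in place, inside the memory array itself."""
    return G.T @ v_in  # column currents: I_j = sum_i G_ij * V_i

def differential_mac(G_plus, G_minus, v_in):
    """Signed weights are commonly mapped onto pairs of conductances
    (differential encoding): W = G_plus - G_minus."""
    return crossbar_mac(G_plus, v_in) - crossbar_mac(G_minus, v_in)
```

Because conductances are non-negative, the differential encoding with two devices per weight is a common way to represent both positive and negative synaptic weights in such arrays.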
\section{Discussions and Conclusions (all)} Neuromorphic memristive hardware for local edge computing as an optimal solution for efficient sensor fusion \bibliographystyle{frontiersinSCNS_ENG_HUMS} \subsection{Wearable sensors with machine learning} Recently, the field of artificial intelligence has further boosted the possibilities of smart wearable sensory systems. Emerging intelligent applications and high-performance systems require more complexity and demand that sensory units accurately describe the physical object, so that the decision-making unit or algorithm can output a more reliable result (\cite{RN36, RN44, RN39, RN144, RN49}). Depending on the signal-acquisition position, Fig.~\ref{fig:wearable} summarizes the four biopotential sensors and two widely used wearable sensors along with their learning systems and applications. The sensors for biopotentials will be introduced first, and the other two wearable sensors will be described separately. The biopotential signal can be extracted from the human body using a sensor with direct electrode contact. The electrochemical activity of the cells in nervous, muscular and glandular tissue generates ionic currents in the body. An electrode-electrolyte transducer is needed to convert the ionic current to an electric current for the front-end circuit. The electrode, which is normally made of metal, can be oxidized by the electrolyte, generating metal ions and free electrons. In addition, the anions in the electrolyte can also be oxidized to neutral atoms and free electrons. These free electrons result in a current flow through the electrode. Thus, the surface potential generated by the electrochemical activities in cells can be sensed by the electrode. However, the bio-signals sensed by the electrode are weak and noisy. Before the collected signals are digitized by an analog-to-digital converter, an analogue front-end is essential to provide a readable signal.
The design requirements of the front-end for the biopotential electrodes can be summarized as follows: i) high common-mode rejection ratio; ii) high signal-to-noise ratio; iii) low power consumption; iv) signal filtering; and v) configurable gain (\cite{RN88}). \textbf{\emph{\ac{ECG}.}} \ac{ECG} is the electrical activity generated by the electrochemistry around cardiac tissue. Containing morphological and statistical features, \ac{ECG} provides comprehensive information for analyzing and diagnosing cardiovascular diseases (\cite{RN72}). In previous studies, automatic \ac{ECG} classification has been achieved using machine learning algorithms such as \ac{DNN} (\cite{RN74, RN45}), \ac{SVM} (\cite{RN34, RN84}), and \ac{RNN} (\cite{RN132, RN23}). According to the Association for the Advancement of Medical Instrumentation, there are five \ac{ECG} classes of interest: normal, ventricular, supraventricular, fusion of normal and ventricular, and unknown beats. These methodologies can be evaluated on available \ac{ECG} databases and yield over 90$\%$ accuracy and sensitivity for the five classes, which is essential for future cardiovascular health monitoring. In wearable applications, \cite{RN75} and \cite{RN76} present systems that measure \ac{ECG} and send it to the cloud for classification and health monitoring. \textbf{\emph{\ac{EEG}.}} Our brain neurons communicate with each other through electrical impulses. An \ac{EEG} electrode can help to detect potential information associated with this activity by investigating the \ac{EEG} (\cite{RN79, RN81}) on the surface of the skull. In comparison with other biopotential signals, surface \ac{EEG} is relatively weak (normally in the microvolt range) and noisy (\cite{RN87, RN232}). Therefore, it requires a high-input-impedance readout circuit and intensive signal pre-processing for clean \ac{EEG} data (\cite{RN79, RN88}).
While wet electrodes (Ag/AgCl) are more precise and more suitable for clinical purposes, passive dry electrodes are more suitable for daily health monitoring and brain-machine interfaces (\cite{RN87, RN80}). Besides, the applications also include mental disorders (\cite{RN90}), driving safety (\cite{RN80, RN81}), and emotion evaluation (\cite{RN244}). A commercial biopotential data acquisition system, the Biosemi ActiveTwo, provides up to 256 channels for \ac{EEG} analysis (\cite{RN93}). For a specific application, the number of electrodes can be reduced to detect only the relevant areas, such as 19 channels for depression diagnosis (\cite{RN33}), four channels for evaluating driver vigilance (\cite{RN81}) and 64 channels for emotional state classification (\cite{RN244}). Although \ac{EEG} is an on-body biopotential, most of the existing \ac{EEG} research employs offline learning and analysis because of the system complexity and the high number of channels. In wearable real-time applications, usually a smaller number of channels is selected and the data are wirelessly sent to the cloud for further processing (\cite{RN97, RN80, RN81, RN96}). \textbf{\emph{\ac{EOG}.}} Eye movement, which results in potential variations around the eyes recorded as \ac{EOG}, is a combined effect of environmental and psychological changes. It returns relatively weak voltages (0.01-0.1 mV) at low frequencies (0-10 Hz) (\cite{RN232}). Differing from other eye-tracking techniques using a video camera and infrared light, \ac{EOG} provides a lightweight, inexpensive and fully wearable solution to access human eye movement (\cite{RN238}). It is the most widely used approach for wearable human-machine interfaces, especially for assisting quadriplegics (\cite{RN238}). It has been used to control a wheelchair (\cite{RN239}), control a prosthetic limb (\cite{RN240, Witkowski2014}) and evaluate sleep (\cite{RN237, RN236, RN234}).
Additionally, recent studies fuse \ac{EEG} and \ac{EOG} to increase the degrees of freedom of the signal and enhance system reliability, because the two signals share implicit information such as sleepiness (\cite{RN237, RN241}) and mental health (\cite{RN243}). \ac{EOG} can also act as a supplement to provide additional functions or commands to an \ac{EEG} system (\cite{RN233, RN242, Witkowski2014}). \textbf{\emph{\ac{EMG}.}} \ac{EMG} is an electrodiagnostic method for recording and analyzing the electrical activity generated by skeletal muscles, whose movement frequently occurs in the arms and legs. It yields higher amplitude (up to 10 millivolts) and wider bandwidth (20-1000Hz) compared to the other biopotentials (\cite{RN232, RN88}). Near an active muscle, different oscillation signals can be measured by a dry electrode array, which allows a computer to sense and decode body motion (\cite{RN224, RN228, RN221}). A prime example is the Myo armband of Thalmic Labs, a commercial multi-sensor device that consists of \ac{EMG} sensors, a gyroscope, an accelerometer and a magnetometer (\cite{RN222}). The sensory data is sent to a phone or PC via Bluetooth, where various body movements can be recognized by feature extraction and machine learning. Moreover, the application of \ac{EMG} is frequently linked to target control, such as a wheelchair (\cite{RN218}) or a prosthetic hand (\cite{RN225, RN226}), for assisting disabled people. Its applications also include sign language recognition (\cite{RN224}), diagnosis of neuromuscular disorders (\cite{RN228, RN227}), analysis of walking strides (\cite{RN221}) and virtual reality (\cite{RN229}). Machine learning enables the system to overcome the variation of \ac{EMG} signals across different users (\cite{RN224, RN228}). \textbf{\emph{\ac{PPG}.}} \ac{PPG} is a non-invasive and low-cost optical measurement method that is often used for blood pressure and heart rate monitoring in wearable devices.
The optical properties of skin and tissue change periodically due to the blood flow driven by the heartbeat. By directing a light emitter toward the skin surface, a photosensor can detect the variations in light absorption, normally at the wrist or finger. This variation signal is called \ac{PPG} and is highly correlated with the rhythm of the cardiovascular system (\cite{RN255}). Compared with \ac{ECG}, \ac{PPG} is easily accessible and low cost, which makes it an ideal intermediary for wearable heart rate measurement. Its main disadvantage against \ac{ECG} is that \ac{PPG} is not unique across persons and body positions. Thus, further analysis of \ac{PPG} requires machine learning or other statistical tools to calibrate the signal to different scenarios. For example, it can be used in biometric identification after deep learning (\cite{RN252, RN256}). It is worth mentioning that \ac{PPG} is a strong supplement in applications of \ac{ECG}. \textbf{\emph{\ac{BIS}.}} \ac{BIS} is another low-cost and powerful sensing technique that provides informative body parameters. The principle is that the cell membrane behaves like a frequency-dependent capacitor and impedance. The emitter electrodes apply a multifrequency excitation signal (0.1-100MHz) to the skin while the receiver electrodes collect the resulting current, from which the impedance spectral data of the tissue in between is demodulated (\cite{CAYTAK2019265, RN258}). Compared to homogeneous materials, body tissue presents more complicated impedance spectra because of the cell membranes and macromolecules. Therefore, tissue conditions, such as muscle concentration and structural and chemical composition, can be analysed through \ac{BIS}. \ac{BIS} can measure body composition such as fat and water (\cite{RN258}). Based on different setups in terms of position and frequency, it can also be helpful in the early detection of diseases such as lymphedema, organ ischemia and cancer (\cite{RN259}).
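The frequency dependence exploited by \ac{BIS} is commonly summarized by a Cole-type dispersion model. The sketch below evaluates a single-dispersion Cole model numerically; it is an illustration only, and the function name and all parameter values ($R_0$, $R_\infty$, $\tau$, $\alpha$) are hypothetical rather than taken from the cited works.

```python
import math

# Minimal sketch of a single-dispersion Cole model for tissue impedance:
#   Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)
# All parameter values are illustrative, not taken from the cited works.
def cole_impedance(freq_hz, r0=500.0, r_inf=100.0, tau=1e-6, alpha=0.8):
    w = 2 * math.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

# |Z| falls from about R0 at low frequency toward R_inf at high frequency,
# which is the frequency-dependent behaviour a BIS sweep measures.
low = abs(cole_impedance(1.0))      # close to r0
high = abs(cole_impedance(1e9))     # close to r_inf
```

Sweeping `freq_hz` over the excitation band and fitting the four parameters to the measured spectrum is one common way such spectral data is reduced to tissue descriptors.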
Furthermore, multiple pair-wise electrodes can form electrical impedance tomography that describes the impedance distribution. By embedding these electrodes in a wristband, the tomography can estimate hand gestures after training, which is another novel solution for an inexpensive human-machine interface (\cite{RN55}). \subsection{Multisensory fusion in wearable devices} \label{sec:sensor-fusion} Every sensor has its own limitations. In some demanding cases, an individual sensor by itself cannot satisfy system requirements such as accuracy or robustness (\cite{RN2, RN1, RN10, RN144}). The solution involves increasing the number and types of sensors to form a multisensory system or sensor network for one measurement purpose (\cite{RN2, RN1, RN10}). Multiple types of sensors working synergistically in a system provide more dimensions of input to fully map an object onto the data stream. Different sensors return different data with respect to sampling rate, number of inputs and the information behind the data. Machine learning models, such as \ac{ANN} and \ac{SVM}, can be designed to combine multiple sources of data. Depending on the application, sensor types and data structure, several approaches have been proposed for multisensory fusion. Generally, in such a system, machine learning is frequently used and plays a vital role in merging different sources of sensory data, thanks to its multidimensional data processing mechanism. Machine learning algorithms allow sensory fusion to occur at the signal, feature or decision level (\cite{RN1, RN10}). The results show that a multisensory system is advantageous in improving system performance. For example, the fusion of \ac{ECG} and \ac{PPG} patterns can be an informative physiological parameter for robust medical assessment (\cite{RN253}). Counting the peak intervals between \ac{PPG} and \ac{ECG} can estimate the arterial blood pressure (\cite{RN254}).
Interestingly, a recent study shows that the QRS complex of \ac{ECG} can be reconstructed from \ac{PPG} by a novel transformed attentional neural network after training (\cite{RN257}). This could be beneficial for the accessibility of wearable \ac{ECG}. \vspace{6mm} \subsection{Challenges towards smart wearable sensors with edge computing} Given the potential of sensory systems with machine learning, the main challenge raised is the shortage of power and computing efficiency (\cite{Kanoun2004}). Novel applications using multiple sensors and requiring high learning ability usually demand more energy in the wearable computing unit (\cite{RN18}). Nevertheless, the power supply in the wearable domain is a difficulty with existing battery technologies. This weakness limits the further development of smart wearable devices (\cite{RN18}). The existing solution is to wirelessly transfer the raw data onto a cloud where the computationally intensive algorithm is implemented (\cite{RN66}). However, this solution is not ideal considering 1) the complexity of using a wireless module, 2) the non-negligible power consumption, 3) the amount of data, 4) the space limitation due to the range of wireless transmission, 5) privacy issues due to the broadcast of signals, and 6) non-negligible time latency due to the communication channel. These drawbacks strongly limit the application of wearable sensors. Implementation of \ac{ANN} on von Neumann architectures, which has been frequently used in sensors, is power-hungry. Conversely, it has been reported that signal processing activity in the brain is several orders of magnitude more power-efficient, and one order better in processing rate, than digital systems (\cite{RN98}). Compared to conventional approaches based on a binary digital system, brain-inspired neuromorphic hardware has yet to be advanced in the contexts of data storage and removal as well as their transmission between different units.
In this perspective, a neuromorphic chip with a built-in intelligent algorithm can act as a front-end processor next to the sensor. The conventional \acp{ADC} could be replaced by a delta encoder or feature extractor converting the sensor analog output to spike-based signal for the hardware (see Section~\ref{sec:cmos}). In the end, the output becomes the result of recognition or prediction instead of an intensive data stream. In this way, the computation occurs at the local edge under low power and brain-like architecture.
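The delta-encoding step mentioned above can be made concrete with a minimal send-on-delta encoder: instead of streaming raw samples, it emits a sparse up/down event whenever the input crosses a threshold relative to the last emitted level. This sketch is illustrative only; the function name and threshold are hypothetical, not the front-end of any specific chip.

```python
# Minimal send-on-delta encoder sketch: emit a +1/-1 event whenever the
# input moves by at least `delta` from the last emitted level, so a slowly
# varying signal yields a sparse, spike-like stream instead of raw samples.
# Names and threshold value are illustrative, not from a specific system.
def send_on_delta(samples, delta):
    events = []                 # list of (sample_index, +1 or -1)
    level = samples[0]          # last transmitted level
    for i in range(1, len(samples)):
        while samples[i] - level >= delta:
            level += delta
            events.append((i, +1))
        while level - samples[i] >= delta:
            level -= delta
            events.append((i, -1))
    return events

ramp = list(range(11))                  # slow rise: 0, 1, ..., 10 (ADC counts)
spikes = send_on_delta(ramp, delta=2)   # five upward events, one per 2 counts
```

The event stream's rate tracks signal activity rather than a fixed sampling clock, which is what makes it a natural input format for spike-based neuromorphic processors.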
\section{Introduction} The Rogers-Ramanujan identities, equations (\ref{eq.RR2}) and (\ref{eq.RR1}), have had a fruitful influence on many subjects, some of them unexpected, in Mathematics and Physics. \begin{eqnarray}\label{eq.RR2}\sum_{n=0}^{\infty}\frac{ q^{n^2}}{(1-q)(1-q^2)\dots (1-q^n)}&=&\prod_{k=0}^{\infty}\frac{1}{(1-q^{5k+1})(1-q^{5k+4})}\\\label{eq.RR1} \sum_{n=0}^{\infty}\frac{q^{n(n+1)}}{(1-q)(1-q^2)\dots (1-q^n)}&=&\prod_{k=0}^{\infty}\frac{1}{(1-q^{5k+2})(1-q^{5k+3})} \end{eqnarray} They were discovered and proved by Rogers in 1894 \cite{rogers1894second}, rediscovered by Ramanujan (without proof) in 1913, and again by I. Schur in 1917 \cite{Schur1917}. It is impossible to summarize in a few lines the enormous amount of contributions related to the Rogers-Ramanujan identities and their generalizations. The reader is referred to the recent book of Sills \cite{sills2017invitation} for further references and a nice introduction to the subject in its historical context. In \cite{rogers1894second} Rogers presented what is now known as the Rogers-Ramanujan continued fraction $\mathcal{R}(q)$, $$\mathcal{R}(q)=q^{\frac{1}{5}}\frac{1}{1+\cfrac{q}{1+\cfrac{q^2}{\ddots}}}$$ and proved that $$\mathcal{R}(q)=q^{\frac{1}{5}}\frac{\sum_{n=0}^{\infty}\frac{q^{n(n+1)}}{(1-q)(1-q^2)\dots (1-q^n)}}{\sum_{n=0}^{\infty}\frac{ q^{n^2}}{(1-q)(1-q^2)\dots (1-q^n)}}.$$ In what follows we shall drop the factor $q^{\frac{1}{5}}$ from $\mathcal{R}(q)$, since our main concern here is the combinatorial meaning of the Rogers-Ramanujan continued fraction and identities. MacMahon \cite{MacMahonbook} and Schur \cite{Schur1917} were the first to report the combinatorial meaning of the Rogers-Ramanujan identities. The left-hand side of (\ref{eq.RR2}) is the generating function for the number of partitions into positive parts with a difference of at least two among adjacent parts ($2$-distinct partitions in the terminology of \cite{Andrews2004}).
Its right-hand side is the generating function of the partitions with each part congruent to either one or four modulo five. Similarly, the left-hand side of (\ref{eq.RR1}) is the generating function for the number of $2$-distinct partitions, but having each part strictly greater than one. Its right-hand side counts the number of partitions with each part congruent to either two or three modulo five. Hence, each of them establishes an equipotence between two different sets of partitions. Garsia and Milne gave a bijective proof of the Rogers-Ramanujan identities by establishing a complicated bijection between these two kinds of partitions \cite{garsia1981rogers}. To that end they created what is now called the Garsia-Milne involution principle. The Garsia-Milne proof was later simplified in \cite{bressoud1982short}.\\ We introduce here a noncommutative version of $\mathcal{R}(-q)$ in an infinite number of variables $$X_0, X_1, X_2,X_3,\dots,$$ and prove that its expansion is the language of words associated to a combinatorial structure we call shift-plethystic trees. Our model based on shift-plethystic trees leads us to consider compositions (instead of partitions) whose risings are at most one, and to express the noncommutative version of $\mathcal{R}(-q)$ as a quotient of two generating functions on this kind of compositions. We call $q$-umbral evaluation on a noncommutative series the procedure of substituting each variable $X_k$ by $zq^k$ or simply by $q^k$. By $q$-umbral evaluation of those generating functions we obtain an alternative (dual) combinatorial interpretation of the Rogers-Ramanujan identities in terms of signed compositions (Section \ref{sec.RRcompositions}). A combinatorial understanding of the cancellations taking place among these signed compositions would provide an elegant and, hopefully, simple proof of the Rogers-Ramanujan identities. In Section \ref{sec.splety} we introduce shift plethysm of noncommutative series.
It generalizes the classical substitution of $q$-series: by $q$-umbral evaluating shift plethysm on a particular class of noncommutative series we recover the classical substitution of $q$-series. By means of elementary computations of inverses involving generalized shift-plethystic trees we recover some classical identities in Subsection \ref{sec.pletitrees}, and prove in Section \ref{sec.RRandnew} new ones relating Rogers-Ramanujan identities, compositions, partitions and shift-plethystic trees. Previous work on noncommutative versions of the Rogers-Ramanujan continued fraction can be found in \cite{Berenstein2019} and \cite{Pak}. Although their approach does not rely on an infinite number of variables, a coupling of both approaches could lead to novel identities involving signed compositions. \section{Formal power series in noncommuting variables} Let $\mathbb{A}$ be an alphabet (a totally ordered set) with at most a countable number of elements (letters). Let ${\mathbb A}^*$ be the free monoid generated by ${\mathbb A}$. It consists of words, or finite strings of letters in ${\mathbb A}$, $\omega=\omega_1 \omega_2\dots \omega_n$, including the empty string, represented as $1$. We denote by $\ell(\omega)$ the length of $\omega$. Let $\mathbb{K}$ be a field of characteristic zero. A \emph{noncommutative} formal power series in ${\mathbb A}$ over $\mathbb{K}$ is a function $R:{\mathbb A}^*\rightarrow \mathbb{K}$. We denote $R(\omega)$ by $\langle R,\omega\rangle$ and represent $R$ as a formal series $$R=\sum_{\omega\in\mathbb{A}^*}\langle R,\omega\rangle\,\omega, \;\langle R,\omega\rangle\in\mathbb{K}.$$ The sum and product of two formal power series $R$ and $S$ are respectively given by \begin{eqnarray*} R+S&=&\sum_{\omega\in{\mathbb A}^*}(\langle R,\omega\rangle+\langle S,\omega\rangle) \omega\\R.S&=&\sum_{\omega\in \mathbb{A}^*}(\sum_{\omega_1\omega_2=\omega}\langle R,\omega_1\rangle \langle S,\omega_2\rangle) \omega.
\end{eqnarray*} The algebra of noncommutative formal power series is denoted by $\mathbb{K}\langle\la{\mathbb A}\rangle\ran$. There is a notion of convergence on $\mathbb{K}\langle\la{\mathbb A}\rangle\ran$. We say that $R_1, R_2, R_3,\dots$ converges to $R$ if, for all $\omega\in {\mathbb A}^*$, $\langle R_n,\omega\rangle= \langle R,\omega\rangle$ for $n$ large enough. If $\langle R,1\rangle=\alpha\neq 0$, then $R$ has an inverse given by (see Stanley \cite{stanley5001enumerative}) $$R^{-1}=\frac{1}{\alpha}\sum_{n=0}^{\infty}\left(1-\frac{R}{\alpha}\right)^n.$$ Let $B$ be a series having constant term equal to zero, $\langle B,1\rangle =0$. We denote by $\frac{1}{1-B}$ the inverse of the series $1-B$, $$\frac{1}{1-B}:=(1-B)^{-1}=\sum_{n=0}^\infty B^n.$$ A {\em\textcolor{blue} {language}} (on ${\mathbb A}$) is a subset of ${\mathbb A}^*$. We identify a language $L$ with its generating function, the formal power series $$L=\sum_{\omega\in L} \omega.$$ We now consider a special kind of language obtained from a given set of `links' $B\subseteq{\mathbb A}\times{\mathbb A}$. Define $$L_B=\{\omega|(\omega_i,\omega_{i+1})\in B, \mbox{ for every }i=1,2,\dots,\ell(\omega)-1\},$$ and the language $L$ associated to $B$ by \begin{equation}\label{eq.linked}L=1+{\mathbb A}+L_B. \end{equation} We shall call an $L$ of this form a {\em \textcolor{blue}{linked language}}. Define the K-dual $L^!$ to be the language associated with the complementary set of links, $$L^!=1+{\mathbb A}+L_{B^c}.$$ For linked languages we define a second formal power series, $$L^{g}=\sum_{\omega\in L}(-1)^{\ell(\omega)}\omega.$$ We call it the {\em \textcolor{blue}{graded}} generating function of $L$. We have the following inversion formula for linked languages. It is a noncommutative version of Theorem 4.1 in Gessel's PhD thesis \cite{Gesselthesis}, from which we borrow the terminology of linked sets.
Propositions \ref{prop.kdual1} and \ref{prop.kdual2} are indeed particular instances of inversion formulas on generating functions for Koszul algebras and Koszul modules over Koszul algebras. Koszul algebras were introduced in \cite{priddy1970koszul}; see also \cite{polishchuk2005quadratic} for more details on Koszul algebras and the inversion formulas for generating functions of Koszul algebras and modules. \begin{proposition}\label{prop.kdual1} Let $L$ be a linked language, and $L^!$ its K-dual. Then we have \begin{equation} L^!=(L^g)^{-1}. \end{equation} \end{proposition} \begin{proof} The product $L^g.L^!$ is equal to \begin{equation} L^g.L^!=\sum_{(\omega,\omega')\in L\times L^! }(-1)^{\ell(\omega)}\omega.\omega'. \end{equation} Define the function $\phi:L\times L^!\rightarrow L\times L^!$ by $$\phi(\omega_1\omega_2\dots\omega_k,\omega'_1\omega'_2\dots\omega'_j)=(\omega_1\omega_2\dots\omega_k\omega'_1,\omega'_2\dots\omega'_j)$$ when $(\omega_k,\omega'_1)\in B$, or if $\omega=1$ and $\omega'\neq 1$. Make $$\phi(\omega_1\omega_2\dots\omega_k,\omega'_1\omega'_2\dots\omega'_j)=(\omega_1\omega_2\dots\omega_{k-1},\omega_k\omega'_1\omega'_2\dots\omega'_j)$$ if $(\omega_k,\omega'_1)\in B^c$ or if $\omega'=1$ and $\omega\neq 1$. Finally, make $\phi(1,1)=(1,1)$. The function $\phi$ is a sign reversing involution when restricted to the signed set $L\times L^!-\{(1,1)\}$, and $(1,1)$ is its only fixed point. Hence $L^g L^!=1\cdot 1=1$. \end{proof} \begin{example} Let ${\mathbb X}_+$ be the infinite alphabet $\{X_1,X_2,X_3,\dots\}$. Denote by $\mathcal{P}$ the set of partitions $\lambda$ written in weakly increasing order, $\lambda_1\leq\lambda_2\leq\lambda_3\leq\dots$. The set $\mathcal{P}$ is represented as a language with letters in ${\mathbb X}_+$, \begin{equation*} \mathcal{P}=\sum_{\lambda \in \mathcal{P}}X_{\lambda}=\lim_{m\rightarrow \infty}\prod_{n=1}^{m}\frac{1}{1-X_n}=\prod_{n=1}^{\infty}\frac{1}{1-X_n}. \end{equation*} It is a linked language, with set of links $B=\{(X_i,X_j)|i\leq j\}$.
The complement is the set $B^c=\{(X_i,X_j)|i>j\}$, and hence the $K$-dual language $\mathcal{P}^!$ is the generating function of the set of partitions with distinct parts, written in decreasing order. The graded generating function of $\mathcal{P}$ is equal to \begin{equation*} \mathcal{P}^g=\sum_{\lambda \in \mathcal{P}}(-1)^{\ell(\lambda)}X_{\lambda}=\lim_{m\rightarrow\infty}\prod_{n=1}^{m}\frac{1}{1+X_n}=\prod_{n=1}^{\infty}\frac{1}{1+X_n}. \end{equation*} By Proposition \ref{prop.kdual1}, since taking inverses is a continuous operation, \begin{equation*} \mathcal{P}^!=(\mathcal{P}^g)^{-1}=\lim_{m\rightarrow\infty}(1+X_m)(1+X_{m-1})\dots(1+X_1). \end{equation*} This limit can be (symbolically) written as the product $\prod_{n=\infty}^{1}(1+X_n).$ Since $$(1+X_{m})(1+X_{m-1})\dots(1+X_1)=1+\sum_{n=1}^m X_n(1+X_{n-1})(1+X_{n-2})\dots(1+X_1),$$ $\mathcal{P}^!$ is then equal to the series $$ \mathcal{P}^!=\prod_{n=\infty}^{1}(1+X_n)=1+\sum_{n=1}^{\infty} X_n(1+X_{n-1})(1+X_{n-2})\dots(1+X_1).$$ Analogously, the $K$-dual of the language of partitions written in decreasing order is the language of partitions with distinct parts, written in increasing order, \begin{equation}\label{eq.pertitiinsdecreasing} \left(\prod_{n=\infty}^{1}\frac{1}{1-X_n}\right)^!=\left(1+\sum_{n=1}^{\infty}X_n\prod_{j=n}^1\frac{1}{1-X_j}\right)^!=\prod_{n=1}^{\infty}(1+X_n). \end{equation} \end{example}
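The identity $\mathcal{P}^g=\prod_{n\geq 1}(1+X_n)^{-1}$ can be checked to any finite order after the $q$-umbral evaluation $X_n\rightarrow q^n$: the signed sum over all partitions of $n$ must match the coefficient of $q^n$ in $\prod_{k\geq 1}1/(1+q^k)$. The short script below is a sanity check of this kind, not part of the paper.

```python
# q-umbral sanity check of P^g = prod_n 1/(1+X_n): after X_n -> q^n, the
# signed sum over partitions of n must match the series of prod 1/(1+q^k).
N = 15  # truncation order

def partitions(n, max_part=None):
    """Yield the partitions of n as weakly decreasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield [k] + rest

# left side: sum over partitions lambda of (-1)^{ell(lambda)} q^{|lambda|}
signed = [sum((-1) ** len(p) for p in partitions(n)) for n in range(N)]

# right side: expand prod_{k>=1} 1/(1+q^k) = prod_k (1 - q^k + q^{2k} - ...)
series = [1] + [0] * (N - 1)
for k in range(1, N):
    factor = [0] * N
    for m, j in enumerate(range(0, N, k)):
        factor[j] = (-1) ** m
    series = [sum(series[i] * factor[n - i] for i in range(n + 1))
              for n in range(N)]
```

The two coefficient lists agree, reflecting that the sign $(-1)^{\ell(\lambda)}$ is exactly what the substitution $X_n\rightarrow -X_n$ produces factor by factor.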
Denote by $N^!$ (called the $K$-dual of $N$) the $L^!$-module defined by $$N^!={\mathbb A}_1+L_{C^c,B^c},$$ where the complement of $C$ is taken with respect to the set ${\mathbb A}_1\times {\mathbb A}$, $C^c={\mathbb A}_1\times {\mathbb A}-C$. \begin{proposition}\label{prop.kdual2} The generating function for the language $N^!$ defined as above is given by the formula \begin{equation}\label{eq.rightmodule} N^!=N^g L^!=N^g(L^g)^{-1}, \end{equation} \noindent where the graded generating function $N^g$ is defined as \begin{equation} N^g=\sum_{\omega\in N}(-1)^{\ell(\omega)-1}\omega. \end{equation} \end{proposition} \begin{proof} We have to prove that $N^g L^!=\sum_{\omega\in N^!}\omega.$ We have $$N^g L^!=\sum_{(\omega,\omega')\in N\times L^!}(-1)^{\ell(\omega)-1}\omega\omega'.$$ Define the function $\psi:N\times L^!\rightarrow N\times L^!$ by considering the following cases. Assume first that $\ell(\omega)\geq 2$. If $(\omega_k,\omega'_1)\in B$ we make $$\psi(\omega_1\omega_2\dots\omega_k,\omega'_1\omega'_2\dots\omega'_j)=(\omega_1\omega_2\dots\omega_k\omega'_1,\omega'_2\dots\omega'_j).$$ If $(\omega_k,\omega'_1)\in B^c$ or if $\omega'= 1,$ define $$\psi(\omega_1\omega_2\dots\omega_k,\omega'_1\omega'_2\dots\omega'_j)=(\omega_1\omega_2\dots\omega_{k-1},\omega_k\omega'_1\omega'_2\dots\omega'_j).$$ Assume now that $\ell(\omega)=1$. If $(\omega_1,\omega'_1)\in C$ define $$\psi(\omega_1,\omega'_1\omega'_2\dots\omega'_j)=(\omega_1\omega'_1,\omega'_2\omega'_3\dots\omega'_j).$$ Otherwise, if $(\omega_1, \omega_1')\in C^c$ or if $\omega'=1$, we make $\psi(\omega_1,\omega')=(\omega_1,\omega')$. The function $\psi$ is a sign reversing involution, its fixed points being of the form $(\omega_1,\omega')$ with either $(\omega_1,\omega'_1)\in C^c$ or $\omega'=1$.
Then $$N^g L^!=\sum_{(\omega_1,1)\in {\mathbb A}_1\times\{1\}}\omega_1\cdot 1+\sum_{(\omega_1,\omega'):(\omega_1,\omega'_1)\in C^c,\,\omega'\in L^!-\{1\}}\omega_1\omega'={\mathbb A}_1+L_{C^c,B^c}=N^!.$$ \end{proof} \section{Shift and the shift plethystic trees language} Consider the algebra $\mathbb{K}\langle\la{\mathbb X}\rangle\ran$, ${\mathbb X}$ being the alphabet $${\mathbb X}=\{X_0,X_1,X_2,\dots\}.$$ Let $\kappa=(\kappa_1,\kappa_2,\dots,\kappa_m)$ be an element of ${\mathbb N}^m$ (a weak composition). We denote by $X_{\kappa}$ the word $X_{\kappa_1}X_{\kappa_2}\dots X_{\kappa_m}$. As usual, the empty word will be denoted by $1$. We denote by $|\kappa|$ the sum of its parts, $$|\kappa|=\kappa_1+\kappa_2+\dots+\kappa_m.$$ Let $R$ be an element of $\mathbb{K}\langle\la{\mathbb X}\rangle\ran$. The series $R$ is written as $$R=\sum_{\kappa\in {\mathbb N}^*}\langle R, X_{\kappa}\rangle X_{\kappa}.$$ \begin{remark}\normalfont{Let $\mathscr{S}$ be a set of weak compositions. In the rest of the article, when there is no risk of confusion, we identify $\mathscr{S}$ with the associated language $\{X_{\kappa}|\kappa\in \mathscr{S}\}$ and with its generating series $\sum_{\kappa\in \mathscr{S}}X_{\kappa}.$} \end{remark} We shall call $\kappa$ a (strong) composition if $\kappa_i\neq 0$ for every $i$. In what follows, the word `composition' will by default mean {\em \textcolor{blue}{strong composition}}. \begin{definition} Define $\sigma:\mathbb{K}\langle\la{\mathbb X}\rangle\ran\rightarrow\mathbb{K}\langle\la{\mathbb X}\rangle\ran$ by extending the shift $\sigma X_i=X_{i+1}$, $i=0,1,2,\dots$, as a continuous algebra map; equivalently, by making it multiplicative and requiring it to commute with the series sum symbol. \end{definition} \begin{definition}A {\em \textcolor{blue}{shift plethystic}} (SP) tree is a plane rooted tree whose vertices are colored with colors in ${\mathbb N}$, the color of a given vertex indicating its height (the length of the path from the root). \end{definition} Let $T$ be a shift plethystic tree.
We associate to $T$ the word $\omega_T$ on ${\mathbb X}$, obtained by reading the vertices of $T$ in preorder from left to right as follows. Assume that $T$ consists of $k\geq 0$ sub-trees, $T_1, T_2,\dots,T_k$, attached to the root (of color $0$). The preorder word of $T$ is then defined recursively by \begin{equation}\label{eq.recpre}\omega_T=\begin{cases}X_0&\mbox{ if $k=0$} \\ X_0\,\sigma\omega_{T_1}\,\sigma\omega_{T_2}\dots\sigma\omega_{T_k}&\mbox{ if $k> 0$.}\end{cases} \end{equation} We denote by $\mathscr{A}$ the language of shift plethystic trees, \begin{equation} \mathscr{A}=\sum_{T}\omega_T. \end{equation} \begin{figure} \begin{center}\includegraphics[width=70mm]{Plethy1.png} \end{center}\caption{Shift plethystic tree and associated word.}\label{fig.plethystictree} \end{figure} It is easy to check that the tree $T$ is uniquely recovered from its word $\omega_T$. The series $\sigma \mathscr{A}$ gives us the language of shift plethystic trees with the root colored with color $1$, and every vertex colored with its height plus $1$. Similarly, $\sigma^n\mathscr{A}$ is the language of shift plethystic trees with the root colored $n$ and each vertex colored $n$ plus its height. \begin{theorem} The language $\mathscr{A}$ can be expanded as the noncommutative continued fraction \begin{equation} \mathscr{A}=X_0\cfrac{1}{1-X_1\cfrac{1}{1-X_2\cfrac{1}{\ddots}}}=\lim_{n\rightarrow\infty}X_0\cfrac{1}{1-X_1\cfrac{1}{1-X_2\cfrac{1}{\ddots 1- X_{n-1}\cfrac{1}{{1-X_n}}}}} \end{equation} \noindent \end{theorem} \begin{proof}Assume that the root of an SP tree has $k$ children, $k\geq 0$. By the definition of $\omega_T$ (Eq. (\ref{eq.recpre})), to read its colors we first read the root and then read in preorder, from left to right, the colors of each one (or none) of the $k$ trees above the root. Each of them will produce a word in $\sigma\mathscr{A}$.
Hence we have the identity \begin{equation}\label{eq.rectree}\mathscr{A}=X_0(1+\sigma\mathscr{A}+(\sigma\mathscr{A})^2+(\sigma\mathscr{A})^3+\dots)=X_0\frac{1}{1-\sigma\mathscr{A}}.\end{equation} Applying $\sigma^{j-1}$, $j=1,2,\dots,$ to both sides of the above identity we get $$\sigma^{j-1}\mathscr{A}=X_{j-1}\frac{1}{1-\sigma^j\mathscr{A}}.$$ Recursively from Eq. (\ref{eq.rectree}), we obtain \begin{equation}\label{eq.rectree1}\mathscr{A}=X_0\cfrac{1}{1-X_1\cfrac{1}{1-X_2\cfrac{1}{\ddots 1- X_{n-1}\cfrac{1}{{1-\sigma^n\mathscr{A}}}}}}.\end{equation} Denote by $\mathscr{A}_n$ the language $\mathscr{A}$ restricted to the symbols $\{X_0,X_1,\dots,X_n\}$ (the words of SP trees of height at most $n$). We have that $\lim_{n\rightarrow\infty}\mathscr{A}_n=\mathscr{A}$ and, since $\sigma^n\mathscr{A}_n=X_n$, from Eq. (\ref{eq.rectree1}) we obtain the result. \end{proof} \begin{proposition}\label{prop.tree-comp}\normalfont{ The words coming from shift plethystic trees are completely characterized by the following properties:\begin{enumerate} \item The first letter is $X_0$. \item If $\ell(\omega_T)>1$, it is followed by a word of the form $X_{\kappa}$, $\kappa$ being a composition with first element equal to $1$ and risings at most $1$, $\kappa_{i+1}-\kappa_{i}\leq 1$. \end{enumerate}} \end{proposition} \begin{proof} Easy from Eq. (\ref{eq.recpre}), by induction on the number of vertices. \end{proof} \begin{definition}We denote by $\mathscr{C}$ the language of compositions, and by $\mathscr{C}^{(1)}$ the language of compositions with risings at most $1$. Observe that $\sigma\mathscr{C}^{(1)}$ consists of the compositions in $\mathscr{C}^{(1)}$ in which every part is at least $2$. More generally, we define $\mathscr{C}^{(m)}$ to be the language of compositions with risings at most $m$.\end{definition} All the languages $\mathscr{C}$ and $\mathscr{C}^{(m)}$, $m\geq 1$, include the empty word.
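The recursion $\mathscr{A}=X_0\frac{1}{1-\sigma\mathscr{A}}$ (Eq. (\ref{eq.rectree})) lends itself to a machine check: tracking each SP-tree word only by its number of letters and the sum of its colors, one can compute the truncated expansion of $\mathscr{A}$ coefficient by coefficient. The script below is an illustration, not part of the paper.

```python
# Truncated expansion of A = X_0 * 1/(1 - sigma A), tracking each SP-tree
# word by the pair (number of letters, sum of colors). Illustration only.
ZMAX = 6  # keep words with at most ZMAX letters

def multiply(a, b):
    out = {}
    for (i1, j1), c1 in a.items():
        for (i2, j2), c2 in b.items():
            if i1 + i2 <= ZMAX:
                key = (i1 + i2, j1 + j2)
                out[key] = out.get(key, 0) + c1 * c2
    return out

def shift(poly):
    # sigma raises every color by 1, adding the word length to the color sum
    return {(i, j + i): c for (i, j), c in poly.items()}

def trees(depth):
    if depth == 0:
        return {}
    b = shift(trees(depth - 1))
    geo = {(0, 0): 1}              # 1 + b + b^2 + ... , truncated
    power = {(0, 0): 1}
    for _ in range(ZMAX):
        power = multiply(power, b)
        for key, c in power.items():
            geo[key] = geo.get(key, 0) + c
    # prepend X_0: one more letter, of color 0
    return {(i + 1, j): c for (i, j), c in geo.items() if i < ZMAX}

A = trees(ZMAX)   # e.g. A[(3, 2)] == 1 and A[(3, 3)] == 1: the two 3-vertex trees
```

Summing the coefficients over all color sums for a fixed number of letters recovers the Catalan numbers, since plane rooted trees on $n$ vertices are Catalan-many.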
Proposition \ref{prop.tree-comp} can now be restated as follows, in terms of generating functions. \begin{proposition}\normalfont{The language $\mathscr{A}$ can be expanded as \begin{equation}\label{eq.splanguage} \mathscr{A}=X_0(1+\sum_{\kappa\in\mathscr{C}^{(1)},\,\kappa_1=1}X_{\kappa}). \end{equation}} \end{proposition} Define the alphabets $${\mathbb X}_+=\{X_1,X_2,X_3,\dots\}\mbox{ and }{\mathbb X}_{+2}=\{X_2,X_3,\dots\}.$$ \begin{definition} We denote by $N$ the $\mathscr{C}^{(1)}$-module of compositions $\kappa$ such that $\kappa_1\geq 2$, \begin{equation} N=\sum_{\kappa\in\mathscr{C}^{(1)},\;\kappa_1\geq 2}X_{\kappa}. \end{equation} \end{definition} \noindent The languages $\mathscr{C}^{(1)}$ and $\sigma\mathscr{C}^{(1)}$ are both linked. The language $\mathscr{C}^{(1)}\subset {\mathbb X}_+^*$ has set of links \begin{equation*} B=\{(X_i,X_j)|j-i\leq 1\}\subset {\mathbb X}_+\times{\mathbb X}_+, \end{equation*} and $\sigma\mathscr{C}^{(1)}\subset{\mathbb X}_{+2}^*$ has the shifted set of links \begin{equation*} \sigma B=\{(X_i,X_j)|j-i\leq 1,\; i,j\geq 2\}\subset {\mathbb X}_{+2}\times{\mathbb X}_{+2}. \end{equation*} The $\mathscr{C}^{(1)}$-module $N$ has as set of links $C\subset{\mathbb X}_{+2}\times{\mathbb X}_+$, \begin{equation*} C=\{(X_i,X_j)|j-i\leq 1,\; i\geq 2\}\subset {\mathbb X}_{+2}\times{\mathbb X}_{+}. \end{equation*} \begin{definition} We denote by $\mathcal{P}_m$ the language of $m$-distinct partitions (in the terminology of \cite{Andrews2004}). Being more explicit, $\mathcal{P}_m$ is the language of words of the form $X_{\lambda}$, $\lambda$ being a partition (written in increasing order), $1\leq\lambda_1<\lambda_2<\lambda_3<\dots$, satisfying \begin{equation}\label{eq.rest}\lambda_{i+1}-\lambda_i\geq m,\end{equation} (the empty and the singleton words being included in $\mathcal{P}_m$). In particular, $\mathcal{P}_0=\mathcal{P}$ is the language of partitions with repetitions, $\mathcal{P}_1$ that of partitions without repetitions, and $\mathcal{P}_2$ the language of $2$-distinct partitions, directly related to the combinatorics of the Rogers-Ramanujan identities. Observe that $\sigma\mathcal{P}_2$ is the language of $2$-distinct partitions where each part is at least $2$. \end{definition} \begin{proposition}\normalfont{ We have that $\mathcal{P}_2$ is the $K$-dual of $\mathscr{C}^{(1)}$, and $\sigma\mathcal{P}_2$ is the $K$-dual of $\sigma\mathscr{C}^{(1)}$. The $K$-dual of $N$ is the $\mathcal{P}_2$-module $\mathcal{N}$ of $2$-distinct partitions with first part at least $2$, \begin{eqnarray*} \mathcal{P}_2&=&(\mathscr{C}^{(1)})^!\\ \sigma\mathcal{P}_2&=& (\sigma\mathscr{C}^{(1)})^!\\ \mathcal{N}&=&N^! \end{eqnarray*} } \end{proposition} \begin{proof} Easy, by simple inspection. \end{proof} Observe that, since $\lambda$ is a partition written in increasing order, if $\lambda_1\geq 2$ then all the remaining parts are also at least $2$, and the series $\mathcal{N}$ equals the non-constant part of $\sigma\mathcal{P}_2$, $$\mathcal{N}=\sigma\mathcal{P}_2-1.$$ Their graded generating functions are related as follows: \begin{equation}\label{Eq.graded}\sigma\mathcal{P}_2^g=1+\sum_{\lambda_{i+1}-\lambda_{i}\geq 2,\;\lambda_1\geq 2}(-1)^{\ell(\lambda)}X_{\lambda}=1-\sum_{\lambda_{i+1}-\lambda_{i}\geq 2,\;\lambda_1\geq 2}(-1)^{\ell(\lambda)-1}X_{\lambda}=1-\mathcal{N}^g.\end{equation} \begin{theorem}\label{th.quotient} The language $\mathscr{A}$ can be expressed as the product \begin{equation}\label{eq.plethyRR}\mathscr{A}= X_0(\sigma\mathcal{P}_2^g)(\mathcal{P}_2^g)^{-1}. \end{equation} \end{theorem} \begin{proof} By Eq. (\ref{eq.splanguage}) we have \begin{equation*} \mathscr{A}=X_0(\mathscr{C}^{(1)}-N). \end{equation*} Since the operation of taking duals is involutive, $\mathcal{P}_2^!=\mathscr{C}^{(1)}$, $(\sigma\mathcal{P}_2)^!=\sigma\mathscr{C}^{(1)}$, and $\mathcal{N}^!=N$. By Eq.
(\ref{Eq.graded}), Prop. \ref{prop.kdual2} and Prop. \ref{prop.kdual1}, $$\mathscr{C}^{(1)}-N=(\mathcal{P}^g_2)^{-1}-\mathcal{N}^g(\mathcal{P}^g_2)^{-1}=(1-\mathcal{N}^g)(\mathcal{P}_2^g)^{-1}=(\sigma\mathcal{P}_2^g)(\mathcal{P}^g_2)^{-1}.$$ \end{proof} Eq. (\ref{eq.plethyRR}) can be written more explicitly as the product \begin{equation}\label{eq.arbRR1} \mathscr{A}=X_0(\sigma\mathcal{P}_2^g)(\mathcal{P}_2^g)^{-1}=X_0(1+\sum_{\lambda_{i+1}-\lambda_i\geq 2,\,\lambda_1\geq 2}(-1)^{\ell(\lambda)}X_{\lambda})(1+\sum_{\lambda_{i+1}-\lambda_i\geq 2,\, \lambda_1\geq 1}(-1)^{\ell(\lambda)}X_{\lambda})^{-1}. \end{equation} Since $(\mathscr{C}^{(1)})^{-1}=\mathcal{P}_2^g$ and $(\sigma\mathscr{C}^{(1)})^{-1}=\sigma\mathcal{P}_2^g$, Theorem \ref{th.quotient} has the following dual form. \begin{corollary}The language $\mathscr{A}$ can be written as the product \begin{equation}\label{eq.arbRR2} \mathscr{A}=X_0 (\sigma \mathscr{C}^{(1)})^{-1}\mathscr{C}^{(1)}=X_0(1+\sum_{\kappa_{i+1}-\kappa_i\leq 1,\, \kappa_i\geq 2}X_{\kappa})^{-1}(1+\sum_{\kappa_{i+1}-\kappa_i\leq 1}X_{\kappa}). \end{equation} \end{corollary} \section{Path length and $q$-series} For a series $S$ on the alphabet ${\mathbb X}$, making the \textcolor{blue}{ $q$-umbral evaluation $X_k\rightarrow zq^k$} in the pair of commuting variables $z$ and $q$, we obtain a $q$-series that, by abuse of language, we denote with the same symbol $S$, $S(z,q)$. Observe that for every series $S$ in $\mathbb{K}\langle \langle{\mathbb X}\rangle\ran$ we have \begin{equation}\label{eq.shiftq}(\sigma S)(z,q)=S(zq,q).\end{equation} Recall that the \textcolor{blue}{\em{path length}} of a rooted tree is defined to be the sum of the heights of its vertices. When we make the substitution $X_k\rightarrow zq^k$ in the word $\omega_T$ associated to a tree $T$ we get $z^n q^{\mathrm{pl}(T)}$, where $n$ is the number of vertices of $T$ and $\mathrm{pl}(T)$ its path length. For example, in the tree of Fig.
\ref{fig.plethystictree}, the $q$-substitution in the word $X_0X_1X_2X_2X_1X_2X_2X_2X_3$ gives us $$X_0X_1X_2X_2X_1X_2X_2X_2X_3\mapsto z(zq)(zq^2)^2(zq)(zq^2)^3(zq^3)=z^9q^{15}.$$ Then, the $q$-series $\mathscr{A}(z,q)$ counts plane rooted trees according to their path length. Observe that the path length of a plane rooted tree with $n$ vertices is bounded by the path length of the branch-less tree, which is equal to $0+1+2+3+\dots +n-1=\binom{n}{2}$. From Eq. (\ref{eq.rectree1}) we get \begin{equation} \mathscr{A}(z,q)=\sum_{n=1}^{\infty}(\sum_{m=0}^{\binom{n}{2}} P(n,m)q^m)z^n=\cfrac{z}{1-\cfrac{zq}{1-\cfrac{zq^2}{\ddots}}}, \end{equation} \noindent where $P(n,m)$ is the number of plane rooted trees on $n$ vertices having path length equal to $m$, \begin{eqnarray*}\mathscr{A}(z,q)&=&z + q z^2 + (q^2 + q^3) z^3+ (q^3 + 2 q^4 + q^5 + q^6) z^4\\ &+& (q^4 + 3 q^5 + 3 q^6 + 3 q^7 + 2 q^8 + q^9 + q^{10}) z^5\\ &+& (q^5 + 4 q^6 + 6 q^7 + 7 q^8 + 7 q^9 + 5 q^{10} + 5 q^{11} + 3 q^{12} + 2 q^{13} + q^{14} + q^{15}) z^6+\dots.\end{eqnarray*} From Eq. (\ref{eq.arbRR1}), \begin{equation}\label{eq.arbrr1} \mathscr{A}(z,q)=z\frac{\mathcal{P}_2^g(zq,q)}{\mathcal{P}_2^g(z,q)}=z\frac{1+\sum_{\lambda\in\sigma\mathcal{P}_2}(-z)^{\ell(\lambda)}q^{|\lambda|}}{1+\sum_{\lambda\in\mathcal{P}_2}(-z)^{\ell(\lambda)}q^{|\lambda|}}. \end{equation} From Eq. (\ref{eq.arbRR2}) we obtain the dual expression \begin{equation} \mathscr{A}(z,q)=z\frac{\mathscr{C}^{(1)}(z,q)}{\mathscr{C}^{(1)}(zq,q)}=z\frac{1+\sum_{\kappa \in\mathscr{C}^{(1)}}z^{\ell(\kappa)}q^{|\kappa|}}{1+\sum_{\kappa \in\sigma\mathscr{C}^{(1)}}z^{\ell(\kappa)}q^{|\kappa|}}.
\end{equation} \section{Rogers-Ramanujan Identities and Compositions}\label{sec.RRcompositions} \begin{theorem}\label{th.RRcompositions} We have the following identities \begin{eqnarray} (\mathscr{C}^{(1)})(-1,q)= (\mathscr{C}^{(1)})^{g}(1,q)=1+\sum_{\kappa\in\mathscr{C}^{(1)}}(-1)^{\ell(\kappa)}q^{|\kappa|}&=&\prod_{k=0}^{\infty}(1-q^{5k+1})(1-q^{5k+4})\\ (\sigma\mathscr{C}^{(1)})(-1,q)=(\sigma\mathscr{C}^{(1)})^g(1,q)=1+\sum_{\kappa\in\sigma\mathscr{C}^{(1)}}(-1)^{\ell(\kappa)}q^{|\kappa|}&=&\prod_{k=0}^{\infty}(1-q^{5k+2})(1-q^{5k+3}) \end{eqnarray} \end{theorem}\begin{proof} From Proposition \ref{prop.kdual1}, $(\mathscr{C}^{(1)})^g=(\mathcal{P}_2)^{-1}$ and $(\sigma\mathscr{C}^{(1)})^g=(\sigma\mathcal{P}_2)^{-1}$. Then, $q$-umbral evaluation gives us \begin{eqnarray*} (\mathscr{C}^{(1)})^{g}(z,q)=1+\sum_{\kappa\in\mathscr{C}^{(1)}}(-1)^{\ell(\kappa)}z^{\ell(\kappa)}q^{|\kappa|}&=&\frac{1}{1+\sum_{\lambda\in\mathcal{P}_2} z^{\ell(\lambda)}q^{|\lambda|}}\\ (\sigma\mathscr{C}^{(1)})^{g}(z,q)=1+\sum_{\kappa\in\sigma\mathscr{C}^{(1)}}(-1)^{\ell(\kappa)}z^{\ell(\kappa)}q^{|\kappa|}&=&\frac{1}{1+\sum_{\lambda\in\sigma\mathcal{P}_2} z^{\ell(\lambda)}q^{|\lambda|}} \end{eqnarray*} By the well-known identities \begin{eqnarray*} 1+\sum_{\lambda\in\mathcal{P}_2} z^{\ell(\lambda)}q^{|\lambda|}&=&\sum_{n=0}^{\infty}\frac{z^n q^{n^2}}{(1-q)(1-q^2)\dots (1-q^n)}\\ 1+\sum_{\lambda\in\sigma\mathcal{P}_2} z^{\ell(\lambda)}q^{|\lambda|}&=&\sum_{n=0}^{\infty}\frac{z^n q^{n(n+1)}}{(1-q)(1-q^2)\dots (1-q^n)}, \end{eqnarray*} using the Rogers-Ramanujan identities (Eq. (\ref{eq.RR1}) and Eq. (\ref{eq.RR2})) we get the result. \end{proof} Let $\mathscr{C}^{(1)}[n]$ and $\mathscr{C}^{(1)}[n,k]$ respectively be the set of compositions of $n$ in $\mathscr{C}^{(1)}$, and the set of compositions of $n$ in $\mathscr{C}^{(1)}$ having exactly $k$ parts. Similarly, define $\sigma\mathscr{C}^{(1)}[n]$ and $\sigma\mathscr{C}^{(1)}[n,k]$.
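Theorem \ref{th.RRcompositions} can be checked numerically. The following Python sketch (ours, not part of the original text) enumerates the compositions in $\mathscr{C}^{(1)}[n,k]$ and $\sigma\mathscr{C}^{(1)}[n,k]$ directly and compares the signed counts with the coefficients of the sparse products on the right hand sides:

```python
N = 14  # verify the coefficients of q^0, ..., q^N

def comps(n, prev=None):
    """Compositions of n with parts >= 1 in which every part exceeds
    its predecessor by at most 1, i.e. the words of C^(1) of weight n."""
    if n == 0:
        yield ()
        return
    top = n if prev is None else min(n, prev + 1)
    for a in range(1, top + 1):
        for rest in comps(n - a, a):
            yield (a,) + rest

# left hand sides: 1 plus the signed enumerations of C^(1) and sigma C^(1)
lhs1 = [1] + [0] * N
lhs2 = [1] + [0] * N
for n in range(1, N + 1):
    for kappa in comps(n):
        lhs1[n] += (-1) ** len(kappa)
        if min(kappa) >= 2:  # sigma C^(1): every part at least 2
            lhs2[n] += (-1) ** len(kappa)

def sparse_product(residues):
    """Coefficients of prod over m >= 1 with m mod 5 in residues of (1 - q^m)."""
    p = [1] + [0] * N
    for m in range(1, N + 1):
        if m % 5 in residues:
            p = [p[i] - (p[i - m] if i >= m else 0) for i in range(N + 1)]
    return p

assert lhs1 == sparse_product({1, 4})
assert lhs2 == sparse_product({2, 3})
```

Here, as above, $\mathscr{C}^{(1)}$ consists of the compositions with parts at least $1$ in which every part exceeds its predecessor by at most $1$, and $\sigma\mathscr{C}^{(1)}$ of those with every part at least $2$.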
From Theorem \ref{th.RRcompositions}, we get the identities \begin{eqnarray}\label{eq.signedpartitions1} 1+\sum_{n=1}^{\infty} (\sum_{k=1}^{n}(-1)^k |\mathscr{C}^{(1)}[n,k]| ) q^n&=&\prod_{k=0}^{\infty}(1-q^{5k+1})(1-q^{5k+4})\\\label{eq.signedpartitions2}1+\sum_{n=2}^{\infty} (\sum_{k=1}^{n}(-1)^k |\sigma\mathscr{C}^{(1)}[n,k]| ) q^n&=&\prod_{k=0}^{\infty}(1-q^{5k+2})(1-q^{5k+3}) \end{eqnarray} Observe that the product on the right hand side of Eq. (\ref{eq.signedpartitions1}) enumerates the partitions (in decreasing order) into distinct parts congruent to $1$ or $4$ modulo $5$, signed by their number of parts. The right hand side of Eq. (\ref{eq.signedpartitions2}) enumerates a similar kind of signed partitions, with each part congruent to $2$ or $3$ modulo $5$. For example, $4+1$ is the only partition of $5$ enumerated by the right hand side of Eq. (\ref{eq.signedpartitions1}), and $\{7+3,\; 8+2\}$ are the only partitions of $10$ enumerated by the right hand side of Eq. (\ref{eq.signedpartitions2}). The compositions in $\sigma\mathscr{C}^{(1)}[10]$ and in $\sigma\mathscr{C}^{(1)}[11]$ are given respectively in Tables \ref{tab.comp1} and \ref{tab.comp2}.
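The row sizes in Tables \ref{tab.comp1} and \ref{tab.comp2}, as well as the signed totals $2$ and $1$ predicted by the right hand side of Eq. (\ref{eq.signedpartitions2}), can be reproduced by a direct enumeration. The following Python sketch is ours and not part of the original text:

```python
def comps(n, prev=None):
    # compositions with parts >= 1 in which every part exceeds its predecessor by at most 1
    if n == 0:
        yield ()
        return
    top = n if prev is None else min(n, prev + 1)
    for a in range(1, top + 1):
        for rest in comps(n - a, a):
            yield (a,) + rest

def rows(n):
    """Number of compositions in sigma C^(1)[n] with exactly k parts, as a dict k -> count."""
    r = {}
    for kappa in comps(n):
        if min(kappa) >= 2:  # restrict to sigma C^(1): every part at least 2
            r[len(kappa)] = r.get(len(kappa), 0) + 1
    return r

# row sizes of the two tables (colored entries included)
assert rows(10) == {1: 1, 2: 4, 3: 7, 4: 7, 5: 1}
assert rows(11) == {1: 1, 2: 5, 3: 9, 4: 11, 5: 5}

# signed totals: the coefficients of q^10 and q^11 in prod (1-q^{5k+2})(1-q^{5k+3})
assert sum((-1) ** k * c for k, c in rows(10).items()) == 2
assert sum((-1) ** k * c for k, c in rows(11).items()) == 1
```

Note that the weight column printed in the tables appears to refer to the restricted sets $\widehat{\sigma\mathscr{C}^{(1)}}[n,k]$ introduced below, with the colored compositions discarded; the counts above include them.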
\begin{table}\begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|c|} \hline $k$ & & & & & & & & weight \\ \hline 1 &$10$ & & & & & & & \textcolor{red}{-1} \\ \hline 2 & $55$ & $64$ &$\textcolor{blue}{73}$ &$\textcolor{blue}{82}$ &&&&\textcolor{blue}{2} \\ \hline 3 &$532$ &$523$&$622$ &$442$ &$433$ &$343$ &$334$ &\textcolor{red}{-7} \\ \hline 4 &$2233$ & $2323$ &$3223$ &$2332$ &$3232$ & $3322$ &$4222$ & \textcolor{blue}{7} \\ \hline 5 & $22222$ & && && && \textcolor{red}{-1} \\ \hline \end{tabular}\caption{Compositions in $\sigma\mathscr{C}^{(1)}[10]$}\label{tab.comp1}\end{center}\end{table} \begin{table}\begin{center} \begin{tabular}{|l|l|l|l|l|l|l|l|c|} \hline $k$ & & & & & & & & weight \\ \hline 1 &$11$ & & & & & & & \textcolor{red}{-1} \\ \hline 2 & $56$ & $65$ &$74$ &$\textcolor{blue}{83}$ &$92$&&&\textcolor{blue}{4} \\ \hline 3 &$722$&$632$&$623$&$542$ &$533$ &$452$&$443$& \textcolor{red}{-9}\\ \hline &$434$&$344$&&&&&&\\\hline 4&$5222$&$4322$&$4232$&$4223$&$3422$&$3332$&$3323$&\textcolor{blue}{11}\\\hline&$3233$&$2333$&$2342$&$2234$ &&&&\\\hline 5 &$32222$& $23222$ &$22322$ &$22232$&$22223$ &&& \textcolor{red}{-5} \\ \hline \end{tabular}\caption{Compositions in $\sigma\mathscr{C}^{(1)}[11]$}\label{tab.comp2}\end{center}\end{table} Consider the set $\widehat{\mathscr{C}^{(1)}}[n,k]$ of restricted compositions. This set excludes from $\mathscr{C}^{(1)}[n,k]$ only the strictly decreasing compositions all of whose parts are congruent to $1$ or $4$ modulo $5$. In a similar way we define $\widehat{\sigma\mathscr{C}^{(1)}}[n,k]$, excluding the strictly decreasing compositions with parts congruent to $2$ or $3$ modulo $5$. Then, the Rogers-Ramanujan identities have the following combinatorial form in terms of compositions. \begin{theorem}The signed sets $\widehat{\mathscr{C}^{(1)}}[n]$ and $\widehat{\sigma\mathscr{C}^{(1)}}[n]$ both have total weight zero, \begin{eqnarray} &&\sum_{k=1}^n(-1)^k|\widehat{\mathscr{C}^{(1)}}[n,k]|=0\mbox{, for $n\geq 1$}\\ &&\sum_{k=1}^n(-1)^k |\widehat{\sigma\mathscr{C}^{(1)}}[n,k]|=0\mbox{, for $n\geq 2$}.
\end{eqnarray} \end{theorem} \section{Shift plethysm, and general shift-plethystic trees}\label{sec.splety} \begin{definition} Let $R$ be a series in $\mathbb{K}\langle\langle {\mathbb X}\rangle\ran$ with zero constant term, $\langle R,1\rangle=0$. We define the \textcolor{blue}{{\em shift-plethystic substitution}} of $R$ in a word $X_\kappa=X_{\kappa_1}X_{\kappa_2}X_{\kappa_3}\dots X_{\kappa_l}$, as the substitution of the shift $\sigma^{\kappa_i}R$ on each of the letters of $X_{\kappa}$, \begin{equation*} X_{\kappa}\circ_s R:=(\sigma^{\kappa_1}R)(\sigma^{\kappa_2}R)\dots(\sigma^{\kappa_l}R). \end{equation*} For a formal power series $T$, define the shift plethysm $T\circ_s R$ by \begin{equation}\label{eq.shiftplethys} T\circ_s R=\sum_{\kappa\in{\mathbb N}^*}\langle T,X_{\kappa}\rangle X_{\kappa}\circ_s R=\sum_{\kappa\in{\mathbb N}^*}\langle T,X_{\kappa}\rangle (\sigma^{\kappa_1}R)(\sigma^{\kappa_2}R)\dots(\sigma^{\kappa_l}R). \end{equation} \end{definition} The series on the right hand side of Eq. (\ref{eq.shiftplethys}) is convergent. To see this, let us denote by $R^{\kappa}$ the product $(\sigma^{\kappa_1}R)(\sigma^{\kappa_2}R)\dots(\sigma^{\kappa_l}R)$. We see that $\langle R^{\kappa},X_{\tau}\rangle=0$ whenever either $\ell(\kappa)>\ell(\tau)$ or $|\kappa|>|\tau|$. Hence, for each fixed $\tau$, the set $\{\kappa\,|\,\langle R^{\kappa},X_{\tau}\rangle\neq 0\}$ is finite. Shift plethysm is an associative operation having $X_0$ as identity. \begin{proposition}\normalfont Every series $R$ with zero constant term and such that $\langle R,X_0\rangle\neq 0$ has a two-sided shift-plethystic inverse, denoted $R^{\langle-1\rangle}$, $$R\circ_s R^{\langle-1\rangle}=R^{\langle-1\rangle}\circ_s R=X_0.$$ \end{proposition} \begin{proof}Let $\alpha\neq 0$ be the value of $R$ at $X_0$, $\langle R,X_0\rangle=\alpha\neq 0$. Define $R_+=R-\alpha X_0$, and the series $T$ by the implicit equation \begin{equation*} T=\alpha^{-1}(X_0-R_+\circ_s T). \end{equation*} From here we get $\alpha T+R_+\circ_s T=X_0$.
This can be written as $(\alpha X_0+R_+)\circ_s T=R\circ_s T=X_0$, so $T$ is a right inverse of $R$; by associativity it is also a left inverse, and $R^{\langle -1\rangle}=T$. \end{proof} \subsection{Shift-plethysm and $q$-composition of series}\label{sec.pletitrees} In this subsection we show how shift-plethysm generalizes the classical definition of $q$-composition of series. This is relevant due to the importance of the $q$-Lagrange inversion formulas for applications in proving identities in $q$-series (see \cite{andrews1975qlagrange}, \cite{gessel1980noncommutative}, \cite{garsia1981qLag}, \cite{gessel1983applications}, \cite{garsia1986novel}, \cite{krattenthaler1988qLag}). A general shift-plethystic Lagrange inversion, not yet found, would lead to new forms of $q$-Lagrange inversion as well as to the reinterpretation in a general context of the known ones. Shift-plethysm also offers the advantage, in contrast to $q$-substitution, of being an associative operation. As a consequence, the shift-plethystic inverse is two-sided, again in contrast to the known forms of $q$-composition inverses. \\ Let $C$ be the series $$C=\sum_{n=1}^{\infty}c_nX_0X_1X_2\dots X_{n-1}.$$ Consider the shift plethysm $H:=C\circ_s R$, $R$ being an arbitrary series with zero constant term. We have \begin{equation}\label{eq.qandshiftplethys} H= C\circ_s R=\sum_{n=1}^{\infty}c_n R(\sigma R)(\sigma^2R)\dots (\sigma^{n-1}R). \end{equation} Taking $q$-series, by Eq. (\ref{eq.shiftq}) we recover the classical $q$-substitution, \begin{equation} H(z,q)= (C\circ_s R)(z,q)=\sum_{n=1}^{\infty}c_n R(z,q)R(zq,q)R(zq^2,q)\dots R(zq^{n-1},q). \end{equation} Now consider $R$ to be a series in the variable $X_0$, and express it in the form $R(X_0)=X_0\phi^{-1}(X_0)$. Shift plethysm with $C$ then gives \begin{equation} H=\sum_{n=1}^{\infty}c_n\frac{X_0}{\phi(X_0)}\frac{X_1}{\phi(X_1)}\dots\frac{X_{n-1}}{\phi(X_{n-1})}, \end{equation} which, by $q$-umbral evaluation, becomes \begin{equation} H(z,q)=\sum_{n=1}^{\infty}c_n\frac{q^{\binom{n}{2}}z^n}{\phi(z)\phi(zq)\dots\phi(zq^{n-1})}.
\end{equation} Obtaining $c_n$ in terms of the $h_n$ in the expansion of $H(z,q)$ is similar to the $q$-Lagrange inversion problem in \cite{andrews1975qlagrange}. \subsection{Enriched shift-plethystic trees} In this section we introduce the $M$-enriched shift-plethystic trees, $M$ being a normalized invertible (non-commutative) series, based on the analogous notion formalized by Joyal in \cite{joyal1981theorie} and on its plethystic generalization in the commutative framework of colored species \cite{Mendezava}. \begin{definition} Let $M$ be a series with constant term equal to $1$, $\langle M,1\rangle=1$. We define the series of $M$-enriched trees by the implicit equation \begin{equation}\label{eq.shiftrees} \mathscr{A}_M=X_0(M\circ_s \mathscr{A}_M). \end{equation}\end{definition} \begin{proposition}\label{prop.treeinverse}The shift-plethystic inverse of $\mathscr{A}_M$ is given by the formula \begin{equation*}(\mathscr{A}_M)^{\langle -1\rangle}=X_0M^{-1}.\end{equation*} \end{proposition} \begin{proof} From Eq. (\ref{eq.shiftrees}) we have $\mathscr{A}_M(M^{-1}\circ_s \mathscr{A}_M)=(X_0 M^{-1})\circ_s\mathscr{A}_M=X_0.$ \end{proof} In the most elementary examples, where the shift-plethystic inverse can be easily computed, plethystic inversion for enriched trees leads by $q$-umbral evaluation to generalizations of some classical formulas.\\ \begin{example}\label{ex.shiftplethy} From Eq. (\ref{eq.rectree}), the SP trees $\mathscr{A}$ satisfy the implicit equation \begin{equation} \mathscr{A}=X_0(\frac{1}{1-X_1}\circ_s\mathscr{A}). \end{equation} Hence, they are obtained by enriching with the series $\frac{1}{1-X_1}$, $\mathscr{A}=\mathscr{A}_{\frac{1}{1-X_1}}$.
From this we get \begin{equation*} \mathscr{A}(1-\sigma\mathscr{A})=\mathscr{A}-\mathscr{A}\sigma\mathscr{A}=(X_0-X_0X_1)\circ_s\mathscr{A}=X_0,\end{equation*} its shift-plethystic inverse \begin{equation*} \mathscr{A}^{\langle -1\rangle}=X_0-X_0X_1, \end{equation*} and the implicit equation \begin{equation*} \mathscr{A}=X_0+(X_0X_1)\circ_s \mathscr{A}. \end{equation*} The $q$-series of the SP trees satisfies the implicit equations \begin{eqnarray*} \mathscr{A}(z,q)&=&z+\mathscr{A}(z,q)\mathscr{A}(zq,q)\\ \mathscr{A}(z,q)&=&\frac{z}{1-\mathscr{A}(zq,q)}. \end{eqnarray*} Those equations were studied by Garsia in \cite{garsia1981qLag} in relation to his $q$-Lagrange inversion formulas, but without any combinatorial interpretation. \end{example} \begin{example} Let $\mathbb{L}$ be the language $$\mathbb{L}=1+X_0+X_0X_1+X_0X_1X_2+X_0X_1X_2X_3+\dots.$$ The language of branchless trees, which are the trees enriched with $1+X_1$, is equal to $\mathbb{L}_+=\mathbb{L}-1$, $$\mathbb{L}_+=\mathscr{A}_{(1+X_1)}=X_0(1+\sigma\mathbb{L}_+).$$ Its shift-plethystic inverse is equal to $$\mathbb{L}_+^{\langle -1\rangle}=X_0\frac{1}{1+X_1}.$$ Then, $$\mathbb{L}_+\circ_s (X_0\frac{1}{1+X_1})=\sum_{n=1}^{\infty}\prod_{j=1}^{n}X_{j-1}\frac{1}{1+X_{j}}=X_0.$$ The $q$-umbral evaluation gives us $$\sum_{n=1}^{\infty}\frac{q^{\binom{n}{2}}z^n}{(1+zq)(1+zq^2)\dots(1+zq^n)}=z.$$ Hence $$\sum_{n=0}^{\infty}\frac{q^{\binom{n}{2}}z^n}{(1+zq)(1+zq^2)\dots(1+zq^n)}=1+z.$$ Making $z=1$ and $z=-1$ we recover respectively the classical identities $A.1$ and $A.4$ in \cite{sills2017invitation}. \end{example} \begin{example} Let $\mathbb{L}^{(e)}_+$ be the even form of $\mathbb{L}_+$, \begin{equation*}\mathbb{L}^{(e)}_+=\sum_{n=1}^{\infty}X_0X_2X_4\dots X_{2n-2}=X_0(1+\sigma^2\mathbb{L}_+^{(e)})=\mathscr{A}_{(1+X_2)}. \end{equation*} Its shift-plethystic inverse is equal to \begin{equation*} (\mathbb{L}^{(e)}_+)^{\langle -1\rangle}=X_0\frac{1}{1+X_2}.
\end{equation*} Hence we have the identity \begin{equation}\label{eq.Rogers171} \mathbb{L}^{(e)}_+\circ_s X_0\frac{1}{1+X_2}= \sum_{n=1}^{\infty}\prod_{j=0}^{n-1}X_{2j}\frac{1}{1+X_{2j+2}}=X_0. \end{equation} The odd version of $\mathbb{L}_+$, $\mathbb{L}^{(o)}_+$, is equal to the shift $\sigma\mathbb{L}^{(e)}_+$, $$\mathbb{L}^{(o)}_+=\sum_{n=1}^{\infty}X_1X_3X_5\dots X_{2n-1}.$$ From Eq. (\ref{eq.Rogers171}), shifting and adding $1$, and writing $\mathbb{L}^{(o)}=1+\mathbb{L}^{(o)}_+$, we obtain \begin{equation*} \mathbb{L}^{(o)}\circ_s X_0\frac{1}{1+X_2}=1+\mathbb{L}^{(o)}_+\circ_s X_0\frac{1}{1+X_2}=1+\sum_{n=1}^{\infty}\prod_{j=0}^{n-1}X_{2j+1}\frac{1}{1+X_{2j+3}}=1+X_1. \end{equation*} Multiplying both sides of the rightmost equality on the left by $(1+X_1)^{-1}$ we get $$\frac{1}{1+X_1}+\sum_{n=1}^{\infty}\frac{1}{1+X_1}\prod_{j=0}^{n-1}X_{2j+1}\frac{1}{1+X_{2j+3}}=1.$$ By $q$-umbral evaluation we obtain \begin{equation}\label{eq.Rogers17} \sum_{n=0}^{\infty}\frac{q^{n^2}z^n}{(1+zq)(1+zq^3)\dots(1+zq^{2n+1})}=1. \end{equation} \noindent Eq. (\ref{eq.Rogers17}) generalizes Rogers' identity (C) 6 in \cite[p.~333]{Rogers17}, which is obtained by specializing to $z=-1$. See also \cite{sills2017invitation}, Formula (A.2).\end{example} \begin{example}Denote by $^{\mathbb{\Sigma}_0}\mathbb{L}_+$ the language obtained from $\mathbb{L}_+$ by left shift plethysm with the series $ \mathbb{\Sigma}_0=\sum_{j=0}^{\infty}X_j,$ \begin{equation}\label{eq.sigmaL} ^{\mathbb{\Sigma}_0}\mathbb{L}_+=\sum_{j=0}^{\infty}X_j\circ_s \mathbb{L}_+=\sum_{j=0}^{\infty}\sum_{n=1}^{\infty}X_jX_{j+1}\dots X_{n+j-1}. \end{equation} Since $\mathbb{\Sigma}_0-\sigma\mathbb{\Sigma}_0=X_0$, the shift-plethystic inverse of $\mathbb{\Sigma}_0$ is equal to $X_0-X_1$.
Hence \begin{equation*}(^{\mathbb{\Sigma}_0}\mathbb{L}_+)^{\langle -1\rangle}=(\mathbb{\Sigma}_0\circ_s\mathbb{L}_+)^{\langle -1\rangle}=(\mathbb{L}_+)^{\langle -1\rangle}\circ_s(X_0-X_1)=(X_0-X_1)\frac{1}{1+(X_1-X_2)}.\end{equation*} By plethystic composition with $^{\mathbb{\Sigma}_0}\mathbb{L}_+$ we obtain the identity \begin{equation*} \sum_{j=0}^{\infty}\sum_{n=1}^{\infty}(X_j-X_{j+1})\frac{1}{1+(X_{j+1}-X_{j+2})}\dots (X_{n+j-1}-X_{n+j})\frac{1}{1+(X_{n+j}-X_{n+j+1})}=X_0. \end{equation*} Interchanging sums, by $q$-umbral evaluation, \begin{equation} \sum_{n=1}^{\infty}z^nq^{\binom{n}{2}}(1-q)^n\sum_{j=0}^{\infty}\frac{q^{jn}}{\prod_{k=1}^n(1+zq^{j+k}(1-q))}=z. \end{equation} Equivalently, making the change $z(1-q)\mapsto z$, \begin{equation} \sum_{n=1}^{\infty}z^nq^{\binom{n}{2}}\sum_{j=0}^{\infty}\frac{q^{jn}}{\prod_{k=1}^n(1+zq^{j+k})}=\frac{z}{1-q}. \end{equation} \end{example} \begin{example} The shift-plethystic trees enriched with the language $\sigma\mathbb{L}$, $\mathscr{A}_{\sigma\mathbb{L}}$, satisfy the equation \begin{equation*} \mathscr{A}_{\sigma\mathbb{L}}=X_0(1+\sigma\mathscr{A}_{\sigma\mathbb{L}}+(\sigma\mathscr{A}_{\sigma\mathbb{L}})(\sigma^2\mathscr{A}_{\sigma\mathbb{L}})+\dots). \end{equation*} Its shift-plethystic inverse is equal to \begin{equation*} \mathscr{A}_{\sigma\mathbb{L}}^{\langle -1\rangle}=X_0(\sigma\mathbb{L})^{-1}=X_0(1+X_1+X_1X_2+X_1X_2X_3+\dots)^{-1}. \end{equation*} \end{example} \begin{example} The series of shift-plethystic trees enriched with $$M=(1-\sigma\mathbb{L}_+)^{-1}=(1-(X_1+X_1X_2+X_1X_2X_3+\dots))^{-1}$$ satisfies the implicit equations \begin{eqnarray*} \mathscr{A}_{M}&=&X_0\frac{1}{1-\sigma\mathbb{L}_+\circ_s \mathscr{A}_{M}}\\\mathscr{A}_{M}&=&X_0+(\mathscr{A}_M)(\sigma\mathscr{A}_M)+(\mathscr{A}_M)(\sigma\mathscr{A}_M)(\sigma^2\mathscr{A}_M)+\dots.
\end{eqnarray*} Its shift-plethystic inverse is equal to $$\mathscr{A}_M^{\langle -1\rangle}=X_0-X_0\sigma \mathbb{L}_+.$$ Taking $q$-series we obtain the implicit equation $$\mathscr{A}_M(z,q)=z+\mathscr{A}_M(z,q)\mathscr{A}_M(zq,q)+\mathscr{A}_M(z,q)\mathscr{A}_M(zq,q)\mathscr{A}_M(zq^2,q)+\dots$$ \end{example} \section{Some other shift-plethystic identities}\label{sec.RRandnew} In this section we establish some relations between the languages of partitions, compositions and shift-plethystic trees. As a motivating example for Theorem \ref{th.plethypartition}, let us take the following composition in $\mathscr{C}^{(1)},$ $$\kappa=56763454343342332.$$ Place a bar before each left-to-right (non-strict) minimum of the sequence, that is, before each entry which is less than or equal to all preceding entries, $$\textcolor{red}{|}5676\textcolor{red}{|}3454\textcolor{red}{|}34\textcolor{red}{|}3\textcolor{red}{|}34\textcolor{red}{|}233\textcolor{red}{|}2.$$ We see that these minima, read from left to right, form the weakly decreasing partition $\lambda=5333322$. Each word between two consecutive bars is the word of a shift-plethystic tree, $$X_{5676}X_{3454}X_{34}X_{3}X_{34}X_{233}X_2$$ is in the language $(\sigma^5\mathscr{A})(\sigma^3\mathscr{A})(\sigma^3\mathscr{A})(\sigma^3\mathscr{A})(\sigma^3\mathscr{A})(\sigma^2\mathscr{A})(\sigma^2\mathscr{A})=X_{\lambda}\circ_s\mathscr{A}.$ \begin{theorem}\label{th.plethypartition}We have the following identities \begin{eqnarray}\label{eq.l1identity} \mathscr{C}^{(1)}&=&\prod_{n=\infty}^{1}\frac{1}{1-X_n}\circ_s\mathscr{A}=\prod_{n=\infty}^{1}\frac{1}{1-\sigma^n\mathscr{A}}.\\ \label{eq.l2identity}\sigma\mathscr{C}^{(1)}&=&\prod_{n=\infty}^{2}\frac{1}{1-X_n}\circ_s\mathscr{A}=\prod_{n=\infty}^{2}\frac{1}{1-\sigma^n\mathscr{A}}. \end{eqnarray} \end{theorem} \begin{proof} Let $\kappa$ be a composition in $\mathscr{C}^{(1)}$.
Define $i_1=1$ and $\lambda_1=\kappa_1$, and while the set $A_{r-1}=\{i>i_{r-1}|\kappa_i\leq\kappa_{i_{r-1}}=\lambda_{r-1}\}$ is nonempty define recursively $$i_r=\min A_{r-1}\mbox{ and }\lambda_{r}=\kappa_{i_r}.$$ Let $\kappa^{(r)}$ be the segment of $\kappa$ after (and including) $\lambda_r=\kappa_{i_r}$ and before (and excluding) $\kappa_{i_{r+1}}=\lambda_{r+1}$. We claim that each word $X_{\kappa^{(r)}}$ is in the language $\sigma^{\lambda_r}\mathscr{A}$. If $\ell(\kappa^{(r)})=1$ the statement is trivial. If $\ell(\kappa^{(r)})>1$, the claim follows since $\kappa^{(r)}$ is in $\mathscr{C}^{(1)}$ and all of its parts after $\lambda_r$ (the shifted height of the root) are greater than $\lambda_r$. Hence, for each word $X_{\kappa}\in \mathscr{C}^{(1)}$, there exists a unique partition $\lambda_1\geq\lambda_2\geq\dots\geq\lambda_l,$ as defined above, such that $X_{\kappa}\in X_{\lambda}\circ_s\mathscr{A}$. Conversely, every word in $ X_{\lambda}\circ_s\mathscr{A}$ is in $\mathscr{C}^{(1)}$. Then $\mathscr{C}^{(1)}$ can be expanded as follows, using the generating function of the weakly decreasing partitions (\ref{eq.pertitiinsdecreasing}) \begin{equation*} \mathscr{C}^{(1)}=\sum_{\lambda}X_{\lambda}\circ_s\mathscr{A}=\prod_{n=\infty}^1\frac{1}{1-X_n}\circ_s\mathscr{A}. \end{equation*} Eq. (\ref{eq.l2identity}) follows immediately by shifting. \end{proof} From Eq. (\ref{eq.l1identity}) we get \begin{equation} (\mathscr{C}^{(1)})^{g}=(\prod_{n=\infty}^1\frac{1}{1-X_n})\circ_s\mathscr{A}(-{\mathbb X}) \end{equation} \noindent where $\mathscr{A}(-{\mathbb X})$ is the graded generating function of $\mathscr{A}$.
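At the level of $q$-series, where the ordered product becomes an ordinary product of commuting series, Eq. (\ref{eq.l1identity}) reads $\mathscr{C}^{(1)}(z,q)=\prod_{n\geq 1}(1-\mathscr{A}(zq^n,q))^{-1}$, and it can be confirmed numerically. The following Python sketch (ours, not part of the original text) represents truncated series as dictionaries of monomials $z^iq^j$, computes $\mathscr{A}$ from the fixpoint equation $\mathscr{A}(z,q)=z/(1-\mathscr{A}(zq,q))$, and compares the product with a direct enumeration of $\mathscr{C}^{(1)}$:

```python
Q = 9  # truncate at z-degree and q-degree Q

def mul(f, g):
    # truncated product of two series given as {(z_deg, q_deg): coeff}
    h = {}
    for (i, j), c in f.items():
        for (k, l), d in g.items():
            if i + k <= Q and j + l <= Q:
                h[(i + k, j + l)] = h.get((i + k, j + l), 0) + c * d
    return {m: c for m, c in h.items() if c}

def subst(f, n):
    # the substitution z -> z q^n, i.e. the q-series of the shift sigma^n
    return {(i, j + n * i): c for (i, j), c in f.items() if j + n * i <= Q}

def geom(f):
    # 1/(1-f) for a series f with no constant term
    g = {(0, 0): 1}
    for _ in range(Q + 1):
        g = mul(f, g)
        g[(0, 0)] = g.get((0, 0), 0) + 1
    return g

# A(z,q) from the fixpoint A = z/(1 - A(zq, q))
A = {}
while True:
    new = mul({(1, 0): 1}, geom(subst(A, 1)))
    if new == A:
        break
    A = new

# right hand side: the product over n >= 1 of 1/(1 - A(zq^n, q))
rhs = {(0, 0): 1}
for n in range(1, Q + 1):
    rhs = mul(rhs, geom(subst(A, n)))

# left hand side: direct enumeration of the compositions in C^(1)
def comps(n, prev=None):
    if n == 0:
        yield ()
        return
    top = n if prev is None else min(n, prev + 1)
    for a in range(1, top + 1):
        for rest in comps(n - a, a):
            yield (a,) + rest

lhs = {(0, 0): 1}
for n in range(1, Q + 1):
    for kappa in comps(n):
        key = (len(kappa), n)  # z tracks the length, q tracks the weight
        lhs[key] = lhs.get(key, 0) + 1

assert lhs == rhs
```

Since each factor $1-\mathscr{A}(zq^n,q)$ has lowest $q$-degree $n$, the factors with $n>Q$ do not contribute below the truncation order, so the finite product above suffices.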
Using the fact that the shift-plethystic inverses of $\mathscr{A}$ and $\mathscr{A}(-{\mathbb X})$ are respectively equal to $X_0-X_0X_1$ and $X_0X_1-X_0$, we get the identities \begin{eqnarray}\label{eq.inverseapp1} \mathscr{C}^{(1)}\circ_s (X_0-X_0X_1)&=&\prod_{n=\infty}^1\frac{1}{1-X_n}\\\label{eq.inverseapp2} \mathcal{P}_2\circ_s(X_0X_1-X_0)&=&((\mathscr{C}^{(1)})^{g})^{-1}\circ_s (X_0X_1-X_0)=\prod_{n=1}^{\infty}(1-X_n) \end{eqnarray} The left hand sides of Eqs. (\ref{eq.inverseapp1}) and (\ref{eq.inverseapp2}) are respectively equal to \begin{eqnarray*} \mathscr{C}^{(1)}\circ_s (X_0-X_0X_1)&=&\sum_{\kappa\in\mathscr{C}^{(1)}}\prod_{i=1}^{\ell(\kappa)}X_{\kappa_i}(1-X_{\kappa_i+1})\\ \mathcal{P}_2\circ_s(X_0X_1-X_0)&=&\sum_{\lambda\in\mathcal{P}_2}\prod_{i=1}^{\ell(\lambda)}X_{\lambda_i}(X_{\lambda_i+1}-1) \end{eqnarray*} \noindent Substituting $X_{n}\mapsto q^n$, we get the identities \begin{eqnarray*} \sum_{n=0}^{\infty}q^n\sum_{\kappa\in\mathscr{C}^{(1)}[n]}\prod_{i=1}^{\ell(\kappa)}(1-q^{\kappa_i+1})&=&\prod_{n=1}^{\infty}\frac{1}{1-q^n}\\ \sum_{n=0}^{\infty}q^n\sum_{\lambda\in\mathcal{P}_2[n]}\prod_{i=1}^{\ell(\lambda)}(q^{\lambda_i+1}-1)&=&\prod_{n=1}^{\infty}{(1-q^n)}. \end{eqnarray*} \bibliographystyle{amsplain}
\section{Differential forms on rational varieties in characteristic $p>0$} \label{cohmodp} The action of ${\rm GL}_{d+1}(k)={\rm GL}(k^{d+1})$ on $(k^{d+1})^*=\Hom_k(k^{d+1},k)$ defines an action of ${\rm GL}_{d+1}(k)$ on the affine $k$-scheme associated with $(k^{d+1})^*$, and this action passes to an action of ${\rm GL}_{d+1}(k)$ on the projective space $$Y_0={\mathbb P}((k^{d+1})^*)\cong {\mathbb P}_k^{d}.$$For $0\le j\le d-1$ let ${\mathcal V}_0^j$ be the set of all $k$-rational linear subvarieties $Z$ of $Y_0$ with $\dim(Z)=j$, and let ${\mathcal V}_0=\bigcup_{j=0}^{d-1}{\mathcal V}_0^j$. The sequence of projective $k$-varieties$$Y=Y_{d-1}{\longrightarrow}Y_{d-2}{\longrightarrow}\ldots{\longrightarrow}Y_0$$is defined inductively by letting $Y_{j+1}\to Y_j$ be the blowing up of $Y_j$ in the strict transforms (in $Y_j$) of all $Z\in {\mathcal V}_0^j$. The set$${\mathcal V}=\mbox{the set of all strict transforms in}\,\,Y\,\,\mbox{of elements of}\,\,{\mathcal V}_0$$is a set of divisors on $Y$. The action of ${\rm GL}_{d+1}(k)$ on $Y_0$ naturally lifts to an action of ${\rm GL}_{d+1}(k)$ on $Y$. Let $\Xi_0,\ldots,\Xi_{d}$ be the standard projective coordinate functions on $Y_0$ and hence on $Y$ corresponding to the canonical basis of $(k^{d+1})^*$; hence $Y_0=\mbox{\rm Proj}(k[\Xi_i;\,0\le i\le d])$. Denote by $\Omega^{\bullet}_{Y}$ the de Rham complex on $Y$ with logarithmic poles along the normal crossings divisor $\sum_{V\in{\mathcal V}}V$ on $Y$. For $i,j\in\{0,\ldots,d\}$ and $g\in {\rm GL}_{d+1}(k)$ we call$$g \mbox{\rm dlog}(\frac{\Xi_i}{\Xi_j})$$a logarithmic differential $1$-form on $Y$. We call an exterior product of logarithmic differential $1$-forms on $Y$ a logarithmic differential form on $Y$. \begin{pro}\label{logdif} For each $0\le s\le d$ we have $H^t(Y,\Omega_Y^s)=0$ for all $t>0$.
The $k$-vector space $H^0(Y,\Omega_Y^s)$ is the one generated by all logarithmic differential forms.\end{pro} {\sc Proof:} In \cite{holdis} we derive this from a general vanishing theorem for higher cohomology of a certain class of line bundles on $Y$. Note that a corresponding statement over a field $F$ of characteristic zero is shown in \cite{iovspi}, Section 3: the de Rham cohomology of the complement of a finite set of $F$-rational hyperplanes in ${\mathbb P}^d_F$ is generated by (global) logarithmic differential forms. The analogous statement for the Monsky-Washnitzer cohomology of $Y^0=Y-\cup_{V\in{\mathcal V}}V$ was shown in \cite{ds}.\hfill$\Box$\\ {\it Remark:} In \cite{holdis} we give a $k$-basis for $H^0(Y,\Omega_Y^s)$ consisting of logarithmic differential forms as follows. For a subset $\tau\subset\{1,\ldots,d\}$ let $$U(k)(\tau)=\{(a_{ij})_{0\le i,j\le d}\in{U}(k)\quad|\quad a_{ij}=0\,\mbox{if}\, j\notin\{i\}\cup\tau \}.$$For $0\le s\le d$ denote by ${\mathcal P}_s$ the set of subsets of $\{1,\ldots,d\}$ consisting of $s$ elements. The following set is a $k$-basis of $H^0(Y,\Omega_Y^s)$: $$\{A.\bigwedge_{t\in\tau}\mbox{\rm dlog}(\frac{\Xi_t}{\Xi_0})\quad|\quad \tau\in {\mathcal P}_s, A\in U(k)(\tau)\}.$$\hfill$\Box$\\ Let $D$ be a divisor on $Y$ of the type$$D=\sum_{V\in{\mathcal V}}b_{V}V$$with certain $b_{V}\in\mathbb{Z}$. We view ${\mathcal{L}}_{Y}(D)$ as a subsheaf of the constant sheaf $\underline{k(Y)}$ with value the function field $k(Y)$ of $Y$; hence we view $\Omega^{\bullet}_{Y}\otimes_{{\mathcal{O}}_{Y}}{\mathcal{L}}_{Y}(D)$ as a subsheaf of the constant sheaf with value the de Rham complex of $k(Y)/k$. The differential on the latter provides us with a differential on $\Omega^{\bullet}_{Y}\otimes_{{\mathcal{O}}_{Y}}{\mathcal{L}}_{Y}(D)$.
Consider the open and ${\rm GL}_{d+1}(k)$-stable subscheme $$Y^0=Y-\cup_{V\in{\mathcal V}}V$$of $Y$; let us write $$\iota:Y^0\to Y$$ for the embedding and $\Omega^{\bullet}_{Y^0}=\Omega^{\bullet}_{Y}|_{Y^0}$. For $0\le s\le d$ let ${\mathbb L}^{s}_Y$ be the $k$-vector subspace of $\Omega_Y^s(Y^0)$ generated by all $s$-forms $\eta$ of the type\begin{gather}\eta=y_1^{m_1}\cdots y_s^{m_s}\mbox{\rm dlog}(y_1)\wedge\ldots\wedge\mbox{\rm dlog}(y_s)\label{frofred}\end{gather}with $m_j\in\mathbb{Z}$ and $y_1,\ldots,y_d\in{\mathcal O}_Y^{\times}(Y^0)$ such that $y_j=\theta_j/\theta_0$ for a suitable (adapted to $\eta$) isomorphism of $k$-varieties $Y_0\cong\mbox{\rm Proj}(k[\theta_j]_{0\le j\le d})$. From Proposition \ref{logdif} it follows that $H^0(Y,\Omega_Y^s)$ is the $k$-vector subspace of ${\mathbb L}^{s}_Y$ generated by all $s$-forms $\eta$ of type (\ref{frofred}) with $m_j=0$ for all $1\le j\le s$. Let $\underline{\mathbb L}_Y^{s}$, resp. $\underline{\mathbb L}_Y^{s,0}$, be the constant sheaf on $Y$ with value ${\mathbb L}_Y^{s}$, resp. with value $H^0(Y,\Omega_Y^s)$. For a divisor $D$ as above we define$${\mathbb L}^{s}(D)=\underline{\mathbb L}_Y^{s}\quad\bigcap\quad {\mathcal L}_Y(D)\otimes\Omega_Y^{s},$$$${\mathbb L}^{s,0}(D)=\underline{\mathbb L}^{s,0}_{Y}\quad\bigcap\quad{\mathcal L}_{Y}(D)\otimes\Omega_Y^{s},$$the intersections taking place inside the push forward $\iota_*\Omega_{Y^0}^{s}$. \begin{satz}\label{konskomop} (a) Suppose $b_V\in\{-1,0\}$ for all $V$. Then the inclusions ${\mathbb L}^{s,0}(D)\hookrightarrow{\mathbb L}^{s}(D)\hookrightarrow{\mathcal L}_Y(D)\otimes\Omega_Y^{s}$ induce for all $j$ isomorphisms$$H^j(Y,{\mathbb L}^{s,0}(D))\cong H^j(Y,{\mathbb L}^{s}(D))\cong H^j(Y,{\mathcal L}_{Y}(D)\otimes\Omega_Y^{s}).$$(b) Let $S$ be a non-empty subset of ${\mathcal V}$ such that $E=\cap_{V\in{S}}V$ is non-empty. 
Define the subsheaf ${\mathbb L}_E^s(0)$ of $\Omega^s_Y\otimes_{{\mathcal O}_Y}{\mathcal O}_E$ as the image of the composite ${\mathbb L}^s(0)\to\Omega^s_Y\to\Omega^s_Y\otimes_{{\mathcal O}_Y}{\mathcal O}_E$. Then the inclusion induces for all $j$ an isomorphism$$H^j(Y,{\mathbb L}_E^{s}(0))\cong H^j(Y,\Omega_Y^{s}\otimes_{{\mathcal O}_Y}{\mathcal O}_E).$$ \end{satz} {\sc Proof:} (a) First we consider the case $D=0$, i.e. $b_V=0$ for all $V$. The sheaf ${\mathbb L}^{s,0}(0)$ is constant with value $H^0(Y,\Omega_Y^{s})$, hence we get $H^j(Y,{\mathbb L}^{s,0}(0))=H^j(Y,\Omega_Y^{s})$ for all $j$ from Proposition \ref{logdif}. In order to also compare with $H^j(Y,{\mathbb L}^{s}(0))$ choose a sequence $(\eta_n)_{n\ge1}$ of elements of ${\mathbb L}^{s}_Y$ of the form (\ref{frofred}) such that $\{\eta_n;\,n\ge1\}$ is a $k$-basis of ${\mathbb L}^{s}_Y/H^0(Y,\Omega_Y^s)$. For $n\ge0$ let $\underline{\mathbb L}_Y^{s,n}$ be the constant subsheaf of $\underline{\mathbb L}_Y^{s}$ on $Y$ generated over $k$ by $H^0(Y,\Omega_Y^s)$ and $\{\eta_{i};\,n\ge i\ge1\}$. Letting$${\mathbb L}^{s,n}(0)=\underline{\mathbb L}_Y^{s,n}\quad\bigcap \quad\Omega_Y^{s}$$we have $${\mathbb L}^{s}(0)=\bigcup_{n\ge0}{\mathbb L}^{s,n}(0)$$and since $Y$ is quasicompact (so that taking cohomology commutes with direct limits) it suffices to show$$H^j(Y,{\mathbb L}^{s,n}(0))=\left\{\begin{array}{l@{\quad:\quad}l}H^0(Y,\Omega_Y^s)&j=0\\0&j>0\end{array}\right.$$for all $n\ge0$. For $n=0$ we already did this, for $n>0$ it suffices, by induction, to show$$H^j(Y,\frac{{\mathbb L}^{s,n}(0)}{{\mathbb L}^{s,n-1}(0)})=0$$for all $j$. Let $W\subset Y$ be the maximal open subscheme on which the class of $\eta_n$ as a section of ${\mathbb L}^{s,n}(0)/{\mathbb L}^{s,n-1}(0)$ is defined.
Thus if $\xi:W\to Y$ denotes the open embedding and $\underline{k}_Y$ the constant sheaf on $Y$ with value $k$ then sending $1\in k$ to $\eta_n$ defines an isomorphism$$\xi_!\xi^{-1}\underline{k}_Y\cong\frac{{\mathbb L}^{s,n}(0)}{{\mathbb L}^{s,n-1}(0)}.$$If we had $W=Y$ then the induction hypothesis $H^1(Y,{\mathbb L}^{s,n-1}(0))=0$ and the long exact cohomology sequence associated with $$0\longrightarrow {\mathbb L}^{s,n-1}(0)\longrightarrow{\mathbb L}^{s,n}(0)\longrightarrow \frac{{\mathbb L}^{s,n}(0)}{{\mathbb L}^{s,n-1}(0)}\longrightarrow0$$would imply that there existed $a_1,\ldots,a_{n-1}\in k$ such that $\eta_n+\sum_{i=1}^{n-1}a_i\eta_i$ is a global section of ${\mathbb L}^{s,n}(0)$, in particular of $\Omega_Y^{s}$. But this would contradict the fact that $\{\eta_n;\,n\ge1\}$ is a $k$-basis of ${\mathbb L}^{s}_Y/H^0(Y,\Omega_Y^s)$. Hence $W\ne Y$. On the other hand we may write $\eta_n=y_1^{m_1}\cdots y_s^{m_s}\mbox{\rm dlog}(y_1)\wedge\ldots\wedge\mbox{\rm dlog}(y_s)$ with $y_j=\theta_j/\theta_0$ as in (\ref{frofred}) and it is clear that $C=Y-W$ is the pull back under $Y\to Y_0$ of a union of some hyperplanes $\mathbb{V}(\theta_i)\subset Y_0$. In particular $C$ is connected. Denote by $\gamma:C\to Y$ the closed embedding. The long exact cohomology sequence associated with$$0\longrightarrow\xi_!\xi^{-1}\underline{k}_Y\longrightarrow\underline{k}_{Y}\longrightarrow\gamma_*\gamma^{-1}\underline{k}_{Y}\longrightarrow0$$shows $H^j(Y,\xi_!\xi^{-1}\underline{k}_{Y})=0$ for all $j$ because $C$ is non-empty and connected. This finishes the induction, and thus the discussion of the case $D=0$. To treat arbitrary $D$ with $b_V\in\{-1,0\}$ for all $V$ we induct on $\dim(Y)$ and on $r(D)=\sum_{V\in{\mathcal{V}}}|b_V|$.
We will only show $H^j(Y,{\mathbb L}^{s}(D))= H^j(Y,{\mathcal L}_{Y}(D)\otimes_{{\mathcal O}_Y}\Omega_Y^{s})$ (which is the relevant statement for the subsequent sections); the proof of $H^j(Y,{\mathbb L}^{s,0}(D))= H^j(Y,{\mathcal L}_{Y}(D)\otimes_{{\mathcal O}_Y}\Omega_Y^{s})$ is literally the same (replace each occurrence of ${\mathbb L}^{s}(D)$ with ${\mathbb L}^{s,0}(D)$). Assume $b_V=-1$ for some $V$. Let $D'=D+V$. We want to compare the exact sequences $$0\longrightarrow{\mathbb L}^{s}(D)\longrightarrow{\mathbb L}^{s}(D')\longrightarrow{\mathbb L}^{s}_{V}(D')\longrightarrow0$$(the sheaf ${\mathbb L}^{s}_{V}(D')$ being defined such that this is an exact sequence) and$$0\longrightarrow{\mathcal L}_Y(D)\otimes\Omega_Y^{s}\longrightarrow{\mathcal L}_Y(D')\otimes\Omega_Y^{s}\longrightarrow{\mathcal L}_Y(D')\otimes\Omega_Y^{s}\otimes{\mathcal O}_V\longrightarrow0.$$Since $r(D')<r(D)$ the induction hypothesis says that the map between the respective second terms induces isomorphisms in cohomology. It will be enough to prove the same for the respective third terms. From \cite{ito} (see also \cite{holdis}) it follows that $V$ decomposes as$$V=Y^1\times Y^2$$such that both $Y^t$ are successive blowing ups of projective spaces of dimensions smaller than $d$ in all $k$-rational linear subvarieties, just as $Y$ is the successive blowing up of projective space of dimension $d$ in all $k$-rational linear subvarieties. Denote by ${\mathcal V}^t$ the corresponding set of divisors on $Y^t$ (like the set ${\mathcal V}$ of divisors on $Y$) and let $\Omega^{\bullet}_{Y^t}$ denote the logarithmic de Rham complex on $Y^t$ with logarithmic poles along ${\mathcal V}^t$. Let $\Omega^{\bullet}_V$ denote the logarithmic de Rham complex on $V$ with logarithmic poles along all divisors which are pull backs of elements of ${\mathcal V}^1$ or ${\mathcal V}^2$. Then$$\Omega^{\bullet}_V=\Omega^{\bullet}_{Y^1}\otimes_k\Omega^{\bullet}_{Y^2}.$$Let $D_V$ be the divisor on $V$ induced by $D$.
More precisely, $D_V=\sum b_{W}(W\cap V)$, the sum ranging over all $W\in\mathcal{V}$ which intersect $V$ transversally. It also follows from \cite{ito} (and \cite{holdis}) that $D_V$ is of the form $D_{Y^1}^V+D_{Y^2}^V$ where $D_{Y^t}^V$ for $t=1,2$ is the pull back to $V$, via the projection $V\to Y^t$, of a divisor $D_{Y^t}$ on $Y^t$ which is the sum, with multiplicities in $\{0,-1\}$, of elements of ${\mathcal V}^t$. The above then generalizes as\begin{gather}{\mathcal L}_V(D_V)\otimes\Omega^{\bullet}_V=({\mathcal L}_{Y^1}(D_{Y^1})\otimes\Omega^{\bullet}_{Y^1})\otimes_k{(\mathcal L}_{Y^2}(D_{Y^2})\otimes\Omega^{\bullet}_{Y^2}).\label{lokue}\end{gather}Choose an isomorphism $Y_0\cong\mbox{\rm Proj}(k[\theta_j]_{0\le j\le d})$ and elements $0\le j_1\ne j_2\le d$ such that $y=\theta_{j_1}/\theta_{j_2}\in{\mathcal O}_Y(U)$ is an equation for $V\cap U$ in a suitable open subset $U$ of $Y$ with $V\cap U\ne\emptyset$. We have an exact sequence\begin{gather}0\longrightarrow{\mathcal L}_V(D_V)\otimes\Omega^{s-1}_V\stackrel{\wedge\mbox{\rm dlog}(y)}{\longrightarrow}{\mathcal L}_Y(D')\otimes\Omega_Y^{s}\otimes{\mathcal O}_V\longrightarrow{\mathcal L}_V(D_V)\otimes\Omega^{s}_V\longrightarrow0\label{fullex}\end{gather}where by (\ref{lokue}) the extreme terms (take $s'=s$ and $s'=s-1$) decompose as \begin{gather}{\mathcal L}_V(D_V)\otimes\Omega^{s'}_V\cong\bigoplus_{s_1+s_2=s'}({\mathcal L}_{Y^1}(D_{Y^1})\otimes\Omega^{s_1}_{Y^1})\otimes_k({\mathcal L}_{Y^2}(D_{Y^2})\otimes\Omega^{s_2}_{Y^2}).\label{dide}\end{gather}On the other hand, define for $t=1,2$ the sheaves ${\mathbb L}^{\bullet}(D_{Y^t})$ on $Y^t$ just as we defined the sheaves ${\mathbb L}^{\bullet}(.)$ on $Y$ (and use the same name for their push forward to $Y$). 
Then using the decomposition (\ref{dide}) we may view the sheaf$$\bigoplus_{s_1+s_2=s'}{\mathbb L}^{s_1}(D_{Y^1})\otimes_k{\mathbb L}^{s_2}(D_{Y^2})$$as a subsheaf of ${\mathcal L}_V(D_V)\otimes\Omega^{s'}_V$ and a local consideration shows that (\ref{fullex}) restricts to an exact sequence \begin{gather}0\longrightarrow\bigoplus_{s_1+s_2=s-1}{\mathbb L}^{s_1}(D_{Y^1})\otimes_k{\mathbb L}^{s_2}(D_{Y^2})\stackrel{\wedge\mbox{\rm dlog}(y)}{\longrightarrow}{\mathbb L}^{s}_{V}(D')\longrightarrow\bigoplus_{s_1+s_2=s}{\mathbb L}^{s_1}(D_{Y^1})\otimes_k{\mathbb L}^{s_2}(D_{Y^2}){\longrightarrow}0.\label{lex}\end{gather}Comparing the long exact cohomology sequences associated with (\ref{fullex}) and (\ref{lex}) we conclude that to show that$$H^j(Y,{\mathbb L}^{s}_{V}(D'))\longrightarrow H^j(Y,{\mathcal L}_Y(D')\otimes\Omega_Y^{s}\otimes{\mathcal O}_V)$$is an isomorphism for any $j$, it suffices to show that$$H^j(Y,\bigoplus_{s_1+s_2=s'}{\mathbb L}^{s_1}(D_{Y^1})\otimes_k{\mathbb L}^{s_2}(D_{Y^2}))\longrightarrow H^j(Y,\bigoplus_{s_1+s_2=s'}({\mathcal L}_{Y^1}(D_{Y^1})\otimes\Omega^{s_1}_{Y^1})\otimes_k({\mathcal L}_{Y^2}(D_{Y^2})\otimes\Omega^{s_2}_{Y^2}))$$is an isomorphism, for $s'=s$ and $s'=s-1$. By the K\"unneth formula this reduces to showing that$$H^j(Y^t,{\mathbb L}^{s''}(D_{Y^t}))\longrightarrow H^j(Y^t,({\mathcal L}_{Y^t}(D_{Y^t})\otimes\Omega^{s''}_{Y^t}))$$is an isomorphism, for any $s''$ and $t\in\{1,2\}$. But this follows from our induction hypothesis since the dimension of $Y^t$ is smaller than that of $Y$.\\(b) We have an exact sequence$$0\longrightarrow{\mathcal L}_Y(-\sum_{V\in S}V)\longrightarrow\bigoplus_{T\subset S\atop|T|=|S|-1}{\mathcal L}_Y(-\sum_{V\in T}V)\longrightarrow\ldots\longrightarrow\bigoplus_{V\in S}{\mathcal L}_Y(-V)\longrightarrow{\mathcal O}_Y\longrightarrow{\mathcal O}_E\longrightarrow0$$which yields a similar exact sequence by tensoring with $\Omega_Y^s$. 
A local consideration shows that the latter exact sequence restricts to an exact sequence$$0\longrightarrow{\mathbb L}^s(-\sum_{V\in S}V)\longrightarrow\bigoplus_{T\subset S\atop|T|=|S|-1}{\mathbb L}^s(-\sum_{V\in T}V)\longrightarrow\ldots\longrightarrow\bigoplus_{V\in S}{\mathbb L}^s(-V)\longrightarrow{\mathbb L}^s(0)\longrightarrow{\mathbb L}_E^s(0)\longrightarrow0.$$It follows that it is enough to show that for all subsets $T\subset S$ and any $j$ the map$$H^j(Y,{\mathbb L}^s(-\sum_{V\in T}V))\longrightarrow H^j(Y,{\mathcal L}_Y(-\sum_{V\in T}V)\otimes_{{\mathcal O}_Y}\Omega_Y^{s})$$is an isomorphism. But this follows from part (a).\hfill$\Box$\\ {\it Remark:} (not needed in the sequel) If $-1\le b_V\le p-1$ for all $V$ then the inclusion ${\mathbb L}^{\bullet,0}(D)\hookrightarrow{\mathcal L}_Y(D)\otimes\Omega_Y^{\bullet}$ induces isomorphisms$$H^j(Y,{\mathbb L}^{\bullet,0}(D))\cong H^j(Y,{\mathcal L}_Y(D)\otimes\Omega_Y^{\bullet}).$$To see this let $D'=\sum_{V\in{\mathcal V}}b'_{V}V$ with $b'_V=\min\{0,b_V\}$. Then Theorem \ref{konskomop} applies to $D'$. Now note that on the one hand ${\mathbb L}^{\bullet,0}(D')={\mathbb L}^{\bullet,0}(D)$ (logarithmic differential forms have pole orders at most one) and on the other hand ${\mathcal L}_Y(D')\otimes\Omega_Y^{\bullet}\longrightarrow{\mathcal L}_Y(D)\otimes\Omega_Y^{\bullet}$ is a quasi-isomorphism (use that any $b_V>0$ is invertible in $k$). \section{Reduction of rational $G$-representations} \label{rera} Let $\overline{T}=T/K^{\times}$. For $\mu=\sum_{j=0}^da_j\epsilon_j\in X^*(T)$ let \begin{gather}\overline{\mu}=(\frac{1}{d+1}\sum_{j=0}^da_j)(\sum_{j=0}^d\epsilon_j)-\mu,\label{mubar}\end{gather}an element of the subspace $X^*(\overline{T})\otimes\frac{1}{d+1}.\mathbb{Z}$ of $X^*({T})\otimes\frac{1}{d+1}.\mathbb{Z}$. 
If for $0\le j\le d$ we let\begin{gather}\overline{a}_{j}(\mu)=\frac{(\sum_{i\ne j}a_i)-da_{j}}{d+1}\label{aquer}\end{gather}then$$\overline{\mu}=\sum_{j=0}^d\overline{a}_j(\mu)\epsilon_j.$$ Let $M$ be an irreducible $K$-rational representation of $G$. For a weight $\mu\in X^{*}(T)$ let $M_{\mu}$ be the maximal subspace of $M$ on which $T$ acts through $\mu$. \begin{lem}\label{dichot} The number $$|M|=\sum_{i=0}^da_{i}$$ for $\mu=\sum_{i=0}^da_{i}\epsilon_i\in X^*(T)$ such that $M_{\mu}\ne 0$ is independent of the choice of such a $\mu$; for such $\mu$ we have $\overline{\mu}\in X^*(\overline{T})$ if and only if $|M|\in(d+1).\mathbb{Z}$, if and only if there is an $h\in\mathbb{Z}$ such that the center of $G$ acts trivially on $M\otimes_K\det^h$. \end{lem} {\sc Proof:} This is clear since all $\mu$ with $M_{\mu}\ne 0$ differ by linear combinations of elements of $\Phi$ (see \cite{jan} II.2.2).\hfill$\Box$\\ We fix a ${\rm GL}\sb {d+1}/{\mathcal O}_K$-invariant ${\mathcal O}_K$-lattice $M^0$ in $M$ (see \cite{jan} I.10.4). \begin{lem}\label{intspan} We have $M^0=\bigoplus_{\mu\in X^*(T)}M^0_{\mu}$ with $M_{\mu}^0=M^0\cap M_{\mu}$. \end{lem} {\sc Proof:} We reproduce a proof from notes of Schneider and Teitelbaum. Fix $\mu\in X^*(T)$. It suffices to construct an element $\Pi_{\mu}$ in the algebra of distributions $\mbox{Dist}({\rm GL}\sb {d+1}/\mathbb{Z})$ (i.e. defined over $\mathbb{Z}$) which on $M$ acts as a projector onto $M_{\mu}$. For $0\le i\le d$ let $H_i=(de_i)(1)\in\mbox{Lie}({\rm GL}\sb {d+1}/\mathbb{Z})$; then $d\mu'(H_i)\in\mathbb{Z}$ (inside $\mbox{Lie}(\mathbb{G}_m/\mathbb{Z})$) for any $\mu'\in X^*(T)$. According to \cite{hum} Lemma 27.1 we therefore find a polynomial $\Pi\in{\mathbb{Q}}[X_0,\ldots,X_d]$ such that $\Pi(\mathbb{Z}^{d+1})\subset \mathbb{Z}$, $\Pi(d\mu(H_0),\ldots,d\mu(H_d))=1$ and $\Pi(d\mu'(H_0),\ldots,d\mu'(H_d))=0$ for any $\mu'\in X^*(T)$ such that $\mu'\ne\mu$ and $M_{\mu'}\ne0$. 
Moreover \cite{hum} Lemma 26.1 says that $\Pi$ is a $\mathbb{Z}$-linear combination of polynomials of the form $${X_0\choose b_0}\cdots{X_d\choose b_d}\quad\mbox{with integers}\quad b_0,\ldots,b_d\ge0.$$Thus \cite{jan} II.1.12 implies that $$\Pi_{\mu}=\Pi(H_0,\ldots,H_d)$$lies in $\mbox{Dist}({\rm GL}\sb {d+1}/\mathbb{Z})$. By construction it acts on $M$ as a projector onto $M_{\mu}$.\hfill$\Box$\\ We return to the setting from section \ref{cohmodp}. For $\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}$ denote by $V^0_{\sigma}$ the common zero set in $Y_0$ of all $\Xi_j$ with $j\in\sigma$, and let $V_{\sigma}\in{\mathcal V}$ be its strict transform under $Y\to Y_0$. Denote by $Y'$ the open subscheme of $Y$ obtained by deleting all divisors $V\in{\mathcal V}$ which are {\it not} of the particular form $V=V_{\sigma}$ for some $\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}$. Then $Y^0\subset Y'\subset Y$ and $U(k)$ acts on $Y$ and on $Y^0$ and moreover $U(k).Y'=Y$ (the $U(k)$-translates of $Y'$ cover $Y$). For each $V\in{\mathcal V}$ there is a unique $\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}$ such that there exists a $g\in U(k)$ with $gV_{\sigma}=V$, see \cite{holdis}. Let $\widetilde{M}=M^0/\pi.M^0$. The decomposition from Lemma \ref{intspan} induces a corresponding decomposition $$\widetilde{M}=\bigoplus_{\mu\in X^*(T)}\widetilde{M}_{\mu}.$$Denote again by ${\widetilde{M}}$ the constant sheaf on $Y$ with value $\widetilde{M}$. 
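Before the quantities $\overline{a}_j(\mu)$ reappear in the next definition, we record a small worked example (ours, purely illustrative) of formulas (\ref{mubar}), (\ref{aquer}) and of Lemma \ref{dichot}:

```latex
% Illustrative example (not from the text): d = 2 and mu = epsilon_0,
% i.e. (a_0,a_1,a_2) = (1,0,0), so |M| = 1. By (\ref{aquer}):
\[
\overline{a}_0(\mu)=\frac{(0+0)-2\cdot 1}{3}=-\frac{2}{3},\qquad
\overline{a}_1(\mu)=\overline{a}_2(\mu)=\frac{(1+0)-2\cdot 0}{3}=\frac{1}{3},
\]
% hence, in accordance with (\ref{mubar}),
\[
\overline{\mu}=-\tfrac{2}{3}\,\epsilon_0+\tfrac{1}{3}\,\epsilon_1+\tfrac{1}{3}\,\epsilon_2 .
\]
% The coefficients are non-integral, so mubar does not lie in X^*(Tbar);
% this matches Lemma \ref{dichot}, since |M| = 1 is not in (d+1).Z = 3.Z.
```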
We define a subsheaf $\widetilde{\mathcal M}[{\mathcal O}_{Y'}]$ of ${\widetilde{M}}\otimes_k\iota_*{\mathcal O}_{Y^0}|_{Y'}$ on $Y'$ by\begin{gather}\widetilde{\mathcal M}[{\mathcal O}_{Y'}]=\bigoplus_{\mu\in X^*(T)}\widetilde{M}_{\mu}\otimes_k{\mathcal L}_{Y'}(\sum_{\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}}-\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil (V_{\sigma}\cap Y')).\label{ystrdf}\end{gather} \begin{lem}\label{redspread} $\widetilde{\mathcal M}[{\mathcal O}_{Y'}]$ extends uniquely to a ${\rm GL}_{d+1}(k)$-stable subsheaf $\widetilde{\mathcal M}[{\mathcal O}_{Y}]$ of ${\widetilde{M}}\otimes_k\iota_*{\mathcal O}_{Y^0}$. \end{lem} {\sc Proof:} This can be checked directly, an easier variant of the proof of Theorem \ref{strumn} below. However, it is even a {\it consequence} of Theorem \ref{strumn}: explicitly,$$\widetilde{\mathcal M}[{\mathcal O}_{Y}]=\frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{\mathfrak{X}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_Y}{{\mathcal O}_Y\mbox{-torsion}}$$in the notations used there.\hfill$\Box$\\ {\bf Definition:} We say that the weights of $M$ are small if for any $\mu\in X^*(T)$ with $M_{\mu}\ne0$ and for any $\emptyset\ne \sigma\subsetneq\{0,\ldots,d\}$ we have\begin{gather}0\le\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil\le1.\label{smaldef}\end{gather} \begin{lem}\label{wesm} The weights of $M$ are small if and only if $M$, regarded as a representation of ${\rm SL}_{d+1}(K)$, is one of the following: the trivial representation, the standard representation $K^{d+1}$, or the dual $(K^{d+1})^*$ of the standard representation of ${\rm SL}_{d+1}(K)$. \end{lem} {\sc Proof:} One easily checks that $\mu=\sum_{i=0}^da_{i}\epsilon_i\in X^*(T)$ satisfies inequality (\ref{smaldef}) for all $\sigma$ if and only if all coefficients $a_i$ are the same (case (i)) or if there is precisely one $0\le i\le d$ with $a_i=a_j+1$ for all $j\ne i$ (case (ii)) or with 
$a_i=a_j-1$ for all $j\ne i$ (case (iii)). If $M|_{{\rm SL}_{d+1}(K)}=K$ the only weight occurring is as in case (i); if $M|_{{\rm SL}_{d+1}(K)}=K^{d+1}$ the weights occurring are as in case (ii); if $M|_{{\rm SL}_{d+1}(K)}=(K^{d+1})^*$ the weights occurring are as in case (iii).\hfill$\Box$\\ For $0\le s\le d$ consider the following sheaf $\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]$ on $Y$:$$\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]={\widetilde{M}}\otimes_k\underline{\mathbb L}_Y^{s}\quad\bigcap\quad\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s},$$the intersection taking place inside $\iota_*({\widetilde{M}}\otimes_k\Omega_Y^{s}|_{Y^0})$. \begin{satz}\label{klewe} If the weights of $M$ are small then the inclusion $\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]\to\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s}$ of sheaves on $Y$ induces isomorphisms$$H^*(Y,\widetilde{\mathcal M}[{\mathbb L}^{s}_Y])\cong H^*(Y,\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s}).$$ \end{satz} {\sc Proof:} Consider the following ordering on $X^*(T)$: define\begin{gather}\sum_{i=0}^da_i\epsilon_i>\sum_{i=0}^da'_i\epsilon_i\label{ordwei}\end{gather}if and only if there exists a $0\le i_0\le d$ such that $a_i=a_i'$ for all $i<i_0$, and $a_{i_0}>a'_{i_0}$. By \cite{jan} II.1.19 the filtration $(F^{\mu}M)_{\mu\in X^*(T)}$ of $M$ defined by\begin{gather}F^{\mu}M=\sum_{\mu'\in X^*(T)\atop \mu'\ge\mu}M_{\mu'}\label{weifiab}\end{gather}is stable for the action of ${U}(K)$. 
Hence the filtration $(F^{\mu}M^0)_{\mu\in X^*(T)}$ of $M^0$ defined by$$F^{\mu}M^0=\sum_{\mu'\in X^*(T)\atop \mu'\ge\mu}M^0_{\mu'}=F^{\mu}M\bigcap M^0$$is stable for the action of ${U}({\mathcal O}_K)$, and the induced filtrations $(F^{\mu}\widetilde{\mathcal M}[{\mathcal O}_Y])_{\mu\in X^*(T)}$ of $\widetilde{\mathcal M}[{\mathcal O}_Y]$ and $(F^{\mu}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y])_{\mu\in X^*(T)}$ of $\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]$ are $U(k)$-stable. We denote by $$Gr^{\mu}(.)=\frac{F^{\mu}(.)}{\sum_{\mu'>\mu}F^{\mu'}(.)}$$the respective graded pieces. To prove Theorem \ref{klewe} it is enough to show that for all $\mu\in X^*(T)$ the maps$$H^*(Y,Gr^{\mu}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y])\longrightarrow H^*(Y,Gr^{\mu}\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s})$$are isomorphisms. By definition (\ref{ystrdf}), the restriction of $\widetilde{\mathcal M}[{\mathcal O}_Y]$ to $Y'$ comes with a canonical splitting of the filtration $(F^{\mu}\widetilde{\mathcal M}[{\mathcal O}_Y])_{\mu\in X^*(T)}$, and this splitting shows $$Gr^{\mu}\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s}|_{Y'}\cong {\widetilde{M}}_{\mu}\otimes_k{\mathcal L}_{Y'}(\sum_{\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}}-\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil (V_{\sigma}\cap Y'))\otimes_{{\mathcal O}_Y}\Omega^s_Y|_{Y'}.$$Moreover, the subsheaf $Gr^{\mu}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]|_{Y'}$ of $Gr^{\mu}\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s}|_{Y'}$ can be identified with $$\widetilde{M}_{\mu}\otimes_k{\mathbb L}^{s}(\sum_{\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}}-\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil V_{\sigma})|_{Y'}.$$Thus, by $U(k)$-equivariance and since $U(k)Y'=Y$, the inclusion $Gr^{\mu}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]\to Gr^{\mu}\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega_Y^{s}$ is of the form considered in 
Theorem \ref{konskomop}, tensored with (the constant sheaf generated by) $\widetilde{M}_{\mu}$. Hence we may conclude by Theorem \ref{konskomop}.\hfill$\Box$ \section{Sheaves of integral structures in $K[G]$-modules} \label{secints} The action of $G={\rm GL}_{d+1}(K)={\rm GL}(K^{d+1})$ on $(K^{d+1})^*=\Hom_K(K^{d+1},K)$ defines an action of $G$ on the affine $K$-scheme associated with $(K^{d+1})^*$, and this action passes to an action of $G$ on the projective space ${\mathbb P}((K^{d+1})^*)$. Drinfel'd's symmetric space $X$ is the $K$-rigid space$$X={\mathbb P}((K^{d+1})^*)-(\mbox{the union of all $K$-rational hyperplanes}).$$Clearly $X$ is stable for the action of $G$. Let ${\mathfrak{X}}$ be the strictly semistable formal ${\mathcal O}_K$-scheme with generic fibre $X$ introduced in \cite{mus}. Instead of recalling its formal definition here we recall its basic properties relevant for us. ${\mathfrak{X}}$ is covered by Zariski open subschemes which admit open immersions into the $\pi$-adic formal completion of $\spec({\mathcal O}_K[X_0,\ldots,X_d]/(X_0\ldots X_d-\pi))$. Each irreducible component of the reduction ${\mathfrak X}\otimes k$ of ${\mathfrak X}$ is isomorphic to the $k$-scheme $Y$ studied in the previous sections. The set of all irreducible components of ${\mathfrak X}\otimes k$ is in natural bijection with the set of vertices of the Bruhat Tits building of ${\rm PGL}\sb {d+1}/K$. More generally, if for $j\ge 0$ we let $F^j$ denote the set of non-empty intersections of $(j+1)$-many pairwise distinct irreducible components of ${\mathfrak{X}}\otimes k$, then $F^j$ is in natural bijection with the set of $j$-simplices of the Bruhat Tits building of ${\rm PGL}\sb {d+1}/K$. This bijection is $G$-equivariant for the natural extension of the action of $G$ on $X$ to an action of $G$ on ${\mathfrak{X}}$. We denote by $Y$ the central irreducible component of ${\mathfrak X}\otimes k$, i.e. 
the irreducible component of ${\mathfrak X}\otimes k$ which is characterized by the fact that the subgroup $K^{\times}.{\rm GL}\sb {d+1}({{\mathcal O}_K})$ of $G$ is the stabilizer of $Y$ (for the action of $G$ on the set $F^0$ of irreducible components of ${\mathfrak X}\otimes k$). We identify this $k$-scheme $Y$ (with its ${\rm GL}\sb {d+1}({{\mathcal O}_K}$)-action) with the one from the previous sections. We define the subset $$F^0_{A}=T.Y$$of $F^0$, the orbit of $Y\in F^0$ for the action of $T$ on $F^0$. (This set corresponds to the set of vertices in the standard apartment of the Bruhat Tits building of ${\rm PGL}\sb {d+1}/K$.) For $Z\in F^0_{A}$ and $\mu\in X^*(T)$ we let $\overline{\mu}\in X^*(\overline{T})\otimes\frac{1}{d+1}{\mathbb{Z}}$ as before and define $\overline{\mu}(Z)\in\frac{1}{d+1}.\mathbb{Z}$ as$$\overline{\mu}(Z)=-\omega(\overline{\mu}(t))\quad\mbox{with}\,\,t\in T\,\,\mbox{such that}\,\,t.Y=Z.$$For $Z\in F^0$ let ${\mathfrak{U}}_Z$ be the maximal {\it open} formal subscheme of ${\mathfrak X}$ such that ${\mathfrak{U}}_Z\otimes_{{\mathcal O}_K}k$ is contained in $Z$. For example, the indicated identification of the central irreducible component $Y$ of ${\mathfrak X}\otimes k$ with the $k$-scheme $Y$ from the previous sections restricts to an identification of open subschemes ${\mathfrak{U}}_Y\otimes_{{\mathcal O}_K}k=Y^0$ (with $Y^0\subset Y$ as defined in the previous sections). Also note that the union $\cup_{Z\in F^0}{\mathfrak{U}}_Z$ is disjoint in ${\mathfrak X}$ and that the closed points of $\cup_{Z\in F^0}{\mathfrak{U}}_Z\otimes k$ are exactly the non-singular closed points of the $k$-scheme ${\mathfrak X}\otimes k$.\\ Let again $M$ be an irreducible $K$-rational representation of ${\rm GL}\sb {d+1}$ and fix a ${\rm GL}\sb {d+1}/{\mathcal O}_K$-invariant ${\mathcal O}_K$-lattice $M^0$ in $M$. 
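To illustrate the definition of $\overline{\mu}(Z)$, we record an example of ours (assuming the standard normalization $\omega(\pi)=1$):

```latex
% Example (ours): take Z = t.Y with t = diag(pi,1,...,1) in T. Writing
% mubar = sum_j abar_j(mu) eps_j as in (\ref{aquer}), and assuming omega(pi) = 1,
\[
\overline{\mu}(Z)=-\omega(\overline{\mu}(t))=-\overline{a}_0(\mu)\,\omega(\pi)=-\overline{a}_0(\mu),
\]
% while mubar(Y) = 0 (take t = 1). E.g. for d = 2 and mu = epsilon_0 one has
% abar_0(mu) = -2/3 and hence mubar(Z) = 2/3.
```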
Define the character $$\chi:G\to \dot{K}^{\times},\quad\quad g\mapsto\dot{\pi}^{-\omega(\det(g))}$$and let $G$ act on ${M}\otimes_K{\dot{K}}$ by multiplying with $\chi^{|M|}$ the (scalar extension $K\to\dot{K}$ of the) already given action of $G$ on ${M}$. The point of this twisting is that the ${\mathcal O}_{\dot{K}}$-submodule $M^0\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}$ of ${M}\otimes_K{\dot{K}}$ is now stable not just for ${\rm GL}\sb {d+1}({{\mathcal O}_K})$ but even for the full stabilizer $K^{\times}.{\rm GL}\sb {d+1}({{\mathcal O}_K})$ of $Y$ in ${\mathfrak X}$. Of course, if $|M|\in(d+1).\mathbb{Z}$ then we could replace the above twisting by a twisting with a suitable power of the determinant character of $G$, and the base extension $K\to\dot{K}$ here and in the whole construction below could be avoided. Let $\underline{M}_{\dot{K}}$ be the constant sheaf on ${\mathfrak{X}}$ with value $M\otimes_K\dot{K}$. The above action of $G$ on $M\otimes_K\dot{K}$ makes $\underline{M}_{\dot{K}}$ into a $G$-equivariant sheaf on ${\mathfrak{X}}$. Define $M_{\mu}^0$ as in Lemma \ref{intspan}. For $\mu\in X^{*}(T)$ let $\underline{M}^0_{\mu,{\mathcal{O}}_{\dot{K}}}$ be the constant subsheaf of $\underline{M}_{\dot{K}}$ with value $M^0_{\mu}\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}$. 
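The effect of the twist by $\chi^{|M|}$ can be checked on the central element $\pi.I_{d+1}$; this is our own verification, assuming the convention $\dot{\pi}^{d+1}=\pi$:

```latex
% For g = pi.I_{d+1} in K^x and any weight mu with M_mu != 0 (so that
% sum_i a_i = |M|, by Lemma \ref{dichot}):
\[
\mu(g)=\pi^{|M|},\qquad \omega(\det(g))=d+1,\qquad
\chi(g)^{|M|}=\dot{\pi}^{-(d+1)|M|},
\]
% so, assuming pi-dot^{d+1} = pi, the twisted action scales M_mu (x) K-dot by
\[
\pi^{|M|}\cdot\dot{\pi}^{-(d+1)|M|}=1 .
\]
% Thus pi.I_{d+1} acts trivially on M (x) K-dot, and stability of the lattice
% M^0 (x) O_{K-dot} under K^x . GL_{d+1}(O_K) reduces to stability under GL_{d+1}(O_K).
```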
For $Z\in F^0_{A}$ let \begin{gather}{\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\mathfrak{U}_{Z}}=\bigoplus_{\mu\in X^{*}(T)}\dot{\pi}^{(d+1)\overline{\mu}(Z)}\underline{M}^0_{\mu,{\mathcal{O}}_{\dot{K}}}|_{\mathfrak{U}_{Z}}.\label{numdef}\end{gather} \begin{pro}\label{latteas} Formula (\ref{numdef}) (for all $Z\in F^0_{A}$) defines a subsheaf $${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\bigcup_{Z\in F_A^0}\mathfrak{U}_{Z}}\subset\underline{M}_{\dot{K}}|_{\bigcup_{Z\in F_A^0}\mathfrak{U}_Z}.$$It extends to a $G$-stable subsheaf ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}$ of $\underline{M}_{\dot{K}}$ in finitely generated ${\mathcal O}_{\dot{K}}$-modules such that $${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_{\dot{K}}}\dot{K}=\underline{M}_{\dot{K}}.$$ \end{pro} {\sc Proof:} (Here we benefited from notes of Schneider and Teitelbaum.) We need some more notations. For $0\le i,j\le d$ and $i\ne j$ consider the morphism of algebraic groups over $\mathbb{Z}$\begin{gather}\widetilde{\alpha}_{ij}:\mathbb{G}_a\longrightarrow {\rm GL}\sb {d+1},\quad u\mapsto I_{d+1}+u.e_{ij}\label{alphaij}\end{gather}where $I_{d+1}+u.e_{ij}$ is the matrix $(u_{rs})$ with $u_{rr}=1$ (all $r$), with $u_{ij}=u$ and with $u_{rs}=0$ for all other pairs $(r,s)$. For the root $\alpha=\epsilon_i-\epsilon_j\in \Phi$ and $r\in\mathbb{R}$ let $$U_{\alpha,r}=\widetilde{\alpha}_{ij}(\{u\in K;\,\omega(u)\ge r\})\subset G.$$ For $x\in X_*(T)\otimes{\mathbb{R}}$ let$$U_x=\mbox{the subgroup of}\,\,G\,\,\mbox{generated by all}\,\,U_{\alpha,-\alpha(x)}\,\,\mbox{for}\,\,\alpha\in\Phi.$$Let $W$ be the subgroup of permutation matrices in $G$ and let $N=T\rtimes W$ be the normalizer of $T$ in $G$. Let now $g\in G$ and $Z\in F^0_{A}$ such that also $g.Z\in F^0_{A}$. 
We claim that $g:\underline{M}_{\dot{K}}|_{\mathfrak{U}_{Z}}\cong \underline{M}_{\dot{K}}|_{\mathfrak{U}_{g.Z}}$ restricts to an isomorphism$$g:{\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\mathfrak{U}_{Z}}\cong{\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\mathfrak{U}_{g.Z}}.$$We have a canonical projection from $X_*(T)\otimes{\mathbb{R}}$ to the standard apartment in the Bruhat Tits building of ${\rm PGL}\sb {d+1}/K$ (see \cite{schtei}). Suppose that $x\in X_*(T)\otimes{\mathbb{R}}$ projects to the vertex corresponding to $Z\in F^0_{A}$. (In the above mentioned correspondence between $F^0_{A}$ and vertices in the standard apartment.) By the Bruhat decomposition, there exist $h_x\in U_x$, $h_{gx}\in U_{gx}$ and $n\in N$ such that $g=h_{gx}nh_{x}$. Therefore we may split up our task into the following cases (i)-(iii):\\ (i) $g\in T$,\\(ii) $g\in W$,\\ (iii) $x=gx$ and $g\in U_x$ for some $x\in X_*(T)\otimes{\mathbb{R}}$. (i) Suppose $g\in T$. We claim that in this case $g$ even respects weight spaces: we prove that $g$ induces for any $\mu\in X^*(T)$ with $M_{\mu}\ne0$ an isomorphism$$g:\dot{\pi}^{(d+1)\overline{\mu}(Z)}M_{\mu}^0\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}\cong\dot{\pi}^{(d+1)\overline{\mu}(g.Z)}M_{\mu}^0\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}.$$Indeed, $g$ induces an isomorphism $$g:M_{\mu}^0\cong\pi^{\omega(\mu(g))}M_{\mu}^0.$$Thus, according to our conventions regarding the action of $G$ on $M\otimes_K\dot{K}$, it induces an isomorphism$$g:M_{\mu}^0\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}\cong\dot{\pi}^{(d+1)\omega(\mu(g))-|M|\omega({\rm{det}}(g))}M_{\mu}^0\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}.$$But $$(d+1)\omega(\mu(g))-|M|\omega({\rm{det}}(g))=(d+1)(-\omega(\overline{\mu}(g)))=(d+1)(\overline{\mu}(g.Z)-\overline{\mu}(Z))$$and the claim follows. (ii) Now $g\in W$. 
The isomorphisms $g:M_{\mu}\cong M_{g.\mu}$ restrict to isomorphisms $g:M_{\mu}^0\cong M_{g\mu}^{0}$ since $M^0\subset M$ is stable under ${\rm GL}\sb {d+1}({\mathcal O}_K)$. On the other hand $\mu(Z)=(g.\mu)(g.Z)$ and hence $\overline{\mu}(Z)=\overline{(g.\mu)}(g.Z)$ for $\mu\in X^*(T)$. It follows that $g$ induces isomorphisms$$\dot{\pi}^{(d+1)\overline{\mu}(Z)}M_{\mu}^{0}\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}\cong\dot{\pi}^{(d+1)\overline{(g.\mu)}(g.Z)}M_{g.\mu}^{0}\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}$$for any $\mu\in X^*(T)$ and we are done for such $g$. (iii) Now consider the case $x=g.x$ and $g\in U_x$ for some $x\in X_*(T)\otimes{\mathbb{R}}$. Then also $Z=g.Z$ and $\overline{\mu}(x)=\overline{\mu}(Z)$. We may assume $g\in U_{\alpha,-\alpha(x)}$ for some $\alpha=\epsilon_i-\epsilon_j\in\Phi$. Thus $g=\widetilde{\alpha}_{ij}(u)$ for some $u\in K$ with $\omega(u)\ge-\alpha(x)$. It suffices to show that the automorphism $g$ of $M$ induces an automorphism$$g:\bigoplus_{\mu\in X^*(T)}\dot{\pi}^{(d+1)\overline{\mu}(x)}M_{\mu}^0\cong\bigoplus_{\mu\in X^*(T)}\dot{\pi}^{(d+1)\overline{\mu}(x)}M_{\mu}^0.$$Now $\omega(u)\ge-\alpha(x)$ implies $\overline{\mu+n\alpha}(x)\le\overline{\mu}(x)+n\omega(u)$ for all $\mu\in X^*(T)$, all $n\in\mathbb{N}_0$. Therefore it is enough to prove \begin{gather}\widetilde{\alpha}_{ij}(u)(m)\subset\sum_{n\ge0}u^n.M_{\mu+n(\epsilon_i-\epsilon_j)}^0\label{liearg}\quad\quad (m\in M_{\mu}^0).\end{gather}To see this define $X_{\alpha}=(d\widetilde{\alpha}_{ij})(1)\in\mbox{Lie}({\rm GL}\sb {d+1}/\mathbb{Z})$ for $\alpha=\epsilon_i-\epsilon_j$ and then $$X_{\alpha,n}=\frac{X_{\alpha}^n}{n!}\in\mbox{Dist}({\rm GL}\sb {d+1}/\mathbb{Z})\quad\mbox{for}\quad n\ge0$$(compare \cite{jan} II.1.11 and 1.12). 
By \cite{jan} II.1.19 we have$$X_{\alpha,n}M_{\mu}\subset M_{\mu+n\alpha}\quad\mbox{and}\quad\widetilde{\alpha}_{ij}(u)(m)=\sum_{n\ge0}u^nX_{\alpha,n}(m).$$Since $X_{\alpha,n}$ is defined over $\mathbb{Z}$ we in turn have $X_{\alpha,n}M_{\mu}^0\subset M_{\mu+n\alpha}^0$ and formula (\ref{liearg}) follows. The above claim is established. It follows that on the dense open formal subscheme ${\bigcup_{Z\in F^0}\mathfrak{U}_{Z}}$ of $\mathfrak{X}$ (the union is disjoint) there is a unique $G$-stable subsheaf $${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\bigcup_{Z\in F^0}\mathfrak{U}_{Z}}\subset\underline{M}_{\dot{K}}|_{\bigcup_{Z\in F^0}\mathfrak{U}_Z}$$whose restriction to ${\mathfrak{U}_{Z}}$ for $Z\in F^0_{A}$ is ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\mathfrak{U}_{Z}}$ as defined by (\ref{numdef}). We define ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}$ on all of $\mathfrak{X}$ as the maximal ${\mathcal O}_{\dot{K}}$-module subsheaf of $\underline{M}_{\dot{K}}$ restricting to ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{\cup_{Z\in F^0}\mathfrak{U}_{Z}}$.\hfill$\Box$\\ We now wish to compute the reduction modulo $\dot{\pi}$ of ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}}$ in terms of our sheaves ${\widetilde{\mathcal M}}[{\mathcal O}_Y]$ living on the central irreducible component $Y$. For open formal subschemes ${\mathfrak{U}}$ of ${\mathfrak{X}}$ we write ${\mathcal O}_{\dot{{\mathfrak{U}}}}={\mathcal O}_{\mathfrak{U}}\otimes_{{\mathcal O}_K}{\mathcal O}_{\dot{K}}$. Recall that in section \ref{rera} we defined the open subscheme $Y'$ of $Y$ with $U(k)Y'=Y$, defined the $T(k)$-equivariant sheaf $\widetilde{\mathcal M}[{\mathcal O}_{Y'}]$ on $Y'$ with a $T(k)$-equivariant $X^*(T)$-indexed grading and extended it to a ${\rm GL}_{d+1}(k)$-equivariant sheaf $\widetilde{\mathcal M}[{\mathcal O}_{Y}]$ with a $U(k)$-equivariant $X^*(T)$-indexed filtration on $Y$. 
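For orientation, formula (\ref{liearg}) and the operators $X_{\alpha,n}$ can be made completely explicit for the standard representation; the following example is ours:

```latex
% Example (ours): M = K^{d+1} with M^0 = O_K^{d+1}, basis e_0,...,e_d, where
% e_j has weight mu = epsilon_j. For alpha = epsilon_i - epsilon_j one has
% X_{alpha,1} e_j = e_i and X_{alpha,n} e_j = 0 for n >= 2, so
\[
\widetilde{\alpha}_{ij}(u)(e_j)=(I_{d+1}+u.e_{ij})\,e_j=e_j+u\,e_i
\;\in\;M^0_{\mu}+u.M^0_{\mu+\alpha},
\]
% which is formula (\ref{liearg}) in this case (and trivially
% alpha-tilde_{ij}(u)(e_l) = e_l for l != j).
```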
We now perform a similar construction in our present global setting. Here the role of $Y'$ is played by ${\mathfrak Y}$: by definition, ${\mathfrak Y}$ is the open formal subscheme of ${\mathfrak X}$ such that for the open subscheme ${\mathfrak Y}\otimes k$ of ${\mathfrak X}\otimes k$ we have$${\mathfrak X}\otimes k-{\mathfrak Y}\otimes k=\bigcup_{Z\in F^0-F^0_{A}}Z.$$We have $U(K).{\mathfrak Y}={\mathfrak X}$. Moreover observe $Y'={\mathfrak Y}\cap Y$. For $Z\in F^0_{A}$ let ${\mathcal J}_{Z}\subset {\mathcal O}_{\mathfrak Y}$ be the ideal defining the closed subscheme $Z\cap {\mathfrak Y}$ inside ${\mathfrak Y}$. Note that ${\mathcal J}_{Z}$ is invertible inside ${\mathcal O}_{\mathfrak Y}\otimes_{{\mathcal O}_K}K$: indeed, small open formal subschemes of ${\mathfrak Y}$ admit open embeddings into the $\pi$-adic completion of $\spec({\mathcal O}_K[X_0,\ldots,X_d]/(X_0\ldots X_d-\pi))$, and for an appropriate numbering of $X_0,\ldots,X_d$ the element $X_0$ is a generator of ${\mathcal J}_{Z}$; in $K[X_0,\ldots,X_d]/(X_0\ldots X_d-\pi)$ its inverse is $\pi^{-1}X_1\ldots X_d$. Thus we may speak of negative integral powers of ${\mathcal J}_{Z}$ as ${\mathcal O}_{\mathfrak Y}$-submodules of ${\mathcal O}_{\mathfrak Y}\otimes_{{\mathcal O}_K}K$. Also note that on small open formal subschemes of ${\mathfrak Y}$ we have ${\mathcal J}_{Z}={\mathcal O}_{\mathfrak Y}$ for almost all $Z$, therefore the following infinite products of ${\mathcal O}_{{\mathfrak Y}}$-submodules in ${\mathcal O}_{\mathfrak Y}\otimes_{{\mathcal O}_K}{\dot{K}}$ make sense. 
For any $\mu\in X^*(T)$ we define the sheaf\begin{gather}({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}=\sum_{s=0}^d\dot{\pi}^s\prod_{Z\in F^0_{A}}{\mathcal J}_{Z}^{\lceil\overline{\mu}(Z)-\frac{s}{d+1}\rceil},\label{iii}\end{gather}on ${\mathfrak Y}$, the ${\mathcal O}_{\dot{\mathfrak Y}}$-submodule of ${\mathcal O}_{{\mathfrak{Y}}}\otimes_{{\mathcal O}_K}{\dot{K}}$ generated by the submodules $\dot{\pi}^s\prod_{Z\in F^0_{A}}{\mathcal J}_{Z}^{\lceil\overline{\mu}(Z)-\frac{s}{d+1}\rceil}$ for $s=0,\ldots,d$. Let $({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}$ be the unique ${U}(K)$-equivariant ${\mathcal O}_{\dot{\mathfrak X}}$-module subsheaf of ${\mathcal O}_{{\mathfrak{X}}}\otimes_{{\mathcal O}_K}{\dot{K}}$ (with its ${U}(K)$-action induced by that of $G$ on ${\mathfrak{X}}$) whose restriction to ${\mathfrak Y}$ is $({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}$. To describe the reduction of $({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}$ we need to parametrize the set ${\mathcal V}$ in terms of the action of $U(k)$ on it. For $\sigma\subsetneq\{0,\ldots,d\}$ let$${U}_{\sigma}=\{(a_{ij})_{0\le i,j\le d}\in{U}\quad|\quad a_{ij}=0\,\mbox{if}\,i\ne j\,\mbox{and}\,[j\notin\sigma\,\mbox{or}\,\{i,j\}\subset \sigma]\}.$$Let $${\mathcal N}=\{(\sigma,u)\quad|\quad\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}, u\in {U}_{\sigma}(k)\}.$$We have a bijection (see \cite{holdis})$${\mathcal N}\cong{\mathcal V},\quad\quad (\sigma,u)\mapsto u.V_{{\sigma}}$$and the set of orbits of ${U}(k)$ acting on the set ${\mathcal V}$ is in bijection with the set of all $\sigma$ with $\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}$. We will need the sheaves $({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}$ only for those $\mu$ with $M_{\mu}\ne0$. 
For such $\mu$ consider the partition of $F^0_A$, indexed by the $t\in\{0,1,\ldots,d\}$, into the subsets$$F_A^0(t)=\{Z\in F_A^0 \quad | \quad \overline{\mu}(Z)-\frac{t}{d+1}\in\mathbb{Z}\}.$$It provides the partition of $F^0$ into the subsets$$F^0(t)=U(K).F_A^0(t).$$Since $M$ is irreducible, all $\mu$ with $M_{\mu}\ne0$ differ by linear combinations of elements of $\Phi$ (see \cite{jan} II.2.2). For each such $\mu$, if we write $\overline{\mu}=\sum_{j=0}^d\overline{a}_j(\mu)\epsilon_j$ (cf. formula (\ref{aquer})), we have \begin{gather}\overline{a}_j(\mu)-\frac{|M|}{d+1}\in\mathbb{Z}\label{part}\end{gather}for all $0\le j\le d$. It follows that $F_A^0(t)$ and hence $F^0(t)$ does not depend on $\mu$ and moreover that $F^0(t)$ is non-empty if and only if $t\equiv n|M|$ modulo $(d+1)$ for some $n\in\mathbb{Z}$, or in other words: we have defined a partition of $F^0$ indexed by the multiples of (the class of) $|M|$ in $\mathbb{Z}/(d+1)$. This partition is stable for the action of ${\rm SL}_{d+1}(K)$ on $F^0$ (this again follows from equation (\ref{part})) while the action of the full group $G$ on $F^0$ can be used to cycle through the parts of this partition. Endow $$\overline{\mathfrak{X}(t)}=\bigcup_{Z\in F^0(t)}Z$$with its structure of reduced closed subscheme of ${\mathfrak{X}}\otimes k$. 
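The case $d=2$ illustrates this partition; the following computation is our own:

```latex
% Example (ours), d = 2:
% (1) trivial representation, |M| = 0: mubar = 0, so mubar(Z) = 0 for all Z,
%     hence F^0(0) = F^0 while F^0(1) and F^0(2) are empty;
% (2) standard representation, |M| = 1, mu = epsilon_0:
\[
\overline{\mu}(Y)=0\ \Rightarrow\ Y\in F^0_A(0),\qquad
\overline{\mu}(\mbox{\rm diag}(\pi,1,1).Y)=\tfrac{2}{3}\ \Rightarrow\
\mbox{\rm diag}(\pi,1,1).Y\in F^0_A(2),
\]
% since 2/3 - t/3 lies in Z only for t = 2; similarly diag(pi,pi,1).Y lies
% in F^0_A(1). So here all three parts are non-empty, matching the criterion
% "t = n|M| modulo (d+1) for some n".
```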
\begin{lem}\label{lbdlext} We have natural isomorphisms\begin{gather}({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{K}}}k\cong\bigoplus_{t\in\{0,\ldots,d\}} \frac{({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{{\mathcal O}_{\overline{\mathfrak{X}(t)}}\mbox{-torsion}},\label{lbdlzerldeninu}\end{gather}\begin{gather}{\mathcal L}_{Y}(\sum_{\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}}-\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil\sum_{u\in{U}_{\sigma}(k)}u.V_{\sigma})\cong \frac{({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{Y}}{{\mathcal O}_{Y}\mbox{-torsion}}.\label{lbdldeninu}\end{gather} \end{lem} {\sc Proof:} Let ${\mathfrak Y}(t)$ denote the maximal open formal subscheme of ${\mathfrak Y}$ such that the open subscheme ${\mathfrak Y}(t)\otimes k$ of ${\mathfrak Y}\otimes k$ is contained in $\cup_{Z\in F_A^0(t)}(Z\cap {\mathfrak Y})$. Let $$\overline{{\mathfrak Y}(t)}=\bigcup_{Z\in F^0(t)}(Z\cap{\mathfrak Y})$$with its reduced structure, or equivalently: $\overline{{\mathfrak Y}(t)}$ is the schematic closure of ${\mathfrak Y}(t)\otimes k$ in ${\mathfrak Y}\otimes k$. By formula (\ref{iii}) the restriction of $({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}$ to ${\mathfrak Y}(t)$ is the line bundle$$\dot{\pi}^{t}\prod_{Z\in F^0_{A}}{\mathcal J}_{Z}^{\lceil\overline{\mu}(Z)-\frac{t}{d+1}\rceil}|_{{\mathfrak Y}(t)}=\dot{\pi}^{t}\prod_{Z\in F^0_{A}(t)}{\mathcal J}_{Z}^{\overline{\mu}(Z)-\frac{t}{d+1}}|_{{\mathfrak Y}(t)}$$on ${\mathfrak Y}(t)$ (all ${\mathcal J}_{Z}|_{{\mathfrak Y}(t)}$ for $Z\in F^0_A-F^0_A(t)$ are trivial). 
We obtain: the reduction $({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{K}}}k$ of $({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}$ decomposes into a direct sum, indexed by the set $\{0,\ldots,d\}$, whose direct summand for $t\in\{0,\ldots,d\}$ is the image of the map$$\dot{\pi}^{t}\prod_{Z\in F^0_{A}}{\mathcal J}_{Z}^{\lceil\overline{\mu}(Z)-\frac{t}{d+1}\rceil}\longrightarrow({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\longrightarrow{({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{\overline{{\mathfrak Y}(t)}}.$$This is a line bundle on $\overline{{\mathfrak Y}(t)}$ and maps isomorphically to the quotient of ${({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{\overline{{\mathfrak Y}(t)}}$ divided by its ${\mathcal O}_{\overline{{\mathfrak Y}(t)}}$-torsion. Thus\begin{gather}({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{K}}}k\cong\bigoplus_{t\in\{0,\ldots,d\}}\frac{({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{\overline{{\mathfrak Y}(t)}}}{{\mathcal{O}}_{\overline{{\mathfrak Y}(t)}}\mbox{-torsion}}\label{deltaninu}\end{gather}and the direct summand for $t\in\{0,\ldots,d\}$ is an invertible ${\mathcal{O}}_{\overline{{\mathfrak Y}(t)}}$-module. Hence formula (\ref{lbdlzerldeninu}) by ${U}(K)$-equivariance. 
We also see $$\frac{({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(0)}}}{{\mathcal O}_{\overline{\mathfrak{X}(0)}}\mbox{-torsion}}\otimes_{{\mathcal O}_{\overline{\mathfrak{X}(0)}}}{\mathcal O}_{Y}=\frac{({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{Y}}{{\mathcal O}_{Y}\mbox{-torsion}}$$and that this is the unique ${U}(k)$-equivariant subsheaf of the constant sheaf $k(Y)$ on $Y$ whose restriction to $Y'=Y\cap\mathfrak{Y}$ is $$\frac{({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{Y'}}{{\mathcal{O}}_{Y'}\mbox{-torsion}}$$(for the uniqueness note that ${U}(k).Y'=Y$). If we define the divisor $D$ on $Y'$ by requiring$$\frac{({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{Y'}}{{\mathcal{O}}_{Y'}\mbox{-torsion}}={\mathcal L}_{Y'}(D)$$(as subsheaves of the constant sheaf $k(Y)$ on $Y'$), then by ${U}(k)$-equivariance of both of its sides and ${U}(k).Y'=Y$, to prove formula (\ref{lbdldeninu}) we only need to prove the identity of divisors$$D=\sum_{\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}}-\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil V_{\sigma}|_{Y'}$$on $Y'$. To see this, note that for $\emptyset\ne \sigma\subsetneq\{0,\ldots,d\}$ we have $V_{\sigma}=Z_{{\sigma}}\cap Y$ as divisors on $Y$; here we write $Z_{\sigma}=t_{\sigma}Y\in F^0_A$ with $t_{\sigma}\in T\subset G$ defined as $t_{\sigma}=\mbox{\rm diag}(t_{\sigma,0},\ldots,t_{\sigma,d})$ with $t_{\sigma,j}=1$ if $j\notin\sigma$ and $t_{\sigma,j}=\pi$ if $j\in\sigma$.
Now we only need to see that for $\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}$ the prime divisor $V_{\sigma}=Z_{{\sigma}}\cap Y$ occurs in $D$ with multiplicity $$-\lceil\overline{\mu}(Z_{\sigma})\rceil=-\lceil\sum_{j\in\sigma}\overline{a}_j(\mu)\rceil.$$But our discussion shows that $\frac{({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{Y'}}{{\mathcal{O}}_{Y'}\mbox{-torsion}}$ can be identified with the image of the map$$\prod_{\emptyset\ne\sigma\subsetneq\{0,\ldots,d\}}{\mathcal J}_{Z_{\sigma}}^{\lceil\overline{\mu}(Z_{\sigma})\rceil}\longrightarrow({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}\longrightarrow{({\mathcal O}_{\dot{\mathfrak Y}})^{\overline{\mu}}}\otimes_{{\mathcal O}_{\dot{\mathfrak{Y}}}}{\mathcal{O}}_{Y'}$$and we can read off the correct multiplicity.\hfill$\Box$ \begin{satz}\label{strumn} Let $\iota_Y:Y\to{\mathfrak X}$ denote the closed embedding. We have natural isomorphisms$$({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k\cong\bigoplus_{t\in\{0,\ldots,d\}}\frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{{\mathcal O}_{\overline{\mathfrak{X}(t)}}\mbox{-torsion}}$$\begin{gather}\frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_Y}{{\mathcal O}_Y\mbox{-torsion}}\cong \iota_{Y,*}({\widetilde{\mathcal M}}[{\mathcal O}_Y]).\label{kofo}\end{gather} \end{satz} {\sc Proof:} To prove Theorem \ref{strumn} it suffices by $U(K)$-equivariance (resp. by $U({\mathcal O}_K)$-equivariance for the isomorphism (\ref{kofo})) to prove the statements on the sheaves restricted to ${\mathfrak Y}$ (resp. to $Y'$ for the isomorphism (\ref{kofo})). 
There, by construction, ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}}$ decomposes into a direct sum indexed by the weights $\mu$ of $M$. A small computation in local coordinates shows that formula (\ref{numdef}) implies that the summand for $\mu$ is of the form $M_{\mu}^0\otimes_{{\mathcal O}_K}({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}$ so that Lemma \ref{lbdlext} proves the first isomorphism. The isomorphism (\ref{kofo}) now follows from formula (\ref{ystrdf}) (in view of Lemma \ref{lbdlext}).\hfill$\Box$\\ {\it Remark:} If $|M|\in(d+1).\mathbb{Z}$, or equivalently if $\overline{\mu}\in X^*(\overline{T})$ for all $\mu$ with $M_{\mu}\ne0$, then $F^0=F^0(0)$ and $({\mathcal O}_{\dot{\mathfrak X}})^{\overline{\mu}}$ is a line bundle on ${\mathfrak X}$ for each such $\mu$, and ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}}$ is a locally free ${\mathcal O}_{{\mathfrak{X}}}$-module sheaf on ${\mathfrak{X}}$. \section{Coherent cohomology via logarithmic differential forms} \label{hodese} Let ${\mathfrak S}$ be a strictly semistable formal ${\mathcal O}_K$-scheme. Endow ${\mathfrak S}$ and $\spf({\mathcal O}_K)$ with the log structure defined by the respective special fibre and let $\Omega^{\bullet}_{{\mathfrak S}}$ denote the logarithmic de Rham complex for the log smooth morphism of formal log schemes ${\mathfrak S}\to\spf({\mathcal O}_K)$. Let $\Omega^{\bullet}_S$ denote the push forward to ${\mathfrak S}$ of the de Rham complex on $S={\mathfrak S}\otimes_{{\mathcal O}_K}K$ (a $K$-rigid space). Let $T$ be an irreducible component of the special fibre ${\mathfrak S}\otimes k$ of ${\mathfrak S}$, let $T^0$ denote the maximal open subscheme of ${\mathfrak S}\otimes k$ which is contained in $T$. Then $T-T^0$ is a normal crossings divisor on the smooth $k$-scheme $T$. Let $\Omega_T^{\bullet}$ denote the de Rham complex on $T$ with logarithmic poles along $T-T^0$. 
\begin{lem}\label{logderacom} There are canonical isomorphisms of sheaf complexes$$\Omega^{\bullet}_S\cong\Omega^{\bullet}_{{\mathfrak S}}\otimes_{{\mathcal O}_K}K,\quad\quad\quad\Omega_T^{\bullet}\cong\Omega^{\bullet}_{{\mathfrak S}}\otimes_{{\mathcal O}_{{\mathfrak S}}}{\mathcal O}_T.$$ \end{lem} {\sc Proof:} The first isomorphism is clear. To prove the second one, let $T'$ (resp. $\spec(k)'$) denote the log scheme whose underlying scheme is $T$ (resp. $\spec(k)$) and whose log structure is the pull back of that of ${\mathfrak S}$ (resp. that of $\spf({\mathcal O}_K)$). In other words, $T'\to {\mathfrak S}$ and $\spec(k)'\to \spf({\mathcal O}_K)$ are {\it exact} closed immersions of log schemes. Then $T'$ is a log scheme over the base $\spec(k)'$ (in general not log smooth). Let $\Omega_{T'}^{1}$ be the logarithmic differential module of $T'\to\spec(k)'$. We have a morphism of log schemes $T'\to T$. By functoriality we get natural morphisms of sheaves$$\Omega^{1}_{T}\longrightarrow\Omega^{1}_{T'}\longleftarrow\Omega^{1}_{{\mathfrak S}}\otimes_{{\mathcal O}_{{\mathfrak S}}}{\mathcal O}_T.$$We claim that both are isomorphisms. To see this we may assume that ${\mathfrak S}$ is the formal $\pi$-adic completion of $\spec({\mathcal O}_K[X_0,\ldots,X_d]/(X_0\cdots X_s-\pi))$ for some $0\le s\le d$ and that the kernel of ${\mathcal O}_{{\mathfrak S}}\to{\mathcal O}_T$ is generated by $X_0$. Then these sheaves are canonically identified with the free ${\mathcal O}_T$-module with basis $\{\mbox{\rm dlog}(X_1),\ldots,\mbox{\rm dlog}(X_s),{\rm d}(X_{s+1}),\ldots,{\rm d}(X_d)\}$. The lemma follows.\hfill$\Box$\\ \begin{lem}\label{comalg} Let $A$ be a discrete valuation ring with uniformizer $\lambda\in A$, residue field $k$ and fraction field $F$. Let $M$ be a $\lambda$-torsion free $A$-module.\\(a) The $A$-module $$M'=\lim_{\leftarrow\atop n}M/\lambda^nM$$ is $\lambda$-torsion free.
For each $r\ge1$ the map $M/\lambda^rM\to M'/\lambda^rM'$ induced by the natural map $M\to M'$ is bijective; in particular we have $$M'=\lim_{\leftarrow\atop n}M'/\lambda^nM'.$$(b) Let $\widetilde{N}$ be a sub vector space of $M\otimes_AF$ and let $N=M\cap\widetilde{N}$ (intersection inside $M\otimes_AF$). The map $N\otimes_Ak\to M\otimes_Ak$ induced by the natural map $N\to M$ is injective. \end{lem} {\sc Proof:} (a) Suppose we are given $(m_n)_n\in M'$ and $s\ge1$ such that $\lambda^s(m_n)_n=0$ in $M'$. Let $n\ge1$ and choose $x\in M$ such that $\overline{x}=m_{n+s}\in M/\lambda^{n+s}M$ (where $\overline{x}$ denotes the image of $x$ in $M/\lambda^{n+s}M$). Then $\lambda^sm_{n+s}=0$ in $M/\lambda^{n+s}M$ implies $\lambda^sx\in\lambda^{n+s}M$, hence $x\in \lambda^{n}M$ since $M$ is $\lambda$-torsion free, hence $m_n=0$ in $M/\lambda^{n}M$ and we see that $M'$ is $\lambda$-torsion free. Next let $r\ge1$ and suppose we are given $(m_n)_n\in M'$. Let $m\in M$ be an arbitrary lift of $m_r\in M/\lambda^r M$. We find an element $(b_n)_n\in M'$ such that $\lambda^r b_n=\overline{m}-m_n\in M/\lambda^nM$ for all $n$ (here $\overline{m}$ denotes the class of $m$). Indeed, we know $\overline{m}-m_{n+r}\in \lambda^r M/\lambda^{n+r}M$. Choose $b'_n\in M$ with $\lambda^r b'_n=\overline{m}-m_{n+r}\in M/\lambda^{n+r}M$ and let $b_n$ be the image of $b'_n$ in $M/\lambda^{n}M$. Then $(b_n)_n\in M'$ because $(m_n)_n\in M'$ implies $\lambda^r(b'_{n+r}-b'_n)\in\lambda^{n+r}M$, hence $b'_{n+r}-b'_n\in\lambda^{n}M$ since $M$ is $\lambda$-torsion free. Now $\lambda^r((b_n)_n)=(\overline{m}-m_n)_n$ in $M'$, thus $m\in M$ and $(m_n)_n\in M'$ map to the same element in $M'/\lambda^r M'$. 
We have shown that $M/\lambda^rM\to M'/\lambda^rM'$ is surjective; the injectivity is clear.\\(b) If $n\in N$ maps to zero in $M\otimes_Ak$, i.e. $n=\lambda m$ for some $m\in M$, then $m=\lambda^{-1}n\in\widetilde{N}$, hence $m\in M\cap\widetilde{N}=N$ and thus $n\in\lambda N$, i.e. $n$ maps to zero in $N\otimes_Ak$.\hfill$\Box$\\ In the sequel, for sheaves ${\mathcal G}$ on $X$ we write ${\mathcal G}$ also for the push forward sheaf on $\mathfrak{X}$ under the specialization map $sp:X\to \mathfrak{X}$; we tacitly and repeatedly use Kiehl's result \cite{kiaub} that if ${\mathcal G}$ is coherent we have ${\mathbb R}^tsp_*{\mathcal G}=0$ for all $t>0$. Denote by $\Omega^{\bullet}_{\mathfrak{X}}$ the logarithmic de Rham complex of the log smooth morphism of formal log schemes ${\mathfrak{X}}\to\spf({\mathcal O}_K)$, where we give the source and the target the respective log structures defined by the special fibres. Note that by Lemma \ref{logderacom} we have canonical identifications $$\Omega^{\bullet}_{\mathfrak{X}}\otimes_{{\mathcal O}_K}K=\Omega^{\bullet}_X,\quad\quad\quad\quad\Omega^{\bullet}_{\mathfrak{X}}\otimes_{{\mathcal O}_{\mathfrak{X}}}{{\mathcal O}_Y}=\Omega^{\bullet}_Y.$$ Recall that we view $X$ as a subspace of ${\mathbb P}_K^d$. For $0\le s\le d$ let ${\mathcal Log}^s$ be the $K$-vector subspace of $\Omega_X^s(X)$ generated by logarithmic differential $s$-forms on ${\mathbb P}_K^d$ with logarithmic poles along $K$-rational hyperplanes. In other words, ${\mathcal Log}^s$ is generated by $s$-forms $\eta$ of the type\begin{gather}\eta=\mbox{\rm dlog}(y_1)\wedge\ldots\wedge\mbox{\rm dlog}(y_s)\label{frofo}\end{gather}for which there exists a suitable (adapted to $\eta$) choice of projective coordinate system $\theta_0,\ldots,\theta_d$ on ${\mathbb P}_K^d$ (i.e. a suitable (adapted to $\eta$) isomorphism of $K$-varieties ${\mathbb P}_K^d\cong\mbox{\rm Proj}(K[\theta_0,\ldots,\theta_d])$) such that $y_j=\theta_j/\theta_0\in {\mathcal O}_X^{\times}(X)$ for all $1\le j\le s$. Clearly ${\mathcal Log}^s$ is a $G$-stable subspace of $\Omega_X^s(X)$.
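As an illustration of this definition in the case $d=1$ (an example only, not needed later): write $z=\theta_1/\theta_0$ for the standard affine coordinate. For distinct $a,b\in K$ the function $(z-a)/(z-b)$ is a unit on $X$, and choosing a projective coordinate system with $\theta'_1/\theta'_0=(z-a)/(z-b)$ shows that$$\mbox{\rm dlog}\Big(\frac{z-a}{z-b}\Big)=\frac{{\rm d}z}{z-a}-\frac{{\rm d}z}{z-b}$$belongs to ${\mathcal Log}^1$; thus for $d=1$ the space ${\mathcal Log}^1$ contains all such differences of logarithmic poles along $K$-rational points of ${\mathbb P}^1_K$.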
Let ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}$ be the $G$-equivariant integral structure in the constant sheaf $\underline{M}_{\dot{K}}$ defined in section \ref{secints}. For an open quasi-compact subscheme $U$ of ${\mathfrak X}\otimes k$ we have $M\otimes_K\dot{K}\otimes_K\Omega_X^s(U)=({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})(U)\otimes K$, hence for such $U$ we may view $M\otimes_K\dot{K}\otimes_K\Omega_X^s(X)$ and consequently also $M\otimes_K\dot{K}\otimes_K{\mathcal Log}^s$ as being contained in $({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})(U)\otimes K$. Therefore we may define$${\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})(U)=M\otimes_K\dot{K}\otimes_K{\mathcal Log}^s\quad\bigcap\quad({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})(U),$$the intersection taking place inside $({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})(U)\otimes K$. Since the restriction maps of the sheaf ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$ are injective we have thus defined a $G$-stable subsheaf ${\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})$ of ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$. For open $U\subset{\mathfrak X}\otimes k$ we further let $${\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})(U)=\lim_{\leftarrow\atop n}\frac{{\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})(U)}{\dot{\pi}^n{\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})(U)}.$$This defines a sheaf ${\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})$ with $G$-action which by Lemma \ref{comalg} is $\dot{\pi}$-adically complete and $\dot{\pi}$-torsion free. 
Since also ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$ is $\dot{\pi}$-adically complete and $\dot{\pi}$-torsion free we have a $G$-equivariant map\begin{gather}{\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\longrightarrow {\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}.\label{keymap}\end{gather}Recall that we view the $k$-scheme $Y$ from section \ref{cohmodp} as (the central) irreducible component of ${\mathfrak X}\otimes k$; in this way the open subscheme $Y^0\subset Y$ is also open in ${\mathfrak X}\otimes k$. \begin{lem}\label{lofore} ${\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes k$ is a $G$-equivariant subsheaf of $({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k$ which on $Y^0$ restricts to $\widetilde{{M}}\otimes\underline{\mathbb L}^s_Y|_{Y^0}$. \end{lem} {\sc Proof:} From Lemma \ref{comalg} (b) we know that the inclusion ${\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\to{\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$ induces an injective map of sheaves\begin{gather}{\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes_{{\mathcal O}_{\dot{K}}}k\longrightarrow({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k.\label{injre}\end{gather}From Lemma \ref{comalg} (a) we know that the map$${\mathcal Log}_{alg}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes_{{\mathcal O}_{\dot{K}}}k\longrightarrow{\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes_{{\mathcal O}_{\dot{K}}}k$$is an epimorphism of sheaves. 
Together we conclude that the natural map$${\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes_{{\mathcal O}_{\dot{K}}}k\longrightarrow({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k$$is injective and that its image is the same as that of (\ref{injre}). We now prove our statement concerning the restriction to $Y^0$ of this image sheaf. Since ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}|_{Y^0}$ is the constant sheaf generated by the free ${\mathcal O}_{\dot{K}}$-module $M^0\otimes_{{\mathcal O}_{{K}}}{\mathcal O}_{\dot{K}}$ with reduction $\widetilde{M}=M^0/\pi.M^0$, it is clear that we may assume $M=K$, the trivial representation. What we must show then is$$({\mathcal Log}^s\bigcap\Gamma(Y^0,\Omega^{s}_{\mathfrak X}))\otimes k={\mathbb{L}}_Y^{s}$$for all $s\ge0$. For $s=0$ both sides are identified with $k$, and the case $s>1$ is reduced to the case $s=1$ by taking exterior products. Thus we assume $s=1$. The containment of the left hand side in the right hand side is clear. Let now $z^{n}\mbox{\rm dlog}(z)$ be a typical generator of ${\mathbb{L}}_Y^{1}$ as in equation (\ref{frofred}); we need to show that it lies in $({\mathcal Log}^1\bigcap\Gamma(Y^0,\Omega^{1}_{\mathfrak X}))\otimes k$ (here $z=y_1$ for $y_1,\ldots,y_d$ as in equation (\ref{frofred})). The case $n=0$ is clear, and the case $n<0$ is reduced to the case $n>0$ by observing $\mbox{\rm dlog}(z)=-\mbox{\rm dlog}(z^{-1})$; thus we assume $n>0$. We lift the system $y_1,\ldots,y_d$ on $Y$ to a system $y_1,\ldots,y_d$ on $X$ as in equation (\ref{frofo}), and we also write $z=y_1$ for the lifted $y_1$. Choose pairwise distinct $a_0,\ldots,a_{n}\in{\mathcal{O}}_K$.
Since the matrix $(a_i^j)_{0\le i,j\le n}$ is invertible over $K$ (Vandermonde) we may find $x_0,\ldots,x_n\in{\mathcal{O}}_K$ such that, if we set $c_j=\sum_{i=0}^{n}x_ia_i^j$, then $c_j=0$ for $0\le j<n$ and $c_n\ne0$ (possibly a very small $c_n$, since $(a_i^j)_{0\le i,j\le n}$ may not be invertible over ${\mathcal{O}}_K$). Write $c_j=\sum_{i=0}^{n}x_ia_i^j$ for any $j\ge 0$. (For instance, if $n=1$ one may take $a_0=0$, $a_1=1$ and $x_0=-1$, $x_1=1$, giving $c_0=0$ and $c_j=1$ for all $j\ge1$.) For $m\in\mathbb{N}$ we have$$\sum_{i=0}^n\frac{x_i}{1-a_i\pi^mz}=\sum_{j=0}^{\infty}c_j\pi^{mj}z^j.$$Now fix $m\in \mathbb{N}$ such that $|\pi^mc_j|<|c_n|$ for all $j>n$ with $|c_j|>|c_n|$. Then $|c_j\pi^{mj}|<|c_n\pi^{mn}|$ for all $j>n$. Hence$$(c_n\pi^{mn})^{-1}\sum_{i=0}^nx_i\mbox{\rm dlog}(1-a_i\pi^mz)\in{\mathcal Log}^1\bigcap\Gamma(Y^0,\Omega^{1}_{\mathfrak X})$$lifts the form $z^{n}\mbox{\rm dlog}(z)\in{\mathbb{L}}_Y^{1}$.\hfill$\Box$\\ For $j, t\in\{0,\ldots,d\}$ let$$F^j(t)=\{Z\in F^j\quad|\quad Z=Z_0\cap\ldots\cap Z_j\mbox{ with }Z_i\in F^0(t)\mbox{ for all }0\le i\le j\}.$$Note that $F^j(t)$ is stable under ${\rm SL}_{d+1}(K)$ (because $F^0(t)$ is stable under ${\rm SL}_{d+1}(K)$). For any $t\in\{0,\ldots,d\}$ with $F^0(t)\ne \emptyset$ (i.e. with $t\equiv n|M|$ modulo $(d+1)$ for some $n\in\mathbb{Z}$), the minimal number $j$ with $F^j(t)=\emptyset$ is the quotient of $d+1$ by the order of (the class of) $|M|$ in $\mathbb{Z}/(d+1)$ (we set $F^{d+1}(t)=\emptyset$). \begin{satz}\label{logred} There is a canonical assignment which to $j, t, s\in\{0,\ldots,d\}$ and $Z\in F^j(t)$ assigns a sheaf ${\mathbb L}^{s}(\widetilde{\mathcal M})_Z$ on ${\mathfrak{X}}\otimes k$ with the following properties. ${\mathbb L}^{s}(\widetilde{\mathcal M})_Z$ is supported on $Z$. For $Z=Y\in F^0(0)$ we have ${\mathbb L}^{s}(\widetilde{\mathcal M})_Y=\iota_{Y,*}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]$ (as defined earlier).
There is an ${\rm SL}_{d+1}(K)$-stable direct sum decomposition$${\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes k\cong\bigoplus_{t\in\{0,\ldots,d\}}({\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes k)(t)$$and for each $t\in\{0,\ldots,d\}$ an ${\rm SL}_{d+1}(K)$-equivariant long exact sequence\begin{gather}0\longrightarrow ({\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes k)(t)\longrightarrow\bigoplus_{Z\in F^0(t)}{\mathbb L}^{s}(\widetilde{\mathcal M})_Z\longrightarrow\bigoplus_{Z\in F^1(t)}{\mathbb L}^{s}(\widetilde{\mathcal M})_Z\longrightarrow\ldots.\label{seqlofo}\end{gather} \end{satz} {\sc Proof:} The direct sum decomposition of $({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}{\mathcal O}_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k$ from Theorem \ref{strumn} yields the analogous decomposition $$({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k\cong\bigoplus_{t\in\{0,\ldots,d\}}\frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{{\mathcal O}_{\overline{\mathfrak{X}(t)}}\mbox{-torsion}}$$where the summand for $t\in\{0,\ldots,d\}$ is locally free on $\overline{\mathfrak{X}(t)}$. We have$$\frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(0)}}}{{\mathcal O}_{\overline{\mathfrak{X}(0)}}\mbox{-torsion}}\otimes_{{\mathcal O}_{\overline{\mathfrak{X}(0)}}}{\mathcal O}_Y=\iota_{Y,*}(\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega^s_Y).$$Now $\iota_{Y,*}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]\subset \iota_{Y,*}(\widetilde{\mathcal M}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega^s_Y)$ by the definition of $\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]$.
We let ${\mathbb L}^{s}(\widetilde{\mathcal M})_Y=\iota_{Y,*}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]$ and then we move this definition around by means of the action of $G$ to obtain for each $Z\in F^0(t)$ (any $t$) a subsheaf $${\mathbb L}^{s}(\widetilde{\mathcal M})_Z\subset \frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{{\mathcal O}_{\overline{\mathfrak{X}(t)}}\mbox{-torsion}}\otimes_{{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{\mathcal O}_Z.$$ We have ${\mathbb L}^{s}(\widetilde{\mathcal M})_Z|_{Z\cap Z'}={\mathbb L}^{s}(\widetilde{\mathcal M})_{Z'}|_{Z\cap Z'}$ for all $Z,Z'\in F^0(t)$ (because of $G$-equivariance: there are $g\in G$ which interchange $Z$ and $Z'$). This means that also for $j>0$ we obtain subsheaves$${\mathbb L}^{s}(\widetilde{\mathcal M})_Z\subset \frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{{\mathcal O}_{\overline{\mathfrak{X}(t)}}\mbox{-torsion}}\otimes_{{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{\mathcal O}_Z$$for each $Z\in F^j(t)$ and ${\rm SL}_{d+1}(K)$-stable subsheaves $${\mathcal F}(t)\subset \frac{({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{\mathfrak{X}}}}{\mathcal O}_{\overline{\mathfrak{X}(t)}}}{{\mathcal O}_{\overline{\mathfrak{X}(t)}}\mbox{-torsion}}$$such that there are long exact sequences$$0\longrightarrow {\mathcal F}(t)\longrightarrow\bigoplus_{Z\in F^0(t)}{\mathbb L}^{s}(\widetilde{\mathcal M})_Z\longrightarrow\bigoplus_{Z\in F^1(t)}{\mathbb L}^{s}(\widetilde{\mathcal M})_Z\longrightarrow\ldots.$$The restriction of ${\mathbb L}^{s}(\widetilde{\mathcal M})_Y=\iota_{Y,*}\widetilde{\mathcal M}[{\mathbb L}^{s}_Y]$ and hence of ${\mathcal F}(0)$ to the open subscheme $Y^0$ is just 
$\widetilde{{M}}\otimes\underline{\mathbb L}^s_Y|_{Y^0}$. In view of Lemma \ref{lofore} and $G$-equivariance we conclude that the subsheaves ${\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes k$ and $\oplus_{t\in\{0,\ldots,d\}}{\mathcal F}(t)$ of $({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k$ coincide when restricted to $Y^0$ and to each $G$-translate of $Y^0$ in ${\mathfrak{X}}\otimes k$. By their construction both these subsheaves are maximal inside $({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^s_{{\mathfrak{X}}})\otimes_{{\mathcal O}_{\dot{K}}}k$ with this given restriction to $Y^0$ and its $G$-translates, hence they coincide.\hfill$\Box$\\ \begin{pro}\label{globlo} If the weights of $M$ are small then the map (\ref{keymap}) induces for any $i$ an isomorphism$$H^i({\mathfrak{X}},{\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}))\cong H^i({\mathfrak{X}},{\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}).$$ \end{pro} {\sc Proof:} For the sheaves ${\mathcal F}={\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})$ and ${\mathcal F}={\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$ we have the spectral sequences\begin{gather} E_{2}^{pq}=R^p\lim_{\leftarrow\atop m}(H^q({\mathfrak X},{\mathcal F}_m))\Rightarrow H^{p+q}({\mathfrak X},\lim_{\leftarrow\atop m}{\mathcal F}_m)=H^{p+q}({\mathfrak X},{\mathcal F})\notag\end{gather}where $(.)_m$ denotes reduction modulo $\dot{\pi}^m$. The map (\ref{keymap}) induces a map between these spectral sequences and we see that it is enough to show\begin{gather}H^i({\mathfrak{X}},({\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}))_m)\cong H^i({\mathfrak{X}},({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})_m)\label{modem}\end{gather}for any $m\ge 1$, any $i\ge0$. 
Since ${\mathcal F}={\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})$ and ${\mathcal F}={\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$ are ${\mathcal O}_{\dot{K}}$-flat we get exact sequences of sheaves$$0\to{\mathcal F}_{m-1}\stackrel{\dot{\pi}^{m-1}}{\longrightarrow}{\mathcal F}_m\longrightarrow{\mathcal F}_1\longrightarrow0.$$Comparing the associated long exact cohomology sequences we reduce our task to proving the isomorphism (\ref{modem}) in the case $m=1$, i.e. to proving$$H^*({\mathfrak{X}},{\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})\otimes k)\cong H^*({\mathfrak{X}},{\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}\otimes k).$$First suppose $|M|\notin(d+1).\mathbb{Z}$. Then our hypothesis that the weights of $M$ be small implies that the order of (the class of) $|M|$ in $\mathbb{Z}/(d+1)$ is $d+1$, cf. the proof of Lemma \ref{wesm}. Then comparing Theorem \ref{logred} with the result from Theorem \ref{strumn} and using $G$-equivariance we reduce to proving$$H^i(Y,\widetilde{\mathcal M}[{\mathbb L}^{s}_Y])\cong H^i(Y,{\widetilde{\mathcal M}}[{\mathcal O}_Y]\otimes_{{\mathcal O}_Y}\Omega^s_Y)$$for any $i$. But this we did in Theorem \ref{klewe}. Now suppose $|M|\in(d+1).\mathbb{Z}$. Under our hypothesis that the weights of $M$ be small this means $M|_{{\rm SL}_{d+1}(K)}$ is trivial, hence ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}$ is the constant sheaf with value ${\mathcal O}_{\dot{K}}$. 
Since $\Omega^{s}_{{\mathfrak{X}}}\otimes k$ is locally free over ${\mathcal O}_{{\mathfrak{X}}\otimes k}$ we have an exact sequence$$0\longrightarrow\Omega^{s}_{{\mathfrak{X}}}\otimes k\longrightarrow\bigoplus_{Z\in F^0}\Omega^{s}_{{\mathfrak{X}}}\otimes{\mathcal O}_Z\longrightarrow\bigoplus_{Z\in F^1}\Omega^{s}_{{\mathfrak{X}}}\otimes{\mathcal O}_Z\longrightarrow\ldots.$$On the other hand the exact sequence (\ref{seqlofo}) becomes $$0\longrightarrow {\mathcal Log}^s(\underline{{\mathcal O}_{\dot{K}}})\otimes k\longrightarrow\bigoplus_{Z\in F^0}{\mathbb L}^{s}(\underline{k})_Z\longrightarrow\bigoplus_{Z\in F^1}{\mathbb L}^{s}(\underline{k})_Z\longrightarrow\ldots$$with each ${\mathbb L}^{s}(\underline{k})_Z$ the push forward to ${\mathfrak X}$ of a constant sheaf on $Z$ (which we denote by ${\mathbb L}^{s}(\underline{k})_Z$, too). Comparing, we reduce to proving$$H^*(Z,{\mathbb L}^{s}(\underline{k})_Z)\cong H^*(Z,\Omega^s_{{\mathfrak{X}}}\otimes{\mathcal O}_Z)$$for any $Z\in F^j$, any $j$. By $G$-equivariance we may assume $Z\subset Y$. Let $\iota_Y^Z:Z\to Y$ denote the closed embedding. The proof of Theorem \ref{logred} shows $(\iota_Y^Z)_*{\mathbb L}^{s}(\underline{k})_Z={\mathbb L}^{s}_Z(0)$ as defined in Theorem \ref{konskomop} (b). Hence we may conclude by that Theorem.\hfill$\Box$\\ {\it Remarks:} (1) From Proposition \ref{globlo} it follows (take $M=K$) that every bounded differential $s$-form on $X$, i.e. every element of $H^0({\mathfrak{X}},\Omega^{s}_{{\mathfrak{X}}})\otimes K$, is in fact logarithmic; in particular it is closed. Thus $H^0({\mathfrak{X}},\Omega^{s}_{{\mathfrak{X}}})\otimes K$ must be the space of bounded logarithmic differential $s$-forms on $X$ studied in \cite{iovspi} (if $\kara(K)=0$).\\ (2) Suppose $M=K$, the trivial $G$-representation. Let $W\omega^{\bullet}_{\mathfrak{X}}$ denote the logarithmic de Rham complex of the special fibre of ${\mathfrak{X}}$.
The same proof as for Proposition \ref{globlo} provides isomorphisms $$H^j({\mathfrak{X}},{\mathcal Log}^{\bullet}({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}))\cong H^j({\mathfrak{X}},W\omega^{\bullet}_{\mathfrak{X}})$$for any $j$, hence altogether isomorphisms$$H^j({\mathfrak{X}},\Omega^{{\bullet}}_{{\mathfrak{X}}})\cong H^j({\mathfrak{X}},W\omega^{\bullet}_{\mathfrak{X}}).$$Similarly, for the quotients ${\mathfrak{X}}_{\Gamma}$ of ${\mathfrak{X}}$ as in section \ref{norehose} we get by the same proof isomorphisms\begin{gather}H^j({\mathfrak{X}}_{\Gamma},\Omega^{{\bullet}}_{{\mathfrak{X}_{\Gamma}}})\cong H^j({\mathfrak{X}}_{\Gamma},W\omega^{\bullet}_{\mathfrak{X}_{\Gamma}}).\label{luc}\end{gather}These isomorphisms (\ref{luc}) are those constructed by Hyodo (see \cite{illast}) by means of $p$-adic \'{e}tale sheaves of vanishing cycles for general projective semistable schemes with ordinary reduction. They must not be confused with the Hyodo-Kato isomorphisms which are used to define the filtered $(\phi,N)$-modules that recover the $p$-adic \'{e}tale cohomology of the generic fibre of ${\mathfrak{X}}_{\Gamma}$. \section{The Hodge spectral sequence} \label{norehose} Let $\Gamma\subset {\rm SL}_{d+1}(K)$ be a discrete, torsionfree and cocompact subgroup. It is proved in \cite{mus} that the quotient ${\mathfrak{X}}_{\Gamma}=\Gamma\backslash{\mathfrak{X}}$ is the $\pi$-adic formal completion of a projective ${\mathcal O}_K$-scheme. Passing to a smaller $\Gamma$ if necessary we may assume that ${\mathfrak{X}}_{\Gamma}$ has strictly semistable reduction, i.e. all irreducible components of ${\mathfrak{X}}_{\Gamma}\otimes k$ are smooth. Let $X_{\Gamma}=\Gamma\backslash X={\mathfrak{X}}_{\Gamma}\otimes K$, the analytification of a smooth projective $K$-scheme. Let $M$ be a $K[\Gamma]$-module with $\dim_KM<\infty$; we write $\underline{M}={\mathcal M}$ for the constant sheaf on $X$, resp. on $\mathfrak{X}$, generated by $M$.
For a $\Gamma$-equivariant sheaf ${\mathcal F}$ on ${\mathfrak{X}}$ or $X$ we write ${\mathcal F}^{\Gamma}$ for the descended sheaf on ${\mathfrak{X}}_{\Gamma}$ or $X_{\Gamma}$. For example, the constant local system $\underline{M}={\mathcal M}$ on $\mathfrak{X}$ or $X$ gives rise to a (in general non-constant!) descended local system ${\mathcal {M}}^{\Gamma}$ on $\mathfrak{X}_{\Gamma}$ or $X_{\Gamma}$. We are interested in the cohomology of the sheaf complex ${\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}=(\underline{M}\otimes_K\Omega^{\bullet}_{X})^{\Gamma}$ on $\mathfrak{X}_{\Gamma}$ or $X_{\Gamma}$. The Hodge spectral sequence\begin{gather}E_1^{r,s}=H^s(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{r}_{X_{\Gamma}} )\Rightarrow H^{r+s}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} ) \label{hodss}\end{gather}gives rise to the Hodge filtration$$H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F^0_{H}\supset F^1_{H}\supset\ldots\supset F^{d+1}_{H}=0.$$ \begin{kor}\label{hosssp} If $M$ is a $K$-rational $G$-representation with small weights then the Hodge spectral sequence (\ref{hodss}) degenerates at $E_1$. The Hodge filtration $F_H^{\bullet}$ has a canonical splitting defined through logarithmic differential forms (at least after base extension $K\to\dot{K}$). \end{kor} {\sc Proof:} We may extend scalars $K\to\dot{K}$. We continue to use the same names for coherent sheaves on $X_{\Gamma}$ and for their push forward to ${\mathfrak X}_{\Gamma}$. We have an inclusion of sheaf complexes ${\mathcal Log}^{\bullet}({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})^{\Gamma}\otimes_{{\mathcal O}_{\dot{K}}} \dot{K}\to{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}\otimes_K\dot{K}$ on ${\mathfrak X}_{\Gamma}$ with trivial differentials on the former.
Therefore it is enough to prove that for any $0\le s\le d$ the natural maps$$H^*({\mathfrak X}_{\Gamma},{\mathcal Log}^{s}({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})^{\Gamma}\otimes_{{\mathcal O}_{\dot{K}}} \dot{K})\longrightarrow H^*({\mathfrak X}_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{s}_{{X}_{\Gamma}}\otimes_K\dot{K})$$are isomorphisms. Now $${\mathcal {M}}^{\Gamma}\otimes_K\Omega^{s}_{{X}_{\Gamma}}\otimes_K{\dot{K}}=({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})^{\Gamma}\otimes_{{\mathcal O}_{\dot{K}}}{\dot{K}}.$$Since ${\mathfrak X}_{\Gamma}$ is quasicompact, taking cohomology commutes with applying $(.)\otimes_{{\mathcal O}_{\dot{K}}}{\dot{K}}$. Therefore it will be enough to show$$H^*({\mathfrak X}_{\Gamma},{\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})^{\Gamma})\cong H^*({\mathfrak X}_{\Gamma},({\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}})^{\Gamma}).$$For both ${\mathcal F}={\mathcal Log}^s({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})$ and ${\mathcal F}={\mathcal M}^0_{{\mathcal O}_{\dot{K}}}\otimes_{{\mathcal O}_K}\Omega^{s}_{{\mathfrak{X}}}$ we have the spectral sequence$$E_2^{rt}=H^r(\Gamma, H^t({\mathfrak{X}},{\mathcal F}))\Rightarrow H^{r+t}({\mathfrak{X}}_{\Gamma},{\mathcal F}^{\Gamma}).$$ We conclude by Proposition \ref{globlo} (alternatively we could repeat the proof of Proposition \ref{globlo}). \hfill$\Box$\\ Let again $M$ be an arbitrary $K[\Gamma]$-module with $\dim_KM<\infty$. From now on we suppose $\kara(K)=0$. For an open subscheme $U$ of $\mathfrak{X}\otimes k$ we denote by $\overline{U}$ the Zariski closure of $U$ in ${\mathfrak{X}}\otimes k$, and by $]\overline{U}[=]\overline{U}[_{\mathfrak{X}}=sp^{-1}(\overline{U})$ its tube in $X$, the preimage under the specialization map $sp:X\to\mathfrak{X}\otimes k$. 
For $i\ge0$ we define the sheaf ${\mathbb{L}}^i(M)$ on ${\mathfrak{X}}\otimes k$ (or equivalently: on ${\mathfrak{X}}$) by setting\footnote{Logically the notation ${\mathbb{L}}$ as used here has nothing to do with the notation ${\mathbb{L}}$ as used in the previous sections; however, the ${\mathbb{L}}$'s play the same role in their respective contexts.}$${\mathbb{L}}^i(M)(U)=\ke[\underline{M}\otimes\Omega^{i}_{X}(]\overline{U}[)\longrightarrow\underline{M}\otimes\Omega^{i+1}_{X}(]\overline{U}[)]$$for open $U\subset{\mathfrak{X}}\otimes k$. We get a sheaf complex ${\mathbb{L}}^{\bullet}(M)$ on ${\mathfrak{X}}$ with trivial differentials. For $i\ge0$ let $\tau_i(\underline{M}\otimes\Omega^{\bullet}_X)$ be the subsheaf complex of $\underline{M}\otimes\Omega^{\bullet}_{X}$ on ${\mathfrak{X}}\otimes k$ whose value $\tau_i(\underline{M}\otimes\Omega^{\bullet}_X)(U)$ for open $U\subset{\mathfrak{X}}\otimes k$ is the complex $$\underline{M}\otimes\Omega^0_X(]\overline{U}[)\longrightarrow\ldots\longrightarrow\underline{M}\otimes\Omega^{i-1}_X(]\overline{U}[)\longrightarrow {\mathbb{L}}^i(M)(U)\longrightarrow0\longrightarrow\ldots.$$ We write $\tau_i({{\mathcal {M}}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})=\tau_i({\underline{M}}\otimes_K\Omega^{\bullet}_{X})^{\Gamma}$ for the descended sheaf complex on $\mathfrak{X}_{\Gamma}$ or $X_{\Gamma}$. 
For a complex $C^{\bullet}=(C^0\stackrel{d}{\to}C^1\stackrel{d}{\to}C^2\stackrel{d}{\to} \ldots)$ (of abstract groups, or sheaves) we put$$t_{\le i}C^{\bullet}=(C^0\stackrel{d}{\longrightarrow}\ldots\stackrel{d}{\longrightarrow}C^{i-1}\stackrel{d}{\longrightarrow}\ke(d)\stackrel{d}{\longrightarrow}0\stackrel{d}{\longrightarrow}\ldots).$$ \begin{pro}\label{acycanw} We have$$H^t({\mathfrak{X}},\tau_i({\underline{M}}\otimes_K\Omega^{\bullet}_{{X}}))= \left\{\begin{array}{l@{\quad:\quad}l}M\otimes_K H_{dR}^t(X)&0\le t\le i\\0&t>i\end{array}\right..$$In particular,$$H^d({\mathfrak{X}}_{\Gamma},\tau_i({{\mathcal {M}}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}))=H^d(\Gamma,M\otimes_Kt_{\le i}\Omega_X^{\bullet}(X)).$$ \end{pro} {\sc Proof:} We first deduce the second statement from the first one. Since $X$ is a Stein space we have $H^s(X,\Omega_X^r)=0$ for all $r\ge0$, all $s>0$ (see \cite{kiaub}), hence $$H^t_{dR}(X)=H^t(X,\Omega_X^{\bullet})=\frac{t_{\le t}\Omega_X^{\bullet}(X)}{t_{\le t-1}\Omega_X^{\bullet}(X)}[-t]$$for all $t$ (the last term is a complex concentrated in degree $0$). Together with the first statement we deduce that the natural map of sheaf complexes $\tau_i({\underline{M}}\otimes_K\Omega^{\bullet}_{{X}})\to M\otimes_K t_{\le i}\Omega_X^{\bullet}$ induces an isomorphism$${\mathbb R}\Gamma({\mathfrak{X}},\tau_i({\underline{M}}\otimes_K\Omega^{\bullet}_{{X}}))\cong M\otimes_Kt_{\le i}\Omega_X^{\bullet}(X).$$This gives the second statement. The first one will be deduced from de Shalit's acyclicity theorem. We may of course assume $M=K$. It will be enough to show$$H^t({\mathfrak{X}},\frac{\tau_i\Omega^{\bullet}_{{X}}}{\tau_{i-1}\Omega^{\bullet}_{{X}}})=\left\{\begin{array}{l@{\quad:\quad}l}H_{dR}^i(X)&t=i\\0&t\ne i\end{array}\right.$$where we set $\tau_{-1}(\underline{M}\otimes\Omega^{\bullet}_X)=0$.
For $T\in F^s$ (any $s$) let $${\mathcal Z}(T)=\{Z\in F^0\quad|\quad T\subset Z\}$$and $\dot{T}=\cup_{Z\in{\mathcal Z}(T)}Z$. For all sufficiently small open neighbourhoods $U\subset{\mathfrak{X}}\otimes k$ of a given closed point of ${\mathfrak{X}}\otimes k$ we have $\overline{U}=\dot{T}$ for some $T$. Then $]\overline{U}[=]\dot{T}[$ is a Stein space, hence$$H^i(]\overline{U}[,\frac{\tau_i\Omega^{\bullet}_{{X}}}{\tau_{i-1}\Omega^{\bullet}_{{X}}})=H_{dR}^i(]\overline{U}[).$$Therefore, if ${\mathcal{H}}_{dR}^i$ denotes the sheaf associated with the presheaf$$U\mapsto H_{dR}^i(]\overline{U}[)$$on ${{\mathfrak{X}}}$, then we must show$$H^t({{\mathfrak{X}}},{\mathcal{H}}_{dR}^i)=\left\{\begin{array}{l@{\quad:\quad}l}H^i_{dR}(X)&t=0\\0&t\ne0\end{array}\right..$$For $T\in F^s$ (any $s$) let $\dot{T}^0$ denote the maximal open subscheme of ${\mathfrak{X}}\otimes k$ which is contained in $\dot{T}$. We compute $H^t({{\mathfrak{X}}},{\mathcal{H}}_{dR}^i)$ as \v{C}ech cohomology with respect to the open covering ${{\mathfrak{X}}}=\bigcup_{T\in F^d}\dot{T}^0$. Note that for any collection $(T_1,\ldots,T_{r+1})\in (F^d)^{r+1}$ the intersection $\dot{T}_1^0\cap\ldots\cap\dot{T}^0_{r+1}$ is empty or equals $\dot{T}^0$ for some $T\in F^s$, some $s$. In the latter case it follows that $\overline{\dot{T}_1^0\cap\ldots\cap\dot{T}^0_{r+1}}=\dot{T}$. From the definition of ${\mathcal{H}}_{dR}^i$ we know on the other hand that for all $T\in F^s$, all $s$, we have $H^r(]\dot{T}[,{\mathcal{H}}_{dR}^i)=0$ for all $r>0$. Together we get$$H^r(\dot{T}_1^0\cap\ldots\cap\dot{T}^0_{r+1},{\mathcal{H}}_{dR}^i)=0$$for all $r>0$. Therefore it will be enough to show that the complex$$\prod_{T\in F^d}H^i_{dR}(]\dot{T}[)\longrightarrow\prod_{(T_1,T_2)\in(F^{d})^2}H^i_{dR}(]\dot{T_1}\bigcap\dot{T_2}[)\longrightarrow\ldots$$is a resolution of $H_{dR}^i(X)$.
By de Shalit's acyclicity theorem \cite{ds} (see also \cite{acy}) we know that the complex $$\prod_{T\in F^0}H^i_{dR}(]T[)\longrightarrow\prod_{T\in F^1}H^i_{dR}(]T[)\longrightarrow\ldots$$is a resolution of $H_{dR}^i(X)$. Both these complexes map to the total complex of the double complex$$K^{rs}=\prod_{(T_1,\ldots,T_{r+1}),T'}H_{dR}^i(]T'[)$$where the product is taken over all $(T_1,\ldots,T_{r+1})\in(F^{d})^{r+1}$ and all $T'\in F^s$ such that $T_1\cap\ldots\cap T_{r+1}\subset T'$. It will be enough to show that these maps are quasiisomorphisms. It is clear that for fixed $s$ the complex $K^{\bullet s}$ is a resolution of $\prod_{T\in F^s}H^i_{dR}(]T[)$. On the other hand it follows from Lemma \ref{lokazyk} below that for fixed $r$ the complex $K^{r\bullet}$ is a resolution of $$\prod_{(T_1,\ldots,T_{r+1})\in (F^{d})^{r+1}}H^i_{dR}(]\dot{T_1}\bigcap\ldots\bigcap\dot{T}_{r+1}[)$$and this completes the proof of \ref{acycanw}.\hfill$\Box$ \begin{lem}\label{lokazyk} (see \cite{acy} Corollary 2.9 (1)) For any $T\in F^s$ (any $s$) the sequence$$0\longrightarrow H_{dR}^i(]\dot{T}[_{\mathfrak{X}})\longrightarrow \prod_{Z\in{\mathcal Z}(T)}H_{dR}^i(]Z[_{\mathfrak{X}})\longrightarrow \prod_{R\subset {\mathcal Z}(T)\atop |R|=2}H_{dR}^i(]\bigcap_{Z\in R}Z[_{\mathfrak{X}})\longrightarrow\ldots$$is exact.\hfill$\Box$\\ \end{lem} We have the covering spectral sequence\begin{gather}E_2^{r,s}=H^r(\Gamma,M\otimes_KH^s_{dR}(X))\Rightarrow H^{r+s}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )\label{covss}\end{gather}which degenerates in $E_2$, as is shown in \cite{schn}. Denote by $$H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F^0_{\Gamma}\supset F^1_{\Gamma}\supset\ldots\supset F^{d+1}_{\Gamma}=0$$the filtration on $H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )$ induced by (\ref{covss}) (it turns out that the cohomology in other degrees is not interesting). 
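{\it Remark:} (a standard fact, recorded here for convenience; it is used implicitly in the proof of \ref{acycanw} and in the descriptions of the filtrations below) For any complex $C^{\bullet}$ the canonical truncation $t_{\le i}C^{\bullet}$ computes cohomology in degrees $\le i$:$$H^j(t_{\le i}C^{\bullet})=\left\{\begin{array}{l@{\quad:\quad}l}H^j(C^{\bullet})&j\le i\\0&j>i\end{array}\right..$$In particular the inclusion $t_{\le i}C^{\bullet}\to C^{\bullet}$ induces isomorphisms on $H^j$ for all $j\le i$.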
By \cite{schn} Theorem 2 and Proposition 2, section 1, we have for $i=0,\ldots,d+1$:\begin{gather}\dim_KF^i_{\Gamma}=\left\{\begin{array}{l@{\quad:\quad}l}(d+1-i)\mu(\Gamma,M)&\mbox{}\,\,d\,\,\mbox{is odd or}\,\,2i>d\\(d+1-i)\mu(\Gamma,M)+\dim_KM^{\Gamma}&\mbox{}\,\,d\,\,\mbox{is even and}\,\,2i\le d\end{array}\right.\label{pecomp}\\\mu(\Gamma,M)=\mu(\Gamma,M^*)\label{dimdu}\end{gather}Here $\mu(\Gamma,M)=\dim_KH^d(\Gamma,M)$ and $M^*=\Hom_K(M,K)$ and we must assume $d\ge2$.\\ {\bf Conjecture:} (Schneider) \begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_H^{i+1}\bigoplus F_{\Gamma}^{d-i}\label{notag}\end{gather}for all $0\le i\le d-1$. \begin{satz}\label{genfro} The following (i) and (ii) are equivalent:\\(i) The map\begin{gather}H^d({\mathfrak X}_{\Gamma},\tau_i({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}) \bigoplus{\mathcal {M}}^{\Gamma}\otimes_K\Omega_{{{X}}_{\Gamma},\ge i+1}^{\bullet})\to H^d({\mathfrak X}_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )\label{surzi}\end{gather}and the analogous map for $M^*$ and $d-i$ (instead of $M$ and $i+1$) are surjective.\\(ii) We have the decomposition\begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_H^{i+1}\bigoplus F_{\Gamma}^{d-i}\label{spliteins}\end{gather}and the analogous decomposition for $M^*$ and $d-i$ (instead of $M$ and $i+1$). 
\end{satz} {\sc Proof:} By definition we have \begin{align}F_H^{i+1}&=\bi[H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega_{X_{\Gamma},\ge i+1}^{\bullet})\to H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )]\notag \\F_{\Gamma}^{d-i}&=\bi[H^d(\Gamma,M\otimes_Kt_{\le i}\Omega_X^{\bullet}(X))\to H^d(\Gamma,M\otimes_K\Omega_X^{\bullet}(X))]\notag \\{} &=\bi[H^d(\Gamma,M\otimes_Kt_{\le i}\Omega_X^{\bullet}(X))\to H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )]\notag\end{align}(the last equality holds since $X$ is a Stein space). From \ref{acycanw} it then follows that\begin{gather}F_{\Gamma}^{d-i}=\bi[H^d({\mathfrak{X}}_{\Gamma},\tau_i({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}))\to H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )].\label{charga}\end{gather}This shows that (ii) implies (i). Conversely, if (\ref{surzi}) is surjective then (\ref{charga}) shows\begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_H^{i+1}+ F_{\Gamma}^{d-i}\label{sumhalb}\end{gather}and similarly, if the analog of (\ref{surzi}) with $M^*$ and $d-i$ (instead of $M$ and $i+1$) is surjective then the analog of (\ref{charga}) with $M^*$ and $d-i$ (instead of $M$ and $i+1$) shows the analog of (\ref{sumhalb}) with $M^*$ and $d-i$ (instead of $M$ and $i+1$). By a formal duality argument one then concludes that the sum in (\ref{sumhalb}) is in fact direct. This argument is easily extracted from the proof of \cite{iovspi} Theorem 5.4 and is worked out in a completely analogous situation in the proof of \ref{intelde} below. It rests on Serre duality on the smooth projective $K$-scheme underlying $X_{\Gamma}$ and the computations (\ref{pecomp}) and (\ref{dimdu}) of $\dim_KF^j_{\Gamma}$.\hfill$\Box$\\ {\bf Remark:} As we just saw, the surjectivity of (\ref{surzi}) alone implies (\ref{sumhalb}).
This is the sheaf cohomology analog of \cite{schn} p.631, Lemma 2 (ii). To ask in addition for the surjectivity of the analog of (\ref{surzi}) for $M^*$ and $d-i$ for obtaining $F_H^{i+1}\cap F_{\Gamma}^{d-i}=0$ is the strategy of \cite{iovspi}, an alternative to the strategy \cite{schn} p.631, Lemma 2 (i). \\ The inclusion of sheaf complexes ${\mathbb{L}}^{\bullet}(M)^{\Gamma}\to{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} $ induces a map$$\nabla(M):H^d({\mathfrak X}_{\Gamma},{\mathbb{L}}^{\bullet}(M)^{\Gamma})\longrightarrow H^d({\mathfrak X}_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} ).$$ \begin{kor}\label{nabkri} If $\nabla(M)$ and $\nabla(M^*)$ are surjective then (\ref{spliteins}) holds for all $0\le i\le d-1$. \end{kor} {\sc Proof:} The differential in the complex ${\mathbb{L}}^{\bullet}(M)^{\Gamma}$ is zero, consequently the inclusion$${\mathbb{L}}^{\bullet}(M)^{\Gamma}\longrightarrow\tau_i({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}) \bigoplus{\mathcal {M}}^{\Gamma}\otimes_K\Omega_{{{X}}_{\Gamma},\ge i+1}^{\bullet}$$is a morphism of complexes and \ref{genfro} proves the corollary.\hfill$\Box$\\ By \cite{mus} we know that ${X}_{\Gamma}$ is the analytification of a projective $K$-scheme ${X}_{\Gamma,alg}$. Similarly it follows from GAGA-theorems that the de Rham complex ${\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}$ on ${X}_{\Gamma}$ is the analytification of a complex $({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg}$ on ${X}_{\Gamma,alg}$. 
Consider the conjugate spectral sequence$$E_2^{pq}=H^p({X}_{\Gamma,alg},{\mathcal H}^q(({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg}))$$$$\Longrightarrow H^{p+q}({X}_{\Gamma,alg},({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg})=H^{p+q}({X}_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}).$$It gives rise to the conjugate filtration$$H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=H^d({X}_{\Gamma,alg},({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg})=F^0_{con}\supset F^1_{con}\supset\ldots\supset F^{d+1}_{con}=0.$$ \begin{pro}\label{conj} Assume\begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_H^{i+1}+F_{con}^{d-i}\label{splitcon}\end{gather}and the analogous decomposition for $M^*$ and $d-i$ (instead of $M$ and $i+1$). Then (\ref{spliteins}) holds. Conversely, if (\ref{spliteins}) holds then $F_H^{i+1}\cap F_{con}^{d-i}=0$. \end{pro} {\sc Proof:} In general we have $$F_{con}^{d-i}=\bi[H^d({X}_{\Gamma,alg},t_{\le i}({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg})\to H^d({X}_{\Gamma,alg},({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg})].$$Let ${\mathfrak X}_{\Gamma,alg}$ denote the ${\mathcal O}_K$-scheme (constructed in \cite{mus}) of which ${\mathfrak X}_{\Gamma}$ is the $\pi$-adic formal completion and ${X}_{\Gamma,alg}$ the generic fibre. If $t:{\mathfrak X}_{\Gamma}\to{\mathfrak X}_{\Gamma,alg}$ and $j:{X}_{\Gamma,alg}\to{\mathfrak X}_{\Gamma,alg}$ denote the natural maps then we have a canonical transformation$${\mathbb R}j_*t_{\le i}({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})^{alg}\longrightarrow t_*\tau_i({\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}).$$From (\ref{charga}) it then follows that $F_{con}^{d-i}\subset F_{\Gamma}^{d-i}$.
Therefore our hypothesis implies (\ref{sumhalb}) and we may conclude as in the proof of \ref{genfro}.\hfill$\Box$\\ {\it Remarks:} (1) Observe that \ref{conj} formulates a purely algebraic approach to the splitting conjecture. In particular it invites trying to find a non-$p$-adic proof of the splitting conjecture. This remark may in particular be relevant in cases where ${X}_{\Gamma,alg}$ is the base change to $K$ of a Shimura variety defined over a global number field, see \cite{rz}. In these cases the complexes ${\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}$ occur in the de Rham complexes of the relative de Rham cohomology (with Gauss-Manin connection) of powers of the universal abelian scheme. Using the criterion \ref{conj} one may hope to prove the splitting conjecture with global methods ! \\ (2) From the point of view of $p$-adic Hodge theory the relevance comes from the following fact: in \cite{hk} it is shown that if ${\mathcal M}$ is endowed with a structure of isoclinic $F$-isocrystal, then $H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )$ receives a Frobenius structure and $F_{\Gamma}^{\bullet}$ is its corresponding (renumbered) slope filtration. \section{The reduced Hodge spectral sequence} \label{rehose} For general $M$ the Hodge spectral sequence (\ref{hodss}) does not degenerate in $E_1$. For {\it rational} representations $M$ Schneider constructs a new ('reduced') Hodge spectral sequence computing $H^{*}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )$ which he conjectures to degenerate in $E_1$. We discuss his conjecture in this section. If $\Xi_0,\ldots,\Xi_{d}$ denote the standard projective coordinate functions on $\mathbb{P}^{d}_K$, then $z_j=\Xi_j/\Xi_0$ for $j=1,\ldots,d$ are holomorphic functions on $X$. 
Let$$\overline{u}(z)= \left(\begin{array}{cc}1&-z_1\quad\cdots\quad-z_d\\{0}&I_d\end{array}\right)\in{\rm SL}\sb {d+1}({\mathcal O}_X(X)).$$Let now $M$ be an irreducible $K$-rational representation of ${\rm GL}\sb {d+1}$. Suppose it has highest weight $(\lambda_0\ge\lambda_1\ge\ldots\ge\lambda_d)$. By this we mean that there exists a non zero vector $m\in M$ such that $K.m$ is stable under upper triangular matrices and generates $M$ as a $G$-representation, and such that $gm=\prod_{i=0}^da_i^{\lambda_i}m$ for all diagonal matrices $g=e_0(a_0)\cdots e_d(a_d)\in G$. Assume $\lambda_d=0$. We grade $M$ by setting$$\mbox{\rm gr}^r M=\{m\in M\quad|\quad e_0(a_0)m=a_0^{\lambda_0-r}m\,\,\mbox{for all}\,\,a_0\in K\}$$for $r\in\mathbb{Z}$, and we filter $M$ by setting$$f^rM=\bigoplus_{r'\ge r}\mbox{\rm gr}^{r'}M.$$Then $f^{\lambda_0+1}M=0$ and $f^0M=M$. We get a corresponding filtration of the constant sheaf $\underline{M}$ on $\mathfrak{X}$ and on $X$. We filter $\underline{M}\otimes_K\Omega^{j}_X$ by setting$$f^r(\underline{M}\otimes_K\Omega^{j}_X)={\mathcal O}_X.\overline{u}(z)(f^r\underline{M})\otimes_{{\mathcal O}_X}\Omega^j_X.$$We let\begin{gather}{\mathcal F}^{r,\bullet}=[f^{r}(\underline{M}\otimes_K\Omega^0_X)\longrightarrow f^{r-1}(\underline{M}\otimes_K\Omega^1_X)\longrightarrow f^{r-2}(\underline{M}\otimes_K\Omega^2_X)\longrightarrow\ldots].\label{filder}\end{gather} It follows from \cite{schn} that this is an ${\rm SL}\sb {d+1}(K)$-stable filtration of $\underline{M}\otimes_K\Omega^{\bullet}_X$ by subcomplexes (notations and normalizations in loc. cit. are different, but equivalent). We obtain the spectral sequence\begin{gather}E_1^{r,s}=h^{r+s}({\mathcal F}^{r,\bullet}/{\mathcal F}^{r+1,\bullet})\Rightarrow h^{r+s}(\underline{M}\otimes_K\Omega^{\bullet}_X)\label{ogusfil}.\end{gather}The following is \cite{schn} Lemma 9, section 3 (observe that $X$ is a Stein space).
\begin{pro}\label{peterco} (Schneider) The terms $\underline{D}^j(M)=E_1^{\lambda_0-\lambda_j+j,\lambda_j-\lambda_0}$ for $0\le j\le d$ are the only non vanishing $E_1$-terms in (\ref{ogusfil}).\hfill$\Box$\\ \end{pro} We define ${\rm SL}\sb {d+1}(K)$-invariant subobjects $B^j$ and $Z^j$ of $\underline{M}\otimes_K\Omega^{j}_X$ by requiring$$f^{\lambda_0-\lambda_j+1}(\underline{M}\otimes_K\Omega^{j}_X)\subset B^j\subset Z^j\subset f^{\lambda_0-\lambda_j}(\underline{M}\otimes_K\Omega^{j}_X),$$$$Z^j/f^{\lambda_0-\lambda_j+1}(\underline{M}\otimes_K\Omega^{j}_X)=\ker(\delta^j_{\lambda_j-j}),\quad\quad B^j/f^{\lambda_0-\lambda_j+1}(\underline{M}\otimes_K\Omega^{j}_X)=\bi(\delta^{j-1}_{\lambda_j-j})$$where $\delta^j_t:{\mathcal F}^{\lambda_0-t,j}/{\mathcal F}^{\lambda_0-t+1,j}\to{\mathcal F}^{\lambda_0-t,j+1}/{\mathcal F}^{\lambda_0-t+1,j+1}$ is the differential. Now \ref{peterco} implies (compare the proof of \cite{schn} Theorem 3, section 3) that\begin{gather}Z^0\longrightarrow Z^1+dB^0\longrightarrow Z^2+dB^1\longrightarrow\ldots\longrightarrow Z^d+dB^{d-1}\label{intco}\end{gather}is a subcomplex of $\underline{M}\otimes_K\Omega^{\bullet}_X$ such that the inclusion into $\underline{M}\otimes_K\Omega^{\bullet}_X$ is a quasiisomorphism. Moreover it implies that for any $j$ the map$$Z^j+dB^{j-1}\longrightarrow Z^j/B^j=\underline{D}^j(M)$$$$z+db\mapsto z\mod B^j$$is well defined and that if we take via these maps the quotient complex$$\underline{D}^0(M)\longrightarrow \underline{D}^1(M)\longrightarrow \underline{D}^2(M)\longrightarrow\ldots \longrightarrow \underline{D}^d(M)$$ of (\ref{intco}), then this quotient map is a quasiisomorphism, too. Hence we obtain an ${\rm SL}\sb {d+1}(K)$-equivariant isomorphism between $\underline{M}\otimes_K\Omega^{\bullet}_X$ and $\underline{D}^{\bullet}(M)$ in the derived category $D({\mathfrak{X}})$ of abelian sheaves on ${\mathfrak{X}}$. Let again $\Gamma<{\rm SL}\sb {d+1}(K)$ be as before.
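{\it Example:} (an elementary illustration of the grading and filtration introduced above; not needed in the sequel) Let $M={\rm Sym}^k(K^{d+1})$ be the $k$-th symmetric power of the standard representation, with basis the monomials $x_0^{i_0}\cdots x_d^{i_d}$ where $i_0+\ldots+i_d=k$. Its highest weight is $(k,0,\ldots,0)$, with highest weight vector $x_0^k$, and$$\mbox{\rm gr}^rM=x_0^{k-r}\cdot{\rm Sym}^r(Kx_1\oplus\ldots\oplus Kx_d),\quad\quad f^rM=\bigoplus_{i_0\le k-r}K\cdot x_0^{i_0}\cdots x_d^{i_d},$$since $e_0(a_0)$ acts on a monomial by $a_0^{i_0}=a_0^{\lambda_0-r}$ exactly when $i_0=k-r$; indeed $f^0M=M$ and $f^{k+1}M=0$.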
Consider the spectral sequences\begin{gather}E_1^{st}=H^{s+t}(X_{\Gamma},({\mathcal F}^{s,\bullet}/{\mathcal F}^{s+1,\bullet})^{\Gamma})\Rightarrow H^{s+t}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )\label{tilho}\end{gather} \begin{gather}E_1^{st}=H^t(X_{\Gamma},\underline{D}^{s}(M)^{\Gamma})\Rightarrow H^{s+t}(X_{\Gamma},\underline{D}^{\bullet}(M)^{\Gamma})=H^{s+t}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} ).\label{redss}\end{gather}The latter is called the 'reduced' Hodge spectral sequence computing our object of interest $H^{*}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )$. Let $$H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )={F}^0_I\supset{F}^1_{I}\supset\ldots\supset{F}^{\lambda_0+d}_{I}\supset{F}^{\lambda_0+d+1}_{I}=(0)$$be the filtration induced by $(\ref{tilho})$, let $$H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F^0_{red}\supset F^1_{red}\supset\ldots\supset F^d_{red}\supset F^{d+1}_{red}=(0)$$be the filtration induced by $(\ref{redss})$. These filtrations have the same jumps; namely, from \ref{peterco} it follows that for all $d\ge j\ge 1$ we have\begin{gather}F_{red}^j={F}_I^{\lambda_0-\lambda_{j-1}+j}={F}_I^{\lambda_0-\lambda_{j-1}+j+1}=\ldots={F}_I^{\lambda_0-\lambda_j+j}.\label{redhod}\end{gather} The irreducible $K$-rational ${\rm GL}\sb {d+1}$-representation$${M}^*=\ho_K(M,K)\otimes\det{}^{\lambda_0}$$has highest weight $({\lambda}_0^*\ge\ldots\ge{\lambda}_d^*)$ with ${\lambda}_d^*=0$, where ${\lambda}_i^*=\lambda_0-\lambda_{d-i}$ for $0\le i\le d$.
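{\it Example:} (an elementary check of the weight formula, not part of the original argument) For $d=2$ and $M$ of highest weight $(\lambda_0,\lambda_1,0)$ one computes$$\lambda_0^*=\lambda_0-\lambda_2=\lambda_0,\quad\quad\lambda_1^*=\lambda_0-\lambda_1,\quad\quad\lambda_2^*=\lambda_0-\lambda_0=0,$$so ${M}^*$ has highest weight $(\lambda_0,\lambda_0-\lambda_1,0)$. In general $\lambda_0^*=\lambda_0$ and $\lambda_d^*=0$, so the normalization $\lambda_d=0$ is preserved under $M\mapsto {M}^*$.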
A straightforward computation shows that the filtration $(f^r{M}^*)_r$ of ${M}^*$ is dual to the filtration $(f^r{M})_r$ of ${M}$, in the sense that the canonical perfect pairing$$M\times {M}^*\longrightarrow K$$induces perfect pairings$$\mbox{\rm gr}^{\lambda_0-j}M\times \mbox{\rm gr}^j {M}^*\longrightarrow K$$for any $0\le j\le \lambda_0={\lambda}_0^*$. These are not ${\rm SL}\sb {d+1}(K)$-equivariant objects. However, applying the ${\rm SL}\sb {d+1}(K)$-equivariance of the pairings$$\underline{M}\otimes_K\Omega^i_{X}\times{\underline{M}}^*\otimes_K{\Omega}^{d-i}_{X}\longrightarrow{\Omega}^d_{X},$$$$(m\otimes \eta,m^*\otimes\omega)\mapsto m^*(m)\eta\wedge\omega$$to the action of the element $\overline{u}(z)$ one deduces perfect pairings$$\frac{f^{\lambda_0-j}(\underline{M}\otimes_K\Omega^i_X)}{f^{\lambda_0-j+1}(\underline{M}\otimes_K\Omega^{i}_X)}\times \frac{f^j({\underline{M}}^*\otimes_K\Omega^{d-i}_X)}{f^{j+1}({\underline{M}}^*\otimes_K\Omega_X^{d-i})}\longrightarrow\Omega^d_X.$$Clearly they are compatible with the differential when $i$ varies, hence we obtain ${\rm SL}\sb {d+1}(K)$-equivariant perfect pairings$$\underline{D}^i(M)\times \underline{D}^{d-i}({M}^*)\longrightarrow \Omega^d_X.$$Passing to $\Gamma$-invariant sheaves on ${\mathfrak X}_{\Gamma}$ resp.
${X}_{\Gamma}$ we get the perfect pairing$$\underline{D}^i(M)^{\Gamma}\times \underline{D}^{d-i}({M}^*)^{\Gamma}\longrightarrow \Omega^d_{{X}_{\Gamma}}.$$In particular, Serre duality on the smooth projective $K$-scheme $X_{\Gamma}$ gives us perfect pairings\begin{gather}H^s(X_{\Gamma},\underline{D}^i(M)^{\Gamma})\times H^{d-s}(X_{\Gamma},\underline{D}^{d-i}({M}^*)^{\Gamma})\longrightarrow K.\label{serred}\end{gather} {\bf Conjecture:} (Schneider) For all $0\le i\le d-1$ we have \begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_{red}^{i+1}\bigoplus F_{\Gamma}^{d-i}.\notag\end{gather} \begin{satz}\label{intelde} If $\nabla(M)$ and $\nabla(M^*)$ are surjective then $F_{red}^{\bullet}=F_{H}^{\bullet}$ and\begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_{red}^{i+1}\bigoplus F_{\Gamma}^{d-i}\quad\quad(0\le i\le d-1).\label{splitzwei}\end{gather}\end{satz} {\sc Proof:} (i) We first claim that there exists an ${\rm SL}\sb {d+1}(K)$-equivariant morphism of sheaf complexes $$\nu:{\mathbb{L}}^{\bullet}(M)\longrightarrow \underline{D}^{\bullet}(M)$$ which in $D({\mathfrak{X}})$ coincides with the inclusion of sheaf complexes ${\mathbb{L}}^{\bullet}(M)\to \underline{M}\otimes_K\Omega_{{X}}^{\bullet}$, via the previous isomorphism between $\underline{M}\otimes_K\Omega^{\bullet}_X$ and $\underline{D}^{\bullet}(M)$ in $D({\mathfrak{X}})$.\\ For any $j$ denote by $d^j:\underline{M}\otimes_K\Omega_X^j\to\underline{M}\otimes_K\Omega_X^{j+1}$ the differential. By \ref{peterco} we know that $d^{i-1}$ induces a surjection\begin{gather}\underline{M}\otimes\Omega^{i-1}_X\longrightarrow\frac{\ke(d^i)}{f^{\lambda_0-\lambda_i}(\underline{M}\otimes\Omega^{i}_X)\cap\ke(d^i)}.\label{pecoco}\end{gather}Now let $\omega\in{\mathbb{L}}^{i}(M)$. Choose an element $\alpha\in \underline{M}\otimes\Omega^{i-1}_X$ which maps under (\ref{pecoco}) to the class represented by $\omega$.
Then $d^{i-1}(\alpha)-\omega$ lies in $f^{\lambda_0-\lambda_i}(\underline{M}\otimes\Omega^{i}_X)\cap\ke(d^i)$ and we define $\nu(\omega)$ as its class in$$\frac{f^{\lambda_0-\lambda_i}(\underline{M}\otimes\Omega^{i}_X)\cap\ke(d^i)}{f^{\lambda_0-\lambda_i+1}(\underline{M}\otimes\Omega^{i}_X)+d^{i-1}(f^{\lambda_0-\lambda_i+1}(\underline{M}\otimes\Omega^{i-1}_X))}\subset\underline{D}^i(M).$$That $\nu$ has the stated property follows from \ref{peterco}. (ii) Next we claim \begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_{red}^{i+1}+F_{\Gamma}^{d-i}.\label{sumall}\end{gather}The map $\nu$ from (i) induces a surjective map$$H^{d}(X_{\Gamma},{\mathbb{L}}^{\bullet}(M)^{\Gamma})\longrightarrow H^{d}(X_{\Gamma},\underline{D}^{\bullet}(M)^{\Gamma})=H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} ).$$This follows from \ref{nabkri} and the stated property of $\nu$. Let$$F_{\gamma}^{d-i}=\bi[H^{d}(X_{\Gamma},t_{\le i}{\mathbb{L}}^{\bullet}(M)^{\Gamma})\longrightarrow H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )],$$$$F_{\mathbb{L}}^{i+1}=\bi[H^{d}(X_{\Gamma},{\mathbb{L}}^{\bullet}(M)_{\ge i+1}^{\Gamma})\longrightarrow H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )].$$Then $F_{\mathbb{L}}^{i+1}\subset F_{red}^{i+1}$, again by (i), and $F_{\gamma}^{d-i}\subset F_{\Gamma}^{d-i}$, by \ref{acycanw} (since $t_{\le i}{\mathbb{L}}^{\bullet}(M)\subset \tau_i(\underline{M}\otimes_K\Omega_{{X}}^{\bullet})$). Since ${\mathbb{L}}^{\bullet}(M)=t_{\le i}{\mathbb{L}}^{\bullet}(M)\oplus{\mathbb{L}}^{\bullet}(M)_{\ge i+1}$ we get (\ref{sumall}). (iii) (The remaining arguments are copied from the proof of \cite{iovspi} Theorem 5.4.)
Let us denote by $\check{F}_{\Gamma}^{\bullet}$ and $\check{F}_{red}^{\bullet}$ the filtrations on $H^{d}(X_{\Gamma},{\mathcal{M}}^{*,{\Gamma}}\otimes_K\Omega^{\bullet}_{X_{\Gamma}})=H^{d}(X_{\Gamma},\underline{D}^{\bullet}({M}^*)^{\Gamma})$ analogous to the filtrations ${F}_{\Gamma}^{\bullet}$ and $F_{red}^{\bullet}$ on $H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )$. Here we claim\begin{align}\dim_K(H^{d}(X_{\Gamma},{\mathcal{M}}^{*,{\Gamma}}\otimes_K\Omega^{\bullet}_{X_{\Gamma}}))&= \dim_K(H^{d}(X_{\Gamma},{\mathcal {M}}^{{\Gamma}}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} ))\notag \\{} &=\dim_K(\check{F}_{red}^{d-i})+\dim_K(F_{red}^{i+1}).\notag\end{align}From the perfect pairings (\ref{serred}) we get perfect pairings$$H^{d}(X_{\Gamma},\underline{D}^{\bullet}(M)_{\ge i+1}^{\Gamma})\times H^{d}(X_{\Gamma},\underline{D}^{\bullet}({M}^*)^{\Gamma}_{\le d-i-1})\longrightarrow K$$$$H^{d}(X_{\Gamma},\underline{D}^{\bullet}(M)^{\Gamma})\times H^{d}(X_{\Gamma},\underline{D}^{\bullet}({M}^*)^{\Gamma})\longrightarrow K$$which commute with each other in the obvious sense. Thus $(F_{red}^{i+1})^{\bot}=\check{F}_{red}^{d-i}$ and claim (iii) follows. (iv) The theorem is well known in case $d=1$, thus we assume $d\ge2$.
From formula (\ref{pecomp}) we get$$\dim_K(F_{\Gamma}^{d-i})+\dim_K(F_{\Gamma}^{i+1})=\dim_K(H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )).$$This formula together with (\ref{sumall}) implies\begin{gather}\dim_K(F_{red}^{i+1})\ge\dim_K(H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} ))-\dim_K(F_{\Gamma}^{d-i})=\dim_K(F^{i+1}_{\Gamma}).\label{ung}\end{gather}We compute\begin{align}\dim_K(F_{red}^{i+1})&=\dim_K(H^{d}(X_{\Gamma},{\mathcal{M}}^{*,{\Gamma}}\otimes_K\Omega^{\bullet}_{X_{\Gamma}}))-\dim_K(\check{F}_{red}^{d-i})\notag\\{}&\le\dim_K(\check{F}^{i+1}_{\Gamma})=\dim_K(F^{i+1}_{\Gamma}).\notag\end{align}Here the first equality follows from claim (iii), the inequality uses formula (\ref{ung}) for ${M}^*$ instead of $M$, and the last equality is a consequence of the formulae (\ref{pecomp}) and (\ref{dimdu}). Altogether we see that in (\ref{ung}) we even have equality, which concludes the proof of (\ref{splitzwei}) in view of (\ref{sumall}). (v) We have $F_{\mathbb{L}}^{i+1}\subset F_{red}^{i+1}$ and $F_{\gamma}^{d-i}\subset F_{\Gamma}^{d-i}$ (see (ii)) as well as $F_{\mathbb{L}}^{i+1}\subset F_{H}^{i+1}$. On the other hand $F_{\gamma}^{d-i}+F_{\mathbb{L}}^{i+1}=H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})$ by the surjectivity of $\nabla(M)$. Since we have $F_{red}^{i+1}\cap F_{\Gamma}^{d-i}=0=F_{H}^{i+1}\cap F_{\Gamma}^{d-i}$ we find $F_{\gamma}^{d-i}=F_{\Gamma}^{d-i}$ and $F_{red}^{i+1}=F_{\mathbb{L}}^{i+1}=F_{H}^{i+1}$.\hfill$\Box$\\ Denote by ${\mathbb L}_D^{\bullet}(M)$ the subsheaf complex of $\underline{D}^{\bullet}(M)$ on ${\mathfrak X}\otimes k$ defined by $${\mathbb L}_D^{i}(M)(U)=\ke[\underline{D}^{i}(M)(]U[)\longrightarrow\underline{D}^{i+1}(M)(]U[)]$$for open $U\subset{\mathfrak X}\otimes k$.
The inclusion ${\mathbb L}_D^{\bullet}(M)^{\Gamma}\to\underline{D}^{\bullet}(M)^{\Gamma}$ induces a map$$\theta(M):H^d(\mathfrak{X}_{\Gamma},{\mathbb L}_D^{\bullet}(M)^{\Gamma})\longrightarrow H^d(\mathfrak{X}_{\Gamma},\underline{D}^{\bullet}(M)^{\Gamma}).$$ \begin{satz}\label{redcri} (a) If $\theta(M)$ and $\theta(M^*)$ are surjective then we have the decomposition (\ref{splitzwei}).\\(b) The following two statements (i) and (ii) are equivalent:\\(i) For any $i,j$ the following map is bijective:$$H^j(\mathfrak{X}_{\Gamma},{\mathbb L}_D^{i}(M)^{\Gamma})\longrightarrow H^j(\mathfrak{X}_{\Gamma},\underline{D}^{i}(M)^{\Gamma})$$(ii) We have (\ref{splitzwei}), and the reduced Hodge spectral sequence $(\ref{redss})$ degenerates in $E_1$. \end{satz} {\sc Proof:} (a) Proposition \ref{acycanw} also holds if $\tau_i(\underline{M}\otimes\Omega_X^{\bullet})$ is replaced by$$\tau_i\underline{D}^{\bullet}(M)=[\underline{D}^{0}(M)\longrightarrow\ldots\longrightarrow \underline{D}^{i-1}(M)\longrightarrow{\mathbb L}_D^{i}(M)\longrightarrow\ldots].$$Indeed, in view of the quasiisomorphism of {\it sheaf} complexes $\underline{D}^{\bullet}(M)\cong\underline{M}\otimes\Omega_X^{\bullet}$ this version is in fact reduced to \ref{acycanw}. As in \ref{genfro} we therefore obtain\begin{gather}F_{\Gamma}^{d-i}=\bi[H^d({\mathfrak{X}}_{\Gamma},\tau_i\underline{D}^{\bullet}(M)^{\Gamma})\to H^{d}(\mathfrak{X}_{\Gamma},\underline{D}^{\bullet}(M)^{\Gamma})]\label{gamred}\end{gather}and we get claim (a) just as in \ref{nabkri} and/or \ref{intelde}. Claim (b) is then also clear, again using (\ref{gamred}).\hfill$\Box$\\ {\it Remark:} In \cite{schn} it is conjectured that $(\ref{redss})$ always degenerates in $E_1$. Thus \ref{redcri} (b)(i) should be a sufficient {\it and necessary} condition to prove the decomposition (\ref{splitzwei}) ! 
\begin{kor}\label{gammafi} Suppose that $M$ is a $K$-rational $G$-representation with small weights.\\(a) The splitting in Corollary \ref{hosssp} is given by the filtration $F_{\Gamma}^{\bullet}$:\begin{gather}H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}} )=F_H^{i+1}\bigoplus F_{\Gamma}^{d-i}\quad\quad(0\le i\le d-1).\label{smaspli}\end{gather}(b) We have $F_H^{\bullet}=F_{red}^{\bullet}$ in $H^{d}(X_{\Gamma},{\mathcal {M}}^{\Gamma}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}})$. \end{kor} {\sc Proof:} This is the combination of Corollary \ref{nabkri} and Theorem \ref{intelde} with Corollary \ref{hosssp}. We may pass to the base field extension $K\to\dot{K}$. Then we have inclusions of sheaf complexes$${\mathcal Log}^{\bullet}({\mathcal M}^0_{{\mathcal O}_{\dot{K}}})^{\Gamma}\otimes_{{\mathcal O}_{\dot{K}}} \dot{K}\longrightarrow{\mathbb{L}}^{\bullet}(M)^{\Gamma}\otimes_K\dot{K}\longrightarrow{\mathcal {M}}\otimes_K\Omega^{\bullet}_{{X}_{\Gamma}}\otimes_K\dot{K}$$and similarly for $M^*$ instead of $M$. Thus Corollary \ref{hosssp} implies that $\nabla(M)$ and $\nabla(M^*)$ are surjective and (a) follows from \ref{nabkri} and (b) follows from \ref{intelde}.\hfill$\Box$\\ {\it Remarks:} (1) The decomposition (\ref{smaspli}) was proven for the trivial representation $M=K$ for the first time by Iovita and Spiess \cite{iovspi}. Our present proof appears to provide a geometric underpinning of the one given in \cite{iovspi}.\\(2) The degeneration of the Hodge spectral sequence (\ref{hodss}) is of course well known for $M=K$ and $\kara(K)=0$. On the other hand, for more general $K$-rational representations $M$ than those considered in \ref{hosssp} it cannot be expected to degenerate (see \cite{schn}).\\(3) Let $I$ be a $K[\Gamma]$-module (with $\dim_KI<\infty$) which contains a $\Gamma$-stable free ${\mathcal O}_K$-lattice $I^0$.
Let $\sigma\in{\rm Gal}(K/{\mathbb Q}_p)$, let $M$ be a $K$-rational $G$-representation and let $M_{\sigma}$ denote $M$ but with the $K$-vector space structure twisted by $\sigma$ --- then $G$ acts on $M_{\sigma}$ again by $K$-linear automorphisms. Everything we did in this paper with the local system defined by $M$ carries over to the local system defined by $I\otimes_K M_{\sigma}$: simply replace every occurrence of ${\mathcal M}^0_{{\mathcal O}_{\dot{K}}}$ by ${I}^0\otimes_{{\mathcal O}_K}{\mathcal M}^0_{{\mathcal O}_{\dot{K}},\sigma}$.\\(4) Let $\breve{K}$ denote the (completed) maximal unramified extension of $K$. The formal scheme $\mathfrak{X}\times\spf({\mathcal O}_{\breve{K}})$ carries a certain universal $G$-equivariant formal group ${\mathcal G}$, see \cite{rz}. If $K={\mathbb Q}_p$ the de Rham complex ${\mathfrak E}\otimes\Omega_{X\dot{\otimes}\breve{K}}^{\bullet}$ of its relative Dieudonn\'e module ${\mathfrak E}$ (as a filtered convergent $F$-isocrystal on $\mathfrak{X}\times\spf({\mathcal O}_{\breve{K}})$) can be identified with a sum of $d+1$ copies of $K^{d+1}\otimes_K\Omega_{X\dot{\otimes}_K\breve{K}}^{\bullet}$, filtered as in (\ref{filder}) and with isotypical Frobenius action of slope $d/(d+1)$. From our results in \cite{hk} it follows that the filtration $F_{\Gamma}^{\bullet}$ on $H^d(X_{\Gamma}\otimes \breve{K},{\mathfrak E}^{\Gamma}\otimes\Omega_{X_{\Gamma}\otimes \breve{K}}^{\bullet})$ is the (renumbered) slope filtration. Hence \ref{gammafi} states that the slope filtration on $H^d(X_{\Gamma}\otimes \breve{K},{\mathfrak E}^{\Gamma}\otimes\Omega_{X_{\Gamma}\otimes \breve{K}}^{\bullet})$ is opposite to the Hodge filtration. By the comparison isomorphisms of $p$-adic Hodge theory this is a statement on the cohomology of the relative Tate module of the $\Gamma$-quotient of ${\mathcal G}$.
\section{Introduction} \label{sec:introduction} Sound-source localization (SSL) is an important task for many applications, e.g., robot audition, video conferencing, hearing aids, to cite just a few. In the framework of human-inspired binaural hearing, two interaural cues are widely used for SSL, namely the interaural phase difference (IPD) and the interaural level difference (ILD) \cite{viste2004,willert2006,stern2006,raspaud2010,woodruff2012,deleforge2015acoustic,deleforge2015colocalization}. \addnote[cla_general]{1}{In the general case where the sensor array is not free-field, i.e. the microphones are placed inside the ears of a dummy head or on a robot head, the interaural cues are frequency-dependent due to the effects on sound propagation induced by the shape of the outer ears, head and torso \cite{blauert1997}. This is true even for anechoic recordings, i.e. in the absence of reverberations. SSL is then based on the relationship between interaural cues and direction of arrival (DOA) of the emitting source. } When the short-time Fourier transform (STFT) is used, the ILD and IPD correspond to the magnitude and argument, respectively, of the relative transfer function (RTF), which is the ratio between the acoustic transfer functions (ATF) of the two channels \cite{gannot2001}. In a reverberant environment, the RTF contains both direct-path information, namely the direct wave propagation path from the source location to the microphone locations, and information representing early and late reverberations. Extracting the direct path is of crucial importance for SSL. In an anechoic and noise-free environment the source direction can be easily estimated from the RTF. However, in practice, noise and reverberations are often present and contaminate SSL estimation. 
In the presence of noise, based on the stationarity of the noise and the non-stationarity of the desired signal, the RTF was estimated in \cite{gannot2001} by solving a set of linear equations, and in \cite{dvorkind2005} by solving a set of nonlinear decorrelation equations. In \cite{dvorkind2005}, the time difference of arrival (TDOA) was estimated based on the RTF, and a TDOA tracking method was also proposed. These methods have the limitation that a significant number of noisy frames is included in the estimation. An RTF identification method based on the probability of speech presence and on spectral subtraction was proposed in \cite{cohen2004}: this method uses only the frames which are highly likely to contain speech. The unbiased RTF estimator proposed in \cite{mine2015assp} is based on segmental power spectral density matrix subtraction, which is a more efficient method to remove noise compared with the approaches just mentioned. The performance of these spectral subtraction techniques was analyzed and compared with eigenvalue decomposition techniques in \cite{markovich2015}. The RTF estimators mentioned above assume a multiplicative transfer function (MTF) approximation \cite{avargel2007spl}, i.e., the source-to-microphone filtering process is assumed to be represented by a multiplicative process in the STFT domain. Unfortunately, this is only justified when the length of the filter impulse response is shorter than the length of the STFT window, which is rarely the case in practice. Moreover, the RTF is usually estimated from the ratio between two ATFs that include reverberation, rather than from the ratio between ATFs that only correspond to the direct-path sound propagation. Therefore, currently available RTF estimators are poorly suited for SSL in reverberant environments. The influence of reverberation on the interaural cues is analyzed in \cite{zannini2011}.
The relative early transfer function was introduced in \cite{schwartz2015} to suppress reverberation. Several techniques were proposed to extract the RTF that corresponds to the direct-path sound propagation, e.g., based on detecting time frames with less reverberation. The precedence effect, e.g., \cite{litovsky1999}, widely used for SSL, relies on the principle that signal onsets are dominated by the direct path. Based on band-pass filter banks, the localization cues are extracted only from reliable frames, such as the onset frames in \cite{bechler2005}, the frames preceding a notable maximum \cite{heckmann2006}, the frames weighted by the precedence model \cite{hummersone2013}, etc. Interaural coherence was proposed in \cite{faller2004} to select binaural cues not contaminated by reverberations. Based on the Fourier transform, the coherence test \cite{mohan2008} and the direct-path dominance test \cite{nadiri2014} were proposed to detect the frames dominated by one active source, from which localization cues can be estimated. However, in practice, there are always reflection components in the frames selected by these methods, due to an inaccurate model or an improper decision threshold. \textbf{Contributions and Method Overview:} In this paper, we propose a direct-path RTF estimator suitable for the localization of a single speech source in noisy and reverberant environments. We build on the cross-band filter proposed in \cite{avargel2007} for system identification in the STFT domain. This filter represents the impulse response in the STFT domain by a cross-band convolutive transfer function instead of the multiplicative (MTF) approximation. In practice, we consider the use of a simplified convolutive transfer function (CTF) approximation, as used in \cite{talmon2009}.
The first coefficient of the CTF at different frequencies represents the STFT of the first segment of the channel impulse response, which is composed of the direct-path impulse response, plus possibly a few early reflections. In particular, if the time delay between the direct-path wave and the first notable reflection is large, fewer reflections are included. Therefore, we refer to the first coefficient of the CTF as the direct-path acoustic transfer function, and the ratio between the coefficients from two channels is referred to as the \textit{direct-path relative transfer function} (DP-RTF). Inspired by \cite{benesty2000} and based on the relationship between the CTFs of the two channels, we use the auto- and cross-power spectral densities (PSD) estimated over multiple STFT frames to construct a set of linear equations in which the DP-RTF is the unknown variable. Therefore, the DP-RTF can be estimated via standard least squares. In the presence of noise, an inter-frame spectral subtraction technique is proposed, extending our previous work \cite{mine2015assp}. The auto- and cross-PSD estimated in a frame with low speech power are subtracted from the PSDs estimated in a frame with high speech power. After subtraction, low noise power and high speech power remain, due to the stationarity of the noise and the non-stationarity of the speech signal. The DP-RTF is estimated using the remaining signal's auto- and cross-PSD. This PSD subtraction process does not require an explicit estimation of the noise PSD, hence it does not suffer from noise PSD estimation errors. Finally, the estimated DP-RTFs are concatenated over frequencies and plugged into an SSL method, e.g., \cite{deleforge2015acoustic}. Experiments with simulated and real data were conducted under various acoustic conditions, e.g., different reverberation times, source-to-sensor distances, and signal-to-noise ratios.
The experimental results show that the proposed method performs well, even in adverse acoustic conditions, and outperforms the MTF-based method \cite{mine2015assp}, the coherence test method \cite{mohan2008} and the conventional SRP-PHAT method in most of the tested conditions. The remainder of this paper is organized as follows. Section~\ref{sec:ctf} formulates the sensor signals based on the crossband filter. Section~\ref{sec:dprtf} presents the DP-RTF estimator in a noise-free environment. The DP-RTF estimator in the presence of noise is presented in Section~\ref{sec:dprtfn}. In Section~\ref{sec:ssl}, the SSL algorithm is described. Experimental results are presented in Section~\ref{sec:experiments1} and \ref{sec:experiments2}, and Section~\ref{sec:conclusion} draws some conclusions. \section{Cross-band Filter and Convolutive Transfer Function} \label{sec:ctf} We consider first a non-stationary source signal $s(n)$, e.g., speech, emitted in a noise-free environment. The received binaural signals are \begin{align}\label{xn} \begin{array}{l} x(n)=s(n)\star a(n)\\ y(n)=s(n)\star b(n), \end{array} \end{align} where $\star $ denotes convolution, and $a(n)$ and $b(n)$ are the binaural room impulse responses (BRIR) from the source to the two microphones. \addnote[exp_BRIR]{1}{The BRIRs combine the effects of the room acoustics (reverberations) and the effects of the sensor set-up (e.g., dummy head/ears). } Applying the STFT, (\ref{xn}) is approximated in the time-frequency (TF) domain as \begin{align}\label{xpk} \begin{array}{l} x_{p,k}=s_{p,k}\; a_k \\ y_{p,k}=s_{p,k}\; b_k, \end{array} \end{align} where $x_{p,k}$, $y_{p,k}$ and $s_{p,k}$ are the STFT of the corresponding signals ($p$ is the time frame index and $k$ is the frequency bin index), and $a_k$ and $b_k$ are the ATFs corresponding to the BRIRs. Let $N$ denote the length of a time frame or, equivalently, the size of the STFT window. 
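As a small illustration of this point, the following Python sketch (synthetic signals, a hypothetical Hann analysis window and a naive STFT; all names and parameter values are illustrative, not those of the experiments) compares the STFT of a convolved signal with the MTF product $s_{p,k}\,a_k$ of (\ref{xpk}), for a filter much shorter than the window and for a long, reverberant-like one:

```python
import numpy as np

def stft(x, N, L, win):
    """Naive STFT: length-N frames with hop L, analysis window win, N-point DFT."""
    P = (len(x) - N) // L + 1
    frames = np.stack([x[p*L:p*L+N] * win for p in range(P)])
    return np.fft.fft(frames, axis=1)  # shape (P, N)

def atf(a, N):
    """ATF samples A_k = sum_t a(t) exp(-2j*pi*k*t/N), k = 0..N-1 (full filter length)."""
    k = np.arange(N)[:, None]
    t = np.arange(len(a))[None, :]
    return (np.exp(-2j*np.pi*k*t/N) * a).sum(axis=1)

rng = np.random.default_rng(0)
N, L = 64, 32
win = np.hanning(N)
s = rng.standard_normal(4096)                              # white "source" signal

a_short = np.array([1.0, 0.5, 0.25])                       # filter far shorter than the window
a_long = rng.standard_normal(300) * 0.99**np.arange(300)   # long, reverberant-like filter

def mtf_rel_error(a):
    """Relative error of the MTF approximation X[p,k] ~ S[p,k] * A[k]."""
    x = np.convolve(s, a)[:len(s)]
    X, S = stft(x, N, L, win), stft(s, N, L, win)
    return np.linalg.norm(X - S * atf(a, N)) / np.linalg.norm(X)

err_short, err_long = mtf_rel_error(a_short), mtf_rel_error(a_long)
```

With such settings, the relative error stays small for the 3-tap filter and grows substantially once the filter outlasts the analysis window.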
Eq.~(\ref{xpk}) corresponds to the MTF approximation, which is only valid when the impulse response $a(n)$ is shorter than the STFT window. In the case of non-stationary acoustic signals, such as speech, a relatively small value for $N$ is typically chosen to assume \textit{local} stationarity, i.e., within a frame. Therefore, the MTF approximation (\ref{xpk}) is questionable in a reverberant environment, since the room impulse response could be much longer than the STFT window. To address this problem cross-band filters were introduced \cite{avargel2007} to represent more accurately a linear system with long impulse response in the STFT domain. Let $L$ denote the frame step. The cross-band filter model consists in representing the STFT coefficient $x_{p,k}$ in (\ref{xpk}) as a summation over multiple convolutions across frequency bins (there is an equivalent expression for $y_{p,k}$): \begin{align}\label{xpk2} x_{p,k} = \sum_{p'=-C}^{Q_k-1} \sum_{k'=0}^{N-1} s_{p-p',k'} \; a_{p',k',k}. \end{align} From \cite{avargel2007}, if $L<N$, then $a_{p',k',k}$ is non-causal, with $C = \lceil N/L \rceil -1$ non-causal coefficients. The number of causal filter coefficients $Q_k$ is related to the reverberation time at the $k$-th frequency bin, which will be discussed in detail in Section~\ref{sec:experiments1}. The TF-domain impulse response $a_{p',k',k}$ is related to the time-domain impulse response $a(n)$ by: \begin{align}\label{hp} a_{p',k',k}={(a(n)\star \zeta_{k,k'}(n))}|_{n=p'L}, \end{align} which represents the convolution with respect to the time index $n$ evaluated at frame steps, with \begin{align}\label{phik} \zeta_{k,k'}(n) = e^{j\frac{2\pi}{N}k'n}\sum_{m=-\infty}^{+\infty} \overline{\omega}(m) \: \omega(n+m) \: e^{-j\frac{2\pi}{N}m(k-k')}, \end{align} where $\overline{\omega}(n)$ and $\omega(n)$ denote the STFT analysis and synthesis windows, respectively. 
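To help parse (\ref{hp}) and (\ref{phik}), here is a direct, purely illustrative transcription of $\zeta_{k,k'}(n)$ in Python (finite-support windows are assumed; the Hann windows below are a hypothetical choice):

```python
import numpy as np

def zeta(k, kp, n, w_a, w_s, N):
    """Cross-band filter zeta_{k,k'}(n): the analysis window w_a and the shifted
    synthesis window w_s are multiplied and modulated, with both windows taken
    as zero outside their support 0..len-1."""
    m = np.arange(len(w_a))
    idx = n + m
    inside = (idx >= 0) & (idx < len(w_s))
    ws_shifted = np.where(inside, w_s[np.clip(idx, 0, len(w_s) - 1)], 0.0)
    s = np.sum(w_a * ws_shifted * np.exp(-2j*np.pi*m*(k - kp)/N))
    return np.exp(2j*np.pi*kp*n/N) * s

N = 16
w_a = np.hanning(N)  # analysis window (hypothetical choice)
w_s = np.hanning(N)  # synthesis window (hypothetical choice)
```

For $k'=k$ and $n=0$ this reduces to $\sum_m \overline{\omega}(m)\omega(m)$, and it vanishes for $|n|\ge N$ since the two windows no longer overlap.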
A convolutive transfer function (CTF) approximation is further introduced and used in \cite{talmon2009} to simplify the analysis, i.e., only band-to-band filters are considered, $k=k'$. Hence, (\ref{xpk2}) is rewritten as \begin{align} \label{eq:xpk3} x_{p,k} = \sum_{p'=0}^{Q_k-1} s_{p-p',k}a_{p',k}= s_{p,k}\star a_{p,k}, \end{align} where we assumed $L\approx N$ such that non-causal coefficients are disregarded. Note that $a_{p',k',k}$ is replaced with $a_{p',k}$ to simplify the notations. The cross-band filter and CTF formalism will now be used to extract the impulse response of the direct-path propagation. \section{Direct-Path Relative Transfer Function} \label{sec:dprtf} From (\ref{hp}) and (\ref{phik}), with $k'=k$ and $p'=0$, the first coefficient of $a_{p',k}$ in the CTF approximation \eqref{eq:xpk3} can be derived as \begin{align} a_{0,k} = ({a(n)\star \zeta_{k,k}(n)})|_{n=0} & =\sum_{t=0}^{T-1} a(t)\zeta_{k,k}(-t) \nonumber \\ &=\sum_{t=0}^{N-1} a(t)\nu(t)e^{-j\frac{2\pi}{N}kt}, \end{align} where $T$ is the length of the BRIR and \begin{equation} \nu(n)= \begin{cases} \sum_{m=0}^{N}\overline{\omega}(m)\omega(m-n) & \mbox{if } 1-N\le n\le N-1, \\ 0, & \mbox{otherwise.} \end{cases} \nonumber \end{equation} Therefore, $a_{0,k}$ (as well as $b_{0,k}$) can be interpreted as the $k$-th Fourier coefficient of the impulse response segment $a(n)|_{n=0}^{N-1}$ windowed by $\nu(n)|_{n=0}^{N-1}$. Without loss of generality, we assume that the room impulse responses $a(n)$ and $b(n)$ begin with the impulse responses of the direct-path propagation. If the frame length $N$ is properly chosen, $a(n)|_{n=0}^{N-1}$ and $b(n)|_{n=0}^{N-1}$ are composed of the impulse responses of the direct-path and a few reflections. Particularly, if the initial time delay gap (ITDG), i.e. the time delay between the direct-path wave and the first notable reflection, is large compared to $N$, $a(n)|_{n=0}^{N-1}$ and $b(n)|_{n=0}^{N-1}$ mainly contain the direct-path impulse response. 
Therefore we refer to $a_{0,k}$ and $b_{0,k}$ as the direct-path ATFs. By definition, the DP-RTF is given by (we remind that the direct path is relevant for sound source localization): \begin{align}\label{eq:dp-rtf} d_{k} = \frac{b_{0,k}}{a_{0,k}}. \end{align} \addnote[exp_DP]{1}{In summary, the CTF approximation offers a nice framework to encode the direct-path part of a room impulse response into the first CTF coefficients. Applying this to each channel of a BRIR and taking the ratio between the first CTF coefficients of each channel provides the DP-RTF. Of course, in practice, the DP-RTF must be estimated from the sensor signals. } \subsection{Direct-Path Estimation} \label{sec:dprtf:estimation} Since both channels are assumed to follow the CTF model, we can write: \begin{align}\label{xyha} x_{p,k}\star b_{p,k}=s_{p,k}\star a_{p,k}\star b_{p,k}=y_{p,k}\star a_{p,k}. \end{align} This relation was proposed in \cite{benesty2000,benesty1995} for the time-domain TDOA estimation and is here extended to the CTF domain. In vector form \eqref{xyha} can be written as \begin{align}\label{mxa} \mathbf{x}_{p,k}\tp \mathbf{b}_k = \mathbf{y}_{p,k}\tp \mathbf{a}_k, \end{align} where $\tp$ denotes vector or matrix transpose, and \begin{align} \mathbf{x}_{p,k} &= [x_{p,k},x_{p-1,k},\dots,x_{p-Q_k+1,k}]\tp, \nonumber \\ \mathbf{y}_{p,k} &= [y_{p,k},y_{p-1,k},\dots,y_{p-Q_k+1,k}]\tp, \nonumber \\ \mathbf{b}_k &= [b_{0,k},b_{1,k},\dots,b_{Q_k-1,k}]\tp, \nonumber \\ \mathbf{a}_k &= [a_{0,k},a_{1,k},\dots,a_{Q_k-1,k}]\tp. \nonumber \end{align} Dividing both sides of (\ref{mxa}) by $a_{0,k}$ and reorganizing the terms, we can write: \begin{align}\label{zpk} y_{p,k} = \mathbf{z}_{p,k}\tp \mathbf{g}_k, \end{align} where \begin{align} \mathbf{z}_{p,k} &=[x_{p,k},\dots,x_{p-Q_k+1,k},y_{p-1,k},\dots,y_{p-Q_k+1,k}]\tp, \nonumber \\ \mathbf{g}_k &=\left[\frac{b_{0,k}}{a_{0,k}},\dots,\frac{b_{Q_k-1,k}}{a_{0,k}},-\frac{a_{1,k}}{a_{0,k}},\dots,-\frac{a_{Q_k-1,k}}{a_{0,k}}\right]\tp. 
\nonumber \end{align} We see that the DP-RTF appears as the first entry of $\mathbf{g}_k$. Hence, in the following, we base the estimation of the DP-RTF on the construction of $y_{p,k}$ and $\mathbf{z}_{p,k}$ statistics. More specifically, multiplying both sides of (\ref{zpk}) by $y_{p,k}^*$ (the complex conjugate of $y_{p,k}$) and taking the expectation, $E\{\cdot\}$, we obtain: \begin{align}\label{phi} \phi_{yy}(p,k) = \phivect_{zy}\tp(p,k) \: \mathbf{g}_k, \end{align} where $\phi_{yy}(p,k)=E\{y_{p,k}y_{p,k}^{*}\}$ is the PSD of $y(n)$ at TF bin $(p,k)$, and \begin{align} \phivect_{zy}(p,k) = & [E\{x_{p,k}y_{p,k}^*\},\dots,E\{x_{p-Q_k+1,k}y_{p,k}^*\}, \nonumber \\ &E\{y_{p-1,k}y_{p,k}^*\},\dots,E\{y_{p-Q_k+1,k}y_{p,k}^*\}]\tp \nonumber \end{align} is a vector composed of cross-PSD terms between the elements of $\mathbf{z}_{p,k}$ and $y_{p,k}$.\footnote{More precisely, $\phivect_{zy}(p,k)$ is composed of $y$ PSD `cross-terms', i.e., $y$ taken at frame $p$ and previous frames, and of $x,y$ cross-PSD terms for $y$ taken at frame $p$ and $x$ taken at previous frames.} In practice, these auto- and cross-PSD terms can be estimated by averaging the corresponding auto- and cross-STFT spectra over $D$ frames: \begin{align}\label{hphi} \hat{\phi}_{yy}(p,k) = \frac{1}{D}\sum_{d=0}^{D-1}y_{p-d,k} \: y_{p-d,k}^*. \end{align} The elements in $\phivect_{zy}(p,k)$ can be estimated by using the same principle. Consequently, in practice (\ref{phi}) is approximated as \begin{align}\label{hatphi} \hat{\phi}_{yy}(p,k) = \hat{\phivect}_{zy}\tp(p,k) \: \mathbf{g}_k. \end{align} Let $P$ denote the total number of the STFT frames. $Q_k$ is the minimum index of $p$ to guarantee that the elements in $\mathbf{z}_{p,k}$ are available from the STFT coefficients of the binaural signals. For PSD estimation, the previous $D-1$ frames of the current frame are utilized as shown in (\ref{hphi}). 
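Before continuing with the frame bookkeeping, the chain of identities (\ref{eq:xpk3}), (\ref{xyha}) and (\ref{zpk}) can be checked numerically. The sketch below uses synthetic CTF sequences in a single frequency band (all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
P, Q = 40, 5
s = rng.standard_normal(P) + 1j*rng.standard_normal(P)  # source STFT sequence, one band
a = rng.standard_normal(Q) + 1j*rng.standard_normal(Q)  # CTF of channel 1 in this band
b = rng.standard_normal(Q) + 1j*rng.standard_normal(Q)  # CTF of channel 2 in this band

# CTF model: band-to-band convolution along the frame axis
x = np.convolve(s, a)[:P]
y = np.convolve(s, b)[:P]

# Cross-relation: x * b == y * a in each frequency band
lhs = np.convolve(x, b)[:P]
rhs = np.convolve(y, a)[:P]

# Reorganized form y_p = z_p^T g, whose first entry g[0] is the DP-RTF b_0/a_0
g = np.concatenate([b / a[0], -a[1:] / a[0]])
p = P - 1
z = np.concatenate([x[p-Q+1:p+1][::-1],   # [x_p, ..., x_{p-Q+1}]
                    y[p-Q+1:p][::-1]])    # [y_{p-1}, ..., y_{p-Q+1}]
```

The first entry of $\mathbf{g}_k$ is exactly the DP-RTF $b_{0,k}/a_{0,k}$, which is what the least-squares machinery built from the PSD statistics estimates.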
Therefore, $p_f=Q_k+D-1$ is the minimum index of $p$ to guarantee that all the frames for computing $\hat{\phivect}_{zy}(p,k)$ are available from the STFT coefficients of the binaural signals. By concatenating the frames from $p_f$ to $P$, (\ref{hatphi}) can be written in matrix-vector form: \begin{align}\label{Phi} \hat{\phivect}_{yy}(k) = \hat{\Phimat}_{zy}(k) \: \mathbf{g}_k, \end{align} with \begin{align} &\hat{\phivect}_{yy}(k)=[\hat{\phi}_{yy}(p_f,k),\dots,\hat{\phi}_{yy}(p,k),\dots,\hat{\phi}_{yy}(P,k)]\tp, \nonumber \\ &\hat{\Phimat}_{zy}(k)=[\hat{\phivect}_{zy}(p_f,k),\dots,\hat{\phivect}_{zy}(p,k),\dots,\hat{\phivect}_{zy}(P,k)]\tp. \nonumber \end{align} Note that $\hat{\phivect}_{yy}(k)$ is a $(P-p_f+1)\times1$ vector and $\hat{\Phimat}_{zy}(k)$ is a $(P-p_f+1)\times(2Q_k-1)$ matrix. In principle, an estimate $\hat{{\mathbf{g}}}_k$ of ${\mathbf{g}}_k$ can be found by solving this linear equation. However, in practice, the sensor signals contain noise and thus the estimated PSDs contain noise power. Therefore, we have to remove this noise power before estimating ${\mathbf{g}}_k$. \section{DP-RTF Estimation in the Presence of Noise} \label{sec:dprtfn} Noise always exists in real-world configurations. In the presence of noise, some frames in (\ref{Phi}) are dominated by noise. Moreover, the PSD estimates of speech signals are deteriorated by noise. In this section, an inter-frame subtraction technique that improves DP-RTF estimation in noise is described, based on a speech frame selection process. \subsection{Noisy Signals and PSD Estimates}\label{sec:dprtfn:noise} In the presence of additive noise (\ref{xn}) becomes \begin{align}\label{xnu} \begin{array}{l} \tilde{x}(n)=x(n)+u(n)=a(n)\star s(n)+u(n), \\ \tilde{y}(n)=y(n)+v(n)=b(n)\star s(n)+v(n), \end{array} \end{align} where $u(n)$ and $v(n)$, the noise signals, are assumed to be \addnote[exp_noise_stationarity]{1}{individually wide-sense stationary (WSS) and uncorrelated with $s(n)$.
Moreover, $u(n)$ and $v(n)$ are assumed to be either uncorrelated, or correlated but jointly WSS. } Applying the STFT to the binaural signals in (\ref{xnu}) leads to \begin{align*} \tilde{x}_{p,k} &= x_{p,k}+u_{p,k} \\ \tilde{y}_{p,k} &= y_{p,k}+v_{p,k}, \end{align*} in which each quantity is the STFT coefficient of its corresponding time-domain signal. Similarly to ${\mathbf{z}}_{p,k}$, we define \begin{align} \tilde{\mathbf{z}}_{p,k} &= [\tilde{x}_{p,k},\dots,\tilde{x}_{p-Q_k+1,k},\tilde{y}_{p-1,k},\dots,\tilde{y}_{p-Q_k+1,k}]\tp \nonumber \\ & =\mathbf{z}_{p,k}+\mathbf{w}_{p,k} \nonumber \end{align} where \begin{align} \mathbf{w}_{p,k}=[u_{p,k},\dots,u_{p-Q_k+1,k},v_{p-1,k},\dots,v_{p-Q_k+1,k}]\tp. \nonumber \end{align} The PSD of $\tilde{y}_{p,k}$ is $\phi_{\tilde{y}\tilde{y}}(p,k)$. We define the PSD vector $\phivect_{\tilde{z}\tilde{y}}(p,k)$ composed of the auto- and cross-PSDs between the elements of $\tilde{\mathbf{z}}_{p,k}$ and $\tilde{y}_{p,k}$. Following (\ref{hphi}), these PSDs can be estimated as $\hat{\phi}_{\tilde{y}\tilde{y}}(p,k)$ and $\hat{\phivect}_{\tilde{z}\tilde{y}}(p,k)$ by averaging the auto- and cross-STFT spectra of input signals over $D$ frames. Since the speech and noise signals are uncorrelated, we can write \begin{align}\label{hatphin} \begin{array}{l} \hat{\phi}_{\tilde{y}\tilde{y}}(p,k) = \hat{\phi}_{yy}(p,k)+\hat{\phi}_{vv}(p,k), \\ \hat{\phivect}_{\tilde{z}\tilde{y}}(p,k) = \hat{\phivect}_{zy}(p,k)+\hat{\phivect}_{wv}(p,k), \end{array} \end{align} where $\hat{\phi}_{vv}(p,k)$ is an estimation of the PSD of $v_{p,k}$, and $\hat{\phivect}_{wv}(p,k)$ is a vector composed of the estimated auto- or cross- PSDs between the entries of ${\mathbf{w}}_{p,k}$ and ${v}_{p,k}$. 
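A quick numerical check of (\ref{hatphin}) with synthetic, uncorrelated signals (all values are illustrative): the noisy PSD estimate splits, up to estimation error, into speech power plus noise power, and for stationary noise the PSD estimated in a speech-free frame is a good proxy for the noise term, which is what the inter-frame subtraction of the next subsection exploits:

```python
import numpy as np

rng = np.random.default_rng(5)
D = 256  # number of frames averaged per PSD estimate

y = 3.0*(rng.standard_normal(D) + 1j*rng.standard_normal(D))  # speech STFT coefficients
v1 = rng.standard_normal(D) + 1j*rng.standard_normal(D)       # noise in the speech frames
v2 = rng.standard_normal(D) + 1j*rng.standard_normal(D)       # noise in speech-free frames

phi_noisy = np.mean(np.abs(y + v1)**2)                      # noisy PSD estimate
phi_split = np.mean(np.abs(y)**2) + np.mean(np.abs(v1)**2)  # speech PSD + noise PSD
phi_clean = phi_noisy - np.mean(np.abs(v2)**2)              # noise removed via a speech-free frame
```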
\subsection{Inter-Frame Spectral Subtraction}\label{sec:dprtfn:ss} From (\ref{hatphi}) and (\ref{hatphin}), we have for any frame $p$: \begin{equation} \hat{\phi}_{\tilde{y}\tilde{y}}(p,k) - \hat{\phi}_{vv}(p,k) = (\hat{\phivect}_{\tilde{z}\tilde{y}}(p,k) - \hat{\phivect}_{wv}(p,k))\tp \mathbf{g}_k, \end{equation} or alternately: \begin{equation} \label{linear_noisy} \hat{\phi}_{\tilde{y}\tilde{y}}(p,k) = \hat{\phivect}_{\tilde{z}\tilde{y}}(p,k)\tp \mathbf{g}_k + \hat{\phi}_{vv}(p,k) - \hat{\phivect}_{wv}(p,k)\tp \mathbf{g}_k.\end{equation} By subtracting the estimated PSD $\hat{\phi}_{\tilde{y}\tilde{y}}(p,k)$ of one frame, e.g. $p_2$, from the estimated PSD of another frame, e.g. $p_1$, we obtain \begin{align} \label{sub1} \hat{\phi}_{\tilde{y}\tilde{y}}^{s}(p_1,k) &\triangleq \hat{\phi}_{\tilde{y}\tilde{y}}(p_1,k)-\hat{\phi}_{\tilde{y}\tilde{y}}(p_2,k) \nonumber \\ &= \hat{\phi}_{yy}^s(p_1,k)+e_{vv}(p_1,k) \end{align} with \begin{align*} \hat{\phi}_{yy}^s(p_1,k) &= \hat{\phi}_{yy}(p_1,k)-\hat{\phi}_{yy}(p_2,k), \\ e_{vv}(p_1,k) &= \hat{\phi}_{vv}(p_1,k)-\hat{\phi}_{vv}(p_2,k). \end{align*} Applying the same principle to $\hat{\phivect}_{\tilde{z}\tilde{y}}(p,k)$, we have: \begin{align} \label{sub2} \hat{\phivect}_{\tilde{z}\tilde{y}}^s(p_1,k)& \triangleq \hat{\phivect}_{\tilde{z}\tilde{y}}(p_1,k)-\hat{\phivect}_{\tilde{z}\tilde{y}}(p_2,k) \nonumber \\ &= \hat{\phivect}_{zy}^s(p_1,k)+\mathbf{e}_{wv}(p_1,k), \end{align} with \begin{align*} \hat{\phivect}_{zy}^s(p_1,k)&= \hat{\phivect}_{zy}(p_1,k)-\hat{\phivect}_{zy}(p_2,k), \\ \mathbf{e}_{wv}(p_1,k) &= \hat{\phivect}_{wv}(p_1,k)-\hat{\phivect}_{wv}(p_2,k). \end{align*} Applying (\ref{linear_noisy}) to frames $p_1$ and $p_2$ and subtracting the resulting equations, we obtain: \begin{align}\label{hatphis} \hat{\phi}_{\tilde{y}\tilde{y}}^{s}(p_1,k) &= \hat{\phivect}_{\tilde{z}\tilde{y}}^{s}(p_1,k)\tp \mathbf{g}_k+e(p_1,k), \end{align} where \begin{align} e(p_1,k)=e_{vv}(p_1,k)-\mathbf{e}_{wv}(p_1,k)\tp \mathbf{g}_k. 
\end{align} Because $v(n)$ is stationary, $e_{vv}(p_1,k)$ is small. Conversely, the fluctuations of speech signals are much larger than the fluctuations of the noise signal because the speech signals are both non-stationary and sparse, i.e., speech power spectrum can vary significantly over frames. Thus, by properly choosing the frame indexes $p_1$ and $p_2$, for instance in such a way that the speech power $\hat{\phi}_{yy}(p_1,k)$ is high and the speech power $\hat{\phi}_{yy}(p_2,k)$ is low, we have $\hat{\phi}_{yy}^s(p_1,k)\gg e_{vv}(p_1,k)$, or equivalently $\hat{\phi}_{\tilde{y}\tilde{y}}^s(p_1,k)\gg e_{vv}(p_1,k)$. \addnote[exp_jointly]{1}{The same reasoning applies to $\mathbf{e}_{wv}(p_1,k)$, except that the $u$-$v$ cross-terms of $\mathbf{e}_{wv}(p_1,k)$ are small compared to $\hat{\phi}_{\tilde{y}\tilde{y}}^{s}(p_1,k)$ either if $u$ and $v$ are uncorrelated, or if $u$ and $v$ are jointly WSS, which are our (quite reasonable) working assumptions. } Choosing the frame indexes requires classifying the frames into two sets, $\mathcal{P}_1$ and $\mathcal{P}_2$, which have high speech power and very low speech power, respectively. This is done in Subsection~\ref{sec:dprtfn:fc} using the minimum and maximum statistics of the noise spectrum. Before that, we finalize the estimation of the DP-RTF in the noisy case, based on (\ref{hatphis}). \subsection{DP-RTF Estimation}\label{sec:dprtfn:extraction} Let $P_1=|\mathcal{P}_1|$ denote the cardinality of $\mathcal{P}_1$. The PSD subtractions (\ref{sub1}) and (\ref{sub2}) are applied to all the frames $p_1\in\mathcal{P}_1$ using their corresponding frames $p_2 \in\mathcal{P}_2$, denoted as $p_2(p_1)$. In practice, $p_2(p_1)$ is the frame in $\mathcal{P}_2$ that is nearest to $p_1$, since the closer the two frames, the smaller the difference of their noise PSDs and the difference of their transfer functions.
The resulting PSDs and cross-PSD vectors are gathered into a $P_1\times1$ vector and a $P_1\times(2Q_k-1)$ matrix, respectively, as: \begin{align} \hat{\phivect}_{\tilde{y}\tilde{y}}^s(k)&=[\hat{\phi}_{\tilde{y}\tilde{y}}^{s}(1,k),\dots,\hat{\phi}_{\tilde{y}\tilde{y}}^{s}(p_1,k),\dots,\hat{\phi}_{\tilde{y}\tilde{y}}^{s}(P_1,k)]\tp, \nonumber \\ \hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)&=[\hat{\phivect}_{\tilde{z}\tilde{y}}^s(1,k),\dots,\hat{\phivect}_{\tilde{z}\tilde{y}}^s(p_1,k),\dots,\hat{\phivect}_{\tilde{z}\tilde{y}}^s(P_1,k)]\tp. \nonumber \end{align} Let us denote $\mathbf{e}(k) = [e(1,k),\dots,e(p_1,k),\dots,e(P_1,k)]\tp$ the $P_1\times1$ vector that concatenates the residual noise for the $P_1$ frames. Then, from (\ref{hatphis}) we obtain the following linear equation, which is the ``noisy version'' of (\ref{Phi}): \begin{align}\label{Phin} \hat{\phivect}_{\tilde{y}\tilde{y}}^s(k) = \hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)\mathbf{g}_k+\mathbf{e}(k). \end{align} \addnote[exp_noise]{1}{Assuming that the sequence of residual noise entries in $\mathbf{e}(k)$ is i.i.d.\footnote{This assumption is made to simplify the analysis. In practice, $e(p_1,k)$ may be a correlated sequence because of the possible correlation of $\hat{\phi}_{vv}(p,k)$ (or $\hat{\phivect}_{wv}(p,k)$) across frames. Taking this correlation into account would lead to a weighted least square solution to (\ref{Phin}), involving a weight matrix in (\ref{eq:wls}). This weight matrix is not easy to estimate, and in practice, (\ref{eq:wls}) delivers a good estimate of $\hat{g}_{0,k}$, as assessed in our experiments.} and also assuming $P_1 \geq (2Q_k-1)$, the least square solution to (\ref{Phin}) is given by: \begin{align}\label{eq:wls} \hat{\mathbf{g}}_k = (\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)^H\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k))^{-1}\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)^H\hat{\phivect}_{\tilde{y}\tilde{y}}^s(k), \end{align}} where $^H$ denotes matrix conjugate transpose. 
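The least-squares step (\ref{eq:wls}) can be sketched in isolation. In the toy example below, the stacked cross-PSD rows are replaced by a randomly drawn, noise-free system (every quantity is an illustrative stand-in), so $\mathbf{g}_k$ is recovered exactly and the DP-RTF is read off its first entry:

```python
import numpy as np

rng = np.random.default_rng(4)
Q = 4
dim = 2*Q - 1                                        # length of g_k
g_true = rng.standard_normal(dim) + 1j*rng.standard_normal(dim)

# Stacked cross-PSD rows, one per selected frame p1 (here simply drawn at random)
Phi = rng.standard_normal((20, dim)) + 1j*rng.standard_normal((20, dim))
phi = Phi @ g_true                                   # noise-free right-hand side

# g_hat = (Phi^H Phi)^{-1} Phi^H phi
g_hat = np.linalg.solve(Phi.conj().T @ Phi, Phi.conj().T @ phi)
dp_rtf_estimate = g_hat[0]                           # first entry: the DP-RTF
```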
Finally, the estimation of the DP-RTF $d_k$ defined in \eqref{eq:dp-rtf} is provided by the first element of $\hat{\mathbf{g}}_k$, denoted as $\hat{g}_{0,k}$. Note that if two frames in $\mathcal{P}_1$ are close to each other, their corresponding elements in vector $\hat{\phivect}_{\tilde{y}\tilde{y}}^s(k)$ (or corresponding rows in matrix $\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)$) will be correlated. This correlation yields some redundancy of the linear equations. However, in practice, we keep this redundancy to make full use of the data and give a more robust solution to (\ref{Phin}). Still assuming that $e(p_1,k)$ is i.i.d. and denoting its variance by $\sigma_k^2$, the covariance matrix of $\hat{\mathbf{g}}_k$ is given by \cite{manolakis2005}: \begin{align} \mathbf{cov}\{\hat{\mathbf{g}}_k\}=\sigma_k^2(\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)^H\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k))^{-1}. \end{align} The statistical analysis of the auto- and cross-PSD estimates shows that $\sigma_k^2$ is inversely proportional to the number of smoothing frames $D$ \cite{manolakis2005}. Hence, using a large $D$ leads to a small error variance $\sigma_k^2$. However, increasing $D$ decreases the fluctuation of the estimated speech PSD among frames and thus makes the elements in the matrix $\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)^H\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)$ smaller, which results in a larger variance of $\hat{\mathbf{g}}_k$. Therefore, an appropriate value of $D$ should be chosen to achieve a good tradeoff between smoothing the noise spectrum and preserving the fluctuation of the speech spectrum. \addnote[exp_g]{1}{Finally, to improve the robustness of the DP-RTF estimation, we also calculate (\ref{eq:wls}) after exchanging the roles of the two channels in the whole process. This delivers an estimate $\hat{g}_{0,k}'$ of the inverse of \eqref{eq:dp-rtf}, i.e. an estimate of the inverse DP-RTF $\frac{a_{0,k}}{b_{0,k}}$.
Both $\hat{g}_{0,k}$ and ${\hat{g}_{0,k}'}{\inverse}$ are estimates of $\frac{b_{0,k}}{a_{0,k}}$. The final DP-RTF estimate is given by averaging these two estimates as:} \begin{equation} \label{eq:dp-estimation} \hat{c}_k = \frac{1}{2}(\hat{g}_{0,k}+ {\hat{g}_{0,k}'}{\inverse}). \end{equation} \subsection{Frame Classification}\label{sec:dprtfn:fc} We adopt the minimum-maximum statistics for frame classification, which was first introduced in \cite{mine2015assp}, and is applied to a different feature in this paper. Frame classification is based on the estimate of the $\tilde{y}$ PSD, i.e., $\hat{\phi}_{\tilde{y}\tilde{y}}(p,k)$. The frame $p_1$ is selected such that $\hat{\phi}_{\tilde{y}\tilde{y}}^s(p_1,k)$ in (\ref{hatphis}) is large compared to $e(p_1,k)$, and thus (\ref{hatphis}) closely matches the noise-free case. As shown in (\ref{hatphin}), the PSD estimate $\hat{\phi}_{\tilde{y}\tilde{y}}(p,k)$ is composed of both speech and noise powers. A minimum statistics formulation was proposed in \cite{martin2001}, where the minimum value of the smoothed periodograms with respect to the index $p$, multiplied by a bias correction factor, is used as the estimate of the noise PSD. Here we introduce an equivalent sequence length for analyzing the minimum and maximum statistics of noise spectra, and propose to use two classification thresholds (for two classes $\mathcal{P}_1$ and $\mathcal{P}_2$) defined from the ratios between the maximum and minimum statistics. In short, we classify the frames using a minimum-controlled maximum border. Formally, the noise power in $\hat{\phi}_{\tilde{y}\tilde{y}}(p,k)$ is \begin{align}\label{hphiv} \xi_{p,k} \triangleq \hat{\phi}_{vv}(p,k) = \frac{1}{D}\sum_{d=0}^{D-1}|v_{p-d,k}|^2.
\end{align} For a stationary Gaussian signal, the probability density function (PDF) of the periodogram $|v_{p,k}|^2$ obeys the exponential distribution \cite{martin2001} \begin{align} f(|v_{p,k}|^2;\lambda)=\frac{1}{\lambda}e^{-|v_{p,k}|^2/\lambda} \end{align} where $\lambda=E\{|v_{p,k}|^2\}$ is the noise PSD. Assume that the $|v_{p,k}|^2$ values at different frames are i.i.d. random variables. The averaged periodogram $\xi_{p,k}$ then obeys the Erlang distribution \cite{forbes2011statistical} with scale parameter $\mu=\lambda/D$ and shape parameter $D$: \begin{equation}\label{erl} f(\xi_{p,k};D,\mu)=\frac{\xi_{p,k}^{D-1}e^{-\frac{\xi_{p,k}}{\mu}}}{\mu^D(D-1)!}. \end{equation} We are interested in characterizing and estimating the ratio between the maximum and minimum statistics of the sequence $\xi_{p,k}$. Since the maximum and minimum statistics are both linearly proportional to $\mu$ \cite{martin2001}, we assume, without loss of generality, that $\mu=1$. Consequently, the mean value of $\xi_{p,k}$ is equal to $D$. As mentioned in Section \ref{sec:dprtf:estimation}, the frame index of the estimated PSDs $\hat{\phi}_{yy}(p,k)$ and $\xi_{p,k}$ is confined to the range $p_f$ to $P$. Let $R$ denote the increment of the frame index $p$ of the estimated PSDs. If $R$ is equal to or larger than $D$, two adjacent estimated PSDs $\xi_{p,k}$ and $\xi_{p+R,k}$ have no frame overlap. The sequence $\xi_{p,k}, \ p=p_f:R:P$ is then an independent random sequence. The length of this sequence is $\tilde{P}=\lceil\frac{P-p_f+1}{R}\rceil$. The PDFs of the minimum and maximum of these $\tilde{P}$ independent variables are \cite{martin1994}: \begin{equation}\label{fmin} \begin{array}{l} f_{min}(\xi) = \tilde{P}\cdot(1-F(\xi))^{\tilde{P}-1}\cdot f(\xi), \\ f_{max}(\xi) = \tilde{P}\cdot F(\xi)^{\tilde{P}-1} \cdot f(\xi), \end{array} \end{equation} where $F(\cdot)$ denotes the cumulative distribution function (CDF) associated with the PDF (\ref{erl}).
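The Erlang density (\ref{erl}) has a closed-form CDF, so the order-statistic densities (\ref{fmin}) can be evaluated directly. The following standalone sketch (function names are ours; we take $\mu=1$) checks numerically that both densities integrate to one:

```python
import math

def erlang_pdf(x, D, mu=1.0):
    # Eq. (erl): Erlang density with shape D and scale mu
    return x ** (D - 1) * math.exp(-x / mu) / (mu ** D * math.factorial(D - 1))

def erlang_cdf(x, D, mu=1.0):
    # Closed form: F(x) = 1 - exp(-x/mu) * sum_{n=0}^{D-1} (x/mu)^n / n!
    return 1.0 - math.exp(-x / mu) * sum((x / mu) ** n / math.factorial(n)
                                         for n in range(D))

def f_min(x, D, P_seq, mu=1.0):
    # PDF of the minimum of P_seq i.i.d. Erlang variables, Eq. (fmin)
    return P_seq * (1.0 - erlang_cdf(x, D, mu)) ** (P_seq - 1) * erlang_pdf(x, D, mu)

def f_max(x, D, P_seq, mu=1.0):
    # PDF of the maximum of P_seq i.i.d. Erlang variables, Eq. (fmin)
    return P_seq * erlang_cdf(x, D, mu) ** (P_seq - 1) * erlang_pdf(x, D, mu)

# Both order-statistic densities should integrate to 1 (Riemann sum on [0, 3D]).
D, P_seq, step = 12, 20, 0.05
xs = [step * i for i in range(1, int(3 * D / step))]
print(sum(f_min(x, D, P_seq) for x in xs) * step)  # close to 1
print(sum(f_max(x, D, P_seq) for x in xs) * step)  # close to 1
```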
Conversely, if $R<D$, $\xi_{p,k}$ is a correlated sequence, and the correlation coefficient is linearly proportional to the frame overlap. \addnote[exp_e37]{1}{In this case, (\ref{fmin}) is no longer valid. Based on a large number of simulations using white Gaussian noise (WGN),\footnote{The simulations are done with the following procedure: apply the STFT to a number of WGN signals of identical (long) duration. For each time-frequency bin, estimate the PSD by averaging the periodograms of the past $D$ frames. Without loss of generality, the scale parameter $\mu$ of the PSD estimation can be set to 1 by adjusting the noise PSD $\lambda$ to $D$. A sequence of correlated PSD estimates is generated by picking PSD estimates from the complete sequence, with frame increment $R$ (with $R < D$). The length of the correlated sequence is $\tilde{P}$. The minimum/maximum values of each correlated sequence are collected at each frequency for all the WGN signals. The PDF and CDF of the minimum/maximum statistics are simulated by the histograms of these minimum/maximum values. Fig. \ref{figmcmt} shows some examples of this empirical CDF.} it was found that the following approximate equivalent sequence length \begin{equation}\label{f12} \tilde{P}' = \frac{\tilde{P}R}{D}\cdot\left(1+\textrm{log}\left(\frac{D}{R}\right)\right) \end{equation} can replace $\tilde{P}$ in order to make (\ref{fmin}) valid for the correlated sequence. We observe that in (\ref{f12}) the ratio between the number $D$ of frames used for spectrum averaging and the frame increment $R$ of the PSD estimates is replaced with its logarithm.
Note that this is an empirical result, whose theoretical foundation remains to be investigated.} Then, the expectation of the minimum can be approximately computed as \begin{equation}\label{f13} \bar{\xi}_{min} \approx \frac{\sum\nolimits_{\xi_i}\xi_i\cdot f_{min}(\xi_i)}{\sum\nolimits_{\xi_i}f_{min}(\xi_i)}, \end{equation} where $\xi_i\in\{0,0.1D,0.2D,\dots,3D\}$ is a grid used to approximate the integral operation, which covers well the support of the Erlang distribution with shape $D$ and scale 1. Similarly, the CDF of the maximum can be estimated as \begin{align} F_{max}(\xi) \approx \frac{\sum\nolimits_{\xi_i\le\xi} f_{max}(\xi_i)}{\sum\nolimits_{\xi_i} f_{max}(\xi_i)}. \end{align} Finally, we define two classification thresholds that are two specific values of the ratio between the maximum and minimum statistics, namely \begin{align} r_1=\frac{\xi_{F_{max}(\xi)=0.95}}{\bar{\xi}_{min}}, \mbox{ and } r_2=\frac{\xi_{F_{max}(\xi)=0.5}}{\bar{\xi}_{min}}, \end{align} where $\xi_{F_{max}(\xi)=0.95}$ and $\xi_{F_{max}(\xi)=0.5}$ are the values of $\xi$ for which the CDF of the maximum is equal to 0.95 and 0.5, respectively. Classes $\mathcal{P}_1$ and $\mathcal{P}_2$ are then obtained with \begin{align}\label{f15} \mathcal{P}_1 &= \{ p \ | \ \xi_{p,k}> r_1 \cdot \min_p\{\xi_{p,k}\} \}, \\ \mathcal{P}_2 &= \{ p \ | \ \xi_{p,k}\le r_2 \cdot \min_p\{\xi_{p,k}\} \}. \end{align} These two thresholds are set to ensure that the frames in $\mathcal{P}_1$ contain large speech power and the frames in $\mathcal{P}_2$ contain negligible speech power. The speech power of the other frames is statistically uncertain, making them unsuitable for either $\mathcal{P}_1$ or $\mathcal{P}_2$. Using two different thresholds clearly separates the speech region from the noise-only region. In other words, there is a low probability to have a frame classified into $\mathcal{P}_1$ in the proximity of $\mathcal{P}_2$ frames, and vice versa.
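The classification procedure described above can be summarized in the following sketch, combining the equivalent sequence length (\ref{f12}), the discretized statistics (\ref{f13}), and the thresholds of (\ref{f15}); this is our own illustrative NumPy transcription, with simulated noise-only PSDs in place of real measurements:

```python
import math
import numpy as np

def erlang_pdf(x, D):
    return x ** (D - 1) * math.exp(-x) / math.factorial(D - 1)

def erlang_cdf(x, D):
    return 1.0 - math.exp(-x) * sum(x ** n / math.factorial(n) for n in range(D))

def equiv_len(P_seq, R, D):
    # Eq. (f12): empirical equivalent length of a correlated sequence;
    # the natural logarithm reproduces the sequence-length pairs quoted in the text.
    return P_seq * R / D * (1.0 + math.log(D / R))

def classify_frames(xi, D, P_seq):
    """Split frame indices into P1 (speech) and P2 (noise-only), Eq. (f15)."""
    grid = np.arange(0.0, 3 * D + 1e-9, 0.1 * D)      # grid {0, 0.1D, ..., 3D}
    fmin = np.array([P_seq * (1 - erlang_cdf(x, D)) ** (P_seq - 1)
                     * erlang_pdf(x, D) for x in grid])
    fmax = np.array([P_seq * erlang_cdf(x, D) ** (P_seq - 1)
                     * erlang_pdf(x, D) for x in grid])
    xi_min_bar = (grid * fmin).sum() / fmin.sum()     # Eq. (f13)
    F_max = np.cumsum(fmax) / fmax.sum()              # normalized CDF of the maximum
    r1 = grid[np.searchsorted(F_max, 0.95)] / xi_min_bar
    r2 = grid[np.searchsorted(F_max, 0.5)] / xi_min_bar
    m = xi.min()
    P1 = np.flatnonzero(xi > r1 * m)                  # large speech power
    P2 = np.flatnonzero(xi <= r2 * m)                 # negligible speech power
    return P1, P2, r1, r2

# Toy run: D = 12 smoothing frames, correlated sequence made equivalent via (f12).
D, R, P_raw = 12, 1, 69
P_eq = round(equiv_len(P_raw, R, D))                  # behaves like ~20 i.i.d. samples
xi = np.random.default_rng(0).gamma(D, 1.0, size=200) # simulated noise-only PSDs
P1, P2, r1, r2 = classify_frames(xi, D, P_eq)
print(P_eq, r1 > r2 > 1.0)
```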
Therefore, in general, the PSD of a frame in $\mathcal{P}_1$ is estimated using $D$ frames that are not included in the noise-only region, and vice versa. Note that if there are no frames with speech content, e.g., during long speech pauses, class $\mathcal{P}_1$ will be empty with a probability of 0.95 due to threshold $r_1$. \begin{figure}[t] \centering \vspace{-0.0cm} \includegraphics[width=0.45\textwidth]{cdf.jpg} \caption{\small{Cumulative distribution function (CDF) of the minimum and maximum statistics of $\xi_{p,k}$ for $D=12$.}} \label{figmcmt} \end{figure} As an illustration of (\ref{f12}), Fig. \ref{figmcmt} shows the CDF for $D=12$. The empirical curves are simulated using WGN, and the analytical curves are computed using the equivalent sequence length in (\ref{f12}). The minimum CDF and maximum CDF of two groups of simulations are shown, for which the equivalent sequence lengths $\tilde{P}'$ are fixed at 20 and 100, respectively. For each equivalent sequence length $\tilde{P}'$, two empirical curves with frame increment $R=1$ and $R=6$ are simulated using WGN, whose corresponding original sequence lengths are $\tilde{P}=69$ and $\tilde{P}=24$ for $\tilde{P}'=20$, and $\tilde{P}=344$ and $\tilde{P}=118$ for $\tilde{P}'=100$, respectively. This shows that the equivalent sequence length in (\ref{f12}) is accurate for the minimum and maximum statistics. \section{Sound Source Localization method} \label{sec:ssl} The amplitude and the phase of the DP-RTF represent the amplitude ratio and phase difference between two source-to-microphone direct-path ATFs. In other words, in the case of two microphones, the DP-RTF is equivalent to the interaural cues, ILD and IPD, associated with the direct path. More generally, we consider here $J$ microphones. This is a slight generalization that will directly exploit the previous developments, since we consider these $J$ microphones pair-wise.
As in \cite{araki2007,mine2015eusipco}, we consider the normalized version of the DP-RTF estimate \eqref{eq:dp-estimation} between microphones $i$ and $j$: \begin{align}\label{bfc} c_{k,ij}=\frac{\hat{c}_{k,ij}}{\sqrt{1+|\hat{c}_{k,ij}|^2}}. \end{align} Compared to the amplitude ratio, the normalized DP-RTF is more robust. In particular, when the reference transfer function $a_{0,k}$ is much smaller than $b_{0,k}$, the amplitude ratio estimation is sensitive to noise present in the reference channel. By concatenating \eqref{bfc} across $K$ frequencies and across $(J-1)J/2$ microphone pairs, we obtain a high-dimensional feature vector $\mathbf{c}\in\mathbb{R}^{J(J-1)K/2}$. Since speech signals have a sparse STFT representation, we denote by $\hvect\in\mathbb{C}^{J(J-1)K/2}$ an indicator vector whose elements are either equal to 1 if the energy at the corresponding frequency is significant, or equal to 0 if the energy is negligible. \addnote[exp_under]{1}{In practice, the indicator vector entries at a given frequency $k$ are set to 0 if the corresponding matrix $\hat{\Phimat}_{\tilde{z}\tilde{y}}^s(k)$ is underdetermined, i.e. $P_1<(2Q_k-1)$ for that frequency. This way, we do not use any DP-RTF calculated from (\ref{eq:wls}) for such a ``missing frequency'' (see below).} \addnote[cla_doa]{1}{The proposed DP-RTF estimation method is suitable for the most general microphone setup, where the microphones are not necessarily placed in free-field. In other words, it can be applied to any microphone pair in any microphone array setup. For instance, in the present paper, the microphones are placed in the ears of a dummy head or on the head of a robot. In these cases, there is no clear (analytical) relationship between the HRIR/HRTF/DP-RTF and the DOA of the emitting source, even after removal of the noise and reverberations.
} In order to perform SSL based on the feature vector $\mathbf{c}$, we adopt here a supervised framework: A training set $D_{\mathbf{c,q}}$ of $I$ pairs $\{\mathbf{c}_i, \mathbf{q}_i\}_{i=1}^I$ is available, where $\mathbf{c}_i$ is a DP-RTF feature vector generated with an anechoic head-related impulse response (HRIR), and $\mathbf{q}_i$ is the corresponding source-direction vector. Then, for an observed (test) feature vector $\mathbf{c}$ that is extracted from the microphone signals, the corresponding direction is estimated using either (i)~nearest-neighbor search in the training set (considered as a look-up table) or (ii)~a regression whose parameters have been tuned from the training set. \addnote[cla_tt]{1}{Note that the training set and the observed test features should be recorded using the same microphone set-up. This way, the HRIR of the training set (corresponding to an anechoic condition) corresponds to the direct path of the BRIR of the test condition (recorded in a reverberant condition). } Nearest-neighbor search corresponds to solving the following minimization problem (\addnote[cla_odot]{1}{$\odot$ denotes the Hadamard product, i.e. entry-wise product}): \begin{equation}\label{f16} \hat{\mathbf{q}} = \mathop{\textrm{argmin}}_{i \in [1,I]} \parallel \hvect \odot ( \mathbf{c}-\mathbf{c}_i) \parallel. \end{equation} \addnote[exp_under2]{1}{As mentioned above, the indicator vector $\hvect$ enables the selection of the relevant DP-RTF vector components, i.e. the ones corresponding to frequencies with an (over)determined solution to (\ref{Phin}). } Because of the sparse nature of the test feature vectors, not every regression technique can be used. Indeed, one needs a regression method that allows training with full-spectrum signals and testing with sparse-spectrum signals. Moreover, the input DP-RTF vectors are high dimensional and not every regression method can handle high-dimensional input data.
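The masked nearest-neighbor search (\ref{f16}) amounts to a few lines. The following toy example (random ``training'' features and dimensions of our own choosing) illustrates how the indicator vector discards missing frequencies:

```python
import numpy as np

def nn_localize(c, h, C_train, q_train):
    """Nearest-neighbor direction lookup, Eq. (f16): compare the test feature
    vector c with each training vector, using only components flagged by h."""
    d = np.linalg.norm(h[None, :] * (c[None, :] - C_train), axis=1)
    return q_train[np.argmin(d)]

# Toy example: 37 azimuths (-90:5:90) and random complex "training" features.
rng = np.random.default_rng(1)
q_train = np.arange(-90, 95, 5)          # 37 candidate azimuths (degrees)
C_train = rng.standard_normal((37, 64)) + 1j * rng.standard_normal((37, 64))
c = C_train[10].copy()                   # test vector from azimuth index 10
h = np.ones(64)
h[40:] = 0.0                             # pretend high frequencies are missing
c[40:] = 999.0                           # corrupted entries are masked out by h
print(nn_localize(c, h, C_train, q_train))  # -> -40
```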
For these reasons we adopted the probabilistic piece-wise linear regression technique of \cite{deleforge2015acoustic}. \section{Experiments with Simulated Data} \label{sec:experiments1} We report experiments carried out to evaluate the performance of the proposed method. We simulated various experimental conditions in terms of reverberation and additive noise. \subsection{The Dataset} The BRIRs are generated with the ROOMSIM simulator \cite{campbell2004} and with the head related transfer function (HRTF) of a KEMAR dummy head \cite{gardner1995}. The responses are simulated in a rectangular room of dimension $8$~m~$\times$~$5$~m~$\times$~$3$~m. The KEMAR dummy head is located at $(4, 1, 1.5)$~m. The sound sources are placed in front of the dummy head with azimuths varying from $-90^\circ$ to $90^\circ$, spaced by 5$^\circ$, an elevation of 0$^\circ$, and distances of $1$~m, $2$~m, and $3$~m, see Fig.~\ref{figroom}. The absorption coefficients of the six walls are equal, and adjusted to control $T_{60}$ at 0.22~s, 0.5~s and 0.79~s, respectively. Two other quantities, i.e. the ITDG and the direct-to-reverberation ratio (DRR), are also important for measuring the intensity of the reverberation. In general, the larger the source-to-sensors distance is, the smaller the ITDG and DRR are. For example, when $T_{60}$ is 0.5~s, the DRRs for $1$, $2$, $3$~m are about $1.6$, $-4.5$ and $-8.1$ dB, respectively. Speech signals from the TIMIT dataset \cite{garofolo1988} are used as the speech source signals, which are convolved with the simulated BRIRs to generate the sensor signals. Each BRIR is convolved with 10 different speech signals from TIMIT to achieve reliable SSL results. Note that the elevation of the speech sources is always equal to $0^\circ$ in the BRIR dataset, hence, in these simulated-data experiments, the source direction corresponds to the azimuth only.
The feature vectors in the training set $\{\mathbf{c}_i\}_{i=1}^I$ are generated with the anechoic HRIRs of the KEMAR dummy head from the azimuth range $[-90^\circ\, , 90^\circ]$, spaced by 5$^\circ$, i.e. $I=37$. In this section, the nearest-neighbor search is adopted for localization. Two types of noise signals are generated: (i)~a ``directional noise'' is obtained by convolving a single channel WGN signal with a BRIR corresponding to a position beside the wall, with an azimuth of $120^\circ$, an elevation of $30^\circ$ and a distance of $2.2$~m, see Fig. \ref{figroom}; (ii)~an ``uncorrelated noise'' consists of an independent WGN signal on each channel. Noise signals are added to the speech sensor signals with various signal-to-noise ratios. \begin{figure}[t!] \centering \includegraphics[width=0.40\textwidth]{room.jpg} \caption{\small{Configurations of room, dummy head, speech sources and noise source for the BRIR dataset.}} \label{figroom} \end{figure} \begin{table*}[t] \centering \caption{\small{Localization errors (degrees) for different values of $Q$ in different conditions. $T_{60}=0.5$~s. ``Distance'' stands for source-to-sensors distance.
The bold value is the minimum localization error for each condition.}} \begin{tabular}{| ccc |ccccccc|} \hline \multicolumn{3}{|c|}{Conditions} & \multicolumn{7}{c|}{$Q/T_{60}$ ($T_{60}=0.5$ s)} \\ Noise type & SNR & Distance & 0.1 & 0.15 & 0.2 & 0.25 & 0.3 & 0.35 & 0.4 \\ \hline Uncorrelated & $10$~dB& 1 m & 0.122 & 0.081 & \textbf{0.077} & 0.081 & 0.099 & 0.108 & 0.113 \\ Uncorrelated & $10$~dB& 2 m & 1.338 & 0.847 & 0.716 & 0.649 & 0.629 & 0.608 & \textbf{0.568} \\ Directional& $10$~dB& 1 m & 0.135 & \textbf{0.113} & 0.122 & 0.131 & 0.149 & 0.158 & 0.162 \\ Directional& $10$~dB& 2 m & 1.437 & 0.869 & 0.829 & 0.680 & 0.644 & 0.626 & \textbf{0.617} \\ Uncorrelated& $-5$~dB& 2 m & 7.824 &6.833 & 6.703 &\textbf{6.680} & 6.802 &6.964 & 7.149 \\ Directional& $-5$~dB& 2 m & 13.36 &12.25 & 11.90 & 11.23 & 10.96 & 10.52 &\textbf{10.38} \\ \hline \end{tabular} \vspace{0mm} \label{tQ} \end{table*} \begin{table*} \centering \caption{\small{Localization errors (degrees) for different values of $D$ in different conditions. $T_{60}=0.5$~s. ``Distance'' stands for source-to-sensors distance. The bold value is the minimum localization error for each condition.}} \begin{tabular}{| c cc | c c c c cccc|} \hline \multicolumn{3}{|c|}{Conditions} & \multicolumn{8}{c|}{$D$ frames} \\ Noise type & SNR & Distance & 6 & 8 & 10 & 12 & 14 & 16 & 18 & 20 \\ \hline Uncorrelated & $-5$ dB& 1 m & 2.59 & 2.15 & 2.09 & 1.99 & 1.86 & 1.81 & 1.64 & \textbf{1.59} \\ Uncorrelated & $-5$ dB& 2 m & 7.37 & 6.03 & 6.17 & 6.68 & 6.08 & 6.40 & 6.90 & 6.50 \\ Directional& $-5$ dB& 1 m & 3.83 & 3.42 & 3.51 & 3.23 & 3.70 & 3.47 & 2.96 & 3.45 \\ Directional& $-5$ dB& 2 m &\textbf{9.80} & 10.28 & 10.32 & 11.23 & 11.60 & 13.18 & 13.62 &15.35 \\ \hline \end{tabular} \vspace{0mm} \label{tD} \end{table*} \subsection{Setting the Parameters} \label{sec:experiments1:parameter} The sampling rate is $16$~kHz. Only the frequency band from $0$ to $4$~kHz is considered for speech source localization. 
\addnote[settings]{1}{The setting of all three parameters $N$, $Q_k$ and $D$ is crucial for a good estimation of the DP-RTF. Intuitively, $Q_k$ should correspond to the value of $T_{60}$ at the $k$-th frequency bin. For simplicity, we set $Q_k$ to be the same for all frequencies and denote it as $Q$. In the remainder of this subsection, we present preliminary SSL experiments that were carried out to tune $N$, $Q$ and $D$ to an ``optimal tradeoff'' setting that would ensure good SSL performance for a large range of acoustic conditions. Since considering all possible joint settings of these three parameters is a hard task, when exploring the setting of one of them, we may fix the others. } In all the following, the localization error is taken as the performance metric. It is computed by averaging the absolute errors between the localized directions and their corresponding ground truth (in degrees) over the complete test dataset. Let us first consider the setting of $Q$. Here we fix $N=256$ with $50\%$ overlap, and $D=12$. Table \ref{tQ} shows the localization errors for $Q$ values corresponding to CTF length $\in [0.1T_{60}, \dots, 0.4T_{60}]$ with $T_{60}=0.5$~s. When the SNR is high (first four lines; SNR~=~$10$~dB), the influence of noise is small, and the DRR plays a dominant role. Comparing the localization errors for source-to-sensors distances between $1$~m and $2$~m, we see that small localization errors are obtained with rather small $Q$ values for $1$~m, and with larger $Q$ values for $2$~m. This result indicates that, for a given $T_{60}$, $Q$ should be increased when the DRR decreases. The CTF should cover most of the energy of the room impulse response. By comparing the results for the uncorrelated noise of 10~dB and $-5$~dB, source at 2~m (second and fifth lines), we observe that the smallest localization error is achieved by a smaller $Q$ for the low SNR case, compared to the high SNR case.
Note that a larger $Q$ corresponds to a greater model complexity, which needs more reliable (less noisy) data to be estimated. The intense uncorrelated noise degrades the data, hence a small $Q$ is preferred. In contrast, for the directional noise, a large $Q$ is also suitable for the low SNR case (sixth line). The reason is possibly that the directional noise signal has a similar convolution structure as the speech signal, and the noise residual $\mathbf{e}(k)$ also has a similar convolution structure. Hence, the data reliability is not degraded much. In conclusion, the optimal $Q$ varies with $T_{60}$, DRR, noise characteristics, and noise intensity. In practice, it is difficult to obtain these features automatically, hence we assume that $T_{60}$ is known, and we set $Q$ to correspond to $0.25T_{60}$ as a tradeoff for different acoustic conditions. Let us now consider the setting of $D$. Here, we set $Q$ to correspond to $0.25T_{60}$, and $N=256$ with $50\%$ overlap. The number of frames $D$ is crucial for an efficient spectral subtraction (Section \ref{sec:dprtfn:ss}). A large $D$ yields a small noise residual. However, the remaining speech power after spectral subtraction may also be small because of the small fluctuations of the speech PSD estimate between frames when $D$ is large. Table \ref{tD} shows the localization errors for $D\in [6, \dots, 20]$ under different conditions. Note that only the results for the low SNR case ($-5$~dB) are shown, for which the effect of noise suppression plays a more important role. It can be seen (first line) that a large $D$ yields the smallest localization error, which means that removing noise power is more important than retaining speech power for this condition. The reason is that the DRR is large for source-to-sensors distance of $1$~m, so that the direct-path speech power is relatively large.
As $D$ increases, the remaining direct-path speech power decreases only slightly, compared to the decrease of the noise residual. In contrast, a small $D$ yields the smallest localization error for the directional noise at $2$~m (fourth line), which means that retaining speech power is more important than removing noise power for this condition. The reasons are that (i)~as described above, the data reliability is not degraded much by the directional noise in the sense of convolution, and (ii)~the direct-path speech power is relatively small for a source-to-sensors distance of $2$~m. The conditions of the second and third lines fall in between those of the first and fourth lines, and these results do not strongly depend on $D$. It is difficult to choose a $D$ value that is optimal for all the acoustic conditions. In the following, we set $D=12$ frames ($100$~ms) as a fair tradeoff. As for the setting of $N$, let us recall that the reflections present in $a(n)|_{n=0}^N$ lead to a biased definition of DP-RTF. In order to minimize the reflections contained in $a(n)|_{n=0}^N$, the STFT window length $N$ should be as small as possible, while still capturing the direct-path response. However, in practice, a small $N$ requires a large $Q$ for the CTF to cover well the room impulse response, which increases the complexity of the DP-RTF estimate. We tested the localization performance for three STFT window sizes: $8$~ms ($N=128$ samples), $16$~ms ($N=256$ samples), and $32$~ms ($N=512$ samples), with $50\%$ overlap. Again, $Q$ corresponds to $0.25T_{60}$. For example, with $T_{60}=0.79$~s and with $N=128$, $256$, $512$ respectively, $Q$ is equal to $50$, $25$, $13$ frames respectively. $D$ is set to $100$~ms. For $N=128$, $256$, $512$, $D$ is $24$, $12$, $6$ frames, respectively. Table \ref{tN} shows the localization errors under various acoustic conditions. We first discuss the case of high SNR (first three lines).
When the source-to-sensors distance is small ($1$~m; first line), the ITDG is relatively large and we observe that $N=128$ and $N=256$ ($8$~ms and $16$~ms windows) achieve comparable performance. This indicates that, if the ITDG is relatively large, there are not many more reflections in $a(n)|_{n=0}^N$ for a $16$-ms window, compared with an $8$-ms window. The next results (second line) show that, when $T_{60}$ is small ($0.22$~s), the localization performance decreases much more for a $16$-ms and a $32$-ms window than for an $8$-ms window, as the source-to-sensors distance increases from $1$~m to $3$~m. A lower ITDG yields a larger DP-RTF estimation error due to the presence of more reflections in $a(n)|_{n=0}^N$. When $T_{60}$ increases to $0.79$~s, $Q$ becomes larger, especially for $N=128$. It can be seen (third line) that here $N=256$ yields a better performance than other values. This is because the lack of data leads to a large DP-RTF estimation error for $N=128$, and the reflections in $a(n)|_{n=0}^N$ introduce a large DP-RTF estimation error for $N=512$. When the SNR is low ($-5$ dB; last three lines), less reliable data are available due to noise contamination. In that case, a large $N$ achieves the best performance. Finally, we set $N=256$ ($16$-ms STFT window) as a good overall tradeoff between all tested conditions. \setlength{\tabcolsep}{5.0pt} \begin{table}[t!] \centering \caption{\small{Localization errors (degrees) for three values of $N$. ``Distance'' is the sensors-to-source distance. The bold value is the minimum localization error.
In this experiment, the noise signal is generated by summing the directional noise and uncorrelated noise with identical powers.} } \label{tN} \begin{tabular}{| c c c| c c c |} \hline \multicolumn{3}{|c|}{Conditions} & \multicolumn{3}{c|}{STFT window length $N$} \\ SNR & Distance & $T_{60}$ & 128 (8 ms) & 256 (16 ms) & 512 (32 ms) \\ \hline & $1$~m & $0.22$~s &\textbf{0.01} &\textbf{0.01} & 0.02 \\ $10$ dB & $3$~m& $0.22$ s &\textbf{0.58} &1.19 & 1.89 \\ & $3$~m &$0.79$ s &9.60 &\textbf{9.22} & 9.55 \\ \hline & $1$~m &$0.22$ s &1.89 &1.62 & \textbf{1.49} \\ $-5$ dB & $3$~m &$0.22$ s &8.07 &\textbf{6.30} & 7.04 \\ & $3$~m &$0.79$ s &22.66 &20.81 & \textbf{17.75} \\ \hline \end{tabular} \end{table} \subsection{DP-RTF Estimation} We provide several representative examples showing the influence of both reverberation and noise on the DP-RTF estimates. The phase and normalized amplitude of the estimated DP-RTF for three acoustic conditions are shown in Fig. \ref{figdprtf}. The SNR is set to $30$~dB in the first two examples, hence the noise is negligible. The difference between the estimated and the ground-truth phase is referred to as the phase estimation error. It can be seen that, for most frequency bins, the mean value (over ten trials) of the phase estimation error is very small (but nonzero, which indicates that the estimated DP-RTF is biased). As mentioned above, the bias is brought in by the reflections in the impulse response segment $a(n)|_{n=0}^N$. In addition, if the DRR gets smaller, a longer CTF is required to cover the room impulse response. However, for a given $T_{60}$, the CTF length $Q$ is set as a constant, for instance $0.25 T_{60}$. In this example, this improper value of $Q$ leads to an inaccurate CTF model, which causes the DP-RTF estimate bias. When the source-to-sensors distance increases, both the ITDG and DRR become smaller. Therefore, for both phase and amplitude, the estimation bias of the second example of Fig. 
\ref{figdprtf} (middle) is larger than the bias of the first example (left). Moreover, the DP-RTF $\frac{b_{0,k}}{a_{0,k}}$ in $\mathbf{g}_k$ plays a less important role relative to the other elements as the DRR decreases, which makes the variance of both the phase and amplitude estimation errors larger than in the first example. By comparing the first and last examples of Fig. \ref{figdprtf}, it is not surprising to observe that the estimation error increases as the noise power increases. When the SNR is low, fewer reliable speech frames are available in the high frequency band, due to the intense noise. Therefore, there is no DP-RTF estimate for the frequency bins satisfying $P_1<2Q_k-1$. \begin{figure*}[t!] \centering \includegraphics[width=0.3\textwidth]{1_30_p.jpg} \includegraphics[width=0.3\textwidth]{2_30_p.jpg} \includegraphics[width=0.3\textwidth]{1_0_p.jpg} \\ \includegraphics[width=0.3\textwidth]{1_30_a.jpg} \includegraphics[width=0.3\textwidth]{2_30_a.jpg} \includegraphics[width=0.3\textwidth]{1_0_a.jpg} \caption{\small{The phase (top) and normalized amplitude (bottom) of the normalized estimated DP-RTF (\ref{bfc}) as a function of frequency bins. The source direction is 30$^\circ$. $T_{60}=0.5$ s. The continuous curve corresponds to the ground-truth DP-RTF $d_k$ computed from the anechoic HRTF. Left: $1$~m source-to-sensors distance, $30$~dB SNR. Middle: $2$~m source-to-sensors distance, $30$~dB SNR. Right: $1$~m source-to-sensors distance, $0$~dB SNR. For each acoustic condition, the BRIR is convolved with 10 different speech recordings as the sensor signals, whose DP-RTF estimates are all shown.
In this experiment, the noise signal is generated by summing the directional noise and uncorrelated noise with identical powers.}} \label{figdprtf} \end{figure*} \subsection{Baseline Methods} \addnote[cla_exp1]{1}{In our previous work \cite{mine2015assp}, the proposed inter-frame spectral subtraction scheme was applied to RTF estimators (as opposed to the DP-RTF estimators proposed in the present paper). The results were compared with the RTF estimators proposed in \cite{gannot2001} and \cite{cohen2004} in the presence of WGN or babble noise. The efficiency of the inter-frame spectral subtraction in removing the noise was demonstrated. Hence, the present set of experiments mainly aims at (i) comparing the robustness to reverberation of the proposed DP-RTF feature with respect to other features, in a similar SSL framework, and at (ii) comparing the proposed SSL method with a conventional SSL method.} To this aim, we compare our method with three other methods: (i) an unbiased RTF identification method \cite{mine2015assp}, in which a spectral subtraction procedure (similar to the one described in Section~\ref{sec:dprtfn:ss}) is used to suppress noise. Since this RTF estimator is based on the MTF approximation, we refer to this method as RTF-MTF. (ii) a method based on a STFT-domain coherence test (CT) \cite{mohan2008}.\footnote{Note that \cite{faller2004} introduces a similar technique based on interaural coherence, using features extracted from band-pass filter banks. Also, a binaural coherent-to-diffuse ratio approach was proposed in \cite{schwarz2015coherent,zheng2015} and applied to dereverberation but not to SSL.} We refer to this method as RTF-CT. The coherence test is used in \cite{mohan2008} to search for the rank-1 time-frequency bins which are supposed to be dominated by one active source. We adopt the coherence test for single-speaker localization, in which one active source denotes the direct-path source signal.
The TF bins that involve notable reflections have low coherence. We first detect the maximum coherence over all the frames at each frequency bin, and then set the coherence test threshold for each frequency bin to $0.9$ times its maximum coherence. In our experiments, this threshold achieves the best performance. The covariance matrix is estimated by averaging over $120$~ms ($15$ adjacent frames). \addnote[exp_ctss]{1}{The auto- and cross-PSD spectral subtraction is applied to the frames that have high speech power and a coherence larger than the threshold; the results are then averaged over frames for RTF estimation.} \addnote[exp_srp]{1}{ (iii) a classic one-stage algorithm: the steered-response power (SRP) utilizing the phase transform (PHAT) \cite{dibiase2001,do2007}. The azimuth directions $-90^\circ:5^\circ:90^\circ$ are taken as the steering directions, and their HRIRs are used as the steering responses. } \addnote[cla_mc]{1}{Note that for both RTF-MTF and RTF-CT methods, the features used in the SSL are obtained after the inter-frame spectral subtraction procedure. The SSL method presented in Section \ref{sec:ssl} is adopted. The training set used as a look-up table or used for training the regression is the same as for the DP-RTF.} \subsection{Localization Results} Fig. \ref{fig:simulated-results} shows the localization results in terms of localization error (let us recall that this error is an average absolute error between the localized directions and their corresponding ground truth (in degrees) over the complete test dataset). Note that, in the real world, directional noise sources (e.g., fan, refrigerator) and diffuse background noise co-exist. Hence, in this experiment, the noise signal was generated by summing the directional noise and uncorrelated noise with identical powers. Let us first discuss the localization performance shown in Fig. \ref{fig:simulated-results}-top for $T_{60}=0.22$ s.
When the DRR is high ($1$~m source-to-sensors distance; solid line), RTF-MTF has a performance comparable to the proposed method under high SNR conditions, and a slightly better performance under low SNR conditions (lower than $0$~dB). This indicates that when the reverberation is low, the MTF approximation is valid. When less reliable data are available (under low SNR conditions), the proposed method performs slightly worse than RTF-MTF due to its greater model complexity. Note that both the RTF-MTF and the proposed DP-RTF methods achieve very good localization performance: the localization error goes from almost $0^\circ$ at SNR = $10$~dB to about $5^\circ$ at SNR = $-10$~dB. RTF-CT achieves the worst performance. This indicates that when the direct-path impulse response is only slightly contaminated by the reflections, employing all the data (as done by RTF-MTF and DP-RTF) yields a smaller localization error than employing only the data selected by the coherence test. In general, for mild reverberations, the performance gap between RTF-MTF, RTF-CT and the proposed method is small, and the noise level plays the decisive role for good localization. \addnote[exp_phat1]{1}{The SRP-PHAT method achieves performance comparable to the three other methods when the SNR is high ($10$~dB). However, the performance of SRP-PHAT degrades rapidly and dramatically when the SNR decreases. The steered-response power is severely influenced by intense noise, especially by the directional noise. This indicates that the inter-frame spectral subtraction algorithm applied to RTF-MTF, RTF-CT and the proposed method efficiently reduces the noise.
} \begin{figure}[!t] \centering \includegraphics[width=0.3\textwidth]{lege.png}\\ \includegraphics[width=0.45\textwidth]{T022.jpg}\\ \includegraphics[width=0.45\textwidth]{T050.jpg}\\ \includegraphics[width=0.45\textwidth]{T079.jpg} \caption{\small{Localization errors under various reverberation and noise conditions. Top: $T_{60}=0.22$ s. Middle: $T_{60}=0.5$ s. Bottom: $T_{60}=0.79$ s. The localization errors are shown as a function of SNR for source-to-sensors distances of $1$~m, $2$~m and $3$~m.}} \label{fig:simulated-results} \end{figure} When the DRR decreases ($2$~m source-to-sensors distance, grey lines; $3$~m source-to-sensors distance, dashed lines), the performance of RTF-MTF degrades notably. For SNR~=~$10$~dB, the localization error of RTF-MTF increases from $0.07^\circ$ to $1.51^\circ$ and $6.35^\circ$ for source-to-sensors distances of $1$~m, $2$~m and $3$~m, respectively. The direct-path impulse response is severely contaminated by the reflections. At high SNRs, RTF-CT performs slightly better than RTF-MTF. Indeed, RTF-CT selects the frames that contain less reverberation for calculating the RTF estimate, which improves the performance under high SNR conditions. However, when the noise level increases, the precision of RTF-CT also degrades. The performance of RTF-CT is influenced not only by the residual noise but also by the decline of the coherence-test precision, which makes it degrade even faster than RTF-MTF with decreasing SNR (it has a larger localization error at $-5$ dB and $-10$ dB). The proposed method also has a larger localization error when the source-to-sensors distance increases: the DP-RTF estimation is possibly influenced by the increased amount of early reflections in the impulse response segment $a(n)|_{n=0}^N$, by the effect of an improper $Q$ setting, and by the decreased importance of $\frac{b_{0,k}}{a_{0,k}}$ in the vector $\mathbf{g}_k$.
However, the performance of the proposed DP-RTF method degrades much more slowly than that of RTF-MTF when the source distance increases. For an SNR of $10$~dB, the localization error of the proposed method increases from $0.06^\circ$ to $0.16^\circ$ and $1.19^\circ$ as the source-to-sensors distance increases from $1$~m to $2$~m and $3$~m. It can be seen that the performance of the proposed method also degrades faster than that of RTF-MTF with decreasing SNR, since the available data are less reliable. The localization error of the proposed method is larger than the RTF-MTF error at $-10$~dB. It is observed that the proposed method notably outperforms RTF-CT. It is shown in \cite{nadiri2014} that the coherence test is influenced by the coherent reflections (very early reflections) of the source signal. Moreover, it is difficult to automatically set a coherence-test threshold that could perfectly select the desired frames. Many frames that have a coherence larger than the threshold include reflections. \addnote[exp_phat2]{1}{The performance of SRP-PHAT also degrades as the DRR decreases. It is known that PHAT-based methods are quite sensitive to reverberation and noise in general. Briefly, the performance of SRP-PHAT lies in between that of RTF-MTF and RTF-CT at high SNRs, which indicates that the PHAT weighting can suppress the reverberations only to a certain extent. Below $5$~dB, SRP-PHAT performs worst of the four methods. } Fig. \ref{fig:simulated-results}~(bottom) displays the results for $T_{60}=0.79$ s. Obviously, the performance of all four methods degrades as $T_{60}$ increases. Indeed, the MTF approximation is no longer accurate; there are only a few time-frequency bins with rank-1 coherence; and a large value of $Q$ has to be used in the proposed method, for which there may not always be enough reliable data. Here, it can be seen that RTF-CT performs better than RTF-MTF for any SNR value and source-to-sensors distance.
\addnote[exp_phat3]{1}{Even SRP-PHAT performs better than RTF-MTF (for $2$~m and $3$~m source-to-sensors distances). } This shows that the RTF estimation error brought by the MTF approximation increases considerably as $T_{60}$ increases. For a $1$~m source-to-sensors distance, the proposed method performs slightly better than the other three methods. For $2$~m and $3$~m source-to-sensors distances, the proposed method largely outperforms the other three methods, at all SNRs. For example, at SNR~=~$0$~dB, the proposed method achieves about $6.5^\circ$ of localization error at a $2$~m source-to-sensors distance, while RTF-CT (the best of the three baseline methods) achieves about $15.8^\circ$; hence the gain of the proposed method over the best baseline is about $9.3^\circ$. However, the performance of the proposed method and of RTF-CT still degrades faster than that of RTF-MTF with decreasing SNR. Finally, we can see from Fig. \ref{fig:simulated-results}~(middle) that the performance of the different methods for $T_{60}=0.5$~s falls in between the other two cases shown in the same figure, and the trends of performance evolution with $T_{60}$ are consistent with our comments above. In summary, the proposed method outperforms the three other methods under most acoustic conditions. In general, the gain over the baseline methods increases as the source-to-sensors distance increases (i.e., as the DRR decreases) and as the reverberation time increases (the influence of the noise level is more intricate). As a result, the proposed method achieves acceptable localization performance in quite adverse conditions. For example (among many others), with $T_{60}=0.5$~s, a source-to-sensors distance of $3$~m and an SNR of $0$~dB, the localization error is about $9^\circ$; and with $T_{60}=0.79$~s, a source-to-sensors distance of $2$~m, and an SNR of $0$~dB, the localization error is about $6.5^\circ$.
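The core of the proposed DP-RTF estimation is a least-squares solve of linear (cross-relation) equations stacked over frames. The following single-frequency toy sketch, with hypothetical variable names, a toy CTF length and noise-free data (not the paper's exact formulation in (\ref{eq:wls})), illustrates how a direct-path ratio can be recovered from reverberant two-channel data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: two channels observe one source through short convolutive
# transfer functions (CTFs) a and b; the DP-RTF analogue is b[0] / a[0].
# All names and the toy CTF length Q are assumptions for illustration.
Q = 4
a = np.array([1.0, 0.5, 0.3, 0.1])     # channel-1 CTF (direct path = a[0])
b = np.array([0.8, 0.4, 0.2, 0.1])     # channel-2 CTF (direct path = b[0])
s = rng.standard_normal(200)           # source coefficients at one frequency
x1 = np.convolve(s, a)[: len(s)]       # channel-1 observations
x2 = np.convolve(s, b)[: len(s)]       # channel-2 observations

# Cross-relation: x1 * b = x2 * a (convolutions), which is linear in (a, b).
# Normalize a[0] = 1 and solve for g = [b; a[1:]] by stacking one linear
# equation per frame n, then solving in the least-squares sense.
rows, rhs = [], []
for n in range(Q - 1, len(s)):
    row = np.concatenate([x1[n - np.arange(Q)],        # coefficients of b
                          -x2[n - np.arange(1, Q)]])   # coefficients of a[1:]
    rows.append(row)
    rhs.append(x2[n])                                  # the a[0] * x2[n] term
g, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
dp_ratio = g[0]   # recovers b[0] / a[0] = 0.8 on this noise-free toy data
```

With noisy observations, the same stacked system is solved after the inter-frame spectral subtraction step, which is what makes the estimate robust at low SNRs.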
In all the above results, the duration of the signal used for localization was not considered with great attention: the localization errors were averaged over 10 TIMIT sentences of possibly quite different durations, from $1$~s to $5$~s. Yet the number of available frames used to construct (\ref{Phin}) depends on the speech duration, which is crucial for the least-squares DP-RTF estimation in (\ref{eq:wls}). Here we complete the simulation results with a basic test of the influence of the speech duration on localization performance. To this aim, we classified our TIMIT test sentences according to their duration (closest to $1$~s, $2$~s, $3$~s or $4$~s) and proceeded to localization evaluation for each new group (of 10 sentences), for a limited set of acoustic conditions (SNR~=~$10$~dB and $0$~dB, $T_{60}=0.5$~s). Table \ref{td} shows the localization errors of the proposed method, RTF-MTF and RTF-CT, for the four tested approximate speech durations. We can see that, as expected, all three methods achieve a smaller localization error with increasing speech duration, for both tested SNRs. The improvement is more pronounced for the proposed method and RTF-CT than for RTF-MTF. For example, for SNR~=~$10$~dB, the localization error is reduced by $66\%$ (from $1.57^\circ$ to $0.54^\circ$) for the proposed method, and by $49\%$ (from $6.24^\circ$ to $3.21^\circ$) for RTF-CT, when the speech duration increases from $1$~s to $4$~s. In contrast, the localization error of RTF-MTF is much larger and is only reduced by $11\%$ (from $12.60^\circ$ to $11.16^\circ$). \begin{table}[t!]
\caption{\small{Localization errors (in degrees) as a function of speech duration, for $T_{60}=0.5$~s and a source-to-sensors distance of $2$~m.}} \label{td} \centering \begin{tabular}{| c c| c c c c |} \hline & & \multicolumn{4}{c|}{Speech duration (s)} \\ SNR & Method & 1 & 2 & 3 & 4 \\ \hline &Proposed &1.57 &0.88 & 0.79 & 0.54 \\ $10$ dB & RTF-CT &6.24 &4.43 & 3.86 & 3.21 \\ & RTF-MTF &12.60 &12.01 & 11.25 & 11.16 \\ \hline & Proposed &7.36 &4.62 & 4.05 & 3.07 \\ $0$ dB & RTF-CT &12.97 &11.33 & 10.04 & 9.67 \\ & RTF-MTF &17.56 &15.29 & 14.94 & 15.01 \\ \hline \end{tabular} \end{table} \section{Experiments with the NAO Robot} \label{sec:experiments2} In this section we present several experiments that were conducted using the NAO robot (Version 5) in various real-world environments. NAO is a humanoid companion robot developed and commercialized by Aldebaran Robotics.\footnote{https://www.ald.softbankrobotics.com.} NAO's head has four microphones that are nearly coplanar, see Fig.~\ref{fignao}. The recordings contain ego-noise, i.e., noise produced by the robot. In particular, they contain a loud fan noise, which is stationary and partially inter-channel correlated \cite{loellmann2014}. The spectral energy of the fan noise is notable up to 4 kHz, hence the speech signals are significantly contaminated. Note that the experiments reported in this section adopt the parameter settings discussed in Section \ref{sec:experiments1:parameter}. \subsection{The Datasets} The data are recorded in three environments: laboratory, office (e.g., Fig.~\ref{fig:scenario}-(right)) and cafeteria, with reverberation times ($T_{60}$) of approximately $0.52$~s, $0.47$~s and $0.24$~s, respectively. Two \textbf{test datasets} are recorded in these environments: \\ 1) The \emph{audio-only} dataset: In the laboratory, speech utterances from the TIMIT dataset \cite{garofolo1988} are emitted by a loudspeaker in front of NAO.
Two groups of data are recorded with source-to-robot distances of $1.1$~m and $2.1$~m, respectively. For each group, $174$ sounds are emitted from directions uniformly distributed in azimuth and elevation, in the ranges $[-120^\circ,120^\circ]$ (azimuth) and $[-15^\circ, 25^\circ]$ (elevation).\\ 2) The \emph{audio-visual} dataset: Sounds are emitted by a loudspeaker lying in the field of view of NAO's camera. The image resolution is $640 \times 480$ pixels, corresponding to approximately $60^\circ$ ($-30^\circ$ to $30^\circ$) of azimuth range and approximately $48^\circ$ ($-24^\circ$ to $24^\circ$) of elevation range, so $1^\circ$ of azimuth/elevation corresponds to approximately $10.5$ horizontal/vertical pixels. An LED placed on the loudspeaker makes it possible to estimate the loudspeaker location in the image, hence ground-truth localization data are available with the audio-visual dataset. Three sets of audio-visual data are recorded in three different rooms. For each set, sounds are emitted from about $230$ directions uniformly distributed in the camera field of view. Fig.~\ref{fig:scenario}-(left) shows the source positions as blue dots in the image plane. The source-to-robot distance is about $1.5$~m in this dataset. In both datasets, the ambient noise is much lower than the fan noise, hence the noise in the recorded signals mainly corresponds to fan noise. In the case of the audio-only dataset, the SNR is $14$~dB and $11$~dB for source-to-robot distances of $1.1$~m and $2.1$~m, respectively. For the audio-visual dataset the SNR is $2$~dB. The \textbf{training dataset} for the \emph{audio-only} localization experiments is generated with the NAO head HRIRs of $1,002$ directions uniformly distributed over the same azimuth-elevation range as the test dataset. The training dataset for the \emph{audio-visual} experiments is generated with the NAO head HRIRs of $378$ directions uniformly distributed over the camera field of view.
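The linear image-to-angle mapping described above (640 pixels $\leftrightarrow$ $60^\circ$ of azimuth, 480 pixels $\leftrightarrow$ $48^\circ$ of elevation) can be sketched as follows; the function name and the downward-growing image $v$-axis convention are assumptions:

```python
def pixel_to_angles(u, v, width=640, height=480, az_range=60.0, el_range=48.0):
    """Map image coordinates (u, v) to (azimuth, elevation) in degrees,
    assuming the linear field-of-view mapping stated in the text
    (640 px <-> 60 deg azimuth, 480 px <-> 48 deg elevation)."""
    az = (u - width / 2.0) * az_range / width     # image center maps to 0 deg
    el = (height / 2.0 - v) * el_range / height   # image v grows downward
    return az, el
```

For instance, the image center maps to $(0^\circ, 0^\circ)$ and the right image border to $30^\circ$ of azimuth, consistent with the roughly $10.5$ pixels per degree stated above.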
HRIRs are measured in the laboratory: white Gaussian noise is emitted from each direction, and the cross-correlation between the microphone and source signals yields the BRIR of each direction. In order to obtain anechoic HRIRs, the BRIRs are manually truncated before the first reflection. The regression method of \cite{deleforge2015acoustic}, outlined in Section~\ref{sec:ssl}, is used for supervised localization. The SRP-PHAT method takes the source directions in the training set as the steering directions. \begin{figure}[t] \centering {\includegraphics[width=0.8\columnwidth]{micro.png}} \caption{\small{NAO's head has four microphones and one camera.}} \label{fignao} \end{figure} \begin{figure}[t] \centering \includegraphics[height=0.31\columnwidth]{grid.png} \hspace{0.05cm} \includegraphics[height=0.31\columnwidth]{office-ssl.png} \caption{ \small{ The \textit{audio-visual} training dataset (left) is obtained by moving a loudspeaker in front of a microphone/camera setup. Sounds are emitted by the loudspeaker. An LED placed on the loudspeaker makes it possible to associate each sound direction with an image location (a blue circle). The data contain pairs of acoustic recordings and sound directions. A typical localization scenario with the NAO robot (right). }} \label{fig:scenario} \end{figure} \subsection{Localization Results for the \textit{Audio-Only} Dataset} Experiments with the audio-only dataset first show that elevation estimation in the range $[-15^\circ, 25^\circ]$ is unreliable for all four methods. This can be explained by the fact that the four microphones are coplanar. Therefore, we only present the azimuth estimation results in the following. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{audio11.jpg} \\ \includegraphics[width=0.95\columnwidth]{audio21.jpg} \caption{\small{Azimuth estimation for the audio-only dataset. Source-to-robot distance is $1.1$~m (top) and $2.1$~m (bottom).
}} \label{figres} \end{figure} The azimuth estimation results for the audio-only dataset are given in Fig.~\ref{figres}. The results are quite consistent across the two conditions, i.e., source-to-robot distances of $1.1$~m (Fig.~\ref{figres}-top) and $2.1$~m (Fig.~\ref{figres}-bottom). Globally, for the azimuth range $[-50^\circ,50^\circ]$, all four methods provide good localization, i.e., they follow the ground-truth line quite well, for both source-to-robot distances. In this range, the proposed method achieves slightly better results than the RTF-MTF and RTF-CT methods. The performance of all methods drops significantly for directions outside this range but, globally, the proposed method remains the closest to the ground truth. \addnote[exp_phat4]{1}{In more detail, in the approximate ranges $[-120^\circ,-50^\circ]$ and $[50^\circ,120^\circ]$, it can be seen that SRP-PHAT and RTF-MTF have the largest localization errors and many localization outliers caused by reverberations (SRP-PHAT performs slightly better than RTF-MTF in the zones just beyond $-50^\circ$ and $50^\circ$, possibly due to the PHAT weighting). } By selecting frames that involve less reverberation, RTF-CT performs slightly better than RTF-MTF. The proposed method outperforms the others by extracting the binaural cues associated with the direct-path propagation. Importantly, at the extremities of the range, the proposed method generates neither major outliers nor large deviations from the ground truth, as opposed to the other methods. \subsection{Localization Results for the Audio-Visual Dataset} The azimuth and elevation in the audio-visual dataset are limited to a small range around $0^\circ$. As a consequence, both the azimuth and elevation localization results for this dataset are, on average, better than those of the audio-only dataset. Table~\ref{tle} shows the localization errors for azimuth (Azim.) and elevation (Elev.) for the audio-visual dataset.
The elevation errors are always larger than the azimuth errors, due to the low elevation resolution of the microphone array already mentioned (the microphones are coplanar and the microphone plane is horizontal). The cafeteria has the smallest reverberation time, $T_{60}=0.24$~s. Consequently, the RTF-MTF and RTF-CT methods yield performance comparable to that of the proposed method. The office and laboratory have larger reverberation times, $0.47$~s and $0.52$~s, respectively, so the MTF approximation is no longer accurate. Somewhat surprisingly, RTF-MTF performs better than RTF-CT in the office (though the errors are quite close); this is probably due to the fact that the coherence test does not work well under low SNR conditions (recall that the SNR of the audio-visual dataset is around $2$~dB). \addnote[exp_phat5]{1}{Globally, SRP-PHAT performs the worst, due to the intense noise. } As a result of the presence of notable reverberations, the proposed method performs significantly better here than the three other methods. For example, in the laboratory environment, the proposed method yields a $0.84^\circ$ azimuth error and a $1.84^\circ$ elevation error, vs. a $1.41^\circ$ azimuth error and a $2.30^\circ$ elevation error for the best baseline methods (SRP-PHAT and RTF-MTF, respectively). \begin{table}[t!] \caption{\small{Localization errors (in degrees) for the audio-visual dataset. The best results are shown in bold.}} \label{tle} \centering \begin{tabular}{| c | c c | c c | c c |} \hline & \multicolumn{2}{c|}{Cafeteria} & \multicolumn{2}{c|}{Office} & \multicolumn{2}{c|}{Laboratory} \\ Method & Azim. & Elev. & Azim. & Elev. & Azim. & Elev.
\\ \hline RTF-MTF & 0.47 & 1.58 & 0.62 & 2.14 & 1.46 & 2.30 \\ RTF-CT & \textbf{0.43} & 1.49 & 0.68 & 2.30 & 1.59 & 2.40 \\ SRP-PHAT & 0.77 & 1.95 & 1.03 &2.80 & 1.41 & 3.33 \\ Proposed & 0.48 & \textbf{1.46} & \textbf{0.55} & \textbf{1.86} & \textbf{0.84} & \textbf{1.84} \\ \hline \end{tabular} \end{table} \section{Conclusion}\label{sec:conclusion} We proposed a method for the estimation of the direct-path relative transfer function (DP-RTF). In contrast to the conventional RTF, the DP-RTF is defined as the ratio between two direct-path acoustic transfer functions. Therefore, the DP-RTF definition and estimation imply the removal of the reverberation, which provides a more reliable feature, in particular for sound source localization. To estimate the DP-RTF, we adopted the convolutive transfer function (CTF) model instead of the multiplicative transfer function (MTF) approximation. By doing so, the DP-RTF can be estimated by solving a set of linear equations constructed from the reverberant sensor signals. Moreover, an inter-frame spectral subtraction method was proposed to remove the noise power. This spectral subtraction process does not require explicit estimation of the noise PSD, hence it does not suffer from noise PSD estimation errors. Based on the DP-RTF, we proposed a supervised sound-source localization algorithm. \addnote[exp_tra]{1}{The latter relies on a training dataset composed of pairs of DP-RTF feature vectors and their associated sound directions. The training dataset is pre-processed in such a way that it only contains anechoic head-related impulse responses. Hence the training dataset does not depend on the particular acoustic properties of the recording environment. Only the sensor set-up must be consistent between training and testing (e.g., using the same dummy/robot head). } In practice, we implemented two supervised methods, namely a nearest-neighbor search and a mixture of linear regressions.
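Of the two supervised schemes, the nearest-neighbor search is the simplest; a minimal sketch, with hypothetical feature vectors standing in for the DP-RTF features, is:

```python
import numpy as np

def nn_localize(feature, train_features, train_directions):
    """Nearest-neighbor lookup: return the training direction whose feature
    vector is closest (Euclidean distance) to the test feature vector.
    All argument names are illustrative, not the paper's notation."""
    dists = np.linalg.norm(train_features - feature, axis=1)
    return train_directions[np.argmin(dists)]
```

In the actual system, the look-up table pairs anechoic-HRIR-based DP-RTF vectors with their directions, so the same training set serves any room with the same sensor set-up.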
Experiments with both simulated data and real data recorded with four microphones embedded in a robot head showed that the proposed method outperforms an MTF-based method and a method based on a coherence test, \addnote[exp_phat6]{1}{as well as a conventional SRP-PHAT method, } in reverberant environments. In the presented experiments, the model parameters $Q$, $D$ and $N$ (Section \ref{sec:experiments1:parameter}) were set to constant values, chosen as a tradeoff yielding good results in a variety of acoustic conditions. In the future, to improve the robustness of the DP-RTF, we plan to estimate the acoustic conditions from the microphone signals, so that an optimal set of parameters can be adaptively adjusted. We also plan to extend the DP-RTF estimator and its use in SSL to the more complex case of multiple sound sources.
\section*{Appendix} \label{sec:app} \noindent{\bf Proof of Lemma~\ref{lem:lemma1}} Assume the first estimated preamble bit is at $\hat{t_0}$, and denote its actual time by $t_0$. Let $s[n]$ denote the central time of a three-bit sequence at the ViReader\xspace rx, and $t[n]$ the central time of a three-bit sequence at the ViTag\xspace tx, where $t[n+1]-t[n]$ is the time period of one bit ($n=0, 1, \ldots$). We have \begin{align*} t[n]=t_0+k\cdot s[n] \end{align*} where $k\cdot s[n]$ is a mapping from the ViReader\xspace rx to the actual bit boundaries, which we suppose is linear on the small bit-period time scale. The problem is then, given $\hat{t_0}$, $s$ and $t[i]$, to estimate the next actual bit boundary $t[i+1]$. Our method approximates the above equation by drawing a line that connects $(s[i], t[i])$ and $(0, \hat{t_0})$, as follows: \begin{align*} \hat t[i+1]=\hat{t_0}+\frac{t[i]-\hat{t_0}}{s[i]}s[i+1] \end{align*} Therefore \begin{align*} error_{time}=&\lim_{i\to\infty}\hat t[i+1]-t[i+1]\\ =&\lim_{i\to\infty}\hat{t_0}+\frac{(t_0+k\cdot s[i])-\hat{t_0}}{s[i]}s[i+1]\\ & \qquad -(t_0+k\cdot s[i+1])\\ =&\lim_{i\to\infty}(\hat{t_0}-t_0)(1-\frac{s[i+1]}{s[i]})=0 \end{align*} where the last limit vanishes because $s[i+1]-s[i]$ is a single bit period while $s[i]$ grows without bound, so $s[i+1]/s[i]\to 1$. The result highlights that the deviation of the bit boundary estimate does not propagate, and converges to zero for infinitely long packets. \iffalse \noindent{\bf B. Proof of Lemma~\ref{lem:lemma2}} Our goal is to prove that using the combination of an LCD and a retro-reflector as a passive emitter is more energy-efficient than using an LED as an active emitter when both systems have the same ViReader\xspace, whose LED has a power $P_0$, bit rate $1/\Delta t$, ViReader\xspace-to-ViTag\xspace\ distance $r$, energy used for transmitting per bit $E_{tx}$ and receiving per bit $E_{rx}$, and noise power. We compare the SNRs at the ViReader\xspace\ receiver for the two methods.
Further, since the noise power in the two scenarios is the same, we need only compare the signal energies per bit, $E_{s1}$ and $E_{s2}$. The system with the larger one has the better energy efficiency. First, for the LCD tag with a retro-reflector, all the energy it transmits is received by the ViReader\xspace\ receiver, assuming the LED on the ViReader\xspace\ is at the same location as the light sensor. Also, the signal the ViTag\xspace\ receives is modulated and bounced back. Therefore \begin{align} E_{s1}=\eta_1\frac{P_0\Delta t}{4\pi r^2}\Delta S_{tag} \end{align} where $\eta_1$ accounts for the energy dissipation caused by the absorption of the retro-reflector and the direct reflection of the LCD, and $\Delta S_{tag}$ denotes the equivalent reflective area on the tag. Second, for the LED tag that actively transmits, in a bit period we have \begin{align} E_{s2}=\eta_2\frac{E_{tx}}{4\pi r^2}\Delta S_{reader} \end{align} where $\eta_2$ is the efficiency of the LED tag hardware, and $\Delta S_{reader}$ denotes the light sensor area on the reader. Finally, as we have assumed that the power supplies for both systems are identical, and for the LCD tag $E_{tx}=kCV^2$, where $k$ captures the efficiency of the energy reuse module and $CV^2$ denotes the energy cost per LCD capacitor period that corresponds to transmitting one bit, we have \begin{align} \frac{E_{s1}}{E_{s2}}=\frac{\eta_1 P_0\Delta t\Delta S_{tag}}{\eta_2 kCV^2\Delta S_{reader}} \end{align} Typically, $V=5$~V, $C=2000$~pF, $k\approx 0.4$, $\eta_1\approx 10^{-2}$, $\eta_2\approx 0.8$, $P_0=8$~W, $\Delta S_{reader}=2\times 10^{-5}$ and $\Delta S_{tag}=10^{-3}$. So $\frac{E_{s1}}{E_{s2}}=10^9 \left|\Delta t\right|$. This result shows that if the data rate $1/\Delta t$ is smaller than $10^9$~bps ($=1$~Gbps), then an LCD tag always enables a higher energy per bit at the ViReader\xspace\ receiver than an LED tag.
In typical indoor settings, LEDs are primarily used for lighting, and the upper bound of their flickering rate is orders of magnitude smaller than $1$~GHz; that is to say, our method is always more energy-efficient than the alternative LED tag. \fi \section{Preliminaries} \label{sec:background} \begin{figure*}[!t] \vskip -0.1in \centering {\footnotesize \begin{tabular}{cccc} \epsfig{file=fig/mrr1-eps-converted-to.pdf, width=0.5\columnwidth} & \epsfig{file=fig/mrr2-eps-converted-to.pdf, width=0.5\columnwidth} & \epsfig{file=fig/mrr3-eps-converted-to.pdf, width=0.5\columnwidth} & \epsfig{file=fig/mrr4-eps-converted-to.pdf, width=0.5\columnwidth} \\ {(a) Same Direction} & {(b) Different Directions} & {(c) With LCD on} & {(d) Retro-reflector Principle}\\ \end{tabular} } \vskip -0.1in \caption{\footnotesize{\bf Illustration of the Retro-reflector.}} \label{fig:retrolcd} \vspace{-1em} \end{figure*} In this section, we introduce a few concepts that are used throughout the paper. \paragraph{Visible Light Communication} Visible light consists of electromagnetic waves that span frequencies roughly from $400$ to $800$~THz. Commonly, white LEDs are used for illumination. Their nearly instantaneous on/off switching also turns LEDs into effective transmitters for visible light communication (VLC). Specifically, information bits can be embedded in the light by modulating the on/off state of the LED with on-off keying (OOK) or variable pulse-position modulation (VPPM) \cite{standard}. To receive such signals, while in some cases a cell phone camera or a digital camera will be sufficient \hl{need citations, ipsn14 and mobicom14}, photodiodes are generally used, because phone cameras can only achieve around a 0.25~MHz sampling rate~\cite{camera1} (in other words, a low data rate), whereas photodiodes generally have a much higher bandwidth, up to 0.5~GHz~\cite{pdsheet}. Moreover, compared with a phone camera, the SNR of a photodiode is orders of magnitude higher at the same distance \hl{ipsn14}.
\paragraph{Retro-reflector} A retro-reflector is a device or surface that operates by returning light back to the light source along the incoming direction with a minimum of scattering \hl{cite the wikipage}. Retro-reflectors are widely used on road signs, bicycles, and clothing for road safety. From a microscopic view, a retro-reflector is composed of cells, each of which is a corner cube as shown in \hl{Fig.xx}. When a light beam hits one of the cells, the light is turned around via successive reflections off the adjacent faces. A modulating retro-reflector (MRR) \hl{cite the wikipage} is a system typically consisting of a retro-reflector and a modulator for optical communications. The MRR operates as a passive source that transmits bits by varying the intensity of the reflected light beam. MRRs are widely used in free-space communication where the other end is a laser. Existing MRRs \hl{cite some papers} are usually large, and modulation is commonly achieved with a high-end electroabsorption modulator that alters the absorption spectrum by applying an electric field. Consequently, such a setting is ill-suited for our scenarios, which require a low-cost solution. \paragraph{Liquid-crystal display (LCD)} The LCD is a type of display widely used in portable devices like digital watches and phones. LCDs do not emit light directly; rather, they use the light properties of liquid crystals that are controlled by the voltage applied to them. When the LCD is on, the incoming light is able to pass through the LCD and hit the retro-reflector; when the LCD is off, the incoming light is rejected by the LCD. Therefore, LCDs can be used as modulators for MRRs. However, one disadvantage of LCDs is their low refresh rate, e.g., 60 or 75 Hz, which is too low for data communication. Fortunately, we find \textit{LCD shutters} with a much higher refresh rate (up to 1~kHz \hl{cite}). Fig.~\ref{fig:retrolcd} shows the basic principle of a retro-reflector covered by an LCD.
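The resulting passive link amounts to on-off keying of the reflected light by the LCD shutter; a highly idealized end-to-end sketch follows (noise-free channel, assumed names and bit timing, not the actual ViTag\xspace/ViReader\xspace implementation):

```python
import numpy as np

def lcd_ook_transmit(bits, samples_per_bit=8):
    """Tag side: the LCD shutter passes (1) or blocks (0) the incoming
    light, so the reflected intensity is an on-off keyed waveform."""
    return np.repeat(np.asarray(bits, dtype=float), samples_per_bit)

def photodiode_decode(intensity, samples_per_bit=8, threshold=0.5):
    """Reader side: average the photodiode samples over each bit period
    and threshold to recover the bits (idealized, noise-free channel)."""
    frames = intensity.reshape(-1, samples_per_bit)
    return (frames.mean(axis=1) > threshold).astype(int).tolist()
```

On this idealized channel the decoded bits equal the transmitted bits; the real system must additionally track bit boundaries, which is the subject of Lemma~\ref{lem:lemma1}.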
\section{Requirements and Preliminaries} \label{sec:background} \subsection{Requirements} Our goal is to establish a bi-directional communication link using visible light. As the dual-paradigm nature of VLC over the lighting infrastructure entails that the primary function is illumination and the primary usage scenario is communicating with low-power mobile devices or sensor nodes, we have the following two basic requirements behind this goal. \noindent\emph{\textbf{Requirement 1:}} Establish a low-power, duplex visible light communication link with a battery-free mobile end that harvests light energy from the illumination LED. \noindent\emph{\textbf{Requirement 2:}} Impose no constraints on the actual usage. This implies a practical working range in normal indoor situations, ease of use, and that the size of the device should be small for convenience. To achieve a duplex visible-light link, one possibility is to use a symmetric design, that is, using an LED on the mobile device or sensor node to actively emit signals, which are picked up by a light sensor on the lighting LED. Unfortunately, reaching a practical working distance (with the light typically installed on the ceiling) costs prohibitively much energy on a mobile device of credit-card size. Being electromagnetic radiation in nature, the light energy attenuates quickly as propagation proceeds~\cite{lightwave}. One way to extend the communication range is to use directional signals, ideally a laser. However, that would require manual alignment between the light source and the mobile device. Another possible way towards more affordable power is to leverage the light from the lighting infrastructure, which is usually of high power. This is similar to the design of passive RFID systems, where the tag communicates with the reader by reflecting the incoming radio signal. For instance, we may use a \textit{mirror} to reflect the light from the LED back to a light sensor collocated with the LED.
However, a mirror produces a highly directional reflection. Such a design would also require carefully orienting the mobile device and would violate the practicality requirement. Inspired by free-space laser communication systems~\cite{mrr}, we use a retro-reflector to meet Requirement 2. Below we introduce the retro-reflector and present some favorable properties of retro-reflector materials. \subsection{Preliminaries} \begin{figure}[tb] \centering \subfigure[Corner Cube Illustration]{ \includegraphics[width=0.45\columnwidth]{../illustrations/retroreflector.eps} \label{fig:cornercube} } \hfill \subfigure[Bicycle Retro-reflector]{ \includegraphics[width=0.45\columnwidth]{1280px-BicycleRetroreflectors.jpg} \label{fig:bike-cc} } \caption{\label{fig:retro-reflector}Principle of a retro-reflector and sample products.} \end{figure} \paragraph{Retro-reflector} A retro-reflector is a device or surface that, unlike a mirror, reflects light back to its source along the incoming direction with little scattering~\cite{rr}. A retro-reflector can be produced using a spherical lens, much like the mechanism of a cat's eye. A more feasible way to obtain retro-reflection is to use a corner reflector, which consists of a set of corner cubes, each with three mutually perpendicular reflective surfaces. The principle of such a retro-reflector is shown in \figref{fig:cornercube}. A large yet relatively thin retro-reflector can be made by combining many small corner reflectors, using the standard triangular tiling. Such thin retro-reflectors are widely used on road signs, bicycles, and clothing for safety.
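The corner-cube principle can be verified numerically: reflecting a direction vector $\mathbf{d}$ off a plane with unit normal $\mathbf{n}$ gives $\mathbf{d}-2(\mathbf{d}\cdot\mathbf{n})\mathbf{n}$, and three mutually perpendicular reflections exactly reverse the ray. The following is a geometric sketch, not tied to the fabric measurements:

```python
import numpy as np

def reflect(d, n):
    """Reflect direction vector d off a plane with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

# The three mutually perpendicular faces of an ideal corner cube.
normals = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0]),
           np.array([0.0, 0.0, 1.0])]

d = np.array([0.3, -0.5, 0.8])
d /= np.linalg.norm(d)
out = d.copy()
for n in normals:
    out = reflect(out, n)
# Each reflection negates one component, so out == -d: the ray is reversed
# regardless of the incidence direction.
```

This direction reversal holds for any incoming ray that hits all three faces, which is why the fabric appears bright only from the light source's direction.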
\figref{fig:bike-cc} shows a flattened retro-reflector as seen on bicycles. \begin{figure}[th] \begin{center} \subfigure[Flash at $90{\,^{\circ}}\xspace$, Camera at $90{\,^{\circ}}\xspace$]{ \includegraphics[width=0.46\columnwidth]{tx90-rx90.jpg}\label{fig:mrr4} } \hfill \subfigure[Flash at $45{\,^{\circ}}\xspace$, Camera at $90{\,^{\circ}}\xspace$]{ \includegraphics[width=0.45\columnwidth]{tx45-rx90.jpg}\label{fig:mrr3} } \\ \subfigure[Flash at $45{\,^{\circ}}\xspace$, Camera at $45{\,^{\circ}}\xspace$]{ \includegraphics[width=0.45\columnwidth]{tx45-rx45.jpg}\label{fig:mrr1} } \hfill \hspace{-1ex} \subfigure[Reflection dispersion]{ \includegraphics[width=0.5\columnwidth]{rr-angle.png} \label{fig:rr-angle} } \caption{Optical reflection properties of a retro-reflector fabric (Scotchlite from 3M) (right), compared with a white paper (left) and a planar mirror (middle).}\label{fig:rr-reflection} \end{center} \end{figure} We conduct experiments to measure the reflecting properties of our retro-reflector fabric (Scotchlite from 3M~\cite{rrsheet}). We compare it against a plain white paper, which exhibits diffuse reflection, and a planar mirror, which exhibits mirror reflection. We place the three materials side by side and let the light source (a flashlight) emit light at different angles while keeping the same distance to the materials.
We use a camera to capture the reflected light. \figref{fig:rr-reflection} shows the resulting images from experiments conducted in a dark chamber. In the figures, we can see that the retro-reflector fabric is bright as long as the light source and the camera are along the same direction, be it $45{\,^{\circ}}\xspace$ or $90{\,^{\circ}}\xspace$, whereas the mirror is bright only when both the camera and the flash are at $90{\,^{\circ}}\xspace$. In case (b), where the flash and the camera are at different angles, the images of the mirror and the retro-reflector are dark. On the contrary, the white paper is always slightly lit regardless of the flash and camera positions, because of its diffuse reflection. We notice that the brightness of the retro-reflector tends to be weaker than that of the mirror but more uniform. This is because the retro-reflector fabric is not perfect. Further measurements show that this dispersion becomes severe when the incidence angle exceeds $\pm20{\,^{\circ}}\xspace$, as shown in \figref{fig:rr-angle}. The ability to bounce back light from any incidence angle leads to a favorable property of a retro-reflector: when the light source emits omni-directional light, the retro-reflector concentrates the light as it reflects it. This is illustrated in \figref{fig:retro1}. From experiments, we empirically found that the concentrated energy is directly proportional to the area of the retro-reflector fabric, as shown in \figref{fig:retro2}. This property enables us to achieve a higher reflected signal strength by using a larger retro-reflector. \begin{figure}[!th] \begin{center} \subfigure[Energy Concentration]{ \includegraphics[width=0.46\columnwidth]{retro-reflector.pdf}\label{fig:retro1} } \hfill \subfigure[Reflected Energy vs. Area]{ \includegraphics[width=0.46\columnwidth]{retro_size.png}\label{fig:retro2} } \caption{Energy concentrating property of a retro-reflector when the light source emits omni-directional light, and the relationship between reflected energy and the retro-reflector size.
}\label{fig:retro} \end{center} \end{figure} \paragraph{Liquid Crystal Display (LCD)} In terms of embedding information bits on the reflected light, special retro-reflectors can alter the amplitude by electronically controlling the reflection or absorption using, for example, MEMS technologies \cite{expensive,expensive2}. In our case, we hope to use ordinary, off-the-shelf retro-reflector fabrics. In order to modulate the light reflected by such fabric, we resort to a liquid crystal display (LCD), which can pass or block light under the control of an electric field. \begin{figure}[th] \begin{center} \subfigure[LCD Principle \cite{eavesdrop2}]{ \includegraphics[width=0.48\columnwidth]{lcdworks-crop.png}\label{fig:lcdworks} } \hfill \subfigure[LCD Driver]{ \includegraphics[width=0.46\columnwidth]{../illustrations/conventional_LCD_driver.eps}\label{fig:lcd-circuits} } \caption{The structure and principle of LCD, and its typical driving circuits. }\label{fig:lcd} \end{center} \end{figure} An LCD is a multi-layer sandwich structure. At the two far ends of the LCD panel are two polarizer films; the two polarizers can be parallel or perpendicular to each other. In the middle are two glass electrodes that encompass a layer of nematic-phase liquid crystal, as shown in \figref{fig:lcdworks}. An LCD works as follows. When the incoming light passes through the first polarizer, it becomes polarized. Depending on the state of the liquid crystal, the polarity of the light is then either changed or left unchanged. In the natural state of the liquid crystal, its molecules are twisted, and they rotate the polarity of the light passing through. If an electric field is imposed (by the two surrounding glass electrodes) on the liquid crystal, its molecules become untwisted, and the polarity of the light is unaffected as it passes through. The light is finally passed or blocked by the second polarizer on the other end, depending on whether its polarity conforms to that polarizer \cite{eavesdrop2}.
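The pass/block behavior described above reduces to a simple truth table. The sketch below is a hypothetical logical model, not the authors' circuit; it assumes crossed polarizers (a normally-white twisted-nematic cell), which matches the low-voltage-transparent behavior used in this design:

```python
def lcd_passes_light(voltage_applied: bool, polarizers_crossed: bool = True) -> bool:
    """Logical model of a twisted-nematic LCD shutter.

    With no voltage, the twisted molecules rotate the light's polarity by 90 degrees;
    with voltage applied, the molecules untwist and the polarity is unchanged.
    Light exits only if its final polarity matches the second polarizer.
    """
    rotated = not voltage_applied        # twisted (relaxed) state rotates polarization
    if polarizers_crossed:
        return rotated                   # crossed polarizers: only rotated light passes
    return not rotated                   # parallel polarizers: only unrotated light passes

# Normally-white cell: transparent at low voltage, opaque at high voltage.
assert lcd_passes_light(voltage_applied=False) is True
assert lcd_passes_light(voltage_applied=True) is False
```

With parallel polarizers the table inverts, giving a normally-black shutter instead.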
At a high level, the LCD's effect on polarization changes with the applied voltage: at a low voltage, the incoming light traverses the LCD, hits the retro-reflector, and the reflected light traverses the LCD again on its way back; at a high voltage, the incoming light is blocked by the LCD. \figref{fig:lcd-circuits} shows a typical driving circuit for charging and discharging an LCD; we use it to toggle the LCD shutter on and off. \section{Conclusion} In this paper, we have presented a bi-directional VLC system called Retro-VLC\xspace that consists of a modified LED and a tag device. The tag can run battery-free by harvesting light energy with solar cells. The ViTag\xspace transmits by reflecting and modulating incoming light back to the LED using a retro-reflector and an LCD modulator.
The system overcomes the power consumption challenge on the ViTag\xspace and the interference and clock offsets on the LED end, achieving a $10kbps$ downlink rate and a $0.5kbps$ uplink rate over a distance of up to $2.4m$. The system also shows security advantages, preventing nearby receivers from overhearing the uplink data. We believe Retro-VLC\xspace has wide application scenarios. \section{Retro-VLC\xspace System Design}\label{design} In this section, we describe our design of Retro-VLC\xspace in more detail. The system consists of a ViReader\xspace and a ViTag\xspace, each of which contains transmitting and receiving logic. We elaborate on their designs one by one, starting with the transmitter of the ViReader\xspace. Its detailed diagram is shown in \figref{fig:diagram_reader}. \subsection{ViReader-Tx\xspace Design} The ViReader-Tx\xspace employs a standard VLC design as in other work: it performs encoding using an MCU, which controls the power amplifier to toggle the LED light. Specifically, we employ a 1MHz carrier and perform on-off keying (OOK) with Manchester coding. The communication bandwidth we use is 10kHz. Note that we could use an even higher carrier frequency and a larger communication bandwidth; we made this choice due to the limitations of the ordinary commercial off-the-shelf LEDs we have. If we toggle at a faster rate, the amplitude difference between the On and Off states becomes too small to serve as an effective carrier. We use a 10kHz bandwidth as it suffices for the applications we have in mind, e.g., sending back the tag ID and certain sensor information it may carry. \begin{figure}[!th] \centering \includegraphics[width=\columnwidth]{fig/read_diagram.pdf} \vspace{-1em} \caption{Circuit diagram of ViReader\xspace. } \label{fig:diagram_reader} \end{figure} \subsection{ViReader-Rx\xspace Design}\label{ssec:readerrx} The major challenges that arise in the design of the ViReader-Rx\xspace are the following.
First of all, the signal from the ViTag\xspace reflection is extremely weak, especially due to the small retro-reflector on the ViTag\xspace. Second, the signal suffers severe interference from other light and electrical sources. In particular, as the light sensor sits next to the LED, there is likely leakage from the downlink signal and carrier, in addition to the diffuse reflections from ambient sources. Because of the close distance, the interference is several orders of magnitude stronger than the actual reflected signal from the ViTag\xspace. As measured in one implementation with a 12W LED lamp, the power of the ViTag\xspace-reflected signal is about $-80dBm$ at 1.5 meters, while the ViReader-Tx\xspace emitted light signal can be up to $30dBm$. In fact, this interference could saturate the ViReader-Rx\xspace amplifiers without careful design. In practice, light reflected off moving humans and other nearby objects also causes such interference. Third, the converted electrical signal also suffers interference from commercial AM radio broadcasts that operate around 1MHz. The harmonics of the $50$--$60$Hz AC supply of the lighting infrastructure also matter, as they fall near the toggling rate (0.5kHz) of our LCD modulator. Last but not least, our choice of a small, low-frequency RC oscillator at the ViTag\xspace, instead of a high-precision oscillator (for the sake of reducing energy consumption), makes the reflected signal suffer from clock offsets and drifts. In our design, we first isolate the receiving path, both the circuit and the light sensor, from the transmitting path. In practice, we use a 4-layer PCB and ensure the wires are covered by two copper layers connected to ground. We also shield the light sensor to avoid leakage of the downlink signals. In the rest of this section, we elaborate the modular and algorithmic designs of ViReader-Rx\xspace that overcome these challenges.
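To put the dynamic-range challenge in perspective, a quick conversion of the figures above ($30dBm$ emitted vs.\ $-80dBm$ reflected) shows the leakage sits roughly $110dB$, i.e., eleven orders of magnitude, above the wanted signal:

```python
def dbm_to_mw(dbm: float) -> float:
    """Convert power in dBm to milliwatts: P_mW = 10^(dBm/10)."""
    return 10 ** (dbm / 10)

emitted_dbm = 30.0      # ViReader-Tx emitted light signal (measured above)
reflected_dbm = -80.0   # ViTag-reflected signal at 1.5 m (measured above)

gap_db = emitted_dbm - reflected_dbm                       # 110 dB
ratio = dbm_to_mw(emitted_dbm) / dbm_to_mw(reflected_dbm)  # linear power ratio

print(gap_db)   # 110.0
print(ratio)    # ~1e11: the local leakage is ~11 orders of magnitude stronger
```

This is why the receive chain must combine shielding, differential amplification, and band-pass filtering rather than rely on gain alone.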
\paragraph{Amplification and Demodulation} As shown in \figref{fig:diagram_reader}, an external light sensor with a parallel inductor captures the ViTag\xspace signal and performs preliminary band-pass filtering. The photocurrent is then amplified by a preamplifier and passed on to the internal (\textit{i.e.}\xspace on the ViReader\xspace board hosted within the lamp) amplifier and processing circuit. An impedance matching module is incorporated. The pair of transmission lines is relatively long, decoupling the front end from the subsequent processing unit. As the two wires are equally affected by common-mode noise, we design a tuned differential amplifier as the first-stage internal amplifier. By subtracting the signals on the two wires, the differential amplifier effectively eliminates the common-mode noise. It further suppresses other off-band noise through LC resonance at the 1MHz carrier frequency. As the reflected signal from the ViTag\xspace is extremely weak, we further amplify it through two additional LC-structured amplifiers. The overall amplification gain is $80dB$. The signal then goes through a high-precision envelope detector to recover the baseband signal from the carrier. Finally, the baseband signal is amplified and fed to the MCU, which performs analog-to-digital conversion and decoding. Note that the gain of the differential amplifier is programmable and controlled by the micro-controller. We also use two-stage amplifiers (instead of a single stage with a very large gain), both with feedback mechanisms. These mechanisms help keep the circuit away from self-excitation.
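The envelope-detection step can be illustrated numerically: rectify the amplified 1MHz OOK carrier and smooth it over a window spanning a few carrier cycles but far less than a baseband symbol. A simplified sketch (NumPy; the sample rate and window length are illustrative choices, not the hardware's):

```python
import numpy as np

fs = 20_000_000            # 20 MHz sample rate (illustrative)
f_carrier = 1_000_000      # 1 MHz carrier, as used by ViReader-Tx
t = np.arange(0, 2e-3, 1 / fs)

# OOK baseband: alternating on/off symbols of 0.5 ms (within the 10 kHz bandwidth)
baseband = (np.floor(t / 5e-4) % 2 == 0).astype(float)
received = baseband * np.sin(2 * np.pi * f_carrier * t)

# Envelope detector: full-wave rectification + moving-average smoothing
# over ~5 carrier cycles (100 samples here).
window = fs // f_carrier * 5
envelope = np.convolve(np.abs(received), np.ones(window) / window, mode="same")

# The recovered envelope is high during 'on' symbols and near zero during 'off'.
on_level = envelope[baseband > 0.5].mean()
off_level = envelope[baseband < 0.5].mean()
assert on_level > 10 * off_level
```

The hardware realizes the same rectify-and-smooth operation with a diode and a capacitor rather than in software.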
\begin{figure}[!th] \begin{center} \subfigure[Normal]{ \includegraphics[width=0.45\columnwidth]{../illustrations/waveform1.eps}\label{fig:waveform1} } \hfill \subfigure[Top-truncated]{ \includegraphics[width=0.45\columnwidth]{../illustrations/waveform2.eps}\label{fig:waveform2} } \\ \vspace{-1em} \subfigure[Bottom-truncated]{ \includegraphics[width=0.45\columnwidth]{../illustrations/waveform3.eps}\label{fig:waveform3} } \hfill \subfigure[Average-drifted]{ \includegraphics[width=0.45\columnwidth]{../illustrations/waveform5.eps}\label{fig:waveform5} } \vspace{-1em} \caption{Possible waveform patterns after the baseband amplifier of ViReader-Rx\xspace. }\label{fig:dynamicRange} \end{center} \vspace{-1em} \end{figure} \paragraph{Decoding and Handling Clock Drift} The clock offset and drift caused by the RC clock of the ViTag\xspace bring challenges, as we must extract the timing information from the signal and perform the decoding at the same time. There are several common decoding methods. The first is based on peak (or edge) detection: it extracts the extreme (discontinuous) points in the signal to detect clock beats. The second is an averaging-based algorithm, in which signal samples are averaged to generate a threshold; samples above this threshold denote ones and those below denote zeros. The third is a symbol-based match filter that matches the waveform of one symbol and detects the convolution peaks to determine the accurate timing. \begin{figure}[tb!] \centering \includegraphics[width=\columnwidth]{../figures/slidingWindow.eps} \vskip -0.05in \caption{All possible 3-bit patterns (left) and illustration of their actual voltage levels (middle), and corresponding matching templates (right) for edge detection. } \label{fig:swmsmf} \end{figure} However, none of these methods works for us. Take for example the normal signal input waveform shown in \figref{fig:dynamicRange}(a).
Due to the possible lag of the automatic gain control at the ViReader-Rx\xspace and the high dynamic range of the interference, we may obtain top- or bottom-truncated waveforms, as shown in \figref{fig:dynamicRange}(b) and (c), or a waveform truncated at both ends (not shown due to space limits). Such situations defeat peak/edge-detection algorithms. Similarly, ambient brightness changes (e.g., caused by reflection off a human body) will likely cause a time-varying shift in the average value, as shown in \figref{fig:dynamicRange}(d). This defeats the averaging-based approach. Furthermore, due to Manchester coding, one bit contains two chips -- a high-voltage chip followed by a low-voltage chip, or vice versa, indicating a `0' and a `1', respectively. Except in an all-`0' or all-`1' sequence, the rising and falling edges are not evenly spaced in time. This results in two (and at most two) consecutive low (or high) voltage chips for the bit sequence `01' (or `10'). A low-voltage chip corresponds to the LCD discharging phase at the ViTag\xspace; consecutive low chips thus correspond to continuous discharging of the LCD. As a result, the second low chip has a lower voltage than the first one. Similarly, in two consecutive high-voltage chips, the second chip has a higher voltage than the first. The consecutive low or high voltage chips and their corresponding voltage levels are depicted in \figref{fig:swmsmf}. In the face of these distortions, the single-symbol match filter fails, because the correlation peak is skewed for the unevenly spaced high/low voltage chips. We develop a novel algorithm, termed the \textit{sliding-window multi-symbol match filter}, to decode Manchester-coded signals subject to a huge dynamic range.
The basic idea is to avoid the biased timing caused by skewed correlation peaks in the conventional symbol-based match filter, by matching \textit{all possible patterns} of the waveform that may result from Manchester encoding, and iteratively adjusting the local clock every bit period. To begin with, the algorithm exploits standard correlation to detect the preamble of a packet. In our design, preambles last for a period equivalent to 3 Manchester chips. Upon finding the preamble, the algorithm estimates the length of a bit using the nominal ViTag\xspace clock, which is known but subject to offset and drift. Then the algorithm iteratively performs the following two steps: \\[4pt] \noindent\textit{Step 1: Template Matching.} For all the samples within the three-bit span, we match them against the corresponding template, as shown in \figref{fig:swmsmf}. Note that the template has amplitude $\pm1$, so as to most precisely detect the rising or falling edges of the bit in the middle. Ideally, the right correlation yields a peak in the middle of the three bits. \\[4pt] \noindent\textit{Step 2: Local Time Recovery.} Due to the time variance with respect to the start of the packet and the frequency deviation between the clocks of the ViReader\xspace and the ViTag\xspace, the correlation peak does not necessarily align with the actual timing of the edge. This yields errors in clock estimation. To bound this error as the decoding proceeds, we perform linear regression $t=k\cdot s+s_0$ to estimate the central time of every three bits, where $s_0$ is the initial time estimate after preamble detection, $s$ and $t$ are the central times from the ViReader\xspace's view and from the ViTag\xspace's view, respectively, and $k$ denotes the clock sampling ratio of the ViTag\xspace over the ViReader\xspace. In every round, $k$ is re-estimated, and the algorithm moves the three-bit window one bit forward.
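The template-matching step can be sketched in a few lines. The following is a simplified Python model, not the authors' implementation: it assumes ideal $\pm1$ chip waveforms and a fixed, hypothetical chip length, and it shows only the all-patterns matching of Step 1 (the amplitude skew and the regression-based time recovery of Step 2 are omitted):

```python
import numpy as np

CHIP = 20  # samples per Manchester chip (illustrative choice)

def manchester(bits):
    """Expand bits into +/-1 chip samples: 0 -> (high, low), 1 -> (low, high)."""
    chips = []
    for b in bits:
        chips += [1, -1] if b == 0 else [-1, 1]
    return np.repeat(chips, CHIP).astype(float)

# Templates for all 8 possible 3-bit patterns (cf. fig:swmsmf).
patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
templates = {p: manchester(p) for p in patterns}

def decode_middle_bit(window):
    """Correlate a 3-bit window against every template and return the
    middle bit of the best-matching pattern."""
    best = max(patterns, key=lambda p: np.dot(window, templates[p]))
    return best[1]

# Demo: decode a known bit sequence one sliding window (3 bits) at a time.
rng = np.random.default_rng(0)
bits = [0, 1, 1, 0, 1, 0, 0, 1]
signal = manchester(bits) + rng.normal(0, 0.3, size=len(bits) * 2 * CHIP)

decoded = [decode_middle_bit(signal[(i - 1) * 2 * CHIP:(i + 2) * 2 * CHIP])
           for i in range(1, len(bits) - 1)]
assert decoded == bits[1:-1]
```

Matching three bits at a time, rather than one symbol, is what keeps the unevenly spaced edges of `01`/`10` sequences from skewing the timing estimate.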
The whole procedure is repeated until reaching the end of the packet. This time recovery algorithm keeps the error on $t$ from diverging as the decoding process proceeds. We formalize this in the following lemma, for which we give a proof in the Appendix. \begin{lemma} The time recovery algorithm ensures that the error of the ViTag\xspace clock estimation converges to zero asymptotically if a packet contains an infinite number of bits. \label{lem:lemma1} \end{lemma} \begin{figure}[!th] \centering \includegraphics[width=\columnwidth]{fig/tag_diagram.pdf} \vspace{-1em} \caption{Circuit diagram of ViTag\xspace.} \label{fig:diagram_tag} \end{figure} \subsection{ViTag-Rx\xspace Design} In a straightforward design of the ViTag-Rx\xspace, the major energy consumption would come from the ADC and digital signal processing. In our design, we perform most of the processing in the analog domain and avoid using an ADC while retaining accuracy. \paragraph{Demodulation} As shown in \figref{fig:diagram_tag}, the incoming light is first captured by the light sensor. The light sensor has an equivalent capacitance, which, together with a parallel inductor, forms a preliminary LC filter. Two triode amplifiers successively amplify the RF signal, after which the signal is passed along for demodulation. Our demodulator contains a constant voltage source and a low-pass amplifier. The constant voltage source sets an ultra-low quiescent current that flows into the base of the triode in the low-pass amplifier, making it work in a critical conduction mode, so that the positive half of the signal passes through and is amplified while the negative half only drives the triode into cut-off. Hence, the 1MHz AC carrier is turned into a unipolar signal with a low-frequency DC bias that represents its envelope. Finally, the envelope signal is obtained by a smoothing capacitor and then fed into a comparator for digitization.
\paragraph{Digitization and Decoding} To achieve better energy efficiency, power-hungry ADCs should be avoided and the MCU should not always be awake. We digitize the analog signal with a comparator. Most of the time, the CPU of the MCU sleeps, with only one timer running to keep time. When a positive jump appears at TP4 in \figref{fig:diagram_tag} (i.e., the output of the comparator), the CPU wakes up, records the timer's time stamp, and halts again. Together with the previous wake-up time stamp, we obtain the period of the clock cycle at the comparator output. To align with this working pattern, we adopt clock-period coding. For example, a 185us clock period denotes 00, 195us denotes 01, 205us denotes 10, and 215us denotes 11. We thus receive 2 bits with the MCU woken up just once. This lets the MCU sleep most of the time; upon waking up, it records the time stamp, determines the received bits, and goes back into sleep mode. In our implementation, with an MSP430 MCU running at 1MHz, these routines complete within $16us$. When there is no carrier, or no data on the carrier, the MCU sleeps the whole time. When the tag is receiving data, it still sleeps most of the time, working only 16us in each 200us receiving cycle. In summary, we achieve low-energy reception at the ViTag\xspace by using only analog elements and a low-power MCU (MSP430). The MCU is in sleep mode most of the time. In the analog circuit design, we further set the transistors to work at a lower DC operating point to minimize energy consumption. \subsection{ViTag-Tx\xspace Design}\label{subsec:tagtrans} Our ViTag\xspace transmitter transmits by passively backscattering the incoming light. The core of the transmitter is the combination of an LCD and a retro-reflector that serves as a modulator.
While the LCD has an ultra-low quiescent current, more than $70\%$ of the power consumption during transmission is caused by the LCD. The reason is that the LCD has a considerable equivalent capacitance ($\sim 9nF$), which must be charged to $5.5V$ to turn the LCD off and discharged to turn it on. It is this charging-discharging process that consumes energy. To conserve energy, we design an energy reuse module that recycles the discharging current. The LCD requires a sufficiently high driving voltage (\textit{e.g.}\xspace at least $5.5V$) to achieve the desired polarization level. This voltage is nearly 3 times the solar cell's voltage and cannot be supplied by the solar cell directly. We design a voltage boosting module to bridge this gap. The overall design of the ViTag\xspace transmitter is presented in \figref{fig:diagram_tag}. \paragraph{Energy Reuse} A conventional LCD driving circuit discharges the LCD from the high driving voltage to ground and thus wastes the energy. The design of our energy reuse module is depicted in Fig.~\ref{fig:energyreuse}. \begin{figure}[tb] \centering { \epsfig{file=../illustrations/EnergyReuseCircuits_2.eps, width=\columnwidth} } \vspace{-1ex} \caption{Energy reuse design for LCD driver. } \label{fig:energyreuse} \end{figure} During the charging phase, the DC/DC converter boosts the voltage supplied by the solar cell to the high voltage needed to drive the LCD towards the blocking state. The MCU turns this high voltage on and activates the transistor $Q_0$ (the PNP transistor in the conventional LCD driver module). This operation puts the LCD into charging mode and pumps up its voltage. In the discharging phase, the MCU sets $Q_0$ to the cut-off state, thus closing the charging path, and activates the NPN transistor $Q_1$ on the discharging path. Unlike a conventional LCD driver that discharges directly to ground, in our design the current flows back to the input of the DC/DC converter.
This helps reduce the current drawn from the solar cell. Measurements show that the total power consumed by LCD while switching at $0.5kHz$ decreases from $84uA$ to $46uA$ with energy reuse. We note two things about our design. First, the two signals controlling the on/off state of LCD are generated by an MCU, and is alternately activated with a short interval ($2us$). This ensures only one transistor of $Q_0$ and $Q_1$ is open at a time and avoids the transient high current that would be caused if both semi-conductive transistors are activated during the switching. Second, the diode $D_0$ is critical. It prevents the charge on the LCD from discharging to the solar cell. Without it, the initial high voltage ($5.5V$) of LCD will be pull down to that of the solar cell ($2.1V$) immediately after $Q_1$ is ON, a high transient current would result and most energy would be wasted on the BE junction of $Q_1$. \section{Retro-VLC\xspace System Design}\label{design} In this section, we describe our design of Retro-VLC\xspace in more detail. Retro-VLC\xspace consists of a ViReader\xspace and a ViTag\xspace, each of which contains the transmitting and receiving logic. We elaborate their design one by one, starting with the transmitter of the ViReader\xspace. Its detailed diagram is shown in \figref{fig:diagram_reader}. \subsection{ViReader-Tx\xspace Design} The ViReader-Tx\xspace employs a standard VLC design as in other work: it performs encoding using an MCU and toggles the LED light to control the power amplifier. Specifically, we employ a 1MHz carrier and perform on-off keying (OOK) and Manchester coding. The communication bandwidth we use is 10kHz. Note that we may use even higher carrier frequency and larger communication bandwidth. We made the choice due to the limitation of ordinary commercial off-the-shelf LED we have. If we toggle at a faster rate, the amplitude difference between On and Off state will be too small to serve as an effective carrier. 
We use 10kHz bandwidth as it suffices for the applications we have in mind, e.g., sending back the tag ID and any sensor information the tag may carry. \begin{figure}[!th] \centering \includegraphics[width=\columnwidth]{read_diagram.pdf} \vspace{-1em} \caption{Circuit diagram of ViReader\xspace. } \label{fig:diagram_reader} \end{figure} \subsection{ViReader-Rx\xspace Design}\label{ssec:readerrx} The major challenges that arise in the design of the ViReader-Rx\xspace are the following. First of all, the signal from the ViTag\xspace reflection is extremely weak, especially due to the small retro-reflector on the ViTag\xspace. Second, the signal is severely interfered with by other light and electrical sources. In particular, as the light sensor sits next to the LED, there is likely leakage from the downlink signals and carrier, in addition to the diffuse reflections from ambient sources. Because of the close distance, this interference is several orders of magnitude greater than the actual reflected signal from the ViTag\xspace. As measured \fyi{in one implementation with a 12W LED lamp}, the power of the ViTag\xspace-reflected signal is about $-80dBm$ \fyi{at 1.5 meters}, while the ViReader-Tx\xspace emitted light signal can be up to $30dBm$. In fact, this interference could cause the ViReader-Rx\xspace amplifiers to saturate without careful design. In practice, the light reflected by moving humans and other nearby objects also causes such interference. Third, the converted electrical signal is also interfered with by commercial AM radio broadcasts that operate around 1MHz. The harmonics of the $50$-$60$Hz AC supply of the lighting infrastructure also matter, as they fall near the toggling rate (0.5kHz) of our LCD modulator.
Last but not least, our choice of a small, low-frequency RC oscillator at the ViTag\xspace instead of a high-precision oscillator (to reduce energy consumption) makes the reflected signal suffer from clock offsets and drift. In our design, we first isolate the receiving path, both the circuit and the light sensor, from the transmitting path. In practice, we use a 4-layer PCB and always ensure the wires are covered by two copper layers connected to ground. We also shield the light sensor to avoid leakage of the downlink signals. In the rest of this section, we elaborate the modular and algorithmic designs of ViReader-Rx\xspace that overcome these challenges. \paragraph{Amplification and Demodulation} As shown in \figref{fig:diagram_reader}, an external light sensor with a parallel inductor captures the ViTag\xspace signal and performs preliminary band-pass filtering. The photocurrent is then amplified by a subsequent preamplifier and further transmitted to the internal (\textit{i.e.}\xspace on the ViReader\xspace board hosted within the lamp) amplifier and processing circuit. An impedance matching module is incorporated. The pair of transmission lines is relatively long, decoupling the front end from the subsequent processing unit. As the two wires are equally affected by common-mode noise, we design a tuned differential amplifier as the first-stage internal amplifier. By subtracting the signals of the two wires, the differential amplifier effectively eliminates the common-mode noise. It further suppresses other off-band noise through LC resonance at the 1MHz carrier frequency. As the reflected signal from the ViTag\xspace\ is extremely weak, we further amplify it through two additional LC-structured amplifiers. The overall amplification gain is $80dB$. The signal then goes through a high-precision envelope detector to pick up the baseband signal from the carrier.
Finally, the baseband signal is amplified and fed to the MCU, which performs analog-to-digital conversion and decoding therein. Note that the gain of the differential amplifier is programmable and controlled by the micro-controller. We also use two amplifier stages (instead of a single-stage amplifier with a very large gain), both with feedback mechanisms. These mechanisms help keep the circuit away from self-excitation. \begin{figure}[!th] \begin{center} \subfigure[Normal]{ \includegraphics[width=0.45\columnwidth]{waveform1-eps-converted-to.pdf}\label{fig:waveform1} } \hfill \subfigure[Top-truncated]{ \includegraphics[width=0.45\columnwidth]{waveform2-eps-converted-to.pdf}\label{fig:waveform2} } \\ \vspace{-1em} \subfigure[Bottom-truncated]{ \includegraphics[width=0.45\columnwidth]{waveform3-eps-converted-to.pdf}\label{fig:waveform3} } \hfill \subfigure[Average-drifted]{ \includegraphics[width=0.45\columnwidth]{waveform5-eps-converted-to.pdf}\label{fig:waveform5} } \vspace{-1em} \caption{Possible waveform patterns after the baseband amplifier of ViReader-Rx\xspace. }\label{fig:dynamicRange} \end{center} \vspace{-1em} \end{figure} \paragraph{Decoding and Handling Clock Drift} The clock offset and drift caused by the RC clock of the ViTag\xspace bring challenges, as we must extract the timing information from the signal and perform the decoding at the same time. There are several common decoding methods. One method is based on peak (or edge) detection. Its principle is to extract the extreme (discontinuous) points in the signal to detect clock beats. A second approach is an averaging-based algorithm, in which signal samples are averaged to generate a threshold; samples above this threshold denote ones and samples below denote zeros. A third approach is a symbol-based match filter that matches the waveform of one symbol and detects the correlation peaks to determine the accurate timing. \begin{figure}[tb!]
\centering \includegraphics[width=\columnwidth]{slidingWindow-eps-converted-to.pdf} \vskip -0.05in \caption{All possible 3-bit patterns (left), an illustration of their actual voltage levels (middle), and the corresponding matching templates (right) for edge detection. } \label{fig:swmsmf} \end{figure} However, none of these methods works for us. Take for example the normal signal input waveform shown in \figref{fig:dynamicRange}(a). Due to the possible lag of the automatic gain control at the ViReader-Rx\xspace and the high dynamic range of the interference, we may obtain top- or bottom-truncated waveforms, as shown in \figref{fig:dynamicRange}(b) and (c), or a waveform truncated at both the top and the bottom (not shown due to space limit). Such situations defeat peak/edge detection algorithms. Similarly, ambient brightness changes (e.g., caused by human body reflection) will likely cause a time-varying shift in the average value, as shown in \figref{fig:dynamicRange}(d). This defeats the averaging-based approach. Furthermore, due to Manchester coding, one bit contains two chips -- a high-voltage chip followed by a low-voltage chip, or vice versa, indicating a `0' and a `1', respectively. Other than in an all-`0' or all-`1' sequence, the rising edges and the falling edges are not evenly spaced in time. This results in two (and at most two) consecutive low (or high) voltage chips for the bit sequence `01' (or `10'). A low-voltage chip corresponds to the LCD discharging phase at the ViTag\xspace; consecutive low chips thus correspond to continuous discharging of the LCD. As a result, the second low chip has a lower voltage than the first one. Similarly, in two consecutive high-voltage chips, the second high chip has a higher voltage than the first one. The consecutive low or high voltage chips and their corresponding voltage levels are depicted in \figref{fig:swmsmf}.
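The chip-level structure just described can be reproduced with a short sketch (our own illustrative code, not the reader firmware): bit `0' expands to a high chip followed by a low chip and bit `1' to the reverse, so a Manchester stream contains runs of at most two equal chips, exactly the `01'/`10' case discussed above.

```python
# Illustrative sketch only (not the paper's firmware): Manchester chip
# expansion as used on the uplink.  Bit '0' -> high chip then low chip,
# bit '1' -> low chip then high chip, as described in the text.

def manchester_chips(bits: str) -> list[int]:
    """Expand a bit string into Manchester chips (1 = high, 0 = low)."""
    table = {"0": (1, 0), "1": (0, 1)}
    return [c for b in bits for c in table[b]]

def max_run(chips: list[int]) -> int:
    """Length of the longest run of identical chips."""
    longest = run = 1
    for prev, cur in zip(chips, chips[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest
```

For instance, `manchester_chips("01")` returns `[1, 0, 0, 1]`: the two middle low chips are the consecutive discharging chips whose voltages sag, which is why a single-symbol template cannot be matched reliably.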
In the face of these distortions, the single-symbol match filter method also fails, because the correlation peak is skewed by the unevenly spaced high/low voltage chips. We develop a novel algorithm, termed the \textit{sliding-window multi-symbol match filter} algorithm, to decode Manchester-coded signals that are subject to a huge dynamic range. The basic idea is to avoid the biased timing caused by skewed correlation peaks in the conventional symbol-based match filter method by matching \textit{all possible patterns} of the waveform that may result from Manchester encoding, and iteratively adjusting the local clock every bit period. To begin with, the algorithm exploits standard correlation to detect the preamble of a packet. In our design, a preamble lasts for a period equivalent to 3 Manchester chips. Upon finding the preamble, the algorithm estimates the length of a bit using knowledge of the ViTag\xspace clock, which is nominally known but subject to offsets and drift. Then the algorithm iteratively performs the following two steps: \\[4pt] \noindent\textit{Step 1: Template Matching.} For all the samples within the three-bit span, we match them against the corresponding template, as shown in \figref{fig:swmsmf}. Note that the template has an amplitude of $\pm1$, so as to most precisely detect the rising or falling edge of the bit in the middle. Ideally, the correct template yields a correlation peak in the middle of the three bits. \\[4pt] \noindent\textit{Step 2: Local Time Recovery.} Due to the timing uncertainty at the start of the packet and the frequency deviation between the clocks of the ViReader\xspace and the ViTag\xspace, the peak correlation does not necessarily align with the actual timing of the edge. This yields errors in clock estimation.
To bound this error as the decoding goes, we perform a linear regression $t=k\cdot s+s_0$ to estimate the central time of every three bits, where $s_0$ is the initial time estimate after preamble detection, $s$ and $t$ are the central times from the ViReader\xspace's view and from the ViTag\xspace's view, respectively, and $k$ denotes the clock sampling ratio of the ViTag\xspace\ over the ViReader\xspace. In every round, $k$ is re-estimated, and the algorithm then moves the three-bit window one bit forward. The whole procedure is repeated until the end of the packet is reached. This time recovery algorithm keeps the error on $t$ from diverging as the decoding process proceeds. We formally state this in the following lemma, which we prove in the Appendix. \begin{lemma} The time recovery algorithm ensures that the error of the ViTag\xspace clock estimate converges to zero asymptotically as the number of bits in a packet goes to infinity. \label{lem:lemma1} \end{lemma} \begin{figure}[!th] \centering \includegraphics[width=\columnwidth]{tag_diagram.pdf} \vspace{-1em} \caption{Circuit diagram of ViTag\xspace.} \label{fig:diagram_tag} \end{figure} \subsection{ViTag-Rx\xspace Design} In a conventional ViTag-Rx\xspace design, the major energy consumers would be the ADC and digital signal processing. In our design, we perform most of the processing in the analog domain and avoid using the ADC while retaining accuracy. \paragraph{Demodulation} As shown in \figref{fig:diagram_tag}, the incoming light is first captured by the light sensor. The light sensor has an equivalent capacitance, which, together with an inductor parallel to it, makes up a preliminary LC filter. Two triode amplifiers successively amplify the RF signal, after which the signal is passed along for demodulation. Our demodulator contains a constant voltage source and a low-pass amplifier.
The constant voltage source sets an ultra-low quiescent current that flows into the base of the triode in the low-pass amplifier, making it work in a critical conduction mode, so that the positive half of the signal passes through and is amplified while the negative half merely drives the triode into cut-off mode. Hence, the $1MHz$ AC carrier is turned into a unipolar signal with a low-frequency DC bias that represents its envelope. Finally, the envelope signal is obtained by a smoothing capacitor and then fed into a comparator for digitization. \paragraph{Digitization and Decoding} \fyi{To achieve better energy efficiency, power-hungry ADCs should be avoided and the MCU should run as little as possible. We digitize the analog signal with a comparator. Most of the time, the CPU of the MCU is sleeping, with only one timer running to measure time. When a positive jump appears at TP4 shown in \figref{fig:diagram_tag} (i.e., the output of the comparator), the CPU of the MCU is woken up, records the time stamp of the timer, and then halts again. Together with the previous wake-up time stamp, this gives the period of the clock cycle at the comparator output. To align with this working pattern, we adopt clock-period coding. For example, a 185us clock period denotes 00, a 195us period denotes 01, a 205us period denotes 10, and a 215us period denotes 11. We can thus receive 2 bits with the MCU being woken up just once.} \fyi{This enables the MCU to sleep most of the time; upon waking up, it records the time stamp, determines the received bits, and goes back into sleep mode. In our implementation, with an MSP430 MCU running at $1MHz$, these routines are done within $16us$. When there is no carrier, or no data on the carrier, the MCU sleeps all the time. When the tag is receiving data, it still sleeps most of the time, working only 16us in each 200us receiving cycle.} In summary, we achieve low-energy reception at the ViTag\xspace by using only analog elements and a low-power MCU (MSP430). The MCU is in sleep mode most of the time. In the analog circuit design, we further set the transistors to work at a lower DC operating point to reduce energy consumption as much as possible. \subsection{ViTag-Tx\xspace Design}\label{subsec:tagtrans} Our ViTag\xspace\ transmitter transmits by passively backscattering the incoming light. The core of the transmitter is the combination of an LCD and a retro-reflector that serves as a modulator. While the LCD has an ultra-low quiescent current, more than $70\%$ of the power consumption during transmission is caused by the LCD. The reason is that the LCD has a considerable equivalent capacitance ($\sim 9nF$), which must be charged to $5.5V$ to turn the LCD off and discharged to turn the LCD on. It is this charging-discharging process that consumes energy. To conserve energy, we design an energy reuse module that recycles the discharging current. The LCD requires a sufficiently high voltage (\textit{e.g.}\xspace at least $5.5V$) to drive it to the desired polarization level. This high voltage is nearly 3 times the solar cell's voltage and cannot be supplied directly by the solar cell. We design a voltage boosting module that achieves this. The overall design of the ViTag\xspace transmitter is presented in \figref{fig:diagram_tag}. \paragraph{Energy Reuse} A conventional LCD driving circuit would discharge the LCD from the high driving voltage to ground and thus waste the energy. The design of our energy reuse module is depicted in Fig.~\ref{fig:energyreuse}. \begin{figure}[t] \centering { \epsfig{file=EnergyReuseCircuits_2-eps-converted-to.pdf, width=\columnwidth} } \vspace{-1ex} \caption{Energy reuse design for LCD driver. } \label{fig:energyreuse} \end{figure} During the charging phase, the DC/DC converter boosts the voltage supplied by the solar cell to the high voltage needed for driving the LCD towards a blocking state.
The MCU sets this high voltage on and activates the transistor $Q_0$ (the PNP transistor in the conventional LCD driver module). This operation puts the LCD into the charging mode and pumps up the voltage of the LCD. In the discharging phase, the MCU sets $Q_0$ to the cut-off state, thus shutting off the charging path, and activates the NPN transistor $Q_1$ on the discharging path. Unlike a conventional LCD driver that discharges directly to ground, in our design the current flows back to the input of the DC/DC circuit. This helps reduce the current drawn from the solar cell. Measurements show that the total current drawn by the LCD while switching at $0.5kHz$ decreases from $84uA$ to $46uA$ with energy reuse. We note two things about our design. First, the two signals controlling the on/off state of the LCD are generated by an MCU and are alternately activated with a short interval ($2us$). This ensures that only one of $Q_0$ and $Q_1$ conducts at a time and avoids the transient high current that would result if both transistors were activated during the switching. Second, the diode $D_0$ is critical. It prevents the charge on the LCD from discharging to the solar cell. Without it, the initial high voltage ($5.5V$) of the LCD would be pulled down to that of the solar cell ($2.1V$) immediately after $Q_1$ turns on; a high transient current would result and most energy would be wasted on the BE junction of $Q_1$. \section{Discussions} \paragraph{Full Duplex vs Half Duplex} Unlike radio backscattering systems, where achieving full duplex is extremely challenging due to the shared antenna and RF front-end, full duplexing is natural to Retro-VLC\xspace. This is due to the fact that separate components are responsible for emitting (LED/retro-reflector) and receiving (photodiode) light. The only difference is that, in full duplexing, the reflected light contains downlink signals, whereas in half duplexing, the reflected light is the pure carrier.
The different reflected carriers have no impact on the decoding of the uplink, thanks to the LPF at the reader front end. Full duplexing does incur extra power consumption, however, as both the receiving and transmitting logics are active and the MCU must be kept at a high working frequency. \paragraph{Size Tradeoff} In the ViTag\xspace\ implementation, we dedicate two-thirds of the area to the solar cell and one-third to the retro-reflector. The primary reason is that we only have access to LCDs of that size (obtained from 3D glasses) and to the solar cells available to us. For a target environment (mainly concerning the illumination condition) and LED power, we expect an optimal ratio between the area of the solar cell and that of the retro-reflector so as to achieve maximum communication range. This is of interest when making real products. \paragraph{Working with Infrared} Since the retro-reflector, the LCD, the receiving module on the tag and the receiving module on the LED side can all work in the infrared band, the overall system can be used even in total darkness, as long as the transmitting module is replaced with an infrared transmitter. This property can be beneficial in scenarios such as reading with a mobile device in the evening without disturbing others' sleep, and controlling home appliances without turning on the light. \section{Evaluation} \label{sec:eva} We evaluate Retro-VLC\xspace using our prototype implementation with the testbed shown in \figref{fig:setup}. The LED on the ViReader\xspace is 12W and the ViTag\xspace is of credit card size. As the ViReader\xspace is externally powered and the downlink signal is strong (we achieved the designed data rate of $10kbps$ on the downlink), we focus on measuring the bottleneck uplink performance. The following system aspects are evaluated: packet loss rate, response time, channel response, and the angle within which the uplink signal can be detected.
The latter shows the Retro-VLC\xspace system's resistance to eavesdropping attacks. Unless otherwise noted, the evaluation of angle and response time is with the lamp reader. \begin{figure}[tb!] \centering \includegraphics[width=0.7\columnwidth]{setup.jpg} \vskip -0.05in \caption{Evaluation testbed setup with a pair of ViReader and ViTag (for experiments with the flashlight reader, the lamp is replaced with the flashlight reader).} \label{fig:setup} \vskip -0.05in \end{figure} \paragraph{Testing Environments} Being a VLC system specially designed for indoor environments with lighting infrastructure, we carried out experiments in a typical office environment, where the ambient light is maintained in a comfortable range around 300$lx$. The ViTag harvests energy not only from the ViReader\xspace, but also from ambient light. On the other hand, the office environment comes with human movements and other disturbances that may affect communication. To give a sense of the environmental impact, we also test in a dark chamber as a baseline for comparison. In the dark chamber, the ViReader\xspace LED is the sole light/energy source. \paragraph{Summary of Key Findings} The key findings are highlighted as follows: \begin{Itemize} \item The experiments verify that we are able to get a ViTag\xspace\ to operate battery-free up to $2.4m$ away with the lamp reader and $10.6m$ with the flashlight reader (with packet loss rate below $80\%$, or an equivalent BER below $8.26\%$) at $0.5kbps$ on the uplink. The system works for a wide range of ViTag\xspace orientations. \item Tag-to-reader communication is resilient to eavesdropping: a sniffer can only sense the ongoing communication within a narrow field of view of about $\pm 15{\,^{\circ}}\xspace$. \end{Itemize} \subsection{Packet Loss Rate}\label{sec:plr} In this subsection, we focus on evaluating the packet loss rate (PLR) of the uplink tag-to-reader communication.
For VLC, the received signal strength is mainly affected by three factors: the distance between the ViTag and the ViReader, the incidence angle, and the irradiation angle \cite{location3}. We first measure the impact of distance on PLR by varying the distance between the ViReader and the ViTag. We keep the ViReader perpendicular to the ViTag, i.e., $0{\,^{\circ}}\xspace$ incidence and irradiation angles. To measure the PLR, the ViTag continuously sends packets to the ViReader at a constant rate for 20 minutes. Each packet consists of 4 bytes of ID data. We count the number of packets received successfully at the ViReader. \figref{fig:plr} shows the resulting PLR versus distance. \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth]{PackageLostRate_Dark-eps-converted-to.pdf} \vskip -0.05in \caption{Distance vs.\ PLR of the 12W LED lamp.} \label{fig:plr} \vskip -0.05in \end{figure} Figure~\ref{fig:plr} shows that in a dark chamber, the PLR remains below $0.7\%$ at distances up to $1.4m$. As the tag moves past $1.4m$, the PLR increases dramatically; packets are barely received beyond $2.0m$. The drastic increase in PLR occurs because the energy obtained from the solar cell becomes insufficient at long distances. In contrast, the PLR increases more slowly in the office environment thanks to the energy the ViTag harvests from the ambient light in addition to that from the ViReader. \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth]{PackageLostRate_flashlight-eps-converted-to.pdf} \vskip -0.05in \caption{Distance vs.\ PLR of the 3W flashlight reader. The x-axis starts from 6.5 meters.} \label{fig:plr_torch} \vskip -0.05in \end{figure} Figure~\ref{fig:plr_torch} presents the PLR as a function of the range for the 3W flashlight reader. The experiment shows that with the 3W flashlight reader, a much longer communication range can be reached.
Specifically, in a dark chamber, instead of at $1.4m$, the received energy begins to drop significantly at $7.0m$ and is nearly exhausted at $7.4m$. With normal office lighting, the system performs even better in terms of communication range: the PLR remains at nearly 0 until the tag-reader distance reaches $8.5m$, and reaches $80\%$ at $10.6m$. We can still receive packets at a distance of $11.4m$. \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth]{angle_plr_100cm-eps-converted-to.pdf} \vskip -0.05in \caption{Angle of incidence (irradiation) vs.\ packet loss rate.}\label{fig:readerAoI} \vskip -0.05in \end{figure} We then evaluate the PLR under different incidence or irradiation angles. We fix the distance between the ViReader and the ViTag plane (the plane where the ViTag resides in 3D space), and move the ViTag along the plane. In this setting, the incidence angle always equals the irradiation angle. In our evaluation, we fixed the distance at $100$cm. The measured results are shown in \figref{fig:readerAoI}. We note that despite the seemingly high PLR (e.g., 80\%), for certain applications such as ID tags, we can still obtain the information after a few trials. This is similar to RFID systems. \subsection{Response Time}\label{sec:bootstrap} Response time accounts for the time from the ViReader issuing a query to receiving a response from the ViTag. The response time thus consists of \textit{charging time}, downlink packet reception time, and uplink packet transmission time. Response time is an important metric for user experience. Generally, a response time below $100ms$ is perceived as negligible by humans. In our system, due to the frequency limitation of the LCD, the uplink packet transmission is slow, taking over $100ms$ to send a 32-bit ID. We envision faster LCD shutters in the future, and only focus on the charging time in the following.
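The transmission-time figure above can be sanity-checked with simple arithmetic (our own illustration, using only the rates quoted in this paper: a $0.5kbps$ LCD-limited uplink, two Manchester chips per bit, and a 3-chip preamble; any header or guard overhead on top of this is an assumption):

```python
# Rough airtime estimate for the uplink (illustrative arithmetic, not a
# measurement).  Rates from the text: 0.5 kbps uplink, Manchester coding
# (2 chips per bit), 3-chip preamble.  Framing overhead beyond this is an
# assumption, which is why the measured total exceeds 100 ms.
UPLINK_RATE_BPS = 500            # 0.5 kbps, limited by the LCD shutter
CHIPS_PER_BIT = 2                # Manchester coding
PREAMBLE_CHIPS = 3               # preamble spans 3 Manchester chips
CHIP_TIME_S = 1.0 / (UPLINK_RATE_BPS * CHIPS_PER_BIT)

def uplink_airtime_ms(payload_bits: int = 32) -> float:
    """Airtime of preamble plus Manchester-coded payload, in milliseconds."""
    preamble_s = PREAMBLE_CHIPS * CHIP_TIME_S
    payload_s = payload_bits / UPLINK_RATE_BPS
    return 1000.0 * (preamble_s + payload_s)
```

Under these assumptions, the 32-bit ID alone already occupies 67ms of airtime, so with any framing overhead the total readily exceeds the $100ms$ threshold.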
\begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth]{chargingtime_distance-eps-converted-to.pdf} \vskip -0.05in \caption{Charging time vs.\ distance in dark chamber and office room.}\label{fig:charging_distance} \vskip -0.05in \end{figure} If the ViReader and the ViTag are close enough, the ViTag can quickly harvest enough energy to start the conversation. Conversely, if the distance is long, the ViTag needs a longer charging time before responding. We define the charging time as the time used to charge a \textbf{zero-initial-energy} ViTag. The charging time is affected by a number of factors such as the solar cell size, the ViTag's energy consumption, and the environmental illumination level. As the ViTag size is fixed, we only evaluate the impact of the environmental illumination. First we evaluate the charging time as we vary the distance from $0.1m$ to $1.8m$, counting the time for the operating voltage to rise from $10\%$ to $82.5\%$ (the minimum operating voltage). The result is presented in Fig.~\ref{fig:charging_distance}. We can see that when the distance is small, the charging times in both cases are close. For instance, when the distance is $10$ or $20cm$, the charging times are around $50$ and $100ms$, respectively. The two curves begin to separate after around $0.6m$; the charging time in the office environment grows slowly due to the extra energy supply from the ambient light. \begin{figure}[!ht] \centering \includegraphics[width=0.8\columnwidth] {angle_chargingtime-eps-converted-to.pdf} \vskip -0.05in \caption{Charging time vs.\ incidence (irradiation) angles.}\label{fig:charging_angle} \vskip -0.05in \end{figure} We note that the charging efficiency of the solar cell is also affected by the irradiation angle of the ViReader and the incidence angle at the solar cell. For simplicity, we fix the distance between the ViReader and the ViTag at $60$ and $120cm$, respectively, and observe the charging time versus the incidence/irradiation angle, shown in \figref{fig:charging_angle}.
We indeed see an increase in charging time with larger angles. However, the charging time grows slowly, especially when the angle is small, e.g., below $30{\,^{\circ}}\xspace$. This means the ViTag tolerates flexible orientations without serious performance degradation. In particular, we see a much less sensitive reaction to the angles in the office environment due to the energy harvested from ambient light, which further highlights the benefit of using visible light as the power source. In practice, the ViTag can always harvest energy from ambient light (sunlight or artificial lighting systems) whether or not a ViReader is present. Thus, the actual bootstrap can be instantaneous. This is a key difference from RFID/NFC tags, where the operating energy can only be obtained from a dedicated reader. \subsection{Channel Response} This subsection shows how the energy of the light signal attenuates with travelling distance along the visible channel. Here, the visible channel means the path the light signal traverses until it is received by the receiver of the reader, including the downlink, the retro-reflector, the LCD and the uplink. For all backscatter systems, the energy of the signal received by the reader, which is reflected or backscattered by the tag, tends to be much weaker than the energy received by the tag, which poses a bottleneck for the system. Thus, energy efficiency is a crucial factor. To get an accurate picture of how the energy diffuses as a function of the communication range, we measured the observed channel response for the lamp reader and the flashlight reader. Fig.~\ref{fig:ChannelResponse} and Fig.~\ref{fig:ChannelResponse_flash} show the energy calculated at the MCU, as the square of the output voltage. Note that the signal is captured and then measured by the MCU, so it goes through an automatic gain control (AGC) amplifier. From both figures, we see that when the tag is close to the LED, the signal is very strong and the AGC is effective.
As a result, the portion of the curves before the AGC turns off goes down slowly. The AGC in fact almost completely suppresses the amplification when the signal is extremely strong (i.e., at very close distance to the flashlight reader), as shown in Fig.~\ref{fig:ChannelResponse_flash}. In both figures, after the point where the AGC is turned off, i.e., maximum amplification is always exerted, both curves attenuate at a rate between the square and the cube of the distance. \begin{figure}[!ht] \centering \includegraphics[width=0.77\columnwidth]{ChannelResopnse_lamp-eps-converted-to.pdf} \vskip -0.05in \caption{Channel response of the lamp reader.} \label{fig:ChannelResponse} \vskip -0.05in \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=0.77\columnwidth]{ChannelResopnse_flashlight-eps-converted-to.pdf} \vskip -0.05in \caption{Channel response of the flashlight reader.} \label{fig:ChannelResponse_flash} \vskip -0.05in \end{figure} We note that, for a typical \textit{battery-free} backscatter system~\cite{abc1}, the modulated backscattered signal received by the reader from the tag attenuates proportionally to the fourth power of the communication range. A detailed formula can be found in~\cite{backscatterdeclay}. As a comparison, we fit the part of the curve after the AGC turns off to a negative quartic function, shown as the dotted line in Fig.~\ref{fig:ChannelResponse} and Fig.~\ref{fig:ChannelResponse_flash}. We can see that the negative quartic function attenuates much faster than our measurements. Thus, Retro-VLC\xspace achieves much better energy efficiency and can work at a longer communication distance than typical battery-free backscatter systems for the same source emission power. This is perhaps due to the fact that Retro-VLC\xspace actually helps concentrate light from a scattering light source. \subsection{Maximum Working Range} We have so far evaluated both the PLR and energy harvesting.
We define the working range as the area within which the ViTag\xspace can harvest enough energy and talk with the ViReader\xspace with a success rate above $20\%$, i.e., a packet loss rate below $80\%$. We measure the working range in the office environment and show the result in Fig.~\ref{fig:ContinuesWorkingRange}: the working range is the area within the closed blue curve. With an upright orientation of the ViTag\xspace, the maximum working distance is up to $2.6m$. With the ViReader\xspace perpendicular to the ViTag\xspace plane, the Field of View (FoV) is around $50{\,^{\circ}}\xspace$. In our evaluation, we always keep the incidence angle equal to the irradiation angle, so the measured working range is conservative. In practice, if we orient the ViTag\xspace towards the ViReader\xspace, the FoV can be even larger. \begin{figure}[!ht] \centering \includegraphics[width=0.7\columnwidth] {ContinuesWorkingRange_flash-eps-converted-to.pdf} \vskip -0.05in \caption{Working area measured in office environment. Reader is located at (0,0).}\label{fig:ContinuesWorkingRange} \vskip -0.05in \end{figure} As to the flashlight reader, the maximum distance from the reader to the edge of the working area is $10.6m$, as shown in the figure. In our experiments, we can still receive packets at a maximum range of $11.3m$. Note, however, that the flashlight reader cannot work if the tag-reader distance is too small, e.g., below $0.5m$, as shown in \figref{fig:ContinuesWorkingRange}. This is due to the saturation of the sensors and the amplification circuits. \subsection{Eavesdropping Range}\label{sec:secure} Eavesdropping attacks in our system refer to a device secretly listening to the conversation between a ViTag\xspace and a ViReader\xspace. Eavesdropping is usually an early step of other attacks such as man-in-the-middle attacks \cite{rfidsec1,rfidsec2}. One of the promising applications of Retro-VLC\xspace is using the ViTag\xspace as a badge or payment card.
Therefore, it is important that we protect the communication against eavesdropping attacks. \begin{figure}[!ht] \centering \includegraphics[width=0.7\columnwidth] {security_experiment_figure2-eps-converted-to.pdf} \vskip -0.05in \caption{Signal detection radius of the uplink.}\label{fig:security} \vskip -0.05in \end{figure} A key feature of Retro-VLC\xspace compared with RFID/NFC is that the tag-to-reader communication is \textbf{directional}. Therefore, a conversation from a ViTag\xspace is expected to be detectable only within a narrow FoV. It is shown in \cite{eavesdrop2} that a sniffer can overhear NFC communication even over 1 meter away. In our evaluation, we place a ViReader\xspace and a ViTag\xspace $0.6m$ apart from each other. The ViTag\xspace faces squarely to the ViReader\xspace, as shown in Fig.~\ref{fig:security}. We use another reader as the attacker and measure the area within which the attacker can sniff the transmission from the ViTag\xspace; the area is plotted in Fig.~\ref{fig:security}. As the figure shows, the signal can actually be detected quite far away. As discussed for Fig.~\ref{fig:plr}, the reason is that the retro-reflector is not perfect: it reflects the light back with a small diffusion angle, and the intensity of the light decays quickly with that angle. In our experiment, we use a sniffer with a $100dB$ gain (the same as our ViReader\xspace), and the result shows that the detectable area extends nearly $2m$ to the back, excluding the shadow of the ViReader\xspace. However, the whole area resides within a small FoV of the ViTag\xspace, making it much easier for the user to discern the sniffer; the area can also be blocked by a larger cover on the ViReader\xspace. Usually, the reader is fixed on the wall (e.g., a badge reader), which further reduces the signal-detectable area. \section{Prototyping and Potential Applications} \label{sec:proto} \subsection{Prototype Implementation} To demonstrate the effectiveness of our design, we implement the proposed Retro-VLC\xspace\ system.
Our prototype is shown in \figref{fig:proto} (a) and (b). The ViTag\xspace\ is battery-free and harvests light energy using a solar cell. The size of the ViTag\xspace\ is $8.2cm\times 5.2cm$, the same as a credit card. About two-thirds of the area is used for the solar cells and one-third for the LCD and retro-reflector. \begin{figure}[tb!] \centering \minipage{.75\columnwidth} \subfigure[ViTag\xspace Front]{ \includegraphics[width=0.45\columnwidth]{tag-front.jpg} } \subfigure[ViTag\xspace Back]{ \includegraphics[width=0.45\columnwidth]{tag-back2.jpg} } \subfigure[Lamp]{ \includegraphics[width=0.47\columnwidth]{reader_lamp_2.jpg} } \subfigure[Flashlight]{ \includegraphics[width=0.44\columnwidth]{reader_torch.jpg} } \vspace{-1ex} \endminipage \caption{Prototype.} \label{fig:proto} \end{figure} We use the schematics in \figref{fig:sysdiagram} in the implementation, with printed circuit boards (PCBs) and off-the-shelf circuit components, which we summarize in Table~\ref{table:components}. The retro-reflector fabric we use is Scotchlite from 3M~\cite{rrsheet}.
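Given the card dimensions above, a back-of-envelope estimate suggests that devoting two-thirds of the card to solar cells suffices to power the tag. In the sketch below, the irradiance and cell efficiency are rough assumptions (not measured values); the consumption figure is the measured total reported in Table~\ref{table:energy}:

```python
# Back-of-envelope check that the solar-cell area can power the tag.
# Card dimensions are from the prototype; irradiance and cell efficiency
# are rough assumptions, not measured values.
card_w_cm, card_h_cm = 8.2, 5.2
cell_area_cm2 = (2 / 3) * card_w_cm * card_h_cm   # two-thirds of the card
irradiance_uW_cm2 = 100        # assumed: well-lit office, ~100 uW/cm^2
efficiency = 0.15              # assumed: polycrystalline silicon cell
harvested_uW = cell_area_cm2 * irradiance_uW_cm2 * efficiency
consumed_uW = 183.8            # measured ViTag total at 2.0 V
print(f"harvested ~{harvested_uW:.0f} uW vs. consumed {consumed_uW} uW")
assert harvested_uW > consumed_uW
```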
\begin{table}[th] \begin{center} \small \begin{tabular}{| l | l || l | l |} \hline \multicolumn{2}{ |c|| }{ ViTag\xspace\ } & \multicolumn{2}{ c| }{ ViReader\xspace\ } \\ \hline\hline Photodiode & BPW34 & Photodiode & SFH213 \\ \hline MCU & MSP430G & MCU & LPC4357 \\ \hline DC/DC & BQ25504 & MOSFET & IRF510 \\ \hline Comparator & TLV2762 & Amplifier & \footnotesize{LM6172, AD620} \\ \hline Transistor & S9018 & Transistor & \footnotesize{S9018, 2SC3357} \\ \hline LCD & SF110147 & LED Bulb & Apollo BR30 \\ \hline \end{tabular} \normalfont \caption{Electronic components used in the Retro-VLC\xspace prototype}\label{table:components} \end{center} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{| l | l | l |} \hline Component\textbackslash Voltage & 2.0V & 2.6V \\ \hline\hline \multirow{2}{*}{Receiving Circuit} & $43.8\mu A$ & $48.4\mu A$ \\ & ($87.6\mu W$) & ($125.8\mu W$) \\ \hline \multirow{2}{*}{Transmitting Circuit} & $45.1\mu A$ & $36.7\mu A$ \\ & ($90.2\mu W$) & ($95.4\mu W$) \\ \hline \multirow{2}{*}{Total} & $91.9\mu A$ & $90.0\mu A$ \\ & ($183.8\mu W$) & ($234.0\mu W$) \\ \hline \end{tabular} \caption{Overall and component-wise energy consumption of ViTag\xspace.}\label{table:energy} \end{center} \end{table} The ViReader\xspace\ is implemented in two forms. The first is a lamp reader, modified from a $12W$ white LED lamp, as shown in \figref{fig:proto}(c). We put the light sensor at the center of the front surface of the lamp and isolate it with copper foil to reduce leakage from the LED light. The second is a flashlight reader, shown in \figref{fig:proto}(d). It uses a $3W$ LED as the transmitter, and three light sensors are used to improve the SNR. The energy consumption of ViTag\xspace depends on the output voltage of the solar cell. We measure the overall and component-specific energy consumption of ViTag\xspace at two typical operating voltages, as shown in Table~\ref{table:energy}.
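The figures in Table~\ref{table:energy} can be cross-checked for internal consistency, since each reported power should equal the measured current times the operating voltage. A minimal sketch (values copied from the table):

```python
# Cross-check the energy table: reported power (uW) should equal
# measured current (uA) times the operating voltage (V).
rows = {  # voltage: [(current_uA, reported_uW) for RX, TX, total]
    2.0: [(43.8, 87.6), (45.1, 90.2), (91.9, 183.8)],
    2.6: [(48.4, 125.8), (36.7, 95.4), (90.0, 234.0)],
}
for volts, entries in rows.items():
    for current_uA, reported_uW in entries:
        # allow for the table's one-decimal rounding
        assert abs(current_uA * volts - reported_uW) < 0.1
print("all entries in the energy table are self-consistent")
```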
The measurements show that the ViTag\xspace prototype indeed achieves ultra-low power consumption. With such low power consumption, we are able to drive it by harvesting light energy using only small solar cells. \subsection{Potential Applications} The low-power duplex Retro-VLC\xspace system has many potential application scenarios. \paragraph{Home sensor bearer} Sensors such as motion, temperature and humidity sensors can be integrated with ViTag\xspace. Sensor readings can be streamed to a ViReader\xspace-capable lighting LED. Such an application benefits from the battery-free property of ViTag\xspace: deployment is extremely simple and sensors can remain untethered afterwards.
\paragraph{Visible-light identification (VLID)} Taking visible light as the communication medium, VLID has many advantages over radio-frequency identification systems: it can achieve distant communication with battery-free tags, it is immune to electromagnetic interference, and it is more secure. It thus has the potential of replacing RFID in many scenarios, such as warehouses, storage and transportation systems. \paragraph{Interactive roadside traffic signs} The battery-free design of ViTag\xspace can be applied to roadside signs. Cars can communicate with them using LED headlights. Similarly, it can be used for automatic tollgates. \paragraph{NFC communication/payment} The use of visible light and the directional reflection of the retro-reflector make Retro-VLC\xspace a more secure and faster means than other wireless NFC systems. The tag can be made smaller if only short-range communication is needed. \section{Introduction} \label{sec:intro} Nowadays, white LEDs have been prevalently deployed for illumination owing to advantageous properties such as high energy efficiency, long lifetime and environmental friendliness. Being semiconductor devices, LEDs also possess another feature: they can be turned on and off \textit{instantaneously} \cite{location3}. This effectively turns illuminating LED lights into a carrier and gives rise to a new ``dual-paradigm'' of simultaneous illumination and visible light communication (VLC). The ubiquity of illuminating infrastructure makes this dual-paradigm VLC (i.e., communication over existing lighting infrastructures) especially well suited for communication with mobile devices or sensor nodes, such as streaming video to one's mobile phone or collecting environmental data from home sensors. Like any communication system, it is essential to have bi-directional (\textit{i.e.}\xspace both LED-to-device downlink and device-to-LED uplink) communication capability to ensure reliability and flexibility. For instance, a minimum requirement would be to acknowledge correct or incorrect reception of packets. One immediate solution would be to use another medium, such as a radio link, to complement the VLC link. For instance, ByteLight~\cite{ble0}, which exploits LED lighting infrastructure for both communication and localization~\cite{location1,location2}, has resorted to Bluetooth Low Energy (BLE) for the uplink device-to-LED communication. But such a solution incurs additional cost and increased overall system complexity, and undermines the benefits of VLC such as security. In this paper, we are interested in a bi-directional communication system relying solely on VLC. An intuitive way to realize a bi-directional VLC system is to put together two one-way VLC links with reversed transmission directions, \textit{i.e.}\xspace a \textit{symmetric solution}. It is indeed a viable solution for dedicated VLC systems.
Such a symmetric assumption is perhaps widely taken, as existing work on VLC has primarily focused on improving the throughput of one-way links using power-hungry, expensive, dedicated sending/receiving devices and intermediate light-concentrating optical components (\textit{e.g.}\xspace lenses) \cite{expensive,expensive2,retro1,retro2}. However, the dual-paradigm nature of a VLC system and practicality considerations render such a symmetric solution unsuitable, for two basic reasons. First, the dual-paradigm VLC system, with illumination being the primary goal, has quite asymmetric capabilities at the two ends: one end is the externally powered lighting LED and the other end is the power-constrained mobile or sensor device. Second, while the positions of lights are usually fixed, those of a mobile or sensor node can be arbitrary and changing. In particular, the weak end cannot afford to light up a high-power LED to transmit information, especially when communicating at a relatively large distance (\textit{e.g.}\xspace a few meters for typical indoor environments). Using light-concentrating optical components might allow low-power LEDs to be used, but it would require precise relative positioning and careful orienting (with the optical components being steerable) between the two ends, and is obviously impractical. \begin{figure}[tb!] \centering \includegraphics[width=.9\columnwidth]{system.pdf} \caption{System architecture.} \label{fig:system} \vskip -3mm \end{figure} Inspired by recent work on backscatter communication systems~\cite{abc1,abc4}, in this paper we present the design and implementation of \textit{Retro-VLC\xspace} -- \textit{a low-power duplex VLC system} that consists of a reader (ViReader\xspace) residing in the lighting infrastructure and tags ({ViTag\xspace}s) integrated in mobile devices or sensor nodes. The ViReader\xspace is made up of an externally powered lighting LED, a light sensor (\textit{e.g.}\xspace a photodiode) and the control circuits.
The ViTag\xspace consists of a light sensor, a retro-reflective fabric, a transparent LCD shutter and the control circuits. One example tag implementation is shown in \figref{fig:system}. Central to Retro-VLC\xspace is the adoption of retro-reflective fabric, which retro-reflects light, \textit{i.e.}\xspace bounces light back to the lighting source \textit{exactly along its incoming direction}. Its reflecting nature helps to establish an uplink over the \textit{same} visible light channel established by the high-power lighting LED, which thus avoids using another high-power LED on the weak end and makes it possible to achieve the low-power design goal. Its retro-reflective nature further not only allows arbitrary relative positioning between the lighting source and the tag, but also helps to concentrate the reflected light from a scattering light source. These two favorable properties render Retro-VLC\xspace an effective visible-light backscattering communication system. Retro-VLC\xspace works as follows. For the downlink (LED-to-tag), the LED in ViReader\xspace switches on and off at a high frequency (\textit{e.g.}\xspace 1MHz, to avoid human-perceptible flickering), turning the illuminating light into a communication carrier. Information bits are carried using a certain modulation method (\textit{e.g.}\xspace Manchester coding). The light signals are picked up by the light sensor on ViTag\xspace and decoded to restore the information. For the uplink (tag-to-LED) communication, the same carrier is leveraged via reflection. To carry bits over the reflected light carrier, we cover the retro-reflector fabric with a transparent LCD that serves as a shutter, and adopt On-Off Keying (OOK) modulation over the reflected light carrier by controlling the passing or blocking state of the LCD shutter. The modulated reflected light carrier is then picked up by a photodiode on the ViReader\xspace and decoded by a dedicated subsystem.
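As a concrete illustration of the downlink bit pipeline just described, the following sketch Manchester-encodes bits into chips (which the hardware would use to gate the light carrier on and off) and decodes them back. The chip convention (1 mapped to high-then-low) is an assumption on our part; the text does not fix one.

```python
# Illustrative sketch of Manchester coding on the downlink.
# The chip convention (1 -> [high, low], 0 -> [low, high]) is assumed.

def manchester_encode(bits):
    """Map each bit to a two-chip transition."""
    chips = []
    for b in bits:
        chips += [1, 0] if b else [0, 1]
    return chips

def manchester_decode(chips):
    """Recover bits from chip pairs; assumes chip-aligned input."""
    return [1 if (chips[i], chips[i + 1]) == (1, 0) else 0
            for i in range(0, len(chips), 2)]
```

A practical receiver would additionally exploit the guaranteed mid-bit transition of Manchester coding to recover chip timing in the analog domain.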
Two major challenges arose in the design of the Retro-VLC\xspace system, especially the uplink. The root causes are the practicality considerations of the system and the low-power requirement of the tag. Specifically, the first challenge is the extremely weak and noisy signal (reflected by the remote tag) received at the ViReader\xspace. We use a photodiode with a wide field of view (FoV) on the ViReader\xspace to avoid constraining the range of possible tag deployment. The wide FoV of the photodiode not only makes it less sensitive to the reflected lights (as only a tiny portion of its view actually corresponds to the retro-reflecting area of a tag), but also invites severe interference from the leakage and ambient reflection of the strong downlink signal and carrier. Secondly, the low power consumption requirement of ViTag\xspace (in the hope of achieving battery-free operation by harvesting energy solely from the illuminating LED) entails careful design as well. The receiving (demodulation and decoding) unit and the modulation unit (the LCD) on the ViTag\xspace consume significant energy. The LCD shutter leverages the electric field to control the arrangement of liquid crystal molecules (to polarize the light); it is itself a capacitor. Frequently charging and discharging the LCD consumes relatively significant energy, especially when the refresh rate is high. In addition, for the sake of cost and energy consumption, we do not use any high-precision oscillator on the ViTag\xspace. There is no clock synchronization between a ViReader\xspace and ViTag\xspace(s) either. We have addressed these challenges with the following design.
We employ a differential amplifier in the ViReader\xspace receiver to filter out noise; we adopt a multi-stage amplification design with feedback for automatic gain control to keep the system away from self-excitation. With these designs, we amplify the signal by up to $120dB$ while ensuring the stability of the system. We devise a sliding-window multi-symbol match filter to handle possible clock offsets and drifts between the ViReader\xspace and the ViTag\xspace. To achieve low power consumption on the ViTag\xspace, we have followed the principles of using as many analog components as possible, making the circuit work at the most energy-efficient (\textit{i.e.}\xspace close to cut-off) state, and seeking maximal energy reuse. In particular, we avoid energy-demanding analog-to-digital converters (ADCs) with a specially designed comparator. The microcontroller (MCU) in ViTag\xspace\ handles only simple tasks such as parity check, duty cycling, and the control of LCD states. We further design an energy reuse module that collects almost half of the LCD's discharging current. \fyi{We have implemented several prototypes that demonstrate the effectiveness of our Retro-VLC\xspace design. We built a battery-free ViTag\xspace device, which operates by harvesting energy from the incoming light. \figref{fig:system} depicts the architecture of a ViTag\xspace. It is the same size as a credit card, with one-third of the area being the retro-reflector and two-thirds the polycrystalline silicon solar cell. We made two types of ViReader\xspace, modified from a normal LED bulb and a flashlight, respectively.} We evaluate our system in locations where illuminating LEDs are typically deployed, such as office environments. We also evaluate it in dark chambers for benchmarking purposes. We measure the maximum communication range between the LED and the ViTag\xspace\ under various LED illumination levels, ViTag\xspace\ orientations, solar panel areas and retro-reflector areas.
Our experiments show that our $8.2cm\times 5.2cm$ ViTag\xspace\ prototype can achieve a $10kbps$ downlink speed and a $0.5kbps$ uplink speed over distances of up to $1.7m$ in dark chambers and $2.4m$ in offices, under a $200\mu W$ power budget. We also demonstrate its merit in security by evaluating the area around the ViTag\xspace in which uplink transmissions can be sniffed. \paragraph{Contributions} We make the following contributions: \begin{Itemize} \item We propose a practical bi-directional VLC primitive that works for small battery-free devices using retro-reflectors, LCDs and ordinary white LEDs. The design is well suited for communication between a mobile or sensor device and the illuminating infrastructure. \item We address various challenges through energy-efficient analog circuit design and energy reuse components on the ViTag\xspace, and through weak signal detection and an unsynchronized decoding scheme on the ViReader\xspace. \item We build and evaluate real working prototypes, confirming the effectiveness of our design and providing a sense of its practicality. \end{Itemize} \section{Media Access Control} \label{sec:mac} The discussion so far focuses on the communication aspects of a single ViTag\xspace-ViReader\xspace\ pair. However, when many of these devices are in range of each other, we need mechanisms to arbitrate the channel among them. Unlike traditional RFID, the communication uplink from ViTag\xspace\ to ViReader\xspace\ is highly directional because of the retro-reflectors. In addition, as a system with multiple Internet-connected access points, which also differs from RFID systems, Retro-VLC\xspace\ needs mechanisms to provide roaming support for ViTag\xspace\/s.
In this section, we explore the Media Access Control (MAC) design for ViTag\xspace\ and ViReader\xspace\ with a break-down into four scenarios, namely, one ViReader\xspace\ to multiple ViTag\xspace\/s, multiple ViTag\xspace\/s to one ViReader\xspace, multiple ViReader\xspace\/s to one ViTag\xspace, and one ViTag\xspace to multiple ViReader\xspace\/s. \subsection{One ViReader\xspace\ to Multiple ViTag\xspace\/s} \label{subsec:onereaderto} One of the problems is how a ViReader\xspace\ identifies a ViTag\xspace\ with a specific serial number among a number of ViTag\xspace\/s in range. This is necessary because if multiple ViTag\xspace\/s respond simultaneously to a query from the ViReader\xspace, they will jam each other. In Retro-VLC\xspace, we keep all ViTag\xspace\/s in a passive state, waiting for polling requests sent by the ViReader\xspace. When the serial number of a tag is called, the tag with this serial number responds within an assigned time slot. The rest of the ViTag\xspace\/s will ignore the payload that follows the serial number in the query as they notice that the serial number does not match their own. For the requested ViTag\xspace\ to respond, it only needs to modulate the LCD and \textit{directionally} send information back to the ViReader\xspace\ that initiated the conversation. Other ViTag\xspace\/s and ViReader\xspace\/s nearby will not hear anything from the requested ViTag\xspace. \subsection{Multiple ViTag\xspace\/s to One ViReader\xspace} \label{subsec:multitagto} When multiple ViTag\xspace\/s want to talk to one ViReader\xspace\ simultaneously, every ViTag\xspace\ has to wait for its own time slot scheduled by the ViReader\xspace\ to transmit. \subsection{Multiple ViReader\xspace\/s to One ViTag\xspace} \label{subsec:multireaderto} Another problem is how to arbitrate the medium when multiple ViReader\xspace\/s want to talk to one ViTag\xspace. To solve this problem, ViReader\xspace\/s run ALOHA with carrier sensing.
Specifically, a ViReader\xspace\ with data to send simply transmits it. If, while a ViReader\xspace\ is transmitting data, it receives any data from another ViReader\xspace, there has been a message collision, in which case all involved ViReader\xspace\/s back off for a random period of time before retrying. Unlike ViTag\xspace\/s, ViReader\xspace\/s do not have a tight energy constraint, so carrying out carrier sensing on them is possible. \subsection{One ViTag\xspace\ to Multiple ViReader\xspace\/s} \label{onetagto} The reverse problem is how a ViTag\xspace\ that wants to talk to one ViReader\xspace\ can avoid interrupting other ViReader\xspace\/s in range. In principle, a ViTag\xspace\ is supposed to respond to the polling request sent by the ViReader\xspace\ that has the strongest illuminance on the ViTag\xspace, so as to get the best performance. However, detecting light strength is too energy-consuming for ViTag\xspace\/s. On the other hand, a ViReader\xspace\ does not have a tight energy budget and can likewise assess the strength of the signal backscattered by ViTag\xspace\/s, which is negatively correlated with the distance from a ViTag\xspace\ to a ViReader\xspace. To provide ViTag\xspace\/s in the network with the best connection, ViReader\xspace\/s estimate the accessibility of every ViTag\xspace\ in range using the feedback signal in each ViTag\xspace's time slot. Specifically, the network of ViReader\xspace\/s works out a mapping between the best service-providing ViReader\xspace\/s and every ViTag\xspace\ in range\footnote{One could exploit the link-state routing protocol to achieve consensus on such a mapping across ViReader\xspace\/s.}, and keeps this information in each ViReader\xspace's ``ViTag\xspace\ table''. Now, as every ViReader\xspace\ knows which ViTag\xspace\/s to serve to get the best performance, it will send polling requests only to ViTag\xspace\/s that are in its instantaneous ``ViTag\xspace\ table''.
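The reader-to-tag mapping described above can be sketched as a small aggregation step; the data structures and names below are hypothetical, chosen only to illustrate how per-reader signal-strength observations could be merged into a ``ViTag\xspace\ table''.

```python
# Sketch of building the "ViTag table": each ViReader reports the
# backscatter strength it observes per tag, and each tag is assigned
# to the best-observing reader. All names and values are illustrative.

def build_vitag_table(strengths):
    """strengths: {reader: {tag: strength}} -> {tag: best_reader}"""
    best = {}
    for reader, per_tag in strengths.items():
        for tag, s in per_tag.items():
            # Keep the reader with the strongest observed backscatter.
            if tag not in best or s > best[tag][1]:
                best[tag] = (reader, s)
    return {tag: reader for tag, (reader, _) in best.items()}
```

In the actual system this computation would run across the network of ViReader\xspace\/s (e.g.\ via a link-state protocol, as noted in the footnote), not on a single node.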
\subsection{Putting Things Together} The discussion so far brings together the physical layer and the MAC layer of the ViTag\xspace\ system. The physical layer protocol deals with point-to-point communications on the downlink (from one ViReader\xspace\ to one ViTag\xspace) and the uplink (from one ViTag\xspace\ to one ViReader\xspace), as well as ViTag\xspace\ duty-cycling, ViTag\xspace\ waking-up, and error correction. The MAC layer protocol addresses the multi-ViReader\xspace\ to multi-ViTag\xspace\ problem in a way different from existing RFID or WLAN systems because of the different constraints. With these protocols, the networked system of ViTag\xspace\/s can provide services like Internet connection to battery-free tags in the home-area sensor network scenario, and identification service in traditional RFID scenarios, with better security guarantees. We will showcase the prototype we built for the RFID scenario, and evaluate its performance and security preservation strength. \section{Requirements and Preliminaries} \label{sec:background} Our goal is to design a bi-directional VLC system that runs on battery-free devices like smartphones and sensor nodes. The system features an LED and a mobile device. One closed-loop communication paradigm of the system is as follows. The LED sends a packet carried on the white light it emits, modulated at $1MHz$ (downlink). The ViTag\xspace\ senses the transmission and wakes up. Upon successfully receiving and demodulating the packet, the ViTag\xspace\ sends a packet back to the LED (uplink). The LED is slightly modified so as to integrate the receiver, as shown in Fig.~\ref{fig:system}. In sending the packet, instead of generating the signal using power-consuming LEDs or other communication channels such as infrared or ultraviolet carriers, we adopt the modulating retro-reflector framework, with the combination of an LCD and a retro-reflector. Driving the LCD costs only $400~\mu W$.
In addition, a ViTag\xspace\ sends signals passively, so its power consumption can be maintained at a low level. In the rest of the paper, we describe in more detail the design of Retro-VLC\xspace, which consists of a modified off-the-shelf white LED and a ViTag\xspace. \paragraph{Retro-reflector} A retro-reflector is a device that returns light back to the light source along its incoming direction with a minimum of scattering~\cite{rr}. Retro-reflectors are widely used on road signs, bicycles, and clothing for road safety. From a microscopic view, a retro-reflector is composed of cells, each of which is a corner cube as shown in Fig.~\ref{fig:retrolcd} (e). When a light beam hits one of the cells, the light is turned around via two adjacent reflections. \paragraph{Modulating Retro-reflector} A modulating retro-reflector (MRR)~\cite{mrr} consists of a retro-reflector and a modulator for optical communications. An MRR operates as a passive source which transmits bits by varying the intensity of the reflected light beam. MRRs are widely used in free-space communication where the other side is a laser. Existing MRR systems~\cite{expensive,expensive2} are usually of a large size, and modulation is commonly achieved with a high-end electroabsorption modulator that alters the absorption spectrum by applying an electric field. Consequently, such a setting is ill-suited for our scenarios, which require a low-cost solution. \paragraph{LCD} An LCD exploits the light-polarizing properties of liquid crystals, which are controlled by the voltage applied across them. When the LCD is on, the incoming light is able to pass through the LCD and hit the retro-reflector; when the LCD is off, the incoming light is blocked by the LCD. Therefore, LCDs can be used as modulators for MRRs.
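The shutter-modulation idea can be made concrete with the toy simulation below: a slow LCD symbol passes or blocks many cycles of the fast light carrier, and the receiver recovers the bits from the signal envelope. All rates and thresholds here are illustrative assumptions, not the system's actual parameters.

```python
import math

# Toy simulation of the LCD shutter gating the retro-reflected carrier.
# Rates are illustrative: the real carrier runs at 1MHz, while the LCD
# switches orders of magnitude more slowly.

def reflected(bits, cycles_per_symbol=8, samples_per_cycle=4):
    """Each slow LCD symbol passes (bit 1) or blocks (bit 0) many carrier cycles."""
    wave = []
    for b in bits:
        for n in range(cycles_per_symbol * samples_per_cycle):
            carrier = 0.5 * (1 + math.sin(2 * math.pi * n / samples_per_cycle))
            wave.append(carrier if b else 0.0)
    return wave

def envelope_decode(wave, samples_per_symbol):
    """Average received power over each symbol period and threshold it."""
    return [1 if sum(wave[i:i + samples_per_symbol]) / samples_per_symbol > 0.25 else 0
            for i in range(0, len(wave), samples_per_symbol)]
```

This also shows why, as noted above, the uplink symbol rate is bounded by the LCD refresh rate rather than by the carrier frequency.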
\begin{figure}[th] \centering \includegraphics[width=0.8\columnwidth]{link.pdf} \caption{Downlink and uplink.} \label{fig:link} \vskip -3mm \end{figure} \figref{fig:link} illustrates the uplink light modulated by the LCD. Note that commercial off-the-shelf LCDs have a low refresh rate; therefore, the symbol length is much longer than the carrier period. \begin{figure*}[!t] \vskip -0.1in \centering {\footnotesize \begin{tabular}{ccccc} \epsfig{file=../illustrations/mrr1.eps, height=0.2\columnwidth} & \epsfig{file=../illustrations/mrr2.eps, height=0.2\columnwidth} & \epsfig{file=../illustrations/mrr3.eps, height=0.2\columnwidth} & \epsfig{file=../illustrations/mrr4.eps, height=0.2\columnwidth} & \epsfig{file=../illustrations/retroreflector.eps, height=0.2\columnwidth} \\ {(a) TX $45{\,^{\circ}}\xspace$, RX $45{\,^{\circ}}\xspace$} & {(b) TX $45{\,^{\circ}}\xspace$, RX $135{\,^{\circ}}\xspace$} & {(c) TX $45{\,^{\circ}}\xspace$, RX $90{\,^{\circ}}\xspace$} & {(d) TX $90{\,^{\circ}}\xspace$, RX $90{\,^{\circ}}\xspace$} & {(e) Ideal Retro-reflector} \\ \end{tabular} } \vskip -0.1in \caption{\footnotesize{\bf Illustration of the Retro-reflector.} (a) When the LED (TX) is at a $45{\,^{\circ}}\xspace$ angle of incidence, and the camera (RX) is at $45{\,^{\circ}}\xspace$, the retro-reflector area (on the right) is bright, the mirror area (in the middle) is dark, and the paper area (on the left) is dark. (b) When the LED is at $45{\,^{\circ}}\xspace$, and the camera is at $135{\,^{\circ}}\xspace$, the mirror area is bright, while the retro-reflector and the paper areas are dark. (c) When the LED is at $45{\,^{\circ}}\xspace$, and the camera is at $90{\,^{\circ}}\xspace$, the retro-reflector and the mirror areas are dark, while the paper area is slightly bright due to diffuse reflections. (d) When the LED is at $90{\,^{\circ}}\xspace$, and the camera is at $90{\,^{\circ}}\xspace$, both the retro-reflector and mirror areas are bright.
(e) depicts an ideal retro-reflector and how it responds to incoming light. Our actual retro-reflector has a scattering angle as small as $5{\,^{\circ}}\xspace$.} \label{fig:retrolcd} \vspace{-1em} \end{figure*} \section{{\bf Retro-VLC\xspace} Overview} \label{sec:ov} The basic design of Retro-VLC\xspace\ is to backscatter the incoming light using a retro-reflector fabric and to modulate it with an LCD. The overall concept is illustrated in \figref{fig:link}, which depicts how our design supports both half-duplex and full-duplex modes. \subsection{Challenges} While retro-reflecting and modulating the retro-reflected light make it possible to establish a visible light uplink from a mobile device to the illuminating infrastructure, the actual design of Retro-VLC\xspace still faces two major challenges, rooted in the practicality and low-power requirements of the system. \paragraph{Weak, Noisy Reflected Signal} The signal collected by the light sensor collocated with the light source is weak, about $4$ orders of magnitude weaker than the LED emission (measured with the tag at a 1.5-meter distance and a 12W LED lamp), due to the small size of the retro-reflector and the relatively large working range. We use a photodiode with a wide field of view (FoV) on the ViReader\xspace to avoid constraining the range of possible tag deployment. The wide FoV of the photodiode not only makes it less sensitive to the reflected lights (as only a tiny portion of its view actually corresponds to the retro-reflecting area of a tag), but also invites severe interference from the leakage and ambient reflection of the strong downlink signal and carrier. The converted electrical signal is further interfered by the harmonics of the 50Hz (or 60Hz) AC current.
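A back-of-envelope calculation (our own arithmetic, using only the measured $10^{-4}$ ratio above) illustrates the scale of the problem; the square-law doubling step is our reasoning about photodetection, not a figure from the measurements.

```python
import math

# Back-of-envelope: the retro-reflected light is ~4 orders of magnitude
# weaker than the LED emission (ratio measured in the text); the dB
# conversions and square-law doubling below are our own arithmetic.
optical_ratio = 1e-4
optical_db = 10 * math.log10(optical_ratio)   # about -40 dB in optical power
# A photodiode is a square-law detector: electrical power scales with the
# square of optical power, so the penalty roughly doubles in dB terms.
electrical_db = 2 * optical_db                # about -80 dB electrically
```

This is consistent with the receiver needing very large amplification (up to $120dB$, per the design summary) before the reflected signal can be demodulated.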
\paragraph{Energy Efficiency} Secondly, the low power consumption requirement of ViTag\xspace (in the hope of achieving battery-free operation by harvesting energy solely from the illuminating LED) entails careful design as well. The receiving (demodulation and decoding) unit and the modulation unit (the LCD) on the ViTag\xspace consume significant energy. The LCD shutter leverages the electric field to control the arrangement of liquid crystal molecules (to polarize the light); it is itself a capacitor. Frequently charging and discharging the LCD consumes relatively significant energy, and its power consumption increases linearly with the refresh rate. In our measurement, it draws $84\mu A$ of current at a 500Hz refresh rate. In addition, for the sake of cost and energy consumption, we do not use any high-precision oscillator on the ViTag\xspace. There is no clock synchronization between a ViReader\xspace and ViTag\xspace(s) either. These considerations introduce additional challenges. \subsection{Principles} Inspired by the design principles of some recent backscattering systems \cite{abc1,abc2, abc3}, we apply the following design principles in addressing the challenges: \begin{Itemize} \item Use analog components for signal detection. This avoids the expensive ADC and relieves the MCU from heavy digital signal processing. \item Make the transistors in the circuit work at a low DC operating point (\textit{e.g.}\xspace close to the cut-off state). This exploits the nonlinear relationship between the amplification gain and the DC working current (hence energy consumption) of a transistor. \item Reuse energy as much as possible. This is particularly aimed at reducing the LCD's energy consumption. \end{Itemize} \iffalse \begin{figure*}[t] \begin{center} \includegraphics[width=\textwidth]{../illustrations/ReadAndTag.eps} \vspace{-2em} \caption{Retro-VLC\xspace system diagram. The left part is the ViReader\xspace and the right part is the ViTag\xspace.
}\label{fig:sysdiagram} \end{center} \end{figure*} \else \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{reader_tag_simplified.pdf} \vspace{-2em} \caption{Retro-VLC\xspace system block diagram.}\label{fig:sysdiagram} \end{center} \end{figure} \fi \subsection{Design Overview} \figref{fig:sysdiagram} shows the architecture of a Retro-VLC\xspace system. It consists of a ViReader\xspace and a ViTag\xspace. The ViReader\xspace resides on the lighting infrastructure, consisting of an illumination LED and transmission logic (termed ViReader-Tx\xspace hereafter), plus a light sensor and the subsequent receiving circuit (ViReader-Rx\xspace). The ViTag\xspace\ consists of a light sensor and receiving circuits (ViTag-Rx\xspace), together with a retro-reflector, a modulating LCD and other circuitry components (ViTag-Tx\xspace). The ViReader-Tx\xspace and ViTag-Rx\xspace together form the \textit{downlink} visible light channel, and the ViTag-Tx\xspace and ViReader-Rx\xspace together form the \textit{uplink}. Retro-VLC\xspace operates as follows: \paragraph{Downlink} For the downlink communication, the ViReader\xspace sends out information by modulating the carrier using On/Off Keying (OOK) and employing Manchester coding. This signal is captured by the light sensor of the ViTag\xspace, then amplified, demodulated and decoded by ViTag-Rx\xspace in the analog domain. \paragraph{Uplink} For the uplink communication, the MCU on the ViTag\xspace controls the LCD to modulate the light carrier reflected by the retro-reflector fabric. The reflected light travels back to the light sensor that is collocated with the LED. Upon capture, the weak signal is first amplified with a differential amplifier to mitigate noise, then further amplified, demodulated, digitized and finally decoded.
Special logic has been designed to account for the possible clock drift at the ViTag\xspace when modulating the reflected carrier, as we use a cheap RC oscillator to avoid the high energy cost and overly large size of crystal oscillators. The downlink and uplink can work concurrently on their respective bands; hence the system is capable of full-duplex operation. Normally, when there is no traffic, the ViReader-Tx\xspace sends out the carrier by switching the LED light at a high frequency $f_0$, which should be fast enough to avoid perceivable flickering (i.e., $f_0 \gg 200$Hz). In our implementation, we set $f_0$ to $1MHz$. We support dimming of the LED by changing its DC bias. Both the receiving logic on the ViReader\xspace and that on the ViTag\xspace (when turned on) keep monitoring their own incoming light channel. With this design, a ViTag\xspace can initiate communication with the ViReader\xspace. An alternative design would be to turn on the ViTag-Tx\xspace only when the ViTag\xspace\ receives certain information. This is the half-duplex mode, where only the ViReader\xspace can initiate a communication session, similar to how existing RFID systems work.
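The clock-drift handling mentioned above (the sliding-window multi-symbol match filter) can be sketched as follows. The window size, template length and correlation scoring are illustrative assumptions; the idea is that the expected waveform for the next few symbols is correlated at candidate offsets around the nominal symbol boundary, and the best-scoring offset re-anchors symbol timing despite the ViTag\xspace's drifting RC oscillator.

```python
# Sketch of a sliding-window multi-symbol match filter. The template is
# the expected waveform for the next few symbols; it is slid over a small
# window of offsets around the nominal boundary, and the best correlation
# re-anchors the decoder's notion of symbol timing. Parameters illustrative.

def best_offset(samples, template, center, search):
    """Return the offset in [-search, search] maximizing correlation."""
    def score(off):
        seg = samples[center + off : center + off + len(template)]
        return sum(s * t for s, t in zip(seg, template))
    return max(range(-search, search + 1), key=score)
```

Re-estimating the offset every few symbols bounds the accumulated drift, which is why no explicit clock synchronization between ViReader\xspace and ViTag\xspace is needed.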
\section{Our Design Principle} Our goal is to design primitives that enhance visible light communication capabilities on battery-free devices while preserving user privacy and the security of both the transmitters and the receivers, given the omnipresence of readily available LEDs that serve as lighting devices. The key challenge in achieving this is two-fold.
First, devices operating in visible light bands are power-intensive because of the broad bandwidth these bands can provide. Second, the noise caused by ambient signals on the visible light spectrum, and the interference caused by the data transmitted by the system itself, occupy the same band in which the receiver expects to receive. To address these challenges, we use the following guiding principles: we use analog components and recycle energy as much as possible on the tag, to enable the tag to transmit to the reader at a decent data rate. Also, we diminish the scattering area as much as possible by making use of directional backscattering materials with a small field of view (FoV). Such approaches, as we show in the rest of the paper, can provide an order of magnitude reduction in the power consumption of these communication primitives and in the scattering area of the signals on the backscattering channel. In the rest of this paper, we describe ViTag\xspace, our battery-free tag, and show how it can enable backscatter communication with no battery and with higher energy efficiency, with respect to communication range, than tags with active LEDs. We then describe ViReader\xspace, our system that can be easily integrated into commercial LED lights, transmitting data with no discernible flicker and with dimming support, and receiving at \hl{hhh dB} signal-to-interference-plus-noise ratio (SINR). Finally, we show that our designs can be used to enable concurrent transmissions in a network of battery-free devices without the need for synchronization. \section{Related Work} Our work is related to prior work on VLC systems and backscatter communication systems: \vskip 0.05in\noindent{\bf (a) VLC Systems:} Recently, there have been many efforts exploring communication media wherein visible light carries information.
These works, however, either handle only one-way communication without an uplink~\cite{flawedsys1,flawedsys2,flawedsys3,flawedsys4}, or operate in a two-way fashion with both sides powered by batteries~\cite{led2led1,led2led2,led2led3}, which limits their real-world practicality. Specifically, LED-to-phone systems~\cite{location1,location2,location3} only support downlink transmissions, targeted at phone localization. LED-to-LED systems~\cite{led2led4,led2led5} consider visible light networks in which neither end is meant to be mobile or battery-free. By contrast, our work augments existing systems with an additional uplink channel from the mobile device to the LED on the same band as the downlink, with an emphasis on low-power design and system robustness. \vskip 0.05in\noindent{\bf (b) Backscatter Systems:} Backscattering provides transmission capability for extremely low-power devices, removing the need for devices to actively generate signals. The technique has been primarily used by RFID tags~\cite{rfid1,rfid2}. Recently, Wi-Fi~\cite{abc3} and TV-based~\cite{abc1,abc2} systems have started employing and advancing this technique. Our Retro-VLC\xspace\ system also achieves a low-energy design through backscattering, and shares a design principle with \cite{abc1,abc2,abc3}: using analog components on the energy-constrained end. The major difference lies in the fact that we backscatter visible light using a retro-reflector, whereas the ambient backscatter systems backscatter radio waves. On the tag side, we use a light sensor to receive and a retro-reflector to send (by reflection) information, which also differs from the shared antenna and RF front-end in other backscattering systems. As a result, we can easily achieve full duplex, while other systems are essentially half-duplex and require intricate techniques and significant overhead to achieve full duplex~\cite{fullduplex1,fullduplex2,fullduplex3}.
In addition, because of their backscattering nature, these wireless systems tend to expose their transmissions to a wide surrounding area, giving side readers a good chance to overhear the information being transmitted~\cite{abc1,abc2,abc3}. By contrast, ViTag\xspace\ relies on visible light communication, which implies that eavesdroppers are easily discernible. The use of retro-reflectors further constrains the uplink transmission to stay along the tag--reader path. As a result, our system inherently comes with good security properties, while other systems have to enhance their security with extra effort~\cite{eavesdrop1,eavesdrop2}. \section{Retro-VLC\xspace\ Design} \begin{figure*}[!t] \vskip -0.1in \centering {\footnotesize \begin{tabular}{cc} \epsfig{file=../illustrations/tag.eps, width=2\columnwidth} \\ {(a) ViTag\xspace\ Block Diagram}\\ \epsfig{file=../illustrations/reader.eps, width=2\columnwidth} \\ {(b) LED Block Diagram}\\ \end{tabular} } \vskip -0.1in \caption{\footnotesize{\bf System Diagram.} Blah Blah.} \label{fig:sysdiagram} \vspace{-1em} \end{figure*} Retro-VLC\xspace\ is a new form of communication in which battery-free ViTag\xspace\/s can communicate with LEDs without any additional infrastructure. Harvesting LED light energy, a ViTag\xspace\ reflects visible light signals to communicate. Because of the ubiquitous deployment of illuminating LEDs, such a system will enable ubiquitous communication at an unprecedented scale. It will also enhance the security of current RFID systems, as it limits the uplink signal exposure to line of sight, and the use of retro-reflectors further focuses the signal into an even thinner area, leaving less chance for hidden attackers and sniffers to tamper with the system. Designing such a system, however, faces several challenges, which we break down in the following subsections, where we also describe how our design addresses them.
\subsection{Overview} Fig.~\ref{fig:sysdiagram} (a) shows a block diagram of our ViTag\xspace\ design. It consists of a transmitter, a receiver, an MCU and a harvester. The transmitter and receiver communicate by modulated backscattering of visible light signals, and the harvester absorbs energy from visible light to power the device. Further, they operate independently of each other. When the receiver detects an incoming data packet, it immediately wakes the MCU by raising the tag voltage to a certain value, enabling the analog circuit to start receiving. The MCU then shuts down the analog receiving circuit and powers up the transmitting circuit. The receiving circuit and the transmitting circuit work alternately to save energy. Alongside the communication modules is the energy harvester, which consists of a number of solar cells that convert visible light energy into electric energy. The harvester sets the upper bound on the power ViTag\xspace\ can draw, posing a strict constraint on the design of each component on the tag. Fig.~\ref{fig:sysdiagram} (b) shows a block diagram of our ViReader\xspace\ design. The major part of it is an LED bulb that emits light fluctuating too fast for human eyes to detect any flickering. ViReader\xspace\ builds on off-the-shelf LEDs and makes minor modifications to them. It consists of a transmitter, a receiver and an MCU. The transmitter encodes and modulates the data, feeding it to the LED for transmission, while keeping the LED flicker-free and supporting dimming. The transmitter also keeps the goal of making ViTag\xspace\ battery-free in mind, emitting light with data encoded such that ViTag\xspace\ can demodulate and decode it at a low energy cost. The receiver handles the light backscattered from ViTag\xspace, cancelling noise and interference as much as possible. The MCU processes the data transmitted from ViTag\xspace.
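The alternating receive/transmit cycle described above can be sketched as a small state machine. This is a toy model of our own: the state names, the step function and the wake-up threshold are illustrative assumptions, not taken from the actual ViTag\xspace\ firmware.

```python
# Toy sketch of the tag's alternating receive/transmit duty cycle.
# State names and the wake-up threshold are our own illustration,
# not values from the actual ViTag firmware.

HARVEST, RECEIVE, TRANSMIT = "harvest", "receive", "transmit"
WAKE_VOLTAGE = 2.1  # volts; assumed wake-up threshold, for illustration only

def next_state(state, voltage, packet_done):
    """Advance the tag one step through its duty cycle."""
    if state == HARVEST:
        # The receiver wakes the MCU once the harvested voltage is
        # high enough and a packet is coming in.
        return RECEIVE if voltage >= WAKE_VOLTAGE else HARVEST
    if state == RECEIVE:
        # After the packet is in, the MCU shuts down the analog receive
        # chain and powers the transmitting circuit instead.
        return TRANSMIT if packet_done else RECEIVE
    # TRANSMIT: after replying, go back to harvesting and listening.
    return HARVEST
```

The point of the alternation is that the receive chain and the transmit chain are never powered at the same time, which keeps the instantaneous draw below what the harvester can supply.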
Next, we describe the design of the transmitter and receiver on the ViTag\xspace\ and the LED in more detail. \subsection{ViReader\xspace\ Transmitter} \label{subsec:LEDtrans} As shown in Fig.~\ref{fig:sysdiagram} (b), the transmitter on ViReader\xspace\ is composed of a crystal oscillator that runs at 1\,MHz\footnote{The fastest flickering rate of our LED is 1\,MHz.}, and an MCU followed by two amplifiers. The MCU-generated bits toggle the crystal oscillator output to on-off-key (OOK) the carrier with Manchester encoding. The modulated signal is then amplified and fed to the LED. To make sure the brightness of the LED does not differ between the transmitting and idle states, we add a DC component to the signal, chosen such that the average light strength while transmitting is the same as when solely lighting. The magnitude of the DC component is adjustable, so that we can dim the LED emission. In addition to the dimming support, the ViReader\xspace\ transmitter avoids flicker. We use a 10\,kHz data rate, which far exceeds the frequency (200\,Hz) below which human eyes perceive fluctuations, which would be disturbing and unacceptable. \subsection{ViTag\xspace\ Receiver} \begin{tcolorbox} \vskip 0.05in\noindent{\bf Challenge:} Demodulating and decoding the high-throughput LED-transmitted data is power-consuming. \vskip 0.05in\noindent{\bf Solution Principle:} Use as many analog components as possible (to avoid ADCs) and avoid complicated digital signal processing on the mobile end. Also, design amplifiers that work in an almost-cut-off state to save energy. \end{tcolorbox} Following the flow of the signal, we describe the design of our ViTag\xspace\ receiver. We first use a light sensor\footnote{A normal light sensor has a bandwidth of over 1\,GHz.} that captures the visible light. The captured signals then go through filters, detectors/demodulators, and a feedback amplifier before being sent to a comparator to be digitized.
We use only analog elements and a low-power microcontroller for receiving, and design modules that set the amplifiers to work in an almost-cut-off state, where the energy consumption is extremely low. The block diagram of the receiver is presented in Fig.~\ref{fig:sysdiagram} (a). \subsubsection{Extracting Information from the Visible Light} ViReader\xspace-transmitted signals are captured with a light sensor. The light sensor has an equivalent capacitance, which, with an inductor in parallel, performs preliminary front-end filtering. Next, two amplifiers successively amplify the RF signal while avoiding self-excitation through a feedback mechanism. After preliminary filtering and amplification, the signal is ready for demodulation. To avoid the high power consumption of a conventional envelope detector, our demodulator contains a constant current source, a diode, and a frequency-selective amplifier. The constant current source sets the current that flows into the base of the triode in the frequency-selective amplifier, biasing it to an almost-cut-off state: when the positive half of the zero-mean signal flows in, the amplifier turns on and the signal is amplified; when the negative half flows in, the amplifier is turned off. This process effectively demodulates the signal from the 1\,MHz carrier. Finally, the signal passes through an RC filter, which further smooths it, ready to be fed to a comparator for digitization. Overall, our demodulator saves more energy than a conventional envelope detector because of the low-energy operating region it works in. We also note that the output of our demodulator is smoother, while having a higher gain, than that of a traditional envelope detector, which is favorable in the energy-constrained decoding phase.
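The rectify-then-smooth behavior described above can be checked numerically. The sketch below is a toy model of the signal chain, not the circuit itself: the gain, the RC smoothing factor and the sample rate are invented for the simulation.

```python
import math

# Numerical toy model of the demodulator described above: the almost-cut-off
# amplifier passes (and amplifies) only the positive half of the zero-mean
# carrier, and a first-order RC stage then smooths the result into a
# baseband envelope. Gain, smoothing factor and sample rate are invented
# for the simulation; they are not the circuit's actual values.

GAIN = 5.0    # amplifier gain while conducting
ALPHA = 0.05  # RC smoothing factor, dt / (RC + dt)

def demodulate(samples):
    """Half-wave rectify + amplify, then RC low-pass filter."""
    out, y = [], 0.0
    for x in samples:
        rectified = GAIN * x if x > 0 else 0.0  # triode off on negative half
        y += ALPHA * (rectified - y)            # RC smoothing
        out.append(y)
    return out

# A 1 MHz OOK carrier at 20 samples per cycle: one "on" bit, one "off" bit.
on = [math.sin(2 * math.pi * i / 20) for i in range(400)]
off = [0.0] * 400
envelope = demodulate(on + off)
```

In the simulated output, the envelope settles well above zero while the carrier is present and decays towards zero when it is absent, which is exactly what the comparator stage needs.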
\subsubsection{Decoding on an Ultra-Low-Power Device} While the output of the envelope detector is a smooth wave, it is still analog with continuous values. In principle, a receiver with an ADC can distinguish between the two signal levels by processing the digital samples. Specifically, say we have two signals with different voltages, $V_0$ and $V_1$, $V_0 > V_1$, where $V_0$ and $V_1$ correspond to the power levels for the zero and one bits. To distinguish between them, the receiver would first compute a threshold value. If the duty cycle of the signal is $50\%$ and zeros and ones occur with equal probability, then the threshold is the average of the two power levels. If the received signal is greater than this threshold, we conclude that the received signal is $V_0$; otherwise, we conclude that it is $V_1$. Since we seek to eliminate the need for a full ADC in order to reduce power, the receiver should imitate this operation using analog hardware. Because of the Manchester coding we use (see~\ref{subsec:LEDtrans}), however, the goal of our design differs from that of a typical ADC. A Manchester-encoded signal uses edges to denote bits: an up edge denotes one, and a down edge denotes zero. To align with this pattern, we design a comparator that detects changes in the voltage. First, using a resistor and a capacitor, the comparator sets a time constant that fixes its detection delay to match the input data rate. The comparator continually compares the current analog value $V_{now}$ with the signal value at the last bit $V_{previous}$. If $V_{now} > V_{previous}$, the comparator outputs $1$; otherwise, it outputs $0$. We note that while a traditional design only performs absolute comparison, our design tracks the relative changes in the voltage. Further, using a traditional detector would lead to a jagged waveform, which would in turn cause false comparison results at any edge-based comparator.
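The relative-comparison rule above can be sketched in a few lines. This is a toy model of ours, not the circuit: it shows why comparing $V_{now}$ against $V_{previous}$ survives a drifting baseline, where a fixed absolute threshold would not.

```python
# Toy model of the relative (edge-tracking) comparison described above,
# applied to a Manchester chip stream: each bit is a pair of chips, with an
# up edge (low -> high) denoting 1 and a down edge (high -> low) denoting 0.
# Voltage levels are arbitrary: the decoder only looks at relative changes,
# so a slowly drifting baseline does not break it.

def manchester_encode(bits, lo=0.0, hi=1.0):
    """Expand each bit into its two-chip Manchester representation."""
    chips = []
    for b in bits:
        chips += [lo, hi] if b else [hi, lo]
    return chips

def edge_decode(chips):
    """Within each chip pair, rising voltage -> 1, falling voltage -> 0."""
    return [1 if chips[i + 1] > chips[i] else 0
            for i in range(0, len(chips), 2)]

bits = [1, 0, 0, 1, 1, 0]
chips = manchester_encode(bits)
# Baseline drift mimicking the average-value drift discussed above;
# relative comparison still recovers the original bits.
drifting = [c + 0.002 * i for i, c in enumerate(chips)]
```

Decoding `drifting` yields the same bits as decoding `chips`, since the drift between adjacent chips is far smaller than the chip-to-chip swing.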
Instead, our comparator design suits Manchester coding, which is widely adopted to prevent long runs of consecutive $1$s or $0$s in a bit stream and thus avoids relying on a threshold value that would suffer from drift. The microcontroller then checks the correctness of the packet. In the receiving phase, the microcontroller draws $120\,\mu A$. \subsection{ViTag\xspace\ Transmitter} \label{subsec:tagtrans} \begin{tcolorbox} \vskip 0.05in\noindent{\bf Challenge:} Transmitting with the LCD at a high toggling frequency consumes even more power than the receiver. \vskip 0.05in\noindent{\bf Solution Principle:} Recycle the energy spared by the LCD at every toggle, and use an RC oscillator instead of the crystal oscillator used in the receiving phase. \end{tcolorbox} Our ViTag\xspace\ transmitter transmits by passively backscattering the incoming light. The core of the transmitter is the combination of an LCD and a retro-reflector, which serves as a modulator. To conserve energy, we design an energy reuse module that, in every modulation cycle, recollects $50\%$ of the energy that would otherwise be wasted by the LCD. To save further energy, instead of using power-demanding crystal oscillators, we use a simple RC oscillator to generate the control signals that toggle the LCD. While the RC oscillator has worse clock stability and a lower frequency, we design modules that address these issues in~\ref{subsec:LEDreceiver}. Finally, the LCD requires a voltage high enough to drive it to its fullest capacity, and this voltage cannot be supplied directly by the low voltage the solar cell provides; we design a voltage-boosting module to achieve this. The overall design of the ViTag\xspace\ transmitter is presented in Fig.~\ref{fig:sysdiagram} (a). We now break down the design into the following key points.
\subsubsection{Modulating the Retro-Reflector with an LCD} To avoid actively generating light signals, which would cost far more power than is affordable on a battery-free ViTag\xspace, we instead take advantage of retro-reflectors, which passively bounce the incoming light back. As described in Section~\ref{sec:background}, the retro-reflector has the merit of bouncing the light back in the same direction it arrived from. To modulate the light, we cover the retro-reflector with an LCD. As electromagnetic waves hit the LCD, it lets the light pass through or blocks it depending on the polarization, i.e., the orientation of the liquid-crystal molecules in the LCD, which is controlled by the voltage applied to it. If the applied voltage is large enough, the pixels appear black. The highest voltage applied to our LCD, which turns it completely black, is $6.1\,V$, and the lowest voltage at which it starts to be polarized is $2.1\,V$. We are able to flicker it as fast as $1\,kHz$, which is also our highest data rate on the uplink. Note that other voltage-controlled reflective materials exist that may have a higher flickering rate or lower power; applying one of them might enhance the energy efficiency and capacity of the system. \subsubsection{Energy Reuse} The current that feeds the LCD, which has an equivalent capacitance of $9\,nF$, would be dumped to ground when the LCD discharges. We found, however, that by recollecting this current in the LCD discharging phase, $50\%$ of the energy can be saved. We therefore designed an energy reuse module, illustrated in Fig.~\ref{fig:energyreuse}, that features a DC/DC converter boosting the voltage from the lowest voltage that triggers the LCD to the highest. The lowest voltage, as mentioned earlier, is $2.1\,V$, which is about the voltage the solar cell array can supply.
In the charging phase, in which the LCD turns to the blocking mode, the DC/DC converter boosts the voltage the solar cell supplies to the level at which the LCD reaches its full capacity. This voltage is applied to the LCD when the MCU sets $T_0$ open while $T_1$ is closed. The voltage on the LCD pumps up to $6.1\,V$ and stays there until the discharging phase. In the discharging phase, the MCU sets $T_0$ closed and $T_1$ open. Since a diode blocks the only path along which the LCD could discharge towards ground, the current flowing from the discharging LCD does not go straight to ground; instead, it flows back into the DC/DC converter, charging $C_0$ and $C_1$, until the voltage on the LCD equals that on the solar cell. We note two things about our design. First, the two signals that control the on/off states of the switches are generated by the MCU. When one of the two is set high, the other is set low, and vice versa. As a result, only one of the two triodes $T_0$ and $T_1$ is open at a time, so the LCD is either in a charging phase or in a discharging phase, with no conflict. Second, we always make sure that the data rate is slower than the rate at which the two signals from the MCU can alternate, so that the LCD alternates in a balanced manner without saturating in the long run, i.e., $C_0$ and $C_1$ will not be constantly charged or discharged. Fortunately, in real-world settings the ViTag\xspace\ data rate is at most $1\,kbps$, so the bit period is much longer than the average time needed to charge an empty LCD or discharge a full one.
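The numbers above admit a quick back-of-the-envelope estimate of what the energy reuse module saves. This is our own approximation, not a measurement from the system: we model the LCD as its $9\,nF$ equivalent capacitance swung between $2.1\,V$ and $6.1\,V$.

```python
# Back-of-the-envelope estimate (an approximation of ours, not a measured
# figure) of the energy moved per LCD toggle. The LCD is modeled as a 9 nF
# capacitor swung between roughly 2.1 V and 6.1 V; each charge/discharge
# cycle then moves about (1/2) C (V_hi^2 - V_lo^2) of energy, half of which
# the module recollects instead of dumping to ground.

C = 9e-9      # F, LCD equivalent capacitance (from the text)
V_HI = 6.1    # V, voltage at which the LCD is fully black
V_LO = 2.1    # V, voltage at which polarization starts

energy_per_cycle = 0.5 * C * (V_HI ** 2 - V_LO ** 2)  # joules per toggle
saved_per_cycle = 0.5 * energy_per_cycle              # 50% recollected
```

This works out to roughly $148\,nJ$ per toggle, i.e., on the order of $148\,\mu W$ at the maximum $1\,kHz$ toggling rate, about half of which the module recovers.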
\begin{figure}[!t] \vskip -0.03in \centering { \epsfig{file=../illustrations/EnergyReuseCircuits.eps, width=0.6\columnwidth} } \caption{{\bf Our Energy Reuse Module} \hl{blah}.} \label{fig:energyreuse} \vskip -0.05in \end{figure} \subsection{ViReader\xspace\ Receiver} \label{subsec:LEDreceiver} \begin{tcolorbox} \vskip 0.05in\noindent{\bf Challenge:} The LED receiver must handle time offsets caused by the ViTag\xspace's low-frequency RC-based clock, and has to detect a retro-reflected signal that is 3 orders of magnitude weaker than the interfering LED transmissions. \vskip 0.05in\noindent{\bf Solution Principle:} Design a sliding-window multi-symbol match filter algorithm to remedy clock drifts and LCD-caused signal distortions. \end{tcolorbox} The ViReader\xspace\ captures the signal transmitted by ViTag\xspace\ at a data rate of up to $1\,kbps$ on a 1\,MHz carrier. Upon converting the light signal to a voltage, the ViReader\xspace\ receiver demodulates and decodes the data using RF-end analog circuits and the same microcontroller the ViReader\xspace\ transmitter uses. While the power limit on ViReader\xspace\ is less strict than on ViTag\xspace, the ViReader\xspace\ receiver design still faces multiple challenges. (1) First, the signal sent by the ViTag\xspace\ is mixed with the ViReader\xspace-transmitted signal, which is also on a 1\,MHz carrier. Since the ViReader\xspace\ receiver is much closer to the ViReader\xspace\ transmitter than to the ViTag\xspace\ transmitter, the strength of the ViReader\xspace-transmitted signal, together with its reflections off the surroundings, is 3 orders of magnitude greater than that of the ViTag\xspace-transmitted signal at the ViReader\xspace. As we measured, the typical voltage of the received signal is on the order of millivolts, while the voltage supply for the LED receiver is $8\,V$.
(2) Second, the 1\,MHz-carried signal to be sent by the ViReader\xspace\ transmitter can also leak into the light sensor, which exacerbates the interference. (3) Third, commercial radios that operate around 1\,MHz can also interfere with the signal carrying useful information. (4) Fourth, the noise caused by the movement of humans and other objects around can be large; the voltage output of the light sensor on the ViReader\xspace\ is therefore prone to saturation. (5) On top of these, as noted in~\ref{subsec:tagtrans}, the received signal suffers from timing offsets caused by the low-cost, low-frequency clock on the ViTag\xspace. In the rest of this section, we describe our ViReader\xspace\ receiver design (Fig.~\ref{fig:sysdiagram} (b)), which addresses these challenges. \subsubsection{Extracting Backscatter Information from Noise} Since the signal-to-interference ratio is extremely low ($\sim$mV against $\sim$V), we have to amplify the received signal by more than $1000\times$ in magnitude. Using a single amplifier is not feasible, however: a single stage with such a huge gain is very likely to introduce self-excitation into the circuit, and the noise it introduces tends to be unacceptable. We therefore apply five successive amplifiers instead. The detailed design is as follows. To extract the useful information from the noisy background, we first apply an amplifier at the RF end. This amplifier directly follows the light sensor, with an LC filter that performs preliminary band-pass filtering. Subsequently, we add a differential amplifier after a transmission line that decouples the transmitted signal from leaking towards the RF end; an impedance matching module sits between the light sensor filter and the transmission line. The center frequency of the differential amplifier is set to 1\,MHz with the help of an LC module, and its gain is controlled by the microcontroller.
The third and fourth amplifiers also use LC modules that perform further RF-band filtering. The fourth amplifier employs a feedback mechanism; its purpose is to further increase the signal gain and filter the residual RF-band noise, while keeping the circuit farther from self-excitation. After these amplification stages, the signal is sent to an envelope detector, which performs regular demodulation to recover the baseband signal from the 1\,MHz carrier. Finally, the same comparator used on the ViTag\xspace\ digitizes the Manchester-encoded signal. \subsubsection{Decoding in the Presence of Clock Offsets} \label{subsubsec:clockoffset} Traditionally, the following approaches have been used to extract the timing information from the signal and perform the decoding thereafter. \vskip 0.05in\noindent{\bf Peak (edge) detection-based algorithms} differentiate the samples to find the extreme (discontinuous) points in the signal, which denote the clock beats. However, as shown in Fig.~\ref{fig:dynamicRange}, the received signal has a huge dynamic range. If the signal looks like case (a), (b) or (c), the peak approach fails, as the detected peak is always ahead of or behind the actual one. In the other cases, the edge approach fails, as the edge is not clearly identifiable. \vskip 0.05in\noindent{\bf Averaging-based algorithms} average samples of the signal to generate a threshold; samples above this threshold denote ones and samples below it denote zeros. However, as shown in Fig.~\ref{fig:dynamicRange} (e), when the average value of the signal drifts within a packet, which may happen for the reasons explained earlier, this approach will likely fail. While one could try to identify a trend in the changing average value, it is hard to do so accurately in practice.
The reason is that, as the average value changes coherently, a wrong prediction of the average value causes severe errors in the decoding process. \vskip 0.05in\noindent{\bf Multi-symbol correlation algorithms} use a string of symbols to represent one bit and correlate on the receiver side, decoding each bit by finding the peak of the correlation result. Such an algorithm can pinpoint the clock and hence decode the signal in a low-SNR situation, but it is not suitable for low symbol-rate channels like the uplink in our case. \vskip 0.05in\noindent{\bf One-symbol match filter algorithms} try to match the wave form of the signal and detect the convolution peaks to determine the accurate timing. However, because the LCD on the tag side, which shapes each bit, is roughly equivalent to a capacitor, the basic wave form tends to be a sawtooth, as shown in Fig.~\ref{fig:dynamicRange} (d), when there is no top or bottom truncation. Further, as the bits are Manchester-encoded, the rising edge and the falling edge are not necessarily symmetric in time. A typical example is three chips that contain one high-volt chip followed by two low-volt chips, in which case the correlation peak is skewed compared to the case where the voltage is high for one chip and low for the next; the timing is therefore biased. We develop a novel algorithm to decode a Manchester-encoded signal with a large dynamic range, which we term a sliding-window multi-symbol match filter algorithm. The approach extends the one-symbol match filter with clock adjustment based on a multi-symbol match filter. The basic idea is to avoid the biased timing caused by the skewed correlation peaks of the one-symbol match filter by matching all possible patterns of the wave forms a Manchester-encoded signal can take, and iteratively adjusting the local clock every bit period.
To begin with, the algorithm uses standard correlation to detect the preamble of a packet, exploiting the fact that the preamble differs from any possible bit pattern in the payload. Upon finding the start of a packet, the algorithm first estimates the length of a bit using its knowledge of the ViTag\xspace\ clock, which has a known nominal frequency yet is subject to offsets. The algorithm then iterates over the following two steps: \vskip 0.05in\noindent{\bf Step 1: Wave Form Recognition.} As every bit is Manchester-encoded, one bit contains two chips. The algorithm correlates a one-bit-period square wave containing a high-volt chip and a low-volt chip (not necessarily in this order) against the first three bit-periods of the signal. After this, the algorithm knows which of the eight possible wave forms these three bits are. \vskip 0.05in\noindent{\bf Step 2: Local Time Recovery.} The algorithm then adjusts the clock estimate by correlating the three bits from the raw signal against the corresponding gold pattern among the eight patterns. Ideally, the correlation yields a peak in the middle of the three bits. However, due to the time variance in the start of the packet and the frequency deviation between the reader clock and the tag clock, the peak of the correlation result does not necessarily align with the ideal peak. To bound the clock estimation error along the bits in a packet, we perform a linear regression $t = k\cdot s + s_0$ to estimate the timing of the next three bits, using the estimated beginning of the packet as $s_0$ and the estimates of the current three bits and their timing as the training data $\{s,t\}$. In every round, $k$ is re-estimated; the algorithm then moves the three-bit window one bit forward and jumps back to Step 1, until reaching the end of the packet. This time recovery algorithm, designed specifically for our coding and modulation, bounds the error in $k$ from propagating as the decoding proceeds.
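The clock refit in Step 2 can be sketched with synthetic numbers. In the real receiver the input is the correlation peak of each three-bit window against its matched Manchester pattern; here we fabricate those peak times and show only the least-squares refit of $k$ in $t = k\cdot s + s_0$.

```python
# Simplified sketch of the Step-2 clock tracking, with synthetic data.
# Only the least-squares refit of k in t = k*s + s0 is modeled; the window
# positions and peak times below are fabricated for illustration.

def refit_clock(s_list, t_list, s0):
    """Least-squares slope k through the points (s, t - s0)."""
    num = sum(s * (t - s0) for s, t in zip(s_list, t_list))
    den = sum(s * s for s in s_list)
    return num / den

# Synthetic packet: the tag clock runs 2% fast (true k = 1.02) and the
# packet starts at sample s0 = 100; observed peaks carry small jitter.
s0 = 100.0
windows = [30.0, 60.0, 90.0, 120.0]   # nominal window positions s
jitter = [0.3, -0.2, 0.1, -0.1]
peaks = [1.02 * s + s0 + j for s, j in zip(windows, jitter)]

k = refit_clock(windows, peaks, s0)
predicted_next = k * 150.0 + s0  # predicted timing of the next window
```

Because every window contributes to the fit, a single jittery correlation peak moves the estimate of $k$ only slightly, which is the sense in which the per-window refit keeps the error in $k$ from propagating.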
We formally describe this in the following lemma, for which we give a proof in the Appendix. \begin{lemma} The time recovery renders the clock estimation error convergent to zero if a packet contains an infinite number of bits. \label{lem:lemma1} \end{lemma} We note that we choose a three-bit time span as the correlation unit because three bits are the smallest number of bits that contain all possible wave forms when the signal is Manchester-encoded and filtered by the LCD capacitance. Following the local time recovery and decoding, ViReader\xspace\ checks the correctness of the received bits using CRC8. The polynomial used is $x^8 + x^2 + x + 1$. \begin{figure*}[!t] \vskip -0.1in \centering {\footnotesize \begin{tabular}{ccccc} \epsfig{file=../illustrations/waveform1.eps, width=0.35\columnwidth} & \epsfig{file=../illustrations/waveform2.eps, width=0.35\columnwidth} & \epsfig{file=../illustrations/waveform3.eps, width=0.35\columnwidth} & \epsfig{file=../illustrations/waveform4.eps, width=0.35\columnwidth} & \epsfig{file=../illustrations/waveform5.eps, width=0.35\columnwidth}\\ {(a) Normal} & {(b) Up-truncated} & {(c) Down-truncated} & {(d) Up and Down-truncated} & {(e) Average-drifted}\\ \end{tabular} } \vskip -0.1in \caption{\footnotesize{\bf Varying Wave Patterns.} Blah Blah.} \label{fig:dynamicRange} \vspace{-1em} \end{figure*}
We note the following about our design: Firstly, we use a white-colored LED, which emits electromagnetic waves of wavelengths between 380 and 780 nm. The highest flickering frequency of the LED with full on/off transitioning is 1MHz, and we exploit this flickering rate to its fullest, in order to leave room for a downlink data rate as high as possible. Secondly, different components in the transmitter on the reader side require different voltage supplies. We use two DC/DC converters to boost the voltage for the switch and the amplifiers. Specifically, the voltage used to supply the crystal oscillator, the amplifiers for the MCU-output signals and the switch is 5V, and the voltage for the amplifiers for the RF signals is 12V. Finally, the 1MHz frequency, as mentioned earlier, is used because of the full flickering capability we would like to draw from the LED. However, around this frequency, within a bandwidth on the order of 100kHz, there are commercial radios using AM modulation. For example, in America there are radios from 540 through 1610 kHz with 10 kHz spacing; another band spans a similar spectrum from 531 to 1611 kHz with 9 kHz spacing. These are sources of interference to what we transmit with the LED, and we address this issue with the receiver design on the reader, which also deals with other sources of interference and noise. \subsection{ViTag\xspace\ Receiver} Following the flow of the signal, we now describe the design of our ViTag\xspace\ receiver. Designing a visible light receiver entails a light sensor that captures the fluctuation of the visible light with sufficient bandwidth. The captured signals then go through filters, detectors/demodulators, and another feedback amplifier before being sent to a comparator to be digitized. On top of these, the design should be energy-efficient, such that the harvested energy is enough to supply the operation of the tag.
We avoid using power-demanding ADCs while achieving \hl{data rate} at \hl{x} accuracy at a distance of \hl{sss}, using only analog elements and a low-power MCU. The block diagram of the receiver is presented in Fig.~\ref{fig:sysdiagram} (a). \subsubsection{Extracting Information from the Visible Light} ViReader\xspace-transmitted signals are first sensed by a BPW34 light sensor. Since the light sensor has an equivalent capacitance, an inductor placed in parallel with it performs preliminary front-end filtering. Since natural light has no component at 1MHz, this filter does a good job of singling out the useful signal and filtering out light from other sources. Next, two amplifiers successively increase the RF signal strength while avoiding self-excitation through feedback mechanisms. After the preliminary filtering and amplification, the signal is ready for demodulation. These processing blocks consume power up to \hl{aaa}, of which the light sensor consumes \hl{ddd}, and they are shut down once the tag switches into transmitting mode in order to save energy. Next, we describe the mechanism of our simple yet powerful signal detector under a strict energy constraint. The circuit is a variant of an envelope detector. Unlike a conventional envelope detector, it is targeted specifically at energy-constrained conditions, while providing a higher gain and a bigger bandwidth. It is therefore especially suitable for ViTag\xspace, where the downlink data rate is high yet battery-free operation is the goal. At a high level, it contains a constant current source, a diode, and a frequency-selective amplifier. The constant current source sets the current that flows into the base of the transistor in the frequency-selective amplifier, biasing it in the near-conduction state, such that when the positive half of the zero-mean signal flows in, the amplifier is turned on and the signal is amplified.
In contrast, when the negative half of the signal flows in, the amplifier is turned off, so that in the wave that comes out of the amplifier, half of the high-frequency component, the carrier wave, is cut off. The signal then passes through an RC filter, which further smooths it. This smoothed signal has little carrier-wave component left in it, ready to be fed to a comparator for digitization. Overall, the bandwidth of this envelope detector is \hl{sss}, the power that supplies it is \hl{aa}, and the gain it applies to the input signal is \hl{sss}, as compared to \hl{aa, bb, cc}, respectively, when using a normal envelope detector. Considering our energy budget on ViTag\xspace, this amount of saving is significant. We illustrate the signal at the input port, at the output port of a traditional envelope detector and at the output port of our low-power envelope detector in Fig.~\ref{fig:sysdiagram} (a). It shows that compared to the output of a traditional detector, the output of our detector is smoother, while having a higher gain. We will see in the following section that the smoother version of the signal is easier to detect using our comparison decoder under the Manchester coding assumption. \subsubsection{Decoding on an Ultra-Low-Power Device} Challenges: \begin{Itemize} \item Demodulating and decoding the high-throughput LED-transmitted data is power consuming. \item Transmitting with the LCD at a high toggling frequency consumes even more power than the receiver. \end{Itemize} Solution principles: \begin{Itemize} \item Use as many analog components as possible (to avoid analog-to-digital converters (ADCs)) and avoid complicated digital signal processing on the mobile end. \item Recycle energy spared by the LCD at every toggle. \end{Itemize} The output of the envelope detector is a smoothed wave, yet it is still analog, with continuous values.
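The envelope detection described above can be approximated numerically. The sketch below (in Python, with an assumed cutoff frequency and sample rate, and an ideal diode standing in for the near-conduction transistor) half-wave rectifies an on-off-keyed 1MHz carrier and smooths it with a first-order RC stage:

```python
import math

def envelope_detect(samples, fs, fc_cut=20_000.0):
    """Half-wave rectify, then first-order RC low-pass (assumed values)."""
    rc = 1.0 / (2 * math.pi * fc_cut)
    alpha = (1.0 / fs) / (rc + 1.0 / fs)   # discrete-time RC coefficient
    y, out = 0.0, []
    for x in samples:
        rect = x if x > 0 else 0.0         # diode passes only the positive half
        y += alpha * (rect - y)            # RC smoothing
        out.append(y)
    return out

def ook_signal(bits, fs=10_000_000, carrier=1_000_000, chip_rate=10_000):
    """Zero-mean 1 MHz carrier gated by the chip sequence (illustrative)."""
    n_per_chip = fs // chip_rate
    sig = []
    for b in bits:
        for n in range(n_per_chip):
            sig.append(b * math.sin(2 * math.pi * carrier * n / fs))
    return sig
```

The envelope output is high during "on" chips and decays toward zero during "off" chips, which is the smoothed baseband the comparator then digitizes.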
In principle, a receiver with an ADC can distinguish between the two signal levels by processing the digital samples. Specifically, say we have two signals with different voltages, $V_0$ and $V_1$, $V_0 > V_1$, where $V_0$ and $V_1$ correspond to the power levels for the zero and one bits. To distinguish between them, the receiver would first compute a threshold value. If the duty cycle of the signal is $50\%$, and the probabilities of the occurrence of zeros and ones are equal, then the threshold is the average of the two power levels. When the received signal is greater than this threshold, we conclude that the received signal is $V_0$; otherwise, we conclude that the received signal is $V_1$. Since we choose to eliminate the need for a full ADC in order to reduce power, the receiver has to imitate this operation using analog hardware. This design, though, is slightly different from a conventional ADC, in favor of the Manchester coding we use (see~\ref{subsubsec:clockoffset}). The Manchester-encoded signal uses edges to denote bits. Specifically, an \hl{up} edge denotes one, and a \hl{down} edge denotes zero. To accord with this pattern, we design a comparator that detects changes of the voltage. First, using a resistor and a capacitor, the comparator sets a time constant that determines its detection delay, corresponding to the input data rate. The comparator consistently compares the current analog value $V_{now}$ with the signal value at the last bit $V_{previous}$. If $V_{now} > V_{previous}$, the comparator outputs \hl{one}; otherwise, it outputs \hl{zero}, after a time span of the time constant. We note that while a traditional design only performs absolute comparison, our design traces the relative changes in the voltage. Further, using a traditional detector would lead to a jagged waveform, which would in turn cause false comparison results at any edge-oriented comparator.
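The relative-change comparison can be modeled as follows (illustrative Python; chip duration and voltage levels are assumed, and the RC delay is idealized as exactly one chip period). With Manchester coding, the mid-bit transition then directly yields the data bit:

```python
def relative_comparator(samples, samples_per_chip):
    """Compare each chip's level with the previous chip's level.
    Output 1 on a rising change and 0 otherwise, mirroring the
    RC-delay comparator described in the text (idealized model)."""
    # one representative sample per chip (taken mid-chip)
    levels = [samples[i + samples_per_chip // 2]
              for i in range(0, len(samples), samples_per_chip)]
    return [1 if cur > prev else 0 for prev, cur in zip(levels, levels[1:])]
```

With the convention that a rising mid-bit edge denotes one and a falling edge denotes zero, every other comparator output (the mid-bit comparisons) recovers the data, with no absolute threshold involved.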
Instead, our comparator design is more suitable for Manchester coding, which is widely adopted for preventing long runs of consecutive ones or zeros in a bit stream, avoiding the need to pick a threshold value that is subject to drifting. \subsection{ViTag\xspace\ Transmitter} The design of our ViTag\xspace\ transmitter builds on off-the-shelf Liquid-Crystal Displays (LCDs) and retro-reflectors. The LCD and the retro-reflector combined serve as the modulator for the signal generated by an MCU and amplified on the tag. The energy the LCD spares in each modulation cycle is partially recollected by an energy reuse module that saves $50\%$ of the energy the tag would have wasted otherwise. The amount of energy saved is significant for reducing the area of the tag. In addition, the LCD requires a voltage high enough to drive it to its fullest capacity, and this high voltage cannot be supplied directly from the low voltage the solar cells provide. We design a voltage boost module that achieves this with low energy consumption. Overall, the design of the ViTag\xspace\ transmitter is presented in Fig.~\ref{fig:sysdiagram} (a). \subsubsection{LCD and Retro-Reflector-based Modulation} In the ViTag\xspace\ design, we use a retro-reflector to backscatter the incoming light and an LCD put on top of the retro-reflector to modulate the light. In our design, we use a \hl{sss $\times$ bbb} retro-reflector. The reflectivity of the retro-reflector is \hl{sss}, and the Field of View (FoV) is \hl{sss}, which effectively signifies to what degree the reflected signal is focused. The LCD we use is sized \hl{aaa}. As electromagnetic waves hit the LCD, the LCD lets the light pass through or blocks it depending on the polarization, i.e., the orientation of the liquid-crystal molecules in the LCD, which is controlled by the voltage applied to it.
If the applied voltage is large enough, the liquid-crystal molecules are almost completely untwisted and the polarization of the incident light is not rotated as it passes through the liquid-crystal layer. This light will then be mainly polarized perpendicular to the second filter, and thus be blocked, and the pixel will appear black. The highest voltage applied to the LCD that turns it completely black is \hl{6.1V}, and the lowest voltage with which it starts to be polarized is \hl{2.1V}. We are able to toggle it as fast as \hl{500Hz}. We use this fastest rate as the communication data rate on the uplink, yielding an LCD power consumption of \hl{www}. Note that other voltage-controlled reflective materials exist that may offer higher toggling rates or lower power; applying one of them might enhance the energy efficiency and the capacity of the system. \subsubsection{Energy Reuse} The LCD has an equivalent capacitance that corresponds to its periodic charging and discharging. In the discharging phase, if the transmitter has a path along which the current can go all the way from the LCD to the ground, then the energy is wasted. In response, we design an energy reuse module that collects the energy in the LCD discharging phase. The design features a DC/DC converter that boosts the voltage from the lowest voltage that triggers the LCD to the highest. The lowest voltage, as mentioned earlier, is \hl{fff}, which is about the voltage the solar cell array can supply. In the charging phase, in which the LCD turns to the blocking mode, the DC/DC boosts the voltage the solar cell supplies to the highest voltage the LCD takes in, as illustrated in Fig.~\ref{fig:sysdiagram} (a). This voltage is applied to the LCD through switch $T_0$, which the MCU sets to open. In this phase, switch $T_1$ is set to close, so the voltage on the LCD pumps up until it reaches the discharging phase.
In the discharging phase, the MCU sets the switch $T_0$ to close, and instead sets the switch on the discharging path of the LCD, $T_1$, to open. Since a diode blocks the only path along which the LCD would otherwise discharge to the ground, the current that flows from the discharging LCD does not go straight to the ground; instead, it flows back to the DC/DC, charging \hl{$C_0$ and $C_1$}, until the voltage reaches a balance between the LCD and the solar cell. The two signals that control the on/off state of the switches are generated by an MCU. When one of the two is set to high, the other is set to low, and vice versa, such that only one of the transistors $T_0$ and $T_1$ is open at a time, and so the LCD is either in a charging phase or in a discharging phase, without any conflicts. In addition, we need to make sure that the data rate is slower than the rate at which the two signals from the MCU alternate, so that the voltage on the capacitor-equivalent LCD alternates in a balanced manner in the long run, and $C_0$ and $C_1$ will not constantly be charged or discharged. ViTag\xspace's \hl{250 bps} data rate corresponds to a bit period longer than \hl{ccc}, the average time used for charging an empty LCD or discharging a full LCD. \subsection{ViReader\xspace\ Receiver} Challenges: \begin{Itemize} \item Detecting the retro-reflected signal against same-band interference with the LED receiver is hard. \item The LED must handle clock drifts without explicit synchronization between the LED and the mobile device. \end{Itemize} Solution principles: \begin{Itemize} \item Design amplifiers that work at an almost cut-off state to further reduce energy consumption. \item Design a sliding-window multi-symbol match filter algorithm to remedy clock drifts and LCD-caused signal distortions. \end{Itemize} ViReader\xspace\ uses a light sensor to capture the signal transmitted by ViTag\xspace\ at a 250 bps data rate, modulated on a 1MHz wave.
Upon converting the fluctuations of the light to voltage, the ViReader\xspace\ receiver demodulates and decodes the data using RF-end analog circuits and an MCU. While the power limit on ViReader\xspace\ is less stringent than on ViTag\xspace, which makes the design more flexible, the challenges the ViReader\xspace\ receiver faces are multi-fold. (1) First, the signal that carries useful information sent by ViTag\xspace\ is mixed with the ViReader\xspace-transmitted signal, which is on a 1MHz carrier as well. What is worse, since the ViReader\xspace\ receiver is much closer to the ViReader\xspace\ transmitter than to the ViTag\xspace\ transmitter, the strength of the ViReader\xspace-transmitted signal, together with its reflections off the surroundings, is by our measurement \hl{ss} times greater at ViReader\xspace\ than the ViTag\xspace-transmitted signal, making it difficult for the ViReader\xspace\ receiver to decouple the two components. (2) Second, the 1MHz signal to be sent by the ViReader\xspace\ transmitter can also leak to the light sensor, which exacerbates the interference at the light sensor. (3) Third, commercial radios that run at 1MHz could also interfere with the signal that carries useful information. (4) Fourth, the noise caused by the movement of humans and other objects around, along with the circuit noise, can be huge, making the voltage output of the light sensor on ViReader\xspace\ prone to saturation. In the rest of this section, we describe our ViReader\xspace\ receiver design (Fig.~\ref{fig:sysdiagram} (b)) that addresses all these challenges, and the techniques that finally make the ViTag\xspace\ and ViReader\xspace\ system practical. \subsubsection{Extracting Backscatter Information from Noise} Compared with the background noise and residual interference, the signal strength is weak. For the ViReader\xspace\ receiver, the SINR is \hl{dB}.
Specifically, the information bits received \hl{at the light sensor (or after the interference cancellation)} are at \hl{dB}, and the interference and noise combined are at \hl{dB}. What we finally send to the digital circuit to decode should be a much amplified version of the signal after interference cancellation. Therefore, we need to filter the carrier wave out of the signal and amplify it by \hl{dB}. Using a single amplifier, though, is not enough: too large an amplification in one stage carries a much higher risk of introducing self-excitation into the circuit, and the noise introduced by a single amplifier tends to be large. So we have chosen to apply four separate amplifiers that straddle from the RF end to the baseband, while performing filtering at each amplifier. The detailed design of this part is as follows. To extract the useful information from the noisy background, we first apply an amplifier at the RF end with an amplification of \hl{aa}. This amplifier directly follows the light sensor \hl{or the interference cancellation}, with a preliminary LC filter that removes part of the channel noise. Subsequently, we add a differential amplifier following a transmission line, which keeps the transmitted signal from leaking toward the ViReader\xspace\ transmitter RF end. The amplification of the differential amplifier is \hl{aa}, with an inductor in parallel with a capacitor that picks the signal out of the carrier wave. Upon filtering, the signal is led to a third amplifier with an amplification of \hl{vv}. The third amplifier simultaneously performs filtering, which further picks out a purer signal wave. What follows is a fourth amplifier with feedback and an LC filter. This amplifier further increases the signal gain and filters the residual RF-band noise, while bringing the circuit farther from the self-excitation state.
After the four levels of amplification, the information signal strength is boosted to \hl{dB}, whereas the background noise is \hl{dB}. This signal is then sent to an envelope detector, which performs regular detection and picks the baseband signal out of the 1MHz carrier. Overall, this part gives an amplification of \hl{dB} on the signal, and achieves a \hl{sdd} reduction in the noise. \subsubsection{Decoding in the Presence of Clock Offsets} \label{subsubsec:clockoffset} Traditionally, the following approaches have been used to extract the timing information from the signal and perform the decoding thereafter. \begin{Itemize} \item Peak (edge) detection-based algorithms are ones where differentials of the samples are taken, so that the extreme (discontinuous) points in the signal, which denote the clock beats, are found. However, as shown in Fig.~\ref{fig:dynamicRange}, the received signal has a huge dynamic range. If the signal appears as in case (a), (b) or (c), the peak approach will fail, as the peak detected will always be ahead of or behind the actual one. In other cases, the edge approach will fail, as the edge is not clearly identified. \item Averaging-based algorithms are ones where samples of the signal are averaged to generate a threshold, and samples above this threshold denote ones while those below denote zeros. However, as shown in Fig.~\ref{fig:dynamicRange} (e), when the average value of the signal within a packet is drifting, which may happen for the reasons explained earlier, this approach will likely fail. While one could identify a trend for the change in the average value, it is hard to do so accurately in practice. The reason is that, as the average value changes in a coherent way, a wrong prediction of the average value will cause severe errors in the process of decoding.
\item Multi-symbol correlation algorithms use a stream of symbols to represent one bit, and perform correlation on the receiver side to decode the bit by finding the peak of the correlation result. Such an algorithm can pinpoint the clock and hence decode the signal in a low-SNR situation, but it is not suitable for low symbol-rate channels like the uplink in our case. \item One-symbol match filters try to match the wave form of the signal and detect the convolution peaks to determine the accurate timing. However, because the LCD on the tag side, which shapes each bit, is roughly equivalent to a capacitor, the basic wave form tends to be sawtooth-like, as shown in Fig.~\ref{fig:dynamicRange} (d), when there is no top or bottom truncation. Further, as the bits are encoded in Manchester code, the rising edge and the falling edge are not necessarily symmetric in time. A typical example is three chips that contain one high-volt chip followed by two low-volt chips, in which case the correlation peak will be skewed, compared to the case where the voltage is high for one chip and low for the next. Therefore, the timing will be biased. \end{Itemize} However, decoding on ViReader\xspace\ is challenging, due to the following reasons: (1) First, the clock information is hard to extract from the received signal. This is in part due to the fact that the start of a frame is hard to identify accurately, which introduces time deviation at the ViReader\xspace\ receiver. Also, there is a frequency offset associated with the clock on the tag that generates the backscatter signals directionally sent to the reader. On top of these, the clock is subject to drift. (2) Second, when operating in a network of tags and readers, a tag is likely to receive signals from other readers (even though its designated reader is not transmitting, because of the half-duplex mode the system works in). These signals are 10kHz bit streams sent on a 1MHz carrier.
Therefore, when performing demodulation, the 10kHz interference will likely mix with the \hl{125bps} signal of our interest. (3) Compared to the 10kHz interference, what is harder to remove from the signal of interest is the noise caused by body movements, the fluctuation in the household electric power supply and its harmonic waves, and changes in environmental brightness that are embedded in the reader-transmitted signal. This set of interference is of low frequency (humans do not move much faster than 125Hz, nor does environmental brightness change faster). Consequently, it mixes with the signal transmitted by the tag after preliminary filtering, bringing ever-changing offsets to the signal of interest. We develop a novel approach to decoding the Manchester-coded signal, which has a large dynamic range and is saturated with noise concentrated at about the same frequencies. The approach extends the one-symbol match filter with a clock-adjustment step based on a multi-symbol match filter. The basic idea is to avoid the biased timing caused by skewed correlation peaks in the one-symbol match filter case, by matching all possible patterns of the wave form the Manchester-encoded signal can take, and accordingly adjusting the local clock frequency in every bit period. To begin with, the algorithm first detects the preamble of a packet using standard correlation, where the preamble is different from any possible bit pattern in the payload. Upon finding the start of a packet, the algorithm first estimates the length of a bit, using the knowledge that the clock on ViTag\xspace\ has a known nominal frequency yet is subject to drift. Then the algorithm iteratively does the following two things: \begin{Itemize} \item As every bit is Manchester-encoded, there are two chips within one bit period.
The algorithm correlates a one-bit-period square wave that contains a high-volt chip and a low-volt chip (not necessarily in this order) against the first three bit-periods of the signal. Based on the difference in value between the first half and the last half within each of the three bit periods, namely, the correlation result within each bit, the algorithm knows which of the eight possibilities these three bits are. \item Then the algorithm adjusts the clock estimation by correlating the original three bit-periods of the signal against the corresponding one of the eight patterns associated with the eight possible wave forms. Ideally, the correlation yields a peak in the middle of the three bits. However, due to the time variance at the start of the packet and the frequency deviation between the reader clock and the tag clock, the peak of the correlation result does not necessarily align with the ideal peak. To bound the clock estimation error along the bits in a packet, we perform linear regression on the estimate of the beginning of the packet and the observations of the bit boundaries to get the estimated beginning of the next bit. With this estimate, the algorithm moves the three-bit window one bit forward, and jumps back to step one, until reaching the end of the packet. This time recovery algorithm, designed specifically for our coding and modulation, keeps the error from propagating as the decoding proceeds, as described in the following lemma, for which we give a proof in the Appendix. \end{Itemize} \begin{lemma} The time recovery drives the clock estimation error to zero as the number of bits in a packet tends to infinity. \label{lem:lemma1} \end{lemma} We note that we choose a three-bit period of time as the correlation unit because three bits are the fewest bit periods that contain all possible wave forms when the signal is Manchester-encoded and LCD-capacitor-filtered. We prove this conclusion in Appendix~\ref{sec:app}.
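A much-simplified sketch of this decoding loop is given below (Python). It uses single-bit windows and a per-bit offset search in place of the actual three-bit correlation and linear regression, so it only illustrates the two core ideas: deciding a bit from the first-half/second-half contrast, and nudging the local clock estimate at every bit:

```python
def decode_manchester(samples, est_bit_len, n_bits, search=2):
    """Simplified sketch of the sliding-window decoder (not the full
    algorithm): per bit, (1) decide the bit from the sign of
    (second half - first half) of the window, and (2) shift the window
    start by the offset, within +/-search samples, that maximizes the
    edge contrast, emulating the per-bit clock adjustment."""
    bits, start = [], 0
    for _ in range(n_bits):
        best = None  # (score, offset, signed difference)
        for off in range(-search, search + 1):
            s = start + off
            if s < 0 or s + est_bit_len > len(samples):
                continue
            half = est_bit_len // 2
            first = sum(samples[s:s + half]) / half
            second = sum(samples[s + half:s + est_bit_len]) / half
            cand = (abs(second - first), off, second - first)
            if best is None or cand[0] > best[0]:
                best = cand
        _, off, diff = best
        bits.append(1 if diff > 0 else 0)  # rising mid-bit edge -> 1
        start += off + est_bit_len         # clock-corrected advance
    return bits
```

Even with a small timing offset at the packet start, the per-bit offset search keeps the windows aligned, which is the behavior the full three-bit version guarantees more robustly.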
Following the bit recovery, ViReader\xspace\ checks the correctness of the received bits using CRC8 \hl{citation}. The polynomial used is $x^8 + x^2 + x + 1$. Decoding is handled by an MCU; specifically, we use a \hl{Cortex xxxx}. It has a CPU frequency of \hl{sss} and \hl{sss} of memory, and is hence well suited to performing complex algorithms and dealing with fast streaming data. \begin{figure*}[!t] \vskip -0.1in \centering {\footnotesize \begin{tabular}{ccccc} \epsfig{file=fig/waveform1-eps-converted-to.pdf, width=0.4\columnwidth} & \epsfig{file=fig/waveform2-eps-converted-to.pdf, width=0.4\columnwidth} & \epsfig{file=fig/waveform3-eps-converted-to.pdf, width=0.4\columnwidth} & \epsfig{file=fig/waveform4-eps-converted-to.pdf, width=0.4\columnwidth} & \epsfig{file=fig/waveform5-eps-converted-to.pdf, width=0.4\columnwidth}\\ {(a) Normal} & {(b) Up-truncated} & {(c) Down-truncated} & {(d) Up and Down-truncated} & {(e) Average-drifted}\\ \end{tabular} } \vskip -0.1in \caption{\footnotesize{\bf Varying wave patterns.}} \label{fig:dynamicRange} \vspace{-1em} \end{figure*}
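A bitwise CRC8 with the polynomial $x^8 + x^2 + x + 1$ can be sketched as follows (the zero initial value and the absence of a final XOR are our assumptions; the text above does not specify these conventions):

```python
def crc8(data, poly=0x07, init=0x00):
    """Bitwise CRC-8 over x^8 + x^2 + x + 1 (0x07).  The init value and
    the lack of reflection/final XOR are assumed conventions."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # shift left; XOR in the polynomial when the top bit falls out
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc
```

On the receiver, the MCU recomputes this checksum over the recovered payload and drops the packet on a mismatch.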
\section{Packing and traversal numbers for nowhere dense graphs}\label{sec:ep} In this section, we give an application of~\cref{thm:vc-density}, proving a duality result for nowhere dense graph classes. A \emph{set system} is a family $\cal F$ of subsets of a set $X$. A \emph{packing} in $\cal F$ is a subfamily of $\cal F$ of pairwise disjoint subsets, and a \emph{traversal} (or \emph{hitting set}) of $\cal F$ is a subset of $X$ which intersects every member of $\cal F$. The \emph{packing number} of~$\cal F$, denoted $\nu(\cal F)$, is the largest cardinality of a packing in $\cal F$, and the \emph{traversal number} of $\cal F$, denoted $\tau(\cal F)$, is the smallest cardinality of a traversal of $\cal F$. Note that if $\cal G$ is a finite set system, then $\nu({\cal G})\le \tau(\cal G)$. The set system $\cal F$ has the \emph{Erd{\H o}s-P\'{o}sa property} if there is a function $f\colon\mathbb{N}\to\mathbb{N}$ such that every finite subfamily $\cal G$ of $\cal F$ satisfies $\tau({\cal G})\le f(\nu(\cal G))$. We prove that set systems defined by first order formulas in nowhere dense graph classes have the Erd{\H o}s-P\'{o}sa property, in the following sense. \setcounter{aux}{\thetheorem} \setcounter{theorem}{\theep} \begin{theorem} Fix a nowhere dense class of graphs $\mathcal{C}$ and a formula $\phi(x,y)$ with two free variables $x,y$. Then there is a function $f\colon \mathbb{N}\to\mathbb{N}$ with the following property. Let $G\in \mathcal{C}$ be a graph and let $\cal G$ be a family of subsets of $V(G)$ consisting of sets of the form $\setof{v\in V(G)}{\phi(u, v)}$, where~$u$ is some vertex of $V(G)$. Then~$\tau({\cal G})\le f(\nu(\cal G))$. \end{theorem} \setcounter{theorem}{\theaux} We will apply the following result of Matou{\v s}ek~\cite{Matousek:2004:BVI:1005787.1005789}, which relies on the proof by Alon and Kleitman~\cite{ALON1992103} of the conjecture of Hadwiger and Debrunner. In the result of Matou{\v s}ek, the set system $\cal F$ may be infinite.
For $m\in \mathbb{N}$, by $\pi_{\cal F}^*(m)$ we denote the \emph{dual shatter function} of $\cal F$, which is defined as the maximal number of occupied cells in the Venn diagram of $m$ sets in $\cal F$. \begin{theorem}[Matou{\v s}ek, \cite{Matousek:2004:BVI:1005787.1005789}]\label{thm:pq} Let $\cal F$ be a set system with $\pi^*_{\cal F}(m)=o(m^k)$, for some integer $k$, and let $p\ge k$. Then there is a constant $T$ such that the following holds for every finite family $\cal G\subset \cal F$: if $\cal G$ has the $(p,k)$-property, meaning that among every $p$ sets in $\cal G$ some $k$ have a non-empty intersection, then $\tau ({\cal G})\le T$. \end{theorem} \begin{proof}[of \cref{thm:erdos-posa}] For a graph $G$, define the set system ${\cal F}_G$ on the ground set $V(G)$ as $${\cal F}_G = \setof{\setof{v\in V(G)}{\phi(u, v)}}{u\in V(G)}.$$ Let $\cal F$ be the disjoint union of the set systems ${\cal F}_G$ for $G\in \mathcal{C}$. That is, the ground set of $\cal F$ is the disjoint union of the vertex sets $V(G)$ for $G\in \mathcal{C}$, and for each $G\in \mathcal{C}$ we add to ${\cal F}$ a copy of ${\cal F}_G$ over the corresponding copy of $V(G)$. Then the following claim follows directly from~\cref{thm:vc-density}. \begin{claim} The dual shatter function of $\cal F$ satisfies $\pi^*_{\cal F}(m)=\mathcal{O}(m^{1+\epsilon})$, for every fixed $\epsilon>0$. In particular, $\pi^*_{\cal F}(m)=o(m^{2})$. \end{claim} Consider the function $f\colon \mathbb{N} \to \mathbb{N}$ defined so that $f(\nu)$ is the value $T$ obtained from~\cref{thm:pq} applied to $\cal F$, $k=2$, and $p=\nu+1$. Suppose now that $G\in \mathcal{C}$ is a graph and $\mathcal{G}\subseteq \mathcal{F}_G$ is a family of subsets of $V(G)$ consisting of sets of the form $\{v\in V(G)\,\colon\,\phi(u,v)\}$, where $u$ is some vertex of $G$.
We identify $\mathcal{G}$ with a subfamily of $\mathcal{F}$ in the natural way, following the embedding of $\mathcal{F}_G$ into $\mathcal{F}$ used in the construction of the latter. Let $\nu$ be the packing number of $\mathcal{G}$. In particular, among every $\nu+1$ sets of $\mathcal{G}$ there are two which share a vertex $v\in V(G)$. Hence, $\mathcal{G}$ has the $(p,2)$-property for $p=\nu+1$. By~\cref{thm:pq}, $\tau(\mathcal{G})\le T=f(\nu)=f(\nu(\mathcal{G}))$, as required. \end{proof} \section{Types and locality}\label{sec:gaifman} In this section, we develop auxiliary tools concerning first order logic on graphs; in particular, we develop a convenient abstraction of Gaifman's locality property that can be easily combined with the notion of $r$-separation. We begin by recalling some standard notions from logic. \subsection{Logical notions} \paragraph{Formulas.} All formulas in this paper are first order formulas on graphs, i.e., they are built using variables (denoted $x,y,z$, etc.), atomic predicates $x=y$ or $E(x,y)$, where the latter denotes the existence of an edge between two nodes, quantifiers $\forall x,\exists x$, and boolean connectives $\lor,\land,\neg$. Let $\phi(\bar x)$ be a formula with free variables $\bar x$. (Formally, the free variables form a set. To ease notation, we identify this set with a tuple by fixing any enumeration of it.) If $\bar w\in V^{|\bar x|}$ is a tuple of vertices of some graph $G=(V,E)$ (treated as a valuation of the free variables $\bar x$), then we write $G,\bar w\models \phi(\bar x)$ to denote that the valuation $\bar w$ satisfies the formula $\phi$ in the graph $G$. The following example should clarify our notation. \begin{example}\label{ex:dist-formula} The formula $$\phi(x,y)\equiv \exists z_1\, \exists z_2\, (E(x,z_1)\lor (x=z_1))\land (E(z_1,z_2)\lor (z_1=z_2))\land (E(z_2,y)\lor (z_2=y))$$ with free variables $x,y$ expresses that $x$ and $y$ are at distance at most $3$.
That is, for two vertices $u,v$ of a graph $G$, the relation $G,u,v\models \phi(x,y)$ holds if and only if the distance between $u$ and $v$ is at most $3$ in $G$. \end{example} We will also consider \emph{colored graphs}, where we have a fixed set of colors $\Lambda$ and every vertex is assigned a subset of colors from $\Lambda$. If $C\in \Lambda$ is a color, then the atomic formula $C(x)$ holds at a vertex $x$ if and only if $x$ has color $C$. Finally, we will consider \emph{formulas with parameters} from a set $A$, which is a subset of vertices of some graph. Formally, such a formula with parameters is a pair consisting of a (standard) formula $\phi(\bar x,\bar y)$ with a partitioning of its free variables into $\bar x$ and $\bar y$, and a valuation $\bar v\in A^{|\bar y|}$ of the free variables $\bar y$ in $A$. We denote the resulting formula with parameters by $\phi(\bar x,\bar v)$, and say that its free variables are $\bar x$. For a valuation $\bar u\in V^{|\bar x|}$, we write $G,\bar u\models \phi(\bar x,\bar v)$ iff $G,\bar u\bar v\models \phi(\bar x,\bar y)$. Here and later on, we write $\bar u\bar v$ for the concatenation of tuples $\bar u$ and $\bar v$. \paragraph{Types.} Fix a formula $\phi(\bar x,\bar y)$ together with a distinguished partitioning of its free variables into \emph{object variables} $\bar x$ and \emph{parameter variables} $\bar y$. Let $G=(V,E)$ be a graph, and let $A\subset V$. If $\bar u\in V^{|\bar x|}$ is a tuple of nodes of length $|\bar x|$, then the \emph{$\phi$-type of $\bar u$ over $A$}, denoted $\mathrm{tp}^\phi_G(\bar u/A)$, is the set of all formulas $\phi(\bar x,\bar v)$, with parameters $\bar v\in A^{|\bar y|}$ replacing the parameter variables $\bar y$, such that $G,\bar u\models \phi(\bar x,\bar v)$.
Note that since $\phi$ is fixed in this definition, formulas $\phi(\bar x,\bar v)$ belonging to the $\phi$-type of $\bar u$ are in one-to-one correspondence with tuples $\bar v\in A^{|\bar y|}$ satisfying $G,\bar u\bar v\models \phi(\bar x,\bar y)$. Therefore, up to this bijection, we have the following identification: \begin{equation}\label{eq:bijection} \mathrm{tp}^\phi_G(\bar u/A)\quad\leftrightarrow\quad\setof{\bar v\in A^{|\bar y|}}{G, \bar u\bar v\models \phi(\bar x,\bar y)}. \end{equation} If $q\in \mathbb{N}$ is a number and $\bar u\in V^{d}$ is a tuple of some length $d$, then by $\mathrm{tp}^q_G(\bar u/A)$ we denote the set of all formulas $\phi(\bar x,\bar v)$ of quantifier rank at most $q$, with parameters $\bar v$ from $A$, and with $|\bar x|=d$, such that $G,\bar u\models \phi(\bar x,\bar v)$. Therefore, up to the correspondence \eqref{eq:bijection}, we have the following identification: \begin{equation*} \mathrm{tp}^q_G(\bar u/A)\quad\leftrightarrow\quad\set{\mathrm{tp}^\phi(\bar u/A)}_{\phi(\bar x,\bar y)}, \end{equation*} where $\phi(\bar x,\bar y)$ ranges over all formulas of quantifier rank at most $q$, and all partitions of its free variables into two sets $\bar x,\bar y$, where $|\bar x|=d$. In particular, the set $\mathrm{tp}^q_G(\bar u/A)$ is infinite. It is not difficult to see, however, that in the case when $A$ is finite, the set $\mathrm{tp}^q_G(\bar u/A)$ is uniquely determined by a finite subset, since up to syntactic equivalence, there are only finitely many formulas of quantifier rank $q$ with $|\bar u|$ free variables and parameters from $A$ (we can assume that each such formula has $|A|+|\bar u|$ free variables). In particular, the set of all possible types $\mathrm{tp}^q_G(\bar u/A)$ has cardinality upper bounded by some number computable from $q$, $|\bar u|$ and $|A|$. 
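To make the correspondence~\eqref{eq:bijection} concrete, here is a brute-force Python sketch (entirely illustrative: the graph, the helper names, and the evaluation strategy are ours, not part of the formal development) that computes the $\phi$-type of a vertex over a parameter set for the distance-at-most-$3$ formula of \cref{ex:dist-formula}, representing the type directly as the set of satisfying parameter values.

```python
from itertools import product

# Toy data (ours, for illustration): the path P5 on vertices 0..4,
# with E stored as a symmetric set of directed pairs.
V = [0, 1, 2, 3, 4]
E = {(i, i + 1) for i in range(4)} | {(i + 1, i) for i in range(4)}

def adj_or_eq(a, b):
    # Corresponds to the subformula E(x, y) \/ (x = y).
    return a == b or (a, b) in E

def phi(u, v):
    # Semantics of phi(x, y): there exist z1, z2 such that
    # x -> z1 -> z2 -> y is a walk of steps of length <= 1,
    # i.e. the distance between x and y is at most 3.
    return any(adj_or_eq(u, z1) and adj_or_eq(z1, z2) and adj_or_eq(z2, v)
               for z1, z2 in product(V, repeat=2))

def phi_type(u, A):
    # tp^phi(u / A), represented via the bijection as the set of
    # parameter values v in A with G, u v |= phi(x, y).
    return frozenset(v for v in A if phi(u, v))

A = [0, 4]
# Only three distinct phi-types over A arise on P5: {0} (for vertex 0),
# {0, 4} (for vertices 1, 2, 3), and {4} (for vertex 4).
types_over_A = {phi_type(u, A) for u in V}
```

Even on this tiny example, the number of distinct $\phi$-types over $A$ is far smaller than the number of vertices, which is the kind of phenomenon the polynomial type bounds in this paper quantify.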
When $\Delta$ is either a formula $\phi(\bar x,\bar y)$ with a distinguished partitioning of its free variables, or a number $q$, we simply write $\mathrm{tp}^\Delta(\bar u/A)$ if the graph $G$ is clear from the context. In the case $A=\emptyset$, we omit it from the notation, and simply write $\mathrm{tp}^\Delta(\bar u)$ or $\mathrm{tp}^\Delta_G(\bar u)$. Observe that in particular, if $\Delta=q$ and $A=\emptyset$, then $\mathrm{tp}^q_G(\bar u)$ consists of all first order formulas $\phi(\bar x)$ of quantifier rank at most~$q$ and with $|\bar x|=|\bar u|$ such that $G,\bar u\models \phi(\bar x)$. This coincides with the standard notion of the first order type of quantifier rank $q$ of the tuple $\bar u$. \begin{example} Let $\phi(x,y)$ be the formula from~\cref{ex:dist-formula}, denoting that the distance between~$x$ and $y$ is at most $3$. We partition the free variables of $\phi$ into $x$ and $y$. Let $A$ be a subset of vertices of a graph $G=(V,E)$ and $u\in V$ be a single vertex. The $\phi$-type of $u$ over $A$ corresponds, via the said bijection, to the set of those vertices in $A$ whose distance from $u$ is at most $3$ in $G$. \end{example} For a fixed formula $\phi(\bar x,\bar y)$, graph $G=(V,E)$ and sets $A,W\subset V$, define $S^\phi(W/A)$ as the set of all $\phi$-types of tuples from $W$ over $A$ in $G$; that is, \begin{equation*} S^\phi(W/A)=\setof{\mathrm{tp}^\phi_G(\bar u/A)}{\bar u\in W^{|\bar x|}}. \end{equation*} Although not visible in the notation, the set $S^\phi(W/A)$ depends on the chosen partitioning $\bar x,\bar y$ of the free variables of $\phi$. In case $W=V(G)$ we write $S^{\phi}(G/A)$ instead of $S^{\phi}(W/A)$. Note that this definition differs syntactically from the one given in \cref{sec:intro}, as here $S^{\phi}(G/A)$ consists of $\phi$-types, and not of sets of tuples. However, as we argued, there is a one-to-one correspondence between them, as expressed in~\eqref{eq:bijection}. The following lemma is immediate. 
\begin{lemma}\label{lem:types-over-B} Let $G$ be a graph and let $A\subseteq B\subseteq V(G)$. Then for each formula $\phi(\bar x,\bar y)$, it holds that $|S^\phi(G/A)|\leq |S^\phi(G/B)|$. \end{lemma} \subsection{Locality} We will use the following intuitive notion of functional determination. Suppose $X,A,B$ are sets and we have two functions: $f\colon X\to A$ and $g\colon X\to B$. We say that $f(x)$ {\em{determines}} $g(x)$ for $x\in X$ if for every pair of elements $x,x'\in X$ the following implication holds: $f(x)=f(x')$ implies $g(x)=g(x')$. Equivalently, there is a function $h\colon A\to B$ such that $g=h\circ f$. Recall that if $A,B,S$ are subsets of vertices of a graph $G$ and $r\in\mathbb{N}$, then $A$ and $B$ are $r$-separated by $S$ in $G$ if every path from $A$ to $B$ of length at most $r$ contains a vertex from $S$. \medskip The following lemma is the main result of this section. \begin{lemma}\label{lem:types} For any given numbers $q$ and $d$ one can compute numbers $p$ and $r$ with the following properties. Let $G=(V,E)$ be a fixed graph and let $A,B,S\subset V$ be fixed subsets of its vertices such that $A$ and $B$ are $r$-separated by~$S$ in $G$. Then, for tuples $\bar u\in A^{d}$, the type $\mathrm{tp}^q(\bar u/B)$ is determined by the type $\mathrm{tp}^{p}(\bar u/S)$. \end{lemma} We will only use the following consequence of~\cref{lem:types}. \begin{corollary}\label{cor:bound} For every formula $\phi(\bar x,\bar y)$ and number $s\in \mathbb{N}$ there exist numbers $T,r\in \mathbb{N}$, where~$r$ is computable from $\phi$ and $T$ is computable from $\phi$ and $s$, such that the following holds. For every graph $G$ and vertex subsets $A,B,S\subset V(G)$ where~$S$ has at most $s$ vertices and $r$-separates $A$ from $B$, we have $|S^\phi(A/B)|\le T$. \end{corollary} \begin{proof} Apply~\cref{lem:types} to $q$ being the quantifier rank of $\phi$ and $d=|\bar x|$, yielding numbers $p$ and $r$. 
By~\cref{lem:types}, the type $\mathrm{tp}^q(\bar u/B)$ of any tuple $\bar u\in A^d$, and hence in particular its $\phi$-type over $B$, is determined by the type $\mathrm{tp}^{p}(\bar u/S)$. Consequently, $|S^\phi(A/B)|$ is at most the number of quantifier rank $p$ types of $d$-tuples of elements over a set of parameters of size $s$, and, as we argued, this number is bounded by a value computable from $p$, $d$, and $s$. \end{proof} The remainder of this section is devoted to the proof of~\cref{lem:types}. This result is a consequence of two fundamental properties of first order logic: Gaifman's locality and Feferman-Vaught compositionality. We recall these results now. The following statement is an immediate corollary of the main result in a paper of Gaifman~\cite{gaifman1982local}. \begin{lemma}[Gaifman locality,~\cite{gaifman1982local}]\label{lem:gaif} For all numbers $d,q\in \mathbb{N}$ there exists a number $t\in \mathbb{N}$, computable from $d$ and $q$, such that the following holds. Let $G=(V,E)$ be a graph colored by a fixed set of colors, and $A\subset V$ be a set of vertices of $G$. Then, for tuples $\bar u\in V^d$, the type $\mathrm{tp}^q(\bar u)$ is determined by the type $\mathrm{tp}^{t}_{B^r(\bar u)}(\bar u)$, computed in the subgraph $B^r(\bar u)$ of $G$ induced by all vertices at distance at most $r$ from $\bar u$, where $r=7^q$. \end{lemma} The next result expresses compositionality of first order logic. Its proof is a standard application of Ehrenfeucht-Fra\"iss\'e games, so we only sketch it for completeness. \begin{lemma}[Feferman-Vaught]\label{lem:fv} Let $G,H$ be two fixed vertex-disjoint graphs colored by a fixed set of colors $\Lambda$, and let $c,d,q\in\mathbb{N}$ be numbers. Then, for valuations $\bar u\in V(G)^{c}$ and $\bar v\in V(H)^{d}$, the type $\mathrm{tp}^q_{G\cup H}(\bar u\bar v)$ is determined by the pair of types $\mathrm{tp}^q_G(\bar u)$ and $\mathrm{tp}^q_H(\bar v)$. 
\end{lemma} \begin{proof}[sketch] The proof proceeds by applying the following, well-known characterization of $\mathrm{tp}^q_G(\bar w)$ in terms of Ehrenfeucht-Fra\"iss\'e games: $\mathrm{tp}^q_{G}(\bar w)=\mathrm{tp}^q_{G}(\bar w')$ if and only if duplicator has a strategy to survive for $q$ rounds in a certain pebble game. To prove the lemma, we combine two strategies of duplicator: one on $G$ and one on $H$. \end{proof} Before proving~\cref{lem:types}, we introduce the following notions. Fix a graph $G=(V,E)$. For a set of vertices $S\subset V$, define the color set $\Lambda_S=\{C_s\colon s\in S\}$, where we put one color $C_s$ for each vertex $s\in S$. Define a graph $G^S$ colored with $\Lambda_S$, which is the subgraph of $G$ induced by $V-S$ in which, additionally, for every vertex $s\in S$, every vertex $v\in V-S$ which is a neighbor of $s$ in $G$ is colored by color $C_s$. In other words, every vertex $v$ of $G^S$ is colored with a subset of colors from $\Lambda_S$ corresponding to the neighborhood of $v$ in $S$. A sequence of elements of $S\cup\set\star$, where $\star$ is a fixed placeholder symbol, will be called an \emph{$S$-signature}. For a $d$-tuple of vertices $\bar u\in V^d$, define the \emph{$S$-signature} of $\bar u$ as the tuple $\bar s\in (S\cup\set\star)^d$ obtained from $\bar u$ by replacing the vertices in $V-S$ by the symbol~$\star$. If $H$ is any (colored) graph with vertex set contained in $V-S$, define $\mathrm{tp}^q[H,\bar u]$ as the pair consisting of the following components: \begin{itemize} \item the type $\mathrm{tp}^q_H(\bar v)$, where $\bar v$ is the tuple obtained from $\bar u$ by removing the vertices which belong to $S$; and \item the $S$-signature of $\bar u$. 
\end{itemize} Given a graph $G$, a subset of its vertices $S$, and a tuple of vertices $\bar u$, by $N^r_S(\bar u)$ we denote the subgraph of $G^S$ induced by the set of vertices reachable from a vertex in $\bar u$ by a path of length at most $r$ in $G-S$ (the graph $G$ is implicit in the notation $N^r_S(\bar u)$). Note that $N^r_S(\bar u)$ is a colored graph, with colors inherited from $G^S$. \begin{comment} We will prove the following strengthening of~\cref{lem:types}: \begin{lemma \label{lem:types1} For any given number $q\in\mathbb{N}$ one can compute a number $r\in\mathbb{N}$ with the following property. For any graph $G=(V,E)$, sets of vertices $A,B,S\subset V$ such that $A$ and $B$ are $r$-separated by $S$, for every tuple $\bar u\in A^{d}$, the type $\mathrm{tp}^q_G(\bar u/B)$ is computable from $\mathrm{tp}^{q}[B^r_S(\bar u), \bar u]$, and $G$ and $S$. % \end{lemma} To show that~\cref{lem:types1} implies~\cref{lem:types}, define $p$ as $q\cdot r$. It suffices to show that $\mathrm{tp}^{q}[B^r_S(\bar u), \bar u]$ is computable from $\mathrm{tp}_G^p(\bar u/S)$, and $G$ and $S$. This is the case, since a formula $\phi(\bar y)$ can be relativized to $B^r_S(\bar u)$ by replacing each quantifier $\exists x$ by a formula $\exists x\exists x_1\ldots\exists x_r\psi (x,x_1,\ldots,x_r)$, where $\psi$ specifies that $x_1,\ldots,x_r,x$ form a path starting in one of the vertices in $\bar u$, ending in $x$, and omitting all vertices in $S$, which are enumerated as parameters. It remains to prove~\cref{lem:types1}. \end{comment} With all these definitions and results in place, we may proceed to the proof of \cref{lem:types}. \begin{proof}[of~\cref{lem:types}] We prove the lemma for $r=7^q$. Let $t$ be the constant given by Gaifman's lemma, \cref{lem:gaif}, for $q$ and $d$, and let $p=t+r$. Fix $G,A,B,S$ as in the statement of the lemma, and fix a tuple $\bar w\in B^\ell$, for some length $\ell$. 
To prove the lemma, it is enough to show that for tuples $\bar u\in A^d$, the type $\mathrm{tp}^q_G(\bar u\bar w)$ is determined by the type $\mathrm{tp}^p(\bar u/S)$. Indeed, applying this to every tuple $\bar w$ of parameters from $B$ implies that $\mathrm{tp}^q_G(\bar u/B)$ is determined by $\mathrm{tp}^p_G(\bar u/S)$, as required. We will prove the following sequence of determinations, where an arrow $a\rightarrow b$ signifies that $b$ is determined by $a$: \begin{align*} \mathrm{tp}^p_G(\bar u/S) \ \longrightarrow\ \mathrm{tp}^{t}[N^r_S(\bar u), \bar u] \ \stackrel{\textrm{(\ref{lem:fv})}}{\longrightarrow}\ \mathrm{tp}^{t}[N^r_S(\bar u\bar w), \bar u\bar w] \ \stackrel{\textrm{(\ref{lem:gaif})}}\longrightarrow\ \mathrm{tp}^q[G^S, \bar u\bar w] \ \longrightarrow\ \mathrm{tp}^q_G(\bar u\bar w). \end{align*} The second arrow follows from Feferman-Vaught's lemma, \cref{lem:fv}, as the colored graph $N^r_S(\bar u\bar w)$ is the disjoint union of the colored graphs $N^r_S(\bar u)$ and $N^r_S(\bar w)$, because $\bar u$ and $\bar w$ are $r$-separated by $S$. The third arrow is directly implied by Gaifman's lemma,~\cref{lem:gaif}. We are left with arguing the first and the last arrow, which both follow from simple rewriting arguments, presented below. \medskip For the first arrow, obviously already $\mathrm{tp}^0_G(\bar u/S)$ determines the $S$-signature of $\bar u$. Let $\bar s$ be any enumeration of $S$. To see that $\mathrm{tp}^p_G(\bar u/S)$ determines $\mathrm{tp}^{t}_{N^r_S(\bar v)}(\bar v)$, where $\bar v$ is $\bar u$ with vertices of~$S$ removed, take any formula $\phi(\bar x)$ of quantifier rank at most $t$ with $|\bar x|=|\bar v|$. 
Let $\phi'(\bar x,\bar s)$ be the formula with parameters~$\bar s$ from $S$ that is syntactically derived from $\phi(\bar x)$ as follows: to every quantification $\exists y$ in $\phi(\bar x)$ we add a guard $\delta(y,\bar s)$ stating that there is a path from some element of $\bar x$ to $y$ that has length at most $r$ and does not pass through any vertex of $\bar s$; it is easy to see that there is such a guard $\delta(y,\bar s)$ with quantifier rank $r$. Then $\phi'(\bar x,\bar s)$ has quantifier rank at most $t+r=p$, and it is straightforward to see that for every $\bar v\in (A-S)^{|\bar x|}$, we have $G,\bar v\models \phi'(\bar x,\bar s)$ if and only if $N^r_S(\bar v),\bar v\models \phi(\bar x)$. Therefore, to check whether $\phi(\bar x)$ belongs to $\mathrm{tp}^{t}_{N^r_S(\bar v)}(\bar v)$ it suffices to check whether $\phi'(\bar x,\bar s)$ belongs to $\mathrm{tp}^p_G(\bar u/S)$, so the latter type determines the former. \medskip The argument for the last arrow is provided by the following claim. \begin{claim}\label{cl:rewrite} Let $\phi$ be a formula with $k$ free variables and quantifier rank at most $q$, and let $\sigma$ be an $S$-signature of length $k$. One can compute a formula $\phi^S$ of quantifier rank at most $q$ whose free variables correspond to the $\star$'s in $\sigma$, such that for every tuple $\bar v$ of elements of $G$ whose $S$-signature is~$\sigma$, $\phi(\bar v)$ holds in $G$ if and only if $\phi^S(\bar v^S)$ holds in $G^S$, where $\bar v^S$ is obtained from $\bar v$ by removing those elements that belong to $S$. \end{claim} \begin{clproof}[Sketch] The proof proceeds by a straightforward induction on the structure of the formula $\phi$. In essence, every quantification $\exists y$ of a vertex~$y$ in $G$ is replaced by quantification of $y$ in $G-S$ plus a disjunction over $s\in S$ of formulas where we assume $y=s$. Atomic formulas of the form $E(x,y)$ and $x=y$ have to be replaced accordingly. 
Say for $E(x,y)$: if both~$x$ and~$y$ are assumed to be in $G-S$, then we leave $E(x,y)$ intact; if $x$ is assumed to be in $S$ (say we assume $x=s$) and $y$ is assumed to be in $G-S$, then we substitute $E(x,y)$ by $C_s(y)$; and if both~$x$ and~$y$ are assumed to be in $S$, then we replace $E(x,y)$ by $\top$ or $\bot$ depending on whether or not the vertices assumed to be equal to $x$ and $y$ are adjacent. We leave the details to the reader. \end{clproof} \begin{comment} \begin{clproof} The proof proceeds by induction on the structure of the formula $\phi$. If $\phi$ is an atomic formula $E(x,x')$ or $x=x'$, then the formula $\phi^S$ is constructed by case analysis. If $\alpha(x),\alpha(x')\in Y$ then $\phi^S$ is obtained from $\phi$ by substituting the variables $x,x'$ with variables from $Y$ according to~$\alpha$. If $\alpha(x),\alpha(x')\in S$ then $\phi'$ is the truth value $\bot$ or $\top$ of the formula $\phi$ in the graph $G$ under the valuation which maps $x$ to $\alpha(x)$ and $x'$ to $\alpha(x')$. Finally, suppose that $\alpha(x)=y\in Y$ and $\alpha(x')=s\in S$. If $\phi$ is $E(x,x')$ then $\phi'$ is the formula $C_{s}(y)$, and if $\phi$ is $x=x'$ then $\phi'$ is the formula $\bot$. For the inductive step, we consider two cases. If $\phi$ is a boolean combination of formulas $\phi_1,\ldots,\phi_k$, then apply the inductive assumption to each formula $\phi_i$, yielding formulas $\phi_1',\ldots,\phi_k'$. Then let $\phi'$ be the analogous boolean combination of the formulas $\phi_1',\ldots,\phi_k'$. Finally, suppose that $\phi$ is of the form $\exists x\, \psi$, where $Y$ are the free variables of $\phi$ and $x\not \in Y$. For $w$ being either the variable $x$ or an element $s\in S$, let $\psi^w$ be the formula obtained from the inductive assumption applied to the formula $\psi$ and pre-valuation $\alpha$ extended to a valuation which maps $x$ to $w$. Then let $\phi'$ be the formula $\exists x\, \psi^x \lor \bigvee_{v\in S}\psi^v$. 
The case of $\forall$ is dual. In each case, it follows from the inductive assumption that $\phi'$ satisfies the required condition. \end{clproof} \end{comment} \cref{cl:rewrite}, applied to $k=|\bar u\bar w|$, implies that $\mathrm{tp}^q_G(\bar u\bar w)$ is determined by $\mathrm{tp}^q[G^S, \bar u\bar w]$, finishing the proof of~\cref{lem:types}. \end{proof} We remark that in all the results of this section, whenever some type determines some other type, it is actually true that the latter type can be {\em{computed}} given the former type together with the graph $G$ and, if applicable, also the set of vertices $S$. For Gaifman's locality lemma, the effectiveness follows from the original proof of Gaifman~\cite{gaifman1982local}, and it is not hard to see that the proof of the Feferman-Vaught lemma (\cref{lem:fv}) can also be made effective. By examining our proofs, one can readily verify that all the stated determination relations can be made effective in this sense. \section{Introduction}\label{sec:intro} Nowhere dense classes of graphs were introduced by Ne\v set\v ril and Ossona de Mendez~\cite{nevsetvril2010first,nevsetvril2011nowhere} as a general and abstract model capturing uniform sparseness of graphs. These classes generalize many familiar classes of sparse graphs, such as planar graphs, graphs of bounded treewidth, graphs of bounded degree, and, in fact, all classes that exclude a fixed topological minor. Formally, a class $\mathcal{C}$ of graphs is {\em{nowhere dense}} if there is a function $t\colon \mathbb{N}\to \mathbb{N}$ such that for every $r\in \mathbb{N}$, no graph $G$ in~$\mathcal{C}$ contains the clique $K_{t(r)}$ on $t(r)$ vertices as a {\em{depth-$r$ minor}}, i.e., as a subgraph of a graph obtained from $G$ by contracting mutually disjoint subgraphs of radius at most $r$ to single vertices. 
The more restricted notion of {\em{bounded expansion}} requires in addition that for every fixed $r$, there is a constant upper bound (depending on $r$) on the ratio between the number of edges and the number of vertices in depth-$r$ minors of graphs from $\mathcal{C}$. The concept of nowhere denseness turns out to be very robust, as witnessed by the fact that it admits multiple different characterizations, uncovering intricate connections to seemingly distant branches of mathematics. For instance, nowhere dense graph classes can be characterized by upper bounds on the density of bounded-depth (topological) minors~\cite{nevsetvril2010first,nevsetvril2011nowhere}, by uniform quasi-wideness~\cite{nevsetvril2011nowhere} (a notion introduced by Dawar~\cite{dawar2010homomorphism} in the context of homomorphism preservation properties), by low tree-depth colorings~\cite{nevsetvril2008grad}, by generalized coloring numbers~\cite{zhu2009coloring}, by sparse neighborhood covers~\cite{GroheKRSS15,grohe2014deciding}, by a game called the splitter game~\cite{grohe2014deciding}, and by the model-theoretic concepts of stability and independence~\cite{adler2014interpreting}. For a broader discussion on graph theoretic sparsity we refer to the book of Ne\v{s}et\v{r}il and Ossona de Mendez~\cite{sparsity}. The combination of combinatorial and logical methods yields a powerful toolbox for the study of nowhere dense graph classes. In particular, the result of Grohe, Kreutzer and the second author~\cite{grohe2014deciding} exploits this combination in order to prove that a given first order sentence $\varphi$ can be evaluated in time $f(\varphi)\cdot n^{1+\epsilon}$ on $n$-vertex graphs from a fixed nowhere dense class of graphs $\mathcal{C}$, for any fixed real $\epsilon>0$ and some function $f$. 
On the other hand, provided $\mathcal{C}$ is closed under taking subgraphs, it is known that if $\mathcal{C}$ is not nowhere dense, then there is no algorithm with running time of the form $f(\varphi)\cdot n^c$ for any constant~$c$ under plausible complexity assumptions~\cite{dvovrak2013testing}. In the terminology of parameterized complexity, these results show that the notion of nowhere denseness exactly characterizes subgraph-closed classes where model-checking first order logic is fixed-parameter tractable, and they conclude a long line of research concerning the parameterized complexity of the model checking problem for sparse graph classes (see \cite{grokre11} for a survey). \paragraph{Summary of contribution.} In this paper, we continue the study of the interplay of combinatorial and logical properties of nowhere dense graph classes, and provide new upper bounds on several quantities appearing in their logical study. Our main focus is on the notion of \emph{VC-density} for first order formulas. This concept originates from model theory and aims to measure the complexity of set systems definable by first order formulas, similarly to the better-known VC-dimension. We give optimal bounds on the VC-density in nowhere dense graph classes, and in particular we show that these bounds are ``as good as one could hope for''. We also provide new upper bounds on quantities related to {\em{stability}} and {\em{uniform quasi-wideness}} of nowhere dense classes. For stability, we provide explicit and computable upper bounds on the \emph{ladder index} of any first order formula on a given nowhere dense class. For uniform quasi-wideness, we give a new, purely combinatorial proof of polynomial upper bounds on {\em{margins}}, that is, functions governing this notion. 
We remark that the existence of upper bounds as above is known~\cite{adler2014interpreting,siebertz2016polynomial}, but the proofs are based on nonconstructive arguments, notably the compactness theorem for first order logic. Therefore, the upper bounds are given purely existentially and are not effectively computable. In contrast, our proofs are entirely combinatorial and effective, yielding computable upper bounds. We now discuss the relevant background from logic and model theory, in order to motivate and state our results. \paragraph{Model theory.} Our work is inspired by ideas from model theory, more specifically, from \emph{stability theory}. The goal of {stability theory} is to draw certain dividing lines specifying abstract properties of logical structures which allow the development of a good structure theory. There are many such dividing lines, depending on the specifics of the desired theory. One such dividing line encloses the class of \emph{stable structures}, another encloses the larger class of \emph{dependent structures} (also called \emph{NIP}). A general theme is that the existence of a manageable structure is strongly related to the non-existence of certain forbidden patterns on one hand, and on the other hand, to bounds on cardinalities of certain \emph{type sets}. Let us illustrate this phenomenon more concretely. 
For a first order formula $\phi(\tup{x},\tup{y})$ with free variables split into $\tup{x}$ and $\tup{y}$, a \emph{$\phi$-ladder} of length $n$ in a logical structure $\mathbb A$ is a sequence $\tup{u}_1,\ldots, \tup{u}_{n}, \tup{v}_1,\ldots, \tup{v}_{n}$ of tuples of elements of $\mathbb A$ such that $$\str{A}\models\phi(\tup{u}_i,\tup{v}_j)\ \Longleftrightarrow\ i\leq j\qquad \text {for all $1\leq i,j\le n$.}$$ The least $n$ for which there is no $\phi$-ladder of length $n$ is the \emph{ladder index} of $\phi(\tup{x},\tup{y})$ in~$\mathbb A$ (which may depend on the split of the variables, and may be $\infty$ for some infinite structures $\mathbb A$). A class of structures $\mathcal{C}$ is \emph{stable} if the ladder index of every first order formula $\phi(\tup{x},\tup{y})$ over structures from~$\mathcal{C}$ is bounded by a constant depending only on $\phi$ and~$\mathcal{C}$. This notion can be applied to a single infinite structure $\mathbb A$, by considering the class consisting of $\mathbb A$ only. Examples of stable structures include $(\mathbb N,=)$, the field of complex numbers $(\mathbb C,+,\times,0,1)$, as well as any vector space $V$ over the field of rationals, treated as a group with addition. On the other hand, $(\mathbb Q,\le)$ and the field of reals $(\mathbb R,+,\times,0,1)$ are not stable, as they admit a linear ordering which is definable by a first order formula. Stable structures turn out to have more graspable structure than unstable ones, as they can be equipped with various notions useful for their study, such as \emph{forking independence} (generalizing linear independence in vector spaces) and \emph{rank} (generalizing dimension). We refer to the textbooks~\cite{pillay,tent2012course} for an introduction to stability theory. One of the concepts studied in the early years of stability theory is a property of infinite graphs called \emph{superflatness}, introduced by Podewski and Ziegler~\cite{podewski1978stable}. 
The definition of superflatness is the same as that of nowhere denseness, but Podewski and Ziegler, instead of applying it to an infinite class of finite graphs, apply it to a single infinite graph. The main result of~\cite{podewski1978stable} is that every superflat graph is stable. As observed by Adler and Adler~\cite{adler2014interpreting}, this directly implies the following: \begin{theorem}[\cite{adler2014interpreting,podewski1978stable}]\label{thm:adleradler} Every nowhere dense class of graphs is stable. Conversely, any stable class of finite graphs which is subgraph-closed is nowhere dense. \end{theorem} Thus, the notion of nowhere denseness (or superflatness) coincides with stability if we restrict attention to subgraph-closed graph classes. The proof of Adler and Adler does not yield effective or computable upper bounds on the ladder index of a given formula for a given nowhere dense class of graphs, as it relies on the result of Podewski and Ziegler, which in turn invokes compactness for first order logic. \paragraph{Cardinality bounds.} One of the key insights provided by the work of Shelah is that stable classes can be characterized by admitting strong upper bounds on the cardinality of \emph{Stone spaces}. For a first order formula $\phi(\tup x,\tup y)$ with free variables partitioned into \emph{object variables} $\bar x$ and \emph{parameter variables} $\tup y$, a logical structure $\mathbb A$, and a subset of its domain $B$, define the set of \emph{$\phi$-types} with parameters from~$B$, which are realized in $\str{A}$, as follows\footnote{Here, $S^\phi(\mathbb A/B)$ is the set of types which are \emph{realized} in $\mathbb A$. In model theory, one usually works with the larger class of \emph{complete types}. 
This distinction will not be relevant here.}: \begin{equation}\label{eq:stone-def} S^\phi(\str{A}/B)=\left\{\big\{\tup v\ \in B^{|\bar y|}\, \colon\, \str{A}\models\phi(\tup u,\tup v)\big\} \colon\, \tup u\in V(\str{A})^{|\bar x|}\right\}\ \subset\ \mathcal{P}(B^{|\bar y|}). \end{equation} Here, $V(\str{A})$ denotes the domain of $\str{A}$ and $\mathcal{P}(X)$ denotes the powerset of $X$. Putting the above definition in words, every tuple $\tup u\in V(\str{A})^{|\bar x|}$ defines the set of those tuples $\tup v\in B^{|\bar y|}$ for which $\phi(\tup u,\tup v)$ holds. The set $S^\phi(\str{A}/B)$ consists of all subsets of $B^{|\bar y|}$ that can be defined in this way. Note that in principle, $S^\phi(\mathbb A/B)$ may be equal to $\mathcal{P}(B^{|\bar y|})$, and therefore have very large cardinality compared to $B$, even for very simple formulas. The following characterization due to Shelah (cf. \cite[Theorem 2.2, Chapter II]{shelah1990classification}) shows that for stable classes this does not happen. \begin{theorem}\label{thm:Shelah-stone-space} A class of structures $\cal C$ is stable if and only if there is an infinite cardinal $\kappa$ such that the following holds for all structures $\mathbb A$ in the elementary closure\footnote{The elementary closure of $\cal C$ is the class of all structures $\mathbb A$ such that every first order sentence $\phi$ which holds in $\mathbb A$ also holds in some $\mathbb B\in \cal C$. Equivalently, it is the class of models of the theory of $\cal C$.} of~$\cal C$, and all~$B\subset V(\mathbb A)$: $$ \textit{if\quad}|B|\le \kappa\textit {\quad then\quad} |S^\phi(\mathbb A/B)|\le \kappa.$$ \end{theorem} Therefore, Shelah's result provides an upper bound on the number of types, albeit using infinite cardinals, elementary limits, and infinite parameter sets. The cardinality bound provided by \cref{thm:Shelah-stone-space}, however, does not seem to immediately translate to a result of finitary nature. 
As we describe below, this can be done using the notions of {\em{VC-dimension}} and {\em{VC-density}}. \paragraph{VC-dimension and VC-density.} The notion of VC-dimension was introduced by Vapnik and Chervonenkis~\cite{chervonenkis1971theory} as a measure of complexity of set systems, or equivalently of hypergraphs. Over the years it has found important applications in many areas of statistics, discrete and computational geometry, and learning theory. Formally, VC-dimension is defined as follows. Let $X$ be a set and let $\mathcal{F}\subseteq \mathcal{P}(X)$ be a family of subsets of $X$. A subset $A\subseteq X$ is \emph{shattered by $\mathcal{F}$} if $\{A\cap F\colon F\in \mathcal{F}\}=\mathcal{P}(A)$; that is, every subset of $A$ can be obtained as the intersection of some set from $\mathcal{F}$ with $A$. The \emph{VC-dimension} of $\mathcal{F}$ is the maximum size of a subset $A\subseteq X$ that is shattered by $\mathcal{F}$. As observed by Laskowski~\cite{laskowski1992vapnik}, VC-dimension can be connected to concepts from stability theory introduced by Shelah. For a given structure $\mathbb A$, parameter set $B\subseteq V(\mathbb A)$, and formula $\phi(\bar x,\bar y)$, we may consider the family $S^\phi(\mathbb A/B)$ of subsets of $B^{|\tup y|}$ defined using equation~\eqref{eq:stone-def}. The \emph{VC-dimension} of $\phi(\bar x,\bar y)$ on $\mathbb A$ is the VC-dimension of the family $S^\phi(\mathbb A/V(\mathbb A))$. 
In other words, the VC-dimension of $\phi(\bar x,\bar y)$ on $\mathbb A$ is the largest cardinality of a finite set $I$ for which there exist families of tuples $(\bar u_i)_{i\in I}$ and $(\bar v_J)_{J\subset I}$ of elements of $\mathbb A$ such that $$\str{A}\models\phi(\tup{u}_i,\tup{v}_J)\Longleftrightarrow i\in J\qquad \text {for all $i\in I$ and $J\subset I$.}$$ A formula $\phi(\bar x,\bar y)$ is \emph{dependent} on a class of structures $\cal C$ if there is a bound $d\in\mathbb{N}$ such that the VC-dimension of $\phi(\bar x,\bar y)$ on $\mathbb A$ is at most $d$ for all $\mathbb A\in\cal C$. It is immediate from the definitions that if a formula $\phi(\bar x,\bar y)$ is stable over $\cal C$, then it is also dependent on $\cal C$ (the bound being the ladder index). A class of structures $\cal C$ is {\em{dependent}} if every formula $\phi(\bar x,\bar y)$ is dependent on $\cal C$. In particular, every stable class is dependent, and hence, by \cref{thm:adleradler}, every nowhere dense class of graphs is dependent. Examples of infinite dependent structures (treated as singleton classes) include $(\mathbb Q,\le )$ and the field of reals $(\mathbb R,+,\times,0,1)$. One of the main properties of VC-dimension is that it implies polynomial upper bounds on the number of different ``traces'' that a set system can have on a given parameter set. This is made precise by the well-known Sauer-Shelah Lemma, stated as follows. 
\begin{theorem}[Sauer-Shelah Lemma, \cite{chervonenkis1971theory,sauer1972density, shelah1972combinatorial}]\label{thm:sauer-shelah} For any family $\mathcal{F}$ of subsets of a set $X$, if the VC-dimension of $\mathcal{F}$ is $d$, then for every finite $A\subset X$, $$|\setof{A\cap F}{F\in {\cal F}}|\le c\cdot |A|^d, \qquad\textit{where $c$ is a universal constant.}$$ \end{theorem} In particular, this implies that in a dependent class of structures $\cal C$, for every formula $\phi(\bar x,\bar y)$ there exists some constant $d\in \mathbb{N}$ such that \begin{equation}\label{eq:nip} |S^\phi(\mathbb A/B)|\le c\cdot |B|^d, \end{equation} for all $\mathbb A\in\cal C$ and finite $B\subset V(\mathbb A)$. Unlike~\cref{thm:Shelah-stone-space}, this result is of a finitary nature: it provides quantitative upper bounds on the number of different definable subsets of a given finite parameter set. Together with~\cref{thm:adleradler}, this implies that for every nowhere dense class of graphs $\cal C$ and every first order formula $\phi(\bar x,\bar y)$, there exists a constant $d\in\mathbb{N}$ such that~\eqref{eq:nip} holds. However, the VC-dimension $d$ may be enormous, and it depends heavily on $\cal C$ and on the formula $\phi(\bar x,\bar y)$. This suggests investigating quantitative bounds of the form \eqref{eq:nip} for exponents smaller than the VC-dimension $d$, as it is conceivable that the combination of bounding the VC-dimension and applying the Sauer-Shelah Lemma yields a suboptimal upper bound. Our main goal is to decrease this exponent drastically in the setting of nowhere dense graph classes. The above discussion motivates \emph{VC-density}, a notion closely related to VC-dimension. The \emph{VC-density} (also called the \emph{VC-exponent}) of a set system $\cal F$ on an infinite set $X$ is the infimum of all reals $\alpha>0$ such that $|\setof{A\cap F}{F\in \cal F}|\in \mathcal{O}(|A|^\alpha)$, for all finite $A\subset X$.
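To make these definitions concrete, the following brute-force Python sketch (ours, exponential in $|X|$ and intended for illustration only) computes traces and the VC-dimension of a small set system. The family of initial segments of $\{0,\dots,5\}$ has VC-dimension $1$, and accordingly its number of traces on any finite $A$ is only linear in $|A|$, in accordance with the Sauer-Shelah Lemma.

```python
from itertools import combinations

def traces(family, A):
    # distinct intersections A ∩ F, for F ranging over the family
    return {frozenset(A) & frozenset(F) for F in family}

def is_shattered(family, A):
    # A is shattered iff every one of its 2^|A| subsets is a trace
    return len(traces(family, A)) == 2 ** len(A)

def vc_dimension(family, X):
    # brute force over all subsets of X; exponential, illustration only
    d = 0
    for k in range(1, len(X) + 1):
        if any(is_shattered(family, A) for A in combinations(X, k)):
            d = k
    return d

# initial segments {0,...,i} of X = {0,...,5}: no 2-element set {a,b}
# with a < b is shattered (the trace {b} alone is never realized),
# so the VC-dimension is 1
X = list(range(6))
F = [set(range(i + 1)) for i in X]
print(vc_dimension(F, X))            # 1
print(len(traces(F, [1, 2, 3, 4])))  # 5 traces on a 4-element set
```

For this family the number of traces on $A$ is at most $|A|+1$, matching the Sauer-Shelah bound with $d=1$.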
Similarly, the VC-density of a formula $\phi(\bar x,\bar y)$ over a class of structures~$\cal C$ is the infimum of all reals $\alpha>0$ such that $|S^\phi(\mathbb A/B)|\in \mathcal{O}(|B|^\alpha)$, for all $\mathbb A\in \cal C$ and all finite $B\subset V(\mathbb A)$. The Sauer-Shelah Lemma implies that the VC-density (of a set system, or of a formula over a class of structures) is bounded from above by the VC-dimension. However, in many cases, the VC-density may be much smaller than the VC-dimension. Furthermore, it is the VC-density, rather than VC-dimension, that is actually relevant in combinatorial and algorithmic applications~\cite{Bronnimann1995,matouvsek1998geometric,Matousek:2004:BVI:1005787.1005789}, see also \cref{sec:ep}. We refer to~\cite{aschenbrenner2016vapnik} for an overview of applications of VC-dimension and VC-density in model theory and to the surveys~\cite{furedi1991traces,matouvsek1998geometric} on uses of VC-density in combinatorics. \paragraph{The main result.} Our main result, \cref{thm:vc-density} stated below, improves the bound~\eqref{eq:nip} for classes of sparse graphs by providing essentially the optimum exponent. \newcounter{vcupper} \setcounter{vcupper}{\thetheorem} \begin{theorem}\label{thm:vc-density} Let $\mathcal{C}$ be a class of graphs and let $\phi(\tup x,\tup y)$ be a first order formula with free variables partitioned into object variables $\bar x$ and parameter variables $\bar y$. Let $\ell=|\bar x|$. Then: \begin{enumerate}[(1)] \item If $\mathcal{C}$ is nowhere dense, then for every $\epsilon>0$ there exists a constant~$c$ such that for every $G\in \mathcal{C}$ and every nonempty $A\subseteq V(G)$, we have $|S^\phi(G/A)|\leq c\cdot |A|^{\ell+\epsilon}.$ \item If $\mathcal{C}$ has bounded expansion, then there exists a constant~$c$ such that for every $G\in \mathcal{C}$ and every nonempty $A\subseteq V(G)$, we have $|S^\phi(G/A)|\leq c\cdot |A|^\ell$. 
\end{enumerate} \end{theorem} In particular, \cref{thm:vc-density} implies that the VC-density of any fixed formula $\phi(\bar x,\bar y)$ over any nowhere dense class of graphs is $|\bar x|$, the number of object variables in $\phi$. To see that the bounds provided by \cref{thm:vc-density} cannot be improved, consider a formula $\phi(\bar x,y)$ (i.e.\ with one parameter variable) expressing that $y$ is equal to one of the entries of $\bar x$. Then for each graph $G$ and parameter set $A$, $S^{\phi}(G/A)$ consists of all subsets of $A$ of size at most~$|\tup x|$, whose number is $\Theta(|A|^{|\tup x|})$. Note that this lower bound applies to any infinite class of graphs, even edgeless ones. We moreover show that, as long as we consider only subgraph-closed graph classes, the result of \cref{thm:vc-density} cannot be improved in terms of generality. The following result is an easy corollary of known characterizations of obstructions to being nowhere dense, respectively having bounded expansion. \newcounter{vclower} \setcounter{vclower}{\thetheorem} \begin{theorem}\label{thm:vc-density-lower-bound} Let $\mathcal{C}$ be a class of graphs which is closed under taking subgraphs. \begin{enumerate}[(1)] \item If $\mathcal{C}$ is not nowhere dense, then there is a formula $\phi(x,y)$ such that for every $n\in \mathbb{N}$ there are $G\in\mathcal{C}$ and $A\subseteq V(G)$ with $|A|=n$ and $|S^\phi(G/A)|=2^{|A|}$. \item If $\mathcal{C}$ does not have bounded expansion, then there is a formula $\phi(x,y)$ such that for every $c\in \mathbb{R}$ there exist $G\in\mathcal{C}$ and a nonempty $A\subseteq V(G)$ with $|S^\phi(G/A)|>c\cdot|A|$. \end{enumerate} \end{theorem} \paragraph{Neighborhood complexity.} To illustrate \cref{thm:vc-density}, consider the case when $G$ is a graph and $\phi(x,y)$ is the formula with two variables $x$ and $y$ expressing that the distance between $x$ and $y$ is at most $r$, for some fixed integer $r$.
In this case, $S^\phi(G/A)$ is the family consisting of all intersections $U\cap A$, for $U$ ranging over all balls of radius $r$ in $G$, and $|S^\phi(G/A)|$ is called the \emph{$r$-neighborhood complexity} of $A$. The concept of $r$-neighborhood complexity in sparse graph classes has been studied before. In particular, it was proved by Reidl et al.~\cite{reidl2016characterising} that in any graph class of bounded expansion, the $r$-neighborhood complexity of any set of vertices $A$ is $\mathcal{O}(|A|)$. Recently, Eickmeyer et al.~\cite{eickmeyer2016neighborhood} generalized this result to an upper bound of $\mathcal{O}(|A|^{1+\epsilon})$ in any nowhere dense class of graphs. Note that these results are special cases of \cref{thm:vc-density}. The study of $r$-neighborhood complexity on classes of bounded expansion and nowhere dense classes was motivated by algorithmic questions from the field of parameterized complexity. More precisely, the use of this notion was crucial for the development of a linear kernel for the {\sc{$r$-Dominating Set}} problem on any class of bounded expansion~\cite{drange2016kernelization}, and of an almost linear kernel for this problem on any nowhere dense class~\cite{eickmeyer2016neighborhood}. We will use the results of~\cite{drange2016kernelization,eickmeyer2016neighborhood,reidl2016characterising} on $r$-neighborhood complexity in sparse graphs in our proof of \cref{thm:vc-density}. \paragraph{Uniform quasi-wideness.} One of the main tools used in our proof is the notion of \emph{uniform quasi-wideness}, introduced by Dawar~\cite{dawar2010homomorphism} in the context of homomorphism preservation theorems.
Formally, a class of graphs $\mathcal{C}$ is \emph{uniformly quasi-wide} if for each integer $r\in\mathbb{N}$ there is a function $N_r\colon \mathbb{N}\rightarrow \mathbb{N}$ and a constant $s_r\in \mathbb{N}$ such that for every $m\in \mathbb{N}$, graph $G\in \mathcal{C}$, and vertex subset $A\subseteq V(G)$ of size $\abs{A}\geq N_r(m)$, there is a set $S\subseteq V(G)$ of size $\abs{S}\leq s_r$ and a set $B\subseteq A\setminus S$ of size $\abs{B}\geq m$ which is $r$-independent in $G-S$. Recall that a set $B\subseteq V(G)$ is {\em{$r$-independent}} in $G$ if all distinct $u,v\in B$ are at distance larger than $r$ in $G$. Ne\v{s}et\v{r}il and Ossona de Mendez proved that the notions of uniform quasi-wideness and nowhere denseness coincide for classes of finite graphs~\cite{nevsetvril2011nowhere}. The proof of Ne\v{s}et\v{r}il and Ossona de Mendez goes back to a construction of Kreidler and Seese~\cite{kreidler1998monadic} (see also Atserias et al.~\cite{atserias2006preservation}), and uses iterated Ramsey arguments. Hence the original bounds on the function $N_r$ are non-elementary. Recently, Kreutzer, Rabinovich and the second author proved that for each radius $r$, we may always choose the function~$N_r$ to be a polynomial~\cite{siebertz2016polynomial}. However, the exact dependence of the degree of the polynomial on $r$ and on the class $\mathcal{C}$ itself was not specified in~\cite{siebertz2016polynomial}, as the existence of a polynomial bound is derived from non-constructive arguments used by Adler and Adler in~\cite{adler2014interpreting} when showing that every nowhere dense class of graphs is stable.
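The combinatorial content of the definition can be tested directly on small graphs. The following Python sketch (ours; a brute-force search that is exponential in the size of the graph, intended only to illustrate the definition, not the efficient algorithms discussed below) looks for the sets $S$ and $B$:

```python
from itertools import combinations

def ball(adj, src, banned, r):
    # vertices within distance r of src in the graph with `banned` removed
    seen, frontier = {src}, [src]
    for _ in range(r):
        frontier = [v for u in frontier for v in adj[u]
                    if v not in banned and v not in seen]
        seen.update(frontier)
    return seen

def r_independent(adj, B, S, r):
    # pairwise distances within B exceed r in G - S
    B = list(B)
    return all(v not in ball(adj, u, set(S), r)
               for i, u in enumerate(B) for v in B[i + 1:])

def uqw_witness(adj, A, r, s, m):
    # brute-force search for S with |S| <= s and an r-independent
    # B ⊆ A \ S with |B| = m, as in the definition of quasi-wideness
    for k in range(s + 1):
        for S in combinations(adj, k):
            for B in combinations([a for a in A if a not in S], m):
                if r_independent(adj, B, S, r):
                    return set(S), set(B)
    return None

# a star: deleting the center makes the leaves pairwise far apart
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(uqw_witness(star, [1, 2, 3, 4], r=2, s=1, m=3))  # ({0}, {1, 2, 3})
```

On the star, no three leaves are $2$-independent in the graph itself, but deleting the single center vertex makes all leaves pairwise unreachable, illustrating the role of the small deletion set $S$.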
We remark that polynomial bounds for uniform quasi-wideness are essential for some of its applications: the fact that $N_r$ can be chosen to be polynomial was crucially used by Eickmeyer et al.~\cite{eickmeyer2016neighborhood} both to establish an almost linear upper bound on the $r$-neighborhood complexity in nowhere dense classes, and to develop an almost linear kernel for the {\sc{$r$-Dominating Set}} problem. We use this fact in our proof of \cref{thm:vc-density} as well. In our quest for constructive arguments, we give a new construction yielding polynomial bounds for uniform quasi-wideness. The new proof is considerably simpler than that of~\cite{siebertz2016polynomial} and gives explicit and computable bounds on the degree of the polynomial. More precisely, we prove the following theorem; here, the notation $\mathcal{O}_{r,t}(\cdot)$ hides computable factors depending on $r$ and $t$. \newcounter{uqw} \setcounter{uqw}{\thetheorem} \begin{theorem}\label{thm:new-uqw} For all $r,t\in \mathbb{N}$ there is a polynomial $N\colon \mathbb{N}\to \mathbb{N}$ with $N(m)= \mathcal{O}_{r,t}{(m^{{(4t+1)}^{2rt}})}$, such that the following holds. Let $G$ be a graph such that $K_t\not\preccurlyeq_{\lfloor 9r/2\rfloor} G$, and let $A\subseteq V(G)$ be a vertex subset of size at least $N(m)$, for a given $m$. Then there exists a set $S\subseteq V(G)$ of size $|S|<t$ and a set $B\subseteq A\setminus S$ of size $|B|\geq m$ which is $r$-independent in $G-S$. Moreover, given~$G$ and $A$, such sets $S$ and $B$ can be computed in time $\mathcal{O}_{r,t}(|A|\cdot |E(G)|)$. \end{theorem} We remark that even though the techniques employed to prove \cref{thm:new-uqw} are inspired by methods from stability theory, in the end we conduct an elementary graph-theoretic reasoning. In particular, as asserted in the statement, the proof can be turned into an efficient algorithm.
We also prove a result extending~\cref{thm:new-uqw} to the case where $A\subset V(G)^d$ is a set of \emph{tuples} of vertices, of any fixed length $d$. This result is essentially an adaptation of an analogous result due to Podewski and Ziegler~\cite{podewski1978stable} in the infinite case, but appears to be new in the context of finite structures. This more general result turns out to be necessary in the proof of \cref{thm:vc-density}. \paragraph{Local separation.} A simple albeit important notion which permeates our proofs is the graph-theoretic concept of \emph{local separation}. Let $G$ be a graph, let $S\subset V(G)$ be a set of vertices, and let $r\in \mathbb{N}$. We say that two sets of vertices $A$ and $B$ are \emph{$r$-separated} by $S$ (in $G$) if every path from a vertex in $A$ to a vertex in $B$ of length at most $r$ contains a vertex from $S$ (cf.~Fig.~\ref{fig:sep}). \begin{figure}[h!] \centering \includegraphics[scale=0.3,page=1]{pics} \caption{The sets $A$ and $B$ are $2$-separated by $S$.} \label{fig:sep} \end{figure} Observe that taking $r=\infty$ in $r$-separation yields the familiar notion of a separation in graph theory. From the perspective of stability, separation (for $r=\infty$) characterizes \emph{forking independence} in superflat graphs~\cite{ivanov}. Therefore, $r$-separation can be thought of as a local analogue of forking independence, for nowhere dense graph classes. A key lemma concerning $r$-separation (cf.~\cref{cor:bound}) states that if $A$ and $B$ are $r$-separated by a set $S$ of size $s$ in $G$, then for any fixed formula $\phi(\bar x,\bar y)$ of quantifier rank $\mathcal{O}(\log r)$, the set $\{\{\tup v\in B^{|\bar y|} : G\models\phi(\tup u,\tup v)\} : \tup u\in A^{|\bar x|}\}$ has cardinality bounded by a constant depending on $s$ and $\phi$ only (and not on $G$, $A$, and $B$).
This elementary result combines Gaifman's locality of first order logic (cf.~\cite{gaifman1982local}) and a Feferman-Vaught compositionality argument. This, in combination with the polynomial bounds for uniform quasi-wideness (\cref{thm:new-uqw} and its extension to tuples, \cref{thm:uqw-tuples}) and the previous results on neighborhood complexity~\cite{drange2016kernelization,eickmeyer2016neighborhood}, constitutes the main ingredients of the proof of our main result,~\cref{thm:vc-density}. \paragraph{A duality theorem.} As an example application of our main result,~\cref{thm:vc-density}, we prove the following theorem. \newcounter{ep} \setcounter{ep}{\thetheorem} \begin{theorem}\label{thm:erdos-posa} Fix a nowhere dense class of graphs $\mathcal{C}$ and a formula $\phi(x,y)$ with two free variables $x,y$. Then there is a function $f\colon \mathbb{N}\to\mathbb{N}$ with the following property. Let $G\in \mathcal{C}$ be a graph and let $\cal G$ be a family of subsets of $V(G)$ consisting of sets of the form $\setof{v\in V(G)}{\phi(u, v)}$, where~$u$ is some vertex of $V(G)$. Then~$\tau({\cal G})\le f(\nu(\cal G))$. \end{theorem} \noindent Above, $\tau(\cal G)$ denotes the \emph{transversality} of $\cal G$, i.e., the least cardinality of a set $X$ that intersects every set in $\cal G$, and $\nu(\cal G)$ denotes the \emph{packing number} of $\cal G$, i.e., the largest number of pairwise disjoint members of $\cal G$. \Cref{thm:erdos-posa} is an immediate consequence of the bound given by~\cref{thm:vc-density} and a result of Matou{\v s}ek~\cite{Matousek:2004:BVI:1005787.1005789}. We remark that a similar but incomparable result is proved by Bousquet and Thomass{\'e}~\cite{BousquetT15}. In their result, the assumption on $\mathcal{C}$ is weaker, since they just require that it has \emph{bounded distance VC-dimension}, but the assumption on $\cal G$ is stronger, as it is required to be the set of all balls of a fixed radius.
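The notion of $r$-separation introduced above is straightforward to test algorithmically: a breadth-first search restricted to $G-S$ and truncated at depth $r$ decides whether some path of length at most $r$ avoids $S$. A small Python sketch (ours, for illustration):

```python
def r_separated(adj, A, B, S, r):
    # S r-separates A and B iff no path of length <= r from A to B
    # avoids S; equivalently, no vertex of B \ S lies within distance r
    # of A \ S in the graph G - S
    S = set(S)
    reach = {a for a in A if a not in S}
    frontier = set(reach)
    for _ in range(r):
        frontier = {v for u in frontier for v in adj[u]
                    if v not in S and v not in reach}
        reach |= frontier
    return not (reach & (set(B) - S))

# a path 0-1-2-3-4: the middle vertex r-separates the endpoints for all r
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(r_separated(path, [0], [4], [2], 10))  # True
print(r_separated(path, [0], [4], [], 4))    # False: a path of length 4 exists
print(r_separated(path, [0], [4], [], 3))    # True: every A-B path is longer
```

The last two calls illustrate that $r$-separation by the empty set holds precisely when $A$ and $B$ are at distance greater than $r$, while $r=\infty$ recovers ordinary separation.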
\paragraph{Stability.} Finally, we observe that we can apply our tools to give a constructive proof of the result of Adler and Adler~\cite{adler2014interpreting} that every nowhere dense class is stable, which yields computable upper bounds on ladder indices. More precisely, we translate the approach of Podewski and Ziegler~\cite{podewski1978stable} to the finite setting and replace the key non-constructive application of compactness with a combinatorial argument based on Gaifman's locality, in the flavor of our observations on $r$-separation (\cref{cor:bound}). The following theorem summarizes our result. \newcounter{stable} \setcounter{stable}{\thetheorem} \begin{theorem}\label{thm:new-stable} There are computable functions $f\colon \mathbb{N}^3\to\mathbb{N}$ and $g\colon\mathbb{N}\to\mathbb{N}$ with the following property. Suppose $\phi(\bar x,\bar y)$ is a formula of quantifier rank at most $q$ and with $d$ free variables. Suppose further that $G$ is a graph excluding $K_t$ as a depth-$g(q)$ minor. Then the ladder index of $\phi(\bar x,\bar y)$ in $G$ is at most $f(q,d,t)$. \end{theorem} \pagebreak \paragraph{Organization.} In \cref{sec:prelim} we recall some standard concepts from the theory of sparse graphs. In~\cref{sec:uqw} we prove \cref{thm:new-uqw}, improving the previously known bounds and making them constructive. We remark that this result is not needed in the proof of our main result,~\cref{thm:vc-density}. The following two sections contain the main tools needed in the proof of the main result: in~\cref{sec:uqw-tuples} we formulate and prove the generalization of uniform quasi-wideness to tuples,~\cref{thm:uqw-tuples}, and in \cref{sec:gaifman} we discuss Gaifman locality for first order logic and derive an elementary variant concerning local separators. In \cref{sec:types} we prove our main result, \cref{thm:vc-density}, and the corresponding lower bounds, \cref{thm:vc-density-lower-bound}.
Finally, in \cref{sec:stable} we provide a constructive proof of the result of Adler and Adler, \cref{thm:new-stable}. \paragraph{Acknowledgments.} We would like to thank Patrice Ossona de Mendez for pointing us to the question of studying VC-density of nowhere dense graph classes. \section{Bounds for uniform quasi-wideness}\label{sec:uqw} In this section we prove \cref{thm:new-uqw}, which strengthens \cref{thm:krs} by providing an explicit polynomial $N$ and bound $s$, whereas the bounds in~\cref{thm:krs} rely on non-constructive arguments. We note that~\cref{thm:krs} is sufficient to prove our main result,~\cref{thm:vc-density}, but \cref{thm:new-uqw} is required in our proof of~\cref{thm:new-stable}, the effective variant of the result of Adler and Adler. \paragraph{General strategy.} Our proof follows the same lines as the original proof of Ne\v set\v ril and Ossona de Mendez~\cite{nevsetvril2011nowhere}, with the difference that in the key technical lemma (\cref{lem:apex} below), we improve the bounds significantly by replacing a Ramsey argument with a refined combinatorial analysis. The new argument essentially originates in the concept of {\em{branching index}} from stability theory. We first prove a restricted variant,~\cref{lem:engine} below, in which we assume that $A$ is already $(r-1)$-independent. Then, in order to derive \cref{thm:new-uqw}, we apply the lemma iteratively for $r$ ranging from $1$ to the target value. \begin{lemma}\label{lem:engine} For every pair of integers $t,r\in \mathbb{N}$ there exists an integer $d<9r/2$ and a function $L\colon \mathbb{N}\to \mathbb{N}$ with $L(m)=\mathcal{O}_{r,t}{(m^{{(4t+1)}^{2rt}})}$ such that the following holds. For each $m\in \mathbb{N}$, graph~$G$ with $K_t\not\preccurlyeq_{d} G$, and $(r-1)$-independent set $A\subseteq V(G)$ of size at least $L(m)$, there is a set $S\subseteq V(G)-A$ of size less than $t$ such that $A$ contains a subset $B$ of size $m$ which is $r$-independent in $G-S$.
Moreover, if $r$ is odd then $S$ is empty, and if $r$ is even, then every vertex of $S$ is at distance exactly $r/2$ from every vertex of $B$. Finally, given $G$ and $A$, the sets $B$ and $S$ can be computed in time $\mathcal{O}_{r,t}(|A|\cdot |E(G)|)$. \end{lemma} We prove~\cref{lem:engine} in \Cref{sec:engine}, but a very rough sketch is as follows. The case of general~$r$ reduces to the case $r=1$ or $r=2$, depending on the parity of $r$, by contracting the balls of radius $\lfloor \frac{r-1}{2}\rfloor$ around the vertices in $A$ to single vertices. The case of $r=1$ follows immediately from Ramsey's theorem, as in~\cite{nevsetvril2011nowhere}. The case $r=2$ is substantially more difficult. We start by formulating and proving the main technical result needed for proving the case $r=2$. \subsection{The main technical lemma} \label{sec:main-tech} The following Ramsey-like result is the main technical lemma used in the proof of~\cref{thm:new-uqw}. \pagebreak \begin{lemma}\label{lem:apex} Let $\ell,m,t\in \mathbb{N}$ and assume $\ell\geq t^{8}$. If~$G$ is a graph and $A$ is a $1$-independent set in~$G$ with at least $(m+\ell)^{2t}$ elements, then at least one of the following conditions holds: \begin{itemize} \item $K_t\preccurlyeq_{4} G$, \item $A$ contains a $2$-independent set of size $m$, \item some vertex $v$ of $G$ has at least $\ell^{1/4}$ neighbors in $A$. \end{itemize} Moreover, if $K_t\not\preccurlyeq_4 G$, the structures described in the other two cases (a $2$-independent set of size~$m$, or a vertex $v$ as above) can be computed in time $\mathcal{O}_t(|A|\cdot |E(G)|)$. \end{lemma} We remark that a statement similar to that of \cref{lem:apex} can be obtained by employing Ramsey's theorem, as has been done in~\cite{nevsetvril2011nowhere}. This, however, does not give a bound which is polynomial in $m+\ell$, and thus cannot be used to prove~\cref{thm:new-uqw}. \medskip The remainder of this section is devoted to the proof of~\cref{lem:apex}.
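Before turning to the proof, note that the last two outcomes of \cref{lem:apex} are easy to test exhaustively on small instances (deciding the shallow-minor case is more expensive and is omitted here). A brute-force Python sketch (ours, for illustration only):

```python
from itertools import combinations

def two_independent_subset(adj, A, m):
    # search A for m vertices with pairwise distance > 2 in G,
    # i.e. no edge and no common neighbor between any two of them
    def close(u, v):
        return v in adj[u] or any(w in adj[v] for w in adj[u])
    for B in combinations(A, m):
        if all(not close(u, v) for u, v in combinations(B, 2)):
            return set(B)
    return None

def heavy_vertex(adj, A, threshold):
    # a vertex with at least `threshold` neighbors in A, if one exists
    A = set(A)
    for v in adj:
        if len(A & set(adj[v])) >= threshold:
            return v
    return None

# a star: any two leaves share the center, so no pair of leaves is
# 2-independent, while the center is adjacent to all of A
star = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}
print(two_independent_subset(star, [1, 2, 3, 4, 5], 2))  # None
print(heavy_vertex(star, [1, 2, 3, 4, 5], 5))            # 0
```

The star exhibits the trade-off in \cref{lem:apex}: a large $1$-independent set with no sizable $2$-independent subset forces a vertex with many neighbors in $A$.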
We will use the following bounds on the edge density of graphs with excluded shallow minors obtained by Alon et al.~\cite{alon2003turan}. \begin{lemma}[Theorem 2.2 in~\cite{alon2003turan}]\label{lem:densitynd} Let $H$ be a bipartite graph with maximum degree $d$ on one side. Then there exists a constant $c_H$, depending only on $H$, such that every $n$-vertex graph $G$ excluding~$H$ as a subgraph has at most $c_H\cdot n^{2-1/d}$ edges. \end{lemma} Observe that if $K_t\not\preccurlyeq_1G$, then in particular the $1$-subdivision of $K_t$ is excluded as a subgraph of $G$ (the $1$-subdivision of a graph $H$ is obtained by replacing every edge of $H$ by a path of length $2$). Moreover, the $1$-subdivision of $K_t$ is a bipartite graph with maximum degree $2$ on one side. Furthermore, it is easy to check in the proof of Theorem 2.2 in~\cite{alon2003turan} that $c_H\leq |V(H)|$ in case $d=2$. Since the $1$-subdivision of $K_t$ has $\binom{t+1}{2}$ vertices, we can choose $c_{K_t}=\binom{t+1}{2}$ and conclude the following. \begin{corollary}\label{crl:densitynd} Let $G$ be an $n$-vertex graph such that $K_t\not\preccurlyeq_1 G$ for some constant $t\in \mathbb{N}$. Then $G$ has at most $\binom{t+1}{2}\cdot n^{3/2}$ edges. \end{corollary} We will use the following standard lemma saying that a shallow minor of a shallow minor is a shallow minor, where the parameters of shallowness are appropriately chosen. \begin{lemma}[adaptation of Proposition 4.11 in~\cite{sparsity}]\label{lem:combineminors} Suppose $J,H,G$ are graphs such that $H\preccurlyeq_a G$ and $J\preccurlyeq_b H$, for some $a,b\in \mathbb{N}$. Then $J\preccurlyeq_c G$, where $c=2ab+a+b$. \end{lemma} We will need one more technical lemma. \begin{lemma}\label{lem:diversity} Let $G$ be a graph such that $K_t\not\preccurlyeq_4G$ for some $t\in\mathbb{N}$ and let $A\subseteq V(G)$ with $|A|\geq t^{8}$. Assume furthermore that every pair of elements of $A$ has a common neighbor in $V(G)\setminus A$. 
Then there exists a vertex $v$ in $V(G)\setminus A$ which has at least $|A|^{1/4}$ neighbors in $A$. \end{lemma} \begin{proof} Denote $k=\max\{\,|N(w)\cap A|\ \colon\ w\in V(G)-A\,\}$; our goal is to prove that $k\geq |A|^{1/4}$. Let $B\subseteq V(G)-A$ be the set of those vertices outside of $A$ that have a neighbor in $A$. Construct a function $f\colon B\to A$ by a random procedure as follows: for each vertex $v\in B$, choose $f(v)$ uniformly and independently at random from the set $N(v)\cap A$. Next, for each $u\in A$ define the branch set $I_u=G[\{u\}\cup f^{-1}(u)]$. Observe that since, by construction, $v$ and $f(v)$ are adjacent for all $v\in B$, each branch set $I_u$ has radius at most $1$, with $u$ being the central vertex. Also, the branch sets $\{I_u\}_{u\in A}$ are pairwise disjoint. Finally, construct a graph $H$ on vertex set $A$ by making distinct $u,v\in A$ adjacent in $H$ whenever there is an edge in $G$ between the branch sets~$I_u$ and $I_v$. Then the branch sets $\{I_u\}_{u\in A}$ witness that $H$ is a $1$-shallow minor of $G$. For distinct $u,v\in A$, let us estimate the probability that the edge $uv$ appears in $H$. By assumption, there is a vertex $w\in B$ that is adjacent both to $u$ and to $v$. Observe that if it happens that $f(w)=u$ or $f(w)=v$, then $uv$ certainly becomes an edge of $H$. Since $w$ has at most $k$ neighbors in $A$, the probability that $f(w)\in \{u,v\}$ is at least $\frac{2}{k}$. By the linearity of expectation, the expected number of edges in $H$ is at least $\binom{|A|}{2}\cdot \frac{2}{k}=\frac{|A|(|A|-1)}{k}$. Hence, there is at least one run of the random experiment for which $H$ indeed has at least this many edges. On the other hand, observe that $K_t\not\preccurlyeq_1 H$; indeed, since $H\preccurlyeq_1 G$, by Lemma~\ref{lem:combineminors} we infer that $K_t\preccurlyeq_1 H$ would imply $K_t\preccurlyeq_4 G$, a contradiction with the assumptions on $G$.
Then Corollary~\ref{crl:densitynd} implies that $H$ has at most $\binom{t+1}{2}\cdot |A|^{3/2}$ edges. Observe that $\binom{t+1}{2}\cdot |A|^{3/2}\leq 3t^2/4\cdot |A|^{3/2}\leq \frac{3}{4}|A|^{7/4}$, where the first inequality holds due to $t\geq 2$, while the second holds by the assumption that $|A|\geq t^8$. By combining the above bounds, we obtain $$\frac{|A|(|A|-1)}{k}\leq \frac{3}{4}|A|^{7/4},$$ which implies $k\geq |A|^{1/4}$ due to $|A|\geq t^8\geq 64$. \end{proof} We proceed with the proof of \cref{lem:apex}. The idea is to arrange the elements of $A$ in a binary tree and prove that provided $A$ is large, this tree contains a long path. From this path, we will extract the desired structures. In stability theory, similar trees are called \emph{type trees} and they are used to extract long indiscernible sequences, see e.g.~\cite{malliaris2014regularity}. We will work with a two-symbol alphabet $\set{\mathrm{D},\mathrm{S}}$, for {\em{daughter}} and {\em{son}}. We identify words in $\set{\mathrm{D},\mathrm{S}}^*$ with \emph{nodes} of the infinite rooted binary tree. The \emph{depth} of a node $w$ is the length of $w$. For $w\in \set{\mathrm{D},\mathrm{S}}^*$, the nodes $w\mathrm{D}$ and $w\mathrm{S}$ are called, respectively, the \emph{daughter} and the \emph{son} of $w$, and $w$ is the \emph{parent} of both $w\mathrm{S}$ and $w\mathrm{D}$. A node $w'$ is a {\em{descendant}} of a node $w$ if $w$ is a prefix of $w'$ (possibly $w'=w$). We consider finite, labeled, rooted, binary trees, which are called simply trees below, and are defined as follows. For a set of labels $U$, a ($U$-labeled) \emph{tree} is a partial function $\tau\colon \set{\mathrm{D},\mathrm{S}}^*\to U$ whose domain is a finite set of nodes, called the \emph{nodes of $\tau$}, which is closed under taking parents. If $v$ is a node of $\tau$, then $\tau(v)$ is called its \emph{label}.
Let $G$ be a graph, $A\subset V(G)$ be a $1$-independent set in $G$, and $\bar a$ be any enumeration of $A$, that is, a sequence of length $|A|$ in which every element of $A$ appears exactly once. We define a binary tree $\tau$ which is labeled by vertices of $G$. The tree is defined by processing all elements of~$\bar a$ sequentially. We start with $\tau$ being the tree with empty domain, and for each element $a$ of the sequence $\bar a$, processed in the order given by $\bar a$, execute the following procedure which results in adding a node with label $a$ to $\tau$. When processing the vertex $a$, do the following. Start with $w$ being the empty word. While~$w$ is a node of $\tau$, repeat the following step: if the distance from $a$ to $\tau(w)$ in the graph $G$ is at most~$2$, replace $w$ by its son, otherwise, replace $w$ by its daughter. Once $w$ is not a node of $\tau$, extend $\tau$ by setting $\tau(w)=a$. In this way, we have processed the element $a$, and now proceed to the next element of $\bar a$, until all elements are processed. This ends the construction of $\tau$. Thus, $\tau$ is a tree labeled with vertices of $A$, and every vertex of $A$ appears exactly once in $\tau$. Define the \emph{depth} of $\tau$ as the maximal depth of a node of $\tau$. For a word $w$, an \emph{alternation} in~$w$ is any position $\alpha$, $1\leq \alpha\leq |w|$, such that $w_\alpha\neq w_{\alpha-1}$; here, $w_\alpha$ denotes the $\alpha$th symbol of~$w$, and~$w_0$ is assumed to be $\mathrm{D}$. The \emph{alternation rank} of the tree $\tau$ is the maximum of the number of alternations in $w$, over all nodes $w$ of $\tau$. \begin{lemma}\label{lem:number-of-nodes} Let $h,t\ge 2$. If $\tau$ has alternation rank at most $2t-1$ and depth at most $h-1$, then~$\tau$ has fewer than $h^{2t}$ nodes. 
\end{lemma} \begin{proof} With each node $w$ of $\tau$ associate a function $f_w\colon \set{1,\ldots,2t}\to\set{1,\ldots,h}$ defined as follows: $f_w$ maps each $i\in \set{1,\ldots,2t}$ to the $i$th alternation of $w$, provided $i$ is at most the number of alternations of $w$, and otherwise we put $f_w(i)=|w|+1$. It is clear that the mapping $w\mapsto f_w$ for nodes $w$ of $\tau$ is injective and its image is contained in the set of monotone functions from $\set{1,\ldots,2t}$ to $\set{1,\ldots,h}$, whose number is less than $h^{2t}$. Hence, the domain of~$\tau$ has fewer than $h^{2t}$ elements. \end{proof} \begin{lemma}\label{thm:alternation-rank-type-tree} Suppose that $K_t\not\preccurlyeq_{2} G$. Then $\tau$ has alternation rank at most $2t-1$. \end{lemma} \begin{proof} Let $w$ be a node of $\tau$ with at least $2k$ alternations, for some $k\in \mathbb{N}$. Let $\alpha_1,\beta_1,\ldots,\alpha_k,\beta_k$ be the first $2k$ alternations of $w$. By the assumption that $w_0=\mathrm{D}$ we have that~$w$ contains the symbol $\mathrm{S}$ at all positions $\alpha_i$ for $i=1,\ldots,k$, and the symbol $\mathrm{D}$ at all positions $\beta_i$ for $i=1,\ldots,k$. For each $i\in \set{1,\ldots,k}$, define $a_i\in V(G)$ to be the label in $\tau$ of the prefix of $w$ of length $\alpha_i-1$, and similarly define $b_i\in V(G)$ to be the label in $\tau$ of the prefix of $w$ of length $\beta_i-1$. It follows that for each $i\in \set{1,\ldots,k}$, the following assertions hold: the nodes in $\tau$ with labels $b_i,a_{i+1},b_{i+1},\ldots,a_k,b_k$ are descendants of the son of the node with label $a_i$, and the nodes with labels $a_{i+1},b_{i+1},\ldots,a_k,b_k$ are descendants of the daughter of the node with label $b_i$. \begin{claim}\label{claim:minor} For every pair $a_i,b_j$ with $1\le i\le j\le k$, there is a vertex $z_{ij}\not\in A$ which is a common neighbor of $a_i$ and $b_j$, and is not a neighbor of any $b_s$ with $s\neq j$.
\end{claim} \begin{clproof} Note that since $i\le j$, the node with label $b_j$ is a descendant of the son of the node with label $a_i$, hence we have $\mathrm{dist}_G(a_i,b_j)\le 2$ by the construction of $\tau$. However, we also have $\mathrm{dist}_G(a_i,b_j)>1$ since $A$ is $1$-independent. Therefore $\mathrm{dist}_G(a_i,b_j)=2$, so there is a vertex $z_{ij}$ which is a common neighbor of $a_i$ and $b_j$; note that $z_{ij}\notin A$, since $z_{ij}$ is adjacent to $a_i\in A$ and $A$ is $1$-independent. Suppose that $z_{ij}$ were a neighbor of $b_s$, for some $s\neq j$. This would imply that $\mathrm{dist}_G(b_j,b_s)\le 2$, which is impossible, because the nodes with labels~$b_s$ and $b_j$ in $\tau$ are such that one is a descendant of the daughter of the other, implying that $\mathrm{dist}_G(b_s,b_j)>2$. \end{clproof} Note that whenever $i\leq j$ and $i'\leq j'$ are such that $j\neq j'$, the vertices $z_{ij}$ and $z_{i'j'}$ are different, because $z_{ij}$ is adjacent to $b_{j}$ but not to $b_{j'}$, and the converse holds for $z_{i'j'}$. However, it may happen that $z_{ij}=z_{i'j}$ even if $i\neq i'$. This will not affect our further reasoning. For each $j\in\set{1,\ldots,k}$, let $B_j$ be the subgraph of $G$ induced by the set $\set{a_j,b_j}\cup\set{z_{ij}\colon 1\le i\le j}$. Observe that $B_j$ is connected and has radius at most $2$, with $b_j$ being the central vertex. By \cref{claim:minor} and the discussion from the previous paragraph, the graphs $B_j$ for $j\in \set{1,\ldots,k}$ are pairwise disjoint. Moreover, for all $1\le i<j\le k$, there is an edge between $B_i$ and $B_j$, namely, the edge between $z_{ij}\in B_j$ and $a_i\in B_i$. Hence, the graphs $B_j$, for $j\in \set{1,\ldots,k}$, define a depth-$2$ minor model of $K_k$ in $G$. Since $K_t\not\preccurlyeq_{2} G$, this implies that $k<t$, proving the lemma. \end{proof} We continue with the proof of~\cref{lem:apex}. Fix integers $\ell\ge t^8$ and~$m$, and define $h=m+\ell$. Let $A$ be a $1$-independent set in $G$ of size at least $h^{2t}$.
Suppose that the first case of \cref{lem:apex} does not hold. In particular, $K_t\not\preccurlyeq_2 G$, so by \cref{thm:alternation-rank-type-tree},~$\tau$ has alternation rank at most $2t-1$. From \cref{lem:number-of-nodes} we conclude that $\tau$ has depth at least~$h$. As $h=m+\ell$, it follows that either $\tau$ has a node $w$ which contains at least $m$ letters~$\mathrm{D}$, or $\tau$ has a node $w$ which contains at least $\ell$ letters $\mathrm{S}$. Consider the first case, i.e., there is a node $w$ of $\tau$ which contains at least $m$ letters $\mathrm{D}$, and let $X$ be the set of all vertices $\tau(u)$ such that $u\mathrm{D}$ is a prefix of $w$. Then, by construction, $X$ is a $2$-independent set in $G$ of size at least $m$, so the second case of the lemma holds. Finally, consider the second case, i.e., there is a node $w$ in $\tau$ which contains at least $\ell$ letters~$\mathrm{S}$. Let $Y$ be the set of all vertices $\tau(u)$ such that $u\mathrm{S}$ is a prefix of $w$. Then, by construction, $Y\subset A$ is a set of at least $\ell$ vertices which are mutually at distance exactly $2$ in $G$. Since $K_t\not\preccurlyeq_4 G$ and $\ell\geq t^8$, by~\cref{lem:diversity} we infer that there is a vertex $v\in V(G)$ with at least $\ell^{1/4}$ neighbors in $Y$. This finishes the proof of the existential part of~\cref{lem:apex}. For the algorithmic part, the proof above yields an algorithm which first constructs the tree $\tau$, by iteratively processing each vertex $w$ of $A$ and testing whether the distance between $w$ and each vertex processed already is equal to $2$. This amounts to running a breadth-first search from every vertex of $A$, which can be done in time $\mathcal{O}(|A|\cdot |E(G)|)$. Whenever a node with $2t$ alternations is inserted into $\tau$, we can exhibit in $G$ a depth-$2$ minor model of $K_t$. Whenever a node with at least $m$ letters $\mathrm{D}$ is added to~$\tau$, we have constructed a $2$-independent set of size $m$.
Whenever a node with at least $\ell$ letters $\mathrm{S}$ is added to $\tau$, as argued, there must be some vertex $v\in V(G)-A$ with at least $\ell^{1/4}$ neighbors in~$A$. To find such a vertex, scan through all neighborhoods of vertices $v\in A$ in the graph $G$, and then select a vertex $w\in V(G)$ which belongs to the largest number of those neighborhoods; this can be done in time $\mathcal{O}(|E(G)|)$. The overall running time is $\mathcal{O}(|A|\cdot |E(G)|)$, as required. \subsection{Proof of~\cref{lem:engine}} \label{sec:engine} With \cref{lem:apex} proved, we can proceed with~\cref{lem:engine}. We start with the case $r=1$, then we move to the case $r=2$. Next, we show how the general case reduces to one of those two cases. \paragraph{Case $r=1$.} We put $d=0$, thus we assume that $K_t\not\preccurlyeq_0 G$; that is, $G$ does not contain a clique of size $t$ as a subgraph. By Ramsey's Theorem, in every graph every vertex subset of size $\binom{m+t-2}{t-1}$ contains an independent set of size $m$ or a clique of size $t$. Therefore, taking $L(m)$ to be the above binomial coefficient yields~\cref{lem:engine} in the case $r=1$, for $S=\emptyset$. Note here that $\binom{m+t-2}{t-1}\in\mathcal{O}_{t}{(m^{{(4t+1)}^{2t}})}$. Moreover, such an independent set or clique can be computed from $G$ and $A$ in time~$\mathcal{O}(|A|\cdot |E(G)|)$ by simulating the proof of Ramsey's theorem. \paragraph{Case $r=2$.} We put $d=2$, thus we assume that $K_t\not\preccurlyeq_4 G$. We show that if $A$ is a sufficiently large $1$-independent set in a graph $G$ such that $K_t\not\preccurlyeq_4 G$, then there is a set of vertices~$S$ of size less than $t$ such that $A\setminus S$ contains a subset of size $m$ which is $2$-independent in $G-S$. Here, by ``sufficiently large'' we mean of size at least $L(m)$, for $L(m)$ emerging from the proof.
To this end, we shall iteratively apply \cref{lem:apex} as long as it results in the third case, yielding a vertex $v$ with many neighbors in $A$. In this case, we add the vertex $v$ to the set $S$, and apply the lemma again, restricting $A$ to $A\cap N(v)$. Precise calculations follow. Fix a number $\beta>4t$. For $k\ge 0$, define $m_k=((k+1)\cdot m)^{(2\beta)^k}$. In the following we will always assume that $m\geq t^8$. We will apply~\cref{lem:apex} in the following form. \begin{claim}\label{cor:apex} If $G$ is a graph such that $K_t\not\preccurlyeq_4 G$, and $A\subset V(G)$ is a $1$-independent set in $G$ which does not contain a $2$-independent set of size $m$ and satisfies $|A|\ge m_k$, for some $k\geq 1$, then there exists a vertex $v\in V(G)-A$ such that $|N_G(v)\cap A| \ge m_{k-1}$. \end{claim} \begin{clproof} Let $\ell=(k\cdot m)^{4\cdot(2\beta)^{k-1}}$. Then $m\ge t^8$ implies that $\ell\ge t^8$. Observe that \[|A|\ge \left((k+1)\cdot m\right)^{(2\beta)^k}\ge\left ((m+ k\cdot m)^{4\cdot(2\beta)^{k-1}} \right)^{2t} \ge \left(m+(k\cdot m)^{4\cdot (2\beta)^{k-1}}\right)^{2t}=(m+\ell)^{2t}.\] Therefore, we may apply \cref{lem:apex}, yielding a vertex $v$ with at least $\ell^{1/4}=(k\cdot m)^{(2\beta)^{k-1}}=m_{k-1}$ neighbors in~$A$. \end{clproof} We will now find a subset of $A$ of size $m$ which is $2$-independent in $G-S$, for some $S$ with $|S|<t$. Assume that $|A|\ge m_t$. By induction, we construct a sequence $A=A_0\supseteq A_1\supseteq\ldots$ of \mbox{$1$-independent} vertex subsets of $G$ of length at most $t$ such that $|A_i|\ge m_{t-i}$, as follows. Start with $A_0=A$. We maintain a set $S$ of vertices of $G$ which is initially empty, and we maintain the invariant that $A_i$ is disjoint from $S$ at each step of the construction. For $i=0,1,2,\ldots$ do as follows. If $A_{i}$ contains a subset of size $m$ which is a $2$-independent set in $G-S$, terminate.
Otherwise, apply~\cref{cor:apex} to the graph $G-S$ with $1$-independent set $A_{i}$ of size $|A_i|\ge m_{t-i}$. This yields a vertex $v_{i+1}\in V(G)-(S\cup A_i)$ whose neighborhood in $G-S$ contains at least $m_{t-i-1}$ vertices of $A_{i}$. Define $A_{i+1}$ as the set of neighbors of $v_{i+1}$ in $A_i$, and add $v_{i+1}$ to the set~$S$. Increment $i$ and repeat. \begin{claim}\label{claim:at-most-t} The construction halts after less than $t$ steps. \end{claim} \begin{clproof} Suppose that the construction proceeds for $k\le t$ steps. By construction, each vertex~$v_i$, for $i\le k$, is adjacent in $G$ to all the vertices of $A_{j}$, for each $i\le j\le k$. In particular, all the vertices $v_1,\ldots,v_k$ are adjacent to all the vertices of $A_{k}$ and $|A_k|\ge m_{t-k}\ge m\ge t$. Choose any pairwise distinct vertices $w_1,\ldots,w_k\in A_k$ and observe that the connected subgraphs $G[\set{w_i,v_i}]$ of~$G$ yield a depth-$1$ minor model of $K_k$ in $G$. Since $K_t\not\preccurlyeq_2 G$, we must have $k<t$. \end{clproof} Therefore, at some step $k<t$ of the construction we must have obtained a $2$-independent subset $B$ of $G-S$ of size $m$. Moreover, $|S|\le k<t$. This proves~\cref{lem:engine} in the case $r=2$, for the function $L(m)$ defined as $L(m)=((t+1)\cdot m)^{\beta^{2t}}$ for $m\ge t^8$, and $L(m)=L(t^8)$ for $m<t^8$, where $\beta>4t$ is any fixed constant; note that $L(m)\ge m_t$, so the argument above applies. It is easy to see that then $L(m)\in \mathcal{O}_{t}{(m^{{(4t+1)}^{2t}})}$, provided we put $\beta=4t+1$. Also, the proof easily yields an algorithm constructing the sets~$B$ and $S$, which amounts to applying at most $t$ times the algorithm of~\cref{lem:apex}. Hence, its running time is $\mathcal{O}_{r,t}(|A|\cdot |E(G)|)$, as required. \paragraph{Odd case.} We now prove~\cref{lem:engine} in the case when $r=2s+1$, for some integer $s\geq 1$. We put $d=s=\frac{r-1}{2}$. Let $G$ be a graph such that $K_t\not\preccurlyeq_s G$, and let $A$ be a $2s$-independent set in $G$.
Consider the graph $G'$ obtained from $G$ by contracting the (pairwise disjoint) balls of radius $s$ around each vertex $v\in A$. Let $A'$ denote the set of vertices of $G'$ corresponding to the contracted balls. There is a natural correspondence (bijection) between $A$ and $A'$, where each vertex $v\in A$ is associated with the vertex of $A'$ resulting from contracting the ball of radius $s$ around $v$. From $K_t\not\preccurlyeq_s G$ it follows that~$G'$ does not contain $K_t$ as a subgraph. Applying the already proved case $r=1$ to $G'$ and $A'$, we conclude that provided $|A|=|A'|\ge {m+t-2\choose t-1}$, the set $A'$ contains a $1$-independent subset $B'$ of size $m$, which corresponds to a $(2s+1)$-independent set $B$ in $G$ that is contained in $A$; thus, we may put $S=\emptyset$ again. Hence, the obtained bound is $L(m)={m+t-2\choose t-1}$, and we have already argued that then $L(m)\in \mathcal{O}_{r,t}{(m^{{(4t+1)}^{2t}})}$. \paragraph{Even case.} Finally, we prove~\cref{lem:engine} in the case $r=2s+2$, for some integer $s\geq 1$. We put $d=9s+4=9r/2-5$. Let $G$ be such that $K_t\not\preccurlyeq_{d} G$, and let $A$ be a $(2s+1)$-independent set in~$G$. Consider the graph $G'$ obtained from $G$ by contracting the (pairwise disjoint) balls of radius $s$ around each vertex $v\in A$. Let $A'$ denote the set of vertices of $G'$ corresponding to the contracted balls. Again, there is a natural correspondence (bijection) between $A$ and $A'$. Note that this time, $A'$ is a $1$-independent set in $G'$. Since $G'\preccurlyeq_s G$, from $K_t\not\preccurlyeq_{9s+4} G$ it follows by \Cref{lem:combineminors} that $K_t\not\preccurlyeq_4 G'$. Apply the already proved case $r=2$ to $G'$ and $A'$. Then, provided $|A|=|A'|\ge L(m)$, where $L(m)$ is the function defined in the case $r=2$, we infer that $A'$ contains a subset $B'$ of size $m$ which is $2$-independent in $G'-S'$, for some $S'\subset V(G')-A'$ of size less than $t$.
Since $S'\cap A'=\emptyset$, each vertex of $S'$ originates from a single vertex of $G$ before the contractions yielding $G'$; thus, $S'$ corresponds to a set $S$ consisting of less than $t$ vertices of~$G$ which are at distance at least $s+1$ from each vertex in $A$. In turn, the set $B'$ corresponds to some subset $B$ of $A$ which is $(2s+2)$-independent in $G-S$. Moreover, as in $G'$ each vertex of~$S'$ is a neighbor of each vertex of $B'$, each vertex of $S$ has distance exactly $s+1=r/2$ from each vertex of $B$. \medskip An algorithm computing the sets $B$ and $S$ (in either the odd or even case) can be given as follows: simply run a breadth-first search from each vertex of $A$ to compute the graph $G'$ with the balls of radius $\lfloor \frac{r-1}2 \rfloor$ around the vertices in $A$ contracted to single vertices, and then run the algorithm for the case $r=1$ or $r=2$. This yields a running time of $\mathcal{O}_{r,t}(|A|\cdot |E(G)|)$. \medskip This finishes the proof of~\cref{lem:engine}. \subsection{Proof of \cref{thm:new-uqw}} We now wrap up the proof of \cref{thm:new-uqw} by iteratively applying~\cref{lem:engine}. We repeat the statement for convenience. \setcounter{aux}{\thetheorem} \setcounter{theorem}{\theuqw} \begin{theorem} For all $r,t\in \mathbb{N}$ there is a polynomial $N\colon \mathbb{N}\to \mathbb{N}$ with $N(m)= \mathcal{O}_{r,t}{(m^{{(4t+1)}^{2rt}})}$, such that the following holds. Let $G$ be a graph such that $K_t\not\preccurlyeq_{\lfloor 9r/2\rfloor} G$, and let $A\subseteq V(G)$ be a vertex subset of size at least $N(m)$, for a given $m$. Then there exists a set $S\subseteq V(G)$ of size $|S|<t$ and a set $B\subseteq A\setminus S$ of size $|B|\geq m$ which is $r$-independent in $G-S$. Moreover, given~$G$ and $A$, such sets $S$ and $B$ can be computed in time $\mathcal{O}_{r,t}(|A|\cdot |E(G)|)$. 
\end{theorem} \setcounter{theorem}{\theaux} \begin{proof} Fix integers $r,t$, and a graph $G$ such that $K_t\not\preccurlyeq_{d} G$, for $d=\lfloor 9r/2 \rfloor$. Let $\beta>4t$ be a fixed real. As in the proof of \cref{lem:engine}, we suppose $m\geq t^8$; this will be taken care of by the final choice of the function $N(m)$. Denote $\gamma=\beta^{2t}$, and define the function $L(m)$ as $L(m)=((t+1)\cdot m)^\gamma$. Define the sequence $m_0,m_1,\ldots,m_r$ as follows: \begin{eqnarray*} m_r & = & m\\ m_i & = & L(m_{i+1}) \qquad \textrm{for }0\leq i<r. \end{eqnarray*} A straightforward induction yields that \begin{equation*} m_i=(t+1)^{\gamma\cdot\frac{\gamma^{r-i}-1}{\gamma-1}}\cdot m^{\gamma^{r-i}}\qquad \textrm{for all }i\in \set{0,\ldots,r}. \end{equation*} Suppose that $A$ is a set of vertices of $G$ such that $|A|\ge m_0=(t+1)^{\gamma\cdot\frac{\gamma^{r}-1}{\gamma-1}}\cdot m^{\gamma^{r}}$. We inductively construct sequences of sets $A= A_0\supseteq A_1\supseteq \ldots \supseteq A_r$ and $\emptyset=S_0\subseteq S_1\subseteq S_2\subseteq\ldots$ satisfying the following conditions: \begin{itemize} \item $|A_i|\ge m_i$, where $m_i=L(m_{i+1})$ for $i<r$, \item $A_i\cap S_i=\emptyset$ and $A_i$ is $i$-independent in $G-S_i$. \end{itemize} To construct $A_{i+1}$ out of $A_i$, apply~\cref{lem:engine} to the graph $G-S_i$ and the $i$-independent set $A_i$ of size at least $L(m_{i+1})$. This yields a set $S\subseteq V(G)$ which is disjoint from $S_i\cup A_i$, and a subset $A_{i+1}$ of $A_i-S$ of size at least $m_{i+1}$ which is $(i+1)$-independent in $G-S_{i+1}$, where $S_{i+1}=S\cup S_i$. This completes the inductive construction. In particular, $|A_r|\ge m_r=m$ and $A_r$ is a subset of $A$ which is $r$-independent in $G-S_r$. Observe that by construction, $|S_r|<r t/2$, as in the odd steps, the constructed set $S$ is empty, and in the even steps, it has fewer than $t$ elements. We show that in fact we have $|S_r|<t$ using the following argument, similar to the one used in~\cref{claim:at-most-t}.
By the last part of the statement of~\cref{lem:engine}, at the $i$th step of the construction, each vertex of the set $S$ obtained from \cref{lem:engine} is at distance exactly $(i+1)/2$ from all the vertices in $A_{i+1}$ in the graph $G-S_i$. For $a\in A_r$, let $\overline{N}(a)$ denote the $\lfloor r/2\rfloor$-neighborhood of $a$ in $G-S_r$; note that the sets $\overline{N}(a)$ are pairwise disjoint. The above remark implies that each vertex $v$ of the final set $S_r$ has a neighbor in the set $\overline{N}(a)$ for each $a\in A_r$. Indeed, suppose $v$ belonged to the set $S$ added to $S_r$ in the $i$th step of the construction; i.e. $v\in S_{i+1}\setminus S_i$. Then there exists a path in $G-S_i$ from $v$ to $a$ of length exactly~$(i+1)/2$, whose internal vertices are at distance less than $(i+1)/2$ from $a$. Since in this and further steps of the construction we were removing only vertices at distance at least $(i+1)/2$ from $a$, this path, apart from $v$ itself, stays intact in $G-S_r$ and hence is completely contained in $\overline{N}(a)$; in particular, $v$ has a neighbor in $\overline{N}(a)$. By the assumption that $m\ge t$, we may choose pairwise different vertices $a_1,\ldots,a_t\in A_r$. To reach a contradiction, suppose that $S_r$ contains $t$ distinct vertices $s_1,\ldots,s_t$. By the above, the sets $\overline{N}(a_i)\cup\set{s_i}$ form a depth-$(\lfloor r/2\rfloor+1)$ minor model of $K_t$ in $G$. This contradicts the assumption that $K_t\not\preccurlyeq_d G$ for $d=\lfloor 9r/2 \rfloor$. Hence, $|S_r|<t$. Define the function $N:\mathbb{N}\to\mathbb{N}$ as $N(m)=(t+1)^{\gamma\cdot\frac{\gamma^{r}-1}{\gamma-1}}\cdot m^{\gamma^{r}}$ for $m\ge t^8$ and $N(m)=N(t^8)$ for $m<t^8$; this justifies the assumption $m\geq t^8$ made in the beginning. Recalling that $\gamma=\beta^{2t}$ and putting $\beta=4t+1$, we note that $N(m)\in \mathcal{O}_{r,t}{(m^{{(4t+1)}^{2rt}})}$. The argument above shows that if $|A|\ge N(m)$, then there is a set $S\subset V(G)$, equal to $S_r$ above, and a set $B\subset A$, equal to $A_r$ above, so that $B$ is $r$-independent in $G-S$.
Given~$G$ and $A$, the sets~$S$ and $B$ can be computed by applying the algorithm of \cref{lem:engine} at most~$r$ times, so in time $\mathcal{O}_{r,t}(|A|\cdot |E(G)|)$. This finishes the proof of~\cref{thm:new-uqw}. \end{proof} \section{Uniform quasi-wideness for tuples}\label{sec:uqw-tuples} We now formulate and prove an extension of~\cref{thm:new-uqw} which applies to sets of tuples of vertices, rather than sets of vertices. This more general result will be used later on in the paper. The result and its proof are essentially adaptations to the finite setting of their infinite analogues introduced by Podewski and Ziegler (cf.~\cite{podewski1978stable}, Corollary 3), modulo the numerical bounds. We generalize the notion of independence to sets of tuples of vertices. Fix a graph $G$ and a number $r\in \mathbb{N}$, and let $S\subseteq V(G)$ be a subset of vertices of $G$. We say that vertices $u$ and $v$ are {\em{$r$-separated}} by $S$ in $G$ if every path of length at most $r$ connecting $u$ and $v$ in $G$ passes through a vertex of $S$. We extend this notion to tuples: two tuples $\bar u,\bar v$ of vertices of $G$ are \emph{$r$-separated} by $S$ if every vertex appearing in $\bar u$ is $r$-separated by $S$ from every vertex appearing in $\bar{v}$. Finally, if $A\subseteq V(G)^d$ is a set of $d$-tuples of vertices, for some $d\in\mathbb{N}$, then we say that $A$ is \emph{mutually $r$-separated} by $S$ in $G$ if any two distinct $\bar u,\bar v\in A$ are $r$-separated by $S$ in $G$. With these definitions in place, we may introduce the notion of uniform quasi-wideness for tuples. \begin{definition} Fix a class $\cal C$ and numbers $r,d\in\mathbb{N}$.
For a function $N\colon\mathbb{N}\to\mathbb{N}$ and number $s\in\mathbb{N}$, we say that $\cal C$ satisfies property $\mathrm{UQW}^d_r(N,s)$ if the following condition holds: \begin{quote}\itshape for every $m\in \mathbb{N}$, every graph $G\in\cal C$, and every subset $A\subseteq V(G)^d$ with $|A|\ge N(m)$, there is a set $S\subset V(G)$ with $|S|\le s$ and a subset $B\subset A$ with $|B|\ge m$ which is mutually $r$-separated by $S$ in $G$. \end{quote} We say that $\cal C$ satisfies property $\mathrm{UQW}^d_r$ if $\cal C$ satisfies $\mathrm{UQW}^d_r(N,s)$ for some $N\colon\mathbb{N}\to\mathbb{N}$ and $s\in\mathbb{N}$. If moreover one can take $N$ to be a polynomial, then we say that $\cal C$ satisfies property $\mathrm{PUQW}^d_r$. \end{definition} When $d=1$, we omit it from the superscripts. Note that there is a slight discrepancy between the definition of uniform quasi-wideness and the property of satisfying $\mathrm{UQW}_r$, for all $r\in \mathbb{N}$. This is due to the fact that in the original definition, the set $B$ must be disjoint from $S$, whereas in the property $\mathrm{UQW}_r$, some vertices of $S$ may belong to $B$. This distinction is inessential when it comes to dimension $1$, since $|S|\le s_r$ for some constant $s_r$, so passing from one definition to the other requires modifying the function $N_r$ by the additive constant $s_r$. In particular, a class of graphs $\cal C$ is uniformly quasi-wide if and only if it satisfies $\mathrm{UQW}_r$, for all $r\in \mathbb{N}$. However, generalizing to tuples of dimension $d$ requires the use of the definition above, where the tuples in~$B$ are allowed to contain vertices which occur in $S$. For example, if the graph $G$ is a star with many arms and $A$ consists of all pairs of adjacent vertices in $G$, then $S$ needs to contain the central vertex of $G$, and therefore $S$ will contain a vertex from every tuple in $A$. We may take $B$ to be equal to $A$ in this case.
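To make the star example concrete, here is a small Python sketch (purely illustrative; the graph, the tuple set, and the helper names are our own and do not come from the paper). It tests $r$-separation directly from the definition, under the convention that a vertex of $S$ is separated from everything, since even a trivial path through it meets $S$:

```python
from collections import deque
from itertools import combinations

def r_separated(adj, S, u, v, r):
    """u and v are r-separated by S: every path of length <= r between
    them meets S.  Equivalently: u or v lies in S, or their distance in
    the graph with S deleted exceeds r (checked by depth-bounded BFS)."""
    if u in S or v in S:
        return True
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if dist[x] >= r:
            continue  # do not explore beyond radius r
        for y in adj[x]:
            if y not in S and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist.get(v, r + 1) > r

def tuples_r_separated(adj, S, xs, ys, r):
    """Tuple separation: every vertex of xs vs every vertex of ys."""
    return all(r_separated(adj, S, x, y, r) for x in xs for y in ys)

# Star with center 0 and leaves 1..5; A consists of all pairs of
# adjacent vertices, i.e. all (center, leaf) pairs.
n = 5
star = {0: list(range(1, n + 1)), **{i: [0] for i in range(1, n + 1)}}
A = [(0, i) for i in range(1, n + 1)]
S = {0}
ok = all(tuples_r_separated(star, S, p, q, 2)
         for p, q in combinations(A, 2))
```

With $S=\emptyset$ no two tuples of $A$ are even $2$-separated, since they share the center, whereas the single vertex $0$ separates all of $A$ at once; this is exactly why tuples in $B$ must be allowed to meet $S$.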
\medskip Using the above terminology, \cref{thm:new-uqw} states that for every fixed $r\in\mathbb{N}$, if there is a number $t\in\mathbb{N}$ such that $K_t\not\preccurlyeq_{\lfloor 9r/2\rfloor} G$ for all $G\in \cal C$, then $\cal C$ satisfies $\mathrm{PUQW}_r$, and more precisely $\mathrm{UQW}_r(N_r,s_r)$ for a polynomial $N_r\colon\mathbb{N}\to\mathbb{N}$ and number $s_r\in \mathbb{N}$, where $N_r$ and $s_r$ can be computed from $r$ and $t$. The following result provides a generalization to higher dimensions. \begin{theorem}\label{thm:uqw-tuples}If $\cal C$ is a nowhere dense class of graphs, then for all $r,d\in\mathbb{N}$, the class $\cal C$ satisfies $\mathrm{PUQW}^d_r$. More precisely, for any class of graphs $\cal C$ and numbers $r,t\in\mathbb{N}$, if $K_t\not\preccurlyeq_{18r} G$ for all $G\in \cal C$, then for all $d\in \mathbb{N}$ the class $\cal C$ satisfies $\mathrm{UQW}^d_{r}(N^d_r,s^d_r)$ for some number $s^d_r\in \mathbb{N}$ and polynomial $N^d_r\colon \mathbb{N}\to\mathbb{N}$ that can be computed given $r$, $t$, and $d$. \end{theorem} \cref{thm:uqw-tuples} is an immediate consequence of~\cref{thm:new-uqw} (or~\cref{thm:krs}, if only the first part of the statement is concerned) and of the following result. \begin{proposition}\label{prop:uqw-tuples} For all $r,d\in\mathbb{N}$, if $\cal C$ satisfies $\mathrm{UQW}_{2r}(N_{2r},s_{2r})$ for some $s_{2r}\in\mathbb{N}$ and \mbox{$N_{2r}\colon\mathbb{N}\to \mathbb{N}$}, then $\cal C$ satisfies $\mathrm{UQW}^d_r(N^d_r,s^d_r)$ for $s^d_r=d\cdot s_{2r}$ and function $N^d_r\colon \mathbb{N}\to \mathbb{N}$ defined as $N^d_r(m)=f^d((d^2+1)\cdot m)$, where $f(m')=m'\cdot N_{2r}(m')$ and $f^d$ is the $d$-fold composition of $f$ with itself. \end{proposition} The rest of~\cref{sec:uqw-tuples} is devoted to the proof of \cref{prop:uqw-tuples}. Fix a class $\cal C$ such that $\mathrm{UQW}_{2r}(N_{2r},s_{2r})$ holds for some number $s_{2r}\in \mathbb{N}$ and function $N_{2r}\colon \mathbb{N}\to \mathbb{N}$.
We also fix the function $f$ defined in the statement of \cref{prop:uqw-tuples}. \medskip Let us fix dimension $d\in \mathbb{N}$, radius $r\in \mathbb{N}$, and graph $G\in \cal C$. For a coordinate $i\in\set{1,\ldots,d}$, by $\pi_i\colon V(G)^d\to V(G)$ we denote the {\em{projection}} onto the $i$th coordinate; that is, for $\bar{x}\in V(G)^d$ by $\pi_i(\bar{x})$ we denote the $i$th coordinate of $\bar{x}$. Our first goal is to find a large subset of tuples that are mutually $2r$-separated by some small~$S$ on each coordinate separately. Note that in the following statement we ask for $2r$-separation, instead of $r$-separation. \begin{lemma}\label{lem:step1} For all $r,m\in \mathbb{N}$ and $A\subset V(G)^d$ with $|A|\ge f^d(m)$, there is a set $B\subset A$ with $|B|\ge m$ and a set $S\subset V(G)$ with $|S|\le d\cdot s_{2r}$ such that for each coordinate $i\in\set{1,\ldots,d}$ and all distinct $\bar x,\bar y\in B$, the vertices $\pi_i(\bar x)$ and $\pi_i(\bar y)$ are $2r$-separated by $S$ in $G$. \end{lemma} \begin{proof} We will iteratively apply the following claim. \begin{claim}\label{claim:ith-coord} Fix a coordinate $i\in\set{1,\ldots,d}$, an integer $m'\in\mathbb{N}$, and a set $A'\subset V(G)^d$ with $|A'|\ge f(m')$. Then there is a set $B'\subset A'$ with $|B'|\ge m'$ and a set $S'\subset V(G)$ with $|S'|\le s_{2r}$, such that for all distinct $\bar x,\bar y\in B'$, the vertices $\pi_i(\bar x)$ and $\pi_i(\bar y)$ are $2r$-separated by $S'$ in $G$. \end{claim} \begin{clproof} We consider two cases, depending on whether $|\pi_i(A')|\geq N_{2r}(m')$. Suppose first that $\pi_i(A')$ contains at least $N_{2r}(m')$ distinct vertices. Then we may apply the property $\mathrm{UQW}_{2r}$ to $\pi_i(A')$, yielding sets $S'\subset V(G)$ and $X\subseteq \pi_i(A')$ such that $|X|\ge m'$, $|S'|\le s_{2r}$, and $X$ is mutually $2r$-separated by $S'$ in $G$.
Let $B'\subseteq A'$ be a subset of tuples constructed as follows: for each $u\in X$, include in $B'$ one arbitrarily chosen tuple $\bar x\in A'$ such that the $i$th coordinate of $\bar x$ is $u$. Clearly $|B'|=|X|\ge m'$ and for all distinct $\bar x,\bar y\in B'$, we have that $\pi_i(\bar x)$ and $\pi_i(\bar y)$ are different and $2r$-separated by $S'$ in $G$; this is because $X$ is mutually $2r$-separated by $S'$ in $G$. Hence $B'$ and $S'$ satisfy all the required properties. Suppose now that $|\pi_i(A')|<N_{2r}(m')$. Then choose a vertex $a\in \pi_i(A')$ for which the pre-image $\pi_i^{-1}(a)$ has the largest cardinality. Since $|A'|\geq f(m')=m'\cdot N_{2r}(m')$, we have that $$|\pi_i^{-1}(a)|\geq \frac{|A'|}{|\pi_i(A')|}\geq \frac{m'\cdot N_{2r}(m')}{N_{2r}(m')}=m'.$$ Hence, provided we set $S'=\set{a}$ and $B'=\pi_i^{-1}(a)$, for all distinct $\bar x,\bar y\in B'$ the vertices $\pi_i(\bar x)$ and $\pi_i(\bar y)$ (both equal to $a\in S'$) are $2r$-separated by $S'$ in $G$; moreover, $|B'|\geq m'$ and $|S'|=1$. \end{clproof} We proceed with the proof of \cref{lem:step1}. Let $A\subset V(G)^d$ be such that $|A|\ge f^d(m)$. We inductively define subsets $B_0\supseteq B_1\supseteq \ldots \supseteq B_d$ of $A$ and sets $S_1,\ldots,S_d\subseteq V(G)$ as follows. First put $B_0=A$. Then, for each $i=1,\ldots,d$, let $B_{i}$ and $S_i$ be the $B'$ and $S'$ obtained from \cref{claim:ith-coord} applied to the set of tuples $B_{i-1}\subset V(G)^d$, the coordinate $i$, and $m'=f^{d-i}(m)$. It is straightforward to see that the following invariant holds for each $i\in \set{1,\ldots,d}$: $|B_i|\ge f^{d-i}(m)$ and for all $j\leq i$ and distinct $\bar x,\bar y\in B_i$, the vertices $\pi_j(\bar x)$ and $\pi_j(\bar{y})$ are $2r$-separated by $S_1\cup\ldots\cup S_i$ in $G$. In particular, by taking $B=B_d$ and $S=S_1\cup\ldots \cup S_d$, we obtain that $|B|\ge m$, $|S|\le d\cdot s_{2r}$, and $B$ and $S$ satisfy the condition requested in the lemma statement.
\end{proof} The next lemma will be used to turn mutual $2r$-separation on each coordinate to mutual $r$-separation of the whole tuple set. \begin{lemma}\label{lem:step2} Let $B\subset V(G)^d$ and $S\subset V(G)$ be such that for each $i\in \set{1,\ldots,d}$ and all distinct $\bar{x},\bar{y}\in B$, the vertices $\pi_i(\bar{x})$ and $\pi_i(\bar{y})$ are $2r$-separated by $S$ in $G$. Then there is a set $C$ with $C\subset B$ and $|C|\geq\frac{|B|}{d^2+1}$ such that $C$ is mutually $r$-separated by $S$ in $G$. \end{lemma} \begin{proof} Let $C$ be a maximal subset of $B$ that is mutually $r$-separated by $S$ in $G$. By the maximality of $C$, with each tuple $\bar a\in B-C$ we may associate a tuple $\bar b\in C$ and a pair of indices $(i,j)\in \set{1,\ldots,d}^2$ that witness that $\bar a$ cannot be added to $C$, namely $\pi_i(\bar a)$ and $\pi_j(\bar b)$ are not $r$-separated by $S$ in $G$. Observe that two different tuples $\bar a,\bar a'\in B-C$ cannot be associated with exactly the same $\bar b\in C$ and same pair of indices $(i,j)$. Indeed, then both $\pi_i(\bar a)$ and $\pi_i(\bar a')$ would not be $r$-separated from $\pi_j(\bar b)$ by $S$ in $G$; concatenating the two corresponding paths of length at most $r$ avoiding $S$ would show that $\pi_i(\bar a)$ and $\pi_i(\bar a')$ are not $2r$-separated from each other by $S$, a contradiction with the assumption on $B$. Hence, $|B-C|$ is upper bounded by the number of tuples of the form $(\bar b,i,j)\in C\times \set{1,\ldots,d}^2$, which is $d^2|C|$. We conclude that $|B-C|\leq d^2|C|$, which implies $|C|\geq \frac{|B|}{d^2+1}$. \end{proof} \begin{comment} \begin{proof} We construct a sequence $C_0\subset C_1\subset \ldots$ of subsets of $B$ which are mutually $r$-independent in $G-S$, as follows. We start with $C_0=\emptyset$. Suppose that $C_s\subset B$ is already constructed for some $s\ge 0$ and is mutually $r$-independent in $G-S$; we construct $C_{s+1}$.
With each element $a\in B-C_s$, we associate an arbitrarily chosen function $f_a\colon \set{1,\ldots,d}^2\to C_s\cup \set{\bot}$ with the following properties: \begin{itemize} \item If $f_a(i,j)=b$ then the $i$th coordinate of $a$ and the $j$th coordinate of $b$ are not $r$-separated by $S$. \item If $f_a(i,j)=\bot$ then there is no element $b\in C_s$ such that the $i$th coordinate of $a$ and the $j$th coordinate of $b$ are at not $r$-separated by $S$. \end{itemize} Observe that whenever $a_1, a_2$ are two distinct elements of $B-C_s$, then for all $i,j\in \set{1,\ldots,d}^2$, the values $f_{a_1}(i,j)$ and $f_{a_2}(i,j)$ cannot be equal to the same element $b\in C_s$: otherwise, we would have that the $i$th coordinate of $a_1$ and the $i$th coordinate of $a_2$ are not $2r$-separated by $S$, which is impossible by the assumption on $B$. In particular, if $|B-C_s|> |C_s|\cdot d^2$ then there must be some element $a\in B-C_s$ such that $f_a(i,j)=\bot$ for all $i,j\in\set{1,\ldots,d}$. Let $C_{s+1}=C_s\cup \set a$. By construction, $C_{s+1}$ is mutually $r$-independent in $G-S$. We may repeat the construction as long as $|B|>|C_s|\cdot (d^2+1)=s\cdot (d^2+1)$, and we stop when this inequality no longer holds. Define the set $C$ as the last constructed set $C_s$. By construction, $|C_s|=s\ge \frac{|B|}{d^2+1}$. \end{proof} \end{comment} To finish the proof of \cref{prop:uqw-tuples}, given a set $A\subset V(G)^d$ and integer $m\in\mathbb{N}$, first apply \cref{lem:step1} with $m'= m\cdot (d^2+1)$. Assuming that $|A|\ge f^d(m')$, we obtain a set $B\subseteq A$ with $|B|\ge m\cdot (d^2+1)$ and a set $S\subset V(G)$ with $|S|\le d\cdot s_{2r}$, such that for each $i\in \set{1,\ldots,d}$ and all distinct $\bar{x},\bar{y}\in B$, the vertices $\pi_i(\bar{x})$ and $\pi_i(\bar{y})$ are $2r$-separated by $S$ in $G$. Then, apply \cref{lem:step2} to $B$ and $S$, yielding a set $C\subset B$ which is mutually $r$-separated by $S$ and has size at least $m$. 
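The maximality argument in the proof of \cref{lem:step2} corresponds to a simple greedy procedure. The following Python sketch (our own toy instance and function names, not code from the paper) grows a maximal mutually $r$-separated subset $C$ of a tuple set $B$; the counting argument then guarantees $|C|\ge |B|/(d^2+1)$:

```python
from collections import deque
from itertools import combinations

def far_apart(adj, S, u, v, r):
    """Every path of length <= r between u and v meets S (a vertex of S
    counts as separated from everything)."""
    if u in S or v in S:
        return True
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if dist[x] >= r:
            continue  # depth-bounded BFS in the graph with S deleted
        for y in adj[x]:
            if y not in S and y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist.get(v, r + 1) > r

def tuple_separated(adj, S, xs, ys, r):
    return all(far_apart(adj, S, x, y, r) for x in xs for y in ys)

def greedy_thin(adj, S, B, r):
    """Maximal subset of B that is mutually r-separated by S."""
    C = []
    for t in B:
        if all(tuple_separated(adj, S, t, c, r) for c in C):
            C.append(t)
    return C

# Toy instance: a path on 50 vertices, d = 2, tuples spaced 10 apart.
n, d = 50, 2
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
B = [(i, i + 1) for i in range(0, n - 1, 10)]
C = greedy_thin(path, set(), B, r=2)
```

In this toy instance the spacing already makes $B$ mutually $2$-separated with $S=\emptyset$, so the greedy procedure keeps every tuple; in general it retains at least a $1/(d^2+1)$ fraction.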
This concludes the proof of \cref{prop:uqw-tuples}. \begin{comment} \begin{remark}\label{rem:local-tuples} A detailed analysis of the presented proof allows to obtain a stronger statement than in \cref{prop:uqw-tuples}, which we now describe. For $d,r,m\in\mathbb{N}$, let $\textrm{UQW}(d,r)$ denote the following statement: \begin{quote}\itshape There exists a constant $s^d_r\in \mathbb{N}$ and a polynomial $N^d_r\colon\mathbb{N}\to\mathbb{N}$ such that for all $m\in \mathbb{N}$ and all subsets $A\subset V(G)^d$ with $|A|\ge N^d_r(m)$ there is a set $S\subset V(G)$ of size $|S|\le s^d_r$ and a subset $B\subset A$ of size $|B|\ge m$ which is mutually $r$-independent in $G-S$. \end{quote} Our proof shows that the statement $\textrm{UQW}(d,r)$ can be concluded from the statement $\textrm{UQW}(1,4r)$. This is because in our proof, we have~obtained: $$ N^d_{2r}(m)=(N_{4r})^d(m(d^2+1))\qquad\textrm{and}\qquad s^d_{2r}=d\cdot s_{4r}. $$ Thus, when establishing the values of $N^d_{2r}(m)$ and $s^d_{2r}$, we refer to the quasi-wideness of $\mathcal{C}$ only by using numbers $s_{4r}$ and $N_{4r}(m')$ for $m'\in \mathbb{N}$. On the other hand, by \cref{thm:new-uqw}, the statement $\textrm{UQW}(1,4r)$ follows from the existence of a number $t\in\mathbb{N}$ such that $K_t\not\preccurlyeq_{10r} G$ for $G\in \mathcal{C}$. To summarize, for all $r\in\mathbb{N}$, the existence of a number $t\in\mathbb{N}$ such that $K_t\not\preccurlyeq_{10r} G$ for $G\in \mathcal{C}$ implies the statement $\textrm{UQW}(d,r)$, for all $d\in\mathbb{N}$. % \end{remark} \end{comment} \section{Preliminaries}\label{sec:prelim} We recall some basic notions from graph theory. \medskip All graphs in this paper are finite, undirected and simple, that is, they do not have loops or parallel edges. Our notation is standard, we refer to~\cite{diestel2012graph} for more background on graph theory. We write $V(G)$ for the vertex set of a graph $G$ and $E(G)$ for its edge set. 
The {\em{distance}} between vertices $u$ and $v$ in $G$, denoted $\mathrm{dist}_G(u,v)$, is the length of a shortest path between $u$ and $v$ in~$G$. If there is no path between $u$ and $v$ in $G$, we put $\mathrm{dist}_G(u,v)=\infty$. The {\em{(open) neighborhood}} of a vertex $u$, denoted $N(u)$, is the set of neighbors of $u$, excluding $u$ itself. For a non-negative integer $r$, by $N_r[u]$ we denote the {\em{(closed) $r$-neighborhood}} of $u$, which comprises vertices at distance at most $r$ from $u$; note that $u$ is always contained in its closed $r$-neighborhood. The \emph{radius} of a connected graph $G$ is the least integer $r$ such that there is some vertex $v$ of $G$ with $N_r[v]=V(G)$. A {\em{minor model}} of a graph $H$ in $G$ is a family $(I_u)_{u\in V(H)}$ of pairwise vertex-disjoint connected subgraphs of $G$, called {\em{branch sets}}, such that whenever $uv$ is an edge in~$H$, there are $u'\in V(I_u)$ and $v'\in V(I_v)$ for which $u'v'$ is an edge in $G$. The graph $H$ is a {\em{depth-$r$ minor}} of $G$, denoted $H\preccurlyeq_rG$, if there is a minor model $(I_u)_{u\in V(H)}$ of~$H$ in $G$ such that each $I_u$ has radius at most $r$. A class $\mathcal{C}$ of graphs is \emph{nowhere dense} if there is a function $t\colon \mathbb{N}\rightarrow \mathbb{N}$ such that for all $r\in \mathbb{N}$ it holds that $K_{t(r)}\not\preccurlyeq_r G$ for all $G\in \mathcal{C}$, where $K_{t(r)}$ denotes the clique on $t(r)$ vertices. The class~$\mathcal{C}$ moreover has \emph{bounded expansion} if there is a function $d\colon\mathbb{N}\rightarrow\mathbb{N}$ such that for all $r\in \mathbb{N}$ and all $H\preccurlyeq_rG$ with $G\in\mathcal{C}$, the {\em{edge density}} of $H$, i.e. $|E(H)|/|V(H)|$, is bounded by $d(r)$. Note that every class of bounded expansion is nowhere dense. The converse is not true in general~\cite{sparsity}.
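The notions of distance and $r$-neighborhood are computed throughout the paper by breadth-first search. As a plain illustration of these standard primitives (generic code, not taken from the paper), in Python:

```python
from collections import deque

def bfs_dist(adj, source):
    """Distances from source in an undirected graph given as an
    adjacency dict; vertices unreachable from source are simply absent
    from the result (matching the convention dist = infinity)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def closed_r_neighborhood(adj, u, r):
    """N_r[u]: all vertices at distance at most r from u, including u."""
    return {v for v, dv in bfs_dist(adj, u).items() if dv <= r}

# Example: a path 0 - 1 - 2 - 3 - 4.
path5 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

A single BFS costs $\mathcal{O}(|E(G)|)$ on a connected graph, so running one from every vertex of a set $A$, as in the algorithmic parts of the paper, costs $\mathcal{O}(|A|\cdot|E(G)|)$ in total.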
A set $B\subseteq V(G)$ is called {\em{$r$-independent}} in a graph $G$ if $\mathrm{dist}_G(u,v)>r$ for all distinct $u,v\in B$. A class $\mathcal{C}$ of graphs is \emph{uniformly quasi-wide} if for every $r\in \mathbb{N}$ there is a number $s\in \mathbb{N}$ and a function $N\colon\mathbb{N}\to\mathbb{N}$ such that for every $m\in \mathbb{N}$, graph $G\in \mathcal{C}$, and vertex subset $A\subseteq V(G)$ of size $\abs{A}\geq N(m)$, there is a set $S\subseteq V(G)$ of size $\abs{S}\leq s$ and a set $B\subseteq A-S$ of size $\abs{B}\geq m$ which is $r$-independent in $G-S$. Recall that Ne\v set\v ril and Ossona de Mendez proved~\cite{nevsetvril2011nowhere} that the nowhere dense graph classes are exactly the uniformly quasi-wide classes. The following result of Kreutzer, Rabinovich and the second author~\cite{siebertz2016polynomial} improves their result by showing that the function $N$ can be taken to be polynomial: \begin{theorem}[\cite{siebertz2016polynomial}]\label{thm:krs} For every nowhere dense class $\mathcal{C}$ and for all $r\in \mathbb{N}$ there is a polynomial $N\colon \mathbb{N}\to\mathbb{N}$ and a number $s\in \mathbb{N}$ such that the following holds. Let $G\in \mathcal{C}$ be a graph and let $A\subset V(G)$ be a vertex subset of size at least $N(m)$, for a given $m\in\mathbb{N}$. Then there exists a set $S\subset V(G)$ of size $|S|\le s$ and a set $B\subset A-S$ of size $|B|\ge m$ which is $r$-independent in $G-S$. \end{theorem} As we mentioned, the proof of Kreutzer et al.~\cite{siebertz2016polynomial} relies on non-constructive arguments and does not yield explicit bounds on $s$ and (the degree of) $N$. In the next section, we discuss a further strengthening of this result, by providing explicit, computable bounds on $N$ and $s$. \section{Bounds for stability}\label{sec:stable} As mentioned, Adler and Adler~\cite{adler2014interpreting} proved that every nowhere dense class of graphs is stable.
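Stability is witnessed combinatorially by ladders: for a binary relation $R$ (playing the role of $\phi$ evaluated in a fixed graph), a ladder of length $k$ consists of elements $u_1,\ldots,u_k$ and $v_1,\ldots,v_k$ such that $R(u_i,v_j)$ holds if and only if $i\le j$, and the ladder index is the largest such $k$. As a toy illustration (not part of the formal development), the following brute-force sketch computes the ladder index of a relation; it is feasible only for very small ground sets, and the half-graph example is hypothetical.

```python
from itertools import permutations

def ladder_index(U, V, R):
    """Largest k for which there are u_1..u_k in U and v_1..v_k in V
    with R(u_i, v_j) holding iff i <= j.  Brute force over ordered
    k-tuples of distinct elements; only viable for tiny U and V."""
    for k in range(min(len(list(U)), len(list(V))), 0, -1):
        for us in permutations(U, k):
            for vs in permutations(V, k):
                if all(bool(R(us[i], vs[j])) == (i <= j)
                       for i in range(k) for j in range(k)):
                    return k
    return 0

# Half-graph on {0,..,n-1}: u_i related to v_j iff i <= j.
n = 3
half = lambda i, j: i <= j
```

On the half-graph the ladder index equals $n$, while the complete relation ($R$ always true) has ladder index $1$; half-graphs are the canonical obstruction to stability.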
In this section, we prove its effective variant,~\cref{thm:new-stable}, which we repeat for convenience. \setcounter{aux}{\thetheorem} \setcounter{theorem}{\thestable} \begin{theorem} There are computable functions $f\colon \mathbb{N}^3\to\mathbb{N}$ and $g\colon\mathbb{N}\to\mathbb{N}$ with the following property. Suppose $\phi(\bar x,\bar y)$ is a formula of quantifier rank at most $q$ and with $d$ free variables. Suppose further that $G$ is a graph excluding $K_t$ as a depth-$g(q)$ minor. Then the ladder index of $\phi(\bar x,\bar y)$ in $G$ is at most $f(q,d,t)$. \end{theorem} \setcounter{theorem}{\theaux} Recall that a class $\cal C$ is stable if and only if for every first order formula $\varphi(\bar x,\bar y)$, its ladder index over graphs from $\cal C$ is bounded by a constant depending only on $\cal C$ and $\varphi$; see \cref{sec:intro} to recall the background on stability. Thus the result of Adler and Adler is implied by \cref{thm:new-stable}, and is weaker in the following sense: \cref{thm:new-stable} asserts in addition that there is a computable bound on the ladder index of any formula, depending only on the size of an excluded clique minor at depth bounded in terms of the formula's quantifier rank and number of free variables. We now prove~\cref{thm:new-stable}. \begin{proof}[Proof of~\cref{thm:new-stable}] Fix a formula $\phi(\bar x,\bar y)$ of quantifier rank $q$ and a partitioning of its free variables into $\bar x$ and $\bar y$. Let $d=|\bar x|+|\bar y|$ be the total number of free variables of $\phi$. Let $r\in \mathbb{N}$ be the number given by~\cref{cor:bound}, which depends on $\phi$ only. Let $\cal C$ be the class of all graphs such that $K_t\not\preccurlyeq_{18r} G$. Then, by~\cref{thm:uqw-tuples}, $\cal C$ satisfies $\mathrm{UQW}^d_{r}(N^d_{r},s^d_r)$, for some polynomial $N^d_r\colon\mathbb{N}\to\mathbb{N}$ and number $s=s^d_r\in \mathbb{N}$ computable from $d,t,r$. Let $T$ be the number given by~\cref{cor:bound} for $\phi$ and $s$.
Finally, let $\ell=N^d_r(2T+1)$. We show that every $\phi$-ladder in a graph $G\in\cal C$ has length smaller than $\ell$. For the sake of contradiction, assume that there is a graph $G\in\cal C$ and tuples $\bar u_1,\ldots,\bar u_\ell\in V(G)^{|\bar x|}$ and $ \bar v_1,\ldots, \bar v_\ell\in V(G)^{|\bar y|}$ which form a $\phi$-ladder in $G$, i.e., $\phi(\bar u_i,\bar v_j)$ holds in~$G$ if and only if $i\le j$. Let $A=\setof{ \bar u_i \bar v_i}{i=1,\ldots,\ell}\subset V(G)^d$. Note that $|A|=\ell\ge N^d_r(2T+1)$, since the tuples $\bar u_i$ are pairwise different. Applying property $\mathrm{UQW}^d_r(N^d_r,s^d_r)$ to the set $A$, radius $r$, and target size $m=2T+1$ yields a set $S\subset V(G)$ with $|S|\le s$ and a set $B\subset A$ with $|B|\geq 2T+1$ of tuples which are mutually $r$-separated by $S$ in $G$. Let $J\subseteq \set{1,\ldots,\ell}$ be the set of indices corresponding to $B$, i.e., $J=\set{j\colon\bar u_j\bar v_j\in B}$. Since $|J|=2T+1$, we may partition $J$ into $J_1\uplus J_2$ with $|J_1|=T+1$ so that the following condition holds: for each $i,k\in J_1$ satisfying $i<k$, there exists $j\in J_2$ with $i<j<k$. Indeed, it suffices to order the indices of $J$ increasingly and place those at odd positions into $J_1$ and those at even positions into $J_2$. Let~$X$ be the set of vertices appearing in the tuples $\bar u_i$ with $i\in J_1$, and let $Y$ be the set of vertices appearing in the tuples $\bar v_j$ with $j\in J_2$. Since the tuples of $B$ are mutually $r$-separated by $S$ in~$G$, it follows that $X$ and $Y$ are $r$-separated by $S$. As $|J_1|=T+1$, by \cref{cor:bound} we infer that there are distinct indices $i,k\in J_1$, say $i<k$, such that $\mathrm{tp}^\phi(\bar u_i/Y)= \mathrm{tp}^\phi(\bar u_{k}/Y)$. This implies that for each $j\in J_2$, we have $G,\bar u_i,\bar v_j\models \phi(\bar x,\bar y)$ if and only if $G,\bar u_{k},\bar v_j\models \phi(\bar x,\bar y)$.
However, there is an index $j\in J_2$ such that $i<j<k$, and for this index we should have $G,\bar u_i,\bar v_j\models \phi(\bar x,\bar y)$ and $G,\bar u_{k},\bar v_j\not\models \phi(\bar x,\bar y)$ by the definition of a ladder. This contradiction concludes the proof. \end{proof} We remark that~\cref{thm:new-stable} also holds for classes of edge- and vertex-colored graphs, with the same proof, but using a version of the results in~\cref{sec:gaifman} lifted to edge- and vertex-colored graphs. \section{Bounds on the number of types}\label{sec:types} In this section we prove \Cref{thm:vc-density} and \Cref{thm:vc-density-lower-bound}. Recall that \Cref{thm:vc-density} provides upper bounds on the number of types in classes of graphs which are nowhere dense, and stronger bounds for classes which have bounded expansion. On the other hand, the complementary \Cref{thm:vc-density-lower-bound} shows that for subgraph-closed classes, in the absence of structural sparsity we cannot hope for such upper bounds. \subsection{Upper bounds for sparse classes} We first prove \Cref{thm:vc-density}, which we recall for convenience. \setcounter{aux}{\thetheorem} \setcounter{theorem}{\thevcupper} \begin{theorem}\label{thm:vc-density-recall} Let $\mathcal{C}$ be a class of graphs and let $\phi(\tup x,\tup y)$ be a first order formula with free variables partitioned into object variables $\bar x$ and parameter variables $\bar y$. Let $\ell=|\bar x|$. Then: \begin{enumerate}[(1)] \item If $\mathcal{C}$ is nowhere dense, then for every $\epsilon>0$ there exists a constant~$c$ such that for every $G\in \mathcal{C}$ and every nonempty $A\subseteq V(G)$, we have $|S^\phi(G/A)|\leq c\cdot |A|^{\ell+\epsilon}.$ \item If $\mathcal{C}$ has bounded expansion, then there exists a constant~$c$ such that for every $G\in \mathcal{C}$ and every nonempty $A\subseteq V(G)$, we have $|S^\phi(G/A)|\leq c\cdot |A|^\ell$. 
\end{enumerate} \end{theorem} \setcounter{theorem}{\theaux} We remark that the theorem holds also for colored graphs, in the following sense. A class of graphs whose vertices and edges are colored by a fixed finite number of colors is nowhere dense if the underlying class of graphs obtained by forgetting the colors is nowhere dense. Then~\cref{thm:vc-density} holds also for such classes of colored graphs, with the same proof. Namely, all graph theoretic notions are applied to the underlying colorless graphs, only the definition of types takes the colors into account. The results of~\cref{sec:gaifman} then need to be lifted to edge- and vertex-colored graphs, but this is straightforward. \medskip The proof of~\cref{thm:vc-density} spans the remainder of this section. In this proof, we will first enlarge the set $A$ to a set $B$, called an \emph{$r$-closure of~$A$} (where $r$ is chosen depending on $\phi$), such that the connections of elements from $V(G)-B$ toward $B$ are well controlled. This approach was first used in Drange et al.~\cite{drange2016kernelization} in the context of classes of bounded expansion, and then for nowhere dense classes in Eickmeyer et al.~\cite{eickmeyer2016neighborhood}. We start by recalling these notions. Let $G$ be a graph and let $B\subseteq V(G)$ be a subset of vertices. For vertices $v\in B$ and $u\in V(G)$, a path $P$ leading from $u$ to $v$ is called {\em{$B$-avoiding}} if none of its vertices apart from~$v$ belongs to~$B$. Note that if $u\in B$, then there is only one $B$-avoiding path leading from $u$, namely the one-vertex path where $u=v$. For a positive integer $r$ and $u\in V(G)$, the {\em{$r$-projection}} of $u$ on $B$, denoted $M^G_r(u,B)$, is the set of all vertices $v\in B$ such that there is a $B$-avoiding path of length at most $r$ leading from $u$ to $v$. Note that for $u\in B$, we have $M^G_r(u,B)=\{u\}$.
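Computationally, $M^G_r(u,B)$ can be found by a breadth-first search from $u$ that records vertices of $B$ when they are first reached but never expands them, so that every search path is $B$-avoiding. A small illustrative sketch (hypothetical helper, graphs as adjacency dictionaries, not taken from the cited works):

```python
from collections import deque

def r_projection(adj, u, B, r):
    """M^G_r(u, B): vertices of B reachable from u by a B-avoiding
    path of length at most r.  The BFS enqueues vertices of B into
    the projection but never expands them, since a B-avoiding path
    may end in B but not pass through it."""
    if u in B:
        return {u}
    dist = {u: 0}
    queue = deque([u])
    proj = set()
    while queue:
        x = queue.popleft()
        if dist[x] >= r:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                if y in B:
                    proj.add(y)   # path ends here; do not expand y
                else:
                    queue.append(y)
    return proj

# Toy example: the path on 5 vertices 0-1-2-3-4.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

On the path above with $B=\{2,4\}$, vertex $4$ is shielded from $0$ by $2\in B$, so $M_3(0,B)=\{2\}$.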
Equivalently, $M^G_r(u,B)$ is the unique inclusion-minimal subset of $B$ which $r$-separates $u$ from $B$ (cf.~Fig.~\ref{fig:projection}). \begin{figure}[h!] \centering \includegraphics[scale=0.35,page=2]{pics} \caption{The $r$-projection of $u$ on $B$ (here $r=2$) is the minimal set $S\subset B$ which $r$-separates $ u$ from $B$. } \label{fig:projection} \end{figure} We will use the following results from~\cite{drange2016kernelization,eickmeyer2016neighborhood}. \begin{lemma}[\cite{drange2016kernelization}]\label{lem:closure-be} Let $\mathcal{C}$ be a class of bounded expansion. Then for every $r\in \mathbb{N}$ there is a constant $c\in \mathbb{N}$ such that for every $G\in \mathcal{C}$ and $A\subseteq V(G)$ there exists a set $B$, called an {\em{$r$-closure}} of $A$, with the following properties: \begin{enumerate}[(a)] \item $A\subseteq B\subseteq V(G)$; \item $|B|\leq c\cdot |A|$; and \item $|M_r^G(u,B)|\leq c$ for each $u\in V(G)$. \end{enumerate} Moreover, for every set $X\subset V(G)$, it holds that \begin{enumerate}[(d)] \item $|\setof{M_r^G(u,X)}{u\in V(G)}|\leq c\cdot |X|$. \end{enumerate} \end{lemma} \begin{lemma}[\cite{eickmeyer2016neighborhood}]\label{lem:closure-nd} Let $\mathcal{C}$ be a nowhere dense class. Then for every $r\in\mathbb{N}$ and $\delta>0$ there is a constant $c\in\mathbb{N}$ such that for every $G\in \mathcal{C}$ and $A\subseteq V(G)$ there exists a set $B$, called an {\em{$r$-closure}} of $A$, with the following properties: \begin{enumerate}[(a)] \item $A\subseteq B\subseteq V(G)$; \item $|B|\leq c\cdot |A|^{1+\delta}$; and \item $|M_r^G(u,B)|\leq c\cdot |A|^{\delta}$ for each $u\in V(G)$. \end{enumerate} Moreover, for every set $X\subset V(G)$, it holds that \begin{enumerate}[(d)] \item $|\setof{M_r^G(u,X)}{u\in V(G)}|\leq c\cdot |X|^{1+\delta}$. \end{enumerate} \end{lemma} We note that in~\cite{drange2016kernelization,eickmeyer2016neighborhood} projections on $B$ are defined only for vertices outside of $B$. 
However, adding singleton projections for vertices of $B$ to the definition only adds $|B|$ possible projections of size $1$ each, so this does not influence the validity of the above results. We proceed with the proof of \cref{thm:vc-density}. To focus attention, we present the proof only for the nowhere dense case (first statement). The proof in the bounded expansion case (second statement) can be obtained by replacing all the parameters $\epsilon,\delta,\epsilon_1,\epsilon_2$ below by~$0$, and substituting the usage of \cref{lem:closure-nd} with \cref{lem:closure-be}. Let us fix: a nowhere dense class of graphs $\mathcal{C}$, a graph $G\in \mathcal{C}$, a vertex subset $A\subseteq V(G)$, a real $\epsilon>0$, and a first order formula $\phi(\bar x,\bar y)$, where $\bar x$ is the distinguished $\ell$-tuple of object variables. Our goal is to show that $|S^\phi(G/A)|=\mathcal{O}(|A|^{\ell+\epsilon})$. In the sequel, $d$ denotes a positive integer depending on ${\cal C},\ell,\phi$ only (and not on $G, A$ and $\epsilon$), and will be specified later. We may choose positive reals $\delta,\epsilon_1$ such that $(\ell+\epsilon_1)(1+\delta) \le \ell+\epsilon$ and $\epsilon_1>\delta(d+\ell)> \delta\ell$, for instance as follows: $\epsilon_1=\epsilon/2$ and $\delta=\frac{\epsilon}{4d+4\ell}$. The constants hidden in the $\mathcal{O}(\cdot)$ notation below depend on $\epsilon,\delta,\epsilon_1,\cal C, \ell$ and $\phi$, but not on $G$ and $A$. By \emph{tuples} below we refer to tuples of length $\ell$. Let $q$ be the quantifier rank of $\phi$ and let $p,r$ be the numbers obtained by applying \cref{lem:types} to $q$ and $\ell$. Let $B$ be an $r$-closure of $A$, given by~\cref{lem:closure-nd}. By~\cref{lem:closure-nd}, the total number of distinct $r$-projections onto $B$ is $\mathcal{O}(|B|^{1+\delta})$, and each of these projections has size $\mathcal{O}(|B|^{\delta})$. Figure~\ref{fig:sketch} serves as an illustration of the steps of the proof in the case $\ell=1$.
\begin{figure}[h!] \centering \includegraphics[scale=0.346,page=4]{pics} \caption{The proof of~\cref{thm:vc-density} in case $\ell=1$. The logical implications flow from right to left, but our description below proceeds in the other direction. } \label{fig:sketch} \end{figure} \setcounter{claim}{0} The first step is to reduce the statement to the following claim. \begin{claim}\label{claim2} If $X$ is a set of tuples with pairwise different $\phi$-types over $B$, then $|X|=\mathcal{O}(|B|^{\ell+\epsilon_1})$. \end{claim} \cref{claim2} implies that $|S^\phi(G/B)|=\mathcal{O}(|B|^{\ell+\epsilon_1})$, which is bounded by $\mathcal{O}(|A|^{(\ell+\epsilon_1)(1+\delta)})$ since $|B|=\mathcal{O}(|A|^{1+\delta})$. As $(\ell+\epsilon_1)(1+\delta)\le \ell+\epsilon$, this shows that $|S^\phi(G/B)|=\mathcal{O}(|A|^{\ell+\epsilon})$. Then \cref{lem:types-over-B} implies that also $|S^\phi(G/A)|=\mathcal{O}(|A|^{\ell+\epsilon})$, and we are done. Therefore, it remains to prove~\cref{claim2}. \medskip For a tuple $\bar w=w_1\ldots w_\ell\in V(G)^\ell$, define its \emph{projection} to be the set $C_1\cup\ldots\cup C_\ell\subset B$ where $C_i=M^G_r(w_i, B)$. Note that there are at most $\mathcal{O}(|B|^{\ell(1+\delta)})$ different projections of tuples in total, and each projection has size $\mathcal{O}(|B|^\delta)$. To prove~\cref{claim2}, we consider the special case when all the tuples have the same projection, say $C\subset B$, and obtain a stronger conclusion, for $\epsilon_2\coloneqq \epsilon_1-\delta\ell>0$. \begin{claim}\label{claim3} If $Y$ is a set of tuples with pairwise different $\phi$-types over $B$, and all tuples in $Y$ have the same projection $C\subset B$, then $|Y|=\mathcal{O}(|B|^{\epsilon_2})$. \end{claim} Since there are at most $\mathcal{O}(|B|^{\ell(1+\delta)})$ different projections in total and $\ell(1+\delta)+\epsilon_2=\ell+\epsilon_1$, \cref{claim2} can be proved by summing the bound given by \cref{claim3} over all different projections~$C$.
It therefore remains to prove~\cref{claim3}. \medskip We apply~\cref{thm:uqw-tuples} to the set of $\ell$-tuples $Y$, for $m$ being the largest integer such that $|Y|\ge N^{\ell}_{2r}(m)$. As a conclusion, we obtain a set $Z\subseteq Y$ of $m$ tuples that is mutually $2r$-separated by $S$ in $G$, for some set of vertices $S\subseteq V(G)$ of size $s\coloneqq s^{\ell}_{2r}$. Let $d$ be the degree of the polynomial $N^\ell_{2r}(\cdot)$ obtained from~\cref{thm:uqw-tuples}. Note that $s=\mathcal{O}(1)$ and $|Y|=\mathcal{O}(m^d)$. \begin{claim}\label{claim4} It holds that $|Z|=\mathcal{O}(|C|)$. \end{claim} We first show how~\cref{claim4} implies~\cref{claim3}. Since $m=|Z|=\mathcal{O}(|C|)$, and $|C|=\mathcal{O}(|B|^\delta)$, it follows that $|Y|=\mathcal{O}(m^d)=\mathcal{O}(|B|^{d\delta})$. As $\delta(d+\ell)<\epsilon_1$, this implies that $d\delta<\epsilon_2$, yielding~\cref{claim3}. We now prove~\cref{claim4}. \medskip Let $Z_0\subset Z$ be the set of those tuples in $Z$ which are $r$-separated by $S$ from $B$ in $G$, and let $Z_1=Z-Z_0$ be the remaining tuples. Since tuples from $Z_0$ have pairwise different $\phi$-types over $B$, and each of them is $r$-separated by $S$ from $B$ in $G$, by~\cref{cor:bound} we infer that $|Z_0|=\mathcal{O}(1)$. On the other hand, by the definition of $Z_1$, with each tuple $\bar u\in Z_1$ we may associate a vertex $b(\bar u)\in C$ which is not $r$-separated from $\bar u$ by $S$ in $G$. Since the set $Z$ is mutually $2r$-separated by $S$ in $G$, it follows that for any two different tuples $\bar u,\bar v\in Z_1$ we have $b(\bar u)\neq b(\bar v)$. Hence $b(\cdot)$ is an injection from $Z_1$ to $C$, which proves that $|Z_1|\leq |C|$. To conclude, we have $|Z|=|Z_0|+|Z_1|=\mathcal{O}(1)+\mathcal{O}(|C|)=\mathcal{O}(|C|)$. This finishes the proof of~\cref{claim4} and ends the proof of~\cref{thm:vc-density}.
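To build intuition for the quantity $|S^\phi(G/A)|$ in the simplest case $\ell=1$: when $\phi(x,y)$ expresses $\mathrm{dist}(x,y)\le r$, the $\phi$-type of a vertex $v$ over $A$ is determined by the trace $N_r[v]\cap A$, so counting types amounts to counting distinct traces. The sketch below is illustrative only; the helper names are hypothetical.

```python
from collections import deque

def ball(adj, v, r):
    """Closed r-neighborhood N_r[v], computed by BFS."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        if dist[x] >= r:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return set(dist)

def num_distance_types(adj, A, r):
    """Number of distinct traces N_r[v] ∩ A over all vertices v,
    i.e. |S^phi(G/A)| for phi(x,y) = 'dist(x,y) <= r'."""
    return len({frozenset(ball(adj, v, r) & A) for v in adj})

# Toy example: the cycle on 6 vertices.
cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
```

On the $6$-cycle with $A=\{0,3\}$, radius $1$ yields only the two traces $\{0\}$ and $\{3\}$, in line with the linear-in-$|A|$ behavior guaranteed for classes of bounded expansion.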
\subsection{Lower bounds for non-sparse classes} We now move to the proof of \Cref{thm:vc-density-lower-bound}, whose statement we repeat for convenience. \setcounter{aux}{\thetheorem} \setcounter{theorem}{\thevclower} \begin{theorem} Let $\mathcal{C}$ be a class of graphs which is closed under taking subgraphs. \begin{enumerate}[(1)] \item If $\mathcal{C}$ is not nowhere dense, then there is a formula $\phi(x,y)$ such that for every $n\in \mathbb{N}$ there are $G\in\mathcal{C}$ and $A\subseteq V(G)$ with $|A|=n$ and $|S^\phi(G/A)|=2^{|A|}$. \item If $\mathcal{C}$ has unbounded expansion, then there is a formula $\phi(x,y)$ such that for every $c\in \mathbb{R}$ there exist $G\in\mathcal{C}$ and a nonempty $A\subseteq V(G)$ with $|S^\phi(G/A)|>c|A|$. \end{enumerate} \end{theorem} \setcounter{theorem}{\theaux} \begin{proof} The first part follows easily from the following lemma. Let $\mathcal{G}_r$ be the class of $(r-1)$-subdivisions of all simple graphs, that is, the class comprising all the graphs that can be obtained from any simple graph by replacing every edge by a path of length $r$. \begin{lemma}[\cite{nevsetvril2011nowhere}]\label{lem:lower-nd} For every somewhere dense graph class $\mathcal{C}$ that is closed under taking subgraphs, there exists a positive integer $r$ such that $\mathcal{G}_{r}\subseteq \mathcal{C}$. \end{lemma} To prove the first statement of \Cref{thm:vc-density-lower-bound}, for $n\in \mathbb{N}$, let $P(n)$ denote the graph with $n+2^n$ vertices $V(P(n))\coloneqq \{v_1,\ldots, v_n\}\cup \{w_M \colon M\subseteq \{1,\ldots, n\}\}$ and edges $E(P(n))\coloneqq \{v_iw_M \colon 1\leq i\leq n,\, M\subseteq \{1,\ldots, n\},\, i\in M\}$. If $\mathcal{C}$ is somewhere dense and closed under taking subgraphs, according to \Cref{lem:lower-nd}, there exists an integer $r$ such that $\mathcal{G}_{r}\subseteq \mathcal{C}$. In particular, for every $n\in \mathbb{N}$ the $(r-1)$-subdivision $P^{r}(n)$ of the graph $P(n)$ is contained in $\mathcal{C}$. 
Now consider the formula $\phi(x,y)$ stating that $x$ and~$y$ are at distance at most $r$. Then for every $n\in \mathbb{N}$ we have $S^\phi(P^{r}(n)/A)=\mathcal{P}(A)$, where $A\subseteq V(P^{r}(n))$ denotes the set $\{v_1,\ldots, v_n\}$. This implies the first part of the theorem. \medskip We now move to the second part of~\cref{thm:vc-density-lower-bound}. A graph $H$ is a \emph{topological depth-$r$ minor} of~$G$ if there is a mapping $\psi$ that maps vertices of~$H$ to vertices of $G$ such that $\psi(u)\neq \psi(v)$ for $u\neq v$, and edges of $H$ to paths in $G$ such that if $uv\in E(H)$, then $\psi(uv)$ is a path of length at most $2r+1$ between $\psi(u)$ and $\psi(v)$ in $G$, and furthermore, if $uv, xy\in E(H)$, then $\psi(uv)$ and $\psi(xy)$ are internally vertex disjoint. We write $H\preccurlyeq_r^{\mathrm{t}} G$. Note that the above definition makes sense for half-integers, i.e., numbers $r$ for which $2r$ is an integer. It is well-known that classes of bounded expansion can be alternatively characterized by the sparsity of shallow topological minors. \begin{lemma}[Corollary 4.1 of \cite{sparsity}]\label{lem:top-bnd-exp} A class $\mathcal{C}$ of graphs has bounded expansion if and only if for every $r\in \mathbb{N}$ there exists a constant $c_r$ such that $|E(H)|/|V(H)|\leq c_r$ for all graphs $H$ such that $H\preccurlyeq_r^{\mathrm{t}} G $ for some $G\in \mathcal{C}$. \end{lemma} For $r\in \mathbb{N}$ and a graph $G$, by $\nu_r(G)$ we denote the \emph{normed $r$-neighborhood complexity} of $G$, as defined by Reidl et al.~\cite{reidl2016characterising}: \begin{equation*} \nu_r(G)\coloneqq\max_{H\subseteq G,\,\emptyset\neq A\subseteq V(G)}\frac{|\{N_r^H[v]\cap A\, \colon\, v\in V(H)\}|}{|A|}. \end{equation*} We will need the following result relating edge density in shallow topological minors and normed neighborhood complexity.
\begin{lemma}[Theorem 4 of \cite{reidl2016characterising}]\label{lem:lower-be} Let $G$ be a graph, let $r$ be a half-integer, and let $H\preccurlyeq_r^{\mathrm{t}}G$. Then $$\frac{|E(H)|}{|V(H)|}\leq (2r + 1)\cdot \max \left\{\nu_1(G)^4\cdot \log^2\nu_1(G),\nu_2(G),\ldots, \nu_{\left\lceil r+\frac{1}{2}\right\rceil}(G)\right\}.$$ \end{lemma} For the second part of~\cref{thm:vc-density-lower-bound}, we use the contrapositive of \Cref{lem:top-bnd-exp}. Since $\mathcal{C}$ has unbounded expansion, for some $r\in \mathbb{N}$ the value $|E(H)|/|V(H)|$ is unbounded among depth-$r$ topological minors $H$ of graphs from $\mathcal{C}$. By applying \Cref{lem:lower-be}, we find that for some $q\leq \left\lceil r+\frac{1}{2}\right\rceil$ the value $\nu_{q}(G)$ is unbounded when $G$ ranges over all graphs from $\mathcal{C}$. Since $\mathcal{C}$ is closed under taking subgraphs, we infer that also the ratio $\frac{|\{N_q^G[v]\cap A \colon v\in V(G)\}|}{|A|}$ is unbounded when~$G$ ranges over graphs from $\mathcal{C}$ and $A$ ranges over nonempty subsets of $V(G)$. This is equivalent to the sought assertion for the formula $\phi(x,y)$ expressing that $x$ and~$y$ are at distance at most $q$. \mbox{} \end{proof}
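The counting behind the first part can be checked concretely in the unsubdivided case ($r=1$, where $\phi(x,y)$ expresses distance at most $1$, so the type of a vertex over $A$ is its closed-neighborhood trace on $A$): in $P(n)$, the traces on $A=\{v_1,\ldots,v_n\}$ realize all $2^n$ subsets of $A$. A hypothetical sketch, not taken from the paper:

```python
from itertools import chain, combinations

def power_set_graph(n):
    """P(n): vertices v_0..v_{n-1} and w_M for every M ⊆ {0,..,n-1};
    v_i is adjacent to w_M iff i ∈ M (the construction above)."""
    subsets = list(chain.from_iterable(
        combinations(range(n), k) for k in range(n + 1)))
    adj = {('v', i): set() for i in range(n)}
    for M in subsets:
        wm = ('w', M)
        adj[wm] = {('v', i) for i in M}
        for i in M:
            adj[('v', i)].add(wm)
    return adj

def closed_neighborhood_traces(adj, A):
    """Distinct sets N[u] ∩ A (closed neighborhoods) over all vertices u."""
    return {frozenset((adj[u] | {u}) & A) for u in adj}

n = 3
G = power_set_graph(n)
A = {('v', i) for i in range(n)}
```

Each $w_M$ realizes the trace $M$, and each $v_i$ realizes the singleton $\{v_i\}$ (already realized by $w_{\{i\}}$), so exactly $2^{|A|}$ traces occur.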
\section{Introduction} Over the past few years, the Internet of Things (IoT) has connected objects, sensors, and appliances of all kinds. One of the most promising future applications is the Industrial IoT~\cite{varga20205g}, i.e., the use of IoT technology and standards to help automate and monitor industrial processes. In particular, remotely monitoring a process by using cheap, low-power wireless sensors can significantly reduce operation costs of machinery in remote or dangerous environments, such as windmills or photovoltaic farms~\cite{bedi2018review}, flagging anomalies in the sensory data and requesting human intervention when needed. The strict size, energy, and complexity requirements that make IoT solutions attractive for these applications pose significant challenges. The standard techniques used in Industry 4.0 contexts are often complicated and rely on an abundance of data, often using deep learning over massive datasets~\cite{zhou2020variational}. Naturally, this does not fit an IoT scenario, in which the sensors are resource-constrained in terms of power and computation capabilities. Furthermore, the devices cannot transmit complete measurements at a high rate for processing in the cloud, such as raw sound or vibration recordings, as they would quickly deplete their batteries and congest the radio channel. Instead, a more reasonable approach is to let the devices transmit \emph{feature vectors} of their measurements. However, the statistics of anomalous conditions are often hard to model and may not be known a priori~\cite{ukil2016iot}, which makes appropriate source coding complex: optimal quantization requires knowledge of the input distribution, and anomalies falling outside the nominal range are likely to suffer from high distortion. On the other hand, applying universal quantization techniques increases the distortion for nominal samples, which also makes it difficult to discriminate between nominal and anomalous samples.
In general, quantizers tend to suppress anomalies, and as a result, uncoded transmission can be a valid alternative in some cases~\cite{gastpar2003code}. In fact, analog transmission might even outperform digital encoding in terms of sample reconstruction accuracy in the short blocklength regime~\cite{kostina2013lossy}. This has been observed in other practical deep learning tasks over channels, such as classification~\cite{jankowski2020joint} and image retrieval~\cite{jankowski2020wireless}. In this work, we consider the problem of remote anomaly detection in an edge-computing scenario, in which a wireless sensor has constrained computational and communication resources, and transmits feature vectors to an edge server that performs the anomaly detection. We compare uncoded and coded transmission of the feature vectors, and in line with the typical anomaly detection situation, we assume that only data from nominal conditions are known, and that anomalies follow unknown statistics outside the nominal operational region. We consider two types of anomaly detectors at the edge node. The first is based on Principal Component Analysis (PCA)~\cite{lakhina2004diagnosing}, and while being analytically and computationally tractable, it can only detect linearly separable anomalies. The second detector is an autoencoder, which is based on a neural network and can detect more complex anomalies. We first examine the PCA method on a constructed scenario to reveal the fundamental differences between the two transmission schemes, and then evaluate the autoencoder on a dataset of real acoustic data from industrial equipment. Our results show that for both the PCA method and the autoencoder, coded transmission is best when the channel signal-to-noise ratio (SNR) is low, while uncoded transmission performs best in the high SNR regime. The rest of the paper is organized as follows.
The system model is presented in Sec.~\ref{sec:system}, along with the considered coded and uncoded transmission schemes. PCA anomaly detection is then described and analyzed in Sec.~\ref{sec:pca}, while an autoencoder-based anomaly detector is introduced in Sec.~\ref{sec:autoenc}. The numerical results for both the PCA and autoencoder methods are shown in Sec.~\ref{sec:results}, and finally Sec.~\ref{sec:concl} concludes the paper. \section{System model}\label{sec:system} We consider an edge-computing scenario comprising a single device, which has a wireless connection to a base station equipped with an edge server. While the device is resource-constrained and capable of only limited computation, the edge server is assumed to be unconstrained. Based on a sensor observation, the device constructs a sample $\mathbf{x}\in\mathbb{R}^N$, also referred to as a feature vector, which either belongs to the class of nominal observations, or to the alternative class of anomalous observations. Without loss of generality, we shall assume that the sample vector is normalized, i.e., $(1/N)\mathbb{E}[\|\mathbf{x}\|^2]=1$, where $\|\cdot\|$ is the $\ell_2$ norm. Due to the resource constraints of the device, the classification is assumed to take place at the edge server, and thus the device needs to transmit the sample before it can be classified. In line with existing literature on anomaly detection, we will assume that the statistics are known only for nominal samples and not for anomalies. Such statistics are typically gathered from a dataset assumed to include only nominal samples. Furthermore, we will restrict our focus to residual-based anomaly detectors, in which the classification of $\mathbf{x}$ is determined by the \emph{reconstruction error} $\|\hat{\mathbf{x}}-\mathbf{x}\|$, where $\hat{\mathbf{x}}=g^{-1}\left(g(\mathbf{x})\right)$ is the reconstruction of $\mathbf{x}$ after applying some transformation.
The intrinsic assumption is that by properly choosing $g(\mathbf{x})$, the residual for a nominal sample is small but large for an anomalous sample. In particular, $\mathbf{x}$ is declared an outlier if the magnitude of the residual exceeds a given threshold $\delta$, \begin{equation} \|\hat{\mathbf{x}}-\mathbf{x}\|^2 > \delta^2, \end{equation} and otherwise the sample is assumed to be nominal, see, e.g.,~\cite{jackson1979control,lakhina2004diagnosing,purohit2019mimii}. Note that $\delta$ controls the trade-off between sensitivity and the false positive probability. To characterize the performance of a detector, we define the accuracy for a dataset with $P$ positive and $N$ negative samples as \begin{equation} \text{acc} = \sup_{\delta}\, \frac{P\Pr(\text{TP}|\delta)+N(1-\Pr(\text{FP}|\delta))}{P+N}, \end{equation} where $\Pr(\text{TP}|\delta)$ and $\Pr(\text{FP}|\delta)$ are the true positive and false positive probabilities, respectively. In this initial paper, we restrict ourselves to the additive white Gaussian noise (AWGN) channel, which is instructive for comparing the coded and uncoded transmission strategies. Prior to transmission, the device encodes the sample to $D$ symbols using the encoder $f:\mathbb{R}^N\to\mathbb{R}^D$. Throughout the paper we will assume that $D\ge N$. Denoting the encoded sample by $\bar{\mathbf{x}}=f(\mathbf{x})$, which is assumed to be normalized with respect to the symbol power, $(1/D)\mathbb{E}[\|\bar{\mathbf{x}}\|^2]=1$, the received signal is \begin{equation} \mathbf{y} = \bar{\mathbf{x}} + \mathbf{z}, \end{equation} where $\mathbf{z}\sim\mathcal{N}(\mathbf{0}, \Gamma^{-1}\mathbf{I})$ is additive white Gaussian noise. The signal-to-noise ratio (SNR) is then equal to $\Gamma$. We shall denote the decoded feature vector by $\mathbf{x}'=f^{-1}(\mathbf{y})$, and note that this channel can be seen as a stationary instance of a block-fading channel where the noise power is determined by the amount of fading.
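The accuracy defined above can be estimated empirically by sweeping the threshold $\delta$ over the observed residuals, since only residual values occurring in the data can change the true/false positive counts. A small sketch with made-up residuals (illustrative only, not the paper's data):

```python
def best_accuracy(residuals_pos, residuals_neg):
    """Maximize [P*Pr(TP|delta) + N*(1 - Pr(FP|delta))] / (P + N)
    over thresholds delta.  A sample is declared positive (anomalous)
    when its residual exceeds delta, so only the observed residual
    values (plus one value below all of them) need to be tried."""
    P, N = len(residuals_pos), len(residuals_neg)
    candidates = sorted(set(residuals_pos) | set(residuals_neg))
    candidates = [candidates[0] - 1.0] + candidates
    best = 0.0
    for delta in candidates:
        tp = sum(r > delta for r in residuals_pos)   # true positives
        fp = sum(r > delta for r in residuals_neg)   # false positives
        best = max(best, (tp + (N - fp)) / (P + N))
    return best

# Made-up residuals: anomalies tend to have larger reconstruction error.
pos = [2.0, 3.5, 0.4]        # anomalous samples
neg = [0.1, 0.3, 0.5, 0.2]   # nominal samples
```

With the toy residuals above, one anomaly overlaps the nominal range, so the best achievable accuracy is $6/7$ rather than $1$.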
We consider both coded and uncoded transmission strategies, as outlined next. \subsection{Coded transmission} With the coded transmission strategy, the device first quantizes its sample with a given rate (i.e., to a given number of bits), and then transmits the quantized sample over the channel. We assume that the device uses a capacity-achieving channel code, so that the transmission rate is $C=(1/2)\log_2(1+\Gamma)$, measured in bits per (real) symbol. We note that this reflects an idealized scenario, and thus is an upper bound on what can be achieved in practice, where channel estimation, etc., is needed. Under this assumption, the total number of bits that the device has available to represent the sample is $B=CD$ bits. Because of resource constraints, we assume that the device employs, for each entry in the sample vector, a scalar quantizer designed to minimize the expected squared-error distortion of the nominal samples (since only these are known). The quantizers are designed in two steps. First, the $B$ bits are allocated to the individual entries in the sample based on their expected distortion, by iteratively allocating one bit to the entry with the highest expected distortion (see, e.g.,~\cite{gershogray} for a discussion of this greedy approach). For simplicity, we assume that the entries in the sample vector are Gaussian, so that the expected distortion of the $i$-th entry is given by $\sigma_i^2 2^{-2b_i}$, where $\sigma_i^2$ is the variance of the entry and $b_i$ is the number of allocated bits. After this procedure we have $\sum_{i=1}^N b_i=B$. In the second step, the individual quantizers for each sample entry are designed using Lloyd's algorithm~\cite{lloyd82}, such that $2^{b_i}$ quantization points are allocated to entry $i$. To encode a sample vector, the device simply picks, for each entry $i$ in the vector, the quantization point from the set of $2^{b_i}$ points that is closest to entry $i$ in the sample vector.
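The greedy bit-allocation step can be sketched as follows; the variances and bit budget are made-up examples, and the Lloyd step that designs the actual per-entry codebooks is omitted.

```python
def greedy_bit_allocation(variances, B):
    """Allocate B bits among entries by repeatedly giving one bit to
    the entry with the largest current expected distortion
    sigma_i^2 * 2^(-2*b_i), i.e. the greedy rule from the text."""
    bits = [0] * len(variances)
    for _ in range(B):
        distortions = [v * 2 ** (-2 * b) for v, b in zip(variances, bits)]
        i = max(range(len(variances)), key=distortions.__getitem__)
        bits[i] += 1
    return bits

# Hypothetical example: three Gaussian entries and a budget of B = 6 bits.
alloc = greedy_bit_allocation([4.0, 1.0, 0.25], 6)
```

In this example the allocation comes out as $[3,2,1]$, which equalizes the post-quantization distortions $\sigma_i^2 2^{-2b_i}$ across entries, in the spirit of reverse water-filling.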
\subsection{Uncoded transmission} In the uncoded transmission scheme, the device transmits the sample vector using uncoded analog (real) symbols. Since there is no need for source and channel coding, this scheme has the advantage of simplifying the transmitter device. Note that the analog symbols can be concatenated with a coded fragment, e.g. a header to allow for metadata, device addressing, etc., but this part is not important for this paper. To match the number of available symbols $D$, the transmitter spreads the signal using a matrix $\mathbf{Q}\in\mathbb{R}^{D\times N}$ satisfying $\mathbf{Q}^T\mathbf{Q}=\mathbf{I}$, i.e. the encoder and decoder are $f(\mathbf{x})=\mathbf{Q}\mathbf{x}$ and $f^{-1}(\mathbf{y})=\mathbf{Q}^T\mathbf{y}$, respectively. When the sample is transmitted uncoded, the received signal can be written as \begin{equation} \mathbf{x}' = \mathbf{x} + \mathbf{w}, \end{equation} where $\mathbf{w}\sim\mathcal{N}\left(\mathbf{0},\frac{N\Gamma^{-1}}{D}\mathbf{I}\right)$. \section{PCA based anomaly detection}\label{sec:pca} In this section, we study a PCA based anomaly detector to gain insight into the impact of the channel on the detection accuracy. We first introduce the PCA subspace method, and then quantify the false positive and true positive probabilities for a given threshold $\delta$. Due to the difficulties associated with modelling the quantization effects for low rates (which are our main interest), we limit our analysis to the case of uncoded transmissions, and note that this also includes the case without a channel as the special case with $\Gamma^{-1}=0$. The performance of the coded transmission scheme with quantization effects will be evaluated using Monte Carlo simulations. Although the PCA subspace method is general, to ease exposition and to keep the analysis tractable, we will assume that the nominal samples follow a Gaussian distribution with zero mean, i.e. 
$\mathbf{x}\sim\mathcal{N}(\mathbf{0}, \mathbf{\Sigma})$ where $\text{tr}(\mathbf{\Sigma})=N$ to satisfy the normalization assumption. We will denote the eigenvalues of $\mathbf{\Sigma}$, ordered decreasingly, by $\hat{\sigma}_1\ge\hat{\sigma}_2\ge\ldots\ge\hat{\sigma}_N$. \subsection{PCA subspace method} The PCA subspace method has been applied successfully to many practical problems, see e.g.~\cite{lakhina2004diagnosing}, and while more sophisticated non-linear methods for anomaly detection exist, such as kernel methods and deep autoencoders~\cite{hoffmann07,sakurada14}, these methods are typically difficult to analyze and share many similarities with the PCA method. We assume that the received (decoded) samples $\mathbf{x}'$ have zero mean and covariance matrix $\mathbf{\Sigma}'$, but in general the distribution does not have to be Gaussian. This assumption is valid both for the coded and uncoded policies---in particular, in the uncoded case $\mathbf{\Sigma}'=\mathbf{\Sigma}+(N/D)\Gamma^{-1}\mathbf{I}$. Let $\mathbf{V}\mathbf{\Lambda}\mathbf{V}^T=\mathbf{\Sigma}'$ be the eigendecomposition of $\mathbf{\Sigma}'$ with normalized eigenvectors ordered by the eigenvalues $\lambda_1 \ge \lambda_2\ge\ldots\ge \lambda_N$, and denote by $\hat{\mathbf{V}}\in\mathbb{R}^{N\times K}$ the matrix composed of the $K$ eigenvectors of $\mathbf{\Sigma}'$ with the largest eigenvalues. The PCA subspace method decomposes a sample $\mathbf{x}'$ (which is either nominal or anomalous) into two components \begin{equation} \mathbf{x}'=\hat{\mathbf{x}}+\tilde{\mathbf{x}}, \end{equation} where $\hat{\mathbf{x}}$ and $\tilde{\mathbf{x}}$ are referred to as the \emph{modeled} and \emph{residual} components, respectively. The modeled component is the projection of $\mathbf{x}'$ onto the subspace spanned by $\hat{\mathbf{V}}$, i.e.
$\hat{\mathbf{x}}=\hat{\mathbf{V}}\hat{\mathbf{V}}^T\mathbf{x}'$, and $\tilde{\mathbf{x}}=(\mathbf{I}-\hat{\mathbf{V}}\hat{\mathbf{V}}^T)\mathbf{x}'=\tilde{\mathbf{P}}\mathbf{x}'$. As with other residual-based anomaly detectors, the assumption is that, provided that a sufficient number of eigenvectors are included in $\hat{\mathbf{V}}$, the residual vector $\tilde{\mathbf{x}}$ is going to be small in magnitude when $\mathbf{x}'$ follows the assumed model, but large when $\mathbf{x}'$ is anomalous. \subsection{False positive probability} False positives occur when the residual of a nominal sample $\mathbf{x}_0'$ exceeds the detection threshold $\delta$. We are interested in characterizing \begin{equation} \Pr(\text{FP}|\delta)=\Pr(\|\tilde{\mathbf{x}}_0'\| > \delta). \end{equation} As mentioned, we restrict our analysis to the uncoded case where $\mathbf{x}_0'\sim\mathcal{N}(\mathbf{0}, \mathbf{\Sigma}')$ and $\mathbf{\Sigma}'=\mathbf{\Sigma}+(N/D)\Gamma^{-1}\mathbf{I}$. In this case, the residual vector $\tilde{\mathbf{x}}_0'=\tilde{\mathbf{P}}\mathbf{x}_0'$ is also Gaussian with zero mean and covariance matrix $\tilde{\mathbf{\Sigma}}'=\tilde{\mathbf{P}}\mathbf{\Sigma}'\tilde{\mathbf{P}}^T$. It can be shown that $\|\tilde{\mathbf{x}}_0'\|^2$ is distributed as \begin{equation} \|\tilde{\mathbf{x}}_0'\|^2\sim\sum_{i=K+1}^N\lambda_i Z_i^2,\label{eq:normdist_fpr} \end{equation} where $\lambda_i$ is the $i$-th largest eigenvalue of $\mathbf{\Sigma}'$ and $Z_i$ are i.i.d. standard Gaussian random variables~\cite{jackson1979control}. To make the impact of the additive noise more explicit, the expression can also be written using the eigenvalues of $\mathbf{\Sigma}$ by noticing that $\lambda_i=\hat{\sigma}_i+(N/D)\Gamma^{-1}$, where $\hat{\sigma}_i$ is the $i$-th largest eigenvalue of $\mathbf{\Sigma}$.
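This distributional claim can be sanity-checked numerically. The following minimal Python sketch (with toy values for $N$, $K$, $D$, and $\Gamma^{-1}$; all variable names are ours) draws nominal received samples, applies the residual projector $\tilde{\mathbf{P}}$, and compares the empirical mean of $\|\tilde{\mathbf{x}}_0'\|^2$ with the predicted value $\sum_{i=K+1}^N\lambda_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, D, inv_snr = 6, 2, 6, 0.5                  # toy sizes and noise level
sig = np.sort(rng.uniform(0.5, 2.0, N))[::-1]    # eigenvalues of Sigma (toy, descending)
lam = sig + (N / D) * inv_snr                    # lambda_i = sigma_i + (N/D) Gamma^{-1}

# Random orthonormal eigenvectors; Sigma' = V diag(lam) V^T
V = np.linalg.qr(rng.normal(size=(N, N)))[0]
Sigma_p = V @ np.diag(lam) @ V.T
P_res = np.eye(N) - V[:, :K] @ V[:, :K].T        # residual projector (I - V_hat V_hat^T)

# Monte Carlo residual norms of nominal received samples
x = rng.multivariate_normal(np.zeros(N), Sigma_p, size=200000)
res2 = np.sum((x @ P_res.T) ** 2, axis=1)

empirical_mean = res2.mean()
predicted_mean = lam[K:].sum()                   # E||x_tilde||^2 = sum_{i>K} lambda_i
```

The empirical and predicted means agree up to Monte Carlo error, consistent with the weighted chi-square form above.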
The false positive probability can then be computed by noting that the quantity \begin{equation} c(\|\tilde{\mathbf{x}}_0'\|^2)=\frac{\theta_1\left(\left(\|\tilde{\mathbf{x}}_0'\|^2/\theta_1\right)^{h_0} - 1 - \theta_2h_0(h_0-1)/\theta_1^2\right)}{\sqrt{2\theta_2h_0^2}}, \end{equation} approximately follows a standard Gaussian distribution, where $\theta_j=\sum_{i=K+1}^N\left(\hat{\sigma}_i+(N/D)\Gamma^{-1}\right)^j$ and $h_0=1-2\theta_1\theta_3/(3\theta_2^2)$~\cite{jackson1979control}. Thus, to compute the false positive probability, we can evaluate \begin{equation} \Pr(\|\tilde{\mathbf{x}}_0'\| > \delta) \approx 1-\Phi(c(\delta^2)),\label{eq:magnitude_normal_approx} \end{equation} where $\Phi(\cdot)$ is the cumulative distribution function of a standard Gaussian distribution. \subsection{True positive probability} To derive the true positive probability, let us assume that an anomaly is reflected as an additive term in the sample $\mathbf{x}_{\mathbf{f}}=\sqrt{\eta}\mathbf{x}_0+\mathbf{f}$, where $\mathbf{x}_0$ represents a sample under nominal conditions and $\eta=1-(1/N)\|\mathbf{f}\|^2$ is introduced to have $(1/N)\mathbb{E}[\|\mathbf{x}_{\mathbf{f}}\|^2]=1$ (assuming $\|\mathbf{f}\|^2\le N$). After being transmitted over the noisy channel, the received sample $\mathbf{x}_{\mathbf{f}}'=\mathbf{x}_{\mathbf{f}}+\mathbf{w}$ is Gaussian with mean $\mathbf{f}$ and covariance $\mathbf{\Sigma}'=\eta\mathbf{\Sigma}+(N/D)\Gamma^{-1}\mathbf{I}$. The residual vector of the anomalous sample can be written as \begin{align} \tilde{\mathbf{x}}_{\mathbf{f}}' &= \tilde{\mathbf{P}}\mathbf{x}_{\mathbf{f}}'=\tilde{\mathbf{x}}_0'+\tilde{\mathbf{f}}, \end{align} where $\tilde{\mathbf{x}}_0'=\tilde{\mathbf{P}}(\sqrt{\eta}\mathbf{x}_0+\mathbf{w})$ and $\tilde{\mathbf{f}}=\tilde{\mathbf{P}}\mathbf{f}$. We define the true positive probability as \begin{equation} \Pr(\text{TP}|\delta)=\Pr(\|\tilde{\mathbf{x}}_{\mathbf{f}}'\| > \delta).
\end{equation} In order for an anomaly $\mathbf{f}$ to be detectable by the PCA method, it must leave a nonzero residual, i.e. $\tilde{\mathbf{f}}\neq \mathbf{0}$ ($\mathbf{f}$ must not be in the null space of $\tilde{\mathbf{P}}$). Furthermore, $\mathbf{f}$ must leave a sufficiently large residual so that $\|\tilde{\mathbf{x}}_{\mathbf{f}}'\|>\delta$. Due to these conditions, the true positive probability generally depends on the anomaly vector distribution. In this work, we will restrict the analysis to the conditional true positive probability given an anomaly vector $\mathbf{f}$, i.e. $\Pr(\|\tilde{\mathbf{x}}_{\mathbf{f}}'\|>\delta|\mathbf{f})$. The marginal true positive probability can be obtained by averaging the conditional probability over any distribution of the anomaly vectors. Because we condition on $\mathbf{f}$, and thus $\tilde{\mathbf{f}}$ is deterministic, the residual vector is distributed as $\tilde{\mathbf{x}}_{\mathbf{f}}'\sim\mathcal{N}(\tilde{\mathbf{f}}, \tilde{\mathbf{\Sigma}}')$ where $\tilde{\mathbf{\Sigma}}'=\tilde{\mathbf{P}}\mathbf{\Sigma}'\tilde{\mathbf{P}}^T$. By generalizing the result from~\cite{jackson1979control} to the case with non-zero mean we obtain \begin{equation} \|\tilde{\mathbf{x}}_{\mathbf{f}}'\|^2\sim\sum_{i=K+1}^N\lambda_i (Z_i + t_i)^2, \end{equation} where $t_i$ is the $i$-th entry of $\mathbf{t} = \mathbf{V}^T(\eta\mathbf{\Sigma}+(N/D)\Gamma^{-1}\mathbf{I})^{-1/2}\tilde{\mathbf{f}}$, see e.g.~\cite{mathai1992quadratic}. This distribution can be approximated like before as \begin{equation} \Pr(\|\tilde{\mathbf{x}}_{\mathbf{f}}'\| > \delta|\mathbf{f}) \approx 1-\Phi(c(\delta^2)), \end{equation} where $\theta_j$ are computed as~\cite{jensen1972gaussian} \begin{equation} \theta_j=\sum_{i=K+1}^N \lambda_i^j(1+jt_i^2). \end{equation} Here we may again exploit that $\lambda_i=\eta\hat{\sigma}_i+(N/D)\Gamma^{-1}$ to write the expression in terms of the eigenvalues of $\mathbf{\Sigma}$.
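The moments implied by the noncentral quadratic form above can likewise be checked numerically. In the sketch below (toy values of $\lambda_i$ and $t_i$; variable names are ours), the mean of $\sum_i\lambda_i(Z_i+t_i)^2$ equals $\theta_1$ and its variance equals $2\theta_2$, with $\theta_j=\sum_i\lambda_i^j(1+jt_i^2)$ as stated:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([2.0, 1.5, 1.0])   # toy eigenvalues lambda_i, i = K+1, ..., N
t = np.array([0.5, -0.3, 0.8])    # toy entries t_i of the mean-shift vector

# theta_j = sum_i lambda_i^j (1 + j t_i^2)
theta = [float(np.sum(lam**j * (1 + j * t**2))) for j in (1, 2, 3)]

# Monte Carlo: q = sum_i lambda_i (Z_i + t_i)^2 with Z_i i.i.d. standard normal
Z = rng.normal(size=(400000, 3))
q = np.sum(lam * (Z + t) ** 2, axis=1)
# mean of q matches theta_1; variance of q matches 2 * theta_2
```

This reflects the standard cumulants of a noncentral chi-square mixture: each term $\lambda_i(Z_i+t_i)^2$ contributes $\lambda_i(1+t_i^2)$ to the mean and $2\lambda_i^2(1+2t_i^2)$ to the variance.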
\section{Autoencoder based anomaly detection}\label{sec:autoenc} The main limitation of the PCA based method is that it requires anomalous samples to be linearly separable from the nominal ones. A more sophisticated alternative is to use an autoencoder, which is an artificial neural network that learns a compact representation of the input. Specifically, an autoencoder is typically composed of a set of encoder layers and a set of decoder layers, separated by a low-dimensional bottleneck layer, see e.g.~\cite{sakurada14}. The network is trained to minimize the reconstruction loss between the input and the output, given as \begin{equation} J(\phi) = \lVert \mathbf{x}' - h(\mathbf{x}'|\phi) \rVert^2, \end{equation} where $\mathbf{x}'$ is the decoded input feature vector at the receiver, $h(\mathbf{x}'| \phi)$ denotes the output of the autoencoder and $\phi$ indicates the autoencoder parameters. Similar to the PCA method, the reconstruction error of the autoencoder can be used for anomaly detection by assuming that anomalous samples produce large reconstruction errors. However, contrary to PCA, the autoencoder is not limited to detecting linear anomalies in the samples. This flexibility comes at the cost of being more difficult to analyze, and thus we only consider a numerical evaluation of the autoencoder based method. \section{Numerical results}\label{sec:results} \subsection{PCA results} We first evaluate the two transmission schemes for the PCA method on a constructed scenario that matches the assumptions of the analysis. Specifically, we assume that nominal samples are Gaussian with zero mean and covariance matrix $\mathbf{\Sigma}$. Because PCA is invariant under orthonormal transformations of the eigenvectors, the results only depend on the eigenvalues of $\mathbf{\Sigma}$, under the constraints that $\mathbf{\Sigma}$ has full rank and $\text{tr}(\mathbf{\Sigma})=\sum_{n=1}^N\hat{\sigma}_n=N$.
We consider the case where the eigenvalues of $\mathbf{\Sigma}$ are given as $\hat{\sigma}_1=50\beta,\hat{\sigma}_2=40\beta,\hat{\sigma}_3=30\beta,\hat{\sigma}_4=20\beta,\hat{\sigma}_5=10\beta,\hat{\sigma}_6=\ldots=\hat{\sigma}_N=\beta$, where $\beta=N/(50+40+30+20+10+N-5)$ ensures that the eigenvalues sum to $N$. We study the case with $N=128$, and because this choice of eigenvalues represents a scenario with five dominant principal components we pick $K=5$. The fault vectors $\mathbf{f}$ are drawn uniformly from spheres in $\mathbb{R}^N$ of various squared radii $\|\mathbf{f}\|^2$. The results are shown in \cref{fig:pca_res}. As can be seen, the accuracy is highest for the coded transmission at low SNRs, while the uncoded transmission scheme provides higher accuracy once the SNR exceeds a threshold around 0 dB. This indicates that for low SNRs, the additive noise from the uncoded scheme is dominant and makes it impossible to detect the faults. On the other hand, despite transmitting only a few coarse features, the fact that these features are affected only by quantization noise makes the accuracy of the coded scheme higher at low SNRs. \begin{figure} \centering \input{pca_res.tex} \caption{Accuracy of the PCA subspace method with digital and analog transmission. The dashed horizontal lines indicate the channel-free accuracy.} \label{fig:pca_res} \end{figure} \subsection{Autoencoder results} To study the impact of feature coding in the more complex autoencoder setting, we consider the baseline autoencoder model proposed for the MIMII dataset in~\cite{purohit2019mimii}. The MIMII dataset contains a collection of nominal and anomalous sounds of valves, pumps, fans, and slide rails from a factory environment. In particular, the sounds were recorded as 16-bit audio signals at 16 kHz with an eight-channel circular microphone array placed 50 cm away from the target machine (10 cm for valves).
Besides the machine sounds, background noise signals from real factories were recorded with the same equipment and mixed with the recordings of the machines at three SNRs ($-6$ dB, $0$ dB, and $6$ dB). In this work, we conduct experiments using only recordings from a single microphone. The recordings are used to construct feature vectors, which are the concatenation of five consecutive log-mel spectrogram frames, as further outlined in~\cite{purohit2019mimii}. As with other reconstruction based methods, the anomaly detection is performed by thresholding the reconstruction error. The autoencoder consists of six fully-connected layers of size 64, 64, 8, 64, 64, and 320, with a 320-element input vector. Each layer is followed by a ReLU activation function except for the last one, which has linear activations. For each SNR there are 26092 nominal and 6065 anomalous audio segments of 10 s, where the anomalous conditions include contamination, clogging, damage, etc. The anomalous segments are used only for testing, along with a random subset of nominal samples of the same size, so that the testing dataset contains the same number of nominal and anomalous examples. In line with our overall assumption, the baseline method used to detect the anomalous machines in~\cite{purohit2019mimii} is based on an unsupervised learning approach, where the training phase only includes the signals of nominal machines. The accuracy of the autoencoder anomaly detector with features transmitted over an AWGN channel is shown in \cref{fig:autoenc}. The autoencoder is trained for the specific SNR with data that includes noise (in the uncoded case) or is quantized (in the coded case). Furthermore, in the coded case, the quantizers are designed based on statistics estimated from the training set of nominal samples. As in the case of PCA, the coded scheme gives higher accuracy than the uncoded scheme when the SNR is low.
However, somewhat surprisingly, the accuracy decreases as the SNR increases up to a certain point around 5 dB, before it starts to increase again. A possible reason for this may be that at low SNRs, the autoencoder is able to learn a good model of the few (coarse) features that are transmitted, while it is unable to learn a good model at slightly higher SNRs, where it receives many coarse features. In the high SNR regime, all features are transmitted with high resolution, and thus the autoencoder can again learn a good representation of the features. However, as was the case with PCA, the uncoded scheme generally performs better than the coded scheme in the high SNR regime. \begin{figure} \centering \input{autoencoder_res.tex} \caption{Accuracy of the autoencoder on the MIMII dataset for various channel SNRs. The number in the legend indicates the SNR of the sound recordings.} \label{fig:autoenc} \end{figure} \section{Conclusion}\label{sec:concl} Motivated by wireless monitoring of processes in an industrial scenario using resource-constrained IoT devices, we have compared uncoded and coded transmission schemes for anomaly detection. We have considered anomaly detection methods based on PCA and on an autoencoder, and both cases reveal that coded transmissions perform better at low SNRs while uncoded transmissions are better at high SNRs. Part of the reason is that at low SNRs, the noise from the uncoded transmission tends to be dominant and hide the anomalous parts, while the coded scheme provides coarse but less noisy samples. On the other hand, at high SNRs the quantizer in the coded transmission scheme, which is designed for nominal samples, is likely to suppress anomalous signals, whereas the anomalous signals in the uncoded scheme are only affected by the Gaussian noise. \input{main.bbl} \end{document}
\section{Introduction} We consider diffuse light propagation in a bounded domain $\Omega\subset\mathbb{R}^n$ ($n\ge2$). In diffuse optical tomography, coefficients of the diffusion equation are determined from boundary measurements. The time-independent diffusion equation is given by \begin{equation} \cases{ -D_0\Delta u+\mu_au=f, &\quad $x\in\Omega$, \\ D_0\partial_{\nu}u+\frac{1}{\zeta}u=0, &\quad $x\in\partial\Omega$. } \label{de1} \end{equation} The outgoing light $u(x)$ is detected on a subboundary $\Gamma$ of the boundary ($x\in\Gamma\subset\partial\Omega$). On the boundary, we suppose $u\in L^p(\Gamma)$ with some $p\ge1$. Since the cost function for the inverse problem of determining coefficients of the diffusion equation in (\ref{de1}) has a complicated landscape, the reconstructed value may be trapped in a local minimum if iterative schemes such as the Levenberg-Marquardt, Gauss-Newton, and conjugate gradient methods are used. An alternative approach is the use of direct methods in which perturbations of coefficients are reconstructed. The Born and Rytov approximations are frequently used in cooperation with linearization of the nonlinear inverse problem. When the (first) Born approximation is compared with the (first) Rytov approximation, the superiority of the latter has been discussed \cite{Arridge99,Keller69,Kirkinis08}. A systematic way of inverting the Born series has been studied \cite{Markel-OSullivan-Schotland03,Markel-Schotland07,Moskow-Schotland08,Moskow-Schotland09,Panasyuk-Markel-Carney-Schotland06}. That is, higher-order Born approximations can be implemented with the inverse Born series. In this way, the direct methods can be applied to nonlinear inverse problems without linearization. In Ref.~\cite{Machida-Schotland15}, the inverse Born series was implemented for the transport-based optical tomography. In addition to optical tomography, the Calder\'{o}n problem was considered with the inverse Born series \cite{Arridge-Moskow-Schotland12}.
The inverse Born series was applied to inverse problems for scalar waves \cite{Kilgore-Moskow-Schotland12} and for electromagnetic scattering \cite{Kilgore-Moskow-Schotland17}. The series was developed for discrete inverse problems \cite{Chung-Gilbert-Hoskins-Schotland17}. The technique of the inverse Born series was used to investigate the inversion of the Bremmer series \cite{Shehadeh-Malcolm-Schotland17}. The inverse Born series was extended to Banach spaces \cite{Bardsley-Vasquez14,Lakhal18}. Recently, a modified Born series with unconditional convergence was proposed and its inverse series was studied \cite{Abhishek-Bonnet-Moskow20}. The convergence theorem for the inverse Born series has recently been improved \cite{Hoskins-Schotland22}. See Ref.~\cite{Moskow-Schotland19} for recent advances. Based on the success of past studies on the inverse Born series, in this paper we consider the inversion of the Rytov series. In experimental and clinical research on optical tomography, the Born approximation is quite often impractical, and tomographic images are obtained with the Rytov approximation. After linearization, the Rytov approximation was used for detecting breast cancer \cite{Choe-etal05,Choe-etal09} and in studies of brain function through the neurovascular coupling \cite{Eggebrecht-etal14}. Indeed, the inverse Rytov series was considered for the Helmholtz equation, and it was numerically observed that the first through third inverse Rytov approximations give better reconstructed images than the corresponding inverse Born approximations \cite{Tsihrintzis-Devaney00}. In \cite{Marks06}, intermediate approximations between the Born and Rytov approximations were explored. The relation between the inverse Rytov series and Newton's method was investigated \cite{Park-etal11}. In these papers, however, no systematic way of computing higher-order terms was presented. The remainder of the paper is organized as follows.
The Born series is introduced in Sec.~\ref{born} and the Rytov series is introduced in Sec.~\ref{rytov}. Then the inverse Rytov series is discussed in Sec.~\ref{invrytov}. Section \ref{implem} describes how the inverse Rytov series is recursively implemented. Concluding remarks are given in Sec.~\ref{concl}. \section{The Born series} \label{born} Let $g$ be a positive constant. We write \[ \mu_a(x)=g\left(1+\eta(x)\right),\quad\eta\ge-1. \] We suppose that $\eta$ is supported in a closed ball $B_a$ of radius $a$: \[ \mathop{\mathrm{supp}}\eta\subset B_a\subset\Omega. \] It will be seen below that the Born series converges for sufficiently small $a>0$. We suppose that $\eta\in L^q(B_a)$ for some $q\ge2$. Let $u_0(x)$ be the solution to the equation (\ref{de1}) in which $\mu_a(x)$ is replaced by $g$. We assume that there exists a constant $\xi>0$ such that $\xi\le u$ on $\Gamma$. Let $G(x,y)$ be the Green's function which corresponds to $u_0$. Then the following identity holds. \[ u(x)=u_0(x)-g\int_{\Omega}G(x,y)\eta(y)u(y)\,dy. \] From the above identity, the Born series can be constructed as \[ u=u_0+u_1+\cdots, \] where \[ u_j(x)=-g\int_{\Omega}G(x,y)\eta(y)u_{j-1}(y)\,dy\quad(j=1,2,\dots). \] The first two terms of the Born series are obtained as \begin{eqnarray*} u_1(x) &= -g\int_{\Omega}G(x,y)\eta(y)u_0(y)\,dy, \\ u_2(x) &= g^2\int_{\Omega}\int_{\Omega}G(x,y)\eta(y)G(y,z)\eta(z)u_0(z)\,dydz. \end{eqnarray*} Let us introduce the multilinear operators $K_j:L^q(B_a\times\cdots\times B_a)\to L^p(\Gamma)$ such that \[ u_j=-K_j\eta^{\otimes j}, \] where $\eta^{\otimes j}=\eta\otimes\cdots\otimes\eta$ is the $j$-fold tensor product. Here we have \begin{eqnarray*} &\fl K_1\eta= g\int_{B_a}G(x,y)u_0(y)\eta(y)\,dy, \\ &\fl K_2\eta\otimes\eta= -g^2\int_{B_a}\int_{B_a}G(x,y)G(y,z)u_0(z)\eta(y)\eta(z)\,dydz. 
\end{eqnarray*} In general, the $j$th term is given by \begin{eqnarray*} K_j\eta^{\otimes j} &= (-1)^{j+1}g^j\int_{B_a\times\cdots\times B_a}G(x,y_1)G(y_1,y_2)\cdots G(y_{j-1},y_j) \\ &\times u_0(y_j)\eta(y_1)\cdots\eta(y_j)\,dy_1\cdots dy_j. \end{eqnarray*} Let us define the operators $\check{K}_j:L^q(B_a\times\cdots\times B_a)\to L^p(\Gamma)$ such that \[ \frac{1}{u_0}K_j\eta^{\otimes j}=\check{K}_j\eta^{\otimes j}. \] We introduce \[\fl \mu=g\sup_{x\in B_a}\left\|G(x,\cdot)\right\|_{L^r(B_a)},\quad \nu=g|B_a|^{1/r}\sup_{y_1,y_2\in B_a}\left\|G(\cdot,y_1) \frac{u_0(y_2)}{u_0(\cdot)}\right\|_{L^p(\Gamma)}, \] where $r=q/(q-1)$. \begin{lem} For $j=1,2,\dots$, $\|\check{K}_j\|\le\nu\mu^{j-1}$. \end{lem} \begin{proof} For any $f\in L^q(B_a\times\cdots\times B_a)$, the multilinear operators $\check{K}_j$ are written as \begin{eqnarray*} (\check{K}_jf)(x) &= \frac{(-1)^{j+1}g^j}{u_0(x)} \int_{B_a\times\cdots\times B_a}G(x,y_1)G(y_1,y_2)\cdots G(y_{j-1},y_j) \\ &\times u_0(y_j)f(y_1,\dots,y_j)\,dy_1\cdots dy_j, \quad x\in\Gamma. \end{eqnarray*} Using H\"{o}lder's inequality, we have \begin{eqnarray*} \|\check{K}_jf\|_{L^p(\Gamma)}^p &= \left(g^j\right)^p\int_{\Gamma}\Biggl|\int_{B_a\times\cdots\times B_a} G(x,y_1)G(y_1,y_2)\cdots G(y_{j-1},y_j) \\ &\times \frac{u_0(y_j)}{u_0(x)}f(y_1,\dots,y_j)\,dy_1\cdots dy_j\Biggr|^p\,dx \\ &\le \left(g^j\right)^p\int_{\Gamma}\Biggl| \left(\int_{B_a\times\cdots\times B_a}|f(y_1,\dots,y_j)|^q\,dy_1\cdots dy_j \right)^{1/q} \\ &\times \left(\int_{B_a\times\cdots\times B_a} \left|G(x,y_1)G(y_1,y_2)\cdots G(y_{j-1},y_j)\frac{u_0(y_j)}{u_0(x)}\right|^r \,dy_1\cdots dy_j\right)^{1/r} \Biggr|^p\,dx \\ &\le \left(g^j\right)^p\|f\|_{L^q(B_a\times\cdots\times B_a)}^p \int_{\Gamma}\left|\sup_{y_1,y_j\in B_a}G(x,y_1)\frac{u_0(y_j)}{u_0(x)}\right|^p\,dx \\ &\times \left(\int_{B_a\times\cdots\times B_a} \left|G(y_1,y_2)\cdots G(y_{j-1},y_j)\right|^r \,dy_1\cdots dy_j\right)^{p/r}.
\end{eqnarray*} We define \begin{eqnarray*} \nu&= g|B_a|^{1/r}\sup_{y_1,y_2\in B_a}\left\|G(\cdot,y_1) \frac{u_0(y_2)}{u_0(\cdot)}\right\|_{L^p(\Gamma)}, \\ \mu&= g\sup_{x\in B_a}\|G(x,\cdot)\|_{L^r(B_a)}, \end{eqnarray*} and \[ I_{j-1}=g^{j-1}\left(\int_{B_a\times\cdots\times B_a} \left|G(y_1,y_2)\cdots G(y_{j-1},y_j)\right|^r\,dy_1\cdots dy_j\right)^{1/r}. \] Similar to the calculation in \cite{Moskow-Schotland08}, we have \[ I_{j-1}\le\mu I_{j-2},\quad I_1\le|B_a|^{1/r}\mu. \] Hence, \[ I_{j-1}\le\mu^{j-1}|B_a|^{1/r}\quad(j=2,3,\dots). \] We obtain \[ \|\check{K}_jf\|_{L^p(\Gamma)}^p\le \|f\|_{L^q(B_a\times\cdots\times B_a)}^p\nu^p|B_a|^{-\frac{p}{r}} I_{j-1}^p. \] By the definition of the operator norm, it is shown that \[ \|\check{K}_j\|_p\le\nu\mu^{j-1}. \] \end{proof} \section{The Rytov series} \label{rytov} Let us consider the Rytov series: $u=u_0e^{-\psi_1-\psi_2-\cdots}$. The function $\psi_j$ ($j=1,2,\dots$) is proportional to $g^j$. In particular, we consider boundary values of $u,u_0$ at $x\in\Gamma$. We introduce \[ \psi=\psi(x)=\ln\frac{u_0(x)}{u(x)},\quad x\in\Gamma. \] We assume $\psi\in L^p(\Gamma)$. We have \begin{eqnarray*} -\psi &= \ln\frac{u_0+u_1+\cdots}{u_0}= \ln\left(1+\sum_{j=1}^{\infty}\frac{u_j}{u_0}\right) \\ &= \sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}\left(\sum_{j=1}^{\infty}\frac{u_j}{u_0}\right)^k \\ &= \frac{u_1+u_2+\cdots}{u_0}- \frac{(u_1+u_2+\cdots)^2}{2u_0^2}+ \frac{(u_1+u_2+\cdots)^3}{3u_0^3}-\cdots \\ &= -\psi_1-\psi_2-\cdots. \end{eqnarray*} The first two terms of the Rytov series are explicitly written as \[ \psi_1=-\frac{u_1}{u_0},\quad \psi_2=-\frac{u_2}{u_0}+\frac{1}{2}\left(\frac{u_1}{u_0}\right)^2. \] In general, we have \[ \psi_j=\sum_{m=1}^j\frac{(-1)^m}{mu_0^m}\sum_{i_1+\cdots+i_m=j}u_{i_1}\cdots u_{i_m},\quad j=1,2,\dots. \] We note that the number of $j$th order terms in $(u_1+\cdots)^m$ is \[ \left(\begin{array}{c}j-1\\m-1\end{array}\right). 
\] In total, the number of terms in $\psi_j$ is \[ \sum_{m=1}^{j}\left(\begin{array}{c}j-1\\m-1\end{array}\right)=2^{j-1}. \] We introduce the forward operators $J_j:L^q(B_a\times\cdots\times B_a)\to L^p(\Gamma)$ such that \[ \psi_j=J_j\eta^{\otimes j}\quad(j=1,2,\dots). \] We have \begin{eqnarray*} &\fl J_1\eta=\check{K}_1\eta= \frac{g}{u_0(x)}\int_{\Omega}G(x,y)u_0(y)\eta(y)\,dy, \\ &\fl J_2\eta\otimes\eta= \check{K}_2\eta\otimes\eta+\frac{1}{2}\check{K}_1\eta\otimes\check{K}_1\eta \\ &\fl= \frac{g^2}{u_0(x)}\int_{\Omega}\int_{\Omega}G(x,y)G(y,z)u_0(z)\eta(y)\eta(z)\,dydz +\frac{g^2}{2u_0(x)^2}\left(\int_{\Omega}G(x,y)u_0(y)\eta(y)\,dy\right)^2. \end{eqnarray*} In general, the $j$th term is given by \[ J_j=\sum_{m=1}^j\frac{1}{m}\sum_{i_1+\cdots+i_m=j}\check{K}_{i_1}\otimes\cdots\otimes\check{K}_{i_m}. \] \begin{lem} We have $\|J_j\|\le\nu\left(\mu+\nu\right)^{j-1}$ for $j=1,2,\dots$. Moreover the Rytov series converges if $\|\eta\|_{L^q(B_a)}<(\mu+\nu)^{-1}$. \end{lem} \begin{proof} We note the binomial formula: \begin{equation} x(x+y)^{j-1}=\sum_{m=1}^j\left(\begin{array}{c}j-1\\m-1\end{array}\right)x^my^{j-m}. \label{binomial} \end{equation} We have \begin{eqnarray*} \|J_j\| &\le \left\|\sum_{m=1}^j\frac{1}{m}\sum_{i_1+\cdots+i_m=j}\check{K}_{i_1}\otimes\cdots\otimes\check{K}_{i_m}\right\| \\ &\le \sum_{m=1}^j\frac{1}{m}\sum_{i_1+\cdots+i_m=j} \|\check{K}_{i_1}\|\cdots\|\check{K}_{i_m}\| \\ &\le \sum_{m=1}^j\left(\begin{array}{c}j-1\\ m-1\end{array}\right)\nu^m\mu^{j-m} \\ &= \nu\left(\mu+\nu\right)^{j-1}. \end{eqnarray*} Since we have \begin{eqnarray*} \sum_{j=1}^{\infty}\|\psi_j\|_{L^p(\Gamma)} &= \sum_{j=1}^{\infty}\|J_j\eta\otimes\cdots\otimes\eta\|_{L^p(\Gamma)} \le\sum_{j=1}^{\infty}\|J_j\|\|\eta\|_{L^q(B_a)}^j \\ &\le \nu\left(\mu+\nu\right)^{-1} \sum_{j=1}^{\infty}\left(\mu+\nu\right)^j\|\eta\|_{L^q(B_a)}^j, \end{eqnarray*} the series converges if $\|\eta\|_{L^q(B_a)}<(\mu+\nu)^{-1}$.
\end{proof} \section{Inverse Rytov series} \label{invrytov} We begin by formally expanding the perturbation $\eta$ as \begin{eqnarray*} \eta &= \eta_1+\eta_2+\cdots \\ &= \mathcal{J}_1\psi+\mathcal{J}_2\psi\otimes\psi+\cdots. \end{eqnarray*} We refer to the above series as the inverse Rytov series. If we substitute the series $\psi=J_1\eta+J_2\eta\otimes\eta+\cdots$, we have \begin{eqnarray*} &\fl \eta= \mathcal{J}_1\left(J_1\eta+J_2\eta\otimes\eta+\cdots\right)+ \mathcal{J}_2\left(J_1\eta+J_2\eta\otimes\eta+\cdots\right)\otimes\left(J_1\eta+J_2\eta\otimes\eta+\cdots\right)+\cdots \\ &\fl= \mathcal{J}_1J_1\eta+\left(\mathcal{J}_1J_2+\mathcal{J}_2J_1\otimes J_1\right)\eta\otimes\eta+\cdots. \end{eqnarray*} Thus we obtain \begin{eqnarray*} && \mathcal{J}_1J_2+\mathcal{J}_2J_1\otimes J_1=0, \\ && \mathcal{J}_3J_1\otimes J_1\otimes J_1+\mathcal{J}_2J_1\otimes J_2+ \mathcal{J}_2J_2\otimes J_1+\mathcal{J}_1J_3=0,\dots. \end{eqnarray*} Indeed, the equality $\eta=\mathcal{J}_1J_1\eta$ does not hold due to the ill-posedness of this inverse problem. To consider $\mathcal{J}_1$, let us introduce $\eta^*$ as \cite{Machida-Schotland15} \[ \eta^*=\mathop{\mathrm{arg\,min}}_{\eta\in B_a}\left(\frac{1}{2}\|J_1\eta-\psi\|_{L^p(\Gamma)}^2+\alpha R(\eta)\right), \] where $R(\eta)$ is a penalty function with a regularization parameter $\alpha>0$ \cite{Engl-Hanke-Neubauer96,Morozov93,Schuster-Kaltenbacher-Hofmann-Kazimierski12}. The regularized pseudoinverse of $J_1$ is defined as $\mathcal{J}_1:\psi\mapsto\eta^*$. 
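As a concrete illustration, with the common quadratic penalty $R(\eta)=\frac{1}{2}\|\eta\|^2$ (one possible choice; the paper allows general penalties) and a discretized forward operator $J_1$ represented as a matrix, the regularized pseudoinverse has the closed form $(J_1^TJ_1+\alpha I)^{-1}J_1^T\psi$. A minimal Python sketch (function and variable names are ours) is:

```python
import numpy as np

def regularized_pseudoinverse(J1, psi, alpha):
    """Tikhonov-regularized pseudoinverse applied to data psi:
    eta* = argmin_eta (1/2)||J1 eta - psi||^2 + (alpha/2)||eta||^2,
    i.e. the quadratic penalty R(eta) = (1/2)||eta||^2."""
    A = J1.T @ J1 + alpha * np.eye(J1.shape[1])   # normal equations + regularization
    return np.linalg.solve(A, J1.T @ psi)
```

Larger $\alpha$ damps components of $\eta^*$ associated with small singular values of $J_1$, which is what stabilizes the first inversion step against the ill-posedness mentioned above.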
With this operator $\mathcal{J}_1$, we have \[ \mathcal{J}_2= -\mathcal{J}_1J_2\mathcal{J}_1\otimes\mathcal{J}_1= -\mathcal{J}_1\left[\check{K}_2\mathcal{J}_1\otimes\mathcal{J}_1- \frac{1}{2}\check{K}_1\mathcal{J}_1\otimes\check{K}_1\mathcal{J}_1\right], \] and \begin{eqnarray*} \mathcal{J}_3 &= -\left(\mathcal{J}_2J_1\otimes J_2+\mathcal{J}_2J_2\otimes J_1+\mathcal{J}_1 J_3\right)\mathcal{J}_1\otimes\mathcal{J}_1\otimes\mathcal{J}_1 \\ &= -\Biggl[\mathcal{J}_2\check{K}_1\otimes\left(\check{K}_2-\frac{1}{2}\check{K}_1\otimes\check{K}_1\right)+ \mathcal{J}_2\left(\check{K}_2-\frac{1}{2}\check{K}_1\otimes\check{K}_1\right) \otimes\check{K}_1 \\ &+ \mathcal{J}_1\left(\check{K}_3-\frac{1}{2}(\check{K}_1\otimes\check{K}_2+\check{K}_2\otimes\check{K}_1)+\frac{1}{3}\check{K}_1\otimes\check{K}_1\otimes\check{K}_1\right) \Biggr]\mathcal{J}_1\otimes\mathcal{J}_1\otimes\mathcal{J}_1. \end{eqnarray*} For $j\ge2$, we have \[ \mathcal{J}_j=-\left(\sum_{m=1}^{j-1}\mathcal{J}_m\sum_{i_1+\cdots+i_m=j} J_{i_1}\otimes\cdots\otimes J_{i_m}\right) \mathcal{J}_1\otimes\cdots\otimes\mathcal{J}_1. \] \begin{thm} \label{thm4_1} Assume that there exists a constant $M_1<1$ such that $(\mu+2\nu)\|\mathcal{J}_1\|\le M_1$. Then the operator $\mathcal{J}_j:L^p(\Gamma\times\cdots\times\Gamma)\to L^q(B_a)$ is bounded and \[ \|\mathcal{J}_j\|\le C_1\left(\mu+2\nu\right)^j\|\mathcal{J}_1\|, \] where constant $C_1=C_1(M_1)>0$ is independent of $j$. Moreover for any $\psi\in L^p(\Gamma)$, there exists $C_2=C_2(M_1,\mu,\nu)$ such that \[ \left\|\mathcal{J}_j\psi^{\otimes j}\right\|_{L^q(B_a)}\le C_2\left(\mu+2\nu\right)^j\|\mathcal{J}_1\psi\|_{L^q(B_a)}^j. 
\] \end{thm} \begin{proof} We find that for $j\ge2$, \begin{eqnarray*} &\fl \|\mathcal{J}_j\|= \left\|\left(\sum_{m=1}^{j-1}\mathcal{J}_m\sum_{i_1+\cdots+i_m=j} J_{i_1}\otimes\cdots\otimes J_{i_m}\right) \mathcal{J}_1\otimes\cdots\otimes\mathcal{J}_1\right\| \\ &\fl\le \left\|\sum_{m=1}^{j-1}\mathcal{J}_m\nu^m \sum_{i_1+\cdots+i_m=j}\left(\mu+\nu\right)^{i_1-1}\cdots \left(\mu+\nu\right)^{i_m-1}\right\|\|\mathcal{J}_1\|^j \\ &\fl\le \sum_{m=1}^{j-1}\|\mathcal{J}_m\|\nu^m \left(\begin{array}{c}j-1\\ m-1\end{array}\right)\left(\mu+\nu\right)^{j-m}\|\mathcal{J}_1\|^j \\ &\fl\le \|\mathcal{J}_1\|^j \left(\sum_{m=1}^{j-1}\|\mathcal{J}_m\|\right) \left(\sum_{m=1}^{j-1} \left(\begin{array}{c}j-1\\ m-1\end{array}\right)\nu^m\left(\mu+\nu\right)^{j-m}\right). \end{eqnarray*} By using (\ref{binomial}), we have \begin{eqnarray*} \|\mathcal{J}_j\| &\le \|\mathcal{J}_1\|^j \left(\sum_{m=1}^{j-1}\|\mathcal{J}_m\|\right) \left(\nu\left(\mu+2\nu\right)^{j-1}-\nu^j\right) \\ &\le \nu\|\mathcal{J}_1\|^j\left(\mu+2\nu\right)^{j-1} \sum_{m=1}^{j-1}\|\mathcal{J}_m\| \\ &\le \|\mathcal{J}_1\|^j\left(\mu+2\nu\right)^j \sum_{m=1}^{j-1}\|\mathcal{J}_m\|. \end{eqnarray*} By noticing the recursive structure of the above inequality, we can write \[ \|\mathcal{J}_j\|\le c_j\left[\left(\mu+2\nu\right)\|\mathcal{J}_1\|\right]^j \|\mathcal{J}_1\|, \] where \[ c_{j+1}=c_j+ \left[\left(\mu+2\nu\right)\|\mathcal{J}_1\|\right]^jc_j,\quad c_2=1. \] Hence we obtain \[ c_j=\prod_{m=2}^{j-1}\left(1+\left[\left(\mu+2\nu\right)\|\mathcal{J}_1\|\right]^m\right),\quad j\ge3. \] We note that \begin{eqnarray*} \ln{c_j} &\le \sum_{m=1}^{j-1}\ln\left(1+ \left[\left(\mu+2\nu\right)\|\mathcal{J}_1\|\right]^m\right) \\ &\le \sum_{m=1}^{j-1}\left[\left(\mu+2\nu\right)\|\mathcal{J}_1\|\right]^m \\ &\le \frac{1}{1-\left(\mu+2\nu\right)\|\mathcal{J}_1\|} \\ &\le \frac{1}{1-M_1}. \end{eqnarray*} Thus $c_j$ ($j\ge2$) are bounded. We put $C_1=\exp(1/(1-M_1))$. 
We note that \[ \left\|\mathcal{J}_j\psi^{\otimes j}\right\|_{L^q(B_a)}\le \|\mathcal{J}_1\psi\|_{L^q(B_a)}^j\left(\mu+2\nu\right)^j \sum_{m=1}^{j-1}\|\mathcal{J}_m\|, \] and \[ \sum_{m=1}^{j-1}\|\mathcal{J}_m\|\le C_1\|\mathcal{J}_1\|\left(\mu+2\nu\right) \frac{1-\left(\mu+2\nu\right)^{j-1}}{1-\left(\mu+2\nu\right)}. \] Hence we obtain \begin{eqnarray*} &\fl \left\|\mathcal{J}_j\psi^{\otimes j}\right\|_{L^q(B_a)} \le C_1\left(\mu+2\nu\right)^{j+1}\|\mathcal{J}_1\| \frac{1-\left(\mu+2\nu\right)^{j-1}}{1-\left(\mu+2\nu\right)} \|\mathcal{J}_1\psi\|_{L^q(B_a)}^j \\ &\fl\le \frac{C_1M_1}{1-\left(\mu+2\nu\right)} \left(\mu+2\nu\right)^j\|\mathcal{J}_1\psi\|_{L^q(B_a)}^j. \end{eqnarray*} The proof is complete if we set \[ C_2=\frac{C_1M_1}{1-\left(\mu+2\nu\right)}. \] \end{proof} Let us consider the convergence of the inverse Rytov series. If the inverse Rytov series converges, we write \[ \eta\approx\widetilde{\eta}, \] where \[ \widetilde{\eta}=\sum_{j=1}^{\infty}\mathcal{J}_j\psi^{\otimes j}. \] \begin{thm} Suppose that $\|\mathcal{J}_1\|<(\mu+2\nu)^{-1}$, $\|\mathcal{J}_1\psi\|_{L^q(B_a)}<(\mu+2\nu)^{-1}$. Let $M_2=\max(\|\eta\|_{L^q(B_a)},\|\mathcal{J}_1J_1\eta\|_{L^q(B_a)})$. We assume that $M_2<(\mu+2\nu)^{-1}$. Then for any $N\in\mathbb{N}$ there exists a constant $C_3=C_3(M_1,M_2,\mu,\nu)>0$ such that \[\fl \left\|\eta-\sum_{j=1}^N\mathcal{J}_j\psi^{\otimes j}\right\|_{L^q(B_a)} \le C_3\|(I-\mathcal{J}_1J_1)\eta\|_{L^q(B_a)}+C_2 \frac{\left[(\mu+2\nu)\|\mathcal{J}_1\psi\|_{L^q(B_a)}\right]^{N+1}}{1-(\mu+2\nu)\|\mathcal{J}_1\psi\|_{L^q(B_a)}}, \] where the constant $C_2>0$ is given in Theorem \ref{thm4_1}.
\end{thm} \begin{proof} Substituting the Rytov series for $\psi$ into the inverse Rytov series, we can write \[ \widetilde{\eta}=\sum_{j=1}^{\infty}\widetilde{\mathcal{J}}_j\eta\otimes\cdots\otimes\eta, \] where \[ \widetilde{\mathcal{J}}_1=\mathcal{J}_1J_1, \] and \[\fl \widetilde{\mathcal{J}}_j= \left(\sum_{m=1}^{j-1}\mathcal{J}_m\sum_{i_1+\cdots+i_m=j}J_{i_1}\otimes\cdots\otimes J_{i_m}\right)+\mathcal{J}_jJ_1\otimes\cdots\otimes J_1, \quad j\ge2. \] We have \[ \widetilde{\mathcal{J}}_j= \sum_{m=1}^{j-1}\mathcal{J}_m\sum_{i_1+\cdots+i_m=j}J_{i_1}\otimes\cdots\otimes J_{i_m} \left(I-\mathcal{J}_1J_1\otimes\cdots\otimes\mathcal{J}_1J_1\right). \] Since \[ \eta-\widetilde{\eta}= (I-\mathcal{J}_1J_1)\eta- \mathcal{J}_1J_2\left(\eta\otimes\eta-\mathcal{J}_1J_1\eta\otimes\mathcal{J}_1J_1\eta\right)+\cdots, \] we have \begin{eqnarray*} \left\|\eta-\widetilde{\eta}\right\|_{L^q(B_a)} &\le \sum_{j=1}^{\infty}\sum_{m=1}^{j-1}\sum_{i_1+\cdots+i_m=j} \|\mathcal{J}_m\|\|J_{i_1}\|\cdots\|J_{i_m}\| \\ &\times \left\|(\eta\otimes\cdots\otimes\eta)-\left(\mathcal{J}_1J_1\eta\otimes\cdots\otimes\mathcal{J}_1J_1\eta\right)\right\|_{L^q(B_a^j)}. \end{eqnarray*} We note the identity \begin{eqnarray*} & \left(\eta_1\otimes\cdots\otimes\eta_1\right)- \left(\eta_2\otimes\cdots\otimes\eta_2\right) \\ &= (\eta_1-\eta_2)\otimes\eta_2\otimes\cdots\otimes\eta_2+ \eta_1\otimes(\eta_1-\eta_2)\otimes\eta_2\otimes\cdots\otimes\eta_2 +\cdots \\ &+ \eta_1\otimes\eta_1\otimes\cdots\otimes(\eta_1-\eta_2)\otimes\eta_2+ \eta_1\otimes\eta_1\otimes\cdots\otimes\eta_1\otimes(\eta_1-\eta_2). \end{eqnarray*} Hence, \[\fl \left\|\eta\otimes\cdots\otimes\eta-\mathcal{J}_1J_1\eta\otimes\cdots\otimes\mathcal{J}_1J_1\eta\right\|_{L^q(B_a^j)} \le jM_2^{j-1}\left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)}.
\] We obtain \[\fl \left\|\eta-\widetilde{\eta}\right\|_{L^q(B_a)}\le \sum_{j=1}^{\infty}\sum_{m=1}^{j-1}\sum_{i_1+\cdots+i_m=j} \|\mathcal{J}_m\|\|J_{i_1}\|\cdots\|J_{i_m}\| jM_2^{j-1}\left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)}. \] Furthermore, \begin{eqnarray*} &\fl \left\|\eta-\widetilde{\eta}\right\|_{L^q(B_a)} \\ &\fl\le \sum_{j=1}^{\infty}\sum_{m=1}^{j-1}jM_2^{j-1}\|\mathcal{J}_m\| \left(\begin{array}{c}j-1\\m-1\end{array}\right)\nu^m \left(\mu+\nu\right)^{j-m} \left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)} \\ &\fl\le \left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)} \sum_{j=1}^{\infty}jM_2^{j-1}\left(\sum_{m=1}^{j-1}\|\mathcal{J}_m\|\right) \left(\sum_{m=1}^{j-1}\left(\begin{array}{c}j-1\\m-1\end{array}\right) \nu^m\left(\mu+\nu\right)^{j-m}\right) \\ &\fl= \nu\left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)} \sum_{j=1}^{\infty}jM_2^{j-1}\left(\sum_{m=1}^{j-1}\|\mathcal{J}_m\|\right) \left[\left(\mu+2\nu\right)^{j-1}-\nu^{j-1}\right] \\ &\fl\le \left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)} \sum_{j=1}^{\infty}jM_2^{j-1}\left(\mu+2\nu\right)^j \left(\sum_{m=1}^{j-1}\|\mathcal{J}_m\|\right). \end{eqnarray*} We obtain \begin{eqnarray*} &\fl \left\|\eta-\widetilde{\eta}\right\|_{L^q(B_a)} \le C_1\|\mathcal{J}_1\|\left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)} \sum_{j=1}^{\infty}jM_2^{j-1}\left(\mu+2\nu\right)^{j+1} \frac{1-\left(\mu+2\nu\right)^{j-1}}{1-\left(\mu+2\nu\right)} \\ &\fl\le C_1\left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)} \frac{\mu+2\nu}{1-\left(\mu+2\nu\right)} \sum_{j=1}^{\infty}j\left[M_2\left(\mu+2\nu\right)\right]^{j-1} \\ &\fl= C_3\left\|\eta-\mathcal{J}_1J_1\eta\right\|_{L^q(B_a)}, \end{eqnarray*} where \[ C_3=C_1\frac{\mu+2\nu}{1-\left(\mu+2\nu\right)} \sum_{j=1}^{\infty}j\left[M_2\left(\mu+2\nu\right)\right]^{j-1}.
\] We have \[ \left\|\widetilde{\eta}\right\|_{L^q(B_a)}\le \sum_{j=1}^{\infty}\|\mathcal{J}_j\psi^{\otimes j}\|_{L^q(B_a)} \le C_2\sum_{j=1}^{\infty}\left(\mu+2\nu\right)^j\|\mathcal{J}_1\psi\|_{L^q(B_a)}^j. \] Hence $\widetilde{\eta}$ converges. We note that \begin{eqnarray*} \left\|\widetilde{\eta}-\sum_{j=1}^N\mathcal{J}_j\psi^{\otimes j}\right\|_{L^q(B_a)} &\le \sum_{j=N+1}^{\infty}\left\|\mathcal{J}_j\psi\otimes\cdots\otimes\psi\right\|_{L^q(B_a)} \\ &\le C_2\sum_{j=N+1}^{\infty}\left(\mu+2\nu\right)^j\|\mathcal{J}_1\psi\|_{L^q(B_a)}^j \\ &= C_2\frac{\left[\left(\mu+2\nu\right)\|\mathcal{J}_1\psi\|_{L^q(B_a)}\right]^{N+1}} {1-\left(\mu+2\nu\right)\|\mathcal{J}_1\psi\|_{L^q(B_a)}}. \end{eqnarray*} The proof is complete. \end{proof} The stability of the reconstruction is studied as follows. \begin{thm} Assume $\|\mathcal{J}_1\|<(\mu+2\nu)^{-1}$. Let $\eta_1,\eta_2$ denote the limits of the inverse Rytov series corresponding to some $\psi_1,\psi_2$. We suppose that $M_3\|\mathcal{J}_1\|<(\mu+2\nu)^{-1}$, where $M_3=\max(\|\psi_1\|_{L^p(\Gamma)},\|\psi_2\|_{L^p(\Gamma)})$. Then there exists $C_4=C_4(M_1,M_3,\mu,\nu)>0$ such that \[ \|\eta_1-\eta_2\|_{L^q(B_a)}<C_4\|\psi_1-\psi_2\|_{L^p(\Gamma)}. \] \end{thm} \begin{proof} We begin with the following inequality. \[ \|\eta_1-\eta_2\|_{L^q(B_a)}\le \sum_{j=1}^{\infty}\left\|\mathcal{J}_j\psi_1\otimes\cdots\otimes\psi_1- \mathcal{J}_j\psi_2\otimes\cdots\otimes\psi_2\right\|_{L^q(B_a)}. \] We note that \begin{eqnarray*} & (\psi_1\otimes\cdots\otimes\psi_1)-(\psi_2\otimes\cdots\otimes\psi_2) \\ &= (\psi_1-\psi_2)\otimes\psi_2\otimes\cdots\otimes\psi_2+ \psi_1\otimes(\psi_1-\psi_2)\otimes\psi_2\otimes\cdots\otimes\psi_2+\cdots \\ &+ \psi_1\otimes\cdots\otimes\psi_1\otimes(\psi_1-\psi_2)\otimes\psi_2+ \psi_1\otimes\cdots\otimes\psi_1\otimes(\psi_1-\psi_2).
\end{eqnarray*} We obtain \begin{eqnarray*} &\fl \|\eta_1-\eta_2\|_{L^q(B_a)} \\ &\fl\le \sum_{j=1}^{\infty}\|\mathcal{J}_j\|\sum_{k=1}^j \|\psi_1\|_{L^p(\Gamma)}\cdots\|\psi_1\|_{L^p(\Gamma)}\|(\psi_1-\psi_2)\|_{L^p(\Gamma)}\|\psi_2\|_{L^p(\Gamma)}\cdots\|\psi_2\|_{L^p(\Gamma)}, \end{eqnarray*} where $\|(\psi_1-\psi_2)\|_{L^p(\Gamma)}$ is in the $k$th position of the product. Furthermore, \begin{eqnarray*} \|\eta_1-\eta_2\|_{L^q(B_a)} &\le \sum_{j=1}^{\infty}j\|\mathcal{J}_j\|M_3^{j-1}\|\psi_1-\psi_2\|_{L^p(\Gamma)} \\ &\le C_1\|\mathcal{J}_1\|\|\psi_1-\psi_2\|_{L^p(\Gamma)}\sum_{j=1}^{\infty}j\left(\mu+2\nu\right)^jM_3^{j-1} \\ &\le \frac{C_1}{M_3}\|\psi_1-\psi_2\|_{L^p(\Gamma)}\sum_{j=1}^{\infty}j\left(\mu+2\nu\right)^{j-1}M_3^{j-1}. \end{eqnarray*} The proof is complete if we put \[ C_4=C_1\sum_{j=1}^{\infty}j\left(\mu+2\nu\right)^{j-1}M_3^{j-2}. \] \end{proof} \section{Implementation of the inverse Rytov series} \label{implem} To increase the amount of observed data, multiple inputs are considered. That is, the medium is illuminated $M_S$ times by $f(x)=f^{(\alpha)}(x)$, $\alpha=1,\dots,M_S$. For the $\alpha$th source, light is detected at points $x_d^{(\alpha,\beta)}\in\Gamma$, $\beta=\beta^{(\alpha)}=1,\dots,M_D^{(\alpha)}$. In total, we have $M_{\rm SD}=\sum_{\alpha=1}^{M_S}M_D^{(\alpha)}$ source-detector pairs. We set $k=\beta+\sum_{i=1}^{\alpha-1}M_D^{(i)}$ ($k=1,\dots,M_{\rm SD}$). Correspondingly, we can write \[ \psi=\psi_k= \ln\frac{u_0^{(\alpha)}(x_d^{(\alpha,\beta)})}{u^{(\alpha)}(x_d^{(\alpha,\beta)})}= \sum_{j=1}^{\infty}\left(J_j^{(\alpha)}\eta^{\otimes j}\right)(x_d^{(\alpha,\beta)}), \quad k=1,\dots,M_{\rm SD}. \] Let us consider how the $j$th-order operator $\mathcal{J}_j$ in the inverse Rytov series can be numerically constructed. Here we assume that $y\in B_a\subset\mathbb{R}^n$ is discretized into $N_y$ points $y^{(l)}$ ($l=1,\dots,N_y$) with small volume $(\Delta y)^n$. Thus, $\eta$ becomes a vector $\vv{\eta}\in\mathbb{R}^{N_y}$.
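Before turning to the matrix implementation, it may help to see the recursion defining the inverse coefficients in the simplest setting. In the scalar case, where the operators $J_i$ and $\mathcal{J}_1=J_1^{-1}$ reduce to numbers, the recursion for $\mathcal{J}_j$ becomes ordinary power-series reversion. The sketch below is our own illustration of that scalar analogue (the function name `inverse_series` is ours), not part of the reconstruction code:

```python
from itertools import product

def inverse_series(J, N):
    """Scalar analogue of the inverse Rytov recursion.

    Given forward coefficients J[1..N] with psi = sum_i J[i] * eta**i,
    return Jinv[1..N] with eta = sum_j Jinv[j] * psi**j (series reversion).
    """
    Jinv = {1: 1.0 / J[1]}
    for j in range(2, N + 1):
        s = 0.0
        for m in range(1, j):
            # sum over compositions (i_1, ..., i_m) of j with every part >= 1
            for comp in product(range(1, j - m + 2), repeat=m):
                if sum(comp) == j:
                    term = Jinv[m]
                    for i in comp:
                        term *= J[i]
                    s += term
        Jinv[j] = -s * Jinv[1] ** j   # scalar form of J_j = -(...) J_1^{-j}
    return Jinv

# sanity check: substituting psi(eta) back into the truncated inverse
# series recovers eta up to O(eta^{N+1})
J = {1: 2.0, 2: 0.5, 3: -0.3}
Jinv = inverse_series(J, 3)
eta = 0.01
psi = sum(J[i] * eta**i for i in J)
eta_rec = sum(Jinv[j] * psi**j for j in Jinv)
```

For instance, the sketch reproduces the closed forms $\mathcal{J}_2=-J_2/J_1^3$ and $\mathcal{J}_3=(2J_2^2-J_1J_3)/J_1^5$ known from series reversion.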
\subsection{Linearized problem} The operator $K_1$ becomes a matrix $\underline{K}_1\in\mathbb{R}^{M_{\rm SD}\times N_y}$ when $y$ is discretized: \[ \{\underline{K}_1\}_{kl}=-gG\left(x_d^{(\alpha,\beta)},y^{(l)}\right)u_0(y^{(l)})(\Delta y)^n. \] Similarly, $J_1$ becomes a matrix $\underline{J}_1\in\mathbb{R}^{M_{\rm SD}\times N_y}$: \[ \{\underline{J}_1\}_{kl}=-\frac{g}{u_0(x_d^{(\alpha,\beta)})}G\left(x_d^{(\alpha,\beta)},y^{(l)}\right)u_0(y^{(l)})(\Delta y)^n. \] A naive way to construct $\mathcal{J}_1$ is to compute the Moore-Penrose pseudoinverse with a regularizer such as the truncated singular value decomposition: \[ \underline{\mathcal{J}}_1=\underline{J}_1^+\in\mathbb{R}^{N_y\times M_{\rm SD}}. \] The first term of the inverse Rytov series can be calculated as \[ \vv{\eta}_1=\underline{\mathcal{J}}_1\vv{\psi}, \] where $\vv{\psi}={^t}(\psi_1,\dots,\psi_{M_{\rm SD}})$. \subsection{Born series} First we make $\vv{u}_0\in\mathbb{R}^{M_{\rm SD}}$ as \[ \{\vv{u}_0\}_k=u_0^{(\alpha)}(x_d^{(\alpha,\beta)}). \] We put $\vv{v}_0=\vv{u}_0\in\mathbb{R}^{M_{\rm SD}}$. For given vectors $\vv{b}_1,\vv{b}_2,\dots\in\mathbb{R}^{N_y}$, we recursively define \[ \vv{v}_i=\vv{K}_i(\vv{b}_1,\dots,\vv{b}_i)\in\mathbb{R}^{M_{\rm SD}}, \] where \[\fl \{\vv{K}_i(\vv{b}_1,\dots,\vv{b}_i)\}_k= -g\sum_{l=1}^{N_y}G\left(x_d^{(\alpha,\beta)},y^{(l)}\right)\{\vv{b}_i\}_l\{\vv{v}_{i-1}\}_l(\Delta y)^n\quad(k=1,\dots,M_{\rm SD}) \] for $i=1,\dots,j$. Note that $\vv{K}_1(\vv{b}_1)=\underline{K}_1\vv{b}_1$. In particular, $\vv{u}_i$ ($i=1,\dots,j$) are recursively obtained as \[ \vv{u}_i=\vv{K}_i(\vv{\eta},\dots,\vv{\eta}). 
\] Using these $\vv{K}_i$, we introduce \begin{eqnarray*} &\fl \left\{\vv{J}_i(\vv{b}_1,\dots,\vv{b}_i)\right\}_k= \sum_{m=1}^i(-1)^{m+1}\frac{1}{m(\{\vv{u}_0\}_k)^m} \\ &\fl\times \sum_{i_1+\cdots+i_m=i}\left\{\vv{K}_{i_1}(\vv{b}_1,\dots,\vv{b}_{i_1})\right\}_k\cdots\left\{\vv{K}_{i_m}(\vv{b}_{i-i_m+1},\dots,\vv{b}_i)\right\}_k \in\mathbb{R}^{M_{\rm SD}} \end{eqnarray*} for $k=1,\dots,M_{\rm SD}$. Let us define $\vv{\psi}_i$ ($i=1,\dots,j$) as \[ \vv{\psi}_i=\vv{J}_i(\vv{\eta},\dots,\vv{\eta}). \] \subsection{Inverse Rytov series} We form the compositions $[i_1,\dots,i_m]$ such that $i_1+\cdots+i_m=j$. For each $m$ ($1\le m\le j-1$) and each composition $(i_1,\dots,i_m)$, we compute \[ \vv{\eta}_{\rm tmp}= \vv{\mathcal{J}}_m\left(\vv{\psi}_1,\dots,\vv{\psi}_m\right)\in\mathbb{R}^{N_y}. \] Here, \[ \vv{\mathcal{J}}_1(\vv{a}_1)=\underline{\mathcal{J}}_1\vv{a}_1, \] and for $j\ge2$, \begin{eqnarray*} &\fl \vv{\mathcal{J}}_j(\vv{a}_1,\dots,\vv{a}_j)= \\ &\fl- \sum_{m=1}^{j-1}\sum_{i_1+\cdots+i_m=j}\vv{\mathcal{J}}_m\left( \vv{J}_{i_1}(\underline{\mathcal{J}}_1\vv{a}_1,\dots,\underline{\mathcal{J}}_1\vv{a}_{i_1}),\cdots, \vv{J}_{i_m}(\underline{\mathcal{J}}_1\vv{a}_{j-i_m+1},\dots,\underline{\mathcal{J}}_1\vv{a}_j)\right), \end{eqnarray*} where $\vv{a}_1,\dots,\vv{a}_j$ are vectors of dimension $M_{\rm SD}$. Let $\vv{\Sigma}(m)$ denote the sum of $\vv{\eta}_{\rm tmp}$ for all $\left(\begin{array}{c}j-1\\ m-1\end{array}\right)$ compositions. The above step is repeated for all $m$ ($1\le m\le j-1$). We obtain \[ \vv{\eta}_j=\sum_{m=1}^{j-1}\vv{\Sigma}(m). \] In this way, we obtain $\vv{\eta}_j$ ($j=1,\dots,N$). Finally, the reconstructed $\vv{\eta}$ for the $N$th-order approximation is given by \[ \vv{\eta}\approx\vv{\eta}_1+\cdots+\vv{\eta}_N. 
\] \section{Concluding remarks} \label{concl} In this paper, multilinear forward operators $J_j:L^q(B_a\times\cdots\times B_a)\to L^p(\Gamma)$ and inverse operators $\mathcal{J}_j:L^p(\Gamma\times\cdots\times\Gamma)\to L^q(B_a)$ were considered. In the case of the Born series, linear operators can be used with tensor products instead of multilinear operators \cite{Bardsley-Vasquez14}. For the Rytov series, we can similarly define linear operators $J_j:(L^q(B_a))^{\otimes j}\to L^p(\Gamma)$. Moreover, as was done for the inverse Born series \cite{Bardsley-Vasquez14,Hoskins-Schotland22,Lakhal18,Moskow-Schotland19}, it is possible to consider the inverse Rytov series for nonlinear inverse problems in Banach spaces $X,Y$, for which the forward problem maps from $X$ to $Y$ instead of from $L^q(B_a)$ to $L^p(\Gamma)$. Although the expression of $\psi_j$ in the Rytov series is more complicated than that of $u_j$ in the Born series, the inverse Rytov series can be computed in a recursive manner. In Sec.~\ref{implem}, multiple measurements with multiple source terms were assumed. This is the usual procedure for acquiring a large number of measurements. In this case, the Rytov series is expressed as $\vv{\psi}=\sum_{j=1}^{\infty}\vv{J}_j\eta^{\otimes j}$, where $\{\vv{J}_j\eta^{\otimes j}\}_k=(J_j^{(\alpha)}\eta^{\otimes j})(x_d^{(\alpha,\beta)})$ ($k=1,\dots,M_{\rm SD}$). Although the diffusion coefficient $D_0$ was assumed to be a known constant, in practice the diffusion coefficient is, like $\mu_a$, an unknown function. Markel and Schotland have discussed the simultaneous reconstruction of the two functions with the (first) Rytov approximation \cite{Markel-Schotland04}. Extending the inverse Rytov series to such simultaneous reconstruction is an interesting direction for future work. \section*{Acknowledgments} This work was supported by JST, PRESTO Grant Number JPMJPR2027. \section*{References}
\subsubsection*{#1}} \pagestyle{headings} \markright{Reference sheet: \texttt{natbib}} \usepackage{shortvrb} \MakeShortVerb{\|} \begin{document} \thispagestyle{plain} \newcommand{\BibTeX}{\textsc{Bib}\TeX} \begin{center}{\bfseries\Large Reference sheet for \texttt{natbib}\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \texttt{natbib}\ package, \LaTeX\ the source file \texttt{natbib.dtx}. \end{quote} \head{Overview} The \texttt{natbib}\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \texttt{natbib}. \head{Loading} Load with |\usepackage[|\emph{options}|]{|\texttt{natbib}|}|. See list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \texttt{natbib}\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation.
\begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. 
\begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. 
Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. 
Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{natbib}\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{natbib}\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{natbib}\ is also loaded; instead, add the option to \texttt{natbib}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course.
However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. \head{Sorting and compressing citations} Do not use the \texttt{cite} package with \texttt{natbib}; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \texttt{natbib.cfg} which is read in after the main package file. \head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography|
to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description} \end{document} \section{Introduction} \label{sec:intro} The Earth is bombarded annually by dozens of meter-sized objects from space \citep{Brown2002Natur.420..294B}. These bodies impact the atmosphere at hypervelocity, radiating enormous amounts of energy through the ablation produced as they collide with air molecules \citep{Revelle1979JATP...41..453R}. The largest bodies can survive this process and reach the Earth's surface, generating dangerous scenarios as exemplified by Chelyabinsk \citep{Brown2013Natur.503..238B}, being also the source of meteorite falls \citep{Ceplecha1998SSRv...84..327C, Gritsevich2012CosRe..50...56G, Sansom2019ApJ...885..115S} and, more rarely, of high-strength projectiles capable of excavating impact craters \citep{Tancredi2009M&PS...44.1967T}. The associated risk depends on several factors, mainly the size and mass of the object, but also the geometry of incidence and the velocity with respect to the Earth \citep{Baldwin1971JGR....76.4653B}. Knowing the nature and origin of these impactors, and studying the hazardous size range associated with small asteroids and comets, which remain largely undiscovered to date \citep{Mainzer2011ApJ...743..156M, Granvik2018Icar..312..181G}, is of vital importance for preventing a localized catastrophe. In this sense, analyzing their dynamical origin from all the available data, including space sensors, allows us to delve into the sources and the physical mechanisms producing dangerous projectiles. Fortunately, events capable of globally devastating our planet occur on timescales of millions of years \citep{Morrison2019Icar..327....1M, Trigo2022asteroid}.
The largest well-documented impact was the Tunguska event in 1908, a projectile of around 60 m that released as much energy as hundreds of atomic bombs \citep{Morrison2019Icar..327....1M, Robertson2019Icar..327...36R}. More recently there was the well-known Chelyabinsk event, a 19-meter asteroid that went unnoticed by all observation instruments and exploded over a Russian city, injuring thousands of people \citep{Brown2013Natur.503..238B, Boro2016A&A...585A..90B, Kartashova2018P&SS..160..107K}. However, meter-sized Earth-impacting objects are more common than everyday experience suggests. On average, about 30 asteroids of a few meters in diameter are detected each year colliding with our planet; these are among the least-observed objects in the Solar System due to their small size and, for the dominant chondritic population, low albedo. Hypervelocity causes these rocky, rocky-metal, or metallic bodies to generate luminous columns of ionized gas and dust tens of kilometers long during atmospheric entry, producing the so-called superbolides when they are brighter than magnitude $-17$ \citep{Ceplecha1999md98.conf...37C, Koschny2017JIMO...45...91K}. These events release so much energy (around 1 kiloton of TNT on average) that their light curves can be detected by satellite sensors. The detections from space are useful and complementary to the data obtained from the ground by fireball networks \citep{Trigo2004EM&P...95..553T, Jenniskens2011Icar..216...40J, Howie2017ExA....43..237H, Colas2020A&A...644A..53C, Eloy2021MNRAS.504.4829P}. The origin of these projectiles is still under discussion, since multiple factors are involved in their orbital evolution, as well as a diversity of possible sources. Dynamically associating bodies can be highly challenging, even more so for rare events such as large bolides.
In this regard, the purpose of the present work is to investigate the orbits of meter-sized hazardous projectiles that impacted our atmosphere, and to explore the connection they may have with cometary and asteroidal meteoroid streams and near-Earth objects (NEOs). Taking into account relevant previous work on the dynamics of these rocks \citep{Pauls2005MandPS...40.1241P, Fu2005Icar..178..434F, Schunova2012Icar..220.1050S}, our main goal is to determine whether there is a significant population of transient meter-sized projectiles directly associated with asteroids or comets belonging to the NEO population, based on space-borne observations. The mere existence of such associations is noteworthy because, so far, we have understood most meteorite-dropping events to be sporadic in origin. CNEOS data properly analyzed, taking into account the uncertainties associated with the detections and generating a synthetic population to compare each event against the random background, will demonstrate that a substantial part of this meter-sized hazardous population could be younger and might be produced on shorter timescales, as a consequence of the physical processes that NEOs experience in near-Earth space. In addition, the data analysis has allowed us to identify a small population of likely interstellar meteoroids, at least three of which are confidently characterized as exhibiting unequivocally hyperbolic orbital parameters. \section{Database and methodology} \label{sec:datamet} The NASA-JPL Center for Near-Earth Object Studies (CNEOS) has been monitoring atmospheric flares produced by bright bolides since 1988 with space sensors. These records are collected by the Nuclear Test Ban Treaty monitoring satellites, also known as US Government (USG) sensors, a classified military system \citep{Tagliaferri1994hdtc.conf..199T}. The data provided by CNEOS are generally in good agreement with independent ground-based records in location and time.
However, for some cases, important differences in velocity have been reported \citep{Boro2017P&SS..143..147B, Granvik2018Icar..311..271G, Devillepoix2019MNRAS.483.5166D}. This may be because space sensors are most likely to capture events over the largest subtended atmospheric volume, i.e., away from the most effective detection area of the sensors. Previously, these data (59 events up to 2015) had already been partially used to analyze the size-frequency distribution of meter-scale objects \citep{Brown2002Natur.420..294B} and to examine their physical properties and orbital parameters \citep{Brown2016Icar..266...96B}. \citet{Brown2016Icar..266...96B} did not find any clear association, even though 17\% of the studied events exhibited typical Jupiter Family Comet (JFC) orbits. To test meteoroid stream linkages, they used the Drummond criterion \citep{Drummond1981Icar...45..545D} for orbit dissimilarity with a cut-off of 0.05, assuming an error in the CNEOS velocities of around 0.1--0.2 km/s. They pointed to the Taurid shower and its sub-components as the only significantly associated shower in the dataset studied. However, \citet{Devillepoix2019MNRAS.483.5166D} stated that this velocity error may not be realistic and reported variations above 20\%, with deviations in the radiant of up to $90^\circ$ when checked against ground-based measurements. These complications greatly hinder finding associations with parent bodies using the CNEOS database: first, because of the lack of uncertainty information, and second, because the observed velocity vector, a crucial factor in calculating meteoroid orbits, in some cases exhibits large errors \citep{Devillepoix2019MNRAS.483.5166D}. For this reason, we assume normally distributed errors with standard deviations of 0.1 degrees in latitude and longitude, 0.5 km in height, and 20\% in each component of the velocity vector.
From this, we apply a Monte Carlo simulation by generating normal distributions on each parameter, randomly producing 10,000 clones for each event. Using our \textit{3D-FireTOC} software \citep{Eloy2021MNRAS.504.4829P, Eloy2021Falcon9}, we compute the orbital elements of each clone, which are compared with the entire database of the International Astronomical Union (IAU) \citep{Jopek2014me13.conf..353J, Jopek2017P&SS..143....3J, Jenniskens2020P&SS..18204821J}, using the established meteor shower (EMS) list, the working meteor shower (WMS) list, the near-Earth comet (NEC) list, and the near-Earth asteroid (NEA) dataset, which comprise 110, 615, 121, and 25,798 unique entries, respectively. Since we assume uncorrelated errors following a normal distribution, some of the clones have spurious orbits, as would be expected given that the uncertainties of the velocity vector components, in particular, are most likely correlated. To identify these anomalous data and filter every event with the same criteria, we apply a statistical approach typically used to detect outliers in a univariate dataset that approximately follows a normal distribution: the Generalized Extreme Studentized Deviate (GESD) method \citep{rosner1983percentage}. We clean our clone population with an upper limit of 1,000 outliers at a 5\% significance level, re-generating new clones to always maintain the initial population size. In addition, as the computed orbital elements exhibit skewed distributions, we use the median and the median absolute deviation instead of the mean and standard deviation, as the latter are less robust for asymmetric distributions. Different criteria have been developed to measure the similarity of orbits based on semi-quantitative formulas. These functions were created to measure how an orbit can evolve into another due to perturbations. They are, therefore, approximations, and the criteria thresholds are, to some extent, arbitrary.
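The GESD outlier filter used to clean the clone population can be sketched as follows; this is an illustrative implementation of the test of \citet{rosner1983percentage}, and the exact code used in this work may differ:

```python
import numpy as np
from scipy import stats

def gesd_outliers(x, max_outliers=1000, alpha=0.05):
    """Generalized Extreme Studentized Deviate test (Rosner 1983).

    Returns a boolean mask over x flagging the detected outliers,
    assuming x approximately follows a normal distribution."""
    x = np.asarray(x, dtype=float)
    n = x.size
    work = x.copy()
    work_idx = np.arange(n)
    removed, significant = [], []
    for i in range(1, max_outliers + 1):
        if n - i + 1 < 3:
            break
        dev = np.abs(work - work.mean())
        j = int(dev.argmax())
        R = dev[j] / work.std(ddof=1)
        # critical value lambda_i from the t distribution
        p = 1.0 - alpha / (2.0 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam = (n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1))
        removed.append(work_idx[j])
        significant.append(R > lam)
        work = np.delete(work, j)
        work_idx = np.delete(work_idx, j)
    # the number of outliers is the largest i for which R_i > lambda_i
    mask = np.zeros(n, dtype=bool)
    if any(significant):
        last = max(k for k, sig in enumerate(significant) if sig)
        mask[np.array(removed[:last + 1])] = True
    return mask
```

Flagged clones are discarded and replaced by freshly drawn ones, so each event always keeps 10,000 clones.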
Depending on the inclination of the orbit or the dispersion of the orbital elements of a complex, the threshold values may vary. However, reducing the cut-off values should only be considered if the number of low-inclination associations requires it. The first orbital dissimilarity function we use is the one established by \citet{Southworth1963SCoA....7..261S}: \begin{equation} D_{S H}^{2}=\left(e_{B}-e_{A}\right)^{2}+\left(q_{B}-q_{A}\right)^{2}+\left(2 \sin \frac{I_{AB}}{2}\right)^{2} \\ +\left(\frac{e_{B}+e_{A}}{2}\right)^{2}\left(2 \sin \frac{\pi_{B A}}{2}\right)^{2}, \label{eq:D_SH} \end{equation} where $e$ is the eccentricity, $q$ is the perihelion distance, $I_{AB}$ is the angle between the orbital planes, and $\pi_{BA}$ is the difference between the longitudes of perihelia measured from the intersection of both orbits. By adding weights to the first terms of equation \ref{eq:D_SH}, \citet{Drummond1981Icar...45..545D} modified the formula, reaching the following expression: \begin{equation} D_{D}^{2}=\left(\frac{e_{B}-e_{A}}{e_{B}+e_{A}}\right)^{2}+\left(\frac{q_{B}-q_{A}}{q_{B}+q_{A}}\right)^{2}+\left(\frac{I_{AB}}{\pi}\right)^{2} \\ +\left(\frac{e_{B}+e_{A}}{2}\right)^{2}\left(\frac{\theta_{B A}}{\pi}\right)^{2}, \label{eq:D_D} \end{equation} where $\theta_{BA}$ is the angle between the lines of apsides. As a combination of equations \ref{eq:D_SH} and \ref{eq:D_D}, \citet{Jopek1993Icar..106..603J} suggested a new criterion: \begin{equation} D_{H}^{2}=\left(e_{B}-e_{A}\right)^{2}+\left(\frac{q_{B}-q_{A}}{q_{B}+q_{A}}\right)^{2}+\left(2 \sin \frac{I_{AB}}{2}\right)^{2} \\ +\left(\frac{e_{B}+e_{A}}{2}\right)^{2}\left(2 \sin \frac{\pi_{B A}}{2}\right)^{2}.
\label{eq:D_H} \end{equation} Finally, we also consider the criterion proposed by \citet{Jenniskens2008Icar..194...13J}, which employs quasi-invariants derived from the observed orbital elements of a meteoroid stream or a parent body: \begin{equation} D_{J}^{2}=\left( \frac{C 1_{1}-C 1_{2}} { 0.13}\right)^{2}+\left( \frac{C 2_{1}-C 2_{2}}{0.06}\right)^{2}+\left(\frac{C 3_{1}-C 3_{2}}{14.2^{\circ}}\right)^{2}. \label{eq:D_J} \end{equation} In equation \ref{eq:D_J}, $C1$, $C2$ and $C3$ can be computed as: \begin{eqnarray} C1 & = & (1-e^2) \cos^2(i), \nonumber\\ C2 & = & e^2(0.4-\sin^2(i)\sin^2(\omega)),\\ C3 & = & \omega + \Omega, \nonumber \end{eqnarray} where $\omega$ is the argument of perihelion and $\Omega$ is the longitude of the ascending node. Table \ref{tab:criteria} lists the four criteria we use and their corresponding thresholds (in their broadest options due to the large uncertainties of the data). \begin{table*} \centering \caption{Threshold values typically used for each orbit dissimilarity criterion and their references.} \label{tab:criteria} \begin{tabular}{lcccc} \hline Criterion & Ref. Criterion & Threshold & Ref. Threshold \\ \hline $D_{SH}$ & \citep{Southworth1963SCoA....7..261S} & 0.3 & \citep{Porub2006CoSka..36..103P} \\ $D_J$ & \citep{Jenniskens2008Icar..194...13J} & 1.5 & \citep{Jenniskens2008Icar..194...13J} \\ $D_D$ & \citep{Drummond1981Icar...45..545D} & 0.18 & \citep{Moorhead2019JIMO...47..134M} \\ $D_H$ & \citep{Jopek1993Icar..106..603J} & 0.35 & \citep{Jopek1997AandA...320..631J} \\ \hline \end{tabular} \end{table*} As a first approximation, we count as possibly associated every clone that meets at least one of these criteria. To check the robustness of the associations, considering that the most influential parameter in the orbit is the velocity modulus, we separate the clones into different ranges according to their observed velocity. In this way, we can check how strong the association is as the pre-atmospheric velocity varies.
This gives a first idea of the probability that an event may be associated with a meteor shower or a parent body even though the uncertainty is unknown. If an event is mostly associated with an object in all velocity variation ranges, the linkage likelihood would be high. However, if an event is associated in some ranges but not in others, or only appears associated when the velocity modulus varies greatly, the likelihood that it belongs to such a complex or originates from a certain object would be lower. When working with large datasets, the probability of two orbits being similar at any time by coincidence is considerably high \citep{Southworth1963SCoA....7..261S, Wiegert2004EM&P...95...19W, Porub2004EM&P...95..697P}. Therefore, we need a quantitative metric indicating the rate of false positives for every event with each of the compared datasets, so we can be convinced that the rate of association is well above that expected by chance. Following \citet{Pauls2005MandPS...40.1241P}, we generate a synthetic population of 1,000 orbits for each observed CNEOS event by building semi-major axis, eccentricity, and inclination distributions from the individual orbital elements computed, drawing from these in a Monte Carlo sense, and assigning random ascending node and argument of perihelion values with the constraint that the generated orbit intercepts the Earth. For each synthetic impactor, the same 10,000-clone process is performed, and the comparison with the reference meteoroid stream, NEA, and NEC populations is then repeated to find the false-positive rate. This provides a clear grounding as to whether or not the statistical association rates found are significant given the thresholds adopted. Associations that do not exceed the randomly expected mean are directly discarded, so as to retain only the most likely candidates. However, exceeding the random expectation is not by itself a sufficient condition.
We perform null-hypothesis significance testing and use the p-value as a measure equivalent to the false-positive rate, i.e., we calculate the probability that the associations are not random. To impose the condition that the orbit impacts the Earth, we assign a random longitude of the ascending node and an argument of perihelion that fulfills the condition: \begin{equation} \frac{1}{e} \left( \frac{a(1-e^2)}{r_{max}} - 1 \right) < \cos\omega < \frac{1}{e} \left( \frac{a(1-e^2)}{r_{min}} - 1 \right), \label{eq:imp_cond} \end{equation} where $r_{min}$ and $r_{max}$ represent the inner radius of 0.983 AU and the outer radius of 1.017 AU of the torus defined by the Earth's orbit, which must necessarily contain the ascending or descending node of any orbit that eventually intersects our planet. We then count all the clones associated with each meteoroid stream or parent body that meet at least one criterion, retain the candidates associated with more than 75\% of the clones, and sort the associations by frequency. Following \citet{Porub2004EM&P...95..697P}, to check that the orbits are not just similar at the time of impact by coincidence and to verify the consistency of these links over time, we numerically integrate every CNEOS event and parent body candidate 10,000 years backward in time, representing parent meteoroid stream candidates with 18 particles uniformly distributed along each stream orbit. Since the original parent body may have fragments scattered along the whole orbit, it is important to integrate particles for each stream spread in true anomaly, as bodies belonging to the same complex may diverge due to close encounters with planets \citep{Dmitriev2015P&SS..117..223D}. In this way, we can check how the dissimilarity criteria evolve for different positions along the meteor shower orbit. In this regard, we use the IAS15 integrator implemented in the \textit{REBOUND} package \citep{Rein2012A&A...537A.128R}, including the Sun and all the planets of the Solar System.
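The Earth-intersection constraint used when generating the synthetic population can be sketched by rejection sampling on the node distances $r = a(1-e^2)/(1 \pm e\cos\omega)$, from which equation \ref{eq:imp_cond} follows; the function names and the simple rejection strategy are illustrative assumptions:

```python
import numpy as np

R_MIN, R_MAX = 0.983, 1.017  # inner/outer radii of the Earth torus (AU)

def node_distances(a, e, omega):
    """Heliocentric distances of the ascending and descending nodes,
    r = a(1 - e^2) / (1 +/- e cos(omega)), with omega in radians."""
    p = a * (1.0 - e**2)
    return p / (1.0 + e * np.cos(omega)), p / (1.0 - e * np.cos(omega))

def random_impacting_angles(a, e, rng, max_tries=100_000):
    """Draw random (Omega, omega) such that one node of the orbit
    falls inside the Earth torus (simple rejection sampling)."""
    for _ in range(max_tries):
        node = rng.uniform(0.0, 2.0 * np.pi)   # longitude of ascending node
        omega = rng.uniform(0.0, 2.0 * np.pi)  # argument of perihelion
        r_asc, r_desc = node_distances(a, e, omega)
        if R_MIN <= r_asc <= R_MAX or R_MIN <= r_desc <= R_MAX:
            return node, omega
    raise ValueError("orbit cannot intersect the Earth torus")
```

Every accepted synthetic orbit is then cloned and compared against the reference catalogs exactly as the real events, which yields the per-event false-positive rate.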
We track the evolution of each dissimilarity criterion over time, considering a linkage as robust when it remains below the cut-off for 5,000 years. Typically, the bulk density of meteoroids entering the atmosphere is estimated by calculating the tensile strength ($s$) at the time of flare. This is usually done by following the approximation of \citet{bronshten1981physics} ($s\approx\rho_{atm}V^2 $), which is the pressure in the shock layer that caused the disruption of the object at a certain height and velocity. However, a more sophisticated fragmentation model is needed, since the analysis of superbolides and the comparison with recovered meteorites proves that large meteoroids or small asteroids undergo atmospheric disruption at dynamic pressures lower than their mechanical strength \citep{Popova2011M&PS...46.1525P}. \citet{Foschini2001A&A...365..612F} considered the shock wave turbulence interaction that locally increases the dynamic pressure exerted on the meteoroid and approximated the tensile strength as: \begin{equation} s \approx (1+\alpha)\rho_{atm}\kappa V^2, \label{eq:tensile} \end{equation} where $\alpha$ is the degree of ionization, $\kappa$ is an amplification factor of the kinetic energy ($2 \leq \kappa \leq 6$ for monoatomic gases and plasma), and $\rho_{atm}$ and $V$ are the atmospheric density and the velocity, respectively, at the time of disruption. However, we observe that this approximation yields values far in excess of those expected for the tensile strength, given the values provided by CNEOS. Alternatively, we use the following fragmentation model: \begin{equation} s \approx \frac{(\gamma-1)(1+\alpha)}{2\gamma}\rho_{atm}V^2, \label{eq:tensile2} \end{equation} where $\alpha=1$ since the fluid at the stagnation point is highly ionized, and $\gamma=1.7$ is the ratio of specific heats \citep{Kadono1996JGR...10126097K, Foschini1999A&A...342L...1F, Farinella1998Icar..132..378F}.
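A numerical sketch of equation \ref{eq:tensile2}, of the strength classification, and of the size estimate obtained by equating the recorded energy with the kinetic energy; the exponential atmosphere (scale height 7.16 km) is a stand-in assumption for the tabulated U.S. standard atmosphere used in this work, and the Chelyabinsk-like input values below are only illustrative:

```python
import numpy as np

KT_TO_J = 4.184e12  # joules per kiloton of TNT

def air_density(h_km, rho0=1.225, scale_height_km=7.16):
    """Stand-in exponential atmosphere (kg m^-3); the paper uses the
    tabulated U.S. standard atmosphere instead."""
    return rho0 * np.exp(-h_km / scale_height_km)

def tensile_strength(v_kms, h_km, alpha=1.0, gamma=1.7):
    """Tensile strength (Pa) at disruption for a fully ionized
    stagnation flow (alpha = 1) and ratio of specific heats 1.7."""
    v = v_kms * 1e3  # m/s
    return ((gamma - 1.0) * (1.0 + alpha) / (2.0 * gamma)
            * air_density(h_km) * v**2)

def classify(s_pa):
    """Bulk-type classification from the tensile strength thresholds
    of Chyba et al. (1993)."""
    if s_pa < 1e5:
        return "cometary"
    if s_pa < 1e6:
        return "carbonaceous"
    if s_pa < 1e7:
        return "rocky"
    return "rocky-iron"

def diameter(E_kt, rho, v_kms):
    """Diameter (m) equating the USG total impact energy with the
    kinetic energy (1/2) m v^2 of a sphere of bulk density rho."""
    E = E_kt * KT_TO_J
    v = v_kms * 1e3
    return 2.0 * (3.0 * E / (2.0 * np.pi * rho * v**2)) ** (1.0 / 3.0)
```

For Chelyabinsk-like inputs (about 19 km/s at 23.3 km, 440 kt, 3,300 kg m$^{-3}$) this yields a strength of roughly 7 MPa, classified as rocky, and a diameter of about 18 m.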
The value of the atmospheric density is derived from the U.S. standard atmosphere data \citep{atmosphere1976national}. From this, we classify the body as cometary if $s<10^5\,Pa$, carbonaceous if $10^5\,Pa<s<10^6\,Pa$, rocky if $10^6\,Pa<s<10^7\,Pa$, and rocky-iron if its tensile strength is greater \citep{Chyba1993Natur.361...40C}. This allows us to compare the densities with the Tisserand parameter with respect to Jupiter ($T_j$) and to compute the size of the object, assuming that the energy recorded by the USG sensors is equal to the kinetic energy, as follows: \begin{equation} D = 2 \sqrt[3]{\frac{3T_E}{2\pi\rho v^2}}. \label{eq:D} \end{equation} With this approach, we intend both to account for the unknown uncertainty of the CNEOS detections and to encompass phenomena that eventually differentiate the orbit of an object from its complex and that may have generated, precisely, an exceptional event (in terms of mass and energy) apparently unrelated to its initial swarm. By using this methodology, we will obtain the most likely candidates in case an event is truly associated with a meteoroid stream or a NEO. Calculations have been performed using the CSUC (Consorci de Serveis Universitaris de Catalunya) supercomputing infrastructure, employing 2,500 hours of computation on 100 Intel Xeon Platinum 8168 central processing units at 2.7 GHz. \section{Results} As of March 1, 2022, the CNEOS public database counted 887 events starting from 1988. However, only 255 contained sufficient data to reconstruct their orbit (i.e., date, longitude, latitude, height, and observed velocity vector). The recorded fireballs are randomly distributed around the Earth and appear to come from all parts of the sky, that is, the radiants are distributed throughout the celestial sphere, with random impact angles with respect to the local horizon and random azimuths. Figure \ref{fig:hist_orbits} shows the principal parameters and the calculated orbits of the studied events.
\begin{figure}[ht!] \plotone{CNEOS_histogram.pdf} \caption{Histogram of the principal parameters and the calculated orbits of the 255 CNEOS events studied in this work. The y-axis is arbitrarily scaled.} \label{fig:hist_orbits} \end{figure} As a first indication of the possible origin of each event, we calculate the tensile strength and classify the events according to the criteria mentioned in Section \ref{sec:datamet}. We find that 35.7\% of the dynamic strengths correspond to carbonaceous bodies and 2.4\% exhibit bulk densities typical of cometary meteoroids, while 61.9\% appear to be rocky or rocky-iron bodies. On the other hand, 9.8\% of the events belong to the JFCs, while 85.5\% of them have values of the Tisserand parameter characteristic of asteroid-like orbits ($T_j>3$). Figure \ref{fig:tensile} depicts the tensile strength as a function of the Tisserand parameter with its respective classification. The size of the circles refers to the diameter of the body which, as can be seen, does not seem to reveal any correlation. As expected, the Chelyabinsk superbolide, produced by an LL5 ordinary chondrite, falls between the rocky and rocky-iron domains \citep{Galimov2013SoSyR..47..255G,Zaytsev2021EM&P..125....2Z}. \begin{figure}[ht!] \plotone{CNEOS_tensile.pdf} \caption{Tensile strength as a function of the Tisserand parameter for the CNEOS events. The radii of the circles are arbitrarily scaled with the diameter of the object. Impacts corresponding to the asteroids 2022 EB5, 2019 MO, 2018 LA, the Chelyabinsk event and the first interstellar meteoroid are annotated.} \label{fig:tensile} \end{figure} We choose the best associations since some of the events have two or more candidates from the same dataset. We impose as a minimum criterion that the association appears in at least three-fourths of the clone population.
Table \ref{tab:asso_summary} shows the number of event associations to meteor showers or NEOs according to the number of criteria they comply with, differentiating between the established and the working lists (5 events have multiple candidates from cross-datasets). \begin{deluxetable*}{lccccc} \tablecaption{Best meteoroid stream, NEA and NEC association counts that appear in more than 75\% of the generated random clones, exceed the expected random mean and fulfill the criteria for at least 5,000 years.\label{tab:asso_summary}} \tablewidth{0pt} \tablehead{ \colhead{Assoc.} & \colhead{1 crit.} & \colhead{2 crit.} & \colhead{3 crit.} & \colhead{4 crit.} & \colhead{All crit.} } \startdata EMS & 1 & 2 & 1 & 13 & 17 \\ WMS & 2 & 3 & 4 & 15 & 24 \\ NEA & 0 & 2 & 1 & 8 & 11 \\ NEC & 5 & 1 & 2 & 3 & 11 \\ Total & 8 & 8 & 8 & 39 & 63 \\ \enddata \end{deluxetable*} The number of repetitions of an association, i.e., the number of clones associated with an object, can be understood as the probability that the event actually belongs to that specific meteoroid stream (or parent body), assuming a 20\% uncertainty on the observed velocity vector provided by the CNEOS database. Tables \ref{tab:EMS}, \ref{tab:WMS}, \ref{tab:NEA} and \ref{tab:NEC} group the events by i) established meteor shower, ii) working meteor shower, iii) NEA, and iv) NEC associations, respectively, together with the number of criteria they meet, the percentage of associations among the clones both in total and by velocity variation range, and the probability of not being random.
\begin{deluxetable*}{lccccccc} \tablecaption{Best established meteor shower associations by event.\label{tab:EMS}} \tablewidth{0pt} \tablehead{ \colhead{Established meteor shower} & \colhead{Event} & \colhead{Nº crit.} & \colhead{$R_{Total}$} & \colhead{$R_{0-5\%}$} & \colhead{$R_{5\%-10\%}$} & \colhead{$R_{10\%-20\%}$} & \colhead{p-value} } \startdata Corvids & 2005-04-19 07:37:47 & 4 & 96.56\% & 98.57\% & 98.85\% & 96.69\% & 93.81\% \\ & 2019-09-12 12:49:48 & 4 & 91.77\% & 99.88\% & 99.96\% & 95.45\% & 71.85\% \\ & 2019-06-22 21:25:48 & 4 & 93.91\% & 99.97\% & 99.79\% & 88.08\% & 65.11\% \\ & 2019-07-23 20:42:58 & 4 & 85.43\% & 99.96\% & 99.92\% & 70.75\% & 60.54\% \\ & 2013-04-21 06:23:12 & 4 & 80.09\% & 99.89\% & 93.80\% & 53.02\% & 56.02\% \\ & 2018-06-26 17:51:53 & 4 & 77.99\% & 84.45\% & 82.83\% & 65.52\% & 50.00\% \\ & 2006-09-02 04:26:15 & 3 & 87.29\% & 98.85\% & 95.72\% & 62.53\% & 67.50\% \\ & 2018-09-25 14:10:33 & 2 & 85.55\% & 99.87\% & 99.91\% & 94.68\% & 89.17\% \\ & 2019-05-19 14:47:03 & 2 & 82.40\% & 92.44\% & 87.73\% & 69.32\% & 67.42\% \\ Daytime kappa Aquariids & 2005-06-03 08:15:41 & 4 & 97.84\% & 99.50\% & 98.44\% & 98.11\% & 83.81\% \\ h Virginids & 2015-04-21 01:42:51 & 4 & 93.02\% & 94.48\% & 95.32\% & 91.87\% & 75.51\% \\ & 2006-10-14 18:10:49 & 4 & 98.77\% & 99.88\% & 99.82\% & 99.56\% & 53.24\% \\ tau Herculids & 2014-06-28 02:40:07 & 4 & 77.87\% & 99.16\% & 89.28\% & 49.44\% & 63.92\% \\ Andromedids & 2020-12-29 20:32:22 & 4 & 87.16\% & 99.57\% & 94.81\% & 62.86\% & 63.30\% \\ Phoenicids & 2006-12-09 06:31:12 & 4 & 76.51\% & 99.49\% & 98.28\% & 58.30\% & 59.71\% \\ alpha Virginids & 2017-02-18 19:48:29 & 4 & 92.32\% & 99.77\% & 99.71\% & 96.90\% & 53.65\% \\ alpha Capricornids & 2005-12-03 12:45:49 & 1 & 78.80\% & 93.52\% & 92.27\% & 53.96\% & 59.49\% \\ \enddata \end{deluxetable*} \begin{deluxetable*}{lccccccc} \tablecaption{Best working meteor shower associations by event.\label{tab:WMS}} \tablewidth{0pt} \tablehead{ \colhead{Working 
meteor shower} & \colhead{Event} & \colhead{Nº crit.} & \colhead{$R_{Total}$} & \colhead{$R_{0-5\%}$} & \colhead{$R_{5\%-10\%}$} & \colhead{$R_{10\%-20\%}$} & \colhead{p-value} } \startdata iota Cygnids & 2008-11-18 09:41:51 & 4 & 82.07\% & 92.01\% & 88.74\% & 75.46\% & 92.95\% \\ Daytime delta Scorpiids & 2014-11-26 23:16:51 & 4 & 96.97\% & 99.06\% & 98.55\% & 96.11\% & 85.78\% \\ & 2008-06-27 02:01:23 & 4 & 84.88\% & 88.49\% & 85.97\% & 84.74\% & 51.70\% \\ alpha Geminids & 2006-01-10 23:25:28 & 4 & 96.29\% & 96.14\% & 96.57\% & 96.81\% & 78.91\% \\ & 2013-12-23 08:30:57 & 4 & 98.77\% & 99.97\% & 99.06\% & 98.79\% & 70.87\% \\ & 2019-09-14 12:39:34 & 4 & 98.91\% & 99.63\% & 99.10\% & 99.41\% & 64.37\% \\ Southern omega Scorpiids & 2007-09-22 17:57:12 & 4 & 91.18\% & 98.94\% & 98.73\% & 81.14\% & 71.63\% \\ & 2004-10-07 13:14:43 & 1 & 83.55\% & 98.54\% & 95.71\% & 65.85\% & 50.00\% \\ gamma Taurids & 2017-05-24 07:03:03 & 4 & 95.21\% & 96.43\% & 95.86\% & 96.40\% & 70.49\% \\ & 2014-11-04 20:13:30 & 4 & 95.82\% & 100\% & 98.75\% & 88.92\% & 55.92\% \\ & 2021-04-02 15:52:58 & 4 & 96.79\% & 99.84\% & 98.86\% & 90.05\% & 54.51\% \\ & 2018-10-05 00:27:04 & 3 & 92.48\% & 99.14\% & 97.72\% & 94.74\% & 55.26\% \\ & 2017-07-13 09:30:36 & 2 & 99.07\% & 99.89\% & 98.05\% & 99.02\% & 68.73\% \\ pi Leonids & 2008-11-21 00:26:44 & 4 & 87.42\% & 99.54\% & 95.86\% & 83.42\% & 63.44\% \\ Southern October delta Arietids & 2017-06-23 20:21:55 & 4 & 80.40\% & 82.15\% & 79.77\% & 77.16\% & 60.97\% \\ beta Cancrids & 2014-05-16 20:06:28 & 4 & 87.32\% & 96.42\% & 92.7\% & 80.99\% & 60.36\% \\ gamma Triangulids & 2020-10-26 15:09:10 & 4 & 82.99\% & 96.46\% & 94.58\% & 70.74\% & 56.4\% \\ & 2015-04-08 04:06:31 & 3 & 92.36\% & 99.64\% & 98.93\% & 94.58\% & 50.00\% \\ October gamma Cetids & 2013-12-08 03:10:09 & 4 & 97.09\% & 100\% & 100\% & 100\% & 54.41\% \\ & 2013-04-30 08:40:38 & 2 & 84.30\% & 100\% & 100\% & 69.27\% & 54.34\% \\ May lambda Draconids & 2006-06-07 00:06:28 & 3 & 98.04\% 
& 99.64\% & 99.53\% & 99.20\% & 75.32\% \\ Daytime theta Aurigids & 2020-07-12 07:50:32 & 3 & 82.58\% & 83.80\% & 84.02\% & 81.24\% & 73.96\% \\ Northern chi Orionids & 2017-12-15 13:14:37 & 2 & 84.22\% & 98.36\% & 94.73\% & 79.13\% & 72.89\% \\ gamma Delphinids & 2022-02-17 03:53:24 & 1 & 75.77\% & 94.35\% & 92.25\% & 79.96\% & 81.37\% \\ \enddata \end{deluxetable*} \begin{deluxetable*}{lccccccc} \tablecaption{Best near-Earth asteroid associations by event. $^*$The 2022-03-11 21:22:46 event satisfies the $D_D$ criterion for 3,400 years and $D_{SH}$, $D_J$ and $D_H$ for more than 2,000 years. \label{tab:NEA}} \tablewidth{0pt} \tablehead{ \colhead{Near-Earth asteroid} & \colhead{Event} & \colhead{Nº crit.} & \colhead{$R_{Total}$} & \colhead{$R_{0-5\%}$} & \colhead{$R_{5\%-10\%}$} & \colhead{$R_{10\%-20\%}$} & \colhead{p-value} } \startdata 2021 AU4 & 2020-07-12 07:50:32 & 4 & 89.69\% & 92.97\% & 92.48\% & 89.37\% & 99.99\% \\ 2005 CV69 & 2021-03-05 13:50:01 & 4 & 85.26\% & 94.80\% & 92.57\% & 85.97\% & 99.99\% \\ 2016 WN8 & 2021-11-28 18:06:50 & 4 & 94.57\% & 97.17\% & 97.51\% & 97.66\% & 93.43\% \\ 2015 AO43 & 2019-02-01 18:17:10 & 4 & 82.50\% & 98.79\% & 95.37\% & 68.71\% & 80.16\% \\ 2020 DF4 & 2020-03-04 20:25:59 & 4 & 93.51\% & 100\% & 100\% & 99.97\% & 69.03\% \\ 1996 JG & 2014-11-26 23:16:51 & 4 & 94.31\% & 97.46\% & 97.21\% & 93.32\% & 67.25\% \\ 2014 XO7 & 2010-12-25 23:24:00 & 4 & 98.07\% & 100\% & 100\% & 99.97\% & 59.13\% \\ 2012 KN11 & 2021-11-17 15:53:21 & 4 & 83.01\% & 92.04\% & 89.77\% & 79.05\% & 50.74\% \\ 2020 KF6 & 2012-05-15 11:04:17 & 3 & 92.16\% & 100\% & 100\% & 99.92\% & 54.64\% \\ 2018 BJ5 & 2012-07-25 07:48:20 & 2 & 92.03\% & 100\% & 100\% & 99.32\% & 74.51\% \\ 2008 ON13 & 2017-12-15 13:14:37 & 2 & 91.59\% & 99.50\% & 99.06\% & 92.42\% & 50.00\% \\ 2022 EB5$^*$ & 2022-03-11 21:22:46 & 0 & 80.44\% & 98.46\% & 87.03\% & 56.80\% & 50.00\% \\ \enddata \end{deluxetable*} \begin{deluxetable*}{lccccccc} \tablecaption{Best near-Earth comet
associations by event. \label{tab:NEC}} \tablewidth{0pt} \tablehead{ \colhead{Near-Earth comet} & \colhead{Event} & \colhead{Nº crit.} & \colhead{$R_{Total}$} & \colhead{$R_{0-5\%}$} & \colhead{$R_{5\%-10\%}$} & \colhead{$R_{10\%-20\%}$} & \colhead{p-value} } \startdata 300P/Catalina & 2019-09-12 12:49:48 & 4 & 85.33\% & 99.88\% & 99.96\% & 85.55\% & 65.23\% \\ & 2015-07-19 07:06:26 & 1 & 82.50\% & 91.57\% & 88.83\% & 76.03\% & 88.90\% \\ 79P/du Toit-Hartley & 2010-02-28 22:24:50 & 4 & 94.51\% & 99.61\% & 98.49\% & 86.34\% & 59.64\% \\ C/1905 F1 & 2020-03-04 20:25:59 & 4 & 99.60\% & 100\% & 100\% & 100\% & 99.99\% \\ & 2019-03-19 02:06:39 & 3 & 98.07\% & 99.57\% & 99.20\% & 99.07\% & 99.99\% \\ & 2003-09-27 12:59:02 & 2 & 83.68\% & 97.94\% & 95.50\% & 86.25\% & 99.96\% \\ P/2003 T12 & 2018-01-15 02:18:38 & 3 & 82.26\% & 91.10\% & 90.01\% & 74.71\% & 92.69\% \\ & 2008-12-24 15:51:58 & 1 & 78.58\% & 89.32\% & 85.16\% & 71.15\% & 99.28\% \\ 182P/LONEOS & 2003-11-10 13:54:06 & 1 & 83.70\% & 92.81\% & 90.13\% & 75.90\% & 64.07\% \\ & 2015-04-08 04:06:31 & 1 & 89.96\% & 98.50\% & 97.98\% & 87.46\% & 77.53\% \\ 34D/Gale & 2007-06-11 09:47:05 & 1 & 75.40\% & 99.44\% & 99.34\% & 80.31\% & 52.09\% \\ \enddata \end{deluxetable*} As a sample, in Figure \ref{fig:propagation} we display the evolution of the dissimilarity criteria during the orbital integration for 10,000 years of four events that appear associated with a meteor shower fulfilling the established criteria. As can be seen, some orbits become more similar as time progresses backwards during the first 5,000 years. For convenience, Table \ref{tab:appendix} summarizes the time evolution of the $D_{SH}$ and $D_{D}$ criteria over 10,000 years backwards of all events with possible association. \begin{figure}[ht!] 
\plotone{prop.pdf} \caption{Dissimilarity criteria of the 2020-03-04 20:25:59, 2014-11-26 23:16:51, 2005-04-19 07:37:47 and 2017-05-24 07:03:03 events in relation to their respective associations over a backward integration of 10,000 years. A dashed horizontal line shows the cut-off for each criterion.} \label{fig:propagation} \end{figure} Out of the 255 events studied, 41 appear to be associated with a unique meteor shower, 11 have a unique NEA association and 11 may be linked to a unique NEC, while 5 events present multiple candidates. This means that almost 23\% of the large fireballs recorded by USG sensors have a likely cometary or asteroidal origin, which according to \citet{Devillepoix2019MNRAS.483.5166D} could occur in more than 60\% of the cases. We find the number of NEC associations especially relevant, since they appear in around 4\% of the events, equivalent to the NEA rate (4\%). If the associations were not distinguishable from the background, NEA associations would be expected to produce a much larger number of candidates, since the asteroid dataset used is much larger than the others. In relation to the concordance between the dissimilarity criteria, Table \ref{tab:crit_relation} shows the percentage of coincidence between the different criteria, as well as the percentage of associations for each one in relation to the total number of associations evaluated.
\begin{deluxetable*}{cccccc} \tablecaption{Coincidences of associations between the dissimilarity criteria and percentage of associations corresponding to the total of potential associations evaluated.\label{tab:crit_relation}} \tablewidth{0pt} \tablehead{ \colhead{} & \colhead{$D_{SH}$} & \colhead{$D_J$} & \colhead{$D_D$} & \colhead{$D_H$} & \colhead{Assoc.} } \startdata $D_{SH}$ & - & 69.0\% & 71.3\% & 90.2\% & 19.4\% \\ $D_J$ & 69.0\% & - & 73.6\% & 68.4\% & 20.0\% \\ $D_D$ & 71.3\% & 73.6\% & - & 77.6\% & 20.0\% \\ $D_H$ & 90.2\% & 68.4\% & 77.6\% & - & 20.4\% \\ \enddata \end{deluxetable*} \section{Discussion} We consider especially remarkable, as sources of meter-sized projectiles, the Corvid complex (linked to asteroid 2004 HW), the tau Herculids complex (linked to comet 73P/Schwassmann-Wachmann 3), and the h Virginids meteor shower. We also found an association of a bolide with the kappa Aquariids, previously identified as a source of large bolides \citep{TrigoKappa2007MNRAS.382.1933T}. Associations with the Corvids or the h Virginids represent more than half of the total. This overrepresentation of orbits associated with these meteoroid streams is similar to that found by \citet{Dumitru2017A&A...607A...5D}. This is probably due to the fact that they are swarms with very low inclination orbits, and the CNEOS events present an inclination distribution with a tendency toward coplanar orbits, as depicted in Figure \ref{fig:hist_orbits}. The dynamical association of the comets 300P/Catalina, C/1905 F1 (Giacobini), P/2003 T12 (SOHO) and 182P/LONEOS with multiple large bolides, strongly satisfying several criteria, is also remarkable. Regarding the associations under different velocity deviations, we observe that most of the obtained cases are considerably robust, except for a few cases with velocity variations larger than 10\% in modulus.
This is because the associations that only appeared in a specific range of velocities were not strong enough to be statistically significant (appearing in more than 75\% of the total clones), thus demonstrating the reliability of the proposed method. To check the robustness of the chosen cut-off values, Figure \ref{fig:crit_variation} depicts how the number of associations by criterion and the number of events with associations vary when the thresholds are modified by $\pm$25\%. As can be seen, even with the lowest cut-off for each criterion, the number of associations is still significant: around 20\% of the CNEOS fireball events would have a cometary or asteroidal origin. \begin{figure}[ht!] \plotone{th_var.pdf} \caption{Variation of associations by criterion and events with association when modifying the thresholds of the criteria by $\pm$25\%. The numbers of associations by criterion are presented stacked; note, however, that they have multiple overlaps.} \label{fig:crit_variation} \end{figure} We also note that some events have multiple potential associations, in some cases because the meteoroid streams share parent bodies (such as the Andromedids and the December Phi Cassiopeiids), while in other cases simply because of the similarity of the orbits. We note that the method applied in this work to calculate the tensile strength possibly results in excessively high values. For example, we obtain 8$\pm$3 MPa for the Chelyabinsk event, while its bulk strength is estimated to be 1 MPa \citep{Boro2013Natur.503..235B}. We can observe that the dynamic pressure computed for the first flare at 45 km height is 0.7 MPa, and that as the body penetrated the atmosphere it grew up to 18 MPa at 22 km, which is in good agreement with our results since the height of the radiated energy peak given by CNEOS is 23.3 km. Therefore, this could lead to an overestimation of the tensile strength by almost one order of magnitude.
Concerning the reliability of the dissimilarity criteria, good agreement can be observed between the $D_{J}$ and $D_D$ criteria, which have a reasonable match and a similar percentage of associations, see Table \ref{tab:crit_relation}. Both are below 25\% of associations, which is the estimated likelihood of chance association with the random background of NEAs \citep{Wiegert2004EM&P...95...19W}. This fact is especially remarkable since both criteria were defined with different theoretical approaches. For subsequent studies, it would be of interest to comparatively analyze the evolution of more dissimilarity criteria, particularly incorporating others like the one proposed by G. Valsecchi, which is partially invariant to secular perturbations \citep{Valsecchi1999esra}. For the chosen threshold values, $D_H$ appears to be the most permissive criterion. It is worth pointing out that some associations do not coincide in date with the expected activity period of the corresponding meteor shower. This is especially noticeable in meteoroid streams with inclinations close to the ecliptic, such as the gamma Taurids. Different works have analyzed the associations between NEOs and meteorite-dropping bolides \citep{Halliday1987Icar...69..550H, Trigo2007MNRAS.382.1933T, Boro2015aste.book..257B}. Recent evidence also comes from meteorites recovered with accurate orbital determinations, e.g., Annama, which could be dynamically associated with 2014 UR116 (2008 XB) \citep{Trigo2015MNRAS.449.2119T}, or Chelyabinsk with asteroid (86039) 1999 NC43 \citep{Boro2013Natur.503..235B}. In fact, fragments from these parent bodies result from impacts, but they can also be generated by tidal disruption, as some of these bodies are rubble piles and fast rotators \citep{Trigo2007MNRAS.382.1933T, Trigo2009P&SS...57..243T, Chapman2010Natur.463..305C, Trigo2014acm..conf..533T}.
Fragmentation processes can produce swarms of meter-sized rocks whose orbits can suffer large perturbations over relatively short timescales (10$^4$ - 10$^5$ years) \citep{Pauls2005MandPS...40.1241P}. Those complexes whose orbits are highly inclined may be less affected by planetary perturbations and have longer lifetimes \citep{Jones2008EM&P..102...35J, TrigoInclination10.1111}. As a result of the disruption, the different fragments can undergo significant divergences between their orbits, which are only accentuated with the passage of time \citep{Bottke2002Icar..156..399B}. In addition, other non-gravitational effects relevant for the dynamical association of objects can alter the orbits, for instance, radiation pressure, the Yarkovsky and YORP effects, the Poynting–Robertson effect or the Lorentz force; however, these are only expected to be significant for very small objects over long time scales \citep{Broz2006PhDT.......281B}. Catastrophic disruptions generally eject fragments at the escape velocity, which is much lower than the orbital velocity \citep{Jenniskens1998EP&S...50..555J, Bottke2005Icar..179...63B}. Therefore, many fragments remain in similar orbits, producing sources of potential projectiles \citep{Williams2004JIMO...32...11W, Porub2004EM&P...95..697P, Jenniskens2007pimo.conf...56J}. The decoherence of meteoroid streams can be so pronounced that their members become hardly distinguishable from a chance association \citep{Pauls2005MandPS...40.1241P}.
This can be explained by several mechanisms: (1) the streams may intercept the Earth several times, producing different activity periods due to the joint action of planetary perturbations and collisions with micrometeoroids, which can produce orbital scattering \citep{Murray1982Icar...49..125M, Babadzhanov1987PAICz..67..141B, TrigoDust2005ApJ...621.1146T}; (2) non-gravitational orbital scattering caused by the Yarkovsky orbital drift derived from solar radiation \citep{Farinella1998Icar..132..378F}; (3) weakly bound agglomerates, such as rubble-pile asteroids or fragile comets, can suffer tidal distortion and disruption if they become fast rotators \citep{Richardson1998Icar..134...47R, Schunova2014Icar..238..156S}; and (4) the effect of the sublimation pressure produced by the cometary nucleus, which may cause meteoroid stream members to undergo mass segregation just as they form. Mass segregation may also result from the spiral approaches of a comet to the Sun, producing the release of fragments at different times and velocities \citep{Hughes1981MNRAS.195..625H, Williams1991ASIB..272..225W}. Several bodies from cometary reservoirs beyond Neptune are strongly perturbed by Jupiter or Saturn as they cross their orbits, eventually decaying to Earth-like orbits \citep{Hahn1990Natur.348..132H, Asher1993MNRAS.263..179A}. JFCs are originally trans-Neptunian objects (TNOs) and Centaurs that have evolved into the inner planetary region by a progressive decay \citep{Duncan1995AJ....110.3073D, Levison1997Icar..127...13L, Emel2004MNRAS.350..161E, Jewitt2008tnoc.book....1J}. Some of these comets can eventually cross the Earth's orbit \citep{Ipatov2004NYASA1017...46I}. Although many more asteroidal objects impact the atmosphere than JFCs, encounters with the latter can produce much more energetic bolides. 
It has even been estimated that some of the superbolides produced by meter-sized meteoroids may not necessarily be fragile and could be associated with dormant comets or Damocloids \citep[see e.g.][]{TrigoDormant2017ASSP...46...11T}. Among the 255 events analyzed, we identify 5 hyperbolic orbits, 3 of which maintain this characteristic over the whole uncertainty range. Table \ref{tab:interstellar} shows these large fireballs with presumably unbound heliocentric orbits and their respective tensile strength, right ascension and declination of the geocentric radiant, heliocentric velocity, semi-major axis, eccentricity and inclination. To confirm that these bodies do indeed come from outside our Solar System, it would be necessary to perform a dynamical analysis over time with trustworthy uncertainty assumptions. \citet{Siraj2019arXiv190407224S} found that the 2014-01-08 17:05:34 event had a velocity far from the Local Standard of Rest velocity even under large uncertainty variations, announcing the first fireball detection of interstellar origin. Although our results are in good agreement with their computed orbital elements, we notice a discrepancy in the estimation of the radiant position. In addition, the 2017-03-09 04:16:37 and 2009-04-10 18:42:45 events also appear to have Solar System unbound orbits within the margins of error. Corroborating that these meter-sized objects impacting our atmosphere are indeed interstellar would indicate that about 1\% of these events may be produced by bodies from beyond our heliosphere. As suggested by the high tensile strengths obtained (in the known range of values for high-strength chondrites and iron meteorites: $10^7$ to $10^8\,Pa$), and in agreement with \citet{Siraj_2022}, it is perhaps less likely that submeter-sized meteoroids survive long stays in interstellar space, due to the effect of thermal stress and cosmic radiation over millions of years, in comparison with larger objects (see Figure \ref{fig:tensile}). 
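The tensile strength quoted for each event amounts to evaluating the aerodynamic ram pressure $s=\rho(h)\,v^{2}$ at the reported peak-brightness point. A minimal sketch, assuming a simple isothermal exponential atmosphere (the exact atmosphere model used for the published values may differ, and the constants below are textbook approximations, not taken from this work):

```python
import math

RHO0 = 1.225      # sea-level air density, kg/m^3
H_SCALE = 7160.0  # atmospheric scale height, m (isothermal approximation)

def tensile_strength(height_km, velocity_kms):
    """Ram pressure s = rho(h) * v^2 at the peak-brightness point,
    a common proxy for the bulk strength of a fragmenting meteoroid.
    Returns the pressure in Pa."""
    rho = RHO0 * math.exp(-height_km * 1000.0 / H_SCALE)
    v = velocity_kms * 1000.0
    return rho * v * v
```

For an entry fragmenting at, say, 18.7 km altitude with 44.8 km/s, this yields a strength on the order of $10^{8}$ Pa, consistent with the high-strength range discussed above; deeper penetration at a given speed implies a stronger body.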
The presence of refractory aggregates in the interstellar medium could progressively erode smaller fragile objects, biasing against cometary meteoroids, which, in turn, could be more common due to their eccentric orbits \citep{TrigoJurgen202210.1093/mnras/stab2827}. The identified CNEOS bolides were produced by roughly meter-sized rocks, but two other, larger bodies have recently been identified: 1I/'Oumuamua and 2I/Borisov \citep{Meech2017Natur.552..378M, Guzik2020NatAs...4...53G}. The great difference in the observation frequency of these interstellar fireballs ($\sim$3/255) versus interstellar objects ($\sim$2/1000000) could simply be due to the lower expected number of large bodies compared to smaller ones. It is also likely that a significant number of interstellar meteors remains hidden under the uncertainties of ground-based observation networks \citep{Hajdukova2019msme.book..235H, Hajdukova2020P&SS..19004965H}. \begin{deluxetable*}{lccccccc} \tablecaption{CNEOS events with hyperbolic orbits and their respective tensile strength, right ascension and declination of the geocentric radiant, heliocentric velocity, semi-major axis, eccentricity and inclination.\label{tab:interstellar}} \tablewidth{0pt} \tablehead{ \colhead{Event} & \colhead{$s\, (MPa)$} & \colhead{$RA_{geo}\,(^{\circ})$} & \colhead{$DEC_{geo}\,(^{\circ})$} & \colhead{$V_{h}\, (km/s)$} & \colhead{$a\, (AU)$} & \colhead{$e$} & \colhead{ $i\,(^{\circ})$} } \startdata 2021-05-06 05:54:27 & 11.8 $\pm$ 3.4 & 62.6 $\pm$ 5.8 & 12.2 $\pm$ 3.2 & 44.1 $\pm$ 2.5 & -2.64 $\pm$ 3.10 & 1.15 $\pm$ 0.18 & 6.05 $\pm$ 2.56 \\ 2017-03-09 04:16:37 & 75.5 $\pm$ 14.9 & 170.7 $\pm$ 6.3 & 34.1 $\pm$ 6.8 & 50.1 $\pm$ 3.8 & -1.22 $\pm$ 0.60 & 1.57 $\pm$ 0.30 & 24.03 $\pm$ 5.57 \\ 2015-02-17 13:19:50 & 4.0 $\pm$ 1.5 & 339.3 $\pm$ 9.1 & -9.6 $\pm$ 2.7 & 44.0 $\pm$ 5.1 & -1.45 $\pm$ 4.94 & 1.10 $\pm$ 0.32 & 1.12 $\pm$ 0.99 \\ 2014-01-08 17:05:34 & 222.8 $\pm$ 67.6 & 88.9 $\pm$ 1.5 & 13.3 $\pm$ 3.8 & 61.1 $\pm$ 5.8 & -0.46 $\pm$ 0.17 
& 2.42 $\pm$ 0.46 & 10.05 $\pm$ 3.87 \\ 2009-04-10 18:42:45 & 4.9 $\pm$ 1.4 & 107.9 $\pm$ 4.4 & 4.4 $\pm$ 2.4 & 45.5 $\pm$ 3.4 & -1.91 $\pm$ 1.62 & 1.33 $\pm$ 0.35 & 6.52 $\pm$ 1.35 \\ \enddata \end{deluxetable*} Some of the events studied in this paper with likely associations have also been recorded and analyzed from the ground. Examples are the famous Chelyabinsk fireball (2013-02-15 03:20:33) or the event that produced the Flensburg meteorite (2019-09-12 12:49:48), whose orbits were studied in detail and can be computed accurately assuming much smaller uncertainties than those in this work. Other events, such as 2020-11-28 16:34:11, were recorded and analyzed by the Japanese SonotaCo consortium system \citep{sonotaco2009meteor}, obtaining results that are compatible with our proposed association with the Andromedids meteor shower, although it does not meet the criterion of orbit dissimilarity over time \citep{sonotaco2020fireball}. The atmospheric entry of asteroid 2008 TC3 was also recorded by USG sensors (2008-10-07 02:45:40), presenting values contrary to the ones obtained using multiple ground-based observations \citep{Farnocchia2017Icar..294..218F}, as pointed out by \citet{Devillepoix2019MNRAS.483.5166D}. However, we found that this error may simply be a typo in transcribing the values to the CNEOS website. \citet{Farnocchia2017Icar..294..218F} states that the atmospheric flight of the asteroid seemed to come from the north (negative velocity in the z-direction). This does not agree with the z component of the velocity vector appearing in CNEOS, $[-9, 9, 3.8]$ km/s, which is positive. By redoing the calculations with -3.8 km/s, the results then match reasonably well with \citet{Farnocchia2017Icar..294..218F}, as seen in Table \ref{tab:miscel}. More recently, on March 11, 2022, asteroid 2022 EB5 was observed prior to its impact with the atmosphere and was also recorded by USG sensors. 
From the data published on the CNEOS web portal, we calculated the orbital elements with the same methodology applied in this work. Table \ref{tab:miscel} shows that the results are in relatively good agreement with those reported by the Minor Planet Center (MPC). The largest difference is found in the estimation of the semi-major axis, even though the value is within the uncertainty margins. Nevertheless, the association appears to be a good candidate, except that it only meets a dissimilarity criterion for 3,400 years, as shown in Table \ref{tab:NEA}. A similar situation occurs for asteroid 2018 LA; however, the dynamical relation was discarded because it did not exceed the expected random association rate. These associations could have been reliably established if a distribution of clones within the uncertainties had been propagated backward in time, instead of just the median value. We have not performed this fine calculation for all events because our intention was to calculate a reliable minimum percentage of associations with near-Earth objects. However, for the particular case of 2022 EB5, we computed the following minimum orbit dissimilarity values, held below the threshold over 10,000 years: $D_{SH}^{min}=0.005$, $D_{J}^{min}=0.033$, $D_{D}^{min}=0.002$ and $D_{H}^{min}=0.005$. For the case of the asteroid 2019 MO impact, the values provided by CNEOS do not allow us to establish a clear association, as can be seen in the difference of the $\Omega$ and $\omega$ in Table \ref{tab:miscel}. \begin{deluxetable*}{lccccccc} \tablecaption{Semi-major axis, eccentricity, inclination, argument of perihelion and ascending node for the 2022-03-11 21:22:46, 2019-06-22 21:25:48, 2018-06-02 16:44:12 and 2008-10-07 02:45:40 events, produced by the impacts of asteroids 2022 EB5, 2019 MO, 2018 LA and 2008 TC3, respectively. FAR = \citet{Farnocchia2017Icar..294..218F}. JEN = \citet{jenniskens2021impact}. COR$^*$ = corrected CNEOS $V_z$ sign. 
\label{tab:miscel}} \tablewidth{0pt} \tablehead{ \colhead{Event} & NEA & \colhead{Data} & \colhead{$a\, (AU)$} & \colhead{$e$} & \colhead{$i\,(^{\circ})$} & \colhead{$\omega\,(^{\circ})$} & \colhead{$\Omega\,(^{\circ})$} } \startdata 2022-03-11 21:22:46 & 2022 EB5 & MPC & 2.8194097 & 0.6851332 & 10.40900 & 222.40750 & 350.99733 \\ & & CNEOS & 2.20$\pm$0.82 & 0.59$\pm$0.16 & 9.26$\pm$3.34 & 222.14$\pm$6.21 & 350.976$\pm$0.001 \\ 2019-06-22 21:25:48 & 2019 MO & MPC & 2.4582908 & 0.6181381 & 1.54135 & 216.73545 & 91.04007 \\ & & CNEOS & 2.45$\pm$1.22 & 0.60$\pm$0.22 & 0.70$\pm$0.61 & 29.57$\pm$16.71 & 270.913$\pm$0.001 \\ 2018-06-02 16:44:12 & 2018 LA & JEN & 1.37640 & 0.431861 & 4.29741 & 256.04869 & 71.869605 \\ & & CNEOS & 1.28$\pm$0.30 & 0.40$\pm$0.15 & 4.44$\pm$1.53 & 261.21$\pm$11.36 & 71.850$\pm$0.001 \\ 2008-10-07 02:45:40 & 2008 TC3 & FAR & 1.3082050 & 0.3120674 & 2.542215 & 234.448925 & 194.1011436 \\ & & CNEOS & 1.59$\pm$0.23 & 0.42$\pm$0.09 & 4.09$\pm$1.83 & 39.61$\pm$13.58 & 14.089$\pm$0.001 \\ & & COR$^*$ & 1.30$\pm$0.14 & 0.35$\pm$0.08 & 2.64$\pm$1.04 & 241.84$\pm$17.03 & 194.089 $\pm$ 0.001 \\ \enddata \end{deluxetable*} This kind of mismatch and the lack of uncertainties make the USG sensor database appear unreliable at first glance. However, with the approach we propose in this paper, we can at least narrow down the probability of a large fireball being associated with a meteoroid stream or a NEO. \section{Conclusions} We examined the CNEOS database superbolide events recorded by US Government sensors with sufficient information to calculate their orbits. However, since uncertainty values are lacking in the reported database, we assumed a deviation in the provided parameters based on previous studies that have compared some of these detections with ground-based observations. We developed a new statistical procedure to overcome this challenge. 
We generated thousands of clones within these error margins and analyzed the associations of each one with meteoroid streams and NEOs, thus obtaining a probabilistic estimate of their origins. Following that approach, we have reached the following conclusions: \begin{itemize} \item Among the 887 events in the CNEOS fireball database, only 255 contain enough information to allow the calculation of the heliocentric orbit, so we concentrated our efforts on those specific events. \item Using a common methodology, performing backward integration of the orbits over 10,000 years, applying four dissimilarity criteria and tracking their evolution over time, and estimating a false positive rate, we find that 58 of these events probably have an asteroidal or cometary origin. This number corresponds to about 23\% of the total number of meter-sized projectiles producing large bolides, suggesting that roughly one in four superbolides originates from a near-Earth object. \item Based on the height and velocity of each event provided by CNEOS, we estimated the tensile strength to classify the projectiles according to their bulk density. We found that 61.9\% are rocky or rocky-iron, 35.7\% are carbonaceous and 2.4\% are cometary. Regarding their heliocentric orbits, 85.5\% present orbital elements typical of asteroids while 9.8\% belong to the JFCs. \item We identified 5 events with hyperbolic orbits, 3 of them keeping this condition over the whole uncertainty range and showing high tensile strength values. We confirm the orbital elements of the event announced as the first interstellar superbolide (2014-01-08 17:05:34). Other possible events with unbound heliocentric orbits are 2017-03-09 04:16:37 and 2009-04-10 18:42:45. Corroborating these results would imply that at least about 1\% of the large meteoroids impacting our atmosphere could be interstellar in origin, a population that may be biased in size by the harsh interstellar medium. 
\item We find the $D_{J}$ and $D_D$ criteria especially reliable since, despite having different theoretical approaches, they yield similar performances. We also checked the robustness of the threshold values for each criterion with respect to the variation of the number of associations. \item Given the typical decoherence time scales for meteoroid streams and how fast large meteoroids can segregate from their respective streams or parent bodies, the mere existence of meter-sized projectiles from NEOs points to physical processes capable of producing a transient population of large meteoroids on timescales short enough to avoid orbital decoherence. We envision that the disruption of fragile asteroids and comets, often due to impacts or tidal forces during close approaches, could be the underlying reason. \item The scientific interest in identifying such sources of large bolides is enormous, given that most of these events potentially deliver meteorites to the Earth and the Moon, so they give the chance to establish direct links between asteroids, meteoroid streams and meteorites. We expect that the new capabilities to identify the sources of meteorite-dropping bolides will serve to obtain new freely delivered samples from hazardous asteroids and comets crossing the near-Earth region. The scientific opportunity is enormous, as the bulk elemental composition, mineralogy and physical properties of these projectiles can be inferred from these new meteorites. \item In addition, our capacity to identify dangerous meteoroid streams capable of generating meter-sized projectiles can be useful to evaluate the specific risk for extravehicular operations in future Lunar exploration, like the Artemis missions \citep{Artemieva2008SoSyR..42..329A, Moorhead2020JSpRo..57..160M}. A similar goal can be achieved from the study of large bolides on Mars for the future Mars Sample Return initiative, or other manned missions. 
\item Our findings imply that both NEOs and certain meteoroid streams are potential producers of meter-sized hazardous projectiles. Identifying the main NEOs producing large bolides might allow us to study possible encounter scenarios and develop mitigation strategies and ways to sample these materials for further study \citep{Abell2021BAAS...53d.270A, Barbee2021BAAS...53d.117B, ElisaEloy2022AcAau.192..402S}. \end{itemize} Finally, we also emphasize the great contribution it would represent for the scientific planetary protection community if the events published by CNEOS were accompanied by uncertainties or some measure of the quality of the detection. Satellite records may play a crucial role in the study of Earth-colliding meter-sized rocks, a consequence of the continuous decay of asteroids and comets, both to prepare campaigns to search for possible dropped meteorites and to better understand the physical processes delivering space objects to our planet. \section*{Acknowledgements} This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 865657) for the project “Quantum Chemistry on Interstellar Grains” (QUANTUMGRAIN). A total of 2,500 hours of supercomputing time has been used to perform the orbital studies and backwards integrations using CSUC facilities. JMT-R, EPA and AR acknowledge financial support from the FEDER/Ministerio de Ciencia e Innovación – Agencia Estatal de Investigación (PGC2018-097374-B-I00, PI: JMT-R; CTQ2017-89132-P, PI: AR). AR is indebted to the “Ramón y Cajal” program and DIUE (project 2017SGR1323). EPA thanks Carlos Gascón for helpful comments. \section*{Availability of data} The data underlying this article will be shared on reasonable request to the corresponding author. 
\section{Appendix information} \startlongtable \begin{deluxetable*}{llccccccccccccc} \tabletypesize{\tiny} \tablecaption{All events with association and the time evolution of the $D_{SH}$ and $D_{D}$ criterion over 10,000 years backwards. The value shown is the mean of each 1,000-year step. \label{tab:appendix}} \tablewidth{0pt} \tablehead{ \colhead{Event} & \colhead{Association} & \colhead{Crit.} & \colhead{0 yr} & \colhead{-1 kyr} & \colhead{-2 kyr} & \colhead{-3 kyr} & \colhead{-4 kyr} & \colhead{-5 kyr} & \colhead{-6 kyr} & \colhead{-7 kyr} & \colhead{-8 kyr} &\colhead{ -9 kyr} & \colhead{-10 kyr} } \startdata 2022-02-17 03:53:24 & gamma Delphinids & $D_{SH}$ & 0.25 & 0.21 & 0.29 & 0.29 & 0.28 & 0.36 & 0.46 & 0.55 & 0.61 & 0.66 & 0.86 & \\ & & $D_{D}$ & 0.13 & 0.09 & 0.11 & 0.12 & 0.11 & 0.13 & 0.15 & 0.18 & 0.2 & 0.22 & 0.31 & \\ 2021-11-28 18:06:50 & 2016 WN8 & $D_{SH}$ & 0.24 & 0.24 & 0.23 & 0.23 & 0.22 & 0.22 & 0.23 & 0.23 & 0.24 & 0.25 & 0.26 & \\ & & $D_{D}$ & 0.14 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.14 & 0.14 & 0.14 & 0.15 & \\ 2021-11-17 15:53:21 & 2012 KN11 & $D_{SH}$ & 0.11 & 0.12 & 0.15 & 0.19 & 0.21 & 0.21 & 0.19 & 0.19 & 0.21 & 0.27 & 0.33 & \\ & & $D_{D}$ & 0.07 & 0.07 & 0.07 & 0.08 & 0.08 & 0.08 & 0.08 & 0.08 & 0.09 & 0.1 & 0.12 & \\ 2021-04-02 15:52:58 & gamma Taurids & $D_{SH}$ & 0.28 & 0.29 & 0.34 & 0.4 & 0.43 & 0.43 & 0.39 & 0.32 & 0.26 & 0.23 & 0.26 & \\ & & $D_{D}$ & 0.12 & 0.11 & 0.13 & 0.14 & 0.14 & 0.14 & 0.13 & 0.11 & 0.1 & 0.1 & 0.12 & \\ 2021-03-05 13:50:01 & 2005 CV69 & $D_{SH}$ & 0.14 & 0.14 & 0.14 & 0.15 & 0.18 & 0.22 & 0.27 & 0.3 & 0.31 & 0.31 & 0.29 & \\ & & $D_{D}$ & 0.07 & 0.07 & 0.06 & 0.06 & 0.07 & 0.09 & 0.11 & 0.13 & 0.13 & 0.11 & 0.09 & \\ 2020-12-29 20:32:22 & Andromedids & $D_{SH}$ & 0.21 & 0.2 & 0.2 & 0.19 & 0.2 & 0.21 & 0.23 & 0.25 & 0.27 & 0.29 & 0.31 & \\ & & $D_{D}$ & 0.18 & 0.18 & 0.17 & 0.17 & 0.17 & 0.17 & 0.17 & 0.17 & 0.18 & 0.18 & 0.18 & \\ 2020-10-26 15:09:10 & gamma Triangulids & $D_{SH}$ & 0.29 & 
0.29 & 0.27 & 0.25 & 0.23 & 0.21 & 0.18 & 0.15 & 0.13 & 0.12 & 0.14 & \\ & & $D_{D}$ & 0.14 & 0.14 & 0.14 & 0.16 & 0.18 & 0.21 & 0.23 & 0.24 & 0.24 & 0.23 & 0.21 & \\ 2020-07-12 07:50:32 & Daytime theta Aurigids & $D_{SH}$ & 0.16 & 0.24 & 0.31 & 0.54 & 0.58 & 0.61 & 0.46 & 0.27 & 0.26 & 0.34 & 0.46 & \\ & & $D_{D}$ & 0.07 & 0.09 & 0.13 & 0.21 & 0.22 & 0.24 & 0.23 & 0.12 & 0.1 & 0.12 & 0.17 & \\ & 2021 AU4 & $D_{SH}$ & 0.15 & 0.15 & 0.16 & 0.16 & 0.17 & 0.17 & 0.18 & 0.18 & 0.19 & 0.19 & 0.2 & \\ & & $D_{D}$ & 0.08 & 0.08 & 0.09 & 0.1 & 0.1 & 0.11 & 0.11 & 0.11 & 0.12 & 0.12 & 0.12 & \\ 2020-03-04 20:25:59 & C/1905 F1 & $D_{SH}$ & 0.24 & 0.31 & 0.41 & 0.4 & 0.29 & 0.21 & 0.25 & 0.3 & 0.29 & 0.32 & 0.42 & \\ & & $D_{D}$ & 0.1 & 0.11 & 0.14 & 0.13 & 0.1 & 0.08 & 0.09 & 0.1 & 0.1 & 0.11 & 0.14 & \\ & 2020 DF4 & $D_{SH}$ & 0.11 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.09 & 0.09 & 0.09 & 0.09 & \\ & & $D_{D}$ & 0.04 & 0.05 & 0.05 & 0.05 & 0.05 & 0.05 & 0.05 & 0.05 & 0.05 & 0.04 & 0.04 & \\ 2019-09-14 12:39:34 & alpha Geminids & $D_{SH}$ & 0.25 & 0.23 & 0.23 & 0.23 & 0.23 & 0.24 & 0.24 & 0.25 & 0.25 & 0.24 & 0.24 & \\ & & $D_{D}$ & 0.22 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.2 & 0.2 & \\ 2019-09-12 12:49:48 & Corvids & $D_{SH}$ & 0.26 & 0.24 & 0.24 & 0.25 & 0.26 & 0.27 & 0.27 & 0.25 & 0.23 & 0.22 & 0.22 & \\ & & $D_{D}$ & 0.12 & 0.11 & 0.11 & 0.11 & 0.11 & 0.11 & 0.11 & 0.11 & 0.1 & 0.1 & 0.1 & \\ & 300P/Catalina & $D_{SH}$ & 0.13 & 0.12 & 0.12 & 0.12 & 0.12 & 0.13 & 0.12 & 0.12 & 0.12 & 0.12 & 0.12 & \\ & & $D_{D}$ & 0.16 & 0.15 & 0.16 & 0.16 & 0.16 & 0.17 & 0.17 & 0.16 & 0.16 & 0.16 & 0.16 & \\ 2019-07-23 20:42:58 & Corvids & $D_{SH}$ & 0.17 & 0.18 & 0.19 & 0.11 & 0.07 & 0.09 & 0.15 & 0.2 & 0.23 & 0.25 & 0.28 & \\ & & $D_{D}$ & 0.06 & 0.07 & 0.08 & 0.04 & 0.03 & 0.03 & 0.05 & 0.07 & 0.08 & 0.08 & 0.09 & \\ 2019-06-22 21:25:48 & Corvids & $D_{SH}$ & 0.22 & 0.22 & 0.22 & 0.21 & 0.19 & 0.18 & 0.17 & 0.17 & 0.17 & 0.18 & 0.21 & \\ & & $D_{D}$ & 0.08 & 
0.08 & 0.08 & 0.08 & 0.08 & 0.07 & 0.07 & 0.07 & 0.07 & 0.08 & 0.09 & \\ 2019-05-19 14:47:03 & Corvids & $D_{SH}$ & 0.19 & 0.19 & 0.21 & 0.22 & 0.24 & 0.26 & 0.28 & 0.3 & 0.3 & 0.3 & 0.3 & \\ & & $D_{D}$ & 0.08 & 0.08 & 0.08 & 0.08 & 0.09 & 0.09 & 0.1 & 0.11 & 0.1 & 0.1 & 0.1 & \\ 2019-03-19 02:06:39 & C/1905 F1 & $D_{SH}$ & 0.38 & 0.42 & 0.47 & 0.42 & 0.4 & 0.46 & 0.49 & 0.48 & 0.53 & 0.54 & 0.55 & \\ & & $D_{D}$ & 0.13 & 0.14 & 0.15 & 0.14 & 0.13 & 0.15 & 0.16 & 0.17 & 0.18 & 0.18 & 0.19 & \\ 2019-02-01 18:17:10 & 2015 AO43 & $D_{SH}$ & 0.13 & 0.13 & 0.13 & 0.14 & 0.14 & 0.15 & 0.15 & 0.15 & 0.16 & 0.16 & 0.17 & \\ & & $D_{D}$ & 0.08 & 0.07 & 0.08 & 0.08 & 0.08 & 0.08 & 0.09 & 0.09 & 0.09 & 0.09 & 0.09 & \\ 2018-10-05 00:27:04 & gamma Taurids & $D_{SH}$ & 0.1 & 0.11 & 0.12 & 0.14 & 0.16 & 0.17 & 0.16 & 0.14 & 0.11 & 0.11 & 0.13 & \\ & & $D_{D}$ & 0.08 & 0.09 & 0.09 & 0.1 & 0.1 & 0.11 & 0.1 & 0.09 & 0.08 & 0.07 & 0.09 & \\ 2018-09-25 14:10:33 & Corvids & $D_{SH}$ & 0.24 & 0.22 & 0.25 & 0.26 & 0.25 & 0.24 & 0.21 & 0.21 & 0.27 & 0.36 & 0.42 & \\ & & $D_{D}$ & 0.16 & 0.13 & 0.14 & 0.15 & 0.15 & 0.15 & 0.14 & 0.13 & 0.14 & 0.16 & 0.18 & \\ 2018-06-26 17:51:53 & Corvids & $D_{SH}$ & 0.25 & 0.26 & 0.28 & 0.3 & 0.33 & 0.36 & 0.38 & 0.42 & 0.45 & 0.48 & 0.52 & \\ & & $D_{D}$ & 0.16 & 0.16 & 0.16 & 0.16 & 0.17 & 0.17 & 0.18 & 0.19 & 0.2 & 0.21 & 0.21 & \\ 2018-01-15 02:18:38 & P/2003 T12 & $D_{SH}$ & 0.33 & 0.33 & 0.24 & 0.21 & 0.33 & 0.27 & 0.34 & 0.21 & 0.26 & 0.19 & 0.15 & \\ & & $D_{D}$ & 0.14 & 0.14 & 0.1 & 0.08 & 0.11 & 0.13 & 0.14 & 0.1 & 0.12 & 0.12 & 0.1 & \\ 2017-12-15 13:14:37 & 2008 ON13 & $D_{SH}$ & 0.2 & 0.2 & 0.2 & 0.21 & 0.21 & 0.21 & 0.21 & 0.21 & 0.2 & 0.2 & 0.2 & \\ & & $D_{D}$ & 0.09 & 0.08 & 0.08 & 0.08 & 0.08 & 0.08 & 0.08 & 0.08 & 0.09 & 0.09 & 0.09 & \\ & Northern chi Orionids & $D_{SH}$ & 0.2 & 0.21 & 0.2 & 0.2 & 0.2 & 0.19 & 0.19 & 0.19 & 0.19 & 0.2 & 0.2 & \\ & & $D_{D}$ & 0.13 & 0.15 & 0.14 & 0.14 & 0.14 & 0.14 & 0.14 & 0.14 & 0.14 & 0.14 & 0.14 
& \\ 2017-07-13 09:30:36 & gamma Taurids & $D_{SH}$ & 0.18 & 0.15 & 0.13 & 0.13 & 0.14 & 0.15 & 0.15 & 0.15 & 0.16 & 0.15 & 0.15 & \\ & & $D_{D}$ & 0.07 & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 & 0.06 & \\ 2017-06-23 20:21:55 & Southern October delta Arietids & $D_{SH}$ & 0.18 & 0.17 & 0.18 & 0.18 & 0.18 & 0.18 & 0.18 & 0.18 & 0.19 & 0.19 & 0.19 & \\ & & $D_{D}$ & 0.15 & 0.14 & 0.15 & 0.14 & 0.14 & 0.14 & 0.14 & 0.15 & 0.15 & 0.15 & 0.14 & \\ 2017-05-24 07:03:03 & gamma Taurids & $D_{SH}$ & 0.15 & 0.14 & 0.14 & 0.14 & 0.14 & 0.14 & 0.15 & 0.15 & 0.15 & 0.16 & 0.16 & \\ & & $D_{D}$ & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & \\ 2017-02-18 19:48:29 & alpha Virginids & $D_{SH}$ & 0.24 & 0.19 & 0.17 & 0.16 & 0.15 & 0.15 & 0.15 & 0.16 & 0.17 & 0.18 & 0.19 & \\ & & $D_{D}$ & 0.1 & 0.08 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & \\ 2015-07-19 07:06:26 & 300P/Catalina & $D_{SH}$ & 0.3 & 0.29 & 0.24 & 0.21 & 0.29 & 0.34 & 0.31 & 0.21 & 0.17 & 0.27 & 0.33 & \\ & & $D_{D}$ & 0.1 & 0.1 & 0.09 & 0.08 & 0.1 & 0.11 & 0.1 & 0.08 & 0.07 & 0.11 & 0.12 & \\ 2015-04-21 01:42:51 & h Virginids & $D_{SH}$ & 0.29 & 0.27 & 0.24 & 0.21 & 0.19 & 0.17 & 0.17 & 0.17 & 0.18 & 0.19 & 0.21 & \\ & & $D_{D}$ & 0.14 & 0.14 & 0.13 & 0.12 & 0.12 & 0.12 & 0.11 & 0.11 & 0.12 & 0.12 & 0.13 & \\ 2015-04-08 04:06:31 & gamma Triangulids & $D_{SH}$ & 0.33 & 0.32 & 0.32 & 0.29 & 0.24 & 0.19 & 0.16 & 0.17 & 0.18 & 0.19 & 0.2 & \\ & & $D_{D}$ & 0.18 & 0.16 & 0.16 & 0.15 & 0.14 & 0.13 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & \\ & 182P/LONEOS & $D_{SH}$ & 0.34 & 0.41 & 0.52 & 0.57 & 0.53 & 0.46 & 0.51 & 0.68 & 0.85 & 0.97 & 1.0 & \\ & & $D_{D}$ & 0.14 & 0.17 & 0.22 & 0.24 & 0.22 & 0.19 & 0.2 & 0.25 & 0.3 & 0.34 & 0.35 & \\ 2014-11-26 23:16:51 & 1996 JG & $D_{SH}$ & 0.1 & 0.12 & 0.12 & 0.12 & 0.12 & 0.12 & 0.12 & 0.12 & 0.13 & 0.13 & 0.14 & \\ & & $D_{D}$ & 0.06 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & 0.07 & 0.08 & 0.08 & 0.08 & 0.08 & \\ & Daytime delta 
Scorpiids & $D_{SH}$ & 0.26 & 0.25 & 0.24 & 0.24 & 0.24 & 0.24 & 0.23 & 0.22 & 0.22 & 0.22 & 0.23 & \\ & & $D_{D}$ & 0.17 & 0.16 & 0.16 & 0.15 & 0.15 & 0.15 & 0.15 & 0.14 & 0.14 & 0.15 & 0.15 & \\ 2014-11-04 20:13:30 & gamma Taurids & $D_{SH}$ & 0.1 & 0.1 & 0.09 & 0.09 & 0.1 & 0.09 & 0.1 & 0.1 & 0.11 & 0.11 & 0.11 & \\ & & $D_{D}$ & 0.09 & 0.08 & 0.08 & 0.09 & 0.09 & 0.09 & 0.09 & 0.09 & 0.09 & 0.09 & 0.09 & \\ 2014-06-28 02:40:07 & tau Herculids & $D_{SH}$ & 0.23 & 0.2 & 0.19 & 0.2 & 0.19 & 0.17 & 0.17 & 0.17 & 0.19 & 0.21 & 0.24 & \\ & & $D_{D}$ & 0.16 & 0.15 & 0.14 & 0.15 & 0.14 & 0.13 & 0.12 & 0.12 & 0.12 & 0.12 & 0.13 & \\ 2014-05-16 20:06:28 & beta Cancrids & $D_{SH}$ & 0.22 & 0.22 & 0.21 & 0.21 & 0.21 & 0.19 & 0.16 & 0.14 & 0.12 & 0.11 & 0.12 & \\ & & $D_{D}$ & 0.08 & 0.09 & 0.08 & 0.08 & 0.09 & 0.1 & 0.11 & 0.13 & 0.17 & 0.23 & 0.3 & \\ 2013-12-23 08:30:57 & alpha Geminids & $D_{SH}$ & 0.23 & 0.23 & 0.23 & 0.23 & 0.22 & 0.22 & 0.22 & 0.22 & 0.22 & 0.22 & 0.22 & \\ & & $D_{D}$ & 0.14 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.14 & \\ 2013-12-08 03:10:09 & October gamma Cetids & $D_{SH}$ & 0.18 & 0.19 & 0.21 & 0.24 & 0.23 & 0.22 & 0.2 & 0.19 & 0.17 & 0.16 & 0.15 & \\ & & $D_{D}$ & 0.07 & 0.09 & 0.11 & 0.13 & 0.12 & 0.11 & 0.09 & 0.08 & 0.07 & 0.06 & 0.06 & \\ 2013-04-30 08:40:38 & October gamma Cetids & $D_{SH}$ & 0.26 & 0.26 & 0.24 & 0.23 & 0.22 & 0.2 & 0.2 & 0.19 & 0.18 & 0.18 & 0.19 & \\ & & $D_{D}$ & 0.15 & 0.14 & 0.13 & 0.12 & 0.12 & 0.1 & 0.09 & 0.07 & 0.06 & 0.06 & 0.07 & \\ 2013-04-21 06:23:12 & Corvids & $D_{SH}$ & 0.25 & 0.28 & 0.33 & 0.33 & 0.3 & 0.28 & 0.27 & 0.25 & 0.24 & 0.24 & 0.25 & \\ & & $D_{D}$ & 0.19 & 0.19 & 0.2 & 0.2 & 0.21 & 0.21 & 0.21 & 0.21 & 0.22 & 0.22 & 0.23 & \\ 2012-07-25 07:48:20 & 2018 BJ5 & $D_{SH}$ & 0.15 & 0.17 & 0.19 & 0.19 & 0.19 & 0.19 & 0.18 & 0.19 & 0.2 & 0.2 & 0.2 & \\ & & $D_{D}$ & 0.22 & 0.3 & 0.35 & 0.33 & 0.33 & 0.31 & 0.29 & 0.28 & 0.28 & 0.29 & 0.28 & \\ 2012-05-15 11:04:17 & 2020 KF6 & 
$D_{SH}$ & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 & 0.13 & 0.12 & 0.12 & 0.12 & 0.13 & \\ & & $D_{D}$ & 0.34 & 0.37 & 0.35 & 0.33 & 0.31 & 0.29 & 0.26 & 0.17 & 0.12 & 0.2 & 0.39 & \\ 2010-12-25 23:24:00 & 2014 XO7 & $D_{SH}$ & 0.21 & 0.52 & 0.69 & 0.29 & 0.48 & 0.27 & 0.47 & 0.38 & 0.52 & 0.77 & 0.38 & \\ & & $D_{D}$ & 0.12 & 0.19 & 0.24 & 0.13 & 0.17 & 0.13 & 0.16 & 0.15 & 0.21 & 0.26 & 0.19 & \\ 2010-02-28 22:24:50 & 79P/du Toit-Hartley & $D_{SH}$ & 0.25 & 0.25 & 0.25 & 0.25 & 0.25 & 0.26 & 0.26 & 0.26 & 0.26 & 0.26 & 0.26 & \\ & & $D_{D}$ & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.14 & 0.14 & 0.13 & \\ 2008-12-24 15:51:58 & P/2003 T12 & $D_{SH}$ & 0.32 & 0.43 & 0.61 & 0.58 & 0.39 & 0.41 & 0.6 & 0.64 & 0.5 & 0.4 & 0.56 & \\ & & $D_{D}$ & 0.12 & 0.14 & 0.2 & 0.19 & 0.14 & 0.14 & 0.19 & 0.2 & 0.17 & 0.14 & 0.19 & \\ 2008-11-21 00:26:44 & pi Leonids & $D_{SH}$ & 0.25 & 0.23 & 0.2 & 0.19 & 0.19 & 0.23 & 0.26 & 0.3 & 0.33 & 0.34 & 0.34 & \\ & & $D_{D}$ & 0.09 & 0.08 & 0.07 & 0.07 & 0.08 & 0.09 & 0.1 & 0.12 & 0.12 & 0.12 & 0.11 & \\ 2008-11-18 09:41:51 & iota Cygnids & $D_{SH}$ & 2.01 & 2.01 & 2.0 & 1.99 & 1.98 & 1.97 & 1.97 & 1.96 & 1.95 & 1.94 & 1.93 & \\ & & $D_{D}$ & 0.92 & 0.92 & 0.91 & 0.9 & 0.89 & 0.88 & 0.87 & 0.86 & 0.85 & 0.85 & 0.84 & \\ 2008-06-27 02:01:23 & Daytime delta Scorpiids & $D_{SH}$ & 0.33 & 0.33 & 0.33 & 0.31 & 0.28 & 0.24 & 0.2 & 0.2 & 0.22 & 0.26 & 0.29 & \\ & & $D_{D}$ & 0.17 & 0.16 & 0.16 & 0.15 & 0.15 & 0.14 & 0.13 & 0.13 & 0.13 & 0.14 & 0.14 & \\ 2007-09-22 17:57:12 & Southern omega Scorpiids & $D_{SH}$ & 0.13 & 0.35 & 0.4 & 0.4 & 0.77 & 0.41 & 0.44 & 0.45 & 0.27 & 0.42 & 0.33 & \\ & & $D_{D}$ & 0.13 & 0.15 & 0.15 & 0.19 & 0.25 & 0.2 & 0.16 & 0.18 & 0.14 & 0.15 & 0.2 & \\ 2007-06-11 09:47:05 & 34D/Gale & $D_{SH}$ & 0.3 & 0.29 & 0.29 & 0.29 & 0.29 & 0.28 & 0.28 & 0.28 & 0.27 & 0.27 & 0.27 & \\ & & $D_{D}$ & 0.55 & 0.56 & 0.54 & 0.52 & 0.5 & 0.49 & 0.47 & 0.46 & 0.46 & 0.45 & 0.44 & \\ 2006-12-09 06:31:12 & Phoenicids & 
$D_{SH}$ & 0.22 & 0.25 & 0.4 & 0.42 & 0.31 & 0.31 & 0.45 & 0.52 & 0.48 & 0.46 & 0.55 & \\ & & $D_{D}$ & 0.12 & 0.11 & 0.14 & 0.15 & 0.13 & 0.12 & 0.15 & 0.17 & 0.17 & 0.16 & 0.18 & \\ 2006-10-14 18:10:49 & h Virginids & $D_{SH}$ & 0.39 & 0.34 & 0.29 & 0.34 & 0.3 & 0.21 & 0.25 & 0.29 & 0.19 & 0.15 & 0.18 & \\ & & $D_{D}$ & 0.13 & 0.12 & 0.1 & 0.11 & 0.1 & 0.08 & 0.09 & 0.1 & 0.07 & 0.06 & 0.07 & \\ 2006-09-02 04:26:15 & Corvids & $D_{SH}$ & 0.28 & 0.25 & 0.22 & 0.19 & 0.17 & 0.15 & 0.15 & 0.16 & 0.17 & 0.2 & 0.23 & \\ & & $D_{D}$ & 0.13 & 0.12 & 0.11 & 0.1 & 0.09 & 0.09 & 0.08 & 0.08 & 0.09 & 0.09 & 0.1 & \\ 2006-06-07 00:06:28 & May lambda Draconids & $D_{SH}$ & 0.26 & 0.22 & 0.25 & 0.21 & 0.27 & 0.24 & 0.26 & 0.32 & 0.45 & 0.42 & 0.41 & \\ & & $D_{D}$ & 0.16 & 0.12 & 0.1 & 0.08 & 0.13 & 0.09 & 0.09 & 0.13 & 0.21 & 0.19 & 0.17 & \\ 2006-01-10 23:25:28 & alpha Geminids & $D_{SH}$ & 0.28 & 0.28 & 0.29 & 0.29 & 0.29 & 0.29 & 0.29 & 0.29 & 0.3 & 0.3 & 0.31 & \\ & & $D_{D}$ & 0.16 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.16 & \\ 2005-12-03 12:45:49 & alpha Capricornids & $D_{SH}$ & 0.26 & 0.25 & 0.25 & 0.25 & 0.23 & 0.24 & 0.23 & 0.23 & 0.26 & 0.27 & 0.31 & \\ & & $D_{D}$ & 0.17 & 0.16 & 0.16 & 0.16 & 0.15 & 0.15 & 0.15 & 0.15 & 0.15 & 0.16 & 0.16 & \\ 2005-06-03 08:15:41 & Daytime kappa Aquariids & $D_{SH}$ & 0.34 & 0.36 & 0.39 & 0.3 & 0.17 & 0.18 & 0.3 & 0.35 & 0.27 & 0.2 & 0.31 & \\ & & $D_{D}$ & 0.11 & 0.12 & 0.15 & 0.11 & 0.07 & 0.07 & 0.13 & 0.16 & 0.13 & 0.09 & 0.12 & \\ 2005-04-19 07:37:47 & Corvids & $D_{SH}$ & 0.26 & 0.23 & 0.24 & 0.28 & 0.3 & 0.29 & 0.25 & 0.2 & 0.19 & 0.23 & 0.27 & \\ & & $D_{D}$ & 0.15 & 0.12 & 0.1 & 0.1 & 0.11 & 0.1 & 0.1 & 0.09 & 0.08 & 0.09 & 0.1 & \\ 2004-10-07 13:14:43 & Southern omega Scorpiids & $D_{SH}$ & 0.19 & 0.2 & 0.19 & 0.19 & 0.18 & 0.19 & 0.19 & 0.2 & 0.21 & 0.24 & 0.25 & \\ & & $D_{D}$ & 0.19 & 0.21 & 0.2 & 0.19 & 0.18 & 0.19 & 0.2 & 0.2 & 0.21 & 0.23 & 0.23 & \\ 2003-11-10 13:54:06 & 182P/LONEOS & 
$D_{SH}$ & 0.58 & 0.57 & 0.92 & 1.04 & 0.95 & 0.78 & 0.61 & 0.68 & 0.8 & 0.88 & 0.84 & \\ & & $D_{D}$ & 0.31 & 0.27 & 0.37 & 0.4 & 0.35 & 0.32 & 0.25 & 0.27 & 0.33 & 0.36 & 0.33 & \\ 2003-09-27 12:59:02 & C/1905 F1 & $D_{SH}$ & 0.35 & 0.43 & 0.59 & 0.66 & 0.6 & 0.45 & 0.34 & 0.44 & 0.66 & 0.85 & 0.83 & \\ & & $D_{D}$ & 0.15 & 0.19 & 0.26 & 0.29 & 0.27 & 0.21 & 0.16 & 0.18 & 0.28 & 0.36 & 0.37 & \\ \enddata \end{deluxetable*}
\section{Introduction}\label{intro} The rapid growth of modern science requires effective purpose-built information systems. Since the inception of the first scientific information systems, mathematicians have been involved in the full cycle of software product development, from idea to implementation. A well-known example is \TeX{}, an open source typesetting system designed and mostly written by Donald Knuth~\cite{knuth}. \TeX{} has a solid community of developers, researchers, and enthusiasts, who contribute new packages~\cite{ctan}. The reader is likely aware of the commercial systems Mathematica~\cite{mathematica-site} and WolframAlpha~\cite{WolframAlpha-site}, led by the mathematician and physicist Stephen Wolfram according to his principles of computational knowledge theory (see e.g.~\cite{wolfram}). Tools for mathematical content management are developed with the help of communities of mathematicians, e.g. MathJax~\cite{mathjax,cervone} by the American Mathematical Society, as well as by independent researchers, e.g. ASCIIMathML~\cite{ascii}. Math-Net.Ru~\cite{Zhizhchenko}, a collection of publications from refereed journals, and arXiv.org, a collection of publicly available preprints, are information systems that benefit from contributions of the mathematical community. A similar situation can be seen in other natural sciences; for example, there are information systems developed by chemists~\cite{cml,murray}. However, the contemporary science community clearly lacks information systems covering all of its needs. The main challenges in mathematical knowledge management (MKM) are discussed in~\cite{mkm} -- \cite{nti:2014}. 
Further, we frame the most urgent tasks: \begin{itemize} \item modeling representations of mathematical knowledge: techniques for representing MKM include data structures, logics, formal theories, and diagrams; \item presentation formats, i.e., formats, programming languages, etc.; \item authoring languages and tools; \item creating repositories of formalized mathematics and mathematical digital libraries; \item mathematical search and retrieval, i.e., querying collections of mathematical documents; \item implementing math assistants, tutoring, and assessment systems; \item developing collaboration tools for mathematics; \item creating new tools for detecting repurposed material, including plagiarism of others' work and self-plagiarism; \item creation of ``live documents''~\cite{parinov}; \item creation of interactive documents, e.g. the efforts of the Liber Mathematicae community~\cite{liber-site,liber} and the Computable Document Format (CDF)~\cite{cdf} by Wolfram; \item developing deduction systems, i.e., theorem provers and computer algebra systems (e.g.~\cite{mizar,coq}). The solution of this task requires rigid formalization of mathematical statements and proofs. \end{itemize} While mathematics is full of formalisms, there is currently no single widely accepted formalism for computer mathematics. To tackle this issue, we describe an approach based on Semantic Web models and technologies~\cite{tim}. At the core of integrating mathematical resources lies the construction of a structured representation of the scientific content. The World Wide Web Consortium (W3C, www.w3.org) is an international community that develops standards and technologies for the Semantic Web, including special-purpose markup languages for many domains. In this paper, we elaborate semantic-based approaches to solve some of the tasks described above. In Section~\ref{sem}, we outline existing semantic models for mathematical documents. 
In Section~\ref{formalizm}, we present $OntoMath^{PRO}$, a novel ontological model for mathematics that was developed by the authors together with mathematicians from Kazan Federal University. Section~\ref{applicat} contains concrete applications in search as well as education powered by the ontology. \section{Semantic Models of Mathematical Documents} \label{sem} In this section, we give an overview of state-of-the-art semantic models of mathematical documents. \subsection{Semantic markup for formulas} Semantic markup enables automatic intelligent information processing. For the representation of mathematical formulas, the Mathematical Markup Language (MathML)~\cite{mathml} has been developed. MathML was designed by W3C as a machine-readable language to both present and consume mathematical content on the Web. The increasing role of this language in mathematical content management is discussed in~\cite{miner}. Widely used tools for authoring mathematical articles include \LaTeX{}-based integrated development environments and office packages with mathematical formula support, such as MS Word+MathType. Word2TeX~\cite{word2tex} and \LaTeX{}ML~\cite{latex2} can be leveraged to convert documents from popular formats to XHTML+MathML for publication on the Web. \subsection{High-level models} Open Mathematical Documents (OMDoc)~\cite{kohlhase:2006}, an XML-based language, is integrated with MathML/OpenMath and adds support for statements, theories, and rhetorical structures to formalize mathematical documents. OMDoc has been used for interaction between structured specification systems and automated theorem provers~\cite{omdoc:provers}. The OMDoc OWL Ontology (available at \url{http://kwarc.info/projects/docOnto/omdoc.html}) is based on the notion of statements. Sub-statement structures include definitions, theorems, lemmas, corollaries, and proof steps. The relation set comprises partonomic (whole-part), logical dependency, and verbalizing properties. 
The paper~\cite{zhiltsov:2010} presents an OMDoc-based approach to author mathematical lecture notes using the S\TeX{} macro package~\cite{zhiltsov:2010, kohlhase:2005, kohlhase:2008} in \LaTeX{} and expose them as Linked Data accessible on the Web. S\TeX{} offers macros for introducing new mathematical symbols and using arbitrary metadata vocabularies. S\TeX{} is integrated with the OMDoc ontology, providing definitions of OpenMath symbols and elements of the logical structure of mathematical documents, such as theorems and proofs. This model also makes such documents directly available on the Web by converting them to the XHTML+RDFa format, and offers different types of services, such as notation explanation, versioning, and semantic search. The MathLang Document Rhetorical (DRa) Ontology~\cite{kamareddine} characterizes document structure elements according to their mathematical rhetorical roles, which are similar to the ones defined in the statement level of OMDoc. This semantics focuses on formalizing proof skeletons for generating proof-checker templates. The Mocassin Ontology~\cite{solovyev} encompasses many structural elements of the models mentioned above. However, this model is oriented more towards representing structural elements and the relations between them, e.g. logical dependency or referencing, which occur frequently in published scholarly papers in mathematics. In~\cite{solovyev} we demonstrate its utility in an information extraction scenario. \subsection{Terminological resources} Terminological resources, such as vocabularies, datasets, thesauri, and ontologies, include descriptions of mathematical knowledge objects. 
The general-purpose DBpedia dataset~\cite{dbpedia} contains, according to our estimates, about 7,800 concepts (including 1,500 concepts with labels in Russian) from algebra, 46,000 (9,200) concepts from geometry, 30,000 (4,300) concepts from mathematical logic, 150,000 (28,000) from mathematical analysis, and 165,000 (39,000) concepts on the theory of probability and statistics. The ScienceWISE project~\cite{sciencewise:demo} gives over 2,500 mathematical definitions, including concepts from mathematical physics, connected with subclass-of, whole-part, associative, and importance relationships. The Online Encyclopedia of Integer Sequences~\cite{oeis} is a knowledge base of facts about numbers. Given a sequence of integers, this service (\url{http://oeis.org}) displays information about its name, general formula, implementation in programming languages, successive numbers, references, and other relevant information. The Cambridge Mathematical Thesaurus~\cite{cmt} contains a taxonomy of about 4,500 entities in 9 languages from undergraduate-level mathematics, connected with logical dependency and associative relationships. \section{Ontologies as Formalisms for Mathematical Knowledge Representation} \label{formalizm} We introduce ontology-based formalisms for knowledge representation as well as our novel ontological model for mathematics. \subsection{Basic terms} \label{base-onto} Both knowledge representation and knowledge interchange between information agents, such as researchers and information systems, rely on a conceptualization~\cite{genesereth}. Each communication agent has its own vocabulary to refer to elements of the conceptualization. Therefore, a discrepancy between agent protocols can occur for two reasons: i) agents may have different conceptualizations; ii) they may have incompatible models of languages, i.e., meanings of terms. Effective communication requires a single conceptualization as well as a shared vocabulary. 
Ontologies satisfy this requirement. Refining the classical definition by T.~Gruber~\cite{gruber}, the authors of~\cite{studer} define an ontology as ``a formal, explicit specification of a shared conceptualization''. An ontology defines the basic concepts of a given domain and the relations between them, and includes: \begin{itemize} \item classes; \item properties; \item restrictions. \end{itemize} Hence, we accept the formal approach to ontology definition given by N.~Guarino according to formal semantics~\cite{guarino}. \begin{definition}\label{extensionalrelational} An extensional relational structure is a tuple $S = (D,R)$ where \begin{itemize} \item $D$ is a set called the universe of discourse; \item $R$ is a set of relations on $D$. \end{itemize} \end{definition} Let $W$ be the set of world states (also called worlds, or possible worlds) for an area of interest. \begin{definition}\label{conceptual-relation} A conceptual relation (or intensional relation) $\rho^n$ of arity $n$ on $\langle D,W\rangle$ is a total function $\rho^n : W \rightarrow 2^{D^n}$ from the set $W$ into the set of all $n$-ary (extensional) relations on $D$. \end{definition} From Definition~\ref{conceptual-relation}, we can provide a formal definition of conceptualization. \begin{definition}\label{conceptualization} A conceptualization (or intensional relational structure) is a triple $C = (D,W, \mathfrak{R})$ with \begin{itemize} \item $D$ a universe of discourse; \item $W$ a set of world states; \item $\mathfrak{R}$ a set of conceptual relations on the domain space $\langle D,W\rangle$. \end{itemize} \end{definition} An ontological commitment establishes the proper meanings of vocabulary elements. Let $\mathbf{L}$ be a first-order logical language with vocabulary $\mathbf{V}$, and let $\mathbf{C} = (D,W, \mathfrak{R})$ be a conceptualization. 
\begin{definition}\label{commitment} An ontological commitment (or intensional first order structure) for $\mathbf{L}$ is a tuple $\mathbf{K} = (\mathbf{C}, \mathfrak{I})$, where $\mathfrak{I}$ (called the intensional interpretation function) is a total function $\mathfrak{I} : \mathbf{V} \rightarrow D \cup \mathfrak{R}$ that maps each vocabulary symbol of $\mathbf{V}$ to either an element of $D$ or an intensional relation belonging to the set $\mathfrak{R}$. \end{definition} Let $I: \mathbf{V} \rightarrow D \cup R$ be any function that maps the vocabulary to the union of elements and relations of the universe of discourse (called the extensional interpretation function), where $S$ is as in Definition~\ref{extensionalrelational}. An intended model is a model that conforms to the chosen ontological commitment; formally: \begin{definition}\label{intended-models} A model $M = (S,I)$ is called an intended model of $\mathbf{L}$ according to $\mathbf{K}$ if \begin{enumerate} \item for all constant symbols $c \in \mathbf{V}$ we have $I(c) = \mathfrak{I}(c);$ \item there exists a world state $w \in W$ such that, for each predicate symbol $v \in \mathbf{V}$, there exists an intensional relation $\rho \in \mathfrak{R}$ such that $\mathfrak{I}(v) = \rho$ and $I(v) =\rho (w).$ \end{enumerate} \end{definition} Finally, the ontology is defined as follows: \begin{definition}[Ontology]\label{ontology-def} An ontology $\mathbf{O}_\mathbf{K}$ for an ontological commitment $\mathbf{K}$ is a logical theory consisting of a set of formulas of $\mathbf{L}$, constructed so that the set of its models matches as closely as possible the set of intended models of $\mathbf{L}$ according to $\mathbf{K}$. \end{definition} The ontology can be expressed in various formalisms. The most ubiquitous languages are $F$-logic~\cite{angele} and, particularly, description logic languages~\cite{baader}. 
In practice, the Web Ontology Language (OWL)~\cite{owl:primer}, a knowledge representation language founded on the description logic SHIQ, is the most widely used in the Semantic Web community. \subsection{OntoMath$^{PRO}$} $OntoMath^{PRO}$~\cite{kesw} is the first attempt to build an ontology of mathematical knowledge objects according to the principles described above. Hence, we apply the formalisms from the previous section to mathematics. We assume that, in our case, the universe of discourse consists of mathematical objects from scientific refereed publications. The conceptualization for mathematics comprises principles for the classification of objects according to their characteristics. The vocabulary represents the mathematical terminology. The ontological commitment consists of the meanings of mathematical terms widely accepted in the contemporary mathematical community. The ontology then captures the accepted conceptualization and the ontological commitment. The current version of $OntoMath^{PRO}$ contains concepts from the pre-selected fields of mathematics, such as number theory, set theory, algebra, analysis, geometry, mathematical logic, discrete mathematics, theory of computation, differential equations, numerical analysis, probability theory, and statistics. The ontology defines six relations, including the taxonomic relation, logical dependency, an associative relation between objects, belongingness of objects to fields of mathematics, and an associative relation between problems and tasks. Each mathematical concept is represented as a class in the ontology. The class has definitions both in Russian and English, relations with other classes, and links to verified Semantic Web resources~\cite{dbpedia,sciencewise:demo}. The current version of the ontology has 3,449 classes, 3,627 taxonomic and 1,139 non-taxonomic relations. We distinguish two hierarchies of classes: a taxonomy of the fields of mathematics and a taxonomy of mathematical knowledge objects. 
In the taxonomy of fields, the most fundamental fields, such as geometry and analysis, have been elaborated thoroughly. For example, specific sub-fields of geometry have been defined: analytic geometry, differential geometry, fractal geometry, and others. There are three types of top-level concepts in the taxonomy of mathematical knowledge objects: i) basic metamathematical concepts, e.g. Set, Operator, Map, Function, and Predicate; ii) root elements of the concepts related to particular fields of mathematics, e.g. Element of Logics; iii) common scientific concepts: Problem, Method, Statement, and Formula. Concrete theoretical results, e.g. Arslanov's completeness criterion, can be found at lower levels. \section{Applications}\label{applicat} We present applications of the proposed semantic models for mathematical formula search and learning. \subsection{Mathematical formula search}\label{search-math} We have implemented two applications for mathematical formula search: syntactical search of formulas in MathML, and semantic ontology-based search. The syntactical search leverages formula parts from documents formatted in \TeX{}. Our algorithm~\cite{elizarov:mathml} transforms formulas in \TeX{} to MathML. We set up an information retrieval system prototype for a collection of articles in Lobachevskii Journal of Mathematics (LJM, \url{http://ljm.ksu.ru}). For the end-user, the query input interface supports a convenient \TeX{} syntax. The search hit description includes highlighted occurrences of formulas as well as document metadata. In our previous work~\cite{Nikita}, we developed a semantic publishing platform for scientific collections in mathematics that analyzes the underlying semantics in mathematical scholarly papers and effectively builds their consolidated ontology-based representation. The current data set contains a semantic representation of articles from the ``Proceedings of Higher Education Institutions: Mathematics'' journal. 
Our demo application (\url{http://cll.niimm.ksu.ru/mathsearch}) features a use case of querying mathematical formulas in the published dataset that are relevant to a given mathematical concept. The supported user input is close to a keyword search: our system is agnostic to the particular symbolic notation used to express mathematical concepts, and the user is able to select query suggestions by keywords. Our search interface also supports filtering by the document structure context, i.e., a particular segment of the document (e.g. a theorem or a definition) that contains the relevant formula. \subsection{Learning}\label{learning} For a practicing mathematician, the ability to solve problems is crucial. A proficient solver must grasp the relationships between particular methods, tasks, and proof techniques to make the transition from solving problems to proving theorems~\cite{velleman}. We describe our experiments on ontology-based assessment of the competence of students who attended a course on numerical analysis. For our experiments, we extracted a small fragment of the $OntoMath^{PRO}$ ontology. It contains taxonomies of tasks and solving methods for systems of linear equations (numerical analysis) as well as relationships between them. The experiment participants were students who attended the course and had high overall grades. Each participant was given a list of classes and asked to link them using only two relationships: the taxonomic relation and \emph{solves}. Therefore, we treat this task as a classification task. We use standard performance measures for classification tasks, such as precision (P), recall (R), and F-score $=2\cdot\frac{P\cdot R}{P+R}$. According to our results, reconstruction of concept properties is the hardest task (35\% F-score on average) for most students, compared to reconstruction of taxonomies (83\%). This suggests that the ontology could be used by students to form the correct conceptualization of a field of mathematics. 
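The F-score used in this assessment is simply the harmonic mean of precision and recall. The sketch below makes the computation explicit; the numeric inputs in the example are illustrative placeholders, not the study's actual per-student data.

```python
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision (P) and recall (R): 2*P*R / (P + R)."""
    if precision + recall == 0.0:
        return 0.0  # convention: undefined harmonic mean treated as 0
    return 2.0 * precision * recall / (precision + recall)

# Hypothetical student results, for illustration only:
taxonomy_f = f_score(0.85, 0.81)     # reconstruction of taxonomic links
properties_f = f_score(0.40, 0.31)   # reconstruction of concept properties
```

Note that the harmonic mean penalizes imbalance: a student with high recall but low precision scores well below the arithmetic mean of the two.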
The detailed analysis of the experiments is provided in~\cite{kesw}. \section{Conclusion}\label{conclusion} The paper summarizes the key tasks in mathematical knowledge representation. We give an overview of state-of-the-art semantic models of mathematical documents. We introduce ontology-based formalisms for knowledge representation as well as our novel ontological model for mathematics, $OntoMath^{PRO}$. We present applications of the proposed semantic models for mathematical formula search and learning. We emphasize that while the ontology has achieved maturity, it is the result of ongoing work. The ontology is publicly available at \url{ontomathpro.org}. On this webpage, we encourage our colleagues to take part in collaborative editing, including corrections and contributions of new classes, relations, and definitions. We also host a discussion to explore novel applications. \textbf{Acknowledgments:} A.~Kirillovich would like to thank Evelina Khakimova (University of Virginia), Claudia Acevedo (Lemoine Editores), and Maria Isabel Duarte (EAFIT University) for their assistance with bibliographic sources.
\section{Ground State for Finite and Open System} When calculating the ground state of the effective one-dimensional model [Eq.~(5) of the main text] for an open and finite system with a fixed number of particles $N_p$, one gets a huge degeneracy due to the missing particles at the system boundaries. To avoid this degeneracy we cut the system and take only $N_s=m Z N_p - (m-1)Z = 9 N_p - 6$ sites instead of $\bar{N}_s=9 N_p$ sites (for $Z=3$ and $m=3$). The degeneracy is then lifted and there is only one possible ground state. This procedure corresponds to forcing the last $6$ sites to be empty. The other two possible ground states that would occur in the thermodynamic limit can then be found by putting either $3$ or all $6$ empty sites to the other boundary of the chain. We will use this procedure for our DMRG calculations as well as for the analytical calculations of the boundary charge. Using this ground state search, one gets a periodicity of $2\pi$. To get the periodicity of $6\pi$, we calculate all three ground states. We expect these states to evolve into each other when executing an adiabatic time evolution in the grand-canonical ensemble with the chemical potential located in the charge gap. One then gets the periodicity of $6\pi$ which we show in the main text. For convenience, we choose the chemical potential in such a way that the jumps of the adiabatic time evolution and the ones of the ground state search occur at the same position. The positions of the jumps in the adiabatic time evolution may change slightly when changing the chemical potential within the gap. The average of the FBC is given by \begin{align} \label{eq:boundary_charge} Q_B = \sum_{n=1}^{\bar{N}_s} f_n (\rho_n - N_p/\bar{N}_s), \end{align} where $\bar{N}_s$ is the number of sites including the $6$ empty sites, $N_p$ is the number of particles, and $\rho_n=\langle a^\dagger_n a_n\rangle$. The envelope function is denoted by $f_n$ and needs to decay smoothly from 1 to 0. 
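Eq.~\eqref{eq:boundary_charge} is straightforward to evaluate numerically once an envelope is chosen. Below is a minimal sketch in Python, assuming (as one concrete smooth choice) an envelope that decays linearly over a ramp of length $l_p$ centred at distance $L_p$ from the left boundary; the function names are ours, introduced only for illustration.

```python
import numpy as np

def envelope(n_sites: int, L_p: float, l_p: float) -> np.ndarray:
    """Envelope f_n: equal to 1 deep in the left part of the chain, 0 at the
    right end, with a linear ramp of length l_p centred at distance L_p
    from the left boundary (sites are labelled n = 1, ..., n_sites)."""
    n = np.arange(1, n_sites + 1)
    return np.clip((L_p + l_p / 2.0 - n) / l_p, 0.0, 1.0)

def boundary_charge(rho, n_particles: int, L_p: float, l_p: float) -> float:
    """Q_B = sum_n f_n (rho_n - N_p / N_s), cf. Eq. (boundary_charge)."""
    rho = np.asarray(rho, dtype=float)
    f = envelope(len(rho), L_p, l_p)
    return float(np.sum(f * (rho - n_particles / len(rho))))
```

A useful sanity check: for a perfectly uniform density the local excess vanishes on every site, so $Q_B = 0$ independently of the envelope.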
For our numerical calculations we take a linear slope of length $l_p$ for the decay. The center of this slope has a distance of $L_p$ from the left boundary. For the calculations shown in Fig.~3 of the main text we use $N_s=174$ ($\bar{N}_s=180$) with $L_p=90$ and $l_p=90$. \section{Analytical Calculation of the FBC} In this section we calculate the FBC for the effective one-dimensional model as a function of the phase $\alpha$ analytically. We focus on the case of $Z=3$ and $m=3$ (other cases can be treated analogously) and consider the atomic limit with strong electron-electron (Coulomb) interaction $U_l=U\gg v_{ex} \gg t$. We introduce an effective unit cell of $Z_{\text{eff}} = Z m = 9$ sites, so that the average bulk density is $\bar{\rho}_B= \frac{1}{Z_{\text{eff}}}=\frac{1}{9}$. In the atomic limit the problem of finding the FBC in the given strongly interacting model can be reduced to an effective single-particle model, in which a particle can occupy one of the first $m$ sites of the effective unit cell with $Z_{\text{eff}}$ sites. As shown in Ref.~\onlinecite{pletyukhov_etal_prr_20}, the FBC in this limit is dominantly given by the polarization contribution deep in the bulk, which has the form \begin{align} Q_B \approx Q_P &= - \frac{1}{Z_{\text{eff}}} \sum_{j=1}^{Z_{\text{eff}}} j ( \rho^{\rm bulk}_j - \bar{\rho}_B) \\&= - \frac{1}{Z_{\text{eff}} } \sum_{j=1}^{m} j \rho^{\rm bulk}_j + \frac{Z_{\text{eff}}+1}{2 Z_{\text{eff}}} \\&=- \frac{1}{9 } \sum_{j=1}^{m} j \rho^{\rm bulk}_j + \frac{5}{9} . \end{align} % Depending on the minima of the cosine potential (see Fig.~\ref{fig:cos_3min}), a particle can sit either on site $j=1$ (for $0<\alpha < \frac{2 \pi}{3}$), or on $j=2$ (for $\frac{4 \pi}{3}<\alpha< 2 \pi$), or on $j=3$ (for $\frac{2 \pi}{3}<\alpha < \frac{4 \pi}{3}$). 
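This occupation rule already fixes the plateau values through $Q_B \approx -j/9 + 5/9$ for the occupied site $j$. A quick exact-fraction check (a sketch of the arithmetic, not part of the original derivation):

```python
from fractions import Fraction

def plateau_value(j: int, z_eff: int = 9) -> Fraction:
    """Atomic-limit boundary charge when the single particle of the
    effective unit cell sits on site j:
    Q_B = -j/z_eff + (z_eff + 1)/(2 * z_eff)."""
    return Fraction(-j, z_eff) + Fraction(z_eff + 1, 2 * z_eff)

# The three plateaus 4/9, 3/9, 2/9 correspond to j = 1, 2, 3:
plateaus = [plateau_value(j) for j in (1, 2, 3)]
```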
We thus get three plateaus, \begin{align} Q_B \left(0<\alpha < \frac{2 \pi}{3}\right) & \approx - \frac{1}{9 } + \frac59 = \frac49, \\ Q_{B} \left(\frac{4 \pi}{3}<\alpha< 2 \pi\right) & \approx - \frac{2}{9 } + \frac59 = \frac{3}{9}, \\ Q_B \left(\frac{2 \pi}{3}<\alpha < \frac{4 \pi}{3}\right) & \approx - \frac{3}{9 } +\frac59 = \frac{2}{9}. \end{align} \begin{figure} \centering \includegraphics[width =\columnwidth]{cos_3min.pdf} \caption{On-site potentials $v_j (\alpha) = \cos (\frac{2 \pi}{3} j+\alpha)$ for $j=1,2,3$.} \label{fig:cos_3min} \end{figure} \subsection{Vicinity of $\alpha=0, \frac{2\pi}{3}, \frac{4\pi}{3}$} At $\alpha =0, \frac{2 \pi}{3}, \frac{4 \pi}{3}$ the minima contain two sites with the same on-site potential. Therefore we consider first-order degenerate perturbation theory in $t$ in the three different intervals around these values. 1) \underline{$\frac{\pi}{3} < \alpha < \pi$}: $| \psi_0 \rangle \approx c_1^{(1)} |1 \rangle + c_3^{(1)} |3 \rangle $. Then we find \begin{align} Q_B \left(\frac{\pi}{3} < \alpha < \pi\right) &\approx - \frac19 \left( |c_1^{(1)}|^2 + 3 | c_3^{(1)}|^2 \right) + \frac59 \\ &= - \frac19 \left(- |c_1^{(1)}|^2 + | c_3^{(1)}|^2 \right) + \frac39. \label{QB1} \end{align} % However, this formula is incorrect, because the hybridization between $|1 \rangle$ and $| 3 \rangle$ is impossible at $O(t)$, and we need to revise the above result. Consider the two subintervals 1a) $\frac{\pi}{3} <\alpha < \frac{2 \pi}{3}$; 1b) $\frac{2 \pi}{3} <\alpha < \pi$. 1a) For $\frac{\pi}{3} <\alpha < \frac{2 \pi}{3}$ the density is mostly located on $|1 \rangle$, with a small admixture of $| 0 \rangle$, which replaces $| 3 \rangle$ in Eq.~\eqref{QB1}. Thus the correct expression reads \begin{align} Q_B \left(\frac{\pi}{3} < \alpha < \frac{2\pi}{3}\right) &\approx - \frac19 \left( |c_1^{(1)}|^2 + 0 | c_3^{(1)}|^2 \right) + \frac59 \notag\\ &\approx - \frac{1}{18} \left( |c_1^{(1)}|^2 - | c_3^{(1)}|^2 \right) + \frac12. 
\label{QB1a} \end{align} 1b) For $\frac{2\pi}{3} <\alpha < \pi$ the density is mostly located on $|3 \rangle$, with a small admixture of $| 4 \rangle$, which replaces $| 1 \rangle$ in Eq.~\eqref{QB1}. Thus the correct expression reads \begin{align} Q_B \left(\frac{2 \pi}{3} < \alpha < \pi\right) &\approx - \frac19 \left( 4 |c_1^{(1)}|^2 + 3 | c_3^{(1)}|^2 \right) + \frac59 \notag\\ &\approx - \frac{1}{18} \left( |c_1^{(1)}|^2 - | c_3^{(1)}|^2 \right) + \frac16. \label{QB1b} \end{align} % Comparing Eqs.~\eqref{QB1a} and \eqref{QB1b} and taking into account that $|c_1^{(1)}|^2 - |c_3^{(1)}|^2$ is a continuous function of $\alpha$ (see below) vanishing at $\alpha = \frac{2 \pi}{3}$, we observe that at this value of $\alpha$ the boundary charge value jumps from $\frac12$ to $\frac16$, such that the jump value is $\frac16 - \frac12 = -\frac13$. \hspace{0.5cm} 2) \underline{$\pi < \alpha < \frac{5 \pi}{3}$}: $| \psi_0 \rangle \approx c_3^{(2)} |3 \rangle + c_2^{(2)} |2 \rangle $. This gives us \begin{align} Q_B \left(\pi < \alpha < \frac{5\pi}{3}\right) &\approx - \frac19 \left( 3 |c_3^{(2)}|^2 + 2 | c_2^{(2)}|^2 \right) + \frac59 \notag\\ &\approx - \frac{1}{18} \left( |c_3^{(2)}|^2 - | c_2^{(2)}|^2 \right) + \frac{5}{18}. \end{align} \hspace{0.5cm} 3) \underline{$-\frac{\pi}{3} < \alpha < \frac{\pi}{3}$}: $| \psi_0 \rangle \approx c_2^{(3)} |2 \rangle + c_1^{(3)} |1 \rangle $, leading to \begin{align} Q_B \left(-\frac{\pi}{3} < \alpha < \frac{\pi}{3}\right) &\approx - \frac19 \left(2 |c_2^{(3)}|^2 + | c_1^{(3)}|^2 \right) + \frac59 \notag \\ &\approx - \frac{1}{18} \left( |c_2^{(3)}|^2 - | c_1^{(3)}|^2 \right) + \frac{7}{18}. 
\end{align} The coefficients $c_a$ and $c_b$ are found from the eigenvalue problem \begin{align} &\left( \begin{array}{cc} \frac{v_a - v_b}{2} & - t \\ -t & - \frac{v_a - v_b}{2} \end{array} \right) \left( \begin{array}{c} c_a \\ c_b \end{array} \right) \notag\\&= - \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2} \left( \begin{array}{c} c_a \\ c_b \end{array} \right) \end{align} from which it follows \begin{align} c_b &= \left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right] \frac{c_a}{t}, \\ c_a^2 &= \frac{t^2}{\left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 +t^2}, \end{align} \begin{align} c_a^2 - c_b^2 &= \frac{t^2 - \left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 }{\left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 +t^2} \\&= - (v_a-v_b) \frac{ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2} }{\left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 +t^2} . \end{align} % The results are shown in Fig.~\ref{fig:numerics_explanation} for certain parameters, where we also compare them to DMRG data. In making the ansatz that we only need one particle in a cell of $Z_{\text{eff}}=9$ sites, we assumed that there are always two empty minima between the particles due to the repulsive electron-electron interaction. However, it is possible to have configurations where there is only one empty minimum between two particles. Then, both particles need to be located on the outer site of their minimum as shown in Fig.~\ref{fig:neglected}. In this case they also do not `see' each other's Coulomb interaction, and such a configuration would be a ground state for $t=0$. 
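The two-site eigenvalue problem above is easy to cross-check numerically. The sketch below diagonalises the $2\times 2$ matrix and compares the resulting ground-state weight with the closed-form expression for $c_a^2$; the parameter values in the test are illustrative only.

```python
import numpy as np

def ground_state_weights(v_a: float, v_b: float, t: float):
    """Return (c_a**2, c_b**2) for the lower eigenstate of the 2x2 matrix
    [[d, -t], [-t, -d]] with d = (v_a - v_b)/2, normalised to sum to 1."""
    d = (v_a - v_b) / 2.0
    h = np.array([[d, -t], [-t, -d]])
    _, vecs = np.linalg.eigh(h)      # eigenvalues returned in ascending order
    c_a, c_b = vecs[:, 0]            # column 0 <-> lowest eigenvalue
    return c_a**2, c_b**2

def c_a_squared_closed_form(v_a: float, v_b: float, t: float) -> float:
    """Closed form from the text: c_a^2 = t^2 / ([d + sqrt(d^2+t^2)]^2 + t^2)."""
    d = (v_a - v_b) / 2.0
    s = np.sqrt(d**2 + t**2)
    return t**2 / ((d + s)**2 + t**2)
```

In the degenerate case $v_a = v_b$ the weights are $1/2$ each, consistent with the hybridized states at $\alpha = 0, \frac{2\pi}{3}, \frac{4\pi}{3}$.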
% \unitlength0.75cm \begin{figure} \begin{picture}(11,3.8) \label{fig:unused} \put(0.75,3){\line(1,0){0.4}} \put(1.5,2){\line(1,0){0.4}} \put(2.25,2){\line(1,0){0.4}} \put(3,3){\line(1,0){0.4}} \put(3.75,2){\line(1,0){0.4}} \put(4.5,2){\line(1,0){0.4}} \put(5.25,3){\line(1,0){0.4}} \put(6,2){\line(1,0){0.4}} \put(6.75,2){\line(1,0){0.4}} \put(7.5,3){\line(1,0){0.4}} \put(8.25,2){\line(1,0){0.4}} \put(9,2){\line(1,0){0.4}} \put(9.75,3){\line(1,0){0.4}} \put(1.61,1.97){\color{OliveGreen}{$\uparrow$}} \put(6.86,1.97){\color{blue}{$\uparrow$}} \put(0.0,2.3){$\cdots$} \put(10.5,2.3){$\cdots$} \multiput(2.15,1.1)(0,0.3){9}{\color{blue}{\line(0,1){0.1}}} \multiput(6.55,1.75)(0,0.3){7}{\color{OliveGreen}{\line(0,1){0.1}}} \put(-1.0,1.66){\color{OliveGreen}{$\underbrace{\hspace{5.0cm}}_{\hspace{0.7cm}\text{Range of Coulomb interaction of the first particle}}$}} \put(2.15,1.0){\color{blue}{$\underbrace{\hspace{6.6cm}}_{\text{Range of Coulomb interaction of the second particle}}$}} \end{picture} \caption{States that are ground states at $t=0$ but can be neglected at $t\neq 0$. Two of the particles have only a distance of $(m-1)$ minima. The energy is higher than for the states where every $m$th minimum is occupied, since the coupling of the green particle to the right and of the blue one to the left is much smaller in these cases.} \label{fig:neglected} \end{figure} % However, we do not need to consider them for the case with $t\neq 0$. Indeed, the neglected states are not coupled to the used ones in the orders that we consider, so there are no neglected couplings. Additionally, all states that have a contribution from those new states should have a larger energy than the calculated ground state, because the coupling to the adjacent site is much smaller for the configurations shown in Fig.~\ref{fig:neglected} due to the Coulomb interaction. Therefore, these states cannot contribute to the ground state for $t\neq 0$. 
Using this degenerate perturbation theory, we find some discontinuities at $\alpha = \frac{\pi}{3}, \pi, \frac{5 \pi}{3}$ (see Fig.~\ref{fig:numerics_explanation}) because in the vicinity of these points the minimum contains only a single site. In the next section we will remove these discontinuities by treating the vicinities of these points in second order in $t$ with non-degenerate perturbation theory. \subsection{Vicinity of $\alpha=\frac{\pi}{3}, \pi, \frac{5\pi}{3}$} In the vicinity of these points we can use non-degenerate perturbation theory where, up to first order in the perturbation, the ground state is given by \begin{align} \Ket{\Psi}=\Ket{n}+\sum_{m\ne n} \frac{V_{mn}}{E_n-E_m} \Ket{m} \,. \end{align} Here, $\Ket{n}$ denotes the ground state for $t=0$, and $V_{mn}$ are the matrix elements between the ground state and the excited states $m$, which are given by the hopping in our model. In the given regions there is one site in each minimum of the on-site potential. We will call this site $b$, while the two adjacent sites will be called $a$ and $c$. We then get \begin{align} \Ket{\Psi_0}=\Ket{b}+\frac{t}{v_c - v_b} \Ket{c} +\frac{t}{v_a - v_b} \Ket{a} \, \label{eq:GS_pert} \end{align} for the ground state. Taking into account that this state is not normalized, we get % \begin{align} |c_b|^2&=1-\left(\frac{t}{v_a-v_b}\right)^2-\left(\frac{t}{v_c-v_b}\right)^2 + \mathcal O\left(t^3\right) \, , \\ |c_a|^2&=\left(\frac{t}{v_a-v_b}\right)^2+ \mathcal O\left(t^3\right), \\ |c_c|^2&=\left(\frac{t}{v_c-v_b}\right)^2+ \mathcal O\left(t^3\right)\, . 
\end{align} The boundary charge in the three different regions can then be calculated as follows: \begin{align} Q_B &\left(0<\alpha < \frac{2 \pi}{3}\right) \notag\\ & \approx - \frac{1}{9 } \left(0|c_3^{(1)}|^2+|c_1^{(1)}|^2+2|c_2^{(1)}|^2 \right) + \frac59 \notag\\ &\approx \frac19\left(|c_3^{(1)}|^2-|c_2^{(1)}|^2 \right) +\frac49, \end{align} \begin{align} Q_{B} &\left(\frac{2 \pi}{3}<\alpha < \frac{4 \pi}{3}\right) \notag\\ & \approx - \frac{1}{9 }\left(2|c_2^{(2)}|^2+3|c_3^{(2)}|^2+4|c_1^{(2)}|^2 \right) + \frac59 \notag\\ &\approx \frac19\left(|c_2^{(2)}|^2-|c_1^{(2)}|^2 \right) +\frac29, \end{align} \begin{align} Q_B &\left(\frac{4 \pi}{3}<\alpha< 2 \pi\right) \notag\\ & \approx - \frac{1}{9 } \left(|c_1^{(3)}|^2+2|c_2^{(3)}|^2+3|c_3^{(3)}|^2 \right) +\frac59 \notag\\ &\approx \frac19\left(|c_1^{(3)}|^2-|c_3^{(3)}|^2 \right) +\frac39 \, . \end{align} % We then insert \begin{align} |c_c|^2-|c_a|^2 \approx \left(\frac{t}{v_c-v_b}\right)^2-\left(\frac{t}{v_a-v_b}\right)^2 \end{align} to get the final result, which is shown in Fig.~\ref{fig:numerics_explanation} together with the results of the first-order perturbation theory calculated above. \begin{figure}[t!] \centering \includegraphics[width =\columnwidth]{QB_expl_supp.pdf} \caption{ Boundary charge as a function of $\alpha$ for the effective model calculated with DMRG and perturbation theory. The other parameters are $m=3$, $Z=3$, $t=1$, $v_{ex}=5$ and $U_l=U=10$ for $l=1,...,6$. We used $N_s=174$ and $L_p=90$, $l_p=90$ for the envelope function to get the DMRG results. The dashdotted line shows the results calculated in the vicinity of $\alpha=\frac{\pi}{3}, \pi, \frac{5\pi}{3}$, while the dashed line was calculated in the vicinity of $\alpha=0, \frac{2\pi}{3}, \frac{4\pi}{3}$.} \label{fig:numerics_explanation} \end{figure} \subsection{Uniting the results} In the previous sections we calculated the behavior of the boundary charge in different regimes of $\alpha$. 
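Before uniting the curves, the second-order correction entering the plateau formulas can be evaluated directly from the perturbative weights of Eq.~\eqref{eq:GS_pert}. A minimal sketch (the on-site values in the test are illustrative numbers, not the model's actual $v_j(\alpha)$):

```python
def perturbative_weights(v_a: float, v_b: float, v_c: float, t: float):
    """O(t^2) occupation weights after normalisation:
    |c_a|^2 = (t/(v_a - v_b))^2, |c_c|^2 = (t/(v_c - v_b))^2,
    |c_b|^2 = 1 - |c_a|^2 - |c_c|^2."""
    w_a = (t / (v_a - v_b)) ** 2
    w_c = (t / (v_c - v_b)) ** 2
    return w_a, 1.0 - w_a - w_c, w_c

def qb_correction(v_a: float, v_b: float, v_c: float, t: float) -> float:
    """|c_c|^2 - |c_a|^2, the combination inserted into the plateau formulas."""
    w_a, _, w_c = perturbative_weights(v_a, v_b, v_c, t)
    return w_c - w_a
```

As expected, the correction vanishes when the two neighbours of the minimum are energetically symmetric, $v_a = v_c$, and grows as one neighbour approaches degeneracy with site $b$.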
To get a final expected curve, one needs to decide where to change between those regimes. Basically we have two different functions for the boundary charge. One should be valid around $\alpha=\pi/3, \pi, 5\pi/3$ and the other one around $\alpha=0, 2\pi/3, 4\pi/3$. These two results are plotted in the whole interval of $[0, 2\pi]$ in Fig.~\ref{fig:numerics_explanation}. As one can see, both results fit a certain part of the numerical curve quite well while there are other parts where they show useless behavior like jumps and divergences. Nevertheless they coincide very well in the intermediate regions between the regimes where they were calculated. To get one final analytical curve the method of calculation was changed at the points where both curves cross each other. The final result can be seen in Fig.~\ref{fig:numerics_final}. The numerical and analytical results lie nearly perfectly on top of each other. Fig.~\ref{fig:numerics_final} corresponds to a zoom into Fig.~3 of the main text, where we show all three ground states with their periodicity of $6\pi$. There, we find a jump of unity for each of the ground states because a particle leaves the system at that point. In Fig.~\ref{fig:numerics_final} we see only a jump by 1/3 because the system changes to another ground state as indicated by the colors. Thereby, all particles are shifted by one minimum to get into the new real ground state of our system. As already mentioned above, the other two ground states can be found by forcing other sites to have zero occupation. For the analytical calculation this means that sites $4,5,6$ or $7,8,9$ are occupied instead of sites $1,2,3$. The boundary charge is then changed by $-1/3$ or $-2/3$. \begin{figure} \centering \includegraphics[width =\columnwidth]{QB_final_supp.pdf} \caption{ Boundary charge as a function of $\alpha$ for the effective model calculated with DMRG and perturbation theory. 
The other parameters are $m=3$, $Z=3$, $t=1$, $v_{ex}=5$ and $U_l=U=10$ for $l=1,...,6$. We used $N_s=174$ and $L_p=90$, $l_p=90$ for the envelope function to get the DMRG results. The two curves of Fig.~\ref{fig:numerics_explanation} are now combined to one final curve.} \label{fig:numerics_final} \end{figure} \subsection{Limits of our perturbation theory} As we performed the perturbation theory in the regime $U \gg v_{ex} \gg t$, we expect it to fail when $U$ and $v_{ex}$ are not large enough. In Fig.~\ref{fig:prob_pert} we calculate the boundary charge in dependence of $\alpha$ for different $U$ and $v_{ex}$ with constant $U/v_{ex}=2$. For large values of $U$ and $v_{ex}$ the results coincide very well with our numerical DMRG results. For smaller values of $U$ and $v_{ex}$ the curves start to differ. When $U$ and $v_{ex}$ are of the order of $t$ the perturbation theory does not even give us a smooth curve. For those parameters the results of the different regimes of $\alpha$ do not agree in the intermediate region and cannot be united in a satisfying way. \begin{figure}[t!] \centering \includegraphics[width =\columnwidth]{problems_perturbation_theory.pdf} \caption{Results of DMRG and perturbation theory for $t=1.0$ and $(v_{ex}, U)=$ a) $(5.0, 10.0)$, b) $(4.0, 8.0)$, c) $(3.0, 6.0)$, d) $(2.0, 4.0)$, e) $(1.0, 2.0)$, f) $(0.5, 1.0)$. The perturbation theory works quite well for large values of $v_{ex}$ and $U$ as expected, while there are clear drawbacks for smaller $v_{ex}$ and $U$. The DMRG results are obtained with $N_s=174$, $L_p=90$, and $l_p=90$.} \label{fig:prob_pert} \end{figure} \section{Dependence on Envelope Function} In the thermodynamic limit the boundary charge needs to be independent of the details of the envelope function. However, the boundary charge can slightly depend on the envelope function for finite system sizes as shown in Fig.~\ref{fig:diff_env}. \begin{figure}[t!] 
\centering \includegraphics[width =\columnwidth]{diff_envelope.pdf} \caption{$Q_B\left(2\pi\right)-Q_B\left(\frac{4\pi}{3}\right)$ for different system sizes and different envelope functions. $\bar{N}_s=N_s+(m-1)Z=N_s+6$ describes the system size including the blocked sites where the density is forced to be zero. For $\bar{N}_s\rightarrow\infty$ all curves tend to $1/9$. The other parameters are $t=1$, $v_{ex}=3$, and $U=3$.} \label{fig:diff_env} \end{figure} % To be as close as possible to the thermodynamic limit, we choose the envelope function with $L_p=\bar{N}_s/2$ and $l_p=\bar{N}_s/2$ (orange curve in Fig.~\ref{fig:diff_env}) for our calculations, where $\bar{N}_s=N_s+6$ denotes the system including the sites that we forced to be empty. With this choice already systems of relatively small size give us a value of $Q_B\left(2\pi\right)-Q_B\left(\frac{4\pi}{3}\right)$ that coincides with the one in the thermodynamic limit. \section{Interface between two CDWs} In this section, we show how the analytical arguments presented in the main text can be extended to describe the charge located at the interface between two CDWs in a 1D nanowire. We consider an interface between two CDWs described by a spatially modulated potential of the form % \begin{align} V_m(x)=\begin{cases} 2 V_m \cos (2mk_Fx+\alpha_<), &x<0,\\ 2 V_m \cos (2mk_Fx+\alpha_>), &x>0, \end{cases} \end{align} % where $\alpha_{<}$ ($\alpha_>$) describes the phase offset of the CDW in the domain $x<0$ ($x>0$). We can now follow the same arguments as in the main text for $x\in(-\infty,0)$ and $x\in(0,+\infty)$ separately. 
In terms of the conjugate bosonic fields $\phi$ and $\theta$, the CDW term then takes the form $H_{CDW}^m=\int dx\,\mathcal{H}_{CDW}^m(x)$ with % \begin{align} \mathcal{H}_{CDW}^m(x)=\begin{cases} \frac{{-2|\tilde V_m|}}{(2\pi a)^m} \cos (2m\phi(x)+\alpha_< -\alpha_0),&x<0,\\ \frac{{-2|\tilde V_m|}}{(2\pi a)^m} \cos (2m\phi(x)+\alpha_> -\alpha_0),&x>0, \end{cases} \end{align} % where $\alpha_0$ is again an irrelevant overall phase shift. The CDW term is minimized for the pinning values % \begin{align} \phi(x)=\begin{cases} -(\alpha_<-\alpha_0)/2m+l\pi/m,&x<0,\\ -(\alpha_>-\alpha_0)/2m+n\pi/m,&x>0, \end{cases} \end{align} % where $l$ and $n$ are integers. Therefore, the charge located at the interface is given by % \begin{align} Q_D &= - \int^{+\infty}_{-\infty} dx \ \frac{\partial_x \phi (x)}{\pi} \nonumber\\&=- \frac{1}{\pi} [\phi(+\infty)-\phi(-\infty)]\nonumber\\ &=(l-n)/m+(\alpha_>-\alpha_<)/2\pi m\ \ {\rm mod}\,{1}. \label{eq:charge_interface} \end{align} % We thus find that the fractional charge changes linearly with the phase difference $\alpha_>-\alpha_<$ with a slope of $1/2\pi m$. Finally, we note that analogous considerations allow us to recover the fractional charge of the excitations in the 2D case. Indeed, a bulk excitation in the 2D FQHE corresponds to a kink (domain wall) in the pinned combination of the fields $\eta_{1(n+1)}-\eta_{\bar{1}n}$ for a given $n$, see Eq.~(8) in the main text, while the uniform phase $\varphi$ drops out. By using $\sum_n(\eta_{1(n+1)}-\eta_{\bar{1}n})=-2(2l+1)\sum_n\phi_n$ and using that the charge density of a single wire is given by $\rho_n=-\partial_x\phi_n(x)/\pi$ in units of the electron charge $e$, we find that a kink between two adjacent minima of the cosine carries the charge $e/(2l+1)$. 
\section{Ground State for Finite and Open System} When calculating the ground state of the effective one-dimensional model [Eq.~(5) of the main text] for an open and finite system with a fixed number of particles $N_p$, one obtains a large degeneracy due to the missing particles at the system boundaries. To avoid this degeneracy we cut the system and take only $N_s=m Z N_p - (m-1)Z = 9 N_p - 6$ sites instead of $\bar{N}_s=9 N_p$ sites (for $Z=3$ and $m=3$). The degeneracy is then lifted and there is only one possible ground state. This procedure corresponds to forcing the last $6$ sites to be empty. The other two possible ground states that would occur in the thermodynamic limit can then be found by putting either $3$ or all $6$ empty sites at the other boundary of the chain. We use this procedure for our DMRG calculations as well as for the analytical calculations of the boundary charge. Using this ground-state search, one obtains a periodicity of $2\pi$. To recover the periodicity of $6\pi$, we calculate all three ground states. We expect these states to evolve into each other under an adiabatic time evolution in the grand-canonical ensemble with the chemical potential located in the charge gap. One then obtains the periodicity of $6\pi$ that we show in the main text. For convenience, we choose the chemical potential in such a way that the jumps of the adiabatic time evolution and those of the ground-state search occur at the same positions. The positions of the jumps in the adiabatic time evolution may change slightly when the chemical potential is varied within the gap. The average of the FBC is given by \begin{align} \label{eq:boundary_charge} Q_B = \sum_{n=1}^{\bar{N}_s} f_n (\rho_n - N_p/\bar{N}_s), \end{align} where $\bar{N}_s$ is the number of sites including the $6$ empty sites, $N_p$ is the number of particles, and $\rho_n=\langle a^\dagger_n a_n\rangle$.
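The evaluation of Eq.~\eqref{eq:boundary_charge} can be sketched in a few lines. The linear-ramp envelope below is a hypothetical example of an $f_n$ decaying smoothly from 1 to 0, and the density profile is a toy input rather than actual DMRG data.

```python
import numpy as np

# Minimal sketch of the boundary-charge formula Q_B = sum_n f_n (rho_n - N_p/N_bar).
# The envelope and density profile here are toy inputs, not actual DMRG data.

def envelope(n, L_p, l_p):
    """Linear ramp f_n: 1 deep in the bulk, decaying to 0 over a region of
    length l_p whose center sits a distance L_p from the left boundary."""
    start, end = L_p - l_p / 2, L_p + l_p / 2
    return np.clip((end - n) / (end - start), 0.0, 1.0)

def boundary_charge(rho, N_p, L_p, l_p):
    """Envelope-weighted sum of density deviations from the mean filling."""
    N_bar = len(rho)
    n = np.arange(1, N_bar + 1)
    return np.sum(envelope(n, L_p, l_p) * (rho - N_p / N_bar))

# sanity check: a perfectly uniform density carries no boundary charge
N_bar, N_p = 180, 20
rho_uniform = np.full(N_bar, N_p / N_bar)
assert abs(boundary_charge(rho_uniform, N_p, L_p=90, l_p=90)) < 1e-12
```

With actual DMRG densities in place of the toy profile, this is the quantity plotted in the figures below.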
The envelope function is denoted by $f_n$ and needs to decay smoothly from 1 to 0. For our numerical calculations we take a linear decay of length $l_p$, whose center has a distance of $L_p$ from the left boundary. For the calculations shown in Fig.~3 of the main text we use $N_s=174$ ($\bar{N}_s=180$) with $L_p=90$ and $l_p=90$. \section{Analytical Calculation of the FBC} In this section we calculate the FBC for the effective one-dimensional model as a function of the phase $\alpha$ analytically. We focus on the case of $Z=3$ and $m=3$ (other cases can be treated analogously) and consider the atomic limit with strong electron-electron (Coulomb) interaction $U_l=U\gg v_{ex} \gg t$. We introduce an effective unit cell of $Z_{\text{eff}} = Z m = 9$ sites, so that the average bulk density is $\bar{\rho}_B= \frac{1}{Z_{\text{eff}}}=\frac{1}{9}$. In the atomic limit the problem of finding the FBC in the given strongly interacting model can be reduced to an effective single-particle model, in which a particle can occupy one of the first $m$ sites of the effective unit cell with $Z_{\text{eff}}$ sites. As shown in Ref.~\onlinecite{pletyukhov_etal_prr_20}, the FBC in this limit is dominated by the polarization contribution deep in the bulk, which has the form \begin{align} Q_B \approx Q_P &= - \frac{1}{Z_{\text{eff}}} \sum_{j=1}^{Z_{\text{eff}}} j ( \rho^{\rm bulk}_j - \bar{\rho}_B) \\&= - \frac{1}{Z_{\text{eff}} } \sum_{j=1}^{m} j \rho^{\rm bulk}_j + \frac{Z_{\text{eff}}+1}{2 Z_{\text{eff}}} \\&=- \frac{1}{9 } \sum_{j=1}^{m} j \rho^{\rm bulk}_j + \frac{5}{9} . \end{align} % Depending on the minima of the cosine potential (see Fig.~\ref{fig:cos_3min}), a particle can sit on site $j=1$ (for $0<\alpha < \frac{2 \pi}{3}$), on $j=2$ (for $\frac{4 \pi}{3}<\alpha< 2 \pi$), or on $j=3$ (for $\frac{2 \pi}{3}<\alpha < \frac{4 \pi}{3}$).
We thus get three plateaus, \begin{align} Q_B \left(0<\alpha < \frac{2 \pi}{3}\right) & \approx - \frac{1}{9 } + \frac59 = \frac49, \\ Q_{B} \left(\frac{4 \pi}{3}<\alpha< 2 \pi\right) & \approx - \frac{2}{9 } + \frac59 = \frac{3}{9}, \\ Q_B \left(\frac{2 \pi}{3}<\alpha < \frac{4 \pi}{3}\right) & \approx - \frac{3}{9 } +\frac59 = \frac{2}{9}. \end{align} \begin{figure} \centering \includegraphics[width =\columnwidth]{cos_3min.pdf} \caption{On-site potentials $v_j (\alpha) = \cos (\frac{2 \pi}{3} j+\alpha)$ for $j=1,2,3$.} \label{fig:cos_3min} \end{figure} \subsection{Vicinity of $\alpha=0, \frac{2\pi}{3}, \frac{4\pi}{3}$} At $\alpha =0, \frac{2 \pi}{3}, \frac{4 \pi}{3}$ the minima contain two sites with the same on-site potential. Therefore, we apply first-order degenerate perturbation theory in $t$ in the three different intervals around these values. 1) \underline{$\frac{\pi}{3} < \alpha < \pi$}: $| \psi_0 \rangle \approx c_1^{(1)} |1 \rangle + c_3^{(1)} |3 \rangle $. Then we find \begin{align} Q_B \left(\frac{\pi}{3} < \alpha < \pi\right) &\approx - \frac19 \left( |c_1^{(1)}|^2 + 3 | c_3^{(1)}|^2 \right) + \frac59 \\ &= - \frac19 \left(- |c_1^{(1)}|^2 + | c_3^{(1)}|^2 \right) + \frac39. \label{QB1} \end{align} % However, this formula is incorrect, because a hybridization between $|1 \rangle$ and $| 3 \rangle$ is impossible at $O (t)$, and we need to revise the above result. Consider the two subintervals 1a) $\frac{\pi}{3} <\alpha < \frac{2 \pi}{3}$; 1b) $\frac{2 \pi}{3} <\alpha < \pi$. 1a) For $\frac{\pi}{3} <\alpha < \frac{2 \pi}{3}$ the density is mostly located on $|1 \rangle$, with a small admixture of $| 0 \rangle$, which replaces $| 3 \rangle$ in Eq.~\eqref{QB1}. Thus the correct expression reads \begin{align} Q_B \left(\frac{\pi}{3} < \alpha < \frac{2\pi}{3}\right) &\approx - \frac19 \left( |c_1^{(1)}|^2 + 0 | c_3^{(1)}|^2 \right) + \frac59 \notag\\ &\approx - \frac{1}{18} \left( |c_1^{(1)}|^2 - | c_3^{(1)}|^2 \right) + \frac12.
\label{QB1a} \end{align} 1b) For $\frac{2\pi}{3} <\alpha < \pi$ the density is mostly located on $|3 \rangle$, with a small admixture of $| 4 \rangle$, which replaces $| 1 \rangle$ in Eq.~\eqref{QB1}. Thus the correct expression reads \begin{align} Q_B \left(\frac{2 \pi}{3} < \alpha < \pi\right) &\approx - \frac19 \left( 4 |c_1^{(1)}|^2 + 3 | c_3^{(1)}|^2 \right) + \frac59 \notag\\ &\approx - \frac{1}{18} \left( |c_1^{(1)}|^2 - | c_3^{(1)}|^2 \right) + \frac16. \label{QB1b} \end{align} % Comparing Eqs.~\eqref{QB1a} and \eqref{QB1b} and taking into account that $|c_1^{(1)}|^2 - |c_3^{(1)}|^2$ is a continuous function of $\alpha$ (see below) vanishing at $\alpha = \frac{2 \pi}{3}$, we observe that at this value of $\alpha$ the boundary charge value jumps from $\frac12$ to $\frac16$, such that the jump value is $\frac16 - \frac12 = -\frac13$. \hspace{0.5cm} 2) \underline{$\pi < \alpha < \frac{5 \pi}{3}$}: $| \psi_0 \rangle \approx c_3^{(2)} |3 \rangle + c_2^{(2)} |2 \rangle $. This gives us \begin{align} Q_B \left(\pi < \alpha < \frac{5\pi}{3}\right) &\approx - \frac19 \left( 3 |c_3^{(2)}|^2 + 2 | c_2^{(2)}|^2 \right) + \frac59 \notag\\ &\approx - \frac{1}{18} \left( |c_3^{(2)}|^2 - | c_2^{(2)}|^2 \right) + \frac{5}{18}. \end{align} \hspace{0.5cm} 3) \underline{$-\frac{\pi}{3} < \alpha < \frac{\pi}{3}$}: $| \psi_0 \rangle \approx c_2^{(3)} |2 \rangle + c_1^{(3)} |1 \rangle $, leading to \begin{align} Q_B \left(-\frac{\pi}{3} < \alpha < \frac{\pi}{3}\right) &\approx - \frac19 \left(2 |c_2^{(3)}|^2 + | c_1^{(3)}|^2 \right) + \frac59 \notag \\ &\approx - \frac{1}{18} \left( |c_2^{(3)}|^2 - | c_1^{(3)}|^2 \right) + \frac{7}{18}. 
\end{align} The coefficients $c_a$ and $c_b$ are found from the eigenvalue problem \begin{align} &\left( \begin{array}{cc} \frac{v_a - v_b}{2} & - t \\ -t & - \frac{v_a - v_b}{2} \end{array} \right) \left( \begin{array}{c} c_a \\ c_b \end{array} \right) \notag\\&= - \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2} \left( \begin{array}{c} c_a \\ c_b \end{array} \right) \end{align} and it follows \begin{align} c_b &= \left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right] \frac{c_a}{t}, \\ c_a^2 &= \frac{t^2}{\left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 +t^2}, \end{align} \begin{align} c_a^2 - c_b^2 &= \frac{t^2 - \left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 }{\left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 +t^2} \\&= - (v_a-v_b) \frac{ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2} }{\left[ \frac{v_a - v_b}{2} + \sqrt{\left( \frac{v_a - v_b}{2} \right)^2 + t^2}\right]^2 +t^2} . \end{align} % The results are shown in Fig.~\ref{fig:numerics_explanation} for certain parameters, where we also compare them to DMRG data. When we made the ansatz that we only need one particle in a cell of $Z_{\text{eff}}=9$ sites, we assumed that there are always two empty minima between the particles due to the repulsive electron-electron interaction. However, it is possible to have configurations where there is only one empty minimum between two particles. Then, both particles need to be located on the outer site of their minimum as shown in Fig.~\ref{fig:neglected}. In this case they also do not `see' each other's Coulomb interaction and it would be a ground state for $t=0$. 
% \unitlength0.75cm \begin{figure} \begin{picture}(11,3.8) \put(0.75,3){\line(1,0){0.4}} \put(1.5,2){\line(1,0){0.4}} \put(2.25,2){\line(1,0){0.4}} \put(3,3){\line(1,0){0.4}} \put(3.75,2){\line(1,0){0.4}} \put(4.5,2){\line(1,0){0.4}} \put(5.25,3){\line(1,0){0.4}} \put(6,2){\line(1,0){0.4}} \put(6.75,2){\line(1,0){0.4}} \put(7.5,3){\line(1,0){0.4}} \put(8.25,2){\line(1,0){0.4}} \put(9,2){\line(1,0){0.4}} \put(9.75,3){\line(1,0){0.4}} \put(1.61,1.97){\color{OliveGreen}{$\uparrow$}} \put(6.86,1.97){\color{blue}{$\uparrow$}} \put(0.0,2.3){$\cdots$} \put(10.5,2.3){$\cdots$} \multiput(2.15,1.1)(0,0.3){9}{\color{blue}{\line(0,1){0.1}}} \multiput(6.55,1.75)(0,0.3){7}{\color{OliveGreen}{\line(0,1){0.1}}} \put(-1.0,1.66){\color{OliveGreen}{$\underbrace{\hspace{5.0cm}}_{\hspace{0.7cm}\text{Range of Coulomb interaction of the first particle}}$}} \put(2.15,1.0){\color{blue}{$\underbrace{\hspace{6.6cm}}_{\text{Range of Coulomb interaction of the second particle}}$}} \end{picture} \caption{States that are ground states at $t=0$ but can be neglected at $t\neq 0$. Two of the particles have a distance of only $(m-1)$ minima. The energy is higher than for the states where every $m$th minimum is occupied, as the coupling of the green particle to the right and of the blue one to the left is much smaller in these cases.} \label{fig:neglected} \end{figure} % However, we do not need to consider these configurations for $t\neq 0$. Indeed, the neglected states do not couple to the states used above in the orders we consider, so no couplings are missed. Additionally, all states containing an admixture of these configurations have a larger energy than the calculated ground state, because the coupling to the adjacent site is much smaller for the configurations shown in Fig.~\ref{fig:neglected} due to the Coulomb interaction. Therefore, these states cannot contribute to the ground state for $t\neq 0$.
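As a side check, the closed-form expression for $c_a^2-c_b^2$ derived above from the two-site eigenvalue problem can be compared with a direct numerical diagonalization. The following sketch is illustrative and not part of the original calculation.

```python
import numpy as np

def ca2_minus_cb2_closed(delta, t):
    """Closed-form c_a^2 - c_b^2 with delta = v_a - v_b (as derived in the text)."""
    root = np.sqrt((delta / 2) ** 2 + t ** 2)
    num = delta / 2 + root
    return -delta * num / (num ** 2 + t ** 2)

def ca2_minus_cb2_numeric(delta, t):
    """Ground state of the two-level Hamiltonian [[delta/2, -t], [-t, -delta/2]]."""
    H = np.array([[delta / 2, -t], [-t, -delta / 2]])
    vals, vecs = np.linalg.eigh(H)
    ca, cb = vecs[:, 0]          # eigh sorts eigenvalues in ascending order
    return ca ** 2 - cb ** 2

# the two expressions agree for arbitrary level splittings
for delta in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert np.isclose(ca2_minus_cb2_closed(delta, 1.0),
                      ca2_minus_cb2_numeric(delta, 1.0))
```

The overall sign of the numerical eigenvector is irrelevant here, since only the squared amplitudes enter.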
Using this degenerate perturbation theory, we find discontinuities at $\alpha = \frac{\pi}{3}, \pi, \frac{5 \pi}{3}$ (see Fig.~\ref{fig:numerics_explanation}), because in the vicinity of these points the minimum no longer contains two nearly degenerate sites. In the next section we remove these discontinuities by treating the vicinities of these points to second order in $t$ with non-degenerate perturbation theory. \subsection{Vicinity of $\alpha=\frac{\pi}{3}, \pi, \frac{5\pi}{3}$} In the vicinity of these points we can use non-degenerate perturbation theory, where, up to first order in the perturbation, the ground state is given by \begin{align} \Ket{\Psi}=\Ket{n}+\sum_{m\ne n} \frac{V_{mn}}{E_n-E_m} \Ket{m} \,. \end{align} Here, $\Ket{n}$ denotes the ground state for $t=0$, and $V_{mn}$ are the matrix elements between the ground state and the excited states $m$, which are given by the hopping in our model. In the given regions there is one site in each minimum of the on-site potential. We will call this site $b$, while the two adjacent sites will be called $a$ and $c$. We then get \begin{align} \Ket{\Psi_0}=\Ket{b}+\frac{t}{v_c - v_b} \Ket{c} +\frac{t}{v_a - v_b} \Ket{a} \, \label{eq:GS_pert} \end{align} for the ground state. Taking into account that this state is not normalized, we get % \begin{align} |c_b|^2&=1-\left(\frac{t}{v_a-v_b}\right)^2-\left(\frac{t}{v_c-v_b}\right)^2 + \mathcal O\left(t^3\right) \, , \\ |c_a|^2&=\left(\frac{t}{v_a-v_b}\right)^2+ \mathcal O\left(t^3\right), \\ |c_c|^2&=\left(\frac{t}{v_c-v_b}\right)^2+ \mathcal O\left(t^3\right)\, .
\end{align} The boundary charge in the three different regions can then be calculated as follows: \begin{align} Q_B &\left(0<\alpha < \frac{2 \pi}{3}\right) \notag\\ & \approx - \frac{1}{9 } \left(0|c_3^{(1)}|^2+|c_1^{(1)}|^2+2|c_2^{(1)}|^2 \right) + \frac59 \notag\\ &\approx \left(-|c_3^{(1)}|^2+|c_2^{(1)}|^2 \right) +\frac49, \end{align} \begin{align} Q_{B} &\left(\frac{4 \pi}{3}<\alpha< 2 \pi\right) \notag\\ & \approx - \frac{1}{9 }\left(2|c_2^{(2)}|^2+3|c_3^{(2)}|^2+4|c_1^{(2)}|^2 \right) + \frac59 \notag\\ &\approx \left(-|c_2^{(2)}|^2+|c_1^{(2)}|^2 \right) +\frac29, \end{align} \begin{align} Q_B &\left(\frac{2 \pi}{3}<\alpha < \frac{4 \pi}{3}\right) \notag\\ & \approx - \frac{1}{9 } \left(|c_1^{(3)}|^2+2|c_2^{(3)}|^2+3|c_3^{(3)}|^2 \right) +\frac59 \notag\\ &\approx \left(-|c_1^{(3)}|^2+|c_3^{(3)}|^2 \right) +\frac39 \, . \end{align} % We then insert \begin{align} |c_c|^2-|c_a|^2 \approx \left(\frac{t}{v_c-v_b}\right)^2-\left(\frac{t}{v_a-v_b}\right)^2 \end{align} to get the final result that is shown in Fig.~\ref{fig:numerics_explanation} together with the results of the first order perturbation theory calculated above. \begin{figure}[t!] \centering \includegraphics[width =\columnwidth]{QB_expl_supp.pdf} \caption{ Boundary charge as a function of $\alpha$ for the effective model calculated with DMRG and perturbation theory. The other parameters are $m=3$, $Z=3$, $t=1$, $v_{ex}=5$ and $U_l=U=10$ for $l=1,...,6$. We used $N_s=174$ and $L_p=90$, $l_p=90$ for the envelope function to get the DMRG results. The dashdotted line shows the results calculated in the vicinity of $\alpha=\frac{\pi}{3}, \pi, \frac{5\pi}{3}$, while the dashed line was calculated in the vicinity of $\alpha=0, \frac{2\pi}{3}, \frac{4\pi}{3}$.} \label{fig:numerics_explanation} \end{figure} \subsection{Uniting the results} In the previous sections we calculated the behavior of the boundary charge in different regimes of $\alpha$. 
To obtain a final expected curve, one needs to decide where to switch between those regimes. Essentially, we have two different functions for the boundary charge: one should be valid around $\alpha=\pi/3, \pi, 5\pi/3$ and the other around $\alpha=0, 2\pi/3, 4\pi/3$. These two results are plotted on the whole interval $[0, 2\pi]$ in Fig.~\ref{fig:numerics_explanation}. As one can see, each result fits a certain part of the numerical curve quite well, while there are other parts where it shows unphysical behavior such as jumps and divergences. Nevertheless, the two results agree very well in the intermediate regions between the regimes where they were calculated. To obtain one final analytical curve, we switch between the two methods at the points where the curves cross each other. The final result can be seen in Fig.~\ref{fig:numerics_final}. The numerical and analytical results lie nearly perfectly on top of each other. Fig.~\ref{fig:numerics_final} corresponds to a zoom into Fig.~3 of the main text, where we show all three ground states with their periodicity of $6\pi$. There, we find a jump of unity for each of the ground states because a particle leaves the system at that point. In Fig.~\ref{fig:numerics_final} we see only a jump by 1/3 because the system changes to another ground state, as indicated by the colors. Thereby, all particles are shifted by one minimum to reach the new true ground state of our system. As already mentioned above, the other two ground states can be found by forcing other sites to have zero occupation. For the analytical calculation this means that sites $4,5,6$ or $7,8,9$ are occupied instead of sites $1,2,3$. The boundary charge is then changed by $-1/3$ or $-2/3$. \begin{figure} \centering \includegraphics[width =\columnwidth]{QB_final_supp.pdf} \caption{ Boundary charge as a function of $\alpha$ for the effective model calculated with DMRG and perturbation theory.
The other parameters are $m=3$, $Z=3$, $t=1$, $v_{ex}=5$ and $U_l=U=10$ for $l=1,...,6$. We used $N_s=174$ and $L_p=90$, $l_p=90$ for the envelope function to get the DMRG results. The two curves of Fig.~\ref{fig:numerics_explanation} are now combined to one final curve.} \label{fig:numerics_final} \end{figure} \subsection{Limits of our perturbation theory} As we performed the perturbation theory in the regime $U \gg v_{ex} \gg t$, we expect it to fail when $U$ and $v_{ex}$ are not large enough. In Fig.~\ref{fig:prob_pert} we show the boundary charge as a function of $\alpha$ for different $U$ and $v_{ex}$ at constant $U/v_{ex}=2$. For large values of $U$ and $v_{ex}$ the results coincide very well with our numerical DMRG results. For smaller values of $U$ and $v_{ex}$ the curves start to differ. When $U$ and $v_{ex}$ are of the order of $t$, the perturbation theory does not even yield a smooth curve: the results for the different regimes of $\alpha$ do not agree in the intermediate regions and cannot be combined in a satisfactory way. \begin{figure}[t!] \centering \includegraphics[width =\columnwidth]{problems_perturbation_theory.pdf} \caption{Results of DMRG and perturbation theory for $t=1.0$ and $(v_{ex}, U)=$ a) $(5.0, 10.0)$, b) $(4.0, 8.0)$, c) $(3.0, 6.0)$, d) $(2.0, 4.0)$, e) $(1.0, 2.0)$, f) $(0.5, 1.0)$. The perturbation theory works quite well for large values of $v_{ex}$ and $U$ as expected, while there are clear drawbacks for smaller $v_{ex}$ and $U$. The DMRG results are obtained with $N_s=174$, $L_p=90$, and $l_p=90$.} \label{fig:prob_pert} \end{figure} \section{Dependence on Envelope Function} In the thermodynamic limit the boundary charge must be independent of the details of the envelope function. However, for finite system sizes it can depend slightly on the envelope function, as shown in Fig.~\ref{fig:diff_env}. \begin{figure}[t!]
\centering \includegraphics[width =\columnwidth]{diff_envelope.pdf} \caption{$Q_B\left(2\pi\right)-Q_B\left(\frac{4\pi}{3}\right)$ for different system sizes and different envelope functions. $\bar{N}_s=N_s+(m-1)Z=N_s+6$ describes the system size including the blocked sites where the density is forced to be zero. For $\bar{N}_s\rightarrow\infty$ all curves tend to $1/9$. The other parameters are $t=1$, $v_{ex}=3$, and $U=3$.} \label{fig:diff_env} \end{figure} % To be as close as possible to the thermodynamic limit, we choose the envelope function with $L_p=\bar{N}_s/2$ and $l_p=\bar{N}_s/2$ (orange curve in Fig.~\ref{fig:diff_env}) for our calculations, where $\bar{N}_s=N_s+6$ denotes the system including the sites that we forced to be empty. With this choice already systems of relatively small size give us a value of $Q_B\left(2\pi\right)-Q_B\left(\frac{4\pi}{3}\right)$ that coincides with the one in the thermodynamic limit. \section{Interface between two CDWs} In this section, we show how the analytical arguments presented in the main text can be extended to describe the charge located at the interface between two CDWs in a 1D nanowire. We consider an interface between two CDWs described by a spatially modulated potential of the form % \begin{align} V_m(x)=\begin{cases} 2 V_m \cos (2mk_Fx+\alpha_<), &x<0,\\ 2 V_m \cos (2mk_Fx+\alpha_>), &x>0, \end{cases} \end{align} % where $\alpha_{<}$ ($\alpha_>$) describes the phase offset of the CDW in the domain $x<0$ ($x>0$). We can now follow the same arguments as in the main text for $x\in(-\infty,0)$ and $x\in(0,+\infty)$ separately. 
In terms of the conjugate bosonic fields $\phi$ and $\theta$, the CDW term then takes the form $H_{CDW}^m=\int dx\,\mathcal{H}_{CDW}^m(x)$ with % \begin{align} \mathcal{H}_{CDW}^m(x)=\begin{cases} \frac{{-2|\tilde V_m|}}{(2\pi a)^m} \cos (2m\phi(x)+\alpha_< -\alpha_0),&x<0,\\ \frac{{-2|\tilde V_m|}}{(2\pi a)^m} \cos (2m\phi(x)+\alpha_> -\alpha_0),&x>0, \end{cases} \end{align} % where $\alpha_0$ is again an irrelevant overall phase shift. The CDW term is minimized for the pinning values % \begin{align} \phi(x)=\begin{cases} -(\alpha_<-\alpha_0)/2m+l\pi/m,&x<0,\\ -(\alpha_>-\alpha_0)/2m+n\pi/m,&x>0, \end{cases} \end{align} % where $l$ and $n$ are integers. Therefore, the charge located at the interface is given by % \begin{align} Q_D &= - \int^{+\infty}_{-\infty} dx \ \frac{\partial_x \phi (x)}{\pi} \nonumber\\&=- \frac{1}{\pi} [\phi(+\infty)-\phi(-\infty)]\nonumber\\ &=(l-n)/m+(\alpha_>-\alpha_<)/2\pi m\ \ {\rm mod}\,{1}. \label{eq:charge_interface} \end{align} % We thus find that the fractional charge changes linearly with the phase difference $\alpha_>-\alpha_<$ with a slope of $1/2\pi m$. Finally, we note that analogous considerations allow us to recover the fractional charge of the excitations in the 2D case. Indeed, a bulk excitation in the 2D FQHE corresponds to a kink (domain wall) in the pinned combination of the fields $\eta_{1(n+1)}-\eta_{\bar{1}n}$ for a given $n$, see Eq.~(8) in the main text, while the uniform phase $\varphi$ drops out. By using $\sum_n(\eta_{1(n+1)}-\eta_{\bar{1}n})=-2(2l+1)\sum_n\phi_n$ and using that the charge density of a single wire is given by $\rho_n=-\partial_x\phi_n(x)/\pi$ in units of the electron charge $e$, we find that a kink between two adjacent minima of the cosine carries the charge $e/(2l+1)$. \bibliographystyle{unsrt}
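The linear dependence of the interface charge on the phase difference in Eq.~\eqref{eq:charge_interface} is simple enough to verify numerically. The sketch below is illustrative; the integers $l$ and $n$ are arbitrary choices.

```python
import numpy as np

# Numerical illustration of Q_D = (l - n)/m + (alpha_> - alpha_<)/(2*pi*m) mod 1.
# The integers l, n label the pinning minima on either side of the interface.

def interface_charge(alpha_left, alpha_right, m, l=0, n=0):
    q = (l - n) / m + (alpha_right - alpha_left) / (2 * np.pi * m)
    return q % 1.0

m = 3
# the charge grows linearly with the phase difference, with slope 1/(2*pi*m)
dq = interface_charge(0.0, 0.2, m) - interface_charge(0.0, 0.1, m)
assert np.isclose(dq, 0.1 / (2 * np.pi * m))

# shifting one of the integers by one changes Q_D by 1/m (mod 1)
assert np.isclose(interface_charge(0.0, 0.0, m, l=1), 1 / m)
```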
\section{Introduction} Measurement of the radiation effects imparted to astronauts on the space station is an important task for the International Space Station (ISS) mission. Numerous high-energy protons are produced at the solar surface when a huge solar flare occurs. They arrive at the ISS a few hours later. These protons pose a strong radiation hazard not only to the astronauts but also to the electronic devices on-board the ISS. An early prediction of the arrival of the Solar Energetic Particles (SEPs) is expected to minimize such radiation hazards. To realize such a forecast, a new type of solar neutron detector is proposed for mounting on the ISS. In April 1979, the plan was accepted by the Space Development Committee of the Ministry of Science of Japan as a JEM payload \cite{Minute}. The new neutron detector can measure the neutron energy; it can also detect the arrival direction of neutrons. These functions are very important not only to identify when solar neutrons depart from the Sun but also to assess radiation hazards for human bodies and radiation damage to electronics. These new functions will enable more precise understanding not only of the production mechanisms of the Solar Energetic Particles (SEPs) at the solar surface, but also of the radiation effects imparted by them. In particular, we can determine whether the detected particles are produced indirectly in the wall of the ISS or arrive directly as solar neutrons. \section{New Solar Neutron Detector} The new neutron detector can measure the energy of solar neutrons and of the albedo neutrons from the Earth. The detector comprises fine blocks of scintillation plates; the size of one scintillator plate is 6~mm$\times$3~mm$\times$96~mm. The neutron tracks are identifiable as the tracks of protons that are produced through n-p interaction processes in the scintillator plates. The arrival direction and the track length of protons are detected by two multi-anode photomultipliers (H4140-20; Hamamatsu Photonics K.K.)
looking perpendicularly from the X direction and Y direction (Fig. 1). Thereby, the detector can identify the arrival direction of neutrons with an accuracy of $\pm$(1.8$-$45) deg, depending on the track length. Consequently, we can identify whether those neutrons have come from the Sun, from the Earth, or from the wall of the ISS, because the direction of the Sun is measured using a position sensor on-board the ISS. The neutron's energy can be estimated by measuring the range of protons. The neutron detector designated as FIB in SEDA can measure the energy of neutrons between 30 MeV and 100 MeV. However, we note another method to obtain the energy of protons. The intensity of photons deposited into each block of the scintillator plate will be measured one by one by the analog-to-digital converter (ADC). Therefore, the sum of all ADC values recorded along a track must be proportional to the kinetic energy of the proton. Herein we call this method the ``total photon counting method''. The deposited energy in each thin plastic plate will be measured by the 256-channel multi-anode photomultipliers one by one, switching the ADC electronically. The diagram of the electronic logic is shown in Figure 2. At this conference, we will report the energy resolution of the total photon counting method in an oral presentation, comparing it with that of the range method. Simultaneously, the Bonner Ball Detector (BBD) will measure neutrons with energies in the range of (0.03 eV$-$15 MeV) in SEDA. The BBD detector cannot identify the arrival direction of neutrons. The operational principle of the BBD is the same as that of the neutron monitors used throughout the world to measure the long-term modulation of cosmic ray intensity, but the detector is small. The FIB and the BBD detectors are mounted in a large box called the Space Environment Data Acquisition equipment-Attached Payload (SEDA-AP), which weighs 450 kg.
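The ``total photon counting method'' described above amounts to summing the per-plate ADC values along a track. The sketch below is purely schematic; the calibration constant is hypothetical and only illustrates the proportionality.

```python
# Schematic of the "total photon counting method": the proton kinetic energy
# is taken proportional to the summed ADC counts along its track.
# The calibration constant below is hypothetical, for illustration only.

MEV_PER_ADC_COUNT = 0.05  # hypothetical calibration [MeV per ADC count]

def track_energy_mev(adc_counts_along_track):
    """Estimate the proton kinetic energy from the ADC values of the plates hit."""
    return MEV_PER_ADC_COUNT * sum(adc_counts_along_track)

# e.g. a track crossing five plates, with made-up per-plate ADC values
counts = [310, 280, 260, 250, 400]
energy = track_energy_mev(counts)
```

In the real detector the calibration would follow from beam tests such as those described in the next section.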
It will be launched to the Japanese Experiment Module (JEM) of the ISS on 13 June 2009 by the Endeavour space shuttle. Details of the FIB detector on board the SEDA-AP are available elsewhere \cite{Koga, Matsumoto, Imaida, Muraki, webJAXA}. \section{Energy Resolution of FIB detector} The energy resolution of the FIB detector was obtained using the proton beam at RIKEN. A 160 MeV proton beam was directed at the front of the FIB. Different proton energies were realized by installing aluminum plates of various thicknesses; the energies incident on the FIB detector were equivalent to 27 MeV, 44 MeV, 68 MeV, and 102 MeV, respectively. The range of the tracks was scanned visually to obtain the range distribution. We have inferred that the mean value of the range corresponds to the incident proton energy, while the spread around the mean value corresponds to the detector's energy resolution. Consequently, the energy resolution was obtained. The results are presented in Fig. 3. Fitting the data to a function of 1/E, we obtained the energy resolution of the FIB for proton tracks as $\Delta$E/E = 10$\%$/(E/50 MeV). However, when the trigger rate for neutrons increases to more than 16 Hz, the FIB detector cannot record the pattern of the event, because the transmission rate is limited by the baud rate of the communication between the ISS and the ground-based station. In this case, it can still measure the number of layers, i.e., the range of the charged particles. Furthermore, in case the trigger rate exceeds 64 Hz, only the total deposited energy will be measured, using the mesh-type dynode of the multi-anode photomultipliers. In this mode, the energy resolution is not as good as that of the range method: it is $\Delta$E/E = 40$\%$, independent of energy. \section{Expected trigger rate and Large solar flares} The FIB will record various data. The main sensor is cubic: 10cm$\times$10cm$\times$10cm.
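The fitted form $\Delta$E/E = 10$\%$/(E/50 MeV) implies a roughly constant absolute resolution of about 5 MeV for the range method. A minimal sketch evaluating the two operating modes quoted above (the function names are ours):

```python
def range_resolution(E_MeV):
    """Relative resolution dE/E of the range method, from the fit
    dE/E = 10% / (E / 50 MeV)."""
    return 0.10 * 50.0 / E_MeV

def dynode_resolution(E_MeV):
    """Relative resolution of the total-energy (mesh-dynode) mode,
    which is independent of energy."""
    return 0.40

# dE/E is 10% at 50 MeV and 5% at 100 MeV, so the absolute
# resolution E * (dE/E) stays near 5 MeV for the range method.
```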
The main sensor is covered by six plates of plastic scintillator, which serve as anti-counters. Therefore, when high-energy neutrons enter the sensor and produce high-energy protons, the high-energy protons penetrate the main sensor and arrive at the anti-counter. Using the anti-counter, such high-energy neutrons are not recorded. We can disable the bottom-side anti-counter by a command from the ground, depending on the trigger rate from backside particles. Detection of high-energy solar neutrons with energies $>$100 MeV is made using ground-based neutron detectors. The onboard detectors of the ISS can record solar neutrons with energies of less than 100 MeV. Solar neutrons with an energy of 100 MeV arrive 11 min later than the light, if the solar neutrons are produced instantaneously at the Sun by the solar flare. According to our estimation \cite{Imaida}, the expected flux of solar neutrons for gigantic solar flares, e.g. the 4 June 1991 event, is 15 Hz at 100 MeV. Here we have estimated the detection efficiency of neutrons by the FIB as 10$\%$. Therefore, we will be able to record the pattern of events in the energy range of 40$-$100 MeV using the FIB detector for smaller solar flares, such as X1-class flares. This limitation is imposed merely by the available memory on the FIB. The FIB detector can only record the total energy of the events if the trigger rate becomes greater than 64 Hz. Such a situation is expected to occur for neutrons with energies of less than 60 MeV for the largest solar flares. The expected background level is 1.5 Hz above the middle latitudes and 0.2 Hz above the equator. This background is produced by albedo neutrons, the so-called cosmic ray albedo neutrons, induced by collisions of galactic cosmic rays with nuclei in the upper atmosphere. Finally, in Figure 4, we present a photograph of the end-to-end test experiment that was performed at the Tsukuba space center of JAXA at the end of 2007.
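The 11 min delay quoted above for 100 MeV neutrons follows directly from relativistic kinematics over 1 AU; a quick check (the constants and function name are ours):

```python
import math

M_N = 939.565      # neutron rest mass [MeV/c^2]
C = 2.998e8        # speed of light [m/s]
AU = 1.496e11      # Sun-Earth distance [m]

def neutron_delay_minutes(T_MeV):
    """Arrival delay (relative to light) of a neutron of kinetic energy
    T_MeV travelling 1 AU, assuming instantaneous production at the Sun."""
    gamma = 1.0 + T_MeV / M_N
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return (AU / (beta * C) - AU / C) / 60.0

# neutron_delay_minutes(100.0) gives roughly 11 minutes,
# consistent with the delay quoted in the text.
```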
\section{Acknowledgements} The authors thank Mr. N. Mochizuki for Flight Module data analysis.
\section{Introduction} In recent years, the Internet of Things (IoT) has attracted much attention, and is expected to support different applications such as smart homes, smart gateways, environmental monitoring, and smart cities \cite{iot1, iot2}. Different from human-type communications, the IoT spawns a new communication scenario named massive connectivity, where a huge number of devices can access a single base station (BS) in a sporadic manner. In this scenario, the conventional grant-based random access solutions designed for human-type communications become very inefficient due to the severe latency caused by device collisions \cite{GF}. Grant-free non-orthogonal random access (GF-NORA) has been considered a promising technique for massive connectivity \cite{GF,iotgf}, where active devices are allowed to directly transmit pilots and data to the BS without waiting for a grant. As such, in each transmission frame, the BS needs to conduct device activity detection (DAD), channel estimation (CE), and data detection (DD). A possible solution to this problem is to divide each transmission frame into a pilot phase and a data phase. The pilot phase is for pilot transmission at the devices and joint DAD and CE at the BS, and the data phase is for data transmission at the devices and DD at the BS. Since the device activity patterns are sporadic, at any given time, only a small and random fraction of all devices are active. Joint DAD and CE can thus be cast into a compressed sensing (CS) problem, where advanced compressed sensing algorithms, such as approximate message passing (AMP) \cite{AMP}, sparse Bayesian learning (SBL) \cite{SBL}, turbo compressed sensing (Turbo-CS) \cite{TCS}, and variance state propagation (VSP) \cite{vsp}, can be used to solve the problem.
For example, the authors in \cite{ampgf} formulated the joint DAD and CE problem as a compressed sensing multiple measurement vector (MMV) problem by assuming multiple receive antennas, and used the AMP algorithm to solve the formulated problem. In \cite{TCS-UAD}, the turbo generalized MMV (GMMV) algorithm was proposed to solve the joint DAD and CE problem in a MIMO system with mixed analog-to-digital converters. In addition to user sparsity, the channel sparsity in the angle domain of the receiver antenna array can also help with joint DAD and CE. For example, \cite{angle} exploited user-angle-domain sparsity in a massive MIMO grant-free system and proposed a Turbo-GMMV-AMP algorithm for the problem. Moreover, machine learning technologies have been applied to further improve the performance. In \cite{dnn}, the authors used a deep neural network to learn the weights involved in message passing to improve the convergence performance. In contrast to the two-phase approach, another line of research proposed a one-phase approach \cite{SD1,SD2}, where the BS is required to conduct joint DAD, CE and DD. As compared to the two-phase approach, the more challenging one-phase approach generally increases the computational complexity, but can achieve significant performance improvement by efficiently exploiting the structure (such as sparsity and low rank) inherent in the channels and the signals. Due to the limited coverage of terrestrial BSs, the development of terrestrial IoT is highly restricted in extreme environments such as deserts, forests, and oceans. Recently, satellites have been considered a potential solution for global IoT services \cite{remote-iot, mmtc, survey}. In particular, low earth orbit (LEO) satellites supplement and extend terrestrial IoT systems, effectively overcoming the environmental constraints faced by terrestrial networks.
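The MMV formulation of joint DAD and CE mentioned above can be made concrete with a toy example: with a pilot matrix $P$ and a row-sparse channel matrix $H$ (one row per device, nonzero iff the device is active), the task is to recover the support and values of $H$ from $Y = PH + W$. The sketch below uses simultaneous OMP rather than the AMP algorithm of the cited works, purely for illustration; all names and dimensions are ours:

```python
import numpy as np

def somp(Y, P, n_active):
    """Simultaneous OMP for the MMV model Y = P H + W, where the rows
    of H are jointly sparse. Greedily picks the device whose pilot
    column best explains the residual across all antennas, then
    re-fits the selected rows by least squares."""
    K = P.shape[1]
    support = []
    R = Y.copy()
    for _ in range(n_active):
        corr = np.linalg.norm(P.conj().T @ R, axis=1)  # energy per device
        corr[support] = 0.0                            # exclude chosen devices
        support.append(int(np.argmax(corr)))
        Ps = P[:, support]
        Hs, *_ = np.linalg.lstsq(Ps, Y, rcond=None)
        R = Y - Ps @ Hs
    H = np.zeros((K, Y.shape[1]), dtype=complex)
    H[support] = Hs
    return sorted(support), H
```

The returned support is the detected activity pattern (DAD) and the nonzero rows are the channel estimates (CE).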
In addition, the LEO satellite-IoT system is immune to natural disasters and can guarantee all-weather communication. Compared with geostationary orbit (GEO) satellites, LEO satellites operate in low-earth orbits with a height typically lower than 2000 km. The shorter communication distance provides a more real-time IoT service. Yet, LEO satellites introduce new challenges due to their high speed and the long distance between satellites and ground devices: \begin{itemize} \item Large transmission delay: The delay of the LEO satellite channel\footnote{The delay refers to the propagation time of the electromagnetic signal between the device and the satellite.}, though much shorter than that of a GEO satellite, is typically more than 1 ms, which is still quite large compared with that in terrestrial networks. Refs. \cite{dop1} and \cite{dop2} proposed to use global navigation satellite system (GNSS) based techniques to synchronize the devices and the satellite, so that the delay of each device can be largely compensated. The residual delay can be handled by existing techniques such as the use of a cyclic prefix (CP) in an orthogonal frequency division multiplexing (OFDM) system. \item Severe Doppler effect: It has been reported in \cite{dopcal} that in a LEO satellite communication system with a carrier frequency in the Ku band, the maximum Doppler shift can be over 200 kHz. In \cite{uad-ce}, the authors considered GF-NORA in LEO satellite-IoT, and assumed that with the help of terrestrial BSs, the Doppler shifts are completely compensated. However, this method is not applicable in remote areas without terrestrial BSs. In addition, with GNSS, the devices can acquire their position information and calculate the Doppler shifts. However, the compensation of the Doppler shifts at the terrestrial device is incomplete, since there is typically more than one path and a terrestrial device can only compensate the Doppler shift of one path.
The residual Doppler shifts of Ku-band signals can be over several thousand Hertz. As such, it is of pressing interest to design a grant-free random access scheme that can reliably handle the severe Doppler effect of the LEO satellite-IoT channel. \end{itemize} In this paper, we assume that with the aid of GNSS, the device delays of the satellite channel can be largely compensated. We then adopt the OFDM technique to deal with the residual delays, provided that the residual delays do not exceed the length of the CP. Besides, since Doppler shift compensation at the IoT devices is expensive and inaccurate, we propose to deal with the Doppler effect at the satellite by assuming that the average Doppler shift of the devices in a beam is estimated and then compensated. We focus on the joint DAD and CE problem in a GF-NORA system for LEO satellite-IoT, where active devices suffer from the residual Doppler effect. To distinguish the Doppler components with high precision, we adopt the OFDM-symbol repetition technique for the pilot design, where a super OFDM symbol is constructed by concatenating repeated regular OFDM symbols. In \cite{zc1, zc2}, the authors designed a long preamble sequence by concatenating circularly shifted replicas of a short Zadoff–Chu (ZC) sequence for random access in satellite communication. In addition, similar repetition techniques have been used for carrier frequency offset estimation for a single user, where the receiver carries out maximum likelihood (ML) estimation of the carrier frequency offset \cite{cof1,cof2,cof3,cof4}. The problem considered in this paper is more challenging. On one hand, due to the multi-path effect, each IoT device generally has more than one Doppler component. On the other hand, there are a large number of devices in the satellite-IoT system. As such, the ML-based estimation methods, if applied directly, may incur a prohibitively high computational complexity.
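The order of magnitude of the Doppler shifts discussed above can be sanity-checked with the first-order formula $f_d = (v/c)f_c$; the carrier frequency and radial velocity below are assumed illustrative values, not numbers from the cited works:

```python
C = 2.998e8  # speed of light [m/s]

def doppler_shift_hz(f_carrier_hz, v_radial_ms):
    """First-order Doppler shift f_d = (v/c) * f_c."""
    return v_radial_ms / C * f_carrier_hz

# Assumed numbers: a 14 GHz Ku-band carrier and a worst case where the
# full LEO orbital speed (~7.6 km/s) projects onto the line of sight.
fd = doppler_shift_hz(14e9, 7.6e3)   # a few hundred kHz
```

This is consistent with the "over 200 kHz" figure reported for Ku-band LEO links.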
To estimate the time-varying channel of satellite-IoT, we represent the channel with a grid-based parametric model, and point out that the time-varying OFDM channel exhibits block sparsity in the delay-Doppler domain. Then, together with the sparsity in the user domain (due to the sporadic transmission of the terrestrial devices), we formulate the joint DAD and CE problem for OFDM-based GF-NORA in LEO satellite-IoT as a sparse signal recovery problem. Many existing compressed sensing algorithms \cite{OMP, AMP, GAMP, TCS, STCS, SBL, PCSBL,vsp} can be applied to provide approximate solutions to the problem. It is known that Bayesian CS algorithms, such as AMP and Turbo-CS, can achieve significant performance improvement over non-Bayesian methods in sparse signal reconstruction. But AMP and Turbo-CS generally rely on a certain randomness property of the measurement matrix to ensure convergence, and the recovery performance of these algorithms may degrade seriously when such randomness is absent. Inspired by the robustness of the VSP algorithm to a broad family of measurement matrices, we extend the VSP algorithm to the massive connectivity scenario by appropriately handling the user sparsity prior, with the resulting algorithm termed modified VSP (MVSP). Specifically, we characterize the channel sparsity structure in the delay-Doppler-user domain with a probability model, which consists of a linear module and a Markov random field (MRF) module. The linear module handles the linear constraint between the received signal and the unknown vector, and the MRF module handles the block-sparse prior in the delay-Doppler domain as well as the sparse prior in the user domain. Different from the original Ising model \cite{ising} in the MRF, we introduce an auxiliary variable to characterize the relationship between the channel states and the device activity. The two modules are iterated until convergence.
The proposed approach generally suffers from the energy leakage problem since the employed parametric channel model is based on a two-dimensional grid in the delay-Doppler domain. To reduce the mismatch between the actual channel and its on-grid representation, an expectation-maximization (EM) based learning method, named EM-MVSP, is proposed to update the delay-Doppler grid parameters. The contributions of this paper are summarized as follows. \begin{itemize} \item We develop a grid-based parametric system model for the OFDM-based satellite-IoT, and formulate the joint DAD and CE problem as a sparse signal recovery problem. Interestingly, we show that the measurement matrix in our considered problem can only be partially manipulated by the design of pilots, and exhibits a special correlation structure caused by the Doppler effect. Our experiments show that most existing compressed sensing algorithms, including AMP and Turbo-CS, behave poorly in the considered problem. \item To distinguish the Doppler components, we adopt the OFDM-symbol repetition technique to increase the frequency resolution of the OFDM system. We show that this OFDM-symbol repetition technique can efficiently improve the DAD and CE performance of the OFDM-based satellite-IoT system. \item We propose the MVSP algorithm for the joint DAD and CE problem, which is robust to the measurement matrix in our problem. Different from the original VSP algorithm, we introduce an auxiliary variable to characterize the relationship between the channel states and the device activity, and thus the DAD can be conducted by the MRF module. \item To alleviate the mismatch of the grid-based model, we further propose the EM-MVSP algorithm to update the grid parameters using the EM method. We show that significant performance improvement can be achieved by the EM-MVSP algorithm, as compared to the counterpart algorithms including AMP, SBL, Turbo-CS and MVSP. \end{itemize} The rest of the paper is organized as follows.
Section II introduces the time-varying satellite-IoT channel and the GF-NORA satellite-IoT system model, and then transforms them into a parametric form. Section III formulates the DAD and CE problem, constructs a probability model, and presents the MVSP algorithm. In Section IV, the MVSP algorithm is extended to the mismatch scenario with the EM framework. Numerical results are given in Section V, and Section VI concludes this paper. The frequently used abbreviations in this paper are summarized in Table \ref{abbr}. \textit{Notation:} We use a boldface lowercase letter and a boldface capital letter to denote a vector and a matrix, respectively. The trace, transpose, conjugate transpose, inverse, vectorization and Frobenius norm of a matrix are denoted by $\text{Tr}(\cdot)$, $(\cdot)^{\text{T}}$, $(\cdot)^{\text{H}}$, $(\cdot)^{-1}$, $\text{vec}(\cdot)$ and $\|\cdot\|_{\text{F}}$, respectively; $\propto$ indicates that the two sides are equal up to a multiplicative constant; $\text{diag}(\boldsymbol{a})$ forms a diagonal matrix with the diagonal elements in $\boldsymbol{a}$; $|\cdot|$ denotes the modulus of a complex number; $\|\cdot\|$ denotes the $\ell^2$ norm; $\mathbb{E}_p[\boldsymbol{x}]$ denotes the expectation of $\boldsymbol{x}$ with respect to distribution $p$; $\otimes$ denotes the Kronecker product; $[N]$ denotes the set $\{1,2,\ldots, N\}$; $\delta(\cdot)$ denotes the Dirac delta function; the Gaussian and complex Gaussian distributions of $\boldsymbol{x}$ with mean $\bar{\boldsymbol{m}}$ and covariance matrix $\boldsymbol{\Sigma}$ are denoted by ${\cal{N}}(\boldsymbol{x}; \bar{\boldsymbol{m}},\boldsymbol{\Sigma})$ and ${\cal{CN}}(\boldsymbol{x}; \bar{\boldsymbol{m}},\boldsymbol{\Sigma})$, respectively. \begin{table} \center \caption{Frequently used abbreviations and corresponding meanings.} \begin{tabular}{ |m{5em}| m{5cm}| m{5em}| m{5cm} | } \hline Abbr. & Meaning & Abbr.
& Meaning \\ \hline AMP & Approximate message passing & CE & Channel estimation \\ \hline CP & Cyclic prefix & CS & Compressed sensing \\ \hline DAD & Device activity detection & EM & Expectation–maximization\\ \hline GAMP & Generalized approximate message passing & GF-NORA & Grant-free non-orthogonal random access\\ \hline GNSS & Global navigation satellite system & IoT & Internet of Things \\ \hline LEO & Low earth orbit & ML & Maximum likelihood\\ \hline MRF & Markov random field & MVSP & Modified variance state propagation \\ \hline NMSE & Normalized mean-squared error & OFDM & Orthogonal frequency division multiplexing \\ \hline OMP & Orthogonal matching pursuit & PCSBL & Pattern-coupled sparse Bayesian learning \\ \hline SBL & Sparse Bayesian learning & SNR & Signal-to-noise ratio \\ \hline STCS & Structured turbo compressed sensing & VSP & Variance state propagation \\ \hline \end{tabular} \label{abbr} \end{table} \section{System Model} \subsection{Time-Varying Satellite-IoT Channel} \begin{figure}[htbp] \centering \includegraphics[width=2.4in]{system-eps-converted-to.pdf} \caption{The LEO satellite-IoT model.} \label{system-model} \end{figure} As illustrated in Fig.\ \ref{system-model}, we consider a multi-beam LEO-satellite system where there exist $K$ potential devices within a beam coverage. We assume that the signals in different beams are orthogonal, i.e., the inter-beam interference is ignored. Then, within a beam, the noiseless baseband received scalar signal from device $k$ at the satellite can be expressed as \begin{equation}\label{rk1} r_k(t) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \bar{h}_k(\tau, \nu)s_k(t-\tau) e^{j2\pi\nu t} {\rm d}\tau {\rm d}\nu , \end{equation} where $\bar{h}_k(\tau,\nu)$ is the channel impulse response at delay $\tau$ and Doppler frequency $\nu$, and $s_k(t)$ is the signal from the $k$-th device.
As different devices are usually geographically separated, we assume that the channel realizations between the satellite and different devices are uncorrelated. We also assume that with the help of GNSS, the delay can be partially pre-compensated at the device, and the satellite uses the Doppler frequency shift at the beam center to compensate all the devices in the beam. As such, the impact of the transmission delay and Doppler frequency shift can be largely eliminated, or in other words, only the residual delay and the residual Doppler frequency shift need to be taken into account. Without loss of generality, we assume that there exist $P_k$ paths for each device $k$, where $P_k$ is a small positive integer since the satellite communication is typically in a weak multipath transmission environment. The residual delay and the residual Doppler shift of the $p$-th path of device $k$ are denoted as $\Bar{\tau}_{k,p}$ and $\Bar{\nu}_{k,p}$, respectively. We assume that $\Bar{\tau}_{k,p}$ and $\Bar{\nu}_{k,p}$ are constant during a transmission frame. The corresponding attenuation of the $p$-th path of device $k$, including path loss, reflection, and processing gains, is characterized by a complex coefficient $\bar{h}_{k,p}$. Therefore, $\bar{h}_k(\tau,\nu)$ can be approximated by \begin{equation}\label{h-spar} \bar{h}_k(\tau,\nu) = \sum\limits_{p=0}^{P_k-1} \bar{h}_{k,p} \delta(\tau- \Bar{\tau}_{k,p}) \delta(\nu- \Bar{\nu}_{k,p}). \end{equation} Substituting \eqref{h-spar} into \eqref{rk1} yields \begin{equation}\label{rk2} r_k(t) = \sum\limits_{p=0}^{P_k-1} \bar{h}_{k,p} s_k(t-\Bar{\tau}_{k,p}) e^{j2\pi\Bar{\nu}_{k,p} t}. \end{equation} \subsection{Grant-Free Satellite-IoT System Model} In this paper, we adopt the grant-free non-orthogonal random access (GF-NORA) scheme, in which the devices share the physical channel resource and directly transmit their signals without requiring the permission of the satellite.
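The multipath model \eqref{rk2} can be simulated directly on sampled signals; the sketch below rounds delays to the sample grid for simplicity (the function name and sampling choices are ours):

```python
import numpy as np

def apply_channel(s, fs, paths):
    """Sampled version of r(t) = sum_p h_p * s(t - tau_p) * exp(j 2 pi nu_p t).
    `paths` is a list of (gain h_p, delay tau_p [s], Doppler nu_p [Hz]);
    delays are rounded to the nearest sample."""
    t = np.arange(len(s)) / fs
    r = np.zeros(len(s), dtype=complex)
    for h, tau, nu in paths:
        d = int(round(tau * fs))
        s_del = np.concatenate([np.zeros(d, dtype=complex), s[:len(s) - d]])
        r += h * s_del * np.exp(2j * np.pi * nu * t)
    return r
```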
Following \cite{GF}, we assume that each transmission frame in GF-NORA consists of two phases, namely, the pilot phase and the data phase. Each device is preassigned a unique non-orthogonal pilot sequence. In the pilot phase, the active devices transmit their pilots, based on which the satellite detects the active devices and estimates their channels. In the data phase, the devices transmit data without the grant from the satellite, and the satellite decodes the data based on the estimated channels of the active devices. Orthogonal frequency division multiplexing (OFDM) is employed for both the pilot and data transmission phases. In this work, we focus on the device activity detection (DAD) and channel estimation (CE) in the pilot phase. The system model of the pilot phase is described as follows. In the pilot phase, we construct a super-symbol by concatenating $N$ repetitions of an OFDM symbol, and a cyclic prefix (CP) is applied to eliminate the inter-symbol interference, as shown in Fig.~\ref{frame}. The duration of such a super-symbol with a CP is $\Bar{T} = NT+ T_{\text{cp}}$, where $T$ is the length of a regular OFDM symbol, $T_{\text{cp}}>\tau_{\text{max}}$ is the length of the CP, and $\tau_{\text{max}}$ is the maximal residual delay over all devices. We note that the above repetition of an OFDM symbol improves the frequency resolution of the receiver, so that the Doppler frequency shifts can be identified more clearly and thus their adverse effects can be more efficiently mitigated. Without loss of generality, we assume that the pilot sequence of a device contains $U$ consecutive super-symbols. The frequency spacing between any two adjacent subcarriers is set to $\Delta f = 1/T$.
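The super-symbol construction above can be sketched in a few lines for a critically sampled baseband signal; the sizes and pilot values below are illustrative, not the paper's parameters:

```python
import numpy as np

def make_super_symbol(x_freq, N, n_cp):
    """One pilot super-symbol: N repetitions of the length-M OFDM symbol
    IFFT(x_freq), preceded by an n_cp-sample cyclic prefix copied from
    the tail of the repeated body."""
    sym = np.fft.ifft(x_freq)                    # one regular OFDM symbol
    body = np.tile(sym, N)                       # N repetitions -> duration N*T
    return np.concatenate([body[-n_cp:], body])  # prepend the CP

M, N, n_cp = 64, 4, 16
rng = np.random.default_rng(1)
x_pilot = np.exp(2j * np.pi * rng.random(M))     # unit-modulus pilot symbols
s_super = make_super_symbol(x_pilot, N, n_cp)    # length N*M + n_cp samples
```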
In the $u$-th super-symbol, the baseband modulated signal at the $k$-th device is given by \begin{subequations} \begin{equation}\label{d} d_{k,u}(t) = \sum\limits_{m=0}^{M-1} x_{k,m,u} e^{j2\pi m \Delta f t} \xi(t-u\Bar{T}), \forall k, \forall u, \end{equation} where $x_{k,m,u}$ is the pilot on the $m$-th subcarrier in the $u$-th super-symbol of the $k$-th device, and \begin{equation}\label{win} \xi(t)=\begin{cases} 1, &t\in \left[-T_{\text{cp}}, NT\right],\\ 0, &\text{otherwise}, \end{cases} \end{equation} is the transmitted rectangular pulse. The baseband pilot signal of device $k$ is given by \begin{equation}\label{s} s_k(t) = \sum\limits_{u=0}^{U-1}d_{k,u}(t), \forall k, \end{equation} where $U$ is the number of super-symbols in a pilot sequence. \end{subequations} \begin{figure*}[htbp] \centering \includegraphics[width=4.5in]{frame-eps-converted-to.pdf} \caption{The structure of a pilot sequence in the pilot phase.} \label{frame} \end{figure*} In each transmission frame, only a small subset of devices are active. To characterize such sporadic transmission, the device activity is represented by an indicator function $\alpha_k$ as \begin{equation}\label{alpha} \alpha_k=\begin{cases} 1, &\text{if device $k$ is active},\\ 0, &\text{if device $k$ is inactive}, \end{cases}\quad k = 1,\cdots, K, \end{equation} with a probability $p(\alpha_k=1) = \rho $ where $\rho \ll 1$. Combining \eqref{rk2}, \eqref{d}, \eqref{s} and \eqref{alpha}, the received baseband signal of all devices at the satellite can be expressed as \begin{align}\label{eq.9} r(t) =& \sum\limits_{k=1}^K r_k(t) +w(t) \nonumber\\ =& \sum\limits_{k=1}^K\alpha_k \sum\limits_{p=0}^{P_k-1} e^{j2\pi\Bar{\nu}_{k,p} t} \bar{h}_{k,p} \sum\limits_{u=0}^{U-1}\sum\limits_{m=0}^{M-1} x_{k,m,u} e^{j2\pi m \Delta f (t-\Bar{\tau}_{k,p})} \xi(t-\Bar{\tau}_{k,p}-u\Bar{T}) + w(t), \end{align} where $w(t)$ is an additive white Gaussian noise (AWGN). 
In a conventional OFDM system with symbol duration $T$, any two subcarriers with minimum frequency shift $\Delta f = 1/T$ are orthogonal to each other, i.e., the frequency resolution is the subcarrier spacing $1/T$. In our super-symbol system, since each super-symbol consists of $N$ repeated regular OFDM symbols, the frequency resolution is $1/(NT)$. In other words, $N$-times oversampling in the frequency domain can be applied in our system. In the $u$-th super-symbol interval, the demodulated signal with $N$-times oversampling is given by \begin{align}\label{y1} y_{n,u} = \frac{1}{NT}\int_{u\Bar{T}}^{u\Bar{T}+NT} &r(t)e^{-j2\pi\frac{n}{N}\Delta f t}{\rm d}t, \;\;\text{for} \;\;n \in\{0,1,\ldots,NM-1\}, \end{align} where the boundary of the observation window of the $u$-th super-symbol specifies the upper and lower limits of the integral. By plugging \eqref{eq.9} into \eqref{y1}, $y_{n,u}$ can be expressed as \begin{subequations} \begin{align}\label{y2} y_{n,u} =& \sum\limits_{k=1}^K\frac{\alpha_k}{NT}\sum\limits_{m=0}^{M-1} x_{k,m,u}\sum\limits_{p=0}^{P_k-1}e^{-j2\pi m \Delta f \Bar{\tau}_{k,p}}\bar{h}_{k,p}\int_{u\Bar{T}}^{u\Bar{T}+NT}e^{j2\pi\Bar{\nu}_{k,p} t} e^{j2\pi (m-\frac{n}{N}) \Delta f t} {\rm d}t+w_{n,u} \nonumber\\ =& \sum\limits_{m=0}^{M-1}\sum\limits_{k=1}^K x_{k,m,u}g_{m,n,k,u}+w_{n,u}, \end{align} where $w_{n,u} = \frac{1}{NT}\int_{u\Bar{T}}^{u\Bar{T}+NT} e^{-j2\pi \frac{n}{N}\Delta f t}w(t){\rm d}t$, and \begin{align} \label{g} g_{m,n,k,u} = & \frac{\alpha_k}{NT}\sum\limits_{p=0}^{P_k-1}e^{-j2\pi m \Delta f \Bar{\tau}_{k,p}}\bar{h}_{k,p} \int_{u\Bar{T}}^{u\Bar{T}+NT}e^{j2\pi\Bar{\nu}_{k,p} t} e^{j2\pi(m-\frac{n}{N})\Delta f t} {\rm d}t\text{.} \end{align} \end{subequations} Let $\boldsymbol{x}_{m,u} = \left[x_{1,m,u}, \cdots, x_{K,m,u}\right]^T$ and $\boldsymbol{g}_{m,n,u}=\left[g_{m,n,1,u}, \cdots, g_{m,n,K,u}\right]^T$.
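The resolution gain from symbol repetition can be verified numerically: over a window of $N$ repeated symbols, DFT bins are spaced $\Delta f/N$ apart, so a tone offset by a fraction $j_0/N$ of the subcarrier spacing lands exactly on an integer bin. A small sketch (dimensions are ours):

```python
import numpy as np

M, N = 64, 4                       # subcarriers per symbol, repetitions
t = np.arange(N * M) / M           # time in units of T (sample rate M/T)
m0, j0 = 10, 3                     # tone at (m0 + j0/N) * Delta_f

# A Doppler-shifted subcarrier, offset by a fraction j0/N of the spacing:
tone = np.exp(2j * np.pi * (m0 + j0 / N) * t)

# Over the length-NT window the DFT resolves steps of Delta_f/N, so the
# fractional offset falls on integer bin N*m0 + j0; a single length-T
# symbol (resolution Delta_f) could not separate this offset.
spec = np.abs(np.fft.fft(tone))
peak_bin = int(np.argmax(spec))
```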
We can rewrite $y_{n,u}$ as \begin{equation}\label{y3} y_{n,u} = \sum\limits_{m=0}^{M-1}\boldsymbol{x}_{m,u}^T \boldsymbol{g}_{m,n,u} + w_{n,u}. \end{equation} Let $\boldsymbol{y}_u=[y_{0,u},\cdots, y_{N\!M-1,u}]^T$. We have \begin{subequations} \begin{align}\label{y-real} \boldsymbol{y}_u &=\boldsymbol{G}_u\boldsymbol{x}_u+ \boldsymbol{w}_u, \end{align} where $\boldsymbol{w}_u = \left[ w_{0,u},\ldots, w_{NM-1,u} \right]^{\text{T}}$ is the AWGN with variance $\sigma^2$, $\boldsymbol{x}_u \triangleq{ \left[\boldsymbol{x}_{0,u}^T,\cdots, \boldsymbol{x}_{M-1,u}^T\right]^T} \!\in \!\mathbb{C}^{MK\times 1}$, and \begin{align}\label{Gu} \boldsymbol{G}_u &\triangleq{ \begin{bmatrix} \boldsymbol{g}_{0,0,u}^T &\cdots &\boldsymbol{g}_{M-1,0,u}^T\\ &\vdots& \\ \boldsymbol{g}_{0,N\!M-1,u}^T &\cdots &\boldsymbol{g}_{M-1,N\!M-1,u}^T \end{bmatrix} } \! \in \!\mathbb{C}^{NM\times MK}. \end{align} \end{subequations} $\boldsymbol{G}_u$ characterizes the channels of all the $M$ subcarriers in the $u$-th super-symbol. Our goal is to estimate $\{\boldsymbol{G}_u\}_{u=0}^{U-1}$ based on $\{\boldsymbol{y}_u\}_{u=0}^{U-1}$ and $\{\boldsymbol{x}_u\}_{u=0}^{U-1}$. A brute-force approach to this problem is infeasible since the number of observations in $\{\boldsymbol{y}_u\}_{u=0}^{U-1}$ is only $UNM$, which is far less than the number of unknown variables in $\{\boldsymbol{G}_u\}_{u=0}^{U-1}$, i.e., $UNM^2K$. As such, we need a more elegant representation of $\{\boldsymbol{G}_u\}_{u=0}^{U-1}$ with fewer unknowns, as detailed in the next subsection. \subsection{Grid-Based Parametric System Model} \label{Paramodel} We now present a grid-based parametric model for the channel $\boldsymbol{G}_u$. From \eqref{g} and \eqref{Gu}, the unknown parameters in $\boldsymbol{G}_u$ include each device's path delays $\{\bar{\tau}_{k,p}\}_{p=0}^{P_k-1}$, path Doppler shifts $\{\bar{\nu}_{k,p}\}_{p=0}^{P_k-1}$ and channel gains $\{\bar{h}_{k,p}\}_{p=0}^{P_k-1}$.
In practical wireless communication scenarios, each channel path may consist of many sub-paths, and the parameters of all these sub-paths are usually difficult to distinguish. Exact identification of these parameters is therefore very challenging. To address this issue, we discretize the delay domain and the Doppler domain into a two-dimensional grid. Instead of estimating the equivalent channel matrix $\boldsymbol{G}_u$, we estimate the representation of the channel on the grid. Then $\boldsymbol{G}_u$ can be recovered based on the grid parameters and the corresponding channel representation. This approach avoids the estimation of each physical path or sub-path separately, and instead considers the overall representation of the physical channel on the delay-Doppler grid. Specifically, for each device $k$, the grid parameters of the delay domain and the Doppler domain are defined as \begin{align} \boldsymbol{\tau}_k &=\left\{\tau_{k,l}\right\}_{l=0}^{L-1},~\tau_{k,l}\in\left[0, \tau_{\text{max}}\right), \nonumber\\ \boldsymbol{\nu}_k &=\left\{\nu_{k,j}\right\}_{j=0}^{J-1},~\nu_{k,j}\in\left[-\nu_{\text{max}}/2, \nu_{\text{max}}/2\right).\nonumber \end{align} As such, the channel $\bar{h}_k(\tau,\nu)$ can be approximated as \begin{equation}\label{h-grid} \bar{h}_k(\tau,\nu) = \sum\limits_{l=0}^{L-1} \sum\limits_{j=0}^{J-1} h_{k,l,j} \delta(\tau- \tau_{k,l}) \delta(\nu- \nu_{k,j}), \end{equation} where $h_{k,l,j}$ is device $k$'s channel representation at grid point $(\tau_{k,l}, \nu_{k,j})$. We rewrite $y_{n,u}$ in \eqref{y2} as \begin{align}\label{eq.17} y_{n,u} \!=& \frac{1}{NT}\int_{u\Bar{T}}^{u\Bar{T}+NT} r(t)e^{-j2\pi \frac{n}{N}\Delta f t}{\rm d}t\nonumber\\ \!= \!& \sum\limits_{k=1}^K\frac{\alpha_k}{NT}\sum\limits_{m=0}^{M-1} x_{k,m,u} \sum\limits_{j=0}^{J-1} \! \int_{u\Bar{T}}^{u\Bar{T}+NT} \!\!\! e^{j2\pi\nu_{k,j} t} e^{j2\pi (m-\frac{n}{N}) \Delta f t} {\rm d}t \sum\limits_{l=0}^{L-1} e^{-j2\pi m \Delta f \tau_{k,l}}h_{k,l,j} + w_{n,u}.
\end{align} Let $\boldsymbol{b}_{k,m} = \left[e^{-j2\pi m \Delta f \tau_{k,0}}, \cdots, e^{-j2\pi m \Delta f \tau_{k,L-1}}\right]^{\text{T}}\in \mathbb{C}^{L\times 1}$, $\boldsymbol{h}_{k,j} = \left[h_{k,0,j}, \cdots, h_{k,L-1,j}\right]^T\in \mathbb{C}^{L\times 1}$, and $c_{k,m,n,j,u}=\frac{1}{NT}\int_{u\Bar{T}}^{u\Bar{T}+NT}e^{j2\pi\nu_{k,j} t} e^{j2\pi (m-\frac{n}{N}) \Delta f t} {\rm d}t$. We rewrite \eqref{eq.17} as \begin{subequations} \begin{align}\label{eq.18} y_{n,u} =& \sum\limits_{k=1}^K\alpha_k\sum\limits_{m=0}^{M-1} x_{k,m,u} \sum\limits_{j=0}^{J-1} c_{k,m,n,j,u} \boldsymbol{b}_{k,m}^T\boldsymbol{h}_{k,j}+w_{n,u} \nonumber\\ =& \sum\limits_{k=1}^K\alpha_k \left[\sum\limits_{m=0}^{M-1} x_{k,m,u} \left(\boldsymbol{b}_{k,m}^T \otimes \boldsymbol{c}_{k,m,n,u}^T\right)\right] \textup{vec}\left(\boldsymbol{H}_k^T\right)+w_{n,u} \nonumber\\ =& \sum\limits_{k=1}^K\alpha_k \boldsymbol{a}_{k,n,u} \textup{vec}\left(\boldsymbol{H}_k^T \right)+w_{n,u}, \end{align} where $\boldsymbol{c}_{k,m,n,u} = \left[c_{k,m,n,0,u}, \cdots, c_{k,m,n,J-1,u}\right]^T\in \mathbb{C}^{J\times 1}$, $\boldsymbol{H}_k = \left[\boldsymbol{h}_{k,0}, \cdots, \boldsymbol{h}_{k,J-1}\right]\in \mathbb{C}^{L\times J}$, and \begin{align} \label{ak} \boldsymbol{a}_{k,n,u} = \sum\limits_{m=0}^{M-1} x_{k,m,u} \left(\boldsymbol{b}_{k,m}^T \otimes \boldsymbol{c}_{k,m,n,u}^T\right)\in \mathbb{C}^{1\times LJ}. 
\end{align} \end{subequations} Then we have \begin{subequations} \begin{align}\label{yu} \boldsymbol{y}_u &=\boldsymbol{A}_u\boldsymbol{h}+ \boldsymbol{w}_u, \end{align} where $\boldsymbol{h} \triangleq{ \left[\alpha_1 \textup{vec}\left(\boldsymbol{H}_1^T\right)^T,\cdots, \alpha_K \textup{vec}\left(\boldsymbol{H}_K^T\right)^T\right]^T} \in \mathbb{C}^{Q\times 1}$, \begin{align} \boldsymbol{A}_u &\triangleq{\begin{bmatrix} \boldsymbol{a}_{1,0,u} & \cdots & \boldsymbol{a}_{K,0,u}\\ & \vdots & \\ \boldsymbol{a}_{1,N\!M-1,u} & \cdots & \boldsymbol{a}_{K,N\!M-1,u} \\ \end{bmatrix}}\in \mathbb{C}^{NM\times Q}, \label{Au} \end{align} \end{subequations} $R=NMU$ and $Q=KLJ$. $\boldsymbol{h}$ combines the device activity indicators and the delay-Doppler channel representations. Note that $\boldsymbol{h}$ is a sparse vector since the channel is sparse in the user-delay-Doppler domain, as elaborated in Section \ref{spar}. Denote by $\boldsymbol{y} \triangleq{\left[\boldsymbol{y}_0^T, \cdots, \boldsymbol{y}_{U-1}^T\right]^T}\in \mathbb{C}^{R\times 1}$ the collection of all the $U$ demodulated super-symbols in the pilot phase. From \eqref{yu}, we obtain \begin{align}\label{y-grid} \boldsymbol{y} = \boldsymbol{A} \boldsymbol{h} + \boldsymbol{w}, \end{align} where $\boldsymbol{A} = \left[\boldsymbol{A}_0^T , \cdots, \boldsymbol{A}_{U-1}^T \right]^T\in \mathbb{C}^{R\times Q}$, and $\boldsymbol{w} = \left[\boldsymbol{w}_0^T , \cdots, \boldsymbol{w}_{U-1}^T \right]^T\in \mathbb{C}^{R\times 1}$. We emphasize that $\boldsymbol{h}$ contains all the unknown channel parameters for the reconstruction of the channels $\{\boldsymbol{G}_u\}_{u=0}^{U-1}$. To be specific, we use model \eqref{y-real} to generate channels and signals in simulation, and use the discretized model \eqref{y-grid} to design the DAD and CE algorithms. We then reconstruct the channel $\boldsymbol{G}_u$ from the estimated $\boldsymbol{h}$.
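As an illustration, a row $\boldsymbol{a}_{k,n,u}$ in \eqref{ak} can be assembled numerically from the grid parameters. The following sketch (the helper name `row_a` is hypothetical; the integral in $c_{k,m,n,j,u}$ is evaluated in closed form, assuming all combined frequencies are nonzero) makes the Kronecker structure explicit:

```python
import numpy as np

def row_a(tau_k, nu_k, x_k_u, n, u, N, M, delta_f, T, T_bar):
    """Assemble a_{k,n,u} = sum_m x_{k,m,u} * kron(b_{k,m}^T, c_{k,m,n,u}^T).

    tau_k (length L) and nu_k (length J) are device k's delay/Doppler grids,
    x_k_u (length M) its pilot symbols in super-symbol u."""
    t0, t1 = u * T_bar, u * T_bar + N * T
    a = np.zeros(len(tau_k) * len(nu_k), dtype=complex)
    for m in range(M):
        b = np.exp(-2j * np.pi * m * delta_f * np.asarray(tau_k))  # b_{k,m}
        f = np.asarray(nu_k) + (m - n / N) * delta_f               # combined frequency per j
        # closed form of (1/NT) * integral of e^{j 2 pi f t} over [t0, t1], f != 0
        c = (np.exp(2j * np.pi * f * t1) - np.exp(2j * np.pi * f * t0)) / (2j * np.pi * f * N * T)
        a += x_k_u[m] * np.kron(b, c)
    return a
```

Stacking these rows over $n$ and over the devices $k$ yields $\boldsymbol{A}_u$ in \eqref{Au}; note that the delay index $l$ runs over the outer Kronecker factor and the Doppler index $j$ over the inner one, matching the ordering of $\textup{vec}(\boldsymbol{H}_k^T)$.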
Suppose that $\Hat{\boldsymbol{h}}$ is an estimate of $\boldsymbol{h}$, $\Hat{h}_{k,l,j}$ is the estimate corresponding to the $(k,l,j)$-th grid point, and $\Hat{\alpha}_k$ is an estimate of $\alpha_k$. Then the recovered channel is given by \begin{subequations} \begin{equation} \label{G-hat} \Hat{\boldsymbol{G}}_u \triangleq{\begin{bmatrix} \Hat{\boldsymbol{g}}_{0,0,u}^T & \cdots & \Hat{\boldsymbol{g}}_{M-1,0,u}^T \\ & \vdots & \\ \Hat{\boldsymbol{g}}_{0,N\!M-1,u}^T & \cdots & \Hat{\boldsymbol{g}}_{M-1,N\!M-1,u}^T \end{bmatrix}}, \end{equation} where $\Hat{\boldsymbol{g}}_{m,n,u}=\left[\Hat{g}_{m,n,1,u}, \Hat{g}_{m,n,2,u}, \cdots, \Hat{g}_{m,n,K,u}\right]^T$, and \begin{align} \Hat{g}_{m,n,k,u} =& \frac{\Hat{\alpha}_k}{NT}\sum\limits_{l=0}^{L-1} \sum\limits_{j=0}^{J-1}e^{-j2\pi m \Delta f \tau_{k,l}}\Hat{h}_{k,l,j} \int_{u\Bar{T}}^{u\Bar{T}+NT}e^{j2\pi\nu_{k,j} t} e^{j2\pi (m-\frac{n}{N}) \Delta f t} {\rm d}t. \end{align} \end{subequations} The normalized mean-squared error (NMSE) of the channel estimation is defined by \begin{align} \label{nmse} \text{NMSE} = \frac{1}{U}\sum_{u=0}^{U-1}\frac{\mathbb{E}\left[\|\hat{\boldsymbol{G}}_u-\boldsymbol{G}_u\|_\text{F}^2\right]}{\mathbb{E}\left[\|\boldsymbol{G}_u\|_{\text{F}}^2\right]}. \end{align} \subsection{Channel Sparsity} \label{spar} It is well known that channel sparsity can be exploited to significantly reduce the number of pilots required in channel estimation. In this regard, the channel in our considered satellite-IoT scenario exhibits the following sparsity structure: \begin{enumerate} \item Sparsity in the user domain: Most of the devices are inactive at any given time, i.e., most of $\{\alpha_k\}$ are zero. \item Sparsity in the delay-Doppler domain: Since satellite communication occurs in a weak multi-path transmission environment, the number of dominant paths between the satellite and each device is limited.
\item Block-sparsity in the delay-Doppler domain: The scattering effect of the electromagnetic waves causes delay and Doppler spread in wireless channels. Besides, the grid mismatch causes additional spreading in the delay-Doppler domain. These effects make the channel coefficients appear in clusters in the delay-Doppler domain. \end{enumerate} Fig. \ref{bs} illustrates the sparsity structure of $\boldsymbol{h}$ in the delay-Doppler-user domain. This sparsity structure is exploited in the design of the receiver. \begin{figure}[htbp] \centering \includegraphics[width=2in]{sparsity.pdf} \caption{An illustration of the channel sparsity structure.} \label{bs} \end{figure} \subsection{Problem Description} \label{PD} Recall that the receiver of the considered grant-free satellite-IoT system carries out joint DAD and CE. With \eqref{y-grid} and the discussions in Subsection D, the joint DAD and CE problem is to estimate a sparse vector $\boldsymbol{h}$ given the observation $\boldsymbol{y}$. Various compressed sensing algorithms have been proposed to solve the linear inverse problem in the form of \eqref{y-grid}. However, popular Bayesian algorithms, such as AMP and Turbo-CS, suffer from significant performance degradation when applied to our problem. This is because AMP and Turbo-CS generally rely on the randomness of the measurement matrix $\boldsymbol{A}$ to ensure convergence. For example, the recovery performance of AMP is guaranteed only when the elements of $\boldsymbol{A}$ are independently and identically distributed (i.i.d.) Gaussian; as for Turbo-CS, $\boldsymbol{A}$ is required to be right-rotationally invariant. When the randomness requirement on $\boldsymbol{A}$ is not met, the recovery performance of the corresponding algorithms degrades severely. In this work, the structure of $\boldsymbol{A}$ cannot be designed arbitrarily. From \eqref{ak} and \eqref{Au}, each row of $\boldsymbol{A}$ is the sum of a series of Kronecker products.
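To see why such rows are correlated, consider a single Kronecker term: in $\boldsymbol{b}_{k,m}^T \otimes \boldsymbol{c}_{k,m,n,u}^T$, every block of $J$ consecutive entries is the same vector $\boldsymbol{c}$ scaled by one entry of $\boldsymbol{b}$. A small sketch with randomly drawn, purely illustrative $\boldsymbol{b}$ and $\boldsymbol{c}$ (not actual pilot-based vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
L, J = 8, 8
# unit-modulus delay steering vector and a generic Doppler-dependent vector
b = np.exp(-2j * np.pi * rng.random(L))
c = rng.random(J) * np.exp(2j * np.pi * rng.random(J))
row = np.kron(b, c)  # one Kronecker term of a row of A

# the amplitude pattern |c| repeats in every length-J block, scaled by |b_l|
mags = np.abs(row).reshape(L, J)
assert np.allclose(mags, np.abs(b)[:, None] * np.abs(c)[None, :])
```

Summing several such terms over $m$ (cf. \eqref{ak}) preserves this block pattern, which is what the thermogram in Fig. \ref{ther1} visualizes.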
This introduces correlation between the elements in the same row of $\boldsymbol{A}$. Fig. \ref{ther1} shows the local thermogram of $\boldsymbol{A}$ in a random experiment, which reflects the amplitude of the elements in $\boldsymbol{A}$. It is clear that the amplitudes of the elements in the same row are correlated, rather than independently and identically distributed. As a comparison, Fig. \ref{ther2} shows the thermogram of a matrix of the same size with each element randomly drawn from the standard complex Gaussian distribution. Fig. \ref{ther3} shows the local thermogram of $\boldsymbol{AF}$ where $\boldsymbol{F}$ is a discrete Fourier transform (DFT) matrix (which is unitary). Clearly, $\boldsymbol{AF}$ and $\boldsymbol{A}$ show different patterns. Therefore $\boldsymbol{A}$ is not a right-rotationally invariant matrix, and thus does not meet the randomness requirements of either AMP or Turbo-CS. We will see that AMP and Turbo-CS perform poorly for this task in simulations. Considering the special structure of $\boldsymbol{A}$ in \eqref{y-grid}, we follow the variance state propagation (VSP) framework proposed in \cite{vsp} and propose the MVSP algorithm, derived in the next section, to solve this sparse signal recovery problem. Compared with AMP and Turbo-CS, the MVSP algorithm is more robust to the structure of the measurement matrix. \begin{figure*}[htbp] \centering \subfigure[]{\label{ther1} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.2in]{ther1-eps-converted-to.pdf} \end{minipage} } \subfigure[]{\label{ther2} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.2in]{ther2-eps-converted-to.pdf} \end{minipage} } \subfigure[]{\label{ther3} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=2.2in]{ther3-eps-converted-to.pdf} \end{minipage} } \centering \caption{ Thermograms of random matrices.
(a) Local (the first 100 rows and the first 100 columns) thermogram of $\boldsymbol{A}$ in \eqref{y-grid} in a random experiment. (b) Thermogram of a matrix with each entry randomly drawn from a standard complex Gaussian distribution. (c) Local (the first 100 rows and the first 100 columns) thermogram of $\boldsymbol{AF}$ where $\boldsymbol{F}$ is a unitary DFT matrix.} \end{figure*} \section{Receiver Design for Joint DAD and CE} In this section, we first introduce the probability model and the problem formulation, and then derive the MVSP algorithm based on a factor-graph representation of the probability model. \subsection{Probability Model} For notational convenience, we rewrite $\boldsymbol{h}$ as \begin{align} \boldsymbol{h}= \left[h_{1,1}, h_{1,2},\cdots,h_{1,LJ},\cdots,h_{K,1}, h_{K,2},\cdots,h_{K,LJ}\right]^T.\nonumber \end{align} Similarly to \cite{vsp-iccc}, we assign the sparse channel $\boldsymbol{h}$ a conditional Gaussian prior as \begin{align}\label{eq.24} p\left(\boldsymbol{h}|\boldsymbol{v}\right) = \prod_{k=1}^K \prod_{i=1}^{LJ} p\left(h_{k,i}|v_{k,i}\right), \end{align} where $\boldsymbol{v} = \left[v_{1,1},v_{1,2},\cdots,v_{K,LJ}\right]$, and $p\left(h_{k,i}|v_{k,i}\right) = \mathcal{C}\mathcal{N}\left(h_{k,i};0,v_{k,i}\right)$ is a circularly symmetric complex Gaussian (CSCG) distribution with zero mean and variance $v_{k,i}$, $k\in\left\{1,\cdots,K\right\}$, $i\in\left\{1,\cdots,LJ\right\}$. Each $v_{k,i}$ is assigned with a conditionally independent distribution given by \begin{align}\label{eq.25} p\left(v_{k,i}|s_{k,i}\right) = & \textup{Gamma}\left(v_{k,i};\gamma_{k,1},\gamma_{k,2}\right) \delta(s_{k,i}-1) +\delta(v_{k,i})\delta(s_{k,i}+1), \end{align} where $s_{k,i} \in \{-1, 1\}$ is a hidden binary state; $\textup{Gamma}\left(v_{k,i};\gamma_{k,1},\gamma_{k,2}\right)$ is the Gamma distribution \begin{equation}\label{gamma} \textup{Gamma}\!\left(v_{k,i};\gamma_{k,1},\gamma_{k,2}\right)\! =\!
\begin{cases} \frac{\gamma_{k,2}^{\gamma_{k,1}} v_{k,i}^{\gamma_{k,1}-1} e^{-\gamma_{k,2} v_{k,i}}}{\Gamma\left(\gamma_{k,1}\right)},&\!v_{k,i} > 0,\\ 0, &\!\textup{otherwise}, \end{cases} \end{equation} with $\Gamma\left(\gamma_{k,1}\right)=\int_0^{\infty} t^{\gamma_{k,1}-1} e^{-t} {\rm d}t$ being the Gamma function. Then, a Markov random field (MRF) prior is used to characterize the sparse structure of $\boldsymbol{v}$. The joint probability of the hidden state and device activity variables is modeled as \begin{equation}\label{mrf} p\left(\boldsymbol{s}_k, \alpha_k\right) \!\propto\! \prod\limits_{i=1}^{LJ}\prod\limits_{z\in \mathcal{D}_i}\left(\varphi\left(s_{k,z},s_{k,i}\right)\right)^{\frac{1}{2}}\!\psi\left(\ell_{k}, \alpha_k\right)p(\alpha_k), \end{equation} where $\boldsymbol{s}_k = \left[s_{k,1},s_{k,2},\cdots,s_{k,LJ}\right]$, $\varphi\left(s_{k,z},s_{k,i}\right) = \exp\left(\beta s_{k,z}s_{k,i}\right)$, and $\ell_k =\sum_{i=1}^{LJ}s_{k,i}$; $\mathcal{D}_i$ denotes the index set of the left, right, top, and bottom neighbors of $s_{k,i}$, i.e., $\{i-1,i+1,i-J,i+J\}$; $\beta$ is the parameter of the MRF, corresponding to the average size of non-zero blocks. Different from the original Ising model \cite{ising}, we add the factor $ \psi\left(\ell_{k},\alpha_k\right)$, which represents the conditional probability density of $\ell_k$ given $\alpha_k$. Note that $\ell_k$ is discrete since each $s_{k,i}$ is binary. However, since $LJ$ is large, $\ell_k$ for an active device $k$ (i.e., $\alpha_k=1$) can be well approximated by a Gaussian random variable. Thus, we have \begin{subequations} \begin{equation}\label{psi} \psi\left(\ell_{k},\alpha_k\right)\! =\! {\cal{N}}(\ell_k; m_{\psi}, \sigma^2_{\psi})\delta(\alpha_k\! -\! 1)\!+\!
\delta(\ell_k\!+\!1)\delta(\alpha_k), \end{equation} where $m_{\psi}$ and $\sigma^2_{\psi}$ are the mean and variance of $\ell_k$ conditioned on $\alpha_k\!=\!1$, respectively given by $m_{\psi}=LJ(2\rho_s-1)$ and $\sigma^2_{\psi}=4LJ\rho_s(1-\rho_s)$, \end{subequations} and $\rho_s$ is the sparsity rate of $\{v_{k,i}\}_{i=1}^{LJ}$, i.e., $p(s_{k,i}=1)=\rho_s$. The joint probability $p\left(\boldsymbol{y},\boldsymbol{h},\boldsymbol{v},\boldsymbol{s},\boldsymbol{\alpha}\right)$ can be decomposed as \begin{align}\label{eq.30} p\left(\boldsymbol{y},\boldsymbol{h},\boldsymbol{v},\boldsymbol{s},\boldsymbol{\alpha}\right) =& p\left(\boldsymbol{y}|\boldsymbol{h}\right)p\left(\boldsymbol{h}|\boldsymbol{v}\right)p\left(\boldsymbol{v}|\boldsymbol{s}\right)p\left(\boldsymbol{s}, \boldsymbol{\alpha}\right)\nonumber\\ =&p\left(\boldsymbol{y}|\boldsymbol{h}\right)\prod\limits_{k=1}^K \prod\limits_{i=1}^{LJ} p\left(h_{k,i}|v_{k,i}\right)p\left(v_{k,i}|s_{k,i}\right)\prod\limits_{k=1}^K p\left(\boldsymbol{s}_k, \alpha_k\right), \end{align} where $\boldsymbol{s}=\left[\boldsymbol{s}_1, \cdots,\boldsymbol{s}_K\right]$ and $\boldsymbol{\alpha}=\left[\alpha_1, \cdots,\alpha_K\right]$. The dependencies of the random variables in the factorization \eqref{eq.30} can be shown by a factor graph as depicted in Fig.~\ref{fg}, where circles represent variable nodes and squares represent factor nodes. The factor nodes in Fig.~\ref{fg} are defined as \begin{subequations} \begin{align} \zeta_{k,i}&:p\left(v_{k,i}|s_{k,i}\right),\\ \eta_{k,i}&:p\left(h_{k,i}|v_{k,i}\right) = \mathcal{C}\mathcal{N}\left(h_{k,i};0,v_{k,i}\right),\\ \iota &:p\left(\boldsymbol{y}|\boldsymbol{h}\right) = \mathcal{C}\mathcal{N}\left(\boldsymbol{y}-\boldsymbol{A}\boldsymbol{h};\boldsymbol{0},\sigma^2\boldsymbol{I}\right),\\ \chi_k&:\delta\left(\ell_k-\sum_is_{k,i}\right),\\ \psi_k&:\psi(\ell_k, \alpha_k).
\end{align} \end{subequations} The factor graph in Fig.~\ref{fg} includes two modules, namely, the linear module that handles the linear constraint in \eqref{y-grid} and the MRF module that handles the MRF prior in \eqref{mrf}. \begin{figure}[htbp] \centering \includegraphics[width=3.2in]{factorgraph-eps-converted-to.pdf} \caption{The factor graph characterizing the hierarchical probability model for sparse signals. } \label{fg} \end{figure} \subsection{MVSP Algorithm} The MVSP algorithm is a sum-product message passing algorithm defined on Fig.~\ref{fg}. A major difference between MVSP and the original VSP in \cite{vsp} is that variable nodes $\{\ell_k\}$ and $\{\alpha_k\}$ are added to the factor graph for device activity detection, which involves more sophisticated message updates in the MRF module. The details of the MVSP are presented in the following. Let $\varpi_{a\to b}$ denote the message passing from node $a$ to node $b$. In Fig.~\ref{fg}, node $v_{k,i}$ receives a message $\varpi_{\eta_{k,i} \to v_{k,i}}$ from node $\eta_{k,i}$. From the sum-product rule, the message from $v_{k,i}$ to $\zeta_{k,i}$ is still given by $\varpi_{\eta_{k,i} \to v_{k,i}}$. Then the message from $\zeta_{k,i}$ to $s_{k,i}$ is a Bernoulli distribution given by \begin{align}\label{eq.36} \varpi_{\zeta_{k,i} \to s_{k,i}} \! \propto\! \int_{v_{k,i}} p\left(v_{k,i}|s_{k,i}\right)\varpi_{\eta_{k,i} \to v_{k,i}} = \pi_{\zeta_{k,i} \to s_{k,i}}\delta\left(s_{k,i}-1\right) \!+\!\left(1-\pi_{\zeta_{k,i} \to s_{k,i}}\right)\delta\left(s_{k,i}+1\right), \end{align} where $\pi_{\zeta_{k,i} \to s_{k,i}}$ is the probability of $s_{k,i} = 1$ specified in the message $\varpi_{\zeta_{k,i} \to s_{k,i}}$. \begin{figure}[htbp] \centering \includegraphics[width=3.4in]{MRF2-eps-converted-to.pdf} \caption{Illustration of the MRF module in MVSP. } \label{fig2} \end{figure} We now describe the message passing involved in the MRF \eqref{mrf}. 
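As a brief aside before detailing the MRF messages, the generative side of the hierarchical prior \eqref{eq.24}-\eqref{gamma} can be sketched as a toy sampler. The sketch below takes a support pattern $\boldsymbol{s}$ as a direct input (sidestepping MRF sampling), and the parameter values in the test are illustrative:

```python
import numpy as np

def sample_v_h(s, gamma1, gamma2, rng):
    """Draw (v, h) from p(v|s) p(h|v):
    v ~ Gamma(shape=gamma1, rate=gamma2) where s=+1 and v=0 where s=-1,
    then h ~ CN(0, v) elementwise."""
    s = np.asarray(s)
    v = np.where(s == 1, rng.gamma(gamma1, 1.0 / gamma2, size=s.shape), 0.0)
    h = np.sqrt(v / 2) * (rng.standard_normal(s.shape) + 1j * rng.standard_normal(s.shape))
    return v, h
```

Entries with $s_{k,i}=-1$ are exactly zero, while active entries have average power $\mathbb{E}[v_{k,i}]=\gamma_{k,1}/\gamma_{k,2}$, which is why the later approximations \eqref{f2s} and \eqref{v2g} use $\gamma_{k,1}/\gamma_{k,2}$ as the reference scale.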
A 4-connected MRF is used in MVSP to leverage the block sparsity of each $\boldsymbol{s}_k$ in the delay-Doppler domain. The detailed factor graph characterizing the 4-connected MRF module is given in Fig.~\ref{fig2}. For clarity, the left, right, top, and bottom neighbors of $s_{k,i}$ are reindexed as $\mathcal{I}_i=\left\{i_\text{L}, i_\text{R}, i_\text{T}, i_\text{B}\right\}$ (i.e., $s_{k,i_\text{L}}=s_{k,i-1}$, $s_{k,i_\text{R}}=s_{k,i+1}$, $s_{k,i_\text{T}}=s_{k,i+J}$, $s_{k,i_\text{B}}=s_{k,i-J}$). The left, right, top, and bottom incoming messages of $s_{k,i}$, denoted as $\varpi_{k,i}^\text{L}$, $\varpi_{k,i}^\text{R}$, $\varpi_{k,i}^\text{T}$, and $\varpi_{k,i}^\text{B}$, are Bernoulli distributions. By defining $\mathcal{J}=\left\{\text{L}, \text{R}, \text{T}, \text{B}\right\}$, the incoming message of $s_{k,i}$ from the left is given by \begin{subequations} \begin{align}\label{mesgl} \varpi_{k,i}^\text{L} &\!\propto\! \int_{s_{k,i_\text{L}}} \varpi_{\zeta_{k,i_\text{L}} \to s_{k,i_\text{L}}} \prod\limits_{\jmath\in\mathcal{J}\setminus \text{R}} \varpi^{\jmath}_{k,i_\text{L}} \varphi\left(s_{k,i},s_{k,i_\text{L}} \right)\varpi_{\chi_{k} \to s_{k,i_\text{L}}}\nonumber\\ &=\lambda_{k,i}^\text{L}\delta\left(s_{k,i}-1\right)+\left(1-\lambda_{k,i}^\text{L}\right)\delta\left(s_{k,i}+1\right), \end{align} where $\mathcal{J}\setminus \text{R}$ denotes the set $\mathcal{J}$ excluding the element ``R'', $\varpi_{\chi_{k} \to s_{k,i_\text{L}}}$ is the message from $\chi_{k}$ to $s_{k,i_\text{L}}$, given by \begin{align} \label{chi2s} \varpi_{\chi_{k} \to s_{k,i_\text{L}}} = & \;\pi_{\chi_{k} \to s_{k,i_\text{L}}} \delta\left(s_{k,i_\text{L}} -1 \right) + \left(1-\pi_{\chi_{k} \to s_{k,i_\text{L}}}\right)\delta\left(s_{k,i_\text{L}}\ + 1\right), \end{align} and $\lambda_{k,i}^\text{L}$ is given by \eqref{lambdal}. \end{subequations} \begin{figure*}[bp] \hrule \begin{align}\label{lambdal} \lambda_{k,i}^\text{L}\!=\!
\frac{e^{\beta}\pi_{\zeta_{k,i_\text{L}} \to s_{k,i_\text{L}}}\pi_{\chi_{k} \to s_{k,i_\text{L}}}\!\!\prod\limits_{\jmath\in\mathcal{J}\setminus \text{R}}\!\!\lambda_{k,i_\text{L}}^{\jmath}+e^{-\beta}\left(1\!-\!\pi_{\zeta_{k,i_\text{L}} \to s_{k,i_\text{L}}}\right)\!\!\left(1\!-\!\pi_{\chi_{k} \to s_{k,i_\text{L}}}\right)\prod\limits_{\jmath\in\mathcal{J}\setminus \text{R}}\left(1-\lambda_{k,i_\text{L}}^{\jmath}\right)} {\left(e^{\beta}\!+\!e^{-\beta}\right)\!\!\left(\!\pi_{\zeta_{k,i_\text{L}} \to s_{k,i_\text{L}}}\pi_{\chi_{k} \to s_{k,i_\text{L}}} \!\!\prod\limits_{\jmath\in\mathcal{J}\setminus \text{R}} \!\! \lambda_{k,i_\text{L}}^{\jmath} \!+\!\left(1\!-\!\pi_{\zeta_{k,i_\text{L}} \to s_{k,i_\text{L}}}\right)\!\!\left(\!1-\!\pi_{\chi_{k} \to s_{k,i_\text{L}}}\!\right)\!\prod\limits_{\jmath\in\mathcal{J}\setminus \text{R}}\!\left(1-\lambda_{k,i_\text{L}}^{\jmath}\right)\!\!\right)} \end{align} \end{figure*} The incoming messages of $s_{k,i}$ from the right, top, and bottom, i.e., $\varpi_{k,i}^\text{R}$, $\varpi_{k,i}^\text{T}$, and $\varpi_{k,i}^\text{B}$, have a form similar to $\varpi_{k,i}^\text{L}$. The message from $\chi_{k}$ to $\ell_k$ is expressed as \begin{align}\label{chi2ell} \varpi_{\chi_{k} \to \ell_k} \!\propto\! \int_{\boldsymbol{s}_{k}}\delta\left(\ell_{k}-\sum_{i}s_{k,i}\right) \prod_i\left(\varpi_{\zeta_{k,i} \to s_{k,i}} \prod\limits_{\jmath\in\mathcal{J}} \varpi_{k,i}^{\jmath} \right)\!.
\end{align} Similarly to \eqref{psi}, we approximate \eqref{chi2ell} with a Gaussian distribution: \begin{subequations} \label{chi2ell_appro} \begin{align}\label{chi2ell2} \varpi_{\chi_{k} \to \ell_k} & = {\cal{N}}(\ell_k; m_{\chi_k\to \ell_k}, \sigma_{\chi_k\to \ell_k}^2), \end{align} where $m_{\chi_k \to \ell_k}$ and $\sigma^2_{\chi_k \to \ell_k}$ are the mean and variance, respectively given by $m_{\chi_k \to \ell_k} =\sum_i(2\pi_{s_{k,i} \to \chi_k}-1)$ and $\sigma_{\chi_k\to \ell_k}^2 = 4\sum_i\pi_{s_{k,i}\to \chi_k}(1-\pi_{s_{k,i}\to \chi_k})$, and \begin{equation}\label{chi2ell3} \pi_{s_{k,i}\to \chi_k} \!=\!\frac{\pi_{\zeta_{k,i} \to s_{k,i}} \prod\limits_{\jmath\in {\cal{J}}}\lambda_{k,i}^{\jmath}}{\pi_{\zeta_{k,i} \to s_{k,i}} \!\prod\limits_{\jmath\in {\cal{J}}}\!\lambda_{k,i}^{\jmath} + (1\!-\!\pi_{\zeta_{k,i} \to s_{k,i}}) \prod\limits_{\jmath\in {\cal{J}}}(\!1\!-\lambda_{k,i}^{\jmath})}. \end{equation} \end{subequations} The message from $\psi_{k}$ to $\alpha_{k}$ is given by \begin{subequations} \begin{align} \varpi_{\psi_{k} \to \alpha_{k}}\! &\!\propto\! \int_{\ell_k} \psi(\ell_k,\alpha_k) \varpi_{\chi_{k} \to \ell_k} = \pi_{\psi_{k} \to \alpha_{k}}\delta(\alpha_{k}-1)+(1-\pi_{\psi_{k} \to \alpha_{k}})\delta(\alpha_{k}),\label{psi2alpha1} \end{align} with $\varpi_{\ell_{k} \to \psi_{k}}\! = \!\varpi_{\chi_{k} \to \ell_{k}}$, where \begin{align} \pi_{\psi_{k} \to \alpha_{k}} &= \frac{\pi_{\psi_{k} \to \alpha_{k}, 1}}{\pi_{\psi_{k} \to \alpha_{k}, 1} + \pi_{\psi_{k} \to \alpha_{k}, 0}}, \label{psi2alpha2}\\ \pi_{\psi_{k} \to \alpha_{k}, 1} & = \frac{ e^{-(m_{\chi_k \to \ell_k} - m_{\psi})^2/(2\sigma_{\chi_k \to \ell_k}^2+ 2\sigma_{\psi}^2) } }{\sqrt{2\pi(\sigma^2_{\chi_k \to \ell_k}+\sigma_{\psi}^2)}},\\ \pi_{\psi_{k} \to \alpha_{k}, 0} & = \frac{e^{-(-1-m_{\chi_k \to \ell_k})^2/(2\sigma_{\chi_k \to \ell_k}^2)}}{\sqrt{2\pi\sigma^2_{\chi_k \to \ell_k}}}.
\end{align} \end{subequations} The message from $\psi_k$ to $\ell_k$ is constant and given by \begin{align}\label{psi2ell} \varpi_{\psi_k \to \ell_k} & = \int_{\alpha_k} p(\alpha_k) \psi(\ell_k, \alpha_k) = \rho {\cal{N}}(\ell_k; m_{\psi}, \sigma_{\psi}^2) + (1-\rho)\delta(\ell_k+1). \end{align} Then we calculate the message from $\chi_{k}$ to $s_{k,i}$ with a similar form of \eqref{chi2s}. To obtain $\pi_{\chi_k \to s_{k,i}}$, we first calculate $\lambda_{\chi_k \to s_{k,i}}$ based on the sum-product rule: \begin{align} \lambda_{\chi_k \to s_{k,i}} =& \int_{\boldsymbol{s}_k\setminus s_{k,i}, \ell_k} \prod_{i'\neq i} \left(\varpi_{\zeta_{k,i'} \to s_{k,i'}} \prod_{\jmath\in{\cal{J}}} \varpi_{k,i'}^\jmath \right) \varpi_{\psi_{k} \to \ell_{k}}\delta\left(\ell_k-\sum_is_{k,i}\right)\nonumber\\ =& \int_{ \ell_k} \!\varpi_{\psi_{k} \to \ell_{k}} \!\int_{\boldsymbol{s}_k\setminus s_{k,i}} \prod_{i'\neq i} \left(\varpi_{\zeta_{k,i'} \to s_{k,i'}} \prod_{\jmath\in{\cal{J}}} \varpi_{k,i'}^\jmath \right) \delta\left(\ell_k - s_{k,i}-\sum_{i' \neq i}s_{k,i'}\right), \end{align} where $\boldsymbol{s}_k\setminus s_{k,i}$ denotes all the entries of $\boldsymbol{s}_k$ except $s_{k,i}$. The inner integral is approximated by \begin{subequations} \label{eq.33} \begin{align} {\cal{N}}(\ell_k-s_{k,i}; m_{\chi_k \to s_{k,i}}, \sigma^2_{\chi_k \to s_{k,i}}), \end{align} where $m_{\chi_k \to s_{k,i}}$ and $\sigma^2_{\chi_k \to s_{k,i}}$ are respectively given by \begin{align} m_{\chi_k \to s_{k,i}} & = \sum_{i' \neq i}(2\pi_{s_{k,i'} \to \chi_k}-1), \\ \sigma^2_{\chi_k \to s_{k,i}} & = 4\sum_{i' \neq i}\pi_{s_{k,i'} \to \chi_k}(1- \pi_{s_{k,i'} \to \chi_k}). \end{align} \end{subequations} As such, $\lambda_{\chi_k \to s_{k,i}}$ is given by \begin{align} \label{eq.34} \lambda_{\chi_k \to s_{k,i}}\! =\! (1-\rho) {\cal{N}}(s_{k,i}; -1-m_{\chi_k\to s_{k,i}}, \sigma^2_{\chi_k \to s_{k,i}}) \!+ \! \rho {\cal{N}}(s_{k,i}; m_{\psi}-m_{\chi_k\to s_{k,i}}, \sigma^2_{\chi_k \to s_{k,i}} \!+\!
\sigma_{\psi}^2). \end{align} Then, $\pi_{\chi_k \to s_{k,i}}$ is given by \begin{align} \label{pi-chi2s} \pi_{\chi_k \to s_{k,i}}\! =\! \frac{\lambda_{\chi_k \to s_{k,i}}(s_{k,i}=1)}{\lambda_{\chi_k \to s_{k,i}}(s_{k,i}\!=\!1)+\lambda_{\chi_k \to s_{k,i}}(s_{k,i}=\!-1\!)}. \end{align} The output message of $s_{k,i}$ can be calculated as \begin{subequations} \begin{align}\label{s2zeta} \varpi_{s_{k,i} \to \zeta_{k,i}} = & \pi_{s_{k,i} \to \zeta_{k,i}}\delta\left(s_{k,i}-1\right)+\left(1-\pi_{s_{k,i} \to \zeta_{k,i}}\right)\delta\left(s_{k,i}+1\right), \end{align} where \begin{align} \pi_{s_{k,i} \to \zeta_{k,i}} \!= \!\frac{\pi_{\chi_{k} \to s_{k,i}} \prod\limits_{\jmath\in\mathcal{J}}\lambda_{k,i}^{\jmath}}{\pi_{\chi_k \to s_{k,i}} \!\prod\limits_{\jmath\in\mathcal{J}}\!\lambda_{k,i}^{\jmath} \!+\! (1-\pi_{\chi_{k} \to s_{k,i}}) \!\prod\limits_{\jmath\in\mathcal{J}}\!(1-\lambda_{k,i}^{\jmath})}. \end{align} \end{subequations} With $\varpi_{s_{k,i} \to \zeta_{k,i}}$, the message from $\zeta_{k,i}$ to $v_{k,i}$ is a Bernoulli-Gamma distribution given by \begin{align}\label{zetatov} \varpi_{\zeta_{k,i} \to v_{k,i}} \!\! \propto \!\!\!\! \int_{s_{k,i}}\!\!\!\!\! p\left(v_{k,i}|s_{k,i}\right)\varpi_{s_{k,i} \to \zeta_{k,i}} \!\!\!= \! \pi_{s_{k,i} \to \zeta_{k,i}}\textup{Gamma}\left(v_{k,i};\gamma_{k,1},\gamma_{k,2}\right) \!\!+\!\! \left(1 \!-\!\pi_{s_{k,i} \to \zeta_{k,i}}\right)\!\delta\!\left(v_{k,i}\right). \end{align} With $\varpi_{v_{k,i} \to \eta_{k,i}} = \varpi_{\zeta_{k,i} \to v_{k,i}}$, the message from $\eta_{k,i}$ to $h_{k,i}$ is given by \begin{equation}\label{eq.43} \varpi_{\eta_{k,i} \to h_{k,i}} \propto \int_{v_{k,i}} p\left(h_{k,i}|v_{k,i}\right)\varpi_{v_{k,i} \to \eta_{k,i}}. \end{equation} The message from $h_{k,i}$ to $\iota$ is $\varpi_{h_{k,i} \to \iota} =\varpi_{\eta_{k,i} \to h_{k,i}}$. 
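The Bernoulli message combinations above (e.g., \eqref{chi2ell3} and \eqref{s2zeta}) all share the same sum-product form; a small helper (the function name is hypothetical, for illustration) makes this explicit:

```python
import numpy as np

def combine_bernoulli(pi_extra, lambdas):
    """Sum-product fusion of independent Bernoulli messages on s in {-1,+1}:
    returns P(s=+1), proportional to pi_extra * prod(lambdas) and normalized
    against the complementary product (cf. pi_{s->zeta} and pi_{s->chi})."""
    lam = np.asarray(lambdas, dtype=float)
    p_plus = pi_extra * np.prod(lam)
    p_minus = (1.0 - pi_extra) * np.prod(1.0 - lam)
    return p_plus / (p_plus + p_minus)
```

For instance, $\pi_{s_{k,i}\to\zeta_{k,i}}$ is `combine_bernoulli(pi_chi2s, [lam_L, lam_R, lam_T, lam_B])`. Uninformative neighbor messages ($\lambda=1/2$) leave the belief unchanged, while four mildly confident neighbors sharpen it considerably.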
The message from $\iota$ to $h_{k,i}$ is \begin{align}\label{eq.44} \varpi_{\iota \to h_{k,i}} \!\propto\!& \int_{\boldsymbol{h} {\setminus h_{k,i}}} p\left(\boldsymbol{y}|\boldsymbol{h}\right) \prod\limits_{i^{'}\neq i} \varpi_{\eta_{k,i^{'}} \to \iota} \prod\limits_{k^{'}\neq k}\prod\limits_{i''} \varpi_{\eta_{k^{'},i''} \to \iota} \nonumber\\ =&\int_{\boldsymbol{h} {\setminus h_{k,i}}} \!\!\!\!\! p\left(\boldsymbol{y}|\boldsymbol{h}\right) \prod\limits_{i^{'}\neq i}\int_{v_{k,i^{'}}} \!\!\!\! p(h_{k,i^{'}}|v_{k,i^{'}}) \varpi_{v_{k,i^{'}} \to \eta_{k,i^{'}}} \!\!\!\prod\limits_{k^{'}\neq k}\prod\limits_{i''} \!\! \int_{v_{k^{'},i''}}\!\!\!\!\!\! p(h_{k^{'},i''}|v_{k^{'},i''})\varpi_{v_{k^{'},i''} \to \eta_{k^{'},i''}}, \end{align} where $\boldsymbol{h} {\setminus h_{k,i}}$ denotes all the entries of $\boldsymbol{h}$ except $h_{k,i}$. Clearly, $\varpi_{h_{k,i} \to \eta_{k,i}} = \varpi_{\iota \to h_{k,i}}$. Then, the messages $\varpi_{\eta_{k,i} \to v_{k,i}}$ and $\varpi_{v_{k,i} \to \zeta_{k,i}}$ can be computed as \begin{equation}\label{eq.45} \varpi_{v_{k,i} \to \zeta_{k,i}} = \varpi_{\eta_{k,i} \to v_{k,i}} \propto \int_{h_{k,i}} p\left(h_{k,i}|v_{k,i}\right) \varpi_{h_{k,i} \to \eta_{k,i}}. \end{equation} Note that the integrals involved in \eqref{eq.44} and \eqref{eq.45} are difficult to evaluate. From \cite{vsp}, we can replace the output of the linear module for node $v_{k,i}$ by the mean $\mu_{\eta_{k,i} \to v_{k,i}} = \mathbb{E}_{\varpi_{\eta_{k,i} \to v_{k,i}}}\left[v_{k,i}\right]$.
Then $\varpi_{\eta_{k,i} \to v_{k,i}}$ is approximated by \begin{equation}\label{eq.46} \mu_{\eta_{k,i} \to v_{k,i}} = \arg \max\limits_{v_{k,i}} p\left(\boldsymbol{y}|\boldsymbol{v}\right) |_{v_{k,i^{'}}=\mu_{{v_{k,i^{'}} \to \eta_{k,i^{'}}}},\forall i^{'}\neq i}, \end{equation} where $\mu_{v_{k,i} \to \eta_{k,i}} $ is the incoming mean of $v_{k,i}$ for the linear module, i.e., $\mu_{v_{k,i} \to \eta_{k,i}} = \mathbb{E}_{\varpi_{v_{k,i} \to \eta_{k,i}}}\left[v_{k,i}\right]$. Note that $p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{v}\right) \propto p\left(\boldsymbol{y}|\boldsymbol{h}\right)p\left(\boldsymbol{h}|\boldsymbol{v}\right)$ is a complex Gaussian distribution with mean $\boldsymbol{m}$ and covariance $\boldsymbol{\Phi}$ given by \begin{align} \boldsymbol{m} &= \boldsymbol{D}\boldsymbol{A}^H( \sigma^{2}\boldsymbol{I}+ \boldsymbol{A}\boldsymbol{D} \boldsymbol{A}^H)^{-1}\boldsymbol{y},\label{post-m}\\ \boldsymbol{\Phi} &= \boldsymbol{D}- \boldsymbol{D}\boldsymbol{A}^H( \sigma^{2}\boldsymbol{I}+ \boldsymbol{A}\boldsymbol{D} \boldsymbol{A}^H)^{-1}\boldsymbol{A}\boldsymbol{D},\label{post-psi} \end{align} where $\boldsymbol{D}$ is a diagonal matrix with the $((k-1)LJ+i)$-th diagonal element equal to $\mu_{v_{k,i} \to \eta_{k,i}}$. Hence, by using the ELBO-based method in \cite{vsp}, the solution of \eqref{eq.46} can be obtained by \begin{equation}\label{elbo} \mu_{\eta_{k,i} \to v_{k,i}} = |m_{(k-1)LJ + i}|^2 + \phi_{(k-1)LJ + i}, \end{equation} where $m_{(k-1)LJ + i}$ and $\phi_{(k-1)LJ + i}$ are the $((k-1)LJ + i)$-th entry of $\boldsymbol{m}$ and the $((k-1)LJ + i)$-th diagonal element of $\boldsymbol{\Phi}$, respectively.
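The linear-module update \eqref{post-m}-\eqref{elbo} is a standard LMMSE computation. A direct, unoptimized $\mathcal{O}(R^3)$ sketch (the helper name `linear_module` is hypothetical):

```python
import numpy as np

def linear_module(y, A, d, sigma2):
    """Posterior mean m and covariance Phi of h for y = A h + w, w ~ CN(0, sigma2 I),
    with prior h ~ CN(0, diag(d)), plus the per-entry variance update
    mu_q = |m_q|^2 + Phi_qq."""
    D = np.diag(d)
    G = A @ D @ A.conj().T + sigma2 * np.eye(A.shape[0])
    K = D @ A.conj().T @ np.linalg.inv(G)
    m = K @ y
    Phi = D - K @ A @ D
    mu = np.abs(m) ** 2 + np.real(np.diag(Phi))
    return m, Phi, mu
```

Equivalently, by the matrix inversion lemma, $\boldsymbol{m} = (\boldsymbol{A}^{\rm H}\boldsymbol{A}/\sigma^2 + \boldsymbol{D}^{-1})^{-1}\boldsymbol{A}^{\rm H}\boldsymbol{y}/\sigma^2$ when $\boldsymbol{D}$ is invertible; the form above also handles exact zeros in the prior variances (pruned entries).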
Then $\pi_{\zeta_{k,i} \to s_{k,i}}$ is approximated as \begin{equation}\label{f2s} \pi_{\zeta_{k,i} \to s_{k,i}} = \min \left(\frac{\mu_{\eta_{k,i} \to v_{k,i}}}{\gamma_{k,1}/\gamma_{k,2}}, 1\right). \end{equation} Similarly, $\varpi_{v_{k,i} \to \eta_{k,i}}$ can be approximated as \begin{equation}\label{v2g} \mu_{v_{k,i} \to \eta_{k,i}} = \mathbb{E}_{\varpi_{v_{k,i} \to \eta_{k,i}}}\left[v_{k,i}\right] = \frac{\gamma_{k,1}}{\gamma_{k,2}}\pi_{s_{k,i} \to \zeta_{k,i}}. \end{equation} Then, we approximate the mean of the Gamma distribution $\gamma_{k,1}/\gamma_{k,2}$ by $\kappa_k$, where $\kappa_k$ is the mean of the largest $\lfloor3LJ\rho_s\rceil$ elements in $\{\mu_{\eta_{k,i} \to v_{k,i}}\}_{i=1}^{LJ}$. For inactive devices, this approximation is not accurate because $\{\mu_{\eta_{k,i} \to v_{k,i}}\}_{i=1}^{LJ}$ carries no information about the Gamma distribution. To address this problem, we use the detected active devices' information to help inactive devices. Let $\kappa_1'\ge\ldots\ge\kappa_K'$ be the reordered sequence of $\kappa_1,\ldots,\kappa_K$, and $K^+=\sum_{k}\hat\alpha_k$ the number of detected active devices, where $\hat\alpha_k$ is given by \eqref{uad}. We suppose that the devices corresponding to the largest $\theta_1 K^+$ $\kappa_k'$s are active, and their $\kappa_k$s are calculated as above, i.e., as the mean of the largest $\lfloor3LJ\rho_s\rceil$ elements, where $0<\theta_1<1$. The remaining $K-\theta_1 K^+$ devices' $\kappa_k$s are approximated by $1/(\theta_2K^+)\sum_{k=1}^{\theta_2K^+}\kappa_k'$, where $\theta_2\ge1$. The detailed derivation of the approximations in \eqref{eq.46}-\eqref{v2g} can be found in \cite{vsp}. The above messages are updated iteratively until the algorithm converges. Then, we use the estimate $\hat{\boldsymbol{h}}=\boldsymbol{m}$ to recover the channel $\hat{\boldsymbol{G}}_u$ based on \eqref{G-hat}, and estimate the device activity as \begin{align} \label{uad} \hat\alpha_k=\!\begin{cases} 1, \quad \text{if}\; \rho \pi_{\psi_k \to \alpha_k}\!>\!
(1-\rho)(1-\pi_{\psi_k \to \alpha_k}),\\ 0, \quad \text{if}\; \rho \pi_{\psi_k \to \alpha_k}\!\le\! (1-\rho)(1-\pi_{\psi_k \to \alpha_k}), \end{cases} \!\forall k. \end{align} \subsection{Overall Algorithm and Complexity} The overall MVSP algorithm is summarized in Algorithm 1. We now analyse the computational complexity of the MVSP algorithm. The complexity in step 5 is $\mathcal{O}(R^3+R^2Q)$. The complexities in step 8, step 10 and step 11 are all $\mathcal{O}(Q)$. The complexity in step 13 is $\mathcal{O}(Q+K)$. Thus, the computational complexity of the MVSP algorithm is dominated by step 5, and is given by $\mathcal{O}\left(T_{\text{out}}(T_{\text{in1}}(R^3+R^2Q+Q)+K)\right)$. \begin{algorithm}[htpb] \label{algo1} \caption{MVSP algorithm} \LinesNumbered \KwIn{$\boldsymbol{y}$, $\boldsymbol{A}$, $T_{\text{out}}$, $T_{\text{in1}}$, $T_{\text{in2}}$} \For{$t_{\text{out}} = 1$ to $T_{\text{out}}$} { ${\boldsymbol{\hat\mu}}=\left[\mu_{v_{1,1}\rightarrow \eta_{1,1}},\ldots,\mu_{v_{K,LJ} \rightarrow \eta_{K,LJ}}\right]^{\text{T}}$ \; \For{$t_{\text{in1}}=1$ to $T_{\text{in1}}$} { $\boldsymbol{D}=\text{diag}(\hat{\boldsymbol{\mu}})$\; Compute $\boldsymbol{m}$ and $\boldsymbol{\Phi}$ by using \eqref{post-m} and \eqref{post-psi}\; Update $\hat{\boldsymbol{\mu}}=\left[\hat{\mu}_{\eta_{1,1}\rightarrow v_{1,1}},\ldots,\hat{\mu}_{\eta_{K,LJ}\rightarrow v_{K,LJ}}\right]^{\text{T}}$ with \eqref{elbo}\; } Compute $\{\pi_{\zeta_{k,i} \rightarrow s_{k,i}}\}$ based on \eqref{f2s} and $\left[\mu_{\eta_{1,1}\rightarrow v_{1,1}},\ldots,\mu_{\eta_{K,LJ}\rightarrow v_{K, LJ}}\right]^{\text{T}}=\hat{\boldsymbol{\mu}}$\; \For{$t_{\text{in2}}=1$ to $T_{\text{in2}}$ } { Compute $\{\varpi_{k,i}^{\jmath}\}$ based on \eqref{mesgl}\; Compute $\{\pi_{\chi_{k} \rightarrow s_{k,i}}\}$ based on \eqref{eq.33}-\eqref{pi-chi2s}\; } Compute $\{\pi_{\psi_{k} \rightarrow \alpha_{k}}\}$, $\{\hat\alpha_k\}$, and $\{\mu_{v_{k,i} \rightarrow \eta_{k,i}}\}$ by \eqref{psi2alpha2}, \eqref{uad} and \eqref{v2g},
respectively\; } \KwOut{$\hat{\boldsymbol{h}}=\boldsymbol{m}$, $\hat\alpha_k, \forall k$.} \end{algorithm} \section{EM-MVSP Algorithm} \subsection{Model Mismatch Problem} It is worth noting that the true delay and Doppler frequency shift of each path may not fall on the given grids $\boldsymbol{\tau}_k$ and $\boldsymbol{\nu}_k$, which results in a model mismatch problem. To tackle this, we treat $\boldsymbol{\tau}_k$ and $\boldsymbol{\nu}_k$ involved in $\boldsymbol{A}$ as unknown parameters and consider a parametric dictionary learning method to improve the performance of MVSP. To this end, we denote $\boldsymbol{A}$ as $\boldsymbol{A}(\boldsymbol{\omega})$ where $\boldsymbol{\omega}=\{\boldsymbol{\tau}_1,\ldots, \boldsymbol{\tau}_{K}, \boldsymbol{\nu}_1,\ldots,\boldsymbol{\nu}_{K}\}$. We learn the parameter set $\boldsymbol{\omega}$ through the EM method. \subsection{EM Framework} Each iteration of the EM method consists of two steps: \begin{itemize} \item E-step: Calculate the posterior distribution of the hidden variables based on the estimates obtained in the previous iteration, and then formulate the function $\mathcal{Q}$ as the posterior mean of the log-likelihood function over the hidden variables; \item M-step: Maximize $\mathcal{Q}$ with respect to the unknown parameters to obtain the updated estimates. \end{itemize} The E-step and the M-step iterate until convergence. In our problem, the observed variables are the received signal $\boldsymbol{y}$ in \eqref{y-grid}, the hidden variables are the channel representation vector $\boldsymbol{h}$, and the parameters to be estimated are $\boldsymbol{\omega}$. Note that the MVSP algorithm can provide approximate posterior distributions of $\boldsymbol{h}$ in each iteration. Therefore, in this problem, the key to updating $\boldsymbol{\omega}$ is the M-step, i.e., to maximize the posterior mean of the log-likelihood function $\mathcal{Q}$.
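Since the likelihood is Gaussian, maximizing this posterior mean amounts to minimizing the posterior mean of $\|\boldsymbol{y}-\boldsymbol{A}(\boldsymbol{\omega})\boldsymbol{h}\|^2/\sigma^2$. A toy sketch of evaluating this M-step surrogate for a candidate dictionary (the helper name `F_surrogate` is hypothetical, and constant terms independent of $\boldsymbol{\omega}$ are dropped):

```python
import numpy as np

def F_surrogate(y, A, m, Phi, sigma2):
    """Posterior mean of ||y - A h||^2 / sigma2 under a Gaussian posterior
    with mean m and covariance Phi, using E[h h^H] = m m^H + Phi."""
    S = np.outer(m, m.conj()) + Phi
    val = (np.vdot(y, y).real
           - 2.0 * np.real(np.vdot(y, A @ m))
           + np.trace(A @ S @ A.conj().T).real)
    return val / sigma2
```

In EM-MVSP this scalar is minimized over the continuous parameters $\boldsymbol{\omega}$ (e.g., by gradient or grid refinement of the delays and Dopplers); in the simplest scheme one compares the surrogate across candidate $\boldsymbol{A}(\boldsymbol{\omega})$ and keeps the smallest.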
Denote by $\boldsymbol{\omega}^{(i)}$ the parameter update obtained in the $i$-th iteration. Then the $\mathcal{Q}$ function is formulated as \begin{align} \mathcal{Q}\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\left(i\right)}\right)=&\int_{\boldsymbol{h}} \ln p\left(\boldsymbol{y},\boldsymbol{h}|\boldsymbol{\omega}\right)p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h} \nonumber\\ =&\int_{\boldsymbol{h}} \ln p\left(\boldsymbol{y}|\boldsymbol{h},\boldsymbol{\omega}\right)p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h} + \int_{\boldsymbol{h}} \ln p\left(\boldsymbol{h}\right)p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h}.\label{Qfunc} \end{align} Note that the second term in \eqref{Qfunc} is independent of $\boldsymbol{\omega}$. Therefore, the M-step is equivalent to minimizing \begin{align} \label{Ffunc} \mathcal{F}\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\left(i\right)}\right)=-\int_{\boldsymbol{h}} \ln p\left(\boldsymbol{y}|\boldsymbol{h},\boldsymbol{\omega}\right)p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h}. \end{align} From the additive CSCG noise model in \eqref{y-grid}, we have \begin{align} \label{posty} p\left(\boldsymbol{y}|\boldsymbol{h},\boldsymbol{\omega}\right) = \mathcal{CN}\left(\boldsymbol{y}-\boldsymbol{A}\left(\boldsymbol{\omega}\right)\boldsymbol{h}; \boldsymbol{0}, \sigma^2\boldsymbol{I}\right).
\end{align} In each outer iteration of the MVSP algorithm, the posterior distribution of the channel representation $\boldsymbol{h}$ given by the linear module is approximated by \begin{align} \label{posth} p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right) = \mathcal{CN}\left(\boldsymbol{h}-\boldsymbol{m}\left(\boldsymbol{\omega}^{\left(i\right)}\right); \boldsymbol{0},\boldsymbol{\Phi}\left(\boldsymbol{\omega}^{\left(i\right)}\right)\right), \end{align} where $\boldsymbol{m}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$ and $\boldsymbol{\Phi}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$ are the posterior mean and the covariance matrix of $\boldsymbol{h}$ calculated based on $\boldsymbol{y}$ and $\boldsymbol{A}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$. Plugging \eqref{posty} and \eqref{posth} into \eqref{Ffunc}, we have \begin{align} \label{Ffunc2} \mathcal{F}\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\left(i\right)}\right) = & \frac{1}{\sigma^2}\int_{\boldsymbol{h}} \Big(\boldsymbol{y}^{\text{H}}\boldsymbol{y} - \boldsymbol{y}^{\text{H}}\boldsymbol{A}(\boldsymbol{\omega})\boldsymbol{h} - \boldsymbol{h}^{\text{H}} \boldsymbol{A}^{\text{H}}(\boldsymbol{\omega}) \boldsymbol{y} \nonumber \\ & + \boldsymbol{h}^{\text{H}}\boldsymbol{A}^{\text{H}}(\boldsymbol{\omega})\boldsymbol{A}\left(\boldsymbol{\omega}\right)\boldsymbol{h} \Big) p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h} + \ln(\pi\sigma^2).
\end{align} Removing the terms irrelevant to $\boldsymbol{\omega}$ in \eqref{Ffunc2}, we define a new objective function \begin{align} \label{Gfunc2} \mathcal{G}\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\left(i\right)}\right)=& \int_{\boldsymbol{h}} \Big( - \boldsymbol{y}^{\text{H}}\boldsymbol{A}\left(\boldsymbol{\omega}\right)\boldsymbol{h} - \boldsymbol{h}^{\text{H}} \boldsymbol{A}^{\text{H}}\left(\boldsymbol{\omega}\right) \boldsymbol{y} + \boldsymbol{h}^{\text{H}}\boldsymbol{A}^{\text{H}}\left(\boldsymbol{\omega}\right)\boldsymbol{A}\left(\boldsymbol{\omega}\right)\boldsymbol{h} \Big) p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h} \nonumber \\ =&-2\mathcal{R}\left(\boldsymbol{y}^{\text{H}} \boldsymbol{A}\left(\boldsymbol{\omega}\right) {\boldsymbol{m}}\left(\boldsymbol{\omega}^{\left(i\right)}\right)\right) + \text{Tr}\left(\boldsymbol{A}^{\text{H}}\left(\boldsymbol{\omega}\right) \boldsymbol{A}\left(\boldsymbol{\omega}\right) \int_{\boldsymbol{h}}\boldsymbol{h}\boldsymbol{h}^{\text{H}}p\left(\boldsymbol{h}|\boldsymbol{y},\boldsymbol{\omega}^{\left(i\right)}\right)\text{d}\boldsymbol{h} \right)\nonumber \\ =& -2\mathcal{R}\left(\boldsymbol{y}^{\text{H}} \boldsymbol{A}\left(\boldsymbol{\omega}\right) \boldsymbol{m}\left(\boldsymbol{\omega}^{(i)}\right)\right) + \text{Tr}\left(\boldsymbol{A}^{\text{H}}\left(\boldsymbol{\omega}\right) \boldsymbol{A}\left(\boldsymbol{\omega}\right) \boldsymbol{\Omega}^{\left(i\right)} \right), \end{align} where $\boldsymbol{\Omega}^{\left(i\right)}= \boldsymbol{\Phi}\left(\boldsymbol{\omega}^{\left(i\right)}\right)+\boldsymbol{m} \left( \boldsymbol{\omega}^{\left(i\right)} \right) \boldsymbol{m}^{\text{H}}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$.
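For concreteness, the closed-form expression on the last line of \eqref{Gfunc2} can be evaluated directly from $\boldsymbol{y}$, $\boldsymbol{A}(\boldsymbol{\omega})$, $\boldsymbol{m}$ and $\boldsymbol{\Phi}$. The following sketch (Python/NumPy, illustrative only) leaves the dictionary abstract:

```python
import numpy as np

def g_objective(y, A, m, Phi):
    """Evaluate G(omega, omega^(i)) from the last line of the derivation:
    -2 Re{y^H A m} + Tr{A^H A Omega}, where Omega = Phi + m m^H is the
    posterior second moment of h."""
    Omega = Phi + np.outer(m, np.conj(m))
    return (-2.0 * np.real(np.conj(y) @ (A @ m))
            + np.real(np.trace(A.conj().T @ A @ Omega)))
```

As a sanity check, when $\boldsymbol{\Phi}\to\boldsymbol{0}$ we have $\boldsymbol{\Omega}=\boldsymbol{m}\boldsymbol{m}^{\text{H}}$ and $\mathcal{G}$ reduces to $\|\boldsymbol{y}-\boldsymbol{A}\boldsymbol{m}\|^2-\|\boldsymbol{y}\|^2$, so minimizing $\mathcal{G}$ over $\boldsymbol{\omega}$ fits the dictionary to the data in the least-squares sense.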
In summary, in each EM update of $\boldsymbol{\omega}$, our method calculates the posterior distribution of $\boldsymbol{h}$ and constructs the objective function $\mathcal{G}\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\left(i\right)}\right)$ based on $\boldsymbol{y}$ and $\boldsymbol{\Omega}^{(i)}$. It is not easy to obtain an analytical solution to this problem with respect to both $\boldsymbol{\tau}_k$ and $\boldsymbol{\nu}_k$. Therefore, we use the gradient descent method to minimize $\mathcal{G}\left(\boldsymbol{\omega}, \boldsymbol{\omega}^{\left(i\right)}\right)$ and then update $\boldsymbol{\omega}$ in each iteration. \begin{algorithm}[htbp] \caption{EM-MVSP algorithm} \LinesNumbered \KwIn{$\boldsymbol{y}$, $\{\boldsymbol{x}_u\}$, $T_{\text{out}}$, $T_{\text{in1}}$, $T_{\text{in2}}$, $i_{\text{max}}$.} Initialization: $i=0, \boldsymbol{\omega}^{\left(0\right)}=\{\boldsymbol{\tau}_1^{\left(0\right)},\ldots, \boldsymbol{\tau}_{K}^{\left(0\right)}, \boldsymbol{\nu}_1^{\left(0\right)},\ldots,\boldsymbol{\nu}_{K}^{\left(0\right)}\}$\; \While{the stopping criterion is not met} { Generate $\boldsymbol{A}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$ based on $\boldsymbol{\omega}^{\left(i\right)}$ and $\{\boldsymbol{x}_u\}$\; With $\boldsymbol{y}$ and $\boldsymbol{A}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$, calculate $\boldsymbol{m}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$, $\boldsymbol{\Phi}\left(\boldsymbol{\omega}^{\left(i\right)}\right)$ and $\hat\alpha_k\left(\boldsymbol{\omega}^{\left(i\right)}\right)$ by Algorithm \ref{algo1}\; Minimize \eqref{Gfunc2} with respect to $\boldsymbol{\tau}_k$, and obtain the updated $\boldsymbol{\tau}^{\left(i+1\right)}_k$, $\forall k$\; Minimize \eqref{Gfunc2} with respect to $\boldsymbol{\nu}_k$, and obtain the updated $\boldsymbol{\nu}^{\left(i+1\right)}_k$, $\forall k$\; $\boldsymbol{\omega}^{\left(i+1\right)}=\{\boldsymbol{\tau}_1^{\left(i+1\right)},\ldots, \boldsymbol{\tau}_{K}^{\left(i+1\right)},
\boldsymbol{\nu}_1^{\left(i+1\right)},\ldots,\boldsymbol{\nu}_{K}^{\left(i+1\right)}\}$, $i=i+1$\; } \KwOut{$\hat{\boldsymbol{h}}={\boldsymbol{m}}\left(\boldsymbol{\omega}^{(i)}\right)$, $\hat\alpha_k=\hat\alpha_k\left(\boldsymbol{\omega}^{(i)}\right)$.} \end{algorithm} \subsection{Overall Algorithm and Complexity} Algorithm 2 summarizes the EM-MVSP algorithm. The stopping criterion of the iteration is generally set to ``$\|\boldsymbol{\omega}^{\left(i+1\right)}-\boldsymbol{\omega}^{\left(i\right)}\|^2$ is less than a small positive threshold or $i\ge i_{\text{max}}$'', where $i_{\text{max}}$ is the maximal number of EM iterations. We now analyse the computational complexity of the EM-MVSP algorithm. The complexity of step 3 is $\mathcal{O}\left(RQ\left(M+LJ\right)\right)$. The complexities of step 5 and step 6 are both $\mathcal{O}\left(Q^3+Q^2R\right)$. As mentioned in Section \uppercase\expandafter{\romannumeral4}-C, the complexity of step 4 is $\mathcal{O}(T_{\text{out}}T_{\text{in1}}(R^3+R^2Q))$. Thus, the computational complexity of the EM-MVSP algorithm is given by $\mathcal{O}\left(i_{\text{max}}\left(Q^3+Q^2R+T_{\text{out}}(T_{\text{in1}}(R^3+R^2Q+Q)+K)\right)\right)$. \section{Simulation Results} In this section, we carry out simulations to demonstrate the effectiveness of the proposed algorithms. The CE and DAD performance are respectively measured in terms of the NMSE and the detection error probability $P_e= \mathbb{E}[1/K\sum_k ( p(\hat{\alpha}_k=0|\alpha_k=1) + p(\hat{\alpha}_k=1|\alpha_k=0) )]$ over 100 independent trials. The signal-to-noise ratio (SNR) is defined as $\sum_u\|\boldsymbol{G}_u\boldsymbol{x}_u\|^2/(R\sigma^2)$. The baseline methods used for comparison include OMP \cite{OMP}, GAMP \cite{GAMP}, structured TCS (STCS) \cite{STCS}, SBL \cite{SBL}, and pattern-coupled SBL (PCSBL) \cite{PCSBL}. The number of EM iterations in EM-MVSP is set to 4. $\{x_{k,m,u}\}$ are generated from the standard complex Gaussian distribution.
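The detection error probability defined above can be estimated empirically. The sketch below (Python/NumPy, illustrative only) reads the expectation as an average of the per-device error indicators over independent trials, which is one natural empirical reading of the definition:

```python
import numpy as np

def detection_error_probability(alpha_true, alpha_hat):
    """Empirical P_e: for each trial, the fraction of devices whose
    activity is declared wrongly (misses plus false alarms), averaged
    over trials. Inputs are (trials, K) arrays of 0/1 activity flags."""
    alpha_true = np.asarray(alpha_true, dtype=bool)
    alpha_hat = np.asarray(alpha_hat, dtype=bool)
    per_trial = np.mean(alpha_true != alpha_hat, axis=-1)  # 1/K sum over k
    return float(np.mean(per_trial))                       # average over trials
```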
The parameters in the simulation are listed as follows. The satellite operates at a speed of 7 km/s in an orbit of 600 km, and the coverage of the beam is a circle with a diameter of 50 km. As shown in Fig.~\ref{beamsim}, device A and device B are at the border of the beam coverage, and the elevation angle between the satellite and device A is $\alpha_e$. The velocity of the devices follows a uniform distribution on [0, 120] km/h, the delay spread is 0.5 $\mu$s, and the length of the CP is $T_{\text{cp}}= 2 \;\mu\text{s}$. We adopt the TDL-A channel model in \cite{3gpp}. The large-scale fading is compensated for, since it varies only slightly across the devices in a beam. The total number of OFDM symbols in a transmission frame is $UN=12$, and the subcarrier spacing is $\Delta f = 15$ kHz. \begin{figure}[h] \centering \includegraphics[width=1.8in]{beamsim-eps-converted-to.pdf} \caption{Illustration of the simulated satellite-IoT system.} \label{beamsim} \end{figure} We first consider the scenario where there is no mismatch between the real channel and the grid-based model, that is, the delay and the Doppler frequency shift of each channel path fall onto the delay-Doppler grid. In Fig. \ref{on-nmse} we show the CE NMSE versus the SNR. The proposed MVSP increasingly outperforms the other baseline methods as the SNR increases. In particular, due to the special structure of the measurement matrix discussed in Section \ref{PD}, we notice that GAMP and STCS behave poorly in this task, with an NMSE of about 0 dB. PCSBL outperforms SBL but still has a large performance gap with the proposed MVSP algorithm. In the SNR range of $[16,24]$ dB, the MVSP algorithm is at least 5 dB better in NMSE than the other baseline methods. Fig. \ref{on-perr} shows the detection error probability versus the SNR. We see that as the SNR increases, the $P_e$ of GAMP, STCS, OMP and SBL can hardly be improved. MVSP clearly outperforms the baseline methods, especially when the SNR is above 16 dB.
\begin{figure}[t] \centering \subfigure[CE performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{on-nmse-eps-converted-to.pdf} \label{on-nmse} \end{minipage} }\subfigure[DAD performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{on-pe-eps-converted-to.pdf} \label{on-perr} \end{minipage} } \caption{CE and DAD performances under various SNR and on-grid settings. Related parameters are $K=200$, $\rho = 0.1$, $N=4$, $L=4$, $J=24$, $M=32$ and $\alpha_e = 50^\circ$.} \end{figure} \begin{figure}[t] \centering \subfigure[CE performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{n-nmse-eps-converted-to.pdf} \label{off-n-nmse} \end{minipage} }\subfigure[DAD performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{n-pe-eps-converted-to.pdf} \label{off-n-perr} \end{minipage} } \caption{CE and DAD performances under various number of cascaded OFDM symbols $N$ and off-grid settings. Related parameters are $K=100$, $\rho = 0.2$, $L=6$, $J=20$, SNR=20 dB, $M=16$ and $\alpha_e = 70^\circ$. } \end{figure} \begin{figure}[t] \centering \subfigure[CE performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{off-nmse-eps-converted-to.pdf} \label{off-snr-nmse} \end{minipage} }\subfigure[DAD performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{off-pe-eps-converted-to.pdf} \label{off-snr-perr} \end{minipage} } \caption{CE and DAD performances under various SNR and off-grid settings, and $\alpha_e = 50^\circ$. Related parameters are $K=200$, $\rho = 0.1$, $N=4$, $L=4$, $J=24$ and $M=32$. 
} \label{off-50} \end{figure} \begin{figure}[t] \centering \subfigure[CE performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{ang90-nmse-eps-converted-to.pdf} \label{off-snr-nmse90} \end{minipage} }\subfigure[DAD performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{ang90-pe-eps-converted-to.pdf} \label{off-snr-perr90} \end{minipage} } \caption{CE and DAD performances under various SNR and off-grid settings, and $\alpha_e = 90^\circ$. Related parameters are $K=200$, $\rho = 0.1$, $N=4$, $L=4$, $J=24$ and $M=32$. } \label{off-90} \end{figure} \begin{figure}[t] \centering \subfigure[CE performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{spar-nmse-eps-converted-to.pdf} \label{off-spar-nmse} \end{minipage} }\subfigure[DAD performance]{ \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=2.4in]{spar-pe-eps-converted-to.pdf} \label{off-spar-perr} \end{minipage} } \caption{CE and DAD performances under various sparsity $\rho$ and off-grid settings. Related parameters are $K=200$, $N=4$, $L=4$, $J=20$, SNR $=24$ dB, $M=32$ and $\alpha_e = 50^\circ$. } \label{off-spar} \end{figure} We further consider the mismatch between the real channel and the grid-based model, namely, the off-grid scenario. Fig. \ref{off-n-nmse} shows the NMSE against the number of repeated OFDM symbols $N$ in a super-symbol. Interestingly, as $N$ increases, different methods exhibit different trends, since more repeated OFDM symbols improve the frequency resolution but also increase the correlation of the measurement matrix. The NMSEs of GAMP and OMP increase because they are sensitive to the measurement matrix, and the repeated OFDM symbols result in performance degradation. As for PCSBL and SBL, their NMSEs first slightly decrease and then increase. In contrast, MVSP is more robust to the measurement matrix, with a significant performance gain as $N$ increases.
We see that for MVSP more than 4 dB of gain can be obtained from the repeated OFDM symbols. When $N>4$, almost all the algorithms suffer a performance degradation due to the measurement matrix correlation. In Fig. \ref{off-n-perr}, we show the detection error probability against $N$. The trade-off between frequency resolution improvement and measurement matrix correlation is also evident, which demonstrates the effectiveness of cascaded OFDM symbols. Then we show the NMSE performance against the SNR with $\alpha_e= 50^\circ$ in Fig. \ref{off-snr-nmse}. As the SNR increases, MVSP and EM-MVSP become at least 4 and 6 dB better in NMSE than the other baseline methods, respectively. The proposed algorithms also behave well in DAD, as shown in Fig. \ref{off-snr-perr}. We notice that PCSBL, MVSP and EM-MVSP have a significant performance gap over the other methods in the considered SNR range, and their performance is similar at lower SNR. This is because these three methods all exploit the channel block-sparsity structure: PCSBL exploits one-dimensional block sparsity, whereas MVSP and EM-MVSP exploit two-dimensional block sparsity. In Fig. \ref{off-90}, we further consider the scenario with elevation angle $\alpha_e= 90^\circ$, where the Doppler effect is more severe. We see that MVSP and EM-MVSP still outperform the baselines. Compared with PCSBL, EM-MVSP has a performance gain of more than 3 dB in NMSE and one order of magnitude in $P_e$. Thus, the proposed MVSP and EM-MVSP algorithms show performance advantages under different Doppler effects. Fig. \ref{off-spar} shows the CE and DAD performances of all the algorithms under varying sparsity $\rho$. The proposed MVSP and EM-MVSP maintain a considerable performance advantage within the considered sparsity range. Even at $\rho=0.2$, i.e., with about 40 active devices accessing the satellite, the $P_e$ of EM-MVSP can reach $10^{-2}$, which demonstrates the advantage of the proposed algorithms for massive connectivity in satellite-IoT systems.
\section{Conclusion} In this paper, we studied joint DAD and CE for GF-NORA in LEO satellite-IoT. We developed an OFDM-symbol repetition technique to better distinguish the Doppler shifts of the LEO satellite channel. We established a grid-based parametric system model and showed that joint DAD and CE can be formulated as a CS problem. However, we pointed out that the measurement matrix of the problem exhibits a special correlation structure, so that existing Bayesian CS algorithms such as AMP and Turbo-CS do not behave well. To address this issue, we proposed the MVSP algorithm, which is robust to the measurement matrix and can efficiently exploit the channel sparsity in the delay-Doppler-user domain. We then used the EM method to learn the grid parameters and further improve the performance of MVSP. Simulation results demonstrated that the proposed algorithms significantly outperform the counterpart methods. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} The Membrane Electrode Assembly (MEA) is the core of a Proton Exchange Membrane Fuel Cell (PEMFC). It consists of two symmetric catalyst layers (CL), placed at the anode and cathode sides and separated by a polymer electrolyte membrane (PEM), and of the gas diffusion layer~\cite{Eikerling2007, Vielstich2003, Weber2004}. Despite the tremendous progress achieved in the past decades, the PEMFC is not yet largely commercialized. The most significant hurdles for large-scale production include reduction of costs, improvement of power density and enhancement of durability~\cite{Borup2007, Peighambardoust2010}. It is currently consensual that further development of PEMFCs requires a direct understanding of the material properties at the molecular level, for each component of the MEA. Regions of crucial importance are, in particular, the catalyst layers, where different electrochemical reaction mechanisms take place~\cite{Litster2004,Mehta2003}. These include two half-cell reaction mechanisms: {\em i)} the Hydrogen Oxidation Reaction (HOR), \ce{H2 -> 2H+ + 2e-}, at the anode; and {\em ii)} the Oxygen Reduction Reaction (ORR), \ce{O2 + 4H+ + 4e- -> 2H2O}, at the cathode~\cite{Markovic2002, Damjanovic1967, DeMorais2011}. The rates of these reaction mechanisms determine the efficiency of the electrochemical conversion, which is directly related to the fuel cell performance~\cite{Rinaldo2010,Franco2008}. The most efficient catalysts for enhancing the reaction rates are Pt-based particles. The high cost associated with the amount of platinum required for the catalyst, particularly at the cathode, is one of the drawbacks of fuel cells~\cite{Stamenkovic2007,Gasteiger2005,Eikerling2007b, Eikerling2009}. The CL performance also depends on the transport conditions for reactants and products moving between the other MEA components and the catalyst surface inside the CL.
A good cathode CL performance (similarly for the anode CL) may depend on: transport of protons from the membrane to the catalyst; electron conduction from the current collector to the catalyst; transport of reactant gases from the gas channels to the catalyst; and correct removal of water from the catalyst layer~\cite{Eikerling2007b}. In order to meet all these requirements, a complex structure with interconnected pores for reactant diffusion, a phase for electron conduction and a path for proton transport must be considered in devising a CL~\cite{Malek2007, Malek2011a, More2006,Xie2010}. The necessity of a heterogeneous structure satisfying all catalyst layer functionalities motivates the quest for new material designs to optimize the distribution of transport media, in order to reduce transport losses and produce the highest current density with a minimum amount of catalyst particles~\cite{Litster2004}. Effective properties mainly depend on the nature of the materials used and on the fabrication process applied. During the preparation of the catalyst layer ink, Pt/C agglomerates, Nafion ionomer and solvent are mixed together. This process is highly empirical and relies on poorly controlled processing methods, which are not based on any knowledge of the physico-chemical processes at the molecular level~\cite{Wilson1992, Wilson1993, Wilson1995}. Also, the CL is composed of materials characterized by very heterogeneous wetting properties, i.e., a {\em hydrophilic} or {\em hydrophobic} character. The hydrophilicity of the CL plays an important role in fuel cell water management and can be modified during the fabrication process~\cite{Li2009, Li2010}. Moreover, these wetting properties can be affected during fuel cell operation.
The degradation mechanisms for these materials include ripening and compositional changes of the catalyst due to corrosion, catalyst poisoning by adsorbed impurities, aging of the proton exchange electrolyte membrane, and changes in the hydrophobic/hydrophilic properties of the catalyst layer surfaces~\cite{Mashio2010, Borup2007,Chen2006, Wang2009}. In Ref.~\cite{Borges2013} we introduced a mean-field-like model for the interaction of the hydrated Nafion ionomer with a substrate, characterized by a tunable degree of hydrophilicity. In particular, we focused on the transport properties of water molecules in different regions of the film and demonstrated a high degree of heterogeneity. We also gave a few hints about the dependence of some morphological features on the wetting properties of the substrate~\cite{Borges2013, Borges2013b}. Here, we consider a much more extended set of simulation data and provide a complete picture of the morphology of the produced ultra-thin films. We performed a comprehensive Molecular Dynamics (MD) computer simulation investigation of the substrate effects on the ionomer ultra-thin film morphology at different hydration levels, considering as the control parameter the degree of hydrophilicity of the substrate. We have quantitatively analyzed the morphology and topology of the films, both at the interfaces with the solid support and air, and in the central layers far from the boundaries. We propose a general qualitative scenario for the thin-film morphology in different hydration conditions and for different wetting natures of the support. We finally speculate about possible implications of our work for the optimization of the actual devices. The paper is organized as follows. In Section~\ref{sec:catalyst layer} we provide an overview of experimental and computer simulation work relevant in the present context.
In Sect.~\ref{sec:ionomer-model} we describe the atomistic model used for mimicking the hydrated ionomer and our effective model for the interaction of the ionomer with the substrate. We characterize the wetting properties of the support in terms of a contact angle. We finally give a few details on our computer simulation scheme. More technical details can be found in the Supplementary Information accompanying this paper. In Sect.~\ref{sec:morphology} we report our extended investigation of the morphology, while in Sect.~\ref{sec:PEMFC-technology} we focus in more detail on both the support/ionomer and ionomer/vacuum interfaces, discussing the implications of our findings for PEMFC technology. Finally, Sect.~\ref{sec:conclusions} contains our conclusions and possible perspectives for further work. \section{The catalyst layer} \label{sec:catalyst layer} The CL structure is formed by platinum nanoparticles dispersed on a carbon matrix with impregnated Nafion ionomer~\cite{Malek2007, Malek2011a, More2006,Xie2010}. Nafion is a perfluorinated polymer which results from the copolymerisation of a tetrafluoroethylene backbone (Teflon) and perfluorovinyl ether groups, terminated by sulfonate-group side-chains~\cite{Moore2004}. Nafion is characterized by a highly heterogeneous structure at the nanoscale, due to a spontaneous phase separation of the hydrophobic backbones and hydrophilic sulfonated side chains upon hydration~\cite{Gierke1981, Hsu1983,Yeager1981,Gebel1987, Gebel2000a,Young2002,Rubatat2002,Schmidt-Rohr2008,Elliott2011}. Nafion has been introduced as one of the CL constituents for two reasons~\cite{Litster2004}: first, during the fabrication process it acts as a binder, playing an important role in the dispersion of the Pt/C aggregates and, as a consequence, in the Pt utilization. Second, during fuel cell operation, it forms an extended proton-conducting network available for proton migration from (to) the membrane to (from) the catalyst sites.
Nafion inside the CL forms an inhomogeneous and non-continuous phase. It can be found as a well-dispersed ultra-thin film on the surface of the carbon supports and Pt particles. Typically, this film is not uniformly distributed and has a thickness spanning the range $\sim 4$ to $20$ nm~\cite{More2006}. The formation of Nafion ultra-thin films inside the catalyst layer has been analysed in numerous recent studies~\cite{Ma2007, Paul2011, Paul2011a, Paul2013, Wood2009, Dura2009, Masuda2009, Koestner2011, Eastman2012, Nagao2013, Kusoglu2014, Modestino2012, Modestino2013}. Structure and properties of these films significantly differ from those of the ionomer membrane (bulk). A detailed study based on the variation of the ionomer film thickness and comparison with the membrane has shown that some ionomer properties, {\em e.g.}, water uptake, swelling and water diffusion, respond differently to relative humidity. There is a critical thickness of around $60$~nm, where a transition from a bulk-like to a confined ionomer is observed~\cite{Eastman2012}. Other experiments on thin films adsorbed on \ce{SiO2}-terminated surfaces have revealed a proton conductivity which is lower than in the case of the bulk membrane~\cite{Paul2011,Paul2011a}. Also, Atomic Force Microscopy (AFM) experiments have shown that the ionomer orientation depends on the atomic arrangement of the substrate surface~\cite{Masuda2009,Koestner2011}. In the CL the Nafion ionomer is expected to self-organize in different forms, depending on the properties of the substrate. The impact of surface hydrophilicity on the ionomer properties has recently been the subject of many studies, and there is experimental evidence that a change of the wetting properties of the substrate is sufficient to affect the Nafion film morphology~\cite{Modestino2012, Modestino2013, Bass2010, Bass2011}.
Modestino {\em et al.}~\cite{Modestino2012} have investigated the possibility of controlling the structure and properties of Nafion thin films by modifying the wetting properties of the substrate. They prepared Nafion thin films deposited on hydrophobic (OTS-passivated Si) and hydrophilic (silicon) substrates, and investigated the impact of the internal morphology on water uptake. They found that thin films cast on hydrophobic substrates result in a parallel orientation of the ionomer channels, which retards the absorption of water from humidified environments. In contrast, films prepared on \ce{SiO2} result in an isotropic orientation of these domains, thus favoring water adsorption and swelling of the polymer. Wood {\em et al.}~\cite{Wood2009} observed multilayer structures of Nafion thin films in contact with smooth flat surfaces. These structures consist of separate hydrophobic and hydrophilic domains formed within the Nafion layer when equilibrated with saturated \ce{D2O} vapor. Any strong interaction between a flat surface and Nafion is likely to lead to the polymer chains lying flat on that surface, which is completely different from any bulk Nafion morphology proposed so far. When Nafion was in contact with a bare Pt surface, a hydrophobic Nafion region was found to form adjacent to the Pt film. In contrast, when a PtO monolayer was present, the hydrophobic backbone was pushed outward and the hydrophilic side chains were in contact with the PtO surface. These restructuring processes were reversible and strongly influenced by the polymer hydration. Dura {\em et al.}~\cite{Dura2009} performed Neutron Reflectometry (NR) measurements in order to investigate the structure of Nafion in contact with \ce{SiO2}, \ce{Au} and \ce{Pt} surfaces. They showed that lamellar structures, composed of thin alternating water-rich and Nafion-rich layers, exist at the interface between \ce{SiO2} and the hydrated Nafion film.
However, multilamellar structures do not exist at the Pt/Nafion or Au/Nafion (metallic) interfaces, where a single thin water-rich layer occurs. This difference indicates that the Au and Pt surfaces have a lower affinity for the sulfonic acid/water phase than the more hydrophilic \ce{SiO2} surface. These structures were interpreted in terms of an interface-induced ordering of the ribbon-like aggregates or lamellae observed in Small-Angle X-Ray Scattering (SAXS) experiments on bulk Nafion. Therefore, the first Nafion-rich layer could be formed by closely packed ribbons or lamellae, oriented with their faces parallel to the substrate, with successive layers of increasingly disordered character. Molecular dynamics (MD) simulations can also provide insights in clarifying the nanoscale structure and transport properties of Nafion at interfaces. Despite this evidence, only a few numerical studies have been dedicated to the above issues, partly due to the difficulty of convincingly parametrizing the interaction force fields between Nafion and the substrate materials. A few examples are reported in what follows. Most of the computational work has focused on the behaviour of Nafion in the presence of carbon- and platinum-based materials~\cite{Balbuena2005,Lamas2006,Liu2008,Selvan2012,Selvan2008}. These simulations showed that Nafion strongly interacts with Pt nanoparticles, mainly through the hydrophilic sulfonic chains. Mashio {\em et al.}~\cite{Mashio2010} analysed the morphology of Nafion ionomer and water in contact with graphite surfaces. Because of the hydrophobic nature of the graphite sheet and of the ionomer backbones, the Nafion ionomer was found to interact with the graphite sheet mainly via the backbones, whereas the side chains were oriented away from the graphite sheet and water molecules were adsorbed at the sulfonic acid groups.
The effect of the functionalization of a graphitized carbon sheet with carboxyl ($COOH$) or carboxylate ($COO^-$) groups on the structure and transport properties was also explored. The most significant effect on the water and ionomer distributions was shown to come from the graphite sheet functionalized with the ionized groups. It was observed that the number of water molecules, hydronium ions and sulfonic acid groups in the vicinity of the graphite sheet increases with the application of the ionized functional groups. Overall, the structure and surface properties of the carbon supports clearly affect the transport properties of protons and water. \section{Modelling} \label{sec:ionomer-model} \subsection{The ionomer model} \label{subsec:ionomer-model} The Nafion polymer is formed by a hydrophobic polytetrafluoroethylene backbone ([\ce{-CF_2-CF_2}]) and intercalated perfluorinated side-chains, which are terminated by a strongly hydrophilic sulfonic acid group ($\ce{SO_3H}$). We consider a united-atom representation for the $\ce{CF}$, $\ce{CF_2}$ and $\ce{CF_3}$ units and a fully atomistic model for the \ce{SO3-} groups in the side-chain~\cite{Urata2005}. This mixed modelling scheme is commonly used to represent Nafion~\cite{Allahyarov2007,Allahyarov2009,Cui2007,Cui2008,Vishnyakov2000,Vishnyakov2001,Liu2010}. The polymer backbone is formed by a linear chain of $160$ bonded monomers, which corresponds to a (completely extended) length of approximately $24$~nm. Ten side-chains are uniformly distributed along the backbone. Each side-chain has $11$ atoms and a length of approximately $1$~nm. The spacing between adjacent side-chains has been chosen in order to match an equivalent weight of $\sim 1143.05$~g/mol of $\ce{SO_3^-}$, a value typical of commercial Nafion 117. The simulation starts from a configuration created by randomly placing 20 polymer chains, 200 hydronium ions and a number of water molecules set according to the desired water content $\lambda$.
The system was equilibrated through a series of annealing and optimization runs. After equilibration, trajectories of at least $5$~ns were generated for the analyses. The total interaction energy of the system is the sum of non-bonded and intramolecular bonded terms. The force field parameters of our model are similar to those of the fully atomistic model of Venkatnathan~\cite{Venkatnathan2007}, adapted to the united-atom representation. The polymer backbone is charge neutral, while the sulfonic acid head groups are assumed to be fully ionized (\ce{SO3-}). In order to preserve charge neutrality, flexible hydronium complexes (\ce{H3O+}) were added, with force field parameters and partial charges taken from~\cite{Kusaka1998}. Water molecules are described by the rigid extended Simple Point Charge (SPC/E) model~\cite{Berendsen1987}. A list of all parameters is given in Table~1 in the Supporting Information. We tested the reliability of the ionomer model by performing various simulations of hydrated Nafion in the bulk and compared our results with those found in the literature. Our model is able to reproduce the general Nafion morphology and the correct dynamics of water and hydronium ions. For the interested reader, the main results for the Nafion membrane are reported in the Supporting Information. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./figure1a.pdf} \includegraphics[width=0.45\textwidth]{./figure1b.pdf} \caption{ {\em Top:} 9-3 Lennard-Jones potential function for different values of $\epsilon_{w}$, with $\sigma_{w}=0.32$~nm; the most hydrophilic case is at the bottom. {\em Bottom:} Simulated clusters formed by $3500$ water molecules in contact with supports characterized by increasing values of $\epsilon_{w}$. The increasingly hydrophilic nature of the interaction is evident. 
} \label{fig1:93lj} \end{figure} \subsection{The interaction with the support} \label{subsec:support-model} The effect of confinement due to the presence of a solid phase with given wetting properties is mimicked by the interaction potential of the ionomer with the support. The hydrophobic or hydrophilic character of a surface is related to nano-scale features, such as structure and polarity~\cite{Giovambattista2007,Castrillon2009,Nijmeijer1990}. Here we have considered a mean-field-like ionomer/substrate interaction, which allows us to precisely control the hydrophilic character of the substrate through a single tunable control parameter. This strategy has already been successfully applied in studies of molecular liquids at interfaces, such as pure water in contact with perfectly smooth walls~\cite{Scheidler2002,Spohr1988}. All system units interact with an infinite smooth unstructured wall (the support), placed at $z=0$ and parallel to the $xy$-plane, {\em via} a $9\text{--}3$ Lennard-Jones potential~\cite{Abraham1977}. This depends only on the distance, $z$, of the unit from the support: \begin{equation} V_{w}^\alpha(z)=\epsilon^\alpha_w\left[ \frac{2}{15}\left(\frac{\sigma_w^\alpha}{z}\right)^9-\left(\frac{\sigma_w^\alpha}{z}\right)^3\right]\theta(z_c-z), \label{eq:wall} \end{equation} where $z_c=1.5$~nm is a cut-off distance and $\theta$ is the Heaviside function. \begin{table}[t] \footnotesize \centering \begin{tabular}{l r r r r r r} \hline\hline $\epsilon_{w}$ (kcal/mol) & 0.125 & $^*$0.25 & 0.5 & $^*$1.0 & $^*$1.5 & $^*$2.0 \\ $\theta$ (degrees) & $163.0$ & $151.3$ & $136.3$ & $100.9$ & $69.1$ & $29.7$ \\ \hline\hline \end{tabular} \caption{Values of the water droplet contact angle at the indicated values of $\epsilon_{w}$. 
We indicate with $^*$ the values of $\epsilon_{w}$ that we will consider in our analysis of the supported thin-films.} \label{tb:contactangle} \end{table} The index $\alpha$ identifies complexes (\ce{H2O}, \ce{H3O+}, \ce{SO3-}) with a significant dipolar coupling to the (hydrophilic) support ($\alpha=\text{phyl}$), or units corresponding to the hydrophobic sections of the polymer ($\alpha=\text{phob}$) which, in contrast, interact very mildly. The energy well $\epsilon_w^\text{phob}=0.5$~kcal/mol is fixed and is the typical strength of the interaction of polymer units with a carbon sheet. This choice is justified by the observation that chemical and physical processes occurring at the surface, {\it e.g.} adsorption and chemical reactions in an operating PEMFC, can affect surface polarity~\cite{Mashio2010, Giovambattista2007}. These polarity changes do not affect the interaction with the (apolar) backbone monomers in the same way they modify the interaction with water molecules. The impact of the polarity of the substrate is therefore expected to be more important on the wall/water than on the wall/ionomer interactions. The hydrophilicity parameter $\epsilon_w^\text{phyl}=\epsilon_w$ is the control parameter, which was systematically varied in the range $0.125$ to $2.0$~kcal/mol. The typical interaction length scale is $\sigma_w^\alpha=0.32$~nm in all cases. Examples of the potential of Eq.~(\ref{eq:wall}) at the indicated values of $\epsilon_w$ are shown in Fig.~\ref{fig1:93lj} (top). \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./figure2a.pdf} \includegraphics[width=0.45\textwidth]{./figure2b.pdf} \caption{ {\em a)} Water droplet profiles at the indicated values of $\epsilon_{w}$. The solid lines are the results of the fitting procedure discussed in the text. {\em b)} Contact angles extracted from the droplet profiles; $\theta$ varies linearly in the investigated $\epsilon_{w}$ range. 
} \label{fig2:dropprofile} \end{figure} \begin{figure*}[] \centering \includegraphics[width=0.96\textwidth]{./figure3.pdf} \caption{ Lateral views of typical snapshots of hydrated Nafion thin-films at $\lambda=22$, 11 and 6, formed in contact with supports at the indicated values of the contact angle, ranging from strongly hydrophobic ($\theta=150^\circ$) to very hydrophilic ($\theta=30^\circ$). The typical film thickness is about 4.5~nm. Beads pertaining to backbones are shown in brown, those pertaining to side-chains in yellow, \ce{SO_3} groups in red, water molecules in blue and hydronium ions in white. } \label{fig3:side} \end{figure*} \subsection{Wetting properties of the support and water droplet contact angles} \label{subsec:contact-angle} In order to attach a physical meaning to the adopted values of $\epsilon_{w}$, we have performed additional simulations of water droplets gently deposited on supports described by Eq.~(\ref{eq:wall}) and calculated the corresponding contact angles, $\theta$. By convention, a value of $\theta\le\pi/2$ corresponds to a hydrophilic support, while $\theta>\pi/2$ corresponds to a hydrophobic one. Figure~\ref{fig1:93lj} (bottom) shows typical snapshots of the equilibrated water droplets at $\epsilon_{w}=$~0.25, 1.0, 1.5 and 2.0~kcal/mol. Already from visual inspection, the increasingly hydrophilic character of the support is evident. The contact angles can next be estimated by fitting the droplet profiles~\cite{Shi2009,Werder2003}. Droplet profiles for different values of $\epsilon_{w}$ are shown in Fig.~\ref{fig2:dropprofile}~(a). A circular best fit through these points is extrapolated to the wall surface and provides $\theta$. We computed $\theta$ for each value of $\epsilon_{w}$. In Fig.~\ref{fig2:dropprofile}~(b) we plot the $\epsilon_w$-dependence of the contact angle, which is linear in the investigated range. 
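The circular best fit just described can be sketched in a few lines. The following Python snippet is an illustrative reconstruction, not the analysis code actually used: it fits a circle to droplet boundary points with an algebraic least-squares (K\r{a}sa) fit and extrapolates it to the wall at $z=0$ to obtain $\theta$. The synthetic droplet profile and function names are our own.

```python
import numpy as np

def fit_circle(x, z):
    """Algebraic (Kasa) least-squares circle fit:
    x^2 + z^2 + D*x + E*z + F = 0 for boundary points (x, z)."""
    A = np.column_stack([x, z, np.ones_like(x)])
    b = -(x**2 + z**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, zc = -D / 2.0, -E / 2.0
    R = np.sqrt(xc**2 + zc**2 - F)
    return xc, zc, R

def contact_angle(zc, R):
    """Contact angle (degrees) of a circular cap truncated by the wall
    at z = 0; a circle center below the wall gives theta < 90 deg."""
    return np.degrees(np.arccos(-zc / R))

# synthetic droplet profile: circle of radius 1.0 centered 0.5 above the wall
phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
R0, zc0 = 1.0, 0.5
x = R0 * np.cos(phi)
z = zc0 + R0 * np.sin(phi)
mask = z >= 0.0                      # keep only the profile above the wall
xc, zc, R = fit_circle(x[mask], z[mask])
theta = contact_angle(zc, R)         # 120 deg: a mildly hydrophobic droplet
```

In practice the boundary points would come from the simulated droplet density profiles of Fig.~\ref{fig2:dropprofile}~(a) rather than from a synthetic circle.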
The contact angles associated with each value of $\epsilon_{w}$ are displayed in Table~\ref{tb:contactangle}. We will often refer to these values in what follows. Altogether, these data prove that our strategy is able to provide us with different scenarios for the wetting character of the substrate, ranging from strongly hydrophobic to very hydrophilic conditions. Note that these values are representative of specific materials studied in the past. For example, computer simulations of water droplets on a platinum surface show a contact angle $\theta\simeq$~20-30$^\circ$ \cite{Shi2009}. In the case of carbon nanotubes, the contact angle varies in the range 100$^\circ$ to 106$^\circ$, while for graphite it ranges from 110$^\circ$ to 115$^\circ$~\cite{Werder2003, Werder2001}. \section{Morphology of the hydrated ionomer thin-films} \label{sec:morphology} In Fig.~\ref{fig3:side} we show typical snapshots of the self-organized ionomer thin-films at the indicated values of hydration level and contact angle. Four hydrophilicity levels have been considered, encompassing very hydrophobic $(\theta \approx 150^\circ)$, intermediate $(\theta \approx 100^\circ)$, hydrophilic ($\theta \approx 70^\circ$) and strongly hydrophilic $(\theta \approx 30^\circ)$ supports. These contact angles correspond to interaction energies $\epsilon_{w}=$~0.25, 1.0, 1.5 and 2.0~kcal/mol, respectively, as detailed in Table~\ref{tb:contactangle}. The water contents considered are $\lambda=$~6, 11 and 22, typical hydration levels found in electrodes during fuel-cell operation. Side-chains (yellow beads), terminated by the \ce{SO3-} groups (red beads), decorate the interface between the backbone (brown beads) and the hydrophilic domains formed by water molecules and hydronium ions (blue and white beads, respectively). This configuration is typical of the phase-separated structure present in the bulk Nafion membrane. The film thickness is about 4.5~nm in all cases. 
By visual inspection, it is clear that the hydrophilicity of the substrate indeed controls the global morphology of the film. It is also evident that the morphology and connectivity of the hydrated domains within the film change significantly with the values of $\theta$ and $\lambda$. In what follows we analyse and quantify these changes. \subsection{Mass density distributions} \label{subsec:mass density} The structure of the ionomer film is first analysed in terms of the mass density profiles along the $z$-direction, perpendicular to the substrate. In Fig.~\ref{fig4:densityProfile} we show the polymer (left) and water (right) mass density distributions, $\rho_{p}(z)$ and $\rho_w(z)$ respectively, corresponding to the snapshots of Fig.~\ref{fig3:side}. These curves clearly show important complementary changes in the distributions of water and polymer, following the value of $\theta$. We first focus on films on top of strongly hydrophobic surfaces ($\theta=150^\circ$). In the highly hydrated film ($\lambda=22$), at short distances from the surface, {\em i.e.} $z<1$~nm, the presence of polymer is dominant, while $\rho_{w}(z)$ shows almost no presence of water molecules at distances $z<0.5$~nm (Fig.~\ref{fig4:densityProfile} (a) and (b)). In this region, $\rho_{p}(z)$ presents two well-defined peaks. In the center of the film, {\em i.e.} at distances $1.0<z<3.0$~nm, $\rho_{w}(z)$ is at its maximum value, while $\rho_{p}(z)$ is at its minimum. This suggests the formation of water domains confined between polymer-rich layers localized at the bottom and on top of the film. When the degree of hydration is decreased ($\lambda=$~11 and 6), this layered structure is less evident and the distribution of the polymer is less localized. As indicated in Fig.~\ref{fig4:densityProfile}~(e), the polymer density profile only has a shallow minimum in the latter case. 
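As an illustration of how such profiles are obtained, the sketch below bins particle heights along $z$ and normalizes each bin by its volume. This is a generic reconstruction (function names, unit choices and the toy slab configuration are ours), not the authors' actual analysis code.

```python
import numpy as np

def density_profile(z, mass, area, z_max, nbins=90):
    """Mass density profile rho(z) from particle heights `z` (nm),
    per-particle masses (g/mol) and the xy box area (nm^2).
    Returns bin centers and rho in g/mol/nm^3 (conversion to
    g/cm^3 is a constant factor)."""
    hist, edges = np.histogram(z, bins=nbins, range=(0.0, z_max), weights=mass)
    dz = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / (area * dz)

# toy check: a uniform slab of N water-like particles between z = 1 and 2 nm
rng = np.random.default_rng(0)
N, area = 100_000, 25.0                  # particles, xy area in nm^2
z = rng.uniform(1.0, 2.0, N)
m = np.full(N, 18.0)                     # g/mol
centers, rho = density_profile(z, m, area, z_max=4.5)
# inside the slab rho fluctuates around N*m/(area*1 nm) = 72000 g/mol/nm^3
```

The profiles in Fig.~\ref{fig4:densityProfile} correspond to the same operation applied separately to polymer beads and water molecules, averaged over the trajectory.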
\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{./figure4.pdf} \caption{ Mass density profiles for polymer ($\rho_{p}(z)$) and water molecules ($\rho_{w}(z)$) in the considered thin-films at $\lambda = 22$, 11 and 6, at the indicated values of the contact angle $\theta$. $z$ is the distance from the support. } \label{fig4:densityProfile} \end{figure} In the case $\theta=100^\circ$, one starts to observe the presence of water molecules in direct contact with the substrate, as shown by the appearance of a peak in $\rho_{w}(z)$ at very short $z$. This suggests that a threshold exists at a value of the contact angle in the range $100^\circ$--$150^\circ$, marking a transition from a completely hydrophobic to a mixed hydrophilic/hydrophobic character. In parallel, the intensity of the first peaks of the polymer density profile is substantially decreased. Therefore, once water molecules start to adsorb at the support, the ionomer self-organizes by increasingly moving upward, and both species populate the substrate. With decreasing $\lambda$ this balance is altered and the presence of polymer on the substrate remains dominant. In the more hydrophilic cases ($\theta=70^\circ$ and $30^\circ$), the fraction of polymer in direct contact with the substrate is strongly reduced. At $\lambda=22$, the presence of ionomer is significant only for distances $z>2.5$~nm, due to the presence of a large amount of water at the bottom which pushes the polymer upward, forming an ionomer layer in the upper part of the film. When $\lambda$ is lowered to 6, a significant fraction of the ionomer can already be found at a distance $z\simeq$~1~nm (Fig.~\ref{fig4:densityProfile}~(e)). In contrast, almost no water molecules are found in the middle of the film, in the range $1.0<z<2.5$~nm. 
This range encompasses the broad peak characterizing the polymer distribution, and water molecules are concentrated in the region corresponding to a minimum of the polymer density profile. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{./figure5.pdf} \caption{ Radial distribution functions calculated from water/water oxygen atoms ($g_{O_wO_w}(r)$) and water/hydronium oxygen atoms ($g_{O_wO_h}(r)$) at $\lambda=$~22, 11 and 6 and at the indicated values of $\theta$. Data for the membrane under the same hydration conditions are also shown, for comparison. } \label{fig5:gOw-film} \end{figure} In all cases, the positions of the two peaks in the vicinity of the wall for both $\rho_p$ and $\rho_w$ (at 0.29 and 0.55~nm for water, and 0.33 and 0.76~nm for polymer, respectively) change neither with hydration nor with surface hydrophilicity. The positions of these peaks are directly controlled by the interaction of the chemical units with the wall and, more precisely, by the parameter $\sigma_{w}=$~0.32~nm in Eq.~(\ref{eq:wall}). The distances between the two peaks (0.26~nm and 0.46~nm) are comparable with the nearest-neighbour distances between water molecules and between polymer beads and other species, respectively. Also, the oscillations in the density profiles (layering) are a typical feature of liquids at the interface with smooth walls~\cite{Spohr1989, Lee1994}. From the above analysis we can conclude that the modulation of the interaction with the support has a strong impact on local density profiles and, as a consequence, on the morphology of the thin-films. Although it is not surprising that the wetting of the support increases with its hydrophilic character, the overall density profiles are complex and extremely variable. A deeper understanding of the morphological features of these thin-films requires a more detailed analysis, which we present in what follows. 
\subsection{Radial Distribution Functions} \label{subsec:gdr} In this Section, we explore in detail the local structure of the thin-films in terms of 3-dimensional partial radial distribution functions, $g_{\alpha\beta}(r)$, between selected chemical species $\alpha$ and $\beta$, for all the investigated systems. The $g_{\alpha\beta}(r)$ are properly normalized to the entire film volume. Fig.~\ref{fig5:gOw-film} shows the $g_{\alpha\beta}(r)$ for the oxygen atoms pertaining to water/water ($g_{O_wO_w}(r)$) and water/hydronium ($g_{O_wO_h}(r)$) pairs. We observe that the positions of the peaks are very similar to those for the membrane, while the intensity of the peaks decreases when increasing the hydrophilicity of the substrate. The first coordination number of water molecules around hydronium ions is correspondingly reduced. For $\lambda=22$, it decreases from 4.37, for $\theta=150^\circ$ and in the bulk, to 3.66 for $\theta=30^\circ$, indicating that fewer water molecules are found in the vicinity of hydronium ions for the films formed on the most hydrophilic supports. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{./figure6.pdf} \caption{ Radial distribution functions for sulphur/sulphur ($g_{SS}(r)$), sulphur/water ($g_{SO_w}(r)$), and sulphur/hydronium ($g_{SO_h}(r)$) pairs for $\lambda=$~22, 11 and 6, at the indicated values of $\theta$. Data for the membrane are also shown, for comparison.} \label{fig6:gSS-film} \end{figure} The local structure around the \ce{SO3-} groups is investigated by considering the $g_{\alpha\beta}(r)$ of sulphur atoms with sulphur atoms, $g_{SS}(r)$, and with water, $g_{SO_w}(r)$, and hydronium, $g_{SO_h}(r)$, oxygen atoms. These data are shown in Fig.~\ref{fig6:gSS-film}. At variance with the cases of water and hydronium discussed above, the $g_{SS}(r)$ calculated for the different films differ markedly from the bulk case. This effect is accentuated at $\lambda=22$ (Fig.~\ref{fig6:gSS-film}). 
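For reference, a minimal $g_{\alpha\beta}(r)$ implementation can be sketched as follows. The snippet assumes a periodic cubic box with the minimum-image convention; it is a simplified reconstruction (the film geometry actually requires normalization to the film volume, as stated above, and our function names and toy configuration are not from the original analysis).

```python
import numpy as np

def rdf(pos_a, pos_b, box, r_max, nbins=100):
    """Partial radial distribution function g_ab(r) for two species in
    a periodic cubic box of side `box` (minimum-image convention).
    `pos_a`, `pos_b`: (N,3) position arrays; requires r_max < box/2."""
    d = pos_a[:, None, :] - pos_b[None, :, :]
    d -= box * np.round(d / box)               # minimum image
    r = np.linalg.norm(d, axis=-1).ravel()
    r = r[(r > 1e-9) & (r < r_max)]            # drop zero-distance self pairs
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, r_max))
    centers = 0.5 * (edges[:-1] + edges[1:])
    shell = 4.0 * np.pi * centers**2 * (edges[1] - edges[0])
    rho_b = len(pos_b) / box**3                # number density of species b
    return centers, hist / (len(pos_a) * shell * rho_b)

# sanity check: two ideal-gas species give g(r) = 1 at all r
rng = np.random.default_rng(1)
box = 4.0
pa = rng.uniform(0.0, box, (400, 3))
pb = rng.uniform(0.0, box, (400, 3))
r, g = rdf(pa, pb, box, r_max=1.5)
```

Coordination numbers such as those quoted above follow from integrating $4\pi\rho_\beta\, g_{\alpha\beta}(r)\, r^2$ up to the first minimum of $g_{\alpha\beta}(r)$.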
For $\theta=30^\circ$, the first peak is located at $0.49$~nm and an additional peak exists at $\simeq 0.7$~nm. When the degree of hydrophilicity decreases, for $\theta=100^\circ$ and $150^\circ$, the first peak shifts to $0.58$~nm, while the second one transforms into a shoulder, approaching the structureless $g_{SS}(r)$ found in the membrane. This indicates that the ionomer formed on a hydrophilic support self-organizes in such a way as to have the \ce{SO3-} groups at distances smaller than those found in the more hydrophobic cases or in the membrane. Consequently, the number of \ce{SO3-} ions lying close together is larger in the case of $\theta=30^\circ$. A possible conclusion is that for highly hydrated films ($\lambda=22$) the interaction of the film with the substrate transforms a bulk-like local structure, where \ce{SO3-} groups are less constrained and more widely spaced, into a configuration where the \ce{SO3-} groups form compact ionic domains. Both $g_{SO_w}(r)$ and $g_{SO_h}(r)$ exhibit strong correlations, similar to what is observed in the bulk (Fig.~\ref{fig6:gSS-film}). The first and second peaks are observed around $0.38$ and $0.60$~nm, and these positions vary neither with the hydrophilicity of the support nor with the hydration level of the film. Only the amplitudes of these peaks show some changes with $\theta$ and $\lambda$. From the first-shell coordination numbers of water molecules and hydronium ions around the sulphur atoms, we found that the number of water molecules surrounding the \ce{SO3-} groups decreases when the hydrophilicity of the substrate increases, while the opposite trend is observed for hydronium. As could be expected, these changes are more evident at $\lambda=22$, with water and hydronium coordination numbers varying respectively from $6.01$ and $1.45$ in the hydrophilic case to $6.94$ and $0.9$ in the hydrophobic case. These findings are consistent with the picture based on the $g_{SS}(r)$ data. 
The number of water and hydronium molecules around the sulphur atoms is always correlated with the degree of \ce{SO3-} agglomeration. Indeed, when the sulfonate ions are less agglomerated, they leave more space available for the water molecules to come closer to the \ce{SO3-} groups. Consequently, the hydronium ions are increasingly solvated. In summary, we have observed that for $\theta=30^\circ$ and $70^\circ$ sulphur atoms are found in compact agglomerates. As a consequence, around the \ce{SO3-} groups the number of water molecules decreases and the number of hydronium ions increases. This effect is more evident for the highly hydrated films ($\lambda=22$). We also conclude that the differences between the structure of the film and that of the membrane increase with the hydration level. \begin{figure*}[t] \centering \includegraphics[width=0.6\textwidth]{./figure7.pdf} \caption{ Probability distributions of $\cos(\phi_{\ce{SO3-}})$, where $\phi_{\ce{SO3-}}$ is the angle formed by the \ce{SO3-} orientation vector $\hat{u}_{\ce{SO3-}}$ and the normal to the support, $\hat{z}$. The distributions are calculated in slabs of thickness $0.3$~nm parallel to the substrate and at the indicated distances from the support, $z$ (in nm). In the first slab, one can observe the inversion of the \ce{SO3-} orientation when decreasing $\theta$, as discussed in the text. } \label{fig7:so3orient} \end{figure*} \subsection{Molecular orientation profiles} \label{subsec:orientation} To further elucidate both global and local features of the deposited thin-films, the orientational order of the sulfonic acid groups in regions of the films at different distances from the support was extensively investigated. Similar information about the orientational order of water molecules has already been reported in Ref.~\cite{Borges2013}. 
There, we have shown that the orientation of water molecules is mainly driven by the interaction with the support, similar to the case of water molecules near smooth Lennard-Jones walls~\cite{Spohr1988,Glebov1997,Tatarkhanov2009}. The orientation of the \ce{SO3-} groups at different distances from the support was quantified as follows: the films have been partitioned into partially overlapping slabs parallel to the support, with a thickness $\delta z=0.3$~nm. In each slab we have calculated the probability distribution $P(\cos(\phi_{\ce{SO3-}}))$, with $\cos(\phi_{\ce{SO3-}})=\hat{u}_\ce{SO3-}\cdot\hat{z}$. Here, $\hat{z}$ is the unit vector normal to the support and the unit vector $\hat{u}_{\ce{SO3-}}$ is oriented normal to the plane formed by the three oxygen atoms and points toward the sulphur atom. The \ce{SO3-} orientations at different distances from the support are crucial to elucidate the global ionomer orientation. As a reference, for $\cos (\phi_{\ce{SO3-}}) = 1$, the three oxygen atoms face the support and lie in the $xy$-plane. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{./figure8.pdf} \caption{ Average of $\cos(\phi_{\ce{SO3-}})$ as a function of the distance from the surface in the films at (a) $\lambda=22$, (b) 11 and (c) 6. } \label{fig8:avcos-so3} \end{figure} In Fig.~\ref{fig7:so3orient} we show $P(\cos(\phi_{\ce{SO3-}}))$ for the investigated films at the indicated values of $\theta$, $\lambda$ and distances from the support. Clearly, $P(\cos(\phi_{\ce{SO3-}}))$ depends on the degree of hydrophilicity of the support. Focusing on the first layer, it is evident that in the most hydrophobic ($\theta=150^\circ$) and most hydrophilic ($\theta=30^\circ$) cases, the \ce{SO3-} groups are oriented in opposite directions. In the first case, the side-chains are oriented with the sulfonate groups pointing away from the substrate, while in the second case, they point toward the substrate. 
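The construction of $\hat{u}_{\ce{SO3-}}$ described above can be illustrated with a short sketch (a schematic reconstruction with idealized coordinates, not the actual analysis code): the normal to the oxygen plane is obtained from a cross product and flipped, if necessary, so that it points toward the sulphur atom.

```python
import numpy as np

def so3_orientation(s_pos, o_pos):
    """Unit vector normal to the plane of the three oxygen atoms of one
    SO3- group, oriented so that it points toward the sulphur atom.
    `s_pos`: (3,) sulphur position; `o_pos`: (3,3) oxygen positions."""
    n = np.cross(o_pos[1] - o_pos[0], o_pos[2] - o_pos[0])
    n /= np.linalg.norm(n)
    if np.dot(n, s_pos - o_pos.mean(axis=0)) < 0.0:
        n = -n                       # flip the normal toward the sulphur
    return n

# idealized geometry: oxygens in the xy-plane, sulphur above them along +z,
# so the oxygens face the support at z = 0 and cos(phi) = +1
oxy = np.array([[1.0, 0.0, 0.0],
                [-0.5,  np.sqrt(3.0) / 2.0, 0.0],
                [-0.5, -np.sqrt(3.0) / 2.0, 0.0]])
s = np.array([0.0, 0.0, 0.3])
u = so3_orientation(s, oxy)
cos_phi = np.dot(u, np.array([0.0, 0.0, 1.0]))
```

In the actual analysis, this quantity is histogrammed within each $0.3$~nm slab to build $P(\cos(\phi_{\ce{SO3-}}))$.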
In the intermediate cases ($\theta=100^\circ$ and $70^\circ$), the $P(\cos(\phi_{\ce{SO3-}}))$ are peaked around $-0.5$. Therefore, the three oxygen atoms point in the direction of the ionomer, with the \ce{SO3-} vector forming an angle of about $60^\circ$ with the normal to the support. This orientation corresponds to side-chains aligned horizontally with respect to the substrate. Side-chain configurations orthogonal and parallel to the support are called ``standing'' and ``lying'', respectively, and have also been observed in previous simulations of the ionomer placed on top of platinum surfaces~\cite{Cheng2010,Selvan2008}. When decreasing hydration, the degree of ionomer orientational order decreases. It is interesting to note that, in the case of $\theta=70^\circ$, the side-chains are first found in the lying position at $\lambda=22$, gradually shifting to standing configurations at $\lambda=6$. This indicates that the water content also plays an important role in determining the side-chain orientation. Indeed, in this particular low-$\lambda$ case, most of the water molecules are in contact with the substrate and, consequently, the ionomer self-organizes to maximize the fraction of \ce{SO3-} groups in direct contact with water. Details of the interface between water domains and side-chains will be further discussed below. The data shown in Fig.~\ref{fig7:so3orient} also show that the \ce{SO3-} groups are characterized by different preferential orientations in different regions of the film. To be more specific on this point, the evolution of the average value $\langle \cos(\phi_{\ce{SO3-}}) \rangle$ across the film is illustrated in Fig.~\ref{fig8:avcos-so3}. Interestingly, side-chain orientation inversions at particular distances are evident under some conditions. This inversion is particularly clear in the cases corresponding to $\lambda=22$ (Fig.~\ref{fig8:avcos-so3}~(a)) for $\theta=150^\circ$ and $100^\circ$. 
Here, $\langle \cos(\phi_{\ce{SO3-}})\rangle$, which is negative in the regions close to the support, steadily increases across the central region of the film, eventually assuming positive values in the regions furthest from the support. Also interesting are the cases of the films at $\lambda=6$ formed on very hydrophilic supports (Fig.~\ref{fig8:avcos-so3}~(c)). For $\theta=70^\circ$ and $30^\circ$, two inversions of the average side-chain orientation are observed. Strong correlations exist in this case with the water density profiles shown in Fig.~\ref{fig4:densityProfile}~(f). Indeed, we observe minima of $\langle \cos(\phi_{\ce{SO3-}})\rangle$ at $z\simeq$~2.25-2.75~nm, which have a significant overlap with the region where water pools have been observed ($z\simeq$~2.5-3.5~nm). This observation additionally supports the idea that side-chain orientation is mainly governed by the non-trivial distribution of water domains inside the film. Another observation originating from the data of Fig.~\ref{fig8:avcos-so3} is that at distances larger than $3$~nm, side-chain sulfonic acid groups always point toward the support, independently of the values of $\theta$ and $\lambda$. This side-chain alignment at the top of the film is attributed in part to the ionomer/air interface. We will come back to this point in what follows. In summary, our results demonstrate that the interaction of water molecules with the support determines the side-chain orientation. Indeed, the \ce{SO3-} groups must be embedded in water domains, to minimize the surface tension at the interface between the hydrophobic polymer backbone and water~\cite{Moore2004}. Therefore, although $\theta$ plays only a mild role in the orientational properties of water molecules (as we demonstrated in Ref.~\cite{Borges2013}), it has a strong impact on side-chain orientation. 
This information is very important for what follows, where we propose a general qualitative picture for the morphology of supported Nafion thin-films. In the next Section we complete our investigation by characterizing the formation of ionic clusters across the film. \begin{table}[b] \centering \begin{tabular}{ccccc} \hline \hline $\lambda\backslash\theta(^\circ)$ & $150$ & $100$ & $70$ & $30$\\ \hline \hline 22 & 1.71 & 1.87 & 3.02 & 2.96\\ 11 & 3.38 & 3.10 & 3.65 & 3.33\\ 6 & 5.85 & 5.64 & 4.96 & 4.99\\ \hline\hline \end{tabular} \caption{ Average \ce{SO3-} cluster sizes for the ionomer thin-films at the indicated values of the hydrophilicity degree $\theta$ and hydration level $\lambda$.} \label{tb:clustersize} \end{table} \begin{figure}[b] \centering \includegraphics[width=0.45\textwidth]{./figure9.pdf} \caption{ Average cluster size (left) and mass density distributions (right) for the \ce{SO3-} groups as a function of the distance from the support, $z$, at the indicated values of $\lambda$ and $\theta$.} \label{fig9:ionic-clusterZ} \end{figure} \subsection{Formation of ionic clusters} \label{subsec: ionic-clusters} We have shown above that the films present different \ce{SO3-} packing features, {\em i.e.}, both coordination numbers and minimum distances between \ce{SO3-} groups (Fig.~\ref{fig6:gSS-film}) change in the different investigated cases. Here we conclude our analysis by focusing on the features of the ionic clusters. This information is important for proposing a general picture of the morphology of the supported films in different hydration conditions and for different wetting natures of the substrate. We have identified ionic clusters by grouping together \ce{SO3-} groups separated by a distance of less than a cut-off $r_c=0.64$~nm. The clustering analysis provides us with the probability distribution of the cluster size, i.e., the number of molecules pertaining to the same cluster. 
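The clustering criterion just stated (two \ce{SO3-} groups belong to the same cluster if they are within $r_c$ of each other, directly or through a chain of neighbours) amounts to finding the connected components of the neighbour graph. A minimal $O(N^2)$ breadth-first-search sketch, with a toy configuration of our own, reads:

```python
import numpy as np
from collections import deque

def cluster_sizes(pos, r_cut):
    """Sizes of the connected clusters of points: two points are in the
    same cluster if closer than `r_cut`, directly or through a chain of
    neighbours. Breadth-first search over the full distance matrix."""
    n = len(pos)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = (d < r_cut) & ~np.eye(n, dtype=bool)
    seen, sizes = np.zeros(n, dtype=bool), []
    for i in range(n):
        if seen[i]:
            continue
        queue, size = deque([i]), 0
        seen[i] = True
        while queue:
            j = queue.popleft()
            size += 1
            for k in np.flatnonzero(adj[j] & ~seen):
                seen[k] = True
                queue.append(k)
        sizes.append(size)
    return sizes

# toy configuration: a chain of three close groups plus one isolated group
pos = np.array([[0.0, 0.0, 0.0],
                [0.5, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [3.0, 0.0, 0.0]])
sizes = cluster_sizes(pos, r_cut=0.64)   # -> clusters of size 3 and 1
```

Note that the first and third groups of the chain are $1.0$~nm apart, beyond $r_c$, yet belong to the same cluster through the middle group.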
If a \ce{SO3-} group has no neighbours within the cut-off distance, it is counted as an isolated cluster of size $1$. In Table~\ref{tb:clustersize} we show the average cluster size for all the investigated films. At fixed $\theta$, the cluster size decreases with increasing water content, which is an expected effect due to film swelling: an increasing number of water molecules intercalates between adjacent side-chains, so that \ce{SO3-} groups form less compact agglomerates and isolated groups are found with a higher probability. The degree of hydrophilicity also affects the average cluster size in a non-trivial fashion, which possibly depends on the details of the morphology of the considered film. This result seems to be at odds with a visual inspection of the snapshots shown in Fig.~\ref{fig3:side}, where quite extended regions of condensation of \ce{SO3-} groups are evident in particular regions of the films. To better clarify this point, we computed the average cluster size in different regions of the film, as a function of the distance $z$ from the substrate. In Fig.~\ref{fig9:ionic-clusterZ} we plot the average cluster sizes $\langle S_{\ce{SO3-}}(z)\rangle$ (left), together with the sulfonic acid mass density distributions $\rho_{\ce{SO3-}}(z)$ (right). This helps us identify the regions where the presence of \ce{SO3-} groups is relevant. For all values of $\lambda$, at $\theta=30^\circ$ and $70^\circ$, the $\langle S_{\ce{SO3-}}(z)\rangle$ curves clearly indicate the formation of very extended clusters at distances larger than $2$~nm from the support, in the top part of the film, closer to the ionomer/air interface. This is consistent with the high \ce{SO3-} mass density in this region. However, we also note that, for the cases $\theta=150^\circ$ and $100^\circ$, the distribution of average cluster sizes does not show any pronounced peak, despite the presence of well-defined maxima in the $\rho_{\ce{SO3-}}$ curves. 
In conclusion, the formation of \ce{SO3-} clusters seems not to be simply determined by the distribution of the \ce{SO3-} groups, but is apparently controlled by the details of the morphology of the film. We also emphasize that ionic clustering should play a crucial role in water dynamics. In general, the clustering of \ce{SO3-} groups has a strong impact on hydrogen bonding between side-chains, and determines both water binding and the different mechanisms of proton transport~\cite{Kreuer2000,Elliott2007}. \subsection{Water clusters and connectivity} \label{sec: water-domains} We now focus on the topology of the domains formed by the water molecules, and investigate both the shape and the connectivity of the hydrated domains. We have characterized the water mass density distributions in planes parallel to the substrate by partitioning the film into four slabs of thickness $1.2$~nm and computing the projected water density distributions on the $xy$-plane, averaged over the trajectory. Our data are plotted in the form of color maps in Fig.~\ref{fig10:map-water}. Here a lighter color (yellow) identifies regions where the water density is higher, while darker colors characterize regions where the presence of ionomer is significant. \begin{figure}[] \centering \includegraphics[width=0.38\textwidth]{./figure10.pdf} \caption{ Contour plots of the water density for $\lambda=$~22 (a), 11 (b) and 6 (c), calculated in slabs at the indicated distances from the substrate. } \label{fig10:map-water} \end{figure} We first consider the maps in Fig.~\ref{fig10:map-water} for the most hydrophobic cases ($\theta=150^\circ$). Water is concentrated in the second and third slabs and, at $\lambda=22$, a quite homogeneous distribution suggests that water molecules form a single layer parallel to the support, confined by two ionomer layers separated by a distance of about $2.4$~nm. 
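The slab-projected maps described above can be sketched by selecting the molecules in each slab and histogramming their $xy$ positions. The snippet below is an illustrative reconstruction (slab boundaries, box size and the toy configuration are ours, not taken from the simulations).

```python
import numpy as np

def slab_density_maps(pos, z_edges, xy_box, nbins=64):
    """Projected xy number-density maps in slabs parallel to the support.
    `pos`: (N,3) positions (nm); `z_edges`: slab boundaries along z."""
    maps = []
    for z0, z1 in zip(z_edges[:-1], z_edges[1:]):
        sel = (pos[:, 2] >= z0) & (pos[:, 2] < z1)
        h, _, _ = np.histogram2d(pos[sel, 0], pos[sel, 1],
                                 bins=nbins, range=[[0, xy_box], [0, xy_box]])
        maps.append(h / ((xy_box / nbins)**2 * (z1 - z0)))
    return maps

# toy film: water confined between z = 1.2 and 2.4 nm (the second slab)
rng = np.random.default_rng(2)
n = 20_000
pos = np.column_stack([rng.uniform(0.0, 10.0, n),
                       rng.uniform(0.0, 10.0, n),
                       rng.uniform(1.2, 2.4, n)])
maps = slab_density_maps(pos, z_edges=np.arange(0.0, 4.9, 1.2), xy_box=10.0)
```

Averaging such maps over the trajectory, and rendering them as color maps, yields plots of the kind shown in Fig.~\ref{fig10:map-water}.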
The side-chains pertaining to the facing ionomer layers point toward the water layer, with the Nafion chains adopting a "sandwich" morphology. In contrast, when decreasing water content, the water pool tends to be concentrated in the central region of the film, surrounded by the ionomer. This is particularly evident for $\lambda=6$, where water molecules form an elongated domain, suggesting an inverted-micelle morphology, with ellipsoidal or cylindrical micelles oriented parallel to the substrate. In the intermediate case, $\theta=100^\circ$, although we do not observe any percolating water-rich region that could be considered a continuous water layer, water can still form extended agglomerates in the three slabs closer to the wall. For $\lambda=6$, these water "pools" are well delimited and seem to be connected in adjacent slabs. We can also observe a few ionomer "barriers" (indicated by the darker color in the middle of the maps) connecting hydrophobic domains in adjacent slabs. At high hydration, $\lambda=22$, the formation of "pools" is less clear, water being quite homogeneously distributed in all regions, with the ionomer well hydrated everywhere. In the most hydrophilic cases, $\theta=30^\circ$ and $70^\circ$, the water distributions are similar, and the largest water domain forms in contact with the substrate, as expected. For $\lambda=22$, the amount of water is also significant in the second slab. This suggests that water forms a thick continuous layer between the substrate and the ionomer, which accumulates on top of the film, at the interface with air. As a result, these films adopt a completely phase-separated bi-layer configuration. When $\lambda$ decreases, water domains become less homogeneous already beyond the first considered layer, and the formation of disconnected pools in the middle of the film is observed.
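The slab-projected water density maps discussed above amount to a 2D histogram of the water positions within each slab. A minimal numpy sketch (array names and the lateral box dimensions are illustrative; the four slabs of thickness $1.2$~nm match the partition described in the text):

```python
import numpy as np

def slab_density_maps(water_xyz, box_xy, film_height=4.8, n_slabs=4, bins=32):
    """Partition the film into `n_slabs` slabs along z and compute the
    projected (xy) water number density in each slab, in molecules per
    nm^2 of the xy-plane, for a single frame."""
    edges_z = np.linspace(0.0, film_height, n_slabs + 1)
    cell_area = (box_xy[0] / bins) * (box_xy[1] / bins)
    maps = []
    for z0, z1 in zip(edges_z[:-1], edges_z[1:]):
        in_slab = (water_xyz[:, 2] >= z0) & (water_xyz[:, 2] < z1)
        h, _, _ = np.histogram2d(
            water_xyz[in_slab, 0], water_xyz[in_slab, 1],
            bins=bins, range=[[0.0, box_xy[0]], [0.0, box_xy[1]]])
        maps.append(h / cell_area)
    return maps  # list of (bins, bins) arrays, one per slab

# toy frame: one molecule in the first slab, one in the third
water = np.array([[0.1, 0.1, 0.5], [0.1, 0.1, 3.0]])
maps = slab_density_maps(water, box_xy=(6.4, 6.4))
```

Averaging such per-frame maps over the trajectory yields the color maps of the figure.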
For $\lambda=6$, water is mostly concentrated in the first and third slabs, suggesting a morphology with alternating water-poor and water-rich layers. Also, a single narrow water channel forms, directly connecting the two otherwise disconnected water domains. We finally observe that in all cases the fourth (furthest) slab is not populated by water molecules, consistent with a hydrophobic interface with air, mostly composed of the ionomer backbones with the side-chains pointing toward the substrate~\cite{Bass2010}. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{./figure11.pdf} \caption{ Qualitative picture of film morphologies, at different values of $\theta$, ranging from highly hydrophobic (top) to very hydrophilic (bottom) and different hydration levels $\lambda$ (high and low hydration on the left and right, respectively). } \label{fig11:scheme-film} \end{figure} \subsection{A qualitative picture for morphology} \label{subsec:general-morphology} Based on the analysis presented in the previous Sections, we are now in the position to draw a general picture of the morphology of the supported hydrated Nafion thin films, at different hydration levels and for varying wetting nature of the support. Despite the qualitative nature of our conclusions, this is the most important message of the present work. We schematically represent the expected morphology of the thin films in the different conditions as cartoons in Fig.~\ref{fig11:scheme-film}. The \ce{SO3-} groups are represented by red beads, side chains by spring-like symbols, and polymer backbones by solid black lines. Water pools are the blue domains. In summary, with reference to the wetting character of the support, we classify the typical morphologies in three classes: {\bf 1.~Hydrophobic} The film at high hydration (left) shows a typical "sandwich" structure, constituted by a sequence of layers of different nature (Fig.~\ref{fig11:scheme-film}~(a)).
This is in agreement with the experimental observations of Refs.~\cite{Dura2009,Wood2009}. Nafion backbones are therefore in direct contact with the substrate, with the sulfonic acid groups pointing upward, toward the water domain. On top of the water pool, a reversed sulfonic groups/polymer backbone structure is observed, with a completely hydrophobic film/air interface. At low water content (right), the ionomer folds around the water domain, forming an inverted-micelle structure, reminiscent of the experimental observations of Refs.~\cite{Bass2010, Bass2011}. More precisely, in our simulations the ionomer folds into inverted-micelle cylinders of diameter $\simeq 4$~nm with the symmetry axis parallel to the support, as one can observe in the water maps in Fig.~\ref{fig10:map-water}. {\bf 2.~Intermediate} In this case the ionomer film organizes into a configuration with interconnected water "pools" (Fig.~\ref{fig11:scheme-film}~(b)). The film/substrate interface is characterized by the presence of both ionomer and water, while the film/air interface still has a hydrophobic character, with the side-chains of the ionomer pointing toward the substrate. The hydration level mostly impacts the size of the water pools, which decreases upon decreasing $\lambda$. In general, the local structure of the film in this case is very similar to that of the membrane, and no evident phase separation parallel to the support is present. {\bf 3.~Hydrophilic} Thin films in contact with very hydrophilic substrates are organized in well-separated water and ionomer layers (Fig.~\ref{fig11:scheme-film}~(c)). In high hydration conditions (left), water floods the substrate and the ionomer accumulates at the top, with the hydrophobic polymer backbone in contact with air. For lower values of $\lambda$ (right), the ionomer approaches the support.
This behavior is not driven by a direct interaction with the substrate, but rather indirectly by the interaction of the side chains with the water layer in contact with the support. In this case the film can adopt a multilamellar configuration, with multiple water layers parallel to the substrate and separated by ionomer domains. Adjacent water layers can be locally connected by water channels, which form dynamically but seem to be quite stable. This picture originating from our data is also consistent with the experimental observations of Refs.~\cite{Dura2009,Wood2009}, where the authors discovered lamellar structures, formed close to hydrophilic substrates and composed of alternating water-rich and Nafion-rich thin layers. We conclude this Section by observing that in this work we have considered very thin films, of thickness about $4.5$~nm, and have therefore shown that the wetting nature of the support strongly impacts morphology on length scales of the order of a few nanometers. However, we have also underlined that our qualitative picture seems to be in agreement with experimental observations on films of much larger thickness. We therefore conjecture that the structure of real films could be the result of a geometrical tiling, where the local building blocks are morphologies similar to the ones of Fig.~\ref{fig11:scheme-film}. How this tiling extends from the substrate to the ionomer/air interface in real systems is an open issue. In what follows we will discuss how the qualitative features summarized above can be relevant for PEMFC technology. \section{Nafion thin-films morphology and PEMFC technology} \label{sec:PEMFC-technology} In this Section we discuss the relevance of our findings for the understanding of the catalyst layer features, a crucial issue in PEMFC technology. From our analysis, the ionomer morphology is expected to impact the catalyst layer activity as follows.
First, a strong effect can be envisaged on the transport features of water and hydronium complexes close to the catalyst and the catalyst/support interfaces. Indeed, we have shown in our previous publication~\cite{Borges2013} that complex morphology changes can result in a highly heterogeneous transport behaviour of water across the film. In particular, the extent of the heterogeneity seems to be directly controlled by the wetting character of the substrate and increases steadily with increasing hydrophilic character of the support~\cite{Borges2013}. Second, our findings could also be relevant for a better understanding of the ionomer/catalyst interface. This is the region where the electrochemical reactions governing PEMFC operation take place. In the actual device, two phenomena directly affect the reaction kinetics: the adsorption of chemical species and the formation of the electrochemical double layer. Detailed descriptions of these mechanisms are not possible at our level of description, which cannot account for electrochemical activity. We can, however, speculate about the impact of the ionomer structural organization on these phenomena. Third, analysis of the (top) film/air interface is relevant to understand its impact on the transport of water and reactant gases inside the catalyst layer pores (in the CL gas phase). The upper surface of the film plays an important role in the hydrophilicity of the catalyst layer pores, which in turn impacts water management during operating conditions. Moreover, the reactant gases in the gas phase ({\it e.g.,} \ce{O2} and \ce{H2}) must cross the film in order to reach the catalyst surface where the reactions take place. Below we will describe the ionomer/air interface and its possible impact on water and gas absorption and on water management. In what follows we explore these points in detail, by characterizing the interfacial regions, {\it i.e.,} immediately adjacent to the substrate and at the top of the film.
We will first analyse ionomer adsorption and overall substrate coverage for the different wetting natures of the support. Next, we will investigate the main features of the charge distribution close to the substrate. Finally, we will characterize the ionomer/air interface. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth,angle=-90,origin=c]{./figure12.pdf} \caption{ Snapshots of the adsorption region, which extends to $z\simeq 0.56$~nm from the support. The backbone segment beads are plotted in brown, the hydrophobic side-chain segments in yellow, the \ce{SO3-} groups in red, water molecules in blue, and hydronium complexes in white. } \label{fig12:snapshot-adsorptionRegion} \end{figure} \subsection{Ionomer adsorption} \label{subsec:film-wall} In the CL, the catalyst (Pt and/or Pt-alloy) surfaces can react with water, hydronium ions or other chemical species~\cite{Subbaraman2010}. Although in this work the electrochemical reactivity of the substrate is not accounted for, we are in the position to characterize the overall surface coverage. This should depend on the details of the ionomer distribution immediately adjacent to the substrate, which corresponds to the first peak in the mass density profiles of Fig.~\ref{fig4:densityProfile}. In Fig.~\ref{fig12:snapshot-adsorptionRegion} we show typical snapshots of the adsorption region, which extends to $z\simeq 0.56$~nm from the support. In the case of hydrophobic substrates, $\theta=150^\circ$, and at any degree of hydration, the ionomer is adsorbed via the backbone, as also observed in simulations of an ionomer adsorbed on graphitized carbon sheets~\cite{Mashio2010}. In the case of intermediate hydrophilicity, $\theta=100^\circ$, a more balanced presence of water, backbone segments and side-chains is observed.
In the most hydrophilic cases, $\theta=70^\circ$ and $30^\circ$, limited adsorption of the ionomer is still observed, which takes place via the sulfonate groups (red beads in Fig.~\ref{fig12:snapshot-adsorptionRegion}). \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{./figure13.pdf} \caption{ Ionomer backbone, water, side-chain, \ce{SO3-} and hydronium complex coverages as a function of the substrate contact angle, at the indicated values of the hydration levels. } \label{fig13:coverage} \end{figure} The average substrate coverages for the ionomer backbones, \ce{H2O}, side-chains, \ce{H3O+} and \ce{SO3-} groups are shown in Fig.~\ref{fig13:coverage} for all the thin films investigated. The coverage is defined here as the number of molecules within the adsorption region per unit area. The data in Fig.~\ref{fig13:coverage} clearly show an inversion of surface coverage following the hydrophilicity degree of the support. In contrast, the water content does not seem to significantly modify the ionomer backbone or water coverages. Indeed, on decreasing the water content from $\lambda=22$ to $\lambda=6$, the backbone coverage changes from $14.65$ to $15.72$~molecules/nm$^2$ for the most hydrophobic case, while the water coverage reduces from $12.80$ to $11.56$~molecules/nm$^2$ for the most hydrophilic case. The reduction of the water coverage is compensated by the increase of the \ce{H3O+} and \ce{SO3-} coverages: the \ce{SO3-} coverage increases from $0.003$ to $0.007$~molecules/nm$^2$, while the \ce{H3O+} coverage changes from $0.008$ to $0.015$~molecules/nm$^2$. Hence, the number of adsorbed \ce{SO3-} groups is higher for $\lambda=6$ and 11, and they are well dispersed on the surface. In contrast, for $\lambda=22$, the \ce{SO3-} groups can be found in more agglomerated configurations.
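The coverage defined above, i.e. the number of molecules within the adsorption region per unit area, can be sketched for a single frame as follows (the coordinates, box size, and the $0.56$~nm adsorption cut-off are illustrative):

```python
import numpy as np

def coverage(z_coords, box_xy, z_ads=0.56):
    """Number of molecules with z below the adsorption cut-off,
    per unit xy area (molecules/nm^2), for one frame."""
    n_adsorbed = int(np.count_nonzero(np.asarray(z_coords) < z_ads))
    return n_adsorbed / (box_xy[0] * box_xy[1])

# toy frame: three backbone beads, two of which lie in the adsorption region
z_backbone = [0.2, 0.4, 1.1]
cov = coverage(z_backbone, box_xy=(2.0, 2.0))  # 2 molecules / 4 nm^2
```

Averaging this quantity over the trajectory, for each species separately, gives the curves of the coverage figure.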
Overall, Figs.~\ref{fig12:snapshot-adsorptionRegion} and~\ref{fig13:coverage} further corroborate our previous observation of a transition from a predominant backbone coverage to a predominant water coverage when increasing the hydrophilic character of the substrate. However, even for the most hydrophilic cases adsorption of the ionomer is still observed, and occurs mainly via the \ce{SO3-} groups. The adsorption of \ce{SO3-} is more evident when the hydration of the film is lower. During PEMFC operation, the oxidation and reduction reactions occurring on top of the catalyst surfaces strongly depend on the surface coverage of reactants and spectator species~\cite{Franco2006,Malek2011a,DeMorais2011}. Our results show that water molecules and hydronium ions can be found away from the catalyst surface when the wetting nature of the substrate is not favourable. The adsorption of the ionomer could block the adsorption of reactant species, reducing the area where the electrochemical reaction occurs. Note that this behaviour is usually overlooked when addressing the issue of increasing Pt utilization in PEMFCs. Also important for PEMFC development is to clarify the impact of ionomer adsorption on the oxygen reduction reaction (ORR) mechanism. It is well known that the kinetics of the ORR is sensitive to the nature of the adsorption of spectator species~\cite{Markovic2002}. For example, specific adsorption of sulfonate anions has an important deactivation effect on the ORR. The extent of this feature correlates with the strength of the catalyst-sulfonate bond (the strength of \ce{SO3-} adsorption)~\cite{Subbaraman2010, Subbaraman2010a}. Various factors can influence the chemical nature of \ce{SO3-} adsorption, including the nature of the counter-cation, the extent of \ce{SO3-} agglomeration within the ionomer, and the length of and spacing between adjacent side chains along the backbone. Our results show that the \ce{SO3-} groups are adsorbed in different configurations, {\it e.g.,} both clustered and dispersed.
This should affect the chemical nature of the \ce{SO3-} adsorption, and ultimately the electrochemical potential that drives the electrochemical reactions. To conclude this Section, we observe that cell reactions are also governed by the structural properties of the Electrical Double Layer (EDL) formed close to the electrode surface~\cite{Quiroga2014}. Unfortunately, the standard electrochemical theories normally used to describe the EDL completely ignore the heterogeneous environment created by the adsorbed ionomer, which affects both charge and potential distributions~\cite{WangLRoudgarA2009, Krapf2006, Zhdanov2006561, Zhdanov2004, Zhdanov2008, Biesheuvel2009}. In contrast, our findings clearly show that the ionomer dictates the distribution of charges very close to the surface (as indicated by the ionic distributions shown in Fig.~\ref{fig9:ionic-clusterZ}) and that, as a consequence, the over-potential at the reactant-electrode distance ($\sim$~0.2-0.5~nm) is also affected. Moreover, considering the different ionomer morphologies that may be found inside the CL, it is fair to say that the reaction rates are far from being uniformly distributed inside the CL. Our results therefore strongly support the existence of a non-uniform spatial distribution of reaction rates, due to the complexity of the ionomer structure. An effective control of the ionomer morphology could thus provide a valuable path for further development of PEMFC technology, optimizing the electrochemical interface and reducing ionomer inhibition. \subsection{The ionomer/vacuum interface} \label{subsec:film-air} The morphology of the Nafion/vacuum interface has recently received special attention, also due to its importance in ionomer water uptake~\cite{Bass2011}. This interface includes the hydrophobic ionomer backbones, which are exposed to the gas phase, and the underlying hydrophilic side-chains, pointing toward the water-rich domains.
It is considered responsible for the so-called Schroeder's paradox, {\em i.e.,} the different Nafion water uptake from a liquid solvent or its vapour~\cite{Freger2009, Choi2003}. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth,angle=-90,origin=c]{./figure15.pdf} \caption{ Colour maps of the wetting character of the Nafion thin-film ionomer/vacuum interface. Hydrophobic and hydrophilic regions are in yellow and blue, respectively. The technique used for determining the maps is described in detail in the text. } \label{fig:filmsurfacemap} \end{figure} \begin{figure*}[] \centering \includegraphics[width=0.9\textwidth]{./figure16.pdf} \caption{ $xy$-contour maps of the $z$-position of atoms at the ionomer/vacuum interface for thin-films at $\theta=70^\circ$ and hydration levels $\lambda=$ 22 (a), 11 (b), and 6 (c).} \label{fig:roughnessmap} \end{figure*} In order to explore the wetting nature of the ionomer/vacuum interface, we have determined spatial color maps of the local hydrophilic/hydrophobic character of the interface. In our calculations we have considered the atoms pertaining to the polymer backbones and side-chains (other than the sulfonate groups) as hydrophobic, while the hydrophilic species included the sulfonate groups, water molecules and hydronium ions. We have identified the ionomer/vacuum interface as the region with $3.0\le z\le 4.5$~nm. This region was partitioned into a regular grid, with cells of volume $0.2\times0.2\times1.5$~nm$^3$, for all considered cases. We next attributed to each cell $i$ the difference between the volumes associated with the hydrophobic and hydrophilic atoms in the cell, $\delta V^i = V^i_{phobic}-V^i_{philic}$. A negative value of $\delta V^i$ therefore corresponds to a mostly hydrophilic cell, a positive value to a hydrophobic one. The volume associated with each atom was computed by taking the value of the corresponding Lennard-Jones interaction parameter $\sigma$ as the effective diameter of the atom.
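The per-cell volume balance $\delta V^i$ described above can be sketched as follows. This is a simplified single-frame version: the species flags, $\sigma$ values, and grid spacing are illustrative, and each atom contributes the volume of a sphere of diameter $\sigma$, as in the text.

```python
import numpy as np

def wetting_map(xy, sigma, is_hydrophobic, box_xy, cell=0.2):
    """Per-cell volume difference dV = V_phobic - V_philic for the atoms
    in the interfacial region, on a regular xy grid. dV > 0 marks a
    mostly hydrophobic cell, dV < 0 a mostly hydrophilic one."""
    nx = int(round(box_xy[0] / cell))
    ny = int(round(box_xy[1] / cell))
    dV = np.zeros((nx, ny))
    # volume of a sphere whose diameter is the Lennard-Jones sigma
    vol = (np.pi / 6.0) * np.asarray(sigma) ** 3
    pos = np.asarray(xy)
    ix = np.clip((pos[:, 0] / cell).astype(int), 0, nx - 1)
    iy = np.clip((pos[:, 1] / cell).astype(int), 0, ny - 1)
    for i, j, v, phob in zip(ix, iy, vol, is_hydrophobic):
        dV[i, j] += v if phob else -v
    return dV

# toy frame: one hydrophobic and one hydrophilic atom in different cells
dV = wetting_map([(0.05, 0.05), (1.05, 0.05)],
                 sigma=[0.4, 0.4],
                 is_hydrophobic=[True, False],
                 box_xy=(2.0, 2.0))
```

Averaging $\delta V^i$ over an ensemble of configurations and color-coding its sign produces maps of the kind shown in the figure.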
We considered an average over an ensemble of $10^3$ configurations for each film. In Fig.~\ref{fig:filmsurfacemap} we show the wetting maps for all the films considered. The color range interpolates from strongly hydrophobic (yellow) to very hydrophilic (violet) regions. The thin films clearly present a compact and extended hydrophobic layer on top in all cases, as already demonstrated above. However, violet regions are evident for $\theta=150^\circ$ and $100^\circ$ at high values of $\lambda$, which result from a significant number of water molecules accumulating immediately below the polymer backbone. In contrast, films with $\theta=70^\circ$ and $30^\circ$ present hydrated regions of very limited extent. These data suggest that the hydrophobicity of the ionomer/vacuum interface is particularly pronounced in the case of films formed on very hydrophilic substrates. At the lowest water contents, the films present similar surface hydrophobicity at all $\theta$ values. Our results also suggest that tuning the film/substrate interaction can modify the morphology of the Nafion ionomer/vacuum interface. For instance, the substrate with $\theta=30^\circ$ determines an interface configuration where the entire water content is confined under the polymer, whereas the ionomer backbone forms a "crusty" hydrophobic layer. This "crust" should present a high resistance to deformation, which could decrease water uptake and lead to transport losses during PEMFC operation. It could also prevent reactants (\ce{O2} and \ce{H2} coming from the CL pores) from crossing the thin film to reach the catalyst sites. In contrast, the films formed on the substrate with $\theta=150^\circ$ are characterized by a configuration where a fraction of the ionomer backbone is in direct contact with the substrate. This reduces the concentration of polymer backbone at the interface with vacuum and, as a consequence, increases the presence of water.
Clearly, this interface should be more favourable for water absorption, which is in contrast with the results of Ref.~\cite{Modestino2012} where, however, thin films about $20$ times thicker than the ones considered here were investigated. \begin{table}[b] \centering \begin{tabular}{cccc} \hline \hline $\theta(^\circ)\backslash\lambda$& $22$ & $11$ & $6$\\ \hline \hline $150^\circ{}$ & 0.16 & 0.25 & 0.29 \\ $100^\circ{}$ & 0.21 & 0.56 & 0.29 \\ $70^\circ{}$ & 0.13 & 0.46 & 0.24 \\ $30^\circ{}$ & 0.25 & 0.44 & 0.30 \\ \hline\hline \end{tabular} \caption{ Roughness coefficient $R$~(nm) for the ionomer/vacuum interface, calculated as discussed in the text. } \label{tb:roughness} \end{table} The hydrophobic "crusty" ionomer/vacuum interface is characterized by a certain degree of roughness, which depends on the hydration conditions. Roughness can be quantified as the vertical deviation of the real surface from its ideal form, defined as the average vertical position of the interface. We can thus define a mean-squared roughness coefficient as $R^2=\frac{1}{N} \sum\limits_{i=1}^{N}(Z_i-\bar{Z})^2$, where $Z_i$ denotes the $z$-coordinate of the exposed atom $i$ at the interface, $\bar{Z}$ is the average $z$-position of the surface atoms, and $N$ is the number of surface atoms~\cite{Huang2012}. Surface atoms are identified as those with no other atoms in a square prism of edge $0.1$~nm and height $5$~nm above them. In Fig.~\ref{fig:roughnessmap} we show the $xy$-contour maps of the $z$-position of atoms at the ionomer/vacuum interface, for the case $\theta=70^\circ$, at the indicated values of $\lambda$. Table~\ref{tb:roughness} reports the values of $R$ for all the films studied. The roughness of the film surfaces assumes values in the range $0.13$--$0.56$~nm, which can be compared to an experimental value of $0.35$~nm for the roughness of Nafion films in contact with air~\cite{Bass2010}.
Interestingly, the roughness of the films at intermediate hydrophilicity, $\theta=100^\circ$, is slightly higher than that of the other films. This can be attributed to the disordered cluster configurations described above. According to Bass {\em et al.}~\cite{Bass2010}, the morphology of these interfaces is stable as long as the water vapour is not saturated. At that point, the hydrophobic layer should deform and the buried hydrophilic groups eventually migrate to the surface. However, when the surface is initially hydrophobic (especially at low water contents), the high energetic and kinetic barriers associated with the rearrangement of many chemical groups may keep the ionomer kinetically trapped in this state for very long times~\cite{Bass2010}. \section{Conclusions and Perspectives} \label{sec:conclusions} We have studied by Molecular Dynamics simulations the formation of Nafion ultra-thin films in contact with unstructured flat supports, characterized by their global wetting properties only. By tuning a single control parameter, $\epsilon_{w}$, we have been able to investigate in a unique framework an extended range of environments peculiar to the PEMFC catalyst layer, ranging from strongly hydrophobic (carbon-like) to very hydrophilic (platinum-like). The hydrophilicity degree of the substrate was estimated by computing the contact angle of a water droplet gently deposited on it. We considered four substrates, from strongly hydrophobic, through intermediate and mildly hydrophilic, to very hydrophilic. Also, three hydration levels were considered, in order to investigate the role played by the water content. Self-assembled instances of the thin films corresponding to these very diverse conditions were analysed in detail, in terms of their structural properties. Based on a very extended data set, we have been able to propose a general picture of the morphology of supported Nafion thin films for variable wetting nature of the substrate and hydration conditions.
Our data show that variations in the hydrophilic character of the substrate have a strong impact on film morphology. This ranges from a sandwich structure, where an extended water pool is sandwiched between ionomer sheets, to a bilayer configuration, in which water floods the interface with the substrate and the polymer mostly accumulates at the top, at the interface with air. By decreasing the water content, the films convert into inverted-micelle and multilamellar structures, for hydrophobic and hydrophilic supports, respectively. We have also discovered that, in contrast to the sandwich structure, the bilayer structure shows large and compact \ce{SO3-} agglomerates, resulting in a poor hydration of \ce{H3O+} and \ce{SO3-}. Analysis of the surface coverage showed a clear transition from a predominant backbone coverage to a predominant water coverage when switching from hydrophobic to hydrophilic surfaces. Finally, we have shown that by tuning the hydrophilicity of the substrate it is possible to modify the film/vapour interface. The results presented in this work could be of interest for the optimization of catalyst layer performance and the further development of PEMFC technology. We have shown that it is indeed possible to control the main morphological features of the films by tuning the wetting nature of the substrate. Therefore, the use of appropriate substrates could be highly attractive for controlling aspects such as ionomer coverage, proton accessibility to the active surface, and \ce{SO3-} adsorption, among others. This would optimize the electrode/electrolyte interface, creating an electrochemical environment favourable to enhanced cell reaction rates. \bibliographystyle{achemso}
\section{Introduction} The hard X-ray transient GRO~J0422+32 (XN~Per~1992) was discovered by the BATSE instrument on the Compton Gamma Ray Observatory in data from 1992 August 5 (Paciesas et al. 1992), and at its peak reached an intensity in soft $\gamma$ rays approximately three times brighter than the Crab Nebula and pulsar. The source was observed by CGRO/OSSE beginning 1992 August 11, approximately at the peak of the outburst. An optical counterpart was proposed by Castro-Tirado et al. (1992) and confirmed by the soft $\gamma$-ray observations of SIGMA (Roques et al. 1994). While the mass function of $1.2 \pm 0.04 \hbox{$\rm\thinspace M_{\odot}$}$ determined by Filippenko, Matheson, \& Ho (1995) is low enough that the compact object might indeed be a neutron star, the H$\alpha$ radial velocity curve and the M stellar type of the mass donor imply a mass of 3.6$\hbox{$\rm\thinspace M_{\odot}$}$ for the compact primary. The photometric measurements of Callanan et al. (1996) support this mass estimate and give a distance estimate of $\sim$2 kpc. Broadband energy spectra from TTM, HEXE, and OSSE show that during outburst the source was in the X-ray low, hard state, which coincides with the breaking $\gamma$-ray state (Grove et al. 1997, 1998, and references therein). The gamma radiation is thus likely the result of thermal Comptonization in a hot corona near the accretion disk. The $\gamma$-ray spectrum hardened ($\Delta kT / kT \simeq +20\%$) as the outburst declined (Grove et al. 1998). Power spectra above 20 keV show significant red noise and peaked noise components frequently referred to as ``quasi-periodic oscillations'' (QPOs), even though they do not necessarily satisfy the width requirement (FWHM/$f_0 < $ 0.5) for such a label. BATSE reported ``QPOs'' centered at roughly 0.04 Hz and 0.2 Hz (Kouveliotou et al. 1992), both of which were confirmed by SIGMA (Vikhlinin et al. 1995) and OSSE (Grove et al. 1992, 1994). 
The spectral shape, rapid variability, and outburst lightcurve are similar to those of the previous X-ray novae A0620-00 and XN~Mus~1991, both of which have measured mass functions that make them very strong black hole candidates (BHCs). Based on these similarities, GRO~J0422+32 has been classified as a black hole candidate. \section{Observations} The OSSE instrument consists of four identical large-area NaI(Tl)--CsI(Na) phoswich detector systems (Johnson et al. 1993). Energy spectra and high time-resolution counting rates are accumulated in an alternating sequence of two-minute measurements of source and background fields. High time-resolution data were collected from the on-source detectors in 8-ms rate samples in five energy bands from $\simeq$35 keV to $\simeq$600 keV. OSSE observed GRO~J0422+32 on 34 days spanning 1992 August 11 -- 1992 September 17. The source reached its maximum intensity at 100 keV shortly after the start of the OSSE pointing, then began a roughly exponential decline with a decay time of $\simeq$40 days, falling to about half maximum intensity at 100 keV by the end of the pointing. Some 10\% of the total light yield of NaI(Tl) results from a phosphorescence with a decay time of $\sim$150 ms (McKlveen \& McDowell 1975). Fluctuations in this ``afterglow'' from the passage of a heavy cosmic ray can trigger the OSSE detector system; such triggers are evidenced as clusters of low-energy events with a soft spectrum ($\sim$E$^{-5}$, detectable up to $\simeq$100 keV) and, in power spectra of blank sky pointings, as weak broad-spectrum noise roughly consistent with exponential shots with time constant $\simeq$70 ms. We estimate that the residual noise power in the 35--60 keV band (normalized to the source intensity) after a screening process is applied to remove these events is $<10^{-3}$ (RMS/I)$^2$ Hz$^{-1}$ below 3 Hz and falls as 1/f$^2$ above 3 Hz (see Fig. \ref{power_spec} for comparison to the noise power from the source).
The residual power is undetectable in the 75--175 keV band. \subsection{Power Spectrum Analysis} We obtained the power spectral density in the 35--60 keV and 75--175 keV energy bands through a multi-step process. We segmented the data into two-minute on-source pointings to reduce potential systematic effects that might arise on long timescales from orbital variations in the background count rate or differences between source-pointed detectors. To eliminate the possibility of spurious power spectral features (i.e. side-lobes) arising from the window function, we selected only those two-minute pointings that contained no data gaps or dropouts. Then we Fourier-transformed each 16384-point time series of 8-ms samples, normalized according to the procedure of Leahy et al. (1983), and subtracted the Poisson noise contribution, which we corrected for deadtime effects. We then summed the power spectra incoherently into daily and longer accumulations. Finally, following the prescription of Belloni and Hasinger (1990), we renormalized the power spectra to the source intensity, which we calculated from the background-subtracted spectral data from the standard OSSE analysis (Johnson et al. 1993). With this normalization, power spectra from different instruments, sources, and observations are directly comparable. The normalized power spectral density (PSD) $P_k$ at Fourier frequency bin $k$ is given as the fractional root-mean-square (RMS) variation of the source intensity $I$, viz. \begin{equation} P_k \, df = \frac{N_{tot}} {N_{src}^2} \, \left ( \frac{2|H_k|^2} {n_{tot}} \, - \, p \right ) \, \Delta f \, , \end{equation} where $N_{tot}$ is the total (i.e. 
source $+$ background) counts in $M$ Fourier transforms, $N_{src}$ is the source counts---estimated from the standard spectral analysis---in these $M$ transforms, $|H_k|^2$ is the mean Fourier power at frequency $k$, $p \simeq 2$ is the Poisson noise power after accounting for deadtime effects, $n_{tot} = N_{tot} / M$ is the mean of the total (i.e. source $+$ background) counts per transform, and $\Delta f = 1/131.072$ Hz is the frequency resolution of the power spectrum. We emphasize that this normalization is valid for the background-dominated case, which is appropriate for the low-energy gamma-ray band. The standard deviation of the PSD estimator is $ \sigma_k = P_k / \sqrt{M} $ (Bendat \& Piersol 1986), which accounts for both intrinsic source noise and Poisson noise. Figure \ref{power_spec}a shows the normalized PSD in the 35--60 keV and 75--175 keV bands for the entire OSSE pointing. The total fractional RMS variation between 0.01 Hz and 60 Hz is $\simeq$40\% in 35--60 keV, and $\simeq$30\% in 75--175 keV. The shape of the power spectrum is essentially identical in the two energy bands. It shows breaks at a few times $10^{-2}$ Hz and a few Hz, and a strong peaked-noise component (frequently labelled a ``QPO'') at 0.23 Hz, with FWHM $\simeq 0.2$ Hz. Statistically significant red noise is detected at frequencies up to $\sim$20 Hz. Not readily apparent in this figure is an intermittent peaked noise component at about 0.04 Hz. The amplitude of this feature varies from day to day. The two peaked-noise components and the lower-frequency spectral break have been reported elsewhere (Kouveliotou et al. 1992, Grove et al. 1992, Denis et al. 1994). OSSE's high sensitivity and high sampling rate have made the second spectral break apparent and permitted a study of the evolution of the various components that comprise the noise spectrum. We found (Grove et al. 1994) that the integrated 0.01--60 Hz power increased as the intensity of the source decreased, i.e. 
total power was anticorrelated with intensity and correlated with spectral hardness. In addition, while the intensity in these energy bands dropped by nearly a factor of two and the energy spectrum hardened by $\sim$20\% in effective temperature over the course of the OSSE observation, the frequencies of the two breaks and the main peaked noise remained constant, i.e. there was no evidence for significant variability in the timescales of the noise processes. \subsection{Lag Spectrum Analysis} From the cross-spectral density (Bendat \& Piersol 1986), one can measure the phase or time lag between two series as a function of Fourier frequency. Given two time series, e.g. of soft photons $s(t_k)$ and hard photons $h(t_k)$, the cross-spectral density $C_{sh}(f_k)$ is given by \begin{equation} C_{sh}(f_k) df = \frac{2}{M} \sum_{m=1}^{M}{S_m^*(f_k) H_m(f_k)} \, \Delta f \end{equation} where the sum runs over the $M$ Fourier transforms $S_m(f_k)$ and $H_m(f_k)$ of the segmented soft and hard time series, respectively. The phase difference $\Delta \phi_{sh}(f_k)$ at Fourier frequency $f_k$ between the two series is then \begin{equation} \Delta \phi_{sh}(f_k) = \arctan \left( \frac{Im[C_{sh}(f_k)]}{Re[C_{sh}(f_k)]} \right) \end{equation} where $Im[C_{sh}(f_k)]$ and $Re[C_{sh}(f_k)]$ are the imaginary and real parts, respectively, of the cross-spectral density. Time lags are simply computed from the phase lags by dividing by $2 \pi f_k$. Following van der Klis et al. (1987), we correct for deadtime-induced cross-talk between the two bands by subtracting the mean cross-spectral density in the 40--62.5 Hz range from the entire cross spectrum. Because the imaginary part is negligible in this frequency range, the subtraction does not alter the sign of the phase differences at lower frequencies, and it has negligible effect on the amplitude of the phase differences at frequencies below $\sim$10--20 Hz. 
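As an illustration of the procedure above, the RMS normalization of Eq. (1) and the phase/time-lag estimate of Eqs. (2)-(3) can be sketched in NumPy. This is a minimal sketch, not the OSSE pipeline: the deadtime correction to the Poisson level and the 40--62.5 Hz cross-talk subtraction are omitted, and the hard band is conjugated so that, under NumPy's $e^{-2\pi i\ldots}$ DFT convention, a positive phase means the hard band lags the soft band.

```python
import numpy as np

def normalized_psd(segments, src_counts, dt=0.008, p=2.0):
    """Fractional-RMS-normalized PSD averaged over M segments (Eq. 1).

    segments   : (M, n) array of total (source + background) count series
    src_counts : source counts in the M segments, from the spectral analysis
    p          : Poisson noise level (~2 in the Leahy normalization)
    """
    M, n = segments.shape
    N_tot = segments.sum()                    # total counts in all M transforms
    n_tot = N_tot / M                         # mean total counts per transform
    H = np.fft.rfft(segments, axis=1)[:, 1:]  # drop the DC bin
    leahy = 2.0 * np.abs(H) ** 2 / n_tot      # Leahy et al. (1983) power
    P = (N_tot / src_counts**2) * (leahy.mean(axis=0) - p)
    return np.fft.rfftfreq(n, dt)[1:], P

def lag_spectrum(soft, hard, dt=0.008):
    """Phase and time lags of the hard band relative to the soft band."""
    n = soft.shape[1]
    S = np.fft.rfft(soft, axis=1)[:, 1:]
    G = np.fft.rfft(hard, axis=1)[:, 1:]
    C = (S * np.conj(G)).mean(axis=0)         # averaged cross spectrum
    phase = np.angle(C)                       # Eq. (3): arctan(Im C / Re C)
    freqs = np.fft.rfftfreq(n, dt)[1:]
    return freqs, phase, phase / (2 * np.pi * freqs)
```

For pure Poisson noise the Leahy power averages to $p \simeq 2$, so the normalized PSD fluctuates around zero, and a hard band that is a delayed copy of the soft band yields a constant time lag at all frequencies.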
We have used the standard deviation of the phase difference given by Bendat \& Piersol (1986), which is strictly applicable only for noiseless measurements. Poisson noise causes this to be an underestimate above a few Hz (Vaughan \& Nowak 1997), but this effect does not alter our conclusions. The phase and time delay spectra we derive for the entire observation of GRO~J0422+32 are shown in Fig. \ref{lag_spec}. The hard emission (75--175 keV) lags the soft emission (35--60 keV) at all Fourier frequencies, except above 10 Hz, where there is no statistically significant lag or lead between the two bands. The phase lag is a weak function of Fourier frequency and peaks near 1 Hz. The peak lies far below the Nyquist frequency and is therefore not a consequence of the finite data binning, as discussed by Crary et al. (1998). At frequencies $\sim$0.01 Hz, hard lags as large as 300 ms are observed, and the time lag falls roughly as $1/f$. There is no significant change in the lag at the frequencies dominated by the strong peaked noise component at 0.23 Hz. \section{Discussion} Generally similar power spectra have been reported from a number of black hole candidates, and beginning with Terrell (1972), they have frequently been modeled as arising from a superposition of randomly occurring bursts, or ``shots''. If the shots have an instantaneous rise and exponential decay (or vice versa) with time constant $\tau$, the resulting power spectrum is constant below the characteristic frequency $ 1 / (2 \pi \tau )$ and falls as $ 1 / f^2 $ at high frequencies. This type of model can describe the two breaks and the $ 1/ f^2$ behavior above several Hz in the power spectrum of GRO~J0422+32 if there exist (at least) two independent shot components, with e-folding times $\tau_s \simeq 50$ ms and $\tau_l \simeq 2.1$ sec. The PSD of the two-shot model is shown for the 75--175 keV band in Fig. \ref{power_spec}a. 
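For concreteness, the two-shot continuum can be sketched as a sum of two zero-centered Lorentzians. The amplitudes here are arbitrary, and the quoted e-folding times are used as illustrative defaults:

```python
import numpy as np

def two_shot_psd(f, tau_s=0.050, tau_l=2.1, a_s=1.0, a_l=1.0):
    """Sum of two zero-centered Lorentzians, the PSD of two independent
    exponential-shot processes: flat below 1/(2*pi*tau), ~1/f^2 above."""
    lorentzian = lambda tau: 4.0 * tau / (1.0 + (2.0 * np.pi * f * tau) ** 2)
    return a_s * lorentzian(tau_s) + a_l * lorentzian(tau_l)
```

Below the low-frequency break ($\simeq 1/(2\pi\tau_l)$) the model is flat, and well above the high-frequency break ($\simeq 1/(2\pi\tau_s)$) halving the frequency quadruples the power.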
Note that the best-fit values of the long and short e-folding times and the ratio of amplitudes of the two components are independent of energy (Table \ref{shot_fit}). Subtracting the PSD of the two-shot model from the observed PSD gives a peaked noise profile that is broad and asymmetric, with a sharp low-frequency edge and a broad high-frequency tail, as shown in Fig. \ref{power_spec}b. Plausible alternative descriptions of the continuum between 0.1 and 1.0 Hz, e.g. a simple power law with index $-0.9$, do not significantly alter the shape of the peaked noise, although they may change its amplitude. The sharp low-frequency edge indicates that the physical process responsible for the peaked noise has a well-defined maximum timescale. This process may perhaps be thermal-viscous instabilities in the accretion disk (Chen \& Taam 1994) or oscillations in a Comptonizing corona (Cui et al. 1997). We attempted to fit the total PSD with simple analytic forms---e.g. in the time domain, multiple exponentially-damped sinusoids; or in the frequency domain, multiple zero-centered Lorentzians to model the continuum and offset Lorentzians to model the peaked noise---but none of these adequately describes the sharp rise and broad fall of the peaked noise, nor do they add significantly to our understanding of the characteristic timescales represented in the PSD. Similarly, the scenario of Vikhlinin, Churazov, \& Gilfanov (1994), in which shots arise from a common reservoir and are coupled through a weak amplitude or probability interaction that generates QPOs, also fails to describe the observed PSD in detail. The lag spectrum (Fig. \ref{lag_spec}) is generally similar to that of several other BHCs in the Ginga or Rossi XTE/PCA band (i.e. below 40 keV). In the X-ray low, hard state, these include Cyg~X-1, GX339--4, and GS2023+338 (Miyamoto et al. 1992), and 1E1740.7--2942 and GRS~1758--258 (Smith et al. 1997). 
In the X-ray very high state, BHCs with similar lag spectra are GS1124--683 and GX339--4, subtype ``C+D'' for the latter object (Miyamoto et al. 1993). Furthermore, the lag spectrum is quite similar to that between 20--50 keV and 50--100 keV from Cyg~X-1, which appeared to be essentially independent of the X-ray or $\gamma$-ray state (Crary et al. 1998). The present result is more evidence indicating that the frequency-dependent time lag is a common phenomenon shared by many accreting objects in binaries. The observed power and lag spectra are at odds with the predictions of accretion models that produce most of the X-ray and $\gamma$-ray emission from a region whose size is comparable to that of the last stable orbit around a black hole of mass a few M$_{\odot}$. The characteristic time scale associated with the dynamics of accretion in such an object is of order $10^{-3}$ sec; hence one would expect most of the associated power in the kHz frequency range. By contrast there is a remarkable {\it lack} of power at this range. Furthermore, under these conditions the time lags, which in these models are indicative of the photon scattering time in the hot electron cloud, should be independent of the Fourier frequency and also of order $10^{-3}$ sec, the photon scattering time in this region. Miller (1995) has argued that the observed time lags represent lags intrinsic to the soft seed photons, rather than the Comptonizing cloud. However, Nowak \& Vaughan (1996) have shown that any intrinsic lag is washed out if the observed photon energies are much greater than the seed photon energies, as is the case here, leaving again a frequency-independent lag due to the difference in scattering times across the cloud. The discrepancy between observed and predicted power and lag spectra prompted an alternative approach proposed recently by Kazanas, Hua \& Titarchuk (1997; hereafter KHT) and Hua, Kazanas \& Titarchuk (1997). 
These authors suggested that, while the process responsible for the formation of the high energy spectra is indeed Comptonization, the hot, scattering electron cloud extends over several decades in radius with a power law profile in density, $n(r) \propto 1/r^p$. This power-law density profile has a number of properties of interest in interpreting timing and spectral observations. For a $\delta$-function injection of soft photons at the center of the cloud, the light curves of the photons emerging from the cloud at a given energy are power laws extending in time to $\sim r/c$ (where $r$ is the outer edge radius of the atmosphere) followed by an exponential cutoff. For small values of the total Thomson depth $\tau_0$, the power-law index of the light curve is roughly equal to the power-law index $p$ of the density profile of the scattering cloud, becoming progressively flatter for increasing values of $\tau_0$ and higher escaping energies (Fig. 1 in KHT). On the other hand, the corresponding light curves for clouds of uniform density are exponentials without power-law portions. The time dependence of the photon flux can therefore be used to map the radial density profile of the scattering cloud. For a uniform cloud, the density profile has index $p=0$, and the light curve has no power law portion, i.e. the resulting PSD is that corresponding to an exponential shot. For a density profile with index $p=1$ and total Thomson depths in the scattering atmosphere of a few, the PSD is $\propto 1/f$ (KHT Fig. 1). One should note that this form of the PSD assumes infinitely sharp turn-on of the shots at $t=0$. As Kazanas \& Hua (1997) have shown, a finite turn-on time $t_0$ will introduce an additional steepening of the PSD at frequencies $\omega \sim 1/t_0$ extending over a decade in frequency, yielding PSDs in agreement with those of Fig. \ref{power_spec}a. 
The great advantage of the present scheme is therefore the direct physical association of features in the PSD with properties of the source. Modeling of the light curves of GRO~J0422+32 with this type of shot indicates values for $t_0 \approx 50$ msec. The model presented in KHT provides constraints on the time lags that can be of great value in probing the structure of the scattering medium. In the process of Comptonization, photons of energy $E_2$ lag in time behind photons of energy $E_1 < E_2$ simply because more scatterings are required to take a photon from $E_1$ to $E_2$. The lag in time is proportional to the scattering time, which depends only on the density of the medium. Thus in general, for a uniform medium the lag time is constant (i.e. independent of the Fourier period). However, in a medium with a power-law density profile, the hard photons sample a range of several orders of magnitude in density, which appears in the corresponding time lags. In addition, because the probability of scattering at a given density range is constant for a medium with $p=1$, all lags should be present with equal weight, producing a time-lag function $\propto 1/f$, with a maximum lag at the time scale corresponding to the scattering time at the edge of the power-law atmosphere. Indeed, Fig. \ref{lag_spec} is in excellent agreement with the above arguments (see Hua, Kazanas, \& Cui 1997a for fits to similar lag spectra from Cyg~X-1, and Hua, Kazanas, \& Cui 1997b for discussion regarding preliminary OSSE data from GRO~J0422+32). \acknowledgements This work was supported under NASA DPR S-10987C. \clearpage
\section{Introduction} Recognition of human actions from RGB-D (Red, Green, Blue and Depth) data has attracted increasing attention in computer vision in recent years due to the advantages of depth information over conventional RGB video, e.g. being insensitive to illumination changes and reliable for estimating body silhouette and skeleton~\cite{Shotton2011}. Since the first work of such a type~\cite{li2010action} reported in 2010, many methods~\cite{wang2012mining,xia2012view,bloom2012g3d,Oreifej2013} have been proposed based on specific hand-crafted feature descriptors extracted from depth and/or skeleton data, and many benchmark datasets have been created for evaluating algorithms. With the recent development of deep learning, a few methods~\cite{pichao2015,pichaoTHMS,du2015hierarchical} have been developed based on Convolutional Neural Networks (ConvNets) or Recurrent Neural Networks (RNNs). However, in most cases, either deeply learned or hand-crafted features have been employed; little work has been reported on the advantages of using both types of features simultaneously. This paper presents a novel framework that combines deeply learned features extracted from the depth modality through ConvNets with hand-crafted features extracted from the skeleton modality. The framework overcomes the weakness of ConvNets of being sensitive to global translation, rotation and scaling, and leverages the strength of skeleton-based features, e.g. Histogram of Oriented Displacement (HOD)~\cite{Gowayyed2013_HOD}, of being invariant to scale, speed and clutter of background. In particular, depth and skeleton data are firstly augmented for deep learning and to make the recognition insensitive to view variations. Secondly, depth sequences are segmented using hand-crafted features based on the joint motion histogram to exploit the local temporal information. 
All training segments are clustered using an Infinite Gaussian Mixture Model (IGMM) through Bayesian estimation and labelled for training Convolutional Neural Networks (ConvNets) on the depth maps. Thus, a depth sequence can be reliably encoded into a sequence of segment labels. Finally, the sequence of labels is fed into a joint Hidden Markov Model and Support Vector Machine (HMM-SVM) classifier to explore the global temporal information for final recognition. The proposed framework demonstrates a novel way of effectively exploring the spatial-temporal (both local and global) information for action recognition, and has a number of advantages compared to conventional discriminative methods, in which temporal information is often either ignored or weakly encoded into a descriptor, and to generative methods, in which temporal information tends to be overemphasized, especially when the training data is not sufficient. Firstly, the use of skeleton data to divide video sequences into segments makes each segment have consistent and similar movement and, to some extent, be semantically meaningful (though this is not the intention of this paper), since skeletons are relatively high-level information extracted from depth maps and each part of the skeleton has semantics. Secondly, the ConvNets trained over DMMs of depth maps provide a reliable sequence of labels by considering both spatial and local temporal information encoded into the DMMs. Thirdly, the use of HMMs on the sequences of segment labels explores the global temporal information effectively, and the SVM classifier further exploits the discriminative power of the label sequences for final classification. The remainder of this paper is organized as follows. Section 2 describes the related work. Section 3 presents the proposed framework. \section{Related Work} Human action recognition from RGB-D data has been extensively researched and much progress has been made since the seminal work~\cite{li2010action}. 
One of the main advantages of depth data is that they can effectively capture 3D structural information. To date, many effective hand-crafted features have been proposed based on depth data, such as Action Graph (AG)~\cite{li2010action}, Depth Motion Maps (DMMs)~\cite{Yang2012a}, Histogram of Oriented 4D Normals (HON4D)~\cite{Oreifej2013}, Depth Spatio-Temporal Interest Point (DSTIP)~\cite{xia2013spatio} and Super Normal Vector (SNV)~\cite{yangsuper}. Recent work~\cite{pichao2015} showed that features from depth maps can also be deeply learned using ConvNets. Skeleton data, which are usually extracted from depth maps~\cite{Shotton2011}, provide a high-level representation of human motion. Many hand-crafted skeleton-based features have also been developed in the past. They include EigenJoints~\cite{Yang2012}, Moving Pose~\cite{zanfir2013moving}, Histogram of Oriented Displacement (HOD)~\cite{Gowayyed2013_HOD}, Frequent Local Parts (FLPs)~\cite{pichao2014} and Points in Lie Group (PLP)~\cite{vemulapalli2014human}, which are all designed by hand. Recently, the work of~\cite{du2015hierarchical} demonstrated that features from skeletons can also be directly learned by deep learning methods. However, skeleton data can be quite noisy, especially when occlusion exists and the subjects are not in a standing position facing the RGB-D camera. Joint use of both depth maps and skeleton data has also been attempted. Wang et al.~\cite{wang2012mining} designed a 3D Local Occupancy Patterns (LOP) feature to describe the local depth appearance at joint locations to capture the information for subject-object interactions. In their subsequent work, Wang et al. \cite{wang2014learning} proposed an Actionlet Ensemble Model (AEM) which combines both the LOP feature and Temporal Pyramid Fourier (TPF) feature. Althloothi et al. 
\cite{Althloothi2014} presented two sets of features extracted from depth maps and skeletons, which are fused at the kernel level by using the Multiple Kernel Learning (MKL) technique. Wu and Shao \cite{wu2014deep} adopted Deep Belief Networks (DBN) and 3D Convolutional Neural Networks (3DCNN) for skeletal and depth data, respectively, to extract high-level spatial-temporal features. \section{Proposed Framework} Fig.~\ref{fig:framework} shows the block-diagram of the proposed framework. It consists of five key components: {\it Data augmentation} to enlarge the training samples by mimicking virtual cameras through rotating the viewpoints; {\it Segmentation} to divide sequences of depth maps into segments by extracting the key-frames from skeleton data, so as to exploit the local temporal information; {\it IGMM clustering} to label all training segments through clustering; {\it ConvNets on DMMs} to train ConvNets to classify segments reliably; and {\it HMM-SVM} to model the global temporal information of actions and classify a sequence of segment labels into an action. \subsection{Data Augmentation} The main purposes of data augmentation are to address the issue of training ConvNets on a small dataset and to deal with view variations. The method presented in~\cite{pichao2015} is adopted in this paper, where depth data are augmented by rotating the 3D cloud points captured in the depth maps and skeleton to mimic virtual cameras from different viewpoints. More details can be found in~\cite{pichao2015}. \begin{figure}[!ht] \begin{center} {\includegraphics[height = 100mm, width = 85mm]{Frameworknew6}} \end{center} \caption{The proposed action coding framework, where D represents Depth data while S denotes Skeleton data.} \label{fig:framework} \end{figure} \subsection{Segmentation} In order to exploit the local temporal information, depth and skeleton sequences are divided into segments such that motion across frames within each segment is similar and consistent. 
To this end, key-frames are first extracted using inter-frame and intra-frame joint motion histogram analysis, a method similar to the one described in~\cite{shao2009motion}. The joint motion histograms are insensitive to background motion compared to the use of optical flow in~\cite{shao2009motion}. Specifically, skeleton data are firstly projected onto three orthogonal Cartesian planes. The motion vectors calculated between two frames of the corresponding joints in each plane are quantized by their magnitude and orientation. Each combination of magnitude and orientation corresponds to a bin in the motion histogram. Given the number of joints $J$, the probability of the $k^{th}$ bin in the histogram of one projection is given as: \begin{equation}\label{eq2} p(k) = \dfrac{h(k)}{J\times3} \end{equation} where $h(k)$ denotes the count of the $k^{th}$ bin. The final motion histogram is a concatenation of the histograms in the three projections. Thus, the entropy of the motion vectors in a frame can be defined as: \begin{equation}\label{eq3} \eta = -\sum\limits_{k=1}^{k_{max}}p(k)\cdot \log_{2}(p(k)) \end{equation} where $k$ denotes the bin index and $k_{max}$ is the total number of bins in the histogram. Intuitively, a peaked motion histogram contains less motion information and thus produces a low entropy value; a flat and distributed histogram includes more motion information and therefore yields a high entropy value. Then, we follow the work of~\cite{shao2009motion}, where intra-frame and inter-frame analyses (more details can be found in~\cite{shao2009motion}) are designed to extract key frames such that the extracted key frames contain complex and fast-changing motion and, thus, are salient with respect to their neighboring frames. The details of the algorithm are summarized in Algorithm~\ref{alg1}. 
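The motion histogram and its entropy can be sketched as follows; the number of magnitude/orientation bins, the magnitude cap, and the choice of plane projections are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def motion_entropy(joints_prev, joints_curr, n_mag=4, n_ori=8, max_mag=0.5):
    """Entropy of the quantized joint-motion histogram between two skeleton
    frames. joints_* : (J, 3) arrays of 3D joint positions. The motion vector
    of each joint is projected onto the three Cartesian planes and binned by
    magnitude and orientation (n_mag x n_ori bins per plane)."""
    d = joints_curr - joints_prev                        # (J, 3) motion vectors
    planes = [d[:, [0, 1]], d[:, [1, 2]], d[:, [0, 2]]]  # xy, yz, xz projections
    hists = []
    for v in planes:
        mag = np.linalg.norm(v, axis=1)
        ori = np.arctan2(v[:, 1], v[:, 0])               # in [-pi, pi]
        mbin = np.minimum((mag / max_mag * n_mag).astype(int), n_mag - 1)
        obin = ((ori + np.pi) / (2 * np.pi) * n_ori).astype(int) % n_ori
        h, _ = np.histogram(mbin * n_ori + obin, bins=n_mag * n_ori,
                            range=(0, n_mag * n_ori))
        hists.append(h)
    p = np.concatenate(hists) / (3 * len(d))             # p(k) = h(k) / (J x 3)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))                       # entropy of the frame
```

A static skeleton concentrates all joints in one bin per plane, giving the minimum entropy $\log_2 3$; any spread of motion across bins strictly increases it.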
\begin{algorithm}[htb] \caption{ Key Frames Extraction \label{alg1}} \begin{algorithmic}[1] \State Select initial frames: $Initial_{i} = \{fx^{1}_{i}, fx^{2}_{i},..., fx^{n_{i}}_{i}\}$ from video $i$ by picking local maxima in the entropy curve calculated from the concatenated motion histogram, where $n_{i}$ denotes the number of initial frames extracted from video $i$; \State Calculate the histogram intersection values between neighboring frames; \State Weight the entropy values of $Initial_{i}$ by the corresponding histogram intersection values; \State Extract key frames $Key_{i} = \{fy^{1}_{i}, fy^{2}_{i},..., fy^{m_{i}}_{i}\}$ by finding peaks in the weighted entropy curve, where $m_{i}$ denotes the number of key frames extracted from video $i$; \end{algorithmic} \end{algorithm} If $p$ key frames are extracted from an action sample, the whole video sequence can be divided into $M = p + 1$ segments, with the key frames, together with the first and last frames of the sequence, serving as the beginning or ending frames of each segment. \subsection{IGMM Clustering} The segments of the training samples are clustered using HOD~\cite{Gowayyed2013_HOD} features extracted from the skeleton data, and these segments are then labelled and used to train ConvNets on DMMs constructed from the depth maps of the segments. Assume that all the action samples in one dataset are segmented into a total of $W$ video segments, and let $X = [\vec{x}_{1}, \vec{x}_{2},...,\vec{x}_{W}]$ be the HODs of these segments, where the dimension of HOD is $n$ and $\vec{x}_{l} = [x^{1}_{l}, x^{2}_{l},...,x^{n}_{l}]^{T}$. In this paper, it is assumed that the HODs of all segments can be modeled by an IGMM~\cite{wood2006non}. A Bayesian approach is adopted to find the best model of $K$ Gaussian components that gives the maximum a posteriori probability (MAP), where each Gaussian accounts for a distinct type or class of segments. Compared with the traditional $K$-means, the IGMM dynamically estimates the number of clusters from the data. 
Mathematically, the model is specified with the following notations. \begin{equation} \begin{array}{c} c_{i}|\vec{\pi} \sim Multinomial(\cdot|\vec{\pi})\\ \vec{x}_{l}|c_{i} = k, \Theta \sim N(\cdot|\theta_{k}) \end{array} \end{equation} where $C = \{c_{i}\}_{i = 1}^{N}$ indicates which class each HOD belongs to, $\Theta = \{\theta_{k}\}_{k=1}^{K}$, $\theta_{k} = \{\vec{\mu}_{k}, \Sigma_{k}\}$ are the distribution parameters of each class, and $\vec{\pi} = \{\pi_{k} \}_{k=1}^{K}$, $\pi_{k} = P(c_{i} = k)$ are the mixture weights. Here, we do not know the number of clusters $K$; otherwise the complete data likelihood could be computed directly. To address this problem, a fully Bayesian approach is adopted instead of the conventional maximum likelihood (ML) approach, where the relationship between the observed data $X$ and a model $H$ in Bayes's rule is: \begin{equation} P(H|X) \propto P(H)P(X|H) \end{equation} With respect to the conjugate priors for the model parameters, the same method as the one proposed in~\cite{wood2006non} is adopted, that is, a Dirichlet prior for $\vec{\pi}$ and a Normal times Inverse Wishart prior for $\theta$~\cite{fraley2007bayesian}~\cite{gelman2014bayesian}. 
\begin{equation} \begin{array}{c} \vec{\pi}|\alpha \sim Dirichlet(\cdot|\dfrac{\alpha}{K}, \dfrac{\alpha}{K}, ..., \dfrac{\alpha}{K})\\ \Theta \sim G_{0} \end{array} \end{equation} $\Theta \sim G_{0}$ is shorthand for \begin{equation} \begin{array}{c} \Sigma_{k} \sim Inverse\mbox{-}Wishart_{v_{0}}(\Lambda_{0}^{-1})\\ \vec{\mu}_{k} \sim N(\vec{\mu}_{0}, \Sigma_{k}/K_{0}) \end{array} \end{equation} where $\dfrac{\alpha}{K}$ controls how uniform the class mixture weights will be; the parameters $\Lambda_{0}^{-1}$, $v_{0}$, $\vec{\mu}_{0}$, $K_{0}$ encode the prior experience about the position and shape of the mixture densities; the hyper-parameters $\Lambda_{0}^{-1}$ and $v_{0}$ affect the mixture density covariance; $\vec{\mu}_{0}$ specifies the mean of the mixture densities, and $K_{0}$ is the number of pseudo-observations~\cite{gelman2014bayesian}. With the model defined above, the posterior distribution can be represented as: \begin{equation} \begin{array}{l} P(C, \Theta, \vec{\pi}, \alpha | X) \propto\\ P(X|C,\Theta)P(\Theta|G_{0})\prod \limits_{i=1}^{N} P(c_{i}|\vec{\pi})P(\vec{\pi}|\alpha)P(\alpha) \label{eq:igmm-bayesian} \end{array} \end{equation} With some manipulations, Eq.~\ref{eq:igmm-bayesian} can be solved using Gibbs sampling~\cite{neal2000markov} and the Chinese restaurant process~\cite{ghahramani2005infinite}. Details can be found in~\cite{wood2006non,gelman2014bayesian}. Through the IGMM clustering, the number of active clusters is estimated and each segment is labelled with its corresponding cluster through hard assignment. These labelled segments are the training samples for the ConvNets in the framework. \subsection{Pseudo-Color Coding of DMMs \& ConvNets Training} {\it DMMs and Pseudo-Color Coding} Since skeleton data are often noisy, ConvNets are trained on the DMMs~\cite{Yang2012a} of the segments labelled by the IGMM clustering so that segments can be classified reliably. 
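In practice, the IGMM clustering of the previous subsection can be approximated with a truncated Dirichlet-process Gaussian mixture. The sketch below uses scikit-learn's variational implementation as a stand-in for the Gibbs sampler (an assumption of this illustration; the truncation level `max_components` only needs to exceed the expected number of clusters):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def cluster_segments(hods, max_components=20, seed=0):
    """hods : (W, n) array of HOD descriptors, one per video segment.
    Returns hard cluster labels and the number of active clusters.

    Components whose mixture weights collapse toward zero receive no points
    under hard assignment, so the effective number of clusters is estimated
    from the data rather than fixed a priori, as with the IGMM."""
    dpgmm = BayesianGaussianMixture(
        n_components=max_components,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        max_iter=500,
        random_state=seed,
    ).fit(hods)
    labels = dpgmm.predict(hods)      # hard assignment of segments to clusters
    active = np.unique(labels).size   # clusters actually used
    return labels, active
```

On well-separated data the active components align with the true groups, and segments from different groups never share a label.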
Traditional DMMs~\cite{Yang2012a} are computed by adding up all the absolute differences between consecutive frames in the projected planes, which dilutes the temporal information. After segmentation, many segments come from action pairs, such as hands up and hands down, whose traditional DMMs are nearly identical. To distinguish such action pairs, we stack the motion energy within a segment with a special treatment of the first frame, as described in Eq.~\ref{dmm}. \begin{equation} \label{dmm} DMM_{v} = map_{v}^{1} + \sum\limits_{k=2}^{M-1}|map_{v}^{k+1} - map_{v}^{k}| \end{equation} where $v \in \{f,s,t\}$ denotes the three projected views in the corresponding orthogonal Cartesian planes and $map_{v}^{k}$ is the projection of the $k^{th}$ frame under the projection view $v$. In this way, temporal information denoting the human body posture at the beginning of the action is captured together with the accumulated motion information, as shown in Fig.~\ref{fig:DMM}(a)(b). In order to better exploit the motion patterns, DMMs are pseudo-colored into color images in the same way as in~\cite{pichao2015}. In particular, the pseudo-color coding is done through the following hand-designed rainbow transform: \begin{equation}\label{coloring} C_{i={1,2,3}} = \{\sin[2\pi\cdot(-I + \varphi_{i})\cdot\frac{1}{2}+\frac{1}{2}]\}^{2}\cdot f(I) \end{equation} where $C_{i={1,2,3}}$ represents the BGR channels, respectively; $I$ is the normalized gray value; $\varphi_{i}$ denotes the phase of the three channels; $f(I)$ is an amplitude modulation function which further increases the non-linearity; the added value $\frac{1}{2}$ guarantees non-negativity. We use the same parameter settings as~\cite{pichao2015}. \begin{figure}[!ht] \begin{center} {\includegraphics[height = 80mm, width = 85mm]{DMMnew}} \end{center} \caption{The first row of DMMs represents an action waving the hand from right to left; the second row of DMMs represents an action waving the hand from left to right. 
(a) are computed in the traditional way, (b) are computed in the proposed way, and (c) are pseudo-color coded from (b). } \label{fig:DMM} \end{figure} From Fig.~\ref{fig:DMM} it can be seen that the two actions of a simple action pair are almost identical in the traditional DMMs, but much more discriminative in the modified DMMs. The temporal information in the motion maps is further enhanced through the pseudo-coloring method. {\it ConvNets Training and Classification} Three ConvNets are trained on the pseudo-color coded DMMs constructed from the video segments in the three Cartesian planes. The layer configuration of the three ConvNets is the same as the one in~\cite{krizhevsky2012imagenet}. The implementation is derived from the publicly available Caffe toolbox \cite{jia2014caffe} based on one NVIDIA GeForce GTX TITAN X card, and the pre-trained models over ImageNet~\cite{krizhevsky2012imagenet} are used for initialization in training. For a testing action sample, only the original skeleton sequences without rotation are used to extract key frames for segmentation. Three DMMs are constructed from each segment in the three Cartesian planes as input to the ConvNets, and the averages of the outputs from the three ConvNets are computed to label the testing video segments. The sequence of labels serves as input to the subsequent HMM-SVM classifier. \subsection{HMM-SVM for Classification} To effectively exploit the global temporal information, discrete HMMs are trained using the well-known Baum-Welch algorithm from the label sequences obtained from the ConvNets, one HMM per action. Specifically, the number of states is set to $K$, the number of clusters estimated by the IGMM, and each state can emit one of the $K$ symbols. The observation symbol probability distribution is set as: $$ B = \{b_{j}(k)\},\left\{ \begin{aligned} b_{j}(k) = 1, &~ k = j \\ b_{j}(k) = 0, &~ k \neq j\\ \end{aligned} \right. 
$$ For each action sample, the likelihoods of its label sequence being generated by each of the HMMs form a feature vector that serves as the input to the SVM for refining the classification. {\small \bibliographystyle{ieee}
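Since the emission matrix $B$ is the identity, each discrete HMM reduces to a Markov chain over the segment labels, and training reduces to counting start states and transitions. A minimal sketch of computing the per-action log-likelihood features fed to the SVM (the smoothing constant `eps` and the helper names are illustrative; the SVM stage itself is omitted):

```python
import numpy as np

def fit_markov(seqs, K, eps=1e-3):
    """With identity emissions (b_j(k) = delta_jk), the hidden state sequence
    equals the observed label sequence, so the HMM collapses to a Markov
    chain; estimate smoothed start and transition probabilities by counting."""
    pi = np.full(K, eps)
    A = np.full((K, K), eps)
    for s in seqs:
        pi[s[0]] += 1
        for a, b in zip(s[:-1], s[1:]):
            A[a, b] += 1
    return pi / pi.sum(), A / A.sum(axis=1, keepdims=True)

def loglik(seq, pi, A):
    """Log-likelihood of a label sequence under one action's model."""
    return np.log(pi[seq[0]]) + sum(np.log(A[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def hmm_features(seq, models):
    """Feature vector for the SVM: log-likelihood under each action's model."""
    return np.array([loglik(seq, pi, A) for pi, A in models])
```

A sequence that follows one action's typical label transitions scores highest under that action's model, which is what the SVM then exploits.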
\section{Introduction} Observations of transient and supernova phenomena have informed fundamental discoveries about our universe, ranging from its expansion history and current expansion rate \citep{riess, perlmutter, freedman, riess_2019} to the progenitor physics of rare and interesting events \citep{pursiainen, patrick}. In the near future, next generation wide-field sky surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{lsst} will have the ability to observe larger swaths of sky with higher resolution and certainly uncover even more new and exciting astrophysical phenomena. These surveys promise to generate ever larger volumes of photometric data at unprecedented rates. However, the availability of spectroscopic resources is not expected to scale nearly as quickly. Thus, the challenge of effectively allocating these limited resources is more important than ever. For type Ia SN cosmology, spectroscopic information is used to minimize contamination in constructing pure and representative samples of SNe Ia to continue to constrain the dark energy equation of state. For supernova physicists, spectra uncover important information about an event's potential progenitor processes \citep{filippenko, perets, federica, sollerman}. Spectra taken near peak brightness of an event are optimal as they include mostly transient information and are not dominated by host galaxy features. With millions of alerts each night, fast and accurate automatic classification mechanisms will be needed to replace the time-consuming process of manual inspection. More specifically, the ability to perform classification early on in the lifetime of a transient would allow for ample time to take spectra at the peak luminosity of the event or at multiple points over the course of the event's lifetime. \subsection{Photometric Supernova Classification} An impressive body of work has emerged over the past decade on photometric classification of supernovae. 
Since only a small percentage of discovered supernovae have ever been followed up spectroscopically, a reliable photometric classifier is indispensable to the advancement of supernova science. The Supernova Photometric Classification Challenge \citep[SNPhotCC,][]{spcc_1,spcc_2} created not only an incentive to invest in photometric SN classification, but also a dataset that would be used to train and evaluate classifiers for years to come. Successful approaches range from empirical template-fitting \citep{sako2008} to making classification decisions based on manually extracted features \citep{richards, karpenka}. The more recent Photometric LSST Astronomical Time-series Classification Challenge \citep[PLAsTiCC,][]{plasticc} diversified the dataset by asking participants to differentiate between 14 different transient and variable object classes, including the 6 common supernova types included in this work. The top entries made use of feature extraction paired with various machine learning classification methods, such as boosted decision trees and neural networks \citep{plasticc_results}. Ensemble methods, in which the results of multiple classifiers are combined to create the final classification probability, were widely used as well. Deep learning is a branch of machine learning that seeks to eliminate the necessity of human-designed features, decreasing the computational cost as well as avoiding the introduction of potential biases \citep{charnock_moss, moss, naul, aguirre}. In recent years, many deep learning techniques have been applied to the challenge of photometric SN classification. Recurrent neural networks (RNNs) are designed to learn from sequential information, such as time-series data, and have been used with great success on this problem. \cite{charnock_moss} applies a variant of RNNs known as Long Short Term Memory networks \citep[LSTMs,][]{Hochreiter1997LongSM} to achieve impressive performance distinguishing SNIa from core collapse (CC) SNe. 
\cite{rapid} uses a gated recurrent unit (GRU) RNN architecture to be able to perform real-time and early lightcurve classification. \cite{moller} performs both binary classification and classification by type with full and partial lightcurves using Bayesian RNNs. \cite{superraenn} uses a GRU RNN as an autoencoder to smooth out irregularities in lightcurve data that is then fed into a random forest classifier. Convolutional neural networks (CNNs), which are used in this work, are a state-of-the-art image recognition architecture \citep{lecun1989backpropagation, lecun1998gradient, alexnet, zeiler2014visualizing}. \cite{pelican} addresses the issue of non-representative training sets by using a CNN as an autoencoder to learn from unlabeled test data. \cite{alerce_stamp} developed an image time-series classifier as part of the ALeRCE alert broker, using a CNN to differentiate between various transient types as well as bogus alerts. Outside of these traditional models, deep learning is still providing new and creative solutions to the photometric transient classification problem. Convolutional recurrent neural networks are used to classify a time series of image stamps by \cite{ramanah} to detect gravitationally lensed supernovae. A newer type of deep learning architecture, known as a transformer, achieves a very impressive result when applied to the PLAsTiCC dataset by \cite{transformer}. A variational autoencoder was used by ParSNIP \citep{parsnip} to develop a low-dimensional representation of transient lightcurves that uses redshift-annotated photometric data to perform full lightcurve photometric classification and generate time-varying spectra, among other tasks. \subsection{Early Photometric Supernova Classification} Though much progress has been made on the photometric supernova classification problem, most of the solutions tackle classification of full supernova lightcurves retrospectively. 
However, the earlier an object can be classified, the more opportunities there are for the community to perform follow-up observation. Spectroscopic or photometric follow-up at early stages not only reveals insights into progenitor physics, but can also serve as a benchmark for further observations at later epochs. SNe of type IIb, for example, exhibit hydrogen features in early spectra that quickly disappear over time \citep{SNIIb}. Shock breakout physics is another science case for early follow-up observation. \cite{patrick} was the first to report capturing the complete evolution of a shock cooling lightcurve, a short-lived event preceding peak luminosity that reveals properties of the shock breakout and progenitor star for stripped-envelope supernovae such as the IIb. Despite the general focus on full lightcurve classification, several notable works have addressed the challenge of early photometric classification. \cite{sullivan} was able to not only differentiate between SNIa and CC SN, but also predict redshift, phase, and lightcurve parameters for SNIa using only two or three epochs of multiband photometry data. \cite{poznanski} also performed binary Ia vs. CC SNe classification, but using a Bayesian template-fitting technique on only single epoch photometry and photometric redshift estimates. {\small{PSNID}} \citep{sako2008, sako2011}, the algorithm that produced the highest overall figure of merit in SNPhotCC, was used by the Sloan Digital Sky Survey \citep{sdss} and the Dark Energy Survey \citep{des} to classify early-time and full supernova lightcurves. \cite{rapid} is a recent application of deep learning techniques specifically to early-time transient classification. A GRU RNN is trained and tested on a PLAsTiCC-derived dataset of 12 transients, including 7 supernova types, that are labeled at each epoch with ``pre-explosion" prior to the date of explosion and the correct transient type after explosion.
Thus, the model is able to produce a classification at each epoch of observation. \cite{moller} has also produced an RNN-based photometric classifier that is capable of classifying partial supernova lightcurves, but primarily achieves good results for Ia vs. CC SN classification. \cite{villar} uses a recurrent variational autoencoder architecture to perform early-time anomaly detection for exotic astrophysical events within the PLAsTiCC dataset, such as active galactic nuclei and superluminous SNe. Finally, LSST alert brokers such as ALeRCE \citep{alerce_lc} specialize in accurate early-time classification of transient alerts. \subsection{Overview} Originally introduced in \cite{Qu_2021}, hereafter Q21, as a full lightcurve photometric classification algorithm, {\small SCONE}\ was able to retrospectively differentiate Ia vs. CC SN with $>99$\% accuracy and categorize SNe into 6 types with $>98$\% accuracy without redshift information. Our approach centers on producing heatmaps from 2-dimensional Gaussian processes fit on each lightcurve in both wavelength and time dimensions. These ``flux heatmaps" of each supernova detection, along with ``uncertainty heatmaps" of the Gaussian process uncertainty, constitute the dataset for our model. This preprocessing step smooths over irregular sampling rates between filters, mitigates the effect of flux outliers, and allows the CNN to learn from information in all filters simultaneously. Section 2 outlines the details of the datasets and models used in this work, and we discuss the classifier's performance on the various dataset types in Section 3, including a comparison with existing literature. We state our conclusions and goals for future work in Section 4. \section{Methods} \subsection{Simulations} For this work, {\small SCONE}\ was trained and tested on a set of LSST deep drilling field (DDF) simulations.
The dataset was created with SNANA \citep{snana} using the PLAsTiCC transient class models for supernovae types Ia, II, Ibc, Ia-91bg, Iax, and SLSN \citep{plasticc_data, plasticc-models, SNIa_1, SNIa_2, SNIa_3, SNIax_1, SNII_1, SNII_2, SNII_3, SNII_4, SNIbc_1, SNIbc_2, SNIbc_3, SNIbc_4, SLSN_1, SLSN_2, SLSN_3}. The relative rates and redshift distribution are identical to those of the data produced for the PLAsTiCC challenge. This is the same dataset used to evaluate {\small SCONE}'s categorical classification performance in Q21. No cuts on individual low S/N lightcurve points were made, but lightcurves with fewer than two $5\sigma$ detections were removed, as $t_{\rm trigger}$ would be ill-defined in those cases. We note that in observed data, transient light curve samples will be contaminated by other galactic astrophysical sources, but methods such as \cite{alerce_lc} are reliably able to distinguish extragalactic and galactic events. Thus, we can assume the feasibility of creating a pure sample of SN lightcurves such as the one used in this work. \begin{table} \centering \caption{Training, validation, and test dataset sizes for the $t_{\rm trigger}+N$ datasets.} \label{tbl:datasets} \begin{tabular}{l c c} \hline Dataset & Number of Each Type & Total Size\\ \hline Training & 6148 & 36888\\ Validation & 769 & 4614\\ Test & 768 & 4608\\ \hline Full & 7685 & 46110\\ \end{tabular} \end{table} \subsection{Trigger Definition} We define a \textit{detection} as any observation exceeding the 5$\sigma$ signal-to-noise (S/N) threshold. We define the \textit{trigger} as the next detection that occurs at least one night after the first. In this work, the dataset with the least photometric information includes observations up to (and including) the date of trigger. Thus, all SNe in our datasets have at least two epochs of observation.
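The detection and trigger definitions above are simple to implement. The following is a minimal sketch; the helper name \texttt{find\_trigger} is our own, and ``at least one night after the first" is read here as $\geq 1$ day:

```python
import numpy as np

def find_trigger(mjd, flux, flux_err, snr_threshold=5.0):
    """Return (t_first_detection, t_trigger) for one lightcurve (sketch).

    A detection is any observation with S/N above ``snr_threshold``; the
    trigger is the next detection at least one night (taken here as >= 1
    day) after the first. Returns None entries where undefined.
    """
    mjd = np.asarray(mjd, dtype=float)
    snr = np.asarray(flux, dtype=float) / np.asarray(flux_err, dtype=float)
    order = np.argsort(mjd)
    detections = mjd[order][snr[order] > snr_threshold]
    if detections.size == 0:
        return None, None
    t_first = detections[0]
    later = detections[detections >= t_first + 1.0]
    t_trigger = later[0] if later.size else None
    return t_first, t_trigger
```

Lightcurves for which `t_trigger` comes back `None` correspond to the events removed from the sample, since fewer than two detections leave the trigger ill-defined.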
As the date of first detection is also a common choice of trigger date in other transient surveys, the implications of this discrepancy are explored further in Section 3.3. We present results on a dataset where the distinction between these two definitions is small, i.e. $t_{\rm trigger}\leq t_{\rm first\;detection}+5$. \subsection{Datasets and Heatmap Creation} \subsubsection{$t_{\mathrm{trigger}}+N$ Datasets} To evaluate {\small SCONE}'s classification performance on lightcurves at different stages of the supernova lifetime, five sets of heatmaps were created from the simulations described in Section 2.1. All sets of heatmaps take data starting 20 nights prior to the date of trigger ($t_{\mathrm{trigger}}$) and end at $N=$ 0, 5, 15, 25, and 50 days after the date of trigger, respectively. Hereafter, these are collectively referred to as ``$t_{\mathrm{trigger}}+N$ datasets". \begin{figure*}[ht] \figurenum{1} \label{fig:same-sn} \includegraphics[scale=0.29,trim={3cm 2cm 0cm 0cm}]{images/same_sn_5_datasets.pdf} \centering \caption{An SNII ($z=0.39$) shown in all five heatmap datasets along with the lightcurves and Gaussian process fits used to create each heatmap. The flux and flux error measurements from the raw photometry are shown as points with error bars, while the Gaussian process fits to each photometry band are shown as curves. The Gaussian process errors, which are used to create the heatmaps in the middle column, are not shown in the lightcurve plots. The $x$-axis limits of the plots in each row are different, as the lightcurve is truncated according to the label on the left for each row in the figure. } \end{figure*} Prior to training, the lightcurve data is processed into heatmaps. We use the approach described by \citet{avocado} to apply 2-dimensional Gaussian process regression to the raw lightcurve data to model the event in the wavelength ($\lambda$) and time ($t$) dimensions.
We use the Mat\'ern kernel ($\nu = \frac{3}{2}$) with a fixed 6000~\AA\ characteristic length scale in $\lambda$ and fit for the length scale in $t$. Once the Gaussian process regression model has been trained, we obtain its predictions on a $\lambda,t$ grid and call this our ``flux heatmap". It is important to note that the Gaussian processes are fit on lightcurves truncated at $N$ days after trigger in each dataset and not given access to lightcurve information past the cutoff date. Thus, though the $\lambda$ axis is not affected by the different choices of $N$, the $t$ range of the input lightcurve data varies for each $t_{\rm trigger} + N$ dataset. For the datasets in this work, the $\lambda,t$ grids were chosen to preserve the shape of the resulting heatmap despite the fact that the number of nights of lightcurve data varies between the $t_{\rm trigger} + N$ datasets. $\lambda$ is chosen to be $3000 < \lambda < 10,100$~\AA\ with a $221.875$~\AA\ interval for all datasets, while the $t$ interval depends on the number of nights of data: $t_{\rm trigger} - 20 \leq t \leq t_{\rm trigger} + N$ with a $\frac{N+20}{180}$ day interval, where $N=0,5,15,25,50$. This ensures that all heatmaps have size $32 \times 180$. In addition to the flux heatmap, we also take into account the uncertainties on these predictions at each $\lambda_i, t_j$, producing an ``error heatmap". We stack these two heatmaps depthwise for each SN lightcurve and divide by the maximum flux value to constrain all entries to [0,1]. This $32 \times 180 \times 2$ tensor is our input to the convolutional neural network. An example of the heatmaps and associated lightcurves of a single SN in all 5 datasets is shown in Figure~\ref{fig:same-sn}. Results on the $t_{\rm trigger}+N$ datasets are described in Section 3.2. 
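The heatmap construction can be illustrated with a small exact-GP computation in numpy. This is a simplified stand-in for the \citet{avocado} implementation, under stated assumptions: the kernel amplitude is fixed to 1, and both length scales are fixed here (including an illustrative 20-day time scale, where the real pipeline fits for that value); the names \texttt{matern32} and \texttt{gp\_heatmap} are ours:

```python
import numpy as np

def matern32(X1, X2, scales):
    # Anisotropic Matern-3/2 kernel over (t, lambda) input pairs.
    d = (X1[:, None, :] - X2[None, :, :]) / np.asarray(scales)
    r = np.sqrt((d ** 2).sum(axis=-1))
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

def gp_heatmap(t_obs, lam_obs, flux, flux_err, t_grid, lam_grid,
               scales=(20.0, 6000.0)):
    """Posterior mean/std of an exact 2-D GP on a (lambda, t) grid.

    The 6000 A wavelength length scale is fixed as in the text; the
    20-day time length scale is an illustrative stand-in for the value
    the real pipeline fits. Kernel amplitude is fixed to 1 here.
    """
    X = np.column_stack([t_obs, lam_obs])
    K = matern32(X, X, scales) + np.diag(np.asarray(flux_err) ** 2)
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, np.asarray(flux)))
    TT, LL = np.meshgrid(t_grid, lam_grid)                 # both (n_lam, n_t)
    Xs = np.column_stack([TT.ravel(), LL.ravel()])
    Ks = matern32(Xs, X, scales)
    mean = (Ks @ alpha).reshape(TT.shape)
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 0.0, None)   # unit prior variance
    return mean, np.sqrt(var).reshape(TT.shape)

# Grids reproducing the 32 x 180 heatmap shape (N = 0 dataset shown):
lam_grid = np.arange(3000.0, 10100.0, 221.875)             # 32 wavelength bins
t_grid = np.linspace(-20.0, 0.0, 180, endpoint=False)      # 180 time bins
# The flux and error heatmaps are then stacked depthwise and divided by
# the maximum flux, e.g. np.stack([mean, err], axis=-1) / mean.max().
```

Evaluating `gp_heatmap` on these grids yields the $32 \times 180$ flux and uncertainty maps that, stacked depthwise and normalized, form the $32 \times 180 \times 2$ network input described above.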
\subsubsection{Bright Supernovae} Our model was also evaluated on the subset of particularly bright supernovae from the $t_{\rm trigger} +0$ and $t_{\rm trigger} +5$ datasets to emulate a real-world use case of {\small SCONE}\ for spectroscopic targeting, as bright supernovae are better candidates for spectroscopic follow-up. ``Bright SNe" included in these datasets were chosen to be SNe with last included detection $r<20$ mag. With this threshold, there were 907 SNe in the $t_{\rm trigger} +0$ bright dataset and 5,088 SNe in the $t_{\rm trigger} +5$ bright dataset. As described in more detail in Section 2.4, {\small SCONE}\ was trained with a standard $t_{\rm trigger}+N$ training set combined with 40\% of the $t_{\rm trigger}+N$ bright dataset, and tested on the $t_{\rm trigger}+N$ bright dataset. Results on these datasets are described in Section 3.5. \subsubsection{Mixed Dataset} In order to evaluate {\small SCONE}'s ability to classify SNe with any number of nights of photometry, a sixth dataset (the ``mixed" dataset) was created from the same PLAsTiCC simulations. Data is taken starting 20 nights prior to the date of trigger (as with the $t_{\rm trigger}+N$ datasets) but truncated at a random night between 0 and 50 days after trigger. Due to the choice of the $t$ interval described in Section 2.3.1, heatmaps with any number of nights of photometry data are all the same size and can thus be mixed in a single dataset in this manner. We train {\small SCONE}\ on this mixed dataset and evaluate its performance on each of the $t_{\mathrm{trigger}}+N$ datasets in Section 3.6. \subsection{Dataset Train/Test Split} Due to the importance of class balancing in machine learning datasets, the same quantity of SNe from each SN type was selected to create the $t_{\rm trigger} +N$ and mixed datasets used to train, validate, and test {\small SCONE}. 7685 SNe of each of the 6 types were randomly chosen, as this was the quantity of the least abundant type.
Thus, the size of each full dataset was 46,110. An 80/10/10 training/validation/test split was used for all results in this work. The sizes of the training, validation, and test subsets of each dataset can be found in Table \ref{tbl:datasets}. For evaluation on the bright datasets, {\small SCONE}\ was trained on a hybrid training set of 40\% of the $t_{\rm trigger} +N$ bright dataset combined with a $t_{\rm trigger} +N$ training set, prepared as described in Section 2.3.1. Thus, the training set was not quite class-balanced, as the bright dataset is not class-balanced but the $t_{\rm trigger}+N$ training set is. The trained model was then evaluated on the full bright dataset to produce the results shown in Figure~\ref{fig:bright}. Due to the imbalanced nature of the bright datasets, the confusion matrices in this figure take the place of an accuracy metric, which could be misleading. We chose to include 40\% of the bright dataset in the training process to ensure that the model has seen enough of these particularly bright objects to make reasonable predictions. \subsection{Model} In this work, we report early lightcurve classification results using the vanilla {\small SCONE}\ model developed in Q21 as well as a variant of {\small SCONE}\ that incorporates redshift information. The architecture of {\small SCONE}\ with redshift is shown in Figure~\ref{fig:architecture}. Both redshift and redshift error are concatenated with the output of the first dropout layer and used as inputs to the fully connected classifier. The model uses spectroscopic redshift information when available and photometric redshift estimates if not. Prior to training and testing, the input flux and error heatmaps are divided by the maximum flux value of each heatmap for normalization. This means that absolute brightness information is not used for classification. 
All results in this work, with and without redshift, used the sparse categorical crossentropy loss function and the Adam optimizer \citep{adam}, and trained for 400 epochs with a batch size of 32. {\small SCONE}\ without redshift used a constant 1e-3 learning rate, whereas {\small SCONE}\ with redshift used a constant 5e-4 learning rate. \begin{figure*} \figurenum{2} \label{fig:architecture} \includegraphics[scale=0.50, trim={2.25cm 0cm 0cm 0cm}]{images/scone_with_z.pdf} \centering \caption{{\small SCONE}\ architecture with redshift information for categorical early lightcurve classification.} \end{figure*} \subsection{Computational Performance} The time required for the heatmap creation process was measured using a sample of 100 heatmaps on a single 32-core NERSC Cori Haswell compute node (with Intel Xeon Processor E5-2698 v3). The time required to create one heatmap was $0.03 \pm 0.01$ seconds. When producing larger-scale datasets, this process is also easily parallelizable over multiple cores or nodes to further decrease heatmap creation time. {\small SCONE}\ without redshift has 22,606 trainable parameters and {\small SCONE}\ with redshift has 22,670 trainable parameters, while other photometric classification models require at least hundreds of thousands. The performance gains of this simple but effective model compounded with a small training set make {\small SCONE}\ lightweight and fast to train. The first training epoch on an NVIDIA V100 Volta GPU takes approximately 17 seconds (4 milliseconds per batch with a batch size of 32), and subsequent training epochs take approximately 5 seconds each with TensorFlow's dataset caching. The first training epoch on a Haswell node takes approximately 12 minutes (625 milliseconds per batch), and subsequent epochs take approximately 6 minutes each. Test time per batch of 32 examples is 3 milliseconds on GPU and 10 milliseconds on a Haswell CPU.
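The wiring of redshift into the classifier can be sketched in Keras as follows. The convolutional stack below is a placeholder (the exact {\small SCONE}\ layers are specified in Q21 and Figure 2); only the full-height kernel, the concatenation of redshift and redshift error after the first dropout layer, the loss, the optimizer, and the 5e-4 learning rate follow the text, and the function name is ours:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_scone_like_model(n_classes=6):
    """Hedged sketch of a SCONE-style CNN with redshift inputs.

    The conv/pool/dense sizes here are illustrative, not the published
    architecture; the sketch shows the wiring: redshift and redshift
    error are concatenated with the output of the first dropout layer
    and fed to the fully connected classifier.
    """
    heatmap = layers.Input(shape=(32, 180, 2), name="heatmap")
    z = layers.Input(shape=(2,), name="redshift_and_error")
    x = layers.Conv2D(16, (32, 3), activation="relu")(heatmap)  # full-height kernel
    x = layers.MaxPooling2D((1, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.4)(x)
    x = layers.Concatenate()([x, z])        # inject redshift after first dropout
    x = layers.Dense(32, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model([heatmap, z], out)
    model.compile(optimizer=tf.keras.optimizers.Adam(5e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Without the redshift branch, the same model reduces to the heatmap-only classifier, trained with the 1e-3 learning rate given above.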
\section{Results and Discussion} \subsection{Evaluation Metrics} The \textit{accuracy} of a set of predictions describes the frequency with which the predictions match the true labels. In this case, we define our prediction for each SN example as the class with highest probability output by the model, and compare this to the true label to obtain an accuracy. The \textit{confusion matrix} is a convenient visualization of the correct and missed predictions by class, providing a bit more insight into the model's performance. The confusion matrices shown in Figure~\ref{fig:cm} are normalized such that the $(i,j)$ entry describes the fraction of the true class, $i$, classified as class $j$. The confusion matrices in Figure~\ref{fig:bright} are colored by the normalized values, just like Figure~\ref{fig:cm}, but overlaid with absolute (non-normalized) values. For both figures, the $(i,i)$ entries, or those on the diagonal, describe correct classifications. The \textit{receiver operating characteristic (ROC) curve} makes use of the output probabilities for each class rather than simply taking the highest probability class, as the previous two metrics have done. We consider an example to be classified as class $i$ if the output probability for class $i$, or $p_i$, exceeds some threshold $p$ ($p_i > p$). The ROC curve sweeps values of $p$ between 0 and 1 and plots the true positive rate (TPR) at each value of $p$ against the false positive rate (FPR). TPR is the percentage of correctly classified objects in a particular class, or true positives (TP), as a fraction of all examples in that class, true positives and false negatives (TP+FN). Other names for TPR include \textit{recall} and \textit{efficiency}. The values along the diagonal of the normalized confusion matrices in Figure~\ref{fig:cm} are efficiency values. 
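All of these per-class quantities follow directly from a confusion matrix of raw counts; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def per_class_metrics(cm):
    """Efficiency (TPR), FPR, and purity (precision) for each class.

    cm[i, j] counts objects of true class i predicted as class j
    (raw counts, not normalized).
    """
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                     # correct classifications
    fn = cm.sum(axis=1) - tp             # missed members of each class
    fp = cm.sum(axis=0) - tp             # contaminants in each predicted class
    tn = cm.sum() - tp - fn - fp
    efficiency = tp / (tp + fn)          # TPR / recall
    fpr = fp / (fp + tn)
    purity = tp / (tp + fp)              # precision
    return efficiency, fpr, purity
```

Normalizing each row of `cm` by its sum reproduces the normalized confusion matrices shown in the figures, with the efficiency values along the diagonal.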
$$\mathrm{Efficiency=TPR = \frac{TP}{TP+FN}}$$ FPR is the percentage of objects incorrectly classified as a particular class, or false positives (FP), as a fraction of all examples not in that class, false positives and true negatives (FP+TN). $$\mathrm{FPR = \frac{FP}{FP+TN}}$$ The \textit{area under the ROC curve}, or \textit{AUC}, is used to evaluate the classifier from its ROC curve. A perfect classifier would have an AUC of 1, while a random classifier would score (on average) a 0.5. The \textit{precision} or \textit{purity} of a set of predictions is the percentage of correctly classified objects in a particular predicted class. $$\mathrm{Precision=\frac{TP}{TP+FP}}$$ \begin{table*} \centering \caption{Training, validation, and test accuracies \textit{without} redshift information for each early lightcurve dataset. These averages and standard deviations were computed from 5 independent runs of {\small SCONE}.} \label{tbl:no-z-acc} \begin{tabular}{l c c c c c} \hline Accuracy without Redshift & \multicolumn{5}{c}{Days After Trigger}\\ \hline & 0 days & 5 days & 15 days & 25 days & 50 days \\ \hline Training & $58.36 \pm 0.14$\% & $68.92 \pm 0.21$\% & $73.99 \pm 0.14\%$& $76.89 \pm 0.29$\% & $80.93 \pm 0.14$\%\\ Validation & $59.57 \pm 0.51$\% & $70.74 \pm 0.59$\% & $73.31 \pm 3.01\%$ & $79 \pm 0.84$\% & $82.5 \pm 2.35$\%\\ Test & $59.66 \pm 0.43$\% & $70.05 \pm 0.63$\% & $73.66 \pm 2.36$\%& $79 \pm 0.86$\% & $82.2 \pm 1.8$\%\\ \end{tabular} \end{table*} \begin{table*} \centering \caption{Training, validation, and test accuracies \textit{with} redshift information for each early lightcurve dataset. 
These averages and standard deviations were computed from 5 independent runs of {\small SCONE}.} \label{tbl:with-z-acc} \begin{tabular}{l c c c c c} \hline Accuracy with Redshift & \multicolumn{5}{c}{Days After Trigger}\\ \hline & 0 days & 5 days & 15 days & 25 days & 50 days \\ \hline Training & $72.73 \pm 0.27$\% & $79.61 \pm 0.3$\% & $83.07 \pm 0.2\%$& $84.68 \pm 0.2$\% & $87.17 \pm 0.26$\%\\ Validation & $74.78 \pm 0.18$\% & $80.52 \pm 1.42$\% & $83.98 \pm 1.15\%$& $86.75 \pm 0.5$\% & $89.2 \pm 0.85$\%\\ Test & $74.27 \pm 0.51$\% & $80.2 \pm 0.93$\% & $84.14 \pm 1.37$\%& $86.71 \pm 1$\% & $89.04 \pm 0.39$\%\\ \end{tabular} \end{table*} \begin{figure*}[ht] \figurenum{3} \label{fig:accs} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/no z/1sigma_accuracy_vs_time.pdf} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/with z/1sigma_accuracy_vs_time.pdf} \centering \caption{Accuracy/efficiency over time for each supernova type without redshift (left) and with redshift (right) for the $t_{\mathrm{trigger}}+N$ test datasets. The values used in this plot correspond with the diagonals on each normalized confusion matrix in Figure~\ref{fig:cm}.} \end{figure*} \subsection{$t_{\mathrm{trigger}}+N$ Datasets} The accuracies our model achieved without redshift on each $t_{\mathrm{trigger}}+N$ dataset are described in Table~\ref{tbl:no-z-acc}, and the accuracies with redshift are described in Table~\ref{tbl:with-z-acc}. These tables show that redshift unequivocally improves classification performance, especially at early times when there is little photometric data to learn from. The inclusion of redshift information not only increases the average accuracies for each dataset but also improves the model's generalizability, as the standard deviations for the validation and test accuracies are lower overall in Table~\ref{tbl:with-z-acc}. The largest improvement in accuracy between $t_{\mathrm{trigger}}+N$ datasets occurred between 0 and 5 days after trigger for all datasets. 
Since the explosion likely reached peak brightness during this period, the lightcurves truncated at 5 days after trigger include much more of the information necessary for differentiating between the SN types. Figure~\ref{fig:accs} shows the accuracy evolution over time for each supernova type in the test sets. From the right panel (test sets with redshift), it is clear that the jump in overall accuracy between 0 and 5 days after trigger can be attributed to the sharp accuracy boost experienced by SNIbc at 5 days after trigger. Overall, SNIbc benefited the most from the inclusion of redshift, though classification performance on all types saw improvement. Note that, as described in Section 2.3, all heatmaps are normalized to values between 0 and 1, so absolute flux values are not used to differentiate between types. Thus, the model cannot rely on relative luminosity information. The confusion matrices for $t_{\mathrm{trigger}}+\{0,5,50\}$ test sets with and without redshift information are shown in Figure~\ref{fig:cm}. The top two panels are early epoch classification results (0 and 5 days after trigger) and the bottom panel shows late epoch results. The confusion matrices from intermediate epochs (15 and 25 days after trigger) were omitted for brevity. At the date of trigger (top panel of Figure~\ref{fig:cm}), the incorporation of redshift information primarily prevents confusion between SLSN-I and SNIbc. True SLSN-I events misclassified as SNIbc decreased from 11\% on average to 2\% with redshift. True SNIbc misclassified as SLSN-I decreased from 16\% on average to 2\% with redshift. Overall, SLSN-I were classified with 91\% accuracy with redshift compared to 69\% without redshift, and SNIbc were classified with 54\% accuracy with redshift compared to 36\% without redshift.
All types saw marked improvement in classification performance without redshift from 0 to 5 days after trigger, while classification with redshift saw drastic improvement in SNIbc accuracy but only minor improvement for other types. Finally, the effect of added redshift becomes less noticeable by late epochs, where classification accuracy (along the diagonal) is only mildly improved in the bottom panel of Figure~\ref{fig:cm}. The confusion matrices in Figure~\ref{fig:cm} are normalized by true type, meaning that the values in each row sum to 1. Thus, the values along the diagonal are \textit{efficiency} scores. Normalizing by predicted type, such that the values in each column sum to 1, would result in \textit{purity} scores along the diagonal. However, since all datasets used in Figure~\ref{fig:cm} are class-balanced, the purity scores can be reconstructed from these confusion matrices by dividing each main diagonal value by the sum of the values in its column. \begin{figure*} \figurenum{4} \label{fig:cm} \includegraphics[scale=0.4,trim={1cm 0 0 0}]{images/no z/cm_-20x0_trigger_32x180.pdf} \includegraphics[scale=0.4,trim={1cm 0 0 0}]{images/with z/cm_-20x0_trigger_32x180.pdf} \includegraphics[scale=0.4,trim={1cm 0 0 0}]{images/no z/cm_-20x5_trigger_32x180.pdf} \includegraphics[scale=0.4,trim={1cm 0 0 0}]{images/with z/cm_-20x5_trigger_32x180.pdf} \includegraphics[scale=0.4,trim={1cm 0 0 0}]{images/no z/cm_-20x50_trigger_32x180.pdf} \includegraphics[scale=0.4,trim={0cm 0 0 0}]{images/with z/cm_-20x50_trigger_32x180.pdf} \centering \caption{Normalized confusion matrices produced by {\small SCONE}\ without (left) and with (right) redshift for the $t_{\mathrm{trigger}}+\{0,5,50\}$ test sets (heatmaps created from lightcurves truncated at 0, 5, and 50 days after the date of trigger). 
These matrices were made with test set classification performance from 5 independent runs of {\small SCONE}.} \end{figure*} \begin{figure*} \figurenum{5} \label{fig:roc} \includegraphics[scale=0.35,trim={0 0 0 1cm}]{images/no z/roc_-20x0_trigger_32x180.pdf} \includegraphics[scale=0.35,trim={0 0 0 1cm}]{images/with z/roc_-20x0_trigger_32x180.pdf} \includegraphics[scale=0.35,trim={0 0 0 0}]{images/no z/roc_-20x5_trigger_32x180.pdf} \includegraphics[scale=0.35,trim={0 0 0 0}]{images/with z/roc_-20x5_trigger_32x180.pdf} \includegraphics[scale=0.35,trim={0 0 0 0}]{images/no z/roc_-20x50_trigger_32x180.pdf} \includegraphics[scale=0.35,trim={-7cm 0 0 0}]{images/with z/roc_-20x50_trigger_32x180.pdf} \centering \caption{Receiver operating characteristic (ROC) curves produced by {\small SCONE}\ without (left) and with (right) redshift for the $t_{\mathrm{trigger}}+\{0,5,50\}$ test sets (heatmaps created from lightcurves truncated at 0, 5, and 50 days after the date of trigger).} \end{figure*} The datasets used for the confusion matrices in Figure~\ref{fig:cm} were also used to create ROC curves for each SN type. ROC curves for test sets without redshift are shown on the left side of Figure~\ref{fig:roc} and ROC curves for test sets with redshift are shown on the right. The addition of redshift information seems to most notably improve the model's ability to classify SLSN-I -- all three panels on the right show SLSN-I as the highest AUC curve whereas all three panels on the left show SNIa-91bg with a higher AUC curve than SLSN-I. This is consistent with our earlier observations from the confusion matrices and accuracy plots. The information in the ROC curves for all $t_{\mathrm{trigger}}+N$ datasets is summarized in Figure~\ref{fig:auc}, AUC over time plots with and without redshift. The performance looks quite impressive, starting at an average AUC of above 0.9 with redshift at the date of trigger and increasing to 0.975 by 50 days after trigger. 
Without redshift, average AUC is still respectable, starting at 0.88 and increasing to 0.97. \begin{figure*} \figurenum{6} \label{fig:auc} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/no z/auc_vs_time.pdf} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/with z/auc_vs_time.pdf} \centering \caption{Area under the ROC curve (AUC) without (left) and with (right) redshift over time for each supernova type.} \end{figure*} \subsection{Approximating a First-Detection Trigger Definition} Another common trigger definition used in transient surveys places the trigger at the date of the first detection ($t_{\rm first\;detection}$) rather than the second, which is the definition followed in this work. In order to more directly compare {\small SCONE}'s results with those of other classifiers following the first detection trigger definition, we examined the distribution of $t_{\rm trigger}- t_{\rm first\;detection}$ as well as {\small SCONE}'s performance on the subset of the $t_{\rm trigger}+0$ dataset with date of second detection ($t_{\rm trigger}$) at most 5 days after the date of first detection (i.e. $t_{\rm trigger}\leq t_{\rm first\;detection}+5$). Figure~\ref{fig:trigger-dist} shows that $>65$\% of $t_{\rm trigger}$ dates are no more than 5 days after the date of first detection. To further understand the direct impact of this choice of trigger definition, {\small SCONE}\ was tested on this subset. This cut ensures that the lightcurves used for classification are not given substantially more information than those created with the first detection trigger definition. The normalized confusion matrices for the $t_{\rm trigger}\leq t_{\rm first\;detection}+5$ dataset are shown with and without redshift in Figure~\ref{fig:trigger}. With redshift, {\small SCONE}'s performance primarily suffers on SLSN-I and SNIa classification.
SLSN-I appears to more strongly resemble SNIax and SNIbc at early times, as the SNIax confusion rose to 8\% from 1\% and the SNIbc confusion rose to 11\% from 2\%. SNIa were commonly misclassified as SNIa-91bg at early times, which is not reflected in the $t_{\rm trigger}+0$ confusion matrices in Figure~\ref{fig:cm}. Surprisingly, true SNIa-91bg were not misclassified as SNIa despite the prevalence of SNIa misclassified as SNIa-91bg. Without redshift, however, {\small SCONE}'s performance on the $t_{\rm trigger}\leq t_{\rm first\;detection}+5$ subset very closely resembles the $t_{\rm trigger}+0$ results shown in Figure~\ref{fig:cm}. \begin{figure} \figurenum{7} \label{fig:trigger-dist} \centering \includegraphics[scale=0.6,trim={0 0 0 0cm}]{images/test_set_trigger_to_first_detection_to_50.pdf} \caption{Distribution of $t_{\mathrm{trigger}}-t_{\mathrm{first\, detection}}$ in a {\small SCONE}\ test dataset of 4608 SNe.} \end{figure} \begin{figure*} \figurenum{8} \label{fig:trigger} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/no z/cm_first_detection_to_trigger_under_5_days.pdf} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/with z/cm_first_detection_to_trigger_under_5_days.pdf} \caption{Normalized confusion matrices produced by {\small SCONE}\ without (left) and with (right) redshift for the $t_{\rm trigger}\leq t_{\rm first\;detection}+5$ subset of the $t_{\mathrm{trigger}}+0$ test set. This cut ensures that the lightcurves used for performance evaluation are not given substantially more information than those created with the first detection trigger definition.} \end{figure*} \subsection{Baseline Model} A multi-layer perceptron model \citep[MLP,][]{HORNIK1989359} was developed as a baseline for direct comparison to {\small SCONE}. MLP architectures are a simple type of feedforward neural network with at least 3 layers (input, hidden, output) in which each node in a particular layer is connected to every node in the subsequent layer. 
They have been successfully used in many general as well as image classification tasks \citep{MLPMixer,gMLP}. The $32 \times 180 \times 2$ input heatmap is split into 180 non-overlapping ``patches'' of size $32 \times 1$. The patches were chosen to be full height in the wavelength dimension to remain consistent with the full height convolutional kernels used in {\small SCONE}. A $180 \times 64$-dimensional hidden layer is then computed via $h_{1,jk} = \mathrm{relu}(x^j_{i} W_{1,ik}+b_{1,k})$, where summation over the repeated index $i$ is implied, $\mathrm{relu}(x)=\mathrm{max}(0,x)$ is the rectified linear unit, $x^j$ is the $j^{\rm th}$ input heatmap patch, $W_1$ is the weight matrix learned by the network, and $b_1$ is the learned bias vector. The dimensionality of the hidden layer is then reduced to a single 64-dimensional vector by global average pooling over the patches: $h_{2,k} = \mathrm{average}_j(h_{1,jk})$. Finally, the output class probabilities are computed via $y_{m} = \sigma(h_{2,k} W_{2,km}+b_{2,m})_m$, where $\sigma(\vec{x})_k=\frac{e^{x_k}}{\sum_j{e^{x_j}}}$ is the softmax function, $W_2$ is the learned weight matrix, and $b_2$ is the learned bias vector. Without redshift, our model achieved a test accuracy of 56\%. With redshift, the test accuracy improved to 67.19\%. The performance of the MLP on the $t_{\mathrm{trigger}}+0$ dataset with and without redshift is summarized in the confusion matrices in Figure~\ref{fig:baseline}. Compared to the performance of {\small SCONE}\ on the $t_{\mathrm{trigger}}+0$ dataset in the top panel of Figure~\ref{fig:cm}, the MLP is less accurate at classifying most SN types, most noticeably with redshift. The degraded but still respectable performance of the MLP on classification both with and without redshift shows that these supernova types can indeed be differentiated in some hyperdimensional space by a neural network, and that {\small SCONE}\ in particular possesses the required discriminatory power for this task.
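The MLP forward pass described above can be sketched in a few lines of numpy. This is a minimal illustration, not the trained model: the weights are randomly initialized stand-ins for the learned parameters, and the exact patch/channel layout is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# 180 full-height patches; each patch has 32 wavelength bins x 2 channels
n_patches, patch_dim, hidden, n_classes = 180, 32 * 2, 64, 6

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# randomly initialized parameters stand in for the learned weights/biases
W1 = rng.normal(size=(patch_dim, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, n_classes)) * 0.1
b2 = np.zeros(n_classes)

def mlp_forward(heatmap):
    # heatmap: (32, 180, 2) -> 180 patches of 64 values each
    patches = heatmap.transpose(1, 0, 2).reshape(n_patches, patch_dim)
    h1 = relu(patches @ W1 + b1)   # (180, 64) hidden layer
    h2 = h1.mean(axis=0)           # global average pooling -> (64,)
    return softmax(h2 @ W2 + b2)   # (6,) class probabilities

probs = mlp_forward(rng.normal(size=(32, 180, 2)))
```

Running this on a random heatmap yields a length-6 probability vector summing to 1, matching the six SN classes.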
\begin{figure*} \figurenum{9} \label{fig:baseline} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/no z/cm_cm_mlp_-20x0_trig.pdf} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/with z/cm_cm_mlp_-20x0_trig.pdf} \centering \caption{Normalized confusion matrices produced by the baseline MLP model without (left) and with (right) redshift for the $t_{\mathrm{trigger}}+0$ test set (heatmaps created from lightcurves truncated at the date of trigger).} \end{figure*} \subsection{Bright Supernovae} Bright supernovae, defined as supernovae with last included $r$-band observation $r<20$~mag, were identified from both the $t_{\mathrm{trigger}}+0$ and $t_{\mathrm{trigger}}+5$ datasets. Since fewer (and likely dimmer) observations were included for each supernova in the $t_{\mathrm{trigger}}+0$ dataset, there are far fewer examples of bright supernovae than in the $t_{\mathrm{trigger}}+5$ dataset. The bright supernovae subsets of these datasets are referred to as the ``bright $t_{\mathrm{trigger}}+N$ datasets''. To evaluate the performance of {\small SCONE}\ on identifying bright supernovae at early epochs, the model was trained on a regular class-balanced $t_{\mathrm{trigger}}+N$ training set, prepared as described in Section 2.4, combined with 40\% of the bright $t_{\mathrm{trigger}}+N$ dataset. The results of testing the trained {\small SCONE}\ model on the bright $t_{\mathrm{trigger}}+N$ datasets are shown in Figure~\ref{fig:bright}. These confusion matrices, like the ones in Figure~\ref{fig:cm}, are colored by efficiency score. However, since the dataset is not class-balanced, the overlaid values are absolute (non-normalized) to preserve information on the relative abundance of each type. Thus, an efficiency (purity) score for each type can be calculated by dividing each main diagonal value by the sum of the values in its row (column). The overall accuracies as well as the total number of SNe in each dataset are summarized in Table~\ref{tbl:bright-acc}.
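The efficiency and purity scores described above follow directly from an absolute (non-normalized) confusion matrix; a minimal numpy sketch, with made-up matrix values for illustration:

```python
import numpy as np

# Hypothetical absolute confusion matrix: rows = true type, columns = predicted
cm = np.array([[50,  2,  3],
               [ 4, 40,  1],
               [ 6,  0, 30]])

efficiency = cm.diagonal() / cm.sum(axis=1)  # diagonal / row sums (completeness)
purity     = cm.diagonal() / cm.sum(axis=0)  # diagonal / column sums
accuracy   = cm.trace() / cm.sum()           # overall accuracy
```

For a class-balanced test set, the efficiency values are exactly the main-diagonal entries of the normalized confusion matrices shown in the figures.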
The benefits of redshift information are much more pronounced for certain types than others. As also noted in the analyses of Figures~\ref{fig:cm} and~\ref{fig:auc}, the quantity of SNIbc misclassified as SLSN-I was significantly reduced in results from {\small SCONE}\ with redshift information. At the date of trigger, 44.4\% of SNIbc were misclassified as SLSN-I without redshift. This contamination rate was reduced to only 3.7\% with redshift. However, classification of bright SNIa seems relatively unaffected by the presence of redshift information. Five days after trigger, SNIa were classified with an efficiency/accuracy of 98.6\% and a purity score of 98.1\% without redshift, and 97.4\% efficiency/accuracy and 99.1\% purity with redshift. \begin{table} \centering \caption{Test accuracies with and without redshift information for the bright datasets.} \label{tbl:bright-acc} \begin{tabular}{l c c c} \hline & Total & Accuracy no $z$ & Accuracy with $z$\\ \hline bright $t_{\mathrm{trigger}}+0$ & 907 & 82.91\% & 91.18\%\\ bright $t_{\mathrm{trigger}}+5$ & 5088 & 94.65\% & 95.2\% \\ \end{tabular} \end{table} \begin{figure*} \figurenum{10} \label{fig:bright} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/no z/cm_bright_-20x0.pdf} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/with z/cm_bright_-20x0.pdf} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/no z/cm_bright_-20x5.pdf} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/with z/cm_bright_-20x5.pdf} \centering \caption{Early epoch confusion matrices with (right) and without (left) redshift for the bright supernovae ($<20$ magnitude) in each $t_{\mathrm{trigger}}+N$ dataset. {\small SCONE}\ was trained with a class-balanced $t_{\mathrm{trigger}}+N$ training set combined with 40\% of $<20$ magnitude supernovae. These confusion matrices were created by testing the trained {\small SCONE}\ model on the full $<20$ magnitude supernovae dataset.
The confusion matrices are colored according to normalized accuracies, as in Figure~\ref{fig:cm}, and are overlaid with absolute (non-normalized) values since the dataset is imbalanced.} \end{figure*} \subsection{Mixed Dataset} Training on the $t_{\mathrm{trigger}}+N$ datasets represents one way of deploying {\small SCONE}\ for real-world transient alert applications, while training on a mixed dataset is a much less computationally expensive alternative. On one hand, testing a $t_{\mathrm{trigger}}+N$-trained model on a $t_{\mathrm{trigger}}+N$ test set yields the best classification accuracies. However, this approach requires the creation of separate datasets for each choice of $N$, which could be an expensive initial time investment depending on the number of datasets and size of each dataset (see Section 2.6 for computational requirements for heatmap creation). In this work, only five datasets ($N=0,5,15,25,50$) were created, but perhaps $N=0,1,\dots,50$ will be needed to accurately classify real-world transient alerts with any number of nights of photometry. Training on a mixed dataset, where each heatmap is created with a random number of nights of photometry after trigger, is a viable alternative for resource- or time-constrained applications. To directly compare the performance of {\small SCONE}\ trained on the mixed dataset and the $t_{\mathrm{trigger}}+N$ datasets, the mixed-dataset-trained model was tested on each individual $t_{\mathrm{trigger}}+N$ dataset. The accuracies over time split by SN type are summarized in Figure~\ref{fig:mixed-accs}. Compared to the results of {\small SCONE}\ trained and tested on each individual $t_{\mathrm{trigger}}+N$ dataset (Figure~\ref{fig:accs}), the accuracies are lower but still respectable. The performance at the date of trigger is the most dissimilar, with an average accuracy of $\sim$74\% with $z$ for a model trained on $t_{\mathrm{trigger}}+0$ and $\sim$64\% with mixed.
By 5 days after trigger, however, the mixed-trained model performs similarly to the $t_{\mathrm{trigger}}+N$-trained model, with both averaging just under 80\% with $z$. The AUCs over time split by SN type are shown in Figure~\ref{fig:mixed-auc}. These AUC plots are comparable to the $t_{\mathrm{trigger}}+N$ AUCs in Figure~\ref{fig:auc}, indicating that the performances of both models are comparable when averaged over all values of the prediction threshold $p$. However, the predicted class for categorical classification is not typically calculated with respect to a threshold; rather, it is defined as the class with the highest prediction confidence for each example. Thus, the AUCs are analogous to analyzing the performance on each type as its own binary classification problem, resulting in slight discrepancies from the accuracies. \begin{figure*} \figurenum{11} \label{fig:mixed-accs} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/no z/mixed_accuracy_vs_time.pdf} \includegraphics[scale=0.4,trim={0 0 0 0}]{images/with z/mixed_accuracy_vs_time.pdf} \centering \caption{Test set accuracy/efficiency without (left) and with (right) redshift over time for {\small SCONE}\ trained on the mixed dataset and tested on each individual $t_{\mathrm{trigger}}+N$ dataset.
The values used in these plots correspond with the diagonals on a normalized confusion matrix.} \end{figure*} \begin{figure*} \figurenum{12} \label{fig:mixed-auc} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/no z/mixed_auc_vs_time.pdf} \includegraphics[scale=0.4,trim={0 0 0 0cm}]{images/with z/mixed_auc_vs_time.pdf} \centering \caption{Area under the ROC curve (AUC) without (left) and with (right) redshift over time for {\small SCONE}\ trained on the mixed dataset and tested on each individual $t_{\mathrm{trigger}}+N$ dataset.} \end{figure*} \subsection{Comparison with Existing Literature} At the time of this writing, the only work in existing literature with a similarly strong focus on early photometric classification of supernovae is RAPID \citep[][hereafter M19]{rapid}, a GRU RNN approach to photometric transient classification that differentiates between 12 transient types, including 7 supernova types. RAPID differs in several significant ways from the data, model, and results presented in this work. We highlight some of the differences in these two works below. \subsubsection{Comparison of Methods} The most obvious difference is in the type of neural network architecture used for classification. RAPID uses a uni-directional RNN architecture, which is designed to learn from time-series data chronologically. {\small SCONE}\ employs a convolutional neural network architecture, which is most commonly used for image recognition tasks. In this instance, however, {\small SCONE}\ is designed to read in data chronologically. Convolutional layers in a CNN work by computing functions on a ``sliding window'' of the input image, thereby allowing the model to learn small-scale structures in the image. This window, defined by the convolutional kernel, is typically a small square chosen to match the characteristic length scale of structures in the images.
{\small SCONE}'s convolutional kernel, however, is chosen to span the full height of the input heatmap, resulting in a window that slides chronologically along the horizontal, or time, axis. M19 trained and tested on a dataset of simulated Zwicky Transient Facility (ZTF) lightcurves, which have $g$- and $r$-band photometry, compared to the LSST lightcurves used in this work, with $ugrizY$ photometry bands. In addition to the 6 supernova types that this work focuses on, M19 includes 4 rare transient classes (pair instability supernovae (PISN), intermediate luminosity transients (ILOT), calcium-rich gap transients (CART), and tidal disruption events (TDE)) as well as point-Ia and KN. Other differences include the addition of a ``pre-explosion'' class, rest- vs. observer-frame time intervals, and the choice of trigger definition. M19 chooses to include an additional class, ``pre-explosion'', to describe examples at time-steps prior to the occurrence of the transient event. M19 also converts time intervals from observer frame to rest frame by dividing by $1+z$, which is not done in this work to ensure that mistakes in redshift estimates will not be propagated to affect the lightcurve data. Finally, M19 uses the first-detection trigger date definition, while this work defines $t_{\rm trigger}$ to be the date of the second detection. \subsubsection{Comparison of Results} The results of {\small SCONE}\ classification with redshift (right side panels of Figures~\ref{fig:cm}--\ref{fig:auc}) are used to compare with RAPID's results, as RAPID also incorporates redshift information. As described in the previous section, this work differs in many ways from M19 and the following comparison does not account for these differences; a rigorous comparison of the two models against a single dataset is left to a future work. Most notably, {\small SCONE}\ improves upon RAPID's SNIbc and SNII classification accuracy, while RAPID performs very well at classifying early-time SNIa.
From Figure 7 of M19, 12\% of SNIbc are correctly classified 2 days after detection, compared to {\small SCONE}'s 54\% accuracy at the date of trigger. In RAPID's results, 30\% of true SNIbc are misclassified as CART, which is not included in the datasets in this work. The second and third largest contaminants (SNIax at 19\%, then SNIa and SNIa-91bg at 8\% each) are all part of this analysis. From Figure~\ref{fig:cm}, we find that SNIax and SNIa-91bg are also major contaminants for {\small SCONE}\ at 23\% and 11\%, respectively, at the date of trigger and 16\% and 4\%, respectively, 5 days after trigger. However, there is no significant contamination from SNIa, with contamination rates at 4\% on the date of trigger and 1\% 5 days after trigger. Two days after detection, SNII is classified at 7\% accuracy by RAPID compared to 64\% accuracy at the date of trigger by {\small SCONE}. The primary contaminant of SNII for RAPID 2 days after detection is SNIa at 21\%, which is not reflected in {\small SCONE}'s results, where the contamination rate is 6\% at the date of trigger and 3\% 5 days after trigger. The second largest contaminant, SLSN-I, is also not an issue in {\small SCONE}'s SNII classification. Surprisingly, the improvement over time of RAPID's SNII classification accuracy outpaces its SNIbc classification accuracy, as it is able to achieve 49\% accuracy on SNII 40 days after detection compared to 31\% accuracy on SNIbc. While {\small SCONE}'s SNIa classification accuracy slowly climbs from 77\% at the date of trigger to 93\% 50 days after trigger, RAPID is able to classify SNIa at 88\% accuracy almost immediately after detection. A future direct comparison will aid in concluding whether this discrepancy is due to differences in the datasets, such as M19's exclusion of $z \geq 0.5$ objects, or something more fundamental to the model architectures.
\section{Conclusions} Our ability to observe the universe has improved by leaps and bounds over the past century, allowing us to find new and rare transient phenomena, enrich our understanding of transient physics, and even make cosmological discoveries aided by observational data. Our photometric observing capabilities greatly outpace the rate at which we can gather the associated spectroscopic information, resulting in a vast trove of photometric data sparsely annotated by spectroscopy. In the era of large-scale sky surveys, with millions of transient alerts per night, an accurate and efficient photometric classifier is essential not only to make use of the photometric data for science analysis, but also to determine the most effective spectroscopic follow-up program early on in the life of the transient. In this work, we presented {\small SCONE}'s performance classifying simulated LSST early-time supernova lightcurves for SN types Ia, II, Ibc, Ia-91bg, Iax, and SLSN-I. As a neural network-based approach, {\small SCONE}\ avoids the time-intensive manual process of feature selection and engineering, and requires only raw photometric data as input. We showed that the incorporation of redshift estimates as well as errors on those estimates significantly improved classification accuracy across the board, and was especially noticeable at very early times. Notably, this is the first application of convolutional neural networks to this problem. {\small SCONE}\ was tested on 3 types of datasets: datasets of lightcurves that were truncated at 0, 5, 15, 25, and 50 days after trigger ($t_{\mathrm{trigger}}+N$ datasets); bright ($< 20$ magnitude) subsets of the $t_{\mathrm{trigger}}+\{0,5\}$ datasets; and a dataset of lightcurves truncated at a random number of nights between 0 and 50 (``mixed''). Without redshift, {\small SCONE}\ was able to classify $t_{\mathrm{trigger}}+0$ lightcurves with 60\% overall accuracy, which increases to 82\% at 50 days after trigger.
{\small SCONE}\ with redshift information starts at 74\% overall accuracy at the date of trigger and improves to 89\% 50 days after trigger. Confusion matrices, ROC plots, and accuracy over time as well as AUC over time plots of results with and without redshift were presented to better understand classification performance and identify areas of improvement. For the bright subsets, overall accuracy is $>90$\% at the date of trigger with redshift and over 80\% without. These results improve to around 95\% accuracy both with and without redshift by 5 days after trigger. The overall accuracy over time of a mixed-dataset-trained model tested on the $t_{\mathrm{trigger}}+N$ datasets shows some degradation in accuracy at very early epochs, but may be a worthwhile lightweight alternative to the more resource-intensive process of creating many $t_{\mathrm{trigger}}+N$ datasets. We showed that {\small SCONE}'s performance with redshift is competitive with existing work on early classification, such as M19, while improving on computational time requirements. {\small SCONE}\ has a lightweight pre-processing step and can achieve impressive performance with a small training set. It requires only hundredths of a second to preprocess each lightcurve into a heatmap and seconds for each training epoch on GPU. This makes {\small SCONE}\ a great candidate for incorporation into alert brokers for LSST and future wide-field sky surveys. In future work, we plan to apply this model to real data to further validate the approach. We also plan to extend {\small SCONE}\ to classify both full-duration and early lightcurves for more transient and variable classes in the PLAsTiCC simulations. \section{Acknowledgments} This work was supported by DOE grant DE-FOA-0002424, NASA Grant NNH15ZDA001N-WFIRST, and NSF grant AST-2108094. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. 
Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
\section{Introduction} For a finite tensor category $\cat$ the \emph{Brauer-Picard group} $\BrPic(\cat)$ is defined as the group of equivalence classes of invertible $\cat$-bimodule categories. This group is an important invariant of the tensor category $\cat$ and appears in several essential places in representation theory, for example in the classification problem of $G$-extensions of fusion categories, see \cite{ENO10}. In mathematical physics, (bi-)module categories appear as boundary conditions and defects in $3d$-TQFT; in particular, the Brauer-Picard group is a symmetry group of such theories, see \cite{FSV13}, \cite{FPSV15}. \\ An important structural insight is the following result, proven in Thm. 1.1 \cite{ENO10} for $\cat$ a fusion category and in Thm. 4.1 \cite{DN12} for $\cat$ a finite tensor category (hence not necessarily semisimple): There is a group isomorphism from the Brauer-Picard group to the group of equivalence classes of \emph{braided autoequivalences} of the \emph{Drinfeld center} $Z(\cat)$: \begin{align}\label{ENO} \BrPic(\cat) \cong \Aut_{br}(Z(\cat)) \end{align} In the case $\cat=\Rep(G)$ of finite dimensional complex representations of a finite group $G$ (respectively $\cat=\Vect_G$, which has the same Drinfeld center), computing the Brauer-Picard group is already an interesting and non-trivial task. This group appears as the symmetry group of (extended) Dijkgraaf-Witten theories with structure group $G$. See \cite{DW90} for the original work on Chern-Simons with finite gauge group $G$ and see \cite{FQ93}, \cite{Mo13} for the extended case. In \cite{O03} the authors have obtained a parametrization of $\Vect_G$-bimodule categories in terms of certain subgroups $L \subset G\times G^{op}$ and $2$-cocycles $\mu$ on $L$, and \cite{Dav10} has determined a condition for when such pairs correspond to invertible bimodule categories.
However, the necessary calculations to determine $\BrPic(\cat)$ seem to be notoriously hard and the above approach gives little information about the group structure. In \cite{NR14} the authors use the isomorphism to $\Aut_{br}(Z(\cat))$ in order to compute the Brauer-Picard group for several groups $G$ using the following strategy: They enumerate all subcategories $\LL\subset Z(\cat)$ that are braided equivalent to $\cat=\Rep(G)$, then they prove that $\Aut_{br}(Z(\cat))$ acts transitively on this set. Finally, they determine the stabilizer of the standard subcategory $\cat \subset Z(\cat)$ with trivial braiding. For $G$ abelian, the second author's joint paper \cite{FPSV15} determines a set of generators of the Brauer-Picard group and provides a field theoretic interpretation of the isomorphism $\BrPic(\cat) \cong \Aut_{br}(Z(\cat))$ in terms of 3d-Dijkgraaf-Witten theory with defects. Results for Brauer-Picard groups of other categories $\cat$ include representations of the Taft algebra in \cite{FMM14} and of supergroups in \cite{Mom12},\cite{BN14}. \\ An alternative characterization of elements in $\Aut_{br}(Z(H\md\mod))$ in terms of quantum commutative Bigalois objects was given in \cite{ZZ13}. \\ \enlargethispage{0.2cm} In this article we propose an approach to calculate $\BrPic(\cat)$ for $\cat=H\md\mod$, the category of finite-dimensional representations of a finite-dimensional Hopf algebra $H$. Let $\cat$ be any tensor category. Then there exists a well-known group homomorphism: $$ \Ind_\cat:\;\Aut_{mon}(\cat)\to \BrPic(\cat) \cong \Aut_{br}(Z(\cat)) $$ given by assigning to a monoidal automorphism $\Psi \in \Aut_{mon}(\cat)$ the invertible $\cat$-bimodule category $_{\Psi}\cat_\cat$, where the left $\cat$-module structure is given by precomposing with $\Psi$; then we use the isomorphism (\ref{ENO}) mentioned above. The image of this map gives us a natural subgroup of the Brauer-Picard group. 
If we can choose another category $\cat'$ and a braided equivalence $F: Z(\cat') \stackrel{\sim}{\to} Z(\cat)$, then we get a different induction and a new subgroup of $\Aut_{br}(Z(\cat))$: $$\Ind_{\cat',F}:\;\Aut_{mon}(\cat')\to \BrPic(\cat') \cong \Aut_{br}(Z(\cat')) \stackrel{F}{\cong} \Aut_{br}(Z(\cat))$$ Consider a finite dimensional Hopf algebra $H$ and let $\cat=H\md\mod$ be the category of finite dimensional $H$-modules. Then $Z(H\md\mod)=DH\md\mod=H^*\bowtie H\md\mod$ and we have a canonical choice $\cat'=H^*\md\mod$ and a canonical isomorphism of Hopf algebras $DH \ito D(H^*)$ (see Thm. 3 in \cite{Rad93}), which gives us a canonical braided equivalence $D(H^*)\md\mod \cong DH\md\mod$. Hence, we have two canonical subgroups of $\Aut_{br}(DH\md\mod)$, namely $\im(\Ind_{H\md\mod})$ and $\im(\Ind_{H^*\md\mod})$. \\ Let us introduce an additional set $\cR\subset \Aut_{br}(DH\md\mod)$. For each decomposition of $H$ into a Radford biproduct $H=A\ltimes K$ (see Sect. 10.6 \cite{Mon93}), where $A$ is a Hopf algebra and $K$ is a Hopf algebra in the category $Z(A\md\mod)$, for a choice of a Hopf algebra $L$ in $Z(A\md\mod)$ and a non-degenerate Hopf algebra pairing $\langle \cdot,\cdot\rangle:K \otimes L \to k$ in the category $Z(A\md\mod)$, we can construct a canonical braided equivalence by Thm. 3.20 in \cite{BLS15}: $$\Omega^{\langle \cdot , \cdot \rangle}:Z(A\ltimes K\md\mod) \ito Z(A \ltimes L\md\mod)$$ In the special case of $L:=K$, the functor $\Omega^{\langle \cdot , \cdot \rangle}$ is a braided autoequivalence of $Z(A\ltimes K\md\mod)=DH\md\mod$. In this case, we identify $\langle \cdot , \cdot \rangle$ canonically with an isomorphism of Hopf algebras in $Z(A\md\mod)$ that we denote by $\delta:K \ito K^*$. We call the triple $(A,K,\delta)$ a partial dualization datum and $r_{A,K,\delta}:=\Omega^{\langle \cdot ,\cdot\rangle} \in \Aut_{br}(DH\md\mod)$ a partial dualization of $H$ on $K$.
\\ In the case of a group algebra $H=kG$ of a finite group $G$, we obtain for each decomposition of $G$ as a semi-direct product $G=Q \ltimes N$ a decomposition of $kG$ as a Radford biproduct $kG= kQ \ltimes kN$, where $N$ is a normal subgroup of $G$. $kN$ is a Hopf algebra in $Z(kQ\md\mod)$, where $kQ$ acts on $kN$ by conjugation and where the $kQ$-coaction on $kN$ is trivial. The existence of a Hopf isomorphism $\delta:kN \ito k^N$ in the category $Z(kQ\md\mod)$ forces $N$ to be abelian and $kN$ to be a self-dual $kQ$-module. Thus, for each partial dualization datum $(Q,N,\delta)$, we obtain an element $r_{Q,N,\delta} \in \Aut_{br}(DG\md\mod)$. We denote the set of partial dualizations by $\cR$. \begin{question}\label{q_decomposition} Do the subgroups $\im(\Ind_{H\md\mod})$, $\im(\Ind_{H^*\md\mod})$ together with partial dualizations $\cR$ generate the group $\Aut_{br}(DH\md\mod)$? Does $\Aut_{br}(DH\md\mod)$ decompose as an ordered product of $\im(\Ind_{H\md\mod})$, $\im(\Ind_{H^*\md\mod})$ and $\cR$? \end{question} \noindent Natural questions for applications are: \begin{question} The elements of $\im(\Ind_\cat),\im(\Ind_{\cat'})$ are by definition realized as different bimodule category structures of the abelian categories $\cat$ and $\cat'$ respectively. What are the bimodule categories associated to the partial dualizations? \end{question} \begin{question} What are the three types of group extensions of the fusion category $\cat$ associated by the isomorphism in \cite{ENO10} to the two subgroups and to partial dualizations? \end{question} A decomposition as described in Question \ref{q_decomposition} would give us effective control over the Brauer-Picard group $\BrPic(\cat)$ through explicit and natural generators. Additionally, these generators have an interesting field theoretic interpretation (see next page). 
\\ In the present article we consider the case $H=kG$ with $G$ a finite group, hence $\cat=\Vect_G$, $\cat'=\Rep(G)$; in this case the subgroups $\Aut_{mon}(\Vect_G)$ and to a lesser extent $\Aut_{mon}(\Rep(G))$ are well-known. As a main result, we prove that the decomposition described in Question \ref{q_decomposition} holds for the subgroup of elements in $\BrPic(\Vect_G)$ that fulfill the additional cohomological property of \emph{laziness}. This condition is automatically fulfilled in the case that $G$ is abelian. Further, for some known examples, we check that the decomposition holds also for the full Brauer-Picard group (see Section \ref{sec_examples}). One important example is the following \begin{exampleX}[Sec. \ref{sec_Fp_AutBr}] Let $G = \ZZ_p^n$ with $p$ a prime number. Fixing an isomorphism $\ZZ_p \simeq \widehat{\ZZ}_p$, we have the following group isomorphism: $$\BrPic(\Rep(\ZZ_p^n)) \simeq \O_{2n}(\F_p,q)$$ where $\O_{2n}(\F_p,q)$ is the group of invertible matrices in $\F_p^{2n\times 2n}$ that preserve the quadratic form $q:\F_p^{2n} \to \F_p$, $(k_1,\dots,k_n,l_1,\dots,l_n) \mapsto \sum_{i=1}^{n}k_il_i$. In this case, the images of $\Ind_\cat$ resp. $\Ind_{\cat'}$ in these Lie groups are lower resp. upper block triangular matrices, intersecting in the subgroup $\Out(\ZZ_p^n) \simeq \GL_n(\F_p)$. The partial dualizations are Weyl group elements. Our result gives an analogue of the Bruhat decomposition of the Lie group $\O_{2n}(\F_p,q)$. There are $n+1$ double cosets of the parabolic Weyl group $\SS_n$, accounting for the $n+1$ non-isomorphic partial dualizations on subgroups $\ZZ_p^k$ for $k=0,\dots,n$. \end{exampleX} Our general decomposition is modeled after this example and retains roughly what remains of the Bruhat decomposition for a Lie group over a ring (say in the case $G=\ZZ_k^n$ with $k$ not prime), but it is not a Bruhat decomposition in general.
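To make the group in the example concrete, membership in $\O_{2n}(\F_p,q)$ can be verified by brute force for small parameters. The following sketch is our own illustration (the choices $p=3$, $n=2$ and the test matrices are ours, not from the text):

```python
import itertools
import numpy as np

p, n = 3, 2  # illustrate O_4(F_3, q)

def q(v):
    # q(k_1..k_n, l_1..l_n) = sum_i k_i * l_i  (mod p)
    return int(np.dot(v[:n], v[n:])) % p

def preserves_q(M):
    # M lies in O_2n(F_p, q) iff q(Mv) = q(v) for every v in F_p^{2n}
    return all(q(M @ np.array(v) % p) == q(np.array(v))
               for v in itertools.product(range(p), repeat=2 * n))

I, Z = np.eye(n, dtype=int), np.zeros((n, n), dtype=int)
# Weyl-type element swapping the k- and l-blocks (a full dualization)
W = np.block([[Z, I], [I, Z]])
# a shear mixing l_1 into k_1 does not preserve q
S = np.eye(2 * n, dtype=int)
S[0, n] = 1

print(preserves_q(W), preserves_q(S))  # True False
```

The block-swap $W$ preserves $q$ since $q(l,k)=\sum_i l_i k_i = q(k,l)$, while the shear fails already on the basis vector with $l_1=1$.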
Moreover, for $G$ non-abelian the subgroups $\im(\Ind_{\cat}),\im(\Ind_{\cat'})$ in $\Aut_{br}(\DG\md\mod)$ are not isomorphic. Additionally, we exhibit a rare class of braided autoequivalences acting as the identity functor on objects and morphisms but having a non-trivial monoidal structure.\\ From a mathematical physics perspective these subgroups arise as follows: A Dijkgraaf-Witten theory has as input data a finite group $G$ and a $3$-cocycle $\omega$ on $G$. It is a topological gauge theory with principal $G$-bundles on a manifold $M$ as classical fields. Since for a finite group $G$ all $G$-bundles are flat, they already form the configuration space. The $\omega$ corresponds to a Lagrangian functional (in our article $\omega$ is trivial). We are interested in the symmetry group of the quantized theory, which is by definition the group of invertible defects and hence $\BrPic(\Vect_G)$. Based on this gauge theoretic view, it is natural to expect automorphisms of $G$ to be a symmetry of the classical and the quantized theory. Indeed, $\cV=\Out(G)$ is a subgroup of $\Aut_{br}(\DG\md\mod)$ and since it already exists at the classical level, we call this a classical symmetry (see Proposition \ref{vcat}). It is both a subgroup of $\im(\Ind_{\Vect_G})$ and a subgroup of $\im(\Ind_{\Rep(G)})$. More symmetries can be obtained by the following idea: equivalence classes of gauge fields are principal $G$-bundles and thus are in bijection with homotopy classes of maps from $M$ to $BG$, the classifying space of $G$. One may view a Dijkgraaf-Witten theory based on $(G,\omega)$ as a $\sigma$-model with target space $BG$. Then the $3$-cocycle $\omega$ can be viewed as a background field on the target space, and the choice of $\omega$ corresponds to the choice of a $2$-gerbe. Even for trivial $\omega$ we obtain a non-trivial symmetry group of this $2$-gerbe and hence an additional subgroup of automorphisms of the theory.
These symmetries are again classical symmetries, the so-called \emph{background field symmetries} $\H^2(G,k^\times)$. Our subgroup $\im(\Ind_{\Vect_G})=\cB\rtimes \cV$, where $\cB \cong \H^2(G,k^\times)$, is therefore the semidirect product of the two classical symmetry groups from above (see Proposition \ref{lm_BField}). An interesting implication of our result is that in order to obtain the full automorphism group one considers a second $\sigma$-model (the 'dual $\sigma$-model') associated to $\cat'=\Rep(G)$, which however leads to the same quantum field theory. This dual $\sigma$-model induces another subgroup of background field symmetries $\cE$, which is a subgroup of $\im(\Ind_{\Rep(G)})$ (see Proposition \ref{efield}). However, the group $\im(\Ind_{\Rep(G)})$ is \emph{not} a semidirect product of $\cE$ and $\cV$ in general. The elements in $\cR$ have the field-theoretic interpretation of so-called \emph{partial em-dualities} (electric-magnetic dualities). In so-called quantum double models (see e.g. \cite{BCKA13} and \cite{KaW07}), irreducible representations of $\DG$ have the interpretation of quasi-particle charges (anyon charges). These are parametrized by pairs $([g],\chi)$, where $[g]$ is a conjugacy class in $G$ and $\chi$ an irreducible representation of the centralizer $\Cent(g)$. The irreducibles of the form $([g],1)$ are called \emph{magnetic} and the ones of the form $([1],\chi)$ are called \emph{electric}. Partial dualizations $\cR$ are symmetries of the quantized theory that exchange magnetic and electric charges (see Proposition \ref{pcat} and Section \ref{sec_nonlazyReflection}). These are only present at the quantum level, hence we call them quantum symmetries. \\ For the lazy Brauer-Picard group, which incorporates the abelian case as a special case, it is enough to dualize on direct abelian factors of $G$. A partial dualization is then induced by a Hopf automorphism of $\DG$ (Proposition \ref{pcat}).
For the general Brauer-Picard group, we need to consider semi-direct abelian factors of $G$. Partial dualizations are then induced by algebra isomorphisms that are not necessarily Hopf (see Section \ref{sec_nonlazyReflection}). \\ \noindent We now outline the structure of this article and give details on our methods and results:\\ In Section 2, we give some preliminaries: We recall the definition of the Drinfeld double $\DG$ and list the irreducible modules $\ocat_g^\chi$ in order to be able to express our result also in this explicit basis. Further, we give some basic facts about Hopf Bigalois objects; these are certain $H^*$-bicomodule algebras $A$ such that the functor $A \otimes_H \bullet$ gives an element in $\Aut_{mon}(H\md\mod)$, and all monoidal equivalences of $H\md\mod$ arise in this way from some Bigalois object. A special class of Bigalois objects is given by \emph{lazy} Bigalois objects: These are described by pairs $(\phi,\sigma)$ where $\phi \in \Aut_{Hopf}(H)$ describes the action of $A \otimes_H \bullet$ on $H$-modules and where $\sigma \in \Z^2_L(H^*)$, a lazy $2$-cocycle, describes a monoidal structure on the functor $A \otimes_H \bullet$. \\ In Section 3, we recall the decomposition of the group of Hopf algebra automorphisms $\Aut_{Hopf}(\DG)$ we have obtained in \cite{LP15} into certain subgroups. These subgroups can be seen as upper triangular matrices $B$, lower triangular matrices $E$, block diagonal matrices $V \cong \Aut(G)$ and $V_c \cong \Aut_c(G)$, and so-called \emph{reflections} on direct abelian factors of $G$. \\ In Section 4, we construct certain \emph{braided} lazy autoequivalences of $\DG\md\mod$. For this we first consider lazy monoidal autoequivalences $\Aut_{mon}(\DG\md\mod)$.
These can be parametrized by pairs $$(\phi,\sigma) \in \Aut_{Hopf}(\DG) \ltimes \Z^2_L(\DG^*)$$ where a pair $(\phi,\sigma)$ corresponds to the functor $(F_\phi,J^{\sigma})$ acting on objects via $\phi$ together with a monoidal structure $J^\sigma$ determined by the lazy $2$-cocycle $\sigma$. A $2$-cocycle on a Hopf algebra $H$ is lazy if the Doi twist with $\sigma$ gives again the same Hopf algebra structure. We note that two different pairs may of course give functors that are monoidally equivalent. For example, they might differ by a pair consisting of an inner Hopf automorphism and an exact $2$-cocycle. Additionally, internal Hopf automorphisms produce trivial monoidal autoequivalences. This leads us to the following tedious notation: Let $\underline{\Aut}_{mon}(\DG \md \mod)$ be the category of monoidal autoequivalences and $\underline{\Aut}_{mon,L}(\DG \md \mod) := \Aut_{Hopf}(\DG) \ltimes \Z^2_L(\DG^*)$. Further, let $\widetilde{\Aut}_{mon,L}(\DG \md\mod) := \Out_{Hopf}(\DG) \ltimes \H^2_L(\DG^*)$ be the quotient by lazy coboundaries and inner automorphisms. Let $\Aut_{mon}(\DG \md \mod)$, $\Aut_{mon,L}(\DG \md \mod)$ be the groups of \emph{equivalence classes} of monoidal autoequivalences respectively lazy monoidal autoequivalences. Note that $\widetilde{\Aut}_{mon,L}(\DG \md\mod) \twoheadrightarrow \Aut_{mon,L}(\DG \md \mod)$ has a non-trivial kernel (see Sequence (\ref{monlazy})). Accordingly, we use the notation $\Aut_{br}(\DG\md\mod)$, $\widetilde{\Aut}_{br,L}(\DG \md\mod)$ etc. for the corresponding subgroups of braided autoequivalences. For $\mathcal{\underline{U}} \subset \underline{\Aut}_{br,L}(\DG\md\mod)$ we denote the images by $\widetilde{\mathcal{U}} \subset \widetilde{\Aut}_{br,L}(\DG\md\mod)$ and $\mathcal{U} \subset \Aut_{br,L}(\DG\md\mod)$. \\ The goal of Section 4 is to construct certain subgroups of $\Aut_{br,L}(\DG\md\mod)$. Recall the subgroups $V$, $B$, $E$ as well as the subset $R$ in $\Aut_{Hopf}(\DG)$ from Section 3.
We observe that, for each of these subsets, any suitable element $\phi$ can be combined with a specific $2$-cocycle $\sigma$ in $\H^2(G,k^\times)$ resp. $\H^2_L(k^G)$ resp. a pairing, such that the pair $(\phi,\sigma)$ becomes braided. We thus define $\cVUnderL$, $\cBUnderL$, $\cEUnderL$, $\cRUnderL$ $\subset \underline{\Aut}_{br,L}(\DG \md \mod)$ in Propositions \ref{vcat}, \ref{lm_BField}, \ref{efield} and \ref{pcat} as follows: \\ $\bullet$ The subgroup $\cVUnderL\cong \Aut(G)$ consists of pairs $(\phi,1)$, i.e. functors induced by group automorphisms $\phi \in \Aut(G)$ on objects and a trivial monoidal structure. The images in the quotients $\widetilde{\Aut}_{br,L}(\DG\md\mod)$ resp. $\Aut_{br,L}(\DG\md\mod)$ are $\cVTildeL \cong \cVL \cong\Out(G)$. \\ $\bullet$ The subgroup $\cEUnderL$ consists of suitable elements $\phi\in E$, each combined with a specific cocycle $\sigma\in \Z^2(k^G)$. More precisely, they are constructed in a way that makes the group homomorphism $\cEUnderL\to \Aut_{Hopf}(\DG)$ given by $(\phi,\sigma)\mapsto \phi$ restrict to an isomorphism $$\cETildeL\stackrel{\sim}{\longrightarrow} Z(G)\wedge Z(G)\subset E$$ The image $\cEL \subset \Aut_{br,L}(\DG\md\mod)$ corresponds to the lazy elements in the image of $$\Ind_{\Rep(G)}:\;\Aut_{mon}(\Rep(G))\to \BrPic(\Rep(G))$$ (up to $\cVL$). Laziness means here that they arise from $\Aut_{mon}(\Rep(Z(G)))$. \\ $\bullet$ The subgroup $\cBUnderL$ is constructed similarly to $\cEUnderL$. We combine an element $\phi \in B$ with a special cocycle $\sigma \in \Z^2(G,k^\times)$. Then the image $\cBL \subset \Aut_{br}(\DG\md\mod)$ corresponds to the lazy elements in the image of $$\Ind_{\Vect_G}:\;\Aut_{mon}(\Vect_G)\to \BrPic(\Rep(G))$$ (up to $\cVL$). Laziness means here that they arise from $\Aut_{mon}(\Vect_{G_{ab}})$.
In this case $(\phi,\sigma) \mapsto \phi$ is a surjective group homomorphism \begin{align*} \cBTildeL \twoheadrightarrow \hat{G}_{ab}\wedge \hat{G}_{ab}\subset B \end{align*} which is \emph{not} injective in general. Rather, $\cBTildeL$ is a central extension of $\hat{G}_{ab}\wedge \hat{G}_{ab}$ by the conjugation invariant \emph{distinguished cohomology classes} of $G$ (c.f. \cite{Higgs87}). For $G$ non-abelian, we hence have an interesting class of braided autoequivalences in $\Aut_{br}(\DG\md\mod)$ which are trivial on objects ($\phi=1$) but have a nontrivial monoidal structure $J^\sigma$. These seem to be rather rare; in fact, the first nontrivial example arises for $G$ a certain non-abelian group of order $p^{9}$, see Example \ref{exm_p9_B}. \\ $\bullet$ The reflections $\cRUnderL=\cRTildeL=\cR$ arise as follows: Consider decompositions $G=H\times C$ together with a Hopf isomorphism $\delta:kC \simeq k^C$; such a $\delta$ exists precisely when $C$ is abelian, i.e. $C$ is an abelian direct factor of $G$. For a triple $(H,C,\delta)$ we consider reflections $r_{H,C,\delta}\in R \subset \Aut_{Hopf}(\DG)$; we say that two reflections are equivalent, $r_{H,C,\delta} \sim r'_{H',C',\delta'}$, whenever there exists a group isomorphism $C \simeq C'$. For every $r \in R$ we find the unique $2$-cocycle induced by a pairing $\lambda$ such that the element $(r,\lambda)$ is a braided autoequivalence. We have $\cRUnderL\cong R$. Note, however, that contrary to the abelian case, $\cRUnderL$ does not interchange $\cEUnderL$ and $\cBUnderL$ by conjugation. Also, in order to describe the Brauer-Picard group in the non-lazy case one needs a notion of partial dualizations on \emph{semidirect} products. These do not give lazy elements, unless the semidirect factor is indeed a direct factor. See also the example $G=\SS_3$ in Section \ref{sec_examples} for non-lazy reflections.
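To illustrate the kind of cocycles entering $\cBUnderL$ in the simplest situation, the following sketch (ours, for illustration only) checks for $A=\ZZ_n\times \ZZ_n$ that the bilinear map $\sigma(a,b)=\zeta^{a_1b_2}$, with $\zeta$ a fixed primitive $n$-th root of unity and exponents written additively, satisfies the group $2$-cocycle condition $\sigma(a,b)\sigma(ab,c)=\sigma(b,c)\sigma(a,bc)$, while its alternation $\sigma(a,b)\sigma(b,a)^{-1}$ is a nontrivial bicharacter, in line with $\H^2(A,k^\times)\cong \hat{A}\wedge\hat{A}$ for abelian $A$.

```python
from itertools import product

n = 6
G = list(product(range(n), repeat=2))   # Z_n x Z_n, written additively
add = lambda a, b: ((a[0] + b[0]) % n, (a[1] + b[1]) % n)
sigma = lambda a, b: (a[0] * b[1]) % n  # exponent of a fixed primitive n-th root of unity

# group 2-cocycle condition, checked in the exponent
for a, b, c in product(G, repeat=3):
    assert (sigma(a, b) + sigma(add(a, b), c)) % n == \
           (sigma(b, c) + sigma(a, add(b, c))) % n

# the class of sigma is detected by its (nontrivial) alternating bicharacter
assert any((sigma(a, b) - sigma(b, a)) % n for a, b in product(G, repeat=2))
```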
\\ In Section 5, we finally prove the main result of this article:\\ \begin{theoremX}[\ref{thm_classification}]~\\ (i) Let $G$ be a finite group. Then for every element $(\phi,\sigma) \in \Aut_{br,L}(\DG\md\mod)$ there exists an $(r,\lambda) \in \cRL$ such that $(\phi,\sigma)$ is in \begin{equation*} (\cVL \ltimes \cBL)\cEL \cdot (r,\lambda) \end{equation*} (ii) Let $G=H \times C$ where $H$ is purely non-abelian and $C$ is \emph{elementary} abelian. Then $\Aut_{br,L}(\DG\md\mod)$ has a double coset decomposition \begin{equation*} \begin{split} \Aut_{br,L}(\DG\md\mod) &= \bigsqcup_{(r,\lambda) \in \cRL/\sim} \cVL \cBL \cdot (r,\lambda) \cdot \cVL \cEL \end{split} \end{equation*} where two partial dualizations $(r_{H,C,\delta},\lambda)$, $(r'_{H',C',\delta'},\lambda')$ are equivalent if and only if there exists a group isomorphism $C \simeq C'$. \end{theoremX} The proof proceeds roughly as follows: Take an arbitrary element $(\phi,\sigma)$, write $\phi\in \Aut_{Hopf}(\DG)$ according to the decomposition obtained in Section 3 and show that the braiding condition implies that the factors of the decomposition of $\phi$ in $V,E,B,R$ can be lifted to $\cVUnderL,\cEUnderL,\cBUnderL,\cRUnderL$. We simplify $(\phi,\sigma)$ by multiplying with these lifts. Finally, we argue that $(\id,\sigma')$ being a braided autoequivalence implies that $\sigma'$ is (up to a natural equivalence) a conjugation invariant distinguished $2$-cocycle on $G$. \\ We close in Section \ref{sec_examples} by comparing our results for example groups $G$ with the (full) Brauer-Picard group obtained in \cite{NR14}, and show that the answer to Question \ref{q_decomposition} is positive in these cases. \section{Preliminaries} We will work with a field $k$ that is algebraically closed and has characteristic zero. We denote by $\widehat{G}$ the group of $1$-dimensional characters of $G$.
\\ \subsection{Modules over the Drinfeld double}\hskip 0pt \\ \noindent We assume the reader is familiar with Hopf algebras and the representation theory of Hopf algebras to the extent given in the standard literature, e.g. \cite{Kass94}. Denote by $kG$ the group algebra and by $k^G= kG^*$ the dual of the group algebra. Both have well-known Hopf algebra structures. The Hopf algebra $kG$ acts on itself and on $k^G$ by conjugation: $h \triangleright g = hgh^{-1}$ and $h \triangleright e_g = e_{hgh^{-1}}$, where the functions $e_x \in k^G$, defined by $e_x(y) = \delta_{x,y}$, form a basis of $k^G$. We will also use the convention $g^h := h^{-1}gh$ and $^hg := hgh^{-1}$. Starting from a finite dimensional Hopf algebra $H$ one can construct the Drinfeld double $DH$, which is $H^{* \mathrm{cop}}\otimes H$ as a coalgebra (see e.g. \cite{Kass94}). Here we will be mainly interested in the Drinfeld double of the Hopf algebra $kG$ for a finite group $G$. We denote the vector space basis of $\DG$ by $\{e_x \times y\}_{x,y \in G}$. Then $\DG$ has the following Hopf algebra structure: $$(e_x \times y)(e_{x'} \times y') = e_x(^yx') (e_x \times yy') \qquad \Delta(e_x \times y) = \sum_{x_1x_2=x}(e_{x_1} \times y) \otimes (e_{x_2} \times y)$$ with the unit $1_{\DG} = \sum_{x \in G} (e_x \times 1_G)$, the counit $\epsilon(e_x \times y) = \delta_{x,1_G}$ and the antipode $S(e_x \times y) = e_{y^{-1}x^{-1}y} \times y^{-1}$. \noindent Later we will also use the Hopf algebra $\DG^*$, the dual Hopf algebra of $\DG$. For this we recall that in $\DG^*$ we have $$(x \times e_y)({x'} \times e_{y'}) = (xx' \times e_y*e_{y'}) \qquad \Delta(x \times e_y) = \sum_{y_1y_2=y}(x \times e_{y_1}) \otimes (x^{y_1} \times e_{y_2})$$ In the case that the group $G=A$ is abelian, we have $DA \simeq k(\hat{A}\times A)$ and $DA^* \simeq k(A\times\hat{A})$, and these are isomorphic to each other as Hopf algebras. In general there is no Hopf isomorphism from $\DG$ to $\DG^*$.
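The structure maps above are finite and thus easy to machine-check. The following script (our own sanity check, not part of the paper) implements $\DG$ for $G=\SS_3$ with exactly these structure constants and verifies unitality, associativity, the antipode axiom $\sum S(a_1)a_2=\epsilon(a)1$, and the quantum Yang-Baxter equation for the element $R=\sum_g(e_g\times 1)\otimes(1\times g)$, which will reappear below as the universal $R$-matrix.

```python
from itertools import product

# S3 as permutations of {0,1,2}; composition (a*b)(i) = a(b(i))
def comp(a, b): return tuple(a[b[i]] for i in range(3))
G = [(0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0), (1, 2, 0), (2, 0, 1)]
e = (0, 1, 2)
inv = {g: next(h for h in G if comp(g, h) == e) for g in G}
conj = lambda y, x: comp(comp(y, x), inv[y])     # ^y x = y x y^{-1}

B = [(x, y) for x in G for y in G]               # basis e_x x y of D(G)

def mul(u, v):
    # (e_x x y)(e_x' x y') = delta_{x, ^y x'} e_x x yy', extended bilinearly;
    # linear combinations are dicts {basis element: coefficient}
    out = {}
    for ((x, y), c1), ((x2, y2), c2) in product(u.items(), v.items()):
        if x == conj(y, x2):
            b = (x, comp(y, y2))
            out[b] = out.get(b, 0) + c1 * c2
    return {b: c for b, c in out.items() if c}

one = {(x, e): 1 for x in G}                     # 1 = sum_x e_x x 1

for a in B:                                      # unitality
    assert mul({a: 1}, one) == {a: 1} == mul(one, {a: 1})
for a, b, c in product(B, repeat=3):             # associativity
    assert mul(mul({a: 1}, {b: 1}), {c: 1}) == mul({a: 1}, mul({b: 1}, {c: 1}))

# antipode axiom sum S(a_1) a_2 = eps(a) 1, using
# Delta(e_x x y) = sum_{x1 x2 = x} (e_x1 x y) (x) (e_x2 x y)
S = lambda x, y: (conj(inv[y], inv[x]), inv[y])  # S(e_x x y) = e_{y^-1 x^-1 y} x y^-1
for x, y in B:
    lhs = {}
    for x1 in G:
        x2 = comp(inv[x1], x)                    # so that x1 * x2 = x
        for b, c in mul({S(x1, y): 1}, {(x2, y): 1}).items():
            lhs[b] = lhs.get(b, 0) + c
    assert {b: c for b, c in lhs.items() if c} == (one if x == e else {})

# quantum Yang-Baxter equation R12 R13 R23 = R23 R13 R12
R = {((g, e), (x, g)): 1 for g in G for x in G}  # R = sum_g (e_g x 1) (x) (1 x g)

def embed(r, i, j):
    # place the two legs of r into tensor slots i, j of D(G)^(x)3, unit elsewhere
    out = {}
    for (b1, b2), c in r.items():
        for x in G:
            t = [(x, e)] * 3
            t[i], t[j] = b1, b2
            out[tuple(t)] = out.get(tuple(t), 0) + c
    return out

def mul3(u, v):                                  # slotwise product on D(G)^(x)3
    out = {}
    for (a, c1), (b, c2) in product(u.items(), v.items()):
        ps = [mul({a[i]: 1}, {b[i]: 1}) for i in range(3)]
        for t in product(*[list(pi.items()) for pi in ps]):
            key = tuple(bi for bi, _ in t)
            coef = c1 * c2
            for _, ci in t:
                coef *= ci
            out[key] = out.get(key, 0) + coef
    return {k: c for k, c in out.items() if c}

R12, R13, R23 = embed(R, 0, 1), embed(R, 0, 2), embed(R, 1, 2)
assert mul3(mul3(R12, R13), R23) == mul3(mul3(R23, R13), R12)
```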
\noindent Let us denote the category of left $\DG$-modules by $\DG \md\mod$ and recall that it is a semisimple braided tensor category as follows: \\ $\bullet$ The simple objects of $\DG \md \mod$ are induced modules $\ocat_g^{\rho} := kG \otimes_{k\Cent(g)}V$, where $[g] \subset G$ is a conjugacy class and $\rho:\Cent_G(g) \rightarrow \GL(V)$ an isomorphism class of an irreducible representation of the centralizer of a representative $g \in [g]$. We have the following left $\DG$-action on $\ocat_g^{\rho}$: $$(e_h\times t).(y\otimes v):=e_h((ty)g(ty)^{-1})(ty\otimes v)$$ More explicitly: $\ocat_g^\rho$ is a $G$-graded vector space consisting of $|[g]|$ copies of $V$: $$\ocat_g^\rho:=\bigoplus_{ \gamma \in [g]} V_{\gamma},\quad V_{\gamma}:=V$$ Then the action of an element $(e_h\times 1)\in \DG$ is given by projecting to the homogeneous component $V_h$. Choose a set of coset representatives $\{s_i \in G \}$ of $G/\Cent_G(g) \simeq [g]$. Then for a homogeneous component $V_\gamma$ with $\gamma \in [g]$ there is a unique representative $s_i \in G$ that corresponds under the conjugation action to the element $\gamma \in [g]$, thus $s_igs_i^{-1}=\gamma$. For an $h \in G$, there is a unique representative $s_j$ such that $hs_i \in s_j\Cent_G(g)$. The action of an element $(1\times h) \in \DG$ is then given by \begin{align}\label{dgd} V_{\gamma}&\to V_{h\gamma h^{-1}}; v\mapsto (1\times h).v:= h.v := \rho(s_j^{-1}hs_i)(v) \end{align} This is indeed well-defined, since $s_jgs_j^{-1}=h \gamma h^{-1}$. \\ $\bullet$ The monoidal structure on $\DG \md \mod$ is given by the tensor product of $\DG$-modules, i.e. with the diagonal action on the tensor product.
\\ $\bullet$ The braiding $\{c_{M,N}:M \otimes N \stackrel{\sim}{\to} N \otimes M \mid M,N \in \DG \md \mod\}$ on $\DG \md \mod$ is defined by the universal $R$-matrix $$R = \sum_{g \in G}(e_g \times 1)\otimes (1 \times g) = R_1 \otimes R_2 \in \DG \otimes \DG$$ \begin{align}\label{DGbraid} c_{M,N}(m \otimes n) = \tau(R.(m\otimes n)) = R_2.n \otimes R_1.m \end{align} where $\tau:M \otimes N \to N \otimes M; m \otimes n \mapsto n \otimes m$ is the twist. \\ In the above convention we leave out the sum for $R=R_1 \otimes R_2 \in \DG \otimes \DG$. This should not be confused with the Sweedler notation for the coproduct or coaction. Note that $\DG \md\mod$ is equivalent as a braided monoidal category to the category of $G$-Yetter-Drinfeld modules and to the Drinfeld center $Z(\Vect_G)$ of the category of $G$-graded vector spaces. \\ Given a monoidal category $\cat$, denote by $\underline{\Aut}_{mon}(\cat)$ the category of monoidal autoequivalences and natural monoidal transformations. Denote by ${\Aut_{mon}}(\cat)$ the group of isomorphism classes of monoidal autoequivalences. Similarly, for a braided category $\bcat$, let $\underline{\Aut}_{br}(\bcat)$ be the category of braided autoequivalences and natural monoidal transformations and let ${\Aut_{br}}(\bcat)$ be the group of isomorphism classes of braided monoidal autoequivalences. \\ \subsection{Hopf-Galois-Extensions}~\\ \noindent In order to study braided automorphisms of $\DG\md\mod$ we will make use of the theory of Hopf-Galois extensions. For this our main sources are \cite{Schau91}, \cite{Schau96}, \cite{Schau02} and \cite{BC04}. The motivation for this approach lies mainly in the relationship between Galois extensions and monoidal functors as formulated e.g. in \cite{Schau91} and also stated in Proposition \ref{fib}. Namely, monoidal functors between the category of $L$-comodules and the category of $H$-comodules are in one-to-one correspondence with $L$-$H$-Bigalois objects.
For this reason we are led to the study of $DG^*$-Bigalois extensions. We have summarized the relevant facts in more detail in Sect.~1 of \cite{LP15}. \begin{definition} Let $H$ be a Hopf algebra. A right $H$-comodule algebra $(A,\delta_R)$ is called a right $H$-\emph{Galois extension of } $k$ (or $H$-\emph{Galois object}) if the Galois map \begin{diagram} \beta_A:&A \otimes A &\rTo^{id_A \otimes \delta_R} &A \otimes A \otimes H &\rTo^{\mu_A \otimes id_H} &A \otimes H \\ &x \otimes y &\rMapsto^{} &x \otimes y_0 \otimes y_1 &\rMapsto^{} &xy_0 \otimes y_1 \end{diagram} is a bijection. A morphism of right $H$-Galois objects is an $H$-colinear algebra morphism. Left $H$-Galois objects are defined similarly. Denote by $\underline{\Gal}(H)$ the category of right $H$-Galois extensions of $k$ and by $\mathrm{Gal}(H)$ the set of equivalence classes of right $H$-Galois objects. \end{definition} \begin{definition} Let $L,H$ be two Hopf algebras. An $L$-$H$-\emph{Bigalois object} $A$ is an $L$-$H$-bicomodule algebra which is a left $L$-Galois object and a right $H$-Galois object. Denote by $\Bigal(L,H)$ the set of isomorphism classes of $L$-$H$-Bigalois objects and by $\Bigal(H)$ the set of isomorphism classes of $H$-$H$-Bigalois objects. \end{definition} \noindent Recall that the \emph{cotensor product} of a right $L$-comodule $(A,\delta_R)$ and a left $L$-comodule $(B,\delta_L)$ is defined by $$A\square_L B:= \left\{ \sum a\otimes b\in A\otimes B\mid \sum \delta_R(a)\otimes b= \sum a\otimes \delta_L(b)\right\}$$ Moreover, if $A$ is an $E$-$L$-Bigalois object and $B$ an $L$-$H$-Bigalois object then the cotensor product $A\square_L B$ is an $E$-$H$-Bigalois object. \begin{proposition} The cotensor product gives $\Bigal(H)$ a group structure. The Hopf algebra $H$ with the natural $H$-$H$-Bigalois object structure is the unit in the group $\Bigal(H)$.
Further, we can define a groupoid $\underline{\Bigal}$ where the objects are given by Hopf algebras and the morphisms between two Hopf algebras $L$, $H$ are given by elements in $\Bigal(L,H)$. The composition of morphisms is the cotensor product. \label{groupoid} \end{proposition} Recall that a fiber functor $H \md \mathrm{comod} \rightarrow \mathrm{Vect}_k$ is a $k$-linear, monoidal, exact, faithful functor that preserves colimits. We denote by $\underline{\mathrm{Fun}}_{fib}(H\md\comod,\mathrm{Vect}_k)$ the category of fiber functors and monoidal natural transformations. Given an $H$-comodule $A$, the functor $A \square_H \bullet: H\md\comod \rightarrow \mathrm{Vect}_k$ is $k$-linear, and lax monoidal structures on this functor are in one-to-one correspondence with algebra structures on $A$ that make $A$ an $H$-comodule algebra. Such a functor is monoidal (not just lax monoidal) if and only if the Galois map $\beta_A$ is a bijection. Moreover, we have an equivalence of categories $\underline{\Gal}(H) \simeq \underline{\mathrm{Fun}}_{fib}(H\md \comod,\Vect_k)$ (see \cite{Ulb89}). In order for the images $A \square_H V$ to carry an $H$-comodule structure such that $A \square_H \bullet$ becomes an autoequivalence of $H\md\comod$, we need $A$ to be an $H$-Bigalois object. Moreover we have: \begin{proposition}(Sect.
5 \cite{Schau91}) \\ Let $H$ be a Hopf algebra. Then we have the following group isomorphism: \begin{equation*}\begin{split} \mathrm{Bigal}(H) & \to \Aut_{mon}(H\md\mathrm{comod}) \\ A &\mapsto (A \square_H \bullet,J^A) \end{split}\end{equation*} where the monoidal structure $J^A$ of the functor $ A \square_H \bullet$ is given by \begin{equation} \begin{split} J^A_{V,W}:(A \square_H V) \otimes_k (A \square_H W) &\stackrel{\sim}{\rightarrow} A \square_H (V \otimes_k W) \\ \left(\sum x_i \otimes v_i\right)\otimes \left(\sum y_i \otimes w_i\right) &\mapsto \sum x_iy_i \otimes v_i \otimes w_i \end{split} \label{eqn:monoidal1} \end{equation} \label{fib} \end{proposition} There is a large class of $H$-Galois extensions that are $H$ as $H$-comodules. Let us from now on use the Sweedler notation: $\Delta_H(h) = h_1 \otimes h_2$. A $k$-linear, convolution invertible map $\sigma: H \otimes H \to k$ such that $\sigma(1,b) = \epsilon(b) = \sigma(b,1)$ is called a $2$-cocycle on $H$ if for all $a,b,c \in H$ \begin{align*} \sigma(a_1,b_1)\sigma(a_2b_2,c) = \sigma(b_1,c_1)\sigma(a,b_2c_2) \end{align*} We denote by $\Z^2(H)$ the set of $2$-cocycles and by $\Reg^1(H)$ the set of $k$-linear, convolution invertible maps $\eta:H \to k$ such that $\eta(1)=1$. We have a map $\d:\Reg^1(H) \to \Z^2(H)$ defined by $\d\eta = (\eta \otimes \eta)* (\eta^{-1} \circ \mu_H)$ for $\eta \in \Reg^1(H)$. The $2$-cocycles in the image of $\d$ are called \emph{exact}. The convolution of $2$-cocycles does not yield a $2$-cocycle in general and $\Z^2(H)$ does not form a group. In order to obtain a group structure under convolution we need the following notion: A $2$-cocycle $\sigma$ is called \emph{lazy} if it (convolution-)commutes with the multiplication in $H$: $\sigma * \mu_H = \mu_H * \sigma$.
Hence for all $a,b \in H$: \begin{align}\label{lazy} \sigma(a_1,b_1)a_2b_2 = a_1b_1\sigma(a_2,b_2) \end{align} The set of lazy $2$-cocycles $\Z^2_L(H)$ does indeed form a group with respect to convolution. Denote by $\Reg_L^1(H)$ the subgroup of those $\eta \in \Reg^1(H)$ that additionally commute with the identity: $\eta*\id_H = \id_H*\eta$. Define the \emph{lazy cohomology group} by $$\H_L^2(H):=\Z_L^2(H)/\d(\Reg^1_L(H))$$ It is possible that $\d \eta$ is a lazy $2$-cocycle even if $\eta \in \Reg^1(H)$ is not in $\Reg_L^1(H)$. Such an $\eta$ is called \emph{almost lazy} and the subgroup of such almost lazy maps is denoted by $\Reg^1_{aL}(H) \subset \Reg^1(H)$. \\ If $H$ is finite dimensional (or pointed), every $H$-Galois object is of the form ${_\sigma}H$ for some $2$-cocycle $\sigma \in \Z^2(H)$. Here ${_\sigma}H$ is $H$ as an $H$-comodule and has a twisted algebra structure: $a \cdot_\sigma b = \sigma(a_1,b_1)a_2b_2$ for $a,b \in H$. Further, a right $H$-Galois object $A$ is automatically an $L\md H$-Bigalois object with $L \simeq {_\sigma} H_{\sigma^{-1}}$, which is $H$ as a coalgebra and has the twisted algebra structure $a \cdot_\sigma b = \sigma(a_1,b_1)a_2b_2\sigma^{-1}(a_3,b_3)$ for $a,b \in H$. If $\sigma$ is lazy this implies ${_\sigma} H_{\sigma^{-1}} = H$. We call a right $H$-Galois object ${_\sigma}H$ lazy if $\sigma$ is lazy and we call an $H$-Bigalois object lazy if it is lazy as a right $H$-Galois object. Denote by $\Bigal_{lazy}(H)$ the group of lazy $H$-Bigalois objects. \\ We can assign to a pair $(\phi,\sigma) \in \Aut_{Hopf}(H) \times \Z^2_L(H^*)$ the $H^*$-Bigalois object $^{\phi^*}{_\sigma}H^*$ that is $H^*$ as a right $H^*$-comodule, where the left $H^*$-coaction is given by precomposing with the dual $\phi^*:H^* \to H^*$ and where the algebra structure is twisted by $\sigma$ as above. This assignment induces a group homomorphism on $\Out_{Hopf}(H) \times \Z^2_L(H^*)$ which is not surjective, but has $\Bigal_{lazy}(H^*)$ as image.
The kernel of this map is $\Int(H)/\Inn(H)$, where $\Int(H)$ are the internal Hopf automorphisms (of the form $x \cdot x^{-1}$ for an invertible $x \in H$) and where $\Inn(H)$ are the inner Hopf automorphisms (of the form $x \cdot x^{-1}$ for a group-like $x \in H$). Given an $x \cdot x^{-1} \in \Int(H)/\Inn(H)$, the dual $x^*:H^* \to k$ is in $\Reg_{aL}^1(H^*)$ and the pair $(x \cdot x^{-1},\d(x^*))$ is a non-trivial element in $\mathrm{Out}_{Hopf}(H) \ltimes \H^2_{L}(H^*)$. To summarize, we have an exact sequence of groups \begin{align}\label{monlazy} 1 \to \Int(H)/\Inn(H) \to \mathrm{Out}_{Hopf}(H) \ltimes \H^2_{L}(H^*) \to \Bigal(H^*) \end{align} where $\Bigal_{lazy}(H^*)$ is given by the image of the third map. We use the group isomorphism in Proposition \ref{fib} to define the group of lazy monoidal autoequivalences $\Aut_{mon,L}(H \md\mod)$ as the corresponding subgroup of $\Aut_{mon}(H \md\mod)$, and accordingly $\Aut_{br,L}(H \md\mod):= \Aut_{mon,L}(H \md\mod) \cap \Aut_{br}(H \md\mod)$. \section{Decomposition of $\Aut_{Hopf}(\DG)$}\label{sec_cell}\noindent According to Sequence (\ref{monlazy}), the study of lazy monoidal autoequivalences can essentially be reduced to the study of Hopf automorphisms and of the second lazy cohomology group. In this section, we recall the decomposition of $\Aut_{Hopf}(\DG)$ that was proven in \cite{LP15}. According to \cite{Keil} (Theorem 1.1 and Corollary 1.2) we can uniquely represent every Hopf automorphism $\phi$ of $\DG$ by an invertible matrix $\left( \begin{smallmatrix} u & b \\ a & v \end{smallmatrix}\right)$ where $u \in \End_{Hopf}(k^G)$, $b \in \Hom(G,\widehat{G})$, $a \in \Hom(\widehat{Z(G)},Z(G))$ and $v \in \End(G)$ fulfilling certain equations (see \cite{Keil}). Then $\phi(f \times g) = u(f_{(1)})b(g) \times a(f_{(2)})v(g)$ for all $f \in k^G$ and $g \in G$.
Further, matrix multiplication, where the multiplication of the entries is composition of maps and addition of the entries is the respective convolution product, corresponds to the composition in $\Aut_{Hopf}(\DG)$. \begin{proposition}\cite{Keil}~\\ Let $\{e_g \times h \mid g,h \in G\}$ be the standard basis of $\DG$. There are the following natural subgroups of $\Aut_{Hopf}(\DG)$: \\ (i) $V := \left\{ e_g \times h \mapsto e_{v(g)}\times v(h) \mid v \in \Aut(G) \right\} \simeq \left\{ \left( \begin{smallmatrix} v^{-1} & 0 \\ 0 & v \end{smallmatrix}\right) \mid v \in \Aut(G) \right\}$ \\ (ii) $B := \left\{ e_g\times h \mapsto b(h)(g)\;e_g \times h \mid b \in \Hom(G_{ab},\widehat{G}_{ab}) \right\} \simeq \left\{ \left( \begin{smallmatrix} 1 & b \\ 0 & 1 \end{smallmatrix}\right) \mid b \in \Hom(G_{ab},\widehat{G}_{ab}) \right\}$ \\ (iii) $E := \left\{ e_g \times h \mapsto \sum_{g_1g_2 = g}e_{g_1} \times a(e_{g_2})h \mid a \in \Hom(\widehat{Z(G)},Z(G)) \right\}$ \\ \hspace*{1.35cm}$\simeq \left\{ \left( \begin{smallmatrix} 1 & 0 \\ a & 1 \end{smallmatrix}\right) \mid a \in \Hom(\widehat{Z(G)},Z(G))\right\}$ \\ (iv) $V_c := \left\{ e_g\times h \mapsto e_{w(g)}\times h \mid w \in \Aut_c(G) \right\} \simeq \left\{ \left( \begin{smallmatrix} 1 & 0 \\ 0 & w \end{smallmatrix}\right) \mid w \in \Aut_c(G) \right\}$ where $\Aut_c(G)$ is the group of central automorphisms, hence $w \in \Aut(G)$ with $w(g)g^{-1} \in Z(G) \; \forall g \in G$. \\ We have $V \simeq \Aut(G), B \simeq \Hom(G_{ab},\widehat{G}_{ab}), E \simeq \Hom(\widehat{Z(G)},Z(G)), V_c \simeq \Aut_c(G)$. We can also write $B \simeq \widehat{G}_{ab}\otimes_{\ZZ} \widehat{G}_{ab}$ and $E \simeq Z(G) \otimes_{\ZZ} Z(G)$. 
\\ \label{auto} \end{proposition} \begin{proposition}\cite{LP15} Let $R_t$ be the set of all tuples $(H,C,\delta,\nu)$, where $C$ is an abelian subgroup of $G$ and $H$ is a subgroup of $G$, such that $G = H \times C$, $\delta:kC \stackrel{\sim}{\rightarrow} k^C$ is a Hopf isomorphism and $\nu:C \rightarrow C$ a nilpotent homomorphism. \\ (i) For $(H,C,\delta,\nu)$ we define a \emph{twisted reflection} $r_{H,C,\delta,\nu}:\DG \rightarrow \DG$ of $C$ by: \begin{equation*} \begin{split} (f_H,f_C) \times (h,c) &\mapsto (f_H,\delta(c)) \times (h, \delta^{-1}(f_C)\nu(c)) \end{split} \end{equation*} where $f_H \in k^H$, $f_C \in k^C$, $h \in H$ and $c \in C$. All $r_{H,C,\delta,\nu}$ are Hopf automorphisms. \\ (ii) Denote by $R$ the subset of $R_t$ consisting of the elements fulfilling $\nu=1_C$. We call the corresponding Hopf automorphisms \emph{reflections} on $C$. \\ \label{reflections} \end{proposition} \begin{theorem}~\label{thm_cell}\cite{LP15} \\ (i) Let $G$ be a finite group, then $\Aut_{Hopf}(\DG)$ is generated by the subgroups $V$, $V_c$, $B$, $E$ and the set of reflections $R$. \\ (ii) For every $\phi \in \Aut_{Hopf}(\DG)$ there is a twisted reflection $r = r_{H,C,\delta,\nu} \in R_t$ such that $\phi$ is an element in the double coset \begin{equation*} \begin{split} &[(V_c \rtimes V) \ltimes B] \cdot r \cdot [(V_c \rtimes V) \ltimes E] \end{split} \end{equation*} (iii) Two double cosets corresponding to reflections $(H,C,\delta),(H',C',\delta') \in R$ are equal if and only if $C \simeq C'$.
\\ (iv) For every $\phi \in \Aut_{Hopf}(\DG)$ there is a reflection $r=r_{H,C,\delta} \in R$ such that $\phi$ is an element in $r \cdot [ B ((V_c \rtimes V) \ltimes E)]$ \\ (v) For every $\phi \in \Aut_{Hopf}(\DG)$ there is a reflection $r = r_{H,C,\delta} \in R$ such that $\phi$ is an element in $[((V_c \rtimes V) \ltimes B)E] \cdot r$ \end{theorem} We illustrate the statement of Theorem \ref{thm_cell} on some examples: \begin{example}\label{exm_Fp_Aut} For $G=\ZZ_p^n$ with $p$ a prime number, we fix an isomorphism $\ZZ_p \simeq \widehat{\ZZ}_p$. We then have $\Aut_{Hopf}(\DG) \simeq \GL_{2n}(\F_p)$. The previously defined subgroups are in this case: \begin{itemize} \item $V \cong \Aut(G) = \GL_n(\F_p)$ and $V_c\ltimes V\cong \GL_n(\F_p)\times \GL_n(\F_p)$ \item $B\cong \widehat{G}_{ab}\otimes \widehat{G}_{ab}=\F_p^{n \times n}$ as additive group. \item $E\cong Z(G)\otimes Z(G)=\F_p^{n \times n}$ as additive group. \end{itemize} All reflections are untwisted and can be described as follows: For each dimension $d\in\{0,\ldots, n\}$ there is a unique isomorphism type $C\cong \F_p^d$. The possible subgroups of this type $C\subset G$ form the Grassmannian $\mbox{Gr}(n,d,G)$, the possible $\delta:C \stackrel{\sim}{\to} \hat{C}$ are parametrized by $\GL_d(\F_p)$, and in this fashion $R$ can be enumerated. On the other hand, we have only $n+1$ representatives $r_{[C]}$, one for each dimension $d$, given for example by permutation matrices. One checks that this indeed gives a decomposition of $\GL_{2n}(\F_p)$ into $V_cVB$-$V_cVE$-double cosets, e.g. for $n=2$:
\begin{center} \begin{tabular}{rrccccc} $\GL_{4}(\F_p)$ & $=$ & $(V_cVB\cdot r_{[1]}\cdot V_cVE)$ & $\cup$ & $(V_cVB\cdot r_{[\F_p]}\cdot V_cVE)$ & $\cup$ & $(V_cVB\cdot r_{[\F_p^2]}\cdot V_cVE)$ \\ $|\GL_{4}(\F_p)|$ & $=$ & $p^8|\GL_2(\F_p)|^2$ &$+$ & $\frac{p^3|\GL_2(\F_p)|^4}{(p-1)^4}$ &$+$ & $p^4|\GL_2(\F_p)|^2$ \\ & $=$ & $p^8(p^2-1)^2(p^2-p)^2$ & $+$ & $\frac{p^3(p^2-1)^4(p^2-p)^4}{(p-1)^4}$ & $+$ & $p^4(p^2-1)^2(p^2-p)^2$ \\ & $=$ & \multicolumn{3}{l}{$(p^4-1)(p^4-p)(p^4-p^2)(p^4-p^3)$} & & \end{tabular} \end{center} It corresponds to a decomposition of the Lie algebra $A_{2n-1}$ according to the $A_{n-1}\times A_{n-1}$ parabolic subsystem. In particular, on the level of Weyl groups we have a decomposition as double cosets of the parabolic Weyl group \begin{align*} \SS_{2n} &=(\SS_n\times \SS_n)1(\SS_n\times \SS_n) \;\cup\; (\SS_n\times \SS_n)(1,1+n)(\SS_n\times \SS_n)\;\cup\\ & \;\cdots \;\cup \;(\SS_n\times \SS_n)(1,1+n)(2,2+n)\cdots (n,2n)(\SS_n\times \SS_n)\\ e.g.\quad |\SS_4| &=4+16+4 \end{align*} In this case, the full Weyl group $\SS_{2n}$ of $\GL_{2n}(\F_p)$ is the set of all reflections (as defined above) that preserve a given decomposition $G=\F_p\times \cdots\times \F_p$. \end{example} \section{Subgroups of $\Aut_{br}(\DG\md\mod)$} \subsection{General considerations} Recall from the Preliminaries that every lazy $\DG^*$-Bigalois object is of the form ${_\sigma}^{\phi^*}\DG^*$ for some $\sigma \in \H^2_L(\DG^*)$ and some $\phi \in \Out_{Hopf}(\DG)$. The functor in $\Aut_{mon,L}(\DG\md\mod)$ corresponding to ${_\sigma}^{\phi^*}\DG^*$ under the group isomorphism in Proposition \ref{fib} acts on objects by precomposing with $\phi$. We want to slightly modify this action in order to have nicer formulas. For this we use an anti-automorphism (called flip in Def.
3.1 of \cite{Keil}): \begin{align}\label{flip} ^\dag:\Aut_{Hopf}(\DG) \ito \Aut_{Hopf}(\DG); \begin{pmatrix} u & b \\ a & v \end{pmatrix} \mapsto \begin{pmatrix} v^* & b^* \\ a^* & u^* \end{pmatrix} \end{align} where $v^*:k^G \to k^G$ is the dual of $v:kG \to kG$, $u^*:kG \to kG$ is the dual of $u:k^G \to k^G$, and similarly $b^*:kG \to k^G$ and $a^*:k^G \to kG$. \begin{definition} We have the following map: \begin{align} \Psi:\Aut_{Hopf}(\DG) \times \Z_L^2(\DG^*) &\to \underline{\Aut}_{mon}(\DG\md\mod) \nonumber \\ (\phi,\sigma) &\mapsto (F_\phi,J^\sigma) \end{align} where $F_\phi$ sends a left $\DG$-module $(M,\rho_L)$ to the left $\DG$-module $(M,\rho_L\circ (\phi^\dag \otimes_k \id))$, simply denoted by $_{\phi}M$. The monoidal structure is given by \begin{align*} J^{\sigma}_{M,N}: {_\phi}M \otimes_k {_\phi}N \stackrel{\sim}{\to} {_\phi}(M \otimes_k N); \; m \otimes n \mapsto \sigma_1.m \otimes \sigma_2.n \end{align*} where we view the $2$-cocycle $\sigma \in \Z_L^2(\DG^*)$ as $\sigma=\sigma_1 \otimes \sigma_2 \in \DG \otimes_k \DG$ leaving out the sum. This should not be confused with the Sweedler notation for the coproduct. \end{definition} \begin{definition}\label{defmonlazy}~Let us define: \\ (i) $\underline{\Aut}_{mon,L}(\DG \md \mod) := \im(\Psi) \simeq \Aut_{Hopf}(\DG) \ltimes \Z^2_L(\DG^*)$. \\ (ii) $\widetilde{\Aut}_{mon,L}(\DG \md\mod) := \Out_{Hopf}(\DG) \ltimes \H^2_L(\DG^*)$. \\ (iii) $\Aut_{mon,L}(\DG \md \mod)$ as the image of $\widetilde{\Aut}_{mon,L}(\DG \md\mod) \rightarrow \Aut_{mon}(\DG \md \mod)$.
\\ (iv) $\underline{\Aut}_{br,L}(\DG \md\mod) := \underline{\Aut}_{mon,L}(\DG \md \mod) \cap \underline{\Aut}_{br}(\DG \md \mod)$ \\ (v) $\widetilde{\Aut}_{br,L}(\DG\md\mod)$ as the image of $\underline{\Aut}_{br,L}(\DG \md\mod) \to \widetilde{\Aut}_{mon,L}(\DG \md\mod)$ and $\Aut_{br,L}(\DG\md\mod)$ as the image of $\widetilde{\Aut}_{br,L}(\DG \md\mod) \to \Aut_{br}(\DG\md\mod)$ \\ (vi) For $\mathcal{\underline{U}} \subset \underline{\Aut}_{br,L}(\DG\md\mod)$ we denote the respective images by $\widetilde{\mathcal{U}} \subset \widetilde{\Aut}_{br,L}(\DG\md\mod)$ and $\mathcal{U} \subset \Aut_{br,L}(\DG\md\mod)$. \end{definition} Before constructing the subgroups mentioned above, we show some general properties. The following Lemma follows essentially from Theorem 9.4 of \cite{Keil}, but we want to state it a bit differently in our notation. \begin{lemma}\label{lm_dgaction} Let $\phi\in \Aut_{Hopf}(\DG)$ be given in the form $\phi(f \times g) = u(f_{(1)})b(g) \times a(f_{(2)})v(g)$ for $f \in k^G$ and $g \in G$. Then the functor $F_\phi$ maps a $\DG$-module $M$ to the $\DG$-module ${_\phi}M$ with the action: $$(f \times g)._\phi m := \phi^\dag(f \times g).m := (v^*(f_{(1)})b^*(g) \times a^*(f_{(2)})u^*(g)).m$$ $F_\phi$ has the following explicit form on simple $\DG$-modules: \begin{equation*} F_\phi(\ocat^\rho_g) = \ocat^{(\rho \circ u^*)b(g)}_{a(\rho')v(g)} \end{equation*} where we denote by $\rho': \Z(G) \rightarrow k^{\times}$ the one-dimensional representation such that any central element $z\in \Z(G)$ acts in $\rho$ by multiplication with the scalar $\rho'(z)$. In particular $\rho|_{\Z(G)}=\dim(\rho)\cdot \rho'$.
\end{lemma} The functor $(F_\phi,J^\sigma) \in \underline{\Aut}_{mon,L}(\DG\md\mod)$ is braided if and only if the following diagram commutes: \begin{diagram} &{_\phi}M \otimes {_\phi}N &\rTo^{F_{\phi}(J^\sigma_{M,N})} & {_\phi}(M \otimes N) \\ &\dTo_{c_{{_\phi}M,{_\phi}N}} & &\dTo_{F_\phi(c_{M,N})} \\ &{_\phi}N \otimes {_\phi}M &\rTo_{F_{\phi}(J^\sigma_{N,M})} &{_\phi}(N \otimes M) \end{diagram} for all $M,N \in \DG \md \mod$. This is equivalent to the fact that for all $\DG$-modules $M,N$ \begin{align} R_2.\sigma_2.n \otimes R_1.\sigma_1.m &= \sigma_{1}.\phi^*(R_2).n \otimes \sigma_{2}.\phi^*(R_1).m \label{eqn:braid} \end{align} holds for all $m \in M$ and $n \in N$, where $R=R_1 \otimes R_2 = \sum_{x \in G} (e_x \times 1) \otimes (1 \times x)$ is the $R$-matrix of $\DG$ and $c$ is the braiding in $\DG\md\mod$. As above, we identify $\sigma \in Z^2_L(\DG^*)$ with an element $\sigma=\sigma_1 \otimes \sigma_2 \in \DG \otimes \DG$. \\ From \cite{LP15} Lem. 5.3, Lem. 5.4 and Lem. 5.6 we know three subgroups of $\Z^2_L(\DG^*)$: \begin{itemize}\itemsep5pt \item $\Z^2_{inv}(G,k^\times)$: group $2$-cocycles $\beta \in \Z^2(G,k^\times)$ such that $\beta(g,h)=\beta(g^t,h^t)$ $\forall t,g,h \in G$, trivially extended to $2$-cocycles on $\DG^*$. \item $\Z^2_c(k^G)$: $2$-cocycles $\alpha \in \Z^2(k^G)$ such that $\alpha(e_g,e_h)= 0$ if $g$ or $h$ is not in $Z(G)$, extended trivially to $\DG^*$. \item $\P_c(kG,k^G)\simeq \Hom(G,Z(G))$: central bialgebra pairings $\lambda:kG \times k^G \to k$ resp. group homomorphisms $G \to Z(G)$. These give cocycles on $\DG^*$ as follows: $\sigma_{\lambda}(g \times e_x,h \times e_y) = \lambda(g,e_y)\epsilon(e_x)$. \end{itemize} On the other hand, $\beta_{\sigma}(g,h)=\sigma(g \times 1, h \times 1)$ defines a $2$-cocycle in $\Z^2_{inv}(G,k^\times)$.
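The first item in the list above can be probed computationally: for an abelian group every $2$-cocycle is automatically conjugation invariant, and the quotient $b(g,h)=\beta(g,h)\beta(h,g)^{-1}$ is a bimultiplicative alternating form, the mechanism exploited for the $B$-symmetries below. The following brute-force check (a sketch for illustration only; it is not part of the formal development and all names are ad hoc) enumerates all $\{\pm 1\}$-valued $2$-cocycles on $G=\ZZ_2\times\ZZ_2$ and verifies this property:

```python
from itertools import product

# G = Z2 x Z2, written additively as pairs; mul is the group law.
G = [(a, b) for a in (0, 1) for b in (0, 1)]
mul = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)
pairs = [(g, h) for g in G for h in G]

def is_cocycle(beta):
    # 2-cocycle condition: beta(h,k) beta(g,hk) = beta(gh,k) beta(g,h)
    return all(beta[(h, k)] * beta[(g, mul(h, k))]
               == beta[(mul(g, h), k)] * beta[(g, h)]
               for g in G for h in G for k in G)

# Enumerate all {+1,-1}-valued 2-cochains and keep the cocycles.
cocycles = []
for vals in product((1, -1), repeat=len(pairs)):
    beta = dict(zip(pairs, vals))
    if is_cocycle(beta):
        cocycles.append(beta)

for beta in cocycles:
    # b(g,h) = beta(g,h) beta(h,g)^{-1}; values are +-1, so inverse = itself.
    b = lambda g, h: beta[(g, h)] * beta[(h, g)]
    assert all(b(mul(g1, g2), h) == b(g1, h) * b(g2, h)  # bimultiplicative
               for g1 in G for g2 in G for h in G)
    assert all(b(g, g) == 1 for g in G)                  # alternating
```

The same brute-force pattern works for any small group, though the search space grows as $2^{|G|^2}$, so it is only feasible for very small examples.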
Further, for $\chi,\rho \in \widehat{G}$ and $\widehat{G}=\Hom(G,k^\times)$ the group of characters, $\alpha_{\sigma}(\chi,\rho) := \sigma(1 \times \chi,1 \times \rho)$ defines a $2$-cocycle in $\Z^2(\widehat{G},k^\times)$. Also, $\lambda_{\sigma}(g,f):=\sigma^{-1}( (g \times 1)_1, 1 \times f_1)\sigma(1 \times f_2, (g \times 1)_2)$ defines a lazy bialgebra pairing in $\P_L(kG,k^G) \simeq \Hom(G,G)$. \begin{lemma}\label{lmm} Let $\phi \in \Aut_{Hopf}(\DG)$ be given as above by $$ \phi(f \times g) = u(f_1)b(g) \times a(f_2)v(g)$$ and $\sigma \in \Z^2_L(\DG^*)$ such that $(F_\phi,J^\sigma)$ is braided. Then the following equations have to hold for all $\rho,\chi \in \widehat{G}$, $g,h \in G$: \begin{align} \beta_{\sigma}(g,g^{-1}hg) &= \beta_{\sigma}(h,g)b(h)(v(g)) \label{h1} \\ \alpha_{\sigma}(\rho,\chi) &= \alpha_{\sigma}(\chi,\rho)u(\chi)(a(\rho)) \label{hh2}\\ \lambda_{\sigma}(h,\chi) &= b(h)(a(\chi)) \label{h3}\\ \rho(g) &= u(\rho)[v(g)]b(g)[a(\rho)] \label{h4} \end{align} \label{nec} \end{lemma} \begin{proof} Evaluating equation (\ref{eqn:braid}) we get \begin{align*} &\sum_{g,t,h,d,k \in G}\sigma(g \times e_t,h \times e_d) (1 \times k).(e_h \times d).n \otimes (e_k \times 1).(e_g \times t).m \\ =&\sum_{g,t,h,d,k \in G}\sigma(g \times e_t,h \times e_d) (e_g \times t).\phi^*(1 \times k).n \otimes (e_h \times d).\phi^*(e_k \times 1).m \end{align*} The left hand side is equal to \begin{align*} &\sum_{g,t,h,d \in G}\sigma(g \times e_t,h \times e_d) (e_{ghg^{-1}} \times gd).n \otimes (e_g \times t).m \\ =&\sum_{g,t,h,d \in G}\sigma(g \times e_t,g^{-1}hg \times e_{g^{-1}d}) (e_{h} \times d).n \otimes (e_g \times t).m \\ \end{align*} and the right hand side is equal to \begin{align*} &\sum_{g,k,t,h,d,k_1k_2=k}\hspace{-0.7cm}\sigma(g \times e_t,h \times e_d) (e_g \times t).(b^*(k) \times u^*(k)).n \otimes (e_h \times d).(v^*(e_{k_1}) \times a^*(e_{k_2})).m \\ =&\hspace{-0.6cm}\sum_{g,t,h,d,w,y,z,x}\hspace{-0.7cm}\sigma(g \times e_t,h \times e_d) a^w_{x}b(y)(v(z)w)(e_g \times
t).(e_y \times u^*(v(z)w)).n \otimes (e_h \times d).(e_z \times x).m \\ =&\hspace{-0.6cm}\sum_{g,t,h,d,w,y,z,x}\hspace{-0.7cm}\sigma(g \times e_t,h \times e_d) a^w_{x}b(y)(v(z)w)(\delta_{g,tyt^{-1}}e_g \times tu^*(v(z)w)).n \otimes (\delta_{h,dzd^{-1}}e_h \times dx).m \\ =&\sum_{t,d,y,z,x,w}\sigma(y \times e_t,z \times e_d)b(y)(v(d^{-1}zd)w)a^w_{x}(e_{y} \times tu^*(v(d^{-1}zd)w)).n \otimes (e_{z} \times dx).m \\ =&\sum_{t,d,y,h,x,w}\sigma(h \times e_d,g \times e_t)b(h)(v(t^{-1}gt)w)a^w_{x}(e_{h} \times du^*(v(t^{-1}gt)w)).n \otimes (e_{g} \times tx).m \\ =&\hspace{-1cm}\sum_{ \myover{t,h,g,d,x,w,d'}{ d = d' u^* \circ v(xt^{-1}gtx^{-1}) u^*(w)}} \hspace{-1cm}\sigma(h \times e_{d'},g \times e_{tx^{-1}})b(h)(v(g)w)a^w_{x}(e_{h} \times d).n \otimes (e_{g} \times t).m \end{align*} Here we have used several times that the homomorphism $a$ is supported on $Z(G)$ and that $b$ maps $G$ to the character group $\hat{G}$ which is abelian. We know that the above equality of the right and left hand sides has to hold in particular for the regular $\DG$-module and the elements $m=n=1$. This implies: \begin{align}\label{master} \sigma(g \times e_t , h^g \times e_{g^{-1}d}) &= \hspace{-1cm} \sum_{\myover{x,w,d'}{ d = d' u^*(w)(u^*\circ v)(g^t)}} \hspace{-1cm} \sigma(h \times e_{d'}, g \times e_{tx^{-1}})b(h)(v(g)w)a^w_x \end{align} for all $g,h,d,t \in G$ where $a^w_x = e_w(a(e_x))$. On the other hand, if equation (\ref{master}) holds, then the right and left hand sides above are also equal. Let us set $g=1$, sum over all $d$, multiply with $\chi(t)$ for $\chi \in \widehat{G}$ and sum over $t$ in ($\ref{master}$): \begin{align*} \sigma(1 \times \chi, h \times 1) &= \sum_{x,w,t}\chi(t)\sigma(h \times 1, 1 \times e_{tx^{-1}})b(h)(w)e_w(a(e_x)) \\ &= \sum_{x,t}\chi(t)\chi(x) \sigma(h \times 1, 1 \times e_{t})b(h)(a(e_x)) \\ &= \sigma(h \times 1, 1 \times \chi) b(h)(a(\chi)) \end{align*} Applying the convolution with $\sigma^{-1}$ on both sides leads to equation ($\ref{h3}$).
Further, we multiply both sides of equation $(\ref{master})$ with $\rho(t),\chi(d)$ for some $\chi,\rho \in \hat{G}$ and sum over all $t,d \in G$: \begin{align*} \sigma(g \times \rho, h^g \times \chi)\chi(g) = \sigma(h \times \chi, g \times \rho)\chi(a(\rho)) \chi(u^*\circ v(g))b(h)(v(g)a(\rho)) \end{align*} Setting $\chi=1=\rho$ gives equation ($\ref{h1}$) and setting $g=1=h$ gives equation (\ref{hh2}). On the other hand setting $g=h$ and $\rho=\chi$ and using equations (\ref{h1}),(\ref{hh2}) we have: $$\sigma(g \times \rho, g \times \rho)\rho(g) = \sigma(g \times \rho, g \times \rho)u(\rho)(v(g))b(g)(a(\rho))$$ This almost implies the last equation (\ref{h4}), but it is not yet clear that $\sigma(g \times \rho, g \times \rho)$ is never zero, since elements of the form $g \times \rho$ are not group-like in $\DG^*$. However, we can argue as follows: Apply the $2$-cocycle condition several times: \begin{align} \label{decsigma} \sigma&(g \times e_x, h \times e_y) = \sigma((g \times 1)(1 \times e_x), h \times e_y) \nonumber \\ &= \sum_{\myover{x_1x_2x_3=x}{y_1y_2=y}} \sigma^{-1}(g \times 1 ,1 \times e_{x_1})\sigma(1 \times e_{x_2}, (h \times 1) (1 \times e_{y_1}))\sigma(g \times 1, (h \times 1 )(1 \times e_{x_3}e_{y_2 })) \nonumber \\ &= \sum_{\myover{x_1x_2x_3x_4x_5=x}{y_1y_2y_3y_4y_5=y}} \sigma^{-1}(g \times 1, 1 \times e_{x_1} )\sigma^{-1}(1 \times e_{y_1}, h \times 1) \sigma(1 \times e_{x_2}, 1 \times e_{y_2}) \nonumber \\ & \qquad \sigma(1 \times e_{x_3}e_{y_3}, h \times 1)\sigma^{-1}(h \times 1, 1 \times e_{x_4}e_{y_4})\sigma(g \times 1, h \times 1)\sigma(gh \times 1, 1 \times e_{x_5}e_{y_5}) \end{align} which on characters gives: \begin{align*} \sigma(g \times \chi, h \times \rho) = \sigma^{-1}(g \times 1, 1 \times \chi )&\sigma^{-1}(1 \times \rho, h \times 1) \alpha_{\sigma}(\chi,\rho)\lambda_{\sigma}(h,\chi \rho) \\ \cdot &\beta_{\sigma}(g,h)\sigma(gh \times 1, 1 \times \chi \rho) \end{align*} since $\beta_{\sigma} \in \Z^2(G,k^\times)$ and
$\alpha_{\sigma} \in \Z^2(\widehat{G},k^\times)$ the only thing left is: \begin{align*} 1&= (\sigma^{-1}*\sigma)(g \times 1, 1 \times \chi) = \sum_{\myover{t \in G}{g^t=g}}\sigma^{-1}(g \times e_t, 1 \times \chi)\sigma(g^t \times 1, 1 \times \chi) \\ &= \sigma^{-1}(g \times 1, 1 \times \chi)\sigma(g \times 1, 1 \times \chi) \end{align*} Hence elements of the form $\sigma(g \times 1, 1 \times \chi)$ and $\sigma(1 \times \chi, g \times 1)$ are also nonzero and it follows that $\sigma(g \times \rho, g \times \rho)$ is also never zero, which proves equation (\ref{h4}). \\ \end{proof} \subsection{Automorphism Symmetries}~ \\ We have seen in Definition \ref{auto} that a group automorphism $v \in \Aut(G)$ induces a Hopf automorphism in $V \subset \Aut_{Hopf}(\DG)$. We now show that automorphisms of $G$ also naturally induce braided autoequivalences of $\DG\md\mod$. \\ \begin{proposition}~\\ (i) Consider the subgroup $\cVUnderL:=V\times \{1\}$ of $\Aut_{Hopf}(\DG) \ltimes \Z_L^2(\DG^*)$. For an element $(v,1) \in \cVUnderL$ the corresponding monoidal functor $\Psi(v,1)=(F_v,J^{triv})$ with trivial monoidal structure is given on simple objects by $$F_{v}(\ocat^\rho_g) = \ocat^{v^{-1*}(\rho)}_{v(g)}$$ (ii) Every $\Psi(v,1)$ is braided. \\ (iii) Let $\cVTildeL$ be the image of $\cVUnderL$ in $\Out_{Hopf}(\DG) \ltimes \H_L^2(\DG^*)$, then we have $\cVTildeL\cong \Out(G)$. \label{vcat} \end{proposition} \begin{proof} (i) Follows from the above and Lemma \ref{lm_dgaction}. \\ (ii) Consider again equation ($\ref{master}$) in the proof of Lemma \ref{lmm}. The image under $\Psi$ of an element in $\Aut_{Hopf}(\DG) \times \Z^2_L(\DG^*)$ is braided if and only if equation (\ref{master}) is satisfied. For an element $(v,1)$ it is easy to check that it holds. \\ (iii) The intersection of $\cVUnderL$ with the kernel $\Inn(G)\ltimes \B^2_L(\DG^*)$ is $\Inn(G)$, hence $\cVTildeL\cong \Out(G)$. \end{proof} \begin{example} For $G=\F_p^n$ we have $V=\GL_n(\F_p)$.
\end{example} \begin{example} The \emph{extraspecial $p$-group} $p_+^{2n+1}$ is a group of order $p^{2n+1}$ generated by elements $x_i,y_i$ for $i\in\{1,2,\ldots, n\}$ subject to the following relations; in particular $2_+^{2+1}=\DD_4$: $$x_i^p=y_i^p=1\qquad[x_i,x_j]=[y_i,y_j]=[x_i,y_j]=1,\;for\;i\neq j\qquad [x_i,y_i]=z\in Z(p_+^{2n+1})$$ Then the inner automorphism group is $\Inn(G)\cong \ZZ_p^{2n}$ and the outer automorphism group is $\Out(G)=\ZZ_{p-1}\ltimes\Sp_{2n}(\F_p)$ for $p\neq 2$ resp. $\Out(G)=\SO_{2n}(\F_2)$ for $p=2$, see \cite{Win72}. \\ \end{example} \subsection{B-Symmetries}~\\ Now we want to characterize subgroups of $\Aut_{br}(\DG\md\mod)$ corresponding to the lazy induction $\Aut_{mon}(\Vect_G) \to \Aut_{br,L}(\DG\md\mod)$. One fact we need to understand for this is which of the braided autoequivalences $(1,\beta)$ coming from $\Vect_G$ are trivial. If the group is abelian, then $\beta$ has to be cohomologically trivial, which implies that the characterization of such elements is easy. On the other hand, if $G$ is not abelian there are non-trivial cocycles $\beta$ leading to non-trivial braided monoidal functors. For this we need the following: \begin{definition} Let $G$ be a finite group. A cohomology class $[\beta]\in \H^2(G,k^\times)$ is called \emph{distinguished} if one of the following equivalent conditions is fulfilled \cite{Higgs87}: \\ (i) The twisted group ring $k_\beta G$ has the same number of irreducible representations as $kG$. Note that $k_\beta G$ for $[\beta]\neq 1$ has no $1$-dimensional representations. \\ (ii) The centers are of equal dimension $\dim Z(k_\beta G)=\dim Z(kG)$. \\ (iii) All conjugacy classes $[x]\subset G$ are \emph{$\beta$-regular}, i.e. for all $g\in\Cent(x)$ we have $\beta(g,x)=\beta(x,g)$. \\ The conditions are clearly independent of the representing $2$-cocycle $\beta$ and the set of distinguished cohomology classes forms a subgroup $\H^2_{dist}(G)$.
\end{definition} In fact, nontrivial distinguished classes are quite rare and we give in Example \ref{exm_p9_B} a non-abelian group with $p^9$ elements which admits such a class. \\ In the following Proposition we construct $\cBUnderL$ which should be seen as a subset of the functors $\underline{\Aut}_{br}(\DG \md\mod)$. This is of course a large set and we need to identify certain functors. For this reason, as described in the introduction, we consider the quotient $\cBTildeL$, where we identify pairs that differ by inner Hopf automorphisms and exact cocycles. The main property, as shown below, is that up to certain elements this quotient is isomorphic to the group of alternating homomorphisms $G_{ab} \to \widehat{G}_{ab}$. In order to get $\cB_L \subset \Aut_{br}(\DG \md \mod)$ we need to consider the quotient of $\cBTildeL$ by the kernel of $\cBTildeL \to \Aut_{br}(\DG\md\mod)$. \begin{proposition}\label{lm_BField}~\\ (i) The group $B \times \Z^2_{inv}(G)$ is a subgroup of $\Aut_{Hopf}(\DG) \ltimes \Z_L^2(\DG^*)$. An element $(b,\beta)$ corresponds to the monoidal functor $(F_b,J^\beta)$ given by $ F_{b}(\ocat^\rho_g) = \ocat^{\rho * b(g)}_{g}$ with monoidal structure \begin{equation*} \begin{split} \ocat^{\rho * b(g)}_{g} \otimes \ocat^{\chi * b(h)}_{h} &\rightarrow F_b(\ocat^\rho_g \otimes \ocat^\chi_h) \\ (s_m \otimes v) \otimes (r_n \otimes w) &\mapsto \beta(g_m,h_n) (s_m \otimes v) \otimes (r_n \otimes w) \end{split} \end{equation*} where $\{s_m\},\{r_n\} \subset G$ are choices of representatives of $G/\mathrm{Cent}(g)$ and $G/\mathrm{Cent}(h)$ respectively and where $g_m=s_mg s_m^{-1}$, $h_n=r_nh r_n^{-1}$.
\\ (ii) The subgroup $\cBUnderL$ of $B \times \Z_{inv}^2(G)$ defined by \begin{equation*} \begin{split} \cBUnderL := \{ (b,\beta) \in B \times \Z_{inv}^2(G) \mid b(g)(h) = \frac{\beta(h,g)}{\beta(g,h)} \quad \forall g,h \in G \} \end{split} \end{equation*} consists of all elements $(b,\beta) \in B \times \Z_{inv}^2(G)$ such that $\Psi(b,\beta)$ is a braided autoequivalence. \\ (iii) Let $B_{alt} \cong \widehat{G}_{ab}\wedge \widehat{G}_{ab}$ be the subgroup of alternating homomorphisms in $B$, i.e. $b\in\Hom(G_{ab},\widehat{G}_{ab})$ with $b(g)(h) = b(h)(g)^{-1}$ and $b(g)(g)=1$. Then the following group homomorphism is well-defined and surjective: \begin{equation*} \begin{split} \cBUnderL \rightarrow B_{alt}; \quad (b,\beta) \mapsto b \end{split} \end{equation*} (iv) Let $\cBTildeL$ be the image of $\cBUnderL$ in $\Out_{Hopf}(\DG) \ltimes \H_L^2(\DG^*)$, then we have a central extension $$1\to \H^2_{dist,inv}(G)\to \cBTildeL\to B_{alt}\to 1$$ \enlargethispage{0.5cm} where $\H^2_{dist,inv}(G)$ is the cohomology group of conjugation invariant and distinguished cocycles. \end{proposition} \noindent Before we proceed with the proof, we give some examples: \begin{example}\label{exm_Fp_B} For $G=\F_p^n$ we have $B=\hat{G}\otimes \hat{G} \simeq \F_p^{n\times n}$ the additive group of $n\times n$-matrices and $B_{alt}=\hat{G}\wedge \hat{G} \simeq \F_p^{\binom{n}{2}}$ the additive group of $n\times n$-matrices that are skew-symmetric and have zero diagonal entries. Note that the second condition does not follow from the first for $p=2$. For an abelian group there are no nontrivial distinguished $2$-cohomology classes, hence $\cB \simeq \cBTildeL \simeq B_{alt}$. \end{example} \begin{example}\label{exm_D4_B} For $G=\DD_4=\langle x,y \mid x^2=y^2=(xy)^4=1 \rangle$ we have $G_{ab}=\langle \bar{x},\bar{y}\rangle\cong \ZZ_2^2$, $B=\Hom(G_{ab},\hat{G}_{ab})=\ZZ_2^{2\times 2}$ and $B_{alt}=\{1,b\} \cong \ZZ_2$ with $b(\bar{x})(\bar{y})=b(\bar{y})(\bar{x})=-1$.
It is known that $\H^2(\DD_4,k^\times)=\ZZ_2=\{[1],[\alpha]\}$ and that the non-trivial $2$-cocycles in the class $[\alpha]$ have a non-trivial restriction to the abelian subgroups $\langle x,z\rangle,\langle y,z\rangle\cong\ZZ_2^2$ of $G$. In particular, $[\alpha]$ is not a distinguished $2$-cohomology class. By definition of $\cBUnderL$: $$\cBUnderL=\{(1,sym),\;(b,\beta\cdot sym)\}$$ where $\beta$ is the pullback of any nontrivial $2$-cocycle in $G_{ab}$ with $\beta(x,y)\beta(y,x)^{-1}=-1$ and $sym$ denotes an arbitrary symmetric $2$-cocycle. In particular, $[\beta]=[1]$ as one checks on the abelian subgroups and thus by definition $$\cBTildeL=\{(1,[1]),\;(b,[1])\}\cong \ZZ_2$$ However, these $(1,1)$ and $(b,\beta)$, which are pull-backs of two different braided autoequivalences on $G_{ab}$, give rise to the \emph{same} braided equivalence up to monoidal isomorphisms on $G$. In particular, in this case we have a \emph{non-injective} homomorphism $$\cBTildeL\to \Aut_{mon}(\DG\md\mod)$$ \end{example} More generally for the examples $G=p_+^{2n+1}$ we have $B,B_{alt}$ as for the abelian group $\F_p^{2n}$, but (presumably) all braided autoequivalences in $\cBUnderL(\F_p^{2n})$ pull back to a single trivial braided autoequivalence on $G$. \\ It is tempting to ask if the kernel of $\cBTildeL\to \Aut_{mon}(\DG\md\mod)$ consists of those $(b,\beta)$ for which $[\beta]=[1]$, i.e. if the remaining non-injectivity is controlled by the non-injectivity of the pullback $\H^2(G_{ab})\to \H^2(G)$. \\ We give an example where $\cBTildeL\to B_{alt}$ is not injective. This gives us a new kind of braided autoequivalence $(1,\beta)$ that would be trivial in the abelian case: \begin{example}\label{exm_p9_B} In \cite{Higgs87} p. 277 a group $G$ of order $p^9$ with $\H^2_{dist}(G)=\ZZ_p$ is constructed as follows: We start with the group $\tilde{G}$ of order $p^{10}$ generated by $x_1,x_2,x_3,x_4$ of order $p$, all commutators $[x_i,x_j],i\neq j$ nontrivial of order $p$ and central.
Then $\widetilde{G}$ is a central extension of $G:=\tilde{G}/\langle s\rangle$ where $s:=[x_1,x_2][x_3,x_4]$. This central extension corresponds to a class of distinguished $2$-cocycles $\langle\sigma\rangle=\ZZ_p=\H_{dist}^2(G)=\H^2(G)$. This is a consequence of the fact that $s$ cannot be written as a single commutator. Further, we can find a conjugation invariant representative, because there is a conjugation invariant section $G \to \widetilde{G}$. The conjugation invariant distinguished $2$-cocycle $\beta$ corresponds to a braided equivalence $(\id,J^\beta)$ trivial on objects. Since $G_{ab}\cong \ZZ_p^4$ and hence $B_{alt}=\ZZ_p^4\wedge\ZZ_p^4\cong\ZZ_p^6$, we have a central extension $$1\to \ZZ_p\to \cBTildeL\to \ZZ_p^6\to 1$$ In fact, we assume that the sequence splits and the braided autoequivalence $(\id,J^\beta)$ is the only nontrivial generator of the image $\Psi(\cBTildeL)\subset\Aut_{br}(\DG\md\mod)$, since the pullback $\H^2(G_{ab})\to \H^2(G)$ is trivial. \\ \end{example} \begin{proof}[Proof of Proposition \ref{lm_BField}]~ (i): Let us show that $B$ acts trivially on $\Z^2(G,k^\times)$, hence also on $\Z_{inv}^2(G,k^\times)$: \begin{align*} b.\beta &= \sum_{x,y,g,h}\epsilon(e_y)\epsilon(e_h)\beta(x,g)(e_x*b^*(y) \times y) \otimes (e_g*b^*(h) \times h) \\ &= \sum_{x,g} \beta(x,g)(e_x \times 1) \otimes (e_g \times 1) = \beta \end{align*} For the action on simple $\DG$-modules use Lemma \ref{lm_dgaction}.
\\ (ii): Assume $\Psi(b,\beta)$ is braided; then according to Lemma \ref{nec} we get for $v=\id$: \begin{equation} b(g)(h) = \beta(h,g)\beta(hgh^{-1},h)^{-1} \quad \forall g,h \in G \label{eqn:symb1} \end{equation} Because $\beta$ is closed we have: $1 = d\beta(h,gh^{-1},h) =\frac{ \beta(gh^{-1},h)\beta(h,g)}{\beta(hgh^{-1},h)\beta(h,gh^{-1})}$ and therefore: \begin{align*} \quad\quad b(g)(h) = \beta(h,g)\beta(hgh^{-1},h)^{-1} = \beta^{-1}(gh^{-1},h)\beta(h,gh^{-1}) \end{align*} \begin{align} \Leftrightarrow b(g)(h)= b(g)(h)b(h)(h) = b(gh)(h) = \beta^{-1}(g,h)\beta(h,g) \label{eqn:symb2} \end{align} In the proof of Lemma \ref{nec} we have also shown that $\Psi(b,\beta)$ is braided if and only if (\ref{master}) holds. In this case where $\sigma(g \times e_x, h \times e_y) = \beta(g,h)\epsilon(e_x)\epsilon(e_y)$ it reduces to $(\ref{eqn:symb1})$, hence $\Psi(b,\beta)$ is braided. Since the product of braided autoequivalences is braided this also shows that $\cBUnderL$ is in fact a subgroup of $B \times \Z_{inv}^2(G,k^\times)$. \\ (iii) By definition of $\cBUnderL$ we have $b\in B_{alt}$. We now show surjectivity: Let $G_{ab}= G/{[G,G]}$ be the abelianization of $G$ and $\hat{\beta_b} \in Z^2(G_{ab})$ an abelian $2$-cocycle defined uniquely up to cohomology by $b(g)(h) = \hat{\beta_b}(h,g)\hat{\beta_b}(hgh^{-1},h)^{-1} = \hat{\beta_b}(h,g)\hat{\beta_b}(g,h)^{-1}$ for $g,h \in G_{ab}$. Further, we have a canonical surjective homomorphism $\iota:G \rightarrow G_{ab}$ which induces a pullback $\iota^*:Z^2(G_{ab}) \rightarrow \Z_{inv}^2(G,k^\times)$, hence we define $\beta_b := \iota^*\hat{\beta_b}$. \\ (iv) By (iii) the map $(b,\beta)\mapsto b$ is a group homomorphism $\cBUnderL\to B_{alt}$ and this factorizes to a group homomorphism $\cBTildeL\to B_{alt}$, since $(\Inn(G)\times \B^2(G)) \cap (B\times \Z_{inv}^2(G,k^\times))=1$.
The kernel of this homomorphism consists of all $(1,[\beta])\in \cBTildeL$, hence all $(1,[\beta])$ where $[\beta]$ has at least one representative $\beta$ with $\beta(g,x)=\beta(gxg^{-1},g)$ for all $g,x\in G$. We denote this kernel by $K$ and note that it is central in $\cBTildeL$. It remains to show $K=\H^2_{dist}(G)$: Whenever $[\beta]\in K$ then there exists a representative $\beta$ with $\beta(g,x)=\beta(gxg^{-1},g)$ for all $g,x\in G$, in particular for any elements $g\in \Cent(x)$, which implies any conjugacy class $[x]$ is $\beta$-regular and thus $[\beta]\in \H^2_{dist}(G)$. For the other direction we need a specific choice of representative: Suppose $[\beta]\in \H^2_{dist}(G)$ and thus all $x$ are $\beta$-regular; by \cite{Higgs87} Lm. 2.1(i) there exists a representative $\beta$ with $$\frac{\beta(g,x)\beta(gx,g^{-1})}{\beta(g,g^{-1})}=1$$ for all $\beta$-regular $x$ (i.e. here all $x$) and all $g$. An easy cohomology calculation shows indeed $$\frac{\beta(g,x)}{\beta(gxg^{-1},g)} =\frac{\beta(g,x)}{\beta(gxg^{-1},g)} \cdot \frac{\beta(gx,g^{-1})\beta(gxg^{-1},g)}{\beta(gx,1)\beta(g,g^{-1})}=1$$ hence $(1,\beta)\in \cBUnderL$ by equation (\ref{eqn:symb1}). \\ \end{proof} \subsection{E-Symmetries}~\\ It is now natural to construct a subgroup of $E \times \Z^2_c(k^G)$ in a similar fashion. This construction corresponds to the lazy induction $\Aut_{mon}(\Rep(G)) \to \Aut_{br}(\DG\md\mod)$. Unlike in the case of $B$-Symmetries, we do not need to consider an analogue of the distinguished cocycles. As we will see, for elements of the form $(1,\alpha)$ being braided already implies that the corresponding braided functor is trivial (up to equivalence). \\ In the following Proposition we construct $\cEUnderL$ which, as in the case of $B$-Symmetries, should be thought of as a collection of functors with no equivalence relation. Identifying pairs that differ by inner Hopf automorphisms and exact cocycles gives us $\cETildeL$.
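Before stating the main property of this quotient, here is a quick computational sanity check of the central ingredient (a sketch for illustration only; the concrete model of $\DD_4$ and all names are ad hoc, not from the paper): the $E$-symmetries only see the center $Z(G)$, and for $G=\DD_4$ one finds $|Z(G)|=2$, so $Z(G)\cong\ZZ_2$ and the alternating square $Z(G)\wedge Z(G)$ vanishes, matching Example \ref{exm_D4_E} below.

```python
# D4 (order 8) modeled as pairs (r, s): r in Z4 a rotation, s in Z2 a
# reflection flag, with (r1,s1)*(r2,s2) = (r1 + (-1)^s1 * r2 mod 4, s1+s2 mod 2).
D4 = [(r, s) for r in range(4) for s in range(2)]

def mul(x, y):
    (r1, s1), (r2, s2) = x, y
    return ((r1 + (r2 if s1 == 0 else -r2)) % 4, (s1 + s2) % 2)

# Z(D4): elements commuting with everything -- the rotations by 0 and 180 degrees.
center = [z for z in D4 if all(mul(z, g) == mul(g, z) for g in D4)]
assert sorted(center) == [(0, 0), (2, 0)]  # |Z(D4)| = 2, i.e. Z(D4) = Z2

# Since Z(G) = Z2 is cyclic, every alternating homomorphism a: Z(G)-hat -> Z(G)
# must satisfy chi(a(chi)) = 1; for the nontrivial character chi of Z2 this
# forces a = 1, so E_alt = Z(G) ^ Z(G) is trivial here.
print(len(center))  # -> 2
```

The same naive centralizer computation extends to any finite group given as a multiplication table, which is enough to read off $E=Z(G)\otimes_\ZZ Z(G)$ and $E_{alt}$ in small examples.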
As shown below, the main statement is that this quotient is isomorphic to the group of alternating homomorphisms $Z(G) \to Z(G)$. In order to get the subgroup $\cEL \subset \Aut_{br}(\DG\md\mod)$ we have to take the quotient of $\cETildeL$ by the kernel of $\cETildeL \to \Aut_{br}(\DG\md\mod)$. \\ \begin{proposition}~\\ (i) The group $E \times \Z^2_c(k^G)$ is a subgroup of $\Aut_{Hopf}(\DG) \ltimes \Z^2_L(\DG^*)$. An element $(a,\alpha)$ corresponds to the monoidal functor $\Psi(a,\alpha)=(F_a,J^\alpha)$ given on simple objects by $F_{a}(\ocat^\rho_g) = \ocat^{\rho}_{a(\rho')g}$, with the monoidal structure \begin{align*} \ocat^{\rho}_{a(\rho')g}\otimes \ocat^{\chi}_{a(\chi')h} &\to F_a(\ocat^\rho_g\otimes \ocat^\chi_h) \\ (s_m \otimes v)\otimes(r_n \otimes w) & \mapsto \sum_{\myover{i,j; x \in \mathrm{Cent}(g)}{ y \in \mathrm{Cent}(h)}} \alpha(e_{s_i x s_m^{-1}},e_{r_jyr_n^{-1}}) [s_i \otimes \rho(x)(v)] \otimes [r_j \otimes \chi(y)(w)] \end{align*} where $\rho',\chi': Z(G) \rightarrow k^{*}$ are the $1$-dimensional characters determined by the restrictions of $\rho,\chi$ to $Z(G)$. By $\{s_i\},\{r_j\} \subset G$ we denote the choices of representatives of $G/\mathrm{Cent}(g)$ and $G/\mathrm{Cent}(h)$ respectively. (ii) The subgroup $\cEUnderL\subset E \times Z^2_c(k^G)$ defined by \begin{align*} \cEUnderL := \{ (a,\alpha) \in E \times & \Z^2_c(k^G) \mid \forall g,t,h \in G: \alpha(e_t,e_{ght}) = \alpha(e_t,e_{hg^{-1}t}) \\ &\alpha(e_t,e_h) = \sum_{x,y \in Z(G)} \alpha(e_{hy^{-1}},e_{tx^{-1}})e_y(a(e_x)) \} \end{align*} consists of all elements $(a,\alpha) \in E \times \Z^2_c(k^G)$ such that the monoidal autoequivalence $\Psi(a,\alpha)$ is braided. \\ (iii) Let $E_{alt}\cong Z(G)\wedge_{\ZZ} Z(G)$ be the subgroup of alternating homomorphisms in $E=\Hom(\widehat{Z(G)},Z(G))=Z(G)\otimes_{\ZZ} Z(G)$, i.e. the set of homomorphisms $a:\widehat{Z(G)}\to Z(G)$ with $\rho(a(\chi)) = \chi(a(\rho))^{-1}$ and $\chi(a(\chi))=1$ for all $\chi,\rho \in \widehat{Z(G)}$.
Then the following group homomorphism is well-defined and surjective: \begin{equation*} \begin{split} \cEUnderL &\rightarrow E_{alt},\quad (a,\alpha) \mapsto a \end{split} \end{equation*} (iv) Let $\cETildeL$ be the image of $\cEUnderL$ in $\Out_{Hopf}(\DG) \ltimes \H_L^2(\DG^*)$, then the previous group homomorphism factorizes to an isomorphism $$\cETildeL\cong E_{alt}$$ For each $a\in E_{alt}$ we have a representative functor $\Psi(a,\alpha)=(F_a,J^\alpha)$ for a certain $\alpha$ obtained by pull-back from the center of $G$. More precisely, the functor is given by $F_{a}(\ocat^\rho_g) = \ocat^{\rho}_{a(\rho')g}$ and the monoidal structure is given by a scalar \begin{align*} \ocat^{\rho}_{a(\rho')g}\otimes \ocat^{\chi}_{a(\chi')h} &\to F_a(\ocat^\rho_g\otimes \ocat^\chi_h) \\ m\otimes n &\mapsto \alpha'(\rho',\chi')\cdot (m\otimes n) \end{align*} where $\alpha'\in \Z^2(\widehat{Z(G)})$ is any $2$-cocycle in the cohomology class associated to the alternating form $a\in E_{alt}$ on the abelian group $\widehat{Z(G)}$. \label{efield} \end{proposition} Before we proceed to the proof we give some examples: \begin{example}\label{exm_D4_E} For $G=\DD_4=\langle x,y \mid x^2=y^2=(xy)^4=1 \rangle$ we have $Z(G)=\langle [x,y]\rangle\cong \ZZ_2$ and hence $E=\Hom(\widehat{Z(G)},Z(G))=\ZZ_2$ and $E_{alt}=1$. More generally for the examples $G=p_+^{2n+1}$ we have $E=\ZZ_p\otimes\ZZ_p=\ZZ_p$ and $E_{alt}=\ZZ_p\wedge\ZZ_p=1$ and hence $\cETildeL=1$. \end{example} \begin{example}\label{exm_p9_P} For the group of order $p^9$ in Example \ref{exm_p9_B} we have $Z(G)=\ZZ_p^5$ generated by all commutators $[x_i,x_j],i\neq j$ modulo the relation $[x_1,x_2][x_3,x_4]$. Hence $E_{alt}=\ZZ_p^{5}\wedge \ZZ_p^5\cong \ZZ_p^{\binom{5}{2}}=\ZZ_p^{10}$ and respectively $\cETildeL=\ZZ_p^{10}$. \end{example} \begin{proof}[Proof of Proposition \ref{efield}]~\\ \noindent (i) Let us show that $E$ acts trivially on $Z^2_c(k^G)$.
For $a \in E$ and $\alpha \in Z^2_c(k^G)$: \begin{align*} a.\alpha &= \sum_{x_1,x_2,y,z_1,z_2,w}\alpha(e_y,e_w)(e_{x_1} \times a^*(e_{x_2})y) \otimes (e_{z_1} \times a^*(e_{z_2}) w)\\ &= \sum_{y,w}\alpha(e_y,e_w)(1 \times y) \otimes (1 \times w) = \alpha \end{align*} For the action on simple $\DG$-modules use Lemma \ref{lm_dgaction}. The rest is straightforward. \\ (ii) Let $(a,\alpha) \in E \times \Z^2_c(k^G)$. Then we use again the fact that $\Psi(a,\alpha)$ is braided if and only if equation (\ref{master}) holds. In this case we have $\sigma(g \times e_x, h \times e_y)=\alpha(e_x,e_y)$ and $\Psi(a,\alpha)$ is braided iff for all $g,t,d \in G:$ \begin{equation} \begin{split} \alpha(e_t,e_{gd}) &= \sum_{h,x \in Z(G)}\alpha(e_{dh^{-1}(t^{-1}g^{-1}t)},e_{tx^{-1}})e_{h}(a(e_x)) \end{split} \label{eqn:eq_monoidalStructure_E} \end{equation} Setting $g=1$ gives us the second defining equation of $\cEUnderL$. Further, (\ref{eqn:eq_monoidalStructure_E}) is equivalent to \begin{align} \alpha(e_t,e_{gdt^{-1}gt}) &= \sum_{h,x \in Z(G)}\alpha(e_{dh^{-1}},e_{tx^{-1}})e_{h}(a(e_x)) \label{eq_monoidalStructure_E2} \end{align} \noindent and therefore: $\alpha(e_t,e_{gdt^{-1}gt}) = \alpha(e_t,e_d)$ which is equivalent to the first defining equation of $\cEUnderL$. Since the product of braided autoequivalences is braided this also shows that $\cEUnderL$ is in fact a subgroup of $E \times \Z^2_c(k^G)$.\\ \noindent (iii) We first note that by equation (\ref{hh2}) for $u=\id$ we have $a \in E_{alt}$. We now show surjectivity: Since $Z(G)$ is an abelian group there exists a unique (up to cohomology) $2$-cocycle $\alpha' \in \Z^2(\widehat{Z(G)})$ associated to the alternating form $a$, which can be pulled back to a $2$-cocycle $\alpha$ in $\Z^2_c(k^G)$. Then $(a,\alpha)$ is in $\cEUnderL$ which proves surjectivity.
\\ \noindent (iv) Before we show the isomorphism we derive the description of the explicit representatives: In (iii) we constructed preimages $(a,\alpha)$ of each $a\in E_{alt}$ by pulling back a $2$-cocycle $\alpha'\in \Z^2(\widehat{Z(G)})$ in the cohomology class associated to $a$. We now apply the explicit formula in (i) and use that $\alpha$ is only nonzero on $e_g,e_h$ with $g,h\in Z(G)$: Hence only the summands with $s_m^{-1}s_i\in Z(G)$ are nonzero, so $i=m$ and similarly $j=n$. Moreover, $\rho,\chi$ reduce on $Z(G)$ to one-dimensional representations $\rho',\chi'$. Evaluating the resulting sum we get the asserted form. \\ \noindent Next we note that the group homomorphism $\cEUnderL\to E_{alt}$ in (iii) factorizes to a group homomorphism $\cETildeL\to E_{alt}$, since $(\Inn(G)\times \B_L^2(k^G)) \cap (E\times \Z_c^2(k^G))=1$. The kernel of this homomorphism consists of all $(1,[\alpha])\in \cETildeL$, i.e. all classes $[\alpha]$ such that there exists a lazy representative $\alpha\in \Z^2_c(k^G)$. Then, by definition of $\cEUnderL$, the following is fulfilled for a pair $(1,\alpha) \in \cEUnderL$: $$\alpha(e_t,e_{ght}) = \alpha(e_t,e_{hg^{-1}t}) \quad \alpha(e_g,e_h) =\alpha(e_{h},e_{g})$$ \noindent By \cite{LP15} Cor. 3.5 a symmetric lazy cocycle $\alpha\in\Z^2_c(k^G)$ is already cohomologically trivial.\\ \end{proof} \subsection{Partial E-M Dualizations}~\\ Recall that $R$ was the set of triples $(H,C,\delta)$ such that $G = H \times C$ and $\delta: kC \to k^C$ is a Hopf isomorphism. Corresponding to that triple there is a unique Hopf automorphism of $\DG$, which we called $r_{H,C,\delta}$, that exchanges $C$ and $\hat{C}$. We will identify the triple $(H,C,\delta)$ with the corresponding automorphism $r=r_{H,C,\delta}$ and vice versa. \\ \begin{proposition}~\\ (i) Consider the subset $R\times \P_c(kG,k^G)$ in $\Aut_{Hopf}(\DG)\ltimes \Z^2_L(\DG^*)$.
An element $(r,\lambda)$ corresponds to the monoidal functor $\Psi(r,\lambda)=(F_r,J^\lambda)$ given on simple objects by $F_{r}(\ocat^{\rho_H\rho_C}_{hc}) = \ocat^{\rho_H\delta(c)}_{\delta^{-1}(\rho_C)h}$, where we decompose any group element and representation according to the choice $G= H \times C$ into $h\in H,c\in C$ resp. $\rho_H\in \Cent_H(h)\md\mod,\rho_C\in \Cent_C(c)\md\mod$. The monoidal structure $J^\lambda$ is given by \begin{equation*} \begin{split} \ocat^{\rho_H\delta(c)}_{\delta^{-1}(\rho_C)h} \otimes \ocat^{\chi_H\delta(c')}_{\delta^{-1}(\chi_C)h'} &\rightarrow F_r(\ocat^{\rho_H\rho_C}_{hc} \otimes \ocat^{\chi_H\chi_C}_{h'c'}) \\ (s_m \otimes v) \otimes (r_n \otimes w) &\mapsto \sum_{\myover{i }{ z \in \mathrm{Cent}(hc)}}\lambda((h'c')_n,e_{s_izs_m^{-1}}) [s_i \otimes \rho(z)(v)] \otimes (r_n \otimes w) \end{split} \end{equation*} where $\{s_m\},\{r_n\} \subset G$ are choices of representatives of $G/\mathrm{Cent}(hc)$ and $G/\mathrm{Cent}(h'c')$ respectively and where $(h'c')_n=r_nh'c'r_n^{-1}$. \\ (ii) Define the following set uniquely parametrized by decompositions $G=H\times C$: \begin{align*} \cRUnderL := \{ &(r_{H,C,\delta},\lambda) \in R \times \P_c(kG,k^G) \mid \lambda(hc,e_{h'c'}) = \delta_{c,c'}\epsilon(h)\epsilon(e_{h'}) \} \end{align*} Then $\Psi(r_{H,C,\delta},\lambda)$ is a braided autoequivalence iff $(r_{H,C,\delta},\lambda) \in \cRUnderL$.
\\ (iii) For $(r_{H,C,\delta},\lambda) \in \cRUnderL$ the monoidal structure of $\Psi(r_{H,C,\delta},\lambda)$ simplifies: \begin{align*} \ocat^{\rho_H\delta(c)}_{\delta^{-1}(\rho_C)h} \otimes \ocat^{\chi_H\delta(c')}_{\delta^{-1}(\chi_C)h'} &\rightarrow F_r(\ocat^{\rho_H\rho_C}_{hc} \otimes \ocat^{\chi_H\chi_C}_{h'c'}) \\ m\otimes n &\mapsto \rho_C(c')\cdot(m\otimes n) \end{align*} \label{pcat} \end{proposition} \begin{proof} (i) For the action on simple $\DG$-modules use Lemma \ref{lm_dgaction}.\\ \noindent (ii) For $(r_{H,C,\delta},\lambda) \in R \times \P_c(kG,k^G)$ the functor $\Psi(r,\lambda)$ is braided iff the equation (\ref{master}) holds, where we have to consider the case $\sigma(g \times e_x, h \times e_y) = \lambda(g,e_y)\epsilon(e_x)$. Let us denote an element in the group $G = H \times C$ by $g = g_Hg_C$ and we write $p_C,p_H$ for the obvious projections. Then we check equation (\ref{master}) in this case: \begin{equation*} \begin{split} \sum_{x,y,z \in G}\lambda(y^{-1}xy,e_{z})(e_x \times y) \otimes (e_y \times z) = \sum_{x,y,z,w \in G}\delta_{y,w}\lambda(y^{-1}xy,e_{z})(e_x \times y) \otimes (e_w \times z) \end{split} \end{equation*} has to be equal to \begin{align*} &\sum_{w,y,g_1,g_2}\lambda(w,e_y)(1 \times y)(\delta^*((g_1g_2)_C) \times (g_1g_2)_H) \otimes (e_w \times 1)(e_{g_1}\circ p_H \times \delta^{-1*}(e_{g_2}\circ p_C)) \\ &= \hspace{-0.5cm}\sum_{x,y,g_1,g_2,w,z}\lambda(w,e_y)(1 \times y)\delta^*((g_1g_2)_C)(r)e_z(\delta^{-1*}(e_{g_2}\circ p_C))( e_x \times (g_1g_2)_H) \otimes (e_w \times 1)(e_{g_1}\circ p_H \times z) \\ &= \hspace{-0.5cm}\sum_{\myover{x,y,g_1,g_2,w,z }{ (g_2)_H=1}}\delta_{z_H,1}\lambda(w,e_y)\delta(x_C)((g_1g_2)_C)e_{(g_2)_C}(\delta^{-1}(e_{z_C}))(1 \times y)( e_{x} \times (g_1g_2)_H) \otimes (e_w \times 1)(e_{g_1} \circ p_H \times z) \\ &= \hspace{-0.7cm} \sum_{\myover{w,y,g_1,g_2 }{ (g_2)_H=1, w_H = (g_1)_H}}\hspace{-0.7cm} \delta_{z_H,1}\lambda(w,e_y)\delta(x_C)((g_2)_C)e_{(g_2)\circ p_C}(\delta^{-1}(e_{z_C}))( 
e_{yxy^{-1}} \times y(g_1)_H) \otimes (e_w \times z) \\ &= \sum_{x,y,w,z} \delta_{z_H,1}\lambda(w,e_{yw_H^{-1}})\delta(x_C)(\delta^{-1}(e_{z_C}))(e_{x} \times y) \otimes (e_{w} \times z) \end{align*} This is equivalent to saying that for all $x,y,w,z \in G$ the following holds: \begin{align*} \delta_{y,w}\lambda(y^{-1}xy,e_{z}) = \delta_{z_H,1}\lambda(w,e_{yw_H^{-1}})\delta(x_C)(\delta^{-1}(e_{z_C})) \end{align*} So we see that $(r,\lambda)$ fulfills this equation if and only if $\lambda(hc,e_{h'c'}) = \delta(c)(\delta^{-1}(e_{c'}))\epsilon(h)\epsilon(e_{h'}) = \delta_{c,c'}\epsilon(h)\epsilon(e_{h'})$ for all $hc,h'c' \in H \times C$, which is equivalent to the defining equations of $\cRUnderL$. \\ \noindent (iii) This is a simple calculation using that $C$ is abelian and then that $\lambda(hc,e_{h'c'}) = \delta_{c,c'}\epsilon(h)\epsilon(e_{h'})$ implies $i=m$ and only leaves the term $\delta_{c',z}$. \end{proof} \section{Main Result}\label{mainresult} Recall that we have defined certain characteristic elements of $\Aut_{br}(\DG \md \mod)$ in the Propositions \ref{vcat}, \ref{lm_BField}, \ref{efield}, \ref{pcat} and have shown how they can be explicitly calculated: We have that $\cETildeL$ is isomorphic to the group of alternating homomorphisms $\widehat{Z(G)} \to Z(G)$, that $\cBTildeL$ is a central extension of the group of alternating homomorphisms $G_{ab} \to \widehat{G_{ab}}$ and that $\cRL$ is parametrized by decompositions $G=H \times C$ together with $\delta: kC \simeq k^C$ such that $\delta(c)(\delta^{-1}(e_{c'}))=\delta_{c,c'}$. In our main result we show that these elements generate $\Aut_{br}(\DG \md \mod)$. \begin{theorem}\label{thm_classification}~\\ (i) Let $G=H \times C$ where $H$ is purely non-abelian and $C$ is \emph{elementary} abelian.
Then the subgroup of $\Aut_{Hopf}(\DG) \ltimes \Z^2_L(\DG^*)$ defined by \begin{equation*} \AutUnder_{br,L}(\DG\md\mod) := \{ (\phi,\sigma) \in \Aut_{Hopf}(\DG) \ltimes \Z^2_L(\DG^*) \mid (F_\phi,J^\sigma) \text{ braided } \} \end{equation*} has the following decomposition into disjoint double cosets \begin{equation*} \begin{split} \AutUnder_{br,L}(\DG\md\mod) &= \bigsqcup_{(r,\lambda) \in \cRL/\sim} \cVUnderL \cBUnderL \cdot (r,\lambda)\cdot \d\Reg^1_{aL}(\DG^*) \cdot \cVUnderL \cEUnderL \; \end{split} \end{equation*} where two partial dualizations $(r_{H,C,\delta},\lambda)$, $(r'_{H',C',\delta'},\lambda')$ are equivalent if and only if there exists a group isomorphism $C \simeq C'$ and where $\d\Reg^1_{aL}(\DG^*)$ is the group of almost lazy coboundaries on $\DG^*$. \\ Similarly, the quotient $\Aut_{br,L}(\DG\md\mod)$ has a decomposition into double cosets \begin{equation*} \begin{split} \Aut_{br,L}(\DG\md\mod) &= \bigsqcup_{(r,\lambda) \in \cRL/\sim} \cVL \cBL \cdot (r,\lambda) \cdot \cVL \cEL \end{split} \end{equation*} (ii) Let $G$ be a finite group with not necessarily elementary abelian direct factors. For every element $(\phi,\sigma) \in \Aut_{br,L}(\DG\md\mod)$ there exists a $(r,\lambda) \in \cRL$ such that $(\phi,\sigma)$ is in \begin{equation*} (r,\lambda) \cdot [ \cBL (\cVL \ltimes \cEL)] \end{equation*} and similarly for $\AutUnder_{br,L}(\DG\md\mod)$. \\ (iii) Let $G$ be a finite group with not necessarily elementary abelian direct factors. For every element $(\phi,\sigma) \in \Aut_{br,L}(\DG\md\mod)$ there exists a $(r,\lambda) \in \cRL$ such that $(\phi,\sigma)$ is in \begin{equation*} [(\cVL \ltimes \cBL)\cEL] \cdot (r,\lambda) \end{equation*} and similarly for $\AutUnder_{br,L}(\DG\md\mod)$. \end{theorem} \noindent Before we turn to the proof, we add some useful facts.
\\ (i) $\Psi$ induces a group homomorphism to $\Aut_{br}(\DG\md\mod)$ that factors through $$\AutUnder_{br,L}(\DG\md\mod)\to \AutTilde_{br,L}(\DG\md\mod)$$ (ii) $\Psi$ is still not necessarily injective, as Example \ref{exm_D4_B} shows. The kernel is controlled by invertible but not group-like elements in $\DG$ (see Cor. 2.23 in \cite{LP15}). \\ (iii) The group structure of $\AutTilde_{br,L}(\DG\md\mod)$ can be almost completely read off using the maps from $\cVTildeL,\cBTildeL, \cETildeL,\cRTildeL$ to the known groups (resp. set) $\Out(G), \\ B_{alt},E_{alt},R$ in terms of matrices. Only $\cBTildeL\to B_{alt}$ may fail to be a bijection in rare cases (in these cases additional cohomology calculations are necessary to determine the group structure). \\ (iv) The decomposition of $\AutUnder_{br,L}(\DG\md\mod)$ is up to a monoidal natural transformation which comes from a coboundary in $\d\Reg^1_{aL}(\DG^*)$. \\ \begin{proof}[Proof of Theorem \ref{thm_classification}]~\\ (i) We start with a general element $(\phi,\sigma) \in \AutUnder_{br,L}(\DG\md\mod)$. As in Theorem \ref{thm_cell} (ii) we write $\phi$ as a product of elements in $V,V_c,B,E,R$. Since we only have elementary abelian direct factors, the twist $\nu$ is zero. The general procedure is to multiply the element $(\phi,\sigma)$ with specific elements of $\cVUnderL,\cBUnderL,\cEUnderL$ in order to simplify the general form of $\phi$. We will use the symbol $\leadsto$ after a multiplication and warn that the $u,v,b,a$ before and after the multiplication are in general different. We will use the matrix notation with respect to the product $\DG = k^G \rtimes kG$ and also with respect to a product $G = H \times C$. For example we write a $v \in \Aut(H \times C)$ as $\left(\begin{smallmatrix} v_{H,H} & v_{C,H} \\ v_{H,C} & v_{C,C} \end{smallmatrix} \right)$ and similarly for the $u,b,a$.
\\ First, it is easy to see that we can find elements in $\cVUnderL$ such that $(\phi,\sigma)$ becomes a pair where the automorphism $\phi$ has the form \begin{equation} \leadsto\begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} 1 & b \\ 0 & 1\end{pmatrix}\begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & w\end{pmatrix}\begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} \label{eqn:st1a} \end{equation} and where the $2$-cocycle $\sigma$ stays the same, since the cocycles in $\cVUnderL$ are trivial. Here we used that $V$ normalizes $V_c$ and $E$. Hence with this step we have eliminated the $\cVUnderL \cong \Aut(G)$ parts in $\phi$. Further, we use the fact that the subgroup $\Aut_c(G)$ normalizes the subgroup $B$ and arrive at \begin{align} \leadsto&\begin{pmatrix} 1 & b \\ 0 & 1\end{pmatrix}\begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & w\end{pmatrix}\begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} \label{eqn:st11} \\ = &\begin{pmatrix} v^*\hat{p}_{H} + b\delta^{-1}+v^*\delta w a + b p_H w a & v^*\delta w + bp_Hw \\ \delta^{-1} + p_H w a & p_H w \end{pmatrix}\label{eqn:st12} \end{align} Since (\ref{eqn:st12}) together with the $2$-cocycle $\sigma$ is braided, we deduce from Lemma \ref{nec} equation (\ref{h1}) that \begin{align} 1 = [\delta \circ p_C \circ w(g)(v \circ p_H \circ w(g))] \cdot [b \circ p_H \circ w(g) (p_H \circ w(g))] \label{eqn:step1beta} \end{align} for all $g \in G$. In particular, for $g = w^{-1}(h)$ with arbitrary $h \in H$: \begin{align} 1 = b(h)(h) = b_{H,H}(h)(h) \end{align} which implies that $b_{H,H}$ is alternating. Further, taking $g = w^{-1}(h,c)$ in $(\ref{eqn:step1beta})$ we get $\delta(c)(v(h)) = 1$ for all $c \in C, h\in H$, hence $v_{H,C} = 0$.
Taking the inverse of (\ref{eqn:st11}) and arguing analogously on the inverse matrix we deduce that $a_{H,H}$ is alternating and that $(w^{-1})_{C,H}=0$ and therefore $w_{C,H} =0$. Both alternating maps $b_{H,H}$ and $a_{H,H}$ can be trivially extended to alternating maps on $G$, e.g. $b = \left(\begin{smallmatrix} b_{H,H} & 0 \\ 0 & 0 \end{smallmatrix} \right)$. Now we use Propositions \ref{lm_BField} (iii) and \ref{efield} (iii): For these alternating $a,b$ there exist $2$-cocycles $\beta_b \in \Z_{inv}^2(G,k^\times)$ and $\alpha_a \in \Z^2_c(k^G)$ such that $(b,\beta_b) \in \cBUnderL$ and $(a,\alpha_a) \in \cEUnderL$. Multiplying equation (\ref{eqn:st11}) with the inverses of $(b,\beta_b)$ and $(a,\alpha_a)$ we simplify equation (\ref{eqn:st11}) to \begin{align} \leadsto&\begin{pmatrix} 1 & b \\ 0 & 1\end{pmatrix}\begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & w\end{pmatrix}\begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} \label{eqn:st2} \end{align} with $ a = \left(\begin{smallmatrix} 0 & a_{C,H} \\ a_{H,C} & a_{C,C} \end{smallmatrix} \right)$, $b = \left(\begin{smallmatrix} 0 & b_{C,H} \\ b_{H,C} & b_{C,C} \end{smallmatrix} \right)$, $ v = \left(\begin{smallmatrix} v_{H,H} & v_{C,H} \\ 0 & v_{C,C} \end{smallmatrix} \right)$ and $ w = \left(\begin{smallmatrix} w_{H,H} & 0 \\ w_{H,C} & w_{C,C} \end{smallmatrix} \right)$, where the $2$-cocycle $\sigma$ changes to some $2$-cocycle $\sigma'$. The $b$ and $a$ can be simplified even further by using the fact that we can construct an alternating $\tilde{b} = \left(\begin{smallmatrix} 0 & \tilde{b}_{C,H} \\ -b_{H,C} & 0 \end{smallmatrix} \right)$ with $\tilde{b}_{C,H}(c)(h) = b_{H,C}(h)(c)^{-1}$ and similarly an alternating $\tilde{a}$. For these maps there exist again $2$-cocycles that lift them to elements in $\cBUnderL$ and $\cEUnderL$ respectively.
As before, we multiply equation (\ref{eqn:st2}) with the inverses of the lifts and get: \begin{align} \leadsto&\begin{pmatrix} 1 & b \\ 0 & 1\end{pmatrix}\begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & w\end{pmatrix}\begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} \label{eqn:st3} \end{align} with $ a = \left(\begin{smallmatrix} 0 & 0 \\ a_{H,C} & a_{C,C} \end{smallmatrix} \right)$, $b = \left(\begin{smallmatrix} 0 & b_{C,H} \\ 0 & b_{C,C} \end{smallmatrix} \right)$, $ v = \left(\begin{smallmatrix} v_{H,H} & v_{C,H} \\ 0 & v_{C,C} \end{smallmatrix} \right)$ and $ w = \left(\begin{smallmatrix} w_{H,H} & 0 \\ w_{H,C} & w_{C,C} \end{smallmatrix} \right)$. Now we commute the matrix corresponding to $b$ to the right as follows: \begin{align} &\begin{pmatrix} 1 & \left(\begin{smallmatrix} 0 & b_{C,H} \\ 0 & b_{C,C} \end{smallmatrix} \right) \\ 0 & 1\end{pmatrix} \begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix} = \begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix} \begin{pmatrix} 1 & \left(\begin{smallmatrix} 0 & \tilde{b}_{C,H} \\ 0 & \tilde{b}_{C,C} \end{smallmatrix} \right) \\ 0 & 1\end{pmatrix} \begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix} \\ & = \begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix} \begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix} \underbrace{\begin{pmatrix} \left(\begin{smallmatrix} 1 & \tilde{b}_{C,H}\delta^{-1} \\ 0 & 1 \end{smallmatrix} \right) & 0 \\ 0 & 1\end{pmatrix}}_{\in V_c} \underbrace{\begin{pmatrix} 1 & 0 \\ \left(\begin{smallmatrix} 0 & 0 \\ 0 & \delta^{-1} \tilde{b}_{C,C}\delta^{-1} \end{smallmatrix} \right) & 1\end{pmatrix}}_{\in E} \end{align} By commuting the $V_c$ elements in the decomposition to the right, multiplying with $V$ as in the first step and then commuting back we thus arrive at the following form: \begin{align} \leadsto
&\begin{pmatrix} v^* & 0 \\ 0 & 1\end{pmatrix}\begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & w\end{pmatrix}\begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} \label{eqn:st4} \end{align} with $ a = \left(\begin{smallmatrix} 0 & 0 \\ 0 & a_{C,C} \end{smallmatrix} \right)$, $ v = \left(\begin{smallmatrix} v_{H,H} & v_{C,H} \\ 0 & v_{C,C} \end{smallmatrix} \right)$ and $ w = \left(\begin{smallmatrix} w_{H,H} & 0 \\ w_{H,C} & w_{C,C} \end{smallmatrix} \right)$. Here we eliminated the $a_{H,C}$ part, similarly as the $b_{C,H}$ part, by commuting the corresponding matrix to the left, past the reflection. This gives us again an element in $V_c$ which we can absorb. \\ Now consider the inverse of ($\ref{eqn:st4}$): $$ \begin{pmatrix} \hat{p}_H (v^*)^{-1} & \delta \\ -a \hat{p}_H (v^*)^{-1} + w^{-1}\delta^{-1}v^{*-1} & -a\delta + w^{-1}p_H \end{pmatrix}$$ is again braided, hence we use as before Lemma \ref{nec} equation (\ref{h1}) to get: \begin{align} 1 = \delta(p_C(g))(a( \delta(p_C(g))w^{-1}_{H,C}(p_H(g)))) = \delta(g_C)(a_{C,C}(\delta(g_C))) \delta(g_C)(w^{-1}_{H,C}(g_H)) \end{align} Since this has to hold for all $g=g_Hg_C \in H\times C$ we argue as before and get that $a_{C,C}$ is alternating and that $w_{H,C}^{-1} = 0$ and therefore $w_{H,C}=0$. So we can eliminate the $a_{C,C}$ part by the same arguments as before. Using Lemma \ref{nec} equation (\ref{hh2}) on (\ref{eqn:st4}) we deduce: $v_{C,H} =0$. Since $v$ is diagonal we can commute the matrix to the right through the reflection. We then get a product of a reflection with $\delta' = v_{C,C}^* \circ \delta$ and $H'=H$, and of $v$. In other words, diagonal elements of $V_c$ w.r.t.\ a decomposition $G=H\times C$ normalize reflections of the form $(H,C,\delta)$. We can lift any reflection to an element in $\cRUnderL$ according to Proposition \ref{pcat} (iii).
Thus we arrive at: \begin{align} \leadsto &\begin{pmatrix} 1 & 0 \\ 0 & w\end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \left(\begin{smallmatrix} w_{H,H} & 0 \\0 & w_{C,C} \end{smallmatrix} \right) \end{pmatrix} \label{eqn:st5} \end{align} Applying Lemma \ref{nec} equation (\ref{h4}) on (\ref{eqn:st5}) we get that $\chi(g) = \chi(w(g))$ for all $g \in G$, hence $w = \id$. During all of the above multiplications the $2$-cocycle $\sigma$ changed to some other $2$-cocycle $\sigma'$ so that now we are left with only $(1,\sigma')$, which is braided. We want to show that, apart from the distinguished $\H^2_{inv}(G)$ part of $\sigma'$, such a braided autoequivalence has to be trivial. \\ First, we know from \cite{LP15} Lm. 5.3 that $\beta_{\sigma'}(g,h) = \sigma'(g \times 1, h \times 1)$ defines a $2$-cocycle on $G$. From equation (\ref{master}) we deduce that if $(1,\sigma')$ is braided then \begin{align*} \sigma'(g \times 1, 1 \times e_x) &= \sigma'(1 \times e_x, g \times 1) \\ \sigma'(g \times 1, h^g \times 1) &= \sigma'(h \times 1, g \times 1) \\ \sigma'(1 \times e_x,1 \times e_y) &= \sigma'(1 \times e_y, 1 \times e_x) \end{align*} This shows that $(1,\beta_{\sigma'}) \in \cBUnderL$. We multiply $(1,\sigma')$ from the left with $(1,\sigma_{\beta_{\sigma'}}^{-1})$ where $$\sigma_{\beta_{\sigma'}}(g \times e_x, h \times e_y) = \beta_{\sigma'}(g,h)\epsilon(e_x)\epsilon(e_y)=\sigma'(g \times 1, h \times 1)\epsilon(e_x)\epsilon(e_y)$$ and the resulting cocycle fulfills \begin{align*} \sigma_{\beta_{\sigma'}}^{-1}*\sigma'(g \times 1, h \times 1) &= \sum_{t,s \in G} \sigma^{-1}_{\beta_{\sigma'}}(g \times e_t,h \times e_s)\sigma'(g^t \times 1, h^s \times 1) \\ &= \sum_{t,s}\sigma'^{-1}(g \times 1, h \times 1)\epsilon(e_t)\epsilon(e_s)\sigma'(g^t \times 1, h^s \times 1) = 1 \end{align*} Call the new cocycle again $\sigma'$ and note that it is now trivial if restricted to $kG \times kG$, hence we got rid of the distinguished part of $\sigma$.
Further, since $\alpha_{\sigma'}(e_x,e_y) = \sigma'(1 \times e_x, 1 \times e_y)$ is a lazy symmetric $2$-cocycle in $\Z^2_c(k^G)$ it follows from \cite{LP15} Cor. 3.5 that $\alpha_{\sigma'}$ is cohomologically trivial. Let $\eta \in \Reg^1_L(k^G)$ such that $\d\eta = \alpha_{\sigma'}$. We use equation (\ref{decsigma}) from the proof of Lemma \ref{lmm} in this case: \begin{align*} \sigma'(g \times e_x, h \times e_y) &= \sum_{\myover{x_1x_2x_3=x}{y_1y_2y_3=y}} \hspace{-0.5cm} \sigma'^{-1}(g \times 1, 1 \times e_{x_1} )\sigma'^{-1}(1 \times e_{y_1}, h \times 1) \d\eta(e_{x_2},e_{y_2})\sigma'(gh \times 1, 1 \times e_{x_3}e_{y_3}) \nonumber \\ &=\sum_{\myover{x_1x_2t=x}{y_1y_2t=y}} \sigma'^{-1}(g \times 1, 1 \times e_{x_1} )\sigma'^{-1}(1 \times e_{y_1}, h \times 1) \d\eta(e_{x_2},e_{y_2})\sigma'(g^xh^y \times 1, 1 \times e_{t}) \end{align*} where in the last equation we have used the lazy condition on $\sigma'$. Now let $\mu(g \times e_x) := \sigma'^{-1}(g \times 1, 1 \times e_x)$ and check that together with $\eta$ this gives us the desired coboundary to show that $\sigma'$ is exact: \begin{align*} &\d(\mu*(\eta \otimes \epsilon_{kG}))(g \times e_x, h \times e_y) \\ &= \sum_{\myover{x_1x_2 =x}{y_1y_2=y}}\mu*(\eta \otimes \epsilon_{kG})(g \times e_{x_1})\mu*(\eta \otimes \epsilon_{kG})(h \times e_{y_1})\mu*(\eta \otimes \epsilon_{kG})(g^{x_1}h^{y_1} \times e_{x_2}e_{y_2}) \\ &= \sum_{\myover{x_1x_2x_3x_4 =x}{y_1y_2y_3y_4=y}} \sigma'^{-1}(g \times 1, 1 \times e_{x_1})\eta(e_{x_2})\sigma'^{-1}(h \times 1, 1 \times e_{y_1})\eta(e_{y_2}) \\ & \qquad \sigma'(g^{x_1x_2}h^{y_1y_2} \times 1, 1 \times e_{x_3}e_{y_3})\eta(e_{x_4}e_{y_4}) \\ &= \sum_{\myover{x_1x_2t=x}{y_1y_2t=y}} \sigma'^{-1}(g \times 1, 1 \times e_{x_1})\sigma'^{-1}(h \times 1, 1 \times e_{y_1})\d\eta(e_{x_2},e_{y_2})\sigma'(g^{xt^{-1}}h^{yt^{-1}} \times 1, 1 \times e_t) \\ &= \sum_{\myover{x_1x_2t=x}{y_1y_2t=y}} \sigma'^{-1}(g \times 1, 1 \times e_{x_1})\sigma'^{-1}(h \times 1, 1 \times
e_{y_1})\d\eta(e_{x_2},e_{y_2})\sigma'(g^{x}h^{y} \times 1, 1 \times e_t) \\ &= \sigma'(g \times e_x, h \times e_y) \end{align*} \noindent (ii) By Theorem \ref{thm_cell} (iv) we write \begin{align} \phi = \begin{pmatrix} \hat{p}_{H} & \delta \\ \delta^{-1} & p_H \end{pmatrix} \begin{pmatrix}1 & b \\ 0 & 1\end{pmatrix}\begin{pmatrix} v^* & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} \label{eqn:lr1} \end{align} where we have already eliminated the $V$ element since it normalizes $E$ and every $V$ has a lift to $\cVUnderL$. Similarly, we know from Proposition \ref{pcat} that (up to a $V$ that ensures $\delta(c)(\delta^{-1}(e_{c'})) = \delta_{c,c'}$) every reflection $r$ has a lift $(r,\lambda) \in \cRUnderL$. Hence we multiply $(\phi,\sigma)$ with the inverse $(r,\lambda)^{-1}$ from the left so that $\phi$ changes to: \begin{align} \leadsto \begin{pmatrix} 1 & b \\ 0 & 1\end{pmatrix}\begin{pmatrix} v^* & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ a & 1\end{pmatrix} = \begin{pmatrix} v^* + ba & b \\ a & 1 \end{pmatrix} \label{eqn:lr2} \end{align} Since this element has to be braided, using Lemma \ref{nec} equation (\ref{h1}) together with (\ref{eqn:symb2}) it follows that $b$ is alternating on $G$. From Lemma \ref{nec} equation (\ref{h4}) it follows that $v = \id_G$ and then that $a$ is alternating. Hence we can construct lifts to $\cBUnderL$ and $\cEUnderL$ and multiplying with the corresponding inverses just leaves us with a $(1,\sigma')$. As in (i) we get rid of the distinguished part and then this is a trivial autoequivalence (up to natural transformation). \\ \noindent The proof of (iii) is completely analogous to (ii). \end{proof} \section{Examples and the full Brauer-Picard group}\label{sec_examples} We now discuss the results of this paper for several classes of groups $G$. In particular, we compare our results to the examples obtained in \cite{NR14}.
In all these cases we verify that the decomposition we proposed in Question \ref{q_decomposition} is also true for the full Brauer-Picard group (i.e. the elements do not have to be lazy). \\ The approach in \cite{NR14} is to study $\Aut_{br}(\DG\md\mod)$ via its action on the set $\mathbb{L}(G)$ of so-called Lagrangian subcategories $\LL \subset \DG\md\mod$. These are parametrized by pairs $(N,[\mu])$ where $N$ is a normal abelian subgroup of $G$ and $[\mu]$ is a $G$-invariant $2$-cohomology class on $N$. The associated Lagrangian subcategory is generated, as an abelian category, by simple objects $\ocat_g^\chi$ in the following way (see Sect. 7 in \cite{NR14}): $$\LL_{N,\mu}:=\left\langle \ocat_g^\chi\mid g\in N,\;\chi(h)=\mu(g,h)\mu(h,g)^{-1}\;\forall{h\in N} \right\rangle$$ Let further $\mathbb{L}_0(G):= \{ \LL \in \mathbb{L}(G) \mid \LL \simeq \Rep(G) \; \text{as a braided fusion category} \}$. \\ The group $\Aut_{br}(\DG\md\mod)$ acts on the lattice of fusion subcategories of $\DG\md\mod$ and on $\mathbb{L}(G)$. The subset $\mathbb{L}_0(G)$ is invariant under this action. By Prop. 7.6 in \cite{NR14}, the action of $\Aut_{br}(\DG\md\mod)$ on $\mathbb{L}_0(G)$ is \emph{transitive}. The stabilizer of the standard Lagrangian subcategory $\LL_{1,1}=\Rep(G)$ is the image of the induction $$\Ind_{\Vect_G}:\;\Aut_{mon}(\Vect_G)\to \Aut_{br}(\DG\md\mod)$$ Since the image of $\Ind_{\Vect_G}$ is $\Out(G) \ltimes \H^2(G,k^\times)$, we have a group isomorphism $\mathrm{Stab}(\Rep(G)) \simeq \Out(G) \ltimes \H^2(G,k^\times)$, which implies $|\Aut_{br}(\DG\md\mod)| = |\H^2(G,k^\times)| \cdot |\Out(G)| \cdot |\mathbb{L}_0(G)|$. \\ We determine our lazy subgroups $\cBL,\cEL,\cRL,\cVL$ for certain examples and show how they act on $\mathbb{L}(G)$. We explicitly calculate the action in terms of the simple objects $\ocat_g^\chi$.
\subsection{General considerations on non-lazy reflections}\label{sec_nonlazyReflection}~\\ For each triple $(Q,N,\delta)$ where $G$ is a semi-direct product $G=Q \ltimes N$, $N$ is a normal abelian subgroup of $G$ and $\delta:kN \ito k^N$ is a $G$-invariant (under the conjugation action) Hopf isomorphism, we obtain an element $r_{Q,N,\delta}:=\Omega \in \Aut_{br}(\DG\md\mod)$, where $\Omega$ is the braided autoequivalence given in Thm 3.20 in \cite{BLS15}: We have a decomposition of $kG$ as a Radford biproduct $kG= kQ \ltimes kN$, where $N$ is a normal subgroup of $G$, $kN$ is a Hopf algebra in $DQ\md\mod$, where $kQ$ acts on $kN$ by conjugation and where the $kQ$-coaction on $kN$ is trivial. In our notation, the braided autoequivalence $r_{Q,N,\delta}:\DG\md \mod \ito \DG\md\mod$ assigns to a $\DG$-module $M$ the $\DG$-module $r_{Q,N,\delta}(M)$, where $r_{Q,N,\delta}(M)$ is $M$ as a $k$-vector space and where the $\DG$-action on $r_{Q,N,\delta}(M)$ is given by postcomposing with the following algebra isomorphism of $\DG= D(Q \ltimes N)$: \begin{align*} \DG \ni (f_Q,f_N) \times (q,n) &\mapsto (f_Q,\delta(n)) \times (q,\delta^{-1}(f_N)) \in \DG \end{align*} where $f_Q \in k^Q, f_N \in k^N, q \in Q$ and $n \in N$. Essentially, this is the reflection as defined in Proposition \ref{reflections} but since we do not have a direct product of $Q$ and $N$, we do not have a coalgebra isomorphism as we would have in the lazy case. We denote the $\DG$-action on $M$ by $\left((f_Q,f_N) \times (q,n)\right).m \in M$ for $m \in M$ and the $\DG$-action on $r_{Q,N,\delta}(M)$ by $\left((f_Q,f_N) \times (q,n)\right)._r m = \left((f_Q,\delta(n)) \times (q,\delta^{-1}(f_N))\right).m$. \\ We show how the partial dualization $r_{Q,N,\delta}$ acts on irreducibles $\ocat_1^\chi$ and thereby on the subcategory $\LL_{1,1}$. We need this in order to check that the group generated by $\cB,\cE,\cV$ and $\cR$ acts transitively on $\mathbb{L}_0(G)$.
Since $\mathbb{L}_0(G)$ is the orbit of $\LL_{1,1}=\Rep(G)$ and since $\LL_{1,1}$ is generated by the simple objects $\ocat^{\chi}_1$, we only need the action of partial dualizations on simple objects of the form $\ocat^{\chi}_1$ for some irreducible character $\chi$ on $G$. \\ Since $r_{Q,N,\delta}$ is an autoequivalence, it sends simple objects to simple objects. Therefore, for each irreducible character $\chi$ on $G$ there exists a conjugacy class $[g] \subset G$ and an irreducible character $\rho$ on $\Cent(g)$ such that $r_{Q,N,\delta}(\ocat_1^\chi)=\ocat_g^\rho$. We have $\dim(\chi)=|[g]|\cdot\dim(\rho)$. We want to determine $[g]$ and $\rho$. \\ Clifford's theorem (see e.g. Page 70, Theorem 4.1 in \cite{Gor07}) states that the restriction of an irreducible character $\chi$ to a normal subgroup $N$ of $G$ decomposes into a direct sum of irreducible $N$-characters $\chi_i$ with the same multiplicity $e \in \nat$: $$\chi|_N=e\sum_{i=1}^t \chi_i$$ where the $\chi_i$ form a $G$-orbit under the conjugation action on $N$ and hence on $\Rep(N)$. The group $Q=G/N$ acts on $\chi_i$ by conjugation in the argument and the subgroups $I_i \subset G/N = Q$ fixing a $\chi_i$ are called the inertia subgroups. We have $[Q:I_i]=t$. \\ Since $N$ is abelian, we obtain $1$-dimensional representations $\chi_i \in \hat{N}$ forming a $G$-conjugacy class. Then $n_i:=\delta^{-1}(\chi_i) \in N$ are all conjugate to each other in $G$. Fix one representative $n_i=\delta^{-1}(\chi_i)$ in this conjugacy class and the corresponding inertia subgroup $I_i \subset Q$. Further, since $\delta$ is $G$-conjugation invariant we also have the formula $\Cent(n_i) = N \rtimes I_i$. We have a decomposition via Clifford's theorem $\ocat^{\chi}_1 = \bigoplus_{j=1}^t T_j \otimes M_j$, where the $M_j$ are $1$-dimensional $k$-vector spaces with an $N$-action given by $\chi_j$ and where $T_j$ is an $e$-dimensional $k$-vector space with trivial $N$-action.
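To make Clifford's theorem concrete, here is a small numerical illustration (our own toy example, not taken from the text): for $G=\SS_3$ with the abelian normal subgroup $N=\ZZ_3$, the $2$-dimensional irreducible character restricts to the two nontrivial characters of $N$ with multiplicity $e=1$, so $t=2$ and $\dim(\chi)=e\cdot t=2$. A minimal Python check of the multiplicities:

```python
import cmath

# Toy illustration (not from the text): G = S_3, N = A_3 = Z_3.
w = cmath.exp(2j * cmath.pi / 3)       # primitive third root of unity
N = [0, 1, 2]                          # Z_3 written additively
chi_on_N = {0: 2, 1: -1, 2: -1}        # 2-dim S_3-character restricted to N

# the three irreducible characters of N = Z_3: g -> w^(k*g)
irr_N = [{g: w ** (k * g) for g in N} for k in range(3)]

def mult(chi, psi):
    """Multiplicity <chi, psi> = (1/|N|) * sum_g chi(g) * conj(psi(g))."""
    s = sum(chi[g] * psi[g].conjugate() for g in N) / len(N)
    return round(s.real)

mults = [mult(chi_on_N, psi) for psi in irr_N]
assert mults == [0, 1, 1]  # chi|_N = chi_1 + chi_2, so e = 1 and t = 2

# conjugating by a transposition inverts Z_3 and hence swaps chi_1 and chi_2:
# the two constituents form a single G-orbit, matching [Q : I_i] = t = 2
assert all(cmath.isclose(irr_N[1][(-g) % 3], irr_N[2][g]) for g in N)
print("e = 1, t = 2, dim(chi) = e * t = 2")
```

The same inner-product computation recovers the multiplicities $e$ for any abelian normal subgroup, which is exactly the data entering the decomposition $\ocat^{\chi}_1 = \bigoplus_j T_j \otimes M_j$ above.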
Since the partial dualization preserves vector spaces, we also have a decomposition as vector spaces: $r_{Q,N,\delta}(\ocat^\chi_1) = \ocat_g^\rho = \bigoplus_{j=1}^t T_j \otimes M_j$. We calculate the $k^N$-action ($kN$-coaction) on $\ocat_g^\rho$: Let $t \otimes m_j \in T_j \otimes M_j$; then the modified $k^N$-action is: \begin{align*} e_{n_l}._r(t \otimes m_j) & = \delta^{-1}(e_{n_l}).(t \otimes m_j) = \chi_j(\delta^{-1}(e_{n_l})) (t \otimes m_j) \\ & = e_{n_l}(\delta^{-1}(\chi_j)) (t \otimes m_j) = e_{n_l}(n_j) (t \otimes m_j) = \delta_{l,j} (t \otimes m_j) \end{align*} On the other hand the $k^Q$-action ($kQ$-coaction) on $\ocat_g^{\rho}$ stays the same, which is trivial here. Hence, we have shown: $[g] = [n_i]$. \\ Now we calculate the action of $\Cent(n_i)=N \rtimes I_i$ on $T_i \otimes M_i$. Let $n \in N$ and note that the $k^G$-action on $\ocat^\chi_1$ is trivial since $|[1]| = 1$. Hence the $kN$-action on $\ocat_{n_i}^{\rho}$ is trivial. For $q \in I_i \subset Q=G/N$: $q._r(t \otimes m_i) = (q.t) \otimes m_i$ where $Q$ acts on $T_i$ since $T_i \otimes M_i$ is an $I_i$-submodule. Thus, $\rho$ is the character on $N \rtimes I_i$ which is the trivial extension of the $I_i$-representation $T_i$. Overall we get $$ r_{Q,N,\delta}(\ocat_1^\chi) = \ocat_{n_i}^{T_i} $$ \subsection{General considerations on non-lazy induction}\label{sec_nonlazyInduction}~\\ We now turn to the subgroups of $\Aut_{br}(\DG\md\mod)$ defined to be the images of the induction $$\Ind_{\cat}:\;\Aut_{mon}(\cat)\to\Aut_{br}(Z(\cat))$$ for the two cases $\cat =\Vect_G$ and $\cat = \Rep(G)$ where we assign to $F\in\Aut_{mon}(\cat)$ the invertible $\cat$-bimodule category ${_F}\cat_\cat \in \BrPic(\cat)$ that has a right module category structure given by the monoidal structure of $\cat$ and a left module category structure by composing with $F$ and then using the monoidal structure of $\cat$.
We then use the isomorphism $\BrPic(\cat) \simeq \Aut_{br}(Z(\cat))$ to get subgroups of $\Aut_{br}(\DG\md\mod)$. \\ We already know that $\im(\Ind_{\Vect_G}) \simeq \Out(G)\ltimes \H^2(G,k^\times)$. The subgroup $\im(\Ind_{\Rep(G)})$ is harder to compute. The group $\Aut_{mon}(\Rep(G))$ is parametrized by pairs $(N,\alpha)$ where $N$ is an abelian subgroup of $G$ and $\alpha$ belongs to a $G$-invariant cohomology class (see \cite{Dav01}). The subgroup of lazy monoidal autoequivalences corresponds to all pairs where $\alpha$ is $G$-invariant even as a $2$-cocycle. \begin{remark} An interesting example appears when we consider $G=\ZZ_2^{2n}\rtimes \Sp_{2n}(2)$ where $\Sp_{2n}(2)$ is the symplectic group over $\F_2$. There is a pair $(N,\alpha)$ such that the associated functor is a monoidal equivalence $$F_{N,\alpha}:\;\Rep(\ZZ_2^{2n}\rtimes \Sp_{2n}(2)) \stackrel{\sim}{\longrightarrow} \Rep(\ZZ_2^{2n}.\Sp_{2n}(2))$$ The groups $\ZZ_2^{2n}\rtimes \Sp_{2n}(2)$ and $\ZZ_2^{2n}.\Sp_{2n}(2)$ are isomorphic only for $n=1$; in that case both are isomorphic to $\SS_4$. See Example 7.6 in \cite{Dav01}. This leads to a nontrivial and non-lazy monoidal autoequivalence, which in turn leads to a non-trivial, non-lazy braided autoequivalence of $D\SS_4$; see the example below. \end{remark} For any $F\in\Aut_{mon}(\Rep(G))$, we want to determine the image $$E_{F}:=\Ind_{\Rep(G)}(F)\in \Aut_{br}(\DG\md\mod)$$ Unfortunately, it is not easy to calculate $E_{F}$ explicitly, since it depends on the isomorphism $\BrPic(\cat)\to \Aut_{br}(Z(\cat))$. In \cite{NR14} equations (16),(17), the image of the induction $\Ind_{\Vect_G}$ was worked out, but we are also interested in the image of $\Ind_{\Rep(G)}$, which seems to be harder. We can easily derive at least a necessary condition.
From \cite{ENO10} we know that given an invertible $\cat$-bimodule category ${_\cat}\mcat_{\cat}$ the corresponding braided autoequivalence $\Phi_\mcat \in \Aut_{br}(Z(\cat))$ is determined by the condition that there exists an isomorphism of $\cat$-bimodule functors $ Z \otimes \cdot \simeq \cdot \otimes \Phi_\mcat(Z)$ for all $Z \in Z(\cat)$. In our case ${_\cat}\mcat_{\cat} = {_F}\cat_\cat$ and $\Phi_\mcat = E_F$. This implies for $(V,c),(V',c') \in Z(\cat)$ \begin{align*} E_{F}(V,c)=(V',c') \;\;\Rightarrow\;\; F(V)\otimes X \cong X\otimes V'\quad\forall X \in \cat \end{align*} In particular, we have $F(V) \simeq V'$. For $\cat=\Rep(G)$ this implies moreover: $$E_F({\ocat_g^\chi})=\ocat_{g'}^{\chi'} \;\:\Rightarrow \;\; F(\Ind_{\Cent(g)}^G(\chi))\cong \Ind_{\Cent(g')}^G(\chi')$$ Thus, possible images of $E_F$ are determined by the character table of $G$ and the induction-restriction tables for $\Cent(g),\Cent(g')$. We continue for the special case $g=1$ to determine the possible images $E_F(\ocat_1^\chi)$ and hence $E_F(\LL_{1,1})$. Our formula above implies: $$F(\chi)=\Ind_{\Cent(g')}^G(\chi')$$ In particular, $\Ind_{\Cent(g')}^G(\chi')$ has to be irreducible. \subsection{Elementary abelian groups}\label{sec_Fp_AutBr}~\\ Let $G=\ZZ_p^n$ with $p$ a prime number. We fix an isomorphism $\ZZ_p \simeq \widehat{\ZZ}_p$. We know that $$\BrPic(\Rep(\ZZ_p^n)) \simeq \mathrm{O}^+_{2n}(\F_p)$$ where $\mathrm{O}^+_{2n}(\F_p):=\mathrm{O}_{2n}(\F_p,q)$ is the group of invertible $2n \times 2n$ matrices preserving the quadratic form: $$q:\F_p^{2n} \to \F_p: (k_1,...,k_n,l_1,...,l_n) \mapsto \sum_{i=1}^n k_il_i $$ For abelian groups, all $2$-cocycles over $\DG$ are lazy and the results of this article give a product decomposition of $\BrPic(\Rep(\ZZ_p^n))$.
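As an illustrative sanity check (ours, not part of the argument), one can verify numerically that the characteristic block matrices over $\F_p$ occurring in this decomposition preserve $q$: unipotent upper or lower triangular blocks given by an alternating matrix, and the full reflection swapping the two summands $\F_p^n$. A short Python sketch with arbitrarily chosen small parameters:

```python
import random

p, n = 5, 3  # illustrative choices; any prime p and n >= 1 work

def q(v):
    # quadratic form q(k_1..k_n, l_1..l_n) = sum_i k_i * l_i  (mod p)
    return sum(v[i] * v[n + i] for i in range(n)) % p

def mat_vec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2 * n)) % p for i in range(2 * n))

def block(tl, tr, bl, br):
    # assemble a 2n x 2n matrix from four n x n blocks
    return [[(tl if i < n and j < n else tr if i < n else bl if j < n else br)[i % n][j % n]
             for j in range(2 * n)] for i in range(2 * n)]

I = [[int(i == j) for j in range(n)] for i in range(n)]
Z = [[0] * n for _ in range(n)]

# a random alternating matrix: B = -B^T with zero diagonal
B = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        B[i][j] = random.randrange(p)
        B[j][i] = (-B[i][j]) % p

upper = block(I, B, Z, I)  # B-type generator: (k, l) -> (k + Bl, l)
lower = block(I, Z, B, I)  # E-type generator: (k, l) -> (k, Bk + l)
refl  = block(Z, I, I, Z)  # full reflection r_[G]: (k, l) -> (l, k)

for M in (upper, lower, refl):
    for _ in range(100):
        v = tuple(random.randrange(p) for _ in range(2 * n))
        assert q(mat_vec(M, v)) == q(v)
print("all three matrix types preserve q")
```

The check works in any characteristic, including $p=2$, because $l^T B l = 0$ for an alternating $B$ with zero diagonal; this is exactly why the conditions $B=-B^T$, $B_{ii}=0$ appear in the description of $\cB$ and $\cE$ below.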
\begin{itemize} \item $\cV \cong \GL_{n}(\F_p) \simeq \left\{ \begin{pmatrix} A^{-1} &0 \\ 0 &A \end{pmatrix} \mid A \in \GL_{n}(\F_p) \right\} \subset \mathrm{O}_{2n}(\F_p,q)$ \item $\cB \cong B_{alt} \cong \left\{ \begin{pmatrix} \mathbbm{1}_n &B \\ 0 & \mathbbm{1}_n \end{pmatrix} \mid B = -B^T, B_{ii} = 0, B \in \F_p^{n \times n} \right\} \subset \mathrm{O}_{2n}(\F_p,q)$ \item $\cE \cong E_{alt} \cong \left\{ \begin{pmatrix} \mathbbm{1}_n &0 \\ E & \mathbbm{1}_n \end{pmatrix} \mid E = -E^T, E_{ii} = 0, E \in \F_p^{n \times n} \right\} \subset \mathrm{O}_{2n}(\F_p,q)$ \end{itemize} The set $\cRL/\sim$ consists of $n+1$ representatives $r_{[C]}$, one for each possible dimension $d$ of a direct factor $\F_p^d\cong C\subset G$, and $r_{[C]}$ is an actual reflection on the subspace $C$ with a suitable monoidal structure determined by the pairing $\lambda$. Especially the generator $r_{[G]}$ conjugates $\cB$ and $\cE$. In this case the double coset decomposition is a variant of the Bruhat decomposition of $\mathrm{O}_{2n}(\F_p,q)$. It is interesting to discuss how, in this example, our subgroups act on the Lagrangian subcategories and to see that this action is indeed transitive. $\mathbb{L}_0(G)=\mathbb{L}(G)$ is parametrized by pairs $(N,[\mu])$ where $N$ is a subvector space of $\F_p^n$ and $[\mu] \in \H^2(N,k^\times)$ is uniquely determined by an alternating bilinear form $\langle,\rangle_\mu$ on $N$ given by $\langle g,h \rangle_\mu = \mu(g,h)\mu(h,g)^{-1}$. Let $N'$ be the orthogonal complement, so $\F_p^n=N\oplus N'$. We have $$\LL_{N,\mu}=\left\langle\ocat_g^{\chi_{N'}\langle g,-\rangle} \mid g\in N, \chi_{N'}\in\widehat{N}' \right\rangle \qquad \LL_{1,1}=\left\langle\ocat_1^{\chi} \mid \chi \in\widehat{G} \right\rangle $$ $\bullet$ Elements in $\cVTildeL=\Out(G)=\GL_n(\F_p)$ stabilize $\LL_{1,1}$. \\ $\bullet$ For any $\delta$, a partial dualization $r_N \in \cRL$ on $N$ maps $\LL_{1,1}$ to $\LL_{N,1}$. 
\\ $\bullet$ $b \in \Hom_{alt}(G,\widehat{G}) \simeq \cBTildeL$ acts by $\ocat_{g}^\chi \mapsto \ocat_g^{\chi\cdot b(g,\cdot)}$. In particular, it stabilizes $\LL_{1,1}$ and sends $\LL_{N,1}\mapsto \LL_{N,\beta\mid_N}$ where $\beta \in \Z^2(G,k^\times)$ is uniquely (up to coboundary) determined by $b(g,h)=\beta(g,h)\beta(h,g)^{-1}$. \\ $\bullet$ $a \in \Hom_{alt}(\widehat{G},G) \simeq \cETildeL$ acts by $\ocat_{g}^\chi \mapsto \ocat_{a(\chi)g}^{\chi}$. In particular, it sends $\LL_{1,1}$ to $\LL_{N,\eta}$ with $N=\im(a)$ being the image of $a$ and $\eta \in \Z^2(N,k^\times)$ uniquely (up to coboundary) determined by $\eta(n,n')\eta(n',n)^{-1} = \chi(n')$ where $a(\chi) =n$ and $n' \in N=\im(a)$. This is well defined: for another $\chi'$ with $a(\chi')=n$ and $n'=a(\rho)$ we have $\chi(n') = \chi(a(\rho)) = \rho(a(\chi))^{-1} = \rho(a(\chi'))^{-1} = \chi'(a(\rho)) = \chi'(n')$. \\ We see that we can get every $\LL_{N,\mu} \in \mathbb{L}_0(G)$ by applying suitable combinations of elements of our subgroups to $\LL_{1,1}$. \\ \subsection{Simple groups}~\\ Let $G$ be a simple group; then our result returns \begin{itemize} \item $\cVTildeL=\Out(G)$ \item $\cBTildeL=\widehat{G}_{ab}\wedge\widehat{G}_{ab}=1$ \item $\cETildeL = Z(G) \wedge Z(G)=1$ \item $\cRL = 1$ \end{itemize} hence the only \emph{lazy} autoequivalences are induced by outer automorphisms of $G$. \\ There are no normal abelian subgroups except $\{1\}$, hence the only Lagrangian subcategory is $\LL_{1,1}$ and the stabilizer $\Out(G)\ltimes \H^2(G,k^\times)$ is equal to $\Aut_{br}(\DG\md\mod)$. \\ Observe that in this example we also obtain a decomposition of the full Brauer-Picard group, and our Question \ref{q_decomposition} is answered positively: namely, $\Aut_{br}(\DG\md\mod)$ is equal to the image of the induction $\Ind_{\Vect_G}$, while the other subgroups are trivial. \\ \subsection{Lie groups and quasisimple groups}~\\ Lie groups over finite fields $G(\F_{q}),q=p^k$, have (with small exceptions) the property $G_{ab}=1$, and there are no semidirect factors.
On the other hand, they may contain a nontrivial center $Z(G)$. This is comparable to their complex counterpart, where the center of the simply-connected form $Z(G_{sc}(\CC))$ is equal to the fundamental group $\pi_1(G_{ad}(\CC))$ of the adjoint form with no center $Z(G_{ad}(\CC))=1$. In exceptional cases for $q$, the maximal central extension may be larger than $\pi_1(G_{ad}(\CC))$. Similarly, we could consider central extensions of sporadic groups $G$; these appear in any non-solvable group as part of the generalized Fitting subgroup. \begin{definition} A group $G$ is called \emph{quasisimple} if it is a perfect central extension of a simple group: $$Z\to G\to H\qquad Z=Z(G), \quad [G,G]=G$$ \end{definition} As long as $\H^2(Z,\CC^\times)=1$, e.g. because $Z$ is cyclic, there is no difference from the simple case. \emph{Nontrivial} $\cETildeL$-terms appear as soon as $\H^2(Z,\CC^\times)\neq 1$. This is the case for $D_{2n}(\F_q)=\SO_{4n}(\F_q)$ (for $q$ odd or $q=2$), where we have $\pi_1(G_{ad}(\CC)) =\ZZ_2\times\ZZ_2$, and in some other (exceptional) cases. We consider all universal perfect central extensions where $\H^2(Z,\CC^\times)\neq 1$: \\ \begin{center} \begin{tabular}{lcl|l} $Z$ & & $H$ & $\cETildeL$ \\ \hline $\ZZ_2\times \ZZ_2$ && $D_{2n}(\F_{q})$ & $\ZZ_2$ \\ \hline $\ZZ_4\times \ZZ_4 \times \ZZ_3$ && $A_2(\F_{2^2})$ & $\ZZ_4$ \\ $\ZZ_3\times \ZZ_3 \times \ZZ_4$ && ${^2}A_3(\F_{3^2})$ & $\ZZ_3$ \\ $\ZZ_2\times \ZZ_2 \times \ZZ_3$ && ${^2}A_5(\F_{2^2})$ & $\ZZ_2$ \\ $\ZZ_2\times \ZZ_2$ && ${^2}B_2(\F_{2^3})$ & $\ZZ_2$ \\ $\ZZ_2\times \ZZ_2 \times \ZZ_3$ && ${^2}E_6(\F_{2^2})$ & $\ZZ_2$ \end{tabular} \end{center} The upper indices denote the order of the automorphism by which the so-called Steinberg groups are defined. $\Out(H)$ typically consists of scalar- and Galois-automorphisms of the base field $\F_q$, extended by the group of Dynkin diagram automorphisms; for example, for $D_4$ we have the triality automorphisms $\SS_3$.
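As a consistency check, the $\cETildeL$-entries of the table can be recovered from the standard decomposition of the exterior square of a direct sum, $\Lambda^2(A\oplus B)\cong \Lambda^2 A\oplus(A\otimes B)\oplus\Lambda^2 B$; the following computation (our own sketch for $Z\cong\ZZ_n^2\times\ZZ_k$ with $\gcd(n,k)=1$, as in the table) illustrates this:
\[
(\ZZ_n^{2}\times\ZZ_k)\wedge(\ZZ_n^{2}\times\ZZ_k)
\;\cong\;\Lambda^2(\ZZ_n^{2})\oplus(\ZZ_n^{2}\otimes\ZZ_k)\oplus\Lambda^2(\ZZ_k)
\;\cong\;\ZZ_n\oplus\ZZ_{\gcd(n,k)}^{2}\oplus 1
\;\cong\;\ZZ_n .
\]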
Note further that any automorphism of $G$ preserves the center $Z$, hence it factors to an automorphism of $H$. The kernel of this group homomorphism $\Out(G)\to\Out(H)$ is trivial, since all elements in $Z$ are products of commutators of elements of $G$. We have $\Out(G) \cong \Out(H)$, where surjectivity follows from $G$ being a universal central extension. For $G$ as above, the following holds: \begin{itemize} \item $\cVTildeL=\Out(H)$ \item $\cBTildeL=\widehat{G_{ab}}\wedge\widehat{G_{ab}}=1$ \item $\cETildeL=(\ZZ_n \times \ZZ_n \times \ZZ_k )\wedge (\ZZ_n \times \ZZ_n \times \ZZ_k )=\ZZ_n$ for $\gcd(n,k)=1$, \\ where $n\in\{2,3,4\}$ as indicated in the above table. \item $\cRL=1$, as there are no direct factors of $G$. \end{itemize} Hence $\Aut_{br,L}(\DG\md\mod) \simeq \Out(H) \ltimes \ZZ_n$. \begin{claim} The decomposition we proposed in Question \ref{q_decomposition} is also true for the full Brauer-Picard group for the groups $G$ above. More precisely \begin{align*} \BrPic(\Rep(G)) &=\im(\Ind_{\Vect_G})\cdot \im(\Ind_{\Rep(G)}) \cdot \cR \\ &=\Out(G)\ltimes\H^2(G,k^\times)\cdot \ZZ_n\cdot 1 \end{align*} \begin{itemize} \item $\im(\Ind_{\Vect_G})=\Out(G)\ltimes\H^2(G,k^\times)$ \item $\im(\Ind_{\Rep(G)}) \simeq \widetilde{\cE} \simeq \cETildeL \simeq \ZZ_n$ \item No reflections, as there is no semidirect decomposition of $G$. \end{itemize} \end{claim} \begin{proof} Let $N$ be a normal abelian subgroup of $G$. Then the image $\pi(N)$ under the surjection $\pi:G \to H$ is a normal abelian subgroup of $H$. Since $H$ is simple and non-abelian, $\pi(N)$ is trivial, so $N$ has to be a subgroup of the center $\ker(\pi)=Z=Z(G)$. Further, since $\widehat{G} = \widehat{G_{ab}}=1$, the only $1$-dimensional simple object in $\LL_{1,1}$ is $\ocat^1_1$.
On the other hand, if $[\mu] \in \H^2(N,k^\times)$ is degenerate on a non-trivial $N$, i.e. if there exists an $n \in N$, $n \neq 1$, such that $\mu(n,\cdot)\mu(\cdot,n)^{-1}=1$, then $\LL_{N,\mu}$ has at least two (non-isomorphic) $1$-dimensional simple objects, namely $\ocat_1^1$ and $\ocat^1_n$. This implies that all $\LL_{N,\mu} \in \mathbb{L}_0(G)$ with $N \neq 1$ must have a non-degenerate $\mu$. \\ Recall that an element in $\im(\Ind_{\Rep(G)})$ determined by an $a \in \Hom_{alt}(\widehat{Z},Z)$ sends $\ocat^\chi_1$ to $\ocat^\chi_{a(\chi')}$, where $\chi':Z \to k^\times$ is the $1$-dimensional character determined by $\chi$ restricted to $Z$. Given a pair $(N,[\mu])$, where $N$ is a normal central subgroup of $G$ and $\mu$ non-degenerate, we give an $a \in \Hom_{alt}(\widehat{Z},Z)$ such that $a(\LL_{1,1})=\LL_{N,\mu}$. Since $\mu$ is non-degenerate, $b:N \ito \widehat{N}$ defined by $b(n)(n')=\mu(n,n')\mu^{-1}(n',n)$ is bijective. We claim that $a \in \Hom_{alt}(\widehat{Z},Z)$ defined by $a(\chi) := b^{-1}(\chi|_N)$ does the job. Using that $N$ is a central normal subgroup, we see that $N = \im(a)$. Also, $\mu$ is indeed the cocycle determined by $a$, because $\chi(n') = b(n)(n') = \mu(n,n')\mu^{-1}(n',n)$ for $a(\chi)=n$ and all $n' \in N$. This proves that $a(\LL_{1,1})=\LL_{N,\mu}$. The action of $\Hom_{alt}(\widehat{Z},Z) \simeq \ZZ_n$ on $\mathbb{L}_0(G)$ is therefore indeed transitive. All elements of $\ZZ_n$ act differently on $\mathbb{L}_0(G)$ and therefore $\widetilde{\cE}_L \simeq \cE_L$. Also, in this case, the lazy elements $\cE_L \simeq \ZZ_n$ already give all of $\im(\Ind_{\Rep(G)}) \simeq \ZZ_n$. The only non-lazy terms come from $\im(\Ind_{\Vect_G})=\H^2(G,k^\times)$ in the stabilizer.
\end{proof} \subsection{Symmetric group $\SS_3$}~\\ For $G=\SS_3$ the following holds \begin{itemize} \item $\cVTildeL=\Out(\SS_3)=1$ \item $\cBTildeL=\hat{\SS}_3\wedge\hat{\SS}_3=\ZZ_2\wedge\ZZ_2=1$ \item $\cETildeL=Z(\SS_3)\wedge Z(\SS_3)=1$ \item $\cRL=1$, as there are no direct factors. \end{itemize} Hence our result implies that there are no \emph{lazy} braided autoequivalences of $D\SS_3\md\mod$. \noindent We now discuss the full Brauer-Picard group of $\SS_3$, which was computed in Sec. 8.1 of \cite{NR14}: We have the Lagrangian subcategories $\LL_{1,1},\LL_{\langle(123)\rangle,1}$ and stabilizer $\Out(\SS_3)\ltimes \H^2(\SS_3,k^\times)=1$. Hence $\Aut_{br}(D\SS_3\md\mod)=\ZZ_2$. \begin{claim} The decomposition we proposed in Question \ref{q_decomposition} is also true for the full Brauer-Picard group of $\SS_3$. More precisely \begin{align*} \BrPic(\Rep(\SS_3)) &=\im(\Ind_{\Vect_G})\cdot \im(\Ind_{\Rep(G)})\cdot \cR = 1\cdot1\cdot \ZZ_2 \end{align*} \begin{itemize} \item $\im(\Ind_{\Vect_G})=1$ \item $\im(\Ind_{\Rep(G)})=1$ \item Reflections $\ZZ_2$, generated by the partial dualizations $r_N$ on the semidirect decomposition $\SS_3=\ZZ_3\rtimes\ZZ_2$ with abelian normal subgroup $N=\ZZ_3$. More precisely, $r$ interchanges $\LL_{1,1},\LL_{\langle(123)\rangle,1}$; the action on $\ocat_1^\chi$ is made explicit in the proof. \end{itemize} \end{claim} \begin{proof} First, $\im(\Ind_{\Vect_G})$ is the stabilizer $\Out(\SS_3)\ltimes \H^2(\SS_3,k^\times)=1$. Second, \cite{Dav01} states that $\Aut_{mon}(\Rep(G))$ is a subset of the set of pairs consisting of an abelian normal subgroup and a \emph{non-degenerate} $G$-invariant cohomology class on this subgroup. The only nontrivial normal abelian subgroup of $\SS_3$ is cyclic, and hence there is no such pair; thus $\im(\Ind_{\Rep(\SS_3)})=1$.
We apply the general considerations in Section \ref{sec_nonlazyReflection}: The Clifford decomposition of the restrictions triv$|_N$, sgn$|_N$, ref$|_N$ to $N=\ZZ_3$ is $1,1,\zeta\oplus\zeta^2$ respectively. In the last case $\ZZ_2$ is acting by interchanging the summands (resp. by Galois action), the inertia group being trivial. We get $r(\ocat_1^{\mref})=\ocat_{(123)}^1$ and the partial dualization $r$ maps $$\LL_{1,1}=\left\langle\ocat_1^{\mathrm{triv}},\;\ocat_1^{\mathrm{sgn}},\;\ocat_1^{\mathrm{ref}}\right\rangle \longmapsto \LL_{\langle(123)\rangle,1}=\left\langle \ocat_1^{\mathrm{triv}},\;\ocat_1^{\mathrm{sgn}},\;\ocat_{(123)}^{1}\right\rangle$$ \end{proof} \subsection{Symmetric group $\SS_4$}~\\ For $G=\SS_4$ the following holds: \begin{itemize} \item $\cVTildeL=\Out(\SS_4)=1$ \item $\cBTildeL=\hat{\SS}_4\wedge\hat{\SS}_4=\ZZ_2\wedge\ZZ_2=1$ \item $\cETildeL=Z(\SS_4)\wedge Z(\SS_4)=1$ \item $\cRL=1$, as there are no direct factors. \end{itemize} Hence, our result implies that there are no \emph{lazy} braided autoequivalences of $D\SS_4\md\mod$.\\ The full Brauer-Picard group of $\SS_4$ was computed in Sec. 8.2 of \cite{NR14}. Denote the irreducible representations of $\SS_4$ by triv, sgn, ref2, ref3, ref3$\otimes$sgn, where $\mref2$ and $\mref3$ are the standard two- and three-dimensional irreducible representations of $\SS_4$. There is a unique abelian normal subgroup $N=\{1,(12)(34),(13)(24),(14)(23)\}\cong \ZZ_2\times \ZZ_2$. We have three Lagrangian subcategories $\LL_{1,1},\LL_{N,1},\LL_{N,\mu}$ for $N \cong \ZZ_2\times \ZZ_2$. The stabilizer is $\Out(\SS_4)\ltimes \H^2(\SS_4,k^\times)=\ZZ_2$. In particular, $\Aut_{br}(D\SS_4\md\mod)$ has order $6$. One checks that the nontrivial $[\beta] \in \H^2(\SS_4,k^\times)$ restricts to the nontrivial $[\mu]$ on $N$, hence $$[\beta]:\; \LL_{1,1},\LL_{N,1},\LL_{N,\mu} \longmapsto \LL_{1,1},\LL_{N,\mu},\LL_{N,1}$$ and by order and injectivity, we have $\Aut_{br}(D\SS_4\md\mod)\cong\SS_3$.
\begin{claim} The decomposition we proposed in Question \ref{q_decomposition} is also true for the full Brauer-Picard group of $\SS_4$. More precisely \begin{align*} \BrPic(\Rep(\SS_4)) &=\im(\Ind_{\Vect_G})\cdot \im(\Ind_{\Rep(G)})\cdot \cR\\ &=\ZZ_2\cdot \ZZ_2\cdot \ZZ_2 = \SS_3 \end{align*} \begin{itemize} \item $\im(\Ind_{\Vect_G})=\ZZ_2$ generated by the nontrivial cohomology class $[\beta]$ of $\SS_4$ with action on $\mathbb{L}_0(G)$ described above. Note that $[\beta]$ restricts to the unique nontrivial cohomology class $[\mu]$ on $N$. \item $\im(\Ind_{\Rep(G)})=\ZZ_2$ generated by the non-lazy monoidal autoequivalence $F$ of $\Rep(\SS_4)$, described in detail in Sect. 8 of \cite{Dav01}. $E_F\in \Aut_{br}(\DG\md\mod)$ interchanges $\LL_{1,1},\LL_{N,\mu}$. \item Reflections $\cR\cong \ZZ_2$, generated by the reflection $r=r_N$ on the semidirect decomposition $\SS_4=N\rtimes\SS_3$ with abelian kernel $N$. More precisely, $r$ interchanges $\LL_{1,1},\LL_{N,1}$. \end{itemize} \end{claim} \begin{proof} The stabilizer $\im(\Ind_{\Vect_G})$ and its action on $\mathbb{L}_0(G)$ has already been calculated. To compute $\im(\Ind_{\Rep(\SS_4)})$, note that $\Aut_{mon}(\Rep(\SS_4))$ has been explicitly computed in Sect. 8 of \cite{Dav01}: There is only one nontrivial normal abelian subgroup $N=\ZZ_2\times \ZZ_2$ and only one (up to coboundary) non-degenerate $2$-cocycle $\mu$ on $N$, which is $G$-invariant \emph{only} as a cohomology class $[\mu]$. In \cite{Dav01} it is shown that this gives rise to a (non-lazy) monoidal autoequivalence $F$ of $\Rep(\SS_4)$ such that $F(\mref3)=\mref3\otimes \sgn$, which corresponds to mapping $[(12)]$ to $[(1234)]$. This automorphism is visible as a symmetry of the character table. \\ We compute the action of $E_F\in \im(\Ind_{\Rep(\SS_4)})$ on all $\ocat_1^\chi$. First, $\chi=\triv,\sgn,\mref2$ restricted to $N$ are trivial representations.
Second, the possible images $$E_F(\ocat_1^{\mref3})=\ocat_g^\chi,\quad E_F(\ocat_1^{\mref3\otimes \sgn})=\ocat_{g'}^{\chi'}$$ belong to the $G$-conjugacy classes in $N$, i.e. $g,g' =1$ or $g,g' =(12)(34)$. They have to fulfill the characterization outlined in the general considerations above, namely: $$F(\mref)=\mref\otimes \sgn\stackrel{!}{=}\Ind_{\Cent(g)}^G(\chi) \quad\quad F(\mref\otimes \sgn)=\mref\stackrel{!}{=}\Ind_{\Cent(g')}^G(\chi')$$ Assume $g=g'=1$. This implies that $E_F(\LL_{1,1})=\LL_{1,1}$ and thus $E_F$ is in the stabilizer, which is $\Out(\SS_4)\ltimes \H^2(\SS_4,k^\times)$. This is not possible, since $E_F$ acts nontrivially on objects and does not come from an automorphism of $G$. Therefore we take $g,g' =(12)(34)$ and consider $$F(\mref3)=\mref3\otimes \sgn\stackrel{!}{=}\Ind_{\Cent(12)(34)}^G(\chi) \quad\quad F(\mref3 \otimes \sgn)=\mref 3\stackrel{!}{=}\Ind_{\Cent(12)(34)}^G(\chi')$$ where $\Cent(12)(34)=\langle (12),(13)(24)\rangle\cong\DD_4$. The character table quickly returns the only possible $\chi,\chi'$: $$E_F(\ocat^{\mref3}_1)=\ocat_{(12)(34)}^{(--)}\qquad E_F(\ocat^{\mref3 \otimes \sgn}_1)=\ocat_{(12)(34)}^{(+-)}$$ where $(++),(+-),(-+),(--)$ are the four $1$-dimensional irreducible representations of $\DD_4 = \langle (12),(13)(24)\rangle$, where the first generator acts by the first $\pm 1$ in the bracket and the second generator by the second $\pm 1$. We see that $\chi|_N$ and $\chi'|_N$ are nontrivial, hence in $\LL_{N,\mu}$ for $\mu$ nontrivial, and $$E_F:\; \LL_{1,1}=\left\langle\ocat_1^{\triv},\ocat_1^{\sgn},\ocat_1^{\mref2}, \ocat_1^{\mref3},\ocat_1^{\mref3\otimes \sgn}\right\rangle$$ $$\longmapsto \LL_{N,\mu}=\left\langle\ocat_1^{\triv},\ocat_1^{\sgn},\ocat_1^{\mref2}, \ocat_{(12)(34)}^{(--)},\ocat_{(12)(34)}^{(+-)}\right\rangle$$ We finally calculate the action of the partial dualization $r$ on the decomposition $\SS_4=N \rtimes \SS_3$.
The general considerations in Section \ref{sec_nonlazyReflection} imply the following for the images $r(\ocat_1^\chi)$: Since $\chi=\triv,\sgn,\mref2$ restricted to $N$ are trivial, these are fixed. For $\chi=\mref3,\chi'=\mref3\otimes \sgn$ the restrictions are easily determined by the character table to be $$\chi|_N=\chi'|_N=(-+)\oplus (+-)\oplus (--)$$ which returns via $\delta:kN\to k^N$ precisely the conjugacy class $[(12)(34)]$, and the inertia subgroup is $I=N\rtimes \langle(12)\rangle$. To see the action on the centralizer, we restrict the representations $\chi,\chi'$ to $I$ and extend them trivially to $I=\Cent(12)(34)=\langle (12),(13)(24)\rangle\cong\DD_4$, yielding finally: $$r(\ocat_1^{\mref3})=\ocat_{(12)(34)}^{(++)} \qquad r(\ocat_1^{\mref3\otimes \sgn})=\ocat_{(12)(34)}^{(-+)}$$ $$r:\; \LL_{1,1}=\left\langle\ocat_1^{\triv},\ocat_1^{\sgn},\ocat_1^{\mref2}, \ocat_1^{\mref3},\ocat_1^{\mref3\otimes \sgn}\right\rangle$$ $$\longmapsto \LL_{N,1}=\left\langle\ocat_1^{\triv},\ocat_1^{\sgn},\ocat_1^{\mref2}, \ocat_{(12)(34)}^{(++)},\ocat_{(12)(34)}^{(-+)}\right\rangle$$ \end{proof} \noindent{\sc Acknowledgments}: We are grateful to C. Schweigert for many helpful discussions. The authors are partially supported by the DFG Priority Program SPP 1388 ``Representation Theory'' and the Research Training Group 1670 ``Mathematics Inspired by String Theory and QFT''. S.L. is currently on a research stay supported by DAAD PRIME, funded by BMBF and EU Marie Curie Action.
\section{Introduction} Complex networks have gained attention over the past decade~\cite{boccaletti_complex_2006}. Especially with the rise of social media, social networks of unprecedented size became available, which contributed to the establishment of the computational social sciences~\cite{watts_twenty-first_2007,lazer_computational_2009}. But networks are also common in disciplines such as biology~\cite{guimera_origin_2010} and neurology~\cite{betzel_multi-scale_2013}. Many of these networks share various common characteristics. They often have skewed degree distributions~\cite{barabasi_scale-free_2009}, show a high clustering and a low average path length~\cite{watts_collective_1998}. Nodes often cluster together in dense groups, usually called communities. Nodes in a community often share other characteristics: metabolites show related functions~\cite{ravasz_hierarchical_2002} and people have a similar background~\cite{traud_social_2012}. Revealing the community structure can thus help to understand the network~\cite{fortunato_community_2010}. Modularity~\cite{newman_finding_2004} remains one of the most popular measures in community detection, even though it is flawed. There have been many algorithms suggested for optimizing modularity. The original algorithm~\cite{newman_finding_2004} created a full dendrogram and used modularity to decide on a cutting point. It was quite slow, running in $\O(n^2 m)$, where $n$ is the number of nodes and $m$ the number of links. Many algorithms were quickly introduced to optimize modularity, such as extremal optimization~\cite{duch_community_2005}, simulated annealing~\cite{reichardt_statistical_2006,guimera_functional_2005}, spectral methods~\cite{newman_finding_2006}, greedy methods~\cite{clauset_finding_2004}, and many other methods~\cite{fortunato_community_2010}. One of the fastest and most effective algorithms is the Louvain algorithm~\cite{blondel_fast_2008}, believed to be running in $\O(m)$.
It has been shown to perform very well in comparative benchmark tests~\cite{lancichinetti_community_2009}. The algorithm is largely independent of the objective function to optimize, and as such has been used for different methods~\cite{traag_narrow_2011,rosvall_mapping_2010,rosvall_multilevel_2011,ronhovde_local_2010,evans_line_2009,lancichinetti_finding_2011}. We first briefly describe the algorithm, and introduce the terminology. We then describe our simple improvement, which we call the random neighbor Louvain, and argue why we expect it to function well. We derive estimates of the runtime complexity, and obtain $\O(m)$ for the original Louvain algorithm, in line with earlier results, and $\O(n \log \langle k \rangle)$ for our improvement, where $\langle k \rangle$ is the average degree. This makes it one of the fastest algorithms for community detection to optimize an objective function. Whereas the original algorithm runs in linear time with respect to the number of edges, the random neighbor algorithm is nearly linear with respect to the number of nodes. Finally, we show on benchmark tests and some real networks that this minor adjustment indeed leads to reductions in running time, without losing much quality. These gains are especially visible for modularity, but less clear for other measures such as significance and surprise. \section{Louvain algorithm} Community detection tries to find a ``good'' partition for a certain graph. In other words, the input is some graph $G=(V,E)$ with $n=|V|$ nodes and $m=|E|$ edges. Each node $i$ has $k_i$ neighbors; this number is called its degree, and on average it is $\langle k \rangle = \frac{2m}{n}$. The output is some partition $\mathcal{V} = \{V_1, V_2, \ldots, V_r\}$, where each $V_c \subseteq V$ is a set of nodes we call a community. We work with non-overlapping communities, such that $V_c \cap V_d = \emptyset$ for all $c \neq d$, and every node has to be in a community, so that $\bigcup_c V_c= V$.
Alternatively, we denote by $\sigma_i$ the community of node $i$, such that $\sigma_i = c$ if (and only if) $i \in V_c$. Both $\sigma$ and $\mathcal{V}$ may be used interchangeably to refer to the partition. If the distinction is essential, we will explicitly state this. The Louvain algorithm is suited for optimizing a single objective function that specifies some quality of a partition. We denote such an objective function with $\mathcal{H}$, which should be maximized. We use $\mathcal{H}(\sigma)$ and $\mathcal{H}(\mathcal{V})$ to mean the same thing. There are various choices for such an objective function, such as modularity~\cite{newman_finding_2004}, Potts models~\cite{reichardt_statistical_2006,ronhovde_local_2010,traag_narrow_2011}, significance~\cite{traag_significant_2013}, surprise~\cite{traag_detecting_2015}, infomap~\cite{rosvall_multilevel_2011} and many more. We will not specify any of the objective functions here, nor shall we discuss their (dis)advantages, as we focus on the Louvain algorithm as a general optimization scheme. Briefly, the Louvain algorithm works as follows. The algorithm initially starts out with a partition where each node is in its own community (i.e. $\sigma_i = i$), which is the initial partition. So, initially, there are as many communities as there are nodes. The algorithm moves around nodes from one community to another, to try to improve $\mathcal{H}(\sigma)$. We denote by $\Delta \mathcal{H}(\sigma_i \mapsto c)$ the change in $\mathcal{H}$ when moving node $i$ to another community $c$. In particular, $\Delta \mathcal{H}(\sigma_i \mapsto c) = \mathcal{H}(\sigma') - \mathcal{H}(\sigma)$ where $\sigma'_j = \sigma_j$ for all $j \neq i$ and $\sigma'_i = c$, implying that if $\Delta \mathcal{H}(\sigma_i \mapsto c) > 0$, the objective function $\mathcal{H}$ is improved.
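To make the definition concrete, the sketch below computes $\Delta \mathcal{H}(\sigma_i \mapsto c)$ by brute force for modularity on a toy graph. This is only an illustration with our own function names; an actual implementation (including the paper's) computes the difference incrementally rather than by recomputing $\mathcal{H}$ twice:

```python
def modularity(edges, sigma):
    """H(sigma): fraction of intra-community edges minus the expected fraction."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    intra = sum(1 for u, v in edges if sigma[u] == sigma[v]) / m
    # Sum over communities of (d_c / 2m)^2, with d_c the total degree in c.
    d_c = {}
    for u, k in deg.items():
        d_c[sigma[u]] = d_c.get(sigma[u], 0) + k
    expected = sum((d / (2 * m)) ** 2 for d in d_c.values())
    return intra - expected

def delta_H(edges, sigma, i, c):
    """Delta H(sigma_i -> c) = H(sigma') - H(sigma), by brute-force recomputation."""
    sigma_new = dict(sigma)
    sigma_new[i] = c
    return modularity(edges, sigma_new) - modularity(edges, sigma)

# Two triangles joined by a single edge; moving node 3 out of its own
# triangle into the other community decreases modularity.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
two_triangles = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(round(modularity(edges, two_triangles), 4))   # 0.3571
print(delta_H(edges, two_triangles, 3, 0) < 0)      # True
```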
At some point, the algorithm can no longer improve $\mathcal{H}$ by moving around individual nodes, at which point it aggregates the graph, and reiterates on the aggregated graph. We repeat this procedure as long as we can improve $\mathcal{H}(\sigma)$. The outline of the algorithm is displayed in Algorithm~\ref{algo:louvain}. \begin{algorithm}[t] \begin{algorithmic} \Function{Louvain}{Graph $G$} \State $\sigma_i \gets i$. \Comment{Initial partition} \State $\sigma' \gets$ \Call{MoveNodes}{$G$} \Comment{Initial move nodes} \While{$\mathcal{H}(\sigma') > \mathcal{H}(\sigma)$} \State $\sigma \gets \sigma'$ \State $G \gets$ \Call{Aggregate}{$G, \sigma$} \State $\Sigma \gets $ \Call{MoveNodes}{$G$} \Comment{Move nodes} \State $\sigma'_i \gets \Sigma_{\sigma'_i}$ for all $i$ \Comment{Correct $\sigma'$ according to $\Sigma$} \EndWhile \State \Return $\sigma'$ \EndFunction \item[] \Function{MoveNodes}{Graph $G$} \State $\sigma_i \gets i$ for $i=1,\ldots,|V(G)|$. \Comment{Initial partition} \State $q \gets -\infty$ \While{$\mathcal{H}(\sigma) > q$} \State $q = \mathcal{H}(\sigma)$ \For{random $v \in V(G)$} \State $c \gets $ \Call{SelectCommunity}{$v$} \If{$\Delta \mathcal{H}(\sigma_v \mapsto c) > 0$} \State $\sigma_v \gets c$. \EndIf \EndFor \EndWhile \State \Return $\sigma$ \EndFunction \item[] \Function{Aggregate}{Graph $G$, Partition $\sigma$} \State $A \gets$ \Call{Adjacency}{$G$} \State $A'_{cd} \gets \sum_{ij} A_{ij} \delta(\sigma_i,c)\delta(\sigma_j,d)$ \State \Return $A'$ \EndFunction \end{algorithmic} \caption{Louvain method. The algorithm loops over all nodes and moves nodes to alternative communities. When no more improvement can be made, it aggregates the graph and reiterates the procedure. } \label{algo:louvain} \end{algorithm} \begin{algorithm}[t] \begin{algorithmic} \item[]Select best neighbor community \Function{SelectCommunity}{Node $v$} \State $\delta \gets -\infty$. \State $c \gets \sigma_v$.
\State $C \gets \{\sigma_u \mid (uv) \in E(G)\}$ \Comment{Neighbor communities.} \For{Community $c' \in C$} \If{$\Delta\mathcal{H}(\sigma_v \mapsto c') > \delta$} \State $\delta \gets \Delta\mathcal{H}(\sigma_v \mapsto c')$ \State $c \gets c'$ \EndIf \EndFor \State \Return $c$ \EndFunction \item[] \item[]Select random neighbor community \Function{SelectCommunity}{Node $v$} \State \Return $\sigma_u$ for a uniformly random neighbor $u$ of $v$. \EndFunction \end{algorithmic} \caption{Select the best or a random neighbor community.} \label{algo:selectcommunity} \end{algorithm} There are two key procedures: \textsc{MoveNodes} and \textsc{Aggregate}. The \textsc{MoveNodes} procedure displayed in Algorithm~\ref{algo:louvain} loops over all nodes (in random order), and considers moving them to an alternative community. This procedure relies on \textsc{SelectCommunity} to select a (possibly) better community $c$. Only if the improvement $\Delta \mathcal{H}(\sigma_v \mapsto c) > 0$ do we actually move the node to community $c$. The \textsc{Aggregate} procedure may depend on the exact quality function $\mathcal{H}$ used. In particular, the aggregate graph $G'$ should be constructed according to $\sigma$, such that $\mathcal{H}(G', \sigma') = \mathcal{H}(G, \sigma)$, where $\sigma'_i = i$ is the initial partition. That is, the quality of the initial partition $\sigma'$ of the aggregated graph $G'$ should be equal to the quality of the partition $\sigma$ of the original graph $G$. In Algorithm~\ref{algo:louvain} a version is displayed that is suited for modularity. Other methods may require additional variables to be used when aggregating the graph (e.g.~\cite{traag_narrow_2011}). The only procedure that remains to be specified is \textsc{SelectCommunity}. In the original Louvain algorithm, this procedure considers all possible neighboring communities, and then greedily selects the best community. It is summarized in Algorithm~\ref{algo:selectcommunity}.
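In Python, the two variants of \textsc{SelectCommunity} might look as follows. This is a sketch with our own names; `delta_H(v, c)` stands for $\Delta\mathcal{H}(\sigma_v \mapsto c)$ and is assumed to be supplied by the quality function:

```python
import random

def select_best_community(v, neighbors, sigma, delta_H):
    """Original Louvain: scan every neighboring community, greedily keep the best."""
    best_c, best_gain = sigma[v], float("-inf")
    for c in {sigma[u] for u in neighbors[v]}:
        gain = delta_H(v, c)
        if gain > best_gain:
            best_gain, best_c = gain, c
    return best_c

def select_random_community(v, neighbors, sigma, rng=random):
    """Random neighbor Louvain: the community of one uniformly chosen neighbor,
    so community c is proposed with probability k_v(c) / k_v."""
    return sigma[rng.choice(neighbors[v])]

# Toy example: node 0 with three neighbors, two of them in community 1.
neighbors = {0: [1, 2, 3]}
sigma = {0: 0, 1: 1, 2: 1, 3: 2}
gains = {0: 0.0, 1: 0.3, 2: 0.1}
print(select_best_community(0, neighbors, sigma, lambda v, c: gains[c]))  # 1
```

The greedy variant does work proportional to the degree of $v$, while the random variant does constant work per node, which is the source of the speedup discussed below.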
We created a new flexible and fast implementation of the Louvain algorithm in \texttt{C++} for use in \texttt{python} using \texttt{igraph}. The implementation of the algorithm itself is quite detached from the objective function to optimize. In particular, all that is required to implement a new objective function is the difference when moving a node $\Delta \mathcal{H}$ and the quality function $\mathcal{H}$ itself (although the latter is not strictly necessary). This implementation is available open source from~\texttt{GitHub}\footnote{\url{https://github.com/vtraag/louvain-igraph}} and~\texttt{PyPi}\footnote{\url{https://pypi.python.org/pypi/louvain}}. \section{Improvement} \begin{figure*}[t] \begin{center} \includegraphics{clique} \end{center} \caption{\textbf{Clique.} The original Louvain algorithm considers all communities, which leads to $E(t) = \O(n_c^2)$ operations for putting all $n_c$ nodes of a clique in a single community. The improvement considers only random neighbors, which takes only $E(t) = \O(n_c \log n_c)$ operations to identify the whole clique as a community. In (a) we show the number of operations in a simulation, with the markers indicating the simulated number of operations, and the solid lines the analytically derived estimates. In (b)--(e) we show the actual time used when optimizing the indicated quality functions for a clique for the different objective functions. The solid lines in (b)--(e) denote best fits to $n_c^2$ and $n_c \log n_c$ in log-space. } \label{fig:clique} \end{figure*} Not surprisingly, the Louvain algorithm generally spends most of its time contemplating alternative communities. While profiling our implementation, we found that it spends roughly $95\%$ of the time calculating the difference $\Delta \mathcal{H}(\sigma_v \mapsto c)$ in Algorithm~\ref{algo:selectcommunity}. Much of this time is spent moving around nodes for the first time. 
With an initial partition where each node is in its own community, almost any neighboring community would be an improvement. Moreover, when the algorithm has progressed a bit, many neighbors likely belong to the same community. We therefore suggest that instead of considering all neighboring communities, we simply select a random neighbor, and consider that community (as stated in Algorithm~\ref{algo:selectcommunity}), which we call the random neighbor Louvain. Notice that the selection of a random neighbor makes the greedy Louvain algorithm less greedy and thus more explorative. Indeed, when also accepting moves with some probability depending on the improvement (possibly also accepting degrading moves), the algorithm comes close to resembling simulated annealing~\cite{reichardt_statistical_2006,guimera_functional_2005}. However, simulated annealing is rather slow for community detection~\cite{lancichinetti_community_2009}, so we do not explore that direction further, since we are interested in speeding up the algorithm. There are several advantages to the selection of a random neighbor. First of all, it is likely to choose a relatively ``good'' community. In general, a node should be in a community to which relatively many of its neighbors belong as well (although this of course depends on the exact quality function). By selecting a community from among its neighbors, there is a good chance that a relatively good community is picked. In particular, if node $i$ has $k_i(c)$ neighbors in community $c$, the probability that community $c$ will be considered for moving is $k_i(c)/k_i$. The probability of selecting a community is thus proportional to the number of neighbors in that community. Bad communities (with relatively few neighbors) are less frequently sampled, so that the algorithm focuses more on the promising communities (those with relatively many neighbors).
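The proposal probability $k_i(c)/k_i$ can be spelled out explicitly; below is a tiny illustrative computation with made-up neighbor data (not from the paper's benchmarks):

```python
from collections import Counter

# Hypothetical node with 5 neighbors: 3 in community 'a', 2 in community 'b'.
neighbor_communities = ["a", "a", "a", "b", "b"]

k_i = len(neighbor_communities)
counts = Counter(neighbor_communities)  # k_i(c) for each neighboring community c

# Probability that community c is proposed by picking a uniformly random neighbor.
proposal_prob = {c: k_ic / k_i for c, k_ic in counts.items()}
print(proposal_prob)  # {'a': 0.6, 'b': 0.4}

# Picking a community uniformly from the *set* of neighboring communities
# instead would give each community the same probability.
uniform_prob = {c: 1 / len(counts) for c in counts}
print(uniform_prob)   # {'a': 0.5, 'b': 0.5}
```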
Moreover, when considering the initial partition of each node in its own community, almost any move would improve the quality function $\mathcal{H}$. The difference between alternative communities in this early stage is likely to be marginal. Any move that puts two nodes in the same community is probably better than leaving a node in its own community. Such moves quickly reduce the number of communities from roughly $n$ to $n/2$. But instead of considering every neighboring community as in the original Louvain algorithm, which takes roughly $\O(\langle k \rangle)$, our random neighbor Louvain algorithm only considers a single random neighbor, which takes constant time $\O(1)$. So, for the first few iterations, Louvain runs in $\O(n \langle k \rangle) = \O(m)$, whereas selecting a random neighbor runs in $\O(n)$. \begin{figure}[t] \begin{center} \includegraphics{performance_ratio} \end{center} \caption{\textbf{Network size.} We here show the ratio of the time and of the quality (i.e. $\mathcal{H}$) of the uncovered partitions by the original Louvain algorithm and the random neighbor Louvain. The random neighbor Louvain algorithm is $2$--$3$ times faster than the original Louvain algorithm, and at some points even faster, for clear communities in (a) when using $\mu = 0.1$. However, for less clear communities at $\mu = 0.8$, as displayed in (b), the optimization of significance and surprise is not faster when using the random neighbor Louvain. The random neighbor Louvain uncovers almost the same quality as the original version for the most part, as shown in (c) and (d). However, especially for surprise, the quality is adversely affected by the random neighbor Louvain for $\mu=0.1$, shown in (c). The results are based on benchmark graphs with communities of size $n_c = 1\,000$ and an average degree of $\langle k \rangle = 15$.
} \label{fig:performance} \end{figure} \begin{figure}[t] \begin{center} \includegraphics{performance_comm_size} \end{center} \caption{\textbf{Effect of community size.} Results in (a) show that the speedup ratio increases with the community size for $\mu = 0.1$. Surprise and significance find smaller substructures within large communities as seen in (c). This is also when the improvement starts to deteriorate. The results for large communities in (a) and (c) are very similar to the situation when $\mu = 0.8$ in (b) and (d), which resembles a random graph more closely. The speedup ratio in (b) corresponds to this: the speedup is rather large for modularity, while it is much lower for surprise and significance. In (e) and (f) we show that heterogeneity in the community sizes barely impacts the speedup ratio. In that case we generate LFR benchmark graphs with smallest community size $n_c = 10$, while the maximum community size varies from $2$ to $10$ times as large. We use $n = 10^5$ and $\langle k \rangle = 10$ for both benchmarks. } \label{fig:performance_comm_size} \end{figure} Notice there is a big difference between (1) selecting a random neighbor and then its community and (2) selecting a random community from among the neighboring communities. The first method selects a community proportional to the number of neighbors that are in that community, while the second method selects a community uniformly from the set of neighboring communities. Consider for example a node that is connected to two communities, and has $k_i - 1$ neighbors in the first community and only $1$ in the other community. When selecting a community of a random neighbor, the probability that the good community is considered is $1 - \frac{1}{k_i}$, while the probability is only $\frac{1}{2}$ when selecting a random community. Secondly, random selection of a neighbor increases the likelihood of quick convergence.
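Returning to the example above: the gap between the two selection strategies is easy to confirm with a quick Monte Carlo check (our own illustration, here with $k_i = 10$):

```python
import random

def by_neighbor(rng, neighbor_comms):
    """Community of a uniformly random neighbor."""
    return rng.choice(neighbor_comms)

def by_community(rng, neighbor_comms):
    """Uniformly random community among the distinct neighboring ones."""
    return rng.choice(sorted(set(neighbor_comms)))

# Node with k_i = 10 neighbors: 9 in the "good" community, 1 in the "bad" one.
neighbor_comms = ["good"] * 9 + ["bad"]
rng = random.Random(0)
trials = 100_000
p_neighbor = sum(by_neighbor(rng, neighbor_comms) == "good"
                 for _ in range(trials)) / trials   # ~ 1 - 1/k_i = 0.9
p_community = sum(by_community(rng, neighbor_comms) == "good"
                  for _ in range(trials)) / trials  # ~ 1/2
```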
The probability that node $i$ is selected as a random neighbor is roughly $k_i/2m$, resembling preferential attachment~\cite{barabasi_emergence_1999} in a certain sense. Hubs are thus more likely to be chosen as a candidate community. Since hubs connect many vertices, there is a considerable probability that two nodes consider the same hub. If these two (or more) nodes (and the hub) should in fact belong to the same community, chances are high that both nodes and the hub quickly end up in the same community. As an illustration of this advantage, consider a hubs-and-spokes structure, with one central hub and spokes that are each connected to their neighboring spokes (and always to the hub). So, any spoke node $i$ is connected to nodes $i - 1$ and $i + 1$ and to the central hub, node $n$. Consider for simplicity that the nodes are considered in order and that every move will be advantageous. The probability that the first node will move to community $n$ is $p_1 = \frac{1}{3}$. The second node will move to community $n$ if it chooses node $n$ immediately (which happens with probability $\frac{1}{3}$), or if it chooses node $1$, and node $1$ moved to community $n$, so that $p_2 = \frac{1}{3} + p_1\frac{1}{3}$. Similarly, for the other nodes $p_i = \frac{1}{3} + p_{i-1} \frac{1}{3} = \sum_{j=1}^i \left(\frac{1}{3}\right)^j$ which goes to $\frac{1}{2}$ for $n \to \infty$. This is higher than when just considering a random neighbor community. In that case, the probability the first node will move to community $n$ is still $\frac{1}{3}$. But for the second node, if node $1$ moved to community $n$, only two communities are left: $n$ and $3$. In that case, community $n$ is chosen with probability $\frac{1}{2}$. If node $1$ did not move to community $n$, then node $2$ will move to community $n$ with probability $\frac{1}{3}$.
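The spoke recurrence above is easily verified numerically; the snippet below (a sanity check of ours, not part of the paper) iterates $p_i = \frac{1}{3} + \frac{1}{3} p_{i-1}$ and confirms the limit of $\frac{1}{2}$:

```python
# Iterate p_i = 1/3 + p_{i-1}/3 starting from p_1 = 1/3.
p = 1 / 3
for _ in range(200):          # contraction factor 1/3, so 200 iterations
    p = 1 / 3 + p / 3         # are far more than enough to converge
limit_neighbor = p            # tends to 1/2

# The same limit via the geometric sum p_i = sum_{j=1}^{i} (1/3)^j.
geometric = sum((1 / 3) ** j for j in range(1, 60))
```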
In general, node $i$ moves to community $n$ with probability $p_i = p_{i-1} \frac{1}{2} + (1 - p_{i-1}) \frac{1}{3} = p_{i-1}\frac{1}{6} + \frac{1}{3}$. Working out the recurrence, we obtain that $p_i = \frac{1}{3} \sum_{j=0}^{i-1} \left(\frac{1}{6}\right)^j$, which tends to $\frac{2}{5}$. Selecting the community of a random neighbor thus works better than selecting a random community from among the neighbors. Selecting the community of a random node is even worse. In that case, the probability is $p_i = \frac{1}{n}(1 + \frac{1}{n})^{i-1}$, which tends to $0$ for $n \to \infty$. In short, selecting the community of a random neighbor is likely to choose a new community that will also be chosen by other nodes. In summary, selecting a random neighbor should work well for two reasons. First, it tends to focus on communities that are ``good''. Secondly, it should help convergence because of the higher likelihood of selecting hubs. In particular, the evaluation of $\textsc{SelectCommunity}$ in the random neighbor Louvain takes constant time $\O(1)$, whereas evaluating all communities takes about $\O(\langle k \rangle)$. However, one essential question is whether $\textsc{SelectCommunity}$ is evaluated so much more often in the random neighbor Louvain that this benefit is cancelled out. \begin{figure*}[t] \begin{center} \includegraphics{real_networks} \end{center} \caption{\textbf{Empirical network results.} The random neighbor Louvain speeds up the optimization of the objective function for most empirical networks. For the hyperlink network from Google it does not work for any method, while the adolescent health dataset poses problems for optimizing CM modularity. The quality remains relatively similar compared to the original, especially for modularity as shown in (b). For significance and surprise the differences are more pronounced.
} \label{fig:real_networks} \end{figure*} \begin{table} \sisetup{ table-number-alignment=center, group-minimum-digits=3} \begin{tabular}{ l S[table-format=7.1] S[table-format=8.1] S[table-format=4.2]} \toprule Network & {$n$} & {$m$} & {$\langle k \rangle$} \\ \midrule Health & 2539 & 12969 & 10.22 \\ Brightkite & 58228 & 214078 & 7.35 \\ Facebook & 63731 & 817035 & 25.64 \\ Author Collaboration & 22908 & 2673133 & 233.38 \\ Web (Google) & 875713 & 5105039 & 11.66 \\ Web (Berk./Stan.) & 685230 & 7600595 & 22.18 \\ \bottomrule \end{tabular} \caption{\textbf{Empirical network overview.}} \label{tab:real_networks} \end{table} To study this question, let us consider a ring of $r$ cliques of $n_c$ nodes each. The cliques (which are complete subgraphs containing $\binom{n_c}{2}$ links) are connected to one another only by a single link in a circular fashion (i.e. clique $i$ is connected only to cliques $i-1$ and $i+1$). Most methods tend to find the cliques (or sets of multiple cliques due to the resolution limit~\cite{fortunato_resolution_2007,traag_narrow_2011}). Indeed, it is one of the best possible community structures: we cannot add any more internal edges, nor can we delete any external edges without disconnecting the graph. However, for the runtime complexity, the external edges will play only a marginal role. We may therefore simply assume we will work with $r$ disconnected cliques of size $n_c$. Although the actual runtime will deviate from this, it should provide a reasonable runtime estimate for relatively ``clear'' communities, and as such provide a lower bound for more difficult communities. The core question is thus how quickly both the original and the random neighbor Louvain run on cliques. We will assume the clique should become a single community, which is likely to be the case for most methods.
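The ring of cliques is straightforward to construct explicitly; the helper below (our own code, with hypothetical names) builds it as an adjacency structure and checks the expected link count of $r\binom{n_c}{2} + r$:

```python
def ring_of_cliques(r, n_c):
    """Adjacency sets for r cliques of n_c nodes each, with neighboring
    cliques joined by a single link in a circular fashion."""
    adj = {v: set() for v in range(r * n_c)}
    for c in range(r):
        nodes = range(c * n_c, (c + 1) * n_c)
        for u in nodes:                      # complete subgraph within clique c
            for v in nodes:
                if u != v:
                    adj[u].add(v)
        # single inter-clique link from clique c to clique c+1 (mod r)
        u, v = c * n_c, ((c + 1) % r) * n_c + 1
        adj[u].add(v)
        adj[v].add(u)
    return adj

g = ring_of_cliques(r=5, n_c=4)
m = sum(len(nb) for nb in g.values()) // 2
# r * binom(n_c, 2) internal links plus r connecting links:
# 5 * 6 + 5 = 35 for this small example.
```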
Additionally, we assume $\Delta \mathcal{H} > 0$ only if a node is moved to a larger community, which is likely to be the case for most methods as nodes in a clique have more links to larger communities. The complexity of the original Louvain implementation is simple to evaluate in this case. The first node will be moved to one of its neighbors, an operation that costs $n_c$ evaluations. The second node has only $n_c-1$ evaluations to make, since the community of the first node disappeared. If we continue in this fashion, the total number of evaluations $t$ is then $\sum_{i = 1}^{n_c} (n_c - i + 1) = \frac{n_c(n_c + 1)}{2} = \O(n_c^2)$. The analysis of the expected runtime of the random neighbor Louvain is more difficult (see Appendix~\ref{sec:clique} for more details). However, we can provide a lower bound that serves as a rough estimate. Let us again denote by $t$ the total number of operations before the whole clique is identified as a single community. We divide this in different phases of the algorithm, where each phase $i$ runs from the time where there are $n_c - i + 1$ communities, until there are $n_c - i$ communities. In phase $1$ we thus start out with $n_c$ communities, and in the next phase there are only $n_c - 1$ communities. If we denote by $t_i$ the number of operations in phase $i$, then by linearity $E(t) = E(\sum_i t_i) = \sum_i E(t_i)$. Notice that we will only leave phase $i$ whenever a community of size $1$ disappears. The probability that a community of size $1$ disappears is $\frac{n_c - 1}{n_c}$, since it will join any other community (except itself). There are at most $n_c - i + 1$ communities of size $1$ in phase $i$, so that the probability a community of size $1$ is selected is bounded above by $\frac{n_c - i + 1}{n_c}$. In fact, such a state is also relatively likely, as the community size distribution tends to become more skewed than a more uniform distribution due to the preferential attachment on the basis of the community sizes.
The expected number of operations in phase $i$ is then bounded below by $\frac{n_c}{n_c - i + 1}$, and the total expected number of operations is bounded below by \begin{align} E(t) &\geq n_c \sum_{i = 1}^{n_c} \frac{1}{n_c - i + 1} \\ &= n_c \sum_{i = 1}^{n_c} \frac{1}{i} = \O(n_c \log n_c) \end{align} However, this lower bound in fact gives a very accurate estimate of the expected running time, as seen in Fig.~\ref{fig:clique}. Whereas the original Louvain algorithm runs in $\O(n_c^2)$, the random neighbor version only uses $\O(n_c \log n_c)$ to put all nodes of a clique in a single community. We used an explicit simulation of this process to validate our theoretical analysis. Running the actual algorithms on cliques yields similar results (Fig.~\ref{fig:clique}). To get a rough idea of the overall running time, let us translate these results back to the ring of cliques. In that case, we have $r$ cliques of $n_c$ nodes. The runtime for the original Louvain method is $\O(n_c^2)$ for each clique, so that the total runtime is about $\O(rn_c^2)$. One factor of $n_c$ comes from running over $n_c$ nodes, while the other factor comes from running over $\langle k \rangle \approx n_c$ neighbors. Since $rn_c = n$, and $n \langle k \rangle = m$, we thus obtain an overall running time of Louvain of about $\O(rn_c^2) = \O(n \langle k \rangle) = \O(m)$, similar to earlier estimates~\cite{blondel_fast_2008,fortunato_community_2010}. Following the same idea, we obtain an estimate of roughly $\O(n \log \langle k \rangle)$ for the runtime of the random neighbor Louvain algorithm. So, whereas the original algorithm runs in roughly linear time with respect to the number of edges, the random neighbor algorithm runs in nearly linear time with respect to the number of nodes. Empirical networks are usually rather sparse, so that the difference between $\langle k \rangle$ and $\log \langle k \rangle$ is usually not that large.
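Such a simulation of the idealized process on a single clique can be sketched as follows (our own abstraction: a node is moved whenever the target community is at least as large as its own, mirroring the assumption that $\Delta \mathcal{H} > 0$ for moves to larger communities):

```python
import random

def clique_ops(n_c, rng):
    """Count SelectCommunity evaluations until the clique is one community."""
    comm = list(range(n_c))              # community label of each node
    sizes = {c: 1 for c in range(n_c)}   # size of each community
    ops = 0
    while len(sizes) > 1:
        ops += 1
        node = rng.randrange(n_c)
        nbr = rng.randrange(n_c - 1)     # in a clique, all other nodes
        if nbr >= node:                  # are neighbors of `node`
            nbr += 1
        a, b = comm[node], comm[nbr]
        if a != b and sizes[b] >= sizes[a]:   # move to a (weakly) larger community
            sizes[a] -= 1
            if sizes[a] == 0:
                del sizes[a]
            comm[node] = b
            sizes[b] += 1
    return ops

rng = random.Random(1)
n_c = 64
mean_ops = sum(clique_ops(n_c, rng) for _ in range(50)) / 50
lower = n_c * sum(1 / i for i in range(1, n_c + 1))   # n_c * H(n_c), the bound above
quadratic = n_c * (n_c + 1) / 2                       # evaluations of original Louvain
# mean_ops stays in the n_c log n_c regime, well below the quadratic count.
```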
Still, it is quite surprising to find such an improvement for such a minor adjustment. \section{Experimental Results} We use benchmark networks and real networks to show that the random neighbor improvement also reduces the runtime in practice. These benchmark networks contain a planted partition, which we then try to uncover using both the original and the random neighbor Louvain algorithm. An essential role is played by the mixing parameter $\mu$, the probability that a link falls outside of the planted community. For low $\mu$ it is thus quite easy to identify communities, while for high $\mu$ it becomes increasingly more difficult. We report results using the speedup ratio calculated as $R_\text{speed} = \frac{T_\text{orig}}{T_\text{rn}}$, where $T_\text{rn}$ is the runtime of the random neighbor variant and $T_\text{orig}$ the runtime of the original Louvain method. The runtime is calculated in used CPU time, not elapsed real time. We also report the quality ratio, which is calculated as $R_\text{qual} = \frac{\mathcal{H}_\text{rn}}{\mathcal{H}_\text{orig}}$ where $\mathcal{H}_\text{rn}$ refers to the quality of the partition uncovered using the random neighbor improvement and $\mathcal{H}_\text{orig}$ to the quality using the original algorithm. In this way, if $R_\text{speed} > 1$ the random neighbor improves upon the original and similarly $R_\text{qual} > 1$ if the random neighbor is an improvement. Throughout all plots, error bars indicate standard errors of the mean. The Louvain algorithm can be applied to many different methods, and we here show results for (1) modularity using a configuration null model~\cite{newman_finding_2004} (CM modularity); (2) modularity using an Erdős-Rényi null model~\cite{reichardt_statistical_2006} (ER modularity); (3) significance~\cite{traag_significant_2013}; and (4) surprise~\cite{aldecoa_surprise_2013}. We first test the impact of the network size as a whole.
We construct benchmark networks ranging from $n=10^4$ to $n=10^7$ nodes, with equally sized communities of $1\,000$ nodes, with a Poissonian degree distribution. The speed and quality of the original Louvain algorithm and the random neighbor Louvain algorithm for all four methods are reported in Fig.~\ref{fig:performance}. For all these methods, the random neighbor Louvain speeds up the algorithm roughly $2$--$3$ times. At the same time, the quality of the partitions found remains nearly the same. However, surprise and significance seem to perform worse than modularity. The speedup is rather limited for higher $\mu$ (or becomes even slower than the original), in which case communities are more difficult to detect. Surprise and significance tend to find relatively smaller communities than modularity~\cite{traag_detecting_2015}, suggesting that the performance gain of using the random neighbor Louvain is especially pertinent when making a relatively coarse partition. Revisiting the argument of the ring of cliques makes clear that the runtime does not necessarily scale with the degree, but rather with the clique size, which we may approximate as the community size. Indeed, the runtime for merging all the $n_c$ nodes in a single community together should take $\O(n_c^2)$ originally and $\O(n_c \log n_c)$ in the random neighbor Louvain, as previously argued. However, if there are no clear communities present in the network, the running time will not depend on the degree as much, but rather on the sizes of the communities found. Hence, the running time should then roughly scale as $\O(n n_c)$ for the original implementation and as $\O(n \log n_c)$ for the random neighbor Louvain. Since surprise and significance find smaller communities than modularity (unless the communities are clearly defined), the speedup will be rather limited for them, whereas it will generally be larger for modularity.
We test this by generating benchmark networks with $n=10^5$ nodes, $\langle k \rangle = 10$ and varying community sizes from $10$ to $20\,000$. Results are displayed in Fig.~\ref{fig:performance_comm_size}. Indeed, surprise and significance have difficulties discerning such large communities, and tend to find substructure within them. Notice that modularity also merges smaller communities (thereby uncovering artificially larger communities), which is part of the resolution limit problem~\cite{fortunato_resolution_2007}. This is exactly also the point at which the speedup for surprise and significance goes down. Moreover, when the community structure is not clear, there is no effect of community size at all. Indeed, in that case, surprise tends to find small communities, and modularity tends to find large communities. The speedup follows this pattern: surprise and significance show very small speedups, while modularity shows larger speedups. However, modularity also prefers rather balanced communities~\cite{Lancichinetti2011}, so that perhaps modularity performs rather well because of the similarity in community sizes. We therefore also consider the impact of more heterogeneity by constructing LFR benchmark networks~\cite{lancichinetti_benchmark_2008}. In these benchmark graphs the community sizes and the degrees both follow power-law distributions, with exponents $1$ and $2$ respectively. The maximum degree was set at $2.5 \langle k \rangle$, while the minimum community size was set at $\langle k \rangle$ for $\langle k \rangle = 10$. We varied the maximum community size from $2 \langle k \rangle$ to $10 \langle k \rangle$. These results are displayed in Fig.~\ref{fig:performance_comm_size}, from which we can see that the heterogeneity in community sizes does not affect the results. We also tested the random neighbor Louvain on six empirical networks of varying sizes.
These networks were retrieved from the Koblenz Network Collection\footnote{\url{http://konect.uni-koblenz.de/}}. We include (1) the \href{http://konect.uni-koblenz.de/networks/moreno_health}{adolescent health dataset}, a school network collected for health research~\cite{Moody2001}; (2) \href{http://konect.uni-koblenz.de/networks/loc-brightkite_edges}{Brightkite}, a social network site~\cite{Cho2011}; (3) a \href{http://konect.uni-koblenz.de/networks/facebook-wosn-links}{Facebook} friendship network~\cite{Viswanath2009}; (4) an \href{http://konect.uni-koblenz.de/networks/ca-cit-HepPh}{author collaboration network} from the High Energy topic on arXiv~\cite{JureLeskovecJonKleinberg2006}; (5) a \href{http://konect.uni-koblenz.de/networks/web-Google}{web hyperlink network} released by Google~\cite{Leskovec2008}; and (6) the \href{http://konect.uni-koblenz.de/networks/web-BerkStan}{complete web hyperlink network} from the universities of Berkeley and Stanford~\cite{Leskovec2008}. An overview of the size of the networks is provided in Table~\ref{tab:real_networks}, and the results are displayed in Fig.~\ref{fig:real_networks}. The random neighbor Louvain is clearly faster for most networks and methods, even reaching speedup ratios of over 10 for the hyperlink web network from Berkeley and Stanford. For the web network released by Google, however, the random neighbor Louvain is not faster. The quality remains relatively similar for most networks, especially for modularity, whereas the quality differs more for surprise and significance. \begin{figure}[t] \begin{center} \includegraphics{performance_weighted} \end{center} \caption{\textbf{Performance weighted neighbor sampling.} Instead of sampling a neighbor randomly, it is also possible to sample neighbors proportionally to the weight. We here test the performance of the unweighted neighbor sampling in (a)--(b) and the weighted neighbor sampling in (c)--(d).
We generate weighted LFR benchmark networks, where the strength of the nodes follows the degree as $s_i = k_i^\beta$ with $\beta = 1.5$, $\langle k \rangle = 10$ and $n=10^5$. The results for the unweighted neighbor sampling in (a) and (b) are very similar to the results for the weighted neighbor sampling in (c) and (d). Taking into account the weight hence does not improve the random neighbor sampling much. } \label{fig:performance_weighted} \end{figure} Notice that significance is not defined for weighted networks, so significance is not run on those networks (health and author collaboration). But weighted networks raise an interesting point: is it possible to make use of the weight to improve the speed even more? A natural possibility is to sample neighbors proportionally to the weight. Neighbors in the same community are often connected with a higher weight, an aspect of the famous strength of weak ties~\cite{Granovetter1973,Onnela2007a}. Sampling proportionally to the weight should thus increase the chances of drawing a ``good'' community. However, this depends on the extent to which this correlation between weight and community holds. The aggregated graph is also weighted, allowing the possibility of weighted sampling there as well. On the other hand, little time is spent in the aggregated iterations, making the benefit relatively small. Weighted sampling in constant time requires preprocessing, which takes an additional $\O(m)$ memory and $\O(m)$ time. The question is thus whether these costs do not offset the possible benefits. We use weighted benchmark networks~\cite{Lancichinetti2009} to test whether weighted sampling speeds up the algorithm even further. These benchmark networks introduce an additional mixing parameter for the weight, $\mu_w$. Whereas the topological mixing parameter $\mu$ controls the probability of an edge outside of the community, the weight is distributed such that on average a proportion of about $\mu_w$ lies outside of the community.
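For illustration, weighted neighbor sampling is directly supported by the Python standard library (a sketch with hypothetical per-node data; with precomputed cumulative weights each draw costs $\O(\log k_i)$ via bisection, while the alias method would give true constant time per draw):

```python
import random
from itertools import accumulate

# Hypothetical data for one node: neighbor ids and parallel edge weights.
neighbors = [1, 2, 3, 4]
weights = [5.0, 3.0, 1.0, 1.0]

# One-off preprocessing per node: cumulative weights (O(k_i) time and memory).
cum = list(accumulate(weights))

rng = random.Random(7)
trials = 100_000
draws = rng.choices(neighbors, cum_weights=cum, k=trials)
share_1 = draws.count(1) / trials
# Neighbor 1 carries half of the total weight, so it is drawn ~50% of the time.
```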
The strength of the nodes follows the degree as $s_i = k_i^\beta$ with $\beta = 1.5$, $\langle k \rangle = 10$ and $n=10^5$. The external weight $\mu_w s_i$ is spread over $\mu k_i$ external links, thereby leading to an average external weight of $\frac{\mu_w s_i}{\mu k_i} = \frac{\mu_w}{\mu} k_i^{\beta - 1}$. If $\mu_w > \mu$ the external weight is higher than the internal weight, making it difficult to detect communities correctly. Intuitively, we would thus expect to see an improvement in the random neighbor selection whenever $\mu_w < \mu$, as in that case the weight correlates with the planted partition. The results for both the unweighted and the weighted random neighbor sampling are displayed in Fig.~\ref{fig:performance_weighted}. Although the weighted random neighbor sampling sometimes improves on the unweighted variant, overall the performance is comparable. The results on the unweighted benchmark networks and the empirical networks are also very comparable (not shown). \section{Conclusion} Many networks seem to contain some community structure. Finding such communities is important across many different disciplines. One of the most used algorithms to optimize some quality function is the Louvain algorithm. We here showed how a remarkably simple adjustment leads to a clear improvement in the runtime complexity. We argue that the approximate runtime of the original Louvain algorithm should be roughly $\O(m)$, while the improvement reduces the runtime to $\O(n \log \langle k \rangle)$ in a clear community structure. So, whereas the original algorithm is linear in the number of edges, the random neighbor algorithm is nearly linear in the number of nodes. We have tested the random neighbor algorithm extensively. The improvement is quite consistent across various settings and sizes. The runtime complexity was reduced, speeding up the algorithm roughly $2$--$3$ times, especially when concentrating on the coarser partitions found by modularity.
Nonetheless, some methods, such as surprise and significance, are more sensitive to sampling a random neighbor. This seems to be mostly due to the community sizes in the uncovered partition. Whereas modularity prefers rather coarse partitions, both significance and surprise prefer more refined partitions, leading to much smaller communities. More refined partitions offer fewer opportunities for improving the runtime, so that sampling a random neighbor provides little improvement. The idea could also be applied in different settings. For example, the label propagation method is also a very fast algorithm~\cite{raghavan_near_2007}, but it does not optimize any objective function. It simply puts a node in the most frequent neighboring community. But instead of considering every neighbor, it can simply choose a random neighbor, similar to the improvement here. We may thus expect a similar improvement in label propagation as for the Louvain algorithm. Similar improvements may be considered in other algorithms. The core of the idea is that a random neighbor is likely to be in a ``good'' community, which presumably also holds for other algorithms. \begin{acknowledgments} This research is funded by the Royal Netherlands Academy of Arts and Sciences (KNAW) through its eHumanities project~\footnote{\url{http://www.ehumanities.nl/computational-humanities/elite-network-shifts/}}. \end{acknowledgments}
\section{Introduction} \IEEEPARstart{T}{he} reduction of dimensions in Silicon based transistors faces great challenges as dimensions approach atomic sizes and physical limits will eventually be reached. A great deal of research has focused in recent years on new materials that alleviate these limitations. One of these materials is Graphene \cite{Novoselov2004}, a two-dimensional structure with outstanding electrical characteristics such as very high electron mobilities in the order of 20000 $\text{cm}^{2} \text{V}^{-1}\text{s}^{-1}$ on silicon substrates \cite{Chen2008b}. The possibility of achieving such high electron mobilities, which are orders of magnitude higher than those of silicon based technologies, makes GFETs excellent candidates for replacing nanometer CMOS transistors in future high-speed analog electronic circuits \cite{Schwierz2010a}. Since the demonstration of the first GFET \cite{Lemme2007}, the technology has evolved very fast. {In just a few years it has been shown that {de-embedded, intrinsic} GFET transit frequencies $f_T$ are comparable to or higher than those of similarly sized nanometer CMOS devices \cite{Wu2011a}\cite{Lin2011a}\cite{Liao2010a}. The actual measured $f_T$ is, in fact, much lower than that of CMOS, mainly due to the presence of interface and contact resistances. These resistances are a serious issue in GFET technology and therefore there are active research efforts on finding ways to reduce their impact. Latest research results have shown that contact resistances well below 100 $\Omega~\mu$m are possible; for instance, contact resistances as low as 20~$\Omega~\mu$m were measured for hydrogen intercalated graphene growth \cite{Moon2012}. RF/analog design seldom uses the minimum width transistors of a technology. Minimum transistor sizes in RF applications are generally above 20~$\mu$m - 30~$\mu$m, whereas in analog-baseband circuits the dimensions can be as large as hundreds of micrometers.
Transistors with these widths would present small contact resistances with values similar to those of parasitic resistances in the metallic interconnections/vias of nanometer CMOS technologies. Their impact on the circuit performance would be the same as that of other parasitics, and therefore, they can be handled using the same circuit design techniques that are used in today's CMOS circuits.} Likewise, high transconductance gain $g_m$ values have also been demonstrated \cite{Moon2010a}\cite{Wu2011a}. In addition, it has been shown that the drain current in GFET transistors has a saturation region \cite{Meric2008}. This is an important characteristic since it facilitates the use of GFETs as voltage-controlled current sources, and consequently, the design of analog circuits in general. {Until now, drain current saturation has mainly been observed in long gate GFET devices, and short-channel GFETs still present unsatisfactory current saturation behavior. Nevertheless, it has been reported that the use of bilayer graphene can result in important current saturation improvements \cite{Szafranek2012}. \str{Likewise, lateral graphene heterostructures have also been suggested as a possible solution to enhance the current saturation \cite{Moon2013}}. Although GFET technology still faces technological challenges, projections of GFET vs. CMOS high-speed analog IC performance \cite{Rodriguez2011} have shown that GFET technology can potentially surpass CMOS in the near future provided that the low field mobility $\mu$ is kept above certain values.} The development of GFET devices has been accompanied by the appearance of electrical models that can be used to describe the electrical characteristics of the device and also to simulate circuits \cite{Shepard2008}\cite{Thiele2010}\cite{Jimenez2011a}\cite{Habibpour2011}\cite{Fregonese2012}\cite{Fregonese2013}.
Some of these initial models are physical models which do not have closed expressions and therefore require the use of numerical methods to find solutions. These models are very useful to explain the device physics; however, they are not suitable for implementation in analog circuit modeling languages such as SPICE or Verilog-A. Other models are compact analytical models which can be written in SPICE or Verilog-A and used to simulate circuits with EDA CAD tools. These models, however, are still too complex to be used during circuit design. Analog circuit designers make many decisions based on hand-calculations, and therefore require simple analytical expressions. This paper introduces a comprehensive model which provides the circuit design community with simple mathematical expressions to analyze GFETs. The proposed model is based on \cite{Fregonese2013} and consists of simplifications and assumptions which are valid for the first triode region and the saturation/negative output resistance region, which are relevant for analog circuit design. The paper is organized as follows. Section II presents a brief summary of the large signal model that is used as the basis of this work. A simplified analytical expression for the drain current as a function of internal voltages and technology parameters is provided in Section III. Section IV provides closed expressions for the small-signal hybrid-$\pi$ model parameters ($g_m$, $r_{o}$, $C_{gs}$, $C_{gd}$). Section V presents closed expressions for the figures of merit $A_V$, $g_m/I_D$ and $f_T$. Finally, a summary of the simplified model is provided. \section{Large Signal Model} An exhaustive study of the drain-source current using the drift equation for GFET transistors can be found in \cite{Fregonese2013}.
The result of this study shows that the drain-source current can be expressed as: \begin{align} I_{D} = \mu W\frac{\int_{0}^{V_{DSi}} \left (\left | Q_{net} \right | + en_{puddle} \right )dV}{L+\mu \left | \int_{0}^{V_{DSi}} \frac{1}{v_{SAT}} dV \right |} \label{EQ:ID_ORIG} \end{align} \str{where $\mu$ is the mobility, $W$ the transistor width, $L$ the transistor length, $Q_{net}$ the net {mobile charge density per unit area, $e$ the elementary charge ($1.6 \times 10^{-19}$ As)}, $ n_{puddle} = \frac{\Delta ^2}{\pi \hbar^2 {v_f}^2 }$, and $V_{DSi}$ the internal drain-source voltage.} The parameter $\Delta$ represents the spatial inhomogeneity of the electrostatic potential, $\hbar$ is the reduced Planck constant, and $v_f$ is the Fermi velocity. For simplicity, the integral in the numerator of (\ref{EQ:ID_ORIG}) can be split and solved independently. \begin{align} I_{D} = \mu W\frac{NUM}{DEN}=\mu W\frac{NUM_{1}+ NUM_{2}}{DEN} \label{EQ:ID_ORIG2} \end{align} where the first term in the numerator is: \begin{equation} \begin{split} &NUM_{1} = {\beta } \int\limits_{0}^{V_{DSi}} \left[ \frac{-C_{TOP}}{2\beta } + \right.\\ & \left. +\frac {\sqrt{{C_{TOP}}^{2} + 4\beta \left | C_{TOP} (V_{GSi} - V) + eN_{f} \right | }} {2\beta} \right]^2dV \label{EQ:NUM1_1} \end{split} \end{equation} and the factor $\beta ={e^3}/{(\pi \left ( \hbar v_f\right ) ^2)}$. $C_{TOP}$ is the top oxide capacitance, $V$ the potential variation along the channel due to $V_{DS}$, and $N_{f}$ is a term that accounts for the net acceptor/donor doping. Since the graphene material does not have a bandgap, GFETs do not switch off completely like other FET devices. Instead, they show a minimum conduction point which is known as the Dirac point. The doping level set by $N_{f}$ is responsible for shifting the Dirac point in a similar way to the intentional doping used to control the threshold voltage in MOS devices.
In practice, the Dirac point is also affected by $V_{DS}$; nevertheless, $N_{f}$ sets an absolute offset which is bias independent. Accordingly, it is possible to define a zero-bias threshold voltage for GFET devices as: \begin{equation} V_{TH,0} = eN_{f}/C_{TOP} \end{equation} and the effective gate-source overdrive voltage as: \begin{equation} V_{eff} = V_{GSi} + V_{TH,0} \end{equation} \str{where $V_{GSi}$ is the internal gate-source voltage. Accordingly, (\ref{EQ:NUM1_1}) can be rewritten as:} \begin{equation} \begin{split} &NUM_{1} = {\beta } \int\limits_{0}^{V_{DSi}} \left[ \frac{-C_{TOP}}{2\beta } + \right.\\ & \left. +\frac {\sqrt{{C_{TOP}}^{2} + 4\beta \left | C_{TOP} (V_{eff} - V) \right | }} {2\beta} \right]^2dV \label{EQ:NUM1} \end{split} \end{equation} Equation (\ref{EQ:NUM1}) is simplified by introducing the integration variable $z = C_{TOP}(V_{eff}-V)$. The integral has the following symbolic solution: \begin{equation} \begin{split} &NUM_{1(z>0)}= - \frac {1}{\beta^{2}{C_{TOP}}} \left [ \frac{ {C_{TOP}}^{4} }{32} - \right.\\ & \left. \frac {C_{TOP} \left ( {C_{TOP}}^{2} + 4\beta z \right )^{3/2}}{12}+ \frac{\beta^{2}z^{2}}{2} + \frac {\beta {C_{TOP}}^{2}z }{2} \right ]\Bigg|_{z_{1}}^{z_{2}}\\ &NUM_{1(z<0)}= - \frac {1}{\beta^{2}{C_{TOP}}} \left [ - \frac{{C_{TOP}}^{4} }{32} + \right.\\ & \left. \frac {C_{TOP} \left ( {C_{TOP}}^{2} - 4\beta z \right )^{3/2}}{12}- \frac{\beta^{2}z^{2}}{2} + \frac {\beta {C_{TOP}}^{2}z }{2} \right ]\Bigg|_{z_{1}}^{z_{2}} \label{EQ:NUM1_2} \end{split} \end{equation} where $z_{1} = C_{TOP}V_{eff}$ and $z_{2} = C_{TOP}(V_{eff}-V_{DSi})$.
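The symbolic solution for the $z>0$ branch can be cross-checked numerically. The sketch below (illustrative parameter values of our own choosing, not fitted to any device) compares (\ref{EQ:NUM1_2}) against direct numerical integration of (\ref{EQ:NUM1}):

```python
import math

# Physical constants and illustrative (assumed) technology parameters
e = 1.602e-19            # elementary charge [C]
hbar = 1.0546e-34        # reduced Planck constant [J s]
v_f = 1.0e6              # Fermi velocity [m/s]
beta = e**3 / (math.pi * (hbar * v_f) ** 2)
C_TOP = 1e-2             # top oxide capacitance per unit area [F/m^2] (assumed)
V_eff, V_DSi = 1.0, 0.5  # bias point with z > 0 along the whole channel

def antiderivative(z):
    """Bracketed antiderivative of the NUM_1 integrand for z > 0."""
    return -(1.0 / (beta**2 * C_TOP)) * (
        C_TOP**4 / 32
        - C_TOP * (C_TOP**2 + 4 * beta * z) ** 1.5 / 12
        + beta**2 * z**2 / 2
        + beta * C_TOP**2 * z / 2
    )

z1, z2 = C_TOP * V_eff, C_TOP * (V_eff - V_DSi)
num1_closed = antiderivative(z2) - antiderivative(z1)

def integrand(V):
    z = C_TOP * (V_eff - V)
    return (math.sqrt(C_TOP**2 + 4 * beta * z) - C_TOP) ** 2 / (4 * beta)

# Composite trapezoidal rule as an independent check of the closed form.
steps = 20_000
h = V_DSi / steps
num1_numeric = h * (integrand(0) / 2
                    + sum(integrand(i * h) for i in range(1, steps))
                    + integrand(V_DSi) / 2)
```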
The second term of the numerator is given by: \begin{align} NUM_{2} = \int\limits_{0}^{V_{DSi}} en_{puddle} dV = en_{puddle}V_{DSi} \end{align} The denominator in (\ref{EQ:ID_ORIG2}) can be expressed as: \begin{align} DEN = L+\mu \left | \int\limits_{0}^{V_{DSi}} \frac{1}{v_{SAT}} dV \right | \end{align} which can be simplified assuming an average $v_{SAT}$ given by: \begin{align} v_{SAT,AV} = \frac{\omega }{\sqrt{ \pi \frac {\left | Q_{NET,AV} \right | } {e}+n_{puddle}}} \label{EQ:VSATAV} \end{align} where $\omega$ is obtained from the surface phonon energy of the substrate $\hbar\omega $ and $Q_{NET,AV}$ is the average charge given by: \begin{equation} \begin{split} & Q_{NET,AV} = \beta \left [ \frac {-C_{TOP}}{2\beta} + \right.\\ & \left. \frac { \sqrt{ {C_{TOP}}^{2}+4\beta \left | C_{TOP}(V_{eff}-V_{DSi}/2)\right | } }{2\beta} \right]^2 \end{split} \label{EQ:QNETAV} \end{equation} With the previous assumption, the denominator can be expressed as: \begin{align} DEN = L + \frac {\mu}{v_{SAT,AV}} \left | V_{DSi} \right | \label{EQ:DEN} \end{align} The accuracy of the model has been successfully evaluated by its authors by comparing it against numeric models and measured data of different GFET devices built by different groups \cite{Meric2010} \cite{Kedzierski2009} \cite{Moon2010a}. The whole model is compact and well suited for building SPICE and Verilog-A models, and consequently for circuit simulation. While the model shows outstanding accuracy, it is still too complicated for hand analysis by analog circuit designers. The main problem is that it lacks a simple closed mathematical expression for the drain current like the one available for CMOS FET transistors (Shichman-Hodges model) \cite{Harold1968} or like the collector current in bipolar transistors (Ebers-Moll model) \cite{Ebers1954}.
A simple expression for the drain current is fundamental, since the parameters of the small-signal hybrid-$\pi$ model and the figures of merit are directly derived from this equation. The latter represent the foundation of electronic circuit theory and are the main tools analog circuit designers have to analyze circuit topologies and make design decisions. Therefore, it is of paramount importance to obtain a simplified expression for the GFET drain current. \section{Simplified Large Signal Model} The difficulty in finding a simple expression for the GFET drain current lies in the complexity of (\ref{EQ:NUM1_2}). Fortunately, substituting technology-dependent parameters of the measured GFETs and the physical constants reveals that one term dominates, and therefore (\ref{EQ:NUM1_2}) can be reduced to: \begin{align} NUM_{1} \simeq \left. - \frac {1}{2} \frac{z^{2}}{ C_{TOP} \times sign(z)} \right |_{z_{1}}^{z_{2}} \end{align} which for $z > 0$ (the typical case in analog design) can be expressed as: \begin{align} NUM_{1} \simeq C_{TOP} V_{DSi} \left (V_{eff}-\frac {V_{DSi}}{2} \right ) \label{EQ:NUM_SHORT} \end{align} For typical technology parameters $NUM_1 \gg NUM_2$. As a result, $NUM_2$ can be disregarded and $NUM \simeq NUM_1$. {Expression (\ref{EQ:DEN}) becomes complicated when $v_{SAT,AV}$ is replaced by (\ref{EQ:VSATAV}) and $Q_{NET,AV}$ by (\ref{EQ:QNETAV}).} However, it can also be simplified under the assumption that $V_{eff} > {V_{DSi}}/{2}$, and $n_{puddle} \ll \pi{ \left | Q_{NET,AV}\right |}/{e}$.
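The quality of the quadratic-term reduction can be probed numerically. The sketch below (illustrative only, with representative device values) compares the short form $C_{TOP} V_{DSi} (V_{eff}-V_{DSi}/2)$ against direct integration of (\ref{EQ:NUM1}); at this bias point the two agree within roughly 15-20%.

```python
# Check how well NUM_1 ~ C_TOP * V_DSi * (V_eff - V_DSi/2) tracks the
# full integral (illustrative; device and bias values are representative).
import math

e, hbar, v_f = 1.602e-19, 1.0546e-34, 1.0e6
beta = e**3 / (math.pi * (hbar * v_f)**2)
C_top = 3.6e-3
V_eff, V_dsi = 2.0, 1.0

def integrand(V):
    z = C_top * (V_eff - V)
    return beta * ((-C_top + math.sqrt(C_top**2 + 4*beta*z)) / (2*beta))**2

N = 20000
h = V_dsi / N
num1_full = sum(integrand((k + 0.5) * h) for k in range(N)) * h
num1_short = C_top * V_dsi * (V_eff - V_dsi / 2)

print(num1_short / num1_full)   # ~1.15 here: the quadratic term dominates
```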
Under these conditions, $\left | Q_{NET,AV} \right |\approx C_{TOP}\left ( V_{eff} - V_{DSi} /2\right )$ and the denominator can be simplified to: \begin{align} DEN \simeq L + \frac{\mu }{\omega }\sqrt{\frac{\pi C_{TOP}}{e}}V_{DSi}\sqrt{V_{eff}-\frac{V_{DSi}}{2}} \label{EQ:DEN_SHORT} \end{align} Finally, the GFET drain current is found by replacing (\ref{EQ:NUM_SHORT}) and (\ref{EQ:DEN_SHORT}) into (\ref{EQ:ID_ORIG2}): \begin{align} I_{D} \simeq \frac{\mu W C_{TOP} \left ( V_{eff} - {V_{DSi}}/{2} \right )}{\frac{L}{V_{DSi}} + \frac{\mu }{\omega }\sqrt{\frac{\pi C_{TOP}}{e}}\sqrt{V_{eff}-V_{DSi}/2}}\label{EQ:ID_SHORT} \end{align} \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri1} \caption{Drain current for a 440 nm length, 1 $\mu$m width GFET calculated using the complete model, simplified model, and measured data from \cite{Meric2010}.} \label{FIG:ID} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri2} \caption{Drain current for a 1 $\mu$m length, 1 $\mu$m width GFET calculated using the complete model, simplified model, and measured data from \cite{Meric2010}.} \label{FIG:ID10} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri3} \caption{Drain current for a 3 $\mu$m length, 1 $\mu$m width GFET calculated using the complete model, simplified model, and measured data from \cite{Meric2010}.} \label{FIG:ID30} \end{figure} which is a closed analytical expression that relates the main technology parameters and biasing conditions. Fig.~\ref{FIG:ID}, Fig.~\ref{FIG:ID10}, and Fig.~\ref{FIG:ID30} show $I_D$ vs. $V_{DS_i}$ plots for GFETs of 1~$\mu$m width and 440~nm, 1~$\mu$m, and 3~$\mu$m length, respectively, from \cite{Meric2010}. $I_D$ is calculated by using (\ref{EQ:ID_SHORT}) and the complete model including fitting parameters from \cite{Fregonese2013}. {For these devices, $N_f$ is approximately 0, and therefore $V_{TH,0} \approx 0$ V.
The other parameters have the following values: $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} $V_{GS_i}$ takes values from 0~V to 2~V in steps of 500~mV. It can be appreciated that the simplified model matches both the complete model and the measured data very well for $V_{eff} > V_{DS_i}/2$. The plots show that both the first triode region and the saturation/negative resistance region are correctly modeled by (\ref{EQ:ID_SHORT}). The first triode region can be used to build resistive loads or switches, whereas the saturation region can be used to build voltage-controlled current sources and, in some biasing conditions, negative-resistance loads. The second triode region is not modeled by (\ref{EQ:ID_SHORT}) since the assumption $V_{eff} > V_{DS_i}/2$ does not hold anymore. This region, nevertheless, seems to have little practical value in analog design. {Fig.~\ref{FIG:ID44_VG} shows $I_D$ vs. $V_{GSi}$ curves for the 440~nm length, 1~$\mu$m width GFET transistor from \cite{Meric2010}. The ambipolar curves also show that the simplified model closely follows the complete model for $V_{eff} > V_{DS_i}/2$.} \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri4} \caption{Drain current vs. $V_{GSi}$ for a 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010} calculated using the complete model and simplified model.} \label{FIG:ID44_VG} \end{figure} One important observation that can be made from the simplified expressions is the impact of short channel lengths on the GFET drain current. It has been experimentally shown that there is a strong dependence of the GFET transport characteristics on short lengths \cite{Han2011} \cite{Venugopal2011}. This dependence can be analytically explained by (\ref{EQ:DEN_SHORT}), where it can be seen that $DEN \simeq L$ only for $V_{DS_i} \approx 0$.
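As a numerical illustration (a sketch using the parameter values quoted above; the specific bias points are our choice), the simplified expression (\ref{EQ:ID_SHORT}) can be evaluated directly, together with its behavior as the channel length shrinks:

```python
# Evaluate the simplified GFET drain current with the quoted parameters
# for the 440 nm device, and check its short-channel behavior.
import math

e, hbar = 1.602e-19, 1.0546e-34
C_top = 3.6e-3            # top-oxide capacitance [F/m^2]
mu = 7000e-4              # 7000 cm^2/(V s) -> m^2/(V s)
omega = 0.056 * e / hbar  # from the 56 meV surface-phonon energy
W = 1e-6                  # width [m]

def i_d(V_eff, V_dsi, L):
    num = mu * W * C_top * (V_eff - V_dsi / 2)
    den = L / V_dsi + (mu / omega) * math.sqrt(math.pi * C_top / e) \
          * math.sqrt(V_eff - V_dsi / 2)
    return num / den

print(i_d(2.0, 1.0, 440e-9))   # ~1.2e-3 A at V_eff = 2 V, V_DSi = 1 V

# As L shrinks, the L/V_dsi term vanishes and the current approaches the
# length-independent value omega*W*sqrt(C_top*e/pi)*sqrt(V_eff - V_dsi/2):
i_sat = omega * W * math.sqrt(C_top * e / math.pi) * math.sqrt(2.0 - 0.5)
print([i_d(2.0, 1.0, L) / i_sat for L in (440e-9, 44e-9, 4.4e-9)])
```

The predicted current of order 1 mA and the ratio approaching unity for vanishing $L$ mirror the behavior discussed in the text.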
Once the gate and drain bias voltages increase, the value of $DEN$ departs from $L$ and increases quickly. For very short lengths and high electric fields, the current becomes independent of the channel length, something that has also been confirmed experimentally in \cite{Meric2011}. Under these conditions, the drain current saturates, stops depending on $\mu$, and takes a value of approximately: \begin{align} I_{D} \simeq \omega W { \sqrt{\frac{ C_{TOP} \times {e} }{\pi}}} \sqrt{V_{eff}-V_{DSi}/2} \end{align} \section{Small Signal Model} While the large signal model in (\ref{EQ:ID_SHORT}) encloses the physics of the device in a single expression, it is still too complex for quantitative circuit analysis of amplifier configurations. These analyses are normally performed by taking advantage of linear system theory, in which a simplified small-signal representation of the transistor biased at the operating point is used. The small-signal representation, also called the hybrid-$\pi$ model, is shown in Fig. \ref{FIG:SMALLS}. The parameters $g_m$, $r_o$, $C_{gs}$, and $C_{gd}$ can be obtained by linearization of the large signal model. Naturally, the small-signal representation provides only limited information, valid for small excursions from the operating point. However, it allows easy calculation and estimation of small-signal linear dynamic behavior: gain, phase, poles, zeros, impulse response, etc. Large-signal behavior is non-linear and therefore its analysis requires the use of the complete model and a circuit simulator. \begin{figure}[t!] \centering \includegraphics[width=7.cm]{./figures/rodri5} \caption{GFET Symbol and equivalent hybrid-$\pi$ model for small-signal analysis.} \label{FIG:SMALLS} \end{figure} The derivation of small-signal parameters for the GFET transistor is presented in the following subsections.
\subsection{Transconductance $g_m$} The expression for the transconductance gain can be directly derived from (\ref{EQ:ID_SHORT}): \begin{align} \left. g_{m} =\frac{\delta I_{D}}{\delta V_{GSi}} \right |_{V_{DSi},const.} \end{align} \begin{equation} \begin{split} &g_{m}= \left ( \frac{I_{D}}{V_{eff}-V_{DSi}/2} \right ) \left ( 1 - \frac {1}{2} \frac {I_{D}}{W\omega }\right.\\ & \left. \times\sqrt{\frac{\pi }{e \times C_{TOP}}} \frac{1}{ \sqrt{V_{eff} - V_{DSi}/2} } \right ) \label{EQ:GM} \end{split} \end{equation} Fig. \ref{FIG:GM} shows $g_m$ values calculated using (\ref{EQ:GM}) and the complete model. It can be seen that (\ref{EQ:GM}) closely follows the complete model, in particular for $V_{eff} > V_{DSi}/2$. {It is interesting to note that $g_{m}$ drops substantially at large $V_{GSi}$ biasing voltages, mainly due to the effect of $v_{SAT}$. Therefore, the best $g_{m}$ performance is actually achieved at low $V_{eff}$ voltages.} \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri6} \caption{Transconductance $g_m$ calculated using the complete and simplified model {for the 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010}. $N_f \approx 0$, $V_{TH,0} \approx 0$ V, $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} } \label{FIG:GM} \end{figure} \subsection{Output resistance $r_o$} The output resistance can be calculated as $r_{o} = 1/g_{o}$, where $g_{o}$ is the output conductance. An expression for $g_o$ can also be directly derived from (\ref{EQ:ID_SHORT}): \begin{align} g_{o} = \left. \frac{\delta I_{D}}{\delta V_{DSi}} \right |_{V_{GSi,const.}} \end{align} \begin{align} \begin{split} &g_{o} = \frac{I_{D}}{V_{eff}-V_{DSi}/2}\left [ -\frac{1}{2} + \right.\\ & +\left.
\frac{I_{D}}{\mu WC_{TOP} } \left ( \frac{L}{{V_{DSi}}^{2}} + \frac{ \frac{\mu }{\omega }\sqrt{\frac{\pi C_{TOP}}{e}} }{4 \sqrt{V_{eff}-V_{DSi}/2}}\right )\right ] \label{EQ:GO} \end{split} \end{align} One important characteristic of the GFET device is that under some biasing conditions, $g_o$ becomes negative \cite{Wu2012} \cite{Grassi2013}. A negative $g_o$ makes the device unstable, and in general this region needs to be avoided in amplifier design. On the other hand, a negative $g_o$ is a very welcome asset when designing oscillators. The biasing conditions in which $g_o$ changes from positive to negative values can be found by making $g_o = 0$ in (\ref{EQ:GO}) and solving for $V_{DS}$. The expression for this boundary condition is: \begin{align} V_{DS,lim} = \frac{-2L+\sqrt{ 4L \left (L + \frac{\mu}{\omega } \sqrt{\frac{\pi C_{TOP}}{e}} {V_{eff}}^{3/2} \right ) }} { \frac{\mu}{\omega } \sqrt{\frac{\pi C_{TOP}}{e} }\sqrt{V_{eff}} } \label{EQ:RES} \end{align} Fig.~\ref{FIG:RES} shows $I_D$ vs. $V_{DS}$ plots for different $V_{GS}$ voltages. In addition, the plot shows the points in which $g_o = 0$ ($r_o = \infty$), which were found using (\ref{EQ:RES}). It can be seen that (\ref{EQ:RES}) predicts the transition from positive to negative output resistance very well. \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri7} \caption{ Calculation of negative $r_o$ biasing requirements {for the 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010}.
$N_f \approx 0$, $V_{TH,0} \approx 0$ V, $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} } \label{FIG:RES} \end{figure} \subsection{Total Channel Charge} The distributed gate capacitance in GFET devices is modeled as the series capacitance of $C_{TOP}$ and the quantum capacitance $C_q$ in the graphene channel \cite{Thiele2010}: \begin{align} C_{g} = \frac{C_{q} \times C_{TOP}}{C_{q} + C_{TOP}} \end{align} where the parameter $C_q$ relates the distributed charge along the graphene channel and its potential $V_{CH}$, which depends strongly on both $V_{GS}$ and $V_{DS}$. {The total charge stored in the gate capacitance can be found by noting that series-connected capacitors carry the same charge; accordingly, it is equal to the total charge in the graphene channel $Q_{CH}$}. The separation of the gate capacitance into the gate-source and gate-drain capacitances can be done by taking partial derivatives of $Q_{CH}$. Consequently, it is beneficial to find a simple closed expression for $Q_{CH}$ which can be easily differentiated. $Q_{CH}$ can be expressed as \cite{Fregonese2013}: \begin{align} Q_{CH} = W\int\limits_{0}^{L} \left (Q_{NET}(x)+en_{puddle} \right )dx \label{EQ:QCH} \end{align} By changing the integration variable $dx$ to $dV$ and reordering the expression, $Q_{CH}$ becomes: \begin{align} Q_{CH} = \frac{eW}{E_{AV}} \int\limits_{0}^{V_{DSi}}\left ( \frac{\beta }{e} \left | V_{CH} \right | V_{CH} + n_{puddle} \right )dV \label{EQ:QCH2} \end{align} where $E_{AV}$ is the average electric field, given by: \begin{align} E_{AV} \approx \frac{dV}{dx} \approx \frac{V_{DSi}}{L} \label{EQ:EAV_APROX} \end{align} The integral in (\ref{EQ:QCH2}) is similar to that in (\ref{EQ:NUM1}) and therefore it is solved in the same way.
Likewise, there is a quadratic term that dominates and therefore $Q_{CH}$ can be reduced to: \begin{align} Q_{CH} \approx \left. \frac{e \times W}{2E_{AV}}\left (-\frac{z^2}{C_{TOP} \times e} \right ) \right |_{z_{1}}^{z_{2}} \end{align} which after replacing $z_1$ and $z_2$ becomes: \begin{align} Q_{CH} \approx \frac{W C_{TOP}}{ E_{AV}} V_{DSi}\left ( V_{eff} - \frac{V_{DSi}}{2}\right ) \label{EQ:QCH3} \end{align} Finally, a simplified expression for $Q_{CH}$ is found by replacing (\ref{EQ:EAV_APROX}) into (\ref{EQ:QCH3}): \begin{align} Q_{CH} \approx C_{TOP}WL \left ( V_{eff} - V_{DSi}/2\right) \label{EQ:QCH_SHORT} \end{align} \subsection{Gate-Source Capacitance $C_{gs}$} The small-signal gate-source capacitance can be calculated as: \begin{align} C_{gs} = \left. \frac{\delta Q_{CH}}{\delta V_{GSi}} \right |_{V_{DSi,const.}} \end{align} \begin{align} C_{gs} =C_{TOP}WL \label{EQ:CGS} \end{align} \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri8} \caption{ $C_{gs}$ calculation using the complete and simplified model {for the 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010}. $N_f \approx 0$, $V_{TH,0} \approx 0$ V, $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} } \label{FIG:CGS} \end{figure} Fig. \ref{FIG:CGS} shows plots of $C_{gs}$ calculated using (\ref{EQ:CGS}) and the complete model. It can be seen that for $V_{eff} > V_{DS}/2$, $C_{gs}$ approaches the value of the total oxide capacitance. For large $V_{eff}$ values the error is within 5\%. \subsection{Gate-Drain Capacitance $C_{gd}$} The small-signal gate-drain capacitance can be calculated as: \begin{align} C_{gd} = -\left. \frac{\delta Q_{CH}}{\delta V_{DSi}} \right |_{V_{GSi,const.}} \end{align} \begin{align} C_{gd}= \frac {C_{TOP}WL}{2} \label{EQ:CGD} \end{align} \begin{figure}[t!] 
\centering \includegraphics[width=7.5cm]{./figures/rodri9} \caption{ $C_{gd}$ calculation using the complete and simplified model {for the 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010}. $N_f \approx 0$, $V_{TH,0} \approx 0$ V, $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} } \label{FIG:CGD} \end{figure} Fig. \ref{FIG:CGD} shows plots of $C_{gd}$ calculated using (\ref{EQ:CGD}) and the complete model. Even though the $C_{gd}$ values calculated with the complete model show valleys at different drain biasing conditions, they remain close to the value predicted by (\ref{EQ:CGD}). For large $V_{DS}$ values the error is within 15\%. {Fig. \ref{FIG:CGS_CGD} shows $C_{gs}$, $C_{gd}$, and $I_{D}$ vs. $V_{GSi}$. Although the capacitances change with biasing conditions, their values are well approximated by the simplified model once sufficient $V_{GSi}$ and $V_{DSi}$ bias is applied.} \begin{figure}[t!] \centering \includegraphics[width=8.5cm]{./figures/rodri10} \caption{ { $C_{gs}$, $C_{gd}$, and $I_D$ calculation using the complete and simplified model for the 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010}. $N_f \approx 0$, $V_{TH,0} \approx 0$ V, $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} } \label{FIG:CGS_CGD} \end{figure} \section{Figures of Merit} The extraction of small-signal parameters allows the calculation of figures of merit that can be used to make performance comparisons. The main figures of merit used to evaluate amplifying devices are: intrinsic voltage gain $A_V$, transconductance efficiency $g_m/I_D$, and transit frequency $f_T$. Expressions for these figures of merit are found in the following subsections.
\subsection{Intrinsic Voltage Gain $A_V$} The intrinsic voltage gain estimates the low frequency voltage amplification capabilities of the device and can be calculated as: \begin{align} A_V = g_m \times r_o \end{align} \begin{equation} \begin{split} A_{V} &= \left( {1 - \frac{I_{DSi}}{2W\omega } \sqrt{\frac{\pi }{e C_{TOP}}} \frac{1}{\sqrt{V_{eff} - V_{DSi}/2}}} \right)\bigg/ \\ & \left[ -\frac{1}{2} + \frac{I_{DSi}}{\mu W C_{TOP}}\left( \frac{L}{{V_{DSi}}^{2}} +\frac{\mu }{4 \omega } \sqrt{\frac{\pi C_{TOP}}{e}}\times \right. \right.\\ & \left. \left. \frac{1}{\sqrt{V_{eff} - V_{DSi}/2}}\right) \right]\label{EQ:GAIN} \end{split} \end{equation} \subsection{Transconductance Efficiency $g_m/I_D$} The $g_m/I_D$ ratio relates the transconductance amplification capability of the device and the drain current that is required to produce it. Therefore, it is a measure of the power consumption efficiency of the amplifying device. The expression for $g_m/I_D$ can be directly obtained from (\ref{EQ:GM}): \begin{equation} \begin{split} \frac { g_{m}} {I_{D}} &= \left ( \frac{1}{V_{eff}-V_{DSi}/2} \right ) \times\\ &\left ( 1 - \frac {1}{2} \frac {I_{D}}{W\omega } \sqrt{\frac{\pi }{e \times C_{TOP}}}\frac{1}{ \sqrt{V_{eff} - V_{DSi}/2} } \right ) \label{EQ:GMID} \end{split} \end{equation} \subsection{Transit Frequency $f_T$} The transit frequency estimates the frequency at which the current gain of the device drops to unity, and it is a measure of its high-speed and bandwidth capabilities. The transit frequency is defined as: \begin{align} f_{T} = \frac {g_{m}}{2 \pi \left ( C_{gs} + C_{gd} \right ) } \end{align} \begin{equation} \begin{split} f_{T} &= \frac{\left ( \frac{I_{D}}{V_{eff}-V_{DSi}/2} \right )}{2\pi \left (\frac {3}{2} C_{TOP} W L \right )}\times \\ &\left ( 1 - \frac {1}{2} \frac {I_{D}}{W\omega } \sqrt{\frac{\pi }{e \times C_{TOP}}}\frac{1}{ \sqrt{V_{eff} - V_{DSi}/2} } \right ) \label{EQ:FT} \end{split} \end{equation} Fig.
\ref{FIG:FT} shows plots of $f_T$ calculated using (\ref{EQ:FT}) and the complete model. It can be seen that there is very good matching for most biasing points, and disagreements start to become visible only when $V_{eff} < V_{DSi}$. \begin{figure}[t!] \centering \includegraphics[width=7.5cm]{./figures/rodri11} \caption{ Transit Frequency $f_T$ calculation using the complete and simplified model {for the 440 nm length, 1 $\mu$m width GFET from \cite{Meric2010}. $N_f \approx 0$, $V_{TH,0} \approx 0$ V, $C_{TOP} = 3.6 \times 10^{-3} \text{F}/\text{m}^2$, $\mu = 7000$ $\text{cm}^2\text{V}^{-1}\text{s}^{-1}$, and $\hbar\omega = 56~\text{meV}$.} } \label{FIG:FT} \end{figure} \section{Summary} \begin{table}[!t] \renewcommand{\arraystretch}{1.5} \caption{Summary of the Simplified GFET Model } \centering \begin{tabular}{c c c} \hline \bfseries Name & \bfseries Expression & \bfseries Units \\ \hline\hline $I_D$ & $\frac{\mu W C_{TOP} \left ( V_{eff} - {V_{DSi}}/{2} \right )}{{L}/{V_{DSi}} + \frac{\mu }{\omega }\sqrt{{\pi C_{TOP}}/{e}}\sqrt{V_{eff}-V_{DSi}/2}}$ & [A]\\ $V_ {eff}$ & $V_{GSi} + eN_{f}/C_{TOP}$ & [V] \\ $C_{gs}$ & $C_{TOP}WL$ & [F]\\ $C_{gd}$ & ${C_{TOP}WL}/{2}$ & [F] \\ \hline \end{tabular} \label{TABLE:PAR} \end{table} A summary of the simplified GFET model is shown in Table~\ref{TABLE:PAR}. The expressions in this table were used to extract small-signal hybrid-$\pi$ model parameters and figures of merit typically used to compare the performance of transistors. The proposed model has been validated by comparing it against a complete analytical model and against measured data available in the current literature. Whereas the complete analytical model hides the effects of physical parameters behind many separate calculations, the proposed model provides a simple expression that enables direct identification of dominant physical parameters.
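The summary expressions can be exercised numerically. The sketch below (illustrative; it reuses the representative 440 nm device parameters and a bias point of our choosing) evaluates the drain current, transconductance, capacitances, and transit frequency from the simplified model:

```python
# Evaluate the small-signal parameters and figures of merit from the
# simplified model (illustrative sketch; representative device values).
import math

e, hbar = 1.602e-19, 1.0546e-34
C_top, mu = 3.6e-3, 0.7           # [F/m^2], [m^2/(V s)]
omega = 0.056 * e / hbar          # from the 56 meV surface-phonon energy
W, L = 1e-6, 440e-9               # [m]
V_eff, V_dsi = 2.0, 1.0           # assumed bias point

v_ov = V_eff - V_dsi / 2          # effective overdrive
i_d = (mu * W * C_top * v_ov) / (L / V_dsi
      + (mu / omega) * math.sqrt(math.pi * C_top / e) * math.sqrt(v_ov))

g_m = (i_d / v_ov) * (1 - 0.5 * (i_d / (W * omega))
      * math.sqrt(math.pi / (e * C_top)) / math.sqrt(v_ov))
c_gs = C_top * W * L
c_gd = C_top * W * L / 2
f_t = g_m / (2 * math.pi * (c_gs + c_gd))

print(g_m, g_m / i_d, f_t)        # ~0.46 mS, ~0.38 1/V, ~31 GHz
```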
In addition, the proposed GFET model is ready for use in circuit design in exactly the same way as the Shichman-Hodges and Ebers-Moll models are used for CMOS and bipolar circuit design, respectively. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} It is generally believed \cite{anderson} that some essential physics of strongly correlated electron systems in solids, such as high-temperature superconductors \cite{bednorz}, can be described by the Hubbard model \cite{hubbard,gutzwiller} \begin{equation}\label{eq:hubbard} \begin{aligned}{} H=-t\sum_{ij,\sigma}\gamma_{ij}c^{\dagger}_{i \sigma}c_{j \sigma} + U \sum_{i} n_{i\uparrow}n_{i\downarrow}, \end{aligned} \end{equation} where $c^{\dagger}_{i \sigma}$ and $c_{i \sigma}$ are the creation and annihilation operators for an electron of spin $\sigma$ at site $i$, and $n_{i\sigma}=c^{\dagger}_{i \sigma} c_{i\sigma}$ is the corresponding electron number operator. In Eq (\ref{eq:hubbard}), $\gamma_{ij}=1$ for nearest-neighbor (n.n.) sites $i,j$, and $0$ otherwise, which restricts the summation to nearest-neighbor pairs. The $t$-term describes the kinetic energy due to hopping between adjacent sites, and the $U$-term is the on-site approximation of the Coulomb interaction between electrons. Indeed, the Hubbard model offers the simplest way to describe both the band motion and atomic features of the interacting electron systems in solids. In the Hubbard model, there are four possible states on the lattice sites: empty state $|0\rangle$, singly occupied states $|\uparrow\rangle$ (spin up) and $|\downarrow\rangle$ (spin down), and doubly occupied state $|\uparrow \downarrow\rangle$. Clearly, the doubly occupied state $|\uparrow \downarrow\rangle$ is energetically unfavorable when $U$ is large. It is commonly believed that a model in which all the doubly occupied states are excluded can describe the low-energy physics of the system. Technically, such a singly occupied state system (with only empty and singly occupied sites) can be obtained by the Gutzwiller projection operator approach, which projects the Hubbard model onto a subspace of the original Hilbert space \cite{fock}.
For large $U$, to order $J=2t^2/U$ in perturbation theory, this projection scheme leads to the well-known $t$-$J$ model \cite{spalek,zhang} \begin{equation}\label{eq:tj} \begin{aligned}{} H=-t\sum_{ij,\sigma}\gamma_{ij}\tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} + J \sum_{ij} \gamma_{ij} (S_{i}\cdot S_{j} -\frac{1}{4}\tilde{n}_{i}\tilde{n}_{j}), \end{aligned} \end{equation} where $\tilde{c}^{\dagger}_{i \sigma}= c^{\dagger}_{i \sigma}(1-n_{i\bar{\sigma}})$, $\tilde{c}_{i \sigma}= c_{i \sigma}(1-n_{i\bar{\sigma}})$, $\tilde{n}_{i}=\sum_{\sigma} \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{i \sigma}$, and $S_i$ is the spin operator at site $i$. Note that the projection operator also generates a so-called three-site term that has been neglected in the $t$-$J$ model. That is, Eq (\ref{eq:tj}) is in fact a model with only two-site terms. The $t$-$J$ model can be viewed as the large-$U$ limit of the Hubbard model, and as a prototype to describe the low-energy physics of the interacting electron systems. Using the standard Green function technique, one can indeed show that the $t$-$J$ model does exhibit a state with nonzero $\langle c_{i\downarrow}c_{j\uparrow}\rangle$, signaling the occurrence of superconductivity. Nonetheless, it is still somewhat difficult to obtain a more complete understanding of high-temperature superconductors from the $t$-$J$ model. The Hubbard model is the simplest approximation of the general Hamiltonian of interacting electron systems, in which all the interaction terms except the on-site term ($U$-term) are neglected. It is possible that the neglected interaction terms play important roles in the physics of the strongly correlated electron systems; some important physics may thus be omitted in the Hubbard model.
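Before moving on, the energy scale $J = 2t^2/U$ in Eq (\ref{eq:tj}) can be made concrete on the smallest cluster. The sketch below is our illustration, not part of the original text: it solves the two-site Hubbard model at half filling in the $S_z=0$ sector; since the sum in Eq (\ref{eq:tj}) counts each bond twice, the $t$-$J$ singlet energy of the bond is $-2J = -4t^2/U$, which the exact ground-state energy approaches for large $U$.

```python
# Two-site Hubbard model at half filling (S_z = 0 sector): the singlet
# ground-state energy approaches -4 t^2 / U = -2J for large U.
# Basis (up to phases): |ud,0>, |0,ud>, |u,d>, |d,u>.  Symmetric and
# antisymmetric combinations decouple; the singlet sector reduces to the
# 2x2 block [[U, -2t], [-2t, 0]], whose lowest eigenvalue is
# E0 = (U - sqrt(U^2 + 16 t^2)) / 2.
import math

t, U = 1.0, 20.0
E0 = (U - math.sqrt(U**2 + 16 * t**2)) / 2

J = 2 * t**2 / U
print(E0, -2 * J)   # -0.198... vs -0.2: they agree to O(t^4/U^3)
```

The small discrepancy at $U = 20t$ is the higher-order correction dropped in the perturbative projection.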
As a result, the $t$-$J$ model given in Eq (\ref{eq:tj}), which is a projection to the subspace of singly occupied states of the Hubbard model, may not be sufficient to describe the essential physics of the interacting electrons. In this paper, we will first describe the details of establishing a generalized Hubbard model, in which all of the two-site interaction terms between nearest-neighbor sites are retained. We will then apply the Gutzwiller projection scheme to this more general model to obtain an effective Hamiltonian in the subspace of singly occupied states \cite{spalek0}. Moreover, we will discuss how the inclusion of these neglected interaction terms in the Hubbard model modifies the $t$-$J$ model. \section{The Model} We now present a systematic derivation of a more complete effective Hamiltonian with all two-site interaction terms between nearest-neighbor sites in the subspace of singly occupied states. \subsection{Bloch Representation} The general Hamiltonian describing the dynamics of the electrons in solids can be expressed as follows \cite{hubbard,mahan,lzz} in Bloch representation \begin{equation}\label{eq:bloch} \begin{aligned}{} H &=\sum_{\textbf{k},\sigma}\epsilon_{\textbf{k}} c^{\dagger}_{\textbf{k}\sigma}c_{\textbf{k}\sigma} +\frac{1}{2}\sum_{\textbf{k}_{1}\textbf{k}_{2} \textbf{k}^{'}_{1}\textbf{k}^{'}_{2},\sigma\sigma^{'}} \langle\textbf{k}_{1}\textbf{k}_{2}|\frac{1}{r}| \textbf{k}^{'}_{1}\textbf{k}^{'}_{2}\rangle c^{\dagger}_{\textbf{k}_{1}\sigma} c^{\dagger}_{\textbf{k}_{2}\sigma^{'}} c_{\textbf{k}^{'}_{2}\sigma^{'}} c_{\textbf{k}^{'}_{1}\sigma}, \end{aligned} \end{equation} where $c^{\dagger}_{\textbf{k},\sigma}$ and $c_{\textbf{k},\sigma}$ are the creation and annihilation operators for electrons in the Bloch state with wave vector $\textbf{k}$ and spin $\sigma$, $\epsilon_{\textbf{k}}$ is the band energy, $\textbf{k}$ runs over the first Brillouin zone, and the matrix element \begin{equation} \begin{aligned}{} 
\langle\textbf{k}_{1}\textbf{k}_{2}|\frac{1}{r}| \textbf{k}^{'}_{1}\textbf{k}^{'}_{2}\rangle=e^2 \int d \textbf{x} d \textbf{x}^{'} \frac{\psi^{*}_{\textbf{k}_{1}}(\textbf{x}) \psi_{\textbf{k}^{'}_{1}}(\textbf{x}) \psi^{*}_{\textbf{k}_{2}}(\textbf{x}^{'}) \psi_{\textbf{k}^{'}_{2}}(\textbf{x}^{'})} {|\textbf{x} - \textbf{x}^{'}|}, \end{aligned} \end{equation} where $\psi_{\textbf{k}}(\textbf{x})$ and $\psi^{*}_{\textbf{k}}(\textbf{x})$ are the Bloch functions of the energy band. In Eq (\ref{eq:bloch}), the first term represents the kinetic energies of the electrons in the energy bands, and the second term is the Coulomb interaction energy of electrons. \subsection{Wannier Representation} The Hamiltonian in Eq (\ref{eq:bloch}) can be transformed to its Wannier representation form \begin{equation}\label{eq:wannier} \begin{aligned}{} H &=\sum_{ij,\sigma}T_{ij}c^{\dagger}_{i \sigma}c_{j \sigma} +\frac{1}{2}\sum_{ijkl,\sigma\sigma^{'}} \langle ij|\frac{1}{r}|kl\rangle c^{\dagger}_{i\sigma} c^{\dagger}_{j\sigma^{'}} c_{l\sigma^{'}} c_{k\sigma}, \end{aligned} \end{equation} where $c^{\dagger}_{i \sigma}$ and $c_{i \sigma}$ are the creation and annihilation operators for an electron with spin $\sigma$ in a Wannier orbital localized at site $i$, $T_{ij}$ is the Fourier transform of the band energy $\epsilon_{\textbf{k}}$ \begin{equation} \begin{aligned}{} T_{ij} &=\frac{1}{N} \sum_{\textbf{k}} \epsilon_{\textbf{k}} e^{i\textbf{k}\cdot (\textbf{R}_{i}-\textbf{R}_{j})}, \end{aligned} \end{equation} and the Wannier representation matrix element is given by \begin{equation}\label{eq:wannierelement} \begin{aligned}{} \langle ij|\frac{1}{r}|kl\rangle=e^2 \int d \textbf{x} d \textbf{x}^{'} \frac{\phi^{*}(\textbf{x}-\textbf{R}_{i}) \phi(\textbf{x}-\textbf{R}_{k}) \phi^{*}(\textbf{x}^{'}-\textbf{R}_{j}) \phi(\textbf{x}^{'}-\textbf{R}_{l})} {|\textbf{x} - \textbf{x}^{'}|}, \end{aligned} \end{equation} where $\phi(\textbf{x}-\textbf{R}_{i})$ and $\phi^*(\textbf{x}-\textbf{R}_{i})$ are 
the Wannier functions localized around lattice site $i$. In the most general cases where $i \neq j \neq k \neq l$, the Wannier matrix elements $\langle ij|\frac{1}{r}|kl\rangle$ are the so-called four-center integrals, and the corresponding terms in the series of the Coulomb interaction in Eq (\ref{eq:wannier}) can be referred to as the four-site terms. Since the Wannier function $\phi(\textbf{x}-\textbf{R}_{i})$ goes to zero rapidly when $\textbf{x}$ is away from $\textbf{R}_{i}$, the matrix element (\ref{eq:wannierelement}) is non-negligible only when the sites $i,j,k,l$ are close enough so that the overlap between the Wannier functions is sufficiently large. Therefore, the Coulomb interaction energy can be conveniently approximated by a number of its leading terms of the series in the Wannier representation expression given in Eq (\ref{eq:wannier}). The largest interaction term is the so-called on-site term ($i=j=k=l$) \begin{equation}\label{eq:0a} \begin{aligned}{} \frac{1}{2}U \sum_{i,\sigma} n_{i \sigma}n_{i \bar{\sigma}} = U \sum_{i} n_{i\uparrow}n_{i\downarrow}, \end{aligned} \end{equation} where $U=\langle ii|\frac{1}{r}|ii\rangle$. If only this on-site term is taken into account, the Hubbard model is obtained. The next leading order terms are the ones where the set $\{i,j,k,l\}$ contains only two distinct nearest-neighbor sites. These terms may be referred to as the two-site interaction terms \cite{twosite}. In this paper, we take one step beyond the Hubbard model: in the approximation of the Coulomb interaction, we will retain all of these two-site interaction terms, in addition to the Hubbard on-site term. \subsection{Two-Site Interaction Approximation} We now describe a generalized Hubbard model with all of the two-site interaction terms between two nearest-neighbor sites. The two-site interaction terms are those where the set $\{i,j,k,l\}$ consists of only one pair of nearest-neighbor sites. These terms fall under two categories: 1.
Among the four sites $i,j,k,l$, three of them are in fact the same site, and the fourth one is a nearest neighbor of the other three. There are four such possibilities: a. $i=j=k$, and $l$ is the n.n. site of $i$: \begin{equation}\label{eq:1a} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ii|\frac{1}{r}|ij\rangle n_{i\sigma} c^{\dagger}_{i\bar{\sigma}} c_{j\bar{\sigma}}= \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ii|\frac{1}{r}|ij\rangle n_{i\bar{\sigma}} c^{\dagger}_{i\sigma} c_{j\sigma} \end{aligned} \end{equation} b. $i=j=l$, and $k$ is the n.n. site of $i$: \begin{equation}\label{eq:1b} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ii|\frac{1}{r}|ji\rangle n_{i\bar{\sigma}} c^{\dagger}_{i\sigma} c_{j\sigma} \end{aligned} \end{equation} c. $i=l=k$, and $j$ is the n.n. site of $i$: \begin{equation}\label{eq:1c} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ij|\frac{1}{r}|ii\rangle n_{i\sigma} c^{\dagger}_{j\bar{\sigma}} c_{i\bar{\sigma}}= \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ji|\frac{1}{r}|jj\rangle n_{j\bar{\sigma}} c^{\dagger}_{i\sigma} c_{j\sigma} \end{aligned} \end{equation} d. $j=l=k$, and $i$ is the n.n. site of $j$: \begin{equation}\label{eq:1d} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ij|\frac{1}{r}|jj\rangle n_{j\bar{\sigma}} c^{\dagger}_{i\sigma} c_{j\sigma} \end{aligned} \end{equation} Eq (\ref{eq:1a}) is true because the spin summation index $\sigma$ can be replaced by $\bar{\sigma}$ in the expression. Similarly, the right-hand-side of Eq (\ref{eq:1c}) results from interchanging the summation indexes $i$ and $j$, in addition to the change of $\sigma \rightarrow \bar{\sigma}$. From Eq (\ref{eq:wannierelement}), it can be verified that all the matrix elements in Eqs (\ref{eq:1a}-\ref{eq:1d}) are equal. 
Thus, all of the terms above can be combined into the following expression \begin{equation}\label{eq:1all} \begin{aligned}{} X\sum_{ij,\sigma} \gamma_{ij} c^{\dagger}_{i\sigma} c_{j\sigma} (n_{i\bar{\sigma}}+n_{j\bar{\sigma}}), \end{aligned} \end{equation} where the parameter $X$ represents the matrix elements \begin{equation}\label{eq:x} \begin{aligned}{} X= \langle ii|\frac{1}{r}|ij\rangle= \langle ii|\frac{1}{r}|ji\rangle= \langle ij|\frac{1}{r}|ii\rangle= \langle ji|\frac{1}{r}|ii\rangle. \end{aligned} \end{equation} Here, $i$ and $j$ represent the nearest-neighbor sites. 2. Among the sites $i,j,k,l$, there are two identical pairs, and moreover, these two pairs are nearest neighbors. There are three such possibilities: a. $i=j$, $l=k$, $i$ and $l$ are the n.n. sites: \begin{equation}\label{eq:2a} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma} \gamma_{ij} \langle ii|\frac{1}{r}|jj\rangle c^{\dagger}_{i\sigma} c_{j\sigma} c^{\dagger}_{i\bar{\sigma}} c_{j\bar{\sigma}} \end{aligned} \end{equation} b. $i=k$, $j=l$, $i$ and $j$ are the n.n. sites: \begin{equation}\label{eq:2b} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma\sigma'} \gamma_{ij} \langle ij|\frac{1}{r}|ij\rangle n_{i\sigma} n_{j\sigma'} \end{aligned} \end{equation} c. $i=l$, $j=k$, $i$ and $j$ are the n.n. sites: \begin{equation}\label{eq:2c} \begin{aligned}{} \frac{1}{2}\sum_{ij,\sigma\sigma'} \gamma_{ij} \langle ij|\frac{1}{r}|ji\rangle c^{\dagger}_{i\sigma} c^{\dagger}_{j\sigma'} c_{i\sigma'} c_{j\sigma} \end{aligned} \end{equation} It has been argued that $\langle ii|\frac{1}{r}|jj\rangle$ and $\langle ij|\frac{1}{r}|ji\rangle$ are of the same order, but much smaller than $\langle ij|\frac{1}{r}|ij\rangle$ \cite{hubbard,strack}.
Therefore, summing over all three terms in Eqs (\ref{eq:2a}-\ref{eq:2c}), we have the following terms in this category \begin{equation}\label{eq:2all} \begin{aligned}{} \frac{1}{2} V \sum_{ij,\sigma\sigma'} \gamma_{ij} n_{i\sigma} n_{j\sigma'}+ \frac{1}{2} Y \sum_{ij,\sigma} \gamma_{ij} ( c^{\dagger}_{i\sigma} c_{j\sigma} c^{\dagger}_{i\bar{\sigma}} c_{j\bar{\sigma}} +\sum_{\sigma'} c^{\dagger}_{i\sigma} c^{\dagger}_{j\sigma'} c_{i\sigma'} c_{j\sigma} ), \end{aligned} \end{equation} where the parameters $V$ and $Y$ represent the matrix elements \begin{equation}\label{eq:vy} \begin{aligned}{} V & = \langle ij|\frac{1}{r}|ij\rangle, \\ Y &=\langle ii|\frac{1}{r}|jj\rangle= \langle ij|\frac{1}{r}|ji\rangle. \end{aligned} \end{equation} Eqs (\ref{eq:1all}) and (\ref{eq:2all}) contain all the two-site Coulomb interaction terms between two nearest neighbors. Similarly, for the kinetic energy term, only the constant and nearest-neighbor hopping terms are considered for consistency \begin{equation}\label{eq:tij} \begin{aligned}{} T_{ij}=\frac{1}{N} \sum_{\textbf{k}} \epsilon_{\textbf{k}} e^{i\textbf{k}\cdot (\textbf{R}_{i}-\textbf{R}_{j})}= T_{0}\delta_{ij}+T_1 \gamma_{ij}+...\, , \end{aligned} \end{equation} where \begin{equation} \begin{aligned}{} T_0 &=T_{ii}=\frac{1}{N} \sum_{\textbf{k}} \epsilon_{\textbf{k}}, \\ T_1 &=T_{\langle ij \rangle} = \frac{1}{N} \sum_{\textbf{k}} \epsilon_{\textbf{k}} e^{i\textbf{k}\cdot (\textbf{R}_{i}-\textbf{R}_{j})}, \quad \mbox{for n.n. sites} \: \langle ij \rangle, \end{aligned} \end{equation} and $(...)$ represents the hopping terms beyond nearest-neighbor sites. Since $T_1<0$, we will use $t=-T_1$ in the expressions hereafter.
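As a quick numerical illustration of the truncation in Eq (\ref{eq:tij}) (an illustrative sketch, not part of the derivation above), consider a one-dimensional band with nearest-neighbor dispersion $\epsilon_{\textbf{k}}=-2t\cos k$: its Fourier transform $T_{ij}$ is exactly $-t$ between nearest neighbors and vanishes beyond, so keeping only $T_0$ and $T_1$ is exact in this toy case.

```python
# Illustrative sketch (not from the paper): for the 1D nearest-neighbor band
# eps_k = -2 t cos(k), the Fourier transform T_{ij} of Eq (tij) is exactly -t
# on nearest neighbors and zero beyond, so truncating after T_1 is exact here.
import numpy as np

N, t = 64, 1.0                               # number of sites, hopping scale
k = 2 * np.pi * np.arange(N) / N             # Brillouin-zone grid
eps = -2 * t * np.cos(k)                     # band energy eps_k

def T(d):
    """T_{ij} = (1/N) sum_k eps_k exp(i k (R_i - R_j)), with d = R_i - R_j."""
    return np.sum(eps * np.exp(1j * k * d)).real / N

print([round(T(d), 9) for d in (0, 1, 2)])   # T_0, T_1, T_2
```

For a band with longer-range hopping the higher $T_{ij}$ would be nonzero but decaying, which is what justifies the truncation in the text.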
Substituting Eq (\ref{eq:tij}) into Eq (\ref{eq:wannier}), and approximating the Coulomb interaction by all the terms given in Eqs (\ref{eq:0a}), (\ref{eq:1all}), and (\ref{eq:2all}), we have \begin{equation}\label{eq:tpq0} \begin{aligned}{} H & \simeq T_{0} \sum_{i,\sigma} n_{i\sigma} -t\sum_{ij,\sigma}\gamma_{ij}c^{\dagger}_{i \sigma}c_{j \sigma} + U \sum_{i} n_{i \uparrow}n_{i\downarrow} \\ &+\frac{1}{2} V \sum_{ij,\sigma\sigma^{'}} \gamma_{ij} n_{i \sigma}n_{j \sigma^{'}} +X \sum_{ij,\sigma}\gamma_{ij}c^{\dagger}_{i \sigma} c_{j \sigma} \big(n_{i \bar{\sigma}}+ n_{j \bar{\sigma}}\big) \\ &+\frac{1}{2} Y \sum_{ij,\sigma} \gamma_{ij} \big( c^{\dagger}_{i\sigma} c_{j\sigma} c^{\dagger}_{i\bar{\sigma}} c_{j\bar{\sigma}} +\sum_{\sigma'} c^{\dagger}_{i\sigma} c^{\dagger}_{j\sigma'} c_{i\sigma'} c_{j\sigma}\big). \end{aligned} \end{equation} Eq (\ref{eq:tpq0}) contains all the two-site interaction terms as well as the on-site term and can be considered the most complete generalization of the Hubbard model to date. These two-site terms have been discussed in the literature \cite{spalek0,strack,tao}. The $X$-term is the so-called bond-charge interaction, which describes the density-dependent nearest-neighbor hopping of electrons. The $V$-term is the so-called nearest-neighbor density interaction. The matrix element $Y$ was discussed in reference \cite{hubbard}. When $X=V=Y=0$, Eq (\ref{eq:tpq0}) reduces to the Hubbard model. It is worth pointing out that, in contrast to the $U$-term, the terms parametrized by $V$, $X$, and $Y$ retain the long-range nature of the Coulomb interaction. Mott argued that, based on the long-range nature of the Coulomb interaction, the correlated electron system undergoes a first-order metal-insulator transition triggered by electronic correlations \cite{mott}. The Hubbard model, however, only exhibits a continuous metal-insulator transition owing to the correlation effect of electrons \cite{hubbard}.
The reason that the Hubbard model does not predict a discontinuity in the number of current carriers may be attributed to the lack of the long-range interactions \cite{hubbard,mott,belitz}. Therefore, a discontinuous metal-insulator transition may exist in the model given in Eq (\ref{eq:tpq0}) \cite{1storderft}. \subsection{Gutzwiller Projection Technique} Similar to the Hubbard model, in the model described by Eq (\ref{eq:tpq0}), there are four possible states on the lattice sites: empty, singly occupied with spin up, singly occupied with spin down, and doubly occupied. The Hamiltonian describing the low-energy physics should not include the doubly occupied states. We need to project the Hamiltonian (\ref{eq:tpq0}) to a subspace of the Hilbert space where only empty and singly occupied sites are allowed. This can be accomplished by the Gutzwiller projection approach. The Gutzwiller projection operators can be defined in the following form \cite{gutzwiller,onevirtualdstate} \begin{equation}\label{eq:gutz} \begin{aligned}{} P_{s} &=\prod_{i}(1-n_{i \uparrow} n_{i \downarrow}), \\ P_{d} &=I-P_{s}, \end{aligned} \end{equation} where $n_{i \uparrow}$ and $n_{i \downarrow}$ are the electron number operators at site $i$ with spin up and down, respectively, $i$ runs over all the lattice sites, and $I$ is the identity operator. It can easily be verified that the projection operators $P_s$ and $P_d$ have the following properties \begin{equation}\label{eq:gutz_p} \begin{aligned}{} & (P_{s})^2 =P_{s}, \,\, (P_{s})^n=P_{s}, \\ & (P_{d})^2 =P_{d}, \,\, (P_{d})^n=P_{d}, \\ & P_s P_d =0, \end{aligned} \end{equation} where $n$ is any positive integer. The Gutzwiller projection operators can partition the original Hilbert space into two subspaces. The first one (denoted by $S$) contains the states with only empty and singly occupied sites, while the second one (denoted by $D$) contains the states with at least one doubly occupied site. 
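The projector identities in Eq (\ref{eq:gutz_p}) are easy to verify numerically on a small system. The sketch below (an illustration, not part of the derivation) builds $P_s$ and $P_d$ as diagonal matrices in the occupation basis of a two-site system and checks idempotency and orthogonality; the trace of $P_s$ counts the $3^2=9$ states with no doubly occupied site.

```python
# Check of the Gutzwiller projector identities, Eq (gutz_p), on a two-site
# Fock space.  Basis states are occupation tuples (n_1up, n_1dn, n_2up, n_2dn),
# so every operator used here is diagonal.  Illustrative sketch only.
import numpy as np
from itertools import product

states = list(product((0, 1), repeat=4))            # 16 occupation states
def n(mode):                                        # diagonal number operator
    return np.diag([float(s[mode]) for s in states])

n1u, n1d, n2u, n2d = (n(m) for m in range(4))
I = np.eye(16)

Ps = (I - n1u @ n1d) @ (I - n2u @ n2d)              # P_s = prod_i (1 - n_iu n_id)
Pd = I - Ps

assert np.allclose(Ps @ Ps, Ps) and np.allclose(Pd @ Pd, Pd)   # idempotent
assert np.allclose(Ps @ Pd, 0 * I)                             # P_s P_d = 0
print(int(round(np.trace(Ps))))                     # → 9 (3 states per site)
```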
The subscripts $s$ and $d$ denote the two subspaces $S$ and $D$, respectively. We now derive the effective Hamiltonian in the subspace $S$ using the Gutzwiller projection method. The specific treatment follows the formalism developed in Ref. \cite{wilson}. The Schr\"{o}dinger eigenvalue equation of the Hamiltonian $H$ given in Eq (\ref{eq:tpq0}) can be written as \begin{equation}\label{eq:schr0} \begin{aligned}{} H|\Psi\rangle = \varepsilon |\Psi\rangle, \end{aligned} \end{equation} where $|\Psi\rangle$ is the wave vector in the Hilbert space and $\varepsilon$ its corresponding energy eigenvalue. Since $P_s + P_d = I$, Eq (\ref{eq:schr0}) may be written as \begin{equation}\label{eq:psd} \begin{aligned}{} H (P_s + P_d)|\Psi\rangle = \varepsilon (P_s + P_d) |\Psi\rangle. \end{aligned} \end{equation} Applying $P_s$ and $P_d$, respectively, to Eq (\ref{eq:psd}) from the left, and taking the properties of $P_s$ and $P_d$ into account, we obtain \[ \begin{bmatrix} P_s H P_s - \varepsilon & P_s H P_d \\ P_d H P_s & P_d H P_d - \varepsilon \end{bmatrix} \begin{bmatrix} |\Psi_s\rangle \\ |\Psi_d\rangle \end{bmatrix} =0, \] where $|\Psi_s\rangle=P_s|\Psi\rangle$ and $|\Psi_d\rangle=P_d|\Psi\rangle$ are the wave vectors in the subspaces $S$ and $D$, respectively. The projected Hamiltonian matrix above can be block-diagonalized as \[ \begin{bmatrix} H_s - \varepsilon & 0 \\ 0 & H_d - \varepsilon \end{bmatrix} \begin{bmatrix} |\Psi_s\rangle \\ |\Psi_d\rangle \end{bmatrix} =0, \] where \begin{align}\label{eq:hshd} H_s &=P_s H P_s+P_s H P_d (\varepsilon-P_d H P_d)^{-1}P_d H P_s, \\\label{eq:hshd_d} H_d &=P_d H P_d+P_d H P_s (\varepsilon-P_s H P_s)^{-1}P_s H P_d, \end{align} are the projected Hamiltonians in the subspaces. In the following discussion, we are only interested in $H_s$, the effective Hamiltonian in the singly occupied subspace.
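The downfolding formula Eq (\ref{eq:hshd}) is an exact identity for any Hamiltonian and any pair of complementary projectors. The following sketch (a generic toy example, unrelated to the specific lattice Hamiltonian of the text) verifies that, for an exact eigenpair $(\varepsilon, |\Psi\rangle)$ of $H$, the energy-dependent effective Hamiltonian reproduces $H_s |\Psi_s\rangle = \varepsilon |\Psi_s\rangle$ with $|\Psi_s\rangle = P_s |\Psi\rangle$.

```python
# Verify the downfolding identity of Eq (hshd) on a random 8x8 symmetric
# "Hamiltonian": if H|psi> = e|psi>, then
#   H_s(e) = PHP + PHQ (e - QHQ)^{-1} QHP  satisfies  H_s(e) P|psi> = e P|psi>.
# Generic toy example, not the lattice model of the text.
import numpy as np

rng = np.random.default_rng(0)
dim, ks = 8, 3
A = rng.standard_normal((dim, dim))
H = (A + A.T) / 2                                  # random symmetric H

P = np.diag([1.0] * ks + [0.0] * (dim - ks))       # projector onto subspace S
Q = np.eye(dim) - P                                # projector onto subspace D

evals, evecs = np.linalg.eigh(H)
e, psi = evals[0], evecs[:, 0]                     # one exact eigenpair

M = e * np.eye(dim) - Q @ H @ Q                    # generically invertible
Hs = P @ H @ P + P @ H @ Q @ np.linalg.solve(M, Q @ H @ P)

assert np.allclose(Hs @ (P @ psi), e * (P @ psi))
print("downfolding identity verified")
```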
Operating the projection operators $P_s$ and $P_d$ on $H$ in Eq (\ref{eq:tpq0}) from both the left and right, after some straightforward manipulations, we can obtain the following projected Hamiltonian matrix elements \begin{align} P_s H P_s = &T_{0}\sum_{i,\sigma}\tilde{n}_{i\sigma} -t\sum_{ij,\sigma}\gamma_{ij} \tilde{c}^{\dagger}_{i \sigma} \tilde{c}_{j \sigma} +\frac{1}{2} V \sum_{ij,\sigma\sigma^{'}} \gamma_{ij} \tilde{n}_{i \sigma} \tilde{n}_{j \sigma^{'}} \nonumber \\ &+\frac{1}{2} Y \sum_{ij,\sigma} \gamma_{ij} \big( c^{\dagger}_{i\sigma} c_{j\sigma} c^{\dagger}_{j\bar{\sigma}} c_{i\bar{\sigma}} -\tilde{n}_{i \sigma} \tilde{n}_{j \sigma}\big), \label{eq:shs} \\ P_s H P_d =&-(t-X) \sum_{ij,\sigma}\gamma_{ij} \tilde{c}^{\dagger}_{i \sigma} c_{j \sigma} n_{j\bar{\sigma}}, \label{eq:shd} \\ P_d H P_s =&-(t-X) \sum_{ij,\sigma}\gamma_{ij} n_{i\bar{\sigma}} c^{\dagger}_{i \sigma} \tilde{c}_{j \sigma}, \label{eq:dhs} \\ P_d H P_d = &-(t-2X)\sum_{ij,\sigma}\gamma_{ij} n_{i\bar{\sigma}} c^{\dagger}_{i \sigma} c_{j \sigma} n_{j\bar{\sigma}} +(U+2T_0) \sum_{i} n_{i \uparrow}n_{i \downarrow} \nonumber \\ &+(2 V-Y) \sum_{ij}\gamma_{ij} n_{i\uparrow} n_{i\downarrow}\big( n_{j\uparrow}+ n_{j\downarrow} - n_{j\uparrow} n_{j\downarrow} \big) \nonumber \\ &+\frac{1}{2} Y \sum_{ij,\sigma}\gamma_{ij} c^{\dagger}_{i\sigma} c_{j\sigma} c^{\dagger}_{i\bar{\sigma}} c_{j\bar{\sigma}},\label{eq:dhd} \end{align} where $\tilde{c}^{\dagger}_{i \sigma}= c^{\dagger}_{i \sigma}(1-n_{i\bar{\sigma}})$, $\tilde{c}_{i \sigma}= c_{i \sigma}(1-n_{i\bar{\sigma}})$, and $\tilde{n}_{i}=\sum_{\sigma} \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{i \sigma}$ are the projected creation, annihilation, and number operators of electrons. Note that the $V$- and $Y$- terms do contribute to the zeroth-order of the effective Hamiltonian $P_s H P_s$. These are the terms neglected in the $t$-$J$ model. 
Note also that the $X$ and $U$-terms do not survive in the expression of $P_s H P_s$, indicating that they are not operators of the subspace $S$. As we will see below, however, they do contribute to the first-order terms in the perturbation expansion when the projected Hamiltonian matrix is diagonalized. With the results above, we are now ready to obtain the effective Hamiltonian $H_s$. For convenience, we rewrite \begin{align} &H_s = H^{(0)}_s+H^{(1)}_s, \label{eq:hs} \\ &H^{(0)}_s=P_s H P_s, \label{eq:hs0} \\ &H^{(1)}_s=P_s H P_d (\varepsilon-P_d H P_d)^{-1}P_d H P_s. \label{eq:hs1} \end{align} The zeroth-order term $H^{(0)}_s$ has been obtained in Eq (\ref{eq:shs}). In order to construct the effective Hamiltonian $H_s$ in the subspace $S$, the key operator to be determined is the inverse operator $(\varepsilon-P_d H P_d)^{-1}$. Since this is an operator in the subspace $D$, using the completeness of the wave vectors $|\Psi_{d}\rangle$, we have \begin{equation}\label{eq:inverse} \begin{aligned}{} (\varepsilon-P_d H P_d)^{-1}=\sum_{r,r'} |\Psi^r_{d}\rangle \langle\Psi^r_{d}| (\varepsilon-P_d H P_d)^{-1} |\Psi^{r'}_{d}\rangle \langle\Psi^{r'}_{d}|, \end{aligned} \end{equation} where the superscripts $r$ and $r'$ are the labels of the states in the subspace $D$. Please note that these states in $D$ must be virtual because this inverse operator is in the middle of $H^{(1)}_s$, which is an operator of $S$. As can be seen from Eq (\ref{eq:dhd}), the expression of $P_d H P_d$ is complicated. 
Nevertheless, in the case where there is one virtual doubly occupied site \cite{onevirtualdstate}, the matrix element reads \begin{equation} \begin{aligned}{} \langle\Psi^r_{d}| (\varepsilon &-P_d H P_d)^{-1} |\Psi^{r'}_{d}\rangle = \langle\Psi^r_{d}|\Big[\varepsilon-2T_0-U'+ \\ &\sum_{ij,\sigma}\gamma_{ij} c^{\dagger}_{i \sigma} c_{j \sigma} \Big( (t-2X) n_{i\bar{\sigma}} n_{j\bar{\sigma}} -\frac{1}{2} Y c^{\dagger}_{i\bar{\sigma}} c_{j\bar{\sigma}} \Big) \Big]^{-1}|\Psi^{r'}_{d}\rangle, \end{aligned} \end{equation} where $U'=U+z' (2V-Y)$, $0 \leq z' \leq z$, and $z$ is the coordination number. If we consider the parameters $\varepsilon-2T_0$, $t-2X$, and $Y$ to be small quantities compared with $U'$, we can perform a perturbation expansion with respect to $1/U'$. To the leading term in $1/U'$, we have \begin{align}\nonumber (\varepsilon &-P_d H P_d)^{-1} \\ \nonumber &=\sum_{r,r'} |\Psi^r_{d}\rangle \langle\Psi^r_{d}| \Big[-\frac{1}{U'}\Big(1+O\big(\frac{\varepsilon-2T_0}{U'},\frac{t-2X}{U'},\frac{Y}{U'}\big)\Big)^{-1}\Big] |\Psi^{r'}_{d}\rangle \langle\Psi^{r'}_{d}| \\\nonumber &= \Big[-\frac{1}{U'}\Big(1+O\big(\frac{\varepsilon-2T_0}{U'},\frac{t-2X}{U'},\frac{Y}{U'}\big)\Big)^{-1}\Big] \sum_{r,r'} |\Psi^r_{d}\rangle \langle\Psi^{r'}_{d}| \delta_{r,r'} \\\nonumber &= -\frac{1}{U'}\Big(1+O\big(\frac{\varepsilon-2T_0}{U'},\frac{t-2X}{U'},\frac{Y}{U'}\big)\Big)^{-1} \\ &=-\frac{1}{U'}+O\big(\frac{\varepsilon-2T_0}{U^{'2}},\frac{t-2X}{U^{'2}},\frac{Y}{U^{'2}}\big).
\label{eq:inverse2} \end{align} Substituting (\ref{eq:inverse2}) into Eq (\ref{eq:hs1}) and using the expressions of $P_s H P_d$ and $P_d H P_s$ in Eqs (\ref{eq:shd}) and (\ref{eq:dhs}), we obtain the second term of $H_s$ in the form \begin{equation}\label{eq:j0_0} \begin{aligned}{} H^{(1)}_s &= -J_0 \sum_{ij,\sigma} \sum_{i'j',\sigma^{'}} \gamma_{ij}\gamma_{i'j'} \tilde{c}^{\dagger}_{i \sigma} c_{j \sigma} n_{j \bar{\sigma}} n_{i' \bar{\sigma}'} c^{\dagger}_{i' \sigma'} \tilde{c}_{j'\sigma'}, \end{aligned} \end{equation} where $J_0=(t-X)^2/U'$. In general, Eq (\ref{eq:j0_0}) represents a term with four sites $i,j,i',j'$ involved. In the most general case where $i\neq j\neq i'\neq j'$, it is a four-site term and an operator of the subspace $D$. Therefore, we examine its special cases where two of the four sites are in fact the same site. Since $\langle ij \rangle$ and $\langle i'j' \rangle$ are two nearest-neighbor pairs, there are only four such possibilities:\\ 1. $i$ and $i'$ are the same site; \\ 2. $i$ and $j'$ are the same site; \\ 3. $j$ and $i'$ are the same site; \\ 4. $j$ and $j'$ are the same site.\\ \noindent Further examination reveals that cases $1$ and $4$ result in a zero operator, while case $2$ is still an operator in the subspace $D$. Only case 3 with $i'=j$ provides a meaningful contribution to the operator in the subspace $S$ \begin{equation}\label{eq:j0_1} \begin{aligned}{} H^{(1)}_{s} &=-J_0 \sum_{ijk,\sigma} \gamma_{ij}\gamma_{jk} \tilde{c}^{\dagger}_{i \sigma} \big( \tilde{n}_{j \bar{\sigma}} \tilde{c}_{k \sigma} + c_{j \sigma} c^{\dagger}_{j \bar{\sigma}} \tilde{c}_{k \bar{\sigma}}\big). \end{aligned} \end{equation} Please note that in Eq (\ref{eq:j0_1}) the site $j$ is the nearest neighbor of both sites $i$ and $k$. Therefore, the two-site term can only result from the case of $i=k$ in the summation.
Writing the two-site and three-site terms separately, we have \begin{equation}\label{eq:j0_1b} \begin{aligned}{} &H^{(1)}_{s}=(H^{(1)}_{s})_{2site}+(H^{(1)}_{s})_{3site}, \end{aligned} \end{equation} where \begin{align}{} &(H^{(1)}_{s})_{2site} =-J_0 \sum_{ij,\sigma} \gamma_{ij}\big( \tilde{n}_{i \sigma} \tilde{n}_{j \bar{\sigma}} + c^{\dagger}_{i \sigma} c_{j \sigma} c^{\dagger}_{j \bar{\sigma}} c_{i \bar{\sigma}}\big), \label{eq:hs_2} \\ &(H^{(1)}_{s})_{3site} =-J_0 \sum_{i\neq k,j,\sigma} \gamma_{ij}\gamma_{jk} \tilde{c}^{\dagger}_{i \sigma}\big( \tilde{n}_{j \bar{\sigma}} \tilde{c}_{k \sigma} + c_{j \sigma} c^{\dagger}_{j \bar{\sigma}} \tilde{c}_{k \bar{\sigma}}\big). \label{eq:hs_3} \end{align} Please note that the summation in Eq (\ref{eq:hs_3}) excludes the case $i=k$. This three-site term is usually ignored in the discussion of the $t$-$J$ model. We will also neglect this three-site term in the subsequent discussions. Substituting the expression of $H^{(0)}_s$ in Eq (\ref{eq:shs}) and $H^{(1)}_s \simeq (H^{(1)}_{s})_{2site} $ in Eq (\ref{eq:hs_2}) into Eq (\ref{eq:hs}), we finally obtain, to the order of two-site interactions terms, the effective Hamiltonian $H_s$ \begin{equation}\label{eq:tpq10} \begin{aligned}{} H_s &= T_{0} \sum_{i\sigma} \tilde{n}_{i\sigma}+ \sum_{ij,\sigma}\gamma_{ij} \Big(-t \, \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} - \frac{1}{2} (2J_0-Y)\, c^{\dagger}_{i\sigma} c^{\dagger}_{j \bar{\sigma}} c_{i \bar{\sigma}} c_{j\sigma} \\ &+ \frac{1}{2} (V-Y)\, \tilde{n}_{i \sigma} \tilde{n}_{j \sigma} + \frac{1}{2} (V-2J_0)\, \tilde{n}_{i \sigma}\tilde{n}_{j \bar{\sigma}}\Big), \end{aligned} \end{equation} which can be written as \begin{equation}\label{eq:tpq1} \begin{aligned}{} H_s &= T_{0} \sum_{i\sigma} \tilde{n}_{i\sigma}+ \sum_{ij,\sigma}\gamma_{ij} \Big(-t \, \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} - \frac{1}{2} J\, c^{\dagger}_{i\sigma} c^{\dagger}_{j \bar{\sigma}} c_{i \bar{\sigma}} c_{j\sigma} \\ &+ \frac{1}{2} p\, 
\tilde{n}_{i \sigma} \tilde{n}_{j \sigma} + \frac{1}{2} q\, \tilde{n}_{i \sigma}\tilde{n}_{j \bar{\sigma}}\Big), \end{aligned} \end{equation} with the coefficients given by the following equations \begin{equation}\label{eq:pars} \begin{aligned}{} p &=V-Y, \\ q &=V-2 J_{0}, \,\, J_{0} =(t-X)^2/U',\,\, U'=U+z'(2V-Y), \\ J &=p-q, \end{aligned} \end{equation} where $0 \leq z'\leq z$ as discussed above. Eq (\ref{eq:tpq1}) is the effective Hamiltonian in the subspace of singly occupied states. Since it is projected from a more general Hamiltonian retaining all terms up to the two-site interactions, we believe that Eq (\ref{eq:tpq1}) can describe the low-energy physics of the interacting electron systems in solids more accurately and completely. Moreover, there is no precondition imposed on the electron density, $n$, so the above derivation is applicable to systems at any doping level. Please note that by rescaling $H_s$ with $t$, this model has only three dimensionless independent parameters $\bar{T}_0=T_0/t$, $\bar{p}=p/t$, and $\bar{q}=q/t$. One may expect that the system will exhibit ferromagnetism when $\bar{q}$ is sufficiently larger than $\bar{p}$. On the other hand, the system will exhibit antiferromagnetism when $\bar{p}$ is sufficiently larger than $\bar{q}$. Our studies demonstrate that this is true. More interestingly, our study also shows that this model exhibits superconductivity for some parameter range of $\bar{p} > \bar{q}$. \section{The $t$-$J$ Model} Apparently, Eq (\ref{eq:tpq1}) reduces to the $t$-$J$ model with $J=2J_0$ in the case of $X=Y=V=0$. We will show that when $Y=V$, Eq (\ref{eq:tpq1}) will reduce to the $t$-$J$ model as well (with $J=2J_0-V$ in this case). 
Making use of the following operator identities \begin{align}{} \label{eq:rel0} &c^{\dagger}_{i\uparrow}c_{i\downarrow}=S^{+}_{i} =S^{x}_{i}+iS^{y}_{i}, \\ &c^{\dagger}_{i\downarrow}c_{i\uparrow}=S^{-}_{i} =S^{x}_{i}-iS^{y}_{i}, \\ &n_{i\uparrow}-n_{i\downarrow}=2S_{i}^{z}, \end{align} we obtain \begin{align}{} \label{eq:rel1} &\sum_{\sigma} c^{\dagger}_{i \sigma} c_{j \sigma} c^{\dagger}_{j \bar{\sigma}} c_{i \bar{\sigma}} =-2\big(S_i^x S_j^x+S_i^y S_j^y\big), \\ \label{eq:rel2} &\sum_{\sigma}\big( \tilde{n}_{i\sigma} \tilde{n}_{j\sigma}- \tilde{n}_{i\sigma} \tilde{n}_{j\bar{\sigma}}\big)=4 S_i^z S_j^z, \end{align} where $S_i^\alpha$, $\alpha=x,y,z$, are the spin operators at site $i$. Furthermore, it is easy to verify that \begin{align}{} \label{eq:rel3} &\sum_{\sigma}\big( \tilde{n}_{i\sigma} \tilde{n}_{j\sigma}+ \tilde{n}_{i\sigma} \tilde{n}_{j\bar{\sigma}}\big)= \tilde{n}_{i} \tilde{n}_{j}, \end{align} where $\tilde{n}_{i}=\sum_{\sigma}\tilde{n}_{i\sigma}$. Eqs (\ref{eq:rel1})-(\ref{eq:rel3}) immediately lead to the following well-known relation \begin{align}{} \sum_{\sigma} \big( c^{\dagger}_{i \sigma} c_{j \sigma} c^{\dagger}_{j \bar{\sigma}} c_{i \bar{\sigma}} + \tilde{n}_{i \sigma} \tilde{n}_{j \bar{\sigma}} \big) =-2(S_{i}\cdot S_{j} -\frac{1}{4} \tilde{n}_{i} \tilde{n}_{j}).\label{eq:charge_spin} \end{align} Writing the two terms in the summation over $\sigma'$ explicitly, the zeroth-order effective Hamiltonian Eq (\ref{eq:shs}) can be written as \begin{align}\nonumber H_s^{(0)}&=P_s H P_s =T_{0}\sum_{i,\sigma}\tilde{n}_{i\sigma} \\\nonumber &+\sum_{ij\sigma}\gamma_{ij}\Big( -t\tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} +\frac{1}{2} Y \big( c^{\dagger}_{i\sigma} c_{j\sigma} c^{\dagger}_{j\bar{\sigma}} c_{i\bar{\sigma}} + \tilde{n}_{i \sigma} \tilde{n}_{j \bar{\sigma}}\big) \\ & +\frac{1}{2} (V-Y) \big( \tilde{n}_{i \sigma} \tilde{n}_{j \sigma}+ \tilde{n}_{i \sigma} \tilde{n}_{j \bar{\sigma}}\big) \Big).
\label{eq:shs_2} \end{align} Therefore, using Eqs (\ref{eq:rel3}) and (\ref{eq:charge_spin}), Eq (\ref{eq:shs_2}) takes the form \begin{align}\nonumber H_s^{(0)}&= T_{0}\sum_{i,\sigma}\tilde{n}_{i\sigma} -t\sum_{ij,\sigma}\gamma_{ij} \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} +\sum_{ij}\gamma_{ij}\Big( -Y \big( S_{i}\cdot S_{j} -\frac{1}{4} \tilde{n}_{i} \tilde{n}_{j}\big) \\ &+ \frac{1}{2} (V-Y)\,\, \tilde{n}_{i} \tilde{n}_{j} \Big).\label{eq:shs_3} \end{align} We point out again that the terms parametrized by $Y$ and $(V-Y)$ in Eq (\ref{eq:shs_3}) are the neglected terms in the $t$-$J$ model. Similarly, for the first-order term $(H^{(1)}_{s})_{2site}$ given in Eq (\ref{eq:hs_2}), we have \begin{equation} \begin{aligned}{} (H^{(1)}_{s})_{2site} &=2J_0 \sum_{ij} \gamma_{ij}( S_{i}\cdot S_{j} -\frac{1}{4} \tilde{n}_{i} \tilde{n}_{j} ).\label{eq:hs1_2} \end{aligned} \end{equation} Summing up the terms in Eqs (\ref{eq:shs_3}) and (\ref{eq:hs1_2}), the effective Hamiltonian can be expressed in the $t$-$J$-model form \begin{equation}\label{eq:tpq2} \begin{aligned}{} H_s &= T_{0} \sum_{i\sigma} \tilde{n}_{i\sigma} -t \,\sum_{ij,\sigma}\gamma_{ij} \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} +\sum_{ij}\gamma_{ij}\Big( (2J_0-Y)S_{i}\cdot S_{j} \\ &-\frac{1}{4}(2J_0+Y-2V) \tilde{n}_{i} \tilde{n}_{j} \Big). \end{aligned} \end{equation} The above equation shows that when $V=Y$, the model reduces to the $t$-$J$ model with $J=2J_0-V$. However, it has been argued that $V \gg Y$ in the usual cases \cite{hubbard}. Also, since the value of $V$ is usually large, the condition of $J=2J_0-V>0$ (necessary for antiferromagnetic and superconducting states) is difficult to realize. Therefore, the condition $V=Y$ may rarely be physically satisfied.
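The operator identities (\ref{eq:rel1})-(\ref{eq:charge_spin}) used in this rewriting can be checked directly on a small Fock space. The sketch below (illustrative only, not part of the derivation) builds Jordan-Wigner fermion matrices for a two-site system and verifies Eq (\ref{eq:charge_spin}) as an exact operator identity.

```python
# Operator-level check of Eq (charge_spin) on a two-site Fock space, using
# Jordan-Wigner matrices for the four fermion modes (1up, 1dn, 2up, 2dn).
# Illustrative sketch only.
import numpy as np
from functools import reduce

I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])             # single-mode annihilator

def c(mode, nmodes=4):                             # Jordan-Wigner c_mode
    return reduce(np.kron, [Z] * mode + [a] + [I2] * (nmodes - mode - 1))

def dag(x):
    return x.conj().T

cu1, cd1, cu2, cd2 = (c(m) for m in range(4))
nu1, nd1, nu2, nd2 = (dag(x) @ x for x in (cu1, cd1, cu2, cd2))

# spin operators, Eq (rel0)
Sp1, Sp2 = dag(cu1) @ cd1, dag(cu2) @ cd2
Sx1, Sy1, Sz1 = (Sp1 + dag(Sp1)) / 2, (Sp1 - dag(Sp1)) / 2j, (nu1 - nd1) / 2
Sx2, Sy2, Sz2 = (Sp2 + dag(Sp2)) / 2, (Sp2 - dag(Sp2)) / 2j, (nu2 - nd2) / 2

# projected number operators, n~_{i sigma} = n_{i sigma}(1 - n_{i sigma-bar})
I16 = np.eye(16)
ntu1, ntd1 = nu1 @ (I16 - nd1), nd1 @ (I16 - nu1)
ntu2, ntd2 = nu2 @ (I16 - nd2), nd2 @ (I16 - nu2)
nt1, nt2 = ntu1 + ntd1, ntu2 + ntd2

# left-hand side of Eq (charge_spin), summed over sigma, for i=1, j=2
lhs = (dag(cu1) @ cu2 @ dag(cd2) @ cd1 + ntu1 @ ntd2
       + dag(cd1) @ cd2 @ dag(cu2) @ cu1 + ntd1 @ ntu2)
rhs = -2 * (Sx1 @ Sx2 + Sy1 @ Sy2 + Sz1 @ Sz2 - nt1 @ nt2 / 4)

assert np.allclose(lhs, rhs)
print("Eq (charge_spin) holds as an operator identity")
```

Note that the check is performed on the full 16-dimensional Fock space, so the identity holds even before the Gutzwiller projection is applied.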
Finally, using the parameters $p$ and $q$ defined in Eq (\ref{eq:pars}), the model can be written as \begin{equation}\label{eq:tpq_3} \begin{aligned}{} H_s &=T_{0} \sum_{i\sigma} \tilde{n}_{i\sigma} -t \,\sum_{ij,\sigma}\gamma_{ij} \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} +\sum_{ij}\gamma_{ij}\Big[ J\, S_{i}\cdot S_{j} +\frac{1}{4}J'\, \tilde{n}_{i} \tilde{n}_{j} \Big], \end{aligned} \end{equation} where $J=p-q$ and $J'=p+q$. When $J'=-J$, which means $p=0$ (i.e. the condition $V=Y$), one has the $t$-$J$ model. A comparison of Eqs (\ref{eq:tj}) and (\ref{eq:tpq_3}) reveals how the inclusion of the two-site nearest-neighbor Coulomb interaction terms changes the $t$-$J$ model: The coefficient of the term $\tilde{n}_{i}\tilde{n}_{j}$ changes from $-J/4$ to a positive independent coefficient $J'=p+q$. This is the only change, and no new terms are generated. Our Green function analysis shows that a reasonable superconductivity dome appears at a finite value of $p$ ($p/q \sim 2.5$), which again seems to indicate that the coefficient of $\tilde{n}_{i}\tilde{n}_{j}$ of the conventional $t$-$J$ model, $-J/4$, is not in the physical range. Eqs (\ref{eq:tpq1}) and (\ref{eq:tpq_3}) are two alternative expressions of the new model, which reveal the origin of the spin correlations from two different though equivalent perspectives. For two nonempty nearest-neighbor sites, there are two different spin configurations: the spin directions are either the same or the opposite. Eq (\ref{eq:tpq1}) indicates that the energies for the two spin configurations are, in general, different ($p\neq q$), and that the relative direction of the spins on the nearest-neighbor sites is determined by the competition between the energies of the two spin configurations. 
On the other hand, as shown in Eq (\ref{eq:tpq_3}), if the interaction terms are rearranged such that the energy $J S_{i}^{z} S_{j}^{z}$ from the $p$- and $q$-terms is combined into the $J$-term to form a complete Heisenberg interaction term \cite{direct}, the energies of the two spin configurations become the same, namely, $J'\tilde{n}_{i}\tilde{n}_{j}/4$. Then the spin correlations can be attributed to the Heisenberg interaction. The magnetic ordering of the model is determined by the coefficient of the Heisenberg term, $J=p-q$. A natural question arises: What if $p \simeq q$? When $p=q$, Eq (\ref{eq:tpq1}) or (\ref{eq:tpq_3}) becomes \begin{equation}\label{eq:tpq_00} \begin{aligned}{} H &=T_{0} \sum_{i\sigma} \tilde{n}_{i\sigma} -t \,\sum_{ij,\sigma}\gamma_{ij} \tilde{c}^{\dagger}_{i \sigma}\tilde{c}_{j \sigma} +\frac{1}{2}p\,\sum_{ij}\gamma_{ij} \tilde{n}_{i} \tilde{n}_{j}. \end{aligned} \end{equation} As the results suggest, there should be neither long-range nor short-range magnetic order in the model described in Eq (\ref{eq:tpq_00}). This simplified model describes solely the electrical properties of the interacting electron systems. Please note that Eq (\ref{eq:tpq_00}) is a Hamiltonian in the subspace of singly occupied states, which is different from the usual generalized Hubbard model with the $V$-term. \section{Conclusion} Starting with a generalized Hubbard model with the next-leading-order Coulomb interaction terms, we have systematically developed a new model for the strongly correlated electrons in solids using the Gutzwiller projection approach. Our study demonstrates that the origin of the Heisenberg interaction term can be attributed to the competition between the energies of the two spin configurations of the nearest-neighbor electrons. A comparison of the conventional $t$-$J$ model and the new model has been discussed. It seems that the coefficient of one of the interaction terms in the $t$-$J$ model, i.e.
$\tilde{n}_{i}\tilde{n}_{j}$, is not in the physical range, which may be the reason why the superconductivity dome has not been obtained from the $t$-$J$ model. We have studied the magnetism and superconductivity of this new model. The main results of this research have been summarized in a recent paper \cite{tao2}. Our study indicates that this model provides a more complete description of the physics of strongly correlated electron systems. The system is not necessarily in a ferromagnetic state as temperature $T\rightarrow 0$ at any doping level $\delta\geq 0$. The system, however, must be in an antiferromagnetic state at the origin of the doping-temperature ($\delta$-$T$) plane ($T\rightarrow 0$, $\delta=0$). Moreover, the model exhibits superconductivity in a doped region at sufficiently low temperatures. The details of the magnetic and superconducting properties of this model will be presented in forthcoming papers. Finally, since Eq (\ref{eq:tpq_3}) and the $t$-$J$ model differ only by the coefficient of the term $\tilde{n}_{i}\tilde{n}_{j}$, most of the existing numerical tools (computer programs) developed for the $t$-$J$ model can be directly applied to Eq (\ref{eq:tpq_3}) without any major changes.
\section{Introduction} The \emph{core entropy} $h(f)$ of a postcritically finite (PCF) polynomial $f:\mathbb{C} \to \mathbb{C}$ is the topological entropy of the restriction of $f$ to its Hubbard tree. It is well known that for any PCF polynomial $f$, the number $\lambda(f) = e^{h(f)}$, which is called the \emph{growth rate} of $f$, is a \emph{weak Perron number} -- a real, positive algebraic integer that is greater than or equal to the moduli of its Galois conjugates. The study of algebraic properties of growth rates of polynomial maps has gained renewed interest in recent years, in particular due to W. Thurston, who plotted the closure of the set of Galois conjugates of growth rates of real PCF quadratic polynomials, defining what is known as the \emph{Thurston set} or \emph{entropy spectrum}, also nicknamed ``the bagel'' for its shape (see Figure \ref{F:bagel3}, \cite{thurston}, \cite{TiozzoGaloisConjugates}). All Galois conjugates of the growth rate are also eigenvalues of the transition matrix associated to the Markov partition for the polynomial. From a dynamical point of view, eigenvalues of the Markov transition matrix determine statistical properties of the dynamical system with respect to the measure of maximal entropy: for instance, simplicity of the leading eigenvalue is equivalent to ergodicity, and absence of other eigenvalues of maximal modulus is equivalent to mixing; the spectral gap yields the rate of mixing (see e.g. \cite[Ch.1]{Baladi-book}). On the other hand, another development has been the study of core entropy for complex quadratic polynomials, extending the well-known theory of topological entropy for real unimodal maps, going back to Milnor-Thurston \cite{MilnorThurston}. W. Thurston initiated the study of core entropy in his Cornell seminar \cite{ThurstonPeople}, leaving several open questions.
For quadratic polynomials, each rational angle $\theta \in \mathbb{Q}/\mathbb{Z}$ determines a postcritically finite parameter $c_{\theta}$ in the Mandelbrot set, and we denote by $h(\theta)$ the core entropy of the polynomial $f_{\theta}(z) := z^2 + c_{\theta}$. It was proven in \cite{TiozzoContinuity}, \cite{DudkoSchleicher} that the core entropy function $h:\mathbb{Q} / \mathbb{Z} \to \mathbb{R}$ extends to a continuous function from $\mathbb{R}/\mathbb{Z}$ to $\mathbb{R}$. See also \cite{GaoYanTiozzo} for the higher degree case. The goal of this paper is to study the eigenvalues associated to PCF quadratic polynomials; in particular, we associate to any principal vein in the Mandelbrot set a fractal 3-dimensional object, generalizing what Thurston called the \emph{Master Teapot}, and study its geometry. As observed in \cite[Figure 7.7]{thurston}, there appear to be two different patterns. On the one hand, the roots outside the unit circle seem to move continuously with the parameter\footnote{See also the video \url{https://vimeo.com/259921275}. For visualizations of the Master Teapot, see also \url{http://www.math.toronto.edu/tiozzo/teapot.html}. For the Thurston set and various related sets, see e.g. \url{http://www.math.toronto.edu/tiozzo/gallerynew.html}.}; on the other hand, roots inside the circle do not move continuously, but rather they display \emph{persistence}: namely, the set of roots increases as one progresses towards the tip of the vein. In this paper, we will rigorously prove these two phenomena for the teapots associated to principal veins in the Mandelbrot set. \subsection{Continuity of eigenvalues} For a postcritically finite polynomial $f$, let $T_f$ be its Hubbard tree. The postcritical set of $f$ together with the branch points of $T_f$ determine a Markov partition for the action of $f : T_f \to T_f$. Denote by $M_f$ the transition matrix associated to this Markov partition. 
We consider the set $Z(f)$ of eigenvalues of $M_f$: $$Z(f) \coloneqq \{\lambda \in \mathbb{C} \mid \textrm{det}(M_f - \lambda I) = 0\}.$$ The growth rate of $f$ is one element of the set $Z(f)$. For a rational angle $\theta \in \mathbb{Q}/\mathbb{Z}$, we define $Z(\theta)$ to be $Z(f_{c_\theta})$. Denote by $Com^+(\mathbb{C})$ the collection of compact subsets of $\mathbb{C}\setminus\mathbb{D}$, with the Hausdorff topology. Define $Z^+ : \mathbb{Q} / \mathbb{Z} \to Com^+(\mathbb{C})$ as the unit circle together with the set of eigenvalues of modulus greater than $1$, i.e. $$Z^+(\theta) := S^1 \cup \left(Z(\theta) \cap (\mathbb{C} \setminus \overline{\mathbb{D}})\right).$$ The first main result is the following: \begin{theorem} \label{t:continuousdiskextension} The map $Z^+ : \mathbb{Q} / \mathbb{Z} \to Com^+(\mathbb{C})$ extends to a continuous map $\mathbb{R}/\mathbb{Z} \to Com^+(\mathbb{C})$. \end{theorem} Since the growth rate is the leading eigenvalue, this generalizes the main theorem of \cite{TiozzoContinuity} to all eigenvalues. In the proof, we adapt the combinatorial tools from there, such as the \emph{wedge} and the \emph{spectral determinant}, to the new situation. \subsection{Entropy algorithms} The second focus of this paper is relating various algorithms for computing the core entropy of a quadratic polynomial. We prove that they all produce the same polynomials, up to cyclotomic factors. The easiest way to compute the entropy of a PCF map is by using the characteristic polynomial $P_{Mar}$ of the transition matrix. This approach is the simplest, but has several drawbacks: for instance, the shape of the Hubbard tree is not stable under perturbations of the parameter. For this reason, Thurston came up with a different algorithm to compute core entropy (see \cite{ThurstonPeople}, \cite{Gao}), which is more stable, and is used e.g. in \cite{TiozzoContinuity} to prove continuity. 
This gives rise to what we call the \emph{Thurston polynomial} $P_{Th}$. A third way to compute entropy is through the celebrated \emph{kneading theory} of Milnor-Thurston \cite{MilnorThurston}, which applies to real multimodal maps. In this paper, we establish a new version of kneading theory which can be applied to complex polynomials lying on a principal vein. This gives rise to a new \emph{principal vein kneading polynomial} $D(t)$. We developed this version so that it would have the property that the map from itineraries (of the critical point) to the kneading determinants is continuous. This continuity is needed for our proof of Theorem \ref{t:persistence}. We prove that the roots of the polynomials given by these three algorithms coincide, off the unit circle. \begin{theorem} \label{T:equalpolys} For any postcritically finite parameter, the following two polynomials have the same roots off the unit circle: \begin{enumerate} \item the polynomial $P_{Th}(t)$ that we get from Thurston's algorithm; \item the polynomial $P_{Mar}(t)$ that we get from the Markov partition. \end{enumerate} If, furthermore, the parameter is critically periodic and belongs to a principal vein (so that the principal vein kneading polynomial is defined), a third polynomial that has the same roots off the unit circle is \begin{enumerate} \item[(3)] the principal vein kneading polynomial $D(t)$. \end{enumerate} \end{theorem} \begin{figure} \begin{center} \includegraphics[width = 0.8 \textwidth]{one_third.png} \end{center} \caption{The Master Teapot $\Upsilon_{1/3}$ for the $1/3$-vein. 
We plotted the roots associated to all critically periodic parameters with simplified itinerary of period up to $20$, obtaining $\sim 2.8 \times 10^6$ points.} \end{figure} \subsection{Teapots for principal veins} Thirdly, we investigate the multivalued function $Z$ restricted to \emph{principal veins} of the Mandelbrot set; the behavior of $Z$ on a principal vein is encapsulated in the geometry of various \emph{Master Teapots} associated to that vein. For natural numbers $p < q$ with $p$ and $q$ coprime, Branner-Douady \cite{BrannerDouady} showed the existence of the \emph{$\frac{p}{q}$-principal vein}, that is, a continuous arc that connects the ``tip'' of the $\frac{p}{q}$-limb of the Mandelbrot set to the main cardioid. We denote by $\mathcal{V}_{p/q}$ the set of parameters in the Mandelbrot set which lie on the $\tfrac{p}{q}$-principal vein. We are particularly interested in the set $\mathcal{V}_{p/q}^{per}$ of all parameters $c \in \mathcal{V}_{p/q}$ such that the map $f_c:z \mapsto z^2+c$ is strictly critically periodic. Finally, we define $\Theta_{p/q}^{per}$ to be the set of all angles $\theta \in \mathbb{Q}/\mathbb{Z}$ such that the external ray of angle $\theta$ lands at the root of a hyperbolic component on the $\tfrac{p}{q}$-principal vein. For each $\lambda$ that arises as a growth rate associated to the $\tfrac{p}{q}$-principal vein, define $$\mathcal{Z}(\lambda) \coloneqq \{ z \in \mathbb{C} \mid \textrm{det}(M_{\theta} - zI) = 0\textrm{ for every } \theta \in \Theta^{per}_{p/q} \textrm{ such that } \lambda = e^{h(\theta)}\}.$$ Note that $\mathcal{Z}(\lambda)$ equals the set of roots of $\textrm{det}(M_{c(\lambda)} - zI)$, where $c(\lambda)$ is the critically periodic parameter of growth rate $\lambda$ closest to the main cardioid in the vein (cf. Lemma \ref{l:closestrepresentative}). 
\begin{definition} We define the \emph{$\frac{p}{q}$-Master Teapot} to be the set $$ \Upsilon_{p/q} \coloneqq \overline{ \left\{(z,\lambda) \in \mathbb{C} \times \mathbb{R} \mid \lambda = e^{h(\theta)} \textrm{ for some } \theta \in \Theta^{per}_{p/q}, \ z\in \mathcal{ Z}(\lambda) \right\} }, $$ where the overline in the notation above denotes the topological closure. \end{definition} The Persistence Theorem of \cite{BrayDavisLindseyWu} states that if a point $z \in \mathbb{D}$ is in the height-$\lambda$ slice of the Master Teapot $\Upsilon_{1/2}$, then $z$ is also in all the higher slices, i.e. for $z \in \mathbb{D}$, $(z,\lambda) \in \Upsilon_{1/2}$ implies $\{z\} \times [\lambda,2] \subseteq \Upsilon_{1/2}$. The present work generalizes this to all principal veins. A calculation shows that the core entropy of the tip of the $\frac{p}{q}$-principal vein equals $\log \lambda_q$, where $\lambda_q$ is the largest root of the polynomial $P(x) := x^q - x^{q-1} - 2$. We prove the persistence property for all principal veins: \begin{theorem}[Persistence Theorem] \label{t:persistence} Let $p < q$ be coprime, and let $\Upsilon_{p/q}$ be the $\tfrac{p}{q}$-Master Teapot. If a point $(z,\lambda)$ belongs to $\Upsilon_{p/q}$ with $z \in \mathbb{D}$, then the ``vertical segment" $\{z\} \times [\lambda, \lambda_{q}]$ also lies in $\Upsilon_{p/q}$. \end{theorem} Finally, because of renormalization, points in the teapot behave nicely under taking $q^{th}$ roots. \begin{theorem} \label{T:q-root} If $(z,\lambda) \in \Upsilon_{1/2}$ with $|z| \neq 1$, then for any coprime $p < q$, if $w^q = z$ then the point $(w,+\sqrt[q]{\lambda})$ belongs to $\Upsilon_{p/q}.$ \end{theorem} \begin{corollary} \label{C:cylinder} The unit cylinder $S^1 \times [1, \lambda_q]$ is contained in $\Upsilon_{p/q}$. \end{corollary} \subsection{The Thurston set} W. Thurston \cite{thurston} also investigated the one-complex-dimensional set obtained by projecting the Master Teapot (or a variant thereof) to its $z$-coordinate. 
This set displays a lot of structure, and is known as the \emph{Thurston set} or \emph{entropy spectrum}, also nicknamed ``the bagel" for its shape (see Figure \ref{F:bagel3}). The Thurston set has attracted considerable attention recently, and is also related to several other sets defined by taking roots of polynomials with restricted digits, as well as limit sets of iterated function systems (see, among others, \cite{BouschConnexite}, \cite{BouschPaires}, \cite{thompson}, \cite{CalegariKochWalker}, \cite{LindseyWu}, \cite{PerezSilvestri}). In \cite[Appendix]{TiozzoGaloisConjugates}, variations of the Thurston set are proposed and drawn for each principal vein. In particular, one considers the \emph{Thurston set} $\Sigma_{p/q}$ \emph{for the principal $p/q$-vein}, defined as $$\Sigma_{p/q} := \overline{ \left\{z \in \mathbb{C} \mid \textrm{det}(M_{\theta}-zI)=0 \textrm{ for some } \theta \in \Theta^{per}_{p/q} \right\} }.$$ Using Theorem \ref{t:continuousdiskextension}, we obtain \begin{theorem} \label{T:bagel-connected} For any $(p, q)$ coprime, the Thurston set $$\Sigma_{p/q} \cap \{ z \in \mathbb{C} \ : \ |z| \geq 1\}$$ is path connected. \end{theorem} The analogous property for the real case is proven in \cite{TiozzoGaloisConjugates}. \begin{remark} With the above definitions, $\Sigma_{p/q}$ is not the projection of $\Upsilon_{p/q}$ onto the horizontal coordinate. The issue is that multiple different critically periodic parameters in a principal vein can have the same core entropy while having different characteristic polynomials $\chi(t) = \textrm{det}(M_c -t I)$. Rather, $\Sigma_{p/q}$ is the projection of a ``combinatorial" version of the Master Teapot (see Section \ref{ss:combinatorialveins}). This version of the Thurston set differs slightly from the one considered in \cite{BrayDavisLindseyWu}. 
\end{remark} \begin{figure} \begin{center} \includegraphics[width = 0.8 \textwidth]{Bagel-3.pdf} \end{center} \caption{The Thurston set $\Sigma_{1/3}$ associated to the $1/3$-vein.} \label{F:bagel3} \end{figure} \medskip Note that there is a difference between the Galois conjugates of $\lambda$ and the eigenvalues of a matrix $M_\theta$ with $\lambda = e^{h(\theta)}$. In fact, the characteristic polynomial of $M_\theta$ need not be irreducible. Informed by the real case ($\tfrac{p}{q} = \tfrac{1}{2}$), we conjecture: \begin{conjecture} $$\Upsilon_{p/q} = \overline{\{ (z, \lambda) \in \mathbb{C} \times \mathbb{R} \mid \lambda = e^{h(\theta)} \textrm{ for some } \theta \in \Theta^{per}_{p/q}, z \textrm{ is a Galois conjugate of }\lambda \}}.$$ \end{conjecture} \subsection*{Structure of the paper} In Section \ref{S:background-M} we give some background on the Mandelbrot set, while in Section \ref{S:background-graph} we discuss some background on the graphs and combinatorial structures we use. In Section \ref{s:relatingThurstonAndMarkovPolys}, we relate the Thurston polynomial to the Markov polynomial, proving the first part of Theorem \ref{T:equalpolys}. In Sections \ref{sec:coversofthefinitemodel} and \ref{S:continuous-ext} we discuss the dependence of the eigenvalues on the external angle, proving Theorem \ref{t:continuousdiskextension}. Then, in Section \ref{S:kneading-veins} we develop our new kneading theory for principal veins, proving the second part of Theorem \ref{T:equalpolys}. In Section \ref{S:surgery} we discuss how to interpret the Branner-Douady surgery in terms of itineraries, and define the procedure of \emph{recoding} we need to compare itineraries on different veins. In Section \ref{S:renorm} we discuss how to describe renormalization (and tuning) in terms of our kneading polynomials. Using renormalization, we prove Theorem \ref{T:q-root}. In Section \ref{s:persistence}, we prove the Persistence Theorem, Theorem \ref{t:persistence}. 
Finally, in Section \ref{ss:combinatorialveins} we apply these results to combinatorial veins and the Thurston set, proving Theorem \ref{T:bagel-connected}. In the Appendix, we show the useful fact (probably well-known, but we could not find a reference) that the Markov polynomial and the Milnor-Thurston kneading polynomial coincide for real critically periodic parameters. \subsection*{Acknowledgements} G. T. is partially supported by NSERC grant RGPIN-2017-06521 and an Ontario Early Researcher Award ``Entropy in dynamics, geometry, and probability". K. L. is partially supported by NSF grant \#1901247. \section{Background on the Mandelbrot set and veins} \label{S:background-M} \subsection{The Mandelbrot set} \subsubsection{First definitions} Every quadratic polynomial on $\mathbb{C}$ is conformally equivalent to a unique polynomial of the form $f_c(z)=z^2+c$. The \emph{filled Julia set} for $f_c$, denoted $\mathcal{K}(f_c)$, consists of all points $z \in \mathbb{C}$ whose orbit under $f_c$ is bounded; the \emph{Julia set} $\mathcal{J}(f_c)$ is the boundary of $\mathcal{K}(f_c)$. The Mandelbrot set $\mathcal{M}$ is the set of parameters $c$ for which the filled Julia set for the map $f_c$ is connected. A parameter $c \in \mathcal{M}$ is said to be \emph{postcritically finite} if $\{f_c^n(0) \mid n \in \mathbb{N}\}$ is a finite set. A parameter $c \in \mathcal{M}$ is said to be \emph{(strictly) critically periodic} if there exists $n \in \mathbb{N}$ such that $f_c^n(0) = 0$. \subsubsection{Hubbard trees} Let $f_c$ be a quadratic polynomial for which the Julia set is connected and locally connected (hence, also path connected). Then any two points $x, y$ in the filled Julia set are connected by a \emph{regulated arc}, i.e. a continuous arc which lies completely in $\mathcal{K}(f_c)$ and is canonically chosen (see e.g. \cite{DHOrsay}). We denote such a regulated arc by $[x, y]$. 
Then we define the \emph{Hubbard tree} $T_c$ as the union $$T_c := \bigcup_{i, j \geq 0} [f_c^i(0), f_c^j(0)].$$ In particular, if $f_c$ is postcritically finite, the above hypotheses are satisfied, and the Hubbard tree $T_c$ is topologically a finite tree. Moreover, one has $f_c(T_c) \subseteq T_c$. \subsubsection{B\"{o}ttcher coordinates} \label{sss:BottcherCoords} For $c \in \mathcal{M}$, \emph{B\"{o}ttcher coordinates} on $\hat{\mathbb{C}} \setminus \mathcal{K}(f_c)$ are the ``polar coordinates'' induced by the unique Riemann mapping $\Phi_c: \hat{ \mathbb{C}} \setminus \overline{\mathbb{D}} \to \hat{\mathbb{C}} \setminus \mathcal{K}(f_c)$ that satisfies $\Phi_c(\infty) = \infty$ and $\Phi_c'(\infty) = 1$. The map $\Phi_c$ conjugates the squaring map to the dynamics outside of $\mathcal{K}(f_c)$: $f_c \circ \Phi_c(z) = \Phi_c(z^2)$ for $z \in \hat{\mathbb{C}} \setminus \overline{\mathbb{D}}$. Similarly, B\"{o}ttcher coordinates on $\hat{\mathbb{C}} \setminus \mathcal{M}$ come from the unique Riemann mapping $\Phi_{\mathcal{M}} : \hat{ \mathbb{C}} \setminus \overline{\mathbb{D}} \to \hat{\mathbb{C}} \setminus \mathcal{M}$ that satisfies $\Phi_{\mathcal{M}}(\infty) = \infty$ and $\Phi_{\mathcal{M}}'(\infty) = 1$. By Carath\'{e}odory's Theorem, the maps $\Phi_c$ or $\Phi_{\mathcal{M}}$ extend continuously to the unit circle if and only if $\mathcal{K}(f_c)$ or $\mathcal{M}$, respectively, are locally connected. The (dynamical or parameter) ray of angle $\theta$ is said to \emph{land} at $z \in \mathbb{C}$ if $\lim_{r \searrow 1} \Phi(re^{i\theta}) = z$. It is conjectured that $\mathcal{M}$ is locally connected, and it is well-known \cite[Theorem 13.1]{DHOrsay} that every rational parameter ray $R_{\mathcal{M}}(\theta)$ lands. We use $R_c(\theta)$ and $R_{\mathcal{M}}(\theta)$ to denote the dynamical or parameter rays of angle $\theta$. 
For any $c \in \mathcal{M}$, the landing point of $R_c(0)$ is a fixed point of $f_c$ and is called the \emph{$\beta$-fixed point}; the \emph{$\alpha$-fixed point} is the other fixed point of $f_c$. A point $c \in \partial \mathcal{M}$ is said to be \emph{biaccessible} (resp. \emph{$k$-accessible}) if it is the landing point of precisely $2$ (resp. $k$) parameter rays. \subsubsection{Hyperbolic and critically periodic parameters} A parameter $c \in \mathcal{M}$ is said to be \emph{hyperbolic} if the critical point for $f_c$ tends to the (necessarily unique) attracting cycle in $\mathbb{C}$. The hyperbolic parameters of $\mathcal{M}$ form an open set; connected components of this set are called \emph{hyperbolic components}. Each hyperbolic component $H$ is conformally equivalent to $\mathbb{D}$ under the map $\lambda$ which assigns to each $c \in H$ the multiplier of its (unique) attracting cycle. The \emph{center} of $H$ is the parameter $\lambda^{-1}(0)$, and (continuously extending $\lambda^{-1}$ to the unit circle) the \emph{root} of $H$ is $\lambda^{-1}(1)$. Critically periodic parameters are precisely those parameters that are centers of hyperbolic components. The set of all real hyperbolic parameters is dense in $\mathcal{M} \cap \mathbb{R} = [-2,1/4]$; in particular, every component of the interior of $\mathcal{M}$ which meets the real line is hyperbolic \cite{Lyubich}. Every (strictly) critically periodic parameter $c$ is the center of a hyperbolic component of the Mandelbrot set. \subsubsection{Parabolic parameters} \label{sss:parabolic} A parameter $c \in \mathcal{M}$ is called \emph{parabolic} if $f_c$ has a periodic orbit with some root of unity as the multiplier (and such a point is called a parabolic periodic point). The root of each hyperbolic component of $\mathcal{M}$ is a parabolic parameter. Every parabolic periodic cycle of a polynomial attracts the forward orbit of a critical point, so quadratics have at most one parabolic periodic orbit. 
For a parabolic parameter $c$, the unique Fatou component of $\mathcal{K}(f_c)$ containing the critical value of $f_c$ is called the \emph{characteristic Fatou component}, and it has a unique parabolic periodic point on its boundary; this parabolic point is called the \emph{characteristic periodic point} of the parabolic orbit. The characteristic periodic point is the landing point of at least two dynamical rays, and the two rays closest to the critical value on either side are called \emph{characteristic rays} (\cite{Schleicher}). \subsubsection{Rational angles and postcritically finite maps} \label{ss:anglestotrees} A rational angle $\theta = a/b$, written in lowest terms, is periodic (resp. preperiodic) under the doubling map if and only if $b$ is odd (resp. even). If $\theta$ is periodic, the landing point of $R_{\mathcal{M}}(\theta)$ is a parabolic parameter, which is the root of some hyperbolic component. We associate to $\theta$ the map, which we call $f_{\theta}$, that is the center of this hyperbolic component. Note that topological entropy is constant on the closure of a hyperbolic component. If $\theta$ is preperiodic, the landing point of $R_{\mathcal{M}}(\theta)$ is a strictly preperiodic parameter, and we call the map associated to this parameter $f_{\theta}$. \subsection{Veins in parameter space} \label{ss:veins} A \emph{vein} in the Mandelbrot set $\mathcal{M}$ is a continuous, injective arc in $\mathcal{M}$. It is known (\cite[Corollary A]{BrannerDouady}) that there is a vein connecting the landing point in $\mathcal{M}$ of any external ray of angle $p/2^q$, for $p,q \in \mathbb{N}$, to the main cardioid. For any integers $p,q$ such that $0 < p < q$ and $p$ and $q$ are coprime, the \emph{$\frac{p}{q}$-limb} in the Mandelbrot set consists of the set of parameters $c \in \mathcal{M}$ such that $f_c$ has rotation number $\frac{p}{q}$ around the $\alpha$-fixed point of $f_c$. 
In each such limb, there exists a unique parameter $c_{p/q}$ such that the critical point, $0$, maps under $f_{c_{p/q}}$ to the $\beta$-fixed point of $f_{c_{p/q}}$ (i.e. the landing point of the angle $0$ external ray in the dynamical plane) in precisely $q$ steps. The \emph{$\frac{p}{q}$-principal vein}, $0 < p < q$ coprime, which we denote $\mathcal{V}_{p/q}$, is the vein joining $c_{p/q}$ to the main cardioid. The Hubbard tree $T_f$ associated to any point in the $\frac{p}{q}$-principal vein is a $q$-pronged star (see e.g. \cite[Proposition 15.3]{TiozzoThesis}), whose center point $\alpha_f$ is the $\alpha$-fixed point of $f$. Moreover, deleting $\alpha_f$ and $0$ from the Hubbard tree $T_f$ yields a decomposition of $T_f$ into $q+1$ arcs: $$T_f \setminus \{\alpha_f, 0\} = I_0 \sqcup I_1 \sqcup \ldots \sqcup I_q$$ where the critical point, $0$, separates $I_0$ and $I_1$, the $\alpha$-fixed point separates $I_1, \ldots, I_q$, and the dynamics are: \begin{itemize} \item $f(I_0) \subseteq I_0 \cup I_1 \cup I_2$, \item $f: I_k \to I_{k+1}$ homeomorphically for $1 \leq k \leq q-1$, \item $f:I_q \to I_0 \cup I_1$ homeomorphically. \end{itemize} (See Figure \ref{f:Htree1/5}, which shows the combinatorial model of the Hubbard tree for angle $\theta=1/5$.) 
\begin{figure} \begin{minipage}{0.49 \textwidth} \includegraphics[width = 0.99 \textwidth]{vein3.jpg} \end{minipage} \begin{minipage}{0.49 \textwidth} \begin{tikzpicture}[scale=1.75] \filldraw [black] (-.35,.6) circle (1pt); \node at (-.5,.43) {$\alpha$}; \filldraw [black] (0,0) circle (1pt); \node at (.2,.1) {$x_0$}; \filldraw [black] (-.15,1) circle (1pt); \node at (-.15,1.2) {$x_1$}; \filldraw [black] (-1.2,.7) circle (1pt); \node at (-1.4,.7) {$x_2$}; \filldraw [black] (.76,-.6) circle (1pt); \node at (1.06,-.6) {$x_3$}; \node at (.5,-.2) {$I_0$}; \node at (-.25,.2) {$I_1$}; \node at (-.1,.75) {$I_2$}; \node at (-.8,.8) {$I_3$}; \draw (-.35,.6)-- (-.15,1); \draw (-.35,.6)--(-1.2,.7); \draw(-.35,.6)--(0,0); \draw (0,0) -- (.76,-.6); \end{tikzpicture} \end{minipage} \caption{Left: The $1/3$-principal vein $\mathcal{V}_{1/3}$, joining the center of the main cardioid to the parameter with external angle $\theta = 1/4$. In red, some external rays landing on the vein. Right: The combinatorial model of the Hubbard tree for parameters on the $1/3$-principal vein. } \label{f:Htree1/5} \end{figure} In particular, for $f = f_{c_{p/q}}$, the map associated to the tip of the $\frac{p}{q}$-principal vein, the dynamics are given by $$\begin{array}{ll} f(I_0) = I_0 \cup I_1 \cup I_2 & \\ f(I_k) = I_{k+1} & \qquad \textup{for }k = 1, \dots, q-1 \\ f(I_q) = I_0 \cup I_1 & \end{array}$$ Hence, the entropy of $f_{c_{p/q}}$ equals $\log \lambda_{q}$, where $\lambda_q$ is the largest root of the polynomial $P(x) = x^q - x^{q-1} - 2$. To see this, consider the piecewise linear model of slope $\lambda$, and suppose that $I_1$ has length $|I_1| = 1$. Then $|f^q(I_1)| = |I_0 \cup I_1| = \lambda^q$, hence $|I_0| = \lambda^q - 1$. On one hand, $|f(I_0)| = |I_0| + |I_1| + |I_2| = (\lambda^q - 1) + 1 + \lambda = \lambda^q + \lambda$; on the other hand, $|f(I_0)| = \lambda |I_0| = \lambda(\lambda^q - 1)$. Equating the two expressions gives $\lambda^{q+1} - \lambda = \lambda^q + \lambda$, that is, $\lambda^q = \lambda^{q-1} + 2$. 
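The computation above is easy to check numerically. The sketch below (ours, not from the paper; it assumes \texttt{numpy}, and the helper names are our own) builds the $(q+1)\times(q+1)$ 0-1 transition matrix for the tip dynamics displayed above and verifies that its spectral radius agrees with the largest root $\lambda_q$ of $x^q - x^{q-1} - 2$.

```python
import numpy as np

def tip_transition_matrix(q):
    """0-1 transition matrix for the tip dynamics:
    f(I_0) = I_0 u I_1 u I_2, f(I_k) = I_{k+1} for 1 <= k <= q-1, f(I_q) = I_0 u I_1."""
    M = np.zeros((q + 1, q + 1), dtype=int)
    M[0, [0, 1, 2]] = 1
    for k in range(1, q):
        M[k, k + 1] = 1
    M[q, [0, 1]] = 1
    return M

def lambda_q(q):
    """Largest real root of P(x) = x^q - x^{q-1} - 2."""
    coeffs = [1, -1] + [0] * (q - 2) + [-2]
    return max(r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9)

# Spectral radius of the transition matrix = growth rate lambda_q.
for q in range(2, 8):
    rho = max(abs(np.linalg.eigvals(tip_transition_matrix(q))))
    assert abs(rho - lambda_q(q)) < 1e-8
```

For $q = 2$ this recovers the tent-map entropy $\log 2$, consistent with the tip of the real slice being $c = -2$.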
\section{Background on spectral determinant and growth rates} \label{S:background-graph} \subsection{Directed graphs} A \emph{directed graph} is an ordered pair $\Gamma = (\mathcal{V},\mathcal{E})$ where $\mathcal{V}$ is a set and $\mathcal{E}$ is a subset of $\mathcal{V} \times \mathcal{V}$. Elements of $\mathcal{V}$ are called \emph{vertices} and elements of $\mathcal{E}$ are called \emph{edges}. Given an edge $e=(v_1,v_2)$, the \emph{source} of $e$, denoted $s(e)$, is the vertex $v_1$, and the \emph{target} of $e$, denoted $t(e)$, is the vertex $v_2$; we say that such an edge ``goes from $v_1$ to $v_2$.'' The \emph{outgoing degree} (resp. \emph{incoming degree}) of a vertex $v$, denoted $\textrm{Out}(v)$ (resp. $\textrm{In}(v)$), is the cardinality of the set of edges whose source (resp. target) is $v$. A directed graph such that $\textrm{Out}(v)$ and $\textrm{In}(v)$ are finite for every $v \in \mathcal{V}$ is said to be \emph{locally finite}. A directed graph for which there exists $n \in \mathbb{N}$ such that $\textrm{Out}(v) \leq n$ for all $v \in \mathcal{V}$ is said to have \emph{bounded outgoing degree}. A directed graph is \emph{countable} if $\mathcal{V}$ is countable. \subsubsection{Paths and cycles} \label{ss:pathsandcycles} A \emph{path} in a directed graph based at a vertex $v$ is a sequence $(e_1,\ldots,e_n)$ of edges such that $s(e_1) = v$ and $t(e_i) = s(e_{i+1})$ for $1 \leq i \leq n-1$. Such a path is said to have \emph{length} $n$ and its \emph{vertex support} is the set $\{s(e_1),\ldots,s(e_n)\} \cup \{t(e_n)\}$. A \emph{closed path} based at $v$ is a path $e_1,\ldots,e_n$ such that $v=s(e_1) = t(e_n)$. Note that a closed path can intersect itself, and closed paths based at different vertices are considered different. A \emph{simple cycle} is a closed path which does not self-intersect, modulo cyclical equivalence (meaning two such paths are considered the same simple cycle if the edges are cyclically permuted). 
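These notions are easy to experiment with on small examples. The brute-force sketch below (ours, not from the paper; the toy graph and all names are our own) enumerates the simple cycles of a small directed graph, normalizing each cycle by rotating its smallest vertex to the front so that cyclically equivalent closed paths are counted once.

```python
from itertools import permutations

# A toy directed graph on vertices 0..3, given by its edge set (ours, for illustration).
edges = {(0, 1), (1, 2), (2, 0), (1, 1), (2, 3), (3, 1)}

def simple_cycles(n_vertices):
    """All simple cycles, each as a canonical vertex tuple (smallest vertex first)."""
    found = set()
    for length in range(1, n_vertices + 1):
        for verts in permutations(range(n_vertices), length):  # distinct vertices
            if all((verts[k], verts[(k + 1) % length]) in edges for k in range(length)):
                i = verts.index(min(verts))
                found.add(verts[i:] + verts[:i])  # rotations give the same cycle
    return found

# The self-loop at 1, the triangle 0->1->2->0, and the triangle 1->2->3->1.
assert simple_cycles(4) == {(1,), (0, 1, 2), (1, 2, 3)}
```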
A \emph{multi-cycle} is the union of finitely many simple cycles with pairwise disjoint vertex-supports. The length of a multi-cycle is the sum of the lengths of the simple cycles that comprise it. A directed countable graph is said to have \emph{bounded cycles} if it has \emph{bounded outgoing degree} and for each positive integer $n$, it has at most finitely many simple cycles of length $n$. \subsubsection{Graph maps and quotients} Let $\Gamma_1=(\mathcal{V}_1,\mathcal{E}_1)$ and $\Gamma_2=(\mathcal{V}_2,\mathcal{E}_2)$ be locally finite directed graphs. A \emph{graph map} from $\Gamma_1$ to $\Gamma_2$ is a map $\pi:\mathcal{V}_1 \to \mathcal{V}_2$ such that for every edge $(v,w)$ in $\mathcal{E}_1$, $(\pi(v),\pi(w))$ is an edge in $\mathcal{E}_2$. We will denote such a map $\pi:\Gamma_1 \to \Gamma_2$. Such a graph map $\pi$ also induces maps, which (abusing notation) we also denote by $\pi$, $\pi:\mathcal{E}_1 \to \mathcal{E}_2$ and $\pi:\textrm{Out}(v) \to \textrm{Out}(\pi(v))$ for each $v \in \mathcal{V}_1$. A \emph{weak graph map} $\pi:\Gamma_1 \to \Gamma_2$ is a graph map $\pi:\Gamma_1 \to \Gamma_2$ such that the map $\pi:\mathcal{V}_1 \to \mathcal{V}_2$ between vertex sets is surjective, and the induced map $\pi:\textrm{Out}(v) \to \textrm{Out}(\pi(v))$ is a bijection for each $v \in \mathcal{V}_1$. For a directed, locally finite graph $\Gamma=(\mathcal{V},\mathcal{E})$, an equivalence relation $\sim$ on $\mathcal{V}$ is called \emph{edge-compatible} if whenever $v_1 \sim v_2$, for every vertex $w\in \mathcal{V}$ the total number of edges from $v_1$ to members of the equivalence class of $w$ equals the total number of edges from $v_2$ to members of the equivalence class of $w$. For such a graph $\Gamma = (\mathcal{V},\mathcal{E})$ and edge-compatible equivalence relation $\sim$, the \emph{quotient graph} $\overline{\Gamma} = \Gamma / \sim$ is defined as follows. 
Define the vertex set of $\overline{\Gamma}$ to be the set $\overline{\mathcal{V}} := \mathcal{V} / \sim$, and for each pair of vertices $[v]$ and $[w]$ in the quotient graph, define the number of edges from $[v]$ to $[w]$ in the quotient graph to be the total number of edges in $\Gamma$ from a fixed representative $v \in \mathcal{V}$ of $[v]$ to all members of the equivalence class of $w$ (by edge-compatibility, this count does not depend on the choice of representative). \subsubsection{Adjacency operator and incidence matrix} Given a (directed) finite or countable graph $\Gamma$ with vertex set $\mathcal{V}$ that has bounded outgoing degree, we define the \emph{adjacency operator} $A:\ell^1(\mathcal{V}) \to \ell^1(\mathcal{V})$ to be the linear operator on $\ell^1(\mathcal{V})$ such that, denoting by $e_i$ the sequence that has $1$ at position $i$ and $0$ otherwise, the $j^{\textrm{th}}$ component of $A(e_i)$ is $(A(e_i))_j := \#(i \to j)$, the number of edges from $i$ to $j$. Note that for each pair $i,j$ of vertices and each $n$, the coefficient $(A^n(e_i))_j$ equals the number of paths of length $n$ from $i$ to $j$. When $\Gamma$ is a finite graph, $\ell^1(\mathcal{V}) \cong \mathbb{R}^{|\mathcal{V}|}$ is a finite-dimensional vector space, with a privileged choice of basis $\{e_i, i \in \mathcal{V}\}$, and $A$ is a linear map; in this case, $A$ can be represented by a (finite) square matrix, with one row/column for each $v \in \mathcal{V}$, which we call the \emph{incidence matrix} associated to $\Gamma$. (We only define ``the'' incidence matrix up to permutation of the rows/columns, which will be sufficient for our purposes, since eigenvalues are invariant under simultaneous permutations of rows and columns.) For such a finite graph $\Gamma$, the characteristic polynomial for the action of $A$ is the polynomial $\chi_{\Gamma}(t) = \det(A - tI)$. 
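The path-counting property of the powers $A^n$ can be sanity-checked by brute force on a toy example (the graph, names, and code below are ours, not from the paper, and assume \texttt{numpy}):

```python
import numpy as np
from itertools import product

# Toy directed graph on vertices {0, 1, 2}, with a self-loop at 0 (ours, for illustration).
edges = [(0, 1), (1, 2), (2, 0), (0, 0)]
n = 3
A = np.zeros((n, n), dtype=int)
for s, t in edges:
    A[s, t] += 1          # (A)_{st} = number of edges from s to t

def count_paths(length, i, j):
    """Brute-force count of paths of the given length from i to j."""
    total = 0
    for mid in product(range(n), repeat=length - 1):
        seq = (i,) + mid + (j,)
        if all((seq[k], seq[k + 1]) in edges for k in range(length)):
            total += 1
    return total

# (A^m)_{ij} equals the number of length-m paths from i to j.
for m in range(1, 5):
    Am = np.linalg.matrix_power(A, m)
    assert all(Am[i, j] == count_paths(m, i, j)
               for i in range(n) for j in range(n))
```

For instance, $(A^3)_{00} = 2$ here, corresponding to the two closed paths $0\to 0\to 0\to 0$ and $0\to 1\to 2\to 0$.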
\subsection{Spectral determinant} \label{ss:spectraldeterminant} For a directed finite or countable graph $\Gamma$ with bounded cycles, define the \emph{spectral determinant} $P_{\Gamma}(t)$ of $\Gamma$ by \begin{equation} \label{eq:spectraldeterminant} P_{\Gamma}(t) := \sum_{\gamma \textrm{ multi-cycle}} (-1)^{C(\gamma)}t^{\ell(\gamma)}, \end{equation} where $\ell(\gamma)$ denotes the length of the multi-cycle $\gamma$, while $C(\gamma)$ is the number of connected components of $\gamma$. Note that the empty cycle is considered a multicycle, so $P_{\Gamma}$ starts with the constant term $1$. When $\Gamma$ is a finite directed graph, it is well known (see e.g. \cite[Section 1.2.1]{BrouwerHaemers}) that $$P_{\Gamma}(t) = \det(I - t A) = (-t)^d \chi_\Gamma(t^{-1}),$$ i.e. the spectral determinant coincides, up to a factor of $(-t)^d$, with the (reciprocal of the) characteristic polynomial of the adjacency matrix. \subsection{Labeled wedges and associated graphs} The \emph{unlabeled wedge} is the set $$\Sigma \coloneqq \{(i,j) \in \mathbb{N}^2: 1 \leq i \leq j\}.$$ A \emph{labeling of $\Sigma$} is a map from $\Sigma$ to the $3$-element set $\{N,S, E \}$, i.e. a map $\Phi:\Sigma \to \{N,S, E\}$. ($N$ stands for `non-separated,' $S$ for `separated,' and $E$ for `equivalent.'). A \emph{labeled wedge} $\mathcal{W}$ is the set $\Sigma$ together with a labeling $\Phi$ of $\Sigma$; we will write $(i,j) \in \mathcal{W}$ to mean the point $(i,j) \in \Sigma$ together with the data of the value $\Phi$ assigns to $(i,j)$. Associated to any labeled wedge $\mathcal{W}$, there is an associated directed graph $\Gamma_{\mathcal{W}} = (\Sigma,E_{\mathcal{W}})$. The vertex set of $\Gamma_{\mathcal{W}}$ is the unlabeled wedge $\Sigma$. The edge set $E_{\mathcal{W}}$ is defined recursively as follows. 
For each vertex $(i,j) \in \Sigma$, \begin{itemize} \item if $(i,j)$ is equivalent, there is no edge in $E_{\mathcal{W}}$ with $(i,j)$ as its source, \item if $(i,j)$ is non-separated, we add the edge $\left((i,j) , (i+1,j+1) \right)$ to $E_{\mathcal{W}}$ \item if $(i,j)$ is separated, we add the edges $\left((i,j) , (1,j+1) \right)$ and $\left( (i,j) , (1,i+1) \right)$ to $E_{\mathcal{W}}$. \end{itemize} We say that a sequence $(\mathcal{W}_n)_{n \in \mathbb{N}}$ of labeled wedges converges to a labeled wedge $\mathcal{W}$ if for each finite set of vertices $V \subset \Sigma$ there exists $N \in \mathbb{N}$ such that for all $n \geq N$ the labels of the elements of $V$ for $\mathcal{W}_n$ and $\mathcal{W}$ are the same. \begin{theorem}[\cite{TiozzoContinuity}, Theorem 4.3] \label{t:SpectralDeterminantGrowthRate} Let $\mathcal{W}$ be a labeled wedge. Then its associated graph $\Gamma_{\mathcal{W}}$ has bounded cycles, and its spectral determinant $P(t)$ defines a holomorphic function in the unit disk. Moreover, the growth rate $r$ of the graph $\Gamma_{\mathcal{W}}$ equals the inverse of the smallest real positive root of $P(z)$, in the following sense: $P(z) \neq 0$ for $|z|<r^{-1}$ and, if $r>1$, then $P(r^{-1})=0$. \end{theorem} \begin{proposition}[\cite{TiozzoContinuity}, Proposition 4.2, part 5] \label{p:boundedcoeffs} \label{p:boundingnumberofcycles} For every labeled wedge $\mathcal{W}$ and $n \in \mathbb{N}$, the associated graph $\Gamma_{\mathcal{W}}$ has at most $(2n)^{\sqrt{2n}}$ multi-cycles of length $n$. \end{proposition} \subsubsection{Periodic labeled wedges and finite models} \label{ss:periodiclabeledwedgesfinitemodels} Given integers $p \geq 1$ (the \emph{period}) and $q \geq 0$ (the \emph{preperiod}), let $\equiv_{p,q}$ be the equivalence relation on $\mathbb{N}$ defined by $i \equiv_{p,q} j$ if and only if either \begin{enumerate} \item $i = j$, or \item $\min\{i,j\} \geq q+1$ and $i \equiv j \mod p$. 
\end{enumerate} The equivalence relation $\equiv_{p,q}$ on $\mathbb{N}$ induces an equivalence relation on $\mathbb{N} \times \mathbb{N}$, which we also denote $\equiv_{p,q}$, by setting $(i,j) \equiv_{p,q} (k,l)$ if and only if $i \equiv_{p,q} k$ and $j \equiv_{p,q} l$. It also induces an equivalence relation on the set of unordered pairs of natural numbers by declaring that the unordered pair $\{i,j\}$ is equivalent to $\{k,l\}$ if either $(i,j) \equiv_{p,q} (k,l)$ or $(i,j) \equiv_{p,q} (l,k)$. A labeled wedge is \emph{periodic of period $p$ and preperiod $q$} if and only if \begin{enumerate} \item \label{i:correspondingverticessamelabel} any two pairs $(i,j)$ and $(k,l)$ such that $\{i,j\} \equiv_{p,q} \{k,l\}$ have the same label, \item a point $(i,j)$ is labeled $E$ if and only if $i \equiv_{p,q} j$, and \item if $i \equiv_{p,q} j$, then the pair $(i,j)$ is non-separated. \end{enumerate} The \emph{finite model} of a countably infinite directed graph $\Gamma_{\mathcal{W}}$ associated to a periodic labeled wedge $\mathcal{W}$ of period $p$ and preperiod $q$ is the quotient graph $\Gamma_{\mathcal{W}} / \equiv_{p,q}$. (Figure \ref{f:1/5} shows the finite graph model for angle $\theta=1/5$.) \subsection{The Thurston entropy algorithm} \subsubsection{The labeled wedge associated to a rational angle} For any angle $\theta \in \mathbb{R}/\mathbb{Z}$, define $P_{\theta}$ to be the partition of $\mathbb{R}/\mathbb{Z}$ into $$\left[ \frac{\theta}{2}, \frac{\theta+1}{2} \right) \sqcup \left[ \frac{\theta+1}{2}, \frac{\theta}{2} \right),$$ and for each $i \in \mathbb{N}$, set $x_i(\theta) :=2^{i-1}\theta \mod 1$. Define $\mathcal{W}_{\theta}$ to be the labeled wedge obtained by labeling each pair $(i,j) \in \Sigma$ as \emph{equivalent} if $x_i(\theta) = x_j(\theta)$, as \emph{separated} if $x_i(\theta)$ and $x_j(\theta)$ are in the interiors of different elements of the partition $P_{\theta}$, and as \emph{non-separated} otherwise.
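As a sanity check, the labeling rule defining $\mathcal{W}_{\theta}$ can be implemented in a few lines; the following is a sketch in stdlib Python (the function name \texttt{labels} and the encoding of the three labels as characters are our own choices, not notation from the paper). For $\theta = 1/9$ it reproduces exactly the separated (dashed) vertices of Figure \ref{f:1/9}; note that $x_6 = \tfrac{5}{9} = \tfrac{\theta+1}{2}$ lies on the boundary of the partition $P_{\theta}$, hence in the interior of neither element, so no pair involving the index $6$ is separated.

```python
from fractions import Fraction

def labels(theta, n):
    # x_i = 2^(i-1) * theta (mod 1), as in the definition of W_theta
    x = [(Fraction(2) ** (i - 1) * theta) % 1 for i in range(1, n + 1)]
    a, b = theta / 2, (theta + 1) / 2   # endpoints of the partition P_theta

    def side(t):
        # which open half-interval of P_theta contains t; None on the boundary
        if t == a or t == b:
            return None
        return 0 if a < t < b else 1

    lab = {}
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            xi, xj = x[i - 1], x[j - 1]
            if xi == xj:
                lab[(i, j)] = 'E'       # equivalent
            elif None not in (side(xi), side(xj)) and side(xi) != side(xj):
                lab[(i, j)] = 'S'       # separated
            else:
                lab[(i, j)] = 'N'       # non-separated
    return lab

lab = labels(Fraction(1, 9), 6)
separated = sorted(p for p, l in lab.items() if l == 'S')
assert separated == [(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]
```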
Denote by $\Gamma_{\theta}$ the associated infinite directed graph. (Figure \ref{f:1/9} shows a portion of the infinite graph $\Gamma_{1/9}$.) \begin{figure}[h!] \begin{tikzpicture}[node distance = 1.3cm] \tikzstyle{non-sep} = [rectangle, rounded corners, minimum width=.5cm, minimum height=.5cm,text centered, draw=black, fill=none] \tikzstyle{sep} = [rectangle, dashed, rounded corners, minimum width=.5cm, minimum height=.5cm, text centered, draw=black, fill=none] \tikzstyle{process} = [rectangle, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=orange!30] \tikzstyle{decision} = [diamond, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=green!30] \tikzstyle{thickarrow} = [thick,->,>=stealth] \tikzstyle{arrow} = [->,>=stealth] \node (11) [non-sep] {1,1}; \node (12) [non-sep, right of = 11] {1,2}; \node (13) [non-sep, right of=12] {1,3}; \node (14) [sep, right of=13] {1,4}; \node (15) [sep, right of=14] {1,5}; \node (16) [non-sep, right of=15] {1,6}; \node (17) [non-sep, right of=16] {1,7}; \node(rdots) [right of = 17]{$\ldots$}; \node (22) [non-sep, above of = 12] {2,2}; \node (23) [non-sep, above of=13] {2,3}; \node (24) [sep, above of=14] {2,4}; \node (25) [sep, above of=15] {2,5}; \node (26) [non-sep, above of=16] {2,6}; \node (27) [non-sep, above of=17] {2,7}; \node (33) [non-sep, above of = 23] {3,3}; \node (34) [sep, above of=24] {3,4}; \node (35) [sep, above of=25] {3,5}; \node (36) [non-sep, above of=26] {3,6}; \node (37) [non-sep, above of=27] {3,7}; \node (44) [non-sep, above of = 34] {4,4}; \node (45) [non-sep, above of=35] {4,5}; \node (46) [non-sep, above of=36] {4,6}; \node (55) [non-sep, above of = 45] {5,5}; \node (56) [non-sep, above of=46] {5,6}; \node (66) [non-sep, above of=56] {6,6}; \node (27) [non-sep, above of=17] {2,7}; \node (37) [non-sep, above of=27] {3,7}; \node (47) [non-sep, above of=37] {4,7}; \node (57) [non-sep, above of=47] {5,7}; \node (67) [non-sep, above of=57] {6,7}; \node (77) 
[non-sep, above of=67] {7,7};\ \node (upperdots) [above right of=77] {$\iddots$}; \node (upperdots) [ right of=77] {$\ldots$}; \draw [arrow] (12) to (23); \draw [arrow] (23) -- (34); \draw [arrow] (34) -- (15); \draw [arrow] (13) to [bend left] (24); \draw [arrow] (24) to [bend left] (13); \draw [arrow] (24) to (15); \draw [arrow] (34) to [out=-70,in=70, looseness=1.8] (14); \draw [arrow] (14) to [out = 210, in=-30, looseness=1] (12); \draw [arrow] (14) to (15); \draw [arrow] (15) to (16); \draw [arrow] (25) to (16); \draw [arrow] (35) to (16); \draw [arrow] (35) to (14); \draw [arrow] (45) to (56); \draw [arrow] (15) to [out=210, in=-40] (12); \draw [arrow] (25) to (13); \draw [arrow] (16) to (27); \draw [arrow] (26) to (37); \draw [arrow] (36) to (47); \draw [arrow] (46) to (57); \draw [arrow] (56) to (67); \draw[dotted] ([xshift=.75cm]16.south) -- ([xshift=.75cm]66.north); \begin{scope}[yshift=6cm, scale=1.25] \draw (0,0) circle (1); \filldraw [black] (360/9:1) circle (1pt); \node at (360/9:1.5) {$x_1 {=} \tfrac{1}{9}$}; \filldraw [black] (360*2/9:1) circle (1pt); \node at (360*2/9:1.3) {$x_2 {=} \tfrac{2}{9}$}; \filldraw [black] (360*4/9:1) circle (1pt); \node at (360*4/9:1.5) {$x_3 {=} \tfrac{4}{9}$}; \filldraw [black] (360*8/9:1) circle (1pt); \node at (360*8/9:1.3) {$x_4 {=} \tfrac{8}{9}$}; \filldraw [black] (360*7/9:1) circle (1pt); \node at (360*7/9:1.3) {$x_5 {=} \tfrac{7}{9}$}; \filldraw [black] (360*5/9:1) circle (1pt); \node at (360*5/9:1.5) {$x_6 {=} \tfrac{5}{9}$}; \draw[dashed] (360/18:1.5) to (360/18+180:1.5); \node at (360/18:1.5) {$\tfrac{\theta}{2} {=} \tfrac{1}{18}$}; \end{scope} \end{tikzpicture} \caption{The infinite graph $\Gamma_{1/9}$. Angle $1/9$ is strictly periodic with period 6; the vertical dotted line indicates the edge of a ``fundamental domain'' for $\equiv_{6,0}$. Vertices that are separated are indicated with a dashed boundary; non-separated and equivalent vertices have a solid boundary. 
The angle diagram in the upper left is helpful for determining which vertices of $\Gamma$ are separated. } \label{f:1/9} \end{figure} \subsubsection{Growth rate and core entropy} \label{sss:growthratecoreentropy} Given a finite or infinite graph $\Gamma$ with bounded cycles, we define its \emph{growth rate} as $$r := \limsup_{n \to \infty} \sqrt[n]{C(\Gamma, n)}$$ where $C(\Gamma, n)$ is the number of closed paths in $\Gamma$ of length $n$. It is straightforward to prove that the growth rate of a finite graph is the leading eigenvalue of its incidence matrix. \begin{proposition}[\cite{TiozzoContinuity}, Proposition 5.2] \label{p:GrowthRateFiniteModel} Let $\mathcal{W}$ be a periodic labeled wedge, with associated (infinite) graph $\Gamma$. Then the growth rate of $\Gamma$ equals the growth rate of its finite model. \end{proposition} When the infinite graph is a wedge coming from a rational angle, the logarithm of the growth rate yields precisely the core entropy of the corresponding PCF quadratic polynomial. \begin{theorem}[\cite{TiozzoContinuity}, Theorem 6.4] Let $\theta$ be a rational angle. Then the logarithm of the growth rate $r(\theta)$ of the infinite graph $\Gamma_{\theta}$ coincides with core entropy: $h(\theta) = \log r(\theta)$. \end{theorem} \noindent Note that here $h(\theta)$ denotes the core entropy of the critically periodic polynomial associated to $\theta$, as described in \S \ref{ss:anglestotrees}. For a postcritically finite polynomial $f$, we define the \emph{Thurston polynomial} $P_{Th}(f)$ to be the characteristic polynomial of the incidence matrix for the finite model graph associated to $f$. 
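Since for a finite directed graph the number of closed paths of length $n$ is $C(\Gamma, n) = \mathrm{tr}(A^n)$, the growth rate can be estimated directly from powers of the incidence matrix. The following stdlib-only Python sketch does this for an example graph of our choosing: the $4$-vertex graph with edges $1\to2$, $2\to3$, $3\to1$, $3\to4$, $4\to1$, $4\to2$, whose characteristic polynomial is $x^4 - 2x - 1$ (it is the Markov graph that reappears for the angle $\theta = 1/5$ in the next section). It compares the estimate with the leading eigenvalue computed by bisection.

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# incidence matrix of the graph 1->2, 2->3, 3->1, 3->4, 4->1, 4->2
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1],
     [1, 1, 0, 0]]

P = A
for _ in range(39):                  # P = A^40, exact integer arithmetic
    P = mat_mul(P, A)
closed_40 = sum(P[i][i] for i in range(4))   # C(Gamma, 40) = tr(A^40)
growth = closed_40 ** (1 / 40)

# leading eigenvalue: the unique root of x^4 - 2x - 1 in (1, 2), by bisection
f = lambda x: x**4 - 2*x - 1
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)

assert abs(growth - lo) < 0.01       # 40th root of tr(A^40) ~ leading eigenvalue
```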
\section{Relating the Thurston polynomial and the Markov polynomial} \label{s:relatingThurstonAndMarkovPolys} The goal of this section is to prove the following comparison between the Thurston algorithm polynomial and the Markov polynomial, establishing the first part of Theorem \ref{T:equalpolys} from the introduction: \begin{theorem} \label{T:Th-Mar} For a postcritically finite parameter in the Mandelbrot set, the polynomial $P_{Th}(t)$ that we get from Thurston's algorithm and the polynomial $P_{Mar}(t)$ that we get from the Markov partition satisfy the following relation: $$P_{Th}(t) = P_{Mar}(t) Q(t)$$ where $Q(t)$ is a polynomial whose roots lie in $\{ 0 \} \cup S^1$. \end{theorem} As defined in \S \ref{sss:growthratecoreentropy}, the Thurston polynomial $P_{Th}(t)$ associated to a postcritically finite polynomial $f$ is the characteristic polynomial of the incidence matrix for the finite graph model associated to $f$. The Markov polynomial $P_{Mar}(t)$ associated to the postcritically finite polynomial $f$ is the characteristic polynomial of the incidence matrix for the Markov partition of the Hubbard tree $T_f$ formed by cutting $T_f$ at its branch points and at its postcritical set (including the critical point). \bigskip \begin{example} We compute the Thurston polynomial and Markov polynomial $P_{Mar}$ for angle $\theta=1/5$. \medskip \begin{figure}[h!] 
\begin{tikzpicture}[node distance = 1.3cm] \tikzstyle{non-sep} = [rectangle, rounded corners, minimum width=.5cm, minimum height=.5cm,text centered, draw=black, fill=none] \tikzstyle{sep} = [rectangle, dashed, rounded corners, minimum width=.5cm, minimum height=.5cm, text centered, draw=black, fill=none] \tikzstyle{process} = [rectangle, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=orange!30] \tikzstyle{decision} = [diamond, minimum width=3cm, minimum height=1cm, text centered, draw=black, fill=green!30] \tikzstyle{thickarrow} = [thick,->,>=stealth] \tikzstyle{arrow} = [->,>=stealth]
\node (11) [non-sep] {1,1}; \node (12) [non-sep, right of = 11] {1,2}; \node (13) [sep, right of=12] {1,3}; \node (14) [non-sep, right of=13] {1,4}; \node (22) [non-sep, above of = 12] {2,2}; \node (23) [sep, above of=13] {2,3}; \node (24) [non-sep, above of=14] {2,4}; \node (33) [non-sep, above of = 23] {3,3}; \node (34) [non-sep, above of=24] {3,4}; \node (44) [non-sep, above of = 34] {4,4};
\draw [arrow] (12) to (23); \draw [arrow] (23) -- (14); \draw [arrow] (23) -- (13); \draw [arrow] (13) -- (12); \draw [arrow] (13) -- (14); \draw [arrow] (14) to [out=-90,in=-90, looseness=.5] (12); \draw [arrow] (34) to [out=-0,in=0, looseness=.5] (14); \draw [arrow] (24) to (13);
\begin{scope}[yshift=3cm, xshift=-1cm] \draw (0,0) circle (1); \filldraw [black] (360/5:1) circle (1pt); \node at (360/5:1.3) {$x_1 {=} \tfrac{1}{5}$}; \filldraw [black] (360*2/5:1) circle (1pt); \node at (360*2/5:1.5) {$x_2 {=} \tfrac{2}{5}$}; \filldraw [black] (360*4/5:1) circle (1pt); \node at (360*4/5:1.3) {$x_3 {=} \tfrac{4}{5}$}; \filldraw [black] (360*3/5:1) circle (1pt); \node at (360*3/5:1.5) {$x_4 {=} \tfrac{3}{5}$}; \draw[dashed] (360/10:1.5) to (360/10+180:1.5); \node at (360/10:1.5) {$\tfrac{\theta}{2} {=} \tfrac{1}{10}$}; \end{scope} \end{tikzpicture} \caption{The finite model graph for angle $\theta = 1/5$. Vertices that are separated are indicated with a dashed boundary.} \label{f:1/5} \end{figure} \noindent
\textbf{Thurston polynomial $P_{Th}$:} Figure \ref{f:1/5} shows the finite graph model for angle $\theta=1/5$. The adjacency matrix for this finite graph (omitting rows/columns for vertices of the form $(i,i)$ for $i=1,2,3,4$, which have no incident edges) is: \begin{center} \begin{tabular}{ c|cccccc } & $(1,2)$ & $(1,3)$ & $(1,4)$ & $(2,3)$ & $(2,4)$ & $(3,4)$ \\ \hline $(1,2)$ & $0$ & $0$ & $0$ & $1$ & $0$ & $0$ \\ $(1,3)$ & $1$ & $0$ & $1$ & $0$ & $0$ & $0$ \\ $(1,4)$ & $1$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ $(2,3)$ & $0$ & $1$ & $1$ & $0$ & $0$ & $0$ \\ $(2,4)$ & $0$ & $1$ & $0$ & $0$ & $0$ & $0$ \\ $(3,4)$ & $0$ & $0$ & $1$ & $0$ & $0$ & $0$ \\ \end{tabular} \end{center} The Thurston polynomial $P_{Th}$ is the characteristic polynomial of the matrix above (padded with $0$s to have 4 more rows and columns, representing the vertices $(1,1),\ldots,(4,4)$): $$P_{Th}(x) = x^{10} - 2 x^7 - x^6 = x^6 \, (x^4 - 2x - 1).$$ \medskip \noindent \textbf{Markov polynomial $P_{Mar}$:} Figure \ref{f:Htree1/5} shows the combinatorial model of the Hubbard tree for angle $1/5$. The incidence matrix for the dynamics on this combinatorial Hubbard tree is: \begin{center} \begin{tabular}{ c|cccc } & $I_1$ & $I_2$ & $I_3$ & $I_4$ \\ \hline $I_1$ & $0$ & $1$ & $0$ & $0$ \\ $I_2$ & $0$ & $0$ & $1$ & $0$ \\ $I_3$ & $1$ & $0$ & $0$ & $1$ \\ $I_4$ & $1$ & $1$ & $0$ & $0$ \\ \end{tabular} \end{center} The Markov polynomial for angle $1/5$ is the characteristic polynomial of the incidence matrix above: $$P_{Mar}(x) = x^4 - 2 x - 1.$$ Thus $P_{Th}(x) = x^6 \, P_{Mar}(x)$, as predicted by Theorem \ref{T:Th-Mar} with $Q(x) = x^6$. \end{example} \medskip \subsection{Setup} We will use the notation established below in the remainder of this section. Let $f$ be a postcritically finite polynomial with Hubbard tree $T$. Let $P$ be the union of the postcritical set, the critical point and the branch points of the Hubbard tree. Define $\sigma(\theta) := 2 \theta \mod 1$.
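The relation between the two polynomials of the example above can also be verified mechanically. The following stdlib-only Python sketch (function names are ours) computes $\det(xI - A)$ by Leibniz expansion in exact integer arithmetic, and checks that the characteristic polynomial of the $6 \times 6$ Thurston matrix equals $x^2$ times that of the $4 \times 4$ Markov matrix; padding the Thurston matrix with the four vertices $(i,i)$ contributes a further $x^4$, so that, up to sign conventions, the Thurston polynomial is $x^6$ times the Markov polynomial.

```python
from itertools import permutations

def polymul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def charpoly(A):
    """Coefficients of det(xI - A), lowest degree first, by Leibniz expansion."""
    n = len(A)
    coeffs = [0] * (n + 1)
    for perm in permutations(range(n)):
        sign, seen = 1, [False] * n
        for i in range(n):            # sign = (-1)^(number of even-length cycles)
            if not seen[i]:
                j, length = i, 0
                while not seen[j]:
                    seen[j], j, length = True, perm[j], length + 1
                if length % 2 == 0:
                    sign = -sign
        term = [sign]                 # product of the entries of xI - A
        for i in range(n):
            entry = [-A[i][perm[i]], 1] if perm[i] == i else [-A[i][perm[i]]]
            term = polymul(term, entry)
        for d, c in enumerate(term):
            coeffs[d] += c
    return coeffs

A6 = [[0, 0, 0, 1, 0, 0],   # rows/columns ordered (1,2),(1,3),(1,4),(2,3),(2,4),(3,4)
      [1, 0, 1, 0, 0, 0],
      [1, 0, 0, 0, 0, 0],
      [0, 1, 1, 0, 0, 0],
      [0, 1, 0, 0, 0, 0],
      [0, 0, 1, 0, 0, 0]]
A4 = [[0, 1, 0, 0],         # Markov matrix, rows/columns I_1, ..., I_4
      [0, 0, 1, 0],
      [1, 0, 0, 1],
      [1, 1, 0, 0]]

assert charpoly(A4) == [-1, -2, 0, 0, 1]                  # x^4 - 2x - 1
assert charpoly(A6) == polymul([0, 0, 1], charpoly(A4))   # x^2 * (x^4 - 2x - 1)
```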
Let $\hat{\theta}$ be the characteristic angle of $f$, whose corresponding external dynamical ray $R_c(\hat{\theta})$ lands at the critical value (or whose external parameter ray $R_{\mathcal{M}}(\hat{\theta})$ lands at the root of the hyperbolic component containing the critical value). Let us consider the Carath\'eodory loop $\mathbb{T} = \mathbb{R}/\mathbb{Z} \to J(f)$, sending an angle to the landing point of the corresponding external ray. Given an angle $\alpha \in \mathbb{T}$, we denote by $\overline{\alpha}$ the landing point of the ray of angle $\alpha$. Elements of $\mathbb{T}$ will be denoted by Greek letters $\xi, \eta, \alpha \dots$ and points in the Hubbard tree by lower-case Latin letters $x, y, z \dots$. Given two points $x, y$, we denote by $[x, y]$ the closed arc in $T$ joining $x$ and $y$, and by $(x, y)$ the open arc. Finally, an unordered pair of angles will be denoted $\{\xi, \eta \}$, while we denote by $[\xi, \eta]$ the arc in the Hubbard tree joining the landing points $\overline{\xi}$ and $\overline{\eta}$. \smallskip Let $\mathcal{E}_1$ be the set of connected components of $T \setminus P$, and $E_1 := \mathbb{R}^{\mathcal{E}_1}$. Let $T_1 : E_1 \to E_1$ be the linear map induced by the action of the dynamics on the set $\mathcal{E}_1$, and $A_1$ be the matrix representing $T_1$ in the basis $\mathcal{E}_1$. By construction, this is the matrix obtained from the Markov partition. Hence the associated polynomial is $$P_{Mar}(t) := \det( I - t A_1).$$ Let $\mathcal{E}_2$ denote the set of unordered pairs of postcritical angles, $E_2 := \mathbb{R}^{\mathcal{E}_2}$. An element of $E_2$ will be called an \emph{angle pair configuration} and can be written as $$v = \sum_i a_i \{ \xi_i, \eta_i \}$$ with $a_i \in \mathbb{R}$. The \emph{formal support} of $v$ is the set of pairs $\{\xi_i, \eta_i \}$ for which $a_i \neq 0$. The \emph{geometric support} of $v$ is the union of the arcs $[\overline{\xi_i}, \overline{\eta_i}]$ in the tree.
Let $T_2 : E_2 \to E_2$ be the linear map induced by the action of the dynamics on the set $\mathcal{E}_2$, and $A_2$ be the matrix representing $T_2$ in the basis $\mathcal{E}_2$. Hence the polynomial given by Thurston's algorithm is $$P_{Th}(t) := \det( I - t A_2).$$ \subsection{Semiconjugacy of $T_1$ and $T_2$ via elementary decomposition} \begin{definition} [elementary decomposition map] \ \begin{enumerate} \item We first define a map $\pi: \mathcal{E}_2 \to E_1$ as follows. For any pair $\{\xi_1, \xi_2 \} \in \mathcal{E}_2$ of rational postcritical angles, write $x = \overline{\xi_1}$, $y = \overline{\xi_2}$ for the landing points. If $x = y$, define $\pi(\{ \xi_1, \xi_2 \}) := 0$. If $x \neq y$, let $$(x, y) \setminus P = (p_1, p_2) \cup \dots \cup (p_{k-1}, p_k)$$ be the decomposition of $(x, y) \setminus P$ into its connected components, where each $p_i \in P$, and let $$\pi(\{ \xi_1, \xi_2\}) := \sum_{i = 1}^{k-1} (p_i, p_{i+1}).$$ \item Define the \emph{elementary decomposition} map $\pi : E_2 \to E_1$ to be the linear extension to $E_2$ of the map $\pi$ on $\mathcal{E}_2$. \end{enumerate} \end{definition} Note that, by construction, the following diagram commutes: \[\begin{tikzcd} E_2 \arrow{r}{T_2} \arrow[swap]{d}{\pi} & E_2 \arrow{d}{\pi} \\ E_1 \arrow{r}{T_1} & E_1 \end{tikzcd}\] \noindent Therefore, $$T_2(\ker \pi) \subseteq \ker \pi.$$ As a consequence, if $$Q(t) \coloneqq \det(I - t A_2\vert_{\ker \pi})$$ is the characteristic polynomial of the action of $T_2$ on $\ker \pi$, we have the identity $$P_{Th}(t) = P_{Mar}(t) Q(t).$$ Thus, to prove Theorem \ref{T:Th-Mar}, it suffices to establish the following: \begin{proposition} \label{L:Q-cycl} All non-zero roots of $Q(t)$ lie on the unit circle.
\end{proposition} We will use the following lemma from linear algebra: \begin{lemma} \label{L:finite-cyclo} Let $T : E \to E$ be a linear map of a finite-dimensional vector space, and suppose that there exists a finite set $S \subseteq E$ such that $T(S) \subseteq S$ and so that the span of $S$ equals $E$. Then the characteristic polynomial of $T$ is of the form $$P(t) = t^k \chi(t),$$ where $k \geq 0$ and $\chi(t)$ is a product of cyclotomic polynomials. \end{lemma} \begin{proof} Since $T(S) \subseteq S$, for any $v \in S$ the set $\{ T^n(v) \ : \ n \geq 0\} \subseteq S$ is finite. Moreover, since $S$ spans $E$, any $v \in E$ can be written as $v = \sum_i a_i v_i$ with $a_i \in \mathbb{C}$ and $v_i \in S$. Hence also $$T^n(v) = \sum_i a_i T^n(v_i) \in \left\{ \sum_i a_i s_i \ : \ s_i \in S \right\},$$ and the right-hand side is a finite set. Now, suppose that $v$ is an eigenvector of $T$, with eigenvalue $\lambda$. The set $\{ T^n(v) = \lambda^n v \ : \ n \geq 0\}$ is finite only if either $\lambda = 0$ or $\lambda$ is a root of unity. Thus every eigenvalue of $T$ is either $0$ or a root of unity, which proves the claim. \end{proof} To prove Proposition \ref{L:Q-cycl}, we apply Lemma \ref{L:finite-cyclo} to the action of $T_2$ on $\ker \pi$, with ``elementary triples'' and ``elementary stars'' serving as the set $S \subseteq \ker \pi$; we define these objects and prove that they have the requisite properties in the next subsection, \S \ref{ss:elementarytriplesandstars}. \begin{proof}[Proof of Proposition \ref{L:Q-cycl}] By Lemma \ref{L:kernel-combo}, the space $\ker \pi$ is the span of the union of the set $\mathcal{T}$ of elementary triples and the set $\mathcal{S}$ of elementary stars of zero geometric weight and norm at most $M$. Note that the set $\mathcal{T} \cup \mathcal{S}$ is finite, since $f$ is postcritically finite.
Since, by Lemmas \ref{L:permute} and \ref{L:S-forward-inv}, $T_2$ maps each element of $\mathcal{T} \cup \mathcal{S}$ to either $0$ or an element of $\mathcal{T} \cup \mathcal{S}$, we may apply Lemma \ref{L:finite-cyclo} to the finite spanning set $\mathcal{T} \cup \mathcal{S} \cup \{0\}$: the characteristic polynomial $Q(t)$ of the restriction of $T_2$ to $\ker \pi = \textrm{span}(\mathcal{T} \cup \mathcal{S})$ is the product of $t^k$ for some $k \geq 0$ and cyclotomic polynomials. \end{proof} \subsection{Elementary triples and elementary stars} \label{ss:elementarytriplesandstars} \subsubsection{Elementary triples} \begin{definition} Define an \emph{elementary triple} of angles to be a linear combination $v \in E_2$ of $3$ angle pairs of the form $$v = p_1+p_2-p_3$$ where $p_1= \{ \alpha, \beta \}$, $p_2= \{ \beta, \gamma \}$ and $p_3 = \{ \alpha, \gamma \}$ are elements of $\mathcal{E}_2$ such that $\overline{\beta}$ lies on the arc $[\overline{\alpha}, \overline{\gamma}]$. Note that, as a special case, if the rays at angles $\alpha, \alpha'$ land at the same point, then the single pair $\{ \alpha, \alpha' \}$ lies in $\ker \pi$, and we also regard it as an elementary triple. We call such a triple \emph{degenerate}. \end{definition} It is immediate to check that every elementary triple lies in the kernel of $\pi$. We now start with the following: \begin{lemma} \label{L:kernel-combo} Every angle pair configuration which lies in $\ker \pi$ and whose support is contained in a segment is a linear combination of elementary triples. \end{lemma} \begin{proof} Since the matrix of $\pi$ has integer coefficients, $\ker \pi$ (as well as its intersection with any coordinate subspace of $E_2$) is spanned by vectors with rational coefficients; hence it suffices to prove the claim for configurations with rational coefficients, and, after multiplying all coefficients by a suitable integer, for configurations with integer coefficients.
Suppose that there exists a linear combination $$v = \sum_{i = 1}^k a_i \{ \xi_i, \eta_i \}$$ with $a_i \in \mathbb{Z}$ and $ \{ \xi_i, \eta_i \} \in \mathcal{E}_2$, so that $v$ lies in $\ker \pi$ but not in the span $\textrm{span}(\mathcal{T})$ of the elementary triples. First, note that, if $\alpha, \alpha'$ land at the same point, then for any $\beta$ $$\{ \alpha, \beta \} - \{ \alpha', \beta \} = (\{ \alpha, \beta \} - \{ \alpha', \beta \} - \{ \alpha, \alpha' \} ) + (\{ \alpha, \alpha' \})$$ is a linear combination of elementary triples; hence, by subtracting elementary triples, we can assume that at most one angle in the formal support of $v$ lands at each point in the geometric support of $v$. Now, let us choose such a configuration $v$ for which the weight $\Vert v \Vert := \sum_{i = 1}^k |a_i|$ is minimal. Let $a$ be an end of the geometric support of $v$. Since $v$ lies in the kernel, there exist two elements $\{ \alpha, \beta \}$, $\{ \alpha, \gamma \}$ in the formal support of $v$ with coefficients of opposite signs and so that $\overline{\alpha} = a$. Suppose by symmetry that $b = \overline{\beta}$ lies in $[\overline{\alpha}, \overline{\gamma}]$. Then $$v = a_1 \{ \alpha, \beta \} + a_2 \{ \alpha, \gamma\} + \sum_{i = 3}^k a_i \{ \xi_i, \eta_i \}$$ and, up to changing $v$ with $-v$, we can assume that $a_1 > 0$, $a_2 < 0$. Then we can write $v$ as \begin{equation} \label{E:sum} v = v_1 + v_2 \end{equation} where \begin{align*} v_1 & =\{\alpha, \beta\} + \{\beta, \gamma\} - \{\alpha, \gamma\} \\ v_2 & = (a_1-1) \{\alpha, \beta\} + (a_2 + 1) \{\alpha, \gamma\} + \sum_{i = 3}^k a_i \{ \xi_i, \eta_i \} - \{\beta, \gamma\}. \end{align*} Now, by \eqref{E:sum}, $v_2$ also lies in $\ker \pi$; moreover, its weight satisfies $$\Vert v_2 \Vert \leq |a_1| -1 + |a_2| - 1 + \sum_{i = 3}^k |a_i| + 1 < \Vert v \Vert$$ hence it has lower weight than $v$; thus, by minimality, $v_2$ must belong to $\textrm{span}(\mathcal{T})$.
However, since $v_1$ also belongs to $\textrm{span}(\mathcal{T})$, by \eqref{E:sum} we also have that $v$ belongs to $\textrm{span}(\mathcal{T})$, contradicting our hypothesis. \end{proof} \begin{lemma} \label{L:permute} The map $T_2$ sends every elementary triple to either $0$ or an elementary triple. \end{lemma} \begin{proof} Let $v = \{ \alpha, \beta \} + \{ \beta, \gamma \} - \{ \alpha, \gamma\}$ be an elementary triple, so that $b:= \overline{\beta}$ lies in $[\overline{\alpha}, \overline{\gamma}]$. If $\{ \alpha, \gamma\}$ is non-separated, then $$T_2(v) = \{ \sigma(\alpha), \sigma(\beta) \} + \{ \sigma(\beta), \sigma(\gamma) \} - \{ \sigma(\alpha), \sigma(\gamma) \}$$ is clearly an elementary triple. If $\{ \alpha, \gamma\}$ is separated, then either $b$ is the critical point, $\{ \alpha, \beta\}$ is separated, or $\{\beta, \gamma\}$ is separated. If $\{ \alpha, \beta\}$ is separated, then \begin{align*} T_2(\{ \alpha, \beta\}) & = \{ \sigma(\alpha), \hat{\theta} \} + \{ \hat{\theta}, \sigma(\beta) \} \\ T_2(\{\beta, \gamma\}) & = \{ \sigma(\beta), \sigma(\gamma) \} \\ T_2(\{ \alpha, \gamma\}) & = \{ \sigma(\alpha), \hat{\theta} \} + \{ \hat{\theta}, \sigma(\gamma) \} \end{align*} where $\hat{\theta}$ is, as above, the characteristic angle, whose ray lands at the critical value; hence \begin{align*} T_2(v) & = \{ \sigma(\alpha), \hat{\theta} \} + \{ \hat{\theta}, \sigma(\beta) \} +\{ \sigma(\beta), \sigma(\gamma) \} - (\{ \sigma(\alpha), \hat{\theta} \} + \{ \hat{\theta}, \sigma(\gamma) \}) \\ & = \{ \hat{\theta}, \sigma(\beta) \} + \{ \sigma(\beta), \sigma(\gamma) \} - \{ \hat{\theta}, \sigma(\gamma) \} \end{align*} which is also an elementary triple. The case of $\{\beta, \gamma\}$ is symmetric.
Finally, if $b$ is the critical point, then $\{\alpha, \beta\}$ and $\{\beta, \gamma\}$ are not separated, hence $$T_2(v) = \{\sigma(\alpha), \sigma(\beta)\} + \{\sigma(\beta), \sigma(\gamma)\} - (\{\sigma(\alpha), \sigma(\beta)\} + \{\sigma(\beta), \sigma(\gamma)\}) = 0.$$ \end{proof} \subsubsection{Elementary stars} We now need to take care of branch points in the Hubbard tree $T$. Recall that the \emph{valence} of a point $x \in T$ is the number of connected components of $T \setminus \{ x \}$. A \emph{branch point} is a point of valence larger than $2$. \begin{definition} Let $x$ be a branch point of $T$. A set of angles $\Theta := \{ \theta_1, \dots, \theta_k\} \subseteq \mathbb{T}$ forms a \emph{star centered at }$x$ if, for any pair of distinct elements of $\Theta$, the corresponding external rays land in different connected components of $T \setminus \{ x \}$. A formal sum $$S = \sum_{i < j} a_{\{i,j\}} \{ \theta_i, \theta_j \}, \qquad a_{\{i,j\}} \in \mathbb{Z}$$ is an \emph{elementary star} if there exists a branch point $x$ and a star $\Theta$ centered at $x$ so that each $\theta_i$ lies in $\Theta$. The \emph{norm} of a star $S$ is $\max_{i,j} |a_{\{i,j\}}|$. A star has \emph{zero geometric weight} if $\sum_j a_{\{i,j\}} = 0$ for each $i$. \end{definition} \begin{lemma} \label{L:elem-star} There exists $M \geq 1$ such that any elementary star with geometric weight $0$ is a linear combination of elementary stars of geometric weight zero with norm at most $M$. \end{lemma} \begin{proof} Let $x$ be a branch point of $T$, and $m$ its valence. The map $A: \mathbb{Q}^{{m \choose 2}} \to \mathbb{Q}^m$ defined by $A(e_{\{i,j\}}) := e_i + e_j$ has a finite dimensional kernel $K$. Let $v_1, \dots, v_h$ be a basis for the kernel $K$. By clearing denominators, we obtain a basis $v_1', \dots, v_h'$ of $K$ with integer coefficients. An elementary star centered at $x$ with zero geometric weight yields an element of $K$, hence a linear combination of $v_1', \dots, v_h'$; conversely, each $v_i'$ corresponds to an elementary star of zero geometric weight. Let $M$ be the largest norm of all the $v_i'$.
Since this bound depends only on the valence of the branch point, and there are finitely many branch points in $T$, the claim follows. \end{proof} \begin{lemma} \label{L:S-forward-inv} Let $\mathcal{S}$ be the set of all elementary stars of zero geometric weight and norm at most $M$. Then $T_2(\mathcal{S}) \subseteq \mathcal{S}$. \end{lemma} \begin{proof} Consider a star $S = \sum_{i < j} a_{\{i,j\}} \{\theta_i, \theta_j\}$, centered at a branch point $x$. There are two cases. If the critical point does not lie in the interior of the support of $S$, then every pair in $S$ is non-separated, hence the image of $S$ is an elementary star with the same norm. Otherwise, if the critical point $c$ lies in the interior of the star, let us say, up to relabeling, that $c$ lies on the segment $[x, \overline{\theta_1}]$. Then all pairs $\{ \theta_i, \theta_j \}$ of $S$ which contain the index $1$ are separated. Hence \begin{align*} T_2(S) & = \sum_{i, j \neq 1} a_{\{i,j\}} \{ \sigma(\theta_i), \sigma(\theta_j) \} + \sum_j a_{\{1, j\}} \{ \sigma(\theta_j), \hat{\theta} \} + \left( \sum_j a_{\{1, j\}} \right) \{ \sigma(\theta_1), \hat{\theta} \} \\ & = \sum_{i, j \neq 1} a_{\{i,j\}} \{ \sigma(\theta_i), \sigma(\theta_j)\} + \sum_j a_{\{1, j\}} \{ \sigma(\theta_j), \hat{\theta} \} \end{align*} since the geometric weight is zero. Since $\{\hat{\theta}, \sigma(\theta_2), \dots, \sigma(\theta_k) \}$ is a star centered at $f(x)$, $T_2(S)$ is also an elementary star, of the same norm as $S$; one checks directly that it again has zero geometric weight. \end{proof} \begin{lemma} Every element in the kernel of $\pi$ is a linear combination of elementary triples and of elementary stars of zero geometric weight with norm at most $M$. \end{lemma} \begin{proof} We denote as $B_0$ the branch points of the Hubbard tree $T$ which lie in the set $\bigcup_{n \geq 0} f^n(0)$, and as $B_1$ the other branch points.
For each branch point $\alpha$ in $B_1$, define its $1$-neighborhood $N_1(\alpha)$ as the set of postcritical points which are closest to $\alpha$, meaning that there is no other postcritical point between them and $\alpha$. The complement $$T\setminus \left( \bigcup_{\alpha \in B_1} N_1(\alpha) \cup \bigcup_{\alpha \in B_0} \{\alpha \} \right)$$ is a union of connected components, each of which is either a segment, which we will call an \emph{edge}, or a neighborhood of a branch point in $B_1$. Let $S$ be an element in the kernel of $\pi$. If an angle pair $\{ \xi, \eta\}$ is in the support of $S$, then we can write its associated segment as a union of segments $$[\overline{\xi}, \overline{\eta}] = [x_0, x_1] \cup [x_1, x_2] \cup \dots \cup [x_{r-1}, x_r],$$ each of them lying either in a neighborhood of a branch point or in an edge. For each $1 \leq i \leq r-1$, let $\eta_i$ be a postcritical angle whose ray lands at $x_i$. Moreover, set $\eta_0 = \xi$, $\eta_r = \eta$. Then we can write $$\{ \xi, \eta \} = \sum_{i = 0}^{r-1} \{ \eta_i, \eta_{i+1} \} + \sum_{i = 0}^{r-1} ( \{ \eta_i, \eta_r\} - \{ \eta_i, \eta_{i+1}\} - \{ \eta_{i+1}, \eta_r \})$$ where all terms in the last sum are, up to sign, elementary triples. Thus, any configuration in $\ker \pi$ can be written, up to adding elementary triples, as a sum of configurations in $\ker \pi$ which are supported either in the $1$-neighborhood of a branch point or in an edge. By Lemma \ref{L:kernel-combo}, configurations in $\ker \pi$ supported in an edge can be written as linear combinations of elementary triples. Moreover, for each branch point in $B_1$, the configuration restricted to its $1$-neighborhood is an elementary star with zero geometric weight. By Lemma \ref{L:elem-star}, this configuration is a linear combination of elementary stars of zero geometric weight with norm at most $M$. This completes the proof of the claim.
\end{proof} \section{Roots of the spectral determinant for periodic angles} \label{sec:coversofthefinitemodel} In \cite{TiozzoContinuity}, Tiozzo shows (by combining Theorem \ref{t:SpectralDeterminantGrowthRate} and Proposition \ref{p:GrowthRateFiniteModel}) that for a rational angle $\theta$, the inverse of the smallest root of the spectral determinant of the graph $\Gamma_{\theta}$ associated to the labeled wedge $\mathcal{W}_{\theta}$ equals the growth rate (largest eigenvalue) of the finite model graph ($\Gamma_1$ in the notation below), which Thurston's entropy algorithm shows is the core entropy of the quadratic polynomial of external angle $\theta$. In this section, we investigate \emph{all} the roots of the spectral determinant, not only the smallest root. \bigskip \noindent \textbf{Setup.} Throughout this section, we will use the following notation: \begin{itemize} \item $\mathcal{W}$ is a periodic labeled wedge of period $p$ and preperiod $q$, \item $\Gamma$ is the (periodic) directed graph associated to $\mathcal{W}$, \item For each $k \in \mathbb{N}$, $$\Gamma_k := \Gamma \big/ \equiv_{kp,q}$$ is the quotient of the graph $\Gamma$ by the equivalence relation $\equiv_{kp,q}$ (defined in \S \ref{ss:periodiclabeledwedgesfinitemodels}). $\Gamma_k = (\mathcal{V}_k, \mathcal{E}_k)$ denotes the vertex set and edge set of $\Gamma_k$. Since the labeling on $\mathcal{W}$ is constant on $\equiv_{kp,q}$-equivalence classes, the labeling of $\mathcal{W}$ induces a labeling on vertices of $\Gamma_k$. \item For each $k \in \mathbb{N}$, we use the canonical basis $\{e_v \ : \ v \in \mathcal{V}_k \}$ for $\mathbb{R}^{\mathcal{V}_k}$, where we denote as $e_v$ the element of $\mathbb{R}^{\mathcal{V}_k}$ that has a $1$ in the position corresponding to $v$ and $0$ in the other positions. Then, $M_k$ is the incidence matrix associated to $\Gamma_k$, and $A_k$ the associated linear map corresponding to the matrix $M_k$ in the canonical basis. 
As vertices of $\Gamma_k$ are in bijection with the set $\{(i,j) : 1 \leq i \leq j \leq kp + q \}$, $M_k$ is a square matrix of dimension ${kp + q + 1}\choose{2}$. \item For each $k \in \mathbb{N}$, we consider the linear map $\pi_k:\mathbb{R}^{\mathcal{V}_k} \to \mathbb{R}^{\mathcal{V}_1}$ defined on its basis elements by $$\pi_k(e_v) := e_{[v]}$$ where $[v]$ denotes the class in $\Gamma_1$ of a vertex $v$ in $\Gamma_k$. \end{itemize} \subsection{Characteristic polynomials of finite covers} \label{ss:charpolysoffinitecovers} The goal of this subsection is to prove the following theorem: \begin{theorem} \label{t:cyclotomicfactors} For any $k \in \mathbb{N}$, the characteristic polynomial for the action of $A_k$ on $\mathbb{R}^{\mathcal{V}_k}$ is the product of the characteristic polynomial for the action of $A_1$ on $\mathbb{R}^{\mathcal{V}_1}$, cyclotomic factors, and $x^d$ for some integer $d \geq 0$. \end{theorem} \begin{proof} This follows from Lemma \ref{l:commutativediagram} and Proposition \ref{p:charpolyrestricted}, which we state and prove below. \end{proof} \begin{lemma} \label{l:commutativediagram} For each $k$, $\pi_k$ is a linear map which satisfies $A_1 \circ \pi_k = \pi_k \circ A_k$. That is, the following diagram commutes: \[\begin{tikzcd} \mathbb{R}^{\mathcal{V}_k} \arrow{r}{A_k} \arrow[swap]{d}{\pi_k} & \mathbb{R}^{\mathcal{V}_k} \arrow{d}{\pi_k} \\ \mathbb{R}^{\mathcal{V}_1} \arrow{r}{A_1} & \mathbb{R}^{\mathcal{V}_1} \end{tikzcd} \] \end{lemma} \begin{proof} By linearity, it suffices to verify commutativity on the set of basis vectors $\{e_v, v \in \mathcal{V}_k\}$ for $\mathbb{R}^{\mathcal{V}_k}$. So consider a fixed vector $e_v$. By condition \eqref{i:correspondingverticessamelabel}, the label of the vertex $v$ corresponding to $e_v$ in $\Gamma_k$ is the same as the label of the corresponding vertex (the image under the projection map from $\Gamma_k$ to $\Gamma_1$) in $\Gamma_1$, call it $w$. 
Also, by the periodicity of the labeling, an edge leaves $v$ if and only if a corresponding edge leaves $w$, and their targets belong to the same $\equiv_{p,q}$-equivalence class. Since $\pi_k$ is constant on $\equiv_{p,q}$-equivalence classes, it follows that $A_1 \circ \pi_k(e_v) = \pi_k \circ A_k(e_v)$. \end{proof} As a consequence of Lemma \ref{l:commutativediagram}, $A_k$ preserves $\textrm{ker}(\pi_k)$. It remains to prove: \begin{proposition}\label{p:charpolyrestricted} The characteristic polynomial of the restriction of the linear map $A_k$ to the vector space $\textrm{ker}(\pi_k)$ is a product of cyclotomic polynomials and the polynomial $x^d$, $d \in \mathbb{N}$. \end{proposition} In order to prove Proposition \ref{p:charpolyrestricted}, we first define and investigate a related vector space $H_k$ and a linear map $L_k$ on $H_k$. Define the set $$ \mathcal{H}_k \coloneqq \left \{(v, w) \in \mathcal{V}_k \times \mathcal{V}_k: v \equiv_{p,q} w, \ v \neq w \right \}$$ and let $H_k \coloneqq \mathbb{R}^{\mathcal{H}_k}$ be the vector space over $\mathbb{R}$ for which $\mathcal{H}_k$ is a basis. Note that an element $(v,w)$ of $\mathcal{H}_k$ is an \emph{ordered} pair, while each of $v$ and $w$ denotes an element of the unlabeled wedge $\Sigma$, and elements of $\Sigma$ are \emph{unordered pairs} of natural numbers: for this reason, we will use the notation $v = \{ a, b \}$ with $a, b \in \mathbb{N}$ to denote elements of $\mathcal{V}_k$, and $(v, w)$ to denote elements of $\mathcal{H}_k$. Note also that, by periodicity, for any element $(v,w)$ of $\mathcal{H}_k$, the vertices $v$ and $w$ have the same label. We use the canonical basis $\{e_{(v, w)} : (v, w) \in \mathcal{H}_k \}$ for $H_k$, where we denote as $e_{(v, w)}$ the element of $H_k = \mathbb{R}^{\mathcal{H}_k}$ that has a $1$ in the position corresponding to $(v, w)$ and $0$ in the other positions. Moreover, for $v \in \mathcal{V}_k$ we adopt the convention $e_{(v,v)} := 0$. 
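To fix ideas about the ordered-pair convention (this count is not needed in the sequel, but may help the reader), observe that $(v, w)$ and $(w, v)$ are distinct basis elements of $H_k$ whenever $v \neq w$; hence, if the $\equiv_{p,q}$-equivalence classes of vertices of $\Gamma_k$ have cardinalities $m_1, \dots, m_r$, then $$\dim H_k = \# \mathcal{H}_k = \sum_{i = 1}^{r} m_i (m_i - 1).$$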
Let $(v, w) \in \mathcal{H}_k$. Since $v \equiv_{p, q} w$, we can reorder the elements so as to write $v = \{ a_1, b_1 \}$ and $w = \{ a_2, b_2 \}$ such that $$a_1 \equiv_{p, q} a_2, \qquad b_1 \equiv_{p, q} b_2.$$ We now define $L_k(e_{(v, w)})$ as follows. \begin{enumerate} \item If $v$ (hence also $w$) is equivalent, set $$L_k(e_{(v, w)}) := 0.$$ \item If $v$ (hence also $w$) is non-separated, then set $$L_k(e_{(v, w)}) := e_{( \{ a_1 +1, b_1+1 \}, \{ a_2 + 1, b_2 + 1\})}.$$ \item If $v$ (hence also $w$) is separated, then set $$L_k(e_{(v, w)}) := e_{(\{1, a_1 +1\}, \{1, a_2+1\})} + e_{(\{1, b_1+1\}, \{1, b_2+1\})}.$$ \end{enumerate} Then let $L_k$ be the unique linear extension to $H_k$. Note that in the above definition, we set $e_{(v, w)} = 0$ if $v = w$. Since $a_1 \equiv_{p,q} a_2$ and $b_1 \equiv_{p,q} b_2$ together imply $a_1 +1 \equiv_{p,q} a_2 +1$ and $b_1 + 1 \equiv_{p,q} b_2 +1 $, in all cases the image under $L_k$ of $e_{(v, w)}$ belongs to $H_k$. \medskip Recall that for topological dynamical systems $f:X \to X$ and $g:Y \to Y$, $f$ is said to be \emph{semiconjugate} to $g$ if there exists a continuous surjection $\phi:X \to Y$ such that $g \circ \phi = \phi \circ f$. \begin{lemma} \label{l:Lkconjonkernel} The action of $L_k$ on $H_k$ is linearly semiconjugate to the action of $A_k$ on $\ker(\pi_k)$. That is, there exists a surjective linear map $\phi:H_k \to \ker(\pi_k)$ such that the following diagram commutes: \[\begin{tikzcd} H_k \arrow{r}{L_k} \arrow[swap]{d}{\phi} & H_k \arrow{d}{\phi} \\ \textup{ker}(\pi_k) \arrow{r}{A_k} & \textup{ker}(\pi_k) \end{tikzcd}\] \end{lemma} \begin{proof} Define $\phi:H_k \to \ker(\pi_k)$ to be the linear map whose action on the canonical basis vectors is given by $$\phi(e_{(v,w)}) := e_v - e_w.$$ For any elements $(v,w) \in \mathcal{H}_k$, by definition $v \equiv_{p,q} w$, which implies $\pi_k(e_v - e_w)$ = 0. Hence the codomain of $\phi$ is $\ker(\pi_k)$, as desired. 
Next we show that $A_k \circ \phi = \phi \circ L_k$. By linearity, it suffices to verify that $$A_k \circ \phi (e_{(v,w)}) = \phi \circ L_k (e_{(v,w)})$$ for each element $(v,w) \in \mathcal{H}_k$. Let $(v, w) = (\{a_1, b_1\}, \{a_2, b_2\})$ be an element of $\mathcal{H}_k$, written as above. If $v$ and $w$ are separated, then we compute \begin{align} \phi \circ L_k(e_{(v, w)}) & = \phi(e_{ (\{1, a_1 +1\}, \{1, a_2+1\})}) + \phi(e_{ (\{1, b_1+1\}, \{1, b_2+1\}) })\\ & = e_{\{1, a_1 +1\}} - e_{\{1, a_2+1\}} + e_{\{1, b_1+1\}} - e_{\{1, b_2+1\}} \label{E:comm} \end{align} On the other hand, \begin{align} A_k \circ \phi (e_{(v,w)}) & = A_k \circ \phi \left( e_{(\{ a_1, b_1 \}, \{ a_2, b_2 \})} \right) \\ & = A_k \left(e_{\{ a_1, b_1 \}} - e_{\{ a_2, b_2 \}} \right) \\ & = e_{\{1, a_1 +1\}} + e_{\{1, b_1+1\}} - e_{\{1, a_2+1\}} - e_{\{1, b_2+1\}}, \end{align} which coincides with \eqref{E:comm}, verifying commutativity. The cases of $v, w$ equivalent or non-separated are more straightforward, so we do not write out the details. \end{proof} \begin{lemma}\label{l:charpolyLk} The characteristic polynomial for the action of $L_k$ on $H_k$ is a product of cyclotomic polynomials and the polynomial $x^d$ for some $d \in \mathbb{N}$. \end{lemma} \begin{proof} First, we define a subspace $J_k$ of $H_k$ and investigate the action of $L_k$ restricted to this subspace. Define the set $$\mathcal{J}_k := \{(\{a_i,b_i\}, \{a_j, b_j\}) \in \mathcal{H}_k \mid a_i = a_j \textrm{ or } b_i = b_j \}$$ and define $J_k$ to be the $\mathbb{R}$-vector space for which $\mathcal{J}_k$ is a basis. \smallskip We claim that $L_k$ sends each element of $\mathcal{J}_k$ to either $0$ or an element of $\mathcal{J}_k$. Consider an element $(v, w) = (\{a_i, b_i \}, \{ a_j, b_j\} ) \in \mathcal{J}_k$. By construction, $\Phi(v) = \Phi(w)$. 
\begin{enumerate} \item If $v$ is equivalent, then $$L_k(e_{(v,w)}) = 0.$$ \item If $v$ is non-separated, then the fact that $\{ a_i,b_i \}$ and $\{ a_j,b_j \}$ have a common coordinate implies that the two pairs $\{ a_i +1, b_i+1 \}$, $\{ a_j + 1, b_j + 1\}$ that comprise $L_k(e_{(v,w)})$ do too, hence it is an element of $\mathcal{J}_k$. \item If $v$ is separated, then, by definition of $\mathcal{J}_k$, one of the two targets $$e_{(\{1, a_i + 1 \}, \{1, a_j + 1 \})}, \qquad e_{(\{1, b_i + 1 \}, \{1, b_j + 1 \})}$$ is equal to zero, while the other belongs to $\mathcal{J}_k$, since two of its entries are equal to $1$. \end{enumerate} Since $\mathcal{J}_k$ is a finite set, it follows that there exist natural numbers $n_1 > n_2$ such that $(L_k \vert_{J_k} )^{n_1} = (L_k \vert_{J_k})^{n_2}$. Consequently, the characteristic polynomial of the restriction $L_k \vert_{J_k}$ is a product of cyclotomic polynomials and $x^d$ for some integer $d \geq 0$. \smallskip We now claim that, in finitely many iterations, $L_k$ sends every basis element $h \in \mathcal{H}_k \setminus \mathcal{J}_k$ either into $J_k$ or into a cycle of elements of $\mathcal{H}_k \setminus \mathcal{J}_k$. \begin{enumerate} \item If a pair $(v, w) \in \mathcal{H}_k \setminus \mathcal{J}_k$ is equivalent, then $L_k$ sends it to $0$, which is in $J_k$. \item If $(v, w) \in \mathcal{H}_k \setminus \mathcal{J}_k$ is separated, then its two pairs of targets under $L_k$ both have a coordinate equal to $1$, so both target pairs are in $\mathcal{J}_k$. \item If $(v, w) \in \mathcal{H}_k \setminus \mathcal{J}_k$ is non-separated, $L_k$ sends it to another pair in $\mathcal{H}_k \setminus \mathcal{J}_k$. Since $\mathcal{H}_k$ is a finite set, either the orbit of this non-separated pair enters a cycle of non-separated pairs in $\mathcal{H}_k \setminus \mathcal{J}_k$, or the orbit eventually enters $J_k$. 
\end{enumerate} Therefore, there exists an integer $N$ such that $L_k^N(\mathcal{H}_k)$ is contained in the union of $J_k$ and finitely many (possibly zero) cyclic orbits in $\mathcal{H}_k \setminus \mathcal{J}_k$. Therefore the characteristic polynomial of $L_k$ acting on $H_k$ is the product of the characteristic polynomial of $L_k$ acting on $J_k$, finitely many (possibly zero) cyclotomic polynomials coming from the cyclic orbits, and $x^d$. Thus, the characteristic polynomial of $L_k$ acting on $H_k$ has the desired form. \end{proof} Equipped with Lemmas \ref{l:Lkconjonkernel} and \ref{l:charpolyLk}, we are now ready to prove Proposition \ref{p:charpolyrestricted}. \begin{proof}[Proof of Proposition \ref{p:charpolyrestricted}.] By Lemma \ref{l:Lkconjonkernel}, there is a surjective linear map $\phi:H_k \to \ker(\pi_k)$ that semiconjugates the action of $L_k$ on $H_k$ to the action of $A_k$ on $\ker(\pi_k)$. Hence, the characteristic polynomial of the action of $A_k$ on $\ker(\pi_k)$ divides the characteristic polynomial of $L_k$ (indeed, $A_k \vert_{\ker(\pi_k)}$ is conjugate to the map induced by $L_k$ on the quotient $H_k / \ker(\phi)$). By Lemma \ref{l:charpolyLk}, the characteristic polynomial for the action of $L_k$ on $H_k$ is a product of cyclotomic polynomials and the polynomial $x^d$ for some $d \in \mathbb{N}$. Thus, the same is true for all its monic divisors; in particular, for the characteristic polynomial of the action of $A_k$ on $\ker(\pi_k)$. \end{proof} \subsection{Relating characteristic polynomials of finite covers to the spectral determinant} The main goal of this subsection is to prove the following theorem, which builds on Theorem \ref{t:cyclotomicfactors}. \begin{theorem} \label{t:approxroots} The set of roots in $\mathbb{D}$ of the spectral determinant for the infinite graph, $P_{\Gamma}$, equals the set of roots in $\mathbb{D}$ of the spectral determinant of the finite graph model, $P_{\Gamma_1}$. \end{theorem} \noindent The remainder of this subsection builds up to the statement and proof of Theorem \ref{t:approximations}, which will then be used to prove Theorem \ref{t:approxroots} above. 
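Let us also spell out, as a worked consequence, how Theorem \ref{t:cyclotomicfactors} bears on roots of spectral determinants; here we use the normalization $P_{\Gamma_k}(t) = \det(I - t M_k)$ for a finite graph, under which the coefficient of $t^n$ is a signed count of simple multicycles of length $n$. Writing $N_k := \# \mathcal{V}_k$ and denoting by $\chi_k$ the characteristic polynomial of $A_k$, we have $P_{\Gamma_k}(t) = t^{N_k} \chi_k(1/t)$, so if $\chi_k(x) = \chi_1(x) \, x^{d} \, C(x)$ with $C$ a product of cyclotomic polynomials as in Theorem \ref{t:cyclotomicfactors}, then $$P_{\Gamma_k}(t) = P_{\Gamma_1}(t) \cdot t^{\deg C} C(1/t).$$ The factor $t^{\deg C} C(1/t)$ is a polynomial taking the value $1$ at $t = 0$, all of whose roots lie on the unit circle $S^1$, while the factor $x^{d}$ contributes no roots to $P_{\Gamma_k}$ at all. In particular, for every $k$ the roots of $P_{\Gamma_k}$ in the open disk $\mathbb{D}$ coincide with the roots of $P_{\Gamma_1}$.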
We begin by proving two lemmas (\ref{l:cyclesshowupfinitetime} and \ref{l:shortfinitecyclesarereal}) which relate the multicycles of the finite and infinite graph models. First, multicycles in the infinite graph also show up in big finite graphs: \begin{lemma} \label{l:cyclesshowupfinitetime} For each $n \in \mathbb{N}$ there exists $M \in \mathbb{N}$ such that every multicycle $\gamma$ of length at most $n$ in $\Gamma$ is also a multicycle in the finite graph $\Gamma_m$ for all $m \geq M$. \end{lemma} \begin{proof} By \cite[Proposition 6.2]{TiozzoContinuity}, every vertex in a closed path of length $n$ has width at most $2n$. Hence, it is enough to choose $M$ with $M p + q\geq 2 n$, where $p$ is the period of the critical orbit. \end{proof} \noindent Second, short multicycles in big finite graphs also show up in the infinite graph: \begin{lemma} \label{l:shortfinitecyclesarereal} For each $n \in \mathbb{N}$, there exists $M \in \mathbb{N}$ such that, for every $m \geq M$, every multicycle $\gamma$ of length $n$ in $\Gamma_m$ is also a multicycle in $\Gamma$. \end{lemma} \begin{proof} Suppose that some cycle of length at most $n$ lies in $\Gamma_m$ but not in $\Gamma$. Such a cycle must pass through a vertex of the form $(h, pm +q)$. If the vertex $(h, pm+q)$ is separated, the cycle continues to either $(1,h+1)$ or $(1, q+1)$. In the first case, note that the path starting at $(1,h+1)$ needs to travel vertically at least $h - 1$ and horizontally at least $pm +q - h - 1$ in order to return. (Here, directions like ``vertical,'' ``horizontal,'' etc. refer to the layout used in, for example, Figure \ref{f:1/9}.) Since each step goes up or to the right by at most one, the length $n$ satisfies $$n \geq \max \{ h -1, pm +q - h - 1\} \geq \frac{pm +q - 2}{2}.$$ In the second case, the path needs to travel at least $pm - 1$ horizontally, implying $n \geq pm-1$. 
On the other hand, if the vertex $(h, pm + q)$ is non-separated, its outgoing edge goes to $(q+1, h+1)$ if $q \leq h$, or to $(h+1, q+1)$ if $h < q$. In the first case, the vertical displacement is at least $h - q - 1$ and the horizontal displacement is at least $pm + q - h -1$, yielding $$n \geq \max \{ h - q - 1, pm +q - h - 1\} \geq \frac{pm - 2}{2}.$$ In the second case, the horizontal displacement is at least $pm-1$, so $$n \geq pm -1.$$ Thus, in every case we have $n \geq \frac{pm - 2}{2}$, and hence taking $$m > \frac{2n + 2}{p}$$ is sufficient to exclude the existence of such additional cycles of length $n$. \end{proof} Combining Lemmas \ref{l:cyclesshowupfinitetime} and \ref{l:shortfinitecyclesarereal} immediately implies that the coefficients of the spectral determinants $P_{\Gamma_m}$ are asymptotically stable in the following sense: \begin{corollary} \label{c:truncationscoincide} For any degree $n$, there exists $M \in \mathbb{N}$ such that the terms of degree at most $n$ in the spectral determinant $P_{\Gamma}$ coincide with the terms of degree at most $n$ in the spectral determinant $P_{\Gamma_m}$ for all $m \geq M$. \end{corollary} Now, in order to bound the coefficients of the spectral determinant, we need a uniform bound on the number of simple multicycles of given length. For the infinite graph $\Gamma$, this is given by Proposition \ref{p:boundedcoeffs}; we now obtain a similar bound for the finite graph $\Gamma_m$. 
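Before proving this bound, we record the explicit constants that come out of the proofs of the two previous lemmas: Lemma \ref{l:cyclesshowupfinitetime} holds for any $M$ with $Mp + q \geq 2n$, and Lemma \ref{l:shortfinitecyclesarereal} holds for any $m > \frac{2n+2}{p}$. Since $Mp > 2n + 2$ implies $Mp + q \geq 2n$, in Corollary \ref{c:truncationscoincide} one may therefore take any $$M > \frac{2n + 2}{p}.$$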
\begin{lemma} \label{l:cycle-upper-bound} For each $n$, the number $S_n^{(m)}$ of simple multicycles of length $n$ in $\Gamma_m$ is at most $$S_n^{(m)} \leq 2^q (pm + q)^{3 \sqrt{2 n} + 1}.$$ \end{lemma} \begin{proof} Recall that in $\Gamma_m$ there are the following types of edges: \begin{enumerate} \item If $(i,j)$ is non-separated with $j < pm + q$, one has the edge $(i, j) \to (i+1, j+1)$, which we call of type $U$ (\emph{upwards}); \item if $(i, j)$ is non-separated with $j = p m+ q$, then there is one edge coming out of it, and it may be of one of two types: \begin{itemize} \item if $i > q$, $(i, p m + q) \to (q+1, i+1)$, which we call of type $D'$; \item if $i < q$, $(i, p m + q) \to (i+1, q+1)$, which we call of type $U'$. \end{itemize} Note that if $i = q$, the edge goes to $(q+1, q+1)$, from which there is no further edge, so no multicycle is supported there. \item If $(i,j)$ is separated with $j < pm + q$, one has two edges: \begin{itemize} \item[(a)] $(i, j) \to (1, i+1)$, which we call of type $B$ (\emph{backwards}), \item[(b)] $(i, j) \to (1, j+1)$, which we call of type $D$ (\emph{downwards}); \end{itemize} \item If $(i,j)$ is separated with $j = pm + q$, one has two edges: \begin{itemize} \item[(a)] $(i, pm + q) \to (1, i+1)$, which we call of type $B$ (\emph{backwards}), \item[(b)] $(i, p m + q) \to (1, q+1)$, which we call of type $Q$. \end{itemize} \end{enumerate} Now, fix a simple multicycle $\gamma$ of length $n$. Consider the set of edges of type $B$ along $\gamma$, and let $h_1, \dots, h_r$ be the heights of the sources of all such edges. First, we claim that all $h_i$ are distinct: this is because their targets are $(1, h_i +1)$, and these target vertices have to be distinct by the definition of simple multicycle. 
Moreover, we claim that $$\sum_{i = 1}^r h_i \leq n;$$ this is because to get from $(1, h_i +1)$ to the vertex at height $h_{i+1}$ you need to increase the height by $h_{i+1} -1$, and each move raises the height by at most one. As a consequence, $$\frac{r(r+1)}{2} = \sum_{i =1}^r i \leq \sum_{i = 1}^r h_i \leq n, $$ and hence $r \leq \sqrt{2 n}$. Hence, since $\Gamma_m$ has $\leq (pm+q)^2$ vertices, there are at most $(pm+q)^{2 \sqrt{2n}}$ choices for the set of sources of edges of type $B$. Similarly, consider the heights $h_1', \dots, h_s'$ of the sources of the edges of type $D'$ along $\gamma$. Since each of them has target $(q+1, h_i'+1)$, all these heights are different. Moreover, we claim that $$\sum_{i= 1}^s (h_i' -q) \leq n; $$ this is because to get from $(q +1, h_i' +1)$ to the next vertex of type $D'$ on the cycle, at height $h_{i+1}'$, you need to increase the height by at least $h_{i+1}' - q -1$, and each move raises the height by at most one. Thus, similarly as before, since all the $(h_i' - q)$ are distinct, we obtain $s \leq \sqrt{2 n}$. Since the possible sources for an edge of type $D'$ are $(i, pm+q)$ with $q+1 \leq i \leq pm + q$, there are at most $(pm)^{\sqrt{2n}}$ possible choices. We also claim that there is at most one edge labeled $Q$: this is because its target is $(1, q+1)$, hence by disjointness this can only happen once along the multicycle. Since the number of possible sources for edges of type $Q$ is at most $pm + q$, there are at most $pm + q$ such choices. Finally, since the edges $U'$ have $q$ possible targets, there are at most $q$ of them in any multicycle, and their sources have height between $1$ and $q$, so there are at most $2^q$ possible choices. Now, we claim that the positions of the sources of the $B, Q, D',$ and $U'$ edges determine the multicycle. 
This is because any simple cycle in $\gamma$ needs to contain at least one of them (otherwise the path keeps going to the right), and, once these vertices are specified, the other vertices of the path are determined uniquely since there is only one edge coming out of non-separated vertices. Hence there are at most $((pm + q)^2)^{\sqrt{2 n}}$ choices for the sources of the $B$ edges, $(pm + q)^{\sqrt{2 n}}$ choices for the sources of the $D'$ edges, $pm+q$ choices for the sources of the $Q$ edge, and $2^q$ for the sources of the $U'$ edges. Altogether, this leads to at most $$ (pm + q)^{2 \sqrt{2 n}} \cdot (pm + q)^{\sqrt{2 n}} \cdot (pm+q) \cdot 2^q \leq 2^q (pm + q)^{3 \sqrt{2 n} + 1}$$ multicycles of length $n$, as required. \end{proof} We now obtain asymptotic stability of roots in $\mathbb{D}$ of the spectral determinants $P_{\Gamma_m}$: \begin{theorem} \label{t:approximations} The sequence of functions $(P_{\Gamma_k}(t))_{k \geq 0}$ converges uniformly to $P_\Gamma(t)$ on compact subsets of $\mathbb{D}$. As a consequence, roots of $P_{\Gamma}$ in $\mathbb{D}$ are approximated arbitrarily well by roots of $P_{\Gamma_k}$. \end{theorem} \begin{proof} We claim that for each $r< 1$, the sequence $(P_{\Gamma_k}(t))_{k \geq 0}$ converges uniformly to $P_\Gamma(t)$ on the disk $\mathbb{D}_r = \{ |t| < r \}$. Let $$P_\Gamma(t) = \sum_{n = 0}^\infty a_n t^n, \qquad P_{\Gamma_k}(t) = \sum_{n = 0}^\infty a_n^{(k)} t^n$$ be the power series expansions of $P_\Gamma$ and $P_{\Gamma_k}$. 
First of all, by Proposition \ref{p:boundingnumberofcycles}, the number $S_n$ of simple multicycles of length $n$ in the graph $\Gamma$ is bounded above by $(2n)^{\sqrt{2n}}$, hence $$|a_n| \leq (2n)^{\sqrt{2n}}.$$ As for $\Gamma_k$, note that by Lemma \ref{l:shortfinitecyclesarereal}, if $k > \frac{2n + 2}{p}$ then $$|a_n^{(k)}| \leq S_n \leq (2n)^{\sqrt{2 n}}.$$ On the other hand, if $k \leq \frac{2n + 2}{p}$, by Lemma \ref{l:cycle-upper-bound} we obtain $$|a_n^{(k)}| \leq S_n^{(k)} \leq 2^q (pk + q)^{3 \sqrt{2 n} + 1} \leq 2^q (2n + q + 2)^{3 \sqrt{2 n} + 1} .$$ Now fix $0 < r < 1$ and $\epsilon > 0$, and pick $c$ with $1 < c < \frac{1}{r}$. There exists $n_0$ such that $$\frac{(3 \sqrt{2n} + 1) \log(2n + q + 2) + q \log 2}{n} \leq \log c \qquad \textup{for all } n \geq n_0.$$ Hence, by the previous estimates, for all $n \geq n_0$ and all $|t| \leq r$ we have $$|a_n t^n| \leq (2n)^{\sqrt{2n}} r^n \leq 2^q (2n + q + 2)^{3 \sqrt{2n} + 1} r^n \leq (rc)^n,$$ and the same bound holds for $|a_n^{(k)} t^n|$. Thus, if $n_1 \geq n_0$, $$ \left|\sum_{n = n_1}^\infty a_n t^n \right| \leq \sum_{n = n_1}^\infty (rc)^n = \frac{(rc)^{n_1}}{1 - rc} < \epsilon$$ for $n_1$ sufficiently large, and exactly the same estimate holds for $P_{\Gamma_k}$. Now, by Corollary \ref{c:truncationscoincide} there exists $k_0$ such that for $k \geq k_0$ the coefficients of degree at most $n_1$ of $P_\Gamma$ and $P_{\Gamma_k}$ are the same. Hence, for $k \geq k_0$, $$ \left| P_\Gamma(t) - P_{\Gamma_k}(t) \right| \leq \left| \sum_{n = 0}^{n_1} a_n t^n - \sum_{n = 0}^{n_1} a_n^{(k)} t^n \right| + \left| \sum_{n = n_1 + 1}^{\infty} a_n t^n \right| + \left| \sum_{n = n_1 + 1}^{\infty} a_n^{(k)} t^n \right| \leq 0 + \epsilon + \epsilon = 2 \epsilon,$$ which proves the uniform convergence on the disk of radius $r$. 
\end{proof} \begin{proof}[Proof of Theorem \ref{t:approxroots}] By Theorem \ref{t:approximations} and Rouch\'e's theorem, the set of roots of $P_{\Gamma_k}$ in $\mathbb{D}$ converges in the Hausdorff topology to the set of roots of $P_\Gamma$. By Theorem \ref{t:cyclotomicfactors}, for any $k$ the roots of $P_{\Gamma_k}$ in $\mathbb{D}$ coincide with the roots of $P_{\Gamma_1}$. Hence, the roots of $P_{\Gamma}$ in $\mathbb{D}$ coincide with the roots of $P_{\Gamma_1}$. \end{proof} \section{Continuous extension of $Z^+$ to $\mathbb{R}/\mathbb{Z}$} \label{S:continuous-ext} In this section, we use Theorem \ref{t:approxroots} to prove Theorem \ref{t:continuousdiskextension}, reproduced here: \begin{theoremcontinuousdiskextension} The map $Z^+ : \mathbb{Q} / \mathbb{Z} \to Com^+(\mathbb{C})$ admits a continuous extension $\mathbb{R}/\mathbb{Z} \to Com^+(\mathbb{C})$. \end{theoremcontinuousdiskextension} Recall that we say that a sequence $(\mathcal{W}_n)$ of labeled wedges converges to a labeled wedge $\mathcal{W}$ if for any finite subgraph of $\mathcal{W}$, there exists $n_0$ such that, for $n \geq n_0$, the labels of all corresponding vertices of $\mathcal{W}_n$ agree with the labels of $\mathcal{W}$. By \cite[Proof of Proposition 8.5]{TiozzoContinuity}, for any angle $\theta$ the limits $$\mathcal{W}_{\theta^+} := \lim_{\theta' \to \theta^+}\mathcal{W}_{\theta'}, \quad \mathcal{W}_{\theta^-} := \lim_{\theta' \to \theta^-} \mathcal{W}_{\theta'}$$ exist. The properties of these limit graphs are described in the following lemmas. \begin{lemma} \cite[Lemma 8.6]{TiozzoContinuity} \label{l:limitonlydiffers} Let $\theta$ be purely periodic of period $p$. Then $\mathcal{W}_{\theta}$, $\mathcal{W}_{\theta^+}$ and $\mathcal{W}_{\theta^-}$ are purely periodic of period $p$, and differ only in the labelings of pairs $(i,j)$ with either $i \equiv 0 \mod{p}$ or $j \equiv 0 \mod{p}$. 
\end{lemma} \begin{lemma} \cite[Lemma 7.3]{TiozzoContinuity} \label{l:isomorphicfinitemodels} Let $\mathcal{W}^a$ and $\mathcal{W}^b$ be two labeled wedges which are purely periodic of period $p$. Suppose moreover that for every pair $(i,j)$ with $i,j \not \equiv 0 \mod{p}$, the label of $(i,j)$ in $\mathcal{W}^a$ equals the label in $\mathcal{W}^b$. Then the finite models $\Gamma_1^a$ and $\Gamma_1^b$ are isomorphic graphs. \end{lemma} \begin{remark} The subscript $1$ in the notation $\Gamma_1^a$ in Lemma \ref{l:isomorphicfinitemodels} is consistent with the notation defined at the beginning of \S \ref{sec:coversofthefinitemodel} to denote the finite ``1-cover'' model associated to the labeled wedge $\mathcal{W}^a$. It differs slightly from the notation in \cite{TiozzoContinuity}. \end{remark} \begin{lemma} \label{l:Hausdorffconvergence} Let $(\mathcal{W}_n)_{n \in \mathbb{N}}$ be a sequence of labeled wedges (associated to angles $(\theta_n)_{n \in \mathbb{N}}$) that converges to a labeled wedge $\mathcal{W}$. Denote by $(P_n)_{n \in \mathbb{N}}$ and $P$ the associated spectral determinants. Then $$ \lim_{n \to \infty} S^1 \cup \{z \in \mathbb{D} \mid P_n(z) = 0\} = S^1 \cup \{z \in \mathbb{D} \mid P(z) = 0\} $$ in the Hausdorff topology. \end{lemma} \begin{proof} The proof of \cite[Lemma 6.4]{TiozzoContinuity} shows that $P_n$ converges to $P$ uniformly on compact subsets of the open disk $\mathbb{D}$. The result then immediately follows by applying Rouch\'e's Theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{t:continuousdiskextension}] For a purely periodic angle $\theta$, denote the finite graphs associated to the labeled wedges $\mathcal{W}_{\theta}$, $\mathcal{W}_{\theta^+}$ and $\mathcal{W}_{\theta^-}$ by $\Gamma_1^{\theta}$, $\Gamma_1^{\theta^+}$, and $\Gamma_1^{\theta^-}$, respectively. 
Combining Lemmas \ref{l:limitonlydiffers} and \ref{l:isomorphicfinitemodels} immediately gives that for any purely periodic angle $\theta$, the finite models $\Gamma_1^{\theta}$, $\Gamma_1^{\theta^+}$ and $\Gamma_1^{\theta^-}$ are isomorphic graphs. Hence the spectral determinants of the finite models, $P_{\Gamma_1^{\theta}}$, $P_{\Gamma_1^{\theta^+}}$ and $P_{\Gamma_1^{\theta^- }}$, coincide. Consequently, Theorem \ref{t:approxroots} implies that the sets of roots in $\mathbb{D}$ of each of the spectral determinants (of the infinite models) $P_{\Gamma^{\theta}}$, $P_{\Gamma^{\theta^+}}$ and $P_{\Gamma^{\theta^- }}$ coincide. If $\theta$ is not purely periodic, $\mathcal{W}_{\theta^+} = \mathcal{W}_{\theta} = \mathcal{W}_{\theta^-}$ by \cite[Proof of Proposition 8.5]{TiozzoContinuity}. In both cases, Lemma \ref{l:Hausdorffconvergence} then gives the result. \end{proof} \section{Kneading theory for principal veins} \label{S:kneading-veins} The purpose of this section is to define a new ``kneading polynomial'' for quadratic polynomials in principal veins. Although approaches to kneading theory for tree maps already exist (e.g. \cite{kneadingtheoryfortreemaps}), they do not satisfy the continuity properties we need later (specifically, in \S \ref{s:persistence}), hence we cannot apply them directly. Instead, we formulate a new kneading determinant which is uniform along each principal vein, using the first return map to a certain subinterval. \subsection{Itineraries} \label{ss:itineraries} Fix integers $0 < p < q$, with $p$ and $q$ coprime, and let $\mathcal{V}_{p/q}$ denote the $\tfrac{p}{q}$-principal vein. As described in \S \ref{ss:veins}, there is a fixed topological/combinatorial model that describes the dynamics of the restriction of any polynomial $f_c$ in $\mathcal{V}_{p/q}$ to its Hubbard tree $T_c$. 
Namely, $T_c$ is, topologically, a star-shaped tree with $q$ branches, and the central vertex of the star is the $\alpha$ fixed point of $f_c$. One of the branches contains the critical point $0$ in its interior (unless the map $f_c$ is conjugate to a rotation, which happens, e.g., for the Douady rabbit map). We cut this branch at $0$ to form two topological intervals; we label the interval that contains the central vertex of the star $I_1^c$, and we label the other interval $I_0^c$. We label the other branches $I_2^c,\ldots,I_q^c$, so that $f_c(I_k^c) = I_{k+1}^c$ for any $2 \leq k \leq q-1$. (See Figure \ref{f:Htree1/5}.) We take these intervals to be closed, so that both the $\alpha$-fixed point and $0$ belong to more than one interval. Then the dynamics of $f_c$ restricted to $T_c$ is as follows: \begin{itemize} \item $I_k^c$ is sent to $I_{k+1}^c$ homeomorphically, for $1\leq k\leq q-1$. \item $I_q^c$ is sent to $I_0^c\cup I_1^c$. \item $I_0^c$ is sent to a subset of $I_0^c\cup I_1^c\cup I_2^c$. \end{itemize} To lighten notation, we shall sometimes drop the superscript $c$ in $I_k^c$ when it is clear which parameter we are referring to. \begin{definition}[Itinerary] Let $f_c$ be a map in the principal vein $\mathcal{V}_{p/q}$. For any point $x \in T_c$ such that $x \not \in \bigcup_{k=0}^{\infty} f_c^{-k}(\{0,\alpha_c\})$, define the \emph{itinerary} of $x$ under $f_c$ to be the sequence \begin{equation} \label{E:itin1} \textup{It}_{c}(x) \coloneqq w_0 w_1 \ldots \in \{ 0,1,\ldots,q \}^{\mathbb{N}} \end{equation} such that, for all $j \geq 0$, $I_{w_j}^c$ is the interval in the Hubbard tree $T_c$ that contains $f_c^j(x)$. Additionally, if $x \neq 0$ but there exists $k \geq 1$ such that $f_c^k(x) = 0$, we define \begin{equation} \label{E:itin2} \textup{It}_{c}(x) := \lim_{y \to 0} \textup{It}_{c}(y). 
\end{equation} \end{definition} \begin{figure} \includegraphics[width=0.8 \textwidth]{real-5-ray.png} \caption{The itinerary for the critical orbit of the real map of characteristic angle $\theta = \frac{13}{31} = .\overline{01101}$. With respect to the highlighted partition, the itinerary $\textup{It}(c)$ of the critical value is $\overline{\texttt{20121}}$.} \label{F:real5} \end{figure} Note that the latter limit is well defined, as points on either side of $0$ map to a one-sided neighborhood of $c = f_c(0)$. In particular, Eq. \eqref{E:itin2} applies to define the itinerary $\textup{It}_c(c)$ of the critical value when the critical point of $f_c$ is periodic of period $p > 1$. This will be the most important case in our paper. Thus, we can give the following: \begin{definition} \label{def:kneadingseq} We define the \emph{itinerary} associated to the map $f_c$, where $c$ belongs to any principal vein, as $$\textup{It}(c) := \textup{It}_c(c),$$ the itinerary of the critical value. \end{definition} It will turn out to be simpler to consider a version of the itinerary that uses only the symbols $\{0,1, 2\}$, independently of $q$. In order to do so, we consider the first return map to the interval $I_0 \cup I_1 \cup I_2$, which is in fact unimodal: \begin{definition}[First return map itinerary \emph{or} simplified itinerary] \label{def:firstreturnitinerary} Let $f_c$ be a map in the principal vein $\mathcal{V}_{p/q}$. Let $\widehat{f}_c$ denote the first return map of $f_c$ to $I_0^c \cup I_1^c \cup I_2^c$. Define the \emph{first return map itinerary}, or \emph{simplified itinerary}, of a point $x$, denoted $\widehat{\textup{It}}_c(x)$, to be the itinerary of $x$ under the map $\widehat{f}_c$. That is, $$\widehat{\textup{It}}_c(x) = w_0w_1w_2\ldots$$ is the sequence in $\{0,1,2\}^{\mathbb{N}}$ where $(\widehat{f}_c)^j(x) \in I_{w_j}^c$ for all $j$. 
Moreover, we define the simplified itinerary for the map $f_c$ as $$\widehat{\textup{It}}(c) := \widehat{\textup{It}}_c(c).$$ \end{definition} It is easy to see that to go from $\textup{It}_c(x)$ to $\widehat{\textup{It}}_c(x)$, one simply deletes all letters greater than $2$ from $\textup{It}_c(x)$. Conversely, to go from $\widehat{\textup{It}}_c(x)$ to $\textup{It}_c(x)$, one replaces every $2$ in the sequence $\widehat{\textup{It}}_c(x)$ with $234\dots q$. \subsection{Kneading polynomial and kneading determinant} We are now ready to define a new analogue of the kneading determinant for polynomials along a principal vein. Let us fix $p, q$ coprime, with $0 < p < q$. We now define maps $F_{j, q, \lambda}$ which are ``candidate'' piecewise linear models for the first return map $\widehat{f}_c$ for any parameter $c$ on the $p/q$-vein (see Remark \ref{l:PLsemiconjugacy}). \begin{definition} \label{d:Fmaps} Let us define \begin{align*} F_{0,q,\lambda}(x) & := \lambda x + \lambda + 1\\ F_{1,q,\lambda}(x) & := -\lambda x + \lambda + 1\\ F_{2,q,\lambda}(x) & := -\lambda^{q-1} x + \lambda^{q-1} + 1 \end{align*} and let $a_j$, $b_j$ be the polynomials such that $$F_{j, q, \lambda}(x) = a_j(\lambda) x + b_j(\lambda).$$ Then for each $j \in \{0,1,2\}$, there exist unique choices of $\epsilon_j \in \{ +1, -1 \}$ and $q_j \in \mathbb{N}^+$, and a polynomial $B_j$, such that $$a_j(\tfrac{1}{t}) = \frac{\epsilon_j}{t^{q_j}}, \qquad b_j(\tfrac{1}{t}) = \frac{B_j(t)}{t^{q_j}}$$ for all $t \in \hat{\mathbb{C}}$. Let $w= \widehat{\textup{It}}(c) \in \{0, 1, 2 \}^\mathbb{N}$. For each integer $k \geq 1$, define \begin{align*} \eta_k & := \epsilon_{w_1} \dots \epsilon_{w_k} \\ d_k & := q_{w_0} + \dots + q_{w_{k-1}} \end{align*} with the conventions $\eta_0 := 1$ and $d_0 := 0$. We define the \emph{$q$-principal vein kneading determinant} of $f_c$ as $$D(t) := \sum_{k =0}^\infty \eta_k B_{w_k} t^{d_k}.$$ This is a power series in the formal variable $t$. 
\end{definition} We will sometimes suppress the $q$ in the subscript $F_{w,q,z}$ and write just $F_{w,z}$ when the $q$ is clear from the context. Since the coefficients of the principal vein kneading determinant are uniformly bounded, we have: \begin{lemma} For any natural number $q$, the $q$-principal vein kneading determinant $D(t)$ converges in the unit disk to a holomorphic function. The roots of $D(t)$ inside the unit circle, in particular the smallest root, change continuously with $w$. \end{lemma} In the case when the first return map itinerary is purely periodic, i.e. if the critical orbit of $f$ is purely periodic, the principal vein kneading determinant is a rational function. In this case, we can define the following polynomial directly. \begin{definition}[principal vein kneading polynomial] Let $f$ be a critically periodic quadratic polynomial of period $p$, with simplified itinerary $(w_1, w_2, \dots, w_{p-1}, w_p)^{\infty}$. Then the \emph{$q$-principal vein kneading polynomial} $P_f$ is defined as \begin{equation} \label{E:pvkp} P_f(z) := F_{w_{p-1},q, z} \circ \ldots \circ F_{w_1, q, z}(1+z). \end{equation} \end{definition} The principal vein kneading polynomial and the kneading determinant are closely related: \begin{lemma} \label{L:periodicP} Suppose that $(w_j)$ is periodic of period $p$; then we have $$D(t) = P_f(\tfrac{1}{t}) \cdot \frac{ \eta_{p-1} t^{d_{p}}}{1 - \eta_p t^{d_p}}.$$ As a corollary, $D(t)$ and $P_f(\tfrac{1}{t})$ have the same roots inside the unit disk. 
\end{lemma} \begin{proof} If $(w_j)$ is periodic with period $p$, then $$\eta_{pn + k} = (\eta_p)^n \eta_k, \qquad B_{w_{pn+k}} = B_{w_k}, \qquad d_{pn+k} = n d_p + d_k,$$ hence $$D(t) = \sum_{k = 0}^\infty \eta_k B_{w_k} t^{d_k} = \frac{ \sum_{k = 0}^{p-1} \eta_k B_{w_k} t^{d_k}}{1 - \eta_p t^{d_p}}.$$ Moreover, we can compute by induction, for any $n$, $$F_{w_n, 1/t} \circ \dots \circ F_{w_1,1/t}(x) = \frac{\epsilon_{w_1} \epsilon_{w_2} \dots \epsilon_{w_n} x + \sum_{k = 1}^n B_{w_k} \epsilon_{w_{k+1}} \dots \epsilon_{w_n} t^{q_{w_1} + \dots + q_{w_{k-1}}}}{t^{q_{w_1} + \dots + q_{w_n}}},$$ hence also $$F_{w_n, 1/t} \circ \ldots \circ F_{w_1, 1/t}(1 + \tfrac{1}{t}) \, \eta_n t^{d_{n+1}} = \sum_{k = 0}^n \eta_k B_{w_k} t^{d_k}.$$ Setting $n = p -1$ yields $$P_f(\tfrac{1}{t}) \, \eta_{p-1} t^{d_{p}} = \sum_{k = 0}^{p-1} \eta_k B_{w_k} t^{d_k},$$ which implies $$ D(t) = P_f(\tfrac{1}{t}) \cdot \frac{ \eta_{p-1} t^{d_{p}}}{1 - \eta_p t^{d_p}},$$ as required. \end{proof} \begin{remark} Note that there is some ambiguity in the definition of the first letter $w_0$ of the itinerary of the critical point, since $0$ lies on the boundary of both $I_0$ and $I_1$; however, one has $1 + z = F_{0,z}(0) = F_{1, z}(0)$, hence we can interpret the formula \eqref{E:pvkp} for $P_f(z)$ as $$P_f(z) = F_{w_{p-1}, z} \circ \ldots \circ F_{w_1, z} \circ F_{w_0,z}(0),$$ where $w_0$ may be indifferently $0$ or $1$. \end{remark} \subsection{Relationship with Markov matrix} We now prove the main result of this section, relating the roots of the kneading polynomial to the roots of the Markov polynomial. Note that this completes the proof of Theorem \ref{T:equalpolys} from the introduction. \begin{theorem} When $f$ is critically periodic and on the $\tfrac{p}{q}$-principal vein, the roots of the $q$-principal vein kneading polynomial $P_f(z)$ are the eigenvalues of the incidence matrix $A$ of the Markov decomposition defined using the postcritical points as well as the $\alpha$-fixed point.
\end{theorem} \begin{proof} Let $$P := \{ f^n(c) : n \geq 0 \} \cup \{ \alpha \}$$ be the ``postcritical'' set; recall that $f$ is critically periodic. Denote by $\widehat{f}$ the first return map, and let $p$ be such that $\widehat{f}^p(c) = c$. Let us denote $c_j := \widehat{f}^j(c)$, let $\widehat{\textup{It}}_c(c_1) = w_1 w_2 \dots w_{p-1} \dots$ be the itinerary of the critical value, and let us define the map $\pi : P \cap I \to \mathbb{C}$ as $$\begin{array}{ll} \pi(c) & = 0 \\ \pi(\alpha) & = 1 \\ \pi(c_k) & = F_{w_{k-1},q, z} \circ \dots \circ F_{w_1,q, z}(1+z) \qquad \textup{for }1 \leq k \leq p-1. \end{array}$$ If $z$ is a root of the $q$-principal vein kneading polynomial, then $$F_{w_{p-1}, q,z} \circ \ldots \circ F_{w_1, q,z}(1+z) = 0,$$ which implies that $\pi$ satisfies $$\pi(c_{j+1}) = F_{w_j,q, z}(\pi(c_j)) \qquad \textup{for } 0 \leq j \leq p-1.$$ Moreover, let us write the decomposition of $T \setminus P$ into connected components as $$T \setminus P = \bigcup_{\alpha \in \mathcal{A}} J_\alpha.$$ Now, fix an orientation on each branch of $T$ (for instance, we can set the orientation on $I_0$ as increasing towards $I_1$, on $I_1$ as increasing towards $\alpha$, and on any $I_k$ with $k \geq 2$ as increasing away from $\alpha$); for each interval $J_\alpha = [a, b] \subseteq I_0 \cup I_1$, with $a < b$, define $$v_\alpha := \pi(b) - \pi(a).$$ If $J_\alpha \subseteq I_k$ with $k \geq 2$, then $J_\beta = f^{q+1 - k}(J_\alpha) \subseteq I_0 \cup I_1$, and we define $$v_\alpha := z^{k-q-1} v_{\beta}.$$ We claim that $(v_\alpha)$ is an eigenvector for $A$, of eigenvalue $z$. In order to prove this, let $J_\alpha = [c_j, c_k] \subseteq I_0 \cup I_1$.
Then, since $J_\alpha$ is contained in one monotonic piece, $f(J_\alpha) = [c_{j+1}, c_{k+1}]$, and also there exists $w \in \{0, 1, 2 \}$ for which $$\pi(c_{j+1}) = F_{w, q,z}(\pi(c_j)), \qquad \pi(c_{k+1}) = F_{w, q,z}(\pi(c_k)).$$ Now, write the decomposition of $f(J_\alpha)$ as $f(J_\alpha) = \bigsqcup_{i = 1}^{r} J_{\beta_i}$, and also denote $J_{\beta_i} = [p_i, p_{i+1}]$ with $p_i < p_{i+1}$. Note that $p_1 = c_{j+1}$, $p_{r+1} = c_{k+1}$ if $w = 0$, and $p_1 = c_{k+1}$, $p_{r+1} = c_{j+1}$ if $w = 1$. Then, denoting by $s(w) \in \{ \pm 1 \}$ the sign of the slope of $F_{w,q,z}$, we compute \begin{align*} \sum_{i = 1}^r v_{\beta_i} & = \sum_{i = 1}^r (\pi(p_{i+1}) - \pi(p_{i})) \\ & = \pi(p_{r+1}) - \pi(p_{1}) \\ & = s(w) \left( \pi(c_{k+1}) - \pi(c_{j+1}) \right)\\ & = s(w) \left( F_{w,q, z}(\pi(c_k)) - F_{w, q,z}(\pi(c_j)) \right) \\ & = z ( \pi(c_k) - \pi(c_j)) = z v_\alpha, \end{align*} showing that $z$ is an eigenvalue for $A$. Conversely, suppose that $z$ is an eigenvalue of $A$, with eigenvector $(v_\alpha)$. Then define $m : I_0 \cup I_1 \cup I_2 \to \mathbb{C}$ as $$m(x) := \epsilon(x) \sum_{I_\alpha \subseteq [0, x]} v_\alpha$$ where $\epsilon(x) = +1$ if $x \in I_1 \cup I_2$ and $\epsilon(x) = -1$ if $x \in I_0$ (note that, whatever choice we make about $\epsilon(0)$, we always have $m(0) = 0$). Then, since $(v_\alpha)$ is an eigenvector, $$m(\widehat{f}(c_j)) = F_{w_j,q, z}(m(c_j))\qquad \textup{for any }0 \leq j \leq p-1.$$ Thus, $$m(\widehat{f}^p(c_0)) = F_{w_{p-1}, q, z} \circ \ldots \circ F_{w_1,q, z}(1+z)$$ and, since $\widehat{f}^p(c_0) = c_0$, we obtain $$F_{w_{p-1},q, z} \circ \ldots \circ F_{w_1,q, z}(1+z) = m(c_0) = 0.$$ \end{proof} \begin{remark} This proof shows that the eigenvalues are the same. However, it does not quite prove that the polynomials are the same, as there could be Jordan blocks of size $> 1$. \end{remark} With the previous method we can also obtain the following semiconjugacy to the linear model. Since a similar result is obtained e.g. in \cite{BaillifCarvalho}, we do not provide the proof.
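To make the bookkeeping of this section concrete, here is a small self-contained Python sketch (our own illustration, not part of the paper; all function names are ours) that evaluates the kneading polynomial by composing the affine models $F_{j,q,z}$ and numerically checks the identity of Lemma \ref{L:periodicP} for a sample periodic sequence. The identity is formal, so the check does not require the sample block to be a realizable itinerary.

```python
# Numerical sketch (ours) of Definition (d:Fmaps) and Lemma (L:periodicP).

def F(j, q, lam, x):
    """The affine models F_{j,q,lam}."""
    if j == 0:
        return lam * x + lam + 1
    if j == 1:
        return -lam * x + lam + 1
    return -lam ** (q - 1) * x + lam ** (q - 1) + 1  # j == 2

def kneading_poly(w, q, z):
    """P_f(z) = F_{w_{p-1},q,z} o ... o F_{w_1,q,z}(1+z),
    where w = (w_1, ..., w_p) is one period of the simplified itinerary."""
    x = 1 + z
    for j in w[:-1]:                     # apply F_{w_1}, ..., F_{w_{p-1}} in order
        x = F(j, q, z, x)
    return x

def kneading_det(w0, w, q, t, terms=300):
    """Truncation of D(t) = sum_k eta_k B_{w_k} t^{d_k} for the periodic
    sequence w_0, w_1, w_2, ... with ambiguous first letter w0 and period block w."""
    eps = {0: 1, 1: -1, 2: -1}                       # signs eps_j
    deg = {0: 1, 1: 1, 2: q - 1}                     # exponents q_j
    B = {0: 1 + t, 1: 1 + t, 2: 1 + t ** (q - 1)}    # values B_j(t)
    seq = [w0] + [w[k % len(w)] for k in range(terms)]
    D, eta, d = 0.0, 1.0, 0
    for k in range(terms):
        D += eta * B[seq[k]] * t ** d
        d += deg[seq[k]]                 # d_{k+1} = d_k + q_{w_k}
        eta *= eps[seq[k + 1]]           # eta_{k+1} = eta_k * eps_{w_{k+1}}
    return D
```

For instance, with $q = 3$, $w_0 = 0$ and period block $(1,2,0)$ one gets $P_f(z) = 1 + z^4$, $\eta_{p-1} = 1$, $d_p = 4$ and $\eta_p = 1$, and the truncated series agrees with $P_f(1/t)\,\eta_{p-1}t^{d_p}/(1-\eta_p t^{d_p})$ to machine precision at, say, $t = 0.3$.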
\begin{remark}[Semiconjugacy to the piecewise linear model] \label{l:PLsemiconjugacy} Let $f_c$ be in a principal vein $\mathcal{V}_{p/q}$, and let $\lambda = e^{h(f_c)}$. Set $I_{0,q,\lambda} \coloneqq [1-\lambda^q,0 ), I_{1,q,\lambda}\coloneqq [ 0, 1), I_{2,q,\lambda} \coloneqq [1,1+\lambda]$ and write $I_{q,\lambda} \coloneqq [1 -\lambda^q,1+\lambda]$. Now, the maps $F_{j, q, \lambda}$ can be ``glued" to define the continuous, piecewise linear map $F_{q, \lambda} : I_{q, \lambda} \to I_{q, \lambda}$ as $$F_{q, \lambda}(x) := \sum_{j = 0}^2 F_{j, q, \lambda}(x) \boldsymbol{1}_{I_{j, q, \lambda}}(x).$$ Then the first return map $\widehat{f}_c$ of $f_c$ acting on $I_0^c \cup I_1^c \cup I_2^c$ is semi-conjugate to $F_{q,\lambda}$ acting on $I_{q,\lambda}$: \[ \begin{tikzcd} I_0^c \cup I_1^c \cup I_2^c \arrow{r}{\widehat{f}_c} \arrow[swap]{d}{h} & I_0^c \cup I_1^c \cup I_2^c \arrow{d}{h} \\% I_{q,\lambda} \arrow{r}{F_{q,\lambda}}& I_{q,\lambda} \end{tikzcd} \] The semi-conjugacy $h$ sends the critical point of $f_c$ to $0 \in I_{q,\lambda}$ and the $\alpha$-fixed point of $f_c$ to $1 \in I_{q,\lambda}$; $h(I^c_j) = I_{j,q,\lambda}$ for $j =0,1,2$. If $f_c$ is critically periodic, then $F_{q,\lambda}$ is too, meaning $F_{q,\lambda}^n(0) = 0$ for some $n \geq 1$. \end{remark} \section{Surgery and recoding} \label{S:surgery} \subsection{The Branner-Douady surgery} Recall that postcritically finite parameters in the Mandelbrot set are partially ordered: \begin{definition} Given two postcritically finite parameters $c_1, c_2$, we denote $c_1 <_{\mathcal{M}} c_2$ if $c_1$ lies on the vein $[0, c_2]$. That is, $c_1$ lies closer to the main cardioid than $c_2$. \end{definition} Following Branner-Douady \cite{BrannerDouady}, there is a \emph{$\tfrac{p}{q}$-surgery map} $\Psi_{p/q}:\mathcal{V}_{1/2} \to \mathcal{V}_{p/q}$ between the real vein in the Mandelbrot set and the $\tfrac{p}{q}$-principal vein. The construction was extended by Riedl \cite{Riedl} to arbitrary veins. 
\begin{theorem}[Branner-Douady \cite{BrannerDouady}, Riedl \cite{Riedl}] \label{T:BD} The surgery map $\Psi_{p/q}$ is a homeomorphism between the real vein $\mathcal{V}_{1/2}$ and the $\tfrac{p}{q}$-principal vein $\mathcal{V}_{p/q}$. Moreover: \begin{enumerate} \item $\Psi_{p/q}$ preserves the order $<_\mathcal{M}$; \item the parameter $c' = \Psi_{p/q}(c)$ is critically periodic if and only if $c$ is; \item for any real critically periodic parameter $c$, the parameter $c' = \Psi_{p/q}(c)$ is the only critically periodic parameter in the $\tfrac{p}{q}$-principal vein satisfying $$\widehat{\textup{It}}(c') = \widehat{\textup{It}}(c).$$ \end{enumerate} \end{theorem} The map $\Psi_{p/q}$ is constructed as follows, at least for critically periodic parameters (for details, see \cite{BrannerDouady}): \begin{itemize} \item $\Psi_{p/q}:\mathcal{V}^{per}_{1/2} \to \mathcal{V}^{per}_{p/q}$: given a critically periodic map $f: I \to I$ on the interval, partition its domain $I$ into three subintervals $I_0, I_1, I_2$, where the critical point separates $I_0$ and $I_1$, and the $\alpha$-fixed point, denoted $\alpha$, separates $I_1$ and $I_2$. Now, attach $q-2$ additional branches starting at $\alpha$, denoted $I_3$, $I_4$, $\dots$, $I_q$. We then modify the dynamics as follows: instead of sending the interval $I_2$, which goes from $\alpha$ to the critical value, back into the original interval, we send it to another branch $I_3$, send $I_3$ to $I_4$, and so on, and finally send $I_q$ back to the original image of $I_2$. The resulting map is conjugate to a map in $\mathcal{V}_{p/q}^{per}$. \item $\Psi^{-1}_{p/q}:\mathcal{V}_{p/q}^{per} \to \mathcal{V}_{1/2}^{per}$: for any map corresponding to a parameter in $\mathcal{V}_{p/q}^{per}$, take the first return map on $I_0\cup I_1\cup I_2$. This first return map is an interval map, which is topologically conjugate to a map in $\mathcal{V}_{1/2}^{per}$.
\end{itemize} There is some subtlety in defining $\Psi_{p/q}$ as above if the critical point eventually maps to the $\alpha$-fixed point; however, from now on we only focus on the critically periodic case, so this situation will not arise. Tiozzo \cite{TiozzoTopologicalEntropy} provided a description of the Branner-Douady surgery in terms of external angles, called {\bf combinatorial surgery}. Since we are doing kneading theory, we are more interested in a description of the Branner-Douady surgery in terms of itineraries, which we call {\bf recoding} and describe below. \subsection{Binary itineraries} Let us first recall the classical setup of Milnor-Thurston \cite{MilnorThurston} for unimodal interval maps. This gives rise to a symbolic coding with two symbols, $0$ and $1$. Let $f_c : I \to I$ be a real quadratic map. Decompose the interval $I$ into two subintervals $J_0$ and $J_1$, separated by the critical point $0$, where $J_1$ contains the $\alpha$-fixed point. For any point $x \in I$ such that $x \not \in \bigcup_{k=0}^{\infty} f_c^{-k}(\{0\})$, define the \emph{binary itinerary} of $x$ under $f_c$ to be the sequence \begin{equation} \label{E:itin1-bin} \textup{it}_{c}(x) \coloneqq w_0 w_1 \ldots \in \{ 0,1 \}^{\mathbb{N}} \end{equation} such that, for all $j \geq 0$, $J_{w_j}$ is the interval that contains $f_c^j(x)$. Additionally, if $x \neq 0$ but there exists $k \geq 1$ such that $f_c^k(x) = 0$, we define \begin{equation} \label{E:itin2-bin} \textup{it}_{c}(x) := \lim_{y \to 0} \textup{it}_{c}(y). \end{equation} Finally, the \emph{kneading sequence}, or \emph{binary itinerary}, of $f_c$ is defined as $$\textup{it}(c) := \textup{it}_{c}(c).$$ Moreover, the \emph{twisted lexicographic order} on $\{0, 1\}^\mathbb{N}$ is defined as follows: $w<_{lex} w'$ iff there is some $i\in\mathbb{N}$ such that $w_j=w'_j$ for all $j<i$, and $(-1)^{\sum_{j<i}w_j}(w_i-w'_i)<0$.
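The twisted lexicographic order admits a direct implementation; the following Python sketch (ours, with hypothetical function names) compares two binary itineraries exactly as in the definition above.

```python
# Sketch (ours) of the twisted lexicographic order on binary sequences:
# w <_lex w' iff, at the first index i where they differ,
# (-1)^{sum_{j<i} w_j} (w_i - w'_i) < 0.

def twisted_less(w, wp):
    """True iff w <_lex w'.  Here w, wp are finite truncations
    (lists over {0,1}) long enough to contain the first difference."""
    sign = 1
    for a, b in zip(w, wp):
        if a != b:
            return sign * (a - b) < 0
        if a == 1:                # each earlier 1 flips the comparison
            sign = -sign
    return False                  # no difference found in the common prefix
```

Note how the order twists: after an even number of initial $1$s the usual order applies, so $01\ldots <_{lex} 10\ldots$, while after an odd number it is reversed, so $11\ldots <_{lex} 10\ldots$.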
A key property of the twisted lexicographic order is the following: \begin{lemma}[Milnor-Thurston \cite{MilnorThurston}] \label{L:twisted} If $c, c'$ are real parameters in the Mandelbrot set, then $c <_{\mathcal{M}} c'$ if and only if $$\textup{it}(c) <_{lex} \textup{it}(c').$$ \end{lemma} \subsection{Recoding} The principal vein $\mathcal{V}_{1/2}$ consists of the real polynomials in $\mathcal{M}$. For parameters $c \in \mathcal{V}_{1/2}$, the Hubbard tree $T_c$ is a real interval whose endpoints are $c$ and $f_c(c)$. This real interval may be thought of as a 2-pronged star emanating from the $\alpha$-fixed point of $f_c$; one prong contains the critical point $0$ in its interior, and so $0$ divides the prong into two topological intervals, which we label $I_0$ and $I_1$. The other prong we label $I_2$. On the other hand, the classical setting of kneading theory uses two intervals, say $J_0$ and $J_1$, separated by the critical point. The relationship is $J_0 = I_0$ and $J_1 = I_1 \cup I_2$. In order to compare the $\{0, 1\}$-coding to the $\{0, 1, 2\}$-coding, we define the following \emph{recoding map}. Let $w\in\{0, 1\}^{\mathbb{N}}$ be the periodic kneading sequence of a critically periodic real quadratic map $f_c$. Whenever there is a maximal block of consecutive $1$s in $w$, we replace it with an alternating block of the form $\dots 21212$. More precisely, we replace any maximal block $1^{2m}$ of consecutive $1$'s of even length by $(12)^m$ and any maximal block $1^{2m+1}$ of odd length by $2 (12)^m$. In formulas: \begin{definition} \label{def:recodingmap} Let $\Sigma_1 \subseteq \{0, 1\}^\mathbb{N}$ be the set of binary sequences $(b_n)$ for which there exists $N$ such that $b_n = 1$ for all $n \geq N$. For any $k \geq 1$, define $$\begin{array}{l} \sigma(0^k) := 0^k \\ \sigma(1^{2k}) := (12)^k\\ \sigma(1^{2k-1}) := 2(12)^{k-1}.
\end{array}$$ We define the \emph{recoding map} $R : \{ 0, 1 \}^\mathbb{N} \setminus \Sigma_1 \to \{0, 1, 2\}^\mathbb{N}$ as follows: if $$\mathbf{a} = 1^{a_0} 0^{a_1} 1^{a_2} \dots 1^{a_{2n}} 0^{a_{2n + 1}}\dots$$ with $a_0 \geq 0$, $a_i \geq 1$ for $i \geq 1$, then $$R(\mathbf{a}) := \sigma(1^{a_0}) \sigma(0^{a_1}) \sigma(1^{a_2}) \dots \sigma(1^{a_{2n}}) \sigma(0^{a_{2n + 1}}) \dots$$ \end{definition} The key property of the recoding map is the following: \begin{lemma} \label{L:recode} The recoding map establishes a bijective correspondence between the set of binary itineraries and the set of simplified itineraries of critically periodic real quadratic polynomials. In particular, for any critically periodic real parameter $c$, we have $$R(\textup{it}(c)) = \widehat{\textup{It}}(c).$$ \end{lemma} \begin{proof} This follows because the dynamics on any $1/2$-vein Hubbard tree described above forces $I_1$ to be sent to $I_2$, while $I_2$ may be sent to either $I_0$ or $I_1$. This also implies that the first $1$ will be replaced with $2$. To prove that $R$ is bijective, note that $R^{-1} : \{0, 1, 2\}^\mathbb{N} \to \{0, 1 \}^\mathbb{N}$ is simply defined by replacing each character $2$ with $1$. \end{proof} \begin{remark} Recoding is not well defined when the real quadratic itinerary ends with $1^\infty$, or, equivalently, when the $\tfrac{p}{q}$-vein itinerary of the critical value hits the $\alpha$-fixed point. However, since we are only focusing on the critically periodic case, this does not happen in the situations we consider. \end{remark} The recoding map can also be defined on finite words $w$ such that $w^\infty$ is some critical itinerary. In the classical real quadratic map context, such words must start with $10$; hence, if $w$ ends with $1$, that $1$ should not be changed into $2$ in the first step. We denote the resulting simplified itinerary by $R(w)$.
In symbols, if $w = a_1 \dots a_p \in \{0, 1\}^p$ is a finite word (with $w \neq 1^p$), then we compute $$R(w^\infty) = b_1 \dots b_n \dots$$ and set $$R(w) := b_1 \dots b_p.$$ More concretely, if $w$ is of the form $w = 1 0^{a_1} 1^{b_1} \dots 0^{a_r} 1^{b_r},$ with $r \geq 1, a_1 \geq 1, b_r \geq 0$, then one obtains $$R(w) = \sigma(1) \sigma(0^{a_1}) \dots \sigma(0^{a_r}) \widehat{\sigma}(1^{b_r}),$$ where we set $$\widehat{\sigma}(1^{2k}) \coloneqq (21)^k, \qquad \widehat{\sigma}(1^{2k+1}) \coloneqq 1 (21)^k,$$ so that $\sigma(1^{b + 1}) = \widehat{\sigma}(1^{b}) \sigma(1)$ for any $b \geq 0.$ It will be useful to note that $R$ respects concatenation, as follows: \begin{lemma} \label{L:R-concat} If $w_0, w_1$ are finite words in the alphabet $\{0, 1\}$ and both start with $10$, then $$R(w_0 \cdot w_1) = R(w_0) \cdot R(w_1).$$ \end{lemma} \begin{proof} Consider a maximal block of consecutive $1$s in $w_0\cdot w_1$; such a block can only arise in three ways: \begin{itemize} \item consecutive $1$s at the end of $w_1$: they are turned into $\dots 2121$ by $R$; \item consecutive $1$s entirely located in $w_0$ or $w_1$, but not at the end: they are turned into $\dots 21212$ by $R$; \item consecutive $1$s at the end of $w_0$, followed by the first $1$ of $w_1$: they are turned into $\dots 21212$ by $R$. In particular, the first $1$ in $w_1$ is turned into a $2$ and the consecutive $1$s at the end of $w_0$ are turned into $\dots 2121$.
\end{itemize} In symbols, if we write $$w_0 = 1 0^{a_1} 1^{b_1} \dots 0^{a_r} 1^{b_r},\qquad w_1 = 1 0^{c_1} 1^{d_1} \dots 0^{c_s} 1^{d_s}$$ with $r, s \geq 1, a_1, c_1 \geq 1, b_r, d_s \geq 0$, then we have $$w_0 \cdot w_1 = 1 0^{a_1} \dots 0^{a_r} 1^{b_r} 1 0^{c_1} \dots 0^{c_s} 1^{d_s}$$ hence by definition of $\widehat{\sigma}$ above \begin{align*} R(w_0 \cdot w_1) & = \sigma(1) \sigma(0^{a_1}) \dots \sigma(0^{a_r}) \sigma(1^{b_r+1}) \sigma(0^{c_1}) \dots \sigma(0^{c_s}) \widehat{\sigma}(1^{d_s}) \\ & = \sigma(1) \sigma(0^{a_1}) \dots \sigma(0^{a_r}) \widehat{\sigma}(1^{b_r}) \sigma(1) \sigma(0^{c_1}) \dots \sigma(0^{c_s}) \widehat{\sigma}(1^{d_s}) = R(w_0) \cdot R(w_1) \end{align*} showing that $R(w_0\cdot w_1)$ is the concatenation of $R(w_0)$ and $R(w_1)$. \end{proof} \subsubsection{The $q$-recoding map} We also consider the \emph{$q$-recoding map} $R_q : \{ 0, 1, 2 \}^\mathbb{N} \to \{ 0, 1, \dots, q \}^\mathbb{N}$ given by substituting each occurrence of $2$ with the word $23\dots q$. The $q$-recoding map turns simplified itineraries into ``full itineraries,'' i.e. it has the property $$R_q(\widehat{\textup{It}}(c')) = \textup{It}(c')$$ for any critically periodic parameter $c'$ on the $\frac{p}{q}$-principal vein. The map $R_q$ is a bijection between the simplified itineraries and the itineraries of critically periodic parameters on the $p/q$-vein. The inverse map $R_q^{-1}: \{ 0, 1, \dots, q \}^\mathbb{N} \to \{ 0, 1, 2 \}^\mathbb{N}$ acts by deleting all characters larger than $2$. Finally, we call the map $$R^{-1} \circ {R_q}^{-1} : \{0, 1, 2, \dots, q \}^\mathbb{N} \to \{0, 1\}^\mathbb{N}$$ the \emph{binary recoding map}.
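The recoding maps are plain string substitutions, so they are easy to sketch in code. The following Python functions (our own sketch; the names are hypothetical) implement $R$ on finite words together with $R_q$ and its inverse, and can be checked against the itineraries $\overline{20121}$ and $\overline{230231230230230}$ appearing in Figure \ref{F:tuned}.

```python
# Sketch (ours) of the recoding maps: R on a finite binary word starting
# with 10 (interior blocks of 1s use sigma, the final block uses
# sigma-hat), R_q (substitute 2 -> 23...q), and its inverse.

def R(w):
    """Recoding map on a finite binary word w (a string starting with '10')."""
    out, i, n = [], 0, len(w)
    while i < n:
        j = i
        while j < n and w[j] == w[i]:
            j += 1                      # [i, j) is a maximal block
        k = j - i
        if w[i] == '0':
            out.append('0' * k)         # sigma(0^k) = 0^k
        elif j < n:                     # interior block of 1s: sigma
            out.append('12' * (k // 2) if k % 2 == 0
                       else '2' + '12' * ((k - 1) // 2))
        else:                           # final block of 1s: sigma-hat
            out.append('21' * (k // 2) if k % 2 == 0
                       else '1' + '21' * (k // 2))
        i = j
    return ''.join(out)

def R_q(w, q):
    """q-recoding: substitute each 2 by the word 23...q."""
    return w.replace('2', ''.join(str(d) for d in range(2, q + 1)))

def R_q_inv(w):
    """Inverse of R_q: delete all characters larger than 2."""
    return ''.join(ch for ch in w if ch in '012')
```

For instance, `R('10111')` produces `'20121'`, while for $q = 3$ the word `'2021202020'` is sent by `R_q` to `'230231230230230'`, and `R_q_inv` recovers it by deleting the letters larger than $2$.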
\medskip Using Lemma \ref{L:recode}, Theorem \ref{T:BD}, and Lemma \ref{L:twisted}, we can now summarize our discussion in the following theorem: \begin{theorem} \label{T:recode-summary} Recoding provides a $1-1$ correspondence between all the following sets: \begin{enumerate} \item The itineraries of critically periodic parameters on the $\tfrac{p}{q}$-principal vein. \item The simplified itineraries of critically periodic parameters on the $\tfrac{p}{q}$-principal vein. \item The binary itineraries of critically periodic real quadratic maps on an interval. \end{enumerate} In greater detail, we have: \begin{itemize} \item[(a)] Let $c$ be a critically periodic, real parameter. Then the recoding map yields $$R(\textup{it}(c)) = \widehat{\textup{It}}(c).$$ \item[(b)] If $c' = \Psi_{p/q}(c)$ is the surgery map, then $$\widehat{\textup{It}}(c) = \widehat{\textup{It}}(c').$$ \item[(c)] The $q$-recoding map satisfies, for any critically periodic parameter $c'$ on the $p/q$-vein, $$R_q(\widehat{\textup{It}}(c')) = \textup{It}(c').$$ \item[(d)] Let $c$, $c'$ be two critically periodic parameters on the $p/q$-vein. Then $c <_{\mathcal{M}} c'$, i.e. $c$ is closer to the main cardioid than $c'$, iff the binary recoding of the critical itinerary of $c$ is smaller than the binary recoding of the critical itinerary of $c'$ under the twisted lexicographic order. \end{itemize} \end{theorem} The discussion is summarized in the diagram: $$\begin{array}{lllll} \{0,1\}^\mathbb{N} & \overset{R}{\longrightarrow} & \{0,1, 2 \}^{\mathbb{N}} & \overset{R_q}{\longrightarrow} & \{0, 1, \dots, q\}^{\mathbb{N}} \\ \textup{it}(c) & & \widehat{\textup{It}}(c) = \widehat{\textup{It}}(c') & & \textup{It}(c') \end{array}$$ where $c' = \Psi_{p/q}(c)$. \section{Renormalization} \label{S:renorm} One notes that the teapot behaves nicely under taking roots. This is closely related to renormalization (and its inverse, \emph{tuning}) in the quadratic family. 
Recall that a \emph{polynomial-like} map is a proper holomorphic map $f : U \to V$, where $U$ and $V$ are open, simply connected subsets of $\mathbb{C}$ with $\overline{U} \subset V$. A quadratic polynomial $f$ whose Julia set is connected is \emph{$n$-renormalizable}, for $n \geq 2$, if there exists a neighborhood $U$ of the critical value such that $f^n:U \to f^n(U)$ is polynomial-like. A \emph{tuning map} of period $n \geq 2$ is a continuous injection $\tau:\mathcal{M} \to \mathcal{M}$ such that for every $c \in \mathcal{M}$, the map $f_{\tau(c)}$ is $n$-renormalizable and the corresponding polynomial-like map is \emph{hybrid equivalent} to $f_c$ (i.e. is conjugate via a quasiconformal map to $f_c$ restricted to a suitable domain) \cite{douady1984etude}. Let $f_{c_2}$ be a critically periodic quadratic polynomial and let $C$ be the hyperbolic component to which $c_2$ belongs. Then there exists a tuning map $\tau_{c_2}$ that sends the main cardioid to $C$. For any parameter $c_1 \in \mathcal{M}$, the parameter $\tau_{c_2}(c_1)$ is called the \emph{tuning} of $c_1$ by $c_2$, and the Julia set of $f_{\tau_{c_2}(c_1)}$ is obtained by inserting copies of the Julia set of $f_{c_1}$ at the locations of the critical orbit of $f_{c_2}$. For details, see e.g. \cite{McMullenRenormalization}. We now show that, as in the classical kneading theory for real maps, the principal vein kneading polynomial behaves well under tuning. \begin{lemma} \label{L:tuned} Let $c_1$ be a critically periodic, real quadratic parameter, and let $c_2$ be a critically periodic parameter in the $\tfrac{p}{q}$-principal vein. Then the parameter $c = \tau_{c_2}(c_1)$ that is the tuning of $c_1$ by $c_2$ belongs to the $\tfrac{p}{q}$-principal vein and has $q$-principal vein kneading polynomial $$P_{c}(t) = P_{c_2}(t)\cdot \frac{P_{c_1}(t^\ell)}{1+t^\ell},$$ where $\ell$ is the period of $f_{c_2}$.
\end{lemma} \begin{proof} Recall that, if we let $w = (w_1, \dots, w_{n})$ be the simplified itinerary of a critically periodic $f_c$, and the piecewise linear model of $f_c$ is $$F_{w_j}(x) = A_j x + B_j = \epsilon_j t^{d_j} \cdot x + t^{d_j} + 1, \qquad \epsilon_j \in \{\pm 1\}, d_j \geq 1,$$ then we have the formula $$P_c(t) = \sum_{k = 0}^{n-1} B_k \prod_{j = k+1}^{n-1} A_j = \sum_{k = 0}^{n-1} (1 + t^{d_k}) \prod_{j = k+1}^{n-1} \epsilon_{j} t^{d_j},$$ where we set $w_0 = w_n$. Now, let $(w_1, \dots, w_{q})$ be the simplified itinerary of $c_2$, and let $(v_1, \dots, v_{p})$ be the simplified itinerary of $c_1$. Then, by looking at the combinatorics of tuning (see e.g. Figure \ref{F:tuned}), the simplified itinerary of $c$ is $$w_1 w_2 \dots w_{q-1} \widehat{v_1} w_1 w_2 \dots w_{q-1} \dots \widehat{v_{p-1}} w_1 w_2 \dots w_{q-1}\widehat{v_p},$$ where $\widehat{0} = 0$, $\widehat{1} = \widehat{2} = 1$ if $\epsilon_0 = +1$, and $\widehat{0} = 1$, $\widehat{1} = \widehat{2} = 0$ if $\epsilon_0 = - 1$. Moreover, denote the local models of $f_{c_1}$ as $$F_{v_j, 2, t}(x) = \eta_j t \cdot x + t+ 1, \qquad \eta_j \in \{\pm 1\}.$$ Now, note that, since $w_0$ is either $0$ or $1$, we have $$B_{\widehat{v_i}} = B_{w_0}$$ for any $i$. Hence, we compute \begin{align*} P_c(t) & = \sum_{j = 0}^{p-1} \sum_{i = 0}^{q-1} B_{w_i} A_{w_{i+1}} \dots A_{w_{q-1}} \left( \prod_{h = j +1}^{p-1} A_{\widehat{v_h}} A_{w_1} \dots A_{w_{q-1}} \right) \\ & = \left( \sum_{i = 0}^{q-1} B_{w_i} A_{w_{i+1}} \dots A_{w_{q-1}}\right) \cdot \sum_{j = 0}^{p-1} \left( \prod_{h = j+1 }^{p-1} A_{\widehat{v_h}} A_{w_1} \dots A_{w_{q-1}} \right) \\ & = P_{c_2}(t) \cdot \sum_{j = 0}^{p-1} \left( \prod_{h = j +1}^{p-1} A_{\widehat{v_h}} A_{w_1} \dots A_{w_{q-1}} \right).
\end{align*} Note now that $$A_{\widehat{v_h}} = \epsilon_q \eta_h t, \qquad A_{w_i} = \epsilon_i t^{d_i}$$ hence, since $\epsilon_1 \dots \epsilon_{q-1} \epsilon_q = 1$, we have for any $h$ $$A_{\widehat{v_h}} A_{w_1} \dots A_{w_{q-1}} = \eta_h t^\ell$$ where $\ell$ is the period of the large scale dynamics, i.e. $f_{c_2}$. Hence \begin{align*} \sum_{j = 0}^{p-1} \left( \prod_{h = j +1}^{p-1} A_{\widehat{v_h}} A_{w_1} \dots A_{w_{q-1}} \right) & = \sum_{j = 0}^{p-1} \eta_{j+1} \dots \eta_{p-1} t^{\ell (p-1-j)} \\ & = \frac{P_{c_1}(t^\ell)}{1 + t^\ell} \end{align*} which proves the formula. \end{proof} \begin{corollary} \label{C:tuned} Given $c_1, c_2$ as above and $c = \tau_{c_2}(c_1)$, we have $$h(c) = \max \left \{ h(c_2), \frac{1}{\ell} h(c_1) \right \}.$$ Moreover, by the same proof as in \cite[Theorem 6]{Douady-entropy}, if $h(c_2) > 0$ we have $h(c) = h(c_2)$. \end{corollary} \begin{figure} \includegraphics[width=0.7 \textwidth]{complex-5.png} \caption{The itinerary for the critical orbit of the complex map $f_{c'}$ in the $\frac{1}{3}$-principal vein given by tuning the map of Figure \ref{F:real5} by the rabbit. With respect to the highlighted partition, the itinerary of the critical value is $\textup{It}(c') = \overline{\texttt{230231230230230}}$. The first return itinerary is $\widehat{\textup{It}}(c') = \overline{\texttt{2021202020}}$. 
The first return itinerary of the critical value for the map $f_c$ of Figure \ref{F:real5} is $\widehat{\textup{It}}(c) = \overline{\texttt{20121}}$; note that $\widehat{\textup{It}}(c')$ can be obtained by applying to $\widehat{\textup{It}}(c)$ the substitution $\texttt{0} \to \texttt{21}, \texttt{1} \to \texttt{20}, \texttt{2} \to \texttt{20}$.} \label{F:tuned} \end{figure} \subsection{Characterization of minimal parameters} To distinguish between parameters on a given vein with the same entropy, we give the following \begin{definition}[minimal parameter] \label{d:minimalparam} We say that a parameter $c$ on the $\tfrac{p}{q}$-principal vein is \emph{minimal} if there is no parameter $c'$ with the same core entropy as $c$ and with $c' <_\mathcal{M} c$. \end{definition} \begin{lemma} \label{L:semic-nr} Let $f : T \to T$ be the restriction of a non-renormalizable, quadratic polynomial to its Hubbard tree. Then $f$ is semiconjugate to a piecewise linear tree map $g : T' \to T'$ with constant slope (i.e. expansion factor) $s = e^{h_{top}(f)}$, where $T'$ is homeomorphic to $T$ and $g$ has the same kneading sequence as $f$. \end{lemma} \begin{proof} By \cite[Theorem 4.3]{BaillifCarvalho} there exists a weakly monotone semiconjugacy $p : T \to T'$ to a piecewise linear tree map $g : T' \to T'$ with constant slope $s = e^{h_{top}(f)}$. Let $c$ be the critical point of $f$, and $c' = p(c)$ the critical point of $g$. Set also $J = p^{-1}(\{ c' \})$. Since the semiconjugacy is weakly monotone, $J$ is a closed interval. If there exists $n$ such that $f^n(c)$ belongs to $J \setminus \{c \}$, then $f$ is renormalizable. Indeed, let $n$ be the smallest such number. Since $f^n(c) \in J$, we have $$g^n(c') = g^n(p(c)) = p(f^n(c)) = c'.$$ Moreover, for any $x \in J$ we have $$p(f^n(x))=g^n(p(x)) = g^n(c') = c',$$ hence $f^n(J) \subseteq J$.
Hence, if we set $J_i := p^{-1}(g^i(c'))$ for $i = 0, 1, \dots, n-1$, we have that the $J_i$ are disjoint, $f(J_i) \subseteq J_{i+1}$, and $f(J_{n-1}) \subseteq J$. Moreover, $J$ contains the critical point of $f$, hence $f$ is renormalizable, contradicting our assumption. Hence, $f^n(c)$ does not belong to $J \setminus \{c \}$ for any $n$, which implies that the kneading sequences of $f$ and $g$ are the same. \end{proof} \begin{lemma} \label{L:strict-mono} Let $c_1, c_2$ be non-renormalizable, PCF quadratic polynomials on the $\tfrac{p}{q}$-principal vein, with $c_1 <_\mathcal{M} c_2$. Then $h(f_{c_1}) < h(f_{c_2})$. \end{lemma} \begin{proof} By monotonicity of core entropy (\cite{TaoLi}, \cite{Zeng-landing}), $h(f_{c_1}) \leq h(f_{c_2})$. Let $g_1, g_2$ be the piecewise linear tree maps given by Lemma \ref{L:semic-nr}. If $h(f_{c_1}) = h(f_{c_2})$, then $h(g_1) = h(g_2)$. Therefore, since the growth rate of a piecewise linear map equals its slope and the underlying trees are homeomorphic, we have also $g_1 = g_2$, which implies that the kneading sequence of $g_1$ is the same as the kneading sequence of $g_2$; hence, by Lemma \ref{L:semic-nr}, $f_{c_1}$ and $f_{c_2}$ also have the same kneading sequence. This forces $c_1 = c_2$, contradicting $c_1 <_\mathcal{M} c_2$. \end{proof} A \emph{small Mandelbrot set} is the image of $\mathcal{M}$ under a tuning map $\tau$; the root of such a small Mandelbrot set is the root of the hyperbolic component onto which $\tau$ maps the main cardioid. \begin{lemma} \label{L:plateau} A parameter $c \in \mathcal{V}_{p/q}$ is minimal if and only if it does not lie in a small Mandelbrot set whose root has positive core entropy. \end{lemma} \begin{proof} By Corollary \ref{C:tuned}, core entropy is constant on small Mandelbrot sets whose roots have positive entropy. On the other hand, suppose that $c$ does not lie in any small Mandelbrot set, i.e. it is non-renormalizable. Then there exists $c' <_\mathcal{M} c$ arbitrarily close to $c$ which is also non-renormalizable.
By Lemma \ref{L:strict-mono}, $h(f_{c'}) < h(f_{c})$, so entropy is not constant in any neighborhood of $c$. If $c$ is renormalizable, there exists $n \geq 0$ such that $c = \tau_{p/q} \circ \tau_{1/2}^n(c')$, where $\tau_{p/q}$ is the tuning operator by the rabbit in the $p/q$-limb and $c'$ is non-renormalizable; then $h(f_{c'}) = 2^n q \cdot h(f_c)$, and we can apply the previous argument to $c'$. \end{proof} \begin{lemma} \label{l:closestrepresentative} Fix integers $0<p<q$ coprime. Let $\lambda$ be the growth rate of a parameter in $\mathcal{V}_{p/q}$. Of all the critically periodic parameters in $\mathcal{V}_{p/q}^{per}$ with growth rate $\lambda$, let $c(\lambda)$ be the one that is closest to the main cardioid along the vein. Then $\mathcal{Z}(\lambda)$ coincides with the set of all eigenvalues of $M_{c(\lambda)}$. \end{lemma} \begin{proof} By \cite{TaoLi}, \cite{Zeng-landing}, core entropy is weakly increasing along veins, and by Lemma \ref{L:plateau} it is constant precisely on small Mandelbrot sets with roots of positive entropy. Thus, the set of parameters along a vein with given growth rate $\lambda$ is the closure of a small Mandelbrot set, and $c(\lambda)$ is the root of this small Mandelbrot set. By Lemma \ref{L:tuned}, for all parameters $c'$ in the small Mandelbrot set with root $c(\lambda)$, the polynomial $P_{c'}$ is a multiple of $P_{c(\lambda)}$, hence the eigenvalues of $M_{c'}$ contain the eigenvalues of $M_{c(\lambda)}$. Thus, the intersection $\mathcal{Z}(\lambda)$ of these sets of eigenvalues equals the set of eigenvalues of $M_{c(\lambda)}$.
\end{proof} \subsection{The teapot is closed under taking roots} We can now prove Theorem \ref{T:q-root}, reproduced here: \begin{renormalizationtheorem} If $(z,\lambda) \in \Upsilon_{1/2}$ with $|z| \neq 1$, then for any $q$, if $w^q = z$ then the point $(w,+\sqrt[q]{\lambda})$ belongs to $\Upsilon_{p/q}.$ \end{renormalizationtheorem} \begin{proof} Let $c$ be a critically periodic, real parameter, with $h(f_c) = \log \lambda$, and consider a fixed $z \in \mathbb{C}$ such that the $2$-principal vein kneading polynomial satisfies $P_c(z) = 0$. Now, let $c_2$ be the root of the $p/q$-principal vein, which has $q$-principal vein kneading polynomial $P_{c_2}(t) = 1 - t^q$. Then, by Lemma \ref{L:tuned}, the parameter $c' = \tau_{c_2}(c)$ lies on the $p/q$-vein and its kneading polynomial satisfies \begin{equation} \label{E:q-root} P_{c'}(t) = (1 - t^q) \frac{P_c(t^q)}{1+ t^q}. \end{equation} Hence, the core entropy satisfies $h(f_{c'}) = \frac{1}{q} \log \lambda$, and moreover, if $w^q = z$, by substituting $t = w$ into Eq. \eqref{E:q-root} we obtain $P_{c'}(w) = 0$. It remains to show that $c'$ is minimal. Since $\sqrt[q]{\lambda} \leq \sqrt[q]{2}$, every critically periodic map in $\mathcal{V}^{per}_{p/q}$ with growth rate $\leq \sqrt[q]{\lambda}$ is of the form $\tau_{c_2}(c)$, where $c$ is a real critically periodic map. Since $\tau_{c_2}$ respects the ordering and scales the entropy by $\frac{1}{q}$, it maps minimal parameters to minimal parameters. This shows that $(w, \sqrt[q]{\lambda}) \in \Upsilon_{p/q}$. The claim follows by taking the closure. \end{proof} \section{Persistence for Thurston teapots for principal veins} \label{s:persistence} The goal of this section is to prove the Persistence Theorem (Theorem \ref{t:persistence}) for teapots associated to principal veins.
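As a numerical sanity check of Eq. \eqref{E:q-root} (our own sketch, not part of the proof), one can evaluate both sides at a sample point using the itineraries of Figure \ref{F:tuned}: tuning the real parameter with $\widehat{\textup{It}}(c) = \overline{\texttt{20121}}$ by the root of the $1/3$-principal vein gives $\widehat{\textup{It}}(c') = \overline{\texttt{2021202020}}$, and the corresponding kneading polynomials satisfy the stated relation with $q = \ell = 3$.

```python
# Numerical check (ours) of Eq. (E:q-root) with q = 3, using the
# simplified itineraries 20121 and 2021202020 from Figure F:tuned.

def F(j, q, z, x):
    """Affine models F_{j,q,z} of Definition (d:Fmaps)."""
    if j == 0:
        return z * x + z + 1
    if j == 1:
        return -z * x + z + 1
    return -z ** (q - 1) * x + z ** (q - 1) + 1

def P(w, q, z):
    """Kneading polynomial from one period w = w_1...w_p (a digit string)."""
    x = 1 + z
    for j in w[:-1]:                 # apply F_{w_1}, ..., F_{w_{p-1}}
        x = F(int(j), q, z, x)
    return x

z = 0.4 + 0.3j                       # sample point inside the unit disk
lhs = P('2021202020', 3, z)          # kneading polynomial of the tuned map
rhs = (1 - z ** 3) * P('20121', 2, z ** 3) / (1 + z ** 3)
assert abs(lhs - rhs) < 1e-9         # both sides of Eq. (E:q-root) agree
```

In closed form, the real itinerary $\overline{\texttt{20121}}$ yields $P_c(t) = 1 + 2t^3 - t^5$, and the tuned polynomial is $P_{c'}(t) = 1 - 2t^3 + 2t^6 - 2t^{12} + t^{15}$.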
\subsection{Itineraries and roots of the kneading polynomial} The $q$-principal vein kneading polynomial can be generalized to arbitrary words in the alphabet $\{0,1,2\}$; we call this generalization the \emph{finite word $q$-kneading polynomial}. \begin{definition}[finite word $q$-kneading polynomial] Let $w$ be a finite word in $\{0, 1, 2\}^n$ and let $q$ be a natural number. Then the \emph{finite word $q$-kneading polynomial} $P^q_w$ is defined as \[ P^q_w(z) \coloneqq F_{w_{n-1},q,z} \circ \ldots \circ F_{w_1,q,z}(1+z). \] \end{definition} \noindent When $q$ is clear from the context or the result does not depend on $q$, we will sometimes write $P_w$ instead of $P^q_w$. \medskip The following facts are immediate from the definition (see also Lemma \ref{L:periodicP} and its proof). \begin{lemma}\label{lem:bounded} The coefficients of $P^q_w$ are uniformly bounded, where the bound depends only on $q$. Moreover, for any $n$, the polynomial $P^q_{w^n}$ equals $P^q_w$ multiplied by a cyclotomic polynomial, hence it has the same roots other than those on the unit circle. \end{lemma} Given any polynomial $P(z)=\sum_{i=0}^d a_i z^i$ with $a_d \neq 0$, we let the \emph{reciprocal} of $P$ be \[r(P)(y) := \frac{y^d}{a_d} P(1/y).\] Then, from the construction of $P_w$, we have \begin{lemma} Let $w, w'$ be two finite words in the alphabet $\{0, 1, 2\}$. \begin{enumerate} \item If $w$ and $w'$ share a large common suffix, then the lower-degree terms of $P_w$ and $P_{w'}$ are identical; \item If $w$ and $w'$ share a large common prefix, then the lower-degree terms of $r(P_w)$ and $r(P_{w'})$ are identical. \end{enumerate} \end{lemma} Combining this with Rouch\'e's theorem we have: \begin{lemma}\label{lem:same_prefix_suffix} Let $w, w'$ be two finite words in the alphabet $\{0, 1, 2\}$. 
\begin{enumerate} \item If $w$ and $w'$ share a large common suffix, then the roots of $P_w$ and $P_{w'}$ within the unit circle are close to one another; \item If $w$ and $w'$ share a large common prefix, then the roots of $P_w$ and $P_{w'}$ outside the unit circle are close to one another. \end{enumerate} \end{lemma} Combining Lemma~\ref{lem:same_prefix_suffix} and Lemma~\ref{lem:bounded} we have \begin{proposition}\label{prop:lim} Let $w_1$, $w_2$ and $w'$ be finite words in $\{0, 1, 2\}$, and let $\lambda_i$ denote the leading positive real root of $P_{w_i}$. Suppose that $w = w_1^N w' w_2^N$ for some $N \gg 1$, and denote by $\lambda$ the leading real root of $P_{w}$. Then: \begin{itemize} \item As $N\rightarrow\infty$, $\lambda$ converges to $\lambda_1$. \item As $N\rightarrow\infty$, the roots of $P_{w}$ within the unit circle converge to roots of $P_{w_2}$. \end{itemize} \end{proposition} \subsection{Persistence} First, we review some results on unimodal maps from \cite{BrayDavisLindseyWu}. \begin{definition} \label{d:unimodaldefs} Let $w$ be a finite word in the alphabet $\{0, 1\}$. Then: \begin{itemize} \item $w$ is said to be \emph{irreducible} if it cannot be written as a concatenation of more than one copy of another word; \item $w$ is said to have \emph{positive cumulative sign} if it contains an even number of $1$s; \item $w$ is said to be \emph{admissible} if, for any decomposition $w=ab$, we have $$ba \leq_{lex} ab;$$ \item the \emph{Parry polynomial} of $w$ is $$P^{Parry}_w(z) := f_{w_n} \circ \dots \circ f_{w_1} \circ f_{w_0}(1) - 1,$$ where $f_0(x)=zx$ and $f_1(x)=2-zx$; \item $D(w)$ is defined as the word obtained from $w$ by replacing each $1$ with $10$ and each $0$ with $11$. \end{itemize} \end{definition} The Milnor-Thurston kneading theory \cite[Theorem 12.1]{MilnorThurston} implies that if $w$ is admissible, then $w^\infty$ is the critical itinerary of some real critically periodic quadratic map. 
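The Parry polynomial in the definition above is straightforward to compute symbolically. The sketch below (in Python with SymPy; the word \texttt{100} is purely an illustrative choice, not an example from \cite{BrayDavisLindseyWu}) also checks that $z = 1$ is a root of every Parry polynomial, consistent with the factorization $P^{Parry}_w(z) = (z-1)g(z)$ used in Proposition~\ref{P:minimal} below:

```python
import sympy as sp

z = sp.symbols('z')

def parry_polynomial(w):
    # P^Parry_w(z) = f_{w_n} o ... o f_{w_1} o f_{w_0}(1) - 1,
    # with f_0(x) = z*x and f_1(x) = 2 - z*x
    x = sp.Integer(1)
    for letter in w:               # f_{w_0} is applied first
        x = z * x if letter == '0' else 2 - z * x
    return sp.expand(x - 1)

# Illustrative word (hypothetical example, not taken from the paper):
P = parry_polynomial("100")        # -> -z**3 + 2*z**2 - 1
assert P == -z**3 + 2*z**2 - 1
assert P.subs(z, 1) == 0           # (z - 1) divides every Parry polynomial

# its leading real root is the golden ratio (1 + sqrt(5)) / 2
lam = max(sp.real_roots(P), key=lambda r: float(r))
assert abs(float(lam) - (1 + 5 ** 0.5) / 2) < 1e-9
```

Here $-z^3 + 2z^2 - 1 = -(z-1)(z^2 - z - 1)$, so the leading real root is the golden ratio and the corresponding growth rate is its logarithm.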
\begin{definition}[minimal binary word] Let $w$ be a finite word in the alphabet $\{0,1\}$. If $w$ is irreducible and $w^{\infty}$ is the binary itinerary of a minimal real parameter (Definition \ref{d:minimalparam}), we call $w$ \emph{minimal}. \end{definition} The following characterizations of minimality are well-known (see e.g. \cite{BruinBrucks}): \begin{lemma} \label{L:minimal} The following are equivalent for an irreducible admissible word $w\in \{0, 1\}^n$: \begin{enumerate} \item $w$ is minimal; \item if there is another irreducible admissible word $w'$ such that $(w')^\infty$ is the itinerary of some unimodal map with the same entropy, then $w^\infty\leq_{lex} (w')^\infty$; \item $w$ is the binary itinerary of some piecewise linear map with a single critical point and slope of constant absolute value; \item if $w^\infty$ is the binary itinerary of some quadratic map with entropy less than $\log(\sqrt{2})$, then $w$ is minimal if and only if $w=D(w')$, where $w'$ is minimal. (Here $D$ is the map defined in Definition \ref{d:unimodaldefs}.) \end{enumerate} \end{lemma} We use the following key fact from \cite{BrayDavisLindseyWu}. \begin{proposition}[\cite{BrayDavisLindseyWu}, Proposition 2.10] \label{P:minimal} Let $w$ be an irreducible admissible word of positive cumulative sign, suppose that the Parry polynomial $P^{Parry}_w(z)$ has leading real root $\lambda$ larger than $\sqrt{2}$, and $P^{Parry}_w(z) = (z-1)g(z)$, where $g(z)$ is irreducible. Then $w$ is minimal. \end{proposition} Combining the proposition above with Lemma 3.8, Proposition 4.4, and Proposition 5.3 of \cite{BrayDavisLindseyWu}, we get: \begin{theorem} \label{T:concat-real} Let $(w_0)^\infty$ and $(w_1)^\infty$ be two binary itineraries of real unimodal maps with periodic critical orbit, with $(w_0)^\infty <_{lex} (w_1)^\infty$, and suppose $(w_1)^\infty$ is minimal. 
Then for any $N>0$, there is some word $w$ such that $(w_1^N w w_0^N)^\infty$ is the binary itinerary of the critical value of some real quadratic map, and $w_1^N w w_0^N$ is a minimal binary word. \end{theorem} \begin{proof} First, assume the entropy corresponding to $(w_1)^\infty$ is greater than $\log(\sqrt{2})$. Then, by \cite[Lemma 5.3]{BrayDavisLindseyWu}, we can find a word $w_1^N w$ which is ``dominant'', and some $m>N$ such that $w'=w_1^N w w_0^m$ is admissible with positive cumulative sign, and the Parry polynomial has the form $P^{Parry}_{w'}(z)=(z-1)g(z)$, where $g(z)$ is irreducible. Now Proposition~\ref{P:minimal} implies the conclusion. If the entropy is less than $\log(\sqrt{2})$, by Lemma~\ref{L:minimal} (4) we can write $(w_1)^\infty = (D^k(w_2))^\infty$ for some $k \geq 1$ and some word $w_2$ such that $(w_2)^\infty$ has entropy greater than $\log(\sqrt{2})$, and apply the previous argument to $(w_2)^\infty$. \end{proof} Next, we push Theorem \ref{T:concat-real} through the surgery map (making use of Lemma~\ref{L:R-concat}) to obtain the following theorem: \begin{theorem}\label{main_comb} Fix any principal vein $\mathcal{V}_{p/q}$. Let $(w_0)^\infty$ and $(w_1)^\infty$ be the simplified itineraries of two critically periodic parameters $c_0 <_\mathcal{M} c_1$ in $\mathcal{V}_{p/q}$. Then, given any $N>0$, there is a critically periodic parameter $c_2$ such that the simplified itinerary of $c_2$ starts with $N$ copies of $w_1$ and ends with $N$ copies of $w_0$, and $c_2$ is the smallest parameter on the vein with the given core entropy. \end{theorem} \begin{proof} Set $(w_0)^{\infty} = \widehat{\textrm{It}}(c_0)$ and $(w_1)^{\infty} = \widehat{\textrm{It}}(c_1)$ for $c_0,c_1 \in \mathcal{V}_{p/q}^{per}$. Since the surgery map $\Psi_{p/q}: \mathcal{V}_{1/2} \to \mathcal{V}_{p/q}$ is surjective and sends critically periodic parameters to critically periodic parameters, we may fix parameters $b_0, b_1 \in \mathcal{V}_{1/2}^{per}$ such that $\Psi_{p/q}(b_i) = c_i$ for $i=0,1$. 
Let $(u_0)^\infty, (u_1)^\infty$ be the binary itineraries of $b_0, b_1$, respectively. By Theorem \ref{T:recode-summary}(c), the inequality $c_0 <_\mathcal{M} c_1$ implies $(u_0)^\infty <_{lex} (u_1)^\infty$. Then by Theorem \ref{T:concat-real} there exists a word $u$ in the alphabet $\{0, 1\}$ such that $(u_1^N u u_0^N)^\infty$ is the binary itinerary of a critically periodic real unimodal map $f_{b_2}$. Then, by Lemma \ref{L:R-concat}, $$R(u_1^N u u_0^N) = R(u_1)^N R(u) R(u_0)^N = w_1^N w w_0^N$$ is also the first return itinerary of a parameter $c_2$ in $\mathcal{V}_{p/q}^{per}$. Now, to prove that $c_2$ is minimal, recall that by Lemma \ref{L:plateau} minimal parameters are precisely the ones which do not lie in the interior of any small Mandelbrot set with root of positive entropy. Since the surgery map commutes with renormalization and preserves the set of parameters with zero entropy, $c_2 = \Psi_{p/q}(b_2)$ does not lie in the interior of a small Mandelbrot set with root of positive entropy, hence it is minimal. \end{proof} Now the Persistence Theorem follows from Theorem~\ref{main_comb} and Proposition~\ref{prop:lim}. More precisely, we restate and prove the Persistence Theorem as follows. \begin{theorem} \label{t:persistencepf} Fix coprime integers $0< p < q$. If $(z, y)\in \Upsilon_{p/q}$ with $|z|<1$, then for any $y'$ satisfying $y < y' < \lambda_{p/q}$, where $\lambda_{p/q}$ is the supremum of the growth rates achieved by parameters in $\mathcal{V}_{p/q}^{per}$, we have $(z, y')\in \Upsilon_{p/q}$. \end{theorem} \begin{proof} Let $w_1$ and $w_2$ be two periodic first return map itineraries corresponding to points in $\mathcal{V}_{p/q}^{per}$ whose growth rates are close to $y$ and $y'$, respectively, and such that the kneading polynomial of $w_1$ has a root close to $z$. By Theorem~\ref{main_comb}, we can find another point $c\in \mathcal{V}_{p/q}^{per}$ whose first return map itinerary $w$ is of the form $w_2^n w' w_1^n$, where $n$ can be arbitrarily large. 
By Proposition \ref{prop:lim}, as $n$ goes to infinity, the roots of the $q$-principal vein kneading polynomial $P_w(z)$ inside the unit circle get arbitrarily close to the roots of the kneading determinant for $w_1^\infty$, while the roots outside the unit circle get arbitrarily close to the roots of the kneading determinant for $w_2^\infty$. Hence the growth rate of $c$ is close to $y'$ while $P_w$ has a root close to $z$, which implies persistence. \end{proof} As a corollary of Theorem \ref{T:q-root} and Theorem \ref{t:persistencepf} we get: \begin{cylindercorollary} The teapot $\Upsilon_{p/q}$ contains the unit cylinder $$[1, \lambda_q] \times S^1 = \{ (\lambda, z) \ : \ 1 \leq \lambda \leq \lambda_q, |z| = 1 \}.$$ \end{cylindercorollary} \begin{proof} Let $(\lambda, z) \in \Upsilon_{1/2}$ with $|z| \neq 1$. Then by applying Theorem \ref{T:q-root} $n$ times with $p/q = 1/2$, the set $$\{ ( \sqrt[2^n]{\lambda}, w) \ : \ w^{2^n} = z \}$$ belongs to $\Upsilon_{1/2}$. Hence, by applying Theorem \ref{T:q-root} once more, the set $$S_n := \{ ( \sqrt[2^n q]{\lambda}, w) \ : \ w^{2^n q} = z \}$$ belongs to $\Upsilon_{p/q}$. Since the sets $S_n$ accumulate onto $\{ 1 \} \times S^1$ and $\Upsilon_{p/q}$ is closed, $\{ 1 \} \times S^1$ is contained in $\Upsilon_{p/q}$. Then by persistence (Theorem \ref{t:persistencepf}), the whole set $[1, \lambda_q] \times S^1$ is contained in $\Upsilon_{p/q}$. \end{proof} \section{Combinatorial veins and the Thurston set} \label{ss:combinatorialveins} The \emph{$\frac{p}{q}$-principal combinatorial vein}, which we denote $\Theta_{p/q}$, consists of the closure of the set of all angles $\theta \in \mathbb{Q}/\mathbb{Z}$ such that the external parameter ray $R_{\mathcal{M}}(\theta)$ lands on a point in $\mathcal{V}_{p/q}$. Recall that $\Theta_{p/q}^{per}$ is the set of all angles $\theta \in \Theta_{p/q}$ such that $\theta$ is strictly periodic under the doubling map, and $\Theta_{p/q}$ is the closure of $\Theta_{p/q}^{per}$. We define an equivalence relation $\sim$ on $\Theta_{p/q}$ as follows. 
We say $\theta_1 \sim \theta_2$ if and only if both angles are rational and there is a chain of adjacent hyperbolic components such that the external ray for $\theta_1$ lands on the first one and the external ray for $\theta_2$ lands on the last one. Set $$\mathcal{V}_{p/q}^{comb} \coloneqq \Theta_{p/q}/\sim.$$ We use $[\theta]$ to denote the $\sim$-equivalence class of $\theta \in \Theta_{p/q}$. Let $\theta_{p/q}$ be the angle of the external ray landing at the tip of the $\frac{p}{q}$-principal vein. Now, the set $\Theta_{p/q} \cap [0, \theta_{p/q}]$ is a closed subset of an interval, hence its complement in $[0, \theta_{p/q}]$ is a countable union of open intervals. Endpoints of such open intervals correspond to pairs of rays landing on the same component, hence they are identified under $\sim$. Thus, the quotient space $\mathcal{V}_{p/q}^{comb}$ is homeomorphic to an interval. We define the \emph{combinatorial $\frac{p}{q}$-Master Teapot} to be the set $$ \Upsilon_{p/q}^{comb} \coloneqq \overline{ \left \{(z,\eta) \in \mathbb{C} \times \mathcal{V}_{p/q}^{comb} \mid \textrm{there exists } \theta \in \Theta_{p/q}^{per} \textrm{ s.t. } \textrm{det}(M_{\theta} - zI) = 0, \eta = [\theta] \right \} }. $$ \begin{remark} Multiple different critically periodic parameters in a principal vein can have the same core entropy while having different characteristic polynomials $\chi(t) = \textrm{det}(M_c -t I)$. The vertical coordinate of the Master Teapot $\Upsilon_{p/q}$ identifies all critically periodic parameters that have the same core entropy, and plots only the roots that are shared by all the different characteristic polynomials $\textrm{det}(M_c -t I)$ associated to that growth rate. The vertical coordinate of the combinatorial Master Teapot $\Upsilon_{p/q}^{comb}$ distinguishes between different parameters that have the same growth rate. \end{remark} \begin{lemma} \label{L:same-eigen} Fix integers $0<p<q$ coprime. 
For angles $\theta_1,\theta_2 \in \Theta^{per}_{p/q}$, if $[\theta_1] = [\theta_2]$ in $\mathcal{V}^{comb}_{p/q}$, then $M_{\theta_1}$ and $M_{\theta_2}$ have the same eigenvalues, except possibly on the unit circle. \end{lemma} \begin{proof} Suppose first that $\theta_1$ and $\theta_2$ land on roots of adjacent hyperbolic components on the $p/q$-principal vein, and suppose by symmetry that $\theta_1$ lands closer to the main cardioid than $\theta_2$. Let $c_1, c_2$ be the corresponding critically periodic parameters. Then $c_2$ is the tuning of $c_1$ by the basilica. Hence, by Lemma \ref{L:tuned}, since the basilica has kneading polynomial $P_{c}(t) = 1 - t^2$, we have $$P_{c_2}(t) = P_{c_1}(t) \frac{1 - t^{2\ell}}{1 + t^\ell} = P_{c_1}(t) ( 1 - t^\ell),$$ where $\ell$ is the period of $c_1$. Therefore $P_{c_1}$ and $P_{c_2}$ have the same roots except possibly on the unit circle. By Theorem \ref{T:equalpolys}, these roots are the same as the eigenvalues of $M_{\theta_1}$, $M_{\theta_2}$. The general case follows by iterating this argument along the chain of adjacent components. \end{proof} As a consequence of Theorem \ref{t:continuousdiskextension}, we obtain that the part of a combinatorial Master Teapot outside the unit cylinder is connected: \begin{proposition} \label{P:connected} For any $(p, q)$ coprime, the set $$\Upsilon_{p/q}^{comb,+} := \Upsilon_{p/q}^{comb} \cap \{ (z, \eta) \in \mathbb{C} \times \mathcal{V}_{p/q}^{comb} : \ |z| \geq 1\}$$ is path connected. \end{proposition} \begin{proof} Consider any point $(z_*, \eta_*) \in \Upsilon_{p/q}^{comb,+}$. Let $\eta_0 \in \mathcal{V}_{p/q}^{comb}$ denote the $\sim$-equivalence class of the angle in $\mathbb{R}/\mathbb{Z}$ of the external ray that lands at the root (in the main cardioid) of $\mathcal{V}_{p/q}$. 
By Theorem \ref{t:continuousdiskextension}, the map $Z^+ : \Theta_{p/q} \to Com^+(\mathbb{C})$ is continuous, and by Lemma \ref{L:same-eigen} the map $Z^+$ is constant on equivalence classes of $\sim$, so the map $Z^+$ factors to a continuous map $$ \overline{Z^+}: \mathcal{V}_{p/q}^{comb} \to Com^+(\mathbb{C}).$$ As a consequence, there exists a continuous path in $\Upsilon_{p/q}^{comb,+}$ joining $(z_*, \eta_*)$ to some point of the form $(z', \eta_0) \in \Upsilon_{p/q}^{comb,+}$. Note that $$\Upsilon_{p/q}^{comb,+} \subseteq \{ (z, \eta) \ : \ 1\leq |z| \leq \lambda(\eta) \},$$ where $$\lambda(\eta) \coloneqq \sup \{|z| \ : \ \textrm{det}(M_{\theta}-zI)=0, \theta \in \Theta_{p/q}, [\theta] = \eta\}$$ denotes the largest eigenvalue of all matrices $M_{\theta}$ such that $[\theta] = \eta$. Furthermore, $\lambda$ is monotone increasing on $\mathcal{V}_{p/q}^{comb}$ and $\lambda(\eta_0) = 1$. Hence $|z'|=1$. By Corollary \ref{C:cylinder} the Master Teapot $\Upsilon_{p/q}$ contains the unit cylinder. Moreover, taking the growth rate defines a continuous map $$\varphi:\mathcal{V}_{p/q}^{comb} \times \mathbb{C} \to [1, \lambda_{p/q}] \times \mathbb{C}$$ and by construction $\varphi^{-1}(\Upsilon_{p/q}) \subseteq \Upsilon_{p/q}^{comb}$, hence $\Upsilon_{p/q}^{comb}$ also contains the unit cylinder. Thus, every point in $\Upsilon_{p/q}^{comb,+}$ is connected by a continuous path in $\Upsilon_{p/q}^{comb,+}$ to the unit cylinder; hence, $\Upsilon_{p/q}^{comb,+}$ is path connected. \end{proof} \begin{proof}[Proof of Theorem \ref{T:bagel-connected}] Since $\Sigma_{p/q} \cap \{ z \ : \ |z| \geq 1\}$ is the projection of $\Upsilon_{p/q}^{comb,+}$ onto the $z$-coordinate, connectivity of $\Sigma_{p/q} \cap \{ z \ : \ |z| \geq 1\}$ follows from connectivity of $\Upsilon_{p/q}^{comb,+}$. \end{proof} \section*{Appendix. 
Relating the Markov polynomial and Milnor-Thurston kneading polynomial for critically periodic real maps} For real postcritically finite parameters, we obtain a closer relationship between the characteristic polynomial for the Markov partition and the Milnor-Thurston kneading polynomial. If $f$ is critically periodic of period $p$, Milnor-Thurston's kneading determinant $D_{MT}(t)$ is of the form $$D_{MT}(t) = \frac{P_{MT}(t)}{1-t^p}$$ where $P_{MT}(t)$ is a polynomial of degree $p-1$, which we call the \emph{kneading polynomial}. \begin{proposition} Let $f$ be a critically periodic real quadratic polynomial of period $p$, and let $A$ be the transition matrix for the partition of the Hubbard tree minus the postcritical set into its connected components. Then we have the identity $$\det (I-tA) = P_{MT}(t).$$ \end{proposition} \begin{proof} Recall that the Artin-Mazur zeta function of $f$ is defined as $$\zeta(t) := \exp \left( \sum_{k = 1}^\infty \frac{\# \textup{Fix}(f^k)}{k} t^k \right).$$ Moreover, Milnor-Thurston \cite{MilnorThurston} also consider the \emph{reduced zeta function} $$\widehat{\zeta}(t) := \exp \left( \sum_{k = 1}^\infty \frac{ \widehat{n}(f^k)}{k} t^k \right)$$ where $\widehat{n}(f^k)$ is the number of monotone classes of fixed points of $f^k$. Two points $x, y$ lie in the same monotone equivalence class for $f^k$ if $f^k$ maps the whole interval $[x, y]$ monotonically. Moreover, for any matrix $A$, recall the formula (see e.g. 
\cite{TiozzoContinuity}, Lemma 4.4) $$\det(I - t A) = \exp \left( - \sum_{k = 1}^\infty \frac{\textup{Tr }A^k}{k} t^k \right).$$ Note that, if $A$ is the Markov matrix for $f$, then for each $k$ we have $$\textup{Tr}(A^k) = \widehat{n}(f^k).$$ Indeed, the trace of $A^k$ equals the number of (based) closed paths of length $k$ in the graph associated to $A$; each such path corresponds to a cycle of length $k$ of intervals with respect to the partition given by the postcritical set, and two fixed points of $f^k$ have the same coding if and only if they belong to the same monotonicity class. Hence, we obtain \begin{equation} \label{E:detA} \det(I - t A) = \frac{1}{\widehat{\zeta}(t)}. \end{equation} Note that for the Markov matrix, there are two choices: \begin{enumerate} \item Consider $f : I_0 \to I_0$ with $I_0 := [f(c), f^2(c)]$ the Hubbard tree, where $c$ is the critical point. In this case, the Markov matrix $A_0$ has size $p-1$. \item Consider $f : I_1 \to I_1$ with $I_1 := [- \beta, \beta]$ where $\beta$ is the $\beta$-fixed point. In this case, the Markov matrix $A_1$ has size $p+1$. \end{enumerate} Note that there is exactly one periodic point (the $\beta$-fixed point) which is in $I_1$ but not in $I_0$. Hence \begin{equation}\label{E:A1} \det(I - t A_1 ) = (1-t) \det(I - t A_0). \end{equation} Note that Milnor-Thurston take as domain of definition of $f$ the largest interval $I_1$. Now, by \cite[Corollary 10.7]{MilnorThurston} we have \begin{equation} \label{E:zeta} \frac{1}{\widehat{\zeta}(t)} = (1-t) (1-t^p) D_{MT}(t) \end{equation} where $\widehat{\zeta}(t)$ is the reduced zeta function for the action on $I_1$. Then by comparing \eqref{E:detA}, \eqref{E:A1} and \eqref{E:zeta} we have $$\det( I - t A_0) = \frac{\det (I - t A_1)}{1 - t} = \frac{1}{\widehat{\zeta}(t) (1-t)} = (1-t^p) D_{MT}(t) = P_{MT}(t)$$ which is the desired identity, since in the statement of the proposition we took $A = A_0$. \end{proof} \bibliographystyle{alpha}
\section{Introduction} System-on-chips (SoCs) today are the backbone of modern electronic devices that are commonly used in a variety of application domains from defense and aerospace to automotive and the internet of things (IoT). Hardware intellectual property (IP) cores are the basic building blocks of an SoC. SoC developers often purchase third-party IPs (3PIPs) from different vendors to expedite the development process and cut costs. For instance, along with the dual-core central processing unit, the Apple A9 SoC present in many Apple products incorporates the PowerVR GT7600 graphics processing core from Imagination Technologies \cite{A9GPU}. Several functional verification techniques and tools ensure that the 3PIP core can correctly execute the functions described in the specification. However, SoC integrators cannot verify whether the IP is free from unspecified malicious functionalities or hardware Trojans that are designed to stay dormant during functional verification \cite{guo2015pre}. An untrusted IP vendor could introduce hardware Trojans to circumvent the functionality of the SoC or facilitate the leakage of on-chip secret information during field operation. While the malicious alteration of trusted designs at the untrusted foundry has received considerable attention, the corresponding solutions to detect Trojans in ICs do not apply to IP trust verification for the following reasons: \begin{itemize} \item During procurement, the consumer only receives a high-level specification of the 3PIP from the untrusted IP vendor. There is no golden model or reference implementation against which to compare the suspect IP. \item Application of side-channel analysis-based detection techniques \cite{hoque2017golden, liu2016silicon} is not feasible in this scenario, as the reference for parametric behavior (e.g., delay and power) is defined by the untrusted IP vendor itself. 
\item The lack of a golden reference model allows an attacker to incorporate large Trojan circuits with complex trigger conditions that are hard to activate using logic-testing-based solutions \cite{saha2015improved, chakraborty2009mero}. \end{itemize} Static analysis of the gate-level netlist has shown promising results in identifying malicious nets (wires) by observing one or two specific features (e.g., the controllability metric of nets). However, dependency on a very limited number of features allows the attacker to easily redesign the Trojan to bypass them \cite{zhang2014detrust}. The application of machine learning (ML) algorithms allows the incorporation of many features collectively to raise the difficulty of circumventing the countermeasure. Researchers have explored both supervised \cite{hoque2018hardware, hasegawa2016hardware, hasegawa2017trojan} and unsupervised \cite{salmani2016cotd, sc_cotd} ML solutions for IP trust verification. One of the essential requirements for applying supervised ML techniques is to have a set of known Trojan-inserted IPs. During the training process of a supervised machine learning model, feature data for each net is extracted from known Trojan-inserted IPs, and the nets are labeled (either as Trojan net or normal net) to prepare the data for training the ML classifiers. These IPs are usually obtained by inserting known Trojans in trusted IPs that do not have any other Trojans. While \cite{hasegawa2016hardware} and \cite{hasegawa2017trojan} relied on known Trojan-inserted IPs available in Trust-Hub, in \cite{hoque2018hardware} the authors suggested that such a limited number of custom IPs may not be able to represent the diverse design space for any given class of Trojan (e.g., combinational, sequential, always-on). Instead, an automatic Trojan insertion tool was used to insert a large number of diverse implementations for each class of Trojans in various Trojan-free or trusted IPs \cite{hoque2018hardware}. 
Even if a set of IPs (e.g., signal processing, crypto, or communication cores) is available that is guaranteed to be trusted, the suspect IP (e.g., a microprocessor) could be inherently very different from the trusted IPs with respect to the behavior of its internal nets. This discrepancy in implementation between the training and suspect IPs could be reflected in the feature data of their nets and could negatively impact the verification outcome. ML-based techniques have proposed various structural and functional features of a net, but there are no solutions for identifying the best subset of features for different broad classes of Trojans. For ML-based techniques, feature selection could accelerate the verification process by reducing the time required for feature data extraction and improve detection accuracy. Additionally, all of these features are defined under a no-scan assumption for the design. In practice, while testing the design, the tester can access all the inputs and outputs of a flip-flop under the full-scan assumption. Hence, if an attacker constructs the Trojan with a full-scan assumption, the chances of detecting the Trojan under a no-scan assumption are very low, as the inserted Trojan will be triggered very rarely. Another drawback of most existing static analysis-based Trojan detection solutions (including the ML-guided techniques) is that, instead of detecting malicious circuits, they only report a set of potential Trojan nets in a suspect IP. Thus, manual intervention is required for further analysis of the reported set of suspicious nets. Representing the verification results as a set of suspicious nets impedes understanding of the potential threat and weakens confidence in the verification result. Such a representation does not report the number of Trojans these nets may constitute, their trigger conditions, or their activation impact. 
Without this information, it is hard to make a definitive claim regarding the trustworthiness of an IP. In this paper, we propose \textbf{VIPR} (\underline{V}erification of \underline{IP} T\underline{R}ust), a novel approach to leverage supervised ML for trust verification that does not require a set of trusted sample designs for generating the training data. To eliminate the need for trusted designs, we propose a mechanism to utilize the untrusted IP itself for evaluating trust. During the training phase, we label all the nets in the suspect IP as benign nets (i.e., not part of any Trojan). Next, we insert a large number of varied Trojans in the suspect IP using an automated Trojan insertion tool and label the nets of the resulting synthetic Trojans as Trojan nets. We extract the feature data for these benign and Trojan nets and then train a machine learning classifier. We assume that the attacker is very likely to insert very few Trojans (compared to what we insert using the Trojan insertion tool). Therefore, the training set contains those few data points of attacker-inserted Trojan nets mislabeled as benign nets. However, by incorporating a significantly larger population of tool-generated Trojan data points labeled as Trojan nets, we eliminate the impact of those few mislabeled data points during training. In other words, by introducing a large amount of correctly labeled Trojan instances in the training data, we ``overpower'' the influence of the few mislabeled data points over the training process \cite{self_train_w_noisy_data}. Our proposed framework also systematically identifies the best combination of features for detecting different classes of Trojans. Furthermore, along with reporting a set of suspicious nets, our approach goes one step further and identifies malicious circuits with distinct trigger and payload logic. 
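The golden-free labeling strategy just described can be illustrated with a toy experiment. The sketch below (in Python with NumPy and scikit-learn; the Gaussian feature clusters and the logistic-regression classifier are hypothetical stand-ins for real net features and for VIPR's actual models) shows how a large population of correctly labeled synthetic Trojan nets outweighs a handful of attacker-inserted Trojan nets mislabeled as benign:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 4-dimensional net features: benign nets cluster near 0,
# Trojan nets (rarely-activated logic) cluster near 4.
benign = rng.normal(0.0, 1.0, size=(5000, 4))
attacker_trojan = rng.normal(4.0, 1.0, size=(10, 4))    # few, hidden in the suspect IP
tool_trojan = rng.normal(4.0, 1.0, size=(3000, 4))      # many, tool-inserted

# Golden-free labeling: every net of the suspect IP (including the few
# attacker Trojan nets) gets label 0; tool-inserted Trojan nets get label 1.
X = np.vstack([benign, attacker_trojan, tool_trojan])
y = np.array([0] * (len(benign) + len(attacker_trojan)) + [1] * len(tool_trojan))

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The mislabeled minority is overpowered by the synthetic Trojan class,
# so the attacker's nets are still flagged as Trojan nets.
detection_rate = clf.predict(attacker_trojan).mean()
assert detection_rate > 0.8
```

The classifier cannot afford to carve out the few mislabeled points without misclassifying the much larger synthetic Trojan population, which is exactly the ``overpowering'' effect described above.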
Once the ML model identifies a set of suspicious nets in the 3PIP, the nets are analyzed using an algorithm to construct the Trojan circuit(s). In particular, in this paper, we make the following major contributions: \begin{itemize} \item We propose a golden-free supervised ML-based third-party IP trust verification framework. Contrary to state-of-the-art techniques, our assumptions are practical and eliminate the need for trusted designs for training. \item We incorporate an automated design-specific feature selection flow for generating lightweight ML models and improved classification performance. We present a complete tool flow for the proposed verification process. \item We propose a set of parameterized post-processing algorithms for localizing and reconstructing the hardware Trojan based on the output of the ML model. The proposed method reduces the need for manual inspection of the suspect nets and reduces false detection. \item We extensively verify the efficacy of the proposed framework through a series of well-defined quantitative and qualitative analyses on various Trust-Hub Trojans. \end{itemize} The rest of the paper is organized as follows: Section II describes the existing solutions and their drawbacks; Section III describes the proposed methodology; Section IV presents results on the Trojan detection performance of our proposed approach; and finally, we conclude in Section V. \section{Background and Related Works} \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figures/psr_need_trojans.png} \caption{(a) Random tool-inserted 8-triggered combinational Trojan in the suspect RS232 design. (b) Random tool-inserted 8-triggered combinational Trojan in the suspect S38417 design.} \label{fig:psr_need} \end{figure*} \subsection{State-of-the-art Hardware Trojan detection techniques} Trojan detection in IC/IP is a long-standing problem. 
Several techniques have been proposed over the years to detect Trojans in ICs through machine-learning-based power side-channel analysis \cite{side_channel_pc, untrusted_cots}. Works like~\cite{signed} use watermarking techniques to detect Trojan behavior. The authors of~\cite{signed} sample critical nets in the design to generate a unique signature. A modification of the original design through the addition of extra gates would alter the switching activity and thus generate a different signature. On the other hand, existing methods for hardware Trojan detection for IPs (i.e., gate-level/RTL codes) can be categorized into dynamic and static analysis-based techniques. In the case of dynamic methods, a targeted set of test vectors is generated to activate and detect potential hardware Trojans via simulation \cite{MERO, MERS, scalableMERS}. These techniques require exploring the trigger states to activate the Trojans and observe their malicious impact. However, with little to no knowledge of the possible locations of the triggers in a design, the test pattern generation problem becomes difficult \cite{testPatDifficulty}. For static approaches, structural and functional properties related to Trojans are used for detection \cite{hasegawa2016hardware}. Static techniques can be further categorized into search-based, threshold-based, and machine learning-based methods \cite{survey_1}. For the search-based methods presented in \cite{UCI} and \cite{VeriTrust}, it is hard to find Trojan nets in very large designs. The approach in \cite{UCI} relies on test pattern simulation to determine a set of unused nets in the design. Here, the result depends on the set of patterns used during simulation, which might not cover all possible input combinations. The method proposed in \cite{VeriTrust} is only applicable to combinational designs. 
Functional analysis for nearly unused circuit identification (FANCI) presented in \cite{waksman2013fanci} uses Boolean function analysis, which is a threshold-based static approach to detect hardware Trojans in untrusted IPs. This work is extended by adding graph neighborhood analysis in \cite{HAL}. Because of the time complexity involved in performing the Boolean analysis for large fan-in logic, it is difficult to use \cite{waksman2013fanci, HAL} in large designs \cite{hasegawa2017trojan}. \begin{table}[t] \centering \caption{Feature values of trigger enable signals in RS232 and S38417 design} \label{tab:psr_features} \scriptsize\addtolength{\tabcolsep}{8pt} \begin{tabular}{|c|c|c|c|} \hline \textbf{\#} & \textbf{Features} & \textbf{r5 in Fig.~\ref{fig:psr_need}(a)} & \textbf{s5 in Fig.~\ref{fig:psr_need}(b)} \\ \hline 1 & Probability & $2\times10^{-6}$ & $1\times10^{-9}$ \\ \hline 2 & 0-controllability & 1 & 1 \\ \hline 3 & 1-controllability & 43 & 271 \\ \hline 4 & Observability & 2 & 2 \\ \hline \end{tabular} \end{table} Recently, researchers have started exploring supervised and unsupervised machine learning techniques for identifying suspect Trojan nets in untrusted 3PIPs. Table \ref{tab:existing} presents recent efforts using ML for Trojan detection and their limitations. The unsupervised machine learning approach presented in \cite{salmani2016cotd} uses combinational Sandia Controllability and Observability (SCOAP) features to cluster the nets in a design and potentially identify Trojan signals which are difficult to control and observe. Here, the authors have used the TetraMax tool, which caps SCOAP values at a maximum of 254. In practical designs, SCOAP values can go beyond this preset value; hence this technique may produce many false positives. In \cite{hasegawa2016hardware}, Hasegawa et al. use a supervised support-vector-machine-based model to classify Trojans in suspicious designs. 
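SCOAP testability measures such as the 0/1-controllability values in Table~\ref{tab:psr_features} follow simple structural recurrences: for an AND gate, the output 1-controllability is the sum of the input 1-controllabilities plus one (all inputs must be set to 1), while the output 0-controllability is the minimum of the input 0-controllabilities plus one. A minimal sketch (in Python; the gate rule shown is the standard combinational SCOAP recurrence, and the toy circuit is purely illustrative):

```python
def scoap_and(inputs):
    """Combinational SCOAP values for an AND gate.

    Each input is a pair (CC0, CC1); primary inputs have (1, 1).
    """
    cc0 = min(c0 for c0, _ in inputs) + 1      # one input at 0 suffices
    cc1 = sum(c1 for _, c1 in inputs) + 1      # all inputs must be 1
    return cc0, cc1

# Primary inputs are trivially controllable.
pi = (1, 1)

# A wide AND tree (as in a Trojan trigger ANDing many rare signals)
# accumulates a large CC1, which is why trigger nets stand out.
net = pi
for _ in range(7):                 # 8-input AND built as a chain of 2-input gates
    net = scoap_and([net, pi])

cc0, cc1 = net
print(cc0, cc1)                    # prints: 2 15
```

This asymmetry (small 0-controllability, large 1-controllability) is exactly the pattern exhibited by the trigger-enable nets r5 and s5 in Table~\ref{tab:psr_features}.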
Here, different publicly available sets of Trojan designs are analyzed. However, the authors use the same limited number of Trust-Hub benchmarks for training, which do not contain a large number of diverse Trojan implementations. Moreover, the authors tested the model using leave-one-out cross-validation, where the suspect IP being tested is also in the training data with all the non-Trojan nets labeled correctly. This work is extended by adding a few more gate-level features; multi-layer neural networks are used for classification in \cite{nn_ht} and random forest classifiers are used in \cite{hasegawa2017trojan}. In \cite{sup_scoap}, the authors use SCOAP features to train supervised machine learning models. Here, the training dataset is created synthetically by oversampling the Trojan class. They have used an open-source `Testability Measurement Tool,' which evaluates SCOAP features for designs provided in bench (.bench) format. An infinite value is returned for nets in a loop, which will impact the results, as most sequential designs contain some form of loop. \begin{table*}[t] \centering \caption{Qualitative Comparison of the Proposed Approach with other ML-based Trust Verification Techniques} \label{tab:existing} \scriptsize\addtolength{\tabcolsep}{5pt} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\textbf{Properties}} & \begin{tabular}[c]{@{}c@{}}\textbf{Hoque}\\ \textbf{et al. \cite{hoque2018hardware}} \end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Salmani }\\ \textbf{et al. \cite{salmani2016cotd}} \end{tabular} & \begin{tabular}[c]{@{}c@{}} \textbf{Hasegawa } \\ \textbf{et al. \cite{hasegawa2016hardware,nn_ht}} \end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{SC-COTD}\\ \textbf{\cite{sc_cotd}} \end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Yang}\\ \textbf{et al. \cite{side_channel_pc}} \end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Yang }\\ \textbf{et al.
\cite{untrusted_cots}} \end{tabular} & \textbf{Proposed} \\ \hline Detection Type & Supervised & Unsupervised & Supervised & Unsupervised & Supervised & Unsupervised & Supervised \\ \hline Golden Design Required & Yes & Yes & Yes & Yes & Yes & No & No \\ \hline Trojan Localization Done & No & No & No & No & No & No & Yes \\ \hline Class-wise Training & Yes & No & No & No & Yes & No & Yes \\ \hline Post-processing & No & No & No & No & No & No & Yes \\ \hline {Targeted Feature Selection} & No & No & No & No & No & No & Yes \\ \hline \end{tabular} \end{table*} \subsection{Major Challenges in ML-guided Solutions} \subsubsection{Need to learn the complete Trojan space} To provide some understanding of the hardware Trojan space, some hardware Trojan benchmarks have been made publicly available on Trust-Hub. However, the total number of such Trojan circuits is still limited. Hence, there is a need for more valid Trojan circuits that are hard to activate and exhibit diverse Trojan structures. Also, for any supervised ML technique to work, the model must learn as much of the feature space as possible in order to derive a hyper-plane that separates the different classes of data. \subsubsection{Requirement to learn the design-specific bias} Many works detect hardware Trojans with the help of various machine learning techniques. The authors in \cite{hoque2018hardware} proposed a method to generate the training database by relying on Trojan-free IPs that are publicly available. They generated multiple Trojan-inserted designs with the help of the tool presented in \cite{cruz2018automated} and used various ML models to train the classifier. The disadvantage of this method is that the ML model may learn information specific to the training benchmarks, which may hamper the ML accuracy on the test designs.
Fig.~\ref{fig:psr_need} shows a random 8-trigger combinational Trojan template inserted in the designs RS232 (Fig.~\ref{fig:psr_need}(a)) and S38417 (Fig.~\ref{fig:psr_need}(b)). Here, the designs are different structurally as well as functionally. Table \ref{tab:psr_features} shows the static probability of the net being at logic 1, the combinational 0-controllability, the combinational 1-controllability, and the combinational observability feature values of the final trigger enable signals (r5 and s5) for these two designs. For both trigger enable signals, it is hard to drive the signal to logic level 1, as can be observed from their probability and combinational 1-controllability values. Additionally, we can observe that the Trojan in S38417 is harder to activate than the similar Trojan in the RS232 design. Though the previous approach is correct in creating a valid training database, it is not necessarily optimized for the suspicious design under test. The Trojan space is already large enough -- learning both the design and the Trojan space would require a significant amount of data. If the ML model can learn how the Trojan feature space of a suspicious design behaves, then we can reduce the amount of training data required while also tailoring the ML classification performance to the suspicious design under test. Hence, there is a need to take advantage of the feature space of the suspicious design to make design-biased decisions. \subsubsection{Towards lightweight ML model with feature selection} In the literature, various features related to hardware Trojans have been proposed, based on what an attacker can use to insert a malicious circuit in an IP. However, feature relevance depends on several factors, such as the type of Trojan and the type of benchmark. As the number of features increases, the feature space explodes, making training more challenging. In practice, enough Trojan-inserted designs are not available in the public domain.
Additionally, machine learning models are agnostic to domain knowledge. Hence, there is a need to reduce the feature space. \subsubsection{Automated post-processing} For existing supervised and unsupervised hardware Trojan detection methods presented in the literature, the predictions from the ML model are considered the final result. In a practical situation, the prediction that a net belongs to the Trojan class is not meaningful until manual intervention is performed to verify it. Moreover, in cases where the ML model outputs a large number of false-positive predictions, manual intervention will be time-consuming and can lead to erroneous conclusions when identifying the Trojan circuit. Hence, there is a need to automate the post-processing such that the end result is a reconstructed Trojan structure after removal of some of the false positives. \section{Methodology} \begin{figure*}[!h] \centering \includegraphics[width=.9\textwidth]{figures/vipr_psr.png} \caption{The flow of our proposed framework: (a) a large number of Trojans are inserted in the suspect IP, and the feature data is extracted from them. The labeling process marks all nets of the suspect IP as `normal net' and the nets of our inserted Trojans as `Trojan net'. A trained ML model is generated from this data. (b) During verification, the feature data of the suspect IP's nets are provided as the test data to the trained model. The trained model identifies the Trojan nets from which the malicious circuits are constructed.} \label{fig:PSR} \end{figure*} \subsection{Framework Overview} The overall flow of our proposed approach is presented in Fig.~\ref{fig:PSR}. The primary component of our method is a trained ML model that is capable of categorizing the nets of the suspect IP by observing their corresponding feature data. Therefore, as shown in Fig.~\ref{fig:PSR}(a), the first phase of the overall process involves training the ML classifier using a novel technique we propose.
In contrast to the training process of earlier supervised ML techniques \cite{hoque2018hardware}, the proposed training method uses the suspect IP itself for generating the training data instead of relying on a set of trusted designs. Previous works on self-referencing have shown techniques that use untrusted hardware components as the reference to detect hardware Trojans in integrated circuits (ICs) \cite{hoque2017golden, du2010self}. We refer to our training method as pseudo-self-referencing (PSR), as introduced conceptually in \cite{chakraborty2021sail}. Our framework uses the untrusted IP itself for training after inserting a large number of known Trojans into it. To evaluate the trained model, a selected set of features is extracted from the suspicious design, and the trained model is used to classify the nets. Finally, post-processing algorithms are applied to extract the possible Trojan circuitry, as shown in Fig.~\ref{fig:PSR}(b). The training and verification process is executed separately for each broad class of Trojans (e.g., combinational and sequential). The PSR-based training process starts with the suspect IP, which is in its gate-level representation. For the desired class of Trojan to be detected, a large number of diverse Trojan implementations of that class are inserted into the suspect IP. To automate this process, we follow the tool-based insertion technique presented in \cite{cruz2018automated}. Next, we extract a large number of functional and structural features for each net of the Trojan-inserted suspect IP. The nets are then inspected for labeling. Since we do not know whether any of the nets of the suspect IP are malicious, we label all of them as normal nets. The nets of the known Trojans inserted by our automated Trojan benchmarking tool are labeled as Trojan nets. This labeling process facilitates PSR-based training.
In the training data, nets of the original suspicious design are considered only once, as these nets would otherwise be repeated multiple times with little to no variation in feature values. Our assumption is that if the number of correctly labeled Trojan nets introduced through PSR is significantly higher than the number of Trojan nets in the suspect IP that have been labeled as normal nets, the classifier will treat the few mislabeled nets as outliers \cite{self_train_w_noisy_data}. We train our preferred ML algorithm using this labeled data and generate the trained model capable of detecting a specific class of Trojan. The second phase of the overall process is verifying the suspect IP using the trained model. As shown in Fig.~\ref{fig:PSR}(b), all the features are extracted from the suspect IP that was originally obtained from the untrusted vendor. This suspect design is applied as the test data to the trained ML model obtained earlier. The ML model categorizes each net of the suspect IP either as a Trojan net or as a normal net. The identified Trojan nets are further processed using a post-processing routine to construct the malicious circuit. \subsection{Pseudo Self Referencing (being Golden Free)} To utilize an untrusted IP for training, a large number of Trojans of varied implementation must be generated and inserted into it. While Trust-Hub currently contains examples of different classes of Trojans applicable to gate-level designs, they cover a very small segment of the possible design space for any given class of Trojan. If the construction of the Trojan to be detected falls in a region of the design space that is not covered by the training data, the trained model may not be able to detect it. Hence, it is critical to cover the design space of stealthy Trojans that are undetectable by functional verification.
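As a concrete illustration, the PSR labeling step can be sketched in a few lines of Python. This is an illustrative sketch under simplifying assumptions (synthetic feature vectors stand in for the real extracted net features, and the random forest classifier is an arbitrary choice for the sketch):

```python
# Illustrative PSR training-set assembly. Assumption: synthetic feature
# vectors stand in for the real extracted net features, and the
# classifier choice is arbitrary for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_psr_training_set(suspect_feats, trojan_feats):
    # All nets of the suspect IP are labeled 'normal' (0), since we
    # cannot know which of them are malicious; nets of the Trojans we
    # inserted ourselves are labeled 'Trojan' (1).
    X = np.vstack([suspect_feats, trojan_feats])
    y = np.concatenate([np.zeros(len(suspect_feats), dtype=int),
                        np.ones(len(trojan_feats), dtype=int)])
    return X, y

rng = np.random.default_rng(0)
suspect = rng.normal(0.0, 1.0, size=(200, 5))  # suspect-IP nets
trojan = rng.normal(3.0, 1.0, size=(400, 5))   # inserted-Trojan nets
X, y = build_psr_training_set(suspect, trojan)

# If a real Trojan net hides in the suspect IP, it is mislabeled here;
# with many correctly labeled inserted-Trojan nets, the classifier is
# expected to treat the few mislabeled points as outliers.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

Note that any genuine Trojan net already present in the suspect IP is mislabeled in this scheme; the premise stated above is that such points are rare enough to be treated as label noise.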
\begin{figure}[h] \centering \includegraphics[width=0.9\columnwidth]{figures/TRIT_flow.png} \caption{Automatic Trojan Insertion Framework used to insert broad classes of Trojans to generate the training data.} \label{fig:trit_flow} \end{figure} To automate the Trojan insertion process, we implement the tool-based Trojan insertion technique presented in \cite{cruz2018automated}, as shown in Fig.~\ref{fig:trit_flow}. The tool automatically generates hardware Trojans and inserts them in appropriate locations of the IP such that they cannot be easily detected during functional verification. The tool also verifies, using formal tools, that the Trojans are not ``dead logic'' and can achieve their rare activation values from the primary inputs. Several parameters of the framework determine the location and the structure of the trigger and payload of the Trojan, such as the number of triggers, the rarity of the triggers, and the inclusion of non-rare triggers. The tool can also insert any number of Trojans in the design. For inserting hard-to-activate combinational and sequential classes of Trojans, we select and combine nets with low activation probabilities. To estimate the signal probabilities of the gate-level IP, a vector-less approach is used, as simulating the design with a large number of input vectors might not give accurate or stable results. The probability of a net being at logic value 0 ($P_0$) and the probability of it being at logic value 1 ($P_1$) are calculated for each net by examining its driving gate. By analyzing $P_{0,1}$ for all nets of the IP, candidate locations for Trojan insertion are identified such that $P_0$ or $P_1$ is less than a predetermined probability threshold. The tool then selects observable nets as payloads while ensuring no combinational loops are formed. Once the locations are identified, we validate the Trojans using formal tools.
Once a valid set of trigger nets and payload nets is obtained, combinational or sequential Trojans are constructed and inserted into the suspicious design. In the case of combinational Trojans, the Trojan structure as well as the gates used in the Trojan body vary across different sets of Trojans. For sequential Trojans, we generate finite state machines with different transitions and numbers of states, or simple counters with different maximum values. \subsection{Feature Extraction Methodology} We extract two different classes of features for training the ML model: functional and structural features. To derive the features from each design, a hyper-graph is created from the gate-level netlist, and different types of algorithms are applied based on the feature. In the following section, we describe all the features used in our framework. \begin{table}[t!] \centering \caption{List of Features used in the Proposed Method.} \label{tab:func} \scriptsize\addtolength{\tabcolsep}{0pt} \begin{tabular}{|l|l|p{5 cm}|} \hline \textbf{\#} & \multicolumn{1}{|c|}{\textbf{Feature}} & \multicolumn{1}{|c|}{\textbf{Description}} \\ \hline 1 & Static Probability & Static probability of the net. \\ \hline 2 & Transition Probability & Activity from 0 to 1. \\ \hline 3 & Controllability & Controllability of the net. \\ \hline 4 & Observability & Observability of the net. \\ \hline 5 & Fanin Level 1 & \# of connected inputs at level 1 \\ \hline 6 & Fanout Level 1 & \# of connected outputs at level 1 \\ \hline 7 & Fanin Level 2 & \# of connected inputs at level 2 \\ \hline 8 & Fanout Level 2 & \# of connected outputs at level 2 \\ \hline 9 & Nearest\_FF\_D & Distance of the nearest flip-flop input \\ \hline 10 & Nearest\_FF\_Q & Distance of the nearest flip-flop output \\ \hline 11 & Min. PI Distance & Min. distance from nearest primary input \\ \hline 12 & Min. PO Distance & Min.
distance from nearest primary output \\ \hline \end{tabular} \end{table} \subsubsection{Functional Features} For the functional features, the sequential elements are treated as full-scan flip-flops. With this assumption, the outputs of the sequential elements are considered pseudo-primary inputs, and the inputs of the sequential elements are considered pseudo-primary outputs. Finally, the hyper-graph is traversed in topological order to evaluate the functional features. The functional features used in the proposed framework are described below. \textbf{Static Probability:} The static probability denotes the fraction of time the state of a signal or net is expected to be at logic-1 or logic-0. If the static probability (also referred to as signal probability) of a net being at logic-1 is 0.4, then the net is expected to be at logic-1 40\% of the time. We observed that simulation-based probability calculation is significantly affected by the types of test vectors and the total number of vectors. Hence, we have used a vector-less approach to derive the signal probability of each net in the design. \begin{table}[h] \centering \caption{Static Probability of basic gates \cite{stat_prob}} \label{tab:stat_prob} \scriptsize\addtolength{\tabcolsep}{18pt} \begin{tabular}{|c|l|l|} \hline \textbf{\#} & \multicolumn{1}{|c|}{\textbf{Gate}} & \multicolumn{1}{|c|}{\textbf{Signal Probability of 1}} \\ \hline 1 & NOT & {$1 - P_A$} \\ \hline 2 & AND & {$P_A*P_B$} \\ \hline 3 & OR & {$P_A + P_B - P_A*P_B$} \\ \hline \end{tabular} \end{table} The vector-less probability calculations are an excellent compromise between accuracy and compute efficiency compared to simulation-based calculations performed for a reasonably long duration with realistic stimulus. The equations for basic gates in \cref{tab:stat_prob} are used to compute the static probability feature of all nets while traversing the hyper-graph.
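A minimal sketch of this vector-less propagation, assuming a simplified netlist encoding of our own (gate type, output net, input nets) and the basic-gate equations of \cref{tab:stat_prob}:

```python
# Sketch of vector-less static-probability propagation in topological
# order, using the basic-gate equations of Table tab:stat_prob. The
# (gate, output, inputs) netlist encoding is an assumption made only
# for this illustration.
def static_prob(netlist, primary_inputs, p_in=0.5):
    # netlist: list of (gate_type, output_net, [input_nets]) tuples,
    # given in topological order; returns P(net = 1) for every net.
    p1 = {net: p_in for net in primary_inputs}
    for gate, out, ins in netlist:
        a = p1[ins[0]]
        if gate == "NOT":
            p1[out] = 1.0 - a
        elif gate == "AND":
            b = p1[ins[1]]
            p1[out] = a * b
        elif gate == "OR":
            b = p1[ins[1]]
            p1[out] = a + b - a * b
        else:
            raise ValueError(f"unsupported gate: {gate}")
    return p1

# Two-gate example: w = a AND b, y = NOT w.
probs = static_prob([("AND", "w", ["a", "b"]), ("NOT", "y", ["w"])],
                    ["a", "b"])
# With P(a) = P(b) = 0.5: P(w) = 0.25 and P(y) = 0.75.
```

Since every gate's inputs are visited before its output, a single pass over the topologically ordered netlist suffices.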
\begin{table}[h] \centering \caption{Transition Probability of basic gates \cite{trans_prob}} \label{tab:trans_prob2} \scriptsize\addtolength{\tabcolsep}{3pt} \begin{tabular}{|c|l|l|} \hline \textbf{\#} & {\textbf{Gate}} & {\textbf{$P_{0 \to 1} = P_{out=0} \cdot P_{out=1}$}} \\ \hline 1 & NOT & {$(1-A) \cdot A$} \\ \hline 2 & AND & {$(1-A \cdot B) \cdot (A \cdot B)$} \\ \hline 3 & OR & {$(1-A)(1-B) \cdot (1-(1-A)(1-B))$} \\ \hline 4 & NAND & {$(A \cdot B) \cdot (1-A \cdot B)$} \\ \hline 5 & NOR & {$(1-(1-A)(1-B)) \cdot (1-A)(1-B)$} \\ \hline 6 & XOR & {$(1-(A + B - 2AB)) \cdot (A + B - 2AB)$} \\ \hline \end{tabular} \end{table} \textbf{Transition Probability:} The transition probability (also referred to as toggle rate or activity) measures transitions from logic level 0 to 1 or from 1 to 0. The transition probability can be obtained either by simulating a set of test vectors or with a vector-less approach. In the simulation approach, the transition probability is affected by the order in which vectors are applied during simulation. For instance, if two consecutive vectors used during simulation differ by only one bit, then the chances of observing a transition on the nets affected by this bit change are low. Hence, for the transition probability feature as well, we have used the vector-less approach, which derives the transition probability of each net in the design from the type of gate driving that net. \cref{tab:trans_prob2} shows the equations for basic gates that are used to extract the transition probability of each net while traversing the hyper-graph from inputs to outputs. \textbf{Controllability and Observability:} The controllability metric captures the effort required to set a net to a desired logic value by applying vectors to the primary inputs.
Observability represents the effort of propagating the logic state of a net to observable points (e.g., primary outputs). Goldstein developed the SCOAP (Sandia Controllability/Observability Analysis Program) testability measures \cite{scoap_sandia}. A SCOAP value can range from zero to infinity; the higher the value, the more effort is required to control or observe a signal in the design. For each signal in the design, there are six numerical values, i.e., combinational and sequential variants of 0-controllability, 1-controllability, and observability. Combinational controllability measures the effort required to achieve logic 0/1 on a signal via the primary inputs, whereas sequential controllability measures this effort via the number of sequential elements that must be clocked. For observability, combinational observability measures the effort required to observe the signal value at an observable point (i.e., a primary output or scan flip-flop), while sequential observability measures the number of sequential elements that must be clocked to observe the signal. SCOAP values are heavily influenced by the test infrastructure available in the design. To this end, we have implemented two variations of the SCOAP measure. The first variant assumes that every sequential element is part of a scan chain (full-scan); under this assumption, we obtain combinational SCOAP values for each net in the gate-level design. Here, the combinational controllability is obtained by traversing the hyper-graph from inputs to outputs while evaluating the features based on the controllability equations of basic gates. Similarly, for combinational observability, the hyper-graph is traversed in reverse order and the basic gate equations are used to evaluate the features of individual nets. In the second variant, we extract the SCOAP measure assuming that no sequential element belongs to a scan chain (no-scan), as per \cite{scoap_sandia}, \cite{sc_cotd}.
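To make the forward pass concrete, the following sketch computes combinational 0/1-controllability using Goldstein's rules for a few basic gates; the netlist encoding is our own simplification, not the framework's actual data structure:

```python
# Sketch of combinational SCOAP controllability (CC0, CC1) via a single
# forward topological pass, following Goldstein's rules for basic
# gates. Primary inputs get (1, 1); the (gate, output, inputs) netlist
# encoding is an assumption made only for this illustration.
def scoap_cc(netlist, primary_inputs):
    cc = {net: (1, 1) for net in primary_inputs}  # {net: (CC0, CC1)}
    for gate, out, ins in netlist:
        if gate == "NOT":
            c0, c1 = cc[ins[0]]
            cc[out] = (c1 + 1, c0 + 1)  # invert, plus one gate level
        elif gate == "AND":
            # Output 0 needs any input at 0; output 1 needs all inputs at 1.
            cc[out] = (min(cc[i][0] for i in ins) + 1,
                       sum(cc[i][1] for i in ins) + 1)
        elif gate == "OR":
            cc[out] = (sum(cc[i][0] for i in ins) + 1,
                       min(cc[i][1] for i in ins) + 1)
        else:
            raise ValueError(f"unsupported gate: {gate}")
    return cc

cc = scoap_cc([("AND", "w", ["a", "b", "c"]), ("NOT", "y", ["w"])],
              ["a", "b", "c"])
# w: CC0 = 2, CC1 = 4; y: CC0 = 5, CC1 = 3.
```

Combinational observability would be computed analogously by a reverse pass, and the no-scan variant by additionally tracking sequential depth.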
Some of the previous works on hardware Trojan detection rely on Synopsys's TetraMax tool to extract the SCOAP measure. One limitation of the TetraMax tool is that the maximum value it can assign to a net is 254; nets whose values exceed 254 are instead assigned an asterisk symbol. Hence, we have not used the TetraMax tool to extract this feature, as SCOAP values can go beyond the preset threshold in TetraMax. The publicly available ``Testability Measurement Tool'' does not support designs in Verilog format, and for designs that contain loops, an infinite value is returned. Hence, we have implemented the algorithm from scratch to evaluate the SCOAP features of the design. To design a Trojan that is hard to activate during logic testing, an attacker is expected to design the trigger mechanism using nets with low controllability (i.e., a higher controllability measure). To further hide the Trojan, the corresponding payload could be designed to impact only low-observability (i.e., high observability measure) nodes. These two features have been used for detecting Trojans using unsupervised clustering \cite{salmani2016cotd}. \subsubsection{Structural Features} Structural features describe the location and connectivity information of a net. While most of the structural features may appear uncorrelated in categorizing a Trojan net, they provide useful information when used together with functional features \cite{hoque2018hardware}. Table \ref{tab:func} lists all the structural features (features 5 to 12) analyzed in this paper; they are illustrated in Fig.~\ref{fig:struct} and described below. \begin{figure}[!th] \centering \includegraphics[width=\columnwidth]{figures/struct_feature_values.png} \caption{Illustration of structural features for the net marked in red.
} \label{fig:struct} \end{figure} \textbf{Fanin and Fanout:} Fanin is useful in understanding whether a net is driven by logic with a large fanin structure. The Level 1 fanin of a net represents the number of nets that are inputs to the cell driving the net. The Level 2 fanin indicates the total fanin of the nets that are Level 1 inputs to the driving cell. The Level 1 fanout represents the number of cells the net propagates to. The Trojan circuit usually provides output to the original design through the payload only. The fanout of gates in the logic cone of a trigger circuit usually goes as inputs to other gates in the trigger logic. Hence, the fanout of Trojan nets is likely to be low. Even for always-on Trojans using ring oscillators or shift registers, the fanout of the nets does not propagate to the normal part of the design \cite{hoque2018hardware}. \textbf{Distance from nearest Flip-Flop:} We consider combinational and sequential Trojans as two different classes of Trojans and separate their training and testing processes. Since sequential Trojan nets are expected to have flip-flops nearby, unlike combinational Trojan nets, we extract the distance from the nearest flip-flop as possible features (i.e., Nearest\_FF\_D and Nearest\_FF\_Q). While these features should help a model trained for sequential Trojans to detect sequential Trojan nets in a suspect IP, they should also prevent the model from classifying combinational Trojan nets in the suspect IP as nets of sequential Trojans. \textbf{Distance from Nearest Primary Input and Output:} Distance from a primary input (PI) provides more context to various functional features. For instance, Trojans that take their trigger values directly from the PIs may have higher toggle rates than Trojans that take trigger inputs from very rare signals. However, their toggle rates can still be lower than those of non-Trojan nets at similarly short distances from the PIs.
Distance from the primary output (PO) is useful for identifying malicious structures that must be situated near the PO (e.g., Trojans that leak information through the PO). Since flip-flops are usually connected to clock, reset, and enable signals coming directly from the PIs, we only consider the D-input when calculating the distance from the PI. \subsection{Design Targeted Feature Selection Methodology} \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{figures/feature_extraction_flow.png} \caption{Feature selection flow in the proposed framework.} \label{fig:feature_selection} \end{figure} Existing studies have explored various structural and functional features of nets to identify the nets of a Trojan. To train any machine learning model, there is an inherent need for a feature space that can separate the different classes of data present in the database. In \cite{hoque2018hardware}, the authors suggest that combining both structural and functional features contributed to their improvement in detection accuracy compared to techniques that use only one type of feature \cite{hasegawa2016hardware, hasegawa2017trojan, salmani2016cotd}. However, \cite{hoque2018hardware} implemented the framework on FPGA netlists, which support 6-input look-up tables (LUTs). Feature data obtained from an FPGA-mapped netlist and from a gate-level netlist may exhibit different distributions for Trojan and normal nets. A feature useful for classification in the FPGA netlist may not have the same impact in the gate-level netlist. Additionally, to decide which features are relevant, there is a need to filter out features that do not contribute to learning the feature space. Therefore, our framework includes a feature selection step, as shown in Fig.~\ref{fig:feature_selection}, to identify the best possible features for detecting Trojans in the suspect gate-level netlist. This technique can also be used on FPGA netlists.
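Such a selection step can be illustrated with a greedy cross-validated forward search; the toy data, the random forest classifier, and the stopping rule below are assumptions for this sketch, not the framework's exact setup:

```python
# Illustrative greedy forward feature selection driven by the
# cross-validation score of a classifier. The toy data, classifier,
# and stopping rule are assumptions for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def forward_select(X, y, clf, cv=3):
    remaining = list(range(X.shape[1]))
    chosen, best = [], -1.0
    while remaining:
        # Score every candidate feature when added to the chosen set.
        scores = {f: cross_val_score(clf, X[:, chosen + [f]], y,
                                     cv=cv).mean() for f in remaining}
        f, s = max(scores.items(), key=lambda kv: kv[1])
        if s <= best:        # stop once no candidate improves the score
            break
        chosen.append(f)
        remaining.remove(f)
        best = s
    return chosen, best

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=300)
X = rng.normal(size=(300, 6))
X[:, 2] += 2.0 * y           # make feature 2 strongly informative
clf = RandomForestClassifier(n_estimators=30, random_state=0)
chosen, score = forward_select(X, y, clf)
# Feature 2 is expected to be among the selected features.
```

The same loop run per suspect design yields a design-specific feature subset, from which the most frequently selected features can be retained.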
Exhaustive feature selection is performed to find the best possible set of features based on the cross-validation score of a classifier. In each iteration, the feature that gives the maximum cross-validation performance for the given classifier is added to the candidate set of features. This step is repeated until we have a set of features that gives the highest classification accuracy. For each suspect design, a different set of features is reported by the exhaustive search. For further analysis, the five features most frequently selected across multiple suspect designs are retained to reduce the workload on the machine learning model. Fig.~\ref{fig:train_dist} shows the kernel density estimation plots of these top 5 features for tool-inserted Trojan nets and the pre-existing Free nets in a suspect design. \subsection{Localizing Trojans: Post-processing Algorithms} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/post_flow.png} \caption{Circuit reconstruction with the proposed post-processing algorithms. Nets highlighted in red represent predictions from the ML model. Specifically, in the last section, nets highlighted in blue are false-positive nets, and those highlighted in red are true-positive nets.} \label{fig:post_flow} \end{figure*} Each trained model predicts the class of a net present in the suspicious design based on the model's decision boundary. Nets are classified either as \emph{Trojan} or as \emph{Free} nets. To get more insight into the structure of the Trojan body, Trojan circuit reconstruction is performed with the help of the post-processing \cref{alg:post_algo_1,alg:post_algo_2,alg:post_algo_3}.
\begin{algorithm} \caption{Net Connectivity Analysis (NCA)} \label{alg:post_algo_1} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{ $d$: Netlist of the suspect design \\ $t_{nets}$: ML predicted Trojan nets.} \Output{$d_{l1}$: NCA processed suspect design} \textit{$d_{l1}$} = $\emptyset$ \\ \ForEach{ $currNet \in t_{nets}$}{ $fwdNets$ = $fanout(currNet)$ \\ \ForEach { $net \in fwdNets$}{ \If{ ($net$ is in $t_{nets}$) }{ $currNode$ = Driving node of $net$ \\ Add ($currNet$, $currNode$, $net$) to $d_{l1}$ \\ } } $bwdNets$ = $fanin(currNet)$ \\ \ForEach { $net \in bwdNets$}{ \If{ ($net$ is in $t_{nets}$) }{ $currNode$ = Driving node of $currNet$ \\ Add ($currNet$, $currNode$, $net$) to $d_{l1}$ \\ } } } \Return $d_{l1}$ \end{algorithm} The inputs to the Net Connectivity Analysis (NCA) post-processing algorithm are the suspicious design netlist and the predictions from the ML model. The algorithm starts with the construction of a hyper-graph whose nodes and edges are derived by parsing the netlist. \cref{fig:post_flow}(a) shows a partial hyper-graph of the suspicious design, with the ML model's predictions highlighted as red nets. As shown in \cref{alg:post_algo_1}, for each net predicted to be in the Trojan class, its immediate fanout and immediate fanin are obtained on lines 3 and 10, respectively. These fanout and fanin cones are checked for the presence of another predicted Trojan net, as shown in lines 4-8 and lines 11-16. If Trojan nets are immediately connected to each other through a node, then the node connecting them and the respective edges are added to the $d_{l1}$ graph, as shown in lines 7 and 14. This step removes any predicted Trojan nets that are isolated from other Trojan nets.
Finally, as the output of the NCA post-processing algorithm, we obtain the hyper-graph of ML-predicted Trojan nets that lie within one breadth-first-search level of another predicted Trojan net. \begin{algorithm} \caption{Gate Connectivity Analysis (GCA)} \label{alg:post_algo_2} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{ $d_{l1}$: NCA processed suspect design \\ $\theta_{depth}$: Threshold to select connection depth\\ $sortedNodes$ : Sorted graph nodes for $d_{l1}$ } \Output{$d_{l2}$: GCA processed suspect design} \textit{$d_{l2}$} = $d_{l1}$ \\ \Repeat{ \normalfont $lessConnectedFlag$ $==$ \normalfont 0} { \textit{$nodesToRemove$} = $\emptyset$ \\ \textit{$lessConnectedFlag$} = 0 \\ \ForEach { $node$ in reversed($sortedNodes$)}{ $succHeight$ = $fanoutHeight(node)$ \\ $predHeight$ = $faninHeight(node)$ \\ \If{ ($succHeight$ $<$ $\theta_{depth}$ and $predHeight$ $<$ $\theta_{depth}$) }{ $lessConnectedFlag$ = 1 \\ Add $node$ to $nodesToRemove$ \\ Remove $node$ from $d_{l2}$ \\ } } Remove $nodesToRemove$ from $sortedNodes$ \\ } \textbf{return} $d_{l2}$ \end{algorithm} The NCA-processed hyper-graph is passed through the Gate Connectivity Analysis (GCA) post-processing algorithm, described in \cref{alg:post_algo_2}, to remove nodes that do not reach a minimum user-provided depth in either the forward or the backward direction. Additionally, the list of topologically sorted nodes present in $d_{l1}$ is also passed from the NCA post-processing algorithm for further analysis. The output of the NCA post-processing algorithm is a disconnected graph with a reduced number of nodes and edges compared to the original hyper-graph of $d$, as shown in \cref{fig:post_flow}(b). Typically, a Trojan body consists of nodes that have some form of connectivity to other nodes present in their neighbourhood.
To understand how deeply a node is connected to other nodes, its maximum height is calculated in either direction, as shown on lines 6 and 7. If the node's height is not above the user-provided threshold, the node is removed from the hyper-graph along with its input and output edges. This process is repeated until each of the nodes in the hyper-graph has a minimum height in either direction. From \cref{fig:post_flow}(b) and \cref{fig:post_flow}(c), we can observe that the nodes with depth less than the user-provided threshold get filtered out after GCA post-processing. \begin{equation} \label{CC_equation} net_{cc} = \sqrt{CC_0^2 + CC_1^2} \end{equation} \begin{equation} \label{SC_equation} net_{sc} = \sqrt{SC_0^2 + SC_1^2} \end{equation} To further eliminate possible false positives, Functional Scoping Analysis (FSA) post-processing is performed as shown in \cref{alg:post_algo_3}. The mean values of \cref{CC_equation} and \cref{SC_equation} over the Free nets present in the training data are used to get $\lambda_{cc}$ and $\lambda_{sc}$. Here, controllability features with the no-scan assumption are used to see the effect of primary inputs on tool-inserted Trojans. For each of the nets present in the hyper-graph, the combinational controllability feature ($net_{cc}$), the sequential controllability feature ($net_{sc}$), and the static probability of the net being at 1 ($net_{prob}$) are compared with user-provided thresholds as shown in line 5 to add domain-specific knowledge regarding hardware Trojans. Finally, nets that fit these criteria are removed from the $TrojanCircuit$ hyper-graph to get the possible Trojan circuit present in the suspicious design as shown in \cref{fig:post_flow}(d).
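The FSA criterion, combining \cref{CC_equation} and \cref{SC_equation} with the probability window, can be sketched in Python. This is an illustrative sketch, not the framework's code: the function name is hypothetical, and the default probability bounds are the 0.4/0.6 values quoted in the text.

```python
import math

def fsa_keep(cc0, cc1, sc0, sc1, prob,
             lam_cc, lam_sc, th_cc, th_sc, th_lo=0.4, th_hi=0.6):
    """Sketch of the FSA criterion (Algorithm 3).
    lam_cc/lam_sc: mean CC/SC magnitudes over Free nets in training data.
    Returns True if the net is kept as a Trojan candidate."""
    net_cc = math.hypot(cc0, cc1)   # Eq. for net_cc
    net_sc = math.hypot(sc0, sc1)   # Eq. for net_sc
    # a net is dropped when it sits in the Free feature space:
    # small deltas from the Free means and mid-range static probability
    drop = (net_cc - lam_cc < th_cc and net_sc - lam_sc < th_sc
            and th_lo < prob < th_hi)
    return not drop
```

A net with a very skewed static probability (e.g. 0.95) is kept regardless of its controllability deltas, which matches the intent of scoping out Free-looking nets only.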
\begin{algorithm} \caption{Functional Scoping Analysis (FSA)} \label{alg:post_algo_3} \SetKwData{Left}{left}\SetKwData{This}{this}\SetKwData{Up}{up} \SetKwFunction{Union}{Union}\SetKwFunction{FindCompress}{FindCompress} \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \Input{ $d_{l2}$: GCA processed suspect design \\ $\theta_L$: Lower bound on probability \\ $\theta_U$: Upper bound on probability \\ $\theta_{cc}$: Combinational controllability threshold \\ $\theta_{sc}$: Sequential controllability threshold \\ $\lambda_{cc}$: Mean combinational controllability of \\ Free nets in training data \\ $\lambda_{sc}$: Mean sequential controllability of Free \\ nets in training data \\ } \Output{$TrojanCircuit$: Trojan circuit netlist} \textit{$TrojanCircuit$} = $d_{l2}$\\ \ForEach { $net$ in $TrojanCircuit$}{ $\delta_{cc}$ = $net_{cc}$ - $\lambda_{cc}$\\ $\delta_{sc}$ = $net_{sc}$ - $\lambda_{sc}$\\ \If{ ($\delta_{cc}$ $<$ $\theta_{cc}$ and $\delta_{sc}$ $<$ $\theta_{sc}$ and $net_{prob}$ $>$ $\theta_L$ and $net_{prob}$ $<$ $\theta_U$ ) }{ Remove $net$ from $TrojanCircuit$ } } \Return $TrojanCircuit$ \end{algorithm} \section{Results} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/train_func_kde.png} \caption{Kernel density estimation plot for MinMax scaled features used in the training of an ML model for the RS232-T1400 suspicious design. Here, 50 random valid 8-trigger combinational Denial-of-Service Trojans are inserted in the suspicious designs with the help of the \cite{cruz2018automated} tool.} \label{fig:train_dist} \end{figure*} To evaluate the proposed approach, we have used different implementations of sequential and combinational Trojans present in different suspect IPs (RS232 and S38417), which are publicly available on the Trust-Hub site \cite{trust_hub}. Some of these IPs contain either combinational (C) or sequential (S) Trojans, denoted by C/S in the first column of Table~\ref{tab:final_voted_results}.
The training data is generated from the respective IPs for different trigger sets and Trojan types with the proposed pseudo-self-referencing approach. \subsection{Feature Selection Analysis} In our method, we have reduced the set of features required for training a supervised ML model. Forward exhaustive feature selection is performed with the help of the sequential feature selection wrapper available in the open-source sklearn library. Probability and activity features are provided as initial features to the sequential feature selection wrapper to ease the exhaustive feature search. These basic features are provided assuming that the attacker should at least consider this minimum set of features while designing a valid hard-to-activate hardware Trojan. Different sets of features are selected across the available training databases. We have used the 5 most frequently selected features across all the different cases considered while training an ML model. These 5 features turn out to be functional features that relate to the functional behavior of the net based on its driving gate. Table \ref{tab:func} rows 1-4 present the list of functional features that have been incorporated in our framework. Fig.~\ref{fig:train_dist} shows the distribution of the proposed features used in the training of an ML model for the RS232-T1400 design. Here, the suspicious design is used to create the training data by inserting 8-triggered combinational Trojans with the help of an automatic Trojan insertion tool. Fig.~\ref{fig:train_dist} shows kernel density estimation plots of the features scaled using the MinMaxScaler function in sklearn. The plots (a) and (f) show the kernel density estimation of the static probability of a net being at logic 1 for Free nets and Trojan nets, respectively. We can see that the inserted Trojans have either very high or very low probability values.
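The greedy forward selection performed by sklearn's sequential feature selection wrapper can be mimicked by a minimal pure-Python sketch; this simplified stand-in is ours, and the scoring callback stands in for cross-validated model accuracy.

```python
def forward_select(features, score, k):
    """Greedy forward feature selection: repeatedly add the feature that
    most improves score(subset) until k features are chosen. A simplified
    stand-in for sklearn's SequentialFeatureSelector used in the text."""
    chosen = []
    for _ in range(k):
        best = max((f for f in features if f not in chosen),
                   key=lambda f: score(chosen + [f]))
        chosen.append(best)
    return chosen
```

With a toy additive score the selector simply picks features in order of their marginal contribution; with a real cross-validated score it captures interactions between already-chosen features and each candidate.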
Along with low or high static probability values, we can observe that the activity values are distributed near 0, as shown in plots (b-g). From the distributions of the controllability features of 0 and 1 with the full-scan assumption, we can see that some of the Trojans have high controllability values, i.e., they are difficult to control. Similarly, some of the Trojans are hard to observe, as shown in Fig.~\ref{fig:train_dist} (e). All of the observed feature behaviors from \cref{fig:train_dist} provide insight into a Trojan's stealthy behavior. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figures/train_struct_kde.png} \caption{{Kernel density estimation plots of (a) nearest input, (b) immediate fanin, (c) level-2 fanin, (d) nearest flip-flop Q output, and (e) nearest output features extracted from the 50 8-triggered combinational training dataset of the RS232-T1400 suspicious design.}} \label{fig:train_surr_dist} \end{figure*} Fig.~\ref{fig:train_surr_dist} shows the distribution of some of the structural scaled feature values extracted from the RS232-T1400 training dataset. For structural features, we can observe in plot (c) that the Trojan and Free nets have a high density near the 0.25 value, and in plot (d), the centers of the density plots get shifted by a small amount, but there is a large overlap of feature values. Hence, this set of features cannot be directly used for the training of an ML model as it will increase the number of false negatives. \subsection{Detection of Trust-HUB Trojans} To evaluate the proposed approach, the training database for the ML model is obtained by inserting tool-generated Trojans in the suspicious gate-level design. To select the trigger nets of the Trojan, vector-less probability features of the suspicious design are used by the tool. For each suspicious design, the probability threshold is varied from 0.0001 to 0.1 to select the probable set of trigger nets.
ML models are trained based on the trigger class as well as the Trojan type inserted in the suspicious design. Two types of Trojans are inserted in the design, i.e., combinational and sequential Trojans. For each type, 50 instances of 5-, 6-, and 8-trigger Trojans are inserted in each suspicious UART design. The trigger combination of each Trojan is validated using the Cadence JasperGold tool, and payloads are selected to observe the effect at the primary output. \begin{figure*}[t] \centering \includegraphics[width=.9\linewidth]{figures/voted_merged_comb_results_func_struct.png} \caption{VIPR framework output with structural and functional features for RS232 designs. The ML models are trained with 5, 6, and 8 triggered combinational Trojans. Each of the bars represents the voted value of ML predictions followed by three types of post-processing routines for each design.} \label{fig:vipr_framework_merged_comb_func_struct} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=.9\linewidth]{figures/voted_merged_seq_results_func_struct.png} \caption{VIPR framework output with structural and functional features for RS232 designs. The ML models are trained with 5, 6, and 8 triggered sequential Trojans. Each of the bars represents the voted value of ML predictions followed by three types of post-processing routines for each design.} \label{fig:vipr_framework_merged_seq_func_struct} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=.9\linewidth]{figures/voted_merged_comb_results.png} \caption{VIPR framework output with selected features for RS232 designs. The ML models are trained with 5, 6, and 8 triggered combinational Trojans. Each of the bars represents the voted value of ML predictions followed by three types of post-processing routines for each design.} \label{fig:vipr_framework_merged_comb_func} \end{figure*} \begin{figure*}[t!]
\centering \includegraphics[width=.9\linewidth]{figures/voted_merged_seq_results.png} \caption{VIPR framework output with selected features for RS232 designs. The ML models are trained with 5, 6, and 8 triggered sequential Trojans. Each of the bars represents the voted value of ML predictions followed by three types of post-processing routines for each design.} \label{fig:vipr_framework_merged_seq} \end{figure*} All features are extracted from the training designs and the suspicious design using the designed feature extraction framework. UART benchmarks from Trust-Hub with 90nm technology nodes are used to test the VIPR framework. Trojan nets in the suspicious design are labeled as \emph{Normal} nets in the training data set, and the nets present in the tool-generated Trojans are labeled as \emph{Trojan} nets. Both the training and test data are scaled using MinMaxScaler. An SVM model is used to train each ML model based on the training class type, as the task is binary classification. The hyper-parameters used for the ML model are the `rbf' kernel, class\_weight=`balanced', and C=0.8. The `balanced' mode is used to automatically adjust the weights of the input data based on the frequency of the class labels. The `C' parameter is chosen to allow very few outliers while performing the classification. To train the model, the proposed features presented in Table~\ref{tab:func} are used. Testing of each suspicious design is done separately with the respective models, which are trained on combinational Trojan-inserted designs as well as sequential Trojan-inserted designs. {For GCA post-processing, $\theta_{depth}$ is selected as 2 to find Trojans with a circuit depth of at least 2 levels.
For FSA post-processing, the lower bound on probability is chosen as 0.4, and the upper bound is selected as 0.6 to eliminate falsely classified Trojan nets whose features belong to the Free feature space.} The following section describes detection accuracy for the test designs after ML predictions and the sequence of post-processing routines. \begin{table*}[ht] \centering \caption{VIPR framework final results after majority voting per model class} \label{tab:final_voted_results} \scriptsize\addtolength{\tabcolsep}{7pt} \begin{tabular}{|c|cc|cc|cc|cc|cc|cc|} \hline \textbf{Suspicious Design} & \multicolumn{2}{c|}{Comb. training} & \multicolumn{2}{c|}{Seq. training} & \multicolumn{2}{c|}{Comb. + Seq. } & \multicolumn{2}{c|}{Hoque et al. \cite{hoque2018hardware}} & \multicolumn{2}{c|}{SC-COTD*} & \multicolumn{2}{c|}{SC-COTD \cite{sc_cotd} }\\ \cline{2-13} & \textbf{FP} & \textbf{FN} & \textbf{FP} & \textbf{FN} & \textbf{FP} & \textbf{FN} & \textbf{FP} & \textbf{FN} & \textbf{FP} & \textbf{FN} & \textbf{FP} & \textbf{FN} \\ \hline {RS232-T1000 (C)} & 4 & 0 & 4 & 0 & 4 & 0 & 4 & 1 & 12 & 4 & 2 & 0 \\ \hline {RS232-T1300 (C)} & 1 & 0 & 4 & 0 & 1 & 0 & 6 & 2 & 14 & 2 & 0 & 0 \\ \hline {RS232-T1700 (C)} & 2 & 0 & 1 & 0 & 0 & 0 & 8 & 3 & 0 & 7 & NA & NA \\ \hline {S38417-T100 (C)} & 6 & 0 & 6 & 0 & 6 & 0 & NA & NA & 8 & 1 & 1 & 0 \\ \hline {S38417-T200 (C)} & 1 & 0 & 1 & 0 & 1 & 0 & NA & NA & 0 & 9 & 9 & 0 \\ \hline {RS232-T1100 (S)} & 4 & 1 & 4 & 1 & 4 & 1 & 6 & 3 & 12 & 5 & 2 & 0 \\ \hline {RS232-T1200 (S)} & 3 & 4 & 4 & 4 & 1 & 4 & 7 & 1 & 0 & 11 & 2 & 0 \\ \hline {RS232-T1400 (S)} & 6 & 1 & 6 & 1 & 6 & 1 & 6 & 0 & 0 & 6 & 2 & 0 \\ \hline {RS232-T1500 (S)} & 4 & 1 & 4 & 1 & 1 & 1 & 5 & 1 & 12 & 5 & 3 & 0 \\ \hline {RS232-T1600 (S)} & 1 & 0 & 4 & 0 & 1 & 0 & NA & NA & 2 & 2 & 0 & 0 \\ \hline \end{tabular} \end{table*} 
\cref{fig:vipr_framework_merged_comb_func_struct,fig:vipr_framework_merged_seq_func_struct,fig:vipr_framework_merged_comb_func,fig:vipr_framework_merged_seq} show the final results for individual benchmarks with the different sets of features resulting from feature selection. The color bars are used to specify the results after a certain stage in the framework, and different patterns are used within each of these bars to show false negatives (FN), false positives (FP), and true positives (TP) for each design. For each of the benchmarks, results are represented by 4 bars to show results after the ML predictions, NCA, GCA, and FSA post-processing stages in respective order. To summarize the overall performance of the proposed approach at each stage, voting is performed across 5-, 6-, and 8-triggered combinational and sequential types of Trojan training. Voted results are used to calculate the number of false positive, false negative, and true positive nets. The last bar represents the final result produced by the VIPR framework for each design, i.e., the majority-voted FSA post-processed nets. \cref{fig:vipr_framework_merged_comb_func_struct,fig:vipr_framework_merged_seq_func_struct} show the results of the proposed framework without a feature selection process. Here, all the structural and functional features are used while training the ML model. As shown in Fig. \ref{fig:train_surr_dist}, most of the structural features are not useful to distinguish between the feature space of Trojan and normal nets for the respective design. Thus, the ML model is not able to produce a better set of results when all features are considered for training. As not all the features extracted for each training class are useful, there is a need to select the best set of features that can be used to fairly compare the performance of the proposed method across various suspicious designs.
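The per-net majority voting described above, with ties counted as Free as stated in the text, can be sketched as follows (the function name is illustrative, not part of the framework's code):

```python
def majority_vote(predictions):
    """predictions: list of per-model dicts mapping net -> 'Trojan'/'Free'.
    A net is voted Trojan only with a strict majority of Trojan votes;
    an equal split goes to Free. Sketch of the voting used to combine
    the 5-, 6-, and 8-trigger training configurations."""
    nets = set().union(*predictions)
    voted = {}
    for net in nets:
        t = sum(p.get(net) == 'Trojan' for p in predictions)
        voted[net] = 'Trojan' if t > len(predictions) - t else 'Free'
    return voted
```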
\cref{fig:vipr_framework_merged_comb_func,fig:vipr_framework_merged_seq} show how the VIPR framework with ML models trained on combinational and sequential Trojans reduces false positives with the help of the proposed post-processing routines. Here, a selected set of features is used to derive all the results. We can notice a downward trend for each set of results. In Fig.~\ref{fig:vipr_framework_merged_seq}, we can observe that the number of false positives is relatively high, as the types of sequential Trojans used in training are mainly some type of FSMs or counters, whereas the publicly available sequential Trojans contain only a few flip-flops, which are just used to latch the value in the Trojan body. Particularly for design RS232-T1200 (labeled as T1200), four flip-flops are used to latch the value in the Trojan body. The ML model trained on sequential Trojans can detect these trigger nets driven by flip-flops. But in FSA post-processing, these trigger nets are eliminated from the set of Trojan nets as their probability feature values fall in the Free feature range. Additionally, the combinational and sequential controllability (with the no-scan assumption) values also fall below the thresholds ($\theta_{cc}$ and $\theta_{sc}$) derived from the average SCOAP values of Free nets. We can observe from \cref{fig:vipr_framework_merged_seq_func_struct,fig:vipr_framework_merged_seq} the effect of feature selection when sequential Trojans are used for training the model. The performance of the ML model is reduced when both structural and functional features are used to train the ML model. We believe this reduction in performance is the result of strict structural features, which do not necessarily aid in the identification of these Trojan classes. Additionally, some of the features, like distance from flip-flop inputs, distance from flip-flop outputs, etc., get negatively affected by the inserted sequential Trojans.
Using the subset of features identified during feature optimization, the results for sequential training are more acceptable, as the feature selection removes the negative bias introduced by some of the structural features. A similar phenomenon can be observed when combinational training is used to classify the Trojan nets. We observe that the effect of structural features is less impactful for combinational training when compared to sequential training. However, the results without feature selection for combinational training introduce additional false positives, as it becomes difficult for the ML model to learn the diverse feature space. We believe structural features will be more useful for side-channel or always-on Trojans. As shown in \cref{tab:final_voted_results}, the proposed approach can detect suspicious parts in the design with few false positives and false negatives for combinational (C) as well as sequential (S) Trojans present in various Trust-HUB designs. The first two columns show the final set of results, which are graphically represented by the fourth bar of each of the individual designs in Fig.~\ref{fig:vipr_framework_merged_comb_func} and Fig.~\ref{fig:vipr_framework_merged_seq}. The third column is obtained by performing voting across all (3 combinational and 3 sequential types of training databases) FSA post-processed VIPR results. In the case of equal votes for Trojan and Free nets, the net is considered a Free net. The fourth column represents the results presented in \cite{hoque2018hardware}. Here, we can see that our proposed framework performs better, as our approach considers design-specific bias while performing the ML model training. We observed up to a 92.85\% reduction in the false-positive numbers with the application of the proposed post-processing algorithms. The second to last column represents our implementation of the SC-COTD \cite{sc_cotd} approach.
Here, we can observe that the number of false positives is high compared to the proposed approach. By analyzing the features of the nets, we observe that multiple Free nets which are in the vicinity of the Trojan nets have a similar set of feature values. Hence, using only sequential and combinational SCOAP features is not enough to detect the Trojan nets. \begin{figure}[h] \centering \includegraphics[width=0.9\linewidth]{figures/trojan_ckt_t1000_final.png} \caption{VIPR framework output for the RS232-T1000 design with a combinational 5-trigger Denial of Service (DoS) Trojan trained ML model } \label{fig:vipr_framework_op} \end{figure} \cref{fig:vipr_framework_op} shows the extracted Trojan circuit for the suspicious design RS232-T1000. Here, 5-triggered combinational Trojans are inserted in the suspicious design to create the training data, and the entire framework is run to get the final hyper-graph as shown in Fig.~\ref{fig:vipr_framework_op}. As shown in the figure, all the false positive nets are connected to the original Trojan body. For all the false-negative Trojan nets, their feature values are similar to those of Free nets. Hence, classifying these nets as Trojan nets is not possible. Along with the selected features, the effect of the remaining features on the evaluations of the framework was also observed. For example, adding fanin and fanout features to the proposed feature set increased the number of false positives and negatives in the ML predictions. Also, we observed that the distribution of structural features for various Trust-Hub Trojan-inserted designs is not consistent. But these features give a good insight into the design space that can be used by an attacker to insert the Trojan. In future work, these additional features will be used in post-processing routines to remove some of the false positives. \section{Conclusion} We have presented VIPR, a hardware Trojan detection method for 3PIP trust verification.
We utilize the pseudo-self-referencing approach for hardware Trojan detection, which takes advantage of the design under test to create the training database. This process enables any machine learning model to learn the features of the suspicious design instead of learning features from a database generated using different Trojan-free designs, as discussed in \cite{hoque2018hardware}. We performed a detailed analysis of multiple structural and functional features that can be used to train an ML model. Finally, we introduced a three-level post-processing procedure to further enhance the results by localizing the Trojan body and removing the false positives. We have performed an extensive evaluation of VIPR for a set of benchmark designs and Trojan instances, which shows promising detection accuracy. Future work will involve extension of the feature set to improve the accuracy further, consideration of emerging Trojan attacks, and integration of VIPR into commercial tool flows. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec_intro} Laboratory astrophysics is a research field where space and astrophysical phenomena are reproduced experimentally in laboratories. In space plasmas, in-situ measurements with spacecraft provide us with local and microscopic information on plasmas and electric/magnetic fields; however, it is hard to observe the global structure of phenomena. In contrast, while imaging of astrophysical objects with telescopes provides us with global and macroscopic information on phenomena, there is no local and microscopic information on the plasmas since they are inaccessible. In laboratories, we can access both global and local information at the same time \cite{kuramitsu2012ppcf}. Besides this, while we have to image the emissions from the astrophysical phenomena, we can use external light sources and particle beams to diagnose the phenomena in laboratories. These are significant advantages of laboratory astrophysics, and such measurements are highly challenging in space and astrophysical plasmas \cite{kuramitsu2012ppcf,sakawa2016apx}. For instance, we have investigated collisionless shocks and magnetic reconnections relevant to space and astrophysical phenomena, such as supernova remnants, Earth's bow shock, solar flares, stellar winds, and aurorae, with the Gekko XII (GXII) laser facility \cite{kuramitsu2011prl, kuramitsu2018ncom,morita2013pop,morita2019pop,bolouki2019hedp,sakawa2017hedp}; however, the number of shots is very limited due to the low repetition rate of GXII (a few shots per day), and only several large facilities like GXII exist in the world. Therefore, the opportunities for experiments on laboratory astrophysics are limited. We are motivated to use high-repetition tabletop lasers, since there are many more such facilities with which to obtain much more data on laboratory astrophysics. We also extend laboratory astrophysics with short pulse lasers.
As the first step, we match the intensity of the tabletop lasers to that of high power lasers and confirm that plasmas with similar density and temperature can be obtained. We will study the plasma dynamics in the future. As mentioned above, in laboratories global and local information of phenomena can be obtained simultaneously. Collective Thomson scattering (CTS) is a unique tool to measure local plasma quantities, namely, the density, velocity, and temperature of both electrons and ions \cite{morita2013pop, morita2019pop, bolouki2019hedp,sakawa2017hedp,ross2012pop}. A conventional CTS analysis assumes plasmas to be linear, stationary, in equilibrium, and stable \cite{froula2011}; however, many high energy space and astrophysical phenomena, as well as laser-produced plasmas relevant to laboratory astrophysics, such as collisionless shocks and magnetic reconnections, are highly nonlinear, non-stationary, and non-equilibrium, and show various kinds of instabilities. Hence, the conventional CTS analysis may not be appropriate for such plasmas. For example, in the presence of a high-Mach number collisionless shock, some part of the upstream plasma can be reflected at the shock front, and thus, in the upstream region of a collisionless shock two-stream plasmas are often observed \cite{burgess2012ssr}. The two-stream plasmas can be unstable, and various wave activities resulting from the instabilities play essential roles in particle acceleration and the generation of cosmic rays. We have analytically as well as numerically investigated the CTS in nonlinear, non-stationary, non-equilibrium, and unstable plasmas \cite{matsukiyo2016jpcs}. We further develop the CTS analysis for such non-equilibrium plasmas. So far, Thomson scattering from non-Maxwellian plasmas has been extensively investigated for super-Gaussian \cite{zheng1997pop,milder2019pop}, Spitzer-H\"{a}rm \cite{henchen2018prl,henchen2019pop}, and kappa distribution functions \cite{saito2000ag}.
In this study, we consider two-plasma states as an example of such non-equilibrium plasmas and focus on the CTS spectrum in the presence of the two-stream instability as well as of high energy components. The past investigations of non-Maxwellian distribution functions \cite{zheng1997pop,milder2019pop,henchen2018prl,henchen2019pop,saito2000ag} focus on distribution functions symmetric about $v=0$. We consider here the two-stream plasmas that are often seen in the upstream of collisionless shocks, where the distribution function is asymmetric. The two-stream plasmas and the resultant instabilities are significant since the relevant wave dynamics play essential roles in particle acceleration. One of our long term goals is the investigation of the origins of cosmic rays; we would like to understand the particle acceleration at collisionless shocks in a controlled manner in laboratories. To this end, in this paper we study the two-plasma states either with different drift velocities or with different temperatures. The latter can express a plasma with a high energy component. In reality in space it should be a mixture of these two and can be much more complicated. We start from the well-established CTS theory and extend it with two Maxwellian distributions as a typical example of upstream plasmas, which can be unstable or have a high energy component. Observing the instabilities and high energy components in collisionless shocks via CTS will be an essential step toward understanding the particle acceleration in the universe. When the velocity difference of the two plasmas is not large, the electron distribution functions overlap each other. The Landau damping at the two peaks of CTS will be different from that in the presence of a single plasma, and the CTS spectra will change their form. Moreover, when the velocity difference of the two plasmas is larger than the thermal velocity of the plasmas, two-stream instabilities grow \cite{francis1986pof}.
It is considered that one of the peaks in the CTS spectrum is enhanced when the two-stream instability takes place \cite{matsukiyo2016jpcs}. In order to understand the CTS from non-equilibrium plasmas, we theoretically investigate the CTS spectra from two-stream plasmas in Sec. \ref{sec_theory}. In Sec. \ref{sec_simulation}, we numerically investigate the CTS in the presence of the two-stream instability. In Sec. \ref{sec_experiment}, we introduce our experimental approach to verify the non-equilibrium CTS in laser produced plasmas. To verify CTS in non-equilibrium plasmas, large laser facilities are not convenient due to the low repetition rate of the lasers. In Sec. \ref{sec_summary} we summarize our research. \section{Theoretical spectrum from two-stream plasmas} \label{sec_theory} \begin{figure*} \centering \includegraphics[clip,width=\hsize]{fig_1.pdf} \caption{(a) Schematic image of CTS spectra with a single plasma. (b) Dispersion relations of light, Langmuir, and ion acoustic waves in a single plasma. (c) Schematic image of CTS spectra in two-stream plasmas. (d) Dispersion relations in two-stream plasmas. The two-stream instability grows in the orange region. When the region corresponds to one of the peaks of CTS, the peak is enhanced as shown in (c).} \label{fig_1} \end{figure*} Figures~\ref{fig_1}~(a) and (b) show the schematic images of CTS spectra and the dispersion relation in the presence of a single plasma. An incident light wave with the frequency $\omega_I$ and the wavenumber $k_I$ can be parametrically scattered by the Langmuir waves and by the ion acoustic waves.
As shown in Fig.~\ref{fig_1}~(a), while the light scattered by the ion acoustic waves, corresponding to the CTS ion feature, has the higher peak intensity ($I$) and the narrower spectral width, the light scattered by the Langmuir waves, corresponding to the CTS electron feature, has the lower and broader spectra, where the horizontal axis $\Delta k \equiv k_S-k_I$ shows the wavenumber difference between the scattered and incident waves and the subscripts $L$ and $R$ represent left and right. In the collective regime, there are two peaks in each feature coming from the scattering by the waves propagating in the same and opposite directions. For instance, the left peak of the electron feature in Fig.~\ref{fig_1}~(a) comes from the resonant interaction between the incident light, the scattered light $(k_{SL}, \omega_{SL})$, and the Langmuir wave $(k_{L}, \omega_{L})$ forming a parallelogram in Fig.~\ref{fig_1}~(b), where the incident and the Langmuir mode co-propagate. Similarly, the right peak of the electron feature comes from the other parallelogram composed by the incident $(k_{I}, \omega_{I})$, scattered $(k_{SR}, \omega_{SR})$, and Langmuir $(k_{R}, \omega_{R})$ waves in Fig.~\ref{fig_1}~(b), where the incident and the Langmuir mode counter-propagate. Figures~\ref{fig_1}~(c) and (d) show the same as Figs.~\ref{fig_1}~(a) and (b) except with the two-stream instability. The velocity of the beam component of the plasma is expressed by the oblique line with the velocity $v_{b}$ in Fig.~\ref{fig_1}~(d). When the line intersects with the Langmuir branch, the two-stream instability can grow. Since the electron feature of CTS is the resonant interaction between the incident electromagnetic, electron plasma (Langmuir), and scattered electromagnetic waves, the amplitude of the scattered wave, or the peak intensity of CTS, is proportional to the density fluctuation of the Langmuir waves. The Langmuir waves enhanced by the two-stream instability will enhance the scattered wave amplitude.
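In the standard CTS picture, the parallelograms in Figs.~\ref{fig_1}~(b) and (d) encode the wave matching (frequency and wavenumber conservation) of the three-wave process:

```latex
\begin{equation}
k_{S} = k_{I} \pm k, \qquad \omega_{S} = \omega_{I} \pm \omega,
\end{equation}
```

where $(k, \omega)$ lies on the Langmuir (or ion acoustic) branch, so that the two peaks of each feature appear at $\Delta k = k_{S} - k_{I} = \pm k$.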
When the phase velocity of the plasma wave observed in CTS is in the unstable region in Fig.~\ref{fig_1}~(d), which is expressed as $\omega / k \sim v_{b}$, the plasma wave grows and the corresponding peak of CTS can be enhanced as shown in Fig.~\ref{fig_1}~(c). Since the electron distribution function becomes non-Maxwellian with a beam component of the plasma, the shape of the electron distribution function also changes the CTS spectra. The two-stream instability itself is not included in the theory; we simply include two plasma components in the theory. In this section, we discuss the CTS spectrum with electron distribution functions different from a Maxwellian. We consider two cases where two plasmas coexist either with a finite relative velocity or with a finite temperature difference. We calculate the scattering form factor assuming two-stream plasmas. The spectrum shape of Thomson scattering is related to the scattering form factor, which is expressed as \begin{equation} S(k,\omega) = \frac{2\pi}{k} \left[\left|1-\frac{\chi_{e}}{\epsilon}\right|^{2} f_{e} \left(\frac{\omega }{k}\right) + Z \left|\frac{\chi_{e}}{\epsilon}\right|^{2} f_{i} \left(\frac{\omega}{k}\right)\right], \label{eq_form} \end{equation} where $k$, $\omega$, $\chi_{e}$, $\epsilon$, $Z$, $f_{e}(v)$, and $f_{i}(v)$ are the scattering wavenumber, scattering frequency, electron susceptibility, permittivity, ion valence, electron distribution function, and ion distribution function, respectively \cite{froula2011}. This formula assumes a quasi-equilibrium plasma. In this paper, we consider only the electron feature of CTS, and ignore the second term of the right-hand side.
We assume the electron distribution function to be a superposition of two Maxwellian distributions, written as \begin{equation} f_{e 1+2}(v) = \sum_{j} \frac{n_{ej}}{n_{e}} f_{ej}(v), \label{eq_fe} \end{equation} where $n_{ej}$ and $f_{ej}(v)$ are the electron density and electron distribution function of the $j$-th plasma species, and $n_{e}$ is the total electron density. The electron distribution function of the $j$-th plasma is given by $f_{ej}(v) = \sqrt{1/(\pi v_{tej}^{2})} \exp (-(v-v_{dj})^{2}/v_{tej}^{2})$, where $v_{tej}$ and $v_{dj}$ are the $j$-th electron thermal velocity and drift velocity, respectively. The $j$-th electron thermal velocity is written as $v_{tej} = \sqrt{2 k T_{ej} / m_{e}}$, where $T_{ej}$ is the $j$-th electron temperature. With this electron distribution function, the electron susceptibility is given by \begin{equation} \begin{split} \chi_{e} & = \frac{4 \pi e^{2} n_{e}}{m_{e} k^{2}} \int_{-\infty}^{\infty} \frac{\frac{\partial f_{e 1+2}}{\partial v}}{\frac{\omega}{k} - v} dv \\ & = -\frac{4 \pi e^{2} }{ m_{e} k^{2}} \sum_{j} \left[ \frac{n_{ej}}{v_{tej}^{2}} Z'\left(\frac{\frac{\omega}{k} - v_{dj}}{v_{tej}}\right) \right ], \label{eq_chie} \end{split} \end{equation} where $e$, $m_{e}$, and $Z'(\xi)$ are the elementary charge, electron mass, and the derivative of the plasma dispersion function, respectively. Since the integrand has a pole at $v=\omega /k$ on the integration path, the imaginary part is given by the residue at the pole and the real part by the principal-value integral. As Eq.~(\ref{eq_chie}) shows, the imaginary part of the electron susceptibility is proportional to the derivative of the electron distribution function, and this value represents Landau damping \cite{froula2011}.
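For numerical evaluation, Eq.~(\ref{eq_chie}) can be expressed through the Faddeeva function, since the plasma dispersion function satisfies $Z(\xi) = i\sqrt{\pi}\,w(\xi)$ and $Z'(\xi) = -2[1+\xi Z(\xi)]$. The following is a minimal Python sketch, not the code used for the figures, that evaluates the electron susceptibility and the electron-feature part of Eq.~(\ref{eq_form}) for drifting Maxwellian components; the normalized units ($c=1$) and function names are ours.

```python
import numpy as np
from scipy.special import wofz


def Z(xi):
    """Plasma dispersion function, Z(xi) = i*sqrt(pi)*w(xi)."""
    return 1j * np.sqrt(np.pi) * wofz(xi)


def Zprime(xi):
    """Derivative of the plasma dispersion function, Z'(xi) = -2*(1 + xi*Z(xi))."""
    return -2.0 * (1.0 + xi * Z(xi))


def chi_e(omega, k, species):
    """Electron susceptibility summed over drifting Maxwellian components.

    species: iterable of (omega_pej, v_tej, v_dj) in consistent units;
    each term is -(omega_pej^2 / (k^2 v_tej^2)) * Z'((omega/k - v_dj)/v_tej),
    which is the per-species form of the susceptibility above.
    """
    return sum(-(wpe**2 / (k**2 * vte**2)) * Zprime((omega / k - vd) / vte)
               for wpe, vte, vd in species)


def S_electron(omega, k, species):
    """Electron-feature part of the scattering form factor (ion term dropped)."""
    wpe_tot_sq = sum(wpe**2 for wpe, _, _ in species)
    fe = sum((wpe**2 / wpe_tot_sq)  # n_ej / n_e, since omega_pe^2 scales with n
             * np.exp(-((omega / k - vd) / vte)**2) / (np.sqrt(np.pi) * vte)
             for wpe, vte, vd in species)
    chi = chi_e(omega, k, species)
    eps = 1.0 + chi  # electron contribution only
    return (2.0 * np.pi / k) * np.abs(1.0 - chi / eps)**2 * fe
```

In the high-phase-velocity limit the susceptibility reduces to $-\omega_{pe}^{2}/\omega^{2}$, which provides a quick consistency check of the implementation.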
When the relative drift velocity between the two plasmas, $\Delta v = |v_{d1} - v_{d2}|$, is much larger than the thermal velocities ($\Delta v \gg v_{tej}$), the scattering form factor is not appropriate to express the CTS spectrum because of the quasi-equilibrium assumption. Nevertheless, we also calculate the case with $\Delta v \gg v_{tej}$ for comparison. \begin{figure*} \centering \includegraphics[clip,width=\hsize]{fig_2.pdf} \caption{(a)-(c) Theoretical CTS spectra from two-stream plasmas. (d)-(f) Electron susceptibilities. (g)-(i) Electron distribution functions. (a), (d), (g) $v_{d2} = 0$, (b), (e), (h) $v_{d2} = 0.01 c$ $(\Delta v / v_{te1} = 1.6)$, (c), (f), (i) $v_{d2} = 0.02 c$ $(\Delta v / v_{te1} = 3.2)$. The dotted vertical lines in (a)-(f) and (g)-(i) represent the wavenumbers and phase velocities of the CTS peaks with the $j=1$ plasma, respectively.} \label{fig_2} \end{figure*} First, we consider the case with a finite relative drift velocity. In Fig.~\ref{fig_2}, we plot Eqs.~(\ref{eq_form}), (\ref{eq_fe}), and (\ref{eq_chie}), changing the relative drift velocity between the two plasmas while keeping the other parameters the same. The solid, dashed, and dotted curves in Figs.~\ref{fig_2}~(a)-(c) represent the scattering form factors from the two plasmas considering the overlap of the electron distribution functions ($S_{1+2}$), from the $j=1$ plasma ($S_{1}$), and from the $j=2$ plasma ($S_{2}$), respectively. The solid and dashed curves in Figs.~\ref{fig_2}~(d)-(f) represent the real and imaginary parts of the electron susceptibility, respectively. The horizontal axes in Figs.~\ref{fig_2}~(a)-(f) are the difference between the scattered and incident wavenumbers, $\Delta k = k_{S} - k_{I}$. The solid, dashed, and dotted curves in Figs.~\ref{fig_2}~(g)-(i) represent the electron distribution functions, $f_{e 1+2}$, $f_{e1}$, and $f_{e2}$, respectively, which are normalized as $\int_{-\infty}^{\infty} f_{e 1+2}(v/c) dv = 1$.
We set the plasma parameters as $T_{e1}=10~\mathrm{eV}$, $T_{e2}=10~\mathrm{eV}$, $n_{e1}=1\times10^{18}~\mathrm{cm^{-3}}$, $n_{e2}=1\times10^{17}~\mathrm{cm^{-3}}$, $v_{d1}=0$, and $\theta = 80~\mathrm{degrees}$, where $\theta$ is the scattering angle. We change $v_{d2}$ to $0$, $0.01c$, and $0.02c$, where $c$ is the speed of light. The scattering parameter $\alpha$ is given by $1/(k\lambda_{D})$, where $\lambda_{D}$ is the Debye length. When $\alpha \gtrsim 1$, Thomson scattering from collective plasma waves is dominant \cite{froula2011}. When $\alpha \ll 1$, Thomson scattering is non-collective and comes from the random electron motions \cite{froula2011}. Collective and non-collective scattering show two peaks and a single peak in their spectra, respectively. As shown in Fig.~\ref{fig_1}~(b), the peak scattered frequency $\omega_{S}$ and wavenumber $k_{S}$ are determined from the intersections between the light waves $\left( \omega^{2} = \omega_{pe}^{2} + c^{2}k^{2} \right)$ and the shifted Langmuir waves $\left[(\omega - \omega_{I})^{2} = \omega_{pe}^{2} + v_{te}^{2}(k-k_{I})^{2} \right]$, where $\omega_{pe}$, $\omega_{I}$, $k_{I}$, and $v_{te}$ are the electron plasma frequency, incident frequency, incident wavenumber, and $\sqrt{3/2}\, v_{te1}$, respectively. The scattered wavenumber is calculated from the dispersion relations with the assumption of $\omega \sim \pm ck$, \begin{equation} \begin{split} k_{S} \sim & \frac{c^{2} - v_{te}^{2} \cos \theta}{c^{2} - v_{te}^{2}} k_{I} \\ & \pm \sqrt{\frac{4 v_{te}^{2} \sin^{2} (\theta/2) \left[ c^{2} - v_{te}^{2} \cos^{2} (\theta/2) \right] }{ \left( c^{2} - v_{te}^{2} \right)^{2}} k_{I}^{2} + \frac{\omega_{pe}^{2}}{c^{2} - v_{te}^{2}}}. \label{eq_k} \end{split} \end{equation} The dotted vertical lines in Figs.~\ref{fig_2}~(a)-(f) represent the peak scattered wavenumbers of the $j=1$ plasma.
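Equation~(\ref{eq_k}), together with the phase velocity $(\omega_S-\omega_I)/|{\bf k}_S-{\bf k}_I|$ that follows from the light-wave dispersion relation above, is straightforward to evaluate numerically. The sketch below uses our own notation with $c=1$; in the cold limit $v_{te}\to0$ it reduces to $k_S = k_I \pm \omega_{pe}/c$, as expected.

```python
import numpy as np


def k_scattered(k_I, omega_pe, v_te, theta, c=1.0):
    """Peak scattered wavenumbers (+/- branches) of the k_S formula above."""
    pref = (c**2 - v_te**2 * np.cos(theta)) / (c**2 - v_te**2)
    rad = (4.0 * v_te**2 * np.sin(theta / 2.0)**2
           * (c**2 - v_te**2 * np.cos(theta / 2.0)**2)
           / (c**2 - v_te**2)**2 * k_I**2
           + omega_pe**2 / (c**2 - v_te**2))
    root = np.sqrt(rad)
    return pref * k_I + root, pref * k_I - root


def v_phase(k_I, k_S, omega_pe, theta, c=1.0):
    """Phase velocity of the probed Langmuir wave,
    (omega_S - omega_I) / |k_S - k_I|, with both frequencies on the
    light-wave branch omega^2 = omega_pe^2 + c^2 k^2."""
    omega_S = np.sqrt(omega_pe**2 + (c * k_S)**2)
    omega_I = np.sqrt(omega_pe**2 + (c * k_I)**2)
    dk = np.sqrt(k_I**2 + k_S**2 - 2.0 * k_I * k_S * np.cos(theta))
    return (omega_S - omega_I) / dk
```

Because $d\omega/dk = c^2 k/\omega < c$ on the light-wave branch, the returned phase velocity is always bounded by $c$ in magnitude, positive for the up-shifted branch and negative for the down-shifted one.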
The peak phase velocities of the Langmuir waves are written as \begin{equation} v_{pj} = \frac{\omega_{S}-\omega_{I}}{|{\bf k}_{\mathrm{S}}-{\bf k}_{\mathrm{I}}|} = \frac{\sqrt{\omega_{pe}^{2} + c^2 k_{S}^2} - \sqrt{\omega_{pe}^{2} + c^2 k_{I}^2}}{\sqrt{k_{I}^{2} + k_{S}^{2} -2 k_{I} k_{S} \cos \theta}}. \label{eq_vphi} \end{equation} The dotted vertical lines in Figs.~\ref{fig_2}~(g)-(i) represent the phase velocities of the corresponding peaks of the CTS spectra (Figs.~\ref{fig_2}~(a)-(c)) in the $j=1$ plasma. The exact peaks of Eq.~(\ref{eq_form}) are determined by the absolute value term, $|1-(\chi_{e}/\epsilon)|^{2}$. The permittivity is approximated as $1 + \chi_{e}$ at the peak wavenumber of the electron feature, and then the absolute value term becomes $1/|1 + \chi_{e}|^{2} = 1/\left((1 + \mathrm{Re}(\chi_{e}))^{2} + \mathrm{Im}(\chi_{e})^{2}\right)$. The peaks are at the wavenumbers with the smallest denominator of the absolute value term, i.e.~$\mathrm{Re}(\chi_{e}) \sim -1$ and $\mathrm{Im}(\chi_{e}) \sim 0$. The horizontal solid and dashed lines in Figs.~\ref{fig_2}~(d)-(f) represent $\chi_{e} = -1$ and $0$, respectively. In the presence of a single plasma, this corresponds to the resonance condition, $\epsilon = 0$. Figures~\ref{fig_2}~(a), (d), and (g) show the results without the relative drift, i.e.~the two peaks are nearly symmetric. As shown in Fig.~\ref{fig_1}~(b), the phase velocity of the Langmuir wave in the right peak ($\omega_{R}/k_{R}$) is slightly different from that in the left peak ($\omega_{L}/k_{L}$). This results in the difference in Landau damping between the two peaks. The wavenumbers where $\mathrm{Re}(\chi_{e}) \sim -1$ and $\mathrm{Im}(\chi_{e}) \sim 0$ slightly deviate from the vertical dotted lines since the total electron density is $1.1$ times larger than that of the $j=1$ plasma. Figures~\ref{fig_2}~(b), (e), and (h) are the cases when $0<v_{d2}<v_{p1}$.
The derivative of the electron distribution function at the peak phase velocity in $v>0$ in Fig.~\ref{fig_2}~(h) is steeper than that in Fig.~\ref{fig_2}~(g), and the peak intensity of the solid curve for $\Delta k > 0$ in Fig.~\ref{fig_2}~(b) is lower than that of the dashed one. The right peak is attenuated by Landau damping when $0<v_{d2}<v_{p1}$. Figure~\ref{fig_2}~(e) shows $\mathrm{Re}(\chi_{e}) \sim -1$ and $\mathrm{Im}(\chi_{e}) > 0$ around the dotted vertical line for $\Delta k > 0$, resulting in peak broadening and attenuation. As the right peak intensity is low despite the larger electron distribution function at the peak phase velocity, the effect of Landau damping is considerably larger than that in Fig.~\ref{fig_2}~(g). Figures~\ref{fig_2}~(c), (f), and (i) show the case when $v_{d2}>v_{p1}$ and $v_{d2} \gtrsim v_{te1}$. The derivative of the electron distribution function at the peak phase velocity in $v>0$ is positive in Fig.~\ref{fig_2}~(i) (opposite to Figs.~\ref{fig_2}~(g) and (h)). Thus, the right peak could be enhanced by Landau resonance; however, no such amplification appears in comparison with Fig.~\ref{fig_2}~(a). Wave growth is not included in the conventional CTS theory. Note that the last case ($v_{d2} = 0.02 c$) is unstable and will change the electron distribution function in a short time. In order to consider the nonlinear evolution of the two-stream instability, numerical simulations are necessary. Comparing the solid, dashed, and dotted curves in Figs.~\ref{fig_2}~(a)-(c) and Figs.~\ref{fig_3}~(a)-(f), $S_{1+2}$ (solid curves) differs from the simple sum of $S_{1}$ and $S_{2}$ (dashed and dotted curves). The difference between $S_{1+2}$ and $S_{1} + S_{2}$ comes from the electron susceptibility in Eq.~(\ref{eq_chie}). The electron susceptibility is proportional to the derivative of the electron distribution function, which is the factor that determines Landau damping.
Thus, Landau damping is a major factor in determining the difference between the spectral shapes of $S_{1+2}$ and $S_{1} + S_{2}$ in Fig.~\ref{fig_2}. When $v_{d2} < v_{te1}$, the effect of Landau damping appears to be appropriately captured. However, when there is a positive derivative in the electron distribution function, wave growth due to Landau resonance is not taken into account; the peak is damped in Fig.~\ref{fig_2}~(c). Moreover, as shown in Appendix \ref{sec_appendix}, the scattering form factor has nothing to do with the instability but is mathematically determined. Although the scattering form factor assumes a quasi-equilibrium plasma, the electron distribution function shown in Fig.~\ref{fig_2}~(i) is rather unstable. Such electron distribution functions change in a short time, and the scattering form factor is not appropriate for such plasmas. In such cases, we need numerical simulations. The sign of the derivative of the distribution function, which determines wave damping and growth, is not considered in the theory, since the squared absolute values of the susceptibility and permittivity are calculated. The conventional theory describes only Landau damping. The squared absolute value comes from the ensemble average of the electron density, which defines the scattering form factor and is given by \begin{equation} S({\bf k}, \omega) \equiv \lim_{V\rightarrow \infty,T\rightarrow \infty} \frac{1}{VT} \left \langle \frac{n_e({\bf k}, \omega)\, n_e^*({\bf k}, \omega)}{n_{e0}} \right \rangle , \end{equation} where $n_{e0}$, $V$, and $T$ represent the mean electron density, scattering volume, and scattering time, respectively \cite{froula2011}. In order to construct a completely non-equilibrium theory, we must start with the most fundamental equation, i.e.~the Vlasov equation. \begin{figure*} \centering \includegraphics[clip,width=\hsize]{fig_3.pdf} \caption{(a)-(f) Theoretical CTS spectra from two-stream plasmas changing the temperature. (g)-(i) Electron distribution functions.
(a)-(c) Collective scattering ($n_{e1} = 1 \times 10^{17} ~ \mathrm{cm^{-3}}$), (d)-(f) non-collective scattering ($n_{e1} = 1 \times 10^{16} ~ \mathrm{cm^{-3}}$). (a), (d), (g) $T_{e2} = 10~\mathrm{eV}$, (b), (e), (h) $T_{e2} = 3~\mathrm{eV}$, (c), (f), (i) $T_{e2} = 100~\mathrm{eV}$.} \label{fig_3} \end{figure*} Now we consider electron distribution functions with the same drift velocity but with different temperatures. This corresponds both to the high-energy component of a distribution function ($T_{e1} < T_{e2}$) and to our experiment shown later ($T_{e1} > T_{e2}$). In Fig.~\ref{fig_3}, we plot Eqs.~(\ref{eq_form}) and (\ref{eq_fe}), changing $T_{e2}$ while keeping the other parameters the same. We set the plasma parameters as $T_{e1}=10~\mathrm{eV}$, $v_{d1}=v_{d2}=0$, and $\theta = 30~\mathrm{degrees}$. Figures~\ref{fig_3}~(a)-(c) and Figs.~\ref{fig_3}~(d)-(f) show the cases of collective scattering ($n_{e1} = 1 \times 10^{17} ~ \mathrm{cm^{-3}}$ and $\alpha_{1} = 2.2$) and rather non-collective scattering ($n_{e1} = 1 \times 10^{16} ~ \mathrm{cm^{-3}}$ and $\alpha_{1} = 0.7$), respectively. The rather non-collective scattering is shown here because the scattering from the high-energy or non-thermal component in the upstream plasma of a collisionless shock can be non-collective. The ratio of $n_{e2}$ to $n_{e1}$ is 0.3. We change $T_{e2}$ to 10 (for reference), 3, and 100 eV. Figures~\ref{fig_3}~(g)-(i) show the normalized electron distribution functions common to both cases. First, we consider collective scattering. In Fig.~\ref{fig_3}~(a), $T_{e1} = T_{e2}$ and $\alpha_{2} = 1.2$, and thus the scattering from the $j=2$ plasma is also collective. The three curves in Fig.~\ref{fig_3}~(a) show two peaks. In Fig.~\ref{fig_3}~(b), $T_{e1} > T_{e2}$ and $\alpha_{2} = 2.2$.
Although the electron distribution function $f_{e 1+2}$ in Fig.~\ref{fig_3}~(h) has more low-energy electrons than that in Fig.~\ref{fig_3}~(g), the spectra are similar to those in Fig.~\ref{fig_3}~(a). Figure~\ref{fig_3}~(c) shows the case when $T_{e1} < T_{e2}$ and $\alpha_{2} = 0.38$. As $f_{e 1+2}$ in Fig.~\ref{fig_3}~(i) has a larger high-energy component than that in Fig.~\ref{fig_3}~(g), $S_{1+2}$ in Fig.~\ref{fig_3}~(c) is asymptotic to $S_{2}$ when $|\Delta k| \gg 0$. Now, we consider non-collective scattering. In Fig.~\ref{fig_3}~(d), while $S_{1}$ and $S_{2}$ show a single peak, $S_{1+2}$ shows two peaks. As $\alpha_{2} = 0.38$ in Fig.~\ref{fig_3}~(d), $S_{2}$ is also non-collective. Figure~\ref{fig_3}~(e) shows the case when $T_{e1} > T_{e2}$ and $\alpha_{2} = 0.7$. The two peaks of $S_{1+2}$ in Fig.~\ref{fig_3}~(e) are more prominent than those in Fig.~\ref{fig_3}~(d). Figure~\ref{fig_3}~(f) shows the case when $T_{e1} < T_{e2}$ and $\alpha_{2} = 0.12$, where $S_{1+2}$ is asymptotic to $S_{2}$ as in Fig.~\ref{fig_3}~(c). When both plasmas are at rest but with $T_{e1} > T_{e2}$ in collective scattering, since the derivatives of $f_{e 1+2}$ in Fig.~\ref{fig_3}~(h) at the resonant velocities are not greatly different from those in Fig.~\ref{fig_3}~(g), the second plasma has a rather minor effect, as seen in Figs.~\ref{fig_3}~(a) and (b). In Fig.~\ref{fig_3}~(d), $S_{1+2}$ shows two peaks, which are the characteristics of collective scattering, although $S_{1}$ and $S_{2}$ are rather non-collective. Since the total electron density is larger than those of $S_{1}$ and $S_{2}$, $\alpha$ is higher and the scattering becomes rather collective. When $T_{e1} > T_{e2}$, the electron distribution function $f_{e 1+2}$ in Fig.~\ref{fig_3}~(h) has a lower effective temperature and a larger effective $\alpha$ than that in Fig.~\ref{fig_3}~(g), resulting in two clearer peaks in Fig.~\ref{fig_3}~(e) than in Fig.~\ref{fig_3}~(d).
Regardless of whether the scattering is collective or non-collective, $S_{1+2}$ asymptotically approaches $S_{2}$ when $T_{e1} \ll T_{e2}$, as shown in Figs.~\ref{fig_3}~(c) and (f). \section{Numerical simulations} \label{sec_simulation} \begin{figure} \centering \includegraphics[clip,width=\hsize]{fig_4.pdf} \caption{Numerical results when $v_{d2}=0$. (a) Spatio-temporal evolution of the electron density fluctuation from the PIC simulation. (b) Distribution functions from the PIC simulation. (c) Dispersion relation from the FDTD simulation. (d) CTS spectra from the simulation and theory (Eq.~(\ref{eq_form})).} \label{fig_4} \end{figure} \begin{figure} \centering \includegraphics[clip,width=\hsize]{fig_5.pdf} \caption{Numerical results when $v_{d2}=0.01c$ $(\Delta v / v_{te1} = 1.6)$. (a) Electron density fluctuation. (b) Distribution functions. (c) Dispersion relation. (d) CTS spectra.} \label{fig_5} \end{figure} \begin{figure} \centering \includegraphics[clip,width=\hsize]{fig_6.pdf} \caption{Numerical results when $v_{d2}=0.02c$ $(\Delta v / v_{te1} = 3.2)$. (a) Electron density fluctuation. (b) Distribution functions. (c) Dispersion relation. (d) CTS spectra.} \label{fig_6} \end{figure} Although we considered two-stream plasmas in the previous section, the conventional CTS theory assumes a plasma not far from equilibrium. Here we calculate the CTS spectrum directly by solving the wave equation. In the previous study \cite{matsukiyo2016jpcs}, the wave equation of the scattered waves was numerically solved by giving the wave spectra of the incident electromagnetic wave and the longitudinal fluctuations of a plasma, which were assumed before and after the development of the two-stream instability. In this study, we numerically calculate the density fluctuations in two-stream plasmas using particle-in-cell (PIC) simulations in order to express the growth of the two-stream instability.
Using the density fluctuations obtained from the PIC simulations, we solve the wave equation of the scattered waves with finite-difference time-domain (FDTD) simulations. We numerically calculate the two-stream interactions and obtain the temporal and spatial evolution of the electron density fluctuations. We performed PIC simulations with the EPOCH open source code \cite{arber2015ppcf}. We simulate the theoretical configurations considered in Sec.~\ref{sec_theory}, where two Maxwellian plasmas exist: one is a stationary high-density plasma and the other a drifting low-density plasma. We calculate the time evolution of these plasmas. The parameters used in the PIC simulations are $n_{e1}=1 \times 10^{18}~\mathrm{cm^{-3}}$, $n_{e2}=1 \times 10^{17}~\mathrm{cm^{-3}}$, $T_{e1}=10~\mathrm{eV}$, $T_{e2}=10~\mathrm{eV}$, and $v_{d1} = 0$. We change $v_{d2}$ to $0$, $0.01c$, and $0.02c$. The grid size is $\Delta x =\lambda_{D}$ and the time step is $\Delta t = 0.45 \Delta x /c$. The number of grid points is $N_{x} = 8192$ and the total number of time steps is $N_{t} = 65536$, corresponding to the system size $L = N_{x} \Delta x \sim 36c/\omega_{pe}$ and the computation time $T = N_{t} \Delta t \sim 130\,\omega_{pe}^{-1}$. The number of particles per cell is 1000. We performed FDTD simulations with the electron density fluctuation, $\delta n_{e}$, calculated from the PIC simulation. The wave equation is written as \cite{matsukiyo2016jpcs} \begin{equation} \left(- c^{2} \frac{\partial^{2} }{ \partial x^{2} }+ \frac{\partial^{2} }{ \partial t^{2}} + \omega_{pe}^{2} \right) \delta {{\bf E}}_{\mathrm{S}} = 4 \pi e \frac{\partial }{ \partial t } \left( {\bf v}_{\mathrm{Ie}} \delta n_{e} \right), \end{equation} where $\delta {{\bf E}}_{\mathrm{S}}$ and ${\bf v}_{\mathrm{Ie}}$ are the electric field of the scattered wave and the electron velocity determined by the electric field of the incident wave \cite{matsukiyo2016jpcs}. The numerical parameters are the same as those in the PIC simulations.
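A driven scalar wave equation of this form can be advanced with a standard second-order leapfrog scheme. The sketch below is not the production FDTD setup; the grid parameters are made up, the boundaries are periodic, and as a sanity check a Gaussian pulse launched in vacuum ($\omega_{pe}=0$, zero source) propagates at $c$.

```python
import numpy as np


def fdtd_step(E, E_prev, dx, dt, omega_pe, source, c=1.0):
    """One leapfrog step of d2E/dt2 = c^2 d2E/dx2 - omega_pe^2 E + source,
    with periodic boundaries."""
    lap = (np.roll(E, -1) - 2.0 * E + np.roll(E, 1)) / dx**2
    E_next = 2.0 * E - E_prev + dt**2 * (c**2 * lap - omega_pe**2 * E + source)
    return E_next, E


# Vacuum sanity check: a rightward Gaussian pulse, c*dt/dx = 0.5 (CFL-stable).
nx, dx, dt = 1024, 1.0, 0.5
x = np.arange(nx) * dx
E = np.exp(-((x - 200.0) / 20.0)**2)            # E(x, 0)
E_prev = np.exp(-((x + dt - 200.0) / 20.0)**2)  # E(x, -dt) for f(x - c t)
for _ in range(800):                            # advance to t = 400
    E, E_prev = fdtd_step(E, E_prev, dx, dt, 0.0, 0.0)
```

After 800 steps the pulse peak should sit near $x = 600$, i.e.\ it has travelled a distance $ct = 400$ with only small numerical dispersion for a pulse that is well resolved on the grid.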
The wavelength of the incident wave is 532~nm and the scattering angle is 80~degrees. Figure~\ref{fig_4} shows the results of the simulations when $v_{d2} = 0$, corresponding to a single plasma. Figure~\ref{fig_4}~(a) shows the spatio-temporal evolution of the electron density fluctuation during the entire computation time; there is almost no drastic change of the electron density over time. The vertical stripes show the ion acoustic waves. The solid and dashed curves in Fig.~\ref{fig_4}~(b) are the electron distribution functions from the PIC simulations at $t=0$ and $t=T$, respectively. We fit the dashed curve with two Maxwellian distributions, and the result is the dotted curve in Fig.~\ref{fig_4}~(b). The fitted curve is in good agreement with the simulated distribution function at $t=T$. The dotted vertical lines are the peak phase velocities from Eq.~(\ref{eq_vphi}). Without the relative drift, there is no difference in the electron distribution functions during the simulation. We numerically calculate the spatio-temporal evolution of $\delta {{\bf E}}_{\mathrm{S}}$ and perform the Fourier transform over the whole $x-t$ space. Figure~\ref{fig_4}~(c) shows the amplitude of $\delta {{\bf E}}_{\mathrm{S}}$ in $\omega-k_{x}$ space, where $k_{x}$ is the wavenumber component parallel to the scattering vector, ${\bf k}$. The dashed curve in Fig.~\ref{fig_4}~(c) is the dispersion relation of electromagnetic (light) waves in a plasma. The peak at $(c k_{x} / \omega_{pe},~ \omega / \omega_{pe}) \sim (-38,~59)$ corresponds to the incident wave. The line extending right and left at $\omega \sim 59 \omega_{pe}$ is the dispersion relation of the ion acoustic wave, and the curves at $\omega \sim (59 \pm 1) \omega_{pe}$ are the dispersion relations of the Langmuir wave. By picking up $\delta E_{\mathrm{S}}$ along the dispersion relation of the light waves, the CTS spectrum as a function of wavenumber is obtained.
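The $\omega$--$k_{x}$ decomposition of a sampled field can be sketched with a two-dimensional FFT. The sign convention below assumes fields of the form $\exp[i(kx-\omega t)]$ (our choice, not necessarily that of the analysis code), and the synthetic plane-wave check below is ours.

```python
import numpy as np


def omega_k_amplitude(E_xt, dx, dt):
    """Amplitude of E(t, x) in (omega, k_x) space via a 2D Fourier transform.

    Rows are time samples, columns are space samples.  Returns (amp, k, w)
    with amp[i_w, i_k]; the omega axis is negated so that a wave
    exp(i(k x - w t)) with k, w > 0 lands at positive (k, w)."""
    nt, nx = E_xt.shape
    amp = np.abs(np.fft.fftshift(np.fft.fft2(E_xt))) / (nt * nx)
    k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    w = -2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(nt, d=dt))
    return amp, k, w


# Synthetic check: cos(k0 x - w0 t) peaks at (+k0, +w0) and (-k0, -w0).
nt, nx, dx, dt = 256, 256, 1.0, 1.0
x = np.arange(nx) * dx
t = np.arange(nt)[:, None] * dt
k0 = 2.0 * np.pi * 16 / (nx * dx)
w0 = 2.0 * np.pi * 8 / (nt * dt)
amp, k, w = omega_k_amplitude(np.cos(k0 * x - w0 * t), dx, dt)
i_w, i_k = np.unravel_index(np.argmax(amp), amp.shape)
```

In the simulation analysis, the spectrum along the light-wave branch would then be read off from `amp` at the indices closest to $\omega^{2} = \omega_{pe}^{2} + c^{2}k^{2}$.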
The solid and dashed curves in Fig.~\ref{fig_4}~(d) are the CTS spectra obtained from the numerical simulation and from the square root of the theoretical function, Eq.~(\ref{eq_form}), which is proportional to the scattered electric field, respectively. We obtain the simulated spectrum using all the computation time. The electron distribution function used to calculate the scattering form factor is the dotted curve in Fig.~\ref{fig_4}~(b). The dashed curve in Fig.~\ref{fig_4}~(d) is normalized by the peak intensity of the solid curve. The region surrounded by the dotted vertical lines in Fig.~\ref{fig_4}~(c) corresponds to the horizontal axis of Fig.~\ref{fig_4}~(d). We recognize three peaks in the spectrum in Fig.~\ref{fig_4}~(d). The peak at $\Delta k \sim 0$ is the ion feature, and the two peaks at $\Delta k \sim \pm \omega_{pe} / c$ are the electron feature. The dotted vertical lines are the peak wavenumbers from Eq.~(\ref{eq_k}), which are consistent with the peaks corresponding to the electron feature. As shown in Fig.~\ref{fig_1}~(a), the two peaks of the ion feature are generally much larger than those of the electron feature; however, we focus on the electron feature in this study, and our computation time is not long enough to properly describe the ion acoustic waves. Figure~\ref{fig_5} shows the same plots as Fig.~\ref{fig_4} except with $v_{d2} = 0.01 c$. Since $v_{d2} < v_{te1}$, the plasmas are rather stable, as in Fig.~\ref{fig_5}~(a), and the electron distribution function does not change, as in Fig.~\ref{fig_5}~(b). As the system is stable, Fig.~\ref{fig_5}~(c) is similar to Fig.~\ref{fig_4}~(c). In Fig.~\ref{fig_5}~(d), we can recognize the damping of the right peak due to the smaller derivative of the electron distribution function at the corresponding velocity in Fig.~\ref{fig_5}~(b). The two-plasma theory reproduces the numerical spectrum in Fig.~\ref{fig_5}~(d) well.
Figure~\ref{fig_6} shows the same plots as Fig.~\ref{fig_4} except with $v_{d2} = 0.02 c$, where the plasmas are unstable and the two-stream instability can take place. The two-stream instability grows at $t \sim 20 / \omega_{pe}$ and then saturates. In Fig.~\ref{fig_6}~(b), the positive slope at $v \sim 0.015 c$ at the beginning becomes flatter and forms a plateau at the end of the computation time as a result of the instability. Figure~\ref{fig_6}~(c), whose geometry is schematically shown in Fig.~\ref{fig_1}~(b), also indicates the excitation of the two-stream instability. The peak intensity of the solid curve in Fig.~\ref{fig_6}~(d) is much larger than those in Figs.~\ref{fig_4}~(d) and \ref{fig_5}~(d). Although the theoretical spectrum shows a tendency similar to the simulated spectrum, where the left peaks have values similar to that in Fig.~\ref{fig_4}~(d) and the right peaks are enhanced, the solid curve is much larger than the dashed one at the right peak. In this figure, we additionally plot, as the dotted curve, the simulated spectrum using the electron density fluctuations in the latter half of the computation time, i.e.~after the saturation of the two-stream instability. Since the distribution function is flat in the latter half of the computation time, the peak intensity after the saturation is higher than that using all the computation time. Note that the intensities of the ion feature in Figs.~\ref{fig_4}~(d), \ref{fig_5}~(d), and \ref{fig_6}~(d) are of similar value ($\sim 7\times 10^{-6}$), i.e., the range of the vertical axis in Fig.~\ref{fig_6}~(d) is 10 times larger than those in Figs.~\ref{fig_4}~(d) and \ref{fig_5}~(d). Comparing the simulated spectra of $v_{d2}=0$ and $0.01c$ in Figs.~\ref{fig_4}~(d) and \ref{fig_5}~(d), the peaks at $\Delta k \sim -\omega_{pe}/c$ are similar, while the peak at $\Delta k \sim \omega_{pe}/c$ in the solid curve is larger than that in the dashed curve.
This tendency is consistent with the theoretical spectra when $v_{d2} < v_{te1}$. It is considered that the electron distribution functions overlap and Landau damping becomes strong at the peak of $\Delta k \sim \omega_{pe}/c$. The electron density fluctuation in Fig.~\ref{fig_5}~(a) and the electron distribution function in Fig.~\ref{fig_5}~(b) do not change over time. Thus, the theoretical function is appropriate to express the shape of the spectrum when $v_{d2} < v_{te1}$. The electron density fluctuation in Fig.~\ref{fig_6}~(a) shows the excitation of the two-stream instability when $t \gtrsim 20 / \omega_{pe}$. As the peak phase velocity is $\sim 0.017 c$ from the dispersion relations, the drift velocity of $0.02 c$ is reasonable to enhance the Langmuir wave via the two-stream instability. Thus, it is feasible to diagnose the two-stream instability directly via CTS measurements. At the end of the computation time, the electron distribution function in Fig.~\ref{fig_6}~(b) shows the plateau. This is quasi-equilibrium, and the slope of the electron distribution function is $\sim 0$. Since the derivative of the electron distribution function at the peak phase velocity is $\sim 0$, the right peak of the theoretical spectrum in Fig.~\ref{fig_6}~(d) is enhanced. In a quantitative sense, the distribution function approximated by two Maxwellian distributions can deviate from the condition of $\partial f/\partial v\sim 0$. Therefore, the numerical result shows a larger peak than that of the theory in Fig.~\ref{fig_6}~(d). In such quasi-equilibrium plasmas, the scattering form factor in Eq.~(\ref{eq_form}) with two plasmas qualitatively explains the numerical spectra. \section{Experiment} \label{sec_experiment} \begin{figure} \centering \includegraphics[clip,width=\hsize]{fig_7.pdf} \caption{(a)~Schematic view of the experiment with a single target. (b)~Schematic view of the experiment with a double target.
(c)~The design of the CTS spectrometer \cite{bolouki2019hedp}.} \label{fig_7} \end{figure} In order to verify and develop CTS in non-equilibrium plasmas, we have designed and conducted experiments with the 100~TW laser facility at National Central University (NCU 100~TW), which is a relatively small but high-repetition-rate laser with flexible beam lines \cite{hung2014apb,kuramitsu2015hedp}. Figures~\ref{fig_7}~(a) and (b) show schematic top views of our experimental system. Figure~\ref{fig_7}~(a) shows the setup with a single target. We use the drive laser with a wavelength of 810~nm, an energy of 3.3~J, an uncompressed pulse duration of 150~ps, and an intensity of $\sim 1\times10^{15}$ $\mathrm{W/cm^{2}}$. Without pulse compression, the intensity is at a similar level to that of GXII, and we expect to generate a plasma with a temperature similar to that of GXII. The drive laser is focused on an aluminum slab target. We define the position 1~mm upstream of the focal position, toward the drive laser, as the reference point, the target chamber center (TCC). The target is irradiated with the drive laser, and a plasma is created from the target as shown in Fig.~\ref{fig_7}~(a). The target chamber is filled with nitrogen gas to generate a two-stream state where aluminum and nitrogen plasmas coexist. The radiation from the interaction between the aluminum target and the drive laser ionizes the nitrogen gas to produce an ambient plasma (Fig.~\ref{fig_7}~(a)). We move the target with the motorized stage every 20~shots to irradiate a fresh planar surface with the drive laser. In order to measure the local plasma quantities, we use the CTS measurement system \cite{bolouki2019hedp}. We use an independent probe beam for CTS in the direction of 105~degrees from the direction parallel to the drive laser propagation, as shown on the right side of Fig.~\ref{fig_7}~(a).
We use an Nd:YAG laser as the probe beam, with a wavelength of 532~nm, an energy of 50~mJ, a pulse duration of 5~ns, and a focal spot size of 100~$\mathrm{\mu}$m. The probe beam is focused at TCC and is scattered by the plasmas. Due to the limitation of the target chamber, we observe the scattered light in the direction of 30~degrees from the direction in which the probe beam propagates. The CTS system measures the local parameters of the plasmas in the direction parallel to the drive laser, as shown on the right side of Fig.~\ref{fig_7}~(a). The scattered light is detected with the CTS spectrometer in Fig.~\ref{fig_7}~(c) \cite{bolouki2019hedp}. The image of the probe beam is transferred through the slit to the ICCD detector, keeping the spatial information. We use a reflective holographic grating (1200 grooves/mm) to spectrally resolve the electron feature. We put a notch filter after the grating to remove the stray light at the same wavelength as the probe beam. The spectrum of the scattered light is obtained and accumulated over 20 shots using the ICCD camera with a gate width of 2~ns. The spatial and temporal resolutions are determined by the focal spot size of the probe beam and the gate width of the ICCD camera, respectively. Figure~\ref{fig_7}~(b) shows the setup with a double target. The setup of the right target and CTS is the same as that in Fig.~\ref{fig_7}~(a). We put another aluminum slab target 5~mm away from the existing target surface to generate counterstreaming plasmas. Even though the laser energy is not large enough to excite a shock, we are still able to study counterstreaming plasmas to develop the diagnostics with NCU 100~TW. We divide the drive laser beam into two beams with a beam splitter. The two beams are focused on the left and right targets, respectively. The targets are irradiated with the drive laser beams at an angle of 30 degrees from the target normal, as shown in Fig.~\ref{fig_7}~(b).
The two ablation plasmas created by the two beams have different velocities and propagate in opposite directions; hence, the two plasmas meet at a contact point. The contact point is placed at TCC, and we diagnose the non-equilibrium counterstreaming plasmas via CTS. \begin{figure*} \centering \includegraphics[clip,width=\hsize]{fig_8.pdf} \caption{Experimental results. (a), (d) and (b), (e) are the images of the CTS spectrometer without and with the probe beam. (c) and (f) are the profiles of (b) and (e), respectively. (a)-(c) and (d)-(f) are the results 30~ns and 70~ns after the drive laser irradiation, at the pressures of 0.0046~Torr and 0.044~Torr, respectively.} \label{fig_8} \end{figure*} \begin{table*} \centering \caption{Fitting results.} \begin{tabular}{c|cccccccccc} \hline $P$ [Torr]& &$T_{e1}$ [eV]&$T_{e2}$ [eV]&$n_{e1}$ [$10^{16} \mathrm{cm^{-3}}$]&$n_{e2}$ [$10^{15}\mathrm{cm^{-3}}$]&$Z_{1}$&$Z_{2}$&$v_{d1}$ [km/s]&$\alpha_{1}$&$\alpha_{2}$\\ \hline \multirow{2}{*}{0.0046} &$S_{1}$ &$30.1\pm0.9$ & &$2.66\pm0.13$ & &$5.40$ & &$383\pm23$&$0.654$ & \\ &$S_{1+2}$&$30.4\pm1.1$ &$6.76\pm8.68$ &$2.69\pm0.25$ &$0.85$ &$5.42$ &$3.02$ &$391\pm23$&$0.655$ &$0.246$ \\ \hline \multirow{2}{*}{0.044} &$S_{1}$ &$10.3\pm0.7$ & &$0.91\pm0.14$ & &$3.13$ & &$134\pm23$&$0.654$ & \\ &$S_{1+2}$&$10.9\pm1.0$ &$2.63\pm1.36$ &$1.03\pm0.26$ &$3.14$ &$3.20$ &$1.11$ &$161\pm28$&$0.676$ &$0.760$ \\ \hline \end{tabular} \label{exp_param} \end{table*} Here we show the preliminary experimental results with a single target. Figures~\ref{fig_8}~(a)-(c) and (d)-(f) show the results of CTS with 0.0046~Torr and 0.044~Torr of the ambient gas at timings of 30~ns and 70~ns, respectively. There are also results at the same timing; however, the CTS signal there is very weak and noisy. Thus, the results at different timings are shown here. Figures~\ref{fig_8}~(a), (d) and (b), (e) are the images of CTS without and with the probe beam, respectively.
The vertical and horizontal axes of each image represent the distance along the probe beam and the wavelength, respectively. The probe beam propagates from the upper side to the lower side of the vertical axis. The position where the distance equals zero is TCC. There is no signal from 531~nm to 533.2~nm due to the notch filter in Fig.~\ref{fig_8}. Without the probe beam [Figs.~\ref{fig_8}~(a) and (d)], only emission from the plasma, such as bremsstrahlung, is observed, while with the probe beam [Figs.~\ref{fig_8}~(b) and (e)] both the emission and CTS are observed. The CTS signals are found in Figs.~\ref{fig_8}~(b) and (e), and are absent in Figs.~\ref{fig_8}~(a) and (d). Figures~\ref{fig_8}~(c) and (f) show the CTS profiles at the distances of $0.23$ and $-0.69$~mm, respectively. To remove the emission, we subtract the profiles in Figs.~\ref{fig_8}~(a) and (d) from those in Figs.~\ref{fig_8}~(b) and (e), respectively. We averaged the signals over 50 vertical pixels (575~$\mathrm{\mu m}$) to reduce the noise and enhance the signal-to-noise ratio. The regions of the images used to make the profiles are shown as the dotted rectangles in Figs.~\ref{fig_8}~(a), (b), (d), and (e). The curves with markers are the experimental data. The solid and dashed curves represent the fitting result with the two-stream theoretical function in Eqs.~(\ref{eq_form}), (\ref{eq_fe}), and (\ref{eq_chie}) (two-plasma fitting), and that with the conventional analysis method (single-plasma fitting), respectively. The notch-filter band is shaded in Figs.~\ref{fig_8}~(c) and (f). Although the shaded width in Figs.~\ref{fig_8}~(c) and (f) looks slightly larger than the notch width visible in the images of Figs.~\ref{fig_8}~(a), (b), (d), and (e), we define the width in Figs.~\ref{fig_8}~(c) and (f) using the image without CTS signals in order to exclude the notch filter from the fitting; we choose the shaded width where the broad signal begins to decrease. 
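The subtraction-and-averaging step described above can be sketched in a few lines of array code. This is an illustrative reimplementation with synthetic images, not the actual analysis pipeline (`img_probe` and `img_bg` stand for ICCD frames with and without the probe beam; the 50-pixel window is the value quoted in the text):

```python
import numpy as np

def extract_profile(img_probe, img_bg, row, half_window=25):
    """Subtract the self-emission frame and average over 50 vertical pixels.

    img_probe, img_bg : 2D arrays (rows = distance along probe, cols = wavelength)
    row               : central row of the region of interest
    """
    diff = img_probe.astype(float) - img_bg.astype(float)  # remove plasma emission
    band = diff[row - half_window : row + half_window]     # 50-pixel (575 um) band
    return band.mean(axis=0)                               # averaged spectral profile

# Toy example: flat self-emission plus a Gaussian "CTS line" on the probe frame
wl = np.linspace(528.0, 536.0, 256)
bg = np.full((200, 256), 5.0)
probe = bg + 3.0 * np.exp(-((wl - 534.0) / 0.5) ** 2)      # same line in every row
profile = extract_profile(probe, bg, row=100)
```

The resulting one-dimensional profile is then what the theoretical spectral form is fitted to.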
We fit the result with the least-squares method in the region outside the notch filter to estimate the parameters and their errors. The fitting variables are $T_{e1}$, $T_{e2}$, $n_{e1}$, and $v_{d1}$ in the two-plasma fitting, and $T_{e1}$, $n_{e1}$, and $v_{d1}$ in the single-plasma fitting. The $j=1$ and $j=2$ plasma species are aluminum and nitrogen, respectively. The ion valences ($Z_{1}$ and $Z_{2}$) are calculated with FLYCHK \cite{flychk}. The electron density from nitrogen is expressed as $n_{e2} = Z_{2} n_{i2}$, where $n_{i2}$ is the nitrogen density. The nitrogen densities at the pressures of 0.0046~Torr and 0.044~Torr are $2.8 \times 10^{14} ~ \mathrm{cm^{-3}}$ and $2.8 \times 10^{15} ~ \mathrm{cm^{-3}}$, respectively. Only the aluminum plasma is assumed in the single-plasma fitting. We assume the ambient nitrogen plasma to be at rest in the laboratory frame, $v_{d2} = 0$, in the two-plasma fitting. Table~\ref{exp_param} shows the parameters and errors obtained by the fittings. The parameters are averaged over 575~$\mathrm{\mu m}$ in the direction of the probe beam, since we averaged the signal over 50~pixels. Although the aluminum parameters are similar for both fitting methods, the changes in the parameters between the methods are larger in the 0.044~Torr case. In Figs.~\ref{fig_8}~(c) and (f), the solid curves agree with the dashed ones at positions away from the notch filter; however, the solid curve differs slightly from the dashed one near the notch filter in Fig.~\ref{fig_8}~(f). From the fitting results shown in Table~\ref{exp_param}, the scattering parameters $\alpha_{j}$ are less than 1 at both pressures due to the low electron density. The electron densities and temperatures shown in Table~\ref{exp_param}, obtained in the NCU 100~TW experiment, are of the same order as those obtained with the high-power lasers typically used in laboratory astrophysics \cite{bolouki2019hedp}. 
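The quoted ambient densities follow from the ideal-gas law with two nitrogen atoms per N$_2$ molecule. A quick check (the gas temperature, taken here as $\sim$295~K, is an assumption, since the text does not state it):

```python
# Nitrogen atom density from the ambient N2 pressure via the ideal-gas law
kB = 1.380649e-23      # Boltzmann constant [J/K]
T = 295.0              # assumed room temperature [K]
TORR = 133.322         # 1 Torr in Pa

def n_atoms_cm3(p_torr):
    n_molec = p_torr * TORR / (kB * T)   # N2 molecules per m^3
    return 2.0 * n_molec * 1e-6          # two N atoms per molecule, in cm^-3

for p in (0.0046, 0.044):
    print(p, f"{n_atoms_cm3(p):.2e} cm^-3")
```

This reproduces the quoted $2.8\times10^{14}$ and $2.8\times10^{15}~\mathrm{cm^{-3}}$ to within $\sim$10\%, the residual reflecting the assumed temperature.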
The result shows that it is possible to conduct laboratory-astrophysics experiments using tabletop lasers. We constructed the CTS electron-feature spectrometer and obtained CTS spectra at the NCU 100~TW laser facility. Our plasma is not completely non-collective: $\alpha < 1$ but close to 1, so collective features remain, as shown in Fig.~\ref{fig_3} in Sec.~\ref{sec_theory}; in the two-plasma theory, the superposition of two ``rather non-collective'' plasmas results in two peaks in the scattered spectra. In the experiment, we observed the region where the drift velocity was much smaller than the thermal velocity, and there is little difference between the two fitting methods. The electron distribution function was almost symmetric about $v=0$, and there was little difference in intensity among the peaks. However, there are some differences between the two-plasma and single-plasma fitted spectra in Fig.~\ref{fig_8}~(f). Although the results are limited, the intensity in Fig.~\ref{fig_8}~(f) begins to decrease toward the central wavelength outside the notch filter, unlike that in Fig.~\ref{fig_8}~(c). It is considered that the electron temperatures shown in Table~\ref{exp_param} differ between the plasma species, and that the electron distribution function has a larger low-energy component than a Maxwellian. As the electron densities from the aluminum and nitrogen plasmas have similar values, especially at the pressure of 0.044~Torr, the influence of the nitrogen plasma on the spectrum becomes large and the difference appears in the fitted curves. However, since the results are limited, further experiments are required to verify this. We have generated plasmas with temperature and density similar to those produced with large lasers such as GXII, but with a smaller volume due to the much smaller energy of the tabletop laser. 
We are planning to observe the plasma dynamics driven by the uncompressed laser pulse and to compare them to those driven by a high-power laser pulse in the future. We have also obtained electron features of CTS from the plasmas. When the ambient plasma density is higher, the two-plasma fitting spectrum shows a finite difference from the single-plasma fitting. This may be a detection of an abundance of lower-energy electrons in the distribution function. Although the obtained plasmas are still mostly non-collective, we will increase the ambient gas pressure or use counterstreaming plasmas to obtain higher-density plasmas in the future. Moreover, we will change the scattering angle to observe the two-stream instability. \section{Summary} \label{sec_summary} In summary, we investigate the electron feature of CTS in two-plasma states theoretically, numerically, and experimentally. The theoretical analysis in the presence of two-stream plasmas shows qualitative modifications of the CTS spectra due to the derivative of the electron distribution functions at the resonant velocities. When $v_d<v_{te}$, the peak at $\Delta k > 0$ is damped due to Landau damping. On the other hand, when $v_{d}>v_{te}$, the Landau resonance is not well expressed. Therefore, we numerically calculate the spatio-temporal evolution of two-stream plasmas with PIC simulations, and then numerically solve the wave equation for the scattered waves using the density fluctuations from the PIC simulations as the source term. In the numerical simulations, the results for $v_d<v_{te}$ are consistent with the analytical investigations. When $v_d>v_{te}$, the two-stream instability is considered to also affect the CTS spectra. After the saturation of the two-stream instability, the electron distribution function forms a plateau on the positive velocity wing, which is a quasi-equilibrium state. 
In this state, the derivative of the electron distribution function is $\sim 0$ at the peak phase velocity and one of the peaks of the theoretical spectrum is enhanced, which is qualitatively consistent with the simulated spectrum. When $v_{d1} = v_{d2}$ and $T_{e1} \neq T_{e2}$, the low- and high-energy components of the electron distribution functions affect the scattering spectra. In order to verify the analytic and numerical investigations of CTS in the presence of two-stream plasmas and instability, we have been developing a down-scaled experimental system for laboratory astrophysics with a relatively small laser facility, the 100~TW laser facility at National Central University, using an uncompressed laser pulse. Once we fully understand non-equilibrium CTS, we can use CTS as a diagnostic not only for non-equilibrium plasmas but also for the instabilities. \begin{acknowledgments} This work is supported by the Ministry of Science and Technology of Taiwan under grant nos.~103-2112-M-008-001-MY2, 104-2112-M-008-013-MY3, and 105-2112-M-008-003-MY3, and by JSPS KAKENHI Grant numbers JP15H02154, JP17H06202, and JP18H01232. This work is also partially supported by the JSPS Core-to-Core Program B, Asia-Africa Science Platform, Grant No.~JPJSCCB20190003. \end{acknowledgments} \section*{Data availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section{Introduction} \label{s:intro} Water is a cornerstone molecule in the oxygen chemistry of dense interstellar clouds and a major coolant of warm molecular gas\footnote{This paper uses the word `water' to denote the chemical species, and the notations H$_2$O, H$_2^{18}$O\ and HDO to denote specific isotopologues.}. In the surroundings of embedded protostars, water can be formed by three very different mechanisms. In cold ($\sim$10~K) protostellar envelopes, water may be formed in the gas phase by ion-molecule chemistry, through dissociative recombination of H$_3$O$^+$. Simultaneously, on the surfaces of cold dust grains, O and H atoms may combine to form water-rich ice mantles. These mantles will evaporate when the grains are heated to $\sim$100~K, either by protostellar radiation or by grain sputtering in outflow shocks. Third, in gas with temperatures ${_>\atop{^\sim}}$250~K, reactions of O and OH with H$_2$\ drive all gas-phase oxygen into water. Such high temperatures may occur very close to the star due to radiation, or further out in outflow shocks. The water molecule thus offers the opportunity to study the relative importance of each of these types of chemistry in the protostellar environment. There has been considerable controversy about the water abundance around high-mass protostars. Observations of the H$_2$O\ 6~$\mu$m\ bending mode with ISO-SWS have revealed abundant water (H$_2$O/H$_2$\ $\sim$10$^{-5}$--10$^{-4}$) in absorption toward several high-mass protostars \citep{boonman:h2o}. The absorption data do not tell us the location of the H$_2$O\ along the line of sight, except that the high excitation temperatures ($\sim$300--500~K) imply an origin in warm gas. In contrast, observations of the o-H$_2$O\ ground state line at 557~GHz with SWAS of the same sources indicate much lower abundances (H$_2$O/H$_2$\ $\sim$10$^{-7}$--10$^{-6}$; \citealt{snell:swas}). 
The narrow line width indicates an origin in the envelopes rather than the outflows of the sources, but the data have too low angular resolution (several arcminutes) for more detailed statements. \citet{boonman:models} performed a simultaneous analysis of ISO-SWS, ISO-LWS and SWAS data and inferred a water abundance jump in the inner envelope by four orders of magnitude for several high-mass YSOs. \begin{table*}[t] \caption{Source sample.} \label{tab:samp} \begin{tabular}{lccccc} \hline \hline Source & R.A.\ (1950)& Dec.\ (1950) & $L$ & $d$ & $N$(H$_2$)$^a$ \\ &(h m s) & (\degr\ \arcmin\ \arcsec) & ($10^{4}$~L$_{\odot}$) & (kpc) & ($10^{23}$~cm$^{-2}$)\\ \hline W3 IRS5 & 02 21 53.1 & +61 52 20 & 17 & 2.2 & 2.3 \\ AFGL 490 & 03 23 38.9 & +58 36 33 & 0.2 & 1 & 2.0 \\ W33A & 18 11 43.7 &--17 53 02 & 10 & 4 & 6.2 \\ AFGL 2136 & 18 19 36.6 &--13 31 40 & 7 & 2 & 1.2 \\ AFGL 2591 & 20 27 35.8 & +40 01 14 & 2 & 1 & 2.3 \\ S140 IRS1 & 22 17 41.1 & +63 03 42 & 2 & 0.9 & 1.4 \\ NGC 7538 IRS1 & 23 11 36.7 & +61 11 51 & 13 & 2.8 & 6.5 \\ NGC 7538 IRS9 & 23 11 52.8 & +61 10 59 & 4 & 2.8 & 3.3 \\ \hline \end{tabular} $^a$: Column density in a 15$''$ beam. \end{table*} Locating the water around protostars requires observations at high spatial and spectral resolution, which presently can only be done from the ground. Most ground-based observations of H$_2$O\ have targeted the maser lines at 22 and 183~GHz (e.g., \citealt{cernicharo:h2o}). However, the anomalous excitation of these lines makes it hard to derive column densities from such data, which may in any case not be representative of the surrounding region. The only thermal water lines that can be studied from the ground are the $3_{13}$--$2_{20}$ line of H$_2^{18}$O\ at 203~GHz (\citealt{phillips:h2o18}; \citealt{jacq:h2o18}), and several HDO lines. 
These lines were used by \citet{gensheimer:h2o} and \citet{helmich:hdo} to estimate envelope-averaged abundances of H$_2$O\ and HDO around several young high-mass stars. Advances in sensitivity and resolution allow us to consider lower-luminosity objects closer to the Sun than before, and also enable us to study abundance variations with position in the envelope. This paper presents new observations of these lines toward sources that have been studied previously with ISO and SWAS, including the first published interferometric observations of the H$_2^{18}$O\ line (and in fact of any non-masing water line). The sources are eight deeply embedded high-mass stars, with luminosities of 2$\times$10$^3$ -- 2$\times$10$^5$~L$_{\odot}$, distances of 1 -- 4~kpc, and H$_2$\ column densities of 1 -- 7$\times$10$^{23}$~cm$^{-2}$, as listed in Table~\ref{tab:samp}. Single-dish mapping of dust continuum and molecular line emission at (sub-)mil\-li\-me\-ter\ wavelengths indicates envelope masses of 30 -- 1100~M$_{\odot}$\ within radii of 0.09 -- 0.36~pc \citep{vdtak:massive}. The sources drive powerful outflows, as revealed by mid-infrared and millimeter-wave observations of CO and HCO$^+$ (\citealt{mitchell:episodic}; \citealt{hasegawa:outflow}). The unique aspect of this source sample is its high mid-infrared brightness, which allows us to compare its (sub-)mil\-li\-me\-ter\ emission with solid-state data for the chemistry, and with rovibrational absorption lines for the geometry. This paper is organized as follows. Section~\ref{s:obs} describes the observations, and Section~\ref{s:res} their direct results. Section~\ref{s:radtrans} describes modeling of the data with a radiative transfer program. Section~\ref{s:disc} discusses the results of the observations and the models in the context of a disk/outflow geometry for these sources. 
Section~\ref{s:conc} concludes the paper with an outlook toward future opportunities in this field. \section{Observations} \label{s:obs} Table~\ref{t:lines} summarizes spectroscopic parameters of the observed lines, and gives the relevant telescope and its FWHM beam size at that frequency. With $E_{\rm up}\approx 200$~K, the $3_{13}$ -- $2_{20}$ line is the lowest-lying transition of H$_2^{18}$O\ that can be observed from the ground. We use this line to measure the abundance of H$_2$O\ in the warm inner envelopes of the sources. The HDO lines cover the range of excitation energies from 20 to 200~K, and are used to constrain the excitation and chemical state of the gas, in particular its deuterium fractionation. The SO$_2$\ and CH$_3$OH\ lines have comparable excitation requirements, and are used to measure the effects of shock chemistry (SO$_2$) and ice evaporation (CH$_3$OH). The difference in Einstein $A$-coefficients of the lines is mostly due to the $\nu^3$ dependence: all the lines have transition dipole moments of a few Debye. 
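The $\nu^3$ scaling can be checked against the tabulated HDO coefficients in Table~\ref{t:lines}: for equal line strengths, the 465/80~GHz pair would differ by $(464.9/80.6)^3 \approx 190$, while the tabulated $A$-coefficients differ by $\approx 130$, the same order of magnitude, the residual factor reflecting the similar but not identical dipole matrix elements. A minimal sketch:

```python
# Order-of-magnitude check of the nu^3 scaling of Einstein A-coefficients,
# using the two HDO lines from Table t:lines (frequencies in GHz, A in 1/s).
nu_lo, A_lo = 80.5783, 1.3e-6    # HDO 1_10 - 1_11
nu_hi, A_hi = 464.9245, 1.7e-4   # HDO 1_01 - 0_00

scaling = (nu_hi / nu_lo) ** 3   # expected ratio for equal line strengths
observed = A_hi / A_lo           # tabulated ratio

print(f"nu^3 scaling: {scaling:.0f}, observed A ratio: {observed:.0f}")
```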
\begin{table*} \caption{Observed transitions.} \label{t:lines} \begin{tabular}{lcrrccc} \hline \hline Species & Transition & Frequency & $E_{\rm up}$ & $A_{ul}$ & Telescope & Beam \\ & $J_{K_pK_o}$ & MHz & K & s$^{-1}$ & & $''$ \\ \hline HDO & $1_{10}$--$1_{11}$ & 80578.3 & 47 & 1.3$\times$10$^{-6}$ & IRAM 30m & 30 \\ HDO & $3_{12}$--$2_{21}$ & 225896.7 & 168 & 1.3$\times$10$^{-5}$ & IRAM 30m & 11 \\ HDO & $2_{11}$--$2_{12}$ & 241561.6 & 95 & 1.2$\times$10$^{-5}$ & JCMT 15m & 21 \\ HDO & $1_{01}$--$0_{00}$ & 464924.5 & 22 & 1.7$\times$10$^{-4}$ & JCMT 15m & 12 \\ H$_2^{18}$O\ & $3_{13}$--$2_{20}$ & 203407.5 & 204 & 4.9$\times$10$^{-6}$ & IRAM 30m & 12 \\ SO$_2$\ & $12_{0,12}$--$11_{1,11}$ &203391.6 & 70 & 8.1$\times$10$^{-5}$ & IRAM 30m & 12 \\ CH$_3$OH\ & $5_{-1}$--$4_0$ E & 84521.2 & 40 & 2.0$\times$10$^{-6}$ & IRAM 30m & 30 \\ \hline \end{tabular} \end{table*} \subsection{Single-dish observations} \label{s:sd-obs} Observations of lines of H$_2^{18}$O, HDO, SO$_2$\ and CH$_3$OH\ in the 80 -- 225~GHz range were made with the 30-m telescope of the Institut de Radio Astronomie Millim\'etrique (IRAM)\footnote{IRAM is an international institute for research in millimeter astronomy, co-funded by the Centre National de la Recherche Scientifique (France), the Max Planck Gesellschaft (Germany) and the Instituto Geografico Nacional (Spain).} on Pico Veleta, Spain, in May 2003. The front ends were the facility receivers A100, B100, A230 and B230, and the backend was the Versatile Spectral Assembly (VESPA) autocorrelator. The five lines were observed simultaneously with a spectral resolution of 0.1 -- 0.3~km~s$^{-1}$. Integration times are 60 -- 180 minutes (on+off) using double beam switching with a throw of 180$''$. System temperatures were 100 -- 150~K at 3~mm and 300 -- 600~K at 1.3~mm wavelength. 
Data were calibrated onto the $T_{\rm MB}$\ scale by multiplying by $\eta_f / \eta_b$, where the forward efficiency $\eta_f$ is 0.95 at 3~mm and 0.91 at 1.3~mm, and the main beam efficiency $\eta_b$ is 0.78 at 3~mm and 0.57 at 1.3~mm wavelength. The spectra have noise levels per 0.25~km~s$^{-1}$\ channel of $T_{\rm MB}$\ = 10 -- 15~mK at 80~GHz and 20 -- 30~mK at 215~GHz. Additional observations of HDO lines at 225, 241 and 464~GHz toward selected sources were carried out with the James Clerk Maxwell Telescope (JCMT)\footnote{The JCMT is operated by the Joint Astronomy Centre, on behalf of the Particle Physics and Astronomy Research Council of the United Kingdom, the Netherlands Organization for Scientific Research and the National Research Council of Canada.} on Mauna Kea, Hawaii, in 1995 -- 1997. These data were taken as part of a spectral line survey program, and have a lower spectral resolution and signal-to-noise ratio than the IRAM spectra. The facility receivers A2 and C2 were used as front ends and the Dutch Autocorrelation Spectrometer (DAS) as back end. The JCMT has a main beam efficiency of 0.65 at 1.3~mm and 0.53 at 0.6~mm wavelength. Integration times are 30 minutes at 241~GHz and 40 minutes at 464~GHz, resulting in rms noise levels in $T_{\rm MB}$\ per 625~kHz channel of $\approx$40~mK at 241~GHz and $\approx$200~mK at 464~GHz. All single-dish spectra have been reduced with the CLASS package, developed at IRAM\footnote{\tt http://www.iram.fr/IRAMFR/GILDAS/}. Linear baselines were subtracted and the spectra were smoothed once and calibrated onto the $T_{\rm MB}$\ scale. We estimate a calibration uncertainty of 30\% for the 80~GHz IRAM and 240~GHz JCMT data, and of 50\% for the 230~GHz IRAM and 460~GHz JCMT data, due to higher atmospheric opacity. 
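The calibration step amounts to one multiplication per band; a minimal sketch using the 30m efficiencies quoted above (the input antenna temperature is an arbitrary example value, not a measurement from this paper):

```python
# Convert antenna temperature Ta* to main-beam temperature T_MB
# using the IRAM 30m forward and main-beam efficiencies quoted in the text.
EFF = {"3mm": (0.95, 0.78), "1.3mm": (0.91, 0.57)}  # (eta_f, eta_b)

def to_tmb(ta_star, band):
    eta_f, eta_b = EFF[band]
    return ta_star * eta_f / eta_b

# Example: a 100 mK Ta* line at 1.3 mm becomes ~160 mK on the T_MB scale
print(f"{to_tmb(0.100, '1.3mm'):.3f} K")
```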
The estimated pointing uncertainty for all single-dish data is 3$''$ rms. \subsection{Interferometric observations} \label{s:pdb-obs} The IRAM interferometer on Plateau de Bure (France) consists of six 15-m antennas on North-South and East-West baselines. We used this instrument to map the HDO $1_{10}$--$1_{11}$, H$_2^{18}$O\ $3_{13}$--$2_{20}$ and SO$_2$\ $12_{0,12}$--$11_{1,11}$ lines and the continuum at 80.6 and 204.9~GHz toward AFGL 2591. Due to tuning problems, only five antennas took 1.3~mm data in $C$--configuration on December 6, 2003; these problems were solved before the $D$--array observations on May 15 -- 16, 2004. The correlator was configured to produce `narrow' 80~MHz and `broad' 160~MHz windows, with one of each centered on the HDO and H$_2^{18}$O\ lines. The number of channels is 128 per window. The SO$_2$\ line falls in the 160~MHz window of the H$_2^{18}$O\ line. The continuum bandwidth is 640~MHz at 81~GHz and twice as much at 205~GHz, where the tuning is double side band. Antenna gains and phases were monitored by observing 2013+370 and 2005+403 for 2~minutes every 20~minutes. The total observing time was 13.2~hr of good weather in $C$--array and 10~hr of excellent weather in $D$--array. The combined dataset has baselines ranging from the antenna shadowing limit out to 309~m, corresponding to an angular scale of 2.2$''$ at 81~GHz and 0.85$''$ at 205~GHz. Data reduction was performed at the IRAM headquarters in Grenoble, using the GILDAS software. The bandpass was checked on 3C273 and 2145+067. Flux calibration was performed on MWC 349, assuming $S_\nu$=1.0~Jy at 81~GHz and 2.0~Jy at 205~GHz. \section{Results} \label{s:res} \begin{figure}[tb] \begin{center} \includegraphics[width=6cm,angle=0]{3937f1.ps} \caption{Spectra of the H$_2^{18}$O\ $3_{13}$--$2_{20}$ transition, obtained with the IRAM 30m telescope. 
The line at $V$=$+$16~km~s$^{-1}$\ is the SO$_2$\ $12_{0,12}$--$11_{1,11}$ transition; the lines in the W33A spectrum at $V$=$-$5 and $-$17~km~s$^{-1}$\ are due to CH$_3$OCH$_3$.} \label{f:30m} \end{center} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=7cm,angle=0]{3937f2.ps} \caption{Observations of HDO lines toward AFGL 2591 with the JCMT. Top to bottom: 465 GHz, 241 GHz, 225 GHz lines. The bottom two spectra have been multiplied by 2 and all spectra are vertically offset for clarity. The feature at V$_{\rm LSR}$=--15~km~s$^{-1}$\ in the upper spectrum is due to the CH$_3$OH\ $18_{8,10}$--$18_{9,10}$~E line.} \label{f:jcmt} \end{center} \end{figure} \begin{table*} \caption{Line fluxes (K km~s$^{-1}$) or 1$\sigma$ upper limits (mK) observed with the IRAM 30m. Numbers in brackets are uncertainties in units of the last decimal place. The errors account for spectral noise only, not for calibration. } \label{t:flux} \begin{tabular}{lcrrcr} \hline \hline Source & HDO & CH$_3$OH\ & H$_2^{18}$O\ & SO$_2$\ & HDO \\ & $1_{10}$--$1_{11}$ & $5_{-1}$--$4_0$~E & $3_{13}$--$2_{20}$ & $12_{0,12}$--$11_{1,11}$ & $3_{12}$--$2_{21}$ \\ \hline W3 IRS5 & 0.14(2) & 0.17(2) & 0.84(7) & 19.44(8) & $<$36 \\ AFGL 490 & $<$13 & 0.39(2) & $<$24 & $<$24 & $<$31 \\ W33A & 0.66(3) & 7.69(3) & 0.46(2) & 4.73(2) & 4.06(6) \\ AFGL 2136 & $<$9 & 0.50(1) & $<$21 & 0.85(3) & $<$27 \\ AFGL 2591 & 0.15(1) & 1.51(1) & 0.86(3) & 4.01(3) & 0.59(3) \\ S140 IRS1 & $<$10 & 1.41(1) & $<$22 & 2.26(3) & $<$23 \\ NGC 7538 IRS1 & 0.26(3) & 2.64(2) & 0.43(9) & 2.09(8) & 1.57(8) \\ NGC 7538 IRS9 & $<$12 & 2.07(2) & $<$22 & 0.62(4) & $<$29 \\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Widths (FWHM in km~s$^{-1}$) of the lines observed with the IRAM 30m. 
Numbers in brackets are uncertainties in units of the last decimal place.} \label{t:width} \begin{tabular}{lcrrcr} \hline \hline Source & HDO & CH$_3$OH\ & H$_2^{18}$O\ & SO$_2$\ & HDO \\ & $1_{10}$--$1_{11}$ & $5_{-1}$--$4_0$ E & $3_{13}$--$2_{20}$ & $12_{0,12}$--$11_{1,11}$ & $3_{12}$--$2_{21}$ \\ \hline W3 IRS5 & 3.2(5) & 2.7(4) & 6.8(10) & 6.67(4)$^a$ & ... \\ AFGL 490 & ... & 2.5(2)$^a$ & ... & ... & ... \\ W33A & 4.6(2) & 4.67(2)$^a$ & 3.8(1) & 6.1(1)$^a$ & 5.06(9)$^a$ \\ AFGL 2136 & ... & 2.79(7)$^a$ & ... & 4.2(2) & ... \\ AFGL 2591 & 3.3(4) & 2.87(3)$^a$ & 4.3(2) & 5.33(5) & 3.2(2) \\ S140 IRS1 & ... & 2.68(3)$^a$ & ... & 2.67(4)$^a$ & ... \\ NGC 7538 IRS1 & 3.6(5) & 3.21(3)$^a$ & 5.2(16) & 6.1(3) & 3.8(2) \\ NGC 7538 IRS9 & ... & 2.94(3)$^a$ & ... & 5.3(4) & ... \\ \hline \end{tabular} \medskip {\scriptsize a}: Line core only; also wings visible \end{table*} \begin{table} \caption{JCMT observations of HDO.} \label{t:jcmt} \begin{tabular}{lcccc} \hline \hline Line & $\int$$T_{\rm MB}$\textit{dV} & V$_{\rm LSR}$ & $\Delta$\textit{V} & $T_{\rm MB}$ \\ & K~km~s$^{-1}$ & km~s$^{-1}$ & km~s$^{-1}$ & K \\ \hline \multicolumn{5}{c}{AFGL 2591} \\ $3_{12}$--$2_{21}$ & 0.308(48) & --4.70(22) & 2.83(52) & 0.10 \\ $2_{11}$--$2_{12}$ & 0.394(80) & --5.27(45) & 4.24(88) & 0.09 \\ $1_{01}$--$0_{00}$ & 2.19(41) & --5.48(43) & 4.35(81) & 0.47 \\ \multicolumn{5}{c}{NGC 7538 IRS1} \\ $3_{12}$--$2_{21}$ & 0.74(12) & --58.43(28) & 3.57(62) & 0.20 \\ \hline \end{tabular} \end{table} \subsection{Single-dish spectra} With the IRAM 30m, we have detected H$_2^{18}$O\ in four objects (Fig.~\ref{f:30m}). The HDO 80~GHz line is detected in the same four objects, and the 225~GHz line in three of them. The CH$_3$OH\ line is seen in all eight sources, and the SO$_2$\ line in all but one. Tables~\ref{t:flux} and~\ref{t:width} list the integrated intensities and widths of the lines, obtained through Gaussian fits to the profiles. 
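For a Gaussian profile, the tabulated quantities are linked by $\int T_{\rm MB}\,dV = \sqrt{\pi/(4\ln 2)}\;T_{\rm peak}\,\Delta V \approx 1.064\,T_{\rm peak}\,\Delta V$. As a quick sanity check, the sketch below applies this to the HDO $3_{12}$--$2_{21}$ entry for AFGL 2591 in Table~\ref{t:jcmt}:

```python
import math

# Gaussian line: integrated flux = sqrt(pi / (4 ln 2)) * T_peak * FWHM
factor = math.sqrt(math.pi / (4.0 * math.log(2.0)))   # ~1.0645

T_peak, dV = 0.10, 2.83   # K and km/s (HDO 3_12-2_21 toward AFGL 2591)
flux = factor * T_peak * dV

print(f"factor = {factor:.4f}, flux = {flux:.3f} K km/s")
```

The result, 0.301~K~km~s$^{-1}$, agrees with the tabulated $0.308\pm0.048$ well within its uncertainty.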
Note that for strong lines, calibration dominates the uncertainty on the line flux, while spectral noise dominates for weak lines. The spectra also show a few unexpected lines. In W33A, the $4_{04}$--$3_{03}$ line of formamide (NH$_2$CHO) at 84542.4~MHz was detected with $T_{\rm MB}$\ = 42~mK and $\Delta$\textit{V}\ = 5.6~km~s$^{-1}$. Next to the H$_2^{18}$O\ line, the $3_{30}$--$2_{21}$, $3_{31}$--$2_{21}$ and $3_{30}$--$2_{20}$ lines of dimethyl ether (CH$_3$OCH$_3$) at 203420, 203410 and 203384~MHz are detected with $T_{\rm MB}$\ = 0.21, 0.23 and 0.25~K and $\Delta$\textit{V}\ = 4.6, 4.4 and 4.9~km~s$^{-1}$. The $3_{30}$--$2_{20}$ line of CH$_3$OCH$_3$ is also detected toward NGC 7538 IRS1, with $T_{\rm MB}$\ = 0.12~K and $\Delta$\textit{V}\ = 3.7~km~s$^{-1}$. With the JCMT, we have detected three HDO lines in AFGL 2591, and one in NGC 7538 IRS1 (Fig.~\ref{f:jcmt}; Table~\ref{t:jcmt}). Upper limits (1$\sigma$) of $T_{\rm MB}$\ = 0.24, 0.35 and 0.17~K on 0.625~MHz channels were obtained for the 464~GHz line toward W33A, AFGL 2136 and S140 IRS1. For the 225~GHz line, upper limits of $T_{\rm MB}$\ = 38~mK were found for S140 IRS1 and NGC 7538 IRS9. \citet{helmich:hdo} report a tentative detection of the 464~GHz line toward W3 IRS5 and upper limits on the 225 and 241 GHz lines; \citet{schreyer:gl490} set an upper limit to the 464~GHz emission from AFGL 490. The JCMT spectra of HDO have a lower signal-to-noise ratio and spectral resolution than the IRAM 30m spectra, and the line positions and widths in Table~\ref{t:jcmt} are too uncertain to extract kinematic information. The central velocities and the widths of the HDO lines in the 30m spectra (Table~\ref{t:width}) are consistent with the values for the molecular envelopes of these objects, derived from C$^{17}$O and C$^{34}$S spectra \citep{vdtak:massive}. 
In contrast, the H$_2^{18}$O\ lines are 30 -- 90\% broader than the HDO lines in the same sources (Table~\ref{t:width}), which may be an indication that part of the H$_2^{18}$O\ emission arises in the molecular outflows of these sources. Only in W33A is the fitted width of the H$_2^{18}$O\ line smaller than that of the HDO lines, but for this source, the H$_2^{18}$O\ line is blended with other lines (see Fig.~\ref{f:30m}), making its width hard to measure. Evidence for a contribution to the observed emission by outflows is even more pronounced in the SO$_2$\ and CH$_3$OH\ spectra, which have higher signal-to-noise than those of HDO and H$_2^{18}$O. The profiles of SO$_2$\ in three sources and of CH$_3$OH\ in seven show low-level emission at high velocities (Table~\ref{t:width}). We have fitted these profiles with the sum of two independent Gaussians: a narrow one corresponding to the `envelope' component seen in C$^{17}$O and C$^{34}$S, and a broader one (Fig.~\ref{f:meth}) which we attribute to the outflow. We find widths for the broad components between 4.7~km~s$^{-1}$\ in AFGL 2591 and 12.8~km~s$^{-1}$\ in W3 IRS5; the source-averaged width is 7.1~km~s$^{-1}$. The broad component is blueshifted from the narrow one in all sources except S140 IRS1, which ties in with the tendency for blueshifted mid-infrared absorption of CO \citep{mitchell:episodic}, which also arises in outflows. The fraction of the line flux carried by the broad CH$_3$OH\ component ranges from 16\% in S140 IRS1 to 58\% in AFGL 2591, and is 39\% on average. These fractions are comparable with the values of 10 -- 50\% found for CS, SO and SO$_2$\ in these sources \citep{vdtak:sulphur}, which may depend on excitation energy or beam size. Previous CH$_3$OH\ spectra of these sources, which had lower signal-to-noise ratios and lower spectral resolution, did not show line wings, although they did show wings for two other, similar objects \citep{vdtak:meth}. 
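The two-component decomposition can be sketched with a standard nonlinear least-squares fit. The example below uses synthetic data with widths representative of the narrow and broad components quoted above; it is an illustration, not the fitting code actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, a1, v1, w1, a2, v2, w2):
    """Sum of a narrow 'envelope' and a broad 'outflow' Gaussian (w = FWHM)."""
    s1, s2 = w1 / 2.3548, w2 / 2.3548
    return (a1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - v2) / s2) ** 2))

# Synthetic profile: narrow 3 km/s core plus a weak, blueshifted 7 km/s wing
rng = np.random.default_rng(0)
v = np.linspace(-20.0, 20.0, 401)
truth = two_gauss(v, 1.0, 0.0, 3.0, 0.3, -1.0, 7.0)
spec = truth + rng.normal(0.0, 0.02, v.size)

p, _ = curve_fit(two_gauss, v, spec, p0=(1, 0, 3, 0.2, -1, 8))
flux_narrow = p[0] * abs(p[2])   # proportional to the integrated flux
flux_broad = p[3] * abs(p[5])
frac = flux_broad / (flux_narrow + flux_broad)
print(f"broad-component flux fraction: {frac:.2f}")
```

The recovered flux fraction of $\approx$0.4 matches the input model; on real spectra the same quantity gives the outflow fractions quoted in the text.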
The line profiles of H$_2^{18}$O\ and HDO do not have high enough signal-to-noise ratios to perform two-component fits. Therefore we assume that the outflow contribution estimated for CS, SO, SO$_2$\ and CH$_3$OH\ also holds for H$_2^{18}$O\ and HDO, which have similar excitation requirements and optical depths. Alternatively, instead of a double Gaussian fit, the outflow contribution may be estimated as the line flux at high velocities only. The result is somewhat less than half of that from a double Gaussian fit, or $\sim$10 -- 20\%. This value is probably consistent with the fraction of $\approx$50\% estimated for the o-H$_2^{16}$O ground-state line observed with SWAS (\citealt{snell:swas}; \citealt{boonman:models}), given the different excitation requirements. The presence of significant amounts of CH$_3$OH\ in the outflow raises the question of its desorption from grain mantles. Toward young low-mass protostars, the methanol emission of transitions with low energy levels is dominated by outflow gas \citep{bachiller:l1157}, indicating that grain mantle desorption by shocks is at work. Higher-excitation methanol lines are broad toward some low-mass protostars and narrow toward others, indicating that shocks or thermal desorption may dominate in the warmer regions of the envelope (\citealt{maret:ch3oh}; \citealt{jorgensen:methanol}). The present data show that for high-mass protostars, too, radiation and shocks both play a relevant role in releasing ice mantles from dust grains. \begin{figure}[tb] \begin{center} \includegraphics[width=7cm,angle=0]{3937f3.ps} \caption{Line profiles of CH$_3$OH, observed with the IRAM 30m, with two-component fits overplotted. For W3 IRS5, the SO$_2$\ line is plotted instead of the CH$_3$OH\ line.} \label{f:meth} \end{center} \end{figure} \subsection{Excitation of HDO} \label{s:txc} For the sources where several HDO lines are detected, we have estimated the excitation temperature using rotation diagrams. 
This method, described in detail by \citet{blake:orion} and \citet{helmich:w3}, assumes that the lines are optically thin and describes the molecular excitation by a single temperature, the `rotation temperature'. However, this temperature is only meaningful if all the data refer to the same physical volume. The beam sizes of our observations range from 12 to 30$''$, and it is important to consider the effect of beam dilution. Indeed, if the HDO emission from W33A, AFGL 2591 and NGC 7538 IRS1 were extended on scales as large as 30$''$, the upper-state column densities of the higher-excitation lines would be larger than those of the low-excitation lines, implying an infinite or negative excitation temperature. Since such non-thermal excitation is unlikely, the data must be corrected for the effect of a finite source size. The size of the HDO emission can be estimated for AFGL 2591 and NGC 7538 IRS1, where the 225~GHz line has been measured both with the IRAM 30m and the JCMT. The emission is about twice as bright in the IRAM 30m beam, suggesting a compact source size. Therefore we assume a source size of 12$''$ for the HDO in W33A, AFGL 2591 and NGC 7538 IRS1, and correct the 80~GHz line fluxes upward by (30/12)$^2$. This factor is much larger than any plausible optical depth effect on the 80~GHz line, especially since the 464~GHz line, which lies lower in excitation, is expected to have a larger optical depth. Statistical equilibrium calculations indeed indicate optical depths of $\sim$10$^{-2}$ for the excitation temperatures and brightness levels of HDO found here. Table~\ref{t:txc} lists the assumed sizes for all sources where HDO has been detected, and the resulting HDO excitation temperatures. The data do not rule out source sizes $<$12$''$, and the assumed size may be regarded as an upper limit. 
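The rotation-diagram step reduces to a straight-line fit of $\ln(N_u/g_u)$ against $E_{\rm up}$, whose slope is $-1/T_{\rm rot}$, and the beam-dilution correction discussed above is simply the factor $(30/12)^2 = 6.25$ applied to the 80~GHz flux. The sketch below uses synthetic upper-state columns at the HDO excitation energies; the input numbers are illustrative, not the measured values:

```python
import numpy as np

def rotation_temperature(E_up_K, Nu_over_gu):
    """Fit ln(Nu/gu) = a - E_up / T_rot and return T_rot in K."""
    slope, _ = np.polyfit(E_up_K, np.log(Nu_over_gu), 1)
    return -1.0 / slope

# Synthetic diagram at T_rot = 110 K, at roughly the HDO excitation energies
E_up = np.array([22.0, 47.0, 95.0, 168.0])   # K
T_true = 110.0
Nu_gu = 1e12 * np.exp(-E_up / T_true)        # arbitrary normalization

dilution = (30.0 / 12.0) ** 2                # 6.25, applied to the 80 GHz flux
print(f"T_rot = {rotation_temperature(E_up, Nu_gu):.1f} K")
```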
Smaller source sizes would influence all lines equally, though, and not change the excitation temperature estimates. In the case of W3 IRS5, the only firm detection of HDO is the 80~GHz line. The observational limits on the 225, 241 and 464~GHz lines indicate $T_{\rm ex}$=85--115~K, but do not constrain the source size. \begin{table} \caption{Sizes and excitation temperatures of the HDO emission, \new{derived from the single-dish observations.} } \label{t:txc} \begin{tabular}{lcc} \hline \hline Source & Size & $T_{\rm ex}$ \\ & $''$ & K \\ \hline W3 IRS5 & ... & 85 -- 115 \\ W33A & 12 & 110$\pm$58 \\ AFGL 2591 & 12 & 117$\pm$57 \\ NGC 7538 IRS1 & 12 & 108$\pm$56 \\ \hline \end{tabular} \end{table} The excitation temperatures found for HDO may be used as a first clue to its chemical origin by comparison with SO$_2$\ (a product of shock chemistry), CH$_3$OH\ (a product of ice evaporation), and C$_2$H$_2$\ (a product of hot gas-phase reactions). The excitation temperatures of HDO are similar to the values of 50 -- 200~K derived for CH$_3$OH\ \citep{vdtak:meth} and SO$_2$\ \citep{vdtak:sulphur}, measured in (sub-)mil\-li\-me\-ter\ emission in 14 -- 18$''$ beams. These excitation temperatures are lower limits to the kinetic temperature of the emitting gas, but this \cut{effect} probably has little effect on the comparison with HDO since the molecules have similar dipole moments. The excitation temperatures are much lower than the values of 500 -- 1000~K measured in mid-infrared absorption of C$_2$H$_2$\ \citep{lahuis:hcn}, as expected for two reasons. First, the C$_2$H$_2$\ molecule does not have a permanent dipole moment, and its excitation temperature directly reflects the kinetic temperature of its surroundings. Second, C$_2$H$_2$\ is seen in absorption, which in these centrally condensed envelopes preferentially probes smaller radii, where the temperature is higher.
The excitation temperatures of \new{HCN 14~$\mu$m\ and } H$_2$O\ 6.2~$\mu$m\ absorption of 300 -- 500~K for these sources \citep{boonman:h2o} are between the values for HDO emission and C$_2$H$_2$\ absorption, suggesting that the two effects contribute about equally. The HDO, CH$_3$OH, SO$_2$\ and C$_2$H$_2$\ in these objects are thus located in warm (several 100~K) gas, such as found in the inner envelopes and outflow shocks of protostars. The exact temperature is hard to estimate because of the above caveats, so that we have searched for trends instead. Even here, the data appear inconclusive (Fig.~\ref{f:txc}). Further discussion of the origin of the HDO is deferred until after the radiative transfer model in \S~\ref{s:disc}. \begin{figure}[tb] \begin{center} \includegraphics[width=5cm,angle=0]{3937f4.ps} \caption{Excitation temperatures of HDO plotted versus values of SO$_2$\ (bottom), CH$_3$OH\ (middle) and C$_2$H$_2$\ (top). The low $T_{\rm ex}$\ of C$_2$H$_2$\ in W33A may be affected by a high continuum optical depth at 14~$\mu$m. } \label{f:txc} \end{center} \end{figure} \subsection{Column densities of HDO and H$_2$O } \label{s:cold} Knowing the excitation conditions, we derive molecular column densities from the observed line strengths. For HDO, these follow directly from the rotation diagrams. For H$_2$O, these come from the H$_2^{18}$O\ data, assuming optically thin H$_2^{18}$O\ emission with the same excitation temperature as the HDO in that source. We use an oxygen isotope ratio of $^{16}$O/$^{18}$O=500 and assume an ortho/para ratio of 3 for H$_2$O, as expected for warm gas. The resulting column densities (Table~\ref{t:cold}) are uncertain by a factor of $\approx$2, mainly through the uncertain excitation temperature. 
The sensitivity of $N$(H$_2$O) to $T_{\rm ex}$\ is such that increasing $T_{\rm ex}$\ from 110 to 220~K increases the derived column density just slightly, while decreasing $T_{\rm ex}$\ from 110 to 60~K almost doubles it, and further lowering $T_{\rm ex}$\ leads to implausibly high H$_2$O\ column densities (Figure~\ref{f:nh2o}). In addition, if the source size is smaller than the 12$''$ assumed above, the column density estimates would increase. \begin{figure}[tb] \begin{center} \includegraphics[width=5cm,angle=-90]{3937f5.ps} \caption{Sensitivity of the derived $p$-H$_2^{18}$O\ column density to the adopted excitation temperatures, for a line flux of 1.0~K\,km~s$^{-1}$.} \label{f:nh2o} \end{center} \end{figure} For the four sources where HDO and H$_2^{18}$O\ were not detected, the noise levels of the spectra imply limits on the column densities. Assuming $T_{\rm ex}$=110~K and $\Delta$\textit{V}=3.0~km~s$^{-1}$, the 3$\sigma$ limits \new{in an 11$''$ beam } are $N$(HDO)$<$6.5$\times$10$^{13}$~cm$^{-2}$\ and $N$(H$_2$O)$<$6.4$\times$10$^{16}$~cm$^{-2}$, which for both species is a factor of $\approx$2 below the weakest detection. In the cases of AFGL 490, S140 IRS1 and NGC 7538 IRS9, the non-detection of HDO and H$_2^{18}$O\ emission ties in with non-detections of H$_2$O\ 6.2~$\mu$m\ absorption (\citealt{boonman:h2o}; \citealt{schreyer:gl490}) and low column densities of warm H$_2$\ as traced by $^{13}$CO 4.7~$\mu$m\ absorption (\citealt{mitchell:hot+cold}; \citealt{mitchell:gl490}). \cut{For AFGL 490, H$_2$O\ 6.2~$\mu$m\ data are not available, but the column density of warm H$_2$O and CO is also low \citep{mitchell:gl490}.} In contrast, AFGL 2136 does have high H$_2$O\ and H$_2$\ column densities measured in mid-infrared absorption, but this gas must be compact, since the H$_2$\ column density from (sub-)mil\-li\-me\-ter\ data is low \citep{vdtak:sulphur}.
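The isotope and spin-state scaling used above, and the sensitivity of the derived column to $T_{\rm ex}$, can be sketched as follows. This is a simplified model, assuming an optically thin line, a partition function $Q \propto T^{1.5}$ appropriate for an asymmetric rotor, and $E_{\rm u} \approx 204$~K for the upper level of the 203~GHz H$_2^{18}$O\ line.

```python
import math

def total_h2o_column(n_para_h218o, o16_o18=500.0, ortho_para=3.0):
    """Total H2O column from the para-H2^18O column, using the
    16O/18O isotope ratio and an ortho/para ratio of 3."""
    return n_para_h218o * o16_o18 * (1.0 + ortho_para)

def relative_column(t_ex, e_upper=204.0):
    """Relative total column implied by a fixed observed line flux:
    N_tot proportional to Q(T) exp(E_u/T), with Q ~ T^1.5.
    Only ratios between two temperatures are meaningful."""
    return t_ex ** 1.5 * math.exp(e_upper / t_ex)
```

With these scalings, raising $T_{\rm ex}$\ from 110 to 220~K increases the implied column by only $\sim$10\%, while lowering it from 110 to 60~K increases the column by a factor of $\sim$1.9, reproducing the qualitative behaviour described above.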
\new{The derived molecular column densities may be used as a clue to the source geometry by comparing them with values measured in mid-infrared absorption. } The H$_2$O\ column densities in Table~\ref{t:cold} are consistent with the values from ISO 6.2~$\mu$m\ absorption \citep{boonman:h2o} to within factors of a few. This situation is similar to that for CO and dust, where (sub-)mil\-li\-me\-ter\ data indicate column densities $\sim$3--5$\times$ higher than mid-infrared data \citep{vdtak:massive}. \new{Therefore, if these species are spherically distributed around the central star, this region must be extended on the scales of the single-dish beams } (diameter ${_>\atop{^\sim}}$1$''$, corresponding to ${_>\atop{^\sim}}$2000~AU at $d$=2~kpc), \cut{although this conclusion depends on the geometry of the emitting region.} \new{Alternatively, these molecules do not have spherically symmetric distributions. } In contrast, for SO$_2$\ and HCN, mid-infrared absorption lines indicate $\sim$100$\times$ higher column densities than (sub-)mil\-li\-me\-ter\ emission lines (\citealt{keane:so2}; \citealt{boonman:hcn}). These molecules must have distributions as compact as ${_<\atop{^\sim}}$0\farcs1 (${_<\atop{^\sim}}$200~AU) which may or may not be spherical. Additional constraints on the source geometry come from the interferometer data (\S~\ref{s:bure_lines}). 
\begin{table*} \caption{Column densities of HDO and H$_2$O\ in an 11$''$ beam, \new{derived from the single-dish observations.} } \label{t:cold} \begin{tabular}{lccccr} \hline \hline Source & HDO & H$_2$O & H$_2$O $^a$ & $\bar{T}$ $^d$ & HDO/H$_2$O \\ & 10$^{14}$ cm$^{-2}$ & 10$^{17}$ cm$^{-2}$ & 10$^{17}$ cm$^{-2}$ & K & 10$^{-4}$ \\ \hline W3 IRS5 & 0.3 -- 0.6$^b$ & 2.6 -- 5.0$^c$ & 3 & 33 & 1 \\ W33A & 6.9 & 1.4 & $<$8 & 20 & 70 \\ AFGL 2591 & 2.3 & 3.0 & 4 & 28 & 8 \\ NGC 7538 IRS1 & 2.7 & 1.3 & $<$5 & 25 & 30 \\ \hline \end{tabular} \medskip {\scriptsize a}: From ISO 6.2~$\mu$m\ absorption \citep{boonman:h2o} in a pencil beam. {\scriptsize b}: Values for source sizes of 30$''$ and 12$''$. {\scriptsize c}: Values for $T_{\rm ex}$=110 and 60~K. {\scriptsize d}: Mass-weighted envelope temperature from \citet{vdtak:massive}. \end{table*} The $N$(HDO) / $N$(H$_2$O) ratios (Table~\ref{t:cold}, column~6) are consistent with the limits on solid-state HDO/H$_2$O\ obtained for these same sources \citep{dartois:hdo}, and similar to the values measured for `hot core' type regions (\citealt{jacq:hdo}; \citealt{gensheimer:h2o}). The ratios correspond to enhancements of the HDO/H$_2$O\ ratio over the elemental abundance ratio (D/H=1.5$\times$10$^{-5}$; \citealt{linsky:d}) of 7 for W3 IRS5, 50 for AFGL 2591, 200 for NGC 7538 IRS1 and 470 for W33A. The enhancement level shows a correlation with the mass-weighted average envelope temperature $\bar{T}$ of these sources \citep{vdtak:massive}, listed in the fifth column of Table~\ref{t:cold}, in the sense that colder sources have higher HDO/H$_2$O\ ratios. The decrease of the HDO enhancement from 470 in W33A ($\bar{T}$=20~K) to 7 in W3 IRS5 ($\bar{T}$=33~K) is the combined result of a decrease in $N$(HDO) by a factor of 16 and an increase in $N$(H$_2$O) by a factor of 4. 
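The quoted enhancement factors follow directly from dividing the HDO/H$_2$O\ ratios by the elemental D/H ratio; a one-line sketch:

```python
D_H_ISM = 1.5e-5  # elemental D/H ratio (Linsky 1998)

def d_enhancement(hdo_over_h2o):
    """Enhancement of the HDO/H2O ratio over the elemental D/H ratio."""
    return hdo_over_h2o / D_H_ISM
```

For example, the W3 IRS5 ratio of 1$\times$10$^{-4}$ gives an enhancement of $\approx$7, and the W33A ratio of 70$\times$10$^{-4}$ gives $\approx$470, as quoted above.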
The HDO/H$_2$O ratios in Table~\ref{t:cold} are 10 -- 100$\times$ higher than the equilibrium value at the estimated gas temperatures ($\sim$few 100~K), which suggests that the HDO (and H$_2$O) molecules are a remnant from an earlier, colder phase of the protostars (as in the case of low-mass protostars; see \S~\ref{ss:lmpo}). A natural explanation is that HDO (and H$_2$O) molecules are sublimated from the grain mantles in the warm region where the dust temperature exceeds 100~K, the ice sublimation temperature. In this scenario, the coldest sources would also be the youngest, where gas-phase reactions occurring in the warm region containing the sublimated ices have had less time to bring the HDO/H$_2$O ratio back down to the equilibrium ratio at $\geq$100~K. \subsection{Interferometric continuum images} \label{s:bure_cont} \begin{figure}[tb] \includegraphics[width=8cm,angle=0]{3937f6.ps} \caption{Maps of the continuum emission of AFGL 2591 made with the IRAM interferometer. Contours are drawn every 8~mJy/beam at 205~GHz (top) and every 3~mJy/beam at 81~GHz (bottom). \cut{The strong 81~GHz source is VLA1; the strong 205~GHz source is VLA3} \new{Source nomenclature and synthesized beam size are indicated}. } \label{fig:cont_maps} \end{figure} Figure~\ref{fig:cont_maps} presents continuum maps of AFGL 2591, made by gridding and Fourier transforming the IRAM interferometer data and deconvolving with the Clean algorithm. Using uniform weighting, the size (FWHM) of the synthesized beam is (1.33$\times$1.07)$''$ at PA=47$^\circ$ at 205~GHz and (4.60$\times$3.38)$''$ at PA=74$^\circ$ at 81~GHz. The rms noise level of the maps is 0.34~mJy/beam at 81~GHz and 1.3~mJy/beam at 205~GHz. \begin{table*}[tb] \caption{Positions, deconvolved sizes, and strengths of continuum sources in the AFGL 2591 region, detected with the interferometer.
Numbers in brackets denote uncertainties in units of the last decimal.} \label{t:bure_c} \begin{tabular}{lcccccc} \hline \hline Component$^a$ & R.A. (J2000) & Dec. (J2000) & Major axis & Minor axis & Pos.~angle & Flux density \\ & hh mm ss & dd mm ss & arcsec & arcsec & deg. & Jy \\ \hline \multicolumn{7}{c}{\it 81 GHz} \\ \\ VLA1 & 20:29:24.5507(10) & 40:11:15.250(10) & 2.64(4) & 2.36(6) & --22(8) & 0.061(1) \\ VLA2 & 20:29:24.4795(50) & 40:11:22.460(46) & ... & ... & ... & 0.009(1) \\ VLA3 & 20:29:24.8917(30) & 40:11:19.687(28) & ... & ... & ... & 0.016(2) \\ \\ \multicolumn{7}{c}{\it 205 GHz}\\ \\ VLA1 & 20:29:24.5721(28) & 40:11:14.669(29) & 2.13( 9) & 1.29( 9) & 90(4) & 0.061(2) \\ VLA2 & 20:29:24.4394(49) & 40:11:23.505(51) & 3.24(17) & 2.21(16) & 90(5) & 0.065(4) \\ VLA3 & 20:29:24.8694( 3) & 40:11:19.498( 5) & 1.08( 2) & 0.85( 1) & 00(3) & 0.194(1) \\ \hline \end{tabular} \medskip {\scriptsize a}: Nomenclature from \citet{trinidad:gl2591}. \end{table*} The 205~GHz map shows three sources, and Table~\ref{t:bure_c} lists their properties, derived by fitting two-dimensional Gaussians to the \textit{u,v} data. The strongest, Eastern source (VLA3) coincides with the `dust peak' and the infrared source AFGL 2591. The South-Western source VLA1 is a compact H~{\sc II}\ region which dominates the field at frequencies ${_<\atop{^\sim}}$100~GHz. The weakest, North-Western source coincides with feature `VLA2' in low-frequency ($<$10~GHz) VLA maps. Our 81~GHz map (Fig.~\ref{fig:cont_maps}, bottom) shows the same three features, but their emission is blended due to the lower angular resolution. These results are consistent with previous mapping at similar frequencies with the OVRO interferometer \citep{vdtak:gl2591}. However, the Bure data have higher sensitivity, making these the first detections of the NW and SW H~{\sc II}\ regions at frequencies $>$200~GHz. 
On the other hand, the OVRO data have higher resolution, so that the infrared source and the H~{\sc II}\ regions are well separated at 87~GHz. Source VLA2 is visible in the OVRO 87~GHz data, but not firmly detected. To study the physical nature of the continuum sources, we calculate their millimeter-wave spectral indices $\gamma$, defined as $S_\nu \propto \nu^\gamma$. To do so, we combine the 87~GHz data from OVRO with the 205~GHz data from IRAM. The \textit{uv} coverage of these telescopes is similar, and the effect of `missing flux' on their comparison is probably small. The result is $\gamma$=0.0$\pm$0.05 for VLA1, $\gamma$=$+$2.1$\pm$0.2 for VLA2 and $\gamma$=$+$2.7$\pm$0.1 for VLA3. These values indicate optically thin free-free emission for VLA1, optically thick dust or free-free emission in VLA2, and optically thin dust emission in VLA3. The brightness temperatures of the 205~GHz sources are 0.7~K for VLA1, 0.3~K for VLA2 and 6.1~K for VLA3. These values are much lower than the expected physical temperatures of either ionized gas or dust, and indicate either a low optical depth or a small filling factor. For the `dust' source VLA3, we calculate the mass from the observed 205~GHz flux density, assuming a dust temperature of 100~K, \new{a standard gas-to-dust ratio of 100 } and a mass opacity of 0.9~cm$^2$ per gram of dust (\citealt{ossenkopf:opacities}; \citealt{henning:massive}). The result is 0.8~M$_{\odot}$, which is inversely proportional to the assumed temperature. Interestingly, the spectral index of VLA3 indicates a value of the dust opacity index of $\beta \approx 1$, which is smaller than the `usual' value of $\approx$2 and suggests grain growth. This process is thought to occur in circumstellar disks, which is not inconsistent with the observed elongated shape of VLA3.
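The spectral-index and dust-mass estimates above can be reproduced with a short script. This is a sketch under the stated assumptions (optically thin dust, $T_d$=100~K, $\kappa_\nu$=0.9~cm$^2$ per gram of dust, gas-to-dust ratio 100, $d$=1~kpc); the 87~GHz flux in the test is synthetic.

```python
import math

H = 6.62607015e-34     # Planck constant (J s)
K_B = 1.380649e-23     # Boltzmann constant (J/K)
C = 2.99792458e8       # speed of light (m/s)
M_SUN = 1.989e30       # solar mass (kg)
PC = 3.0857e16         # parsec (m)

def spectral_index(s1_jy, nu1_hz, s2_jy, nu2_hz):
    """gamma in S_nu ∝ nu^gamma from two flux densities."""
    return math.log(s2_jy / s1_jy) / math.log(nu2_hz / nu1_hz)

def planck(nu_hz, t_K):
    """Planck function B_nu (W m^-2 Hz^-1 sr^-1)."""
    return 2 * H * nu_hz**3 / C**2 / (math.exp(H * nu_hz / (K_B * t_K)) - 1.0)

def dust_mass_msun(s_jy, nu_hz, d_pc, t_dust=100.0,
                   kappa_dust=0.09, gas_to_dust=100.0):
    """Gas mass from optically thin dust emission:
    M = gas_to_dust * S_nu * d^2 / (kappa_nu * B_nu(T_d)).
    kappa_dust = 0.09 m^2/kg = 0.9 cm^2 per gram of dust."""
    s_si = s_jy * 1e-26            # Jy -> W m^-2 Hz^-1
    d_m = d_pc * PC
    return gas_to_dust * s_si * d_m**2 / (kappa_dust * planck(nu_hz, t_dust)) / M_SUN
```

For VLA3 (0.194~Jy at 205~GHz, $d$=1~kpc) this gives $\approx$0.8~M$_{\odot}$, as quoted above.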
The very compact 43~GHz emission from ionized gas seen by \citet{vdtak:qband} may then originate in the ionized inner part of the disk, a disk atmosphere, or a disk wind (\citealt{hollenbach:photevap}; \citealt{lugo:photevap}). \subsection{Interferometer observations of H$_2$O, HDO and SO$_2$\ line emission} \label{s:bure_lines} \begin{figure}[tb] \includegraphics[width=8cm,angle=0]{3937f7.ps} \caption{Interferometric maps of line emission toward AFGL 2591 at the central velocity. For HDO (top), first contour and contour step are 30~mJy/beam. For H$_2^{18}$O\ (middle) and SO$_2$\ (bottom), first contour is 0.15~Jy/beam and contour step is 0.3~Jy/beam.} \label{fig:line_maps} \end{figure} Figure~\ref{fig:line_maps} shows the maps of the HDO, H$_2^{18}$O\ and SO$_2$\ line emission observed with Bure. The beam sizes are the same as those of the continuum maps at that frequency. The rms noise levels of the line maps are 9~mJy/beam for HDO, 23~mJy/beam for H$_2^{18}$O, and 50~mJy/beam for SO$_2$. In the case of SO$_2$, the noise is limited by dynamic range problems. The line maps show compact emission, coincident with the `dust peak' of AFGL 2591. Columns 2--6 of Table~\ref{t:bure_l} list the position, size and shape of the emission, obtained by fitting 2D Gaussians to the \textit{u,v} data. Figure~\ref{fig:bure_spectra} shows spectra of the line emission, taken at the peak positions of the images. Columns 7--9 of Table~\ref{t:bure_l} list the central positions, widths, and peak strengths of the lines, obtained by fitting Gaussian profiles to the spectra at the image maxima. The central velocities of the lines are consistent with the values measured at the 30-m telescope (Table~\ref{t:width}). The width of the SO$_2$\ line is consistent with the single-dish value, while the H$_2^{18}$O\ line is 23\% broader and the HDO line 50\% broader. 
Within the errors, all of the single-dish flux is recovered by the interferometer: apparently, both telescopes trace the same gas, which has a compact (${_<\atop{^\sim}}$1$''$) distribution. \new{Given the constraints on the source geometry from the comparison of single-dish and infrared column density estimates (\S~\ref{s:cold}), we conclude that the H$_2^{18}$O\ and HDO emitting regions have sizes of $\sim$1$''$.} \begin{figure}[tb] \includegraphics[width=8cm,angle=0]{3937f8.ps} \caption{Interferometric spectra of line emission toward AFGL 2591 taken at the image maxima.} \label{fig:bure_spectra} \end{figure} \begin{table*}[tb] \caption{Positions, deconvolved sizes, velocities, and strengths of emission lines detected toward AFGL 2591 with the interferometer. Numbers in brackets denote uncertainties in units of the last decimal. } \label{t:bure_l} \begin{tabular}{lllllrrrr} \hline \hline Line & R.A. (J2000) & Dec. (J2000) & Major axis & Minor axis & Pos.~angle & V$_{\rm LSR}$\ & $\Delta$\textit{V}\ & Peak $T_B$ \\ & hh mm ss & dd mm ss & arcsec & arcsec & deg & km~s$^{-1}$ & km~s$^{-1}$ & K \\ \hline HDO & 20:29:24.872(11) & 40:11:19.48(10) & ${_<\atop{^\sim}} 1$ & ${_<\atop{^\sim}} 1$ & ...& --4.92(5) & 3.63(10) & 2.05( 5) \\ H$_2^{18}$O\ & 20:29:24.8691(6) & 40:11:19.493(8) & 0.81(2) & 0.71(3) & --47(12) & --5.29(3) & 4.12( 7) & 24.26(36) \\ SO$_2$\ & 20:29:24.8688(4) & 40:11:19.363(6) & 1.00(2) & 0.91(2) & --28( 9) & --5.16(2) & 5.44( 5) & 53.93(40) \\ \hline \end{tabular} \end{table*} \subsection{Velocity structure of the line emission} \label{s:velo} Channel maps of H$_2^{18}$O, HDO and SO$_2$\ do not show clear changes of the emitting structure with velocity, but the signal to noise ratio of the Bure line data is high enough to locate the emission peak to a fraction of a synthesized beam width. 
Therefore, to study the velocity structure of the compact molecular gas, we have determined the position of the emission peak in each channel by a fit to the \textit{(u,v)} data, and Fig.~\ref{f:velo} shows the result. The H$_2^{18}$O\ and SO$_2$\ lines show a clear velocity gradient, where the redshifted gas is located in the South-West and the blueshifted gas in the North-East. The HDO shows the same trend, but not as clearly due to the lower angular resolution. In each case, the gradient is smooth, which suggests that the emission traces a physically coherent structure. Since the central position of the line emission coincides with that of the dust continuum peak, and its central velocity with that of the large-scale molecular envelope, the most plausible origins of the velocity gradient are outflowing motions in a bipolar cavity, or rotation in a circumstellar disk. \begin{figure}[tb] \includegraphics[height=12cm,angle=0]{3937f9.ps} \caption{Position of the emission peak in AFGL 2591 in each channel, as offset from the phase center of the Bure observations, for HDO (top), H$_2^{18}$O\ (middle) and SO$_2$\ (bottom). Colour coding corresponds to velocity offset: red = redshifted (--V$_{\rm LSR}$=2.5 -- 4~km~s$^{-1}$), green = line center (--V$_{\rm LSR}$=4.5 -- 6~km~s$^{-1}$), blue = blueshifted (--V$_{\rm LSR}$=6.5 -- 8~km~s$^{-1}$). \new{The arrow in the bottom panel indicates the orientation of the large-scale CO outflow.} } \label{f:velo} \end{figure} \new{One central prediction of disk accretion models of low-mass star formation is that the outflow axis is perpendicular to the disk plane. } The orientation of the structure seen in Figure~\ref{f:velo} does not agree with that of the large-scale outflow which is known to emanate from AFGL 2591. The position angle of the velocity gradient is 39$^\circ$ in H$_2^{18}$O\ and 67$^\circ$ in SO$_2$\ (measured East from North). 
The value for HDO is 13$^\circ$, but this number is uncertain due to the large scatter in the data points, and not inconsistent with the value for H$_2^{18}$O. In contrast, single-dish CO 3--2 and HCO$^+$ 4--3 mapping \citep{hasegawa:gl2591} shows an outflow of size 90$\times$20$''$, embedded in an arcminute-scale outflow seen before in lower-$J$ CO lines (e.g., \citealt{lada:gl2591}). The East-West orientation agrees with the positions of Herbig-Haro objects and spots of shock-excited H$_2$\ emission (\citealt{tamura:gl2591}; \citealt{poetzel:h-h}). On smaller scales, VLBI observations of H$_2$O\ masers by \citet{trinidad:gl2591} show a shell-like structure which they interpret as an outflow cavity. The spots are spread over 0.02$''$, in an elongated structure oriented about 20$^\circ$ West from North. On the other hand, if the line emission seen with Bure is due to a disk, the inclination is probably close to face-on. For example, the Gaussian fits presented in Table~\ref{t:bure_l} imply axis ratios of 0.90$\pm$0.02 for H$_2^{18}$O\ and SO$_2$, corresponding to an inclination of (26$\pm$3)$^\circ$. A face-on disk is also consistent with the much higher outflow velocities seen in CO mid-IR absorption than in mm emission. The outflow is thus directed almost along the line of sight. The total magnitude of the velocity gradient is 4.6~km~s$^{-1}$\ over an \cut{angle} \new{offset } of 0.3$''$. Adopting a distance of 1~kpc and assuming the above inclination angle, the implied rotation period is $\sim$1000~yr. The central star has a mass of $\approx$16~M$_{\odot}$, based on the luminosity of the region \citep{vdtak:qband}. For this orbital period and stellar mass, Kepler's third law implies an orbital radius of 250~AU, which is within a factor of 2 of the measured value. This agreement does not prove that the line emission originates in a rotating disk, but it does indicate that the gas motion is controlled by stellar gravity.
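The inclination and Kepler estimates above can be checked with a few lines, working in solar-system units where Kepler's third law reads $a^3 = M P^2$ with $a$ in AU, $M$ in M$_{\odot}$ and $P$ in yr:

```python
import math

def inclination_deg(axis_ratio):
    """Inclination of an intrinsically circular disk from the
    projected minor/major axis ratio (0 deg = face-on)."""
    return math.degrees(math.acos(axis_ratio))

def kepler_radius_au(m_star_msun, period_yr):
    """Orbital radius from Kepler's third law, a^3 = M P^2."""
    return (m_star_msun * period_yr ** 2) ** (1.0 / 3.0)
```

An axis ratio of 0.90 gives $i \approx 26^\circ$, and $M$=16~M$_{\odot}$\ with $P$=1000~yr gives $a \approx 250$~AU, matching the numbers quoted above.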
Higher resolution observations are necessary to resolve the velocity field of the compact line emission. \section{Radiative transfer analysis} \label{s:radtrans} Dividing the single-dish column densities in Table~\ref{t:cold} by the $N$(H$_2$) values based on (sub-)mil\-li\-me\-ter\ observations of dust continuum and C$^{17}$O lines in $\approx$15$''$ beams \cut{\citep{vdtak:sulphur}} \new{(Table~\ref{tab:samp}) } leads to beam-averaged abundances of 2$\times$10$^{-10}$ -- 1$\times$10$^{-9}$ for HDO and 2$\times$10$^{-7}$ -- 2$\times$10$^{-6}$ for H$_2$O. \cut{Our data may thus sample $\sim$100\% of gas of the sort seen with SWAS, which has an H$_2$O\ abundance of $\sim$10$^{-7}$, or $\sim$0.1\% of the gas seen with ISO where H$_2$O\ is a major carrier of oxygen and has an abundance of $\sim$10$^{-4}$.} On the other hand, the interferometer data of AFGL 2591 indicate a H$_2$O\ column density of \pow{3.4}{19}~cm$^{-2}$, assuming \new{optically thin emission with } $T_{\rm ex}$=100~K, while the 205~GHz continuum indicates $N$(H$_2$)=\pow{2.1}{24}~cm$^{-2}$\ for a dust temperature of 100~K. The H$_2$O/H$_2$\ column density ratio of \pow{1.6}{-5} is ${_>\atop{^\sim}}$10$\times$ higher than estimated from the single-dish data. To resolve this discrepancy, we have run radiative transfer models of the line emission. \subsection{Model description} \label{s:hst} The column density ratios evaluated in the previous section are crude estimates of the molecular abundances because they do not take excitation gradients along the line of sight into account. In protostellar envelopes and disks, where density and temperature vary over orders of magnitude, these gradients are very significant. Furthermore, column densities and their ratios do not give insight into the location of the gas along the line of sight. 
To estimate more accurate abundances, the line emission of H$_2^{18}$O\ and HDO in our sources has been modeled with the Monte Carlo radiative transfer program of \citet{hvdt:hst}\footnote{\tt http://www.mpifr-bonn.mpg.de/staff/fvandertak/ratran/}. The application of this program to H$_2$O\ has been explicitly tested in a dedicated benchmark campaign \citep{vdtak:h2o-benchmark}\footnote{\tt http://www.mpifr-bonn.mpg.de/staff/fvandertak/H2O/}. Spectroscopic and collisional input for the modeling comes from the molecular database by \citet{schoeier:moldata}\footnote{\tt http://www.strw.leidenuniv.nl/$\sim$moldata/}. Besides collisional excitation, dust radiation is taken into account using grain properties from \citet{ossenkopf:opacities}, Model~5. Radial profiles of the temperature and density of the envelopes of our sources were determined by \citet{vdtak:massive} based on single-dish maps of dust continuum and molecular line emission at (sub-)mil\-li\-me\-ter\ wavelengths. \subsection{Constant-abundance models} \label{s:mres} We first assume that H$_2$O\ and HDO are distributed evenly over the protostellar envelopes. Table~\ref{t:abun} reports the results of these models. Stringent convergence criteria had to be applied, as the upper (emitting) level of the H$_2^{18}$O\ line has a relative population of only $\sim$10$^{-7}$ in the outer parts of the envelopes. Nevertheless, Figure~\ref{f:cnv_plot} shows that the calculations are well converged. We calculate an optical depth for the H$_2^{18}$O\ line of $\sim$10$^{-2}$. Maser action, as often observed in the H$_2$O\ 183~GHz line, does not occur at H$_2^{18}$O\ abundances of $\sim$10$^{-8}$. The derived abundances are factors of $\sim$30 higher than the line-of-sight averages from \S~\ref{s:cold}, due to the strong excitation gradients in these sources. 
\begin{figure}[tb] \includegraphics[width=6cm,angle=-90]{3937f10.ps} \caption{Excitation temperature of the H$_2$O\ 3$_{13}$--2$_{20}$ line as a function of radius for the AFGL 2591 model. The curves for the last few iteration stages in the Monte Carlo calculation coincide, which shows that the simulation has converged.} \label{f:cnv_plot} \end{figure} \begin{table} \caption{Envelope-averaged abundances$^a$ of HDO and H$_2$O\ relative to H$_2$, derived from radiative transfer models.} \label{t:abun} \begin{tabular}{lccc} \hline \hline Source & HDO & H$_2$O \\ & 10$^{-8}$ & 10$^{-5}$ \\ \hline W3 IRS5 & 1 & 0.8 \\ W33A & 3 & 1.0 \\ AFGL 2591 & 2 & 6.0 \\ NGC 7538 IRS1 & 3 & 0.8 \\ \hline \end{tabular} \medskip {\scriptsize a}: Average value over envelope \end{table} \subsection{Jump models} \label{s:jres} Although the constant-abundance models in the previous section fit the strength of the H$_2^{18}$O\ line and the average strength of the HDO lines well, they have several shortcomings. First, they predict FWHM sizes of the H$_2^{18}$O\ and HDO emission in AFGL 2591 of 4--5$''$, significantly larger than the observed ${_<\atop{^\sim}}$1$''$. Second, the fit residuals of HDO are correlated with energy level, in the sense that low-excitation lines tend to be overproduced and high-excitation lines underproduced. Both effects suggest that H$_2$O\ and HDO are not distributed evenly throughout the sources, but have enhanced abundances in the warm inner envelopes. Single-dish spectra and interferometer maps of H$_2$CO and CH$_3$OH \citep{vdtak:meth} and of SO and SO$_2$ \citep{vdtak:sulphur} show the same effect. Assuming that the water is produced by evaporation of icy grain mantles, we have run `jump' models for H$_2^{18}$O\ in AFGL 2591. The parameters of this model are the abundance in the warm inner region and the temperature at which the ice evaporates. 
Laboratory studies indicate that the evaporation temperature lies in the 90 -- 110~K range, depending on ice composition and structure (\citealt{pontoppidan:co_ice}; \citealt{fraser:co_ice}). With only one transition of H$_2^{18}$O\ observed, it is not possible to constrain both parameters simultaneously, so we have initially fixed the boundary of the inner region at the 100~K point. The water abundance is assumed negligible outside this radius, which for AFGL 2591 lies at 2000~AU. The upper level of the observed transition is too high to set useful limits on the water abundance in the outer envelope. We find that the observed line flux and source size are reproduced for H$_2$O/H$_2$\ = 1.4$\times$10$^{-4}$ which represents a major fraction of the available oxygen. Alternatively, the ice mantles may evaporate at a somewhat different temperature. We have run a model with the H$_2$O\ abundance in the warm gas fixed at 2$\times$10$^{-4}$, and varied the radius of the inner region to match the data. The best-fit model of this kind has the ice evaporating at $T$=115~K. We consider both H$_2$O\ jump models plausible; multi-line observations of H$_2^{18}$O\ are needed to rule out either model. \new{This result is consistent with modeling of the SWAS data of AFGL 2591 \citep{boonman:models} which indicates an evaporation temperature of 90 -- 110~K.} \new{The present data do not constrain the abundance of H$_2$O/H$_2$\ in the outer envelope very well. \citet{boonman:models} derive an upper limit of $\sim$10$^{-8}$ from a combined analysis of ISO-SWS, ISO-LWS and SWAS data which cover a range of energy levels. However, \cut{these data suffer from lack of spectral resolution, and} such a low H$_2$O\ abundance would imply an HDO/H$_2$O\ ratio of $\sim$unity in the outer envelope, which is implausibly high for this type of object. 
Our best-fit model to the SWAS data has H$_2$O/H$_2$\ $\sim$10$^{-6}$ in the outer envelope, inconsistent with the results by Boonman et al., but uncertain because it is based on only one transition. Clearly, \textit{Herschel}-HIFI data are needed to settle this issue. } For HDO, `jump' models were run for each source except W3 IRS5, where too few lines were observed to constrain such models. In these models, the jump occurs at the fixed location of $T$=100~K, and the HDO abundances inside and outside this radius are allowed to vary independently. The `jump' models reproduce all the single-dish line fluxes to within 50\%, which is about the expected error margin. Table~\ref{t:jump} reports the results of these models. For AFGL 2591, the size of the 81~GHz line emission measured with Bure acts as an extra constraint. The best-fit constant-abundance model predicts an emitting region of $\approx$5$''$ FWHM, whereas the jump model predicts $\approx$2$''$, consistent with the measured value, so that this latter model is favoured. The `jump' models indicate that the HDO/H$_2$O\ ratio is \cut{$\approx$\pow{1.4}{-3}} \new{$\approx$\pow{5}{-4} } in the inner region. This value corresponds to an enhancement over the interstellar D/H ratio by a factor of 100, and is consistent with the observational limits on the HDO/H$_2$O\ ratio in the solid state. We conclude that the bulk of the HDO and H$_2$O\ seen with the 30m telescope and the Plateau de Bure interferometer is evaporated ice. \begin{table*} \caption{Abundances of HDO and H$_2$O\ in the inner and outer envelopes of the sources, derived from radiative transfer models.} \label{t:jump} \begin{tabular}{lrrcccc} \hline \hline Source & \multicolumn{2}{c}{HDO/H$_2$} & \multicolumn{2}{c}{H$_2$O/H$_2$} & \multicolumn{2}{c}{HDO/H$_2$O} \\ & \multicolumn{2}{c}{10$^{-9}$} & \multicolumn{2}{c}{10$^{-4}$} & \multicolumn{2}{c}{10$^{-4}$} \\ & inner & outer & inner & outer & inner & outer \\ \hline W33A & 200 & 10 & ... & ... & ... & ...
\\ AFGL 2591 & 100 & 4 & 1.4 -- 2 $^a$ & 10$^{-2}$ -- 10$^{-4}$ $^b$ & 5 & 40 -- 4000 \\ NGC 7538 IRS1 & 100 & 20 & ... & ... & ... & ... \\ \hline \end{tabular} \medskip {\scriptsize a}: Values for ice evaporating at $T$=100~K and $T$=115~K. {\scriptsize b}: From \citet{boonman:models}. \end{table*} \section{Discussion} \label{s:disc} \subsection{Geometry of AFGL 2591: A massive circumstellar disk?} \label{ss:geom} An important result from this study is that the H$_2^{18}$O\ 203~GHz emission from AFGL 2591 is very compact (800~AU diameter) and coincident within the errors with the continuum emission from VLA3. \cut{From the 1.3~mm continuum data, we estimate a mass of 0.8~M$_{\odot}$, assuming a dust temperature $T_d$ of 100~K and a mass opacity of the dust of 0.9~cm$^2$ per gram of dust \citep{ossenkopf:opacities}. For $T_d$=50 and 200~K, the mass estimates are 1.8 and 0.4~M$_{\odot}$. These masses are of order} \new{The 1.3~mm continuum data indicate a mass of 0.8~M$_{\odot}$, which is $\approx$}5\% of the stellar mass of $\approx$16~M$_{\odot}$\ \citep{vdtak:qband}. One possible interpretation of these data is that we are observing the circumstellar disk surrounding the central star of AFGL 2591. The mass ratio seems plausible for a young protostar and given the observed axis ratios of the line and continuum emission (Tables~\ref{t:bure_c} and~\ref{t:bure_l}), this putative disk is likely to be close to face-on (inclination 26 -- 38 degrees). This inclination is compatible with the spectral energy distribution of the source in the near- to mid-infrared range (e.g., \citealt{whitney:geometry}). However, there are certainly other possible interpretations of the data, at least as far as the line emission is concerned. Another significant result from the interferometer observations is that neither H$_2^{18}$O\ nor SO$_2$\ is distributed in a spherically symmetric fashion around the central star. 
Figure~\ref{f:velo} shows that the red and blue wings of the lines are offset from one another along a roughly NE -- SW direction. The orientation seems to be somewhat different for water and SO$_2$. While one clearly needs much better angular resolution to interpret the data, we conclude that the velocity gradient in Fig.~\ref{f:velo} is probably due to either disk rotation or the effect of the interaction of an outflow with the sides of a cavity (presumably directed toward us). SO$_2$\ is a known outflow indicator \citep{schilke:345survey} and, more generally, is found associated with shocked gas. It seems likely that in a wide-angle wind inclined at a moderate angle to the line of sight, different outflow tracers will show different orientations in a representation such as that of Fig.~\ref{f:velo}. Such differences in orientation may particularly occur if the jet driving the outflow were to be precessing, which VLBI observations of the H$_2$O\ 22~GHz maser indicate for AFGL 2591 \citep{trinidad:gl2591}. Although higher resolution imaging is required to settle this issue, we note that the column densities of H$_2$O, SO$_2$\ and H$_2$\ are well above the values in other massive molecular outflows \citep{beuther:outflow}, making this outflow scenario implausible. Near-infrared speckle imaging of AFGL 2591 by \citet{preibisch:afgl2591} supports our picture of the geometry of the source. These images (as well as older ones) show several loops of emission extending due West from the central source, with major axes of $\approx$10$''$ and axis ratios of $\approx$3. This emission presumably traces a limb-brightened cavity around the approaching outflow lobe. Since the outflow is expected to be perpendicular to the disk, the Western orientation of the loops is nicely consistent with the N--S orientation of the 205~GHz continuum emission. The near-infrared images thus indicate that the Western part of the disk is tilted away from us.
The axis ratios of the loops are larger than that of the disk in the Bure images, as expected because the outflow is an intrinsically elongated structure, unlike the disk. The proposed outflow orientation is also consistent with the other observations mentioned in \S~\ref{s:velo}. In these data, the general direction of the outflow is East-West, with some diversity among tracers as expected in the proposed pole-on orientation of the system. It is also worth noting that the source-averaged $N$(H$_2$O) toward AFGL 2591, corrected to a 0$''$.8 source size, is 5$\times$10$^{19}$~cm$^{-2}$, two orders of magnitude higher than the value derived from ISO 6~$\mu$m\ data \citep{boonman:h2o}. This difference again suggests an asymmetrical distribution of material around the central source as, for example, in a disk geometry. Along the line of sight to the central source, one observes through the outflow cavity, but column densities are much larger in the disk or perhaps (see above) toward the cavity edges where outflow and inner envelope interact. In this scenario, the high excitation temperatures derived from the ISO data indicate that the cavity is hot (several 100~K). \begin{figure}[tb] \includegraphics[height=9cm,angle=-90]{3937f11.ps} \caption{Sketch of the inner part of the AFGL 2591 region, as projected on the sky, with the observational characteristics of each physical component indicated. At a distance of 1~kpc, 1$''$ corresponds to 1000~AU.} \label{f:view} \end{figure} \new{Figure \ref{f:view} summarizes our combined interpretation of the new and the existing observations of the central region of AFGL 2591. The circumstellar disk and the molecular outflow are embedded in a large-scale molecular envelope, observed as the low-velocity single-dish emission and mid-infrared absorption. For clarity, this large-scale envelope is not drawn here, but it is depicted in Fig.~11 of \cite{vdtak:gl2591}, along with additional large-scale features.
Note that part of the cm-wave emission observed with the VLA may arise in the base of the outflow.} \subsection{Chemistry of water} \label{ss:chem} It is also of great interest that the `jump' models in Section~\ref{s:jres} show that our observations are consistent with the idea that the H$_2^{18}$O\ emission traces gas with a temperature above 100~K. Moreover, the H$_2$O\ abundances derived from these models are \cut{(to within a factor of 2)} equal to \new{or a few times higher than } those estimated for H$_2$O\ ice in these sources (\citealt{vdishoeck:faraday}; \citealt{boonman:models}). Thus, we believe that the 3$_{13}$--2$_{20}$ transition of H$_2^{18}$O\ can be used to trace the behaviour of high temperature gas where water ice has evaporated. Since the interferometer recovers all single-dish H$_2^{18}$O\ line flux, we conclude that the H$_2^{18}$O\ emission appears to be an excellent tracer of the inner $\sim$1000~AU of protostars. In our model of AFGL 2591, the mass inside the 100~K point is 0.2~M$_{\odot}$, similar to the mass derived from the Bure continuum data. The warm-up of the central region to 100~K is evidently sufficiently recent that gas-phase chemistry has not had time to modify the abundances substantially. The H$_2$O/H$_3$O$^+$\ ratio in W3~IRS5 is close to the equilibrium value of 1000 \citep{phillips:h3o+}. For W33A, AFGL 2136, AFGL 2591, S140 IRS1 and NGC 7538 IRS9, our unpublished JCMT observations of the H$_3$O$^+$\ 364~GHz line indicate upper limits of $T_{\rm MB}$$<$0.13 -- 0.21~K. Because of the large Einstein A coefficient of the transition, its excitation temperature is probably significantly below that of H$_2$O\ and HDO. Assuming $T_{\rm ex}$=25~K and an ortho/para ratio of 2, our upper limits on $N$(H$_3$O$^+$) are (4--7)$\times$10$^{13}$~cm$^{-2}$. These numbers correspond to lower limits on the H$_2$O/H$_3$O$^+$\ ratio of 2000 in W33A and 6000 in AFGL 2591. 
Thus, in these sources, gas-phase chemistry seems not to have had time to return the H$_2$O/H$_3$O$^+$\ ratio to its equilibrium value since the evaporation of the grain mantles. \def$t_{\rm ion}${$t_{\rm ion}$} The time scale $t_{\rm ion}$\ to reach chemical equilibrium between H$_2$O\ and H$_3$O$^+$\ can be estimated by realizing that H$_3$O$^+$\ is produced in reactions between H$_2$O\ and molecular ions, in particular HCO$^+$, He$^+$ and H$_3^+$. Destruction of H$_3$O$^+$\ is by dissociative recombination, which re-forms water in 25\% of cases, but makes OH in 60\% and O in 15\% \citep{jensen:h3o+}. We thus estimate $t_{\rm ion}$\ as $(\alpha_L n_X)^{-1}$, where $\alpha_L$ is the Langevin reaction rate coefficient of $\sim$10$^{-9}$ cm$^{3}$~s$^{-1}$, and the concentration of molecular ions $n_X$ is given by the balance of cosmic-ray ionization on the one hand and reactions with CO and O on the other. Models by \citet{vdtak:zeta} indicate $n_X \sim 10^{-4}$~cm$^{-3}$, so that $t_{\rm ion}$$\sim$\pow{3}{5}~yr. We conclude that the evaporation of grain mantles in AFGL 2591 has taken place less than $\sim$0.1~Myr ago. The time scale may be even shorter, given the mass loss rate of $\sim$10$^{-4}$~M$_{\odot}$\,yr$^{-1}$ measured in the CO outflow \citep{hasegawa:gl2591}. Assuming that the 0.8~M$_{\odot}$\ disk accretes at the same rate, the disk lifetime is only $\sim$10$^4$~yr. \new{This value is similar to the age estimate of \pow{3}{4}~yr from multi-species chemical modeling of the envelope of AFGL 2591 \citep{doty:massive}.} \begin{figure}[tb] \includegraphics[height=9cm,angle=0]{3937f12.ps} \caption{Filled squares: Visibility amplitude of H$_2^{18}$O\ line emission observed toward AFGL 2591 with the Plateau de Bure interferometer, integrated over velocity and binned.
Superposed are model points for constant abundance (open squares) and for the `jump' model (open circles).} \label{f:uvmodel} \end{figure} It may be asked whether the Monte Carlo treatment of radiative transfer in \S~\ref{s:radtrans} is useful given our uncertainty about the geometry. In fact, we think that our spherically symmetric model is adequate to determine the mass of warm ($>$100~K) gas necessary to explain the observations, and this mass is, for an essentially optically thin line, geometry independent. However, understanding line profiles as well as observations at still higher angular resolution will require a more sophisticated treatment such as axisymmetric modeling. \new{ Figure~\ref{f:uvmodel} illustrates the limitations of our models by comparing them to the interferometer data in the \textit{uv} plane. Although both models reproduce the total flux, and the jump model also reproduces the source size estimated through Gaussian fits, neither model reproduces the observations in detail. This discrepancy hints at the existence of additional geometrical structure which is not present in the model. The two models predict different source sizes, but the same overall emission shape, probably because the H$_2^{18}$O\ line is a tracer of warm gas. } Finally, our HDO data and the implied HDO/H$_2$O\ abundance ratios are interesting in combination with the result from the `jump' models that the gas-phase water abundance is consistent with material which has recently evaporated off grains. Thus the observed HDO/H$_2$O\ should reflect the ratio of these species in the solid state and indeed, our result is consistent with current limits on HDO ice. \subsection{Comparison with low-mass protostars} \label{ss:lmpo} The observations of H$_2^{18}$O presented here have shown that in the studied high-mass protostars the water emission likely originates in the envelopes, and that the H$_2$O abundance jumps in the inner warm region where the grain mantles sublimate.
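The abundance-jump structure invoked throughout this analysis amounts to a simple step profile in radius. The following sketch is our own illustration: the power-law temperature profile and all parameter values are assumptions chosen only to make the geometry concrete, not fitted values from the models.

```python
# Minimal sketch of the step ("jump") abundance profile used in the
# radiative-transfer models: the H2O (or HDO) abundance jumps at the
# radius where the dust temperature reaches 100 K.  The power-law
# temperature profile and all numbers here are illustrative assumptions.

def temperature(r_au, t_in=300.0, r_in=100.0, q=0.4):
    """Assumed power-law envelope temperature T(r) = t_in * (r/r_in)^-q."""
    return t_in * (r_au / r_in) ** (-q)

def abundance(r_au, x_inner=1e-4, x_outer=1e-8, t_jump=100.0):
    """Abundance relative to H2: x_inner where T >= t_jump, else x_outer."""
    return x_inner if temperature(r_au) >= t_jump else x_outer

# radius where the assumed T(r) crosses 100 K: r_in * (t_in/100)^(1/q)
r_100k = 100.0 * (300.0 / 100.0) ** (1.0 / 0.4)
print(f"T = 100 K at r ~ {r_100k:.0f} AU")
print(abundance(0.5 * r_100k), abundance(2.0 * r_100k))
```

Only the jump radius and the two abundance levels enter the line modelling; the sharpness of the step is an idealization of ice evaporation.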
A similar analysis of the ISO-LWS spectra of two low-mass protostars (\citealt{ceccarelli:16293}; \citealt{maret:h2o}) shows that also in those cases, a jump in the water abundance at approximately the radius where the dust temperature reaches 100~K is needed to explain the observed far-infrared water line spectrum. However, the water abundance in the warm gas is strikingly different: $\sim$10$^{-4}$ in the high-mass, and $\sim$3$\times$10$^{-6}$ in the low-mass protostars, respectively. Mid-infrared spectra of low-mass protostars give solid water abundances of $\sim$10$^{-4}$ as for high-mass protostars \citep{pontoppidan:ice_map}. It is not obvious why the water abundance in the warm gas should be only a few percent of the ice abundance. As \S~\ref{s:jres} discusses, the exact ice evaporation temperature depends somewhat on the ice composition and structure, but not enough to make a factor of 100 difference. Unless the water abundances in the studied low-mass protostellar envelopes are affected by the large (80$''$) ISO-LWS beam, it seems that the break-down of evaporated water ice is faster around low-mass than around high-mass stars. This conclusion is somewhat surprising since water is destroyed by ion-molecule chemistry and the few available data suggest that if anything, the ionization rate is higher around high-mass than around low-mass stars \citep{vdtak:catania}. In the future, \textit{Herschel}-HIFI data will be helpful to settle this issue. HDO emission, at practically the same frequencies as in the present work, has been observed towards the low-mass protostar IRAS 16293-2422 \citep{parise:hdo}. Similarly to the present work, the authors analyzed the observations with an abundance jump model. They found that the abundance of HDO has a jump from a value $\leq 10^{-9}$ in the outer envelope, where the dust temperature is lower than 100~K, to $\sim 10^{-7}$ in the inner envelope.
The abundance found in the region of sublimated ices is therefore very similar in high- and low-mass protostars. However, the HDO/H$_2$O ratio is substantially different. In high-mass protostars it is at most $3\times 10^{-3}$, whereas in the low-mass protostar IRAS 16293-2422 it is $\sim 0.03$, namely ten times larger and close to the observational limit on solid HDO \citep{parise:solid}. This difference is as expected from recent measurements of extreme molecular deuteration around low-mass protostars, where doubly and triply deuterated molecules have been detected with fractionations of a few percent (for references, see \citealt{vdtak:catania}). The degree of deuteration in high-mass protostellar envelopes is much lower, starting from the failure to detect H$_2$D$^+$ in massive protostars (\citealt{pagani:h2d+}; \citealt{stark:h2d+}). Possibly, for high-mass protostars, the very cold and dense pre-collapse phase where the CO freezes out onto the grain mantles lasts only a short time. The present measurement of HDO/H$_2$O in high-mass protostars, compared with the value found in IRAS 16293-2422, confirms this hypothesis: high-mass protostars indeed show a lower degree of water deuteration. The water ice on grain surfaces is laid down in relatively low density molecular cloud gas (10$^3$ -- 10$^4$~cm$^{-3}$) where H~atoms are as abundant as O~atoms and thus there is a relatively high probability of H$_2$O\ forming subsequent to O sticking to a grain. After formation, the ice presumably stays frozen until the dust is heated up by protostellar radiation. It is the temperature of this `primordial' low-density molecular cloud gas that counts for the chemistry of HDO/H$_2$O. The present observations suggest that this temperature is higher in the Giant Molecular Clouds producing high-mass stars than in low-mass star-forming regions such as Taurus and Ophiuchus.
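Before concluding, it is worth recapping that the time scales invoked in \S~\ref{ss:chem} follow from order-of-magnitude arithmetic on numbers already quoted there; the short check below is our own illustration, not new modelling.

```python
# Order-of-magnitude check of the time scales quoted in the text.
YR = 3.156e7                 # seconds per year

alpha_l = 1e-9               # Langevin rate coefficient, cm^3 s^-1
n_x = 1e-4                   # molecular-ion density, cm^-3
t_ion = 1.0 / (alpha_l * n_x) / YR
print(f"t_ion ~ {t_ion:.1e} yr")                 # ~3e5 yr, as in the text

m_disk = 0.8                 # M_sun, mass from the Bure continuum
mdot = 1e-4                  # M_sun/yr, CO outflow mass-loss rate
print(f"disk lifetime ~ {m_disk / mdot:.0e} yr")  # ~1e4 yr
```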
\section{Conclusion and Outlook} \label{s:conc} The chemical composition of star-forming matter depends on both temperature and time. For over a decade, people have tried to use the time dependence to estimate chemical ages of star-forming regions. These estimates, the so-called `chemical clocks', remain unreliable, probably due to uncertainties in the initial conditions as well as in the physical structure (cf.\ \citealt{vdtak:catania}). This paper has shown that the temperature dependence may instead be used to apply `chemical filters'. In particular, we have used the H$_2$O\ molecule to image the material at $T>100$~K, filtering out the surrounding cooler material. The success of this filter lies in the evaporation of icy grain mantles, which enhances the H$_2$O\ gas-phase abundance by two orders of magnitude at $T>100$~K. By using this chemical filter, we have shown that the dust and water inside 2000~AU from the central star in the AFGL 2591 region is asymmetrically distributed. Most likely, the observations trace a circumstellar disk of diameter 800~AU which rotates at about Keplerian speed. The result of the SO$_2$\ observations is qualitatively the same as for H$_2^{18}$O, but differs in the details. In this case, the `chemical filter' may be a bit leaky because the SO$_2$\ abundance in the large-scale envelope is non-negligible, and because SO$_2$\ is also abundant in the bipolar outflow of AFGL 2591. Furthermore, comparison of our results with those for the prototypical low-mass system IRAS 16293 shows that the H$_2$O\ abundance in the warm gas around young high-mass stars is much higher, but the HDO/H$_2$O\ ratio much lower than around low-mass protostars. In the future, observations with the PACS camera and the HIFI spectrometer onboard the \textit{Herschel} space observatory will further refine this picture \citep{walmsley:paris}.
Observations of multiple H$_2$O\ and H$_2^{18}$O\ lines will constrain the excitation and ortho/para ratio of water much better than is possible from ground-based data. The high sensitivity will allow us to measure the water abundance in each chemical zone (disk/outflow/envelope), not only around high-mass protostars, but also around lower-mass objects. \new{Given our estimated outflow contribution of 10 -- 20\% to the H$_2^{18}$O\ and HDO lines in HIFI-sized beams, the contribution from outflows to H$_2^{16}$O spectra from HIFI will probably be much larger, which should be considered in the planning of the HIFI observations. } New, deep searches for H$_3$O$^+$\ in high-mass protostellar envelopes with APEX, HIFI and ALMA would also be useful. Observations of the H$_2^{18}$O\ and SO$_2$\ lines in AFGL 2591 on longer baselines are necessary to resolve the velocity field and test other possibilities such as a binary system, as seen in the W3~(H$_2$O) source by \citet{wyrowski:w3(h2o)}. \begin{acknowledgement} We thank the staffs of the IRAM 30m, JCMT, and Plateau de Bure telescopes for assisting with the observations, especially Jan Martin Winters and J\'er\^ome Pety at IRAM Grenoble. The JCMT data were taken in collaboration with Ewine van Dishoeck and Annemieke Boonman at Leiden Observatory. Holger M\"uller at the University of Cologne kindly provided H$_2$O\ term values to spectroscopic accuracy. \end{acknowledgement} \bibliographystyle{aa}
\section{Introduction} One of the essential steps in the construction of any algorithm for multi-particle final states is the appropriate analysis of the phase space parametrization. In the {\tt PHOTOS} Monte Carlo \cite{Barberio:1994qi} for multi-photon production, an exact phase space parametrization is embodied in an iterative algorithm, the details of which are best described in \cite{Nanava:2006vv}. Control of the distributions and relative size of sub-samples for distinct numbers of final state particles requires a precise knowledge of the matrix elements including virtual corrections as well. In the {\tt KKMC} Monte Carlo, the phase space generation is different, but control of the matrix elements is also essential \cite{kkcpc:1999,Jadach:2000ir}. Iterative procedures for parts of amplitudes, which are at the foundation of exponentiation \cite{Jadach:2000ir,Yennie:1961ad} and structure functions \cite{Altarelli:1977zs,Gribov:1972ri,Gorshkov:1966ht,Skrzypek:1992vk,RichterWas:1985gi}, were exploited for use in the {\tt KKMC} Monte Carlo. In particular, the description of dominantly $s$-channel processes $e^+e^- \to \nu_e \bar \nu_e \gamma \gamma $, where $t$-channel $W$-exchange diagrams with gauge boson couplings contribute to the matrix elements, provides an interesting example \cite{Was:2004ig}. These studies were motivated by practical reasons, but also pointed at quite astonishing properties of tree-level spin amplitudes, namely that they can be separated into gauge invariant parts in a semi-automated way, easy to apply in the Kleiss-Stirling methods \cite{Kleiss:1985yh,Jadach:1998wp}. One could ask the question whether similar techniques can be used in QCD, whether they are of any practical use, and in fact to which degree they were already included in previous publications. These questions will be discussed elsewhere \cite{vanHameren,vanHameren:2008dy}.
We will not elaborate on these points, which require a good understanding of factorization in QCD. Instead, let us point to the old, but (for me) important, ref.~\cite{Berends:1982ie}, where the properties of factorization for the cross section manifest themselves in a fully differential environment, even though only for QED and at first order of the perturbation expansion. For the sake of caution, let us mention the existence of limitations in such strategies, if applied to parton shower applications beyond NLO \cite{Kleiss:1990jv}. Our presentation is organized as follows. In \Section{SecNotation} we will discuss different aspects of the phase space parametrization, as used in the {\tt PHOTOS} Monte Carlo, and how it compares to other programs. The discussion of the approximations necessary to construct crude distributions is also started in \Section{SecNotation}. The presentation of the form of the first-order cross section, the matrix elements and the approximations which were essential for the construction of the first version of the program is given in \Section{SecCommAnti}. With all material collected, we will point in \Section{SecSingProd} to mathematical properties of elements used in the project, which actually made it possible, even though their documentation was never of high priority until now. The summary in \Section{SecSummary} closes the paper. \section{Phase space\label{SecNotation}} It is no surprise that phase space must play a central role in the preparation of the algorithm of any Monte Carlo based on predictions originating from field theory. That is a direct consequence of Quantum Mechanics: the basic formula for the cross section consists of the phase space element, the matrix element squared and the flux factor. Over many years we have been stressing, in a multitude of talks and papers, that the control of the eventual approximations is essential. Let me recall here one of S. Jadach's plots, see Fig. \ref{Fig1}.
\begin{figure} \begin{center} \epsfig{file=flow-mcgen2.eps,width=0.45\linewidth} \caption{Phase space plot for the KKMC and KORALZ Monte Carlo programs.\label{Fig1}} \end{center} \end{figure} At that time it was an achievement \cite{koralz4:1994,kkcpc:1999}. It required an enormous amount of work to prepare an organization of the phase space that would be exact, cover the complete multibody phase space, and be capable of managing highly peaked distributions of complex structure due to collinear and soft singularities. As these programs are discussed elsewhere in the proceedings, let us follow here the phase space organization of another program originating from the S. Jadach group, that is the {\tt PHOTOS} Monte Carlo\footnote{The most detailed description of the program \cite{Barberio:1990ms,Barberio:1994qi} can be found in the recent ref.~\cite{Nanava:2006vv}.}. It is also capable of covering multibody phase space distributions without any approximation, but contrary to the {\tt KKMC/KORALZ} solutions, the conformal symmetry of the eikonal approximation is not used. Thanks to that, this solution is closer to the iterative solutions used in QCD parton showers, but is still relatively simple to explain and formalize. Let us start with the explicit expression for the parametrization of an $n+1$ body phase space in the decay of an object of four-momentum $P$\; ($P^2=M^2$), as used in the {\tt PHOTOS} Monte Carlo. As our aim is to define iterative relations, let us denote the four-momenta of the first $n$ decay products as $k_i$ ($i=1,\dots,n$) and the last, $(n+1)$-th, decay product as $k_{n+1}$. In our case the $(n+1)$-th particle will always be the real and massless photon\footnote{However, the construction does not rely on the photon being massless. In principle it can be applied to define other phase space relations, for example the emission of an extra massive pion or the emission of a pair of heavy particles.}.
In the later steps of our construction the masslessness of photons and the properties of QED matrix elements will be used. In the following, the notation from refs. \cite{Was:1994kg,Jadach:1993hs} will be used. We will not rely on any particular results of these papers. We only point to other, similar options for the exact $n$-body phase space parametrizations, which are also in use. The Lorentz invariant phase space is defined as follows: \begin{eqnarray} dLips_{n+1}(P) &=& {d^3k_1 \over 2k_1^0 (2\pi)^3}\; . . .\;{d^3k_n \over 2k_n^0 (2\pi)^3} {d^3k_{n+1} \over 2k_{n+1}^0 (2\pi)^3} (2\pi)^4 \delta^4\Bigl(P - k_{n+1}- \sum_{i=1}^n k_i\Bigr)\nonumber\\ &=& d^4p\delta^4(P -p-k_{n+1}){d^3k_{n+1} \over 2k_{n+1}^0 (2\pi)^3} {d^3k_1 \over 2k_1^0 (2\pi)^3} \;. . .\;{d^3k_n \over 2k_n^0 (2\pi)^3} (2\pi)^4 \delta^4\Bigl(p -\sum_{i=1}^n k_i\Bigr)\nonumber\\ &=& d^4p\delta^4(P -p-k_{n+1}){d^3k_{n+1} \over 2k_{n+1}^0 (2\pi)^3} dLips_n(p\to k_1 ... k_n), \label{Lips_n+1} \end{eqnarray} where extra integration variables, the four components of $p$ (compensated with $\delta^4\bigl(p -\sum_1^n k_i\bigr)$), are introduced. If further $M_{1...n}$ (compensated with $\delta\bigl(p^2 -M_{1...n}^2\bigr)$) is introduced, the element of the phase space takes the form: \begin{eqnarray} dLips_{n+1}(P) &=& {dM_{1...n}^2 \over (2\pi)} dLips_2(P \to p\ k_{n+1}) \times dLips_n(p \to k_1 ... k_n)\nonumber\\ &=& dM_{1...n}^2 \biggl[d\cos\hat{\theta} d\hat{\phi} {1 \over 8(2\pi)^3} {\lambda^{1\over 2}(M^2, {M_{1...n}^2 },{m_{n+1}^2 })\over M^2}\biggr] \times dLips_n(p \to k_1 \dots k_n).\nonumber\\ \label{Lips_n+1.3} \end{eqnarray} The part of the phase space Jacobian corresponding to the integration over the direction and energy of the last particle (or equivalently the invariant mass $M_{1...n}$ of the remaining system of ${1...n}$ particles) is explicitly given. We will later use in the formulas $m_i^2=k_i^2$, and analogously $M_{i \dots n}$, defining the invariant masses of the $k_i \dots k_n$ systems.
The integration over the angles $\hat{\theta}$ and $\hat{\phi}$ is defined in the $P$ rest-frame. The integration over the invariant mass, $M_{1\dots n}$, is limited by the phase space boundaries. Anybody familiar with the phase space parametrization as used in {\tt FOWL} \cite{FOWL}, {\tt TAUOLA} \cite{Jadach:1993hs}, or many other programs will find the above explanation quite standard\footnote{Parametrizations of such a type use properties of the Lorentz group in an explicit manner, in particular the measure, representations and their products. That is why they are useful for event-building Monte Carlo programs in phase space constructions based on boosts and rotations.}. The question of the choice of axes with respect to which the angles are defined, and of the order in the kinematical construction, is less trivial. The choice of the particular option stems from the necessity to presample collinear singularities. It is rather well known that the choice of the reference directions for the parametrization of the unit sphere is free, and can be used to advantage. We will use a related, but somewhat different, freedom of choice. Instead of the variables $\hat{\theta}\; \hat{\phi}$ defining the orientation of $k_{n+1}$ in the $P$ rest-frame we will use the angles $\theta_1 \; \phi_1$ orienting $k_1$ (also in the $P$ rest-frame). The Jacobian for this reparametrization of the unit sphere equals unity. Formula (\ref{Lips_n+1.3}) can be iterated to provide a parametrization of the phase space with an arbitrary number of final state particles. In such a case, the question of the orientation of the frames used to define the angles and of the order of the $M_{i\dots n}$ integrations (consequently, the choice of limits for the $M_{i \dots n}$ integration) becomes particularly rich. Our choice is defined in ref. \cite{Barberio:1994qi}. We will not elaborate on this point here.
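For readers who prefer numbers, the two-body Jacobian factor appearing in square brackets in formula (\ref{Lips_n+1.3}) is easy to evaluate. The sketch below is our own illustration (it is not code from any of the programs discussed); for massless decay products it reduces to the familiar $1/(8(2\pi)^3)$.

```python
import math

def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2.0 * (a * b + a * c + b * c)

def two_body_factor(m_sq, m1n_sq, mn1_sq):
    """Density multiplying dM^2 dcos(theta) dphi in formula (Lips_n+1.3):
    lambda^(1/2)(M^2, M_{1..n}^2, m_{n+1}^2) / (8 (2 pi)^3 M^2)."""
    lam = kallen(m_sq, m1n_sq, mn1_sq)
    if lam < 0.0:                       # outside the physical region
        return 0.0
    return math.sqrt(lam) / (8.0 * (2.0 * math.pi) ** 3 * m_sq)

# massless products: lambda^(1/2) = M^2, so the factor is 1/(8 (2 pi)^3)
print(two_body_factor(1.0, 0.0, 0.0))
```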
If the invariant mass $M_{1\dots n}$ is replaced with the photon energy defined in the $P$ rest-frame, $k_\gamma$, then the phase space formula can be written as: \begin{eqnarray} dLips_{n+1}(P) &=& \biggl[ k_\gamma dk_\gamma d\cos\hat{\theta} d\hat{\phi} {1 \over 2(2\pi)^3} \biggr] \times dLips_n(p \to k_1 ... k_n). \label{Lips_n+1.5} \end{eqnarray} If we had $l$ photons accompanying the $n$ other particles, then the factor in square brackets would be iterated. The statistical factor ${1 \over l!}$ would complete the form of the phase space parametrization, similar to that of an exponential series. The last formula, supplemented with the definition of the frames with respect to which the angles are defined, is used to define the full kinematic configuration of the event. From the angles and energies ($k_{\gamma_i}$) of the photons, and also the angles, energies and masses of the other decay products, the four-momenta of all final state particles can be constructed. If in formula (\ref{Lips_n+1.5}) one used $dLips_n(P \to k_1 ... k_n)$ instead of $dLips_n(p \to k_1 ... k_n)$, the {\it \bf tangent space} would be obtained. Then the $k_{n+1}$ photon does not affect the other particles' momenta at all, and thus has no boundaries on energy or direction. If this formula were iterated, then all such photons would be independent of one another as well\footnote{Expression (\ref{Lips_n+1.5}) would be slightly more complicated if, instead of photons, a massive particle were to be added.}. Energy and momentum constraints on the photon(s) are introduced with the relation between the tangent and the real $n+1$-body phase space. The formula defining one step in the iteration reads as follows\footnote{The $ \{ \bar k_1,\dots,\bar k_{n}\}$ can be identified with the event before the radiation of $k_\gamma$ is introduced.}: \begin{eqnarray} && dLips_{n+1}(P\to k_1 ... k_n,k_{n+1})= dLips_{n}^{+1\; tangent} \times W^{n+1}_n, \nonumber\\[3mm] &&dLips_{n}^{+1\; tangent} = dk_\gamma d\cos\theta d\phi \times dLips_n(P \to \bar k_1 ...
\bar k_n), \nonumber \\ &&\{k_1,\dots,k_{n+1}\} = {\bf T}\bigl(k_\gamma,\theta,\phi,\{\bar k_1,\dots,\bar k_n\}\bigr). \label{Jacobians} \end{eqnarray} The $W^{n+1}_n$ depends on the details of ${\bf T}$, and will thus be given later in formula~(\ref{Wnn}). To justify (\ref{Jacobians}), we have to convolute formula (\ref{Lips_n+1.3}) for $Lips_{n+1}(P \to k_1 ... k_n,k_{n+1})$ with itself (for $Lips_{n}(p \to k_1 ... k_n)$): \begin{eqnarray} Lips_{n+1}(P \to k_1 ... k_n,k_{n+1}) &=& {dM_{1\dots n} ^2 \over 2\pi} Lips_{2}(P \to k_{n+1} p) \times Lips_{n}(p \to k_1 ... k_n) \nonumber \\ Lips_{n}(p \to k_1 ... k_n) &=& {dM_{2\dots n}^2 \over 2\pi} Lips_{2}(p \to k_1 p') \times Lips_{n-1}(p' \to k_2 ... k_n) \label{AA} \end{eqnarray} and use it also for $Lips_{n}(P \to \bar k_1 ... \bar k_n)$: \begin{eqnarray} Lips_{n}(P \to \bar k_1 ... \bar k_n) &=& {dM_{2\dots n}^2 \over 2\pi} Lips_{2}(P \to \bar k_1 \bar p') \times Lips_{n-1}(\bar p' \to \bar k_2 ... \bar k_n). \label{BB} \end{eqnarray} Note that our tangent space of variables $ dk_\gamma d\cos{\theta} d{\phi}$ is unbounded from above; the limit is introduced by $W_n^{n+1}$, which is set to zero for configurations outside the phase space. In principle, we should distinguish between variables like $M_{2\dots n} $ for the invariant mass of $k_2 \dots k_n$ and $\bar M_{2\dots n} $ for the invariant mass of $\bar k_2 \dots \bar k_n$, but in our choice for $G_n$, $G_{n+1}$ below, $M_{2\dots n}= \bar M_{2\dots n}$, and $M_{1\dots n} $ is defined anyway for the $n+1$-body phase space only. We direct the reader to refs.\cite{Barberio:1990ms,Barberio:1994qi} for an alternative presentation. Let us remark that formula (\ref{Jacobians}) is quite general; many options, motivated by the properties of the matrix elements, can be introduced. Generally, the transformation $T$ may differ quite a lot from choice to choice.
The most straightforward choice can be based on any $n$ and $n+1$ body phase space parametrizations using invariant masses and angles (e.g.\ exactly as in {\tt TAUOLA} \cite{Jadach:1993hs}, formulas 11 to 13). If \begin{equation} G_n \; : \; M_{2\dots n} ^2,\theta_{1},\phi_{1}, M_{3\dots n} ^2,\theta_{2},\phi_{2}, \dots, \theta_{n-1},\phi_{n-1} \; \to \;\bar k_1 \dots \bar k_n \label{G-1} \end{equation} and \begin{equation} G_{n+1} \;: \; k_\gamma,\theta,\phi,M_{2\dots n}^2,\theta_{1},\phi_{1}, M_{3\dots n} ^2,\theta_{2},\phi_{2},\dots, \theta_{n-1},\phi_{n-1} \; \to \;k_1 \dots k_n,k_{n+1} \label{G-2} \end{equation} then \begin{equation} {\bf T}=G_{n+1}( k_\gamma,\theta,\phi,G_n^{-1}(\bar k_1,\dots,\bar k_n)). \end{equation} The ratio of the Jacobians (factors $\lambda^{1/2}$ like in formula (\ref{Lips_n+1.3}), etc.) forms the factor $W^{n+1}_n$, which in our case is rather simple, \begin{equation} W^{n+1}_n= {k_\gamma} {1 \over 2(2\pi)^3} \times \frac{\lambda^{1/2}(1,m_1^2/M_{1\dots n}^2,M^2_{2 \dots n}/M_{1\dots n}^2)}{\lambda^{1/2}(1,m_1^2/M^2,M^2_{2\dots n}/M^2)}, \label{Wnn} \end{equation} because of the choice for $G$, as explained in the Appendix of ref.\cite{Nanava:2006vv}. Note that ${k_\gamma}=\frac{M^2-M_{1\dots n}^2}{2M}$. There are additional benefits from such a choice. In all the relations $\bar k_2= Lk_2$, ..., $\bar k_n= Lk_n$ and $\bar p'= Lp'$ a common Lorentz transformation $L$ is used. The transformation $L$ is defined by $k_1,\bar k_1,\bar p',p'$ and $P$; internal relations between the four-vectors $k_2 ... k_n$ ($\bar k_2 ... \bar k_n$) are not needed. Formula (\ref{Jacobians}) can be realized algorithmically in the following way: \begin{enumerate} \item For any point in the $n$-body phase space (an earlier generated event), described for example by the explicit configuration of four-vectors $\bar k_1 ... \bar k_n$, the coordinate variables can be calculated using formula (\ref{G-1}). \item The photon variables can be generated according to Eq. (\ref{Jacobians}).
The weight $W^{n+1}_n$ also has to be attributed. \item Variables obtained in this way from the old configuration, together with those of the photon, can be used to construct the new kinematical configuration for the $n+1$-body final state. The phase space weight, which is zero for configurations outside the phase space boundaries, can be calculated at this point from (\ref{Jacobians},\ref{Wnn}) and finally combined with the matrix element. \end{enumerate} Here we have chosen two sub-groups of particles. The first one consists of particle 1 alone, and the second of particles 2 to $n$ combined together. Obviously, in the case of 2-body decays there is not much choice when the first photon is constructed. By iteration, we can generalize formula (\ref{Jacobians}) to the case of $l$ photons and we write: \begin{eqnarray} && dLips_{n+l}(P\to k_1 ... k_n, k_{n+1} ... k_{n+l})= \frac{1}{l!} \prod_{i=1}^l \biggl[ dk_{\gamma_i} d\cos\theta_{\gamma_i} d\phi_{\gamma_i} W^{n+i}_{n+i-1}\biggr] \times dLips_n(P \to \bar k_1 ... \bar k_n), \nonumber\\ && \{k_1,\dots,k_{n+l}\} = {\bf T}\bigl(k_{\gamma_l},\theta_{\gamma_l},\phi_{\gamma_l},{\bf T}\bigl( \dots, {\bf T}\bigl(k_{\gamma_1},\theta_{\gamma_1},\phi_{\gamma_1},\{\bar k_1,\dots,\bar k_n\}\bigr) \dots\bigr). \label{barred} \end{eqnarray} In this formula we can easily localize the {\bf tangent space} for the multiple photon configuration. In this space, each photon is independent of the other particles' momenta. Note that it is also possible to fix the upper boundary on $k_{\gamma_i}$ arbitrarily high. The photons are independent of one another as well. Correlations appear later, thanks to the iterated transformation {\bf T}. The factors $ W^{n+i}_{n+i-1}$ are calculated when constraints on each consecutive photon are introduced; the previously constructed ones are included in the $n+i-1$ system\footnote{Configurations of $k_{\gamma_i}$ which cannot be resolved are reduced to the ones with that photon dropped out.}.
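For illustration, the weight of formula (\ref{Wnn}), together with the boundary condition setting it to zero outside the phase space, can be transcribed as follows (a toy sketch, not the {\tt PHOTOS} code; the boundary check used here, $M > M_{1\dots n} \ge m_1 + M_{2\dots n}$, is our simplified assumption):

```python
import math

def kallen(a, b, c):
    return a * a + b * b + c * c - 2.0 * (a * b + a * c + b * c)

def weight_w(M, M1n, M2n, m1):
    """Phase-space weight of Eq. (Wnn):
    W = k_gamma / (2 (2 pi)^3)
        * lambda^(1/2)(1, m1^2/M1n^2, M2n^2/M1n^2)
        / lambda^(1/2)(1, m1^2/M^2,   M2n^2/M^2),
    with k_gamma = (M^2 - M1n^2) / (2 M); zero outside the boundaries."""
    if not (M > M1n >= m1 + M2n):   # simplified phase-space boundary check
        return 0.0
    k_gamma = (M * M - M1n * M1n) / (2.0 * M)
    num = kallen(1.0, (m1 / M1n) ** 2, (M2n / M1n) ** 2)
    den = kallen(1.0, (m1 / M) ** 2, (M2n / M) ** 2)
    return k_gamma / (2.0 * (2.0 * math.pi) ** 3) * math.sqrt(num / den)
```

In the soft limit $M_{1\dots n}\to M$ the ratio of $\lambda^{1/2}$ factors approaches unity and the weight vanishes linearly with $k_\gamma$, as expected.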
Of course, for the tangent space to be useful, the choice of the definition of {\bf T} must be restricted at least by the condition $\{ k_1, \cdots k_n \} \to \{ \bar k_1, \cdots \bar k_n \}$ if all $k_{\gamma_i} \to 0$.\footnote{In fact further constraints have to be fulfilled to enable presampling for the collinear singularities. Note that the variables $k_{\gamma_m},\theta_{\gamma_m},\phi_{\gamma_m}$ are used at the time of the $m$-th step of the iteration only, and are not needed elsewhere in the construction of the physical phase space; the same is true for the invariants and angles $M_{2\dots n} ^2,\theta_{1},\phi_{1} ,\dots, \theta_{n-1},\phi_{n-1}$ of (\ref{G-1},\ref{G-2}), which are also redefined at each step of the iteration. } It is important to realize that one has to choose matrix elements on the tangent space to complete the construction used in {\tt PHOTOS}. The number and energies of photons are generated on the tangent space first. Regularization of (at least) the soft singularity must be defined. Rejection and event construction are performed with the help of formula (\ref{Jacobians}) for each consecutive photon. This diminishes the photon multiplicity with respect to the one defined on the tangent space. Of course, as rejection implements changes in the phase space density, a matrix element (with virtual corrections) of the physical space can be introduced as well. The treatment of the phase space presented here lies at the heart of the construction of {\tt PHOTOS} kinematics, and has been used since its beginning. It exhausts the case when there is only one charged particle in the final state. For multiple charged particle final states new complications appear, because all collinear configurations need simultaneous attention, and not only the one along the $k_1$ direction. A presampler with multichannel generation is needed. In our case we follow the same method as explained in ref.~\cite{Jadach:1993hs}. Let us now sum the above expression over $l$.
If we add arbitrary factors $f(k_{\gamma_i},\theta_{\gamma_i},\phi_{\gamma_i})$ and sum over $l$ we obtain: \begin{eqnarray} && {\sum_{l=0} \exp(-F) \frac{1}{l!} \prod_{i=1}^l f(k_{\gamma_i},\theta_{\gamma_i},\phi_{\gamma_i}) } dLips_{n+l}(P\to k_1 ... k_n, k_{n+1} ... k_{n+l})= \nonumber\\ && { \sum_{l=0} \exp(-F) \frac{1}{l!} \prod_{i=1}^l } \biggl[ { f(k_{\gamma_i},\theta_{\gamma_i},\phi_{\gamma_i})dk_{\gamma_i} d\cos\theta_{\gamma_i} d\phi_{\gamma_i} } W^{n+i}_{n+i-1}\biggr]\times \nonumber\\ && dLips_n(P \to \bar k_1 ... \bar k_n), \\ && \{k_1,\dots,k_{n+l}\} = {\bf T}\bigl(k_{\gamma_l},\theta_{\gamma_l},\phi_{\gamma_l},{\bf T}\bigl( \dots, {\bf T}\bigl(k_{\gamma_1},\theta_{\gamma_1},\phi_{\gamma_1},\{\bar k_1,\dots,\bar k_n\}\bigr) \dots\bigr), \nonumber \\ && F =\int_{k_{min}}^{k_{max}} dk_{\gamma} d\cos\theta_{\gamma} d\phi_{\gamma}f(k_{\gamma},\theta_{\gamma},\phi_{\gamma}). \nonumber \label{barred0} \end{eqnarray} Some parts of the rhs., taken alone, give the crude distribution over the tangent space (the orthogonal set of variables $k_i,\theta_i, \phi_i$). The factors $f$ must be integrable over this tangent space, and regulators of the singularities must be introduced. We may simply request that \begin{eqnarray} && { \sigma_{tangent}} = 1= \nonumber\\ && { \sum_{l=0} \exp(-F) \frac{1}{l!} \prod_{i=1}^l } \biggl[ { f(k_{\gamma_i},\theta_{\gamma_i},\phi_{\gamma_i})dk_{\gamma_i} d\cos\theta_{\gamma_i} d\phi_{\gamma_i} } \biggr] \nonumber \label{barred1} \end{eqnarray} and then the sum rule originating from the perturbative approach (the Kinoshita-Lee-Nauenberg theorem) can be used to control the virtual corrections, both for the tangent and, later, also the final distributions. At this point we already have a Monte Carlo solution for the {\tt PHOTOS} phase space. In reality, for that solution to work, real emission and virtual corrections need to be calculated and their factorization properties must be understood.
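The crude generation on the tangent space can be sketched as follows. The factor $f \propto 1/k_\gamma$ used below (flat in the angles) is a hypothetical toy choice, not the actual {\tt PHOTOS} crude distribution; the multiplicity is Poissonian with mean $F$, and the photons are drawn independently:

```python
import math
import random

def sample_tangent_photons(rng, k_min, k_max, a=0.01):
    """Crude tangent-space generation: multiplicity l ~ Poisson(F), with
    F = int f dk dcos(theta) dphi for the toy factor f = a / k_gamma
    (flat in the angles), i.e. F = 4 pi a ln(k_max / k_min)."""
    big_f = 4.0 * math.pi * a * math.log(k_max / k_min)
    # invert the cumulative Poisson distribution exp(-F) F^l / l!
    u, l, p, cdf = rng.random(), 0, math.exp(-big_f), 0.0
    while True:
        cdf += p
        if u < cdf or p == 0.0:
            break
        l += 1
        p *= big_f / l
    photons = []
    for _ in range(l):
        k = k_min * (k_max / k_min) ** rng.random()   # density ~ 1/k
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * math.pi * rng.random()
        photons.append((k, cos_t, phi))
    return photons
```

The constraints and the factors $W^{n+i}_{n+i-1}$ are applied only afterwards, when each photon is projected into the physical phase space.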
That is why the choice of $f$ is free only in principle; in practice it must be synchronized with those results for the sake of program efficiency. In the case of final-state QED bremsstrahlung this is rather simple; possible complications due to QED corrections to the rates are of no major consequence \cite{Golonka:2006tw} for the program construction. Only non-leading corrections appear. Note that this formula is very close to others, used in other programs or calculations. For example, the formal solution \cite{RichterWas:1985gi,Skrzypek:1992vk} of the evolution equation reads \begin{equation} D(x,\beta_{ch})=\delta(1-x) + \beta_{ch}P(x) + \frac{1}{2!}\beta_{ch}^2 \{P \times P \}(x) + \frac{1}{3!}\beta_{ch}^3 \{P \times P \times P \}(x) + \dots \end{equation} where $P(x)=\delta(1-x)(\ln\varepsilon + 3/4) + \Theta(1-x - \varepsilon) \frac{1}{x}(1+x^2)/(1-x)$ and $\{ P \times P\}(x) = \int_0^1 dx_1\int_0^1 dx_2 \delta(x-x_1 x_2) P(x_1) P(x_2)$. One can easily observe that in the regions contributing at LL, the phase-space Jacobians used in {\tt PHOTOS} trivialize \cite{Barberio:1994qi} and lead directly to this solution. In 1994, this solution was truncated to second order. It was indeed profitable that solutions for similar problems were available in Cracow at that time. Let us give one example \cite{Jadach:1987ii}. In this first paper on multiphoton Monte Carlos, written in 1987 by S. Jadach, formula (3.1) is basically the same as the tangent space of multi-photon {\tt PHOTOS} (and not much different from the $D(x,\beta_{ch})$ discussed just above): { \begin{eqnarray} \sigma(K)& =\exp \Bigl( \frac{2\alpha}{\pi}(\ln\frac{s}{m^2} -1) \ln\frac{k_s}{E} + \frac{\alpha}{\pi} \ln\frac{s}{m^2}\Bigr) \hskip 2 cm \nonumber \\ & \sum_{n=0} \frac{1}{n!} \prod_{m=1}^n \int_{k_s < k_m < K } \; \; \frac{d^3k_m}{k_m} \tilde S(k_1) \dots \tilde S(k_n) \tilde \beta_0 \end{eqnarray} } The difference appears in the projection from this tangent space to the physical one.
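For the smooth parts of the kernels, the convolution $\{P\times P\}$ reduces to $\{f\times g\}(x)=\int_x^1 \frac{dz}{z}\, f(z)\, g(x/z)$, the $\delta(1-x)$ and $\ln\varepsilon$ pieces being handled analytically. A numerical sketch for smooth test functions (illustrative only, midpoint rule):

```python
def mellin_conv(f, g, x, n=4000):
    """{f x g}(x) = int_x^1 dz/z f(z) g(x/z), the smooth-kernel part of
    {P x P}(x) = int dx1 dx2 delta(x - x1 x2) P(x1) P(x2)."""
    h = (1.0 - x) / n
    total = 0.0
    for i in range(n):
        z = x + (i + 0.5) * h          # midpoint rule on [x, 1]
        total += f(z) * g(x / z) / z * h
    return total
```

For monomials the result is analytic, $\{z^a \times z^b\}(x) = (x^b - x^a)/(a-b)$, which makes a convenient cross-check; the convolution is also manifestly symmetric in its two arguments.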
The classical solution, as proposed by Jadach, uses conformal symmetry: the projection from the eikonal (tangent) to the physical space is performed in one step. In {\tt PHOTOS} the eikonal symmetry is not used. An iterative projection is used instead; it is somewhat similar to the one introduced in {\tt TAUOLA} \cite{Jezabek:1991qp} for radiative corrections in leptonic tau decays. Analogies to solutions used in QCD parton shower algorithms can be found. A very important aspect of all these solutions is that the structure of singularities is the same in the tangent and the final physical space. \section{Matrix elements\label{SecCommAnti}} It is beyond question that the detailed analysis of the {\tt MUSTRAAL} Monte Carlo \cite{Berends:1982ie}, which was a consequence of an accidental error in copying the source code from punch cards to tape, was essential for the design of the {\tt PHOTOS} program. At that time (1983) I was forced to study {\tt MUSTRAAL} line after line. Not only were the two missing lines\footnote{Did the punch card reader glue them together the last time they were ever to be read?} of code found, but I also studied the matrix element and crude distributions in every possible detail. This unintentionally collected experience, combined with the importance of QED radiative corrections in the phenomenology of leptonic $Z$ couplings at the time of preparation for the first measurements of $\tau$ polarization at LEP, was a few years later the starting point for {\tt PHOTOS}. Let us recall the properties of the $Z \to l^+ l^- \gamma$ matrix element as studied by me at that early time, and also the approximate matrix element which was, and still is, used in {\tt PHOTOS}.
Let us write the explicit form of the real-photon matrix element (separated from the phase space Jacobians), for the $e^{+}e^{-} \to Z^{0}/\gamma^{*} \to \mu^{+}\mu^{-} (\gamma)$ process and as used in the standard version of {\tt PHOTOS} (published in \cite{Barberio:1990ms,Barberio:1994qi}): \begin{eqnarray} X_{f}^{\mathrm{PHOTOS}}=&\frac{Q'^{2}\alpha(1-\Delta)}{4\pi^{2}s}s^{2} \hskip 3 mm \Bigg\{ \hskip 8 cm \nonumber \\ \frac{1}{k'_{+}+k'_{-}}\frac{1}{k'_{-}}&\bigg[(1+(1-x_{k})^{2}) \frac{{d}\sigma_{B}}{d\Omega}\Big(s,\frac{s(1-\cos\Theta_{+})}{2}, \frac{s(1+\cos\Theta_{+}) }{2}\Big)\bigg]\frac{(1+\beta\cos\Theta_{\gamma})}{2}\;\;\; \nonumber\\ + \frac{1}{k'_{+}+k'_{-}}\frac{1}{k'_{+}}&\bigg[(1+(1-x_{k})^{2}) \frac{{d}\sigma_{B}}{d\Omega}\Big(s,\frac{s(1-\cos\Theta_{-})}{2}, \frac{s(1+\cos\Theta_{-}) }{2}\Big)\bigg]\frac{(1-\beta\cos\Theta_{\gamma})}{2}\Bigg\} \nonumber \\ \mathrm{where:} & \Theta_{+}=\angle(p_{+},q_{+}),\; \Theta_{-}=\angle(p_{-},q_{-}), \;\hskip 4 cm \nonumber\\ & \Theta_{\gamma}=\angle(\gamma,\mu^{-})\; \textrm{is\, defined\, in}\;(\mu^{+},\mu^{-})\textrm{-pair\, rest\, frame.} \hskip 1.2 cm \label{X-fotos} \end{eqnarray} For its calculation (with respect to the Born cross-section) it is enough to know the four momenta of the $Z$ and its decay products. In the presented formulae we follow the notation from refs.~\cite{Golonka:2006tw,Berends:1982ie}. This expression is to be compared with the exact one, taken from ref.~\cite{Berends:1982ie}: \begin{eqnarray} X_{f}=\frac{Q'^{2}\alpha(1-\Delta)}{4\pi^{2}s}s^{2} & \Bigg\{\frac{1}{(k'_{+}+k'_{-})}\frac{1}{k'_{-}}\bigg[\frac{{d}\sigma_{B} }{{d}\Omega}(s,t,u')+\frac{{d}\sigma_{B}}{{d}\Omega}(s,t',u )\bigg]\nonumber \\ & +\frac{1}{(k'_{+}+k'_{-})}\frac{1}{k'_{+}}\bigg[\frac{{d}\sigma_{B}}{{d} \Omega}(s,t,u')+\frac{{d}\sigma_{B}}{{d}\Omega}(s,t',u)\bigg ]\Bigg\}. 
\label{X-mustraal} \end{eqnarray} The resulting weight is rather simple, and reads: \begin{eqnarray} WT_1=& \frac{\frac{{d}\sigma_{B} }{{d}\Omega}(s,t,u')+\frac{{d}\sigma_{B}}{{d}\Omega}(s,t',u )}{\bigg[(1+(1-x_{k})^{2}) \frac{{d}\sigma_{B}}{d\Omega}\Big(s,\frac{s(1-\cos\Theta_{+})}{2}, \frac{s(1+\cos\Theta_{+}) }{2}\Big)\bigg]\frac{(1+\beta\cos\Theta_{\gamma})}{2}\; \big(1+ \frac{3}{4} \frac{\alpha}{\pi}\big)}, \hskip 5 cm \nonumber \\ WT_2=& \frac{\frac{{d}\sigma_{B}}{{d} \Omega}(s,t,u')+\frac{{d}\sigma_{B}}{{d}\Omega}(s,t',u)}{\bigg[(1+(1-x_{k})^{2}) \frac{{d}\sigma_{B}}{d\Omega}\Big(s,\frac{s(1-\cos\Theta_{-})}{2}, \frac{s(1+\cos\Theta_{-}) }{2}\Big)\bigg]\frac{(1-\beta\cos\Theta_{\gamma})}{2}\; \big(1+ \frac{3}{4} \frac{\alpha}{\pi}\big)}. \hskip 5 cm \label{wgt1} \end{eqnarray} For its calculation, the numerical values of the electroweak couplings of the $Z$ to fermions, as well as information on the state from which the $Z$ was produced, are nonetheless necessary. This seemingly trivial requirement puts new stress on the event record: the details of the $Z$ production process need to be coded in the event record, and then correctly deciphered by {\tt PHOTOS} to calculate the process-dependent weight. From our experience, this requirement of {\tt PHOTOS} may be difficult to accept for other users of event records. The authors of event generators often choose their own conventions for encoding the details of a hard process such as $q \bar q \to ng Z/\gamma^*; Z/\gamma^* \to \mu^+ \mu^-$ into the event record. The NLO solution for {\tt PHOTOS}, as presented in ref.~\cite{Golonka:2006tw}, would therefore be feasible with some universal, {\it standard} event record, but remains difficult due to practical issues of interfacing. One should ask what price is paid for the approximation as implemented in the public version of {\tt PHOTOS}. The results for the standard and NLO-improved {\tt PHOTOS} are collected in figures \ref{FigA} and \ref{FigB}.
As one can see, the improvement due to the use of the exact first-order matrix elements is unquestionable. On the other hand, the standard, easier to use version seems to be sufficient in practically all phenomenological applications as well. For the time being the problem of the optimal choice remains rather academic. \vspace{0.2cm} \begin{figure} {\small { \resizebox*{0.49\textwidth}{!}{\includegraphics{PhotosAtNLObooklet5Z0.TO.mu-.mu+.gamma.gamma.M1a0001.eps}} } { \resizebox*{0.49\textwidth}{!}{\includegraphics{PhotosAtNLObooklet5Z0.TO.mu-.mu+.gamma.gamma.M1a0203.eps}} } \caption{ The comparison \cite{Golonka:2006tw} of the standard {\tt PHOTOS} (with multiple photon emission) and the {\tt KKMC} generator (with second-order matrix element and exponentiation). In the left frame the invariant mass of the $\mu^+\mu^-$ pair; SDP= 0.00918. In the right frame the invariant mass of the $\gamma \gamma$ pair; SDP=0.00268. The fraction of events with two hard photons was 1.2659 $\pm$ 0.0011\% for {\tt KORALZ} and 1.2952 $\pm$ 0.0011\% for {\tt PHOTOS}. For the definition of the shape difference parameter (SDP) see \cite{Golonka:2002rz}. \label{FigA}}} \end{figure} \begin{figure} {\small { \resizebox*{0.49\textwidth}{!}{\includegraphics{PhotosAtNLObooklet6Z0.TO.mu-.mu+.gamma.gamma.M1a0001.eps}} } { \resizebox*{0.49\textwidth}{!}{\includegraphics{PhotosAtNLObooklet6Z0.TO.mu-.mu+.gamma.gamma.M1a0203.eps}} } \caption{{ The comparison \cite{Golonka:2006tw} of the improved {\tt PHOTOS} (with multiple photon emission) and the {\tt KKMC} generator (with second-order matrix element and exponentiation). In the left frame the invariant mass of the $\mu^+\mu^-$ pair; SDP= 0.00142. In the right frame the invariant mass of the $\gamma \gamma$ pair; SDP=0.00293. The fraction of events with two hard photons was 1.2659 $\pm$ 0.0011\% for {\tt KORALZ} and 1.2868 $\pm$ 0.0011\% for {\tt PHOTOS}. For the definition of the shape difference parameter (SDP) see \cite{Golonka:2002rz}.
\label{FigB}}}} \end{figure} In ref.~\cite{Nanava:2006vv}, we presented similar modifications of the {\tt PHOTOS} kernel for the decay of $B$ mesons into a pair of scalars. As one can see from the comparison of the plots in figures \ref{FigC}, \ref{FigD} and \ref{FigE}, the implementation of the exact (but scalar-QED only) kernel brings a minuscule improvement in the agreement between {\tt PHOTOS} and the reference exact simulation of {\tt SANC} \cite{Andonov:2004hi}. In this case both {\tt SANC} and {\tt PHOTOS} are used to simulate single photon emission. (There exists no reference simulation with which the multi-photon version of {\tt PHOTOS} could be compared.) For the NLO kernel in {\tt PHOTOS} the results are indistinguishable from those of {\tt SANC}, even at the statistical level of $10^9$ events. In this case the price paid for the improvement seems to be zero, as there is no need for extra information to be pumped from the event record into the calculation of the {\tt PHOTOS} weight. Actually, the exact kernel is even simpler than the standard one. \begin{figure} \begin{center} \includegraphics[ width=155mm,height=265mm, keepaspectratio]{PmKp_distr_NotCorrected_pn.eps} \end{center} \caption{\label{PmKp_distr_NotCorrected_p} Results \cite{Nanava:2006vv} from {\tt PHOTOS}, standard version, and {\tt SANC} for the $B^0 \to \pi^- K^+(\gamma)$ decay are superimposed on the consecutive plots. Standard distributions, as defined in the text, and logarithmic scales are used. The distributions from the two programs overlap almost completely. Samples of $10^9$ events were used.
The ultraviolet scale, $\mu_{_{UV}}$, was chosen to leave the total decay width unchanged by QED.\label{FigC} } \end{figure} \begin{figure} \begin{center} \includegraphics[ width=155mm,height=265mm, keepaspectratio]{PmKp_ratio_NotCorrected_p.eps} \end{center} \caption{\label{PmKp_ratio_NotCorrected_p} Results \cite{Nanava:2006vv} from {\tt PHOTOS}, standard version, and {\tt SANC} for ratios of the $B^0 \to \pi^- K^+(\gamma)$ distributions are presented. Differences between {\tt PHOTOS} and {\tt SANC} are small, but are now clearly visible. \label{FigD} } \end{figure} \begin{figure} \begin{center} \includegraphics[ width=155mm,height=265mm, keepaspectratio]{PmKp_ratio_corrected_p.eps} \end{center} \caption{\label{PmKp_ratio_corrected_p} Results \cite{Nanava:2006vv} from {\tt PHOTOS} with the exact matrix element, and {\tt SANC} for ratios of the $B^0 \to \pi^- K^+(\gamma)$ distributions. Differences between {\tt PHOTOS} and {\tt SANC} are below the statistical error for samples of $10^9$ events.\label{FigE} } \end{figure} This high precision, as documented in figs.~\ref{FigD} and \ref{FigE}, is elusive: dependencies on the production process may appear through form-factors (originating from models unspecified here) which have to be fitted to the data. From the technical side, one can interpret this excellent agreement as a strong test of the numerical performance of the program. The necessary studies of the exact parametrization of the phase space used in {\tt PHOTOS}, which will also be important for future versions of {\tt PHOTOS}, are described in detail in the journal version of ref.~\cite{Nanava:2006vv}. \section{Mathematical aspects of the solution\label{SecSingProd}} \begin{figure} \begin{center} \epsfig{file=fig8.eps,width=0.55\linewidth} \caption{Symbolic representation of the phase space with up to two extra particles. The curved surface represents the actual phase space and the flat one the tangent space. The thin bands represent configurations where only one extra particle is added.
The point in the center represents the configuration at the Born level. It is implicitly assumed that particles of soft momenta do not make much difference with respect to configurations where they are absent; that is why such configurations symbolically seem to coincide. \label{FigX}} \end{center} \end{figure} One can ask if there is anything substantial in common in all the solutions presented in Section~\ref{SecNotation}, and whether a systematization with the help of mathematical language is worth the effort. Indeed, at the time of writing the first versions of the programs, which are now in wide use, such considerations were of low priority. In fact for a good reason: they were expected to slow progress and bring little. At present, when a multitude of different solutions is available and the technical complexity of details dominates over the main principles of construction, such an effort may be well motivated and bring useful results. Let us look at fig.~\ref{FigX}, where points, curved lines and surfaces on this heuristic plot represent consecutive manifolds of phase spaces for $n$, $n+1$, $n+2$ particles. Note that the dimensionality of the manifolds is in principle counted by the number of particles times the dimension of the Lorentz group representation, minus the overall energy-momentum and orientation constraints. Curvature appears as an ultimate expansion parameter. The crude-level distribution is also defined for the phase spaces of $n$, $n+1$, $n+2$ particles, but as the energy-momentum constraint affects only the first $n$ particles, the further ones constitute a flat Cartesian sub-space. One step of the iterative projections as presented in Section~\ref{SecNotation} is symbolically presented in fig.~\ref{FigY}. \begin{figure} \begin{center} \epsfig{file=fig9.eps,width=0.55\linewidth} \caption{As in fig.~\ref{FigX}, this plot symbolically represents the phase space with up to two extra particles.
The curved surface represents the actual phase space and the cylindrical one the tangent space, where the projection of the kinematical constraint onto one of its dimensions has already been executed. \label{FigY}} \end{center} \end{figure} The case of QED and exponentiation of multiple photon radiation is rather simple: we do not need to worry about the topological structure, which is the same for the final (physical) phase space of the multiphoton configuration and for the tangent space (constructed from the eikonal phase space and matrix elements). The projection from the tangent space to the real one is trivial (at least from the point of view of topological properties). In the case of QCD we may expect complications; on the other hand, hadronization models simplify the task anyway, as they enforce the separation of colour in a specific way. This may, however, be unhelpful for the discussion of the systematic errors. There is another mathematical concept which is worth mentioning. Thanks to the infrared-sensitive regions of the $n+1$-body phase space we obtain, in a natural way, a triangulation line for this $n+1$-body phase-space manifold. In fact, the structure of such an induced triangulation needs to be (topologically) the same for the tangent and physical spaces, and the projections must match these triangulations. One can realize that the language of CW complexes (known from the theory of homotopy groups) may be useful to systematize the description and to separate it into easier to digest parts. Finally, let us point to a nice relation between the {\tt PHOTOS} algorithm for single (and fixed-order) bremsstrahlung on one side and for the multi-bremsstrahlung cases on the other. The relation is a consequence of the properties of the tangent spaces. It can be seen from the formal expansion of the Poissonian distribution into a sum of binomial ones. In the following formula we identify the coefficients of the binomial and Poissonian distributions: $p=\lambda$, $q=1-p$. Powers of $p$ denote distinct multiplicities.
\begin{eqnarray} \exp(-\lambda) \sum_{n=0} \frac{1}{n!}\; p^n\; |_1 = & 1 \cdot (p+q)^1 \nonumber \\ \exp(-\lambda) \sum_{n=0} \frac{1}{n!} \; p^n\; |_2 = & \frac{1}{2} \cdot (p+q)^0 + \frac{1}{2} \cdot (p+q)^2\nonumber \\ \exp(-\lambda) \sum_{n=0} \frac{1}{n!} \; p^n\; |_3 = & \frac{2}{6} \cdot (p+q)^0 + \frac{3}{6} \cdot (p+q)^1 + \frac{1}{6} \cdot (p+q)^3\nonumber \\ \exp(-\lambda) \sum_{n=0} \frac{1}{n!} \; p^n\; |_4 = & \frac{9}{24} \cdot (p+q)^0 + \frac{8}{24} \cdot (p+q)^1 + \frac{6}{24} \cdot (p+q)^2 + \frac{1}{24} \cdot (p+q)^4 \label{expi} \end{eqnarray} These somewhat unexpected numerical constants, just ratios of natural numbers, provide a trivial example of the expansion of one set of special functions into another. The consecutive lines of formula (\ref{expi}) correspond to the expansion at, respectively, $1^{st}$, $2^{nd}$, $3^{rd}$ and $4^{th}$ order. \section{Summary\label{SecSummary}} In this talk we have presented some of the principles used in Monte Carlo construction. It was a perfect occasion to look into the history of projects, often shared with Prof. Jadach. For that purpose, illuminating the mathematical aspects of the constructions seemed useful. They were one of the cornerstones in achieving the quality and robustness of the results. In the presented talk we have concentrated on the phase space and its possible description with the help of iterative Monte Carlo methods. Of course, the main motivation of such a systematization is to search for prototypes of algorithms to be applied e.g. in QCD. Work on matrix elements was only marginally mentioned here. It is only starting, but some results could already be presented now; see the talk by Andr\'e van Hameren. For more, I am afraid, we need to wait for some time, even though some promising results are already available \cite{Jadach:2007qa,Placzek:2007xb}. The next anniversary Epiphany conference, ten years from now, will hopefully bring a nice summary of that development.
\bibliographystyle{JHEP} \addcontentsline{toc}{section}{\refname}
\section{Introduction} \label{sec:intro} Precise and reliable determinations of the charm and bottom quark masses are an important input for a number of theoretical predictions, such as Higgs branching ratios to charm and bottom quarks or for the corresponding Yukawa couplings \cite{Heinemeyer:2013tqa, Petrov:2015jea}. They also affect the theoretical predictions of radiative and inclusive B decays, as well as rare kaon decays. For example, the inclusive semileptonic decay rate of B mesons depends on the fifth power of the bottom quark mass. These weak decays provide crucial methods to determine elements of the CKM matrix, which in turn are important for testing the validity of the Standard Model, as well as for indirect searches of new physics. In this context, having a reliable estimate of uncertainties for the quark masses is as important as knowing their precise values~\cite{Antonelli:2009ws}. Due to confinement quark masses are not physical observables. Rather, they are scheme-dependent parameters of the QCD Lagrangian which have to be determined from quantities that strongly depend on them. One of the most precise tools to determine the charm and bottom quark masses is the QCD sum rule method, where weighted averages of the normalized cross section $R_{e^+e^-\to\, q\bar{q}\,+X}$, with $q = c, b$, \begin{align} \label{eq:momentdefvector} & M_n^V = \! 
\int\!\dfrac{{\rm d}s}{s^{n+1}}R_{e^+e^-\to\, q\bar{q}\,+X}(s)\,,\qquad R_{e^+e^-\to\, q\bar{q}\,+X}(s) = \dfrac{\sigma_{e^+e^-\to\, q\bar{q}\,+X}(s)}{\sigma_{e^+e^-\to\,\mu^+\mu^-}(s)}\,, \end{align} can be related to moments of the quark vector current correlator $\Pi_V$~\cite{Shifman:1978bx, Shifman:1978by}: \begin{align} &M_n^{V,\,\rm th} =\dfrac{12\pi^2 Q_q^2}{n!}\,\dfrac{{\rm d}^n}{{\rm d}s^n}\Pi_V(s)\Big|_{s=0}\,,\quad\, j^{\mu}(x) = \bar{q}(x)\gamma^\mu q(x) \,,\nonumber \\ &\big(g_{\mu\nu}\,s-q_{\mu}q_{\nu}\big)\Pi_V(s) = -\, i\!\int\!\mathrm{d}x\, e^{iqx}\left\langle \,0\left|T\, j_{\mu}(x)j_{\nu}(0)\right|0\,\right\rangle. \end{align} Here $Q_q$ is the quark electric charge and $\sqrt{s} = \sqrt{q^2}$ is the $e^+e^-$ center-of-mass energy. Given that the integration over the experimental $R$-ratio extends from the quark pair threshold up to infinity {\it but} experimental measurements only exist for energies up to around $11$\,GeV, one relies on using theory input for energies above that scale (which we call the ``continuum'' region). For the charm moments, the combination of all available measurements is actually sufficient to render the experimental moments essentially independent of uncertainties one may assign to the theory input for the continuum region~\cite{Dehnadi:2011gc}. For the bottom moments, the dependence on the continuum theory input is very large, and the dependence of the low-$n$ experimental moments on unavoidable assumptions about the continuum uncertainty can be the most important component of the error budget, see e.g.~\cite{Corcella:2002uu}. In fact, the use of the first moment $M_1^V$ to determine the bottom mass appears to be excluded until more experimental data becomes available for higher energies. Alternatively one can also consider moments of the pseudoscalar current correlator to extract the heavy quark masses. 
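The strong mass sensitivity of the moments can be illustrated with a toy spectrum: for a constant $R_0$ above the threshold $s = 4m_q^2$, Eq.~(\ref{eq:momentdefvector}) gives $M_n = R_0/\big(n\,(4m_q^2)^n\big)$, i.e.\ $M_n \sim m_q^{-2n}$. A numerical sketch of this toy integral (illustrative only, not the physical $R$-ratio):

```python
import math

def toy_moment(n, m, r0=1.0, s_max=1e7, steps=200000):
    """M_n = int_{4 m^2}^{infinity} ds / s^(n+1) * R(s) for the toy spectrum
    R(s) = r0 * theta(s - 4 m^2); analytically r0 / (n (4 m^2)^n).
    Integrated on a logarithmic grid: with t = ln s, ds / s^(n+1) = e^(-n t) dt."""
    s_min = 4.0 * m * m
    t0, t1 = math.log(s_min), math.log(s_max)
    h = (t1 - t0) / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * h     # midpoint rule in t = ln s
        total += r0 * math.exp(-n * t) * h
    return total
```

Doubling the mass reduces the first toy moment by a factor of four, in line with the $m_q^{-2n}$ scaling that makes low-$n$ moments good mass analyzers.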
Experimental information on the pseudoscalar correlator $\Pi_P$ is not available in a form useful for quark mass determinations, but for the charm quark very precise lattice calculations have become available recently~\cite{Allison:2008xk}. For the pseudoscalar correlator it turns out that the first two Taylor coefficients in the small-$q^2$ expansion need to be regularized and defined in a given scheme, and that the third term (which we will denote by $M_0^P$) is hardly sensitive to $m_q$. We adopt the definitions \begin{align} \label{eq:momentdefpseudo} &\Pi_P(s) = i\!\int\!\mathrm{d}x\, e^{iqx}\left\langle \,0\left|T\, j_P(x)j_P(0)\right|0\,\right\rangle ,\quad j_P(x) = 2\,m_q\,i\,\bar{q}(x)\gamma_5 q(x)\,,\\ &M_n^{P,\,\rm th} =\dfrac{12\pi^2 Q_q^2}{n!}\,\dfrac{{\rm d}^n}{{\rm d}s^n} P(s)\Big|_{s=0}\,, \qquad P(s) = \dfrac{\Pi_P(s) - \Pi_P(0) - \Pi^\prime_P(0)\,s }{s^2}\,,\nonumber \end{align} where the explicit mass factor in the definition of the pseudoscalar current ensures it is renormalization-scheme independent. For small values of $n$ such that $m_q/n\gtrsim\Lambda_{{\rm QCD}}$, the theoretical moments for the vector and pseudoscalar correlators can be computed in the framework of the OPE, i.e.\ as an expansion in vacuum matrix elements involving operators of increasing dimension~\cite{Shifman:1978bx, Shifman:1978by}. The leading matrix element corresponds to the perturbative QCD computations, which greatly dominates the series. Nonperturbative corrections are parametrized by vacuum condensates, and we find that even the leading correction, given by the gluon condensate, has a very small effect for low $n$, particularly so for the bottom correlator. 
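The double subtraction defining $P(s)$ in Eq.~(\ref{eq:momentdefpseudo}) simply removes the first two Taylor coefficients and shifts the index down by two. A sketch with exact rational arithmetic (the $12\pi^2 Q_q^2$ normalization is dropped, and the correlator $\Pi(s) = -\ln(1-s) = \sum_k s^k/k$ is a purely illustrative toy choice):

```python
from fractions import Fraction

def pi_coeffs(kmax):
    """Taylor coefficients of the toy correlator Pi(s) = -ln(1 - s) = sum_k s^k / k."""
    return [Fraction(0)] + [Fraction(1, k) for k in range(1, kmax + 1)]

def subtracted_moments(coeffs, nmax):
    """Moments (up to normalization) as Taylor coefficients of
    P(s) = (Pi(s) - Pi(0) - Pi'(0) s) / s^2: the two subtractions remove
    the k = 0, 1 coefficients and dividing by s^2 shifts the index by two."""
    return [coeffs[n + 2] for n in range(nmax + 1)]
```

For this toy correlator the $n$-th subtracted moment is simply $1/(n+2)$, making the index shift explicit.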
For moments at low values of $n$, it is mandatory to employ a short-distance mass scheme such as $\overline{\rm MS}$~\cite{Kuhn:2001dm}, which renders the quark mass $\overline m_q(\mu_m)$ dependent on its renormalization scale $\mu_m$, similar to the strong coupling $\alpha_s(\mu_\alpha)$, which depends on $\mu_\alpha$. This method of determining the heavy quark masses with high precision is frequently called relativistic charmonium/bottomonium sum rules. For the perturbative term, the exact analytic expressions for the $\Pi$ functions are known at ${\mathcal O}(\alpha_s^0)$ and ${\mathcal O}(\alpha_s)$, \cite{Kallen:1955fb}. Therefore any moment can be obtained simply by Taylor expanding around $q^2 = 0$. At ${\mathcal O}(\alpha_s^2)$ moments are known to up to $n=30$~\cite{Chetyrkin:1995ii, Chetyrkin:1996cf, Boughezal:2006uu, Czakon:2007qi,Maier:2007yn}. At ${\mathcal O}(\alpha_s^3)$ they are known analytically for $n=1$ \cite{Chetyrkin:2006xg, Boughezal:2006px, Sturm:2008eb}, $n=2$ and $n=3$ (and even $n=4$ for the pseudoscalar correlator) \cite{Maier:2008he, Maier:2009fz}. Higher moments at ${\cal O}(\alpha_s^3)$ have been determined by a semi-analytical procedure \cite{Hoang:2008qy, Kiyo:2009gb, Greynat:2010kx, Greynat:2011zp}. The Wilson coefficient of the gluon condensate contribution is known to ${\mathcal O}(\alpha_{s})$ \cite{Broadhurst:1994qj}. The most recent determinations of the $\overline{\rm MS}$ charm mass from charmonium sum rules for the vector correlator~\cite{Dehnadi:2011gc, Bodenstein:2011ma, Chetyrkin:2009fv} obtain very accurate results, but differ in the way they estimate theoretical uncertainties, and also in the computation of the moments from experimental data. Concerning the estimate of the perturbative uncertainties, Ref.~\cite{Dehnadi:2011gc} obtained $19$\,MeV compared to $1$ and $2$\,MeV obtained in Refs.~\cite{Bodenstein:2011ma} and \cite{Chetyrkin:2009fv}, respectively. The discrepancy arises from two differences. 
First, in Refs.~\cite{Bodenstein:2011ma, Chetyrkin:2009fv} the renormalization scales $\mu_m$, and $\mu_\alpha$ were set equal, while in Ref.~\cite{Dehnadi:2011gc} it was argued that they should be varied independently. Second, in Refs.~\cite{Bodenstein:2011ma, Chetyrkin:2009fv} the lowest renormalization scale was chosen to be $2$\,GeV, while in Ref.~\cite{Dehnadi:2011gc} variations down to the charm mass value were considered. For the case of the pseudoscalar moments, the HPQCD collaboration has made a number of very accurate predictions for charm and bottom masses \cite{Allison:2008xk, McNeile:2010ji, Chakraborty:2014aca, Colquhoun:2014ica}, the last of which has the smallest uncertainty claimed so far for the charm mass. In all these analyses the renormalization scales $\mu_m$, and $\mu_\alpha$ are set equal when estimating the truncation uncertainty. A detailed discussion on the estimates of theoretical and experimental uncertainties can be found in Secs.~\ref{sec:previous-results} and \ref{sec:comparison-masses} of this article (see also Ref.~\cite{Dehnadi:2011gc}). Similarly, bottomonium sum rules have been used to determine the bottom mass from low-$n$ moments. To the best of our knowledge, the most recent and precise determinations are from Refs.~\cite{Bodenstein:2012, Chetyrkin:2009fv}. These two analyses estimate their perturbative uncertainties in the same way as the corresponding charm mass extractions from the same collaborations~\cite{Bodenstein:2011ma,Chetyrkin:2009fv}. Furthermore, when it comes to compute the experimental moments, they use theoretical input at $\mathcal{O}(\alpha_s^3)$ with perturbative uncertainties to model the high-energy region (continuum region) of the spectrum. As we discuss in this work, similar caveats as for their charm analyses can be argued to also affect their bottom quark results. 
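The point at issue, correlated versus independent scale variation, can be made schematic: scanning a two-dimensional $(\mu_m,\mu_\alpha)$ grid and taking the spread of the extracted mass necessarily yields an uncertainty at least as large as that of the diagonal $\mu_m=\mu_\alpha$ variation. A sketch with a toy truncated series (the function `toy_extract` is purely illustrative, not the actual sum-rule series):

```python
import math
from itertools import product

def scan_scales(extract_mass, mu_lo, mu_hi, npts=11):
    """Half-sum and half-spread of the extracted mass over an
    (mu_m, mu_alpha) grid with both scales varied independently."""
    grid = [mu_lo + i * (mu_hi - mu_lo) / (npts - 1) for i in range(npts)]
    values = [extract_mass(mu_m, mu_a) for mu_m, mu_a in product(grid, grid)]
    return (min(values) + max(values)) / 2.0, (max(values) - min(values)) / 2.0

def toy_extract(mu_m, mu_a, m0=1.28, a=0.1):
    """Toy residual scale dependence of a truncated series (illustrative)."""
    lm, la = math.log(mu_m / m0), math.log(mu_a / m0)
    return m0 * (1.0 + a ** 2 * lm - a ** 3 * (la + lm) * 2.0)
```

Since the correlated $\mu_m=\mu_\alpha$ line is a subset of the grid, its spread can never exceed the independent-variation spread, which is why the two prescriptions can give such different error estimates.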
In this work we revisit the charmonium sum rules for the vector correlator, refining our perturbative error estimate from Ref.~\cite{Dehnadi:2011gc} by incorporating a convergence test. The convergence test addresses the issue that the independent variation of $\mu_m$ and $\mu_\alpha$, together with the relatively large value of $\alpha_s$ close to the charm mass scale, might lead to an overestimate of the perturbative uncertainty. The convergence test allows one to quantify the convergence property of each perturbative series with a single parameter and to discard series for which the convergence is substantially worse than for the rest of the series. We show that this procedure is meaningful, since the complete set of series for the moments shows a strongly peaked distribution in these convergence values, which allows us to define an overall convergence for the set of perturbative series. This leads to a reduction of the perturbative uncertainty from $19$ to $14$\,MeV, and the corresponding result for the $\overline{\rm MS}$ charm mass supersedes the main result given in Ref.~\cite{Dehnadi:2011gc}. We also apply this improved method of estimating theory uncertainties to obtain a new charm mass determination from the pseudoscalar correlator, and to extract the bottom mass from the vector correlator. For the latter, we compute the bottom experimental moments by combining contributions from narrow resonances, experimental data taken in the continuum, and a theoretical model for the continuum region. We carefully study the assignment of adequate uncertainties to this last contribution, to make sure that the model dependence is reduced to an acceptable level. This paper is organized as follows: In Sec.~\ref{sec:theory} we summarize the theoretical framework introduced in \cite{Dehnadi:2011gc}, and adapt it to cover the case of the pseudoscalar moments.
We also introduce the ratios of moments, also used before in Ref.~\cite{Kuhn:2001dm}, and the perturbative expansions associated to them. Sec.~\ref{sec:previous-results} contains a brief summary of the results obtained in \cite{Dehnadi:2011gc}, and the discussion is extended to the case of the pseudoscalar correlator and the bottom mass. In Sec.~\ref{sec:convergence} we introduce the convergence test, and discuss how it allows us to identify and discard series with bad convergence. Sec.~\ref{sec:lattice-data} contains a discussion of the lattice simulation results we use for our analysis. In Sec.~\ref{sec:exp} we present our computation of the experimental moments for the bottom correlator. The results are compared to previous determinations in Sec.~\ref{sec:comp-exp}. The computation of the ratio of experimental moments is presented in Sec.~\ref{sec:exp-ratio}. Our final results for the quark masses are given in Sec.~\ref{sec:results}. The results are compared to previous charm and bottom mass analyses in Sec.~\ref{sec:comparison-masses}. We present our conclusions in Sec.~\ref{sec:conclusions}. In Appendix~\ref{app:coefs} the numerical values of the perturbative coefficients that enter into our analysis and are not yet provided by Ref.~\cite{Dehnadi:2011gc} are collected for the convenience of the reader. \section{Theoretical Input} \label{sec:theory} \subsection{Perturbative Contribution} \label{sec:perturbative} The moments of the vector and pseudoscalar current correlators are defined in Eqs.~(\ref{eq:momentdefvector}) and (\ref{eq:momentdefpseudo}), respectively. In the OPE framework they are dominated by the perturbative contribution (that is, a partonic computation), which exhibits a nonlinear dependence on the mass. Within perturbation theory one can decide to manipulate the series expansion to get a more linear dependence on the mass. Conceptually there is no preference.
As advocated in our previous analyses~\cite{Dehnadi:2011gc}, one might consider various versions of the expansion to reliably estimate the perturbative uncertainties. Four types of expansion were suggested in Ref.~\cite{Dehnadi:2011gc}, which we briefly review below. \vskip 5mm \noindent {\bf (a) Standard fixed-order expansion}\\ We write the perturbative vacuum polarization function as \begin{eqnarray} \label{eq:Mnpertfixedorder1} \widehat\Pi_X(s, n_f, \alpha_s^{(n_f)}(\mu_\alpha), \overline m_q(\mu_m), \mu_\alpha, \mu_m) \, = \, \dfrac{1}{12\pi^2 Q_q^2}\sum_{n=0}^\infty s^{n} \hat M^X_n \,, \end{eqnarray} where $X = V, P$ for vector and pseudoscalar currents, respectively. Note that for notation reasons, in Eqs.~(\ref{eq:Mnpertfixedorder1}, \ref{eq:Mnpertcontour1}, \ref{eq:Mnpertcontour2}, \ref{eq:Pi0msbar}) we use $\Pi_P(q^2) = P(q^2)$, where $P$ is the twice-subtracted pseudoscalar correlator defined in Eq.~(\ref{eq:momentdefpseudo}). Here $Q_q$ is the quark electric charge with $q = c, b$, and $n_f = 4,5$ for charm and bottom, respectively. In full generality, the perturbative moments $\hat M_n$ can be expressed as the following sum: \begin{eqnarray} \label{eq:Mn-theo-FO} \hat M^X_n & = & \frac{1}{[4\,\overline m^{\,2}_q(\mu_m)]^n} \sum_{i,a,b} \bigg(\frac{\alpha^{(n_f)}_s(\mu_\alpha)}{\pi}\bigg)^{\!i} [C_X(n_f)]^{a,b}_{n,i}\,\ln^a\!\bigg(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_m}\bigg)\! \ln^b\!\bigg(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_\alpha}\bigg). \end{eqnarray} This is the standard fixed-order expression for the perturbative moments. The numerical values for the $[C_V(n_f = 5)]^{a,b}_{n,i}$ coefficients are given in Table~\ref{tab:cfixedorder} (for the vector current with $n_f = 4$, these coefficients can be found in Table~1 of Ref.~\cite{Dehnadi:2011gc}). Likewise, $[C_P(n_f = 4)]^{a,b}_{n,i}$ are collected in a numerical form in Table~\ref{tab:cPfixedorder}. 
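The double sum in Eq.~(\ref{eq:Mn-theo-FO}) is straightforward to evaluate numerically once the coefficients are tabulated. The following is a minimal Python sketch of that evaluation; the coefficient table used here is purely hypothetical and only illustrates the structure, it does not reproduce the tabulated $[C_X(n_f)]^{a,b}_{n,i}$ values:

```python
import math

def moment_fixed_order(mq, mu_m, mu_alpha, alpha_s, n, coeffs):
    """Evaluate the structure of the fixed-order moment: coeffs maps (i, a, b)
    to the coefficient of (alpha_s/pi)^i * ln^a(m^2/mu_m^2) * ln^b(m^2/mu_alpha^2)."""
    lm = math.log(mq**2 / mu_m**2)       # ln(mbar^2 / mu_m^2)
    la = math.log(mq**2 / mu_alpha**2)   # ln(mbar^2 / mu_alpha^2)
    series = sum(c * (alpha_s / math.pi)**i * lm**a * la**b
                 for (i, a, b), c in coeffs.items())
    return series / (4.0 * mq**2)**n     # mass prefactor 1/(4 mbar^2)^n

# Toy coefficients, for illustration only (NOT the tabulated values):
toy = {(0, 0, 0): 1.07, (1, 0, 0): 2.55, (1, 1, 0): 2.13}
M1 = moment_fixed_order(mq=1.28, mu_m=3.0, mu_alpha=3.0, alpha_s=0.25, n=1, coeffs=toy)
```

Solving the sum rule then amounts to finding the value of `mq` for which this expression matches the experimental moment, which is where the nonlinear mass dependence discussed in the text enters.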
The expression in Eq.~(\ref{eq:Mn-theo-FO}) is the common way to write the perturbative series of the moments. However, as noted in Ref.~\cite{Dehnadi:2011gc}, the nonlinear dependence on $\overline m_q$ of the standard fixed-order expansion has the disadvantage that for charm (bottom) quarks there are frequently no solutions for the mass in the sum rule mass determination, for moments higher than the first (second), for some set of values of the renormalization scales. \vskip 5mm \noindent {\bf (b) Linearized expansion}\\ One can linearize the fixed-order expansion of Eq.~(\ref{eq:Mn-theo-FO}) with respect to the exponent of the quark mass pre-factor by taking the \mbox{$2n$-th} root. This choice is e.g.\ made in Ref.~\cite{McNeile:2010ji}, and in general one can write: \begin{align} \label{eq:Mn-theo-exp} \Big(\hat M^X_n\Big)^{\!1/2n} = \frac{1}{2\,\overline{m}_q(\mu_m)} \,\sum_{i,a,b}\bigg(\frac{\alpha^{(n_f)}_s(\mu_\alpha)}{\pi}\bigg)^{\!i} [\bar C_X(n_f)]_{n,i}^{a,b}\,\ln^a\!\bigg(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_m}\bigg)\! \ln^b\!\bigg(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_\alpha}\bigg). \end{align} The coefficients $[\bar C_V(n_f = 5)]_{n,i}^{a,b}$ and $[\bar C_P(n_f = 4)]_{n,i}^{a,b}$ are given in Tables~\ref{tab:ctildefixedorder} and \ref{tab:cPtildefixedorder}, respectively (for $n_f = 4$ the coefficients for the vector current can be found in Table~2 of Ref.~\cite{Dehnadi:2011gc}). Even though relation (\ref{eq:Mn-theo-exp}) still exhibits some nonlinear dependence on ${\overline m}_q$ through perturbative logarithms, we find that it always has a solution for the quark mass. \vskip 5mm \noindent {\bf (c) Iterative linearized expansion}\\ For the expansion methods (a) and (b) shown in Eqs.~(\ref{eq:Mn-theo-FO}) and (\ref{eq:Mn-theo-exp}), one solves for the quark masses $\overline m_{c,b}(\mu_m)$ numerically keeping the exact mass dependence on the theory side of the equation.
Alternatively, one can solve for $\overline m_{c,b}(\mu_m)$ iteratively order by order, which is perturbatively equivalent to the exact numerical solution, but gives different numerical results. The method consists of inserting the lower order values for $\overline m_{c,b}(\mu_m)$ in the higher order perturbative coefficients, and re-expanding consistently. This method has been explained in detail in Sec. 2.1(c) of Ref.~\cite{Dehnadi:2011gc}, and we only quote the final results here: \begin{align}\label{eq:iterative-general} {\overline m}_q(\mu_m) & =\,{\overline m}_q^{(0)} \sum_{i,a,b}\bigg(\frac{\alpha^{(n_f)}_s(\mu_\alpha)}{\pi}\bigg)^{\!i} [\tilde C_X(n_f)]_{n,i}^{a,b}\, \ln^a\!\bigg(\frac{{\overline m}_q^{(0)\,2}}{\mu^2_m}\bigg) \! \ln^b\!\bigg(\frac{{\overline m}_q^{(0)\,2}}{\mu^2_\alpha}\bigg),\\ \overline m_q^{(0)}(\mu_m) &= \frac{1}{2\big(M^X_n\big)^{1/2n}} \, [\tilde C_X(n_f)]_{n,0}^{0,0}\,,\nonumber \end{align} where the numerical value of the coefficients $[\tilde C_V(n_f=5)]_{n,i}^{a,b}$ and $[\tilde C_P(n_f=4)]_{n,i}^{a,b}$ are collected in Tables~\ref{tab:vector-it} and \ref{tab:cPhat}, and the values for the vector current with $n_f = 4$ can be found in Table~3 of Ref.~\cite{Dehnadi:2011gc}. By construction, the iterative expansion always has a solution for the quark mass. Accordingly, potential biases on the numerical analysis related to any possible nonlinear dependence are eliminated. \vskip 5mm \noindent {\bf (d) Contour-improved expansion}\\ For the expansions (a), (b) and (c) the moments and the quark masses are computed for a fixed value of the renormalization scale $\mu_\alpha$. Using the analytic properties of the vacuum polarization function, one can rewrite the fixed-order moments as integrals in the complex plane. 
This opens the possibility of making $\mu_\alpha$ dependent on the integration variable, in analogy to the contour-improved methods used for \mbox{$\tau$-decays} (see e.g.\ Refs.~\cite{LeDiberder:1992te, Pivovarov:1991rh, Braaten:1991qm, Narison:1988ni, Braaten:1988ea, Braaten:1988hc}). Therefore we define the contour-improved moments~\cite{Hoang:2004xm} as (see Fig.~\ref{fig:contour}), \begin{figure}[t] \center \includegraphics[width=0.3\textwidth]{figs/contour} \caption{ One possible integration path in the complex \mbox{$\bar s$-plane} for the computation of the contour-improved moments.\label{fig:contour} } \end{figure} \begin{eqnarray} \label{eq:Mnpertcontour1} \hat M_n^{X,\mathcal{C}} & = & \frac{6\pi Q_q^2}{i}\,\int_\mathcal{C}\,\frac{{\rm d}s}{s^{n+1}} \widehat\Pi_X(s, n_f, \alpha_s^{(n_f)}(\mu_\alpha^c(s,\overline m_q^{\,2})), \overline{m}_q(\mu_m), \mu_\alpha^c(s,\overline m_q^{\,2}), \mu_m) \,, \end{eqnarray} and we employ the following path-dependent $\mu_\alpha^c$, first used in Ref.~\cite{Hoang:2004xm}: \begin{eqnarray} \label{mualphacontour} (\mu_\alpha^c)^2(s,\overline m_q^{\,2}) & = & \mu_\alpha^2\,\bigg(\,1-\frac{s}{4\,\overline m_q^{\,2}(\mu_m)}\,\bigg) \,. \end{eqnarray} It weights the threshold and high-energy parts of the spectrum in a different way. It was shown in Ref.~\cite{Dehnadi:2011gc} that the resulting moments $\hat M_n^{X,\mathcal{C}}$ can be obtained analytically from the Taylor expansion around $s=0$ of the vacuum polarization function using an $s$-dependent $\mu_\alpha^c(s,\overline m_q^{\,2})$: \begin{eqnarray} \label{eq:Mnpertcontour2} \widehat\Pi_X^{\overline {\rm MS}}\Big(s,\alpha^{(n_f)}_s(\mu_\alpha^c(s,\overline m_q^{\,2})), \overline m_q(\mu_m), \mu_\alpha^c(s,\overline m_q^{\,2}), \mu_m\Big) & = & \sum\limits_{n=0}^\infty \, s^{n}\,\hat M_n^{X,\mathcal{C}} \,. \end{eqnarray} This trick works because $\alpha_s(\mu_\alpha^c(s,\overline m_q^{\,2}))$ has the same cut as the fixed-order expression for $\widehat\Pi_X$.
Other choices could spoil this property. Expanding the analytic expression for $\hat M_n^{X,\mathcal{C}}$ in $\alpha_s$ at a given finite order, one recovers the fixed-order moments $\hat M_n^X$. This shows that the dependence on the contour is only residual and represents an effect of higher-order terms beyond the order one employs for the calculation. The contour-improved moments have a residual sensitivity to the value of the vacuum polarization function at zero momentum transfer.\footnote{This means that the dependence vanishes in the large-order limit.} For the case of the vector correlator this value depends on the UV-subtraction scheme and corresponds to $\widehat\Pi(0)=\hat M_0^V$. For the case of the pseudoscalar correlator, $\hat M_0^P$ is scheme-independent, since $P(q^2)$ already includes two UV subtractions. However, one could as well define a three-times-subtracted pseudoscalar correlator of the form $\overline P(q^2) = P(q^2) - P(0)$. Slightly abusing notation, we denote $\overline P$ as the ``on-shell'' scheme for $P(q^2)$, and the twice-subtracted (original) definition as the $\overline{\rm MS}$ scheme for $P(q^2)$. Using the OS scheme with $\widehat\Pi^X(0)=0$ for either vector or pseudoscalar correlator, we find that the first moment for the contour-improved expansion gives exactly the first fixed-order moment, $\hat M_1^{X,\mathcal{C}} = \hat M_1^X$. Thus, in order to implement a non-trivial modification, and following Ref.~\cite{Dehnadi:2011gc}, we employ the $\overline{\rm MS}$ scheme for $\widehat\Pi_V(0)$ defined for $\mu={\overline m}_q({\overline m}_q)$, and the twice-subtracted expression for $P(q^2)$. Generically it can be written as \begin{eqnarray} \label{eq:Pi0msbar} \widehat\Pi_X^{\overline{\rm MS}}(0,n_f) & = & \sum_{i,a,b} \bigg(\frac{\alpha^{(n_f)}_s(\mu_\alpha)}{\pi}\bigg)^{\!i} [C_X(n_f)]^{a,b}_{0,i}\,\ln^a\!\bigg(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_m}\bigg)\!
\ln^b\!\bigg(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_\alpha}\bigg). \end{eqnarray} The numerical values for the coefficients $[C_X]^{a,b}_{0,i}$ are collected in Table~\ref{tab:cPi0vec} for the vector correlator with $5$ flavors and the pseudoscalar correlator with $4$ flavors. In Table~4 of Ref.~\cite{Dehnadi:2011gc} one finds the numerical values of $[C_V(n_f = 4)]^{a,b}_{0,i}$. \begin{figure*}[t] \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Pseudo-Contour-FO}} \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Pseudo-Contour-Exp}} \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Pseudo-Contour-Iter}} \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Pseudo-Contour-CI}} \caption{Contour plots for $\overline{m}_c(\overline{m}_c)$ as obtained from the first moment of the pseudoscalar correlator $M_1^P$, as a function of $\mu_\alpha$ and $\mu_m$ at ${\mathcal O}(\alpha_s^3)$, for methods (a)\,--\,(d). The shaded areas represent regions with $\mu_m,\mu_\alpha < \overline{m}_c(\overline{m}_c)$, and are excluded from our analysis. For this plot we employ $\alpha_s(m_Z) = 0.118$. \label{fig:mccontour1}} \end{figure*} \begin{figure*}[t] \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Bottom-Contour-FO}} \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Bottom-Contour-Exp}} \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Bottom-Contour-Iter}} \subfigure[] {\includegraphics[width=0.5\textwidth]{figs/Bottom-Contour-CI}} \caption{Contour plots for $\overline{m}_b(\overline{m}_b)$ as obtained from the second moment of the vector correlator $M_2^V$ with $n_f=5$, as a function of $\mu_\alpha$ and $\mu_m$ at ${\mathcal O}(\alpha_s^3)$, for methods (a)\,--\,(d). The shaded areas represent regions with $\mu_m,\mu_\alpha < \overline{m}_b(\overline{m}_b)$, and are excluded from our analysis. For this plot we employ $\alpha_s(m_Z) = 0.118$.
\label{fig:mbcontour1}} \end{figure*} \subsection{Gluon Condensate Contribution} \label{subsectioncondensate} We estimate nonperturbative power corrections by including the gluon condensate contribution. The gluon condensate is a dimension-4 matrix element and gives the leading power correction in the OPE for the moments~\cite{Novikov:1977dq,Baikov:1993kc} \begin{eqnarray} \label{MnOPE1} M^X_n & = & \hat M_n^X + \Delta M_n^{X,\,\langle G^2\rangle}\,+\,\ldots \end{eqnarray} Here the ellipses represent higher-order power corrections of the OPE involving condensates with dimensions bigger than $4$. The Wilson coefficients of the gluon condensate corrections are known to $\mathcal{O}(\alpha_s)$ accuracy~\cite{Broadhurst:1994qj}. Following Ref.~\cite{Chetyrkin:2010ic}, we express the Wilson coefficient of the gluon condensate in terms of the pole mass, since in this way the correction is numerically more stable for higher moments. However, as we did in Ref.~\cite{Dehnadi:2011gc}, we still write the pole mass in terms of the $\overline{\mbox{MS}}$ quark mass at one loop. The resulting expression reads \begin{eqnarray} \label{eq:GG}\Delta M_n^{X,\,\langle G^2\rangle} & = & \dfrac{1}{(4M_q^2)^{n+2}}\Big\langle\frac{\alpha_s}{\pi} G^2\Big\rangle_{\rm RGI} \left[ [a_X(n_f)]^0_n+\dfrac{\alpha^{(n_f)}_s(\mu_\alpha)}{\pi}\,[a_X(n_f)]^1_n\right],\\ M_q & = & \overline{m}_q(\mu_m)\left\{1 + \dfrac{\alpha^{(n_f)}_s(\mu_\alpha)}{\pi} \left[\dfrac{4}{3} - \ln\left(\frac{\overline{m}^{\,2}_q(\mu_m)}{\mu^2_m}\right)\right]\right\}.\nonumber \end{eqnarray} We use the renormalization group invariant (RGI) scheme for the gluon condensate \cite{Narison:1983kn}. The numerical value of the $[a_V(n_f = 5)]^a_n$ and $[a_P(n_f = 4)]^a_n$ coefficients are collected in Table~\ref{tab:gluoncondensate}. The values for $[a_V(n_f = 4)]^a_n$ can be found in Table~5 of Ref.~\cite{Dehnadi:2011gc}. 
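The numerical structure of Eq.~(\ref{eq:GG}) can be sketched in a few lines of Python. Here `a0` and `a1` stand in for the tabulated Wilson coefficients $[a_X(n_f)]^0_n$ and $[a_X(n_f)]^1_n$; the values passed below are hypothetical placeholders, not the entries of the tables:

```python
import math

def gluon_condensate_correction(n, mbar, mu_m, alpha_s, g2_rgi, a0, a1):
    """Sketch of the leading power correction to the n-th moment.
    a0, a1 play the role of [a_X]^0_n and [a_X]^1_n (hypothetical here)."""
    # Pole mass expressed through the MS-bar mass at one loop, as in the text.
    pole = mbar * (1.0 + (alpha_s / math.pi) * (4.0 / 3.0 - math.log(mbar**2 / mu_m**2)))
    return g2_rgi / (4.0 * pole**2)**(n + 2) * (a0 + (alpha_s / math.pi) * a1)

# The correction scales linearly with the RGI condensate <(alpha_s/pi) G^2>:
delta = gluon_condensate_correction(n=1, mbar=1.28, mu_m=3.0, alpha_s=0.25,
                                    g2_rgi=0.006, a0=1.0, a1=1.0)
```

The $(4 M_q^2)^{-(n+2)}$ prefactor makes explicit why the power correction grows in relative importance for higher moments, which is the motivation given in the text for writing the Wilson coefficient in terms of the pole mass.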
For methods (b) and (c) one can obtain the gluon condensate contribution by performing simple algebra operations and re-expansions in $\alpha_s^{(n_f)}$ and $\langle G^2\rangle$. For method (d) we employ Eqs.~(\ref{MnOPE1}) and (\ref{eq:GG}) as shown. For the RGI gluon condensate we adopt~\cite{Ioffe:2005ym} \begin{eqnarray} \label{condensatevalue1} \Big\langle\frac{\alpha_s}{\pi} G^2\Big\rangle_{\rm RGI} & = & 0.006\pm0.012\;\mathrm{GeV}^4\,. \end{eqnarray} \subsection{Ratios of Moments} \label{sec:ratios} An alternative set of observables which are also highly sensitive to the quark masses are the ratios of consecutive moments. To that end we define $R_n^X(n_f) \equiv M^X_{n+1}(n_f)/M^X_n(n_f)$. Such ratios are proportional to the inverse square of the quark mass for any value of $n$. Their perturbative series can be expressed as an expansion in powers of $\alpha_s^{(n_f)}$ analogous to Eq.~(\ref{eq:Mn-theo-FO}), with the replacements $[4\,\overline{m}_q^{\,2}(\mu_m)]^n\to4\,\overline{m}_q^{\,2}(\mu_m)$ and $[C_X(n_f)]_{a,b}^{i,j} \to [R_X(n_f)]_{a,b}^{i,j}$. Their computation is trivial, as one only needs to take the ratio of the two consecutive theoretical moments and re-expand it as a series in $\alpha_s^{(n_f)}$. We call this the standard fixed-order expansion, analogous to the expansion (a) of Sec.~\ref{sec:perturbative}. The numerical expressions for the $[R_X(n_f)]_{a,b}^{i,j}$ coefficients for the vector correlator with $n_f = 4, 5$ are given in Table~\ref{tab:Rfixedorder}, and for the pseudoscalar correlator with $n_f = 4$ in Table~\ref{tab:RPFOorder}. As for the regular moments, we find that the nonlinear dependence of $R_n^X$ on the quark mass sometimes causes there to be no numerical solution for $\overline m_q$. By taking the square root of the ratio of two consecutive moments one gets a linear dependence on the inverse of the quark mass.
The corresponding theoretical expression is obtained by re-expanding the perturbative expansion of $\sqrt{R_n^X(n_f)}$ as a series in powers of $\alpha_s^{(n_f)}$. Thus we obtain an expression of the form of Eq.~(\ref{eq:Mn-theo-exp}) with the replacement $[\bar C_X(n_f)]_{a,b}^{i,j} \to [\bar R_X(n_f)]_{a,b}^{i,j}$. This is referred to as the linearized expansion, in analogy to the expansion (b) of Sec.~\ref{sec:perturbative}. The numerical values for the $[\bar R_X(n_f)]_{a,b}^{i,j}$ coefficients are collected for the vector correlator with $n_f = 4, 5$ in Table~\ref{tab:Rexporder}, and for the pseudoscalar correlator with $n_f = 4$ in Table~\ref{tab:RPexporder}. Finally, one can use $\sqrt{R_n^X(n_f)}$ to solve for $\overline m_q(\mu_m)$ in an iterative way, exactly as explained in Sec.~\ref{sec:perturbative} for expansion (c). The theoretical expression is analogous to Eq.~(\ref{eq:iterative-general}) with the replacement $[\tilde C_X(n_f)]_{a,b}^{i,j} \to [\tilde R_X(n_f)]_{a,b}^{i,j}$. We collect the numerical values for the $[\tilde R_X(n_f)]_{a,b}^{i,j}$ coefficients, in Tabs.~\ref{tab:RITorder} and \ref{tab:RPITorder} for the vector ($n_f = 4, 5$) and pseudoscalar ($n_f = 4$) correlators, respectively. We call this the iterative linearized expansion. One cannot implement a contour-improved expression for the ratios of moments, as they cannot be computed as the contour integral of a correlator. For the ratios of moments, in any of the three expansions, one can include non-perturbative corrections in the form of a gluon condensate OPE term, just using Eq.~(\ref{eq:GG}) and performing simple algebra operations and re-expansions in $\alpha_s^{(n_f)}$ and $\langle G^2\rangle$. 
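The re-expansion of a ratio of two truncated $\alpha_s$ series, used above to build the theoretical expressions for $R_n^X$, amounts to ordinary power-series division. A small generic sketch (the coefficient lists below are arbitrary illustrations, not the actual moment series):

```python
def series_ratio(num, den, order):
    """Coefficients of num(x)/den(x) as a power series in x, truncated at `order`.
    num, den are coefficient lists [c0, c1, ...]; den[0] must not vanish."""
    get = lambda c, k: c[k] if k < len(c) else 0.0
    r = []
    for k in range(order + 1):
        # Solve (r * den)_k = num_k for the new coefficient r_k.
        ck = get(num, k) - sum(r[j] * get(den, k - j) for j in range(k))
        r.append(ck / den[0])
    return r

# Cross-check on a known expansion: (1 + x)/(1 + 2x) = 1 - x + 2x^2 - 4x^3 + ...
assert series_ratio([1.0, 1.0], [1.0, 2.0], 3) == [1.0, -1.0, 2.0, -4.0]
```

The same routine, applied term by term in $\alpha_s/\pi$, reproduces the "simple algebra operations and re-expansions" referred to in the text.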
\section{Previous Results and Scale Variations} \label{sec:previous-results} In a number of recent low-$n$ sum rule analyses~\cite{Bodenstein:2012, Bodenstein:2011ma, McNeile:2010ji, Chetyrkin:2009fv, Allison:2008xk, Kuhn:2007vp, Boughezal:2006px, Chakraborty:2014aca}, which determined the charm and bottom quark masses with very small uncertainties using ${\cal O}(\alpha_s^3)$ theoretical computations for the moments~\cite{Hoang:2008qy, Kiyo:2009gb, Greynat:2010kx, Greynat:2011zp}, the theory uncertainties from the truncation of the perturbative series have been estimated with the scale setting $\mu_m=\mu_\alpha$ based on just one type of expansion, which was either the fixed-order [expansion (a)] for the vector correlator, or the linearized [expansion (b)] for the pseudoscalar correlator. In Ref.~\cite{Dehnadi:2011gc} we analyzed the perturbative series for the moments $M^V_{1,2,3,4}$ of the charm vector correlator at ${\cal O}(\alpha_s^3)$ using an alternative way to estimate the perturbative uncertainties, based on the four different expansion methods (a)\,--\,(d), as explained in Sec.~\ref{sec:perturbative}. We also focused on the question whether renormalization scale variation restricted to $\mu_m=\mu_\alpha$ leads to a compatible estimate of the perturbative uncertainties. 
From our analysis we found: \begin{itemize} \item The extractions for the $\overline{\rm MS}$ charm mass using the expansions (a)\,--\,(d) with correlated variations of $\mu_m$ and $\mu_\alpha$ (e.g.\ $\mu_m=\mu_\alpha$) for the vector correlator can lead to very small scale variations, which can be very different depending on the method.\footnote{We judge the compatibility of the perturbative error estimates based on the size of scale variations alone, i.e.\ without accounting at the same time for other sources of uncertainties such as experimental errors or the uncertainty in the value of the strong coupling.} Moreover, for some expansions the results from the different orders can be incompatible with each other. It was therefore concluded that using correlated scale variation and one type of expansion can lead to an underestimate of the perturbative uncertainty. \item Uncorrelated (i.e.\ independent) variation of $\mu_m$ and $\mu_\alpha$ leads to charm mass extractions with perturbative uncertainty estimates that are in general larger, but fully compatible among the expansions (a)\,--\,(d) and for the different orders. It was therefore concluded that $\mu_m$ and $\mu_\alpha$ should be varied independently to obtain a reliable estimate of the perturbative uncertainty. \item The size of the charm mass perturbative uncertainty has a significant dependence on the value of the lower bound of the range of the scale variation. The choice of the upper bound has a marginal impact. \item The pattern of size of the correlated scale variations for the different expansions can be traced back to the form of the contours of constant charm mass in the $\mu_m$\,--\,$\mu_\alpha$ plane, which happen to be located along the diagonal $\mu_m\sim\mu_\alpha$ for expansions (a) and (b), but roughly orthogonal for expansions (c) and (d), see Fig.~6 of Ref.~\cite{Dehnadi:2011gc}. \end{itemize} \begin{figure*}[t!]
\subfigure[] { \includegraphics[width=0.31\textwidth]{figs/charm-variations} \label{fig:charm-variations}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/pseudo-variations} \label{fig:pseudo-variations}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/bottom-variations} \label{fig:bottom-variations}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/charm-ratio-variations} \label{fig:charm-ratio-variations}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/pseudo-ratio-variations} \label{fig:pseudo-ratio-variations}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/bottom-ratio-variations} \label{fig:bottom-ratio-variations}} \caption{Charm and bottom mass values from the first [second] moment of the vector (a) for charm [(c) for bottom] and pseudoscalar [(b), charm] currents at $\mathcal{O}(\alpha_s^3)$; and for the ratio of the second over the first moment for the vector [(d) for charm, (f) for bottom] and pseudoscalar [(e), charm] correlators. We show the outcome of various scale variations for the perturbative expansions (a)\,--\,(d) [(a)\,--\,(c) for ratios], where green (leftmost) corresponds to $2\,{\rm GeV}\le\mu_m=\mu_\alpha\le4\,{\rm GeV}$ [$5\,{\rm GeV}\le\mu_m=\mu_\alpha\le15\,{\rm GeV}$ for bottom], blue (second from the left) $2\,{\rm GeV}\le\mu_m,\mu_\alpha\le4\,{\rm GeV}$ [$5\,{\rm GeV}\le\mu_m,\mu_\alpha\le15\,{\rm GeV}$ for bottom], purple (second from the right) $\overline m_c(\overline m_c)\le\mu_m,\mu_\alpha\le4\,{\rm GeV}$ [$\overline m_b(\overline m_b)\le\mu_m,\mu_\alpha\le15\,{\rm GeV}$ for bottom] and in red (rightmost) we supplement the latter variation with a cut on the series with larger values of $V_c$.}\label{fig:variations} \end{figure*} For example,\footnote{The size of the scale variations quoted in this paragraph applies to $\overline m_c(\overline m_c)$ as well as to $\overline m_c(3\,{\rm GeV})$, and all numerical results are obtained at ${\cal O}(\alpha_s^3)$.
We also stress that there are no perturbative instabilities concerning the use of the RGE down to the scale $\overline m_c(\overline m_c)$.} in Ref.~\cite{Chetyrkin:2009fv} method (a) has been used for $M_1^V$ with $\mu_m=\mu_\alpha$ varied between $2$ and $4$\,GeV, quoting a perturbative error estimate of $2$\,MeV. For the expansion methods\footnote{To compute the charm and bottom masses in this section we use $M_1^{V,\,\rm exp}=0.2121\,{\rm GeV}^{-2}$ for the charm vector correlator, a result obtained in Ref.~\cite{Dehnadi:2011gc}, $M_1^{P,\,\rm latt}=0.1402\,{\rm GeV}^{-2}$ for the charm pseudoscalar correlator, from Ref.~\cite{Allison:2008xk}, and our own computation $M_2^{V,\,{\rm exp}} = 2.834\times10^{-5}\,{\rm GeV}^{-4}$ for the bottom vector correlator, see Sec.~\ref{sec:exp}. We also use $\alpha_s(m_Z) = 0.1184$.} (a)\,--\,(d) we obtain for the same scale variation $1.2893 \pm 0.0007$, $1.2904\pm0.0004$, $1.2963\pm0.0045$ and $1.3009\pm0.0020$\,GeV, respectively, for $\overline m_c(\overline m_c)$, which are inconsistent. This can be compared to the corresponding results using independent variations, as suggested in Ref.~\cite{Dehnadi:2011gc}. Using $2\,{\rm GeV}\le \mu_\alpha,\mu_m\le4\,{\rm GeV}$ we obtain $1.291\pm0.003$, $1.291\pm0.003$, $1.296\pm0.005$ and $1.302\pm0.003$\,GeV, respectively, for expansions (a)\,--\,(d). These results are not consistent either. It was furthermore argued in Ref.~\cite{Dehnadi:2011gc} that an adequate variation range should include the charm mass itself (after all, that is the scale that governs the series), motivated by the range $2\,m_c\,\pm\, m_c$ around the pair production threshold. Thus, adopting independent scale variation in the range $\overline{m}_c(\overline{m}_c) \le \mu_m, \mu_\alpha \le 4$\,GeV one obtains $1.287\pm0.018$, $1.287\pm0.015$, $1.282\pm0.019$ and $1.291\pm0.014$\,GeV, respectively.
The results show consistency and demonstrate the strong dependence on the lower bound of the renormalization scale variation. The outcome is illustrated graphically in Fig.~\ref{fig:charm-variations}, and the order-by-order dependence in Fig.~1 of Ref.~\cite{Dehnadi:2014kya}. In Ref.~\cite{Dehnadi:2011gc} we also explored scale setting in which $\mu_m$ was fixed to $\overline m_c(\overline m_c)$ and only $\mu_\alpha$ was varied. The outcome is shown in Figs.~4 and 5 of that reference. The contour lines in the $\mu_m$\,--\,$\mu_\alpha$ plane for the mass extraction from the first moment of the vector correlator for all methods are shown in Fig.~6 of Ref.~\cite{Dehnadi:2011gc}. The final result quoted in Ref.~\cite{Dehnadi:2011gc}, using $\alpha_s(m_Z)=0.1184\, \pm \,0.0021$, was $\overline m_c(\overline m_c) = 1.282 \, \pm \, (0.006)_{\rm stat}\, \pm \, (0.009)_{\rm syst} \, \pm \, (0.019)_{\rm pert}\, \pm \, (0.010)_{\alpha_s} \, \pm \, (0.002)_{\langle GG\rangle}\,$GeV, based on the iterative expansion method (c). We have repeated this analysis for the first moment of the pseudoscalar correlator $M_1^P$. Ref.~\cite{Allison:2008xk} uses method (b) with the same scale variation as Ref.~\cite{Chetyrkin:2009fv}, quoting $4$\,MeV for the truncation error. For methods (a)\,--\,(d) and using $2\,{\rm GeV}\le\mu_m=\mu_\alpha\le4\,{\rm GeV}$ we obtain $1.276 \pm 0.003$, $1.277\pm0.004$, $1.275\pm0.005$ and $1.297\pm0.004$\,GeV, respectively. For independent double scale variation between $2$ and $4$\,GeV we obtain $1.276 \pm 0.013$, $1.277\pm0.012$, $1.271\pm0.012$ and $1.294\pm0.012$\,GeV, and if we use $\overline m_c(\overline m_c)$ as the lower bound we obtain $1.260 \pm 0.039$, $1.267\pm0.037$, $1.259\pm0.041$ and $1.272\pm0.034$\,GeV. These results are displayed graphically in Fig.~\ref{fig:pseudo-variations}. The contour lines for the mass extraction from the first moment of the pseudoscalar correlator for all methods are shown in Fig.~\ref{fig:mccontour1}.
We see that the results show a qualitative agreement with the situation for the vector current, but at a level of perturbative scale variations that are in general roughly larger by a factor of two. A similar study can be performed for the extraction of the bottom mass $\overline m_b(\overline m_b)$ from the second moment of the vector correlator $M_2^V$. Ref.~\cite{Chetyrkin:2009fv} uses the fixed-order expansion [method~(a)] and correlated scale variation between $5\,{\rm GeV}\le\mu_m=\mu_\alpha\le 15\,{\rm GeV}$, quoting a perturbative error of $3$\,MeV. For the same variation we obtain $4.1781\pm0.0005$, $4.1771\pm0.0015$, $4.1818\pm0.0034$ and $4.1792\pm0.0044$\,GeV for methods (a)\,--\,(d), respectively. As in the charm case the results are not consistent, but the variations of the results have a much smaller size, as is expected from the fact that for the bottom the renormalization scales are much larger. For independent variation between the same values we get $4.183 \pm 0.008$, $4.181\pm0.006$, $4.180\pm0.006$ and $4.186\pm0.013$\,GeV. Finally, if the lower limit of the double variation starts at $\overline m_b(\overline m_b)$ we find $4.179 \pm 0.011$, $4.181\pm0.011$, $4.175\pm0.011$ and $4.184\pm0.015$\,GeV. These results are collected in Fig.~\ref{fig:bottom-variations}. The corresponding $\overline m_b(\overline m_b)$ contours in the $\mu_m$\,--\,$\mu_\alpha$ plane are shown in Fig.~\ref{fig:mbcontour1}. As for the charm case, we find fully consistent results for the independent scale variation and using $\overline m_b(\overline m_b)$ as the lower bound. We have also studied the ratio of the second over the first moment for the three cases, and observe a very similar pattern. We do not provide a detailed discussion in the text, but display the outcome graphically in Figs.~\ref{fig:charm-ratio-variations} to \ref{fig:bottom-ratio-variations}.
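The scale-variation procedure used throughout this section, scanning $\mu_m$ and $\mu_\alpha$ independently over a range and quoting the spread of the extracted masses, can be sketched schematically as follows. The error convention (midpoint plus/minus half the spread) and the toy mass function are illustrative assumptions, not the exact prescription of the analysis:

```python
import itertools

def scale_variation(extract_mass, mu_lo, mu_hi, npoints=5):
    """Scan (mu_m, mu_alpha) independently over [mu_lo, mu_hi] and return
    the midpoint of the extracted masses and half their spread."""
    grid = [mu_lo + i * (mu_hi - mu_lo) / (npoints - 1) for i in range(npoints)]
    masses = [extract_mass(mm, ma) for mm, ma in itertools.product(grid, grid)]
    mid = 0.5 * (max(masses) + min(masses))
    return mid, 0.5 * (max(masses) - min(masses))

# Toy mass function with a mild residual scale dependence (purely illustrative):
central, err = scale_variation(lambda mm, ma: 1.28 + 0.01 * (mm - ma), 1.3, 4.0)
```

The quoted uncertainty grows as the lower edge `mu_lo` is pushed down, mirroring the strong dependence on the lower bound of the variation range observed above.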
\section{Convergence Test} \label{sec:convergence} At this point it is useful to consider the perturbative series for all choices of $\mu_\alpha$ and $\mu_m$ as different perturbative expansions, which can have different convergence properties. To estimate the perturbative uncertainties one analyzes the outcome of this set of (truncated) series. While the uncorrelated scale variation certainly is a conservative method, one possible concern is that it might lead to an overestimate of the size of the perturbative error. For instance, this might arise for a non-vanishing value of $\ln(\mu_m/\mu_\alpha)$ in connection with sizeable values of $\alpha_s(\mu_\alpha)$ for $\mu_\alpha$ close to the charm mass scale, which might artificially spoil the convergence of the expansion. One possible resolution might be to simply reduce the range of scale variation (such as increasing the lower bound). However, this does not resolve the issue, since the resulting smaller variation merely represents a matter of choice. Furthermore, there is in general no guarantee that the series which are left have a better convergence despite the fact that the overall scale variation might become reduced. Preferably, the issue should be fixed from inherent properties of the perturbative series themselves. It is possible to address this issue by supplementing the uncorrelated scale variation method with a convergence test constraint, which we explain in the following. We implement a finite-order version of the root convergence test. Let us recall that in mathematics, the root test (also known as Cauchy's radical test) states that for a series of terms $a_n$, $S[a]=\sum_{n} a_n$, if the quantity $V_\infty$ defined as \begin{equation}\label{eq:cauchi} V_\infty \equiv \limsup_{n\to\infty} (a_n)^{1/n}\,, \end{equation} is smaller (bigger) than $1$, the sum is absolutely convergent (divergent). 
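For a concrete picture of how the root test behaves when only a few terms are available, one can evaluate $(|a_n|)^{1/n}$ for the known terms of a series and take the maximum, as in this small sketch (the series used here are purely illustrative):

```python
def root_estimates(terms):
    """(|a_n|)^(1/n) for n = 1, 2, ... from the known terms a_1, a_2, ... of a series."""
    return [abs(a) ** (1.0 / n) for n, a in enumerate(terms, start=1)]

def v_finite(terms):
    """Finite-order analogue of V_infinity: the largest root-test estimate."""
    return max(root_estimates(terms))

# For a geometric series a_n = r^n one has (a_n)^(1/n) = r at every order,
# so the finite-order estimator recovers the convergence parameter exactly.
geo = [0.3**n for n in range(1, 5)]
slow = [0.8**n for n in range(1, 5)]
```

A smaller value signals a faster converging series, which is precisely the property exploited by the convergence parameter $V_c$ defined below.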
If $V_\infty$ approaches $1$ from above, the series is still divergent; otherwise the test is not conclusive. In Eq.~(\ref{eq:cauchi}) $\limsup$ stands for the superior limit, which essentially means that in the case of oscillating series one takes the maximum value of the oscillation. In the context of our analysis with truncated series the relevant property is that a smaller $V_\infty$ implies better convergence of the series. For the different expansion methods we use, it is simplest to apply the method directly to the sequence of quark masses that are extracted order by order, rewriting the results as a series expansion. Since we only know a finite number of coefficients of the perturbative series, we need to adapt the test. We now introduce $V_c$ and proceed as follows:\,\footnote{One could think of implementing the ratio test as well. However, since we only know a small number of terms, it is likely that one of them becomes close to zero, making one of the ratios blow up. This makes the ratio test very unstable.} \begin{itemize} \item[(a)] For each pair of renormalization scales $(\mu_m,\mu_\alpha)$ we define the convergence parameter $V_c$ from the charm mass series $\overline{m}_c(\overline{m}_c)=m^{(0)}+\delta m^{(1)}+\delta m^{(2)}+\delta m^{(3)}$ resulting from the extractions at ${\cal O}(\alpha_s^{0,1,2,3})$: \begin{equation} V_c = \max\!\bigg[\frac{\delta m^{(1)}}{m^{(0)}}\,,\Big(\frac{\delta m^{(2)}}{m^{(0)}}\Big)^{\!\!1/2}, \Big(\frac{\delta m^{(3)}}{m^{(0)}}\Big)^{\!\!1/3}\,\bigg]. \end{equation} \item[(b)] The resulting distribution of $V_c$ values can be conveniently cast as a histogram, and this distribution is a measure for the overall convergence of the perturbative expansion being employed. We apply the convergence analysis to the region \mbox{$\overline{m}_c(\overline{m}_c) \leq \mu_\alpha,\mu_m \le 4$\,GeV} for charm, and $\overline{m}_b(\overline{m}_b) \leq \mu_\alpha,\mu_m \le 15$\,GeV for bottom. 
If the distribution is peaked around the average $\langle V_c\rangle$, the expansion has well-defined convergence properties. Hence discarding series with $V_c\gg \langle V_c\rangle$ (particularly if they significantly enlarge the estimate of the perturbative error) is justified. \end{itemize} \begin{figure*}[t!] \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/histo-vector-charm} \label{fig:histograms-vector}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/histo-pseudo-charm} \label{fig:histograms-pseudo}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/histo-vector-bottom} \label{fig:histograms-bottom}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/histo-vector-ratio} \label{fig:histograms-vector-ratio}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/histo-pseudo-ratio} \label{fig:histograms-pseudo-ratio}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/histo-bottom-ratio} \label{fig:histograms-bottom-ratio}} \caption{$V_c$ distribution for $\overline{m}_c(\overline{m}_c)$ from the first moment of the vector (a) and pseudoscalar (b) correlator, and for $\overline{m}_b(\overline{m}_b)$ for the second moment of the vector correlator (c), for expansions (a)\,--\,(d). The three lower panels show the same for the ratio of the second over the first moment for expansions (a)\,--\,(c).} \label{fig:histograms} \end{figure*} \begin{figure*}[t!] 
\subfigure[] { \includegraphics[width=0.31\textwidth]{figs/discard-vector-charm} \label{fig:trimming-vector}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/discard-pseudo-charm} \label{fig:trimming-pseudo}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/discard-vector-bottom} \label{fig:trimming-bottom}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/discard-vector-ratio} \label{fig:trimming-vector-ratio}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/discard-pseudo-ratio} \label{fig:trimming-pseudo-ratio}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/discard-bottom-ratio} \label{fig:trimming-bottom-ratio}} \caption{Half of the scale variation of $\overline{m}_q(\overline{m}_q)$ at ${\mathcal{O}(\alpha_s^3)}$ as a function of the fraction of the discarded series with highest $V_c$ values for the first moment of the vector (a) and pseudoscalar (b) correlators for charm, the second moment of the vector correlator for bottom (c); the ratio of the second over the first moment for the vector [charm (d) and bottom (f)] and pseudoscalar [charm (e)] correlators.} \label{fig:trimming} \end{figure*} Fig.~\ref{fig:histograms-vector} shows the $V_c$ distributions for expansions (a)\,--\,(d) for the extraction of the charm mass from the vector moment $M_1^V$. We find $\langle V_c\rangle_{\rm double}=(0.15,\,0.15,\,0.17,\,0.19)$\footnote{Interestingly, the same analysis for the correlated variation with $2\,{\rm GeV} \leq \mu_\alpha=\mu_m \le 4$\,GeV yields \mbox{$\langle V_c\rangle_{\rm corr}=(0.14,\,0.16,\,0.19,\,0.19)$}, which is similar to the outcome for the double variation. This same observation can be made for the rest of the correlators.} and that the distributions are peaked around $\langle V_c\rangle$, indicating a very good overall convergence. 
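To make the definition concrete, the finite-order convergence parameter $V_c$ can be evaluated with a few lines of code. The numbers in the example are hypothetical placeholders, not values from our fits; taking absolute values (so that sign-alternating corrections are also handled) is an assumption beyond the definition in the text.

```python
# Finite-order version of the root test: for a truncated mass series
# m = m0 + dm1 + dm2 + dm3 (extractions at O(alpha_s^{0,1,2,3})),
#   V_c = max[ dm1/m0, (dm2/m0)^(1/2), (dm3/m0)^(1/3) ].
# Absolute values are taken here so the sketch also covers
# sign-alternating corrections (an assumption beyond the text).
def convergence_parameter(m0, dm1, dm2, dm3):
    return max(abs(dm1 / m0),
               abs(dm2 / m0) ** 0.5,
               abs(dm3 / m0) ** (1.0 / 3.0))

# Hypothetical example series in GeV (placeholder numbers):
vc = convergence_parameter(1.28, 0.015, 0.004, 0.001)
```

A smaller $V_c$ signals a faster-converging series; the distribution of $V_c$ over the $(\mu_m,\mu_\alpha)$ grid is what the histograms display.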
The scale variation error (defined as half the overall variation) as a function of the fraction of the series (with the largest $V_c$ values) that are being discarded is shown in Fig.~\ref{fig:trimming-vector}. We see that the roughly 2\% of the series with the highest $V_c$ values are by themselves responsible for increasing the scale variation from well below $15$\,MeV up to $20$\,MeV. These series are located at the upper-left and lower-right corners of Figs.~\ref{fig:mccontour1} and \ref{fig:mbcontour1}, and Fig.~6 of Ref.~\cite{Dehnadi:2011gc}, corresponding to values of $\mu_m$ and $\mu_\alpha$ far from each other. Given that these series have very large $V_c$ values and do not reflect the overall good convergence behavior of the bulk of the series, it is justified to remove them from the analysis. The $V_c$ distributions for the pseudoscalar first moment $M_1^P$ are shown in Fig.~\ref{fig:histograms-pseudo}, again showing a clear peak. However, with $\langle V_c\rangle_{\rm double}=(0.24,\,0.24,\,0.25,\,0.21)$ [for correlated variation $\langle V_c\rangle_{\rm corr}=(0.22,\,0.23,\,0.22,\,0.15)$ with $2$\,GeV as the lower bound], the average $V_c$ values are significantly larger than for the vector correlator, indicating that the overall perturbative convergence for the pseudoscalar moment is still excellent but worse than for the vector moment. This means that the vector correlator method is superior, and we expect the perturbative uncertainty in the charm mass from the pseudoscalar correlator to be larger. This expectation is indeed confirmed, as we discussed in Sec.~\ref{sec:previous-results}; see also Sec.~\ref{sec:results}. Fig.~\ref{fig:trimming-pseudo} shows that the effect of discarding the series with the worst convergence is very similar to that for the vector correlator. 
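The discarding procedure just described reduces, for a grid of $(\mu_m,\mu_\alpha)$ points, to sorting the extracted masses by their $V_c$ value and recomputing half the spread of the surviving ones. A minimal sketch, with an illustrative stand-in list rather than our actual grid:

```python
def trimmed_scale_variation(results, discard_fraction=0.03):
    """results: list of (mass, Vc) pairs, one per (mu_m, mu_alpha) point.
    Drop the discard_fraction of entries with the largest Vc, then
    return half of the total spread of the surviving mass values."""
    kept = sorted(results, key=lambda mv: mv[1])  # sort by Vc, ascending
    n_keep = max(1, int(round(len(kept) * (1.0 - discard_fraction))))
    masses = [m for m, _ in kept[:n_keep]]
    return 0.5 * (max(masses) - min(masses))

# Illustrative stand-in grid: one outlier with poor convergence.
err = trimmed_scale_variation(
    [(1.280, 0.10), (1.292, 0.12), (1.300, 0.15), (1.400, 0.90)],
    discard_fraction=0.25)
```

In this toy example the single outlier with $V_c=0.90$ is removed, and the quoted half-spread drops accordingly.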
For our determination of the bottom mass we use the second moment $M_2^V$ (see Sec.~\ref{sec:exp} for a discussion of why we discard the first moment), and employ uncorrelated scale variations in the range $\overline m_b(\overline m_b) \leq \mu_m , \mu_\alpha \leq 15$\,GeV. Fig.~\ref{fig:histograms-bottom} shows the corresponding histograms, and we find that the convergence test yields $\langle V_c\rangle_{\rm double}=(0.13,\,0.11,\,0.12,\,0.15)$ for expansions (a)\,--\,(d) [for the correlated variation with scales set equal and $5\,{\rm GeV} \leq \mu_\alpha=\mu_m \le 15$\,GeV we find $\langle V_c\rangle_{\rm corr}=(0.13,\,0.09,\,0.13,\,0.15)$]. As expected, the averages for the bottom are much smaller than for the charm. We further find that discarding the series with the highest $V_c$ values has only minor effects on the perturbative error estimate for fractions up to $5\%$, see Fig.~\ref{fig:trimming-bottom}. This confirms that the series for the bottom moments are overall more stable, which is again expected from the fact that perturbation theory should work better for the bottom than for the lighter charm. The behavior of the ratios of moments is very similar to that of the regular moments, as can be seen in Figs.~\ref{fig:trimming} and \ref{fig:histograms}, panels (d)\,--\,(f). We find the following average values of $V_c$ for methods (a)\,--\,(c): ratios of charm vector moments $\langle V_c\rangle_{\rm double}=(0.19,\,0.18,\,0.19)$ [$\langle V_c\rangle_{\rm corr} = (0.16,\,0.16,\,0.23)$]; ratios of charm pseudoscalar moments $\langle V_c\rangle_{\rm double}=(0.25,\,0.23,\,0.18)$ [$\langle V_c\rangle_{\rm corr}=(0.25,\,0.20,\,0.16)$]; ratios of bottom vector moments $\langle V_c\rangle_{\rm double}=(0.13,\,0.12,\,0.13)$ [$\langle V_c\rangle_{\rm corr}=(0.13,\,0.11,\,0.14)$]. 
Therefore we conclude that the perturbative convergence of the ratios of moments is in general terms a bit worse than that of the regular moments, except for the linearized iterative method for the pseudoscalar ratios. In our final numerical analyses we discard the $3$\% of the series with the highest $V_c$ values. As can be seen from Fig.~\ref{fig:histograms}, this only affects series with $V_c$ values much larger than the average of the whole set of series. We keep the fraction of discarded series as small as possible, since our aim is to remove only series with convergence properties that are obviously much worse than those of the bulk of the series. We call this procedure {\it trimming} in the following. As we see in Fig.~\ref{fig:order-plots}, the results including the trimming show a very good order-by-order convergence for the heavy quark mass determinations, and at each order every expansion method gives consistent results for the central values as well as for the estimate of the perturbative uncertainties. Figs.~\ref{fig:order-plot-mc-vec} and \ref{fig:order-plot-mc-pseudo} show the results for $\overline{m}_c(\overline{m}_c)$ from the vector and pseudoscalar correlators, respectively, for expansions (a)\,--\,(d) at ${\cal O}(\alpha_s^{1,2,3})$ and with $\overline{m}_c(\overline{m}_c)\le\mu_m,\mu_\alpha\le 4$\,GeV, using the first moment. Figs.~\ref{fig:order-plot-mc-vec-rat} and \ref{fig:order-plot-mc-pseudo-rat} show results for methods (a)\,--\,(c), using the ratio of the second over the first moment. Analogously, Figs.~\ref{fig:order-plot-mb-vec} and \ref{fig:order-plot-mb-vec-ratio} show the results for $\overline{m}_b(\overline{m}_b)$ for the second moment, and the ratio of the second over the first moment, respectively, with the uncorrelated variation $\overline{m}_b(\overline{m}_b)\le\mu_m,\mu_\alpha\le 15$\,GeV. \begin{figure*}[t!] 
\subfigure[] { \includegraphics[width=0.31\textwidth]{figs/orders-charm-vector} \label{fig:order-plot-mc-vec}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/orders-charm-pseudo} \label{fig:order-plot-mc-pseudo}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/orders-bottom-vector} \label{fig:order-plot-mb-vec}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/orders-charm-ratio} \label{fig:order-plot-mc-vec-rat}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/orders-pseudo-ratio} \label{fig:order-plot-mc-pseudo-rat}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/orders-bottom-ratio} \label{fig:order-plot-mb-vec-ratio}} \caption{Charm and bottom mass values from the first [second] moment of the vector (a) for charm [(c) for bottom] and pseudoscalar [(b), charm] currents at $\mathcal{O}(\alpha_s^{1,2,3})$; and for the ratio of the second over the first moment for the vector [(d) for charm, (f) for bottom] and pseudoscalar [(e), charm] correlators for expansions (a)\,--\,(d) [(a)\,--\,(c) for ratios], in red, blue, green and purple, respectively.} \label{fig:order-plots} \end{figure*} \section{Lattice Simulation Data} \label{sec:lattice-data} The pseudoscalar current is not realized in nature in a way which is useful to compute the moments of the corresponding correlator from experimental data. Results for the moments of the pseudoscalar current correlator can, however, be obtained from simulations on the lattice. The strategy of these numerical simulations is to tune the lattice parameters (such as the bare coupling constant and masses) to a selected number of observables (e.g.\ the energy splitting of $\Upsilon$ resonances). Once this tuning is performed, the lattice action is fully specified and no further changes are implemented. The tuned lattice action can then be used to perform all sorts of predictions, moments of correlators among them. 
Lattice simulations have to face a number of challenges, which usually translate into sizeable uncertainties. Among those are the extrapolations to infinite volume and to zero lattice spacing (the latter being much harder), as well as the extrapolation to physical quark masses. On top of these systematic uncertainties, there are also statistical ones, which are related to the finite sampling used to perform the path integrations. On the other hand there are also concerns about the type of lattice action which is being used for the fermions. According to Ref.~\cite{Allison:2008xk}, the moments of the pseudoscalar correlator are least affected by systematic uncertainties, and so HPQCD has focused on those for their subsequent high-precision analyses. To the best of our knowledge, HPQCD is the only lattice collaboration which has published results on QCD correlators. They have used the staggered-quarks lattice action, and MILC configurations for gluons. These results have been used to determine the charm mass and the strong coupling constant~\cite{Allison:2008xk,McNeile:2010ji,Chakraborty:2014aca} with high accuracy, as well as the bottom mass~\cite{McNeile:2010ji,Colquhoun:2014ica}, with smaller precision. We will use the simulation results as given in Ref.~\cite{Allison:2008xk}, even though the results quoted in \cite{McNeile:2010ji,Chakraborty:2014aca} are a bit more precise. The reason for this choice is that while \cite{Allison:2008xk} makes a straightforward extrapolation to the continuum, which is independent of the charm mass and $\alpha_s$ extractions, in \cite{McNeile:2010ji,Chakraborty:2014aca} the fit of the quark masses and the strong coupling constant, and the extrapolation to the continuum, are performed all at once, in a single fit. Furthermore, that fit contains a large number of priors for the very parameters one is interested in. 
In any case, as we have seen, the charm mass extraction from the pseudoscalar correlator is dominated by perturbative uncertainties, as a result of the worse convergence of the series expansion for its moments. Ref.~\cite{Allison:2008xk} provides simulation results for the so-called reduced moments $R_k$, which are collected in their Table~II. The index $k$ takes only even values, and starts with the value $k = 4$, which is fairly insensitive to the charm mass. Hence the lowest moment we consider is $R_6$. Reduced moments are defined as (up to a global power) the full moment divided by the tree-level result. By taking this ratio, the authors of Ref.~\cite{Allison:2008xk} claim that large cancellations between systematic errors take place. The reduced moments are scaleless, and the mass dimension that one needs to determine the charm quark mass is regained by dividing by the mass of the $\eta_c$ pseudoscalar particle. Thus one can easily translate the reduced moments into the more familiar correlator moments $M_n^P$ with the following relation: \begin{equation} M_n^P = [C_P(n_f=4)]^{0,0}_{n,0}\bigg(\dfrac{R_{2n+4}}{m_{\eta_c}}\bigg)^{\!2n}\,, \end{equation} where the $C_P$ coefficients correspond to the tree-level terms of the standard fixed-order expansion of Eq.~(\ref{eq:Mn-theo-FO}). Although the experimental value for the $\eta_c$ mass is $2.9836(7)\,$GeV, we use the value $m_{\eta_c} = 2.980\,$GeV given in Ref.~\cite{Allison:2008xk}, in order to ease the comparison with that analysis. In Ref.~\cite{Chakraborty:2014aca} the value $m_{\eta_c} = 2.9863(27)\,$GeV is used. It is claimed that (as for the lattice action) this value contains no QED effects, and that its error accounts for $c\bar{c}$ annihilation. Using the value quoted in Ref.~\cite{Chakraborty:2014aca} changes $M_1^P$ by $0.4\%$, and the effect on the charm mass is of the order of $2$\,MeV. The uncertainty in the $\eta_c$ mass has no effect on the $M_n^P$ errors. 
In Table~\ref{tab:Latt} we quote the lattice simulation results written as regular moments $M_n^P$. \begin{table}[tbh!] \center \begin{tabular}{|cccc|} \hline $M_1^P$ & $M_2^P$ & $M_3^P$ & $M_4^P$ \tabularnewline\hline $1.402(20)$ & $1.361(40)$ & $1.426(59)$ & $1.558(89)$ \tabularnewline \hline \end{tabular} \caption{Lattice simulation results for the moments of the twice-subtracted pseudoscalar correlator $P(q^2)$ for $n_f = 4$. Moments are given in units of $10^{-n}\times$\,GeV$^{-2n}$.\label{tab:Latt}} \end{table} \section{Computation of the Experimental Moments for the Bottom Correlator} \label{sec:exp} In this section we present our computation of the moments for the bottom vector current correlator from experimental data. These consist of three distinct contributions: the narrow resonances below threshold, the region of broader resonances, explored by BABAR \cite{:2008hx}, and the continuum region, where no data has been taken and some modeling is required. The BABAR data has to be corrected for initial-state radiation and vacuum polarization effects. In the continuum region we use a model which combines a linear fit to the BABAR experimental points with energies larger than $11.05\,$GeV and perturbation theory as a model for the missing experimental data, joined smoothly by a cubic interpolation. We assign a conservative uncertainty guided by the error function of the linear fit to the BABAR data in the region with the highest-energy measurements. Our determination of the experimental bottom moments differs from Ref.~\cite{Chetyrkin:2009fv} in the way we model the uncertainties of the hadronic cross section in the continuum region, plus other minor differences in the contributions from the narrow widths and the threshold region, see the discussion in Sec.~\ref{sec:comp-exp}. We also provide the correlation matrix among different moments, which cannot be found in the literature. 
Therefore, even though there are some similarities with the computations outlined in \cite{Chetyrkin:2009fv}, we find it justified to discuss our computation of the experimental moments in some detail. We note that our results for the moments of the bottom vector current correlator have already been used in the analyses of high-$n$ moments of Ref.~\cite{Hoang:2012us}, in the context of nonrelativistic large-$n$ sum rules. \subsection{Narrow Resonances} The contribution of resonances below the open bottom threshold $\sqrt{s}=10.62\,\rm{GeV}$ includes $\Upsilon (1S)$ up to $\Upsilon(4S)$. We use the narrow width approximation to compute their contribution to the experimental moments, finding \begin{equation} M_n^{\rm res} = \frac{9\pi \Gamma_{ee}}{\alpha(M_\Upsilon)^2 M_\Upsilon^{2n+1}}\,. \end{equation} The masses and electronic widths of these four resonances are taken from the PDG \cite{Agashe:2014kda}, and the values of the effective electromagnetic coupling constant evaluated at the $\Upsilon$ masses are taken from Ref.~\cite{Kuhn:2007vp}. This information is collected in Table~\ref{tab:NarrowRes}. We have also checked that if one uses a Breit-Wigner instead of the narrow width approximation the results change by an amount well within the error due to the uncertainty in the electronic width. In analogy to what we found in our study of the charm moments \cite{Dehnadi:2011gc}, the effect of the mass uncertainty in the moments is negligible. Therefore one only needs to consider the experimental uncertainty in the electronic widths. There is no information on the correlation between the measurements of these widths. 
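With the inputs of Table~\ref{tab:NarrowRes}, the narrow-width contribution of a single resonance to the moments can be evaluated directly. The sketch below is a numerical cross-check of the formula above, not our analysis code:

```python
import math

ALPHA_QED = 1.0 / 137.035999084  # fine structure constant

def narrow_moment(n, mass, gamma_ee_kev, r):
    """Contribution of one narrow resonance to M_n (units GeV^-2n):
      M_n^res = 9*pi*Gamma_ee / (alpha(M)^2 * M^(2n+1)),
    with alpha(M)^2 = ALPHA_QED^2 / r, where r = (alpha_QED/alpha(M))^2
    as listed in Table [tab:NarrowRes]. Mass in GeV, width in keV."""
    gamma_ee = gamma_ee_kev * 1e-6                 # keV -> GeV
    alpha_sq = ALPHA_QED ** 2 / r
    return 9.0 * math.pi * gamma_ee / (alpha_sq * mass ** (2 * n + 1))

# Upsilon(1S) contribution to the first moment, using the Table inputs:
m1_1S = narrow_moment(1, 9.46030, 1.340, 0.932069)
```

Higher moments are suppressed by additional powers of $M_\Upsilon^2$, so the narrow resonances matter most for low $n$.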
The PDG averages of the electronic partial widths for the first three resonances are dominated by the CLEO measurement~\cite{Rosner:2005eu}.\footnote{Refs.~\cite{Kuhn:2007vp,Chetyrkin:2009fv} assume that the error of the electronic width for the first three narrow resonances is $100\%$ correlated, and uncorrelated with that of the $\Upsilon(4S)$.} Therefore we take the approach that half of the width's uncertainty (in quadrature) is uncorrelated (thus mainly of statistical origin), whereas the other half is correlated among the various resonances (thus coming from common systematics in the measurements). \begin{table}[t!] { \centering \begin{tabular}{|c|cccc|} \hline {} & $\Upsilon(1S)$ & $\Upsilon(2S)$ & $\Upsilon(3S)$ & $\Upsilon(4S)$ \\ \hline $M_{\Upsilon}(\rm{GeV})$ & 9.46030(26) & 10.02326(31) & 10.3552(5) & 10.5794(12) \\ $ \Gamma_{\rm{ee}}(\rm{keV})$ & 1.340(18) & 0.612(11) & 0.443(8) & 0.272(29) \\ $ \Big(\frac{\alpha_{\rm QED}}{\alpha(M_{\Upsilon})}\Big)^{\!2}$ & 0.932069 & 0.93099 & 0.930811 & 0.930093 \\ \hline \end{tabular} \caption{Masses and electronic widths of the narrow $\Upsilon$ resonances \cite{Agashe:2014kda} and the effective electromagnetic coupling constant \cite{Kuhn:2007vp}. $\alpha_{\rm QED} = 1/137.035999084(51)$ denotes the fine structure constant. \label{tab:NarrowRes}} } \end{table} \subsection{Threshold Region} The region between the open bottom threshold and the experimental measurement of the \mbox{$R_b$-ratio} at the highest energy, $10.62\,\rm{GeV} \le \sqrt{s}\le 11.2062\,{\rm GeV}$, is referred to as the threshold region. The region above the last experimental measurement will be collectively denoted as the continuum region. The first experimental data close to the B meson threshold were taken by the CLEO \cite{Ammar:1997sk, Besson:1984bd, Huang:2006em} and CUSB \cite{Lovelock:1985nb} collaborations. The measurements at each c.m.\ energy have a $6 \% $ systematic uncertainty. 
More recently, the BABAR collaboration~\cite{:2008hx} has measured the \mbox{$R_b$-ratio} in the energy region between $10.54\,\rm{GeV}$ and $11.20\,\rm{GeV}$, with significantly higher statistics and better control of the systematic uncertainties (of the order of $3\%$). These measurements were taken in small energy bins, densely populating the threshold region. The BABAR data supersedes the older data of CLEO and CUSB, and it has already been used in Refs.~\cite{Chetyrkin:2009fv, Chetyrkin:2010ic, Bodenstein:2012}, in which the bottom mass was also determined. This BABAR data for the \mbox{$R_b$-ratio} has not been corrected for initial-state radiation and vacuum polarization effects. Moreover, the effect of the $\Upsilon(4S)$ resonance has not been subtracted,\footnote{The radiative tails of the first three resonances are provided by BABAR, so they can be subtracted at the data level, before correcting for vacuum polarization effects.} so we have performed the subtraction ourselves, using the Breit-Wigner approximation with the PDG value $\Gamma_{4S} = 20.5$\,MeV for the total width: \begin{equation} \label{eq:Y4S} R^{\rm{BW}}(s)= \frac{9\, M_{4S}^2\, \Gamma^{4S}_{\rm{ee}}}{\alpha(M_{4S})^2} \frac{\Gamma_{4S}}{(s-M_{4S}^2)^2+\Gamma_{4S}^2 M_{4S}^2}\, . \end{equation} For the subtraction of the $\Upsilon(4S)$ resonance and the correction for initial-state radiation we take an approach similar to Ref.~\cite{Chetyrkin:2009fv}. \subsubsection{Subtraction of the $\Upsilon(4S)$ Radiative Tail} \label{sub:Subtraction} Before subtracting the radiative tail of the $\Upsilon(4S)$ resonance one has to account for vacuum polarization effects. The BABAR experimental data has been normalized to the theoretical Born-level dimuon cross section (using the fine structure constant rather than the running effective electromagnetic coupling), instead of normalizing to the number of events with muons in the final state. 
Therefore one has to multiply the BABAR data by $[\alpha_{\rm em}/\alpha(s)]^2$, which we take as constant with value $0.93$. The contribution to be subtracted from the BABAR data (already corrected for vacuum polarization effects) is the ISR-distorted tail of the $\Upsilon(4S)$, which reaches to energies above its mass. The cross section $R$ and the ISR-distorted cross section $\hat R$ are related by a convolution relation \begin{equation} \label{convolution} \hat{R}(s)= \int_{z_0}^{1} \dfrac{{\rm d} z}{z}\, G(z,s) \ R(s \, z) \, , \end{equation} which can be used to determine the ISR effects on the $\Upsilon(4S)$ resonance given in Eq.~(\ref{eq:Y4S}). Here the lower integration bound is $z_0=(10\,\rm{GeV})^2/s$. This value is not fully fixed by theoretical arguments, and it is chosen such that it excludes the narrow resonances, but keeps the major part of the $\Upsilon(4S)$ line shape. The radiator function $G$ is given as~\cite{Jadach:1988gb,Chetyrkin:1996tz} \begin{align} G(z,s) & = (1-z)^{\beta(s) \,-\, 1}\, \widetilde{G}(z,s)\,,\\ \widetilde{G}(z,s) & = \beta(s)\, e^{\delta_{\rm{yfs}}(s)} \, F(s) \big[\,\delta_{C}^{V+S}(s) + \delta_C^H(s,z) \,\big]\,,\nonumber \end{align} where the specific form of $\beta$, $F$ and the two $\delta$'s can be found in Eq.~(7) of \cite{Chetyrkin:2009fv}. Note that the function $G(z,s)$ is divergent as $z \rightarrow 1 $, but since $-1 < \beta - 1 < 0$, it is integrable. The divergent behavior is absent in $\widetilde{G}$, which in the limit $z\rightarrow 1$ reduces to \begin{equation} \widetilde{G}(1,s) \,=\, \beta(s)\,e^{\delta_{\rm{yfs}}(s)} \, F(s)\,\delta_{C}^{V+S}(s) \,. \end{equation} After subtracting the radiative tail of the $\Upsilon(4S)$ we find that, to a good approximation, the cross section vanishes for energies below $10.62\,\rm{GeV}$. Therefore we add an additional point to our BABAR dataset, $R_b(10.62\,\rm{GeV})=0$, and take $R_b=0$ for energies below $10.62$\,GeV. 
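For orientation, the Breit-Wigner tail of Eq.~(\ref{eq:Y4S}) that is being subtracted can be sketched numerically, with the $\Upsilon(4S)$ inputs of Table~\ref{tab:NarrowRes} and $\Gamma_{4S}=20.5$\,MeV; this is an illustration of the formula, not our analysis code:

```python
ALPHA_QED = 1.0 / 137.035999084  # fine structure constant

def r_breit_wigner(s, m=10.5794, gamma=0.0205, gamma_ee=0.272e-6,
                   r_alpha=0.930093):
    """Breit-Wigner R-ratio of Eq. (eq:Y4S) for the Upsilon(4S).
    s in GeV^2, widths in GeV; r_alpha = (alpha_QED/alpha(M_4S))^2
    from Table [tab:NarrowRes]."""
    alpha_sq = ALPHA_QED ** 2 / r_alpha
    return (9.0 * m ** 2 * gamma_ee / alpha_sq) \
        * gamma / ((s - m ** 2) ** 2 + gamma ** 2 * m ** 2)

# At s = M_4S^2 the formula reduces to 9*Gamma_ee/(alpha(M)^2*Gamma):
peak = r_breit_wigner(10.5794 ** 2)
```

The line shape falls off steeply away from the peak, which is why the subtracted cross section is consistent with zero below $10.62$\,GeV.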
Since the subtracted cross section does not exactly vanish between $10.5408$\,GeV and $10.62$\,GeV, we take the (small) contribution of the subtracted cross section in that region to the moments as an additional source of systematic correlated uncertainty. \subsubsection{Deconvolution of Initial-State Radiation} After subtracting the radiative tails and correcting for vacuum polarization effects, the BABAR threshold data are corrected for ISR. The inversion of the convolution in Eq.~(\ref{convolution}) can be carried out in an iterative way~\cite{Chetyrkin:2009fv}. Defining $\delta G(z,s)\,=\, G(z,s)\, -\, \delta(1-z)$ one can use a successive series of approximations \begin{equation}\label{eq:ISR-inversion} R^j(s)=R^0(s) -\int_{z_0}^1 \, \dfrac{{\rm d} z}{z}\,\delta G(z,s)\,R^{j-1}(s\,z), \end{equation} where we denote the $j$-th approximation of $R(s)$ as $R^{j}(s)$ and use as starting point \mbox{$R^0(s)=\hat{R}(s)$}, the BABAR data after correcting for vacuum polarization effects and subtracting the radiative tails. In Eq.~(\ref{eq:ISR-inversion}) we take $z_0=(10.62\,\rm{GeV})^2/s$, using as a starting point the energy value at which the cross section vanishes after the subtraction of the radiative tails. To isolate the singularity at the upper endpoint one can perform a subtraction at $z = 1$, resulting in: \begin{eqnarray} R^j(s) &=& R^0(s) + R^{j-1}(s) - \int_{z_{0}}^{1} \dfrac{{\rm d} z}{z}\, \, \big(1-z \big)^{\beta(s)\,-\,1} \, \Big[ \widetilde{G}(z,s)\, R^{j-1}(s\, z) - z\,\widetilde{G}(1,s)\, R^{j-1}(s) \Big] \nonumber \\ & - & \frac{1}{\beta(s)}\, \widetilde{G}(1,s)\, R^{j-1}(s) \big(1-z_0 \big)^{\beta(s)}\,. \end{eqnarray} We use the trapezoidal rule to evaluate the integration on the discrete set of experimental data measurements labeled by the index $i$. 
Changing the integration variable from $z$ to energy we find \begin{align} \label{eq:master-iterative} &R^j_i = R^0_i + R^{j-1}_i+\widetilde{G}(1,E_i^2)R_i^{j-1}\Bigg( 1- \frac{E_1^2}{E_i^2}\Bigg)^{\!\beta(E_i^2)}\Bigg( \frac{E_1(E_2-E_1)}{E_i^2-E_1^2} -\frac{1}{\beta (E_i^2)}\Bigg) \\ & -\sum_{k=2}^{i-1}\Bigg(1- \frac{E_k^2}{E_i^2} \Bigg)^{\!\beta(E_i^2)\,-\,1}\! E_k \Bigg[ \widetilde{G}\Bigg( \frac{E_k^2}{E_i^2},E_i^2 \Bigg)\frac{R_k^{j-1}}{E_k^2}- \widetilde{G}(1,E_i^2) \frac{R_i^{j-1}}{E_i^2} \Bigg](E_{k+1}-E_{k-1})\,, \nonumber \end{align} where we have used $R_1^j=R(10.62\,\rm{GeV})\equiv 0$ for all iterations. After applying the procedure as many times as necessary to obtain a stable solution, one obtains the ISR-corrected cross section. Among the experimental measurements one finds two data points taken at very similar values of the energy, $10.86$\,GeV and $10.8605$\,GeV. Their close proximity turns out to make the iterative procedure unstable, so we drop the latter point from our analysis. In Fig.~\ref{fig:BABAR-data} we show the BABAR data after the subtraction of all radiative tails, before (red) and after (blue) the ISR and vacuum polarization corrections. \begin{figure} \center \includegraphics[width=0.90\textwidth]{figs/BABAR-data} \caption{BABAR experimental data before (red) and after (blue) the ISR correction is applied. The purple bar on the right refers to the pQCD prediction for the continuum region. We have removed one data point at $E = 10.8605$\,GeV. \label{fig:BABAR-data}} \end{figure} \subsubsection{Determination of the Unfolding Error Matrix} \label{sec:unfold} The BABAR collaboration splits the experimental uncertainties into statistical, systematic uncorrelated, and systematic correlated. We add the first two in quadrature to obtain the total uncorrelated uncertainty $\epsilon^{\rm uncor}$ and rename the latter as the total correlated uncertainty $\epsilon^{\rm{cor}}$. 
The removal of the radiative tails of the $\Upsilon$ mesons has no effect on these uncertainties. Therefore, the correlation matrix for the BABAR data after the subtraction of the radiative tails, before it is corrected for ISR effects, can be written as \begin{equation}\label{eq:M00} M^{0\,0}_{ij}=(\epsilon_i^{\rm{uncor}})^2\,\delta_{ij}+\epsilon^{\rm{cor}}_i\epsilon^{\rm{cor}}_j\,. \end{equation} One needs to compute a new correlation matrix after each iteration. In this way we determine the unfolding error matrix. The master formula in Eq.~(\ref{eq:master-iterative}) can be cast in matrix form as follows: \begin{equation} R^j_i = R^0_i + \sum_{k=2}^{i} G_{ik} R_k^{j-1}\,, \end{equation} where $R^j_i$ is to be thought of as the $i$-th component of the column vector $R^j$, and $G_{ik}$ represents the $(i,k)$-component of a matrix $G$. Here $R_i^j$ depends only on the initial value $R_i^0$ and the result of the previous iteration $R_i^{j-1}$. The $G_{ik}$ do not depend on $R_k^j$ or on the iteration step $j$. Therefore, for the error propagation one uses \begin{equation} \frac{\partial R^j_i}{\partial R^0_k} = \delta_{ik}\,, \qquad \frac{\partial R^j_i}{\partial R^{j-1}_k} = G_{ik}\,, \end{equation} both of them $j$-independent. We will denote by $M^{j\,j}$ the correlation matrix among the entries of the vector $R^j$ for a given iteration $j$. We also find it convenient to introduce the correlation matrix among $R^j$ and $R^0$, referred to as $M^{j\,0}$. Finally we use the notation $M^{0\,j}\equiv(M^{j\,0})^T$. We find for the correlation matrix after $j$ iterations: \begin{align} M^{j\,j} & = M^{0\,0}+M^{0\,j-1} \ G^{T}+G \ M^{(j-1)\,0} + G \ M^{(j-1)\,(j-1)}\, G^T ,\\ M^{j\,0} & = M^{0\,0} + G \,M^{(j-1)\,0} ,\nonumber \end{align} where the elements of the matrix $M^{0\,0}$ are given in Eq.~(\ref{eq:M00}). We find that after five iterations the result has already converged to a level well below the experimental uncertainties. 
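The matrix iteration and the covariance recursion above can be mimicked with small toy matrices; the sketch below uses plain nested lists, and the inputs are illustrative, not the actual BABAR unfolding matrices:

```python
# Toy sketch of the unfolding iteration R^j = R^0 + G R^(j-1) and the
# covariance recursion
#   M^{jj} = M^{00} + M^{0,j-1} G^T + G M^{j-1,0} + G M^{j-1,j-1} G^T,
#   M^{j0} = M^{00} + G M^{j-1,0},
# starting from M^{j-1,0} = M^{00} (plain lists, no external libraries).

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_add(*Ms):
    return [[sum(vals) for vals in zip(*rows)] for rows in zip(*Ms)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def unfold_with_errors(R0, G, M00, iterations=5):
    R, Mjj, Mj0 = list(R0), M00, M00
    for _ in range(iterations):
        # R^j_i = R^0_i + sum_k G_ik R^{j-1}_k
        R = [r0 + sum(g * r for g, r in zip(row, R))
             for r0, row in zip(R0, G)]
        Mjj_new = mat_add(M00,
                          mat_mul(transpose(Mj0), transpose(G)),
                          mat_mul(G, Mj0),
                          mat_mul(G, mat_mul(Mjj, transpose(G))))
        Mj0 = mat_add(M00, mat_mul(G, Mj0))
        Mjj = Mjj_new
    return R, Mjj

# Toy inputs: diagonal smearing matrix and unit starting covariance.
R_unf, M_unf = unfold_with_errors([1.0, 1.0],
                                  [[0.1, 0.0], [0.0, 0.1]],
                                  [[1.0, 0.0], [0.0, 1.0]])
```

For this contracting toy kernel the unfolded vector and its covariance settle after a few iterations, mirroring the rapid convergence observed for the real data.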
Our unfolded BABAR data agrees well with that worked out in Ref.~\cite{Chetyrkin:2010ic}. \subsubsection{Contribution of the Threshold Region} After having corrected the BABAR data for ISR and vacuum polarization effects, we use the trapezoidal rule for integrating the threshold region between $10.62\,\rm{GeV}$ and $11.20\,\rm{GeV}$: \begin{equation}\label{eq:trapezoid} M^{\rm{thr}}_n = \dfrac{1}{2n}\bigg[ \sum_{i=2}^{N-1}R_i\bigg(\frac{1}{E_{i-1}^{2n}} - \frac{1}{E_{i+1}^{2n}}\bigg) + R_N\bigg(\frac{1}{E_{N-1}^{2n}} - \frac{1}{E_N^{2n}}\bigg)\bigg]\,, \end{equation} where $R_i$ has already been ISR corrected and $N$ is the number of data points. We have added the boundary condition point $R_1 = R(10.62\,{\rm GeV}) = 0$. From Eq.~(\ref{eq:trapezoid}) one can compute the correlation matrix among the $M^{\rm{thr}}_n$ for various $n$ values, using the unfolding matrix among the $R_i$ computed in Sec.~\ref{sec:unfold}. \subsection{Continuum Region} \label{sec:continuum} For the determination of the experimental moments from the region above $11.2$\,GeV we use pQCD (which has essentially negligible errors) supplemented by a modeling uncertainty. Comparing pQCD (purple line in Fig.~\ref{fig:BABAR-pQCD}) to a linear fit to the BABAR data in the region between $11.06\,$GeV and $11.2\,$GeV (red dotted line in Fig.~\ref{fig:BABAR-pQCD}) we find a $10\%$ discrepancy in the central values. The fit function has a roughly constant $4\%$ relative uncertainty. The linear fit function shows a growing pattern such that it would meet the pQCD prediction at around $11.5\,$GeV. This result is very robust, since a quadratic fit yields the same meeting point. To model the continuum in the region between $11.2$\,GeV and $11.52\,$GeV we patch together the linear fit function to the BABAR data and the result of pQCD above $11.52\,$GeV using a cubic function, demanding continuity and smoothness at $11.2$\,GeV and $11.52\,$GeV. The result is shown as the central red line in Fig.~\ref{fig:BABAR-pQCD}. 
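The smooth cubic patch is fixed uniquely by the four matching conditions (value and slope at $11.2$\,GeV and at $11.52$\,GeV), so in Hermite form it can be written down directly. The endpoint values and slopes in the example below are hypothetical placeholders, not our fit results:

```python
def cubic_patch(a, b, fa, dfa, fb, dfb):
    """Unique cubic on [a, b] with value/slope (fa, dfa) at a and
    (fb, dfb) at b, written in cubic-Hermite form."""
    def h(E):
        t = (E - a) / (b - a)
        h00 = 2 * t ** 3 - 3 * t ** 2 + 1
        h10 = t ** 3 - 2 * t ** 2 + t
        h01 = -2 * t ** 3 + 3 * t ** 2
        h11 = t ** 3 - t ** 2
        return (fa * h00 + dfa * (b - a) * h10
                + fb * h01 + dfb * (b - a) * h11)
    return h

# Hypothetical endpoints: linear-fit value/slope at 11.2 GeV and a
# flat pQCD value at 11.52 GeV (placeholder numbers).
patch = cubic_patch(11.2, 11.52, 0.30, 0.31, 0.35, 0.0)
```

By construction the patched curve is continuous and differentiable at both matching points, which is exactly the requirement imposed in the text.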
Given that the relative discrepancy between experiment and pQCD for $R_b$ at the $Z$-pole is about $0.3\%$~\cite{ALEPH:2010aa}, we adopt a relative modeling error that decreases linearly from $4\%$ at $11.2$\,GeV to $0.3\%$ at $m_Z$, and stays constant for energies larger than $m_Z$. This is shown as the red band in Fig.~\ref{fig:BABAR-pQCD}. This uncertainty makes up $96.9\%$ of the total error for the first moment $M_1^V$ (which has a total $2.45\%$ relative error), and $86.15\%$ of that for the second moment $M_2^V$ (which has a total $1.85\%$ relative error). Note that if we adopted a constant $4\%$ error for all energies above $11.2$\,GeV, this continuum uncertainty would make up $97.24\%$ of the total error for the first moment $M_1^V$ (from a total $2.60\%$ relative error), and $86.46\%$ of that for the second moment $M_2^V$ (from a total $1.87\%$ relative error). The difference is small because contributions from higher energies are suppressed. Following our procedure in Ref.~\cite{Dehnadi:2011gc} we consider this uncertainty as fully correlated among the various moments, but without any correlation to the narrow resonances or the threshold region. \begin{figure} \center \includegraphics[width=0.7\textwidth]{figs/BABAR-pQCD} \caption{Comparison of ISR-corrected BABAR data in the continuum region (black dots with error bars) with pQCD (purple band). The red band shows our reconstruction of the continuum, which includes a linear fit to the BABAR data, patched smoothly to the pQCD prediction using a cubic polynomial in the energy.
\label{fig:BABAR-pQCD}} \end{figure} The perturbative QCD expression that we use to determine this contribution includes the non-singlet massless quark cross section supplemented with bottom mass corrections up to $\mathcal{O}(\overline{m}_b^{\,4}/s^2)$.\footnote{We note that the double massive fermion bubble contribution to $R_{bb}$ in Eq.~(\ref{eq:Rhad}) includes both virtual and real radiation terms in the large energy expansion. However, when this formula is used to compare pQCD to the existing BABAR data, below the four-bottom-quark threshold, the real radiation should be excluded. We have checked that this inconsistency has an effect below $0.1$\%.} It takes into account only contributions from the electromagnetic current coupled to the bottom quark. It reads~\cite{Bernreuther:1981sp, Gorishnii:1986pz, Chetyrkin:1993hi, Chetyrkin:1994ex, Chetyrkin:2000zk}\footnote{The authors of Ref.~\cite{Chetyrkin:2010ic} use the pole mass instead of the $\overline{\rm MS}$ mass, and include $\alpha_s^4$ and QED corrections.
This explains some numerical differences in the analyses.} \begin{equation} R^{\rm{th}}_{b\bar b}(s) = N_c\, Q_b^2\, R^{\rm{ns}}(s,\overline{m}^{\,2}_b(\sqrt{s}),n_f = 5,\alpha_s^{(n_f=5)}(\sqrt{s}))\,, \end{equation} where \begin{eqnarray}\label{eq:Rhad} && R^{\rm{ns}} (s, \overline{m}^{\,2}_b(\mu),n_f=5,\alpha_s^{(n_f=5)}(\mu),\mu) \nonumber \\ && = 1 + \frac{\alpha_s}{\pi}+ \Big(\frac{\alpha_s}{\pi} \Big)^2 (1.40923-1.91667\, L_s)+\Big(\frac{\alpha_s}{\pi}\Big)^3(-\,12.7671-7.81872\, L_s+3.67361\,L_s^2\,)\nonumber \\ && +\,\frac{\overline{m}^{\,2}_b(\mu)}{s} \Big[ 12 \frac{\alpha_s}{\pi}+\Big(\frac{\alpha_s}{\pi}\Big)^2 (104.833 - 47\, L_s) +\Big(\frac{\alpha_s}{\pi}\Big)^3 (541.753-724.861\, L_s + 137.083\, L_s^2\,) \Big] \nonumber \\ && +\,\frac{\overline{m}_b^{\!4}(\mu)}{s^2} \Big[-6 + \Big(\frac{\alpha_s}{\pi}\Big)(-\,22+24\, L_s)+ \Big(\frac{\alpha_s}{\pi}\Big)^2 (139.014 - 4.83333\, L_m + 214.5\,L_s - 71\, L_s^2\,)\nonumber \\ && +\Big(\frac{\alpha_s}{\pi}\Big)^3 \ (3545.81 - 158.311\, L_m + 9.66667 \,L_m^2 - 538.813\, L_s + 37.8611 \, L_m\, L_s \nonumber \\ && -\,1037.79\, L_s^2+185.389 \,L_s^3\,) \Big]\,, \end{eqnarray} with \begin{equation} L_s\equiv \ln \Big(\frac{s}{\mu^2}\Big), \quad L_m\equiv \ln\Big(\frac{\overline{m}^{\,2}_b(\mu)}{s}\Big)\,,\quad \alpha_s = \alpha_s^{(n_f=5)}(\mu)\,. \end{equation} We use the initial conditions $\overline{m}_b(\overline{m}_b)=4.2 \,\rm{GeV}$ and $\alpha_s(m_Z)=0.118$. 
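For orientation, the truncated series of Eq.~(\ref{eq:Rhad}) can be evaluated directly. The sketch below takes $\alpha_s(\mu)$ and $\overline m_b(\mu)$ as external inputs (their renormalization-group running is not implemented here, and the sample values at $\sqrt{s}=12$\,GeV are rough placeholders):

```python
import math

def R_ns(s, mb2, a, mu2):
    """Truncated series of Eq. (Rhad): a = alpha_s^(nf=5)(mu)/pi,
    mb2 = mbar_b(mu)^2; s, mb2, mu2 in GeV^2."""
    Ls, Lm = math.log(s / mu2), math.log(mb2 / s)
    r = (1 + a + a**2 * (1.40923 - 1.91667 * Ls)
         + a**3 * (-12.7671 - 7.81872 * Ls + 3.67361 * Ls**2))
    r += (mb2 / s) * (12 * a + a**2 * (104.833 - 47 * Ls)
                      + a**3 * (541.753 - 724.861 * Ls + 137.083 * Ls**2))
    r += (mb2**2 / s**2) * (
        -6 + a * (-22 + 24 * Ls)
        + a**2 * (139.014 - 4.83333 * Lm + 214.5 * Ls - 71 * Ls**2)
        + a**3 * (3545.81 - 158.311 * Lm + 9.66667 * Lm**2 - 538.813 * Ls
                  + 37.8611 * Lm * Ls - 1037.79 * Ls**2 + 185.389 * Ls**3))
    return r

# R_bb = Nc Qb^2 R_ns; the inputs mbar_b(12 GeV) ~ 3.6 GeV and
# alpha_s(12 GeV) ~ 0.215 below are illustrative placeholders only.
s = 12.0**2
Rbb = 3 * (1.0 / 3.0)**2 * R_ns(s, mb2=3.6**2, a=0.215 / math.pi, mu2=s)
```

At $\mu^2 = s$ the logarithms $L_s$ vanish and only the massless series plus the $\overline{m}_b^{\,2}/s$ and $\overline{m}_b^{\,4}/s^2$ corrections contribute.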
Therefore, for the continuum region we use the following expression, \begin{align} M^{\rm pQCD}_n = &\int_{s_0}^{s_1} {\rm d} s\, \frac{R^{\rm{cubic}}_{bb} (s)}{s^{n+1}} \bigg[1 + \gamma' \dfrac{0.04(m_Z^2 - s) + 0.003(s-s_0)}{m_Z^2-s_0}\bigg] \\ & +\int_{s_1}^{m_Z^2} {\rm d} s\, \frac{R^{\rm{th}}_{bb} (s)}{s^{n+1}} \bigg[1 + \gamma' \dfrac{0.04(m_Z^2 - s) + 0.003(s-s_0)}{m_Z^2-s_0}\bigg]\nonumber\\ & + (1 + 0.003\,\gamma')\int_{m_Z^2}^{\infty} {\rm d} s\, \frac{R^{\rm{th}}_{bb} (s)}{s^{n+1}}\, ,\qquad \gamma' = 0 \pm 1\,,\nonumber \end{align} with $s_0 = (11.2062\,{\rm GeV})^2$ and $s_1 = (11.52\,{\rm GeV})^2$; here $R^{\rm{cubic}}_{bb}$ is the cubic function that smoothly interpolates between the linear fit to the BABAR data and pQCD. The auxiliary variable $\gamma'$ parametrizes our uncertainty, which we consider as $100\%$ correlated among the various moments. The related entries of the correlation matrix are trivially computed as \begin{equation} C_{nn'}^{\rm{pQCD}} = \frac{\partial M^{\rm{pQCD}}_n}{\partial \gamma'} \ \frac{\partial M^{\rm{pQCD}}_{n'}}{\partial \gamma'}\,. \end{equation} \subsection{Final Results for the Experimental Moments} \begin{table}[t!] \centering \begin{tabular}{|c|cccc|} \hline n & Resonances & $10.62-11.2062$ & $11.2062-\infty$ & Total\tabularnewline \hline $1$ & $1.394(12|22)$ & $0.270(2|9)$ & $2.862(0|108)$ & $4.526(12|111)$ \\ $2$ & $1.459(12|22)$ & $0.226(1|8)$ & $1.148(0|45)$ & $2.834(12|51)$ \\ $3$ & $1.538(12|22)$ & $0.190(1|7)$ & $0.611(0|24)$ & $2.338(12|34)$ \\ $4$ & $1.630(13|22)$ & $0.159(1|6)$ & $0.365(0|15)$ & $2.154(13|27)$\\ \hline \end{tabular} \caption{ Results for our computations of the experimental moments. The second column collects the contribution from the first four $\Upsilon$ resonances (using the narrow width approximation).
The third to fifth columns show the contributions from the threshold (using ISR-corrected BABAR data) and continuum (using an interpolation between a linear fit to the BABAR data at the highest energies and pQCD as a model for the lack of data) regions, and the total moment determinations, respectively. The two numbers quoted in parentheses correspond to the uncorrelated and correlated experimental uncertainties, respectively. All numbers are given in units of $10^{-(2n+1)} \, \rm{GeV}^{-2n}$.\label{tab:moments-results}} \end{table} The full result for the experimental moments is obtained by summing up all the portions described before, \begin{equation} M^{\rm exp}_n=M^{\rm res}_n+M^{\rm thr}_n+M^{\rm pQCD}_n\,. \end{equation} We determine two correlation matrices among the first four moments. One of them comes from the various uncorrelated uncertainties, whereas the other encodes the systematic uncertainties. We denote them as the uncorrelated and correlated correlation matrices, respectively. These are computed by summing up the respective individual matrices from each region, and in the same way as we did for our charm analysis \cite{Dehnadi:2011gc}, we assume there is no region-to-region correlation. We find: \begin{equation} \label{eq:corr-mat}\!\!\!\! C^{\rm exp}_{\rm uc} = \left( \begin{array}{cccc} 0.0002 & 0.0002 & 0.0002 & 0.0002 \\ 0.0002 & 0.0002 & 0.0002 & 0.0002 \\ 0.0002 & 0.0002 & 0.0002 & 0.0002 \\ 0.0002 & 0.0002 & 0.0002 & 0.0002 \\ \end{array}\right)\!, \, C^{\rm exp}_{\rm cor} = \left( \begin{array}{cccc} 0.0122 & 0.0055 & 0.0032 & 0.0021 \\ 0.0055 & 0.0026 & 0.0017 & 0.0012 \\ 0.0032 & 0.0017 & 0.0011 & 0.0009 \\ 0.0021 & 0.0012 & 0.0009 & 0.0007 \\ \end{array}\right)\! , \end{equation} where the $(n,m)$ entry of each matrix is given in units of $10^{-2(n + m + 1)} \, \rm{GeV}^{-2(n+m)}$, and the total correlation matrix is the sum of $C^{\rm exp}_{\rm uc}$ and $C^{\rm exp}_{\rm cor}$.
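As a consistency check, restoring the units of the matrices in Eq.~(\ref{eq:corr-mat}) and summing them reproduces, within the quoted rounding, the total experimental errors of Table~\ref{tab:moments-results}:

```python
import numpy as np

# Matrices as quoted in Eq. (corr-mat); entry (n, m) carries units of
# 10^(-2(n+m+1)) GeV^(-2(n+m)), which are restored before summing.
C_uc = 0.0002 * np.ones((4, 4))
C_cor = np.array([[0.0122, 0.0055, 0.0032, 0.0021],
                  [0.0055, 0.0026, 0.0017, 0.0012],
                  [0.0032, 0.0017, 0.0011, 0.0009],
                  [0.0021, 0.0012, 0.0009, 0.0007]])

n = np.arange(1, 5)
units = 10.0 ** (-2.0 * (n[:, None] + n[None, :] + 1))
cov_total = (C_uc + C_cor) * units          # in GeV^(-2(n+m))

errors = np.sqrt(np.diag(cov_total))        # 1-sigma errors of M_1 .. M_4
```

For instance, the resulting error of the first moment is $\simeq 0.111\times 10^{-3}\,{\rm GeV}^{-2}$, in agreement with the last column of Table~\ref{tab:moments-results}.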
The contribution of each region to the final experimental moments and the corresponding uncertainties are presented in Table~\ref{tab:moments-results}. \section{Comparison to other Determinations of the Experimental Moments} \label{sec:comp-exp} In this section we compare our result for the experimental moments for the bottom vector current correlator with previous determinations. These are collected in Table~\ref{tab:moments-comparison}.\footnote{In the case of Ref.~\cite{Corcella:2002uu}, we reconstruct the experimental moments from their Table~3, where the moments are split into several different contributions. For the reconstructed uncertainty, we take one half of the narrow-resonance error as correlated among the resonances, and the other half as uncorrelated. The errors from patches where theory input is used are taken as fully correlated with one another. The total narrow-resonance error and the total ``theory-patch'' error are added in quadrature to get the final uncertainty.} The most relevant comparison is between the second and third columns, where the most recent data on the narrow resonances and the BABAR continuum data are used. For the contributions from the narrow resonances we have perfect agreement with \cite{Chetyrkin:2009fv}, albeit with slightly larger errors. For the threshold region our results are slightly smaller, and our uncertainties are almost identical; however, this is not a one-to-one comparison, since our integration region is slightly smaller. Indeed, if we consider their energy range we agree with their numbers almost perfectly. The main difference between these two determinations is the estimate of the uncertainties coming from the continuum region, where the pQCD prediction for the \mbox{$R_b$-ratio} is employed. Whereas we adopt the more conservative approach described in Sec.~\ref{sec:continuum}, Ref.~\cite{Chetyrkin:2009fv} employs only the perturbative uncertainties related to the purple band in Fig.~\ref{fig:BABAR-pQCD}.
In Ref.~\cite{Chetyrkin:2010ic} the same collaboration presents a more critical analysis of their errors. In particular they observe that the last experimental measurement of BABAR, after being corrected for ISR, disagrees with the pQCD prediction at the $20\%$ level (well outside the corresponding uncertainties).\footnote{From our own computation of the ISR-corrected $R_b$-ratio, we only observe a $10\%$ deviation between the last data point and the pQCD prediction, see Fig.~\ref{fig:BABAR-pQCD}.} To resolve this discrepancy they assume two possible scenarios: a) pQCD starts being reliable at energies above $13$\,GeV (therefore the authors interpolate between the last experimental point and pQCD at $13$\,GeV); b) BABAR systematic errors have been underestimated (therefore the central values of the experimental measurements are rescaled by a factor of $1.21$). Ref.~\cite{Chetyrkin:2010ic} quotes the values of the experimental moments and the resulting values for the bottom mass for these two scenarios. Since the difference between the two bottom masses obtained from $M_2^V$ in these scenarios is only slightly larger than the size of the other uncertainties (that is, the uncertainties of the theoretical moments plus the other experimental errors) added in quadrature, it is argued that this issue can be ignored. We disagree with this argument, since the issue constitutes an independent source of uncertainty not covered by the other errors and is, in particular, unrelated to uncertainties in the theoretical moments. Therefore this shift must be taken as an additional source of error on the experimental moments (and indeed would then dominate the corresponding total error). The additional error (to be added in quadrature to the one in round brackets) is quoted in square brackets in the third column of Table~\ref{tab:moments-comparison}.
It amounts to an additional error of $30$, $18$, $11$ and $7$\,MeV for $\overline{m}_b(\overline{m}_b)$ extracted from moments $M_1^V$ to $M_4^V$, respectively. Refs.~\cite{Kuhn:2007vp,Boughezal:2006px,Corcella:2002uu} have used the older CLEO and CUSB experimental measurements, resulting in relatively large uncertainties. In Ref.~\cite{Kuhn:2007vp} the CLEO measurements are divided by a factor of $1.28$, and an error of $10\%$ is assigned. It is argued that this procedure is necessary to reconcile old and new CLEO measurements, as well as to improve the agreement with pQCD predictions. Ref.~\cite{Corcella:2002uu} uses values for the electronic partial widths of the $\Upsilon$ states given by the PDG 2002, which have larger uncertainties. This makes their determination of the experimental moments rather imprecise. Concerning the continuum region, where no measurements exist, some previous analyses have taken a less conservative approach than ours, while in Ref.~\cite{Beneke:2014pta} a much more conservative approach is adopted. In this region they consider the \mbox{$R_b$-ratio} as constant with a $66\%$ uncertainty. A more conservative approach is also adopted in Ref.~\cite{Corcella:2002uu}: between $11.1$ and $12$\,GeV $\mathcal{O}(\alpha_s^2)$ pQCD errors are used, which are larger than $10$\%; for energies above $12$\,GeV a global $10\%$ correlated error is assigned. \begin{table}[h!]
{ \begin{tabular}{|c|cccc|} \hline $n$ & This work & Chetyrkin et al.\ '09~\cite{Chetyrkin:2009fv} & Kuhn et al.\ '07~\cite{Kuhn:2007vp} & Corcella et al.\ '03~\cite{Corcella:2002uu}\\ \hline $1$ & $4.526(12|111)$ & $4.592(31)[67]$ & $4.601(43)$ & $4.46(17)$ \\ $2$ & $2.834(12|51)$ & $2.872(28)[51]$ & $2.881(37)$ & $2.76(15)$ \\ $3$ & $2.338(12|34)$ & $2.362(26)[40]$ & $2.370(34)$ & $2.26(13)$ \\ $4$ & $2.154(13|27)$ & $2.170(26)[35]$ & $2.178(32)$ & $2.08(12)$ \\ \hline \end{tabular} \caption{Comparison of our results for the experimental moments of the bottom vector correlator (2nd column) to previous determinations (3rd to 5th columns). The 2nd and 3rd columns use BABAR data from Ref.~\cite{:2008hx}, while the 4th and 5th use older data from Refs.~\cite{Besson:1984bd,Ammar:1997sk}. The 3rd and 4th columns use perturbative uncertainties in the continuum region, while the 2nd and 5th use a more conservative estimate based on the agreement of data and pQCD. In the 3rd column, we quote in square brackets our own estimate of an additional systematic error from the considerations made in Ref.~\cite{Chetyrkin:2010ic}. All numbers are given in units of $10^{-(2n+1)} \,\rm{GeV}^{-2n}$. \label{tab:moments-comparison}} } \end{table} \begin{figure*}[t!] \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Comparison-M1}} \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Comparison-M2} } \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Comparison-M3}} \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Comparison-M4} } \caption{Comparison of various determinations of the experimental moments for the bottom vector correlator. Results in blue correspond to analyses of the same collaboration.
The green result and the determination at the top do not use the new BABAR results.} \label{fig:comparison-experimental} \end{figure*} \section{Computation of the Experimental Values for the Ratios of Moments} \label{sec:exp-ratio} Once the experimental values for the moments of the vector and pseudoscalar correlators have been computed, it is in principle a straightforward exercise to calculate ratios of them. The central value is obtained by simply taking the ratio of the corresponding central values. To obtain the uncertainties (or more generally, the correlation matrix among the different ratios of moments), one needs to have access to the complete correlation matrix among the moments. Our computation in Ref.~\cite{Dehnadi:2011gc} [see Eqs.~(3.21) and (3.22)] for the charm experimental moments, and the procedure presented in Sec.~\ref{sec:exp} [see Eq.~(\ref{eq:corr-mat})] to determine the bottom experimental moments, yield the two desired correlation matrices, for statistical and systematic correlations. For the pseudoscalar moments the information on correlations is not provided in Ref.~\cite{Allison:2008xk}. Therefore we make the simplest possible assumption, which is that the moments are fully uncorrelated. This will most certainly overestimate the uncertainties for the ratios of moments, but given that we are anyway dominated by perturbative uncertainties, our approach appears justified. We collect our results for the computation of the ratios of experimental moments in Table~\ref{tab:exp-ratios}. Readers interested in the full correlation matrix among them can send a request to the authors. \begin{table}[t!]
\centering \begin{tabular}{|c|ccc|} \hline $n$ & Vector $n_f = 4$ & Vector $n_f = 5$ & Pseudoscalar $n_f = 4$\tabularnewline\hline $1$ & $6.969(32|59)$ & $6.262(10|53)$ & $0.971(32)$ \tabularnewline $2$ & $8.807(23|26)$ & $8.251(09|48)$ & $1.048(53)$ \tabularnewline $3$ & $9.547(14|13)$ & $9.212(08|35)$ & $1.092(77)$ \tabularnewline \hline \end{tabular} \caption{Ratios of experimental moments for the vector correlator with $4$ and $5$ flavors (second and third column, respectively), and for the pseudoscalar correlator with $4$ flavors (fourth column). For the vector current, the first error in parenthesis corresponds to the statistical uncertainty, whereas the second corresponds to the systematic one. For the pseudoscalar correlator we only quote the lattice error. Moments are given in units of $10^{-2}$\,GeV$^{-2}$, $10^{-3}$\,GeV$^{-2}$ and $10^{-1}$\,GeV$^{-2}$ for the second, third, and fourth column, respectively.\label{tab:exp-ratios}} \end{table} \section{Results} \label{sec:results} In this section we present the final results for our analyses at $\mathcal{O}(\alpha_s^3)$. We take method (c) (linearized iterative expansion) as our default expansion. For the estimate of the perturbative uncertainty, we perform double scale variation in the ranges $\overline m_c(\overline m_c)\le\mu_\alpha,\mu_m\le4$\,GeV for charm (either correlator), and $\overline m_b(\overline m_b)\le\mu_\alpha,\mu_m\le15$\,GeV for bottom, and we discard 3\% of the series with the worst convergence (that is, with the highest values of the $V_c$ convergence parameter). For the charm mass determinations (either vector or pseudoscalar correlator) we use the first moment as our default, given that it is theoretically more reliable than the higher moments. For the analysis of the bottom mass from regular moments, we use $M_2^V$ as our default, since it is less afflicted by systematic experimental errors than the first moment, and is nevertheless theoretically sound.
For the charm and the bottom mass analyses we also examine the ratio of the second over the first moment as a cross check and validation of the results from regular moments. The results for the experimental moments are collected in: the last column of Table~9 in Ref.~\cite{Dehnadi:2011gc} (charm vector correlator regular moments); the last column of Table~\ref{tab:moments-results} (bottom vector correlator regular moments); Table~\ref{tab:Latt} (lattice regular moments); and Table~\ref{tab:exp-ratios} (all ratios of moments). We also analyzed higher (and, for the case of bottom, also lower) moments and ratios of moments for all correlators and quark species. Since, as already discussed, fixed-order and contour-improved higher moments are particularly afflicted by their nonlinear dependence on the quark mass, we only consider the linearized and iterative methods for this analysis. In any case, since higher moments have a larger sensitivity to infrared effects and are therefore theoretically less sound, the analysis involving higher moments mainly aims at providing cross checks. The results are collected in graphical form in Fig.~\ref{fig:higher-moms}, and the numerical results can be obtained from the authors upon request. \begin{figure*}[t!] \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/mc-moments}} \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/mc-moments-pseudo} } \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/mb-moments} } \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/mc-moments-ratio} } \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/mc-pseudo-ratio} } \subfigure[] { \includegraphics[width=0.31\textwidth]{figs/mb-moments-ratio} } \caption{Charm and bottom quark mass determinations for different moments (upper row) or ratios of moments (lower row), for the linearized (in blue) and iterative (in red) methods.
Panels (a), (b) [(d), (e)] show the results for the charm mass from moments [ratios of moments] of the vector and pseudoscalar correlator, respectively. Panels (c) and (f) show the results for the bottom mass from the vector correlator, for moments and ratios of moments, respectively.} \label{fig:higher-moms} \end{figure*} Our final determinations include nonperturbative effects through the gluon condensate including its Wilson coefficients at order $\mathcal{O}(\alpha_s)$. Furthermore, we assign as a conservative estimate of the nonperturbative uncertainty twice the shift caused by including the gluon condensate. In any case, this error is very small, particularly for the bottom mass determination. One source of uncertainty which we have not discussed so far is that coming from the strong coupling constant. Although the world average $\alpha_s(m_Z) = 0.1185 \pm 0.0006$ has a very small error, see Ref.~\cite{Agashe:2014kda}, one cannot ignore the fact that it is largely dominated by lattice determinations, e.g.~\cite{Chakraborty:2014aca}. Furthermore, there are other precise determinations with lower central values, in disagreement with the world average, from event shapes~\cite{Abbate:2010xh, Abbate:2012jh, Gehrmann:2012sc,Hoang:2015hka} and DIS~\cite{Alekhin:2012ig}. A review of recent $\alpha_s$ determinations can be found in Refs.~\cite{Bethke:2011tr,Bethke:2012jm,Pich:2013sqa}. Therefore, in analogy with Ref.~\cite{Dehnadi:2011gc}, we perform our analyses for several values of $\alpha_s(m_Z)$ between $0.113$ and $0.119$, and provide the central values and perturbative errors as (approximate) linear functions of $\alpha_s(m_Z)$. The other uncertainties are essentially $\alpha_s$-independent, so we just provide the average.
We also quote quark mass results for $\alpha_s$ taken from the world average: \begin{equation} \label{eq:alphaswa} \alpha_s(m_Z) = 0.1185 \pm 0.0021\,, \end{equation} where we adopt an uncertainty $3.5$ times larger than the current world average~\cite{Agashe:2014kda}. We note that in Ref.~\cite{Dehnadi:2011gc} we have taken $\alpha_s(m_Z) = 0.1184 \pm 0.0021$ as an input, which causes only tiny sub-MeV differences in the quark masses. We refrain from presenting the $\alpha_s$ dependence of the higher-moment results, which the reader can obtain from the authors upon request. For the numerical analyses that we carry out in this article we have created two completely independent codes: one using Mathematica~\cite{mathematica} and another using Fortran~\cite{gfortran}, which is much faster and suitable for parallelized runs on computer clusters. The two codes agree for the extracted quark masses at the level of $1\,$eV. \subsection{Results for the Charm Mass from the Vector Correlator} \begin{figure*}[t!]
\subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Vector-mc-central} \label{fig:mc-vector-central} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Vector-mc-errors} \label{fig:mc-vector-err} } \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Vector-Ratio-mc-central} \label{fig:mc-vector-ratio-central} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Vector-Ratio-mc-errors} \label{fig:mc-vector-ratio-err} } \caption{ Dependence on $\alpha_s(m_Z)$ of the central values of ${\overline m_c}({\overline m_c})$ and the corresponding perturbative (red), statistical (orange), systematic (blue) and nonperturbative uncertainties (green), for the analysis of the first moment [panels (a) and (b)] and the ratio of the second over the first moment [panels (c) and (d)], corresponding to the vector correlator.} \label{fig:charm-alphaS} \end{figure*} For the analysis using the first moment of the charm vector correlator we use the experimental value quoted in Eq.~(4.1) of Ref.~\cite{Dehnadi:2011gc}: $M_1^{V,\,\rm exp}=(0.2121\,\pm\, 0.0020_{\rm stat}\,\pm\, 0.0030_{\rm syst})\,{\rm GeV}^{-2}$. The outcome of this analysis, and one of the main results of this paper, is: \begin{align}\label{eq:vector-result} \overline m_c(\overline m_c) = & \,1.288 \, \pm \, (0.006)_{\rm stat} \, \pm \, (0.009)_{\rm syst} \, \pm \, (0.014)_{\rm pert}\\ &\, \pm \, (0.010)_{\alpha_s} \, \pm \, (0.002)_{\langle GG\rangle}\,{\rm GeV}\,,\nonumber \end{align} where the quoted errors are (from left to right) experimental uncorrelated, experimental correlated, perturbative, due to the uncertainty in $\alpha_s$ as given in Eq.~(\ref{eq:alphaswa}), and nonperturbative. If we adopt the correlated scale variation $2\,{\rm GeV}\le\mu_\alpha=\mu_m\le4$\,GeV, we obtain for method (c) $1.297\, \pm \, (0.005)_{\rm pert}$, with the other errors essentially unchanged.
For method (a) we would get $1.290\, \pm \, (0.0007)_{\rm pert}$, with a scale variation even smaller than the nonperturbative uncertainty, and $20$ times smaller than our perturbative error estimate with double scale variation [$3$ times for method (c)]. The dependence on $\alpha_s(m_Z)$ is shown graphically in Figs.~\ref{fig:mc-vector-central} and \ref{fig:mc-vector-err}, and analytically the result reads: \begin{eqnarray} \label{eq:mc-vec-alphas} \overline m_c(\overline m_c)& = &(1.288 + 4.40\times[\alpha_s(m_Z) - 0.1185]) \, \pm \, (0.006)_{\rm stat} \, \pm \, (0.009)_{\rm syst}\\ && \, \pm \, (0.014 + 0.95\times[\alpha_s(m_Z) - 0.1185])_{\rm pert} \, \pm \, (0.002)_{\langle GG\rangle}\,.\nonumber \end{eqnarray} Eqs.~(\ref{eq:vector-result}) and (\ref{eq:mc-vec-alphas}) supersede the results given in Eqs.~(4.5) and (4.2) of Ref.~\cite{Dehnadi:2011gc}, respectively. For the ratio of the second over the first moment of the vector correlator we use as the experimental input $R_1^{V,\,\rm exp} = (6.969\,\pm\, 0.032_{\rm stat}\,\pm\, 0.059_{\rm syst})\times 10^{-2}$\,GeV$^{-2}$, which yields the following result for the charm mass: \begin{align} \overline m_c(\overline m_c) = & \,1.271 \, \pm \, (0.003)_{\rm stat} \, \pm \, (0.005)_{\rm syst} \, \pm \, (0.016)_{\rm pert}\\ &\, \pm \, (0.004)_{\alpha_s} \, \pm \, (0.004)_{\langle GG\rangle}\,{\rm GeV}\,.\nonumber \end{align} With correlated variation $2\,{\rm GeV}\le\mu_\alpha=\mu_m\le4\,{\rm GeV}$ we get $1.258\, \pm \, (0.005)_{\rm pert}$ and $1.279\, \pm \, (0.007)_{\rm pert}$ for methods (a) and (c), respectively. 
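As a simple numerical check of the linear parametrization in Eq.~(\ref{eq:mc-vec-alphas}), propagating $\Delta\alpha_s(m_Z)=0.0021$ through the central-value slope reproduces the $\alpha_s$ uncertainty quoted in Eq.~(\ref{eq:vector-result}) to the stated precision, and the individual errors can be combined in quadrature:

```python
import math

def mc_vector(alphas):
    """Central value and perturbative error (GeV) from Eq. (mc-vec-alphas)."""
    d = alphas - 0.1185
    return 1.288 + 4.40 * d, 0.014 + 0.95 * d

central, pert = mc_vector(0.1185)
err_alphas = 4.40 * 0.0021      # ~0.009 GeV, consistent with the quoted (0.010)
total = math.sqrt(pert**2 + 0.006**2 + 0.009**2 + err_alphas**2 + 0.002**2)
```

The total error obtained this way is about $0.020$\,GeV, dominated by the perturbative uncertainty.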
In this case the scale variations are a factor of $2$ to $3$ smaller than our perturbative error estimate.\footnote{Had we taken the fixed-order expansion (a) and correlated scale variation $2\,{\rm GeV}\le\mu_\alpha=\mu_m\le4\,{\rm GeV}$ as the estimate for the perturbative uncertainty, the result from $R_1^V$ with all errors added quadratically would be $1.258\pm0.013$\,GeV, whereas the result from $M_1^V$ would read $1.290\pm0.015$\,GeV. The two results would not be consistent with each other.} The $\alpha_s$ dependence, which can be seen in Figs.~\ref{fig:mc-vector-ratio-central} and \ref{fig:mc-vector-ratio-err}, has the form: \begin{align} \label{eq:mc-rat-alphas} \overline m_c(\overline m_c)& = (1.271 + 1.64\times[\alpha_s(m_Z) - 0.1185]) \, \pm \, (0.003)_{\rm stat} \, \pm \, (0.005)_{\rm syst}\\ &\, \pm \, (0.016 + 1.081\times[\alpha_s(m_Z) - 0.1185])_{\rm pert} \, \pm \, (0.004)_{\langle GG\rangle}\,.\nonumber \end{align} We observe that the central value for the ratio of moments is $17$\,MeV smaller than for the first moment analysis, but fully compatible within theoretical uncertainties. Furthermore, the dependence on $\alpha_s$ of the central value obtained from the regular moment analysis is larger, which translates into a correspondingly larger error due to the uncertainty in $\alpha_s$. Both determinations have very similar perturbative uncertainties for any value of $\alpha_s$. We also see that the charm mass from the ratio of moments has smaller experimental uncertainties, as a result of cancellations between correlated errors. Moreover, the charm mass result from $R_1^V$ has a nonperturbative error twice as large as that from $M_1^V$. The two charm mass results from the first moment and the moment ratio are compared graphically in Fig.~\ref{fig:comparison-observables-charm}. \subsection{Results for the Charm Mass from the Pseudoscalar Correlator} \begin{figure*}[t!]
\subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Pseudo-mc-central} \label{fig:mc-pseudo-central} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Pseudo-mc-errors} \label{fig:mc-pseudo-err} } \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Pseudo-Ratio-mc-central} \label{fig:mc-pseudo-ratio-central} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Pseudo-Ratio-mc-errors} \label{fig:mc-pseudo-ratio-err} } \caption{ Dependence on $\alpha_s(m_Z)$ of the central values of ${\overline m_c}({\overline m_c})$ and the corresponding perturbative (red), lattice (blue) and nonperturbative uncertainties (green), for the analysis of the first moment [panels (a) and (b)] and the ratio of the second over the first moment, [panels (c) and (d)], corresponding to the pseudoscalar correlator.} \label{fig:mc-ratios-alphaS} \end{figure*} For the analysis of the first moment of the charm pseudoscalar correlator we employ $M_1^{P,\,{\rm latt}}\,=\,(0.1402\,\pm\, 0.0020_{\rm latt})\,{\rm GeV}^{-2}$~\cite{Allison:2008xk}, which yields the following charm mass determination: \begin{align} \overline m_c(\overline m_c) = \,1.267 \, \pm \, (0.008)_{\rm lat} \, \pm \, (0.035)_{\rm pert} \, \pm \, (0.019)_{\alpha_s} \, \pm \, (0.002)_{\langle GG\rangle}\,{\rm GeV}\,. \end{align} With correlated scale variation $2\,{\rm GeV}\le\mu_\alpha=\mu_m\le4\,{\rm GeV}$ we obtain the central values $1.278$ and $1.276$\,GeV, for methods (b) and (c), respectively. In both cases the scale variation is $4$\,MeV, 8 times smaller than our perturbative error estimate with double scale variation. 
For the $\alpha_s$ dependence, we find \begin{align} \label{eq:mc-lat-alphas} \overline m_c(\overline m_c)& = (1.267 + 8.36\times[\alpha_s(m_Z) - 0.1185]) \, \pm \, (0.008)_{\rm lat}\\ &\, \pm \, (0.035 + 2.38\times[\alpha_s(m_Z) - 0.1185])_{\rm pert} \, \pm \, (0.002)_{\langle GG\rangle} \,,\nonumber \end{align} which is also displayed in Figs.~\ref{fig:mc-pseudo-central} and \ref{fig:mc-pseudo-err}. As expected, the perturbative error is much larger than for the vector correlator, and has a stronger dependence on $\alpha_s$. We see that the central value has a much stronger dependence on $\alpha_s$ as well, which again translates into a large error due to the uncertainty in the strong coupling. The central value is $21$\,MeV lower than that in Eq.~(\ref{eq:vector-result}), but fully compatible within errors [see Fig.~\ref{fig:comparison-observables-charm}]. The nonperturbative uncertainties are identical to the vector current case. For the ratio of the second over the first moment of the pseudoscalar correlator we use $R_1^{P,\,\rm latt} = (0.0971\, \pm\, 0.0032_{\rm latt})\,$GeV$^{-2}$. We find for the charm mass \begin{align} \overline m_c(\overline m_c) = \,1.266\, \pm \, (0.020)_{\rm latt} \, \pm \, (0.018)_{\rm pert} \, \pm \, (0.006)_{\alpha_s} \, \pm \, (0.002)_{\langle GG\rangle}\,{\rm GeV}\,. \end{align} Using correlated variation $2\,{\rm GeV}\le\mu_\alpha=\mu_m\le4\,{\rm GeV}$ one obtains $1.270\, \pm \, (0.007)_{\rm pert}$ and $1.278\, \pm \, (0.003)_{\rm pert}$ for methods (a) and (c), respectively. These scale variations are factors of $3$ and $6$ smaller than our perturbative error estimate, respectively.
The $\alpha_s$ dependence is \begin{align} \label{eq:mc-rat-lat-alphas} \overline m_c(\overline m_c) & = (1.266 + 2.31\times[\alpha_s(m_Z) - 0.1185]) \, \pm \, (0.020)_{\rm latt}\\ &\, \pm \, (0.018 + 1.25\times[\alpha_s(m_Z) - 0.1185])_{\rm pert} \, \pm \, (0.002)_{\langle GG\rangle} \,.\nonumber \end{align} The central values for both $M_1^P$ and $R_1^P$ are almost identical, but their $\alpha_s$ dependence is not: the latter is much smaller (even smaller than for $M_1^V$, but larger than for $R_1^V$). Note that the lattice error is larger for the ratio since we made the very conservative assumption that the lattice moments are fully uncorrelated. This is because the correlation matrix among the various lattice moments is unknown. The perturbative error is reduced by a factor of two for any value of $\alpha_s$ when using the ratio, but we have checked that this only happens for the iterative expansion. On the other hand, the $\alpha_s$ dependence of the perturbative uncertainty is smaller for the regular moment determination. The nonperturbative errors are identical. All charm determinations are illustrated graphically in Fig.~\ref{fig:comparison-observables-charm}, where in red we show our preferred determination from the first moment of the vector correlator. \subsection{Results for the Bottom Mass from the Vector Correlator} \begin{figure*}[t!]
\subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Vector-mb-central} \label{fig:mb-vector-central} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Vector-mb-errors} \label{fig:mb-vector-err} } \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/Vector-Ratio-mb-central} \label{fig:mb-vector-ratio-central} } \subfigure[]{ \includegraphics[width=0.48\textwidth]{figs/Vector-Ratio-mb-errors} \label{fig:mb-vector-ratio-err} } \caption{ Dependence on $\alpha_s(m_Z)$ of the central values of ${\overline m_b}({\overline m_b})$ and the corresponding perturbative (red), statistical (orange), systematical (blue) and nonperturbative uncertainties (green), for the analysis of the second moment [panels (a) and (b)] and the ratio of the second over the first moment [panels (c) and (d)] of the vector correlator.} \label{fig:mb-alphaS} \end{figure*} For our determination of the bottom quark mass from the second moment of the vector correlator we use for the experimental moment $M_2^{V,\,\rm exp}=(2.834\,\pm\,0.012_{\rm stat}\,\pm\,0.051_{\rm syst})\times 10^{-5}$\,GeV$^{-4}$, and we obtain \begin{align} \overline m_b(\overline m_b) = & \,4.176 \, \pm \, (0.004)_{\rm stat} \, \pm \, (0.019)_{\rm syst} \, \pm \, (0.010)_{\rm pert}\\ &\, \pm \, (0.007)_{\alpha_s} \, \pm \, (0.0001)_{\langle GG\rangle}\,{\rm GeV}\nonumber\,. \end{align} The perturbative error is $30\%$ smaller than for the charm vector correlator analysis, as a result of the smaller value of $\alpha_s$ at scales close to the bottom mass. This is consistent with our discussion of the convergence properties of the perturbation series for the bottom quark carried out in Sec.~\ref{sec:convergence}. The total error is dominated by the experimental systematic uncertainty, which in turn mainly comes from the continuum region, where one relies on modeling in the absence of any experimental measurements. The nonperturbative error is completely negligible.
This is expected since it is suppressed by two powers of the bottom mass. Using the correlated scale variation $5\,{\rm GeV}\le\mu_\alpha=\mu_m\le15\,{\rm GeV}$ for methods (a) and (c) we get $4.178$ and $4.182$ for the central values and scale variations which are $20$ and $3$ times smaller, respectively. The $\alpha_s$ dependence reads \begin{align} \label{eq:mb-alphas} \overline m_b(\overline m_b)& = (4.176 + 3.22\times[\alpha_s(m_Z) - 0.1185]) \, \pm \, (0.004)_{\rm stat} \, \pm \, (0.019)_{\rm syst}\\ &\, \pm \, (0.010 + 0.472\times[\alpha_s(m_Z) - 0.1185])_{\rm pert} \, \pm \, (0.0001)_{\langle GG\rangle} \,.\nonumber \end{align} For the analysis based on the ratio of the second moment over the first, we use $R_2^{V,\,\rm exp} = (6.262\,\pm\,0.010_{\rm stat}\,\pm\,0.053_{\rm syst})\times 10^{-3}$\,GeV$^{-2}$, and with this value we obtain the bottom mass \begin{align} \overline m_b(\overline m_b) = & \,4.179 \, \pm \, (0.003)_{\rm stat} \, \pm \, (0.017)_{\rm syst} \, \pm \, (0.009)_{\rm pert}\\ &\, \pm \, (0.003)_{\alpha_s} \, \pm \, (0.0002)_{\langle GG\rangle}\,{\rm GeV}\nonumber\,. \end{align} With correlated scale variation $5\,{\rm GeV}\le\mu_\alpha=\mu_m\le15\,{\rm GeV}$ we obtain $4.175\, \pm \, (0.003)_{\rm pert}$ and $4.182\, \pm \, (0.004)_{\rm pert}$ for methods (a) and (c), respectively. In this case the scale variation is smaller by a factor $3$ and $2$, respectively. The $\alpha_s$ dependence reads as follows: \begin{align} \label{eq:mb-ratio-alphas} \overline m_b(\overline m_b)& = (4.179 + 1.199\times[\alpha_s(m_Z) - 0.1185]) \, \pm \, (0.003)_{\rm stat} \, \pm \, (0.017)_{\rm syst}\\ &\, \pm \, (0.009 + 0.426\times[\alpha_s(m_Z) - 0.1185])_{\rm pert} \, \pm \, (0.0002)_{\langle GG\rangle}\,.\nonumber \end{align} Although the central value for the ratio analysis is $3$\,MeV higher, this has no significance given the size of the uncertainties. The dependence of the central value on $\alpha_s$ is three times smaller for the ratio analysis. 
The perturbative error and its $\alpha_s$ dependence are roughly the same for the ratio and the single moment analysis. Moreover, the two experimental errors are very similar. This is because, even though there is some cancellation of correlated errors in the ratio, a significant part of the huge systematic error of the first moment remains uncanceled. A graphical illustration of the two bottom mass determinations is shown in Fig.~\ref{fig:comparison-observables-bottom}. Both combined uncertainties and central values are rather similar, and we adopt the result from the second moment (in red) as our default result. \begin{figure*}[t!] \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/observable-charm} \label{fig:comparison-observables-charm}} \subfigure[] { \includegraphics[width=0.48\textwidth]{figs/observable-bottom} \label{fig:comparison-observables-bottom}} \caption{Charm (a) [bottom (b)] mass determinations from the first [second] moment of the vector correlator (in red), the first moment of the pseudoscalar correlator (green, charm only), and the ratio of the second over the first moment of the vector (blue) and pseudoscalar correlator (purple, charm only).} \label{fig:comparison-observables} \end{figure*} \section{Comparison to other Determinations} \label{sec:comparison-masses} In this section we compare our updated charm mass determination from the vector correlator, our new results for the charm mass from the pseudoscalar current correlator, and our bottom mass determination to previous analyses. We restrict our discussion to determinations which use QCD sum rules with infinite as well as finite energy range for the vector or pseudoscalar current correlators, including relativistic and nonrelativistic versions of the sum rules.
We do not cover charm mass determinations from DIS or bottom mass determinations from jets (which are in any case rather imprecise), nor determinations which are based on the mass of bound states (B mesons or quarkonia) or B decays. In Figs.~\ref{fig:charm-comparions} and \ref{fig:bottom-comparions} we present in graphical form a compilation of recent sum rule determinations of the charm and bottom masses, respectively. We have labeled them from top to bottom with numbers from $1$ to $14$. We note that, when comparing these results, one has to keep in mind that different analyses in general employed different values and uncertainties for the strong coupling. Only the analyses in Refs.~\cite{Hoang:2004xm, Chetyrkin:2009fv, Bodenstein:2010qx, Dehnadi:2011gc, Hoang:2012us} and ours have provided the dependence of their results on the value of $\alpha_s(m_Z)$. \subsection{Charm Mass} Let us first focus our attention on the charm mass, Fig.~\ref{fig:charm-comparions}. Within each color, determinations are ordered according to publication date. In red (determinations $12$ to $14$) we show the results of our collaboration: $12$ and $13$ are for the vector correlator, the former (dashed) corresponding to Ref.~\cite{Dehnadi:2011gc} without the trimming procedure, and the latter (solid) corresponding to this work, which includes the trimming procedure. Determinations $12$ to $14$ are the only analyses using uncorrelated scale variation. Determination $6$ (gray)~\cite{Hoang:2004xm} sets $\mu_m = \overline{m}_c(\overline{m}_c)$, and all the other analyses have set $\mu_m = \mu_\alpha$. Determinations in blue ($1$~\cite{Chakraborty:2014aca}, $2$~\cite{McNeile:2010ji} and $3$~\cite{Allison:2008xk}) were performed by the HPQCD collaboration, which employs method (b) for the pseudoscalar correlator moments used for the mass determination in their lattice analyses. Only $1$ to $3$ and $14$ use pseudoscalar moments, while all the other analyses use the vector correlator.
Among those $7$~\cite{Bodenstein:2011ma}, $8$~\cite{Chetyrkin:2009fv}, $9$~\cite{Kuhn:2007vp} and $10$~\cite{Boughezal:2006px} use data in the threshold region only up to $4.8$\,GeV; analysis $6$ uses two patches of data in the threshold region, one from threshold to $4.7$\,GeV, and another between $7.2$ and $11$\,GeV; analyses $12$ and $13$ use all available data (see Ref.~\cite{Dehnadi:2011gc} for the complete bibliographic information on charm data). The result in $6$ uses only $\mathcal{O}(\alpha_s^2)$ perturbative input [all the other analyses utilize $\mathcal{O}(\alpha_s^3)$ computations] and older information on the narrow resonances, from the PDG 2006. They also study the fixed-order expansion and two methods of contour improvement. In black ($8$ to $10$) we display results using fixed order analyses at ${\cal O}(\alpha_s^3)$ from the Karlsruhe ($8$ and $9$) and W\"urzburg ($10$) collaborations. Analysis $7$~(orange) corresponds to weighted finite-energy sum rules. They employ a kernel which enhances the sensitivity to the charm mass, and at the same time reduces the sensitivity to the continuum region. Green color analyses, collected in $4$~\cite{Narison:2010cg, Narison:2011xe} and $5$~\cite{Bodenstein:2010qx}, apply other kinds of sum rules. Analysis $5$ uses a finite energy sum rule similar to $7$, but the kernel makes the sensitivity to the charm mass quite small. On the other hand, the two determinations of $4$ use shifted moments, ratios of shifted moments, and exponential sum rules, and consider only the contributions from the first 6 vector resonances in the narrow width approximation, and the pQCD prediction for the continuum. The lower analysis of $4$~\cite{Narison:2010cg} includes contributions from condensates up to dimension 6; the higher~\cite{Narison:2011xe} includes condensates up to dimension 8. 
In purple ($11$~\cite{Signer:2008da}) we show the only analysis which uses large-$n$ moments for the charm mass fits, employing NRQCD methods to sum up large logs and threshold enhanced perturbative corrections, supplemented with fixed order predictions for the formally power suppressed terms. This analysis uses contributions from narrow resonances, plus a crude model for the threshold and continuum patches, for which a conservative uncertainty is assigned. We note, however, that this analysis might be questioned since perturbative NRQCD is in general not applicable for the charmonium states. Our new vector correlator result agrees well with the world average, having a similar uncertainty. Our result is fully compatible with the other determinations shown in Fig.~\ref{fig:charm-comparions}. As mentioned already before, we disagree with the small perturbative uncertainties related to the scale variations of the vector and/or pseudoscalar moments adopted in analyses $1$ to $3$ and $7$ to $10$. \subsection{Bottom Mass} Let us now turn our attention to the bottom mass results, see Fig.~\ref{fig:bottom-comparions}. The coloring and chronological conventions are analogous to Fig.~\ref{fig:charm-comparions}, and we try to keep a similar ordering. We show three nonrelativistic determinations ($11$~\cite{Hoang:2012us}, $12$~\cite{Beneke:2014pta} and $13$~\cite{Penin:2014zaa}) in purple; $\mathcal{O}(\alpha_s^2)$ fixed-order analyses are shown in gray ($5$~\cite{Corcella:2002uu}), black ($10$ \cite{Erler:2002bu}) and green ($4$~\cite{Bordes:2002ng}); finite energy sum rules also based on fixed-order appear in orange ($6$~\cite{Bodenstein:2012}) and green ($4$); there are two lattice analyses in blue, collected in $1$~\cite{Colquhoun:2014ica, McNeile:2010ji}. Analyses $3$, $6$, $7$ and $11$ to $13$ use the new BABAR data, whereas the others use the older CLEO and CUSB data. Analyses $4$ and $11$ include only the contributions of the first six vector resonances. 
Analyses $4$ and $5$ use older measurements of the electronic width for the narrow resonances. Analyses $3$, $4$, $6$ to $9$ use pQCD in the high-energy spectrum for the experimental moments. The theoretical treatment of the bottom mass analyses in red, gray, black, blue and green is in complete analogy to their charm mass analyses: $3$ and $4$ for bottom correspond to $4$ and $5$ for charm, respectively; $1$ for bottom corresponds to $1$ and $2$ for charm. The upper analysis of $1$~\cite{Colquhoun:2014ica} uses a nonrelativistic lattice action to compute ratios of large-$n$ moments, which are later compared to relativistic continuum perturbation theory. Because the continuum computations do not sum up Sommerfeld-enhanced terms, this procedure is questionable. Analysis $10$ uses a combination of $M_6^V$ with the infinite momentum transfer moment, both in fixed order, in order to constrain the continuum region. They only use experimental information on narrow resonances, and model the rest of the spectrum with theory predictions. Finally, they make the scale choice $\mu_\alpha = \mu_m = \overline{m}_b(\overline{m}_b)$ and estimate the truncation error from an ansatz for the $\mathcal{O}(\alpha_s^3)$ term.\footnote{Ref.~\cite{Erler:2002bu} also makes a determination of the charm mass. We exclude it from our comparison since it is not used in the PDG average.} Analyses $11$ to $13$ use large-$n$ moments and NRQCD methods for their theoretical moments. Analysis $12$ uses NRQCD fixed-order perturbation theory at N$^3$LO (which accounts for the summation of the Coulomb singularities) and $11$ uses renormalization group improved perturbation theory in the framework of vNRQCD at N$^2$LL order. Both analyses employ low-scale short-distance masses to avoid ambiguities related to the pole mass renormalon. Analysis $13$ also uses N$^3$LO NRQCD fixed-order input, but is incomplete concerning the contributions from the continuum region in the theoretical moments.
Moreover, they extract the pole mass which is then converted to the ${\overline{\rm MS}}$ scheme. Our result $14$ is in full agreement with the world average, having a slightly smaller uncertainty. It also agrees with the other analyses shown, with slightly smaller or comparable uncertainties. We disagree with the small perturbative uncertainties related to scale variations quoted in $6$ to $10$. \begin{figure*}[tbh!] \subfigure[] {\label{fig:charm-comparions} \includegraphics[width=0.497\textwidth]{figs/charm-comparison}}~~~ \subfigure[] {\label{fig:bottom-comparions} \includegraphics[width=0.481\textwidth]{figs/bottom-comparison} } \caption{Comparison of recent determinations of charm (a) and bottom (b) quark masses from sum rule analyses. Red results correspond to our determination. Black and gray correspond to $\mathcal{O}(\alpha_s^2)$ and $\mathcal{O}(\alpha_s^3)$ analyses, respectively. Purple results use nonrelativistic sum rules. Orange use weighted finite energy sum rules. Blue results are based on QCD sum rules using lattice simulation results as experimental data. Green labels other kinds of sum rule analyses (FESR, $Q^2$-dependent moments, ratios of moments).\label{fig:comparison}} \end{figure*} \section{Conclusions} \label{sec:conclusions} In this work we have determined the $\overline {\rm MS}$ charm and bottom quark masses from quarkonium sum rules in the framework of the OPE, using $\mathcal{O}(\alpha_s^3)$ perturbative computations, plus nonperturbative effects from the gluon condensate including its Wilson coefficient at $\mathcal{O}(\alpha_s)$. For the determination of the perturbative uncertainties we independently varied the renormalization scales of the strong coupling and the quark masses, in order to account for the variations due to different possible types of $\alpha_s$ expansions, as suggested earlier in Ref.~\cite{Dehnadi:2011gc}. 
In order to avoid a possible overestimate of the perturbative uncertainties, coming from the double scale variation in connection with a low scale of $\alpha_s$ and the resulting badly convergent series, we have re-examined the charm mass determination from charmonium sum rules (vector correlator), supplementing the analysis with a convergence test. The convergence test is based on Cauchy's radical test, adapted to the situation in which only a few terms of the series are known, and quantifies the convergence rate of each series by the parameter $V_c$. We find that the distribution of the convergence parameter $V_c$ coming from the complete set of series peaks around its mean value, and allows us to quantify the overall convergence rate of the set of series for each moment in a meaningful way. This justifies discarding (or ``trimming'') series with values of $V_c$ much larger than the average. For our analysis we discard the $3\%$ of the series having the highest $V_c$ values, which results in a reduction of the perturbative uncertainty for the $\overline{\rm MS}$ charm mass $\overline m_c(\overline m_c)$ from $19$\,MeV in~\cite{Dehnadi:2011gc} to $14$\,MeV (a $26\%$ reduction), and a small shift of $+\,5$\,MeV in the central value. Our new determination of the charm mass from the first moment (which is theoretically the cleanest) of the vector correlator reads: \begin{equation}\label{eq:vector-quadrature} \overline m_c(\overline m_c) = \,1.288 \, \pm 0.020\,\rm GeV\,, \qquad{\rm [Vector~Correlator]} \end{equation} where all sources of uncertainty have been added in quadrature. This result supersedes our corresponding earlier result from Ref.~\cite{Dehnadi:2011gc}, which was $1.282\,\pm\, 0.024$\,GeV. The upward shift makes clear that the trimming procedure mostly discards series which produce small values of the charm mass. We have applied the same method of theory uncertainty estimation to analyze the HPQCD lattice simulation results for the pseudoscalar correlator.
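The trimming step described above can be sketched in a few lines. The proxy used here for $V_c$ (the largest $n$-th root of the known series coefficients, in the spirit of Cauchy's radical test) is illustrative and not necessarily the exact definition employed in our analysis:

```python
# Sketch of the trimming procedure: rank each alpha_s expansion by a
# root-test convergence estimate and discard the worst few percent.
# The exact V_c definition of the analysis is not reproduced here; as a
# plausible proxy we take the largest n-th root of the known terms.

def v_c(series_terms):
    """Root-test convergence proxy for a truncated series a_1, a_2, ..."""
    return max(abs(a) ** (1.0 / n) for n, a in enumerate(series_terms, start=1))

def trim(all_series, fraction=0.03):
    """Keep the (1 - fraction) of series with the smallest V_c."""
    ranked = sorted(all_series, key=v_c)
    keep = len(ranked) - int(round(fraction * len(ranked)))
    return ranked[:keep]

# Toy example: the badly convergent series (growing terms) is discarded first.
series = [[0.5, 0.2, 0.1], [0.5, 0.3, 0.2], [0.5, 1.5, 4.0]]
kept = trim(series, fraction=1.0 / 3.0)
```

In the toy example the series with growing coefficients has by far the largest $V_c$ and is removed first, mirroring how the tail of the $V_c$ distribution is cut away in the actual analysis.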
Our convergence test signals that the pseudoscalar moments have far worse convergence than the corresponding vector ones. This translates into uncertainties of $35$\,MeV due to the truncation of the perturbative series and the error in $\alpha_s$ (roughly twice as big as for the vector determination). In contrast, using correlated scale variation (e.g.\ setting the scales in the mass and the strong coupling equal), the scale variation can be smaller by a factor of $8$. Our new determination from the first moment (again being the most reliable theoretical prediction) of the pseudoscalar correlator reads $\overline m_c(\overline m_c) = 1.267 \pm 0.041$\,GeV, where again all individual errors have been added in quadrature. The combined total error is twice as big as for the vector correlator, and therefore we consider it a validation of Eq.~(\ref{eq:vector-quadrature}) in connection with the convergence test. The result is in sharp contrast with the analyses carried out by the HPQCD collaboration~\cite{Allison:2008xk, McNeile:2010ji, Chakraborty:2014aca}, where perturbative uncertainties of $4$\,MeV are claimed. We have checked that, as for the vector correlator, for the different possible types of \mbox{$\alpha_s$-expansion} the correlated variation in general leads to a bad order-by-order convergence of the charm mass determination. The second important result of this work is the determination of the bottom quark mass from the vector correlator. We have reanalyzed the experimental moments by combining experimental measurements of the first four narrow resonances, the threshold region covered by BABAR, and a theoretical model for the continuum. This theoretical model is an interpolation between a linear fit to the BaBar data points with the highest energies and pQCD, to which we assign a $4\%$ systematic uncertainty that decreases linearly to reach $0.3\%$ at the $Z$-pole, and stays constant at $0.3\%$ for higher energies.
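The energy dependence of this continuum systematic can be sketched as a simple piecewise-linear function. The matching points ($11.2$\,GeV for the end of the BaBar range and $m_Z = 91.19$\,GeV) are our reading of the text and should be understood as illustrative:

```python
# Sketch of the relative systematic uncertainty assigned to the continuum
# model: 4% where the BaBar data end, falling linearly to 0.3% at the
# Z-pole and constant above it.  The matching energies are assumptions
# based on the energy ranges quoted in the text.

E_LOW, E_Z = 11.2, 91.19          # GeV, assumed endpoints
ERR_LOW, ERR_HIGH = 0.04, 0.003   # 4% down to 0.3%

def continuum_syst(energy):
    """Relative systematic uncertainty of the modeled spectrum at 'energy' (GeV)."""
    if energy >= E_Z:
        return ERR_HIGH
    frac = (energy - E_LOW) / (E_Z - E_LOW)
    return ERR_LOW + frac * (ERR_HIGH - ERR_LOW)

print(continuum_syst(11.2))   # 0.04 at the start of the modeled region
print(continuum_syst(200.0))  # 0.003 above the Z-pole
```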
Our treatment is motivated by the error function yielded by the fit to BaBar data in the energy range between $11.0$ and $11.2$\,GeV and the discrepancy between pQCD and experimental measurements at the \mbox{$Z$-pole}. This results in a large error for the first moment, and therefore we choose the second moment (which is theoretically as clean as the first one for the case of the bottom quark) for our final analysis, giving a total experimental uncertainty of $18$\,MeV. Our treatment of the experimental continuum uncertainty is in contrast to Ref.~\cite{Chetyrkin:2009fv}, where instead the very small perturbative QCD uncertainties (less than $1$\%) are used, claiming an experimental uncertainty of $6$\,MeV. In the light of the analysis carried out here, supported by the observations made in Ref.~\cite{Chetyrkin:2010ic}, we believe this is not justified. Our convergence test reveals that, as expected for the heavier bottom quark, the perturbative series converge faster than for the charm quark. Correspondingly, the perturbative and $\alpha_s$ uncertainties are $\sim 30\%$ smaller than those for charm. Taking correlated scale variation as used in Refs.~\cite{Chetyrkin:2009fv,Bodenstein:2012}, the perturbative error estimate can shrink by up to a factor of $20$. We also find that correlated variation leads to incompatible results for the different types of $\alpha_s$-expansions. Our final result for the bottom mass from the second moment, with all errors added in quadrature, reads: \begin{equation}\label{eq:bottom-quadrature} \overline m_b(\overline m_b) = \,4.176 \, \pm 0.023\,\rm GeV\,, \qquad{\rm [Vector~Correlator]} \end{equation} where the total error is largely dominated by the systematic error, which comes from the continuum region of the spectrum. Our uncertainty is very similar to the one obtained by the HPQCD analysis, but $30\%$ larger than the $16$\,MeV claimed by Ref.~\cite{Chetyrkin:2009fv}. Our central value is $13$\,MeV larger than the latter.
This good agreement is a result of two effects that push in opposite directions: a smaller value of the second experimental moment, and a different perturbative analysis. Curiously enough, a similar accidental cancellation was observed for the charm mass in Ref.~\cite{Dehnadi:2011gc}. In order to further validate the results discussed above, we have also analyzed the ratios of consecutive moments of each of the three correlators as alternative observables. In all cases the results from the moment ratios agree very well with the regular moment analyses. \bigskip \acknowledgments This work was supported in part by the European Community's Marie-Curie Research Network under contract PITN-GA-2010-264564 (LHCphenOnet). VM has been partially supported by a Marie Curie Fellowship under contract PIOF-GA-2009-251174. BD thanks the FWF Doktoratskollegs ``Particles and Interactions'' (DK W 1252-N27) for partial support. We thank the {\it Erwin Schr\"odinger International Institute for Mathematical Physics} (ESI Vienna), where a part of this work has been accomplished, for partial support. We thank Riccardo Faccini for information on the treatment of vacuum polarization effects by the BaBar collaboration, and Matthias Steinhauser and Christian Sturm for pointing out that they were missing in a previous version of the manuscript.
\section{Introduction} Although Bell's inequalities \cite{BE64,BE66} are usually discussed in the context of quantum Bell experiments with spins and observers, they can be established in a far wider variety of settings. Here we present one such example in a rather unexpected field: the theory of games of incomplete information \cite{HA67}. Game theory now occupies a central place in areas of applied mathematics, economics, sociology, and mathematical biology. It is well known that the Bell inequality can be broken only when the assumption of local realism is abandoned. This result, when considered in the context of the game theory of incomplete information, links together the breaking of the Bell inequality and the existence of nonlocal correlations between players. To explore this further, a consideration of quantum strategies \cite{ME99,EW99,OS04,CT06,FH07} in games of incomplete information becomes both relevant and interesting. With the rapid advancement of quantum information technologies, playing games with quantum resources is within the technical reach of advanced laboratories \cite{DLX02,PSWZ07}. It is quite conceivable that playing games with quantum strategies, using properly coordinated quantum devices, will become commonplace in the near future. It is therefore timely to analyze the physical content of quantum strategies, and to examine the relevance of Bell-inequality breaking. It is now generally agreed that quantum strategies can shift the classical outcome of the game in favor of all players, but how much of this is due to a truly quantum effect, never achievable classically, is still under debate \cite{IQ04}. Games with incomplete information, synergetic to the Bell experiment setup, appear to be a good candidate for settling this issue, which is one of the basic unanswered questions of quantum game theory. To study quantum strategies in games of incomplete information, we develop a formalism of game theory based on a multi-sector probability matrix.
We then analyze a game of incomplete information which is an extension of the well-known game of the Battle of the Sexes, and find the classical and the quantum Bayesian Nash equilibria. We find two distinct effects of quantum entanglement in games of incomplete information: \textit{pseudo-classical distortion} and \textit{quantum nonlocality}. These two effects have, in fact, already been identified as two separate correction terms in the payoff functions in a previous study of games with complete information \cite{CT06}. It was found there, however, that the pseudo-classical term, which can be simulated classically, tends to overshadow the subtle effect of the quantum nonlocality term. It is shown, in this work, that the purely quantum element of quantum game strategy can be unambiguously separated in a proper setup utilizing the Bell inequality, and that such a setup is found precisely in Harsanyi's theory of games with incomplete information. \\ \section{Joint Probability Formalism of Incomplete Information Game} We start by formulating game strategies in terms of joint probabilities which do not, in general, factorize into individual player strategies \cite{IC07}. Consider a system consisting of two players, Alice and Bob, who are to play two-strategy games, that is, to make selections from respective dichotomic choices, which we label as $A = 0$ or $1$ for Alice, and $B = 0$ or $1$ for Bob. The players are assumed to be autonomous decision makers interested in increasing their respective utility functions, or {\it payoffs}, $\Pi_{Alice}$ and $\Pi_{Bob}$. Game theory tries to answer the question of what the stable patterns of selection are after sufficiently many repetitions of the game. In game theory, both payoffs $\Pi_{Alice}$ and $\Pi_{Bob}$ are functions of $A$ and $B$ at the same time. In general, there is no unilaterally optimal choice for either player.
In determining the form of the payoffs, we assume that not all the information necessary to specify the payoff functions is known to the players. Following Harsanyi \cite{HA67}, we represent these unknown elements of the game by the concept of {\it player type}; both players come into the play in one of two types, denoted by $a = 0$, $1$ for Alice, and $b = 0$, $1$ for Bob, and payoffs are uniquely determined only after the determination of types. Specifically, when Alice of type $a$ makes her move $A$ and Bob of type $b$ makes his move $B$, we assign real numbers $M^{[ab]}_{AB}$ for Alice's payoff, and $L^{[ab]}_{AB}$ for Bob's. With varying indices $a$, $b$, $A$ and $B$, both $M^{[ab]}_{AB}$ and $L^{[ab]}_{AB}$ form payoff matrices. After a sufficient run of repeated game play, the pattern of the play is specified by the joint probability $P^{[ab]}_{AB}$, which represents the fraction of plays in which the move of Alice of type $a$ is $A$, and that of Bob of type $b$ is $B$. The average payoffs for Alice and Bob of respective types $a$ and $b$ are given by \begin{eqnarray} \label{ee01} \Pi^{[ab]}_{Alice} = \sum_{A,B} M^{[ab]}_{AB} P^{[ab]}_{AB} , \quad \Pi^{[ab]}_{Bob} = \sum_{A,B} L^{[ab]}_{AB} P^{[ab]}_{AB} . \end{eqnarray} As probabilities, $P^{[ab]}_{AB}$ satisfy the relations $\sum_{A,B} P^{[ab]}_{AB} = 1$ for any given types of players $a$ and $b$. If we further assume that the types of the players at each turn of play are determined randomly (by Nature's move) with probabilities $S^{[a]}$ and $T^{[b]}$, we obtain the total average payoffs in the forms \begin{eqnarray} \label{ee02} \Pi_{Alice} \!\!=\! \sum_{a,b} S^{[a]} T^{[b]} \Pi^{[ab]}_{Alice} , \ \Pi_{Bob} \!=\! \sum_{a,b} S^{[a]} T^{[b]} \Pi^{[ab]}_{Bob} . \end{eqnarray} Central to the theory of games with incomplete information is the assumption of {\it local knowledge of player types}, which postulates that the type of a player at each turn of the play is known only to herself (himself) and not to the other player.
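Equations (\ref{ee01}) and (\ref{ee02}) amount to a double contraction of the payoff matrices with the joint probabilities and the type distributions. A minimal sketch (the nested-list layout of the arrays is our own choice, purely for illustration):

```python
# Sketch of Eqs. (ee01)-(ee02): type-conditional and total average payoffs
# from a payoff matrix M[a][b][A][B], joint probabilities P[a][b][A][B],
# and type distributions S[a], T[b].  All indices run over {0, 1}.

def type_payoff(M, P, a, b):
    """Pi^{[ab]} = sum_{A,B} M^{[ab]}_{AB} P^{[ab]}_{AB}  (Eq. ee01)."""
    return sum(M[a][b][A][B] * P[a][b][A][B] for A in (0, 1) for B in (0, 1))

def total_payoff(M, P, S, T):
    """Pi = sum_{a,b} S^{[a]} T^{[b]} Pi^{[ab]}  (Eq. ee02)."""
    return sum(S[a] * T[b] * type_payoff(M, P, a, b)
               for a in (0, 1) for b in (0, 1))
```

With uniform joint probabilities $P^{[ab]}_{AB} = 1/4$, for instance, the total payoff reduces to the plain average of the payoff entries weighted by the type distributions.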
The statistical distributions of types $S^{[a]}$, $T^{[b]}$ are treated as common knowledge. It then follows that the pattern of play, or the {\it strategy}, of Alice, to which we assign the symbol $\alpha$, has to be determined only by the knowledge of $a$, but not of $b$. Likewise, the strategy $\beta$ of Bob can depend on his type $b$ but not on Alice's $a$. Since the strategies of Alice and Bob jointly determine the joint probability of play, we can express the assumption of locality of player type as \begin{eqnarray} \label{ee03} P^{[ab]}_{AB} = P^{[ab]}_{AB} (\alpha^{[a]}, \beta^{[b]}) . \end{eqnarray} In traditional game theory, which has exclusively considered strategies based on classical resources, the joint probability is given by the product of individual probabilities, \begin{eqnarray} \label{ee04} P^{[ab]}_{AB} = P^{[a]}_{A} Q^{[b]}_{B} , \end{eqnarray} where $P^{[a]}_{A}$ represents the probability of Alice of type $a$ selecting the move $A$, and $Q^{[b]}_{B}$ the probability of Bob of type $b$ selecting the move $B$. In this case, we can identify $P^{[a]}_{A}$ itself as Alice's strategy $\alpha^{[a]}$ and $Q^{[b]}_{B}$ itself as Bob's strategy $\beta^{[b]}$, and no distinction between strategy and individual probability is necessary. To generate desired strategies, players need access to devices that can generate probability distributions, such as dice. If, on the other hand, we are to consider strategies based on quantum resources, we can construct the joint probability out of individual strategies that correspond to individual actions on a Hilbert space vector.
For definiteness, we adopt the quantum strategy based on the Schmidt decomposition \cite{IT07}, which is known to cover the \textit{entire $2\times 2$ dimensional Hilbert space}, given by \begin{eqnarray} \label{ee05} P^{[ab]}_{AB}(\alpha^{[a]}, \beta^{[b]}) = \left| {\left<AB \right| U_{\alpha^{[a]}} V_{\beta^{[b]}} \left| \Phi_{\gamma \phi} \right>} \right| ^2, \end{eqnarray} where the ``initial'' state $ \left| \Phi_{\gamma \phi} \right>$ residing on the $2 \times 2$ dimensional Hilbert space is given by \begin{eqnarray} \label{ee06} \left| \Phi_{\gamma \phi} \right> = \cos\frac{\gamma}{2} \left| 00 \right> + e^{i\phi} \sin\frac{\gamma}{2} \left| 11 \right> , \end{eqnarray} and the individual rotations $U_\alpha$ and $V_\beta$ in the $2$ dimensional subspaces, which are now identified as the individual strategies, are given by \begin{eqnarray} \label{ee07} && U_\alpha \left| 0 \right> = \cos\frac{\alpha}{2} \left| 0 \right> + \sin\frac{\alpha}{2} \left| 1 \right>, \nonumber \\ && U_\alpha \left| 1 \right> = -\sin\frac{\alpha}{2} \left| 0 \right> + \cos\frac{\alpha}{2} \left| 1 \right>, \nonumber \\ && V_\beta \left| 0 \right> = \cos\frac{\beta}{2} \left| 0 \right> + \sin\frac{\beta}{2} \left| 1 \right>, \\ \nonumber && V_\beta \left| 1 \right> = -\sin\frac{\beta}{2} \left| 0 \right> + \cos\frac{\beta}{2} \left| 1 \right>. \end{eqnarray} Note that, in addition to the individual strategy variables $\alpha$ and $\beta$, defined within the range $[0,\pi]$ and respectively controlled by Alice and Bob, there appear two more variables, $\gamma \in [0,\pi]$ and $\phi \in [0,\pi]$, ``from nowhere'', as a result of the requirement that strategies be described by Hilbert space vectors. A natural interpretation of these new variables is that they belong to a third person, {\it the coordinator} of the game \cite{CT06}.
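Eqs.~(\ref{ee05})--(\ref{ee07}) can be checked with a few lines of code. The sketch below builds the amplitude of $\left|AB\right>$ directly from the two nonzero Schmidt components of $\left| \Phi_{\gamma \phi} \right>$; the implementation is ours, purely for illustration:

```python
# Sketch of Eq. (ee05): P^{[ab]}_{AB} = |<AB| U_alpha V_beta |Phi_{gamma,phi}>|^2
# for the Schmidt-form state (ee06) and the local rotations (ee07).
import cmath
import math

def joint_prob(alpha, beta, gamma, phi):
    """Return the 2x2 table P[A][B] of joint move probabilities."""
    c00 = math.cos(gamma / 2)                        # amplitude of |00>
    c11 = cmath.exp(1j * phi) * math.sin(gamma / 2)  # amplitude of |11>
    ca, sa = math.cos(alpha / 2), math.sin(alpha / 2)
    cb, sb = math.cos(beta / 2), math.sin(beta / 2)
    u = [[ca, -sa], [sa, ca]]  # u[A][s] = <A| U_alpha |s>
    v = [[cb, -sb], [sb, cb]]  # v[B][s] = <B| V_beta  |s>
    return [[abs(u[A][0] * v[B][0] * c00 + u[A][1] * v[B][1] * c11) ** 2
             for B in (0, 1)] for A in (0, 1)]
```

For $\gamma = 0$ the state is disentangled and the output factorizes into the product form (\ref{ee04}); for $\gamma = \pi/2$, $\phi = 0$ and $\alpha = \beta = 0$ one recovers the perfectly correlated distribution $P_{00} = P_{11} = 1/2$.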
There are alternative choices of quantum strategies \cite{CT06,NT04} other than the one given by (\ref{ee05})-(\ref{ee07}), but they do not change our main conclusion, as long as the entire Hilbert space is exhausted and thus all possible quantum joint probabilities are included. The quantum joint probability (\ref{ee05}) can be realized, for example, by the coordinator first generating two $z$-axis-polarized spins in the entangled state (\ref{ee06}), with Alice and Bob then obtaining one spin each and performing spin rotations and their subsequent measurement along the $z$-axis, or equivalently, just measuring the spins along properly rotated axes. If the initial state is prepared disentangled, for example in the $\gamma=0$ state, the quantum strategy (\ref{ee05}) simply reduces to the classical strategy (\ref{ee04}) with the identification \begin{eqnarray} \label{ee08} P^{[a]}_A= |\left< A\right| U_{\alpha^{[a]}} \left| 0 \right>|^2 , \quad Q^{[b]}_B= |\left< B\right| V_{\beta^{[b]}} \left| 0 \right>|^2 , \end{eqnarray} which means that we have replaced the usual dice by quantum spin systems that act exactly as classical dice, albeit at far greater cost. The payoffs are now functions of the strategy variables $\alpha$ and $\beta$, and also of the coordinator variables $\gamma$ and $\phi$; \begin{eqnarray} \label{ee09} &&\Pi_{Alice}= \Pi_{Alice}(\alpha,\beta; \gamma,\phi), \nonumber \\ &&\Pi_{Bob}= \Pi_{Bob}(\alpha,\beta; \gamma,\phi). \end{eqnarray} Here, we have adopted the obvious shorthand notations $\alpha = (\alpha^{[0]},\alpha^{[1]})$ and $\beta = (\beta^{[0]},\beta^{[1]})$. Once the payoff functions are calculated as functions of the strategies, the solution of the game is given by constructing {\it Bayesian Nash equilibria} $(\alpha^{\star},\beta^{\star})$, which are obtained from the local maxima specified by \begin{eqnarray} \label{ee10a} && \left. 
\frac{\partial}{\partial \alpha^{[a]}} \Pi_{Alice}(\alpha,\beta; \gamma,\phi) \right|_{(\alpha^\star,\beta^\star)}= 0, \nonumber \\ && \left.\frac{\partial}{\partial {\beta^{[b]}}}\Pi_{Bob}(\alpha,\beta; \gamma,\phi) \right|_{(\alpha^\star,\beta^\star)}= 0. \end{eqnarray} If the payoffs do not have maxima as functions of $\alpha$ and $\beta$, {\it classical pure Nash equilibria} emerge as the ``edge'' solutions; \begin{eqnarray} \label{ee11a} \alpha^{[a]\star}=0,\ \beta^{[b]\star}=0 \ {\rm if} \!\!\!\!\!\!\!\!\!\!\!\!&& \frac{\partial}{\partial \alpha^{[a]}} \Pi_{Alice}(\alpha,\beta; \gamma,\phi) \!< 0, \nonumber \\ && \frac{\partial}{\partial {\beta^{[b]}}}\Pi_{Bob}(\alpha,\beta; \gamma,\phi) \!< 0, \end{eqnarray} \begin{eqnarray} \label{ee12a} \alpha^{[a]\star}=\pi,\ \beta^{[b]\star}=0 \ {\rm if} \!\!\!\!\!\!\!\!\!\!\!\!&& \frac{\partial}{\partial \alpha^{[a]}} \Pi_{Alice}(\alpha,\beta; \gamma,\phi) \! > 0, \nonumber \\ && \frac{\partial}{\partial {\beta^{[b]}}}\Pi_{Bob}(\alpha,\beta; \gamma,\phi) \! < 0, \end{eqnarray} \begin{eqnarray} \label{ee13a} \alpha^{[a]\star}=0,\ \beta^{[b]\star}=\pi \ {\rm if} \!\!\!\!\!\!\!\!\!\!\!\!&& \frac{\partial}{\partial \alpha^{[a]}} \Pi_{Alice}(\alpha,\beta; \gamma,\phi) \! < 0, \nonumber \\ && \frac{\partial}{\partial {\beta^{[b]}}}\Pi_{Bob}(\alpha,\beta; \gamma,\phi) \! > 0, \end{eqnarray} \begin{eqnarray} \label{ee14a} \alpha^{[a]\star}=\pi,\ \beta^{[b]\star}=\pi \ {\rm if} \!\!\!\!\!\!\!\!\!\!\!\!&& \frac{\partial}{\partial \alpha^{[a]}} \Pi_{Alice}(\alpha,\beta; \gamma,\phi) \! > 0, \nonumber \\ && \frac{\partial}{\partial {\beta^{[b]}}}\Pi_{Bob}(\alpha,\beta; \gamma,\phi) \! > 0. 
\end{eqnarray} In all cases, the Bayesian Nash payoffs are obtained as \begin{eqnarray} \label{ee15a} && \Pi_{Alice}^\star(\gamma,\phi)= \Pi_{Alice}(\alpha^\star,\beta^\star; \gamma,\phi), \nonumber \\ && \Pi_{Bob}^\star(\gamma,\phi)= \Pi_{Bob}(\alpha^\star,\beta^\star; \gamma,\phi) , \end{eqnarray} for all combinations of ${[ab]} = {[00]}$, $[10]$, $[01]$ and $[11]$. Note that Bayesian Nash equilibria are defined for each fixed value of the coordinator variables. \\ \section{Extended Battle of Sexes Game} In this section, we analyze a particular example of a game with incomplete information that shows the power of quantum strategies in dramatic fashion. We now consider the following payoff matrices \begin{eqnarray} \label{ee16b} M = \left( \begin{array} {cc} \begin{array}{cc} +3 & 0 \cr 0 & +1 \end{array} &\begin{array}{cc} -3 & 0 \cr 0 & -1 \end{array} \cr \begin{array}{cc} -3 & 0 \cr 0 & -1 \end{array} &\begin{array}{cc} -1 & 0 \cr 0 & -3 \end{array} \end{array} \right) , \nonumber \\ L = \left( \begin{array} {cc} \begin{array}{cc} +1 & 0 \cr 0 & +3 \end{array} &\begin{array}{cc}-1 & 0 \cr 0 & -3 \end{array} \cr \begin{array}{cc} -1 & 0 \cr 0 & -3 \end{array} &\begin{array}{cc} -3 & 0 \cr 0 & -1 \end{array} \end{array} \right) . \end{eqnarray} Here, the $2 \times 2$ blocks represent the payoff matrices for fixed player types, $[ab] = [00]$, $[01]$, $[10]$ and $[11]$ from top-left to bottom-right, namely \begin{eqnarray} \label{ee17b} M = \left(\!\! \begin{array}{cc} M^{[00]} & M^{[01]} \cr M^{[10]} & M^{[11]} \end{array} \!\!\right) , \ L = \left(\!\! \begin{array}{cc} L^{[00]} & L^{[01]} \cr L^{[10]} & L^{[11]} \end{array} \!\!\right) . \end{eqnarray} We assume the ``democratic'' mixture of the two types, $S^{[0]}=S^{[1]}=1/2$ and $T^{[0]}=T^{[1]}=1/2$. The payoffs are given, in terms of the joint probabilities $P^{[ab]}_{AB}$, as \begin{eqnarray} \label{ee18b} &&\!\!\!\! 
\Pi_{Alice} = \frac{3}{4} ( { P^{[00]}_{00} - P^{[10]}_{00} - P^{[01]}_{00} - P^{[11]}_{11} } ) \nonumber \\ &&\qquad\!\! +\frac{1}{4} ( { P^{[00]}_{11} - P^{[10]}_{11} - P^{[01]}_{11} - P^{[11]}_{00} } ), \nonumber \\ &&\!\!\!\! \Pi_{Bob} = \frac{1}{4} ( { P^{[00]}_{00} - P^{[10]}_{00} - P^{[01]}_{00} - P^{[11]}_{11} } ) \\ \nonumber &&\qquad\!\! +\frac{3}{4} ( { P^{[00]}_{11} - P^{[10]}_{11} - P^{[01]}_{11} - P^{[11]}_{00} } ). \end{eqnarray} It is easily seen that the two terms of $\Pi_{Alice}$ and those of $\Pi_{Bob}$ are identical apart from the different weights. They are both made up of $P^{[ab]}_{AB}$ of four different type combinations $[ab]$. This is so by design, as will soon become evident in what follows. The factors $3/4$ and $1/4$ are, of course, the result of our specific choice of the numbers $\pm 3$ and $\pm 1$ in the entries of the payoff matrices $M$ and $L$, and any other positive numbers would leave our analysis essentially unchanged. The game in the ``main sector'', played by type $a=0$ Alice and type $b=0$ Bob, is nothing but the usual Battle of Sexes game. If both players are limited to this sector, there are two obvious pure Nash equilibria, $(A^\star,B^\star) = (0,0)$ and $(A^\star,B^\star) = (1,1)$, or equivalently, $(\alpha^{[0]^\star},\beta^{[0]^\star})=(0,0)$ and $(\alpha^{[0]^\star},\beta^{[0]^\star})=(\pi,\pi)$. The former solution is advantageous to Alice and the latter to Bob, as is evident from the payoffs $(\Pi^{[00]\star}_{Alice}, \Pi^{[00]\star}_{Bob}) = (3,1)$ for the former and $(\Pi^{[00]\star}_{Alice}, \Pi^{[00]\star}_{Bob}) = (1,3)$ for the latter. In the ``shadow sectors'' $[ab] = [10]$, $[01]$ and $[11]$, the game table is that of the Chicken Game. If both players are limited to each of these sectors, Nash equilibria are achieved by $(A^\star,B^\star) = (1,0)$ and $(A^\star,B^\star) = (0,1)$, both of which result in the zero payoffs $(\Pi^{[ab]\star}_{Alice}, \Pi^{[ab]\star}_{Bob}) = (0,0)$. 
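The pure Nash structure of the main-sector block quoted above is easy to verify by exhaustive enumeration. A minimal Python sketch (the variable names are ours), with the $[00]$ payoff entries transcribed from (\ref{ee16b}):

```python
import itertools

# Main-sector (a = b = 0) payoff blocks from Eq. (ee16b): Battle of Sexes
M00 = [[3, 0], [0, 1]]   # Alice's payoffs, indexed [A][B]
L00 = [[1, 0], [0, 3]]   # Bob's payoffs, indexed [A][B]

def is_pure_nash(A, B):
    """Check (A, B) against all unilateral pure deviations in the [00] block."""
    alice_ok = all(M00[A][B] >= M00[Ap][B] for Ap in (0, 1))
    bob_ok = all(L00[A][B] >= L00[A][Bp] for Bp in (0, 1))
    return alice_ok and bob_ok

nash = [(A, B) for A, B in itertools.product((0, 1), repeat=2)
        if is_pure_nash(A, B)]
```

Enumeration recovers exactly the two equilibria $(0,0)$ and $(1,1)$, with the payoffs $(3,1)$ and $(1,3)$ quoted in the text.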
For the full game with incomplete information, the lack of knowledge leaves the players guessing the type of the other party, and they have to be content with smaller payoffs on average, in comparison with the full-information case above. For the calculation of the full game, we define \begin{eqnarray} \label{ee19b} &&\!\!\!\! \Delta_{00} \equiv P^{[00]}_{00} - P^{[10]}_{00} - P^{[01]}_{00} - P^{[11]}_{11} , \nonumber \\ &&\!\!\!\! \Delta_{11} \equiv P^{[00]}_{11} - P^{[10]}_{11} - P^{[01]}_{11} - P^{[11]}_{00} . \end{eqnarray} Obviously, we have Bayesian Nash equilibria when $\Delta_{00}$ and $\Delta_{11}$ have simultaneous maxima as functions of $\alpha$ and $\beta$. The explicit form of $\Delta_{00}$ is \begin{eqnarray} \label{ee20} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \Delta_{00} = \cos^2\frac{\gamma}{2} \left( \cos^2\frac{\alpha^{[0]}}{2} \cos^2\frac{\beta^{[0]}}{2} -\cos^2\frac{\alpha^{[1]}}{2} \cos^2\frac{\beta^{[0]}}{2} \right. \nonumber \\&&\quad\!\! \left. -\cos^2\frac{\alpha^{[0]}}{2} \cos^2\frac{\beta^{[1]}}{2} -\sin^2\frac{\alpha^{[1]}}{2} \sin^2\frac{\beta^{[1]}}{2} \right) \nonumber \\ &&\!\!\!\!\!\! +\sin^2\frac{\gamma}{2} \left( \sin^2\frac{\alpha^{[0]}}{2} \sin^2\frac{\beta^{[0]}}{2} -\sin^2\frac{\alpha^{[1]}}{2} \sin^2\frac{\beta^{[0]}}{2} \right. \nonumber \\&&\quad\!\! \left. -\sin^2\frac{\alpha^{[0]}}{2} \sin^2\frac{\beta^{[1]}}{2} -\cos^2\frac{\alpha^{[1]}}{2} \cos^2\frac{\beta^{[1]}}{2} \right) \nonumber \\ &&\!\!\!\!\!\! +\frac{1}{4}\cos\phi\sin\gamma \left( \sin\alpha^{[0]} \sin\beta^{[0]} -\sin\alpha^{[1]} \sin\beta^{[0]} \right. \\ \nonumber &&\qquad\qquad \left. -\sin\alpha^{[0]} \sin\beta^{[1]} -\sin\alpha^{[1]} \sin\beta^{[1]} \right) . \end{eqnarray} The other quantity $\Delta_{11}$ is obtained by the simultaneous replacements $\alpha \to \pi-\alpha$ and $\beta \to \pi-\beta$, or equivalently, by the replacement $\gamma \to \pi-\gamma$. 
From these, we obtain the condition for Bayesian Nash equilibrium as \begin{eqnarray} \label{ee21} &&\!\!\!\!\!\! \sin\alpha^{[0]}(\cos\beta^{[0]}-\cos\beta^{[1]}) \nonumber \\ && -\cos\alpha^{[0]}(\sin\beta^{[0]}-\sin\beta^{[1]})\cos\phi\sin\gamma = 0 , \nonumber \\ &&\!\!\!\!\!\! \sin\alpha^{[1]}(\cos\beta^{[0]}+\cos\beta^{[1]}) \nonumber \\ && -\cos\alpha^{[1]}(\sin\beta^{[0]}+\sin\beta^{[1]})\cos\phi\sin\gamma = 0 , \nonumber \\ &&\!\!\!\!\!\! \sin\beta^{[0]}(\cos\alpha^{[0]}-\cos\alpha^{[1]}) \nonumber \\ && -\cos\beta^{[0]}(\sin\alpha^{[0]}-\sin\alpha^{[1]})\cos\phi\sin\gamma = 0 , \\ \nonumber &&\!\!\!\!\!\! \sin\beta^{[1]}(\cos\alpha^{[0]}+\cos\alpha^{[1]}) \\ \nonumber&& -\cos\beta^{[1]}(\sin\alpha^{[0]}+\sin\alpha^{[1]})\cos\phi\sin\gamma = 0 . \end{eqnarray} The classical game is obtained as the limit of no entanglement, $\gamma=0$, for which we recover the separability of probabilities, $P^{[ab]}_{AB}=P^{[a]}_{A} Q^{[b]}_{B}$. There are eight sets of Bayesian Nash equilibria found in this game, all supporting the {\it break-even payoffs} \begin{eqnarray} \label{ee22} \Pi^\star_{Alice}=0,\quad \Pi^\star_{Bob} = 0 . \end{eqnarray} They are \begin{eqnarray} \label{ee23} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 1: \alpha^{[0]}=0, \ \alpha^{[1]}=0, \ \beta^{[0]}={\rm arbitrary}, \ \beta^{[1]}=\pi , \nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 2: \alpha^{[0]}=\pi, \ \alpha^{[1]}=\pi, \ \beta^{[0]}={\rm arbitrary}, \ \beta^{[1]}=0 , \nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 3: \alpha^{[0]}=0, \ \alpha^{[1]}=\pi, \ \beta^{[0]}=0, \ \beta^{[1]}={\rm arbitrary} , \nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 4: \alpha^{[0]}=\pi, \ \alpha^{[1]}=0, \ \beta^{[0]}=\pi, \ \beta^{[1]}={\rm arbitrary}, \\ \nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 5: \alpha^{[0]}={\rm arbitrary}, \ \alpha^{[1]}=\pi, \ \beta^{[0]}=0, \ \beta^{[1]}=0 , \\ \nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
6: \alpha^{[0]}={\rm arbitrary}, \ \alpha^{[1]}=0, \ \beta^{[0]}=\pi, \ \beta^{[1]}=\pi , \\ \nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 7: \alpha^{[0]}=0, \ \alpha^{[1]}={\rm arbitrary}, \ \beta^{[0]}=0, \ \beta^{[1]}=\pi , \\ \nonumber &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 8: \alpha^{[0]}=\pi, \ \alpha^{[1]}={\rm arbitrary}, \ \beta^{[0]}=\pi, \ \beta^{[1]}=0 . \end{eqnarray} There is a deeper reason for the fact that the Bayesian Nash equilibria of this game give only zero payoffs and nothing more: that reason is precisely the Bell inequalities. In our setting of a $2 \times 2$ game, Bell inequalities are relations among the joint probabilities for Alice and Bob, each capable of turning up in the two types $a=0$, $1$ and $b=0$, $1$. If each player chooses her/his probability only with the knowledge of her/his own type, but not of the other player's, a set of inequalities can be proven. Since this condition is exactly the type-locality assumption we have postulated in the game of incomplete information, it is reasonable to expect Bell inequalities to be satisfied in our setting. The specifically relevant ones in our case are the inequalities first proven by Cereceda \cite{CE01}, which read \begin{eqnarray} \label{ee24} &&\!\!\!\! P^{[00]}_{00} - P^{[10]}_{00} - P^{[01]}_{00} - P^{[11]}_{11} \le 0 , \nonumber \\ &&\!\!\!\! P^{[00]}_{11} - P^{[10]}_{11} - P^{[01]}_{11} - P^{[11]}_{00} \le 0 . \end{eqnarray} There are 64 Cereceda inequalities obtainable by renaming the superscripts $[ab]$ and subscripts $AB$, and they can be divided into 16 quartets. Each quartet sums up to give a single CHSH inequality, and can be regarded as a set of ``elementary'' pieces of a CHSH inequality. Each Cereceda inequality contains all four combinations of types $[ab]$. It was shown by Fine \cite{FI82} that Bell inequality breaking occurs if and only if the assumption of \textit{factorizability of joint probabilities}, (\ref{ee04}), is violated. 
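The classical content of the first inequality in (\ref{ee24}) is easy to probe numerically: under the factorizability assumption (\ref{ee04}), its left-hand side never becomes positive. A brute-force random-sampling sketch in plain Python (illustrative only, not a proof):

```python
import random

def cereceda_lhs(p, q):
    """LHS of the first inequality in Eq. (ee24) for factorized probabilities
    P^{[ab]}_{AB} = p[a][A] * q[b][B]."""
    return (p[0][0] * q[0][0]        # P^{[00]}_{00}
            - p[1][0] * q[0][0]      # P^{[10]}_{00}
            - p[0][0] * q[1][0]      # P^{[01]}_{00}
            - p[1][1] * q[1][1])     # P^{[11]}_{11}

random.seed(0)
worst = max(
    cereceda_lhs([[x, 1 - x] for x in (random.random(), random.random())],
                 [[y, 1 - y] for y in (random.random(), random.random())])
    for _ in range(100_000)
)
# `worst` never exceeds zero: no product distribution breaks the inequality
```

The left-hand side is multilinear in the four free probabilities, so its maximum over the unit box is attained at a vertex, and all sixteen vertices give at most zero; the sampling merely illustrates this.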
Since the LHSs of the Cereceda inequalities (\ref{ee24}) are nothing other than $\Delta_{00}$ and $\Delta_{11}$, we always have \begin{eqnarray} \label{ee25} \Pi_{Alice} \le 0, \quad \Pi_{Bob} \le 0 . \end{eqnarray} We can now see that, with the Bayesian Nash payoffs (\ref{ee22}), both players are getting the maximum payoffs mathematically possible under the assumption of type-locality. If players are allowed to share quantum objects, it is possible to have nonlocal strategies, and it is expected that the classical limit on payoffs imposed by the Cereceda inequalities can be exceeded. There are, however, limits on the amount of quantum breaking of Bell inequalities. According to Cirel'son \cite{TS80}, the nonlocality supplied by quantum mechanics can break the Cereceda-Bell inequalities up to the following amount: \begin{eqnarray} \label{ee26} &&\!\!\!\! P^{[00]}_{00} - P^{[10]}_{00} - P^{[01]}_{00} - P^{[11]}_{11} \le \frac{\sqrt{2}-1}{2} , \nonumber \\ &&\!\!\!\! P^{[00]}_{11} - P^{[10]}_{11} - P^{[01]}_{11} - P^{[11]}_{00} \le \frac{\sqrt{2}-1}{2} , \end{eqnarray} which limits the possible payoffs to \begin{eqnarray} \label{ee27} \Pi_{Alice} \le \frac{\sqrt{2}-1}{2}, \quad \Pi_{Bob} \le \frac{\sqrt{2}-1}{2} , \end{eqnarray} even with quantum strategies. As is well known, Bell inequalities are, in general, maximally broken when the quantum entanglement is largest. For the quantum strategy given by (\ref{ee05}) and (\ref{ee06}), that corresponds to the case of $\gamma=\pi/2$ and $\phi=0$. In this case, the Bayesian Nash condition (\ref{ee21}) becomes \begin{eqnarray} \label{ee28} &&\!\!\!\! \sin(\alpha^{[0]}-\beta^{[1]}) - \sin(\alpha^{[0]}-\beta^{[0]}) = 0 , \nonumber \\ &&\!\!\!\! \sin(\alpha^{[1]}-\beta^{[1]}) + \sin(\alpha^{[1]}-\beta^{[0]}) = 0 , \nonumber \\ &&\!\!\!\! \sin(\alpha^{[1]}-\beta^{[0]}) - \sin(\alpha^{[0]}-\beta^{[0]}) = 0 , \\ \nonumber &&\!\!\!\! \sin(\alpha^{[1]}-\beta^{[1]}) + \sin(\alpha^{[0]}-\beta^{[1]}) = 0 . 
\end{eqnarray} From this condition, we can identify a single set of quantum Bayesian Nash equilibria for the case $\cos\phi\sin\gamma=1$ as \begin{eqnarray} \label{ee29} && \beta^{[0]\star}-\alpha^{[0]\star}=\frac{\pi}{4} , \nonumber \\&& \beta^{[1]\star}-\alpha^{[0]\star}=\frac{3\pi}{4} , \\ \nonumber&& \alpha^{[1]\star}-\beta^{[0]\star}=\frac{5\pi}{4} . \end{eqnarray} Since there are only three constraints on four quantities, the Bayesian Nash equilibria form a continuous set. For this set of values, we have \begin{eqnarray} \label{ee30} \Delta_{00} = \Delta_{11} = \frac{1}{2} \left( \cos^2\frac{\pi}{8}-3\sin^2\frac{\pi}{8} \right) = \frac{\sqrt{2}-1}{2} , \end{eqnarray} which immediately leads to the quantum Bayesian Nash payoffs \begin{eqnarray} \label{ee31} \Pi^\star_{Alice} =\frac{\sqrt{2}-1}{2}, \quad \Pi^\star_{Bob} =\frac{\sqrt{2}-1}{2} . \end{eqnarray} In this quantum case again, both players are getting the maximal payoffs allowed by the Cirel'son limit (\ref{ee26}). Note again that positive payoffs are never possible under classical strategies even with correlations, for example cheap talk and altruism; this is a signature of the nonclassical correlation inherent in quantum strategies. \\ \section{Pseudo-classical and Quantum Interference Components} In order to examine the physical content of the quantum strategies in detail, we define the single-player probabilities \begin{eqnarray} \label{ee32} p_{A}^{[a]} = \left| {\left<A \right| U_{\alpha^{[a]}} \left| 0 \right>} \right| ^2 , \quad q_{B}^{[b]} = \left| {\left<B \right| V_{\beta^{[b]}} \left| 0 \right>} \right| ^2 , \end{eqnarray} with which we express the joint probability as \begin{eqnarray} \label{ee33} &&\!\!\!\!\!\!\!\!\!\!\!\! P_{AB}^{[ab]} = \cos^2\frac{\gamma}{2} p_A^{[a]} q_B^{[b]} +\sin^2\frac{\gamma}{2} p_{\bar A}^{[a]} q_{\bar B}^{[b]} \nonumber \\ &&\qquad +(-)^{A+B} \cos\phi \sin\gamma\sqrt{p_A^{[a]} p_{\bar A}^{[a]} q_B^{[b]} q_{\bar B}^{[b]}} . \end{eqnarray} Here, the notation ${\bar 0} = 1$, ${\bar 1} = 0$ is used. 
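Both the decomposition (\ref{ee33}) and the equilibrium payoff (\ref{ee31}) can be verified numerically. The sketch below, in plain Python, picks the representative point $\alpha^{[0]\star}=0$ of the continuous equilibrium set (\ref{ee29}) at $\gamma=\pi/2$, $\phi=0$:

```python
import cmath
import math

def elem(t, row, col):
    """Matrix element of the one-qubit rotation in Eq. (ee07):
    columns are the images of |0> and |1>."""
    c, s = math.cos(t / 2), math.sin(t / 2)
    return [[c, -s], [s, c]][row][col]

def joint_prob(alpha, beta, gamma, phi, A, B):
    """P_{AB} of Eq. (ee05): |<AB| U_alpha V_beta |Phi_{gamma,phi}>|^2."""
    amp = (math.cos(gamma / 2) * elem(alpha, A, 0) * elem(beta, B, 0)
           + cmath.exp(1j * phi) * math.sin(gamma / 2)
           * elem(alpha, A, 1) * elem(beta, B, 1))
    return abs(amp) ** 2

def decomposed_prob(alpha, beta, gamma, phi, A, B):
    """The same probability via the decomposition Eq. (ee33),
    valid for alpha, beta in [0, pi]."""
    p = lambda t, X: elem(t, X, 0) ** 2   # single-player probabilities, Eq. (ee32)
    pA, pAb = p(alpha, A), p(alpha, 1 - A)
    qB, qBb = p(beta, B), p(beta, 1 - B)
    return (math.cos(gamma / 2) ** 2 * pA * qB
            + math.sin(gamma / 2) ** 2 * pAb * qBb
            + (-1) ** (A + B) * math.cos(phi) * math.sin(gamma)
            * math.sqrt(pA * pAb * qB * qBb))

# One representative of the equilibrium set Eq. (ee29) at gamma = pi/2, phi = 0
a0, b0, b1 = 0.0, math.pi / 4, 3 * math.pi / 4
a1 = b0 + 5 * math.pi / 4
g, ph = math.pi / 2, 0.0
D00 = (joint_prob(a0, b0, g, ph, 0, 0) - joint_prob(a1, b0, g, ph, 0, 0)
       - joint_prob(a0, b1, g, ph, 0, 0) - joint_prob(a1, b1, g, ph, 1, 1))
D11 = (joint_prob(a0, b0, g, ph, 1, 1) - joint_prob(a1, b0, g, ph, 1, 1)
       - joint_prob(a0, b1, g, ph, 1, 1) - joint_prob(a1, b1, g, ph, 0, 0))
pi_alice = 0.75 * D00 + 0.25 * D11   # weights from Eq. (ee18b)
pi_bob = 0.25 * D00 + 0.75 * D11
```

At this point $\Delta_{00}=\Delta_{11}$ and both payoffs evaluate to $(\sqrt{2}-1)/2 \approx 0.2071$, saturating the Cirel'son limit (\ref{ee27}).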
The sector payoffs $\Pi^{[ab]}$ take the form \begin{eqnarray} \label{ee34} &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \Pi^{[ab]}_{Alice} = \sum_{A,B} \,(\, \cos^2\frac{\gamma}{2} M^{[ab]}_{AB} + \sin^2\frac{\gamma}{2} L^{[ab]}_{AB} \,)\, p^{[a]}_{A} q^{[b]}_{B} \nonumber \\ &&\!\!\!\! + \cos\phi\sin\gamma \sqrt{p_0^{[a]} p_1^{[a]} q_0^{[b]} q_1^{[b]}} \sum_{A,B} (-)^{A+B} M^{[ab]}_{AB}, \nonumber \\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \Pi^{[ab]}_{Bob} = \sum_{A,B} \,(\, \cos^2\frac{\gamma}{2} L^{[ab]}_{AB} + \sin^2\frac{\gamma}{2} M^{[ab]}_{AB} \,)\, p^{[a]}_{A} q^{[b]}_{B} \nonumber \\ &&\!\!\!\! + \cos\phi\sin\gamma \sqrt{p_0^{[a]} p_1^{[a]} q_0^{[b]} q_1^{[b]}} \sum_{A,B} (-)^{A+B} L^{[ab]}_{AB} , \end{eqnarray} where we have used the symmetry property $L^{[ab]}_{AB}=M^{[ab]}_{{\bar A}{\bar B}}$ of our game. In this form, we clearly see that both payoffs are composed of two components. The first component, formerly termed the classical family \cite{CT06}, represents an essentially classical payoff coming from an ``altruistic'' modification of the game matrix \cite{CH03,CH05}. Even with this modification, the payoffs are still constructed from the factorizable probabilities $p^{[a]}_A q^{[b]}_B$, and therefore the payoffs will never exceed the limit (\ref{ee25}). This leaves the second component, previously known as the interference term, as the sole source of truly quantum gain in the payoff, which is achieved through the breaking of Bell inequalities. In hindsight, it should have been naturally expected that the probabilities generated by ``successful'' quantum strategies would break some form of Bell inequality, since the extra quantum gains obtained from such strategies should be, by definition, the result of the breakdown of the factorizability assumption, (\ref{ee04}). 
However, in order to establish a {\it direct link} between Bell inequality breaking and the extra quantum gain in the game's payoff, it is necessary to have a proper game-theoretic setup, and that is exactly what we have shown here with the game of incomplete information. It is rather miraculous that an essentially identical setup was conceived contemporaneously in two separate disciplines as Harsanyi's game theory and Bell's quantum measurement theory. It seems possible that future investigation may reveal a hidden intellectual thread between the two. It could also be that they originate from a common mid-twentieth century \textit{Zeitgeist}. \\ \section{Extensions to Many Player Games} The following inequality was shown to hold by Cereceda \cite{CE04} for a $2\times 2 \times 2$ system, that is, a system of three spins measured by three observers, Alice, Bob and Chris, each equipped with a detector capable of performing spin projection measurements along two possible directions: \begin{eqnarray} \label{ee35} &&\!\!\!\!\!\!\!\! P^{[000]}_{000} -P^{[100]}_{000} - P^{[010]}_{000} - P^{[001]}_{000} - P^{[111]}_{111} \le 0 , \nonumber \\ &&\!\!\!\!\!\!\!\! P^{[000]}_{111} -P^{[100]}_{111} - P^{[010]}_{111} - P^{[001]}_{111} - P^{[111]}_{000} \le 0 . \end{eqnarray} It is easy to conceive a $2\times 2 \times 2$ game that shows quantum gain using this inequality. 
The following type of game matrix will do: \begin{eqnarray} \label{ee36} && \{M^{[0bc]}_{0BC}\} = \left( \begin{array} {cc} \begin{array}{cc} +3 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} -3 & 0 \cr 0 & 0 \end{array} \cr \begin{array}{cc} -3 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \end{array} \right), \nonumber \\&& \{L^{[0bc]}_{0BC}\} = \left( \begin{array} {cc} \begin{array}{cc} +1 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} -1 & 0 \cr 0 & 0 \end{array} \cr \begin{array}{cc} -1 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \end{array} \right), \nonumber \\ && \{M^{[0bc]}_{1BC}\} = \left( \begin{array} {cc} \begin{array}{cc} 0 & 0 \cr 0 & +1 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & -1 \end{array} \cr \begin{array}{cc} 0 & 0 \cr 0 & -1 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \end{array} \right), \nonumber \\&& \{L^{[0bc]}_{1BC}\} = \left( \begin{array} {cc} \begin{array}{cc} 0 & 0 \cr 0 & +3 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & -3 \end{array} \cr \begin{array}{cc} 0 & 0 \cr 0 & -3 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \end{array} \right), \nonumber \\ && \{M^{[1bc]}_{0BC}\} = \left( \begin{array} {cc} \begin{array}{cc} -3 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \cr \begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} -1 & 0 \cr 0 & 0 \end{array} \end{array} \right), \nonumber \\&& \{L^{[1bc]}_{0BC}\} = \left( \begin{array} {cc} \begin{array}{cc} -1 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \cr \begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} -3 & 0 \cr 0 & 0 \end{array} \end{array} \right), \\ \nonumber && \{M^{[1bc]}_{1BC}\} = \left( \begin{array} {cc} \begin{array}{cc} 0 & 0 \cr 0 & -1 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \cr \begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & -3 \end{array} \end{array} \right) , \\ 
\nonumber&& \{L^{[1bc]}_{1BC}\} = \left( \begin{array} {cc} \begin{array}{cc} 0 & 0 \cr 0 & -3 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} \cr \begin{array}{cc} 0 & 0 \cr 0 & 0 \end{array} &\begin{array}{cc} 0 & 0 \cr 0 & -1 \end{array} \end{array} \right) . \end{eqnarray} A further generalization of this result to a $2\times 2 \times \cdots \times 2$ system is \begin{eqnarray} \label{ee37} &&\!\!\!\!\!\!\!\!\!\! P^{[00...0]}_{00...0} - P^{[100...0]}_{00...0} - P^{[010...0]}_{00...0} -... \nonumber \\&&\qquad\qquad\qquad\ - P^{[00...01]}_{00...0} - P^{[11...1]}_{11...1} \le 0 , \nonumber \\ &&\!\!\!\!\!\!\!\!\!\! P^{[00...0]}_{11...1} - P^{[100...0]}_{11...1} - P^{[010...0]}_{11...1} -... \nonumber \\&&\qquad\qquad\qquad\ - P^{[00...01]}_{11...1} - P^{[11...1]}_{00...0} \le 0 . \end{eqnarray} It should again be easy to formulate a multi-party game based on this general Cereceda inequality. Although the current set of examples of fully solvable quantum games of incomplete information is of a rather special sort, having very sparse nonzero elements, it shows in a very clear fashion two novel aspects of quantum strategy that are not present in the classical counterpart. The relation between the purely quantum gain in the payoff and Bell inequality breaking is indeed striking. It is obvious that the same effects should persist in more general quantum games, albeit in a less clearly discernible form. In summary, we have shown that there is a genuine advantage in quantum strategy that is not accessible with classical resources, and that this advantage is found most clearly and unambiguously in the refined setting of Harsanyi's games of incomplete information. \section*{Acknowledgment} This work has been partially supported by the Grant-in-Aid for Scientific Research of the Ministry of Education, Culture, Sports, Science and Technology, Japan, under Grant numbers 18540384 and 18.06330.
\section{Introduction} Galois representations are an indispensable tool for analyzing the structure of the absolute Galois group of a number field $F$. One example of considerable interest comes from the fact that $G_F \vcentcolon=\text{Gal}(\bar{F}/F)$ acts (up to inner automorphism) on the algebraic fundamental group of the projective line minus three points. More precisely, if we let $X=\mathbb{P}^1_{F} \setminus \{0,1,\infty\}$ and $\bar{X}=X \otimes_F \bar{F}$, we may relate $\pi_1(X)$ and $\pi_1(\bar{X})$ through the following exact sequence: \[ 1 \to \pi_1(\bar{X}) \to \pi_1(X) \to G_F \to 1. \] \noindent From this we deduce the natural representation \[ \Phi \colon G_F \to \text{Out}(\pi_1(\bar{X})). \] One approach to studying this representation, championed by Ihara in the 1980s, is to fix a prime number $\ell$ and consider the corresponding representation involving the pro-$\ell$ fundamental group of $\bar{X}$. Since $\pi_1^{\ell}(\bar{X})$ is a characteristic quotient of $\pi_1(\bar{X})$, we may define $\Phi_{\ell} \colon G_F \to \text{Out}(\pi_1^{\ell}(\bar{X}))$ via the following commutative diagram: \begin{center} \begin{tikzpicture}[node distance=2cm] \node (G) {$G_F$}; \node (1) [right of=G, node distance = 2.4 cm] {$\text{Out}(\pi_1(\bar{X}))$}; \node (2) [below of=1] {$\text{Out}(\pi_1^{\ell}(\bar{X}))$}; \draw[->] (G) edge node [auto] {$\Phi$} (1); \draw[->] (G) edge node [left] {$\Phi_{\ell}$} (2); \draw[->] (1) to node {} (2); \end{tikzpicture} \end{center} \noindent The fixed field of the kernel of $\Phi_{\ell}$, which we will denote $\san(F, \ell)$, is a large subfield of $\bar{F}$ that as of yet is not entirely understood. 
We know from Anderson and Ihara's 1988 paper \cite{ihara} that $\san(F, \ell)$ is a pro-$\ell$ extension of $F(\mu_{\ell^{\infty}})$ unramified away from $\ell$, but it is still an open question, first posed by Ihara in the case where $F=\mathbb{Q}$, to determine whether $\san(F, \ell)$ is the \emph{maximal} such extension. Subfields arising from geometric objects have helped to shed light on the structure of $\san(F, \ell)$. For example, by the work of Rasmussen, Papanikolas, and Tamagawa in \cite{ras04}, \cite{ras07}, and \cite{ras1}, we know that if $E$ is an elliptic curve defined over $\mathbb{Q}$ with complex multiplication by $\mathbb{Q}(\sqrt{-\ell})$ and good reduction away from $\ell$, the field $\mathbb{Q}(E[\ell^\infty])$ is a subfield of $\san(\mathbb{Q}, \ell)$. However, a finiteness conjecture made by Rasmussen and Tamagawa in \cite{ras1} implies such examples arising from abelian varieties are quite rare. As this conjecture motivates our work, we pause to introduce some notation and formalize its statement. Let $\mathscr{A}(F,g,\ell)$ be the set of $F$-isomorphism classes of abelian varieties $A/F$ of dimension $g$ for which $F(A[\ell^\infty])$ is both a pro-$\ell$ extension of $F(\mu_{\ell^{\infty}})$ and unramified away from $\ell$. We will use $\mathscr{A}(F, g)$ to denote the disjoint union of the $\mathscr{A}(F,g,\ell)$ over the set of all primes $\ell$, i.e., $\mathscr{A}(F, g)\vcentcolon=\{([A], \ell): [A] \in \mathscr{A}(F,g,\ell) \}$. Then we may state the Rasmussen-Tamagawa finiteness conjecture as follows: \vspace{1 mm} \begin{conj} Let $F$ be a number field and $g>0$. The set $\mathscr{A}(F, g)$ is finite. \end{conj} \noindent This implies that, for a fixed $F$ and $g$, there exists a constant $C$ for which $\mathscr{A}(F,g,\ell)=\varnothing$ when $\ell>C$. 
One may ask whether stronger behavior should be expected for the bound $C$, and indeed the following uniform (meaning uniform in the degree of $F/\mathbb{Q}$) conjecture appears in \cite[Conj. 2]{ras2}: \vspace{1 mm} \begin{conj} Let $F$ be a number field and $g>0$. There exists a constant $C$ depending only on $g$ and the degree of $F/ \mathbb{Q}$ for which $\mathscr{A}(F,g,\ell)=\varnothing$ when $\ell>C$. \end{conj} In \cite{ras1}, Rasmussen and Tamagawa prove Conjecture 1 in the cases where $g=1$ and $[F:\mathbb{Q}]=1$ or $2$, excluding the 9 imaginary quadratic fields of class number one. In addition, they find a complete list of $\mathbb{Q}$-isogeny classes in $\mathscr{A}(\mathbb{Q}, 1)$, showing that $\mathscr{A}(\mathbb{Q},1,\ell)$ is empty if $\ell > 163$. Later work by Ozeki in \cite{ozeki} proves Conjecture 1 for abelian varieties with complex multiplication defined over a number field containing the CM field, and Arai has explored the conjecture in the context of QM-abelian surfaces (see \cite{arai}). More recently in \cite{ras2}, Rasmussen and Tamagawa prove that the Generalized Riemann Hypothesis implies Conjecture 1, and they give unconditional proofs in several new cases where the degree of $F$ is restricted and $g \leq 3$. (In particular, they resolve the conjecture in the case where $F$ is an imaginary quadratic field of class number one and $g=1$.) In addition, they prove a slightly stronger version of Conjecture 2 under the Generalized Riemann Hypothesis for any $g$ and any $F$ of odd degree. Despite this progress, Conjecture 2 is known unconditionally only in the case where $g=1$ and $[F:\mathbb{Q}]=1$ or $3$ (see \cite{ras1}, \cite{ras2}). Moreover, aside from the case when $F=\mathbb{Q}$, any known bounds are likely far from optimal. In this article, we prove a result stronger than Conjecture 2 for elliptic curves with complex multiplication, and we give improved bounds for number fields of degree $1 < n \leq 100$. 
Specifically, we have the following theorem: \begin{thm1} Let $F$ be a number field with $[F:\mathbb{Q}]=n$. There exists a constant $C=C(n)$ depending only on $n$ with the following property: If there exists a CM elliptic curve $E/F$ with $F(E[\ell^\infty])$ a pro-$\ell$ extension of $F(\mu_{\ell})$ for some rational prime $\ell$, then $\ell \leq C$. \end{thm1} We record the consequences of this theorem for Conjectures 1 and 2. Let $\mathscr{A}^{\mathrm{CM}}(F,g,\ell)$ be the subset of $\mathscr{A}(F,g,\ell)$ consisting of abelian varieties with complex multiplication, and define $\mathscr{A}^{\mathrm{CM}}(F,g)\vcentcolon=\{([A], \ell): [A] \in \mathscr{A}^{\mathrm{CM}}(F,g,\ell) \}$. Then as a direct consequence of Theorem 1, we have the following corollaries: \begin{cor} Let $F$ be a number field with $[F:\mathbb{Q}]=n$. There exists a constant $C=C(n)$ depending only on $n$ such that $\mathscr{A}^{\mathrm{CM}}(F,1,\ell) = \varnothing$ if $\ell > C$. \end{cor} \begin{cor} $\mathscr{A}^{\mathrm{CM}}(F,1)$ is finite. \end{cor} Note that the bound of Theorem 1 is achieved even as we relax the ramification requirement, thereby allowing the inclusion of elliptic curves with bad reduction at primes other than $\ell$. However, without the ramification requirement, we cannot guarantee (nor should we expect) a finiteness result as in Corollary 2. A discussion of computed bounds is included at the end of section 3. \subsection*{Notation} \begin{itemize} \item $\mu_{\ell}$ denotes the group of $\ell$th roots of unity in $\bar{\mathbb{Q}}$, and $\mu_{\ell^{\infty}}= \cup_{n\geq 1}\mu_{\ell^n}$. \item For an abelian variety $A$ defined over a field $F$, we denote the extension of $F$ generated by the $\ell$-torsion points of $A$ by $F(A[\ell])$. The field $F(A[\ell^\infty])$ is generated over $F$ by the $\ell$-powered torsion points of $A$. \item If $F$ is a number field, $d_F$ is the absolute discriminant of $F/ \mathbb{Q}$. 
We denote the ring of integers of $F$ by $\mathcal{O}_F$, and $\mathcal{O}_F^{\times}$ is its group of units. \item $w_F$ denotes the number of distinct roots of unity in $F$, i.e., $|\mathcal{O}_F^{\times}|= w_F$. \item If $\mathfrak{a}$ is an integral ideal in the number field $F$, we denote the norm of $\mathfrak{a}$ by $\mathcal{N}(\mathfrak{a})$. In other words, $\mathcal{N}(\mathfrak{a})= [\mathcal{O}_F:\mathfrak{a}]$. \item $ \left( \frac{a}{\ell} \right)$ is the Kronecker symbol. \end{itemize} \section{Background on Ray Class Fields and Elliptic Curves} We first recall the definition of the $\mathfrak{m}$-ray class group. Though the theory exists in more generality, here we restrict our attention to the case where $K$ is an imaginary quadratic field so we may take $\mathfrak{m}$ to be an integral ideal of $\mathcal{O}_K$. Then relative to $\mathfrak{m}= \prod \mathfrak{p}^{m(\mathfrak{p})}$, we define the following two subsets:\\ \hspace{1 cm} $I_K(\mathfrak{m})=$ the set of all fractional ideals of $K$ relatively prime to $\mathfrak{m},$\\ \hspace{1 cm} $P_K(\mathfrak{m}) = \{ \left(\alpha \right): \alpha \in K^{\times}, \ord_{\mathfrak{p}}(\alpha -1) \geq m(\mathfrak{p}) \text{ for all } \mathfrak{p} \text{ dividing } \mathfrak{m} \}.$\\ \noindent Note $P_K(\mathfrak{m})$ is a subgroup of $I_K(\mathfrak{m})$, and the quotient $I_K(\mathfrak{m})/P_K(\mathfrak{m})$ is the $\mathfrak{m}$-ray class group of $K$. As with the ideal class group, whose definition we recover when $\mathfrak{m}=1$, the $\mathfrak{m}$-ray class group is finite. In fact, we have the following explicit formula for its cardinality: \begin{prop} Let $\mathfrak{m}$ be an integral ideal in a number field $K$. 
The order of the ray class group modulo $\mathfrak{m}$ is given by: \[ h_{\mathfrak{m}}=h_K \cdot [U:U_{\mathfrak{m}}]^{-1} \cdot \mathcal{N}(\mathfrak{m}) \cdot \prod_{\mathfrak{p} \mid \mathfrak{m}} \left( 1- \mathcal{N}(\mathfrak{p})^{-1} \right) \] where \begin{align*} h_K &= \text{class number of } K\\ U &= \mathcal{O}_K^{\times}\\ U_{\mathfrak{m}} &= \{ \alpha \in U : \ord_{\mathfrak{p}}(\alpha -1) \geq m(\mathfrak{p}) \text{ for all } \mathfrak{p} \text{ dividing } \mathfrak{m} \}. \end{align*} \end{prop} \begin{proof} See Corollary 3.2.4 in \cite{cohen2}. Note we have restricted to the case where our modulus is an integral ideal. \end{proof} \noindent In the special case $\mathfrak{m}=\ell \mathcal{O}_K$, we obtain: \begin{cor} Let $\ell$ be a prime and $\mathfrak{m}$ be the modulus $\ell \mathcal{O}_K$ in a quadratic field $K$. Then: \begin{enumerate} \item If $\ell$ is ramified in $K$, $h_{\mathfrak{m}}=h_K \cdot [U:U_{\mathfrak{m}}]^{-1} \cdot \ell \cdot (\ell -1).$ \item If $\ell$ splits in $K$, $h_{\mathfrak{m}}=h_K \cdot [U:U_{\mathfrak{m}}]^{-1} \cdot (\ell -1) \cdot (\ell -1).$ \item If $\ell$ is inert in $K$, $h_{\mathfrak{m}}=h_K \cdot [U:U_{\mathfrak{m}}]^{-1} \cdot (\ell +1) \cdot (\ell -1).$ \end{enumerate} \end{cor} Just as the ideal class group gives the Galois group of the Hilbert class field, in general the $\mathfrak{m}$-ray class group gives the Galois group of an abelian extension of $K$ called the ray class field of $K$ with modulus $\mathfrak{m}$, denoted by $K_{\mathfrak{m}}$. Ramification in the extension $K_{\mathfrak{m}}/K$ is restricted to primes dividing $\mathfrak{m}$, and the primes that split completely are precisely the primes in $P_K(\mathfrak{m})$. The fact that a unique $K_{\mathfrak{m}}$ exists for any ideal $\mathfrak{m}$ of $\mathcal{O}_K$ is a consequence of the Existence Theorem of Class Field Theory, and we direct the interested reader to Chapter 5 of \cite{janusz} for details. 
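As a quick numerical illustration of Corollary 3, the three cases can be evaluated directly. The following Python sketch is our own illustration (the function name and interface are not part of the formalism); the class number $h_K$ and the unit index $[U:U_{\mathfrak{m}}]$ must be supplied by the caller.

```python
from fractions import Fraction

def ray_class_order(h_K, unit_index, ell, splitting):
    """Order h_m of the ray class group modulo m = l*O_K (Corollary 3).

    splitting: +1 if l splits in K, -1 if l is inert, 0 if l ramifies.
    h_K and unit_index = [U : U_m] are inputs supplied by the caller.
    """
    if splitting == 0:        # one ramified prime above l
        local = ell * (ell - 1)
    elif splitting == 1:      # two split primes of norm l
        local = (ell - 1) * (ell - 1)
    else:                     # one inert prime of norm l^2
        local = (ell + 1) * (ell - 1)
    h_m = Fraction(h_K, unit_index) * local
    assert h_m.denominator == 1  # h_m is always an integer
    return int(h_m)
```

For example, for $K=\mathbb{Q}(i)$ and $\ell=5$ (split, with $h_K=1$ and $[U:U_{\mathfrak{m}}]=4$) one finds $h_{\mathfrak{m}}=4$.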
We can construct the ray class fields of an imaginary quadratic field $K$ using torsion points of elliptic curves possessing complex multiplication. Recall that if $E/F$ is an elliptic curve, we say $E$ has complex multiplication, or CM, if its ring of $\bar{F}$-endomorphisms is strictly larger than $\mathbb{Z}$. In this case, $\text{End} (E)\otimes \mathbb{Q}$ is isomorphic to an imaginary quadratic field $K$, and $\text{End} (E)$ is isomorphic to an order in that field. If $E$ has CM by the maximal order in $K$, then the ray class field of $K$ with modulus $N\mathcal{O}_K$ can be generated from the $N$-torsion points of $E$, as we will now explain. Since char($F) \neq 2 \text{ or } 3$, $E$ is isomorphic over $F$ to a curve having an equation of the form $y^2=4x^3-g_2x-g_3$, with $g_2, g_3 \in F$. The Weber function $\mathfrak{h}$ on $E$ is defined as follows, where $j_E$ is the $j$-invariant of $E$ and $\Delta=g_2^3-27g_3^2$: \[ \mathfrak{h}(x,y) = \left\{ \begin{array}{l l} \vspace{3 mm} \dfrac{g_2g_3}{\Delta}x \text{ if } j_E \neq 0, 1728,\\ \vspace{3 mm} \dfrac{g_2^2}{\Delta}x^2 \text{ if } j_E=1728,\\ \dfrac{g_3}{\Delta}x^3 \text{ if } j_E=0.\\ \end{array} \right. \] \noindent We then obtain an explicit description of the ray class field from the following theorem, the roots of which can be traced back to the work of Hasse in ~\cite{hasse}: \begin{thm2} Let $E$ be an elliptic curve defined over $K(j_E)$ with End($E$) $\cong \mathcal{O}_K$ for some imaginary quadratic field $K$. Let $\mathfrak{h}$ be the Weber function on $E$. Then $K(j_E, \mathfrak{h}(E[N]))$ is the ray class field of $K$ with modulus $N\mathcal{O}_K$. \end{thm2} \begin{proof} See, for example, Theorem 2 in ~\cite[p.126]{lang}. \end{proof} Note that the Weber function is model independent. 
That is, if $\varphi \colon E \to E'$ is an $\bar{F}$-isomorphism and $\mathfrak{h}_E$, $\mathfrak{h}_{E'}$ the Weber functions of $E$ and $E'$, respectively, we have $\mathfrak{h}_E=\mathfrak{h}_{E'} \circ \varphi$. (See \cite[p.107]{shimura}.) This allows us to extend the result of Theorem 2 to include elliptic curves defined over an arbitrary number field $F$. \begin{cor} Let $E$ be an elliptic curve defined over a number field $F$ with End($E$) $\cong \mathcal{O}_K$ for some imaginary quadratic field $K$. Then $K(j_E, \mathfrak{h}(E[N]))$ is the ray class field of $K$ with modulus $N\mathcal{O}_K$. \end{cor} \begin{proof} Let $E$ be an elliptic curve defined over a number field $F$ with End($E$) $\cong \mathcal{O}_K$. $E$ is isomorphic over $\mathbb{C}$ to an elliptic curve $E'$ defined over $K(j_E)$. (See \cite[p.105]{advanced}.) If we let $\varphi$ denote the isomorphism from $E$ to $E'$, the model independence of the Weber function gives $\mathfrak{h}_E(E[N])=\mathfrak{h}_{E'}(\varphi(E[N]))=\mathfrak{h}_{E'}(E'[N]).$ By Theorem 2, $K(j_{E'}, \mathfrak{h}_{E'}(E'[N]))=K(j_{E}, \mathfrak{h}_E(E[N]))$ is the ray class field of $K$ modulo $N\mathcal{O}_K$, as desired. \end{proof} \section{Proof of Main Result} For an arbitrary elliptic curve, the mod-$\ell$ Galois representation is an injective homomorphism Gal($F(E[\ell])/F) \to \text{GL}_2(\mathbb{F}_{\ell})$. As a consequence, $[F(E[\ell]):F]$ must divide \#$\text{GL}_2(\mathbb{F}_{\ell})$. However, if $E$ has CM, much more is known: \begin{prop} Let $E$ be an elliptic curve with CM by $\mathcal{O}_K$ in $K$. 
Then for an odd prime $\ell$: \begin{enumerate} \item $\text{If } \left( \frac{d_K}{\ell} \right)=1,$ then $[F(E[\ell]):F] \mid 2(\ell -1)^2.$ \item $\text{If } \left( \frac{d_K}{\ell} \right)=-1,$ then $[F(E[\ell]):F] \mid 2(\ell^2 -1).$ \item $\text{If } \left( \frac{d_K}{\ell} \right)=0,$ then $[F(E[\ell]):F] \mid 2(\ell^2 -\ell).$ \end{enumerate} \end{prop} \begin{proof} See, for example, Corollary 17 of ~\cite{clark1}. \end{proof} From these conditions, we see that it is only possible for an odd prime $\ell$ to divide the degree of $F(E[\ell])/F$ when $\ell$ divides $d_K$. This is a fact we are able to exploit: \begin{lem} Suppose $E$ is an elliptic curve defined over a number field $F$ with complex multiplication by $\mathcal{O}_K$ in $K$. Suppose $\ell$ is prime and $\ell >\dfrac{w_K}{2}n+1$, where $n$ is the degree of $F/ \mathbb{Q}$. If $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered, $\ell$ must divide $d_K$. \end{lem} \begin{proof} Let $[F(E[\ell]):F(\mu_{\ell})]$ be $\ell$-powered, and suppose $\ell$ is an odd prime which does not divide $d_K$. Since $\ell \neq 2$, Proposition 2 forces $F(E[\ell])=F(\mu_{\ell})$. We will show this is a contradiction unless $\ell \leq \dfrac{w_K}{2}n+1$. By Lemma 15 in ~\cite{clark1}, $K \subset F(E[\ell])$. Thus $F(E[\ell])$ also contains $K(j_E, \mathfrak{h}(E[\ell]))$, the ray class field of $K$ modulo $\mathfrak{m}=\ell \mathcal{O}_K$. 
This gives us the following diagram of fields: \begin{center} \begin{tikzpicture}[node distance=2cm] \node (Q) {$\mathbb{Q}$}; \node (F) [above left of=Q, node distance=3 cm] {$F$}; \node (K) [above right of=Q, node distance=1.8cm] {$K$}; \node (Fm) [above of =F, node distance=2.5cm] {$F(E[\ell])=F(\mu_{\ell})$}; \node (Kh) [above of =K, node distance=2.2 cm] {$K(j_E, \mathfrak{h}(E[\ell]))$}; \draw[-] (Q) edge node[auto] {$n$} (F); \draw[-] (Q) edge node[right] {2} (K); \draw[-] (F) edge node[auto] {$\leq \ell-1$} (Fm); \draw[-] (K) edge node[right] {$h_{\mathfrak{m}}$} (Kh); \draw (Kh) -- (Fm); \end{tikzpicture} \end{center} From Corollary 3, if $\ell$ splits in $K$ then \begin{align*} h_{\mathfrak{m}} &= h_K \cdot [U:U_{\mathfrak{m}}]^{-1} \cdot (\ell -1) \cdot (\ell -1)\\ &\geq 1 \cdot \dfrac{1}{w_K}\cdot (\ell -1) \cdot (\ell -1). \end{align*} Similarly, if $\ell$ is inert in $K$, \begin{align*} h_{\mathfrak{m}} \geq 1 \cdot \dfrac{1}{w_K}\cdot (\ell +1) \cdot (\ell -1). \end{align*} In either case, $h_{\mathfrak{m}} \geq \dfrac{1}{w_K}\cdot (\ell -1)^2$. But \[ 2 \cdot \dfrac{1}{w_K}\cdot (\ell -1)^2 > n \cdot (\ell -1) \] whenever $\ell > \dfrac{w_K}{2}n+1$. In other words, $F(E[\ell]) \neq F(\mu_{\ell})$ for $\ell > \dfrac{w_K}{2}n+1$. \end{proof} In fact, the same result holds for elliptic curves with CM by an arbitrary order: \begin{prop} Suppose $E$ is an elliptic curve defined over a number field $F$ with complex multiplication by an order in $K$. Suppose $\ell$ is prime and $\ell >\dfrac{w_K}{2}n+1$, where $n$ is the degree of $F/ \mathbb{Q}$. If $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered, $\ell$ must divide $d_K$. \end{prop} \begin{proof} Suppose $E$ has CM by the order $\mathcal{O}_f = \mathbb{Z} + f\mathcal{O}_K$ in $K$. Then there exists an $F$-rational isogeny $\varphi \colon E \to E'$ where $E'$ is defined over $F$ and has CM by $\mathcal{O}_K$.
Since $\varphi$ is cyclic of degree $f$ (Proposition 25 in ~\cite{clark1}), we need only to show that $f$ and $\ell$ are relatively prime. This will ensure the induced map $\varphi \colon E[\ell] \to E'[\ell]$ is in fact an isomorphism, and since the isomorphism is defined over $F$, we will have $F(E[\ell])=F(E'[\ell])$. The result will then be a consequence of the previous lemma. Let $f=p_1^{a_1} \cdots p_r^{a_r}$ be the prime factorization of $f$, with $p_1<p_2<\ldots<p_r$. As shown in ~\cite[p.146]{cox}, the class number of $\mathcal{O}_f$ satisfies: \[ h(\mathcal{O}_f) = \frac{h(\mathcal{O}_K) p_1^{a_1 -1} \cdots p_r^{a_r -1}}{[\mathcal{O}_K^{\times}:\mathcal{O}_f^{\times}]} \prod_{i=1}^r \left(p_i-\left( \frac{d_K}{p_i} \right) \right). \] Since $| \mathcal{O}_K^{\times}| = w_K$ and $|\mathcal{O}_f^{\times}| \geq 2$, \begin{align*} h(\mathcal{O}_f) &\geq \frac{h(\mathcal{O}_K) p_1^{a_1 -1} \cdots p_r^{a_r -1}}{w_K/2} \prod_{i=1}^r \left(p_i-\left( \frac{d_K}{p_i} \right) \right)\\ &\geq \frac{2}{w_K} \left(p_r-\left( \frac{d_K}{p_r} \right) \right)\\ &\geq \frac{2}{w_K} \left(p_r-1 \right). \end{align*} We may obtain an upper bound on $h(\mathcal{O}_f)$ by recalling that $K(j_E)$ is the ring class field of $K$ of the order $\mathcal{O}_f$ (see ~\cite[p.220]{cox}). Thus $[K(j_E):K] =$ \#cl$(\mathcal{O}_f)$, and $h(\mathcal{O}_f) \leq n$. Combining this with the inequality above, we find $p_r \leq \dfrac{w_K}{2}n+1$. Since $\ell > \dfrac{w_K}{2}n+1$, this is enough to conclude $\ell$ and $f$ are relatively prime, as desired. \end{proof} If $n$ is odd we can extend the result to all odd primes: \begin{cor} Suppose $E$ is an elliptic curve defined over a number field $F$ with complex multiplication by an order in $K$. Suppose the degree of $F/ \mathbb{Q}$ is odd, and let $\ell$ be an odd prime number. If $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered, $\ell$ must divide $d_K$. 
\end{cor} \begin{proof} Suppose $\ell \nmid d_K$, and assume for the sake of contradiction that $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered. By Lemma 15 in ~\cite{clark1}, $K=\mathbb{Q}(\sqrt{D}) \subset F(E[\ell])$. In fact, $K \subseteq F(\mu_{\ell})$, for otherwise $F(\mu_{\ell})(\sqrt{D})$ would be a proper extension of $F(\mu_{\ell})$ contained in $F(E[\ell])$ and 2 would divide $[F(E[\ell]): F(\mu_{\ell})]$. However, since $\ell \nmid d_K$, we know $K \nsubseteq \mathbb{Q}(\mu_{\ell})$. Thus $\mathbb{Q}(\mu_{\ell})(\sqrt{D})$ is a proper extension of $\mathbb{Q}(\mu_{\ell})$ contained in $F(\mu_{\ell})$. Since $[F(\mu_{\ell}): \mathbb{Q}(\mu_{\ell})]=[F:\mathbb{Q}(\mu_{\ell}) \cap F]$, this forces $2 \mid [F:\mathbb{Q}(\mu_{\ell}) \cap F]$, which is a contradiction. \end{proof} We are now ready to prove our main result: \begin{thm1} Let $F$ be a number field with $[F:\mathbb{Q}]=n$. There exists a constant $C=C(n)$ depending only on $n$ with the following property: If there exists a CM elliptic curve $E/F$ with $F(E[\ell^\infty])$ a pro-$\ell$ extension of $F(\mu_{\ell})$ for some rational prime $\ell$, then $\ell \leq C$. \end{thm1} \begin{proof} Suppose there exists a CM-elliptic curve $E/F$ with $F(E[\ell^\infty])$ a pro-$\ell$ extension of $F(\mu_{\ell})$. Thus $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered. Since $w_K \leq 6$ for an imaginary quadratic field $K$, the previous proposition shows $\ell \leq 3n+1$ or $\ell \mid d_K$ where $K$ is the CM-field of $E$. However, since $h(\mathcal{O}_K)$ divides \#cl$(\mathcal{O}_f)=[K(j_E):K] \leq n$, it follows that $K$ has class number less than or equal to $n$. As there are only a finite number of such $K$, proved by Heilbronn in \cite{heilbronn}, the result follows. 
\end{proof} It is clear that obtaining an explicit bound depends only on knowing the imaginary quadratic fields with a given class number, i.e., it depends on having a solution to the Gauss class number problem for imaginary quadratic fields. For class numbers up through 7 and odd class numbers up to 23, complete lists of the corresponding fields exist (see ~\cite{arno2} for a history of the many mathematicians involved in the early work on this problem and for a list of imaginary quadratic fields with odd class number up to 23; see ~\cite{stark}, ~\cite{arno}, ~\cite{wagner} for lists of imaginary quadratic fields of class number 2, 4, and 6, respectively). More recent work by Watkins in \cite{watkins} gives a solution for class numbers up to 100. To illustrate how these results may be used, we have compiled a table of bounds for elliptic curves defined over a number field $F$ of degree $n$ where $n \leq 7$: \begin{center} \begin{tabular}{l l} $n$ & $\hfill C(n)$\\ \hline 1, 2 & \hspace{.5 cm} \hfill 163\\ 3, 4 & \hfill 907\\ 5, 6 & \hfill 2683\\ 7 & \hfill 5923\\ \end{tabular} \end{center} We justify the claimed bounds. Suppose there exists a CM-elliptic curve $E/F$ with $F(E[\ell^\infty])$ a pro-$\ell$ extension of $F(\mu_{\ell})$. Then $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered, and by Proposition 3, we know $\ell \leq \dfrac{w_K}{2}n+1 \leq 3n+1$ or $\ell$ divides $d_K$ where $K$ is the CM field of $E$. As mentioned in the proof of Theorem 1, the class number of $K$ is less than or equal to $n$, so we need only consult the lists of the discriminants of imaginary quadratic fields satisfying this constraint. A check of the possible primes dividing those discriminants yields the bounds above. Since Rasmussen and Tamagawa in \cite{ras1} find an example of a CM-elliptic curve defined over $\mathbb{Q}$ with $\mathbb{Q}(E[163^{\infty}])$ a pro-$163$ extension of $\mathbb{Q}(\mu_{163})$, we see that in fact 163 is the best possible bound for $n=1$ and $n=2$. 
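The table entries for $n=1,2$ can be reproduced mechanically. The Python sketch below combines the bound $\ell \leq \frac{w_K}{2}n+1 \leq 3n+1$ of Proposition 3 with the classical lists of (absolute values of) fundamental discriminants of class number 1 and 2; the lists are quoted from the literature as data, and the function names are our own.

```python
# Absolute values of the fundamental discriminants of imaginary quadratic
# fields of class number 1 and 2 (classical lists, quoted here as data).
H1 = [3, 4, 7, 8, 11, 19, 43, 67, 163]
H2 = [15, 20, 24, 35, 40, 51, 52, 88, 91, 115, 123, 148,
      187, 232, 235, 267, 403, 427]

def largest_prime_factor(m):
    """Largest prime factor of m (trial division; m is small here)."""
    p, big = 2, 1
    while p * p <= m:
        while m % p == 0:
            big, m = p, m // p
        p += 1
    return max(big, m) if m > 1 else big

def bound_C(n, discs):
    """Either l <= 3n+1 or l divides one of the admissible d_K."""
    return max(3 * n + 1, max(largest_prime_factor(d) for d in discs))
```

Since $h(\mathcal{O}_K)\leq n$, for $n=2$ the admissible discriminants are those of class number 1 or 2, and `bound_C(2, H1 + H2)` returns 163, in agreement with the table.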
We may also obtain a rough bound when $F$ has degree up to 100. In Table 4 of his paper \cite{watkins}, Watkins records the largest fundamental discriminant (in absolute value) for each class number up to 100. This is sufficient to generate additional bounds. For example, we know the largest fundamental discriminant in Watkins's table, 2383747, occurs when $K$ has class number 98. Hence the largest possible prime dividing any discriminant is 2383739, so $C(n) \leq 2383739$ for all $n \leq 100$. \section{Closing Remarks} Although finding the conditions necessary for $[F(E[\ell]): F(\mu_{\ell})]$ to be $\ell$-powered was enough to prove the uniform bound in this paper, it is desirable to discover sufficient conditions as well. As discussed in the introduction, elliptic curves possessing this characteristic help us better understand $\san(F, \ell)$, provided they also have good reduction away from $\ell$. Here, we present two additional applications. If $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered, this partially determines the form of the Galois representation attached to $E$. Let $\rho_{E,\ell}:G_F \to GL_2(\mathbb{F}_{\ell})$ be the mod $\ell$ Galois representation, let $\chi$ be the cyclotomic character mod $\ell$, and let $\delta = [\mathbb{F}_{\ell}^{\times}: \chi(G_F)]$. Then by a result of Rasmussen and Tamagawa: \begin{lem} Suppose $E$ is an elliptic curve defined over a number field $F$ where $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered. Then there exists a basis of $E[\ell]$ with respect to which \[ \rho_{E,\ell}(G_F)= \left[\begin{matrix} \chi^{i_1} & *\\ 0& \chi^{i_2} \end{matrix}\right]. \] Furthermore, $i_1$, $i_2$ may be chosen to be nonnegative integers less than $\frac{\ell - 1}{\delta}$. \end{lem} \begin{proof} A version of this appears as Lemma 3 in \cite{ras1}, and a more general version appears in \cite{ras2}.
Note that although the result in \cite{ras2} is stated for abelian varieties $A/F$ where $F(A[\ell^\infty])$ is both a pro-$\ell$ extension of $F(\mu_{\ell})$ and unramified away from $\ell$, the ramification requirement is not used in the proof. \end{proof} Knowing the sufficient conditions for $[F(E[\ell]): F(\mu_{\ell})]$ to be $\ell$-powered would also establish when there exist $\ell$-torsion points rational over $F(\mu_{\ell})$: \begin{lem} Suppose $E$ is an elliptic curve defined over a number field $F$. $E$ has a non-trivial $\ell$-torsion point rational over $F(\mu_{\ell})$ if and only if $[F(E[\ell]): F(\mu_{\ell})]$ is $\ell$-powered. \end{lem} \begin{proof} Suppose $P \in E[\ell]$ is rational over $F(\mu_{\ell})$. Then we can choose a basis $\{P, Q\}$ of $E[\ell]$, yielding Gal$(F(E[\ell])/F(\mu_{\ell})) \cong $ $\left< \left[\begin{smallmatrix} 1&b\\ 0&1 \end{smallmatrix}\right] \right>$, where $b \in \mathbb{F}_{\ell}$. (Recall the determinant of this matrix will equal the cyclotomic character, which is trivial in this extension.) But this group has size 1 or $\ell$. The other direction is a result of the Orbit-Stabilizer Theorem, but we can also see it as an immediate consequence of Lemma 2. \end{proof} Unfortunately, the converse of Proposition 3 does not hold. As a counterexample, consider the elliptic curve defined by the equation \[ y^2=x^3-595x+5586 \] at the prime $\ell=7$. This curve has CM by an order in $K=\mathbb{Q}(\sqrt{-7})$, so $\ell$ divides $d_K$ and $\ell > 3\cdot1+1$. By Lemma 4 in \cite{Dieulefait}, $\sqrt{7}$ is contained in $\mathbb{Q}(E[\ell])$. Since $\sqrt{7}$ is not contained in $\mathbb{Q}(\mu_{\ell})$, we find that $\mathbb{Q}(\mu_{\ell})(\sqrt{7})$ gives a degree 2 extension of $\mathbb{Q}(\mu_{\ell})$ inside of $\mathbb{Q}(E[\ell])$. In other words, 2 divides $[\mathbb{Q}(E[\ell]): \mathbb{Q}(\mu_{\ell})]$ and so the desired extension is not $\ell$-powered.
\section{Acknowledgements} The author is grateful to her advisor, Chris Rasmussen, for suggesting the problem and for his guidance in preparing this paper. The author also thanks Akio Tamagawa for his helpful comment on the Weber function. \vspace{2 cm} \bibliographystyle{amsplain}
\section{Introduction}\label{sec:intro} The photoinduced production of $\eta$ and $\eta'$ mesons is a selective probe to study excitations of the nucleon. These mesons represent the isoscalar members of the fundamental pseudoscalar-meson nonet and, in contrast to the isovector $\pi$, excitations with isospin $I = 3/2$ ($\Delta$ resonances) do not decay into $\eta N$ and $\eta' N$ final states. An overview of the current status of nucleon resonances can be found in Ref.~\cite{Tanabashi:2018} and of the experimental and phenomenological progress in $\eta$ photoproduction can be found in Ref.~\cite{Krusche2014}. The isobar model EtaMAID is part of the Mainz MAID project~\cite{Tiator:2018pjq,MAID,MAID07} with online programs performing real-time calculations of observables, amplitudes and partial waves (multipoles). EtaMAID was introduced in 2001~\cite{Chiang:2002vq} as a model with eight prominent nucleon resonances: $N(1535)\frac{1}{2}^-(S_{11})$, $N(1650)\frac{1}{2}^-(S_{11})$, $N(1710)\frac{1}{2}^+(P_{11})$, $N(1720)\frac{3}{2}^+(P_{13})$, $N(1520)\frac{3}{2}^-(D_{13})$, $N(1700)\frac{3}{2}^-(D_{13})$, $N(1675)\frac{5}{2}^-(D_{15})$, and $N(1680)\frac{5}{2}^+(F_{15})$.\footnote{Throughout this paper we will use two notations for nucleon resonances, the general notation as $N(1535)\frac{1}{2}^-$, introduced by PDG in 2012 and the older $\pi N$ notation as $S_{11}(1535)$.} The background was modeled with Born terms and $t$-channel vector meson exchanges of $\omega$ and $\rho$ mesons. The model was developed for photo- and electroproduction on protons and neutrons and was well fitted to the few available data in 2001. Since that time a lot of developments occurred, first of all for the experimental data base. There was a huge effort at several accelerator facilities to combine high intensity polarized photon beams with modern 4$\pi$ detectors and spin-polarized targets. 
In particular, the Crystal Ball/TAPS setup at MAMI in Mainz (Germany) \cite{McNicoll:2010qk}, the Crystal Barrel/TAPS at ELSA in Bonn (Germany) \cite{Crede:2009zzb}, and the CLAS detector at JLab in Newport News (USA) \cite{Mecking:2003zu} have reached this goal and provided new, valuable information about photo-induced $\eta$ and $\eta^\prime$ production. At the GRAAL facility in Grenoble (France) \cite{Ajaka:1998zi} and the LEPS facility at SPring-8 in Osaka (Japan) \cite{Sumihama:2009gf}, photon beams with high linear polarization are available via laser backscattering; data from ELPH at Tohoku University in Sendai (Japan) \cite{Miyahara:2007zz} also became available. The CLAS detector used a magnetic field in order to reconstruct the recoiling proton with high resolution; the final-state neutral mesons were identified via a missing-mass analysis. The other detectors used electromagnetic calorimeters with almost 4$\pi$ coverage to detect photons, pions, protons and neutrons, and the $\gamma N \to \eta N$ and $\gamma N \to \eta' N$ reactions were identified via a combination of missing-mass and invariant-mass techniques. Photoproduction of $\eta$ or $\eta^\prime$ on the nucleon has been studied in various theoretical approaches: in quark models~\cite{Li:1997gd,Saghai:2001yd,Golli:2016dlj}, Lagrangian models~\cite{Feuster:1998cj,Davidson:1999in}, effective field theory~\cite{Borasoy:2001pj,Ruic:2011wf}, dispersion theoretical calculations~\cite{Aznauryan:2003zg,Nikonov:2018}, Regge models~\cite{Sibirtsev:2010yj}, isobar models~\cite{Knochlein:1995qz,Tiator:1999gr,Chiang:2001as,Chiang:2002vq,Aznauryan:2003zg, Tryasuchev:2003st,Tryasuchev:2018pdq}, and combined analyses using additional information from the $NN$ interaction~\cite{Huang:2012xj,Nakayama:2008tg}.
Most flexible and successful have been isobar models, where nucleon resonances are treated in an $s$-channel Breit-Wigner parametrization with energy-dependent widths due to the coupling to other decay channels. The non-resonant background in those models is described by $s$- and $u$-channel Born terms and $t$-channel vector meson exchanges. Besides single-channel investigations, a series of coupled-channel partial wave analyses (PWA)~\cite{Shklyar:2006xw,McNicoll:2010qk,Shrestha:2012ep,Kamano:2013iva, Ronchen:2015vfa,Anisovich:2017} have been performed with multiple channels such as $\pi N$, $\sigma N$, $\pi \Delta$, $\eta N$, $K \Lambda$, $K \Sigma$, $\rho N$, $\omega N$, and $\eta^\prime N$. Within the last few months, new updates have been obtained by the Bonn-Gatchina group~\cite{Anisovich:2018}, the J\"ulich-Bonn group~\cite{Ronchen:2018} and the Kent State University group~\cite{KSU2018}. All these PWA are energy-dependent (ED) analyses, where an underlying model determines the functional dependence on the energy and provides continuity and, in an optimal case, also analyticity of the partial wave amplitudes. An energy-independent or single-energy (SE) PWA is free of such a model dependence but depends very much on the availability of a `complete experiment'~\cite{Barker:75} and on analyticity and unitarity constraints. This has been done very successfully for $\pi N$ scattering and pion photoproduction. For $\eta N$ photoproduction such constraints are mostly unavailable, which makes a single-energy PWA much more difficult and can lead to ambiguous solutions. In a very recent work, we have accomplished such an SE PWA for $\eta$ photoproduction using constraints from fixed-$t$ analyticity~\cite{Osmanovic:2017fwe}.
The last update of EtaMAID was done in 2003~\cite{Chiang:2002vq} with a reggeized isobar model for $p(\gamma,\eta)p$ and an extension to $p(\gamma,\eta^\prime)p$ was established for the threshold region, when new data on differential cross sections became available from SAPHIR at ELSA in Bonn~\cite{Plotzke:1998ua}. Combining reggeized $t$-channel exchanges with resonances in the direct channel is by no means a new idea, see e.g. the model of Ref. \cite{Barger:1966zzd} for charge-exchange $\pi N$ scattering or models for meson photoproduction, e.g. EtaMAID2003 \cite{Chiang:2002vq} or Regge-plus-resonance approach for $K\Lambda$ photoproduction \cite{Corthals:2005ce}. In these models, the Regge amplitude is obtained from a fit to high energy data and continued into the resonance region. For $\eta$ and especially $\eta^\prime$ production the Regge regime that sets in at $W\geq 2.5$~GeV is quite close to the accessible part of the resonance region. Matching the invariant amplitudes that are obtained from the low-energy fit onto Regge amplitude thus represents a valuable physics constraint. The advantage from the technical point of view is that it is not necessary to introduce many free parameters which would have been necessary to fix the non-resonant background amplitude, so only resonance parameters are used as fit parameters. However, it has been realized early on that when projected on the $s$-channel partial waves, Regge amplitudes generate resonance-like Schmid loops on the Argand diagram for each partial wave \cite{Schmid:1968zz}, which leads to a general problem of double counting in the extraction of resonance parameters. Collins et al.~\cite{Collins:1969ab} pointed out that to state a correspondence between Regge asymptotic and $s$-channel resonances, one would have to invoke unitarity, as per finite energy sum rules (FESR), see e.g. Ref.~\cite{Dolen:1967jr} for an early application to $\pi N$ scattering. 
With these reservations in mind, we pursue here another method, which uses as background the Regge amplitude with a kinematical suppression factor applied in the resonance region. This damping factor is needed to at least partially remove the double counting. To address this double counting in detail, FESR are the most natural tool, and we postpone this study to upcoming work. Moreover, in view of the Regge-resonance ambiguity, we opt not to discuss the Breit-Wigner resonance parameters returned by the fit in detail. Independently of which procedure is applied, the resonance parametrization using Breit-Wigner amplitudes remains model dependent. Generalized Breit-Wigner amplitudes have enough freedom in the energy dependence of the widths and of the vertex functions that changes in the background can usually be absorbed by the resonance contributions, thus leading to sizeable model uncertainties in masses, widths, branching ratios and photo couplings. In careful treatments, and for resonances with widths $\Gamma \lesssim 120$~MeV, the model dependence is rather mild. Therefore, PDG~\cite{Tanabashi:2018} decided to keep such traditional resonance parameters, even if the spread of values is often quite large. The first priority in newer PWA are the fundamental $t$-matrix pole positions and residues of various elastic and inelastic reactions involving nucleon resonance excitations. In an upcoming work we will use the partial waves obtained here and analyze nucleon resonances by their pole positions and residues with the Laurent-plus-Pietarinen (L+P) method~\cite{Svarc:2014sqa,Svarc:2013laa}. The paper is organized as follows. In section~\ref{sect:Formalism} we first give the basic formalism for kinematics, amplitudes and observables. In section~\ref{sect:IsobarModel} we present the details of our isobar model. We briefly describe the Regge-cut model, which has already been published, and give our formulation for nucleon resonance excitations.
In section~\ref{sect:Results} we present our results on $\eta$ and $\eta^\prime$ photoproduction from protons and neutrons with comparisons to the data and to PWA from other analysis groups. In section~\ref{sect:NarrowResonances} we discuss a recent attempt to search for a narrow $N^*$ resonance near the $\eta^\prime$ threshold. Partial waves are compared with recent solutions by the Bonn-Gatchina, J{\"u}lich-Bonn and Kent-State-University groups in section~\ref{sec:pwa}, before we summarize our method and results in section~\ref{sec:conclusions}. In an appendix we give the formulas for polarization observables used in our analysis and tables of our background and Breit-Wigner resonance parameters. \section{Formalism}\label{sect:Formalism} \subsection{\boldmath Kinematics in $\eta$ photoproduction} For $\eta$ photoproduction on the nucleon, we consider the reaction \begin{equation} \gamma(k)+N(p_i)\rightarrow \eta(q)+N'(p_f)\,, \end{equation} where the variables in brackets denote the four-momenta of the participating particles. These are $k^\mu=(k,\bold{k})$, $q^\mu=(\omega,\bold{q})$ for photon and $\eta$ meson, and $p_i^\mu=(E_i,\bold{p}_i)$, $p_f^\mu=(E_f,\bold{p}_f)$ for the incoming and outgoing nucleon, respectively. The familiar Mandelstam variables are given as \begin{equation} s=W^2=(p_i+k)^2,\qquad t=(q-k)^2,\qquad u=(p_i-q)^2, \end{equation} and their sum is given by the sum of the squared external masses \begin{equation} s+t+u=2m_N^2+m_{\eta}^2\,, \end{equation} where $m_N$ and $m_{\eta}$ are the masses of the nucleon and the $\eta$ meson, respectively. The crossing-symmetric variable is \begin{equation} \nu=\frac{s-u}{4m_N}\,. \end{equation} In the $\eta N$ center-of-mass (c.m.)
system, we have $\bold{p}_i=-\bold{k}$, $\bold{p}_f=-\bold{q}$, and the energies and momenta can be related to the Mandelstam variable $s$ by \begin{equation} k=|\bold{k}|=\frac{s-m_N^2}{2\sqrt{s}},\quad \omega=\frac{s+m_{\eta}^2-m_N^2}{2\sqrt{s}}\,, \end{equation} \begin{equation} q=|\bold{q}|=\left[\left(\frac{s-m_{\eta}^2+m_N^2}{2\sqrt{s}}\right)^2-m_N^2\right]^{\frac{1}{2}}\,, \end{equation} \begin{equation} E_i=\frac{s+m_N^2}{2\sqrt{s}},\quad E_f=\frac{s+m_N^2-m_{\eta}^2}{2\sqrt{s}}\,, \end{equation} $W=\sqrt{s}$ is the c.m. energy. Furthermore, we will also refer to the lab energy of the photon, $E=(s-m_N^2)/(2m_N)$. \subsection{Cross section and polarization observables} \begin{figure*} \begin{center} \includegraphics[width=12.0cm]{figurs/etakinematics-3} \vspace{3mm} \caption{ Kinematics of photoproduction and frames for polarization. The frame $\{x,y,z\}$ is used for target polarization $\{P_x,P_y,P_z\}$, whereas the recoil polarization $\{P_{x'},P_{y'},P_{z'}\}$ is defined in the frame $\{x',y',z'\}$, which is rotated around $y'=y$ by the polar angle $\theta$. The azimuthal angle $\varphi$ is defined in the $\{x,y\}$ plane (b) and is zero in the projection shown in the figure (a).}\label{fig:kin} \end{center} \end{figure*} As depicted in Fig.~\ref{fig:kin}, the photon polarization can be linear or circular. For a linear photon polarization $(P_T=1)$ along the direction $\hat{\bold{x}}$ we define the azimuthal angle $\varphi=0$, and perpendicular, in direction ${\hat{\bold{y}}}$, the polarization angle is $\varphi=\pi/2$. For right-handed circular polarization $P_{\odot}=+1$. We may classify the differential cross sections by the three classes of double polarization experiments and one class of triple polarization experiments, which, however, do not give additional information: \begin{itemize} \item polarized photons and polarized target \end{itemize} \begin{eqnarray} \frac{d \sigma}{d \Omega} & = & \sigma_0 \left\{ 1 - P_T \Sigma \cos 2 \varphi \right. 
\nonumber \\ & & + P_x \left( - P_T H \sin 2 \varphi + P_{\odot} F \right) \nonumber \\ & & + P_y \left( T - P_T P \cos 2 \varphi \right) \nonumber \\ & & \left. + P_z \left( P_T G \sin 2 \varphi - P_{\odot} E \right) \right\} \, , \end{eqnarray} \begin{itemize} \item polarized photons and recoil polarization \end{itemize} \begin{eqnarray} \frac{d \sigma}{d \Omega} & = & \sigma_0 \left\{ 1 - P_T \Sigma \cos 2 \varphi \right. \nonumber \\ & & + P_{x'} \left( -P_T O_{x'} \sin 2 \varphi - P_{\odot} C_{x'} \right) \nonumber \\ & & + P_{y'} \left( P - P_T T \cos 2 \varphi \right) \nonumber \\ & & \left. + P_{z'} \left( -P_T O_{z'} \sin 2 \varphi - P_{\odot} C_{z'} \right) \right\} \, , \end{eqnarray} \begin{itemize} \item polarized target and recoil polarization \end{itemize} \begin{eqnarray} \frac{d \sigma}{d \Omega} & = & \sigma_0 \left\{ 1 + P_{y} T + P_{y'} P + P_{x'} \left( P_x T_{x'} - P_{z} L_{x'} \right) \right. \nonumber \\ & & \left. + P_{y'} P_y \Sigma + P_{z'}\left( P_x T_{z'} + P_{z} L_{z'}\right) \right\}\,. \end{eqnarray} In these equations $\sigma_0$ denotes the unpolarized differential cross section. Instead of asymmetries, in the following we will also discuss the product of the unpolarized cross section with the asymmetries and will use the notation $\check{\Sigma}=\sigma_0\Sigma\,, \check{T}=\sigma_0T\,,\cdots\,$. In appendix \ref{app:obs} we give expressions of the observables in terms of CGLN amplitudes. 
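As a minimal illustration of how the first of these classes is evaluated in practice, the following Python sketch implements the beam-target cross section; the function name and the dictionary keys for the asymmetries are our own conventions.

```python
from math import cos, sin

def dsigma_beam_target(sigma0, obs, PT, Pc, Px, Py, Pz, phi):
    """Beam-target differential cross section (first equation above).

    obs: dict with the asymmetries 'Sigma', 'H', 'F', 'T', 'P', 'G', 'E'.
    PT: degree of linear photon polarization, Pc: circular polarization
    P_odot, (Px, Py, Pz): target polarization, phi: polarization angle.
    """
    c2, s2 = cos(2 * phi), sin(2 * phi)
    return sigma0 * (1 - PT * obs['Sigma'] * c2
                     + Px * (-PT * obs['H'] * s2 + Pc * obs['F'])
                     + Py * (obs['T'] - PT * obs['P'] * c2)
                     + Pz * (PT * obs['G'] * s2 - Pc * obs['E']))
```

With all polarizations switched off the unpolarized cross section $\sigma_0$ is recovered, and for $P_T=1$, $\varphi=0$ and an unpolarized target one gets $\sigma_0(1-\Sigma)$, i.e. the photon-beam asymmetry.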
\subsection{\boldmath Invariant amplitudes} The nucleon electromagnetic current for pseudoscalar meson photoproduction can be expressed in terms of four invariant amplitudes $A_i$~\cite{Chew:1957tf}, \begin{eqnarray}\label{eq:19} J^\mu = \sum_{i=1}^4 A_i(\nu,t)\, M^\mu_i, \end{eqnarray} with the gauge-invariant four-vectors $M^\mu_i$ given by \begin{eqnarray} M^\mu_1&=& -\frac{1}{2}i\gamma_5\left(\gamma^\mu\sl{k}-\sl{k}\gamma^\mu\right)\, , \nonumber\\ M^\mu_2&=&2i\gamma_5\left(P^\mu\, k\cdot(q-\frac{1}{2}k)- (q-\frac{1}{2}k)^\mu\,k\cdot P\right)\, ,\nonumber\\ M^\mu_3&=&-i\gamma_5\left(\gamma^\mu\, k\cdot q -\sl{k}q^\mu\right)\, ,\nonumber\\ M^\mu_4&=&-2i\gamma_5\left(\gamma^\mu\, k\cdot P -\sl{k}P^\mu\right)-2m_N \, M^\mu_1\, , \label{eq:tensor} \end{eqnarray} where $P^\mu=(p_i^\mu+p_f^\mu)/2$, and the gamma matrices are defined as in Ref.~\cite{Bjo65}. The nucleon pole terms for $N(\gamma,\eta)N$, $A_i^{I,pole}$ ($I=+,0$), are given by \begin{eqnarray}\label{eq:Born} A_1^{I,pole} & = & \ \ \ \frac{e\,g_{\eta N}}{2} \left(\frac{1}{s-m_N^2}+\frac{1}{u-m_N^2}\right)\,,\nonumber \\ A_2^{I,pole} & = & -\frac{e\,g_{\eta N}}{t-m^2_\eta} \left(\frac{1}{s-m_N^2}+\frac{1}{u-m_N^2}\right)\,,\nonumber \\ A_3^{I,pole} & = & -\frac{e\,g_{\eta N}}{2m_N}\frac{\kappa^{(I)}}{2} \left(\frac{1}{s-m_N^2}-\frac{1}{u-m_N^2}\right)\,,\nonumber \\ A_4^{I,pole} & = & -\frac{e\,g_{\eta N}}{2m_N}\frac{\kappa^{(I)}}{2} \left(\frac{1}{s-m_N^2}+\frac{1}{u-m_N^2}\right)\,, \label{eq:a1-4pole} \end{eqnarray} with $\kappa^{(+)}= \kappa_p-\kappa_n$, and $\kappa^{(0)}=\kappa_p+\kappa_n$, where $\kappa_p$ and $\kappa_n$ are the anomalous magnetic moments of the proton and the neutron, respectively. \subsection{CGLN amplitudes and multipoles} For PWA the CGLN amplitudes $F_i(W,x)$~\cite{Chew:1957tf} are conveniently used. They are defined in the c.m. frame using Coulomb gauge.
The matrix element ${\cal F}$ with the e.m.\ current of Eq.~(\ref{eq:19}) then reads \begin{eqnarray}\label{cgln2} \begin{split} {\cal F} &= -\epsilon_\mu J_{}^\mu \\ &= i\,({\vec {\sigma}}\cdot{ \hat{\epsilon}}) \, { F}_1 + ({\vec {\sigma}} \cdot\hat { {q}})\, ({\vec {\sigma}} \times \hat{ {k}})\cdot{ \hat{\epsilon}}\,{ F}_2 \\ &+ i\,({\hat{\epsilon}}\cdot\hat { {q}})\, ({\vec {\sigma}} \cdot\hat {{k}}) { F}_3 + i ({ \hat{\epsilon}} \cdot \hat{{q}})({\vec {\sigma}} \cdot \hat { {q}}) \, { F}_4\,, \end{split} \end{eqnarray} where $\epsilon^\mu=(0,\vec{\epsilon})$ and $\vec{\epsilon}\cdot \vec{k} = 0$. In partial-wave analysis of pseudoscalar meson photoproduction it is convenient to work with CGLN amplitudes, which have simple representations in terms of electric and magnetic multipoles and derivatives of Legendre polynomials: \begin{eqnarray} \label{eq:multipoles} \begin{split} F_{1}(W,x) &=\sum_{l=0}^{\infty}[(lM_{l+}(W)+E_{l+}(W))P'_{l+1}(x) \\ &+((l+1)M_{l-}(W)+E_{l-}(W))P'_{l-1}(x)] \,, \\ F_{2}(W,x) &=\sum_{l=1}^{\infty}[(l+1)M_{l+}(W)+lM_{l-}(W)]P_{l}'(x)\,,\\ F_{3}(W,x) &=\sum_{l=1}^{\infty}[(E_{l+}(W)-M_{l+}(W))P''_{l+1}(x) \\ &+(E_{l-}(W)+M_{l-}(W))P''_{l-1}(x)]\,,\\ F_{4}(W,x) &=\sum_{l=2}^{\infty}[M_{l+}(W)-E_{l+}(W)-M_{l-}(W) \\ &-E_{l-}(W)]P_{l}''(x)\,, \end{split} \end{eqnarray} where $x=\cos\theta$ is the cosine of the scattering angle. In appendix \ref{app:FtoA} we give relations between the CGLN and the invariant amplitudes. \section{The isobar model}\label{sect:IsobarModel} In the isobar model the photoproduction amplitudes of $\eta$ and $\eta^\prime$ mesons are written in terms of nucleon resonance excitations in generalized Breit-Wigner form and non-resonant background amplitudes. For simplicity we write all formulas in terms of $(\gamma,\eta)$; all formulas and kinematical relations can easily be extended to $(\gamma,\eta^\prime)$.
For a specific partial wave $\alpha=\alpha(\ell,j=\ell\pm 1/2,{\cal M})$, where $\ell$ is the angular momentum of the $\eta N$ system in the final state, $j$ is the total spin, and ${\cal M}$ stands either for an electric (E) or magnetic (M) transition, the total partial wave amplitude can be written as a sum of a background amplitude $t^{\alpha,b}$ and a resonance amplitude $t^{\alpha,r}$ \begin{equation} t_{\gamma,\eta}^\alpha(W) = t_{\gamma,\eta}^{\alpha,b}(W) + t_{\gamma,\eta}^{\alpha,r}(W)\,. \end{equation} In photoproduction we identify the partial wave amplitudes directly with the electromagnetic multipoles $E_{\ell\pm}$ and $M_{\ell\pm}$. \subsection{The non-resonant background} Traditionally, the background amplitude is taken as a sum of Born terms and $t$-channel meson exchange contributions \begin{equation} t_{\gamma,\eta}^{\alpha,b}(W) = t_{\gamma,\eta}^{\alpha,Born}(W) + t_{\gamma,\eta}^{\alpha,VM}(W)\,. \end{equation} The Born terms for $\eta$ and $\eta^\prime$ photoproduction play a minor role due to the small coupling constants. Whereas the $\pi NN$ coupling is very large, $g^2_{\pi NN}/4\pi\approx 14$, for $\eta$ and $\eta^\prime$ photoproduction $g^2_{\eta NN}/4\pi\sim g^2_{\eta^\prime NN}/4\pi \lesssim 0.1$. This is a rather old observation~\cite{Tiator:1994et}, in contradiction to SU(3) symmetry, which predicts coupling constants of order unity. In all $\eta$ photoproduction analyses this suppression of the Born terms has been confirmed, and extensive studies have even found $g^2_{\eta NN}/4\pi\le 10^{-3}$~\cite{Nakayama:2008tg}. For the $\eta^\prime NN$ coupling our value is in agreement with a combined analysis including also $NN\eta^\prime$~\cite{Huang:2012xj}. Nevertheless, in interference terms and at high energies, the Born terms can play some role, and, as in our previous EtaMAID models, the couplings are determined in the fits to the data.
The Born terms are most easily expressed in terms of invariant amplitudes, and in pseudoscalar coupling they are given by the nucleon pole terms, Eq.~(\ref{eq:Born}). As our goal in the 2018 update is a continuous description of photoproduction from threshold up to the highest energies where experimental data exist ($W\sim 5$~GeV), we introduced an energy-dependent damping in order to suppress the strong rise of the Born terms, which would otherwise violate unitarity at high energies, \begin{equation}\label{eq:Born_damp} g_{\eta N} \rightarrow g_{\eta N}\,\left(\frac{W_{thr}}{W}\right)^{\alpha_B}\,, \end{equation} where $\alpha_B$ is determined in the fit to the data. Ideally, a correct high-energy behavior for the Born contribution should be achieved by replacing the single nucleon exchange in the $u$-channel by a Regge exchange of the nucleon trajectory. Such a modification alone would, however, violate gauge invariance, and a more elaborate approach needs to be applied. We leave this to an upcoming work. For $t$-channel exchanges the invariant amplitudes for vector and axial-vector poles are given by \begin{eqnarray} A_1(t) & = & \frac{e\,\lambda_V\,g_V^{\mathfrak{t}}}{2m_\eta M_N}\; \frac{t}{t-M_V^2}\, , \label{Eq:A1}\\ A_2'(t) & = & - \frac{e\,\lambda_A\,g_A^{\mathfrak{t}}}{2m_\eta M_N} \; \frac{t}{t-M_A^2}\, , \label{Eq:A2}\\ A_3(t)& = & \frac{e\,\lambda_A\,g_A^v}{m_\eta}\; \frac{1}{t-M_A^2}\, , \label{Eq:A3}\\ A_4(t) & = & \frac{-e\,\lambda_V\,g_V^v}{m_\eta}\; \frac{1}{t-M_V^2}\,,\label{Eq:A4} \end{eqnarray} where $\lambda_{V(A)}$ denotes the electromagnetic coupling of the vector ($V$) or axial ($A$) vector mesons with masses $M_{V(A)}$. The constants $g_{V(A)}^{v(\mathfrak{t})}$ denote their vector $(v)$ or tensor $(\mathfrak{t})$ couplings to the nucleon.
In order to separate the vector and tensor contributions from individual mesons we have used the amplitude \begin{equation}\label{Eq:A2p} A_2'(t)=A_1(t)+t\,A_2(t)\, , \end{equation} which has only contributions from the tensor coupling of an axial vector exchange. Unlike in pion production, the physical region for $\eta$ and especially $\eta^\prime$ production starts at considerably higher energy. It is generally expected that already at $\nu \sim 2$ GeV the low-$t$ data are well-represented by Regge exchanges. At the same time, a model with simple vector exchanges becomes inadequate at high energy: a spin-1 exchange leads to a linearly increasing amplitude which violates unitarity, so one is forced to introduce phenomenological form factors to suppress this behavior. To make use of all the data available for $\eta$ photoproduction we propose here an alternative approach: a background function that is smoothly joined onto a Regge amplitude at high energy but is modified in the resonance region to accommodate the nucleon resonances while avoiding double counting. For the Regge amplitudes we follow our recent work on Regge phenomenology in $\pi^0$ and $\eta$ photoproduction, Ref.~\cite{Kashevarov:2017vyl}. In that work we compared and discussed four solutions, differing in the Regge formulation and in the data sets used in the fits. Here for EtaMAID we use our preferred solution I, a Regge-cut model, where the full data set was fitted. \begin{figure} \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{figurs/Regge_eta}} \caption{$t$-channel contributions to $\eta$ photoproduction from single poles (a), Regge poles (b), and Regge cuts (c).
Shown as an example are $\rho$ and $\omega$ meson exchange, with the $\mathbb P$ (Pomeron) and $f_2$ mesons entering the rescattering of two Reggeons.} \label{fig:regge} \end{center} \end{figure} Technically, the $t$-channel exchange of Regge trajectories is done by replacing the single meson propagator by the following expression \begin{eqnarray} \label{eq:Regge-1} \begin{split} &\frac{1}{t-M^2} \Rightarrow \\ &D(s,t)=\left(\frac{s}{s_0}\right)^{\alpha(t)-1}\; \frac{\pi\,\alpha^\prime}{\mbox{sin}[\pi\alpha(t)]}\; \frac{{\cal S} + e^{-i\pi\alpha(t)}}{2}\; \frac{1}{\Gamma(\alpha(t))}\,, \end{split} \end{eqnarray} where $M$ is the mass of the Reggeon, $\cal S$ is the signature of the Regge trajectory (${\cal S}=-1$ for vector and axial-vector mesons), and $s_0$ is a mass scale factor, commonly set to 1~GeV$^2$. The Gamma function $\Gamma(\alpha(t))$ is introduced to suppress additional poles of the propagator. In addition, Regge cuts are included in our model. The Regge cuts can be understood as a rescattering effect at high energies: e.g.\ an $\eta$ is produced via vector or axial-vector exchange in a first step and then rescattered via Pomeron or tensor exchange. This effect is shown in Fig.~\ref{fig:regge}~(c) as contracted box diagrams, where two trajectories are exchanged consecutively. \begin{figure*} \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{figurs/Regge_traj}} \caption{Regge trajectories: (a) $\rho$ black, $\omega$ red, $\phi$ blue, $b_1$ and $h_1$ green, $\rho_2$ and $\omega_2$ magenta; dashed and dash-dotted magenta lines are $\rho_2$ and $\omega_2$ of Ref.~\cite{Anisovich:2002-1,Anisovich:2002-2}; (b) $f_2$ red, $\mathbb P$ magenta, $\rho f_2$ blue solid, $\omega f_2$ blue dashed, $\rho \mathbb P$ black solid, $\omega \mathbb P$ black dashed.
} \label{fig:traj} \end{center} \end{figure*} The trajectories for $f_2$ and ${\mathbb P}$ are shown in Fig.~\ref{fig:traj}~(b) together with four cut trajectories $\rho{\mathbb P}$, $\omega{\mathbb P}$ (black solid and dashed lines) and $\rho f_2$, $\omega f_2$ (blue solid and dashed lines). Parameters of the Reggeon and cut trajectories used in the present work are given in our previous paper~\cite{Kashevarov:2017vyl}. All four Regge cuts can contribute to vector and axial vector exchanges and can be written in the following form~\cite{Donnachie:2015jaa} \begin{equation} D_{cut} =\left(\frac{s}{s_0}\right)^{\alpha_c(t)-1}\; e^{-i\pi\alpha_c(t)/2}\; e^{d_c t} \,. \end{equation} In total, the vector meson propagators are replaced by \begin{equation} D_V \Rightarrow {D}_{V} + c_{V\mathbb P}\,{D}_{V\mathbb P} + c_{Vf_2}\,D_{Vf_2},\; V=\rho,\omega \end{equation} and the axial vector meson propagators are replaced by \begin{equation} D_A \Rightarrow {D}_{A} + \sum_{V=\rho,\omega} ({\tilde c}_{V\mathbb P}\,{D}_{V\mathbb P} + {\tilde c}_{Vf_2}\,D_{Vf_2}),\; A=b_1, h_1\,, \end{equation} where the coefficients $c_{V\mathbb P},c_{Vf_2}$ are for natural-parity cuts and ${\tilde c}_{V\mathbb P},{\tilde c}_{Vf_2}$ for unnatural-parity cuts; both sets are obtained by a fit to the data.
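To illustrate the structure of the Regge propagator $D(s,t)$ and the cut term $D_{cut}$, the following minimal sketch evaluates both for a linear trajectory $\alpha(t)=\alpha_0+\alpha' t$. The trajectory and slope parameters below are placeholders for illustration, not the fitted values of Ref.~\cite{Kashevarov:2017vyl}.

```python
import cmath
import math

S0 = 1.0  # mass scale s_0 in GeV^2, set to 1 GeV^2 as in the text

def regge_propagator(s, t, alpha0, alpha_prime, signature=-1):
    """Reggeized propagator D(s,t) replacing 1/(t - M^2) for a linear
    trajectory alpha(t) = alpha0 + alpha_prime * t; signature S = -1
    for vector and axial-vector mesons."""
    a = alpha0 + alpha_prime * t
    return ((s / S0) ** (a - 1.0)
            * math.pi * alpha_prime / math.sin(math.pi * a)
            * (signature + cmath.exp(-1j * math.pi * a)) / 2.0
            / math.gamma(a))

def regge_cut(s, t, alpha0_c, alpha_prime_c, d_c):
    """Regge-cut term D_cut(s,t) with exponential slope parameter d_c."""
    a_c = alpha0_c + alpha_prime_c * t
    return ((s / S0) ** (a_c - 1.0)
            * cmath.exp(-1j * math.pi * a_c / 2.0)
            * math.exp(d_c * t))
```

At $\alpha(t)=1/2$ the signature factor $({\cal S}+e^{-i\pi\alpha})/2$ gives equal real and imaginary parts of $D(s,t)$, which provides a convenient numerical check.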
In detail, the invariant amplitudes will be changed in the following way \begin{eqnarray}\label{Eq:cuts} \begin{split} \lambda_\rho\,g_\rho^{v,\mathfrak{t}}\;\frac{1}{t-M_\rho^2} &\rightarrow \lambda_\rho\,g_\rho^{v,\mathfrak{t}}\\ &\hspace{-1.8cm}[D_\rho(s,t) + c_{\rho \mathbb P}\,D_{\rho \mathbb P}(s,t) + c_{\rho f}\,D_{\rho f}(s,t)] \,,\\ \lambda_\omega\,g_\omega^{v,\mathfrak{t}}\;\frac{1}{t-M_\omega^2} &\rightarrow \lambda_\omega\,g_\omega^{v,\mathfrak{t}}\\ &\hspace{-1.8cm}[D_\omega(s,t) + c_{\omega \mathbb P}\,D_{\omega \mathbb P}(s,t) + c_{\omega f}\,D_{\omega f}(s,t)]\,,\\ \lambda_{b_1}\,g_{b_1}^{\mathfrak{t}}\;\frac{1}{t-M_{b_1}^2} &\rightarrow \lambda_{b_1}\,g_{b_1}^{\mathfrak{t}} D_{b_1}(s,t)\\ &\hspace{-1.8cm}+\lambda_\rho \,g_\rho^{\mathfrak{t}}\,[{\tilde c}_{\rho \mathbb P} \,D_{\rho \mathbb P}(s,t) + {\tilde c}_{\rho f_2} \,D_{\rho f_2}(s,t)] \\ &\hspace{-1.8cm}+\,\lambda_\omega\,g_\omega^{\mathfrak{t}}\,[{\tilde c}_{\omega \mathbb P}\,D_{\omega \mathbb P}(s,t) + {\tilde c}_{\omega f_2}\,D_{\omega f_2}(s,t)]\,. \end{split} \end{eqnarray} In practical calculations, it turns out that the axial-vector Regge pole contributions, proportional to $D_A$, can be neglected, but the axial-vector Regge cuts arising from $\rho$ and $\omega$ together with $\mathbb P$ and $f_2$ are very important, in particular for polarization observables such as the photon beam asymmetry $\Sigma$. The Regge cuts also resolve a long-standing problem of finding suitable candidates for the $A_3$ amplitude. While vector and axial-vector single pole or Regge pole exchanges do not contribute, Regge-cut exchanges $\rho{f_2}$ and $\omega{f_2}$ satisfy all conservation law requirements. On the other hand, the $\rho{\mathbb P}$ and $\omega{\mathbb P}$ cuts do not contribute to the $A_3$ amplitude. The main aspect of EtaMAID is the exploration of nucleon resonance excitation. Adding Regge amplitudes and resonances together, one runs into the well-known double-counting problem.
The duality principle states that the full amplitude can be obtained by summing an infinite tower of either $s$- or $t$-channel resonances. In isobar models only a finite number of nucleon resonances is considered in the $s$-channel; still, one cannot fully avoid this problem. Various methods have been discussed in the literature to deal with this problem. The so-called Regge-plus-Resonance models simply ignore double counting. In another approach, applied e.g.\ in EtaMAID2003~\cite{Chiang:2002vq} and in the Bonn-Gatchina model~\cite{Anisovich:2017}, the lowest partial waves, where $s$-channel resonances are added, were projected out of the Regge amplitudes. In models where many nucleon resonances are taken into account, this would, however, remove almost the entire background amplitude in the resonance region. Recently, the concept of finite-energy sum rules was discussed and applied to $\pi^0$ and $\eta$ photoproduction~\cite{Nys:2016vjz,Mathieu:2018mjw}, where resonance and Regge regions can be well separated and smoothly matched together. Those applications for $\eta$ photoproduction are still in progress. Here we apply a further method, where the double counting is removed by introducing a damping factor $F_d(W)$ for the Regge amplitudes, which goes to zero at $\eta$ threshold and approaches unity above some energy, \begin{eqnarray}\label{eq:Regge_damp} A_i^{Regge} &\rightarrow& A_i^{Regge}\cdot F_d(W)\\ \mbox{with}\quad F_d(W) &=& \left( 1-e^{-\frac{W-W_{thr}}{\Lambda_R}}\right) \theta(W-W_{thr})\,. \end{eqnarray} The scale $\Lambda_R$ describes at which energy the Regge description fully sets in and is obtained in the fit. For a very small $\Lambda_R$ the damping factor introduced above becomes a step function, whereas for large $\Lambda_R$ it approaches the unperturbed Regge amplitude only asymptotically. The way this damping factor cures the double-counting problem can be seen as follows.
Assume that an exact dual representation of the scattering amplitude $t$ is realized and entails an infinite sum over the entire resonance spectrum in either $s$- or $t$-channel, \begin{eqnarray} t=\sum_{i=1}^\infty t^{Res_i}_s=\sum_{i=1}^\infty t^{Res_i}_t. \end{eqnarray} At high $s$-channel energy, the $t$-channel sum can actually be performed in terms of an exchange of a few leading Regge trajectories $\alpha_i$, $t^{Regge}\sim \sum_i c_i\nu^{\alpha_i}$. For the $s$-channel resonances, in turn, accounting for the full spectrum is not possible, and we limit ourselves to explicitly including only the lowest resonances up to $i=N$. We write, \begin{eqnarray} t&=&\sum_{i=1}^N t^{Res_i}_s+\left[ \sum_{i=1}^\infty t^{Res_i}_t-\sum_{i=1}^N t^{Res_i}_s\right]\nonumber\\ &\approx&\sum_{i=1}^N t^{Res_i}_s+ F_d(W)t^{Regge}. \end{eqnarray} The exact balance between the $s$-channel resonances and the part of the Regge amplitude removed by the damping factor can be controlled explicitly by the FESR. We will address these in an upcoming work. Parameters for the background can be found in table~\ref{tab:background} in appendix \ref{app:BG-BW}. \subsection{Nucleon resonance excitations} For a given partial wave $\alpha$, a set of $N_\alpha$ nucleon resonances are added as generalized Breit-Wigner (BW) functions with a unitary phase $\phi$ for each resonance, \begin{equation}\label{eq:tres} t_{\gamma,\eta}^{\alpha,r}(W)=\sum_{j=1}^{N_\alpha}\, t_{\gamma,\eta}^{\alpha,BW,j}(W)\,e^{i\phi_j}\,. \end{equation} Due to the weakness of photoproduction, where the moduli of the $t$-matrices are typically of the order $10^{-2}$ or smaller, a simple addition of multiple resonances is sufficient and does not violate unitarity. The phase $\phi_j$ introduced in Eq.~(\ref{eq:tres}) is new for our EtaMAID models but was always applied in pion photo- and electroproduction. 
Whereas in $(\gamma,\pi)$ the Watson theorem determines the phase $\phi_j$ at least below the $\pi\pi$ threshold, in $\eta$ and $\eta^\prime$ production we have no theoretical guideline and use $\phi_j$ as a fit parameter. Furthermore, $\phi_j$ will be a constant in this work, while in general it can be an energy-dependent function with proper threshold behavior. The phase $\phi_j$ is often also called the `background phase', because it is indirectly determined by the background, which is different for the different channels $\eta p, \eta n, \eta^\prime p, \eta^\prime n$ and also different for electric and magnetic multipoles. For a given partial wave $\alpha$, the relevant multipoles $\mathcal{M}_{\ell\pm}$ ($E_{\ell\pm},\, M_{\ell\pm}$) are assumed to have a Breit-Wigner energy dependence of the following form \begin{eqnarray}\label{Eq:BWres} \begin{split} &t_{\gamma,\eta}^{\alpha,BW}(W)= \mathcal{M}_{\ell\pm}(W) \\ &= \bar{\mathcal{M}}_{\ell\pm}\, f_{\gamma N}(W)\, \frac{M_R \Gamma_\mathrm{tot}(W)}{M_R^2-W^2-i M_R \Gamma_\mathrm{tot}(W)}\, f_{\eta N}(W)\, C_{\eta N} \,, \end{split} \end{eqnarray} where $f_{\eta N}(W)$ is the usual Breit-Wigner factor describing the $\eta N$ decay of the $N^*$ resonance with total energy dependent width $\Gamma_\mathrm{tot}(W)$, partial width $\Gamma_{\eta N}(W)$ and spin $J$, \begin{equation} \label{eq:fetaN_Maid} f_{\eta N}(W) = \zeta_{\eta N} \left[ \frac{1}{(2J+1)\pi}\, \frac{k(W)}{q_{\eta}(W)}\, \frac{M_N}{M_R}\, \frac{\Gamma_{\eta N}(W)}{\Gamma_\mathrm{tot}(W)^2} \right]^{1/2}, \end{equation} with $k$ and $q_{\eta} = q$ the photon and $\eta$ meson momenta in the c.m. system, and $\zeta_{\eta N} = \pm 1$ a relative sign between the $N^* \rightarrow \eta N$ and $N^* \rightarrow \pi N$ couplings. $C_{\eta N}$ is an isospin factor, which is $-1$ for $\eta N$ and $\eta^\prime N$ final states in the conventions used in our work. 
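A minimal numerical sketch of the generalized Breit-Wigner form of Eqs.~(\ref{Eq:BWres}) and (\ref{eq:fetaN_Maid}) follows; for simplicity the widths are taken as constants, and all names and numerical values are illustrative assumptions rather than EtaMAID code.

```python
import math

def bw_resonance_factor(W, M_R, Gamma_tot):
    """Energy-dependent Breit-Wigner shape
    M_R*Gamma_tot / (M_R^2 - W^2 - i*M_R*Gamma_tot); all in GeV."""
    return M_R * Gamma_tot / (M_R ** 2 - W ** 2 - 1j * M_R * Gamma_tot)

def f_eta_N(J, k, q_eta, M_N, M_R, Gamma_eta, Gamma_tot, zeta=+1):
    """Breit-Wigner vertex factor f_etaN for an N* of spin J;
    zeta = +-1 is the relative sign of the eta-N and pi-N couplings."""
    return zeta * math.sqrt(
        k / q_eta * M_N / M_R * Gamma_eta / Gamma_tot ** 2
        / ((2 * J + 1) * math.pi))
```

For constant widths the resonance factor equals $i$ exactly at $W=M_R$, and its modulus never exceeds unity, since $|M_R^2-W^2-iM_R\Gamma_\mathrm{tot}|\ge M_R\Gamma_\mathrm{tot}$.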
For the total widths of the resonances, we assume up to seven decay channels, $\pi N$, $\pi\pi N$, $\eta N$, $K\Lambda$, $K\Sigma$, $\omega N$, and $\eta^\prime N$, \begin{equation} \Gamma_\mathrm{tot}(W) = \Gamma_{\pi N}(W) + \Gamma_{\pi\pi N}(W) + \Gamma_{\eta N}(W)+ \cdots\,. \end{equation} The threshold energies for the decays are listed in table~\ref{tab:thresholds}. \begin{table} [h] \caption{Threshold energies in MeV of various $N^*$ decay channels.\label{tab:thresholds}} \begin{tabular}{|c|c|c|c|c|c|c|} \hline $\pi N$ & $\pi\pi N$ & $\eta N$ & $K \Lambda$ & $K \Sigma$ & $\omega N$ & $\eta^\prime N$ \\ \hline 1077.84 & 1217.41 & 1486.13 & 1609.36 & 1686.32 & 1720.92 & 1896.05\\ \hline \end{tabular} \end{table} The energy dependence of the partial widths is given by \begin{eqnarray}\label{eq:BW_widths_std} \Gamma_{\pi N}(W) &=& \beta_{\pi N }\,\Gamma_R \left(\frac{q_\pi(W)}{q_{\pi,R}}\right)^{2\ell+1} \left(\frac{X^2+q_{\pi,R}^2}{X^2+q_\pi(W)^2}\right)^\ell \,,\\ \Gamma_{\eta N}(W) &=& \beta_{\eta N }\,\Gamma_R \left(\frac{q_\eta(W)}{q_{\eta,R}}\right)^{2\ell+1} \left(\frac{X^2+q_{\eta,R}^2}{X^2+q_\eta(W)^2}\right)^\ell \,,\\ \Gamma_{\pi\pi N}(W) &=& \beta_{\pi\pi N }\,\Gamma_R \left(\frac{q_{2\pi}(W)}{q_{2\pi,R}}\right)^{2\ell+5} \left(\frac{X^2+q_{2\pi,R}^2}{X^2+q_{2\pi}(W)^2}\right)^{\ell+2}\,,\label{eq:BW_widths_std_2pi} \end{eqnarray} where $X$ is a cut-off parameter, which has been fixed in the present work to $X=450$~MeV. The c.m. momenta of pion and eta are denoted by $q_\pi$ and $q_\eta$; for the effective $2\pi$ channel we use a mass of $2m_\pi$. All momenta, taken at the resonance position, $W=M_R$, are denoted by an additional index $R$. All other 2-body channels are parameterized similarly to $\pi N$ or $\eta N$. In general the dynamics of 3-body decays, as for the $\pi\pi N$ channel, is rather complicated and has been studied most extensively in the J{\"u}lich model.
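The energy-dependent widths of Eqs.~(\ref{eq:BW_widths_std})--(\ref{eq:BW_widths_std_2pi}) can be sketched directly; by construction each width reduces to $\beta_a\Gamma_R$ at the resonance position, where $q=q_R$. The helper names below are ours, with momenta in GeV and the cut-off fixed at $X=0.450$~GeV as in the text.

```python
def gamma_2body(q, q_R, beta, Gamma_R, ell, X=0.450):
    """2-body partial width with barrier factors; q = q_a(W) in GeV."""
    return (beta * Gamma_R * (q / q_R) ** (2 * ell + 1)
            * ((X ** 2 + q_R ** 2) / (X ** 2 + q ** 2)) ** ell)

def gamma_2pi(q, q_R, beta, Gamma_R, ell, X=0.450):
    """Effective pi-pi-N width with exponents 2l+5 and l+2."""
    return (beta * Gamma_R * (q / q_R) ** (2 * ell + 5)
            * ((X ** 2 + q_R ** 2) / (X ** 2 + q ** 2)) ** (ell + 2))
```

The $q^{2\ell+1}$ threshold behavior also guarantees that each 2-body width vanishes at its channel threshold, $q\to 0$.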
For single meson photoproduction the effective 2-body treatment works very well. For the energy dependence of the photon vertex, we assume the form \begin{equation} \label{eq:fgammaN} f_{\gamma N}(W) = \left(\frac{k(W)}{k_R}\right)^2 \, \left(\frac{X_{\gamma}^2+k_R^2}{X_{\gamma}^2+k(W)^2}\right)^2\,, \end{equation} with the photon c.m. momentum $k$, which takes the value $k_R$ at the resonance position. In EtaMAID2018 we found the best fits for $X_{\gamma}=0$. The so-called reduced multipoles $\bar{\mathcal{M}}_{\ell\pm}$ are related to the photon decay amplitudes $A_{1/2}$ and $A_{3/2}$ by \begin{eqnarray} \bar{M}_{\ell+} &=& -\frac{1}{\ell+1}\left(A_{1/2}^{\ell+}+\sqrt{\frac{\ell+2}{\ell}}A_{3/2}^{\ell+}\right)\,,\\ \bar{E}_{\ell+} &=& -\frac{1}{\ell+1}\left(A_{1/2}^{\ell+}-\sqrt{\frac{\ell}{\ell+2}}A_{3/2}^{\ell+}\right)\,,\\ \bar{M}_{\ell+1, -} &=& +\frac{1}{\ell+1}\left(A_{1/2}^{\ell+1, -}-\sqrt{\frac{\ell}{\ell+2}}A_{3/2}^{\ell+1, -}\right)\,,\\ \bar{E}_{\ell+1, -} &=& -\frac{1}{\ell+1}\left(A_{1/2}^{\ell+1, -}+\sqrt{\frac{\ell+2}{\ell}}A_{3/2}^{\ell+1, -}\right)\,. \end{eqnarray} For specific resonances, see table~\ref{tab:amplitudes}.
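The relations between the reduced multipoles and the photon decay amplitudes can be encoded directly. The following sketch (with our own helper names) reproduces the entries of table~\ref{tab:amplitudes}, e.g.\ for $P_{13}$ and $D_{13}$, which both correspond to $\ell=1$:

```python
import math

def reduced_multipoles_lplus(ell, A12, A32):
    """(E_bar, M_bar) for j = ell + 1/2 resonances (multipoles l+)."""
    Ebar = -(A12 - math.sqrt(ell / (ell + 2)) * A32) / (ell + 1)
    Mbar = -(A12 + math.sqrt((ell + 2) / ell) * A32) / (ell + 1)
    return Ebar, Mbar

def reduced_multipoles_lminus(ell, A12, A32):
    """(E_bar, M_bar) for multipoles (ell+1)-."""
    Ebar = -(A12 + math.sqrt((ell + 2) / ell) * A32) / (ell + 1)
    Mbar = +(A12 - math.sqrt(ell / (ell + 2)) * A32) / (ell + 1)
    return Ebar, Mbar
```

For $\ell=1$ these formulas give exactly the $P_{13}$ and $D_{13}$ rows of table~\ref{tab:amplitudes}.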
\begin{table}[ht] \caption{The reduced multipoles ${\bar{\cal M}}_{\alpha}$ in terms of the photon decay amplitudes $A_\lambda$.\\}\label{tab:amplitudes} \begin{center} \begin{tabular}{|c|ccc|ccc|} \hline $N^{\ast}$ & & $\bar{E}$ & & & $\bar{M}$ & \\ \hline $S_{11}$ & & $-A_{1/2}$ & & & --- & \\ $P_{11}$ & & --- & & & $A_{1/2}$ & \\ $P_{13}$ & & $\frac{1}{2}(\frac{1}{\sqrt{3}}A_{3/2}-A_{1/2})$ & & & $-\frac{1}{2}(\sqrt{3}A_{3/2}+A_{1/2})$ & \\ $D_{13}$ & & $-\frac{1}{2}(\sqrt{3}A_{3/2}+A_{1/2})$ & & & $-\frac{1}{2}(\frac{1}{\sqrt{3}}A_{3/2}-A_{1/2})$ & \\ $D_{15}$& & $\frac{1}{3}(\frac{1}{\sqrt{2}}A_{3/2}-A_{1/2})$ & & & $-\frac{1}{3}(\sqrt{2}A_{3/2}+A_{1/2})$ & \\ $F_{15}$ & & $-\frac{1}{3}(\sqrt{2}A_{3/2}+A_{1/2})$ & & & $-\frac{1}{3}(\frac{1}{\sqrt{2}}A_{3/2}-A_{1/2})$ & \\ $F_{17}$ & & $\frac{1}{4}(\sqrt{\frac{3}{5}}A_{3/2}-A_{1/2})$ & & & $-\frac{1}{4}(\sqrt{\frac{5}{3}}A_{3/2}+A_{1/2})$ & \\ $G_{17}$ & & $-\frac{1}{4}(\sqrt{\frac{5}{3}}A_{3/2}+A_{1/2})$ & & & $-\frac{1}{4}(\sqrt{\frac{3}{5}}A_{3/2}-A_{1/2})$ & \\ $G_{19}$ & & $\frac{1}{5}(\sqrt{\frac{2}{3}}A_{3/2}-A_{1/2})$ & & & $-\frac{1}{5}(\sqrt{\frac{3}{2}}A_{3/2}+A_{1/2})$ & \\ \hline \end{tabular} \end{center} \end{table} So far we have assumed that the resonance mass $M_R$ lies above all considered decay thresholds. However, since nucleon resonances have decay widths of the order of 100~MeV and more, excitation of a resonance is still very likely if its nominal Breit-Wigner mass is only a few MeV below threshold. Even for the Roper resonance, which is about 50~MeV below $\eta N$ threshold, an excitation in $\eta$ photoproduction can be considered due to the large width of $350$~MeV. In such a case, however, the c.m. momentum $q_{a,R}$, which appears in the parametrization of the partial width $\Gamma_a(W)$, is no longer defined. In fact, one can analytically continue the momenta below threshold, where $q_a^2<0$, and obtains imaginary values. In the literature, two different methods are discussed.
The first one takes a sharp cut-off with a $\theta$-function, giving a zero value for the partial width below threshold. This is our EtaMAID approach. The second one (Flatt\'e's approach~\cite{Flatte:1976xu}) uses the analytical continuation of the momentum below threshold and accepts the imaginary contribution of the width as a physical contribution to the mass. For both methods we can generalize the parametrization of a partial width for arbitrary resonance masses \begin{eqnarray} \Gamma_{a}(W) &=& g_a^2\;q_a(W)\, \left(\frac{|q_{a}^2(W)|}{X^2+|q_a^2(W)|}\right)^\ell\,. \end{eqnarray} The squared momenta $q_a^2(W)$ become negative below threshold and could even produce singularities on the real axis in the physical region. Therefore, we take the absolute values. For resonances with masses larger than $W_{a,thr}$ this form can be compared with the previous one, e.g.\ Eq.~(\ref{eq:BW_widths_std}), which gives the relation between the coupling constants $g_a$ and the branching ratios $\beta_a$, \begin{eqnarray} \beta_{a} &=& \frac{g_a^2\;q_a(M_{R})}{\Gamma_{R}\; (1+X^2/q_a^2(M_{R}))^\ell}\,,\\ g_{a}^2 &=& \frac{\beta_a\,\Gamma_{R}}{q_a(M_{R})}\; (1+X^2/q_a^2(M_{R}))^\ell\,. \end{eqnarray} For the 3-body $2\pi$ channel we also make a small adjustment, \begin{eqnarray} \Gamma_{\pi\pi}(W) &=& g_{\pi\pi}^2\;q_{2\pi}(W)\, \left(\frac{q_{2\pi}^2(W)}{X^2+q_{2\pi}^2(W)}\right)^{\ell+2}\,, \end{eqnarray} however, with a slightly different asymptotic behavior compared to Eq.~(\ref{eq:BW_widths_std_2pi}). For both the $\pi N$ and $\pi\pi N$ channels, all nucleon resonances are above threshold and the conventional definition of branching ratios can be used. For $\eta N$ only the Roper resonance $N(1440)\frac{1}{2}^+$ is below threshold. In the $K\Sigma$ channel $N(1650)\frac{1}{2}^-$ and in the $\omega N$ channel $N(1710)\frac{1}{2}^+$ are below threshold but with large couplings that make significant contributions above threshold.
Finally, in the $\eta^\prime N$ channel we even find four states below threshold, see table~\ref{tab:BWhadronic} in the appendix~\ref{app:BG-BW}. \section{Results}\label{sect:Results} \subsection{\boldmath Data base} \label{sec:data-base} In our analysis we only use modern data which cover a broad energy and angular range. We prefer datasets with the smallest statistical uncertainties, and we only combine data from different experiments if they are in agreement in overlapping energy regions without including additional scaling parameters. The unpolarized differential cross section has been measured with by far the highest accuracy at MAMI. From several datasets we use those with the most sophisticated reconstruction and error analysis \cite{Kashevarov:2017}. The energy range of MAMI is limited to $W < 1970$~MeV. We used the differential cross sections from the CLAS Collaboration~\cite{Williams:2009yj} in this fit because, compared to the CBELSA/TAPS data~\cite{Crede:2009zzb}, they have much smaller statistical errors, larger energy coverage, and better agreement with the high-statistics data from A2MAMI~\cite{Kashevarov:2017} in the overlapping energy region. The angular-dependent systematic uncertainty for the results of Run-I and Run-II above $W=1796$~MeV was evaluated as 3\%; for Run-III and for the $\eta'$ differential cross sections it amounts to 5--6\%. These uncertainties were added in quadrature to the statistical uncertainties~\cite{Kashevarov:2017}. For other data, we use only statistical uncertainties in the fit. The photon beam asymmetry $\Sigma$ has been measured over the full resonance region by GRAAL and CLAS. We include all polarized target and beam-target asymmetries from modern experiments. Old data, in particular an early target asymmetry measurement at ELSA \cite{Bock1998}, cannot compete with regard to statistical and systematic uncertainties and are not used in our analysis. The differential cross sections cover the energy region from threshold up to $W=2.8$~GeV.
Polarization observables are available from threshold up to $W=1.85$~GeV for $T$ and $F$, up to $W=2.13$~GeV for $E$, and up to $W=2.08$~GeV for $\Sigma$. These are five polarization observables for the $\eta p$ channel with good energy and angular coverage, which is, however, still far from a complete experiment, which would require at least eight observables including some with recoil polarization detection. Therefore, some ambiguities in the PWA can be expected. Data sets for the other reactions are much scarcer than for $\gamma p \to \eta p$. In the $\eta n$ channel we have only three observables, for $\eta^\prime p$ two, and for $\eta^\prime n$ just the differential cross section alone, see table~\ref{tab:expdata}. In our fits to the data we have used a total of 208 parameters. For the resonance sector with 21 $N^*$ resonances we have 112 parameters for the BW parametrizations and 66 for unitarity phases. The background is described with 20 parameters, mainly for the Regge parametrization. \begin{table*}[htbp] \begin{center} \caption{\label{tab:expdata} Experimental data on $\eta$ and $\eta^\prime$ photoproduction. The column `used' shows the data that were included in our fits and those that were ignored.
$N$ is the number of data points and $\chi^2$ is the total weighted deviation from our standard 2018 solution for that dataset.} \bigskip \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Observable & Reaction & used &$W$~[MeV] &$N$ &$\chi^2$ &$\chi^2/N$ & Reference \\ \hline $\sigma_0$ &$p(\gamma,\eta)p$ & --- &$1488 - 1870$ &2880 &9502& 3.3& A2MAMI-17 (Run I)~\cite{Kashevarov:2017} \\ $\sigma_0$ &$p(\gamma,\eta)p$ &$\surd$&$1488 - 1891$ &2712 &4437& 1.6& A2MAMI-17 (Run II)~\cite{Kashevarov:2017} \\ $\sigma_0$ &$p(\gamma,\eta)p$ &$\surd$&$1888 - 1957$ & 288 &329& 1.1& A2MAMI-17 (Run III)~\cite{Kashevarov:2017} \\ $\sigma_0$ &$p(\gamma,\eta)p$ &$\surd$&$1965 - 2795$ & 634 &2276& 3.6& CLAS-09~\cite{Williams:2009yj} \\ $\sigma_0$ &$p(\gamma,\eta)p$ & --- &$1588 - 2370$ & 680 &8640& 13.& CBELSA/TAPS-09~\cite{Crede:2009zzb} \\ $\Sigma$ &$p(\gamma,\eta)p$ &$\surd$&$1496 - 1908$ & 150 &394& 2.6& GRAAL-07~\cite{Bartalini:2007fg} \\ $\Sigma$ &$p(\gamma,\eta)p$ &$\surd$&$1700 - 2080$ & 214 &617& 2.9& CLAS-17~\cite{Collins:2017} \\ $T$ &$p(\gamma,\eta)p$ &$\surd$&$1497 - 1848$ & 144 &246& 1.7& A2MAMI-14~\cite{Akondi:2014} \\ $F$ &$p(\gamma,\eta)p$ &$\surd$&$1497 - 1848$ & 144 &246& 1.7& A2MAMI-14~\cite{Akondi:2014} \\ $E$ &$p(\gamma,\eta)p$ &$\surd$&$1525 - 2125$ & 73 &155& 2.1& CLAS-16~\cite{Senderovich:2016} \\ $E$ &$p(\gamma,\eta)p$ &$\surd$&$1505 - 1882$ & 135 &255& 1.9& A2MAMI-17~\cite{Witthauer:2017} \\ \hline $\sigma_0$ &$n(\gamma,\eta)n$ &$\surd$&$1492 - 1875$ & 880 &3079& 3.5& A2MAMI-14~\cite{Werthmueller:2014} \\ $\sigma_0$ &$n(\gamma,\eta)n$ & --- &$1505 - 2181$ & 322 &2986& 9.3& CBELSA/TAPS-11~\cite{Jaegle2:2011} \\ $\sigma_0$ &$n(\gamma,\eta)n$ & --- &$1588 - 2070$ & 317 &4992& 16.& CBELSA/TAPS-17~\cite{Witthauer:2017bonn} \\ $\Sigma$ &$n(\gamma,\eta)n$ &$\surd$&$1504 - 1892$ & 99 &177& 1.8& GRAAL-08~\cite{Fantini:2008} \\ $E$ &$n(\gamma,\eta)n$ &$\surd$&$1505 - 1882$ & 135 &209& 1.5& A2MAMI-17~\cite{Witthauer:2017} \\ \hline $\sigma_0$ &$p(\gamma,\eta^\prime)p$ 
&$\surd$&$1898 - 1956$ & 120 &198& 1.7& A2MAMI-17~\cite{Kashevarov:2017} \\ $\sigma_0$ &$p(\gamma,\eta^\prime)p$ &$\surd$&$1925 - 2795$ & 681 &2013& 3.0& CLAS-09~\cite{Williams:2009yj} \\ $\sigma_0$ &$p(\gamma,\eta^\prime)p$ & --- &$1934 - 2351$ & 200 &278& 1.4& CBELSA/TAPS-09~\cite{Crede:2009zzb} \\ $\Sigma$ &$p(\gamma,\eta^\prime)p$ &$\surd$&$1903 - 1913$ & 14 &35& 2.5& GRAAL-15~\cite{Sandri:2015} \\ $\Sigma$ &$p(\gamma,\eta^\prime)p$ &$\surd$&$1904 - 2080$ & 62 &85& 1.4& CLAS-17~\cite{Collins:2017} \\ \hline $\sigma_0$ &$n(\gamma,\eta^\prime)n$ &$\surd$&$1936 - 2342$ & 170 &191& 1.1& CBELSA/TAPS-11~\cite{Jaegle:2011} \\ \hline \end{tabular} \end{center} \end{table*} \subsection{Total cross sections} We begin the discussion of our results with the total cross sections of the four channels considered in our work: $p(\gamma,\eta)p$, $n(\gamma,\eta)n$, $p(\gamma,\eta^\prime)p$, $n(\gamma,\eta^\prime)n$. The data in Figs.~\ref{fig:tcs} - \ref{fig:tcs_4models_eta} are from the A2 Collaboration at MAMI: A2MAMI-17~\cite{Kashevarov:2017} and A2MAMI-14~\cite{Werthmueller:2014}; and from the CBELSA/TAPS Collaboration: CBELSA/TAPS-09~\cite{Crede:2009zzb}, CBELSA/TAPS-11 for $\eta n$~\cite{Jaegle2:2011} and for $\eta^\prime n$~\cite{Jaegle:2011}, and CBELSA/TAPS-17~\cite{Witthauer:2017bonn}. In the case of the CLAS-09 data, we show data points that were obtained in a Legendre fit to the differential cross sections from the CLAS Collaboration~\cite{Williams:2009yj} and are affected by additional uncertainties due to the limited angular range of the data, especially in the forward direction. The total cross section data shown here have not been used in our fit; only the differential cross sections were fitted. The fit results for the total cross sections are presented in Fig.~\ref{fig:tcs} together with the corresponding experimental data.
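The Legendre-fit procedure used for the CLAS-09 total cross section points can be sketched as follows (a schematic illustration under our own naming, not the CLAS analysis code): the differential cross section is expanded as $d\sigma/d\Omega(x)=\sum_k A_k P_k(x)$ with $x=\cos\theta$, and only the $P_0$ term survives the solid-angle integration, so $\sigma_{\rm tot}=4\pi A_0$.

```python
import numpy as np

def total_cs_from_legendre(cos_theta, dsigma_domega, n_max=6):
    """Fit dsigma/dOmega(x) = sum_k A_k P_k(x) and return sigma_tot = 4*pi*A_0.

    Schematic sketch of the Legendre-fit procedure; the actual CLAS analysis
    additionally propagates uncertainties from the limited angular coverage,
    especially in the forward direction.
    """
    # Least-squares fit of the Legendre coefficients A_0 .. A_{n_max}
    A = np.polynomial.legendre.legfit(cos_theta, dsigma_domega, n_max)
    # Only P_0 survives the integration over the solid angle
    return 4.0 * np.pi * A[0]

# Synthetic example with known coefficients (arbitrary units): A_0 = 0.5
x = np.linspace(-0.95, 0.95, 40)
y = 0.5 + 0.2 * x + 0.1 * (1.5 * x**2 - 0.5)   # A_0=0.5, A_1=0.2, A_2=0.1
sigma_tot = total_cs_from_legendre(x, y)        # recovers 4*pi*0.5
```

With real data the fit would be weighted by the experimental uncertainties bin by bin; the sketch only shows why the total cross section follows from the lowest Legendre coefficient alone.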
\begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_v2}} \caption{Total cross section for $(\gamma,\eta)$ (a) and $(\gamma,\eta^\prime)$ (b) on protons and neutrons. The solid red and dashed blue lines show our EtaMAID solution for proton and neutron, respectively. } \label{fig:tcs} \end{center} \end{figure*} In Fig.~\ref{fig:tcs}~(a), there are very interesting features visible at energies $W\approx 1680$~MeV and $W\approx 1890$~MeV, which can be explained by cusp effects due to the opening of new strong channels in the $S$-wave. The cusp in the $\eta p$ total cross section, in connection with the steep rise of the $\eta^\prime p$ cross section from its threshold, Fig.~\ref{fig:tcs}~(b), is explained by a strong coupling of the $S_{11}(1895)$ resonance to both channels, see also Figs.~\ref{fig:tcs_resonances_eta_log} - \ref{fig:tcs_pw_eta_log}. Unfortunately, there are no data for the $\eta^\prime n$ channel near threshold and only one data point exists in the cusp region for the $\eta n$ channel, Fig.~\ref{fig:tcs}~(a). Nevertheless, our solution also demonstrates a strong coupling of the $S_{11}(1895)$ for these neutron channels. Other interesting structures are observed as a dip in $\gamma p \rightarrow \eta p$ and a bump in $\gamma n \rightarrow \eta n$ around $W \approx 1680$ MeV, Fig.~\ref{fig:tcs}~(a). Both structures have been observed experimentally many times and their existence is unambiguous. However, their nature is not yet fully understood; see Ref.~\cite{Krusche2014} for more details. Our analysis shows that the narrow bump in the $\eta n$ and the dip in the $\eta p$ channels have different origins. The first is the result of an interference of a few resonances with a dominant contribution of the $P_{11}(1710)$, see Fig.~\ref{fig:tcs_resonances_eta_log}~(b) and Fig.~\ref{fig:tcs_pw_eta_log}~(b). The second one is mainly a sum of $S_{11}(1535)$ and $S_{11}(1650)$ with opposite signs.
However, the narrowness of this structure is explained by a cusp effect due to the opening of the $K\Sigma$ decay channel of the $S_{11}(1650)$ resonance, see Fig.~\ref{fig:tcs_resonances_eta_log}~(a) and Fig.~\ref{fig:tcs_pw_eta_log}~(a). In Figs.~\ref{fig:tcs_resonances_eta_log} - \ref{fig:tcs_resonances_etapr} we show partial resonance contributions for $\eta$ and $\eta^\prime$ photoproduction in four channels. In Fig.~\ref{fig:tcs_resonances_eta_log} we concentrate on the most important $S_{11}$ resonances $N(1535)\frac{1}{2}^-$, $N(1650)\frac{1}{2}^-$ and $N(1895)\frac{1}{2}^-$. The $S_{11}(1535)$ completely dominates both proton and neutron channels. As a side remark, due to its large branchings into the $\pi N$ and $\eta N$ channels, this resonance produces a very significant cusp effect in the cross sections of pion photoproduction~\cite{Althoff:1979mb,Ahrens:2006gp}. The second resonance, $S_{11}(1650)$, exhibits visible cusp effects due to the opening of the $K\Lambda$ and $K\Sigma$ channels. Also the third resonance, $S_{11}(1895)$, shows a visible cusp at the $\eta^\prime N$ threshold. In the full solution the $K\Lambda$ cusp remains hidden under the strong $S_{11}(1535)$ contribution, and the $K\Sigma$ cusp becomes invisible in the neutron channel. In the proton channel, however, this cusp appears as a very pronounced dip, followed even by a small bump. The $\eta^\prime N$ cusps due to the third $S_{11}(1895)$ resonance are visible in both proton and neutron channels, and in the case of the proton the cusp is very well supported by the high-precision data of A2-MAMI. \begin{figure*}[!h] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_resonances_eta_log}} \caption{Partial contributions of the $S$-wave resonances to the total cross section for $(\gamma,\eta)$ on protons (a) and neutrons (b) in comparison with the non-resonant background. The solid red lines show our full EtaMAID solution.
The individual contributions of the $S_{11}(1535)$, $S_{11}(1650)$ and $S_{11}(1895)$ resonances are shown by solid black lines. The dashed line shows the total background of Born and Regge contributions including the damping factors. Vertical lines correspond to thresholds of $K\Lambda$, $K\Sigma$, and $\eta^\prime N$ photoproduction. } \label{fig:tcs_resonances_eta_log} \end{center} \end{figure*} The cusp structures are even better visible in Fig.~\ref{fig:tcs_pw_eta_log}, where all resonances within the same partial wave are summed up. In the cases of $P_{11}$ and $D_{13}$ these are sums over as many as four $N^*$ resonances. From this figure it becomes very clear that the bump structure at $W\approx 1680$~MeV is a cusp effect of the $S_{11}(1650)$ in the proton channel and a resonance effect of the $P_{11}(1710)$ in the neutron channel. The largest $N^*$ resonance contributions in $(\gamma,\eta)$ total cross sections are from $S_{11}(1535,1650,1895)$, $P_{11}(1710)$, $P_{13}(1720,1900)$, and $D_{13}(1700,1875)$. \begin{figure*}[!h] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_pw_eta_log}} \caption{Resonance contributions of partial waves to the total cross section for $(\gamma,\eta)$ on protons (a) and neutrons (b). The solid red lines show our full EtaMAID solution including background. The black solid lines are the sum of three $S_{11}(1535,1650,1895)$ resonances, magenta solid: four $P_{11}(1440,1710,1880,2100)$, magenta dashed: two $P_{13}(1720,1900)$, green solid: four $D_{13}(1520,1700,1875,2120)$, green dashed: two $D_{15}(1675,2060)$, blue solid: three $F_{15}(1680,1860,2000)$, blue dashed: $F_{17}(1990)$ and cyan solid: $G_{17}(2190)$. Vertical lines correspond to thresholds of $K\Sigma$ and $\eta^\prime N$ photoproduction.
} \label{fig:tcs_pw_eta_log} \end{center} \end{figure*} Fig.~\ref{fig:tcs_resonances_etapr} shows the partial contributions of the $N^*$ resonances to the total cross sections for $(\gamma,\eta^\prime)$ on proton and neutron. The largest resonance contributions in the total cross sections for $(\gamma,\eta^\prime)$ are from $S_{11}(1895)$, $P_{11}(1880)$, $P_{11}(2100)$, $F_{15}(2000)$ and $F_{17}(1990)$. It is interesting to note that the first two of them have Breit-Wigner masses below threshold but appear as resonance bumps above threshold due to phase space factors. In both channels the $S_{11}$ resonance dominates near threshold and the second largest peak arises from $P_{11}$, followed by large contributions from $F$-wave resonances. This is different in the most recent BnGa analysis~\cite{Anisovich:2017}, where the $P_{13}(1900)$ plays a dominant role and $F$-waves are practically negligible. Such ambiguities in the PWA can be expected when only two observables are measured, as in the $\eta^\prime$ proton channel. For the neutron channel, only the differential cross section has been measured. Such an incompleteness in the polarization observables naturally leads to large ambiguities in the partial wave analysis. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_resonances_etapr_v2}} \caption{Partial contributions of the resonances to the total cross section for $(\gamma,\eta^\prime)$ on protons (a) and neutrons (b). The solid red lines show our full EtaMAID solution.
The other curves show resonance contributions of $S_{11}(1895)$: black solid, $P_{11}(1880,2100)$: magenta solid, $P_{13}(1900)$: magenta dashed, $D_{13}(1875)$: green solid, $D_{15}(2000)$: green dashed, $F_{15}(2000)$: blue solid, and $F_{17}(1990)$: blue dashed.} \label{fig:tcs_resonances_etapr} \end{center} \end{figure*} In Figs.~\ref{fig:tcs_bgr_eta} and \ref{fig:tcs_bgr_etapr} we show the background contributions for $\eta$ and $\eta^\prime$ photoproduction in four channels. The blue dotted, dash-dotted and dashed lines are obtained from the Born terms, the $t$-channel vector meson exchanges in Regge parametrization, and the sum of both, respectively. The Born terms rise very strongly already near threshold in $\eta^\prime$ photoproduction and also become very large in $\eta$ photoproduction for energies above 2~GeV. The $t$-channel Regge contributions are also quite large in the resonance region below 2.5~GeV and dominate the cross section for energies above 2~GeV. The double counting of Regge and resonances becomes quite obvious. Therefore, as explained in sect.~III before, we have introduced damping factors for the background contributions, Eqs.~(\ref{eq:Born_damp}) and (\ref{eq:Regge_damp}), yielding the black dotted, dash-dotted and dashed lines for Born, Regge and total background, respectively. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_bgr_eta}} \caption{Partial contributions of the background to the total cross section for $(\gamma,\eta)$ on protons (a) and neutrons (b). The solid red lines show our full EtaMAID solution. The wide blue dotted, dash-dotted, and dashed lines show Born, Regge, and Born+Regge, respectively, without damping factors. The thin black dotted, dash-dotted, and dashed lines show the same, when damping factors are applied. The CBELSA/TAPS data have not been used in our fit.
} \label{fig:tcs_bgr_eta} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_bgr_etapr_v2}} \caption{Partial contributions of the background to the total cross section for $(\gamma,\eta^\prime)$ on protons (a) and neutrons (b). Notation of the curves is as in Fig.~\ref{fig:tcs_bgr_eta}. } \label{fig:tcs_bgr_etapr} \end{center} \end{figure*} Finally, in Fig.~\ref{fig:tcs_4models_eta} we compare our EtaMAID2018 solution with the new 2018 updates of Bonn-Gatchina (BnGa)~\cite{Anisovich:2018}, J\"ulich-Bonn (J\"uBo)~\cite{Ronchen:2018} and Kent-State University (KSU)~\cite{KSU2018}. While EtaMAID has analyzed all four channels up to $W=4.5$~GeV ($E\approx 10$~GeV), BnGa analyzed three ($\eta p$, $\eta n$, and $\eta^\prime p$) up to $W=2.5$~GeV, KSU analyzed two ($\eta p$ up to $W=2.0$~GeV and $\eta n$ up to $W=1.9$~GeV), and J\"uBo analyzed only the $\eta p$ channel up to $W=2.4$~GeV. J\"uBo and KSU, which did not include the latest A2MAMI-17 data, differ significantly from the data in the dip region around $W\approx 1680$~MeV and around the $\eta^\prime$ threshold; the BnGa solution describes the data much better. The best description of the data is obtained with EtaMAID. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/tcs_4models_eta_v2}} \caption{Total cross section for $(\gamma,\eta)$ on protons (a) and neutrons (b) in comparison with other newly updated PWA. The solid red lines show our full EtaMAID solution. The black dash-dotted, dotted and blue dashed curves are obtained from the newly updated BnGa~\cite{Anisovich:2018}, J\"uBo~\cite{Ronchen:2018} and KSU~\cite{KSU2018} partial wave analyses. Near $\eta$ threshold, below 1.6~GeV, all solutions are practically identical. Vertical lines correspond to thresholds of $K\Sigma$ and $\eta^\prime N$ photoproduction.
} \label{fig:tcs_4models_eta} \end{center} \end{figure*} In appendix~\ref{app:BG-BW} we list all background and resonance parameters of our model. For a few selected, particularly important $S_{11}$ and $P_{11}$ resonances we also give an error analysis of the Breit-Wigner parameters based on MINUIT-MINOS in table~\ref{tab:bw-with-errors}. We have also calculated effective $\eta^\prime N$ branching ratios by integrating the decay spectrum above the $\eta^\prime N$ threshold according to Ref.~\cite{Beringer:1900zz}. For the $N(1880)1/2^+$ and $N(1895)1/2^-$ we obtained $(6.3\pm 2)\%$ and $(19.5\pm 5)\%$, respectively. A complete resonance analysis, especially with pole positions and residues, will be published in a forthcoming paper. \begin{table*} \caption{\label{tab:bw-with-errors} Breit-Wigner parameters for selected resonances: mass $M_{BW}$, total width $\Gamma_{BW}$, branching ratio $\beta_{\eta N}$ to $\eta N$, and helicity amplitudes $A_{1/2}^{p(n)}$ for proton (neutron). The first row for each resonance gives the parameter set of the presented EtaMAID solution. The parameters indicated without errors were fixed during the fit. The second row indicates the overall status of the resonance and lists the corresponding parameters estimated by the PDG~\cite{Tanabashi:2018} (NE means ``No Estimates'' given by the PDG).
The effective $\eta^\prime N$ branching ratios according to Ref.~\cite{Beringer:1900zz} for the $N(1880)1/2^+$ and $N(1895)1/2^-$ are $(6.3\pm 2)\%$ and $(19.5\pm 5)\%$, respectively.} \begin{center} \begin{tabular*}{16.5cm}{@{\hspace{0.1cm}}c @{\hspace{0.1cm}}| @{\hspace{0.1cm}}c @{\hspace{0.3cm}}c @{\hspace{0.3cm}}c @{\hspace{0.3cm}}c @{\hspace{0.5cm}}c @{\hspace{0.5cm}}c @{\hspace{0.5cm}}c @{\hspace{0.5cm}}c } \hline\hline\noalign{\smallskip} Resonance $J^P$ & $M_{BW}$ [MeV] & $\Gamma_{BW}$ [MeV] & $\beta_{\eta N}$ $[\%]$ & $A_{1/2}^p$ $[10^{-3}$GeV$^{-1/2}]$ & $A_{1/2}^n$ $[10^{-3}$GeV$^{-1/2}]$ & \\ \noalign{\smallskip}\hline\noalign{\smallskip} $N(1535)1/2^-$ &$1522\pm8$ &$175\pm25$ &$34\pm5$ &$+115$ &$-102\pm8$ \\ **** &$1530\pm15$ &$150\pm25$ &$42\pm13$ &$+105\pm15$ &$-75\pm20$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $N(1650)1/2^-$ &$1626^{+10}_{-5}$ &$133\pm20$ &$19\pm6$ &$+55$ &$-25\pm20$ \\ **** &$1650\pm15$ &$125\pm25$ &$25\pm10$ &$+45\pm10$ &$-10^{+40}_{-30}$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $N(1710)1/2^+$ &$1670\pm20$ &$63^{+55}_{-18}$ &$12\pm4$ &$5.5$ &$-42^{+16}_{-12}$ \\ **** &$1710\pm30$ &$140\pm60$ &$30\pm20$ &NE &NE \\ \noalign{\smallskip}\hline\noalign{\smallskip} $N(1880)1/2^+$ &$1882\pm24$ &$90^{+70}_{-30}$ &$43^{+10}_{-20}$ &$60$ &$-7^{+60}_{-60}$ \\ *** &$1880\pm50$ &$300\pm100$ &NE &NE &NE \\ \noalign{\smallskip}\hline\noalign{\smallskip} $N(1895)1/2^-$ &$1894.4^{+5}_{-15}$ &$71^{+25}_{-13}$ &$3.3\pm1.5$ &$-32$ &$+43^{+30}_{-50}$ \\ **** &$1895\pm25$ &$120^{+80}_{-40}$ &$25^{+15}_{-10}$ &NE &NE \\ \noalign{\smallskip}\hline\hline \end{tabular*} \end{center} \end{table*} \subsection{\boldmath Comparison with the data of $d\sigma/d\Omega, \Sigma, T, F, E$ for $\gamma\,p \rightarrow \eta\,p$} In this subsection we turn to differential cross sections and polarization observables for $\eta$ production on the proton target.
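As an aside, the characteristic energy dependence encoded by the Breit-Wigner parameters in table~\ref{tab:bw-with-errors} can be illustrated with a minimal constant-width Breit-Wigner form. This is a schematic sketch only; the actual EtaMAID parametrization uses energy-dependent widths, phase-space factors, and unitary phases.

```python
import numpy as np

def bw_amplitude(W, M_R, Gamma_R):
    """Constant-width Breit-Wigner energy dependence (schematic only).

    The actual EtaMAID parametrization employs energy-dependent widths,
    vertex functions, and unitary phases; this sketch merely reproduces
    the characteristic resonance shape. All energies in MeV.
    """
    return M_R * Gamma_R / (M_R**2 - W**2 - 1j * M_R * Gamma_R)

# N(1535)1/2^- with the central values quoted in the table above
W = np.arange(1400.0, 1700.5, 0.5)
A = bw_amplitude(W, M_R=1522.0, Gamma_R=175.0)
W_peak = W[np.argmax(np.abs(A)**2)]   # |A|^2 peaks at W = M_R
```

For this simple form the peak of $|A|^2$ sits exactly at the Breit-Wigner mass; with energy-dependent widths and phase-space factors, as for the sub-threshold $N(1880)$ and $N(1895)$, the visible bump can be shifted above threshold.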
Figs.~\ref{fig:eta-p_dcs1_mami} - \ref{fig:eta-p_dcs_clas} display the differential cross section for the reaction $\gamma p\to\eta p$ as a function of the cosine of the c.m. scattering angle in comparison to the full solution (solid red curves). We point out that our full solution provides an excellent description of the data over the whole energy and angular range, including the $K\Sigma$ and $\eta^\prime$ cusp regions, $W\approx1680$~MeV and $W\approx1890$~MeV, respectively. It is informative to observe the impact of the background contributions: Born (dotted curves), Regge (dash-dotted), and Born + Regge (dashed). Throughout the whole energy range of the MAMI data~\cite{Kashevarov:2017} and well into the CLAS~\cite{Williams:2009yj} energy range, for $W\leq 2200$~MeV the background contributions are quite small, although background-resonance interference may be non-negligible. This observation is of importance to assess the issue of double counting mentioned in the introduction in view of using a modified Regge amplitude in the resonance region. We interpret the small relative impact of the background amplitudes for $W\leq 2200$~MeV as an indication that double counting does not pose problems at those energies, and only the two highest resonances of our analysis, $N(2190)\frac{7}{2}^-$ and $N(2250)\frac{9}{2}^-$, may be severely affected by that problem. Above $W\approx2500$~MeV the Regge contribution becomes dominant. We postpone a detailed study of the contribution of the modified Regge background to resonant partial waves in the transition region 2200~MeV~$\le W \le$~2500~MeV and the extraction of higher resonance parameters to forthcoming work. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-p_dcs1_mami}} \caption{Differential cross section for $(\gamma,\eta)$ on the proton for 1488~MeV~$\le W \le$~1654~MeV as a function of the cosine of the c.m. scattering angle.
The solid red lines show our full solutions, whereas the black dotted, dash-dotted, and dashed lines are Born terms, Regge, and full background, respectively. The data are from A2MAMI~\cite{Kashevarov:2017}. } \label{fig:eta-p_dcs1_mami} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-p_dcs2_mami}} \caption{Same as in Fig.~\ref{fig:eta-p_dcs1_mami} for c.m. energies 1658~MeV~$\le W \le 1957$~MeV. } \label{fig:eta-p_dcs2_mami} \end{center} \end{figure*} \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-p_dcs_clas}} \caption{Same as in Fig.~\ref{fig:eta-p_dcs1_mami} for c.m. energies 1985~MeV~$\le W \le 2795$~MeV. The data are from CLAS~\cite{Williams:2009yj}.} \label{fig:eta-p_dcs_clas} \end{center} \end{figure*} Polarization observables are much more critical tests for models than total and differential cross sections, as they are sensitive to the real and imaginary parts of interferences of amplitudes. Fig.~\ref{fig:eta-p_tf} shows a comparison of the EtaMAID2018, BnGa, J\"{u}Bo and KSU models to data on the target polarization $T$ and beam-target polarization $F$ asymmetries from A2MAMI for 1497~MeV~$\le W \le$~1848~MeV as a function of the cosine of the c.m. scattering angle. It is seen that our solution describes the data nicely for all energies and in the full angular range, whereas the other models show considerable deviations from the data for $W\geq1600$~MeV. We observe a significant spread between data points in some neighboring angular bins, so more precise and self-consistent data on this observable will help to discriminate among the models. \begin{figure*}[!ht] \begin{center} \resizebox{0.8\textwidth}{!}{\includegraphics{figurs/eta-p_tf}} \caption{Target polarization $T$ (upper panels) and beam-target polarization $F$ (lower panels) asymmetries for $(\gamma,\eta)$ on the proton. The solid red lines show our full solution.
Results of other PWA are shown by the black dash-dotted (BnGa~\cite{Anisovich:2018}), the black dotted (J{\"u}Bo~\cite{Ronchen:2018}), and the blue dashed (KSU~\cite{KSU2018}) lines. The data points are from A2MAMI~\cite{Akondi:2014}.} \label{fig:eta-p_tf} \end{center} \end{figure*} In Fig.~\ref{fig:eta-p_sigma}, data on the photon beam asymmetry $\Sigma$ from GRAAL~\cite{Bartalini:2007fg} and CLAS~\cite{Collins:2017} are shown in comparison with the models. The two data sets show a disagreement in several energy bins in the overlap region 1700~MeV~$\le W \le$~1900~MeV, which makes it difficult to judge the quality of the model description of the data. The J\"{u}Bo model fails to reproduce the high-quality GRAAL data at lower energies, especially at backward angles, while all other models describe that energy region successfully. At the highest energies, this asymmetry shows a peculiar shape, with peaks at forward and backward angles and a dip around $90^\circ$. Since the EtaMAID2018, J\"{u}Bo and BnGa models deviate somewhat from each other, data on $\Sigma$ with better statistics at these energies will be helpful. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-p_sigma}} \caption{Photon beam asymmetry $\Sigma$ for $(\gamma,\eta)$ on the proton. The black circles and blue triangles are data from GRAAL~\cite{Bartalini:2007fg} and CLAS~\cite{Collins:2017} respectively. Notation of the curves is as in Fig.~\ref{fig:eta-p_tf}. } \label{fig:eta-p_sigma} \end{center} \end{figure*} For the beam-target polarization asymmetry $E$, Fig.~\ref{fig:eta-p_e}, the situation is similar: all models give very similar results for $W\leq1700$~MeV but start to deviate above. The current quality of the data does not permit firm conclusions to be drawn from this comparison. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-p_e}} \caption{Beam-target polarization asymmetry $E$ for $(\gamma,\eta)$ on the proton.
The black circles and blue triangles are data from MAMI~\cite{Witthauer:2017} and CLAS~\cite{Senderovich:2016}, respectively. Notation of the curves is as in Fig.~\ref{fig:eta-p_tf}. } \label{fig:eta-p_e} \end{center} \end{figure*} In view of this general sensitivity of polarization observables to models, in Fig.~\ref{fig:eta-p_predict} we plot predictions of the four models for the observables $P$, $H$, $G$, $C_x$, and $C_z$ for $(\gamma,\eta)$ on the proton, for which no data exist. All these observables look very promising for discriminating among the models, especially at higher energies. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-p_predict}} \caption{ Predictions for the $P$, $H$, $G$, $C_x$, and $C_z$ observables for $(\gamma,\eta)$ on the proton. Notation of the curves is as in Fig.~\ref{fig:eta-p_tf}. } \label{fig:eta-p_predict} \end{center} \end{figure*} It is interesting to note that in some energy regions, the observables $P$ and $H$ are almost identical up to a sign. As can be seen from $P+H$, Eq.~(A9), together with the multipole expansion, Eq.~(\ref{eq:multipoles}), all $S$-wave contributions cancel exactly and the leading terms are imaginary parts of $P-D$ interferences. In EtaMAID a sizable deviation between $P$ and $-H$ is only seen at higher energies in Fig.~\ref{fig:eta-p_predict}, while BnGa and J\"uBo exhibit larger differences. \subsection{\boldmath Comparison with the data of $d\sigma/d\Omega, \Sigma, E$ for $\gamma\,n \rightarrow \eta\,n$} Results for the $\gamma\,n \rightarrow \eta\,n$ reaction are shown in Figs.~\ref{fig:eta-n_dcs} - \ref{fig:eta-n_e}. Similar to the proton target, we observe a very good description of the differential cross section data in the full energy range where very precise A2MAMI data are available, with the exception of some very backward or very forward (in the c.m.
frame) data points at low energies, where nuclear effects may lead to systematic effects that were not fully accounted for~\cite{Krusche2014}. This lack of strength in the extreme backward and forward kinematics is, however, not reflected in the description of the total cross section, see Fig.~\ref{fig:tcs}. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-n_dcs}} \caption{Differential cross section for $(\gamma,\eta)$ on the neutron. The red solid lines show our full solutions, whereas the black dotted, dash-dotted and dashed lines are Born terms, Regge, and full background, respectively. The data are from A2MAMI~\cite{Werthmueller:2014}. } \label{fig:eta-n_dcs} \end{center} \end{figure*} Polarization observables $\Sigma$ and $E$ are shown in Figs.~\ref{fig:eta-n_sigma} and \ref{fig:eta-n_e}, respectively. Our model describes the data nicely, although at present the uncertainties of the data do not allow for a definitive comparison of the models. More precise data will help to remove ambiguities which are still visible in the partial wave analysis, see sect.~\ref{sec:pwa}. \begin{figure*}[!ht] \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-n_sigma}} \caption{Photon beam asymmetry $\Sigma$ for $(\gamma,\eta)$ on the neutron. The data are from GRAAL~\cite{Fantini:2008}. The solid red lines show our full solution. Results of other PWA analyses are shown by the black dotted (BnGa~\cite{Anisovich:2017}) and blue dashed (KSU~\cite{KSU2018}) lines. } \label{fig:eta-n_sigma} \end{figure*} \begin{figure*}[!ht] \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-n_e}} \caption{Beam-target polarization asymmetry $E$ for $(\gamma,\eta)$ on the neutron. The data are from A2MAMI~\cite{Witthauer:2017}. Notation of the curves is as in Fig.~\ref{fig:eta-n_sigma}. } \label{fig:eta-n_e} \end{figure*} In Fig.~\ref{fig:eta-n_predict} we plot the polarization observables $P$, $H$, $G$, $T$, and $F$ for $(\gamma,\eta)$ on the neutron.
As for the proton target, data on these observables will yield a crucial test of our understanding of the models. As also discussed for the proton, the symmetry between the $P$ and $H$ observables is again much more pronounced for EtaMAID than for the other solutions, a signature of a stronger $S$-wave dominance in EtaMAID. \begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/eta-n_predict}} \caption{ Predictions for the $P$, $H$, $G$, $T$ and $F$ observables for $(\gamma,\eta)$ on the neutron. Notation of the curves is as in Fig.~\ref{fig:eta-n_sigma}. } \label{fig:eta-n_predict} \end{center} \end{figure*} \subsection{\boldmath Comparison with the data of $d\sigma/d\Omega$ and $\Sigma$ for $\gamma\,p \rightarrow \eta^\prime\,p$ and $\gamma\,n \rightarrow \eta^\prime\,n$} Results for $\gamma\,p \rightarrow \eta^\prime\,p$ are presented in Figs.~\ref{fig:etapr-p_dcs} and \ref{fig:etapr-p_sigma}. For the differential cross sections, we show the impact of the background contributions: Born (dotted), Regge (dash-dotted), and Born + Regge (dashed). Because of the higher threshold for $\eta^\prime$ production, the background has a much larger relative impact than for $\eta$ production. In particular, we observe that the Born contribution, more precisely the $u$-channel Born diagram, gives a very sizable contribution at backward angles for $W\geq 1912$~MeV. We note in this respect that a reggeization of the $u$-channel nucleon exchange is worth studying in forthcoming work. With this in mind, we notice a very good description of the data in the full energy and angular range by our solution. The apparent disagreement between the A2MAMI and CLAS data, where the two data sets overlap, plays little role at present due to the much better statistics of the Mainz data.
\begin{figure*}[!ht] \begin{center} \resizebox{0.7\textwidth}{!}{\includegraphics{figurs/etapr-p_dcs}} \end{center} \caption{Differential cross section for $(\gamma,\eta^\prime)$ on the proton. The red solid lines show our full solutions, whereas the black dotted, dash-dotted, and dashed lines are Born terms, Regge, and full background, respectively. The black circles are data from A2MAMI~\cite{Kashevarov:2017} and blue triangles from CLAS~\cite{Williams:2009yj}. } \label{fig:etapr-p_dcs} \end{figure*} The new solution reproduces all data for this reaction quite well, with the exception of the first two energy bins for $\Sigma$, where the GRAAL data show a clear $\sim\sin\theta$ structure, see Fig.~\ref{fig:etapr-p_sigma}. The CLAS data are presently too uncertain to confirm or disprove this behavior. The models do not have enough flexibility in that energy bin to describe such a sine structure. This will be further discussed in section~\ref{sect:NarrowResonances}. \begin{figure*}[!ht] \resizebox{0.6\textwidth}{!}{\includegraphics{figurs/etapr-p_sigma}} \caption{Photon beam asymmetry $\Sigma$ for $(\gamma,\eta^\prime)$ on the proton. The black full and red open circles are data from GRAAL~\cite{Sandri:2015} and CLAS~\cite{Collins:2017}, respectively. Notation of the curves is as in Fig.~\ref{fig:eta-p_tf}. } \label{fig:etapr-p_sigma} \end{figure*} For $\gamma\,n \rightarrow \eta^\prime\,n$ only one data set exists, the unpolarized cross sections measured by the CBELSA/TAPS Collaboration~\cite{Jaegle:2011}. The data together with our full solution are presented in Fig.~\ref{fig:etapr-n_dcs}. There is some disagreement in the range $W = 2077 - 2121$~MeV. This channel has not been analyzed by other PWA groups. \begin{figure*}[!ht] \begin{center} \resizebox{0.6\textwidth}{!}{\includegraphics{figurs/etapr-n_dcs_v2}} \caption{Differential cross section for $(\gamma,\eta^\prime)$ on the neutron.
The solid red lines show our full solutions, whereas the black dotted, dash-dotted, and dashed lines are Born terms, Regge, and full background, respectively. The data points are from CBELSA/TAPS~\cite{Jaegle:2011}. } \label{fig:etapr-n_dcs} \end{center} \end{figure*} \section{\boldmath Narrow resonances in $\eta$ and $\eta^\prime$ photoproduction}\label{sect:NarrowResonances} Around 2005, in $\eta$ photoproduction on the neutron a bump in the total cross section in the vicinity of $W=1685$~MeV was observed that was especially pronounced in the cross section ratio $\sigma_n/\sigma_p$~\cite{Kuznetsov:2006kt,Kuznetsov:2010as,Werthmuller:2013rba}. Many attempts were made to explain this effect; some explanations introduced a new narrow $N(1685)$ resonance, whose quantum numbers, however, were not uniquely determined~\cite{Arndt:2003ga,Anisovich:2013sva}. Mostly a $P_{11}$ resonance was assumed, which also matched the position of a predicted non-strange pentaquark state~\cite{Diakonov:1997mm}. The range of the width was determined as $15-45$~MeV. Due to the lack of further evidence and more conventional explanations of the bump structure in terms of interferences of $S_{11}$ resonances, in 2016 the PDG decided to remove this state from the listings. For further reading see Ref.~\cite{Krusche2014}. Here we discuss a further attempt to study possible consequences of a narrow $N(1900)$ state, a few MeV above the $\eta^\prime$ threshold. Anisovich et al.~\cite{Anisovich:2018-narrow} have shown that a narrow $N(1900)\frac{3}{2}^-$ $D_{13}$ resonance with a mass $M_R=1900\pm 1$~MeV and a total width of less than 3~MeV can explain the unexpected energy and angular dependence of the differential cross section $d\sigma/d\Omega$ from A2MAMI and of the beam asymmetry $\Sigma$ from GRAAL.
In our EtaMAID analysis we can confirm the possibility of an explanation with a narrow resonance; however, in EtaMAID we would obtain a narrow $S_{11}$ resonance with quantum numbers $\frac{1}{2}^-$, mass $M_R=1902.6\pm 1.0$~MeV and width $\Gamma_R=2.1\pm 0.5$~MeV. As pointed out before, the photon beam asymmetry $\Sigma$ measured at GRAAL exhibits a very unexpected behavior. First of all, it shows a nodal structure with a sine-type shape in the angular distribution, which is a sign of higher partial wave content compared to the beam asymmetry in threshold $\eta$ photoproduction. Second, it shows a strong energy dependence, with the magnitude of the beam asymmetry changing significantly within only a few MeV. And third, it appears very close to the $\eta^\prime$ threshold and decreases strongly within only a few MeV. Naturally, in this region one would expect an effect to increase in magnitude rather than decrease as the energy rises. The first issue can be easily investigated by the partial wave series of the beam asymmetry. Expanded into partial waves up to $F$ waves ($L_{max}=3$), the angular dependence of the beam asymmetry observable $\check{\Sigma}$ (see appendix \ref{app:BG-BW}) can be expressed up to $x^4$ with $x=\cos\theta$, \begin{equation} \check{\Sigma}=\sigma_0(x)\Sigma(x)=(1-x^2)\sum_{k=0}^4 a_k\, x^k\,, \end{equation} where the observed nodal structure arises from the coefficient $a_1$, which can be separated into $S-F$, $P-D$ and $D-F$ interferences of partial waves, $a_1 = a_1^{SF} + a_1^{PD} + a_1^{DF}$.
Using Eq.~(A2) and the partial wave expansion of the CGLN amplitudes, we get in details \begin{eqnarray}\label{eq:a1_interferences} a_1^{S_{11}-F_{15}}&=&15\mbox{Re}\{E_{0+}^*(E_{3-}+M_{3-})\}\,,\\ a_1^{S_{11}-F_{17}}&=&15\mbox{Re}\{E_{0+}^*(E_{3+}-M_{3+})\}\,,\nonumber\\ a_1^{P_{11}-D_{15}}&=&15\mbox{Re}\{M_{1-}^*(M_{2+}-E_{2+})\}\,,\nonumber\\ a_1^{P_{13}-D_{13}}&=&18\mbox{Re}\{E_{1+}^*E_{2-}+M_{1+}^*M_{2-}\}\,,\nonumber\\ a_1^{P_{13}-D_{15}}&=&3\mbox{Re}\{-9E_{1+}^*E_{2+}+M_{1+}^*(5E_{2+}+4M_{2+})\}\,,\nonumber\\ a_1^{D_{13}-F_{15}}&=&-3\mbox{Re}\{E_{2-}^*(4E_{3-}-5M_{3-})-9M_{2-}^*M_{3-}\}\,,\nonumber\\ a_1^{D_{13}-F_{17}}&=&-15\mbox{Re}\{E_{2-}^*(5E_{3+}+M_{3+})+6M_{2-}^*M_{3+}\}\,,\nonumber\\ a_1^{D_{15}-F_{15}}&=&-\frac{189}{2}\mbox{Re}\{E_{2+}^*E_{3-}+M_{2+}^*M_{3-}\}\,.\nonumber \end{eqnarray} Interferences of $P_{11}-D_{13}$ and $D_{15}-F_{17}$ do not contribute. In Fig.~\ref{fig:etapr-p_sigma_narrow} we show our result with a narrow $S_{11}(1900)$ and the BnGa solution with a narrow $D_{13}(1900)$ for $\eta^\prime$ photoproduction on the proton. Both solutions can describe the GRAAL data similarly well, whereas without a narrow resonance both solutions predict an almost zero value for the threshold beam asymmetry, see Fig.~\ref{fig:etapr-p_sigma}. According to the multipole expansion of the $a_1$ coefficient, Eq.~(\ref{eq:a1_interferences}), the nodal structure of the angular dependence of the beam asymmetry is explained with a $S_{11}-F_{15}$ interference in EtaMAID and with a $P_{13}-D_{13}$ interference in BnGa. \begin{figure} \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{figurs/etapr-p_sigma_narrow_v2}} \caption{Photon beam asymmetry $\Sigma$ for $(\gamma,\eta^\prime)$ on the proton for selected energy bins. The black full and red open circles are data from GRAAL~\cite{Sandri:2015} and CLAS~\cite{Collins:2017}, respectively. 
The dashed red lines show our solution with a narrow $S_{11}(1900)$ resonance and corresponding $\chi^2$ in the lower right corner for each panel and the black dotted lines BnGa~\cite{Anisovich:2018-narrow} with a narrow $D_{13}(1900)$ resonance and $\chi^2$ on the left. } \label{fig:etapr-p_sigma_narrow} \end{center} \end{figure} Besides the beam asymmetry, also the differential cross section exhibits small unexplained structures in the standard solutions, see Fig.~\ref{fig:etapr-p_dcs}. This is also much improved with the inclusion of a narrow resonance as shown in Fig.~\ref{fig:etapr-p_dcs_narrow}. \begin{figure} \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{figurs/etapr-p_dcs_narrow_v2}} \caption{Differential cross section for $(\gamma,\eta^\prime)$ on the proton for selected energy bins. The black circles are data from A2MAMI~\cite{Kashevarov:2017}. Notations are as in Fig.~\ref{fig:etapr-p_sigma_narrow}. } \label{fig:etapr-p_dcs_narrow} \end{center} \end{figure} With the two energy bins of the GRAAL beam asymmetry and the lowest energy bins of the A2MAMI differential cross sections, the evidence for the existence of a narrow resonance is rather weak. Especially, as with only two observables the quantum numbers of such a state cannot uniquely be determined. Therefore, we investigate the effects of such narrow resonances on further not yet measured polarization observables using beam and target polarization. In Fig.~\ref{fig:etapr-p_predict} we show the standard solutions and the addition of narrow resonances from EtaMAID and BnGa on the full set of 8 polarization observables that could be measured with beam- and target-polarization techniques, without recoil polarization detection. For such narrow resonances small energy bins are certainly needed. The differential cross section, which can be expected with highest statistics, should be re-measured and analyzed in finer energy bins. 
Most important, due to the nodal structure change, is a new measurement of the photon beam asymmetry, aiming for a similar precision as in the GRAAL measurement. $P$ and $H$ observables, which are almost identical up to a sign, are sensitive to a narrow $D_{13}$ resonance, but almost independent of a narrow $S_{11}$ state. Also $T$ and $F$ observables are less sensitive but could be obtained at MAMI with high accuracy. \begin{figure*}[!ht] \begin{center} \resizebox{0.65\textwidth}{!}{\includegraphics{figurs/etapr-p_predict_v2}} \caption{ Predictions for all 8 single- and beam-target double polarization observables for $(\gamma,\eta^\prime)$ on the proton. The red solid and black dash-dotted lines are the 2018 standard solutions of EtaMAID and BnGa without narrow resonances. The red dashed lines show the predictions of our EtaMAID solution with a narrow $S_{11}(1900)$ resonance, while the black dotted lines are obtained with the BnGa solution and a narrow $D_{13}(1900)$ resonance~\cite{Anisovich:2018-narrow}. } \label{fig:etapr-p_predict} \end{center} \end{figure*} \section{Partial wave amplitudes}\label{sec:pwa} Compared to pion photoproduction, a comparison of partial waves from different PWA is not straightforward in $\eta$ or $\eta^\prime$ photoproduction. First of all, different conventions for isospin matrix elements are used in the literature, which appear as $+1$ in the BnGa, J\"uBo and KSU analysis and $-1$ in the MAID and SAID analysis. This overall sign or phase convention is denoted as $C_{\eta N}$ in our BW ansatz of Eq.~(\ref{eq:BWres}). Second, for $\eta$ and $\eta^\prime$ photoproduction no such convenient unitarity constraints as the Watson Theorem exist, that determine the phases in the low-energy regime. The only, somewhat weaker constraints arise from channel couplings, which is more advanced in coupled-channels approaches as BnGa, J\"uBo and KSU. 
In EtaMAID we introduce coupling to pion channels only via the Breit-Wigner ansatz and the parametrization of the energy dependent widths. E.g. the $N(1535)\frac{1}{2}^-$ provides a very strong constraint because of its large branchings of about $50\%$ for $\pi N$ and $40\%$ for $\eta N$. For other partial waves, such BW constraints are much less effective. Therefore, even if complete experiments were performed, final ambiguities would remain, which could not be resolved by experimental observables. All physical observables are sums of bi-linear products of amplitudes and conjugated amplitudes, e.g. Re~$\{H_i(W,\theta)\,H_j^*(W,\theta)\}$, and are therefore invariant under an overall energy- and angle-dependent phase $\phi(W,\theta)$. This phase depends very much on the models and on couplings with other channels, which finally will always be incomplete. For a better comparison between the different newly updated 2018 PWA, that all use practically the same database, we have performed a phase rotation of all amplitudes to our EtaMAID2018 phase, \begin{equation}\label{eq:phaserotation} H_i^{BG}\rightarrow \tilde{H}_i^{BG} = H_i^{BG}\,\cdot\, e^{i( \phi_{H1}^{MD}(W,\theta) - \phi_{H1}^{BG}(W,\theta) )}\,,\; i=1,\ldots,4\,, \end{equation} where MD stands for the EtaMAID model and BG for any other PWA, as BnGa, J\"uBo, and KSU. For a detailed discussion of angle-dependent phase ambiguities, see Ref.~\cite{Svarc:2017yuo,Svarc:2018aay}. In Figs.~\ref{fig:mult1_rotated} and \ref{fig:mult2_rotated} we compare the multipoles from rotated helicity amplitudes of EtaMAID, BnGa, J\"uBo, and KSU. While the $S$ wave is practically identical among all solutions, all other partial waves show deviations from small up to huge. Moderate deviations we can see in $E_{1+}, M_{1+}, E_{2-}$, and $M_{3-}$, those we can already expect from different fits to the measured data, as can be seen in sect.~\ref{sect:Results}. 
Other partial waves as $M_{1-}$ and especially $E_{2+}$ show very large deviations, which are most likely due to the incompleteness of the database, where such ambiguities must be expected. A possible solution of this problem could be obtained along the lines of Ref.~\cite{Osmanovic:2017fwe} by using constraints from fixed-$t$ analyticity. But in addition also improvements of the database with further observables and higher statistics would be very helpful. \begin{figure}[!ht] \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{figurs/ROTFig1}} \caption{Comparison of $E_{0+}(S_{11})$, $M_{1-}(P_{11})$, $E_{1+}(P_{13})$, $M_{1+}(P_{13})$, and $E_{2-}(D_{13})$ multipoles for $\gamma p \rightarrow \eta p$, obtained from rotated helicity amplitudes (see Eq.~(\ref{eq:phaserotation})) of different PWA. The solid red lines show our EtaMAID2018 solution. Results of other PWA analyses are shown by the black dash-dotted (BnGa~\cite{Anisovich:2018}), the black dotted (J{\"u}Bo~\cite{Ronchen:2018}), and the blue dashed (KSU~\cite{KSU2018}) lines. The multipoles are given in units of mfm.} \label{fig:mult1_rotated} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \resizebox{0.5\textwidth}{!}{\includegraphics{figurs/ROTFig2}} \caption{Comparison of $M_{2-}(D_{13})$, $E_{2+}(D_{15})$, $M_{2+}(D_{15})$, $E_{3-}(F_{15})$, and $M_{3-}(F_{15})$ multipoles from different PWA. Further details as in Fig.~\ref{fig:mult1_rotated}} \label{fig:mult2_rotated} \end{center} \end{figure} \section{Summary and Conclusions}\label{sec:conclusions} Here we present a new update of EtaMAID for $\eta$ and $\eta^\prime$ photoproduction with four channels, $\eta p$, $\eta n$, $\eta^\prime p$, $\eta^\prime n$. A large amount of data has been measured during the last decade, mostly from A2MAMI, CBELSA and CLAS. 
Some of the new polarization observables showed large discrepancies with our previous solutions EtaMAID2001 and EtaMAID2003, and gave therefore a lot of insight in further details of the partial wave analysis. In a new approach, the high-energy regime $W>2.5$~GeV was first described with a Regge approach, and the resonance regime from threshold up to $W<2.5$~GeV with 21 $N^*$ resonances for $(\gamma,\eta)$ and 12 $N^*$ resonances for $(\gamma,\eta^\prime)$. All known $N^*$ states listed by PDG have been investigated and, except for only 2 cases, an improvement in our fit was found. Resonances found to be insignificant for our analysis are $N(2040)\frac{3}{2}^+$ (a one-star state only seen in $J/\Psi$ decays) and $N(2220)\frac{9}{2}^+$ (a four-star high spin state mainly seen in $\pi N$). In order to avoid or at least strongly reduce the double counting from Regge plus resonances, we introduced damping factors for Born and $t$-channel exchange contributions. We obtained very good fits to almost all data, except for some cases, where data from MAMI and CBELSA were in conflict and it did not make sense to use both in the database for our fit. In these cases we decided to use the MAMI data. From all $N^*$ resonances that were significantly improving our fits, we found the largest contributions in $(\gamma,\eta)$ from: $N(1535)\frac{1}{2}^-$, $N(1650)\frac{1}{2}^-$, $N(1895)\frac{1}{2}^-$, $N(1710)\frac{1}{2}^+$, $N(1720)\frac{3}{2}^+$, $N(1900)\frac{3}{2}^+$, $N(1520)\frac{3}{2}^-$, $N(1700)\frac{3}{2}^-$, and $N(1875)\frac{3}{2}^-$. For $(\gamma,\eta^\prime)$ these are $N(1895)\frac{1}{2}^-$, $N(1880)\frac{1}{2}^+$, $N(2100)\frac{1}{2}^+$, $N(2000)\frac{5}{2}^+$, and $N(1990)\frac{7}{2}^+$. While $N(1700)\frac{3}{2}^-$ and $N(1710)\frac{1}{2}^+$ are practically neutron resonances, $N(1650)\frac{1}{2}^-$ and $N(1880)\frac{1}{2}^+$ are much larger in the proton channel. 
Other resonances contribute about equally in the proton and neutron channels, see also photon couplings in table~\ref{tab:BWelectro} of appendix~\ref{app:BG-BW}. Generally, in a Breit-Wigner resonance analysis, the resonance parameters are subject to model dependence. This could be rather weak for prominent resonances with widths $\Gamma\lesssim 120$~MeV, but for broad resonances with widths of several hundred MeV, the model dependence can be very large. In an upcoming work we plan to perform a detailed resonance analysis with a search of $t$-matrix poles and residues. In an application of the L+P method, successfully applied in pion elastic scattering and pion photoproduction, we can expect to reduce the model dependence for the resonance properties considerably. The new solution EtaMAID2018 is online available on the MAID web pages~\cite{MAID}. \begin{acknowledgments} We want to thank Deborah R\"onchen of the J\"ulich-Bonn, Victor Nikonov of the Bonn-Gatchina, and Mark Manley of the Kent-State-University collaborations for providing us with their most recent partial wave results and Andrey Sarantsev for many helpful discussions. This work was supported by the Deutsche Forschungsgemeinschaft (SFB 1044). \end{acknowledgments} \clearpage \begin{appendix} \section{Observables expressed in CGLN amplitudes}\label{app:obs} Here we give the differential cross section, the three single-spin asymmetries and the eight beam-target and beam-recoil double-polarization observables expressed in CGLN amplitudes. In addition we give the combination $\check{P}+\check{H}$, where most of the terms cancel. A full list of all polarization observables including also target-recoil polarization expressed in CGLN and in helicity amplitudes can be found in Ref.~\cite{Osmanovic:2017fwe}. In the literature, the sign definitions of double-polarization observables is not unique. For an overview of the conventions see Ref.~\cite{Sandorfi:2011nv}. 
Here we follow the conventions by Barker~\cite{Barker:75}, SAID~\cite{Arndt:2002} and MAID~\cite{MAID07}. \begin{eqnarray} \begin{split} \sigma_{0} =& \,\mbox{Re}\,\left\{ \fpf{1}{1} + \fpf{2}{2} + \sin^{2}\theta\,(\fpf{3}{3}/2 + \fpf{4}{4}/2 \right. \\ & \mbox{} \left. + \fpf{2}{3} + \fpf{1}{4} + \cos\theta\,\fpf{3}{4}) - 2\cos\theta\,\fpf{1}{2} \right\} \rho \nonumber\\ \check{\Sigma} =& -\sin^{2}\theta\;\mbox{Re}\,\left\{\left(\fpf{3}{3} +\fpf{4}{4}\right)/2 +\fpf{2}{3} +\fpf{1}{4} \right.\\ &\mbox{} \left.+ \cos\theta\,\fpf{3}{4}\right\}\rho \\ \check{T} =& \sin\theta\;\mbox{Im}\,\left\{\fpf{1}{3}-\fpf{2}{4}+\cos\theta\,(\fpf{1}{4}-\fpf{2}{3}) \right. \\ &\mbox{} \left. - \sin^{2}\theta\,\fpf{3}{4}\right\}\rho \\ \check{P} =& -\sin\theta\;\mbox{Im}\,\left\{ 2\fpf{1}{2} + \fpf{1}{3} - \fpf{2}{4} \right. \\ &\mbox{} \left. + \cos\theta\,(\fpf{1}{4} -\fpf{2}{3}) - \sin^{2}\theta\,\fpf{3}{4}\right\}\rho \\ \check{E} =& \,\mbox{Re}\,\left\{ \fpf{1}{1} + \fpf{2}{2} - 2\cos\theta\,\fpf{1}{2} \right. \\ &\mbox{} \left. + \sin^{2}\theta\,(\fpf{2}{3} + \fpf{1}{4}) \right\}\rho \\ \check{F} =& \sin\theta\;\mbox{Re}\,\left\{\fpf{1}{3} - \fpf{2}{4} - \cos\theta\,(\fpf{2}{3} - \fpf{1}{4})\right\}\rho \\ \check{G} =& \sin^{2}\theta\;\mbox{Im}\,\left\{\fpf{2}{3} + \fpf{1}{4}\right\}\rho \\ \check{H} =& \sin\theta\;\mbox{Im}\,\left\{2\fpf{1}{2} + \fpf{1}{3} - \fpf{2}{4} \right. \\ &\mbox{} \left. + \cos\theta\,(\fpf{1}{4} - \fpf{2}{3})\right\}\rho \\ \check{P}+\check{H} &= \sin^3\theta\;\mbox{Im}\,\left\{ \fpf{3}{4}\right\}\rho \\ \check{C}_{x'} =& \sin\theta\;\mbox{Re}\,\left\{\fpf{1}{1} -\fpf{2}{2} - \fpf{2}{3} + \fpf{1}{4} \right. \\ &\mbox{} \left. - \cos\theta\,(\fpf{2}{4} - \fpf{1}{3})\right\}\rho \\ \check{C}_{z'} =& \,\mbox{Re}\,\left\{2\fpf{1}{2} - \cos\theta\,(\fpf{1}{1} + \fpf{2}{2}) \right. \\ &\mbox{} \left. 
+ \sin^{2}\theta\,(\fpf{1}{3} + \fpf{2}{4})\right\}\rho \\ \check{O}_{x'} =& \sin\theta\;\mbox{Im}\,\left\{\fpf{2}{3} - \fpf{1}{4} + \cos\theta\,(\fpf{2}{4} - \fpf{1}{3})\right\}\rho \\ \check{O}_{z'} =& - \sin^{2}\theta\;\mbox{Im}\,\left\{\fpf{1}{3} + \fpf{2}{4}\right\}\rho \,. \end{split} \end{eqnarray} with $\check{\Sigma}={\Sigma}\,\sigma_0$ and $\rho=q/k$. \section{Expansion of CGLN amplitudes in terms of invariant amplitudes}\label{app:FtoA} The CGLN amplitudes are obtained from the invariant amplitudes $A_i$ by the following equations \cite{Den61}: \begin{eqnarray}\label{eq:CGLN1} \begin{split} {F}_1 =& \frac{W-M_N}{8\pi\,W}\,\sqrt{(E_i+M_N)(E_f+M_N)}\big[ A_1 \\ &+(W-M_N)\,A_4 - \frac{2M_N\nu_B}{W-M_N}\,(A_3-A_4)\big]\,,\nonumber \\ {F}_2 =& \frac{W+M_N}{8\pi\,W}\,|{\bold q}|\,\sqrt{\frac{E_i-M_N}{E_f+M_N}}\big[-A_1 + (W+M_N)\,A_4\\ &- \frac{2M_N\nu_B}{W+M_N}\,(A_3-A_4)\big]\,, \nonumber \\ {F}_3 =& \frac{W+M_N}{8\pi\,W}\,|{\bold q}|\,\sqrt{(E_i-M_N)(E_f+M_N)}\big[(W-M_N)\,A_2 \\ &+ A_3-A_4\big]\,, \nonumber \\ {F}_4 =& \frac{W-M_N}{8\pi\,W}\,{\bold q}^2\,\sqrt{\frac{E_i+M_N}{E_f+M_N}} \big[-(W+M_N)\,A_2 \\ &+A_3 - A_4\big]\,, \end{split} \end{eqnarray} with $\nu_B=(t-m_{\eta}^2)/(4m_N)$. \section{Background and Breit-Wigner resonance parameters}\label{app:BG-BW} In this appendix we list all parameters used in our isobar model. In table~\ref{tab:BWhadronic} we give the hadronic parameters for 21 $N^*$ resonances used in EtaMAID2018. For all of them we found couplings to the $\eta N$ channel, and for 12 of them also to the $\eta^\prime N$ channel. Table~\ref{tab:BWelectro} gives all photon couplings for proton and neutron targets and the newly introduced unitarization phases for all four channels. Finally, table~\ref{tab:background} gives all background parameters for Born terms and Regge amplitudes. \begin{table*}[ht] \caption{Hadronic Breit-Wigner parameters for nucleon resonances. 
Masses $M_R$ and widths $\Gamma_R$ are given in MeV and the branching ratios $\beta$ in $\%$. The coupling constants $g$ are dimensionless. The damping parameters of the hadronic vertex functions are fixed at $X=450$~MeV. For channel openings below threshold, conventional branching ratios are not defined and are marked with $-$. Further non-zero couplings are also found for $N(1440)\frac{1}{2}^+$ with $g_{\eta N}=1.0$, for $N(1650)\frac{1}{2}^-$ with $g_{K \Sigma}=1.21$ and for $N(1710)\frac{1}{2}^+$ with $g_{\omega N}=0.907$. \\}\label{tab:BWhadronic} \begin{tabular}{|c|ccc|ccccccccc|c|} \hline $N(\cdots)J^\pi$ & $\ell$ & $\zeta_{\eta N}$ &$\zeta_{\eta^\prime N}$ & $M_R$ & $\Gamma_R$ & $\beta_{\pi N}$ & $\beta_{\pi\pi N}$ & $\beta_{\eta N}$ & $\beta_{K \Lambda}$ & $\beta_{K \Sigma}$ & $\beta_{\omega N}$ & $\beta_{\eta^\prime N}$ & $g_{\eta^\prime N}$\\ \hline $N(1440)\frac{1}{2}^+$ & 1 & $+1$ & & 1430.0 & 350.0 & $65.0$ & $35.0$ & $-$ & $ - $ & $ - $ & $ - $ & $ - $ & $ 0 $ \\ $N(1520)\frac{3}{2}^-$ & 2 & $+1$ & & 1520.0 & 100.0 & $61.0$ & $38.9$ & $0.08$ & $ - $ & $ - $ & $ - $ & $ - $ & $ 0 $ \\ $N(1535)\frac{1}{2}^-$ & 0 & $+1$ & & 1521.7 & 174.7 & $52.0$ & $13.6$ & $34.7$ & $ - $ & $ - $ & $ - $ & $ - $ & $ 0 $ \\ $N(1650)\frac{1}{2}^-$ & 0 & $-1$ & & 1626.3 & 132.5 & $51.0$ & $27.2$ & $18.8$ & $ 3.0$ & $ - $ & $ - $ & $ - $ & $ 0 $ \\ $N(1675)\frac{5}{2}^-$ & 2 & $-1$ & & 1680.0 & 100.0 & $41.0$ & $57.1$ & $0.94$ & $ 1.0$ & $ - $ & $ - $ & $ - $ & $ 0 $ \\ $N(1680)\frac{5}{2}^+$ & 3 & $+1$ & & 1690.0 & 145.3 & $62.0$ & $37.8$ & $0.16$ & $ 0 $ & $ - $ & $ - $ & $ - $ & $ 0 $ \\ $N(1700)\frac{3}{2}^-$ & 2 & $+1$ & & 1659.6 & 83.9 & $15.0$ & $80.8$ & $1.16$ & $ 3.0$ & $ 0 $ & $ - $ & $ - $ & $ 0 $ \\ $N(1710)\frac{1}{2}^+$ & 1 & $+1$ & & 1669.5 & 63.2 & $ 5.0$ & $68.2$ & $11.9$ & $15.0$ & $ 0 $ & $ - $ & $ - $ & $ 0 $ \\ $N(1720)\frac{3}{2}^+$ & 1 & $+1$ & & 1750.0 & 395.5 & $11.0$ & $79.7$ & $1.28$ & $ 8.0$ & $ 0 $ & $ - $ & $ - $ & $ 0 $ \\ 
$N(1860)\frac{5}{2}^+$ & 3 & $-1$ & $+1$ & 1885.8 & 197.4 & $20.0$ & $76.5$ & $3.55$ & $ 0 $ & $ 0 $ & $ 0 $ & $ - $ & $0.700$ \\ $N(1875)\frac{3}{2}^-$ & 2 & $+1$ & $-1$ & 1893.9 & 320.0 & $ 4.0$ & $46.0$ & $11.0$ & $ 4.0$ & $15.0$ & $20.0$ & $ - $ & $0.168$ \\ $N(1880)\frac{1}{2}^+$ & 1 & $+1$ & $-1$ & 1882.1 & 90.0 & $ 6.0$ & $74.6$ & $0.44$ & $ 2.0$ & $17.0$ & $ 0 $ & $ - $ & $0.400$ \\ $N(1895)\frac{1}{2}^-$ & 0 & $+1$ & $+1$ & 1894.4 & 70.7 & $ 2.5$ & $63.2$ & $3.27$ & $18.0$ & $13.0$ & $ 0 $ & $ - $ & $0.405$ \\ $N(1900)\frac{3}{2}^+$ & 1 & $-1$ & $-1$ & 1898.7 & 450.0 & $ 3.0$ & $63.9$ & $3.06$ & $12.0$ & $ 5.0$ & $13.0$ & $0.03$ & $0.563$ \\ $N(1990)\frac{7}{2}^+$ & 3 & $+1$ & $+1$ & 2227.0 & 389.0 & $ 2.0$ & $89.9$ & $3.61$ & $ 0 $ & $ 0 $ & $ 0 $ & $ 4.5$ & $0.347$ \\ $N(2000)\frac{5}{2}^+$ & 3 & $-1$ & $+1$ & 2116.8 & 246.9 & $ 8.0$ & $87.3$ & $2.30$ & $ 0 $ & $ 0 $ & $ 0 $ & $ 2.4$ & $0.300$ \\ $N(2060)\frac{5}{2}^-$ & 2 & $+1$ & $-1$ & 1984.5 & 159.8 & $11.0$ & $84.1$ & $1.58$ & $ 0 $ & $ 3.0$ & $ 0 $ & $ 0.3$ & $0.130$ \\ $N(2100)\frac{1}{2}^+$ & 1 & $+1$ & $+1$ & 2010.0 & 260.0 & $16.0$ & $78.2$ & $1.69$ & $ 0 $ & $ 0 $ & $ 0 $ & $ 4.1$ & $0.300$ \\ $N(2120)\frac{3}{2}^-$ & 2 & $+1$ & $-1$ & 2061.3 & 101.9 & $ 5.0$ & $94.9$ & $0.05$ & $ 0 $ & $ 0 $ & $ 0 $ & $0.03$ & $0.021$ \\ $N(2190)\frac{7}{2}^-$ & 4 & $-1$ & $+1$ & 2250.0 & 591.2 & $16.0$ & $78.8$ & $4.54$ & $ 0.5$ & $ 0 $ & $ 0 $ & $0.18$ & $0.100$ \\ $N(2250)\frac{9}{2}^-$ & 4 & $+1$ & $-1$ & 2250.0 & 733.2 & $12.0$ & $84.4$ & $3.50$ & $ 0 $ & $ 0 $ & $ 0 $ & $0.10$ & $0.085$ \\ \hline \end{tabular} \end{table*} \begin{table*}[ht] \caption{Electromagnetic Breit-Wigner parameters for nucleon resonances. Photon couplings $_{N}\!A_{\lambda}$ are given in $10^{-3}/\sqrt{\mbox{GeV}}$. Unitary phases $\phi$ are given in degrees. The damping parameters of the electromagnetic vertex functions are fixed at $X_\gamma=0$. 
\\}\label{tab:BWelectro} \begin{tabular}{|c|cccc|cccc|} \hline $N(\cdots)J^\pi$ & $_pA_{1/2}$& $_pA_{3/2}$& $_nA_{1/2}$& $_nA_{3/2}$ & $\phi_{\eta p}$ & $\phi_{\eta n}$ & $\phi_{\eta^\prime p}$ & $\phi_{\eta^\prime n}$\\ \hline $N(1440)\frac{1}{2}^+$ & $-60.0$ & $0$ & $40.0$ & $0$ & $-0.4$ & $-89.0$ & $0$ & $0$ \\ $N(1520)\frac{3}{2}^-$ & $-39.7$ & $116.8$ & $-160.0$ & $-94.0$ & $55.3$ & $73.5$ & $0$ & $0$ \\ $N(1535)\frac{1}{2}^-$ & $115.0$ & $0$ & $-101.9$ & $0$ & $29.0$ & $28.2$ & $0$ & $0$ \\ $N(1650)\frac{1}{2}^-$ & $ 55.0$ & $ 0 $ & $-25.4$ & $ 0 $ & $ 6.0$ & $ 15.5$ & $ 0 $ & $ 0 $ \\ $N(1675)\frac{5}{2}^-$ & $ 23.7$ & $ 20.0$ & $ -9.8$ & $ 43.2$ & $ 78.4$ & $ 59.1$ & $ 0 $ & $ 0 $ \\ $N(1680)\frac{5}{2}^+$ & $-29.4$ & $133.0$ & $129.7$ & $ 10.0$ & $ 64.6$ & $ 89.0$ & $ 0 $ & $ 0 $ \\ $N(1700)\frac{3}{2}^-$ & $ 15.2$ & $-14.0$ & $ 93.4$ & $-32.1$ & $ 60.9$ & $ 57.7$ & $ 0 $ & $ 0 $ \\ $N(1710)\frac{1}{2}^+$ & $ 5.5$ & $ 0 $ & $-42.2$ & $ 0 $ & $-47.1$ & $-79.4$ & $ 0 $ & $ 0 $ \\ $N(1720)\frac{3}{2}^+$ & $100.0$ & $ 7.7$ & $-64.9$ & $ 63.9$ & $ 87.8$ & $ 56.3$ & $ 0 $ & $ 0 $ \\ $N(1860)\frac{5}{2}^+$ & $-30.7$ & $ 29.0$ & $-24.5$ & $ 33.7$ & $-83.0$ & $-89.0$ & $-39.6$ & $-61.3$ \\ $N(1875)\frac{3}{2}^-$ & $ 18.0$ & $-35.4$ & $-32.0$ & $ 50.4$ & $ 34.6$ & $ 30.3$ & $-20.8$ & $ 86.2$ \\ $N(1880)\frac{1}{2}^+$ & $ 60.4$ & $ 0 $ & $ -6.6$ & $ 0 $ & $ 84.9$ & $ 89.0$ & $ 89.0$ & $ 60.7$ \\ $N(1895)\frac{1}{2}^-$ & $-32.0$ & $ 0 $ & $ 42.9$ & $ 0 $ & $ 51.5$ & $ 58.9$ & $ 57.8$ & $ 41.0$ \\ $N(1900)\frac{3}{2}^+$ & $-50.2$ & $-67.0$ & $-42.5$ & $ 17.9$ & $ 47.6$ & $ 89.0$ & $ 43.4$ & $ 89.0$ \\ $N(1990)\frac{7}{2}^+$ & $-12.4$ & $ 57.0$ & $-43.3$ & $-28.1$ & $ 6.3$ & $ 3.7$ & $ 11.8$ & $ -7.9$ \\ $N(2000)\frac{5}{2}^+$ & $-73.1$ & $-12.9$ & $ 12.8$ & $-59.2$ & $ 89.0$ & $ 51.5$ & $ 89.0$ & $ 50.8$ \\ $N(2060)\frac{5}{2}^-$ & $ 21.3$ & $ 62.0$ & $ 43.0$ & $ 6.1$ & $ 70.6$ & $ 67.3$ & $ 89.0$ & $ 89.0$ \\ $N(2100)\frac{1}{2}^+$ & $ 63.9$ & $ 0 $ & $-82.7$ & $ 0 $ 
& $ 89.0$ & $ 14.5$ & $ 58.1$ & $ 36.3$ \\ $N(2120)\frac{3}{2}^-$ & $113.5$ & $160.0$ & $160.0$ & $100.0$ & $-26.2$ & $-89.0$ & $ 56.6$ & $ 24.3$ \\ $N(2190)\frac{7}{2}^-$ & $ 26.7$ & $ 60.0$ & $ 34.5$ & $ 18.7$ & $-89.0$ & $-89.0$ & $ 59.2$ & $ 7.5$ \\ $N(2250)\frac{9}{2}^-$ & $-31.2$ & $-20.0$ & $ 24.1$ & $ 12.5$ & $ 82.8$ & $ 89.0$ & $ 89.0$ & $ 88.2$ \\ \hline \end{tabular} \end{table*} \begin{table*}[ht] \caption{Background parameters for Born terms and Regge exchanges. The Regge damping parameters $\Lambda_{R}$ for $\eta$ and $\eta^\prime$ photoproduction are given in units of GeV, the Regge-cut parameters $d_c$ in GeV$^{-2}$, all other parameters are dimensionless. The Regge-cut parameters are the same for $\eta$ and $\eta^\prime$ photoproduction. \\}\label{tab:background} \begin{tabular}{|c|c||c|c|} \hline $g_{\eta NN}^2/4\pi$ & 0.063 & $g_{\eta^\prime NN}^2/4\pi$ & 0.060 \\ $\alpha_{B,\eta}$ & 4.51 & $\alpha_{B,\eta^\prime}$ & 3.95 \\ $\Lambda_{R,\eta}$& 0.974 & $\Lambda_{R,\eta^\prime}$& 0.440 \\ $\lambda_{\eta \gamma}^{\rho}$ & 0.910 & $\lambda_{\eta^\prime \gamma}^{\rho}$ & 1.049 \\ $\lambda_{\eta \gamma}^{\omega}$ & 0.246 & $\lambda_{\eta^\prime \gamma}^{\omega}$ & 0.363 \\ $\lambda_{\eta \gamma}^{b_1}$ & 0.1 & $\lambda_{\eta^\prime \gamma}^{b_1}$ & 1 \\ $g_{\rho}^{v}$ & 2.71 & $g_{\rho}^{t}$ & 4.20 \\ $g_{\omega}^{v}$ & 14.2 & $g_{\omega}^{t}$ & 0 \\ $g_{h_1}/g_{b_1}$ & 0.667 & $g_{b_1}^{t}$ & $-7.0$ \\ $c_{\rho \mathbb P}$ & 4.64 & $c_{\omega \mathbb P}$ & $-5.00$ \\ $c_{\rho f_2}$ & 3.10 & $c_{\omega f_2}$ & $1.11$ \\ ${\tilde c}_{\rho\mathbb P}$ & 0 & ${\tilde c}_{\omega\mathbb P}$ & 0 \\ ${\tilde c}_{\rho f_2}$ & 0.245 & ${\tilde c}_{\omega f_2}$ & $-0.122$ \\ $d_{c,\rho\mathbb P}$ & 12.1 & $d_{c,\rho f_2}$ & 12.1 \\ $d_{c,\omega\mathbb P}$ & 2.09 & $d_{c,\omega f_2}$ & 2.09 \\ \hline \end{tabular} \end{table*} \end{appendix} \clearpage
1,477,468,749,940
arxiv
\section{Introduction} \emph{Cops and Robber} is probably the most classical combinatorial pursuit-evasion game on graphs. The robber models an intruder in a network that the cops try to capture. Two players play with complete information on a fixed finite graph~$G = (V,E)$. The cop player controls a set of~$k$ cops, each occupying a vertex of~$G$ (possibly several cops on the same vertex), while the robber player controls a single robber that also occupies a vertex of~$G$. The players take alternating turns, where the cop player in his turn can decide for each cop individually whether to stay at its position or move the cop along an edge of~$G$ onto an adjacent vertex. Similarly, the robber player on her turn can leave the robber at its position or move it along an edge of~$G$. The cop player starts by choosing starting positions for his~$k$ cops and wins the game as soon as at least one cop occupies the same vertex as the robber, i.e., when the robber is captured. The robber player, seeing the cops positions, chooses the starting position for her robber and wins if she can avoid capture indefinitely. The least integer~$k$ for which, assuming perfect play on either side,~$k$ cops can always capture the robber, is called the \emph{cop number} of~$G$, usually denoted by~$c(G)$. In this paper, we introduce \emph{Primal-Dual Cops and Robber} which is played on a plane graph~$G$, i.e., with a fixed plane embedding. Here, the cops occupy the faces of~$G$ and can move between adjacent faces (i.e., faces that share an edge), while the robber still moves along edges between adjacent vertices of~$G$. In this game, the robber is captured if \emph{every} face incident to the robber's vertex is occupied by at least one cop. Analogously, we call the least integer~$k$ for which~$k$ cops can always capture the robber in the Primal-Dual Cops and Robber game the \emph{primal-dual cop number} of~$G$ and denote it by~$c^*(G)$. 
An obvious lower bound for~$c^*(G)$ is the maximum number of faces incident to any vertex in~$G$: The robber can choose such a vertex as its start position and just stay there indefinitely (note that there is no \emph{zugzwang}, i.e., no obligation to move during ones turn). In particular, if~$G$ has maximum degree~$\Delta(G)$ and there exists a vertex~$v$ with~$\deg(v) = \Delta(G)$, which is not a cut-vertex,~then $c^*(G) \geq \Delta(G)$. E.g., $c^*(K_{2,n}) = \Delta(K_{2,n}) = n$ for any~$n \geq 2$. \subparagraph{Our contribution.} We investigate, whether the primal-dual cop number~$c^*(G)$ is bounded in terms of~$\Delta(G)$ for all plane graphs~$G$. The answer is `Yes' if $\Delta(G) \leq 4$ and `No' otherwise. \begin{theorem} \label{thm:main} Each of the following holds. \begin{enumerate} \item\label{enum:degree_3} For every plane graph~$G$ with~$\Delta(G) \leq 3$ we have~$c^*(G) \leq 3$. \item\label{enum:degree_4} For every plane graph~$G$ with~$\Delta(G) \leq 4$ we have~$c^*(G) \leq 12$. \item\label{enum:degree_5} For some~$n$-vertex plane graphs~$G$ with~$\Delta(G) = 5$ we have~$c^*(G) = \Omega\bigl(\sqrt{\log(n)}\bigr)$. \end{enumerate} \end{theorem} \subparagraph{Related work.} Let us just briefly mention that Cops and Robber was introduced by Nowakowski and Winkler~\cite{Nowakowski1983_VertexToVertex} and Quillot~\cite{Quilliot1978_Jeux} for one cop and Aigner and Fromme~\cite{Aigner1984_Planar3Cops} for $k$ cops 40 years ago. Since then numerous results and variants were presented, see e.g.,~\cite{Bonato2022_Invitation,Bonato2011_TheGameOfCopsAndRobbers}. Perhaps most similar to our new variant are the recent surrounding variant of Burgess et al.~\cite{Burgess2020_CopsThatSurroundARobber} with vertex-cops and the containment variant of Cryster et al.~\cite{Crytser2020_Containment,Pralat2015_Containment} with edge-cops. In these variants the robber is captured if every adjacent vertex, respectively every incident edge, is occupied by a cop. 
The smallest number of cops that always suffices for any planar graph $G$ is $3$ in the classical variant~\cite{Aigner1984_Planar3Cops}, $7$ in the surrounding variant~\cite{Bradshaw2019_SurroundingBoundedGenus}, $7\Delta(G)$ in the containment variant~\cite{Crytser2020_Containment} and $3$ when both, cops and robber, move on edges~\cite{Dudek2014_CopsAndRobberPlayingOnEdges}. \section{Cops win always if the maximum degree is at most four} We start with an observation that simplifies the proofs of items~\ref{enum:degree_3} and~\ref{enum:degree_4} in Theorem~\ref{thm:main}. \begin{observation} \label{obs:robber_avoids_deg1_vertices} Let the robber be on a vertex~$u$ with a neighbor~$v$ of degree~$1$. Then the robber is never required to move to~$v$ to evade the cops. \end{observation} This is true because the set of faces required to capture the robber at~$v$ is a subset of the faces required to capture him at~$u$. Further, his only possible moves at~$v$ are either staying there or moving back to~$u$. As there is no zugzwang, he could just stay at~$u$ all along. In both of the following proofs we assume that the graph contains only degree-$3$-vertices (respectively degree-$4$-vertices) and degree-$1$-vertices. This can always be achieved by adding leaves to vertices not yet having the correct degree. \begin{proof}[Proof of item \ref{enum:degree_3} in Theorem~\ref{thm:main}] We give a winning strategy for three cops~$c_1, c_2, c_3$ in a planar graph~$G$ with~$\Delta(G) \leq 3$. First the cops choose arbitrary faces to start on. Then the robber chooses its start vertex~$u$, which we assume to be of degree~$3$ by Observation~\ref{obs:robber_avoids_deg1_vertices} (it is trivial to capture him if all vertices have degree~$1$). Let~$\angle^u_1, \angle^u_2, \angle^u_3$ be the three angles incident to~$u$. We denote the face containing an angle~$\angle$ by~$f(\angle)$ and define for each cop~$c_i$ a \emph{target face}~$f_i$, $i = 1,2,3$. 
Initially we set~$f_i = f(\angle^u_i)$. The goal of each cop is to reach his target face, thereby capturing the robber when all three cops arrive. If the robber moves, each cop updates his target face. Our strategy guarantees that the total distance of all three cops to their targets faces decreases over time, so it reaches zero after finitely many turns. Clearly, in every game the robber has to move at some point to avoid being captured. Assume that the robber moves from vertex~$u$ to vertex~$v$ (both of degree~$3$ by Observation~\ref{obs:robber_avoids_deg1_vertices}). Without loss of generality the angles around~$u$ and~$v$ are labeled as in Figure~\ref{fig:upper_bound_max_degree3_angles} with~$f_i = f(\angle^u_i)$ being the current target face of cop~$c_i$, $i=1,2,3$. \begin{figure}[tb] \centering \includegraphics{fig/upper_bound_max_degree3_angles.pdf} \caption{Labeling of the angles for a robber move from~$u$ to~$v$ (and possibly further to~$w$).} \label{fig:upper_bound_max_degree3_angles} \end{figure} First assume that~$c_3$ (or symmetrically~$c_2$) has not reached his target face yet. In this case we assign the new target faces~$f_1 = f(\angle^v_1)$, $f_2 = f(\angle^v_2)$ and~$f_3 = f(\angle^v_3)$. Note that for~$i = 1,2$ faces~$f(\angle^u_i)$ and~$f(\angle^v_i)$ are adjacent, so cop~$c_i$ can keep his distance to his target face unchanged (or even decrease it) during his next turn. Further note that~$f(\angle^u_3) = f(\angle^v_3)$, so cop~$c_3$ can even decrease his distance by one during the next turn. Thus the total distance of the three cops to their target faces decreased by at least one. It remains the case that~$c_2$ and~$c_3$ have already reached their target faces (but~$c_1$ did not, as the game would be over otherwise). In this case we move~$c_1$ one step towards his target face~$f_1 = f(\angle^u_1)$ and~$c_2,c_3$ both to~$f(\angle^v_2)$. Now its the robber's turn again. 
If she does not move, we assign target faces $f_i = f(\angle^v_i)$, $i=1,2,3$, and the total distance decreases after the cops' next turn. If she moves back to~$u$, we assign target faces $f_i = f(\angle^u_i)$, $i=1,2,3$, and the total distance decreases after the cops' next turn. The last possibility for the robber is to move towards another neighbor $w$ of~$v$, see Figure~\ref{fig:upper_bound_max_degree3_angles}. Then we assign~$f_1 = f(\angle^v_1)$ and~$f_2,f_3$ to be the faces containing the other two angles at~$w$. In their next turn, $c_2$ and $c_3$ can again reach their target faces, while $c_1$ can decrease his distance to his target face $f(\angle^v_1)$ by one compared to the initial situation with the robber at vertex~$u$. Again, the total distance is decreased, which concludes the proof. \end{proof} To prove item \ref{enum:degree_4} in Theorem~\ref{thm:main}, we reduce our Primal-Dual Cops and Robber to the classical Cops and Robber with cops on vertices of $G$ and then use a result from the literature. \begin{lemma} \label{lem:face_cops_simulate_vertex_cop} In a plane graph~$G$ with~$\Delta(G) \leq 4$, four face-cops can simulate a vertex-cop. \end{lemma} \begin{proof} Let~$c$ be a vertex-cop starting at a vertex~$u \in V(G)$ with up to four incident angles~$\angle^u_i$ (for~$i \in \{1,2,3,4\}$). We place four face-cops on the (up to) four faces~$f(\angle^u_i)$. If the vertex-cop moves to an adjacent vertex~$v$, the four face-cops around it can in one step also move to faces containing the angles incident to~$v$, see Figure~\ref{fig:upper_bound_max_degree4_angles} for the case that~$u$ and~$v$ both have degree~$4$. For vertices of degree less than~$4$ it only gets easier for the face-cops.
\begin{figure}[tb] \centering \includegraphics{fig/upper_bound_max_degree4_angles.pdf} \caption{A vertex-cop and its four accompanying face-cops moving from~$u$ to~$v$.} \label{fig:upper_bound_max_degree4_angles} \end{figure} \end{proof} An immediate corollary of Lemma~\ref{lem:face_cops_simulate_vertex_cop} is that~$c^*(G) \leq 4 \cdot c(G)$ for planar graphs~$G$ with~$\Delta(G) \leq 4$. With $c(G) \leq 3$ for all planar graphs~$G$~\cite{Aigner1984_Planar3Cops}, item~\ref{enum:degree_4} in Theorem~\ref{thm:main} follows. \section{Robber sometimes wins if the maximum degree is at least five} In this section we prove item~\ref{enum:degree_5} in Theorem~\ref{thm:main}, i.e., that~$c^*(G) = \Omega\bigl(\sqrt{\log(n)}\bigr)$ for some $n$-vertex plane graphs~$G$ with~$\Delta(G) \geq 5$. We utilize a result of Nisse and Suchan~\cite{Nisse2008_FastRobber} about the cop number~$c_{p,q}(G)$ for a different variant of Cops and Robber for any graph~$G$ and positive integers~$p$ and~$q$. Here (as in the classical variant) the cops and the robber are on the vertices of~$G$. However, in each turn the cops may traverse up to~$p$ edges of~$G$, while the robber may traverse up to~$q$ edges of~$G$. We refer to~$p$ and~$q$ as the \emph{velocities} of the cops and the robber, respectively. \begin{theorem}[\cite{Fomin2010_FastRobber,Nisse2008_FastRobber}] \label{thm:fast_robber} Let~$G_n$ be the $n \times n$ grid graph, $p$ be the velocity of the cops and~$q$ be the velocity of the robber. If~$p < q$, then~$c_{p,q}(G_n) = \Omega\bigl(\sqrt{\log(n)}\bigr)$. \end{theorem} The idea to prove item~\ref{enum:degree_5} in Theorem~\ref{thm:main} is to construct a \enquote{grid-like} graph~$G_{n,s,r}$ for positive integers $n,s,r$ in which the robber in the primal-dual variant can move around faster than the cops. Then she can simulate the evasion strategy of the robber in the variant of Nisse and Suchan.
We start with the~$n \times n$ grid graph~$G_n$, $n \geq 3$, with a planar embedding such that the $4$-faces are the inner faces. We call the vertices of $G_n$ the \emph{grid vertices}. Then, each edge of~$G_n$ is subdivided by~$2s$ new vertices, called \emph{subdivision vertices}, to obtain~$G_{n,s}$. Two grid vertices are called \emph{neighboring} if they are adjacent in~$G_n$. Further, inside each inner face of $G_{n,s}$ we add~$r$ nested cycles, called \emph{rings}, of length~$12s$ each and call their vertices the \emph{ring vertices}. Between any two consecutive rings we add a planar matching of~$12s$ edges. Each inner face of~$G_{n,s}$ has~$8s$ subdivision vertices on its boundary and $12s$ ring vertices on its outermost ring. Finally, we add (in a crossing-free way) three edges from each subdivision vertex to the outermost ring vertices in the two incident faces of $G_{n,s}$ such that two edges go to one ring, the third edge to the other ring, and every ring vertex receives exactly one such edge. Along the $2s$ subdivision vertices of each subdivided edge of~$G_{n,s}$, the side that receives two edges alternates. Thus each inner face of~$G_{n,s}$ receives~$12s$ edges which are connected to the~$12s$ vertices of the outermost ring such that the drawing remains planar. \begin{figure}[tb] \centering \includegraphics[page=2]{fig/lower_bound_max_degree5_grids.pdf} \caption{ $G_{4,2,2}$: An $n \times n$ grid with each edge subdivided four times and two rings. Faces are colored according to their closest grid vertex. Deep and shallow faces are light and dark, respectively. } \label{fig:lower_bound_grid} \end{figure} Call the resulting graph~$G_{n,s,r}$ and note that~$\Delta(G_{n,s,r}) = 5$. See also Figure~\ref{fig:lower_bound_grid}. We shall use a robber strategy in which she focuses only on grid vertices and moves between these through the paths of subdivision vertices, i.e., only plays on~$G_{n,s}$.
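The size of this construction can be checked by a direct count (a sketch of our own, not part of the formal argument): there are $n^2$ grid vertices, $2s$ subdivision vertices on each of the $2n(n-1)$ edges of~$G_n$, and $r$ rings of $12s$ ring vertices in each of the $(n-1)^2$ inner faces, so for fixed~$s$ and~$r$ the graph has $O(n^2)$ vertices, as used at the end of the proof of item~\ref{enum:degree_5}.

```python
def num_vertices(n: int, s: int, r: int) -> int:
    """Vertex count of G_{n,s,r}: grid, subdivision and ring vertices."""
    grid = n * n                            # vertices of the n x n grid G_n
    subdivision = 2 * s * 2 * n * (n - 1)   # 2s per edge, 2n(n-1) edges
    rings = 12 * s * r * (n - 1) ** 2       # r rings of length 12s per inner face
    return grid + subdivision + rings

# G_{4,2,2}: 16 + 96 + 432 = 544 vertices; for fixed s and r the leading
# term is (1 + 4s + 12*s*r) * n^2, i.e. O(n^2) vertices overall.
```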
The purpose of the additional rings in~$G_{n,s,r}$ is to slow down the cops and force them to stay close to grid and subdivision vertices, too, thereby simulating the game of Nisse and Suchan on~$G_n$. Formally, we call an inner face of $G_{n,s,r}$ \emph{shallow} if it is incident to some subdivision vertex, and \emph{deep} otherwise. Our first lemma implies that, due to the number of rings, cops should not use deep faces. \begin{lemma} \label{lem:cop_moves_along_shallow_faces} Let~$a_1,a_2$ be two shallow faces of~$G_{n,s,r}$ inside the same inner face~$A$ of~$G_n$. If~$r > 3s$, then any cop moving from~$a_1$ to~$a_2$ along a shortest path without leaving~$A$ uses only shallow faces. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:cop_moves_along_shallow_faces}] First observe that there are exactly~$12s$ shallow faces inside~$A$; one for each edge of the outermost ring. Hence, the cop may move from~$a_1$ to~$a_2$ using only shallow faces in no more than~$6s$ steps. On the other hand, the deep face~$b$ inside the innermost ring is at distance~$r > 3s$ from each of~$a_1,a_2$ and hence no shortest path between~$a_1$ and~$a_2$ uses~$b$. Let~$H$ be the subgraph of the plane dual of~$G_{n,s,r}$ induced by all inner faces inside~$A$, except~$b$. Then~$H \cong P_{r} \ \square\ C_{12s}$ is a square grid on a cylinder of height~$r$ and circumference~$12s$, with the shallow faces forming a boundary cycle~$C$. Since~$a_1,a_2$ are on~$C$ and each shortest path lies inside~$H$, any such path is contained in~$C$, i.e., uses only shallow faces. \end{proof} We must prevent the cops from taking shortcuts through the outer face~$f_0$ of~$G_{n,s,r}$. To this end let~$G'_{n,s,r}$ be a copy of~$G_{n,s,r}$ with outer face~$f'_0$.
Change the outer face of~$G'_{n,s,r}$ such that~$f'_0$ is an inner face (while not changing the cyclic ordering of the edges around the vertices) and define~$\overline{G}_{n,s,r}$ to be the graph obtained from gluing~$G_{n,s,r}$ into face~$f'_0$ of~$G'_{n,s,r}$ and identifying corresponding vertices. The robber will always stay on vertices of~$G_{n,s,r}$, and whenever a cop uses a face~$f'$ of~$G'_{n,s,r}$ she acts as if the cop were on the corresponding face~$f$ of~$G_{n,s,r}$. Without loss of generality, we can therefore assume below that the game is played on~$G_{n,s,r}$ with the cops being prohibited from entering the outer face. For an inner face~$f$ of~$G_{n,s,r}$, we denote by~$v_f$ the grid vertex closest to~$f$, breaking ties arbitrarily. \begin{lemma} \label{lem:cop_moves_along_grid} Let~$a,b$ be two shallow faces whose closest grid vertices~$v_a,v_b$ have distance~$d$ in~$G_n$. If~$r > 3s$, then in~$G_{n,s,r}$ the robber moving from~$v_a$ to~$v_b$ needs at most~$(2s+1)d$ steps, while any cop moving from~$a$ to~$b$ needs at least~$3s(d - 4)$ steps. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:cop_moves_along_grid}] For the first part it is enough to observe that the robber may go along subdivision vertices, taking exactly~$2s+1$ steps for every corresponding edge in~$G_n$. For the second part, i.e., the lower bound on the number of moves for a cop, let~$A$ and~$B$ be the inner faces of~$G_n$ containing the inner faces~$a$ and~$b$ of~$G_{n,s,r}$, respectively. We assume that~$d \geq 5$, as otherwise~$3s(d-4) \leq 0$ and there is nothing to show, and hence we have~$A \neq B$. More precisely, traveling from~$a$ to~$b$, the cop must traverse (inner faces of~$G_{n,s,r}$ corresponding to) at least~$d-1$ different inner faces of~$G_n$. Cutting off the initial part inside~$A$ and the final part inside~$B$, Lemma~\ref{lem:cop_moves_along_shallow_faces} implies that the remaining shortest path for the cop uses only shallow faces.
Thus, on his way, the cop visits shallow faces incident to at least~$d-3$ distinct grid vertices, i.e., $d-4$ transitions from a shallow face at a grid vertex to a shallow face at a neighboring grid vertex. As each such transition requires~$3s$ moves, the claim follows. \end{proof} \begin{proof}[Proof of item~\ref{enum:degree_5} in Theorem~\ref{thm:main}] Nisse and Suchan~\cite{Nisse2008_FastRobber} (see also~\cite{Fomin2010_FastRobber} for the omitted proofs) describe an evasion strategy for a robber with velocity~$q$ that requires~$\Omega\bigl(\sqrt{\log(n)}\bigr)$ vertex-cops with velocity~$p$ to capture her in~$G_n$, provided $q > p$; see Theorem~\ref{thm:fast_robber}. We describe how a robber with velocity~$1$ in~$G_{n,s,r}$ (for sufficiently large~$n, s, r$) can simulate this strategy against face-cops with velocity~$1$. We choose~$p = 15$,~$q = 16$ and consider the game of Nisse and Suchan for these velocities. For their graph~$G_n$ in which the robber can win against $k = \Omega\bigl(\sqrt{\log(n)}\bigr)$ vertex-cops, we then consider $G_{n,s,r}$ with $s = 16$ and $r = 3s+1 = 49$. Now we copy the evasion strategy~$\mathcal{S}$ for the robber as follows: Whenever it is the robber's turn and the face-cops occupy faces $f_1, f_2, \ldots, f_k$ in~$G_{n,s,r}$, consider the corresponding situation in~$G_n$ where the vertex-cops occupy $v_{f_1}, v_{f_2}, \ldots, v_{f_k}$. Based on these positions,~$\mathcal{S}$ tells the robber to go to a vertex~$v$ at distance~$d \leq q = 16$ from the current position of the robber in~$G_n$. By Lemma~\ref{lem:cop_moves_along_grid}, the robber in~$G_{n,s,r}$ can go to~$v$ in at most $(2s+1)d \leq (2\cdot 16+1)\cdot 16 = 528$ turns. In the meantime, each face-cop also makes up to~$528$ moves in~$G_{n,s,r}$, traveling from some face~$a$ to some face~$b$, which is interpreted in~$G_n$ as the corresponding vertex-cop traveling from~$v_a$ to~$v_b$.
For~$v_a$ and~$v_b$ to be at distance~$d' \geq 16$ in~$G_n$, by Lemma~\ref{lem:cop_moves_along_grid} the face-cop needs at least $3s(d'-4) \geq 3 \cdot 16 \cdot 12 = 576$ turns, which is strictly more than~$528$. Thus, after~$528$ turns, each vertex-cop made at most $p = 15$ steps in~$G_n$, as required for strategy~$\mathcal{S}$. Hence, the robber can evade~$k$ face-cops in~$G_{n,s,r}$, proving~$c^*(G_{n,s,r}) > k$. Since~$G_{n,s,r}$ for~$s,r \in O(1)$ has~$O(n^2)$ vertices, this completes the proof. \end{proof} \section{Conclusions} Let $c^*_\Delta$ denote the largest primal-dual cop number among all plane graphs with maximum degree $\Delta$. We have shown that $c^*_3 = 3$, $c^*_4 \leq 12$ (this bound is certainly not optimal), and $c^*_5 = \infty$, while it is easy to see that $c^*_1 = 1$, $c^*_2 = 2$, and $c^*_\Delta = \infty$ for all $\Delta > 5$. Let us remark that our proof for $\Delta = 5$ also holds for a variant of the game where the robber is already captured when one cop is on an incident face. On the other hand, our proof for $\Delta = 3$ holds verbatim to prove that three cops also suffice in a variant of the game where the graph is embedded without crossings in any other surface, which makes it interesting to consider $\Delta = 4$ here. \bibliographystyle{plainurl}
\section{Introduction} Galaxy mergers are believed to be not just common events in the universe, but also fundamental pieces in the evolution of galaxies, since they trigger bursts of star formation (Larson \& Tinsley 1978; Sanders \& Mirabel 1996) and are a key ingredient in the formation of elliptical galaxies and bulges (Toomre \& Toomre 1972; Mihos \& Hernquist 1994, 1996; Kazantzidis et al. 2005; Di Matteo et al. 2007). More recently, it was also found that many interacting and merging galaxies are sites of ongoing massive cluster formation (Schweizer 1998; Mengel et al. 2008). Standard numerical simulations of galaxy mergers that include gas (Mihos \& Hernquist 1994; Barnes \& Hernquist 1996; Kazantzidis et al. 2005; Cox et al. 2006; Di Matteo et al. 2007) have been able to reproduce the observed starbursts that occur during the merging process in nuclear gaseous disks. However, they intentionally avoid fragmentation through high minimum temperatures and large gravitational softening lengths, and therefore they fail to reproduce the formation of massive star clusters. Only recently have simulations reached the resolution required to study gas fragmentation at least on large scales (Bournaud et al. 2008; Saitoh et al. 2009; Teyssier et al. 2010; Matsui et al. 2012). In those simulations, massive star clusters are indeed formed by gas fragmentation into collapsing clumps, and it is therefore relevant to have a criterion for gravitational instabilities in such a complex environment. The study of the gravitational stability of fluids started with the work of Jeans (1902) for a uniform, infinite and isothermal gas. It was later extended by Bonnor (1956) and Ebert (1955) to a finite and spherically symmetric fluid, to a rotationally supported one (Toomre 1964; Goldreich \& Lynden-Bell 1965) and to a magnetized fluid (Chandrasekhar \& Fermi 1953), among others.
In this letter, we study the stability of the gaseous streams in the complex environment of a galaxy merger, by means of Smoothed Particle Hydrodynamics (SPH) numerical simulations. This work is organized as follows. We start in \S 2 with a discussion of the physical processes relevant in stabilizing gaseous streams in galaxy mergers, with analytical estimates for a stability criterion. Section 3 continues with the setup of the galaxy merger simulations and the resolution needed to resolve gravitational instabilities in galaxy mergers. In \S 4, we test the stability criterion by performing hydrodynamical simulations of galaxy mergers with the resolution discussed in \S 3. Finally, in \S 5, we summarize the results of this work. \section{Basic Physical Ingredients: Gas Pressure and Motion} High-resolution simulations of galaxy mergers generally find galactic streams, such as tails and bridges on large scales and more complex ones on the inner kpc scales, in which collapsing clumps are ubiquitous features formed by gravitational instabilities. However, no gravitational instability criterion for the complex and irregular case of gaseous streams in a galaxy merger has been found. The most basic physical processes that could overcome gravity, in the absence of magnetic and other fields, are gas pressure and motion. Since gas pressure is isotropic and depends only on the local density and temperature of the fluid, not on its geometry, its stabilizing role on small scales is the same as in a fluid with a regular geometry (i.e. with a given symmetry). On the other hand, the motion of streams is much more complex and not constrained to a single plane, but in the central regions (the inner kpc, where the bulk of the star and cluster formation happens) it is characterized by streams orbiting around the center of mass of the newly formed system. A simple and useful approach is to model individual streams as a piece of a rotating annulus.
In such a case, it is known from vector calculus that the rotational component of its motion is well described by an angular frequency, which is defined relative to an origin O of the coordinate system in which we describe the motion: \begin{equation} \rm \vec{\Omega}_{o} = \frac{\vec{r} \times \vec{v}}{\vec{r} \, \cdot \, \vec{r}} = \frac{\vec{r} \times \vec{v}}{r^2} = \hat{r} \times \frac{\vec{v}}{r}\,\,\,\, , \label{Omega} \end{equation} where $\rm \vec{r}=r\,\hat{r}$ and $\rm \vec{v}$ are, respectively, the position and velocity vectors. Under this approximation, the stability of individual streams is a very similar problem to the stability of annuli in a rotating sheet, with the difference that the streams in the merger case do not belong to the same plane ($\rm \vec{\Omega}_{o}$ of individual streams can have a different magnitude and direction). In such a case, from dimensional analysis it is straightforward to conclude that the results from the standard gravitational instability analysis in a rotating sheet (Toomre 1964; Goldreich \& Lynden-Bell 1965; Binney \& Tremaine 2008) should still be valid for a given stream: there is a range of unstable length scales, limited on small scales by thermal pressure (at the Jeans length $\rm \lambda_{\rm Jeans} = C_{\rm s}^2 / G\Sigma_{\rm gas} $) and on large scales by rotation (at the critical length set by rotation, which for this case can be defined as $\rm \lambda_{\rm rot} \equiv \pi^2 G \Sigma_{\rm gas} /|\vec{\Omega}_{o}|^2$). All intermediate length scales are unstable; the most rapidly growing mode has a wavelength $2 \, \lambda_{\rm Jeans}$ and the most unstable mode has a wavelength $\lambda_{\rm rot}/2$. Only a combination of pressure and rotation can stabilize the stream; this happens when the range of unstable wavelengths shrinks to zero (i.e. the two scales are comparable), which occurs for $\lambda_{\rm Jeans} \geq (q/\pi)^2 \, \lambda_{\rm rot}$ (Escala \& Larson 2008).
Therefore a stream will be stable if: \begin{equation} \rm |\vec{Q}_{o}| \equiv \frac{ C_{S} \, |\vec{\Omega}_{o}| }{G \, \Sigma_{gas}} \ge q \,\,\,\, , \label{Q} \end{equation} where $\rm C_{S}$ is the gas sound speed, $\rm |\vec{\Omega}_{o}|$ is the norm of the angular frequency vector, G is the gravitational constant, $\rm \Sigma_{gas}$ is the gas surface density and q is a number of the order of unity. Otherwise, if $\rm |\vec{Q}_{o}| < q$, a stream will be unstable. It is important to point out that the concept of angular frequency depends on the origin of the coordinate system chosen to describe the motion and, contrary to the case of the rotating sheet, there is not an obvious single choice for all streams in a galaxy merger. This becomes relevant in Section 4, when we compare the results of this section with numerical simulations, testing whether the stability criterion given by Eq. \ref{Q} is valid or not. Our approach in Section 4 will be to check whether, with a single origin O of the coordinate system, Eq. \ref{Q} is able to predict the gravitational instability of the streams. This assumption will introduce changes in the value of the angular frequency and therefore in the determination of the value of the threshold q (which will be an average value over all streams). However, our aim is to have a simple criterion that can be easily applied by other authors, and in such a case it is better to have a single $\rm |\vec{Q}_{o}|$ with an average fitting parameter q than one $\rm |\vec{Q}_{o\alpha}|$ and $\rm q_\alpha$ for each $\rm \alpha$th stream. In the case that all the streams are coplanar and orbit around the same point, we recover the standard Toomre Q parameter for a rotating sheet ($=\rm C_{S} \, \Omega/ G \, \Sigma_{gas}$; Toomre 1964), since then the direction of $\rm \vec{\Omega}_{o}$ and the origin O chosen to describe the motion are the same for all streams.
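The quantities entering Eqs.~\ref{Omega} and~\ref{Q} are straightforward to evaluate per gas particle in a simulation snapshot. The following sketch is our own illustration (the array layout and the externally supplied surface-density estimate are assumptions, not the code used for our figures); it computes $\rm |\vec{\Omega}_{o}|$ and $\rm |\vec{Q}_{o}|$ with the origin O placed at the center of mass:

```python
import numpy as np

G = 1.0  # gravitational constant (G = 1 in our internal units)

def q_parameter(pos, vel, mass, sigma_gas, c_s):
    """|Q_o| = C_S |Omega_o| / (G Sigma_gas) per particle (Eq. 2).

    pos, vel  : (N, 3) arrays of particle positions and velocities
    mass      : (N,) particle masses
    sigma_gas : (N,) local gas surface density estimate per particle
    c_s       : sound speed of the isothermal equation of state
    """
    # Measure positions and velocities relative to the center of mass O
    com = np.average(pos, axis=0, weights=mass)
    com_vel = np.average(vel, axis=0, weights=mass)
    r = pos - com
    v = vel - com_vel
    # Eq. 1: Omega_o = (r x v) / r^2, evaluated per particle
    r2 = np.einsum('ij,ij->i', r, r)
    omega = np.cross(r, v) / r2[:, None]
    omega_norm = np.linalg.norm(omega, axis=1)
    return c_s * omega_norm / (G * sigma_gas)
```

A particle (and hence the stream it belongs to) would then be flagged as unstable when this value falls below the threshold q found in Section 4.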
\section{Simulations of gravitational fragmentation in galaxy mergers} In the following we perform a set of idealized numerical experiments aimed at testing whether the stability parameter (Eq. \ref{Q}) successfully predicts which streams in a galaxy merger will be gravitationally unstable and fragment into clumps. These experiments are constructed as simply as possible, in order to guarantee that the gas fragmentation is due only to gravitational instabilities. For that reason, we use an isothermal equation of state instead of a multiphase medium, and do not include any feedback processes from star formation and/or AGN, which would complicate the analysis by introducing new sources that may trigger fragmentation. Without this extra physics we cannot aim at a realistic description of the ISM, but it will be enough for our main purpose, which is to study the onset of gravitational instability at large scales in a galaxy merger. The simulation consists of the merger of two equal-mass disk galaxies, and we let the two galaxies collide in a parabolic orbit with pericentric distance $\rm R_{min}$ =7.35 kpc. The simulations start with an initial separation of 49 kpc, where the separation distance is measured between the mass centers of the two galaxies, and the initial inclination angle between the disk planes of the individual galaxies is $\rm 90^{o}$. The galaxies are initialized using the code GalactICS; in particular, we used their `Milky Way model A' (see Kuijken \& Dubinski 1995 for details). In each galaxy model, we include a gaseous disk with the same exponential profile as the stellar component (Kuijken \& Dubinski 1995) and with a total gas mass corresponding to 10\% of the total stellar disk mass. The gas has an isothermal equation of state, $\rm P = c_{S}^{2} \, \rho$, where the sound speed is fixed at $\rm c_{S}= 12.8 \, km \, s^{-1}$, corresponding to a gas temperature of $\rm \sim 2 \times 10^{4}$K.
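As a consistency check (our own, not part of the simulation setup), the quoted sound speed and temperature are related through the isothermal relation $\rm c_{S}=\sqrt{k_{B}T/\mu m_{H}}$; for a mean molecular weight $\mu \simeq 1$ (an assumption on our part) this gives $\rm c_{S}\simeq 12.8$ km s$^{-1}$ at $\rm T \simeq 2\times 10^{4}$ K:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant [J/K]
m_H = 1.6735575e-27  # hydrogen atom mass [kg]

def sound_speed_kms(T: float, mu: float = 1.0) -> float:
    """Isothermal sound speed c_s = sqrt(k_B T / (mu m_H)), in km/s."""
    return math.sqrt(k_B * T / (mu * m_H)) / 1e3

# sound_speed_kms(2e4) is ~12.8 km/s, matching the value used in the runs
```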
At t=0, the gaseous disk of each isolated galaxy is gravitationally stable. In our simulations, we use the following internal units: [Mass] = $5.8 \times 10^{11}M_{\odot}$, [Distance] = 1.2 kpc and G=1. The total number of particles is 420,000: 200,000 sampling the gas, 120,000 the dark matter halo, 80,000 the disk component and 20,000 the bulge. The simulations were evolved using the SPH code Gadget-2 (Springel 2005), up to a time t=160 (in internal time units), which corresponds to a point where the galaxies are past their third (and final) pericentric passage and in which most of the gas ($>$ 80\%) has been funneled into the central kpc. Fig 1 (a, b, c, d) shows the evolution of the system at four times, t = 32 (a), 54 (b), 120 (c) and 136 (d), which correspond to before (a) and after (b) the first pericentric passage, to the second pericentric passage (c) and to the third pericentric encounter (d). The ring/oval structures seen in Fig. 1 (a, b, c) have been ubiquitous features since early simulations of galaxy mergers (e.g., Schwarz 1984) and are believed to be tightly wound spirals that are the gas response to tidal forcing (e.g., Barnes \& Hernquist 1996). \subsection{Minimum Gravitational Resolution} Before analyzing the stability of gaseous streams, it is necessary to check that we have the gravitational resolution required to resolve the fragmentation of streams into collapsing clumps; for that reason we performed a convergence test. We restarted the original simulation (run with a gravitational softening length $\rm \epsilon_{soft}$=0.4) at t=132, with the following gravitational softenings $\rm \epsilon_{soft}$: 0.04, 0.02, 0.01, 0.008, 0.006 in internal units. Fig 1 (e, f, g, h) shows the evolution of the restarted simulation at a later time t=134, in a region of radius 2 internal distance units (2.4 kpc), for different gravitational softening lengths: 0.4(e), 0.04(f), 0.01(g), 0.006(h).
Fig 1 (e, f, g, h) shows that as the softening lengths decrease, we find more gas fragmentation, until a point at which the resulting simulations converge. We find convergence of the results for $\rm \epsilon_{soft} \le 0.01$ and for that reason we choose to use, in the following section, a gravitational softening length of $\rm \epsilon_{soft} = 0.01$. The convergence can be understood if we take into account that over 99.6\% of the particles in such a region fulfill the condition $\rm \lambda_{rot} \ge 4 \, \epsilon_{soft}$ for $\rm \epsilon_{soft} = 0.01$ and below. The convergence when $\rm \lambda_{rot}$ is resolved for all particles is the first suggestion supporting our definition of $\rm \lambda_{rot}$ in this environment with disordered motion ($\rm \lambda_{\rm rot} \equiv \pi^2 G \Sigma_{\rm gas} /|\vec{\Omega}_{o}|^2$). This convergence is evidence that the minimum requirement to resolve fragmentation, at least on the largest scales, is to be able to resolve gravity below our definition of the largest unstable scale $\rm \lambda_{rot}$. In this set of numerical experiments, we resolve fragmentation from the largest unstable scale down to our gravitational resolution. Below the gravitational softening, sub-fragmentation is artificially damped, but our aim is to study whether Eq 2 can predict the instability of streams and, for that purpose, it is not required to resolve the full range of unstable wavelengths; it is enough to resolve the largest one, since $\rm \lambda_{rot}$ is the first unstable wavelength to appear (i.e. the most unstable mode; Binney \& Tremaine 2008). This resolution test illustrates that, in a set of simulations with the same temperature, fragmentation can be prevented just by the gravitational resolution. This contradicts the interpretation of Teyssier et al. (2010), in which the onset of fragmentation is always associated with a decrease in the temperature. However, Teyssier et al.
(2010) change both temperature and resolution, and are therefore unable to disentangle which one (or both) is responsible for the onset of fragmentation. This reinforces our approach of testing the stability criterion (Eq. 2) against a set of simple simulations, where the variation of parameters can be fully controlled. Finally, it is worth mentioning that in this section, and in the rest of the paper, we will focus only on the fragmentation of the gaseous component. The reason is that the stellar component behaves approximately as an adiabatic fluid (i.e. the kinetic energy in stellar motions cannot be lost or "radiated" away from the merging system). This leads to a rapid conversion of coherent motions into random ones during the merger, with the subsequent increase of the velocity dispersion in the stellar component, stabilizing the stellar system against runaway fragmentation. \section{Test of the stability criterion $\rm |\vec{Q}_{o}|$} After checking the gravitational resolution needed to resolve fragmentation, at least on scales below those of the largest collapsing clumps, we will focus on testing the stability criterion discussed in \S 2 (Eq. \ref{Q}). Since most of the streams in the inner 2.4 kpc of the system already fragment for T $\rm \sim 2 \times 10^{4}$K, we will perform a set of simulations in which we increase the temperature and see how some streams become stable. We will check whether the criterion given by Eq \ref{Q} successfully predicts if a stream should be stable or not. Fig 2 (a, b, c) shows the evolution of the gas density for the system restarted with a gravitational softening length of $\rm \epsilon_{soft} = 0.01$ at t=132, for different temperatures T=$\rm 2 \times 10^{4}(a), 2 \times 10^{5}(b) \, and \,10^{6} \,K(c)$, and evolved to a later time t=133.2. The comparison between different temperatures (a to c in Fig 2) clearly shows that more streams become stable as we increase the temperature.
Fig 2 (d, e, f) shows $\rm |\vec{Q}_{o}|/q$ computed for each particle at the time at which all the simulations were restarted with $\rm \epsilon_{soft} = 0.01$ (t= 132), for different temperatures T=$\rm 2 \times 10^{4}(d), 2 \times 10^{5}(e) \, and \,10^{6} \,K(f)$. For computing $\rm \vec{\Omega}_{o}$, we choose as origin O of the coordinate system the total center of mass (G) of the merging galaxies, because the system as a whole orbits around G, which is also an inertial point for an isolated merger (in the absence of external forces). From Eq. \ref{Q}, the threshold for stability should be around $\rm |\vec{Q}_{o}|/q = 1$, which corresponds to the yellow particles in Figure 2; the green and blue particles should be unstable and the red ones stable. The direct comparison of the two sides of Fig 2 (a-d, b-e and c-f pairs) shows overall a good agreement between the predicted unstable streams (shown in green and blue in Fig 2 (d, e, f)) and the streams that eventually fragment in the corresponding Fig 2 (a, b, c). In particular, the bluest stream in Fig 2 d (in light green in Fig 2 f) is the most unstable region and the only one that strongly fragments in all simulations (including Fig 2 c), despite the increase in temperature up to $\rm 10^{6} \,K$. We find that the stability of gaseous streams is better described by a threshold value q$\sim$0.4; in fact, we plot $\rm |\vec{Q}_{o}|/0.4$ in Fig 2 (d, e, f). This is approximately a factor of 2 lower than the value expected for a uniformly rotating isothermal disk (q = 1.06; Goldreich \& Lynden-Bell 1965). However, it is important to emphasize that the actual value of q should depend on the origin O chosen for the coordinate system. In order to check the numerical reliability of our results, we performed the same fragmentation runs with 2,000,000 particles for the different temperatures shown in Fig 2 (a, b, c). Fig 2 (g, h, i) shows the evolution of the gas density for the high-resolution runs.
By direct comparison of the two sides of Fig 2 (a-g, b-h, and c-i pairs), we found differences in the low-density regions, as expected in the SPH technique, and slightly different positions of some streams, also expected due to the different granularity of the gravitational potential. However, we found the same results in terms of the onset of fragmentation in streams (i.e. which ones fragment at a given temperature and which ones do not) and in the number of clumps formed. Therefore, for the purpose of our study, we found consistent results between the low- and high-resolution runs. This supports the reliability of our numerical experiments that test Eq. 2. Finally, it is important to note that, although we successfully tested Eq. 2 for t=132, it should be valid at any given time for the value of the angular velocity vector at that moment (only with small variations of the threshold q). This is particularly important because, during the evolution of a galaxy merger, the properties of any stream (angular frequency vector, surface density, etc.) can drastically change on a timescale comparable to a crossing time. To check that our criterion is valid at any time, we restarted the original low-resolution simulation again at a different time t=137.2, with a gravitational softening $\rm \epsilon_{soft} = 0.01$ and a temperature T=$\rm 2 \times 10^{4} K$. Fig 3a shows the evolution of the gas density for the system restarted at t=137.2 and evolved to a later time t=138.4. Fig 3b shows $\rm |\vec{Q}_{o}|/q$ computed for each particle at the time t=137.2, using the same threshold value q$=$0.4 as before. The direct comparison of the two sides of Fig 3 shows again an overall good agreement between the predicted unstable streams (shown in green and blue in Fig 3b) and the streams that eventually fragment in the evolution of the SPH run (Fig 3a).
Although the gas properties (and the predicted $\rm |\vec{Q}_{o}|$) at this second restarting time have considerably changed compared to those in the original runs (Fig 2), Fig 3 shows that the $\rm |\vec{Q}_{o}|$ computed at the restarting time t=137.2 successfully predicts which streams will fragment and which ones will not. This holds despite some minor discrepancies that may arise if we focus on some small-scale features shown in Fig 3. For example, a careful inspection of Fig 3a shows a smooth spiral pattern around a big clump (slightly up from the center of the image). In the corresponding Fig 3b, both features are in blue ($\rm |\vec{Q}_{o}|/q <<1$), which is expected for the big clump but not for the smooth spiral feature. Fig 4 shows a zoom-in of such a region, in which the left panel shows the gas density and the right panel the corresponding $\rm |\vec{Q}_{o}|/q$. From Fig 4 it is straightforward to realize that the spiral is composed of only a few tens of particles that were lost from the several thousand particles that compose a clump (i.e. less than 1\%). These particles were probably lost through interactions with other clumps; in fact, the spiral pattern ends on the closest collapsed clump, suggesting that they are particles lost during strong gravitational interactions between the clumps. These minor discrepancies are inherent to the complexity of the problem, because processes that happen during the subsequent evolution, such as clump-clump interactions, are of course not included in this or any stability criterion. In order to quantify these minor discrepancies, we plot in Fig. 5 the surface density of each particle against $\rm |\vec{Q}_{o}|/q$. The left panel of Fig. 5 shows the surface density at the restarting time t=132 against $\rm |\vec{Q}_{o}|/q$ computed at the same time. The dashed line represents $\rm \Sigma_{gas} \propto (|\vec{Q}_{o}|/q)^{-1}$, which is the overall trend of particles at the restarting time.
This is expected from the definition given by Eq 2, taking into account that $\rm C_{S}$ is constant and that the dispersion about the overall trend is due to variations of $\rm |\vec{\Omega}_{o}|$ among particles. The middle and right panels of Fig. 5 show the surface density at t=133 (middle) and t=134 (right) against $\rm |\vec{Q}_{o}|/q$ computed at the restarting time t=132. By construction, the particles in the middle and right panels of Fig. 5 can only move in the vertical direction (relative to their position in the left panel), and this inherently introduces scatter into the $\rm \Sigma_{gas} \propto (|\vec{Q}_{o}|/q)^{-1}$ trend, since the surface density and $\rm |\vec{Q}_{o}|/q$ are computed at two different times. Besides the increased scatter, we see a coherent vertical change for $\rm |\vec{Q}_{o}|/q \leq 1$ as the collapse proceeds. For isothermal simulations like the ones carried out in this work, once the gravitational instability has started it proceeds until the particles reach separations comparable to the softening length, and this behaviour happens regardless of the $\rm |\vec{Q}_{o}|/q$ or $\rm \Sigma_{gas}$ values. The horizontal saturated region in the middle and right panels of Fig. 5 (at $\rm log(\Sigma_{gas}) \geq 0$) denotes this behaviour. On the other hand, the time to reach the saturated region (the free-fall time) does depend on $\rm \Sigma_{gas}$, and this explains why the dark region of particles with $\rm log(\Sigma_{gas}) \leq 0$ recedes towards higher $\rm |\vec{Q}_{o}|/q$ as time evolves (from $\rm log(|\vec{Q}_{o}|/q) \geq -1.5$ in the middle panel to $\rm log(|\vec{Q}_{o}|/q) \geq -1$ in the right panel of Fig. 5). In the right panel of Fig. 5, the particles in the region $\rm log(\Sigma_{gas}) \leq 0$ and $\rm log(|\vec{Q}_{o}|/q) \leq -1$ are representative of particles lost from the collapsing clumps, like the ones shown in the spiral feature of Fig. 4. 
In the same way, the particles in the region $\rm log(\Sigma_{gas}) \geq 0$ and $\rm log(|\vec{Q}_{o}|/q) \geq 0$ represent particles from stable regions that are gravitationally captured by the collapsing clumps. Besides these departures, the overall trend is clear and is shown in the right panel of Fig. 5, where a drastic change at $\rm log(|\vec{Q}_{o}|/q) = 0$ is clearly seen; it is due to the collapse of the unstable regions, which moves their particles vertically up, producing the vertical saturated region seen for $\rm log(|\vec{Q}_{o}|/q) \leq 0$. \section{Summary} In this paper, we have studied the gravitational stability of gaseous streams in the complex environment of a galaxy merger, using hydrodynamic simulations. We find that the standard Toomre Q stability parameter can be generalized to the case of gaseous streams orbiting around the merger remnant, by using the angular frequency vector of each stream. This is valid as long as the orbital motion of a stream can be well approximated by rotational motion around the center of gravity on a given plane, which is what happens in the inner regions of the merger remnant. We test our generalized stability criterion, $\rm |\vec{Q}_{o}| \geq q$, using SPH numerical simulations specially designed for that purpose. We find that this criterion successfully predicts the streams that will be gravitationally unstable and fragment into clumps. We find that the stability of streams is better described choosing a threshold value q$\sim$0.4. The generalization of $\rm \lambda_{rot}$ in a galaxy merger is also relevant for the formation of massive globular-type clusters, since its associated mass $\rm M_{\rm rot} = \Sigma_{\rm gas} \, (\lambda_{\rm rot}/2)^2$ is related to the characteristic mass of the most massive clusters that are able to form (Escala \& Larson 2008; Shapiro et al. 2010) and plays a role in the triggering of star formation, since it correlates with the galactic star formation rate (Escala 2011). 
The numerical validation of stability for $\rm |\vec{Q}_{o}| \geq q$ opens new possibilities for future research. One is to apply the criterion given by Eq. 2 to observations of gas-rich galaxy mergers and also to simulations with a more realistic description of the ISM that includes feedback processes from star formation and/or AGN. Another interesting possibility is to study at which point in the evolution of a merger a larger portion of the gaseous mass has $\rm |\vec{Q}_{o}| \leq q$, and thus to determine when the streams fragment most vigorously. A.E. acknowledges partial support from the Center for Astrophysics and Associated Technologies CATA (PFB 06), FONDECYT Iniciacion Grant 11090216. F.B. and L. del V. acknowledge support from Programa Nacional de Becas de Posgrado (Grant D-22100632 and Grant D-21090518). The simulations were performed using the HPC clusters Markarian (FONDECYT 11090216), Geryon (PFB 06) and Levque.
\section{INTRODUCTION} In a prescient paper, \citet{hills88} suggests that a stellar binary interaction with the Milky Way's central black hole could eject one member of the binary with a velocity $>1,000$ km s$^{-1}$. \citet{yu03} further develop Hills's analysis and suggest two additional mechanisms to eject ``hyper-velocity'' stars from the Galactic center: close encounters of two single stars and three-body interactions between a single star and a binary black hole. \citet{yu03} predict production rates for all three mechanisms. Even the discovery of a single hyper-velocity star can place important constraints on the formation mechanism and the nature of the Galactic center. In our survey of faint blue horizontal branch candidates in the Galactic halo, we have discovered a star, SDSS J090745.0+024507, traveling with a heliocentric radial velocity of $+853\pm12$ km s$^{-1}$. Corrected to the local standard of rest and for the solar reflex motion, the Galactic rest frame velocity of this star is $v_{RF}=+709$ km s$^{-1}$. The observed radial velocity is only a {\it lower} limit to the star's true space velocity, but the radial velocity alone substantially exceeds the escape velocity from the Galaxy. The distance to the hyper-velocity star (hereafter HVS) is $\sim$ 55 kpc. At a Galacto-centric distance of 50 kpc, the mass of the Milky Way is 5.4$\times10^{11}$ M$_{\odot}$ \citep{wilkinson99} and the escape velocity is 305 km s$^{-1}$. Thus the HVS is moving well over {\it twice} the escape velocity and in a direction 173.8$^{\circ}$ from the Galactic center. By comparison, traditional ``high-velocity'' and ``run-away'' stars are stars with peculiar velocities greater than 30 km s$^{-1}$. High-velocity stars are typically early-type stars in the Galactic disk moving away from star formation regions \citep[e.g.][]{hoogerwerf01}. 
Run-away B-type stars have been observed up to $\sim$15 kpc above the Galactic plane and moving with radial velocities up to $\pm200$ km s$^{-1}$ \citep{lynn04,magee01,ramspeck01,rolleston99,mitchell98,holmgren92,conlon90}. The highest velocity halo star, to our knowledge, was observed by \citet{carney88} moving through the solar neighborhood with a total Galactic rest frame velocity of 490 km s$^{-1}$. In all cases, these high velocity and run-away stars are very probably bound to the Galaxy. In \S 2 we describe the target selection and the spectroscopic observations of the blue horizontal branch candidate sample. The HVS is a 6$\sigma$ outlier from the distribution of radial velocities of this sample. In \S 3 we demonstrate the robustness of the radial velocity and discuss the HVS's stellar properties. In \S 4 we discuss the significance of the star's hyper-velocity. \section{TARGET SELECTION AND OBSERVATIONS} We have been using blue horizontal branch (BHB) stars to trace velocity structure in the Milky Way halo \citep{brown04,brown03}. In 2003, as part of an effort to measure the dynamical mass of the Milky Way more accurately, we used Sloan Digital Sky Survey (SDSS) Early Data Release and Data Release 1 photometry to select faint $19.75 < g'_0 < 20.25$ BHB candidates for spectroscopic observations. We identified BHB candidates by their A-type colors following \citet{yanny00}: $0.8<(u'-g')<1.5$, $-0.3<(g'-r')<0.0$. Figure \ref{fig:ugr} shows this color selection box, and colors of the 36 observed BHB candidates. The density of BHB candidates in this color/magnitude range is 0.3 objects deg$^{-2}$. Our observational strategy was to observe objects well-placed in the sky at the time of observation. Thus our sample of 36 stars is ``randomly'' selected from the area covered by the SDSS Early Data Release and Data Release 1. We obtained spectra of the 36 BHB candidates with the 6.5m MMT telescope during April, July, and December 2003. 
We used the MMT Blue Channel spectrograph with an 832 line/mm grating in second order. This set-up provides 1.0 \AA\ spectral resolution using a 1 arcsec slit. With a dispersion of 0.36 \AA/pix on the 2688$\times$512 CCD, our spectral coverage is 950 \AA. \includegraphics[width=3.25in]{f1.eps} \figcaption{ \label{fig:ugr} Color-color plot of our sample. The dashed box indicates the selection box for A-type stars. Points mark the colors of observed targets, with solid points marking the BHB stars. Most of the solid points fall in the \citet{sirko04} BHB-selection box indicated by the solid line. The solid triangle marks the HVS.} ~ \noindent All observations were made at the parallactic angle. On nights of good seeing $<1$ arcsec, we obtained $S/N \sim 20$ at 4,000 \AA\ in one hour of integration. The 36 BHB/A stars are all located in the Galactic halo with heliocentric distances ranging $30<d<80$ kpc. We measure heliocentric radial velocities using the cross-correlation package RVSAO \citep{kurtz98}. We correct the velocities to the local standard of rest \citep{dehnen98}. We assume the stellar halo has no mean rotation, and correct the velocities for the reflex motion of the local 220 km s$^{-1}$ orbital motion. Figure \ref{fig:veldisp} plots a histogram of the radial velocities in the Galactic rest frame for all 36 BHB/A stars. Ignoring the HVS, the sample has a velocity dispersion of $\pm120$ km s$^{-1}$ consistent with a halo population. The mean velocity of the sample (ignoring the HVS) is $-7$ km s$^{-1}$, consistent with our assumption of no rotation. The $+709\pm12$ km s$^{-1}$ HVS, SDSS 090745.0+024507, is a $6\sigma$ outlier from the observed distribution of radial velocities. \section{THE HYPER-VELOCITY STAR} The HVS is located at $9^{h} 07^{m} 45\fs0,$ $+2^{\circ} 45\arcmin 07\arcsec$ (J2000). In Galactic coordinates, the star is located at $(l,b)=(227^{\circ} 20^{\arcmin} 07^{\arcsec}, +31^{\circ} 19\arcmin 55\arcsec)$. 
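The rest-frame correction described above can be sketched numerically. This is a minimal reconstruction, not the authors' pipeline: it assumes the standard line-of-sight projection formula with the \citet{dehnen98} solar peculiar motion $(U,V,W)_\odot \approx (10.0, 5.25, 7.17)$ km s$^{-1}$ and a 220 km s$^{-1}$ LSR circular speed:

```python
import numpy as np

def galactic_rest_frame_velocity(v_helio, l_deg, b_deg,
                                 solar_uvw=(10.0, 5.25, 7.17),  # Dehnen & Binney (1998), km/s
                                 v_circ=220.0):                 # LSR circular speed, km/s
    """Heliocentric radial velocity corrected to the Galactic rest frame."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    U, V, W = solar_uvw
    # project the solar peculiar motion onto the line of sight
    v_lsr = v_helio + (U * np.cos(l) + V * np.sin(l)) * np.cos(b) + W * np.sin(b)
    # remove the reflex of the LSR's circular motion about the Galactic center
    return v_lsr + v_circ * np.sin(l) * np.cos(b)

# SDSS J090745.0+024507: (l, b) = (227.335, +31.332) deg, v_helio = +853 km/s
v_rf = galactic_rest_frame_velocity(853.0, 227.335, 31.332)   # about +709 km/s
```

With the quoted coordinates this reproduces the $v_{RF}=+709$ km s$^{-1}$ figure to within a few km s$^{-1}$.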
Fig \ref{fig:spectrum} shows the MMT Blue Channel spectrum of the HVS. The spectrum represents 60 minutes of integration time, and has S/N=20 at 4,000 \AA. The spectrum shows a raw heliocentric radial velocity of $+853\pm12$ km s$^{-1}$. Corrected to the local standard of rest, the Galactic velocity components are $U=-491$ km s$^{-1}$ (radially outwards), $V=-532$ km s$^{-1}$ (opposite the Galactic rotation direction), and $W=+441$ km s$^{-1}$ (vertically upwards). \includegraphics[width=3.25in]{f2.eps} \figcaption{ \label{fig:veldisp} Galactic rest frame radial velocity distribution for our 36 BHB/A star sample. The HVS is a 6$\sigma$ outlier from the observed distribution.} ~ \subsection{Verifying the Radial Velocity} The substantial radial velocity is evident even from inspection of the two-dimensional spectra. Figure \ref{fig:twod} shows the $H\gamma$ line of the HVS shifted nearly on top of the night sky line at 4358.34 \AA. Moreover, the spectrum in Fig \ref{fig:spectrum} is not based on a single observation, but on a series of three 20 minute observations obtained over 1 hour. The wavelength solution for the three observations is nearly identical to the wavelength solutions obtained throughout the rest of that night. We verify the wavelength solution by measuring the velocity of the night sky lines. The Hg line at 4358.34 \AA\ (see Fig \ref{fig:twod}) has the best S/N and a $-0.7 \pm 2.5$ km s$^{-1}$ velocity consistent with zero. We conclude that the wavelength solution is robust. We measure radial velocities for the three individual observations and find that the velocities agree to $\pm5$ km s$^{-1}$. Moreover, we measure radial velocities using three different stellar templates (for B7, B9, and A1 spectral types) and find that the velocities agree to $\pm3$ km s$^{-1}$. We conclude the radial velocity is accurate. If the HVS were a close binary, any systematic velocity offset is likely to be a small fraction of the observed velocity. 
\subsection{Stellar Properties} The HVS has de-reddened colors $(u'-g')_0=1.04\pm0.09$ and $(g'-r')_0=-0.30\pm0.03$, marked by the solid triangle in Fig \ref{fig:ugr}. These colors indicate it is probably a hot BHB star or a late B-type main sequence star. Its apparent magnitude is $g'=19.81\pm0.02$. We classify the HVS's spectral type as B9.2 with an uncertainty of 1.2 spectral sub-types. This classification is based on line indices described in \citet{brown03}. The spectral type is in perfect agreement with the star's estimated effective temperature, $T_{eff} \sim 10,500$ K (R.\ Wilhelm 2004, private communication). \includegraphics[width=3.25in]{f3.eps} \figcaption{ \label{fig:spectrum} MMT spectrum of the HVS.} ~ \noindent We measure the HVS's metallicity based on the equivalent width of the Ca{\sc ii} K line and the star's photometric colors. However, the Ca{\sc ii} K line provides little leverage on [Fe/H] at high $T_{eff}$. Thus there is considerable uncertainty in [Fe/H]. Interpolated tables \citep{wilhelm99a} show that [Fe/H] can range from -0.4 to well over 0.0; we conclude that the star's metallicity is roughly solar [Fe/H]$\sim$0.0. Solar metallicity suggests the HVS is probably a B9 main sequence star. However, given the uncertainty we cannot rule out the star being a hot BHB star. We estimate the HVS's surface gravity by measuring the size and steepness of its Balmer jump \citep{kinman94}, and the widths and the shapes of its Balmer lines \citep{clewley02}. These independent techniques indicate that the star is a low surface gravity BHB star. However, classification by surface gravity is ambiguous at this $T_{eff}$. The $T_{eff}$ and $\log{g}$ of the horizontal branch cross the main sequence around $\sim$10,000 K. Thus we cannot reliably distinguish between a hot BHB or a B9 main sequence star based on its surface gravity. 
This uncertainty is problematic for estimating the star's distance: a hot BHB star and a B9 main sequence star with the same $T_{eff}$ and $\log{g}$ differ in luminosity by a factor of $\sim$4. We estimate the HVS's distance first assuming it is a hot BHB star. We calculate luminosity using the $M_V(BHB)$ relation of \citet{clewley02}, which combines the \citet{gould98} {\it Hipparcos}-derived $M_V$ zero point, the \citet{clementini03} LMC-derived $M_V$-metallicity slope, and the \citet{preston91} $M_V$-temperature correction. If it is a hot BHB star, the HVS has $M_V(BHB)=1.9$ and a heliocentric distance $d_{BHB}=39$ kpc. Hot BHB stars are intrinsically less-luminous stars that hook down off the classical horizontal branch. We thus consider 39 kpc as a lower limit for the HVS's distance. We next estimate the HVS's distance assuming it is a B9 main sequence star. To estimate the luminosity of a B9 star we look at the \citet{schaller92} stellar evolution tracks for a 3 $M_{\odot}$ star with $Z=0.02$. Such a star spends $3.5\times10^8$ yrs on the main sequence and produces 160 $L_{\odot}$ when it has $T_{eff}\sim10,500$ K. We convert this luminosity to absolute magnitude using the bolometric corrections of \citet{kenyon95}. If it is a B9 main sequence star, the HVS has $M_V(B9)=0.6$ and a heliocentric distance of $d_{B9}=71$ kpc. For purposes of discussion, we assume the HVS has the average distance of these two estimates: $d=55$ kpc. This estimate places the star at $z$ = $29 ~(d/55)$ kpc above the Galactic disk, and at $r$ = $60 ~(d/55)$ kpc from the Galactic center. \includegraphics[width=3in]{f4.eps} \figcaption{ \label{fig:twod} A portion of the two-dimensional spectrum. The large radial velocity is evident from $H\gamma$ being shifted nearly on top of the night sky line Hg 4358.34 \AA.} ~ 
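The two distance estimates follow from the distance modulus. A sketch, under the rough simplifying assumption that the de-reddened $g'$ magnitude can stand in for $V$ (the authors' actual conversion may differ slightly):

```python
def distance_kpc(m, M):
    """Distance in kpc from the distance modulus m - M = 5 log10(d / 10 pc)."""
    return 10.0 ** ((m - M) / 5.0 + 1.0) / 1000.0

g0 = 19.81                       # apparent magnitude of the HVS
d_bhb = distance_kpc(g0, 1.9)    # hot-BHB hypothesis, M_V = 1.9
d_b9 = distance_kpc(g0, 0.6)     # B9 main-sequence hypothesis, M_V = 0.6
d_mean = 0.5 * (d_bhb + d_b9)    # close to the adopted d = 55 kpc
```

This reproduces the quoted $\sim$39 and $\sim$71 kpc distances to within a couple of kpc, and their average is close to the adopted 55 kpc.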
\section{DISCUSSION} Remarkably, \citet{hills88} predicted the existence of hyper-velocity stars as a consequence of the presence of a massive black hole at the Galactic center. Other mechanisms fail to produce velocities as large as the one we observe for the HVS. For example, the star is not associated with the Sgr stream, nor is its radial velocity consistent with any other member of the Local Group. Association with high velocity clouds is unlikely given their low velocity dispersion $\pm120$ km s$^{-1}$ \citep{putnam02}. The HVS is also unlikely to be in the tail of the Galactic halo velocity distribution: our target selection yields 10$^4$ stars over the entire sky, and we should have to observe 10$^9$ objects to find one 6$\sigma$ outlier. Supernova ejection \citep{blaauw61} and dynamical ejection \citep{poveda67} are mechanisms thought to produce run-away B stars, but both mechanisms predict maximum ejection velocities of $\sim$300 km s$^{-1}$ \citep{leonard93}. \citet{hills88} predicts that tightly bound stellar binaries encountering the central black hole can eject one star with velocity $\gtrsim 1,000$ km s$^{-1}$ from the Galactic center. \citet{yu03} also predict that the close encounter of two single stars or the three-body interaction between a single star and a binary black hole can result in similar ejection velocities from the Galactic center. \citet{yu03} show that the production rates for these mechanisms are: (1) 10$^{-11}$ yr$^{-1}$ for single star encounters, (2) 10$^{-5}$ yr$^{-1}$ for binary star encounters with the central black hole, and (3) 10$^{-4}$ yr$^{-1}$ for single star encounters with a binary black hole. Thus the very existence of the HVS rules out the single star encounter mechanism. Recent measurements of stellar orbits around the Galactic center provide overwhelming evidence of a $4\times10^6$ M$_{\odot}$ black hole at the Galactic center \citep{ghez05,schodel03,ghez03}. 
Although it is difficult to imagine a B star forming near the central black hole, many young, massive stars {\it are} observed within 1 pc of the Galactic center \citep[e.g.][]{genzel03}. Moreover, one of the stars used to measure the mass of the BH has a $45\pm16$ AU periapse and an orbital eccentricity $e=0.974\pm0.016$, giving it a periapse velocity of $12,000\pm2,000$ km s$^{-1}$ \citep{ghez05}. The radial velocity vector of the HVS points 173.8$^{\circ}$ from the Galactic center, suggestive of a Galactic center origin. Even assuming that the observed radial velocity is the full space motion of the HVS, the travel time from the Galactic center is $\lesssim$80 Myr. The lifetime of a B9 main sequence star, by comparison, is approximately 350 Myr; the age of a star on the horizontal branch is much longer. Thus the star's solar metallicity, its direction of travel, and its travel time are all consistent with a Galactic center origin. If the HVS is traveling on a radial path from the Galactic Center, we predict its proper motion is $\sim 0.3 ~(d/55)$ mas yr$^{-1}$. The USNOB1 \citep{monet03} and the GSC2.3.1 (B.\ McLean, 2005 private communication) catalogs list the star with a proper motion of $14\pm12$ and $20\pm11$ mas yr$^{-1}$, respectively, but in nearly opposite directions. The average of these proper motions is $3\pm8$ mas yr$^{-1}$ consistent with zero. We searched for the star on 50 to 100 year-old plates in the Harvard Plate Archive to increase the observed time baseline, but the HVS was too faint. The {\it GAIA} mission should be able to observe 20 mag stars with 0.16 mas yr$^{-1}$ accuracy \citep{perryman02} and thus may determine the full space motion of the HVS. We can use our sample to place an upper limit on the production rate of hyper-velocity stars. Our sample of 36 BHB/A stars fills an effective volume of $\sim$10$^3$ kpc$^3$ indicating an upper limit on the density of hyper-velocity BHB/A stars, $\sim$10$^{-2}$ kpc$^{-3}$. 
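The travel-time figure quoted above is simple arithmetic; since the true space velocity is at least the radial component, the time computed at 709 km s$^{-1}$ over the $\sim$60 kpc Galactocentric distance is an upper bound of roughly 80 Myr:

```python
KM_PER_KPC = 3.086e16   # kilometres per kiloparsec
SEC_PER_MYR = 3.156e13  # seconds per megayear

def travel_time_myr(r_kpc, v_km_s):
    """Straight-line travel time over r_kpc at constant speed v_km_s, in Myr."""
    return r_kpc * KM_PER_KPC / v_km_s / SEC_PER_MYR

# 60 kpc from the Galactic center at the rest-frame radial velocity of 709 km/s
t_upper = travel_time_myr(60.0, 709.0)
```

The result, $\sim$83 Myr, is an upper bound; a larger true space velocity shortens it, hence the $\lesssim$80 Myr figure, well under the $\sim$350 Myr B9 main-sequence lifetime.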
If it takes 10$^8$ yr for a hyper-velocity star to leave the Galaxy, the \citet{yu03} production rates imply a total of 10$^3$-10$^4$ hyper-velocity stars within the halo, a density of 10$^{-3}$-10$^{-2}$ kpc$^{-3}$. At first glance there appears to be rough agreement between the observed and predicted density of hyper-velocity stars. However, BHB/A stars represent only a small fraction of the total population of halo stars, and there are essentially no constraints on the fraction of the binary population they might represent near the Galactic center. The discovery of a HVS, as predicted by \citet{hills88} and \citet{yu03}, provides an interesting piece of new evidence for a massive black hole at the Galactic center. Ironically, this evidence comes from the radial velocity of a star $\sim$60 kpc from the Galactic center. We are now using the MMT Hectospec spectrograph, a 300 fiber spectrograph with a 1$^{\circ}$ diameter field of view \citep{fabricant98hecto}, to observe additional $g'_0\sim20$ BHB candidates over wide areas of sky. The identification of more hyper-velocity stars as a function of distance and position on the sky will place better constraints on the production mechanism and production rate of these unusual stars. Observing hyper-velocity stars over a wide range of distances will allow us to measure the production history at the Galactic center and could probe the history of the formation of the central black hole. \acknowledgements We thank C.\ Heinke and A.\ Milone for their assistance with the MMT observations. We thank E.\ Falco for obtaining follow-up imaging at the Whipple 1.2m telescope. We thank T.\ Beers, D.\ Latham, B.\ McLean, D.\ Mink, D.\ Monet, K.\ Stanek, and R.\ Wilhelm for helpful discussions. This work was supported by W.\ Brown's CfA Fellowship.
\section{Introduction} There is considerable investor interest across several financial contexts in constructing portfolios which mix liquid and illiquid assets, especially illiquid alternative investments. We wish to perform strategic asset allocation to asset classes that include illiquid alternative assets, as well as more liquid asset classes. Several challenges arise. First, we can only augment our illiquid positions by making capital commitments. Moreover, these commitments only indirectly affect our illiquid position through uncertain and delayed capital calls, over which we have no direct control. A further challenge is the solvency requirement: we should be able to fund the capital calls from our liquid positions with very high probability. A simple strategy to guarantee coverage of capital calls is to keep an amount equal to the uncalled capital commitments in cash. However this creates significant cash drag, since this cash could be invested in higher-returning liquid assets. The method we describe in this paper addresses all of these issues. \section{Previous work} There is a rich history of studying portfolio construction. Our work helps extend the modern portfolio theory framework developed by Markowitz~\cite{Markowitz1952} and Merton~\cite{Merton1969}, which focuses on liquid assets. We contribute to the further study of illiquidity and multi-period planning. While this work takes as an input a stochastic model which describes the risk and return of illiquid investments, calibrating such models is a nuanced and well studied problem. For a guide to the literature on the risks and returns of private equity investments, see Korteweg~\cite{Korteweg2019}. \paragraph{Continuous time.} There is a breadth of work on modeling portfolio construction with illiquid assets. Many authors consider continuous time stochastic processes. 
Dimmock et al.~study the endowment model, under which university endowments hold high allocations in illiquid alternative assets, via a continuous time dynamic choice model with deterministic-in-time discrete liquidity shocks every $T$ periods~\cite{Dimmock2019}. They allow the investor to increase the position in the illiquid asset instantaneously, not modeling the delayed nature of capital calls. Ang et al.~also study a continuous time problem, but model the timing of liquidity events of the illiquid asset as an independent Poisson process~\cite{Ang2013}. Optimal solutions are assumed to have almost surely non-negative liquid wealth, meaning that the investor must always be able to cover the effects of illiquidity. Another important line of inquiry studies the effects of illiquidity through the commitment risk of a fixed alternative's commitment. Sorensen, Wang, and Yang~\cite{Sorensen2014} study this problem by focusing on an investor who can modify their positions in stocks and bonds, taking an investment in an illiquid asset as given and held to maturity. \paragraph{Discrete time.} The discrete time case is also well studied. Takahashi and Alexander first introduced what amounts to a deterministic linear system to model an illiquid asset's calls, distributions, and asset value~\cite{Takahashi2002}. This model posits that calls are a time-varying fraction of uncalled commitments, and that distributions are a time-varying fraction of the illiquid asset value, and returns are constant. Our model is similar, but differs in two important ways. First, our model is time-invariant. Second our model incorporates randomness in these fractions as well as the returns. Giommetti and Sorensen use the Takahashi and Alexander model in a standard, discrete-time, infinite-horizon, partial-equilibrium portfolio model to determine optimal allocation to private equity~\cite{Giommetti2021}. 
Here the calls and distributions are deterministic fractions of the uncalled commitments and illiquid asset value, but the illiquid asset value grows with stochastic returns. Again, there is an almost sure constraint which insists that the investor's liquid wealth is never exhausted, which means that the investor maintains a liquidity reserve of safe assets to cover calls. \paragraph{Optimal allocation to illiquid assets.} Broadly, across the literature we have reviewed, the reported optimal allocations to illiquid assets are strikingly low compared to the de facto wants and needs of institutional investors who are increasingly allocating larger and larger shares of their portfolios to illiquid alternatives. In their extensive survey of illiquidity and investment decisions, T\'edongap and Tafolong~\cite{Tedongap2018} report that recommended illiquid allocations range from the low single digits to around 20\% on the upper end. This is strikingly lower than the target levels observed in practice. For example, the National Association of College and University Business Officers (NACUBO) provide data showing the allocation weights of illiquid alternatives in University endowments reaching 52\% in 2010. Unlike other analyses, our method does not require that investors be able to cover calls with probability one, and instead provides a tool for maintaining an optimized target asset allocation under uncertain calls, distributions, returns, and growth. Hayes, Primbs, and Chiquoine propose a penalty cost approach to asset allocation whereby an additional term is added to the traditional mean-variance optimization (MVO) problem to compensate for the introduction of illiquidity \cite{Hayes2015}. They solicit a user provided marginal cost curve which captures the return premium needed for an illiquid asset to be preferred over a theoretically equivalent liquid alternative. 
This leads to a formulation nearly identical to the standard MVO problem, with a liquidity-adjusted expected return (a function of the allocation). In their work the notion of liquidity is captured by a scalar between $0$ and $1$. \paragraph{Multi-period optimization.} Our policy is based on solving a multi-period optimization problem. Dantzig and Infanger~\cite{Dantzig1993} introduce a multi-stage stochastic linear programming approach to multi-period portfolio optimization. Mulvey, Pauling, and Madey survey the advantages of multi-period portfolio models, including the potential for variance reduction and increased return, as well as the ability to analyze the probability of achieving or missing goals \cite{Mulvey2003}. Boyd et al.~\cite{Boyd2017} describe a general framework for multi-period convex optimization. This framework focuses on planning a sequence of trades over a set of periods, given return forecasts, trading costs, and holding costs. Our framework also solves a multi-period convex optimization problem, but we do not make an approximation of the dynamics, which is more appropriate for the longer time horizons and thus more significant growth observed in strategic asset allocation. \paragraph{Model predictive control.} Our method falls under the category of Model Predictive Control (MPC), which is both widely studied in academia and used in industry. For a survey of MPC, see for example the books \textit{Model Predictive Control}~\cite{mpcbook} or Garc\'ia et al.~\cite{Garcia1989}. Herzog et al.~\cite{Herzog2006} use an MPC approach for multi-period portfolio optimization, but only consider normally distributed returns and standard liquid assets. They do include a factor model of returns, as well as a conditional value at risk (CVaR) constraint which is different in interpretation but takes the same form as our insolvency constraint. 
The closest work we have identified to our own is the thesis of Lee, who uses a very similar multi-period optimization problem with linear illiquid dynamics \cite{Lee2012}. We both use a quadratic risk, and use certainty equivalent planning to solve an open loop control problem. Lee's problem is multi-period, but the objective is a function of only the final period wealth, whereas in our case we have stage costs, as well as constraints on the solvency of our portfolio. Additionally, in our stochastic model we use random call and distribution intensities. \paragraph{Contributions.} The linear dynamics of the illiquid wealth motivate model predictive control (MPC) as a solution method. To the best of our knowledge, there do not exist multi-period optimization-based policies for constructing portfolios with both liquid and illiquid alternative assets. We believe our contributions include the following. \begin{enumerate} \item Incorporating random intensities with the classic linear model of the illiquid asset's calls and distributions. \item Formulating a multi-period optimization problem to perform strategic asset allocation with liquid and illiquid assets. \item Using homogeneous risk constraints to account for growth in the multi-period planning problem. \item Using liquidity/insolvency constraints to ensure calls are covered with high probability. \item Obtaining a performance bound for the problem by considering a stylized liquid world where the illiquid asset is completely liquid. \end{enumerate} \clearpage \section{Stochastic dynamic model for an illiquid asset}\label{stochastic-dynamic-model-illiquid} In this section we describe our stochastic dynamic model of one illiquid asset. Our model is closely related to the linear system proposed by Takahashi and Alexander~\cite{Takahashi2002}, with the addition of uncertainty in the capital calls and distributions. 
In \S\ref{s-extensions} we describe a straightforward extension of our model that includes the Takahashi model. We consider a discrete-time setting, with period denoted by $t=1,2,3,\ldots$, which could represent months, quarters, years, or any other period. Our model involves the following quantities, all denominated in dollars. \begin{itemize} \item $I_t\geq0$ is the illiquid wealth (the position in, or NAV of, the illiquid asset) at period $t$. \item $K_t\geq 0$ is the total uncalled commitments at period $t$. \item $C_t\geq 0$ is the capital call at period $t$. \item $D_t\geq 0$ is the distribution at period $t$. \item $n_t\geq 0$ is the amount newly committed to the illiquid asset at period $t$. \end{itemize} The commitment $n_t$ is the only variable we can directly control or choose. All the others are affected indirectly by $n_t$. \paragraph{Dynamics.} Here we describe how the variables evolve over time. At period $t$, \begin{itemize} \item we make a new capital commitment $n_t$ (which we can choose) \item we receive capital call $C_t$ (which is not under our control) \item we receive distribution $D_t$ (which is not under our control) \end{itemize} The uncalled commitment in period $t+1$ is \[ K_{t+1}=K_t+n_t-C_t, \] and the illiquid wealth in period $t+1$ is \[ I_{t+1}=I_tR_t+C_t-D_t, \] where $R_t\geq 0$ is a random total return on the illiquid asset. \paragraph{Calls and distributions.} We model calls and distributions as random fractions of $n_t$, $K_t$, and $I_tR_t$. We model calls as \[ C_t =\lambda_t^0n_t+\lambda_t^1K_t, \] where $\lambda_t^0 \in [0,1]$ is the random immediate commitment call intensity and $\lambda_t^1 \in [0,1]$ is the random existing commitment call intensity. Similarly, we model distributions as \[ D_t =I_tR_t\delta_t, \] where $\delta_t \in [0,1]$ is the random distribution intensity. 
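The dynamics above are easy to simulate. The following sketch uses illustrative, uncalibrated distributions for the random quantities (lognormal returns and Beta-distributed intensities are our assumptions, not part of the model):

```python
import numpy as np

rng = np.random.default_rng(0)

def step(I, K, n, R, lam0, lam1, delta):
    """One period of the illiquid dynamics: calls, distributions, state update."""
    C = lam0 * n + lam1 * K      # capital call C_t
    D = I * R * delta            # distribution D_t
    K_next = K + n - C           # uncalled commitments K_{t+1}
    I_next = I * R + C - D       # illiquid wealth I_{t+1}
    return I_next, K_next, C, D

# Illustrative (uncalibrated) draws for (R_t, lambda^0_t, lambda^1_t, delta_t).
I, K = 0.0, 0.0
for t in range(40):
    n = 1.0                               # commit $1 every period
    R = np.exp(rng.normal(0.03, 0.10))    # lognormal total return
    lam0 = rng.beta(2, 8)                 # immediate call intensity in [0, 1]
    lam1 = rng.beta(3, 7)                 # existing-commitment call intensity
    delta = rng.beta(2, 18)               # distribution intensity
    I, K, C, D = step(I, K, n, R, lam0, lam1, delta)
```

With a constant commitment stream, the illiquid wealth and uncalled commitments settle into a stochastic steady state; the intensities control how quickly commitments turn into calls and wealth into distributions.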
We assume the random variables $(R_t,\lambda_t^0,\lambda_t^1,\delta_t) \in {\mbox{\bf R}} \times [0,1]^3$ are IID, i.e., independent across time and identically distributed. (But for fixed period $t$, the components $R_t$, $\lambda_t^0$, $\lambda_t^1$, and $\delta_t$ need not be independent.) We do not know these random variables when we choose the current commitment $n_t$. Formally, we assume that $n_t \perp \!\!\! \perp (R_t,\lambda_t^0,\lambda_t^1, \delta_t)$. The current commitment can depend on anything known at the beginning of period $t$ (including for example past values of returns and intensities), but the current period return and intensities are independent of the commitment. \subsection{Stochastic linear system model} The model above can be expressed as a linear dynamical system with random dynamics and input matrices. With state $x_t=(I_t,K_t)\in{\mbox{\bf R}}^2$ and the control or input $u_t=n_t\in{\mbox{\bf R}}$, the dynamics are given by \[ x_{t+1}=A_tx_t+B_tu_t, \] where \begin{equation}\label{e-illiquid-matrix-eqn} A_t=\bmat{ R_t(1-\delta_t) &\lambda_t^1 \\ 0 & 1-\lambda_t^1 },\qquad B_t=\bmat{ \lambda_t^0\\ 1-\lambda_t^0 }. \end{equation} With output $y_t = (I_{t}, K_{t}, C_{t}, D_{t}) \in {\mbox{\bf R}}^4$, we have \[ y_{t}=F_tx_t+G_tu_t, \] where \begin{equation}\label{e-output-eqn} F_t=\bmat{ 1 & 0\\ 0 & 1\\ 0 & \lambda^1_t\\ R_t\delta_t & 0 },\qquad G_t=\bmat{ 0\\ 0\\ \lambda^0_t\\ 0}. \end{equation} We assume the initial state is known. We observe that $x_t \perp \!\!\! \perp (F_t,G_t)$, since the former depends on the initial state, the commitments $n_\tau$, and $(F_\tau,G_\tau)$ for $\tau <t$, and these are all independent of $(F_t,G_t)$. A careful reader might notice that these linear dynamics mean that the commitments and distributions asymptotically approach zero but never terminate.
However, the fractions of calls and distributions relative to the initial amounts are minuscule after several periods, and are negligible in the presence of new commitments coming in each period. Additionally, Gupta and Van Nieuwerburgh~\cite{Gupta2021} found, in analyzing long-term private equity behavior, that funds often have activity even fifteen years after inception, further justifying the persisting calls and distributions in the linear systems model. \subsection{Mean dynamics}\label{illiquid-mean-dynamics} Let $\overline x_t = {\mathbf E} x_t$ denote the mean of the state, $\overline u_t = {\mathbf E} u_t$ denote the mean of the input or control, and $\overline y_t = {\mathbf E} y_t$ denote the mean of the output. We define the mean matrices \[ \overline{A}={\mathbf E}{A_t}, \qquad \overline{B}={\mathbf E}{B_t}, \qquad \overline{F}={\mathbf E}{F_t}, \qquad \overline{G}={\mathbf E}{G_t} \] (which do not depend on $t$). We then have \begin{equation}\label{e-mean-dynamics} \overline{x}_{t+1} = \overline{A}\overline{x}_t+\overline{B}\overline{u}_t, \qquad \overline{y}_{t} = \overline{F}\overline{x}_t+\overline{G}\overline{u}_t, \end{equation} which states that the mean state and output are described by the same linear dynamical system, with the random matrices replaced with their expectations. The mean dynamics is a time-invariant deterministic linear dynamical system. \subsection{Impulse and step responses} Our linear system model implies that the mapping from the sequence of commitments or inputs $u_1, u_2, \ldots$ to the resulting outputs $y_1, y_2, \ldots$ is linear but random. We review three standard concepts from linear dynamical systems. \paragraph{Commitment impulse response.} We can consider the response of uncalled commitments, calls, illiquid wealth, and distributions to committing $n_1=1$ at period $t=1$ and $n_t=0$ for all $t>1$, with zero initial state. This is referred to as the impulse response of the system.
The impulse response is the stochastic process \[ y_t=\bmat{ I_t\\ K_t\\ C_t\\ D_t }= F_tA_{t-1}\cdots A_2 B_1, \quad t=2,3, \ldots, \] with $y_1=G_1$. From the mean dynamics \eqref{e-mean-dynamics}, we know that the mean impulse response is given by \[ \overline{y}_1 = \overline G, \qquad \overline{y}_t = \overline{F}~{\overline A}^{t-2}\overline B , \quad t=2,3, \ldots . \] \paragraph{Commitment step response.} We can also consider the effect of committing $n_1=n_2 = \cdots = 1$, which is referred to as the step response of the system. The step response shows how a simple policy of constant commitment impacts our exposure over time in the illiquid asset, calls, distributions, and our level of uncalled commitment. The step response is the stochastic process \[ y_1 = G_1, \qquad y_t = F_t\paren{\paren{\sum_{i=0}^{t-3}A_{t-1}\cdots A_{2+i}B_{i+1}}+B_{t-1}}+G_t,\quad t=2,3, \ldots. \] From the mean dynamics \eqref{e-mean-dynamics}, we know that the mean step response is given by \begin{equation}\label{e-mean-step-response} \overline y_t = \overline F\paren{\sum_{i=0}^{t-2}\overline A^i}\overline B+\overline G,\quad t=1,2, \ldots . \end{equation} \paragraph{Steady state response.} We define the unit steady-state mean response $y^\text{ss}$ as the limit of the mean step response, $\lim_{t \to \infty} \overline y_t$. Assuming the spectral radius of $\overline A$ is less than one, we take the limit of \eqref{e-mean-step-response} as $t\to\infty$ to obtain \[ y^\text{ss} = \overline F(I-\overline A)^{-1} \overline B + \overline G. \] We refer to the entries of \begin{equation}\label{e-steady-state-gains} y^\text{ss} = (\alpha_I,\alpha_K,\alpha_C, \alpha_D) \end{equation} as the steady state gains from commitment to illiquid wealth, uncalled commitment, capital calls, and distributions. These numbers have a simple interpretation. For example, $\alpha_I$ tells us what the asymptotic mean illiquid wealth is, if we repeatedly make a commitment of \$1.
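The steady-state gains are easy to compute numerically. The NumPy sketch below uses illustrative mean parameters of our own choosing, and approximates ${\mathbf E}[R_t(1-\delta_t)]$ by $\overline R(1-\overline\delta)$, i.e., it ignores any correlation between the return and the distribution intensity; only the identity $\alpha_C=1$ is exact regardless of the parameters.

```python
import numpy as np

# Illustrative mean parameters (ours, not calibrated values).
lam0, lam1, dbar, Rbar = 0.167, 0.334, 0.402, 1.218

Abar = np.array([[Rbar * (1 - dbar), lam1],
                 [0.0,               1.0 - lam1]])
Bbar = np.array([lam0, 1.0 - lam0])
Fbar = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [0.0, lam1],
                 [Rbar * dbar, 0.0]])   # last row: D_t = R_t * delta_t * I_t
Gbar = np.array([0.0, 0.0, lam0, 0.0])

# y_ss = Fbar (I - Abar)^{-1} Bbar + Gbar = (alpha_I, alpha_K, alpha_C, alpha_D).
y_ss = Fbar @ np.linalg.solve(np.eye(2) - Abar, Bbar) + Gbar
alpha_I, alpha_K, alpha_C, alpha_D = y_ss
```

With these stand-in parameters the gains come out close to those reported in the example below, but the exact values depend on the joint distribution.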
It can be shown that $\alpha_C=1$, {\it i.e.}, if we constantly commit \$1, then asymptotically, and in mean, the capital calls will also be \$1. \subsection{A particular return and intensity distribution} We suggest the following parametric joint distribution for $(\lambda_t^1,\lambda_t^0,\delta_t,R_t)$. They are generated from a random 3-vector \begin{equation}\label{e-illiquid-latent} z_t\sim \mathcal{N}(\mu,\Sigma)\in{\mbox{\bf R}}^3. \end{equation} From these we obtain \begin{equation}\label{e-illiquid-intensity-model} \lambda_t^1= \frac{1}{1+\exp(-(z_t)_1)},\qquad \lambda_t^0 = \frac{1}{2}\lambda_t^1,\qquad \delta_t = \frac{1}{1+\exp(-(z_t)_2)},\qquad R_t= \exp {(z_t)}_3. \end{equation} With this model, the return is log-normally distributed while the call and distribution intensities are logit-normally distributed. Dependence among the return and the intensities is modeled by the off-diagonal entries of $\Sigma$. \subsection{Example}\label{example_illiquid_stochastic} Here we describe a particular instance of the distribution described above, that we will use in various numerical examples in the sequel. \paragraph{Example return and intensity distribution.} In this example we use the following parameters for the distribution of $(\lambda_t^1,\lambda_t^0,\delta_t,R_t)$ specified in \eqref{e-illiquid-latent}: \begin{equation}\label{e-intensity-dist} \mu = \bmat{-0.700\\-0.423\\ 0.158},\qquad \Sigma = \bmat{ 0.068 & 0.072 & 0.006\\ 0.072 & 0.271 & 0.043\\ 0.006 & 0.043 & 0.079 }. \end{equation} This example is based on yearly periods. The mean return of the illiquid asset is derived from the BlackRock Capital Market Assumptions as of July 2021, which reports one private equity asset, Buyout, with a mean annual return of 15.8\% \cite{cma}. The call and distribution mean intensities are calibrated from private equity data for the eFront Buyout fund.
We report the empirical means of the intensities, \[ \overline \lambda_t^1 = 0.334,\qquad \overline \lambda_t^0 = 0.167,\qquad \overline \delta_t=0.402. \] (These are found by Monte Carlo simulation, since the mean of a logit-normal distribution does not have an analytical expression.) The covariance matrix is calibrated from the same data. \paragraph{Commitment impulse response.} The impulse response from commitment to uncalled commitment, calls, illiquid wealth, and distributions is shown in figure~\ref{impulse_response}. The top row shows the mean response, and the bottom shows 100 realizations, with the empirical mean shown as the white curve. We see that the uncalled commitments peak in the next period at a level of about 0.8. The calls peak at the next period at 0.28. We can see that our initial commitment translates into an illiquid holding which, in expectation, peaks four periods later with a value of about 0.47. Similarly, the resulting distributions peak with the illiquid wealth four periods later, with a level of 0.24. \begin{figure} \centering \includegraphics[scale=.45]{figures/impulse_new.png} \caption{Commitment impulse response. The top plot shows the mean values, and the bottom plot shows 100 realizations of the stochastic model.} \label{impulse_response} \end{figure} \paragraph{Commitment step response.} In figure~\ref{step_response}, we see the step response to constant unit commitment of uncalled commitments, calls, illiquid wealth, and distributions. The top row shows the mean response, and the bottom shows 100 realizations. The mean step responses approach their asymptotic values in around 8 periods. \begin{figure} \centering \includegraphics[scale=.45]{figures/step_new.png} \caption{Commitment step response.
The top plot shows the mean values, and the bottom plot shows 100 realizations of the stochastic model.} \label{step_response} \end{figure} \paragraph{Asymptotic expected response to constant commitment.} The steady-state gains are \[ \alpha_K= 2.491, \qquad \alpha_C= 1.000, \qquad \alpha_I= 3.685, \qquad \alpha_D= 1.804. \] For example, repeatedly committing \$1 leads to an asymptotic mean illiquid wealth of \$3.685. \subsection{Comparison with the Takahashi and Alexander model} Our stochastic model of an illiquid asset is closely related to that of Takahashi and Alexander \cite{Takahashi2002}, but it differs in two key ways. The most important difference is that our model is Markovian; the calls, distributions, and returns at time $t$ are conditionally independent of all previous quantities, given the state at time $t$. In comparison, Takahashi and Alexander's model specifies time-varying call and distribution intensity parameters. These time-varying intensities mean that the final intensities can be set to one, so that calls and distributions can have deterministic end times and the exposure need not decline geometrically. In \S\ref{s-extensions} we describe how to modify our model to depend on arbitrarily many previous time periods. This means we can exactly capture the original Takahashi and Alexander model with this extension of our model. We emphasize that this generalization remains fully tractable from the portfolio optimization standpoint described in this paper. The second difference between our model and that of Takahashi and Alexander is that ours is a stochastic model, with random intensities, whereas theirs is deterministic. \clearpage \section{Commitment optimization}\label{s-commitment-opt} Because the dynamics is linear, we can use convex optimization to choose a sequence of commitments to meet various goals in expectation. We consider a simple example here to illustrate.
We consider the task of starting with no illiquid exposure or uncalled commitments, {\it i.e.}, $I_1=0$, $K_1=0$, and choosing a sequence of commitments, $n_t$, $t=1,\ldots,T$. Our goal is to reach and maintain an illiquid wealth of $I^\text{targ}$, so we use as our primary objective the mean-square tracking error, \[ {\mathbf E} \frac{1}{T+1} \sum_{t=1}^{T+1} (I_t - I^\text{targ})^2. \] In addition, we want a smooth sequence of commitments, so we add a secondary objective term which is the mean-square difference in commitments, \[ {\mathbf E} \frac{1}{T-1} \sum_{t=2}^T (n_t-n_{t-1})^2. \] We also impose a maximum allowed per-period commitment, {\it i.e.}, $n_t \leq n^\text{lim}$. (Of course we can add other constraints and objective terms; this example is meant only to illustrate the idea.) We illustrate two methods. The first is simple open-loop planning, in which we assume that the state follows its mean trajectory, determine a fixed sequence of commitments, and then simply execute this plan. The second method is a closed-loop method, which adapts the commitments based on previously realized returns, capital calls, and distributions. This method is called model predictive control (MPC). We evaluate both policies under the mean dynamics and the stochastic dynamics. \subsection{Open loop commitment control} \paragraph{Planning.} We will find a plan of commitments based on the mean dynamics. This leads to the convex optimization problem \[ \begin{array}{ll} \mbox{minimize} & \frac{1}{T+1}\sum_{t=1}^{T+1}(\hat I_t-I^\text{targ})^2 +\gamma^\text{smooth}\frac{1}{T-1}\sum_{t=2}^T(\hat n_t-\hat n_{t-1})^2\\ \mbox{subject to} & \hat x_{t+1}=\overline{A}\hat x_t +\overline{B}\hat n_t,\qquad t=1,\ldots,T\\ & \hat x_1 = 0\\ & 0\leq \hat n_t\leq n^\text{lim},\qquad t=1,\ldots,T, \end{array} \] where $\gamma^\text{smooth}>0$ is a hyperparameter that determines the weight of the smoothing penalty, and $\overline A$ and $\overline B$ are the mean dynamics matrices defined in \S\ref{illiquid-mean-dynamics}.
The variables in this problem are $\hat n_1,\ldots,\hat n_T$ and $\hat x_1,\ldots,\hat x_{T+1}$, with $\hat I_t=(\hat x_t)_1$ for $t=1,\ldots,T+1$. We use the notation $\hat I_t$, $\hat n_t$, $\hat x_t$ to emphasize that these are quantities in our plan, and not the realized values. This is a simple convex optimization problem, a quadratic program (QP), and readily solved \cite{cvxbook}. \paragraph{Example.} Solving this problem with our running example defined in \eqref{e-intensity-dist} for \begin{equation}\label{e-commitment-planning-params} T=20,\qquad \gamma^\text{smooth}=1,\qquad I^\text{targ}=1,\qquad n^\text{lim}=.5, \end{equation} we find the optimal planned sequence of commitments and corresponding uncalled commitment and exposure shown in figure~\ref{commit_plan}. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{figures/just_commit_plan.pdf} \end{center} \caption{Deterministic commitment planning.} \label{commit_plan} \end{figure} The mean-squared tracking error attained by our plan is 0.133. We can also calculate the tracking error from $t=5$ onwards, to discount the large contribution to the tracking error from the first four periods. Thus, a perhaps more meaningful metric is the delayed root-mean-square (RMS) tracking error, \[ \paren{\frac{1}{T-4}\sum_{t=5}^{T}(I_t-I^\text{targ})^2}^{1/2}. \] (This is on the same scale as $I^\text{targ}$ and is readily compared to it.) The plan attains a delayed RMS tracking error of 0.071. The optimal commitment sequence makes sense. It hits the limit for the first two periods, in order to quickly bring up the illiquid wealth; then it backs off to a lower level by around period $6$, and finally converges to an asymptotic value near $I^\text{targ}/\alpha_I = 0.27$, which is the constant commitment value that would asymptotically give mean illiquid wealth $I^\text{targ}$. \paragraph{Results.} These results above are with the mean dynamics.
We can also execute this sequence of planned commitments under random calls and distributions as specified by our stochastic model in \S\ref{example_illiquid_stochastic}. The results for 100 simulated realizations are shown in figure~\ref{commit_open}. The mean-squared tracking error, averaged across the realizations, is 0.199. The delayed root-mean-square tracking error, averaged across the realizations, is 0.274. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{figures/just_commit_open.pdf} \end{center} \caption{Open loop commitment control, 100 realizations.} \label{commit_open} \end{figure} \subsection{Closed loop commitment control via MPC} We now perform model predictive control, which is a \emph{closed loop} method, meaning $n_t$ can depend on $x_t$, {\it i.e.}, we can adapt our commitments to the current values of uncalled commitments and illiquid wealth. \paragraph{Planning.} At every time $t=1,\ldots,T$, we plan commitments over the next $H$ periods $t,t+1,\ldots,t+H$, where $H$ is a planning horizon. We use $\hat x_{\tau|t},\hat n_{\tau|t}$ to indicate that these are the quantities in the plan at time $\tau$, from the plan made at time $t$. These planned quantities are found by solving the optimization problem \[ \begin{array}{ll} \mbox{minimize} & \frac{1}{H+2}\sum_{\tau=t}^{t+H+1}(\hat I_{\tau|t}-I^\text{targ})^2+ \gamma^\text{smooth}\frac{1}{H}\sum_{\tau=t+1}^{t+H}(\hat n_{\tau|t}-\hat n_{\tau-1|t})^2\\ \mbox{subject to} & \hat x_{t|t}=x_t\\ & \hat x_{\tau+1|t}=\overline{A}\hat x_{\tau|t}+\overline{B}\hat n_{\tau|t},\quad\tau=t,\ldots,t+H\\ & 0\leq \hat n_{\tau|t}\leq n^\text{lim},\quad \tau=t,\ldots,t+H, \end{array} \] with variables $\hat x_{t|t},\ldots,\hat x_{t+H+1|t}$ and $ \hat n_{t|t},\ldots,\hat n_{t+H|t}$, where $\hat I_{\tau|t} = (\hat x_{\tau|t})_1$. Note that when we plan at time $t$, we include the constraint $\hat x_{t|t} = x_t$; this closes the feedback loop by planning based on the current realized state.
\paragraph{Policy.} Our policy is simple to explain: at time $t$, after planning as described above, we execute control \[ n_t=\hat n_{t|t}. \] Note that the planned quantities $\hat I_{\tau|t}$, $\hat x_{\tau|t}$, $\tau=t+1,\ldots,t+H+1$, and $\hat n_{\tau|t}$, $\tau=t+1,\ldots,t+H$, are never executed by the MPC policy. They are only part of the plan. \paragraph{Results.} We again execute our policy under random calls and distributions as specified by our stochastic model in \S\ref{example_illiquid_stochastic}. The results for 100 simulated trajectories are shown in figure~\ref{commit_closed}. The average mean-squared tracking error is 0.182. The average delayed root-mean-square tracking error is 0.244, an 11\% reduction from the open loop policy. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{figures/just_commit_closed.pdf} \end{center} \caption{MPC commitment control, 100 realizations.} \label{commit_closed} \end{figure} \clearpage \section{Joint liquid and illiquid model}\label{joint_illiquid_liquid_model} We now describe a model for an investment universe consisting of multiple illiquid alternative and liquid assets. First, we extend to a universe of $n^\text{ill}$ illiquid assets. \paragraph{Multiple illiquids.} We extend the same quantities as in \S\ref{stochastic-dynamic-model-illiquid} from scalars to vectors of dimension $n^\text{ill}$: \[ K_t,I_t,C_t,D_t \in {\mbox{\bf R}}^{n^\text{ill}}, \qquad n_t\in{\mbox{\bf R}}^{n^\text{ill}}, \qquad R_t^\text{ill},\lambda_t^1,\lambda_t^0,\delta_t \in{\mbox{\bf R}}^{n^\text{ill}}. \] We have the exact same dynamics as before, duplicated for each illiquid asset. Each has its own states for exposure and uncalled commitment, and its own control for its new commitments. The illiquid calls, distributions, and returns are now part of a joint distribution.
The illiquid dynamics extend in vectorized form to \begin{equation*} K_{t+1} = K_t+n_t-C_t,\qquad I_{t+1} = \mathop{\bf diag}(R_t)I_t+C_t-D_t, \end{equation*} with \begin{equation*} C_t = \mathop{\bf diag}(\lambda_t^0)n_t+\mathop{\bf diag}(\lambda_t^1)K_t, \qquad D_t = \mathop{\bf diag}(R_t)\mathop{\bf diag}(\delta_t)I_t. \end{equation*} We emphasize that while the return, call, and distribution dynamics here are separable across the illiquid assets, the underlying random variables $((R_t)_j,(\lambda_t^0)_j,(\lambda_t^1)_j,(\delta_t)_j)$ can be modeled jointly. We continue with our assumption that these random variables are independent across time. \paragraph{Multiple liquids.} There is now a set of $n^\text{liq}$ liquid assets available to us. The liquid assets are simple: we can buy and sell them at will at each period; they suffer none of the complex dynamics of the illiquid assets. We add one new state, $L_t$, the (total) liquid wealth at period $t$. In addition to new commitments for each illiquid asset, at each time $t$ we now control how we allocate our liquid wealth each period, as well as how much outside cash to inject into our liquid wealth. Thus we have the additional quantities, which we can control: \begin{itemize} \item $h_t\geq 0\ (\in {\mbox{\bf R}}^{n^\text{liq}})$ is the allocation in dollars invested in liquid assets at period $t$, \item $s_t\geq 0\ (\in {\mbox{\bf R}})$ is the outside cash injected at period $t$. \end{itemize} At the beginning of period $t$, we invest (or allocate) our liquid wealth in liquid assets. This corresponds to the constraint $L_t=\mathbf{1}^Th_t$. We receive multiplicative liquid returns $(R_t^\text{liq})_j\in {\mbox{\bf R}}$ on liquid asset $j$, yielding total liquid value $h_t^T R_t^\text{liq}$. We then pay out capital calls from and receive distributions to our liquid wealth, for all illiquid assets. This corresponds to a net change in liquid wealth given by $-\mathbf 1^TC_t+\mathbf 1^TD_t$.
Lastly, if at this stage our liquid wealth is negative, we are forced to add outside cash $s_t$ to at least bring our liquid wealth to zero. Compactly, the liquid dynamics are \[ L_{t+1}=h_t^T R_t^\text{liq}-\mathbf 1^TC_t+\mathbf 1^TD_t+s_t, \] with constraints \begin{equation*} h_t,n_t,s_t\geq 0, \qquad L_t\geq 0, \qquad L_t=\mathbf{1}^Th_t. \end{equation*} \subsection{Stochastic linear system model}\label{stochastic_linear_system_joint} We can again represent these dynamics as a stochastic linear system. Let $x_t = (L_t,I_t,K_t)\in {\mbox{\bf R}}^{1+2n^\text{ill}}$ be the state vector. The control is $u_t = (h_t,n_t,s_t)\in {\mbox{\bf R}}^{1+n^\text{liq}+n^\text{ill}}$. Extending the $A$ and $B$ matrices from \eqref{e-illiquid-matrix-eqn}, define \begin{equation}\label{e-joint-matrix-dynamics} A_t = \bmat{ 0 & (\delta_t\circ R_t^\text{ill})^T & {-\lambda^1_t}^T\\ 0 & \mathop{\bf diag}((\mathbf 1-\delta_t)\circ R_t^\text{ill}) & \mathop{\bf diag}(\lambda_t^1)\\ 0 & 0 & \mathop{\bf diag}(\mathbf 1-\lambda_t^1) },\qquad B_t = \bmat{ {R^\text{liq}_t}^T & {-\lambda_t^0}^T & 1\\ \mathbf 0^T & \mathop{\bf diag}(\lambda_t^0) & 0\\ \mathbf 0^T & \mathop{\bf diag}(\mathbf 1-\lambda_t^0) & 0}. \end{equation} Then the random linear dynamics with multiple illiquids and liquids are \[ x_{t+1}= A_t x_t+ B_tu_t, \] with constraints \begin{equation*} h_t,n_t,s_t\geq 0, \qquad L_t\geq 0, \qquad L_t=\mathbf{1}^Th_t. \end{equation*} The presence of the outside cash control $s_t$ implies that a feasible control exists for any feasible value of the states, since $s_t$ prevents the liquid wealth from ever being negative.
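The block structure of \eqref{e-joint-matrix-dynamics} is mechanical to build. The NumPy sketch below constructs $A_t$ and $B_t$ for the state ordering $(L,I,K)$ and control ordering $(h,n,s)$; the function name is ours, for illustration.

```python
import numpy as np

def joint_dynamics_matrices(R_ill, R_liq, lam0, lam1, delta):
    """Build the random matrices A_t, B_t for state (L, I, K) and
    control (h, n, s), given per-period realizations (a sketch)."""
    n_ill, n_liq = len(R_ill), len(R_liq)
    A = np.zeros((1 + 2 * n_ill, 1 + 2 * n_ill))
    A[0, 1:1 + n_ill] = delta * R_ill                      # distributions add to L
    A[0, 1 + n_ill:] = -lam1                               # existing calls drain L
    A[1:1 + n_ill, 1:1 + n_ill] = np.diag((1 - delta) * R_ill)
    A[1:1 + n_ill, 1 + n_ill:] = np.diag(lam1)
    A[1 + n_ill:, 1 + n_ill:] = np.diag(1 - lam1)
    B = np.zeros((1 + 2 * n_ill, n_liq + n_ill + 1))
    B[0, :n_liq] = R_liq                                   # liquid returns
    B[0, n_liq:n_liq + n_ill] = -lam0                      # immediate calls drain L
    B[0, -1] = 1.0                                         # outside cash s
    B[1:1 + n_ill, n_liq:n_liq + n_ill] = np.diag(lam0)
    B[1 + n_ill:, n_liq:n_liq + n_ill] = np.diag(1 - lam0)
    return A, B
```

One can verify that $A_tx_t+B_tu_t$ agrees with the componentwise dynamics above.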
As in \S\ref{illiquid-mean-dynamics}, we let $\overline x_t = {\mathbf E} x_t$ denote the mean of the state and $\overline u_t = {\mathbf E} u_t$ the mean of the input or control, define the mean system matrices as \begin{equation*} \overline{A}={\mathbf E}{A_t},\qquad \overline{B}={\mathbf E}{B_t}, \end{equation*} and recover the same mean dynamics \begin{equation}\label{e-mean-liquid-dynamics} \overline{x}_{t+1} = \overline{A}\overline{x}_t+\overline{B}\overline{u}_t. \end{equation} \subsection{Return and intensity distribution} We extend the previous generative model specified in \eqref{e-illiquid-latent} and \eqref{e-illiquid-intensity-model} to include liquid returns, \begin{equation}\label{e-join-latent} z_t=\bmat{ z_t^\text{int}\\ z_t^\text{ret} } \sim \mathcal{N}(\mu,\Sigma)\in{\mbox{\bf R}}^{3n^\text{ill}+n^\text{liq}},\qquad \mu=\bmat{\mu^\text{int}\\ \mu^\text{ret}\\},\qquad \Sigma=\bmat{ \Sigma^\text{int} & \Sigma_{12}\\ \Sigma_{21} & \Sigma^\text{ret} }. \end{equation} From these we obtain delayed and immediate call intensities \[ \lambda_t^1= \frac{1}{1+\exp(-(z_t)_{1:n^\text{ill}})},\qquad \lambda_t^0 = \frac{1}{2}\lambda_t^1,\] distribution intensities \[ \delta_t = \frac{1}{1+\exp(-(z_t)_{n^\text{ill}+1:2n^\text{ill}})}, \] and returns \[ \bmat{ R_t^\text{ill}\\ R_t^\text{liq} } = \bmat{ \exp {(z_t)}_{2n^\text{ill}+1:3n^\text{ill}}\\ \exp {(z_t)}_{3n^\text{ill}+1:3n^\text{ill}+n^\text{liq}}}, \] where the exponential and logistic functions are applied elementwise.
\subsection{Relaxed liquid model}\label{Relax} As a thought experiment, we imagine the illiquid assets are completely liquid: we have arbitrary control of illiquid asset positions (immediate increase or decrease). This is a relaxation of the actual problem setting, where we must face stochastic and only indirectly controllable calls and distributions. The idea of a relaxed liquid model is not new; for example, Giommetti et al.~\cite{Giommetti2021} consider the target allocations resulting from treating illiquid assets as fully liquid for comparison, but do not evaluate stochastic control policies trying to achieve these allocations in an illiquid world. The relaxed liquid model is also implicitly behind various Capital Market Assumptions, where return ranges and correlations are given for both liquid and illiquid assets. The relaxed liquid model is very simple. There is only one state, the total wealth $W_t$. The quantities we have control over are the allocations to liquid and illiquid assets, denoted $h_t^\text{liq} \in {\mbox{\bf R}}^{n^\text{liq}}$ and $h_t^\text{ill} \in {\mbox{\bf R}}^{n^\text{ill}}$. The wealth evolves according to the dynamics \[ W_{t+1} = u_t^Tr_t,\qquad \mathbf 1^Tu_t=W_t,\qquad u_t=\bmat{ h_t^\text{liq}\\ h_t^\text{ill} }, \] where $r_t=(R_t^\text{liq},R_t^\text{ill})$ is the vector of total returns generated via \eqref{e-join-latent}. We use the standard trick of working with the weights of the allocations in each period, denoted $w_t$, instead of $u_t$. These are defined as $w_t = u_t/W_t$, so $\mathbf 1^T w_t=1$. We recover the dollar allocations as $u_t = W_tw_t$.
\subsection{Markowitz allocation and policy}\label{markowitz} A standard way to choose a portfolio allocation is to solve the one period risk-constrained Markowitz problem, \begin{equation} \begin{array}{ll}\label{e-markowitz-problem} \mbox{maximize} & \mu^Tw\\ \mbox{subject to} &\mathbf 1^Tw =1,\quad w\geq 0\\ & \|\Sigma^{1/2}w\|_2 \leq \sigma, \end{array} \end{equation} where $\sigma$ is the maximum tolerable return standard deviation, and $\mu$ and $\Sigma$ are the expected return and return covariance, respectively. We denote the optimal allocation as $w^\star$. The natural policy associated with solving the Markowitz problem simply rebalances to $w^\star$: it sets $u_t=W_tw^\star$ for each period $t$. This simple rebalancing is of course not possible under the accurate model that includes the challenges of alternative assets, but it is under the relaxed liquid model. \subsection{Example}\label{relaxed_example} \paragraph{Liquid performance.} Under the assumptions of \S\ref{Relax}, we solve the one period Markowitz problem with $\mu$ and $\Sigma$ the mean and covariance of the joint distribution of liquid and illiquid asset returns. Using this relaxed Markowitz target, we simulate the fantasy performance achieved by being able to perfectly rebalance both liquids and illiquids to the Markowitz target each period, for multiple periods, using the policy described in \S\ref{markowitz}.
For the parameters defined in \eqref{e-join-latent}, we use the specific values of \begin{eqnarray}\label{e-example-mean-return} \mu_\text{ret} &=& \pmat{ 0.158& 0.000& 0.072& 0.023& 0.036& 0.046},\\ \sigma_\text{ret} &=& \pmat{ 0.281& 0.000& 0.206& 0.046& 0.047& 0.162},\\ C_\text{ret} &=& \bmat{ 1.000 & 0.000 & 0.422 & -0.298 & -0.002 & 0.261 \\ 0.000 & 1.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ 0.422 & 0.000 & 1.000 & -0.843 & 0.197 & 0.800 \\ -0.298 & 0.000 & -0.843 & 1.000 & -0.018 & -0.739 \\ -0.002 & 0.000 & 0.197 & -0.018 & 1.000 & 0.628 \\ 0.261 & 0.000 & 0.800 & -0.739 & 0.628 & 1.000 },\label{e-example-cov-return} \end{eqnarray} with $\Sigma_\text{ret} = \mathop{\bf diag}(\sigma_\text{ret})C_\text{ret}\mathop{\bf diag}(\sigma_\text{ret})$. The liquid return mean and covariance matrix are gathered from the BlackRock Capital Market Assumptions for equities as of July 2021 \cite{cma}. The corresponding expected total returns are \[ \overline R_t= \pmat{1.218 & 1.000 & 1.098 & 1.024 & 1.038 & 1.061}, \] where the first asset is the illiquid asset and the second asset is cash. \paragraph{Risk-return trade-off.} By solving the Markowitz problem with these parameters across a range of values for the risk tolerance $\sigma$ (which give rise to corresponding Markowitz targets), we can create a risk-return trade-off plot, shown in figure~\ref{baseline_results}. We should consider this trade-off curve as an unattainable performance benchmark, one that we can only strive to approach when the challenges of illiquid alternatives are present. \begin{figure} \begin{center} \includegraphics[scale=.3]{figures/baseline.png} \caption{Relaxed liquid risk-return trade-off, obtained by pretending the illiquid assets are fully liquid.
This gives an unattainable performance benchmark for the problem when the challenges of illiquid alternatives are present.} \label{baseline_results} \end{center} \end{figure} \clearpage \section{Strategic asset allocation with full illiquid dynamics} In \S\ref{relaxed-SAA}, we described an approach to strategic asset allocation for portfolios including an imagined class of illiquid alternatives which are rendered completely liquid. In this section, we provide methods to perform strategic asset allocation with mixed liquid and illiquid alternative portfolios, where we can only augment our illiquid positions by making new commitments, and the effect of this action is random and delayed. First, we describe a method which over time establishes and then maintains a given target allocation under growth. Then, we describe a more sophisticated MPC method which jointly selects a target allocation based on a user's risk tolerance, establishes the target, and maintains the target in growth. \subsection{Steady-state commitment policy}\label{heuristic} We first describe a simple policy, which seeks to track a target allocation $\theta^\text{targ}$. It allocates liquid assets proportionate to the desired liquid allocation, and chooses new commitments by scaling the target level of illiquid wealth by the inverse of the asymptotic expected response to constant commitment. The inputs are a target allocation $\theta^\text{targ}$, the current liquid wealth $L$, and the current illiquid wealth $I$. First, the policy checks if $L$ is negative. If it is, it returns control \[ u=(h,n,s),\qquad h=0,\quad n=0,\quad s=|L|. \] Otherwise, if the liquid wealth is positive, the policy proceeds as follows. First, the policy rebalances the liquid holdings proportionately to $\theta^\text{targ}$, \[ h=L\frac{\theta^\text{liq}}{\mathbf 1^T\theta^\text{liq}}, \] where $\theta^\text{liq},\theta^\text{ill}$ are the liquid and illiquid blocks of the allocation vector $\theta^\text{targ}=\bmat{\theta^\text{liq}\\\theta^\text{ill}}$, respectively.
Then, with $\alpha_I$ the steady-state gain from commitment to illiquid wealth defined in \eqref{e-steady-state-gains}, and with the target illiquid level $I^\text{targ}=\theta^\text{ill}(L+I)$, the policy commits \[ n_i=\frac{I^\text{targ}_i}{(\alpha_I)_i} \] and returns control $u=(h,n,0)$. \subsection{Model predictive control policy}\label{MPC} We now describe a more sophisticated policy which plans ahead based on a model of the future, seeking to maximize wealth subject to various risk constraints. For a sequence of prospective actions, the policy forecasts future state variables using the mean dynamics described in \eqref{e-mean-liquid-dynamics}. The policy then chooses a sequence of actions by optimizing an objective which depends on the planned actions and forecast states. Finally, the policy executes only the first step of the planned sequence. The impact of that action and the resulting state are observed, and then this cycle repeats. The policy selects a planned sequence of actions by trying to maximize the ultimate total liquid and illiquid wealth. However, it is also constrained by a user's risk tolerance, which caps the allowable per-period return volatility. Additionally, because capital calls are stochastic in nature, the policy seeks to guarantee that, with high probability, all capital calls can be funded from the liquid wealth. \paragraph{Modified Markowitz constraint.} Motivated by the standard one period risk-constrained Markowitz problem \eqref{e-markowitz-problem}, we would like to include a risk constraint in our planning problem. However, the Markowitz problem has variables in weight space rather than wealth space. Other multi-period optimization problems based on the Markowitz problem, such as in \cite{Boyd2017}, assume a timescale over which the wealth does not grow significantly over the planning horizon.
In our case, since potential application contexts include endowments and insurers, we must handle substantial growth over the investment horizon. Thus, we consider an analogous risk constraint in wealth space rather than weight space, \[ \frac{y^T\Sigma y}{(\mathbf 1^Ty)^2}\leq \sigma^2 \iff \|\Sigma^{1/2} y\|_2\leq \sigma\mathbf 1^Ty, \] where $y=(h,I)$ is the vector of liquid and illiquid exposures. Thus, we use the constraint \[ \|\Sigma^{1/2} y\|_2\leq \sigma\mathbf 1^Ty, \] which is invariant to the overall level of wealth. It is also convex, which means that problems with such constraints can be reliably solved. \paragraph{Insolvency constraint.} An important challenge in performing strategic asset allocation with illiquid alternatives is ensuring that the probability of being unable to pay a capital call is extremely low. In our model, this corresponds to requiring \[ P(W_{t+1}<0 \mid X_t,n_t,h_t) \leq \epsilon^\text{ins} \] for a small probability of failure $\epsilon^\text{ins}$. We make several approximations to obtain a convex constraint. First, we approximate $R_t^\text{liq}$ as a multivariate normal random variable, \[ R_t^\text{liq}\sim N(\mu_\text{liq},\Sigma_\text{liq}). \] Note that these parameters are the mean and covariance of the liquid returns, rather than the mean and covariance which parameterize the log-normal liquid return distribution, given by $\mu_\text{ret}$ and $\Sigma_\text{ret}$ in \eqref{e-join-latent}. Then we assume we receive the expected calls $\overline{c}_t ={\mathbf E}[C_t\mid X_t,n_t,h_t]$, which is a linear function of our controls, given by $\overline{c}_t =\overline{\lambda}_t^{1, T}K_t+\overline{\lambda}_t^{0,T}n_t$. Finally, we pessimistically assume there are no distributions or outside cash. With these approximations, we have \begin{eqnarray*} P(W_{t+1}<0 \mid X_t,n_t,h_t) & \approx & P(h_t^TR_t^\text{liq}-\overline{c}_t \leq 0)\\ &=& P(N(h_t^T\mu_\text{liq}-\overline{c}_t,h_t^T\Sigma_\text{liq}h_t)<0) \leq \epsilon^\text{ins}.
\end{eqnarray*} This probabilistic constraint holds if and only if \begin{equation}\label{e-insolvency} \overline{c}_t-h_t^T\mu_\text{liq} \leq \Phi^{-1}(\epsilon^\text{ins})\|\Sigma_\text{liq}^{1/2}h_t\|_2, \end{equation} where $\Phi$ is the standard normal cumulative distribution function. This constraint is convex provided $\epsilon^\text{ins}\leq 1/2$, since then $\Phi^{-1}(\epsilon^\text{ins})\leq 0$, and \eqref{e-insolvency} is a second-order cone constraint (see \cite[\S 4.4.2]{cvxbook}). As mentioned above, the constraint \eqref{e-insolvency} is pessimistic because it assumes no distributions. An alternative, less pessimistic formulation of the insolvency constraint would treat the distributions, the calls, and the liquid returns jointly under a normal approximation. \paragraph{Smoothing penalty.} Among control sequences with similar objective values, we prefer new commitments that are fairly smooth across time. A natural commitment smoothing penalty is \[ g(n) = \sum_{t=0}^{H-1}\gamma^t\|n_{t+1}-n_t\|^2. \] The time discount $\gamma$ appears because in a growth context we expect $n_t$ to increase over time; it also helps account for the increased uncertainty of future planned steps. \paragraph{MPC planning problem.} All objective terms and constraints outlined above are consolidated into one optimization problem.
At time $t$, we plan $\{\hat x_{\tau|t}\}_{\tau=t}^{t+H+1}$, $\{\hat u_{\tau|t}\}_{\tau=t}^{t+H}$, where $H$ is the planning horizon, by solving the optimization problem \begin{equation}\label{e-full-mpc-problem} \begin{array}{lll} \text{maximize} & \sum_{\tau=t}^{t+H}\gamma^{\tau-t} \paren{\hat L_{\tau|t} +\mathbf 1^T\hat I_{\tau|t}-\lambda^\text{cash}\hat s_{\tau|t}}-\lambda^\text{smooth}g(\hat n_{\cdot|t})\\ \text{subject to} & \hat x_{t|t} = x_t\\ & \hat x_{\tau+1|t}=\overline{A} \hat x_{\tau|t}+ \overline{B} \hat u_{\tau|t}, & \tau=t,\ldots,t+H\\ & \hat L_{\tau|t}\geq 0, & \tau= t,\ldots,t+H+1\\ & \hat h_{\tau|t},\hat n_{\tau|t},\hat s_{\tau|t} \geq 0, & \tau= t,\ldots,t+H\\ & \mathbf{1}^T \hat h_{\tau|t} = \hat L_{\tau|t}, & \tau=t,\ldots,t+H\\ & \|\Sigma^{1/2} \hat y_{\tau|t}\|_2\leq \sigma\mathbf 1^T\hat y_{\tau|t}, & \tau=t,\ldots,t+H\\ & \overline{\lambda}^{1,T} \hat K_{\tau|t}+\overline{\lambda}^{0,T}\hat n_{\tau|t} -\hat h_{\tau|t}^T\mu_\text{liq} \leq \Phi^{-1}(\epsilon^\text{ins})\|\Sigma_\text{liq}^{1/2} \hat h_{\tau|t}\|_2, & \tau=t,\ldots,t+H,\\ \end{array} \end{equation} where $\lambda^\text{cash}>0$ is a hyperparameter penalizing the use of outside cash. Recall that $L$, $I$, and $K$ are components of $x$, and $h$, $n$, and $s$ are components of $u$. \subsection{Example}\label{results} In this example, we evaluate the performance of the two policies described in \S\ref{heuristic} and \S\ref{MPC} using the risk-return trade-off.
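As a sanity check on the insolvency constraint \eqref{e-insolvency}, the equivalence between the chance constraint and its second-order cone form can be verified numerically in the scalar case (hypothetical numbers, Python standard library only):

```python
from statistics import NormalDist

# Scalar case: the liquid P&L is N(mu, sigma**2), with mu = h*mu_liq - c_bar.
# The chance constraint P(P&L < 0) <= eps holds if and only if
#   c_bar - h*mu_liq <= Phi^{-1}(eps) * sigma,
# where Phi^{-1}(eps) < 0 for eps < 1/2.
eps = 0.02
mu, sigma = 1.0, 0.45                            # hypothetical expected P&L and volatility
prob_insolvent = NormalDist(mu, sigma).cdf(0.0)  # P(N(mu, sigma^2) < 0)
cone_lhs = -mu                                   # c_bar - h*mu_liq
cone_rhs = NormalDist().inv_cdf(eps) * sigma     # Phi^{-1}(eps) * sigma

# Both formulations agree on whether the constraint is satisfied.
assert (prob_insolvent <= eps) == (cone_lhs <= cone_rhs)
```

The equivalence is exact, since $P(N(\mu,\sigma^2)<0)\leq\epsilon$ is the same as $-\mu/\sigma\leq\Phi^{-1}(\epsilon)$ for $\sigma>0$.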
For the parameters defined in \eqref{e-join-latent}, we use the specific values \begin{eqnarray}\label{e-example-ret-liq} \mu_\text{ret} &=&\pmat{ 0.158,& 0.000,& 0.072,& 0.023,& 0.036,& 0.046 },\\ \sigma_\text{ret} &=& \pmat{ 0.281,& 0.000,& 0.206,& 0.046,& 0.047,& 0.162 },\\ C_\text{ret} &=& \bmat{ 1.000 & 0.000 & 0.422 & -0.298 & -0.002 & 0.261 \\ 0.000 & 1.000 & 0.000 & 0.000 & 0.000 & 0.000 \\ 0.422 & 0.000 & 1.000 & -0.843 & 0.197 & 0.800 \\ -0.298 & 0.000 & -0.843 & 1.000 & -0.018 & -0.739 \\ -0.002 & 0.000 & 0.197 & -0.018 & 1.000 & 0.628 \\ 0.261 & 0.000 & 0.800 & -0.739 & 0.628 & 1.000 }\label{e-example-cov-liq} \end{eqnarray} with $\Sigma_\text{ret} = \mathop{\bf diag}(\sigma_\text{ret})C_\text{ret}\mathop{\bf diag}(\sigma_\text{ret})$. \paragraph{Illiquid dynamics.} We now consider the actual illiquid setting: the full random call and distribution dynamics described in \S\ref{stochastic_linear_system_joint}. We evaluate the two policies described in \S\ref{heuristic} and \S\ref{MPC} on the same simulated returns as the imagined liquid Markowitz portfolio. \paragraph{Example policy specifications.} In this case study, we use the steady-state commitment policy with parameters \[ c=3.685,\qquad \kappa=0.1, \] and values of $\theta$ arising from solving the one-period Markowitz problem defined in \S\ref{markowitz} for 30 evenly spaced values of $\sigma$ between 0 and 0.3, with our specified return distribution parameters defined in (\ref{e-example-mean-return}--\ref{e-example-cov-return}). For the MPC policy, we use the same $\sigma$ values described above, but for numerical reasons use the standard trick of moving the risk limit to penalized form, subtracting \[ \lambda^\text{risk}(\|\Sigma^{1/2} y_t\|_2-\sigma\mathbf 1^Ty_t)_+ \] from each term of the objective defined in \eqref{e-full-mpc-problem}, penalizing excess risk.
The parameter values are \[ \gamma=0.97,\qquad H = 10,\qquad \epsilon^\text{ins}=0.02,\qquad \lambda^\text{risk}=10,\qquad \lambda^\text{smooth}=0.1,\qquad \lambda^\text{cash}=1000, \] with $\overline{A},\overline{B}$ as defined in \eqref{e-mean-liquid-dynamics}, and with the distributions instantiated in \eqref{e-intensity-dist} and (\ref{e-example-ret-liq}--\ref{e-example-cov-liq}). \paragraph{Results.} We see in figure~\ref{f-policy-results-20} that both the MPC and heuristic policies come extremely close to the risk-return performance of the liquid relaxation, which is an unattainable benchmark. This is despite the challenging illiquid dynamics faced in the non-relaxed setting. \begin{figure} \begin{center} \includegraphics[scale=.3]{figures/tradeoff-20.pdf} \caption{Risk-return trade-off, 200 simulations of 20 periods.} \label{f-policy-results-20} \end{center} \end{figure} The performance reported here is averaged across 20 periods of simulation, for 200 simulated trajectories. We can also examine the performance over a shorter time horizon. Figure~\ref{f-policy-results-10} shows the same risk-return trade-off for 10 periods. Evidently, there is a larger gap between the MPC policy and the liquid performance ceiling, and also between the MPC and simple policies. This has a clear interpretation: because there is a roughly 4 period delay before peak illiquid exposure (see figure \ref{impulse_response}), the impact of the illiquid alternative asset's high returns is delayed. Additionally, by planning ahead, the MPC policy achieves illiquid exposure faster than the simple policy. \begin{figure} \begin{center} \includegraphics[scale=.3]{figures/tradeoff-10.pdf} \caption{Risk-return trade-off, 200 simulations of 10 periods.} \label{f-policy-results-10} \end{center} \end{figure} By examining the average allocation across time for both policies, shown in figure~\ref{allocations}, we can further understand these differences.
We can now see directly that the MPC policy reaches a stable allocation in fewer periods than the heuristic policy. If we include the proportional feedback control, the heuristic does reach the allocation faster, but still not as quickly as the MPC. Another difference is that the heuristic and MPC policies sweep out the same risk-return trade-off profile, but may not choose exactly the same steady-state portfolio weights. Generally, the heuristic undershoots the illiquid target it is trying to reach. \begin{figure} \centering \includegraphics[scale=.5]{figures/alloc.png} \caption{Average allocations across time, 200 simulations for 20 periods. MPC, heuristic, and relaxation.} \label{allocations} \end{figure} \clearpage \section{Extensions}\label{s-extensions} \paragraph{Liquidation.} We can easily extend the model to allow for liquidation of illiquid alternatives on the secondaries market. Following common practice (for example, see \cite{Giommetti2021}), we assume that at time $t$ we can liquidate $0\leq \ell_t\leq I_t$ of the illiquid exposure, which, after a haircut $\phi$, is available as liquid wealth $\phi\ell_t$. This changes the control by appending an $\ell_t\in{\mbox{\bf R}}^{n^\text{ill}}$ to $u_t$ as defined in \S\ref{stochastic_linear_system_joint}. Accordingly, the new control matrix is given by \[\bmat{ B_t^{\text{liq}} & \tilde{B}_t }, \] with \[ \tilde{B}_t = \bmat{ \phi \mathbf 1^T\\ -I\\ 0 }, \] where the block of zeros and the identity matrix have dimension ${\mbox{\bf R}}^{n^\text{ill}\times n^\text{ill}}$. There is also a new constraint enforcing $0 \leq\ell_t\leq I_t$, {\it i.e.}, the maximum liquidation is the entire illiquid exposure in a given asset. \paragraph{Tracking fixed weights.} In this paper, a user specifies a risk tolerance parameter $\sigma$ as in \eqref{e-markowitz-problem}, which implicitly specifies the portfolio weights across the liquid and illiquid assets. However, an investor may arrive with pre-selected target portfolio weights.
Instead of seeking to track a target illiquid exposure, as in the problem posed in \S\ref{s-commitment-opt}, we can instead seek to track target portfolio weights. A natural tracking constraint in planning is \[ \|\Sigma^{1/2}(\hat{y}_{\tau|t}-\theta\mathbf 1^T\hat{y}_{\tau|t})\|_2 \leq \sigma^{\text{track}}\mathbf 1^T\hat{y}_{\tau|t}, \] where $\theta\in{\mbox{\bf R}}^{n^\text{ill}+n^\text{liq}}$ is the user-provided vector of target portfolio weights, and $\sigma^\text{track}$ is a tracking-error hyperparameter. As with our risk constraint in \eqref{e-full-mpc-problem}, in practice a slack variable can be added to the above constraint to guarantee feasibility. \paragraph{Liabilities.} We can incorporate external liabilities $Z_t$ by modifying the liquid wealth update to \[ L_{t+1}=h_t^T R_t^\text{liq}-\mathbf 1^TC_t+\mathbf 1^TD_t+s_t-Z_t. \] This encodes an external obligation of $Z_t$ dollars in period $t$, which could represent the liabilities of an insurer or a pension fund. MPC handles these liabilities quite gracefully: at every time $t$, the planning problem takes in a forecast of the next $H$ liabilities $\hat Z_{\tau|t}$, $\tau=t,\ldots,t+H$. The insolvency constraint~\eqref{e-insolvency} can be modified to include the liabilities as \[ Z_t+\overline{c}_t-h_t^T\mu_\text{liq} \leq \Phi^{-1}(\epsilon^\text{ins})\|\Sigma_\text{liq}^{1/2}h_t\|_2. \] \paragraph{Time varying forecasts.} In the current problem formulation, we plan based on the mean dynamics \eqref{e-mean-liquid-dynamics}, which we treat as stationary at every time $t$. The mean dynamics capture the expected returns, call intensities, and distribution intensities.
It is straightforward to replace these stationary forecasts with time-varying ones: planning at time $t$ in \eqref{e-full-mpc-problem} becomes \[ \hat x_{\tau+1|t}=\overline{A}_{\tau|t} \hat x_{\tau|t}+ \overline{B}_{\tau|t} \hat u_{\tau|t},\qquad \tau=t,\ldots,t+H, \] where $\overline{A}_{\tau|t}$ and $\overline{B}_{\tau|t}$ are the mean dynamics for time $\tau$ forecast at time $t$. \paragraph{Illiquid dynamics with vintages.} A natural way to extend the Takahashi--Alexander illiquid asset model is to allow time-varying intensity parameters that depend on the age of the investment. This amounts to keeping track of vintages for each asset class, rather than aggregating all exposure to a given illiquid asset in one state, as this paper does. This extension is quite natural, and is readily implementable as an only slightly larger tractable convex optimization based planning problem. A given illiquid asset at time $t$, rather than being described by the two states $I_t$ and $K_t$, will now require $2k$ states, \[ I_{t,a},\quad K_{t,a}, \] where $a$ denotes the age of the investment and $k$ is the maximum age tracked. In words, at each time, we keep track of the exposure and uncalled commitments arising from commitments of age $a$. \section{Conclusion} We have described a flexible stochastic linear system model of liquid and illiquid alternative assets that takes into account the dynamics of the illiquid assets and the randomness of returns, calls, and distributions. This model allows us to develop an MPC policy that, in each time period, chooses a liquid wealth allocation along with new commitments to make in each alternative asset. We compare the results of this policy with a relaxed liquid model, in which we assume that all illiquid assets are fully liquid. This relaxed liquid model is easy to understand, since the challenges of alternative assets have all been swept under the rug. For the relaxed liquid model, we can work out optimal investment policies.
The performance under these policies can be thought of as an unattainable benchmark, which we know we cannot achieve or beat when all the challenges of alternative investments are present. Surprisingly, the performance of the MPC policy under the real model, with all the challenges of alternative assets, is very close to the performance of the relaxed liquid model under an optimal policy. Roughly speaking, there is not much room for improvement. This is a strong validation of the MPC policy. Another interesting conclusion is that the relaxed liquid model is not as useless as one might imagine, since MPC attains similar performance with all the challenges present. In a sense, this validates reasoning based on the relaxed liquid model, in which illiquid assets are treated as liquid assets. Roughly speaking, the asset manager can reason about the portfolio using the simple relaxed liquid model; feedback control with the MPC policy handles the challenges of illiquid alternative assets. \clearpage \bibliographystyle{alpha}
\section{Introduction} Reconnection of a magnetic field is a process in which magnetic field lines change their connection with respect to their sources. In effect, magnetic energy is converted into kinetic and thermal energy, which accelerates and heats the plasma. Historically, reconnection was first observed in solar flares and the Earth's magnetosphere, but today it is also investigated in star formation theory and astrophysical dynamo theory. Recently, reconnection has also been invoked in the acceleration of cosmic rays \cite{L05}. In solar flares, oppositely directed magnetic flux is first accumulated, and then reconnection occurs, enabling the transfer of magnetic energy into kinetic energy and the heating of plasma. From such an ejection of matter, we can observe the onset of reconnection and estimate the energy released in this process. Recent measurements by the instruments onboard the Solar Dynamics Observatory \cite{su13} have revealed new, unexpected features, and show that even the morphology of solar reconnection is still not completely understood. In the context of accretion disks around protostars, neutron stars and black holes, reconnection is part of the transport of heat, matter and angular momentum. It enables a re-arrangement of the magnetic field, after which angular momentum can be transported from the matter infalling from an accretion disk towards the central object. In \v{C}emelji\'{c} et al. \cite{scl1} we performed 2D axisymmetric resistive simulations of star-disk magnetospheric outflows. Ongoing reconnection produces fast, light micro-ejections of matter from the close vicinity of the disk gap. When going to three-dimensional simulations, a more precise model of reconnection is needed, as it defines the topology of the magnetic field. In cases where the flows are less ordered, turbulent reconnection has been invoked \cite{LV99}. The Sweet-Parker model \cite{sw58} was the first proposed model for reconnection.
Parker \cite{par57} solved the time-independent, non-ideal MHD equations for two regions of plasma with oppositely directed magnetic fields pushed together. Particles are accelerated by a pressure gradient, using the known properties of magnetic field diffusion. Viscosity and compressibility are assumed to be unimportant, so that the magnetic field energy converts completely into heat. This model is robust, but predicts a reconnection time that is too long when compared with the observed data for solar flares. Petschek \cite{pet64} proposed another model, for fast reconnection. For energy conversion, he added stationary slow-mode shocks between the inflow and outflow regions. This decreased the aspect ratio of the diffusion region to the order of unity, and increased the energy release rate, so that the observed data were better matched. However, his model fails to explain solar flares, because the fast reconnection can persist only for a very short time. Many aspects of the reconnection process have been studied since, but the problem of the speed of reconnection remains unsolved. Because of numerical difficulties, research on reconnection was for a few decades constrained to two-dimensional solutions. In three-dimensional space there are more ways for reconnection to proceed than in two dimensions, and the very nature of reconnection is different \cite{php03}. There is still no full assessment of three-dimensional reconnection -- for a recent review, see Pontin \cite{pont11}. Our 2D setup here is the familiar Harris current sheet, an exact stationary solution to the problem of a current sheet separating regions of oppositely directed magnetic field in a fully ionized plasma \cite{har62}. It is possible to obtain a Petschek-like reconnection in resistive-MHD simulations with uniform resistivity, but it demands special care with the setup of boundary conditions, as described in \cite{bat06}.
To avoid this issue, we chose to set a spatially asymmetric profile for the resistivity, as suggested in \cite{bat06}. In the 3D simulations, we build a column of matter above such a Harris 2D configuration, with the resistivity dependent on height in the third direction. Through a modification of the resulting shocks, this enables a Petschek reconnection also in the third dimension. We first investigate the differences between the reconnection rates in 2D and 3D numerical simulations, by comparing the energies in the computational box. Then we compare the change of current helicity and cross-helicity with increasing height of the matter column in the third direction. \section{Numerical setup for simulations of reconnection} In our simulations, magnetic reconnection is considered in the magnetohydrodynamic (MHD) approximation. There are other possibilities: an entirely different approach, with kinetic reconnection, is described in \cite{buch99} and references therein. Such reconnection is a consequence of the instability of thin current sheets in a self-consistent treatment of ion and electron inertia and dissipative wave-particle resonances. Ideal MHD, the study of the dynamics of a perfectly conducting fluid, is not appropriate for the description of reconnection. This is because the fluid must cross the magnetic field lines, rather than follow them as required by ideal MHD. This means that for numerical simulations with reconnection, non-ideal, resistive MHD is needed. For our simulations we use the {\sc pluto} (v. 4.0) code \cite{mig12} in the parallel option.
The equations we solve are, in Gaussian cgs units: \begin{eqnarray} \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{V})=0\\ \rho\left[ \frac{\partial\vec{V} }{\partial t}+ \left( \vec{V}\cdot \nabla\right) \vec{V} \right] + \nabla p- \frac{\vec{j}\times \vec{B}}{c} = 0 \label{mom2}\\ \frac{\partial\vec{B} }{\partial t}- \nabla \times \left( \vec{V} \times \vec{B}-\frac{4\pi}{c}\eta \vec{j} \right)= 0 \label{faraday}\,, \nabla \cdot \vec{B}=0\label{faraday2}\\ \rho \left[ {\frac{\partial e}{\partial t}} + \left(\vec{V} \cdot \nabla\right)e \right] + p(\nabla \cdot\vec{V} ) -\frac{4\pi}{c^2}\eta\vec{j}^2 = 0 \,, \label{enn} \end{eqnarray} with $\rho$ and $p$ the density and pressure, and $\vec{V}$ and $\vec{B}$ the velocity and magnetic field. The electric current density is given by Amp\`{e}re's law, $\vec{j}=c\nabla\times\vec{B}/{4\pi}$. The internal energy (per unit mass) is $e=p/(\rho(\gamma-1))$, where $\gamma$ is the effective polytropic index. We work in the adiabatic regime, with $\gamma=5/3$, so that the ideal gas law is $\rho=\gamma p/c_s^2$, where $c_s$ is the sound speed; $\eta$ is the electrical resistivity. Here we do not consider the influence of non-ideal terms other than Ohmic resistivity. Our simulations are set on a uniform Cartesian grid with $X\times Y \times Z$ = $(200 \times 400 \times 100)$ grid cells = $([-1,1]\times [-2,2]\times [0,1])$ in code units. The same resolution, equal in all three directions, is kept in the simulations with different heights and in the 2D simulations; the number of cells in the Z direction is changed so as to maintain the same resolution for each height of the box. Our results are compared to \cite{bat09} as a reference. There, a nonuniform grid is used, with the smallest grid spacing 7 times smaller in the X, and 3 times smaller in the Y direction, than our uniform spacing.
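The uniform spacing and the height-dependent Z cell count described above can be made explicit in a small sketch (code units; the helper name is ours):

```python
# Uniform grid: [-1,1] x [-2,2] x [0,1] covered by 200 x 400 x 100 cells.
dx = (1.0 - (-1.0)) / 200
dy = (2.0 - (-2.0)) / 400
dz = (1.0 - 0.0) / 100
assert dx == dy == dz == 0.01   # same resolution in all three directions

def n_cells_z(height, spacing=0.01):
    """Number of Z cells that keeps the same spacing for a taller box."""
    return round(height / spacing)
```

For example, a box of height $h=4$ in code units keeps the spacing 0.01 by using 400 cells in Z.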
We performed simulations with different resolutions, and found the number of cells for which the reconnection occurred and qualitatively resembled the reference. In the results with finer resolution, only the time needed to reach the different stages varied. With coarser resolution, reconnection did not occur, as the current sheet was not sufficiently resolved. \subsection{Initial and boundary conditions} \begin{figure} \includegraphics[width=8cm]{ict0.eps} \caption{Setup of the initial and boundary conditions in our simulations in three dimensions. In the two-dimensional simulations, Z=0. Color grading shows the current density in code units at the boundary planes; the diameter of the magnetic flux tube is set proportional to the magnetic field strength. We start with a 2D simulation in Cartesian coordinates $X\times Y$. Increasing the height of the box in the Z direction, we compare the reconnection rates and other quantities of interest in the flow. } \label{initcond3d} \end{figure} To ease the comparison of results, we choose the same initial conditions for the density, pressure, velocity and the two regions of oppositely directed magnetic field as in \cite{bat09}. The so-called Harris current sheet is formed initially with the magnetic field $\vec{B}_{2}(x)=\hat{y}\,B_{0}\tanh(x/b)$, which is parallel to the Y axis and varies in the X direction. The amplitude of the magnetic field is chosen to be $B_0=1$, and the initial half-width of the current sheet is $b=0.1$. We set the plasma $\beta=0.35$, the initial pressure to $p(x)=1.25-B_2^2/2$, and the density to $\rho(x)=2p(x)/\beta$. To obtain Petschek reconnection with spatially uniform resistivity, the boundary conditions need to be overspecified at one of the inflow boundaries, by imposing the mass density, two components of the velocity, one component of the magnetic field, and the total energy \cite{bat06}.
To avoid such issues, we include an anomalous Ohmic resistivity (a few orders of magnitude larger than the microscopic resistivity), with a profile asymmetric in the X-Y plane and dependent on height in the Z direction: \begin{eqnarray} \eta=(\eta_0-\eta_1)\exp{[-(x/a_1)^2-(y/a_2)^2]}\\ \nonumber +(\eta_0-\eta_1)\exp{[-(x/a_1)^2-(z/a_3)^2]}+\eta_1 \\ \nonumber =\eta_1+(\eta_0-\eta_1)\exp{[-(x/a_1)^2]}\\ \cdot(\exp{[-(y/a_2)^2]} +\exp{[-(z/a_3)^2]})\nonumber \ , \end{eqnarray} with the characteristic lengths of the resistivity variation in each direction $a_1$=0.05 and $a_2$=$a_3$=0.02. The constant factors in the resistivity, $\eta_0=10^{-3}$ and $\eta_1=3\times 10^{-5}$, are also equal to those in \cite{bat09}, for easier comparison. We checked that $\eta_1$ is above the order of the numerical resistivity at the given resolution. We found the level of numerical resistivity by a direct comparison of simulations with different minimum resistivities; it is of the order of $10^{-6}$. All our simulations have been performed with the value of the resistivity set above the numerical resistivity. The resistivity is then set equal to $\eta$ for $y\geq 0$, and to $\eta_0$ for $y<0$. At all the boundaries, we impose free (``outflow'') boundary conditions, extrapolating the flow from the box into the ghost zones. In the {\sc pluto} code setup we used Cartesian coordinates, an ideal equation of state, and the ``dimensional splitting'' option, which uses Strang operator splitting to solve the equations in the multi-dimensional case. The spatial order of integration was set to ``{\sc linear}'', meaning that a piecewise TVD linear interpolation is applied, accurate to second order in space. We used the second-order-in-time Runge-Kutta evolution scheme RK2, with the Eight-Waves option for maintaining $\nabla\cdot\vec{B}=0$ at the truncation level. As the approximate Riemann solver, we use the Lax-Friedrichs scheme (the ``tvdlf'' solver option in {\sc pluto}).
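For reference, the initial Harris equilibrium and the asymmetric resistivity profile described above can be sketched in Python (code units; a minimal transcription of the formulas, not the {\sc pluto} setup itself):

```python
import math

B0, b, beta = 1.0, 0.1, 0.35    # field amplitude, half-width, plasma beta
eta0, eta1 = 1e-3, 3e-5         # anomalous and background resistivity
a1, a2, a3 = 0.05, 0.02, 0.02   # characteristic lengths of the profile

def harris(x):
    """Initial Harris current sheet: By(x), with pressure and density
    set so that the total pressure p + By^2/2 = 1.25 is uniform."""
    By = B0 * math.tanh(x / b)
    p = 1.25 - By**2 / 2.0
    rho = 2.0 * p / beta
    return By, p, rho

def resistivity(x, y, z):
    """Asymmetric resistivity: eta0 for y < 0, and the localized
    profile eta1 + (eta0-eta1) e^{-(x/a1)^2} (e^{-(y/a2)^2} + e^{-(z/a3)^2})
    for y >= 0."""
    if y < 0:
        return eta0
    common = (eta0 - eta1) * math.exp(-(x / a1)**2)
    return eta1 + common * (math.exp(-(y / a2)**2) + math.exp(-(z / a3)**2))
```

Far from the sheet the field saturates at $B_0$ and the resistivity relaxes to the background value $\eta_1$.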
We first present the well-known results in 2D, to introduce the concept of reconnection and the measurement of the reconnection rate. \section{Results in 2D} \begin{figure} \hspace*{-.5cm}\includegraphics[width=9cm]{sl2dt30.eps} \caption{Reconnection in two dimensions at T=30 in code units, with the current density shown in color grading, magnetic field contour lines as solid lines, and arrows showing the velocity. } \label{twodre} \end{figure} \begin{figure*} \includegraphics[width=8cm]{sl3dala2dT30.eps} \includegraphics[width=8cm]{sl3dT70.eps} \caption{Solutions in 3D in the first case, without the asymmetry in resistivity in the Z direction, at T=30 ({\em Left panel}), and in the second case, with the asymmetry in the Z direction, at T=70 ({\em Right panel}). Color grading shows the toroidal current density at the boundary planes; tubes show a choice of magnetic flux tubes, with the diameter of each tube set proportional to the magnetic field strength; arrows show the velocity. A change in the connectivity of the magnetic flux tubes in 3D, triggered by the asymmetry in resistivity in the vertical plane, additionally changes, and complicates, the topology of the magnetic field. } \label{mf3d} \end{figure*} Reconnection in 2D is an extensively investigated topic, but still with inconclusive results. It is not even clear whether the approach with numerical MHD simulations correctly describes the onset of reconnection, especially in models which invoke turbulence. Here we assume that MHD simulations can provide correct rates. The reconnection rate in the two-dimensional case can be estimated in different ways, depending on the dominant physical properties of the system. The first such estimate was given by Parker \cite{par57, par63}, with the Alfv\'{e}n Mach number $M_{\mathrm {SP}}=V_{\mathrm i}/V_{\mathrm A}=S^{-1/2}$. The Lundquist number $S=\tau_{Ohm}/\tau_{adv}=LV_{\mathrm A}/\eta $ is the ratio of the advective over the resistive term in Equation (\ref{faraday}).
In the astrophysical case, the length scale is so large that the obtained reconnection rate is far too small when compared with the observed reconnection events on the Sun or in the Earth's magnetosphere. To improve the model, Petschek \cite{pet64} assumed that the plasma can also be accelerated by slow shocks. The reconnection rate is then $M_{\mathrm P}=\pi/(8\ln S)$. For a current sheet confined to a small region, the Petschek model gives orders of magnitude faster reconnection than the Sweet-Parker model, up to 0.1. The reason is that in the Petschek model the reconnection rate does not depend strongly on the Lundquist number. For a current sheet extending over the whole reconnection area, both models give the same reconnection rate. The reconnection rate can also be estimated from turbulent motions \cite{LV99}. For the case of only Ohmic resistivity, we can estimate the distance to which a magnetic field can diffuse in a time $\tau_D$ as $\ell\sim (\eta\tau_D)^{1/2}$. This means that two field lines can merge only if their distance is of the order of $\Delta=\ell/\sqrt{S}$. In combination with mass conservation, one obtains the reconnection rate $M_{\mathrm {SP}}$. In our 2D simulation, the initial conditions are the same as in Baty et al. \cite{bat09}, with asymmetric resistivity. We did not specify fixed conditions at the inflow boundary in the X direction. Instead, our boundary conditions are left open to ``outflow'', so that a steady state will not be reached. The density near the box boundaries along the Y direction does not change until T=70 in code units, confining the matter inside the box. Starting from time T=1, a Petschek reconnection is obtained. In the case shown in Figure \ref{twodre}, the reconnection rate is of the order of $M_{\mathrm P}=0.1$. The figure shows the resulting velocity structure, with the current density and magnetic field during reconnection, at T=30.
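For illustration, the strong $S$-dependence of the Sweet-Parker rate versus the weak, logarithmic dependence of the Petschek rate can be seen numerically (hypothetical Lundquist numbers):

```python
import math

def rate_sweet_parker(S):
    # M_SP = S^(-1/2)
    return S ** -0.5

def rate_petschek(S):
    # M_P = pi / (8 ln S)
    return math.pi / (8.0 * math.log(S))

for S in (1e4, 1e8, 1e12):
    print(f"S = {S:.0e}:  M_SP = {rate_sweet_parker(S):.2e},  M_P = {rate_petschek(S):.2e}")
```

Even for astrophysically large $S$, the Petschek estimate stays within an order of magnitude of 0.1, while the Sweet-Parker rate collapses by many orders of magnitude.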
After T=100, because of reflections back into the computational box and matter leaving the box, the results become unsuitable for comparison with steady-state solutions. \section{Results in 3D} \begin{figure*} \hspace*{-.6cm}\includegraphics[width=8cm,height=6cm]{ekinall3.eps} \hspace*{-.6cm}\includegraphics[width=8cm,height=6cm]{emagehyd2.eps} \caption{Time dependence of the energy for different heights of the computational box in our simulations with reconnection in all three directions. The {\em left panel} shows the kinetic energy, and the {\em right panel} shows the ratio of magnetic energy to the sum of internal and kinetic energy. Results with different heights of the box, $h$=1, 2, 3 and 4, are shown as solid, dashed, dot-dashed and dotted lines, respectively. The thick solid line shows the result with height $h$=0.25 in the case with reconnection only in the X-Y plane, which is our reference 2D case. The kinetic energy during the build-up of reconnection increases linearly with the height of the computational box, with a factor of proportionality of about 2. The fraction of magnetic energy decreases steadily with time until the reconnection in the third direction starts; then it increases proportionally with height. } \label{enratz} \end{figure*} \begin{figure} \hspace*{-.6cm}\includegraphics[width=8cm,height=6cm]{Hczoom2.eps} \caption{Time evolution of the current helicity in the $X\leq 0$ half of the computational box. Results with different heights of the box, $h$=0.25, 0.5, 1, 2, 3 and 4, are shown as black cross-marked, diamond-marked, solid, dashed, dot-dashed and dotted lines, respectively. } \label{hcplot} \end{figure} \begin{figure} \hspace*{-.6cm}\includegraphics[width=8cm,height=6cm]{hbjandengraf2.eps} \caption{The solid line shows the maximum variation of the current helicity with the height of the computational box, during the increase in reconnection. Values are taken from the results at T=23, shown in Figure \ref{hcplot}.
The dashed line shows the maximum variation of the fraction of magnetic energy as a function of h, taken from results at about T=50, shown in Figure \ref{enratz}. Note the different scales on the vertical axes of the plot; the right-hand axis denotes the fraction of magnetic energy. } \label{hbjgr} \end{figure} \begin{figure} \hspace*{-.6cm}\includegraphics[width=8cm,height=6cm]{vball2.eps} \caption{Time evolution of the cross helicity in the $X\leq 0$ half of the computational box. Lines have the same meaning as in Figure \ref{hcplot}. Results in the first case, with reconnection only in the X-Y plane, are shown in red lines, with the same line style coding. Three time intervals mark the build-up of reconnection in the X-Y plane (I), the increase in reconnection in the Z-direction (II), and a short increase in the fraction of magnetic energy during the re-organization of the magnetic field because of reconnection (III). } \label{hvbplot} \end{figure} \begin{figure} \hspace*{-.6cm}\includegraphics[width=8cm,height=6cm]{Hvjsvezoom2.eps} \caption{Time evolution of the ``mixed helicity'' in the $X\leq 0$ half of the computational box. Lines have the same meaning as in Figure \ref{hcplot}. } \label{hvjplot} \end{figure} When we set up the 3D simulation with the same asymmetry in resistivity as in the 2D case, and add the dependence of the resistivity on Z, we obtain in each X-Y parallel plane results similar to the 2D case, as shown in Figure \ref{mf3d}. This is because of the current sheet at X=0 along the Y-Z plane. Without some perturbation that would break the stability of this current sheet, the flow does not really have three, but still only two degrees of freedom, as if 2D simulations were stacked atop each other. 
Instead of introducing a perturbation -- as has been done, for example, in simulations with Hall resistivity by Huba \& Rudakov \cite{HR02} -- we rather introduce a dependence of the resistivity on height (the Z direction) in our prescription of resistivity, in the same way as we did in the X-Y plane. For small heights of the box, the result still resembles the 2D result. With increasing height, the shocks are modified in the Z-direction, so that at each height the shock is of different density. This enables Petschek reconnection in the Y-Z plane, which expels matter in the Z-direction. In Figure \ref{mf3d} both the reconnection in the X-Y plane and that in the Y-Z plane are visible. Priest \& Schrijver \cite{ps99} estimated that one additional degree of freedom results in an increase of the reconnection rate by a factor of the order of $\sqrt{2}$. To estimate the reconnection rate in 3D, we compute the energy in the computational box. For a given Lundquist number S, it follows from the Sweet-Parker reconnection rate that $V_{\mathrm {i,3D}}^2/V_{\mathrm {i,2D}}^2=E_{\mathrm {k,3D}}/E_{\mathrm {k,2D}}=2$. This means that if in the 3D simulation the reconnection rate increases by a factor of $\sqrt{2}$ compared to the 2D simulation, the corresponding kinetic energy increases by a factor of 2. We can now verify whether this estimate holds. The results for the time dependence of the integral kinetic energy for the different heights of the box are shown in Figure \ref{enratz}. As in the 2D case, we compare results during the phase of the simulation when the reconnection rate increases and the boundaries still do not change much from the initial conditions. We find agreement with the above prediction: the kinetic energy in a 3D simulation is about double the energy in 2D, for the same length scale. We also find that the trend continues linearly with increasing height of the box. 
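The scaling argument above amounts to one line of arithmetic: if the inflow velocity is proportional to the reconnection rate, a rate increase by $\sqrt{2}$ doubles the kinetic energy. A trivial check (the variable names are ours):

```python
import math

# Assumption: V_i is proportional to the reconnection rate M,
# so the kinetic energy scales as E_k ~ V_i**2 ~ M**2.
rate_ratio = math.sqrt(2.0)        # M_3D / M_2D (Priest & Schrijver estimate)
energy_ratio = rate_ratio ** 2     # E_k,3D / E_k,2D, approximately 2
print(energy_ratio)
```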
After the peak in reconnection (at about T=50 in our simulations with reconnection in both the X-Y and Y-Z planes), as was the case in the 2D simulations, reflections back into the computational box and matter leaving the box make results for different heights incomparable. With less density, matter leaves the box with higher velocity, and the kinetic energy increases. To maintain a stationary reconnection rate, driving at the boundaries would be needed. Here we do not investigate such a case. In Aly \cite{aly84,aly91} it is argued that the maximum magnetic energy a force-free system can store is attained in the configuration without loops in the magnetic field. Any loop would, when straightened, lead to an ``open field'' of larger energy. Our simulations are not in the force-free regime, but measuring the dependence of this ratio on the height of the computational box could still be informative. Our result is shown in Figure \ref{enratz}. The fraction of magnetic energy in the total energy increases linearly with the box height. A general measure of reconnection should accommodate not only well ordered, but also complex reconnection, for example reconnection in turbulent fluids. A change in the topology of the magnetic field can to some degree be measured by the ``knottedness of tangled vortex lines'' \cite{mof69}. For any vector field $\vec{F}$, the quantity $\vec{F}\cdot(\nabla\times\vec{F})$ is called the helicity density. Its integral over a closed volume is the helicity. For a vector field which is invariant under the change from a right-handed to a left-handed orientation of the coordinates, the helicity is zero \cite{mof78}. One such example is our simulations here, which have a mirror symmetry across the Y-Z plane at X=0. To obtain non-zero results, we always integrate only over half of the computational box, in the interval ($-0.5\leq x\leq 0$). In a non-symmetric case, this integral should be taken over the whole computational box. 
We compute three quantities related to helicity, to measure the degree of complexity of the velocity and magnetic field: the current helicity H$_{\mathrm c}$ and the cross helicity H$_{\mathrm {VB}}$ \begin{eqnarray} H_{\mathrm c}=\int\vec{B}\cdot(\nabla\times\vec{B})dV,\ H_{\mathrm {VB}}=\int\vec{V}\cdot\vec{B}dV\ , \end{eqnarray} for which results are shown in Figures \ref{hcplot} and \ref{hvbplot}, and a quantity H$_{\mathrm {VJ}}$ which we provisionally call ``mixed helicity'' \begin{equation} H_{\mathrm {VJ}}=\int\vec{V}\cdot\vec{J}dV, \end{equation} shown in Figure \ref{hvjplot}. Integrations are always performed over half of the computational box. There are three separate time intervals, marked in Figure \ref{hvbplot}, which are present in all the results: (I) the build-up of reconnection in the X-Y plane, (II) the increase in reconnection in the Z-direction, and (III) a short increase in the fraction of magnetic energy during the re-organization of the magnetic field because of reconnection. The increase in current helicity with the height of the computational box during interval II is shown in Figure \ref{hbjgr}, together with the change in energy. For more than double the height of the box, the increase in H$_{\mathrm c}$ during interval II is small. Only during interval III, when reconnection in the third direction is fully established, does the current helicity increase. The cross helicity describes the three-dimensional reconnection better, as it increases linearly with the height of the box for the full 3D reconnection, with h$\ge 1$. For smaller heights, h=0.25 and 0.5, reconnection in the Z direction does not follow the trend -- this could be an artifact of our setup, with its directional asymmetry in resistivity. The cross helicity, shown in Figure \ref{hvbplot}, also clearly shows the difference between reconnection in 2D and in 3D throughout both intervals II and III. 
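The three integrals above are straightforward to evaluate on gridded simulation output. The following minimal sketch uses our own helper names, with \texttt{np.gradient} as a simple stand-in for the differencing actually used in the code; it is verified on the field $\vec{B}=(\cos z,\sin z,0)$, for which $\vec{B}\cdot(\nabla\times\vec{B})=-1$ everywhere:

```python
import numpy as np

def curl(F, dx):
    """Finite-difference curl of F = (Fx, Fy, Fz), each component an
    (nx, ny, nz) array on a uniform grid with spacing dx."""
    Fx, Fy, Fz = F
    return (np.gradient(Fz, dx, axis=1) - np.gradient(Fy, dx, axis=2),
            np.gradient(Fx, dx, axis=2) - np.gradient(Fz, dx, axis=0),
            np.gradient(Fy, dx, axis=0) - np.gradient(Fx, dx, axis=1))

def helicity(F, G, dx):
    """Volume integral of F . G, approximated as sum(F . G) * dx**3."""
    return sum((f * g).sum() for f, g in zip(F, G)) * dx ** 3

# Test field B = (cos z, sin z, 0): B . (curl B) = -1 pointwise, so the
# current helicity over [0, 2*pi)^3 should be close to -(2*pi)**3.
n = 32
dx = 2.0 * np.pi / n
z = np.arange(n) * dx
Z = np.broadcast_to(z, (n, n, n))          # varies along axis 2 (z)
B = (np.cos(Z), np.sin(Z), np.zeros((n, n, n)))
H_c = helicity(B, curl(B, dx), dx)         # close to -(2*pi)**3
```

With velocity and current fields on the same grid, \texttt{helicity(V, B, dx)} gives H$_{\mathrm{VB}}$ and \texttt{helicity(V, curl(B, dx), dx)} gives H$_{\mathrm{VJ}}$ (up to units).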
The ``mixed helicity'', shown in Figure \ref{hvjplot}, reveals more structure in the velocity and current fields in all the intervals I-III than is revealed by the other two helicities or by the energies. It remains to be seen, in simulations with less ordered or turbulent reconnection, how much of this structure is related to the strictly directional asymmetry in our setup of the resistivity. \section{Summary} We have presented new results with a direct comparison of numerical simulations of reconnection in two and three dimensions. Reconnection in our simulations is facilitated by an asymmetry in the Ohmic resistivity. Without the asymmetry, reconnection does not occur in our setup. The asymmetry in the X-Y plane enables the reconnection in that plane, and the dependence of the resistivity on height in the Z-direction modifies the shocks in the Z-direction, so that Petschek reconnection starts also in the Y-Z plane. By comparing the integral kinetic energy in the 2D and 3D computations, we find that the 3D simulation proceeds with a reconnection rate which is larger by a factor of $\sqrt{2}$ than the rate in the 2D simulation. This finding confirms the simple analytic estimate of Priest \& Schrijver \cite{ps99}. We also show that the fraction of magnetic energy in the total energy increases linearly with the box height. We obtained our results in the case when reconnection was set up by an asymmetry in resistivity. There are other means of facilitating reconnection. One natural generalization of a 2D simulation of the X-point collapse of a magnetic field into a localized current layer to a 3D situation is to obtain points in space at which the magnetic field strength is zero -- 3D null points. The topology of such points is characterized by a pair of field lines forming a separatrix surface, which separates portions of magnetic field of different topologies. 
Yet another way to form a current sheet in 3D is to connect two such null points, forming a separator line (\cite{pont11} and references therein). Reconnection in 3D is also possible without null points, in regions in which field lines are non-trivially linked with each other (as, for example, in braided magnetic fields or as the result of some ideal instability). Among others, there is also the possibility of current sheet formation by the motion of a magnetic field line footpoint \cite{par72}. Comparison of results across the various approaches mentioned above is not straightforward; this is why we opted for more general measures. By computing the current helicity, cross helicity and ``mixed helicity'' in our choice of setup, we find three characteristic time intervals in all our simulations. In two of them, reconnection in the three-dimensional simulation increasingly differs from the corresponding reconnection in the two-dimensional simulation, and the results also depend on the height of the reconnection region. It remains to be studied whether reconnection in three-dimensional simulations is well described by energies and helicities in the cases of less ordered and of turbulent reconnection. In a future study we will also include other resistive terms, and apply the results in models of resistivity in simulations of reconnection in astrophysical outflows. \begin{acknowledgments} We thank A. Mignone and his team of contributors for the possibility to use the {\sc pluto} code. M.\v{C}. thanks R. Krasnopolsky for helpful discussions. \end{acknowledgments}
\section{Introduction} In this paper, we study functions satisfying the following dynamic programming principle (DPP) \begin{align}\label{dpp} \begin{split} & \hspace{-0.2em} u_{\epsilon}(x,t) \\ & \hspace{-0.5em} = \frac{1}{2}\sup_{\nu \in S^{n-1}} \hspace{-0.15em}\bigg\{ \alpha u_{\epsilon} \bigg(\hspace{-0.25em}x+\epsilon \nu,t-\frac{\epsilon^2}{2} \bigg) \hspace{-0.15em}+ \hspace{-0.25em}\beta\hspace{-0.1em} \kint_{B_{\epsilon}^{\nu} } u_{\epsilon} \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) \hspace{-0.15em} \bigg\} \\ & \hspace{-0.5em} + \frac{1}{2}\inf_{\nu \in S^{n-1}} \hspace{-0.15em}\bigg\{ \alpha u_{\epsilon} \bigg(\hspace{-0.25em}x+\epsilon \nu,t-\frac{\epsilon^2}{2} \bigg) \hspace{-0.15em}+ \hspace{-0.25em}\beta\hspace{-0.1em} \kint_{B_{\epsilon}^{\nu} } u_{\epsilon} \bigg(\hspace{-0.15em}x+h,t-\frac{\epsilon^2}{2} \bigg) d \mathcal{L}^{n-1}(h) \hspace{-0.15em} \bigg\} \end{split} \end{align} for small $\epsilon > 0$. Here, $\alpha, \beta$ are positive constants with $\alpha + \beta = 1$, $S^{n-1}$ is the $(n-1) $-dimensional unit sphere centered at the origin, $B_{\epsilon}^{\nu} $ is an $(n-1)$-dimensional $\epsilon$-ball which is centered at the origin and orthogonal to a unit vector $\nu$ and $$\kint_{A } u(h) d \mathcal{L}^{n-1}(h) = \frac{1}{|A|}\int_{A } u(h) d \mathcal{L}^{n-1}(h) , $$ where $|A|$ is the $(n-1)$-dimensional Lebesgue measure of a set $A$. We will show interior (parabolic) Lipschitz type regularity for $u_{\epsilon}$ satisfying \eqref{dpp}, that is, $$ |u_{\epsilon}(x,t)-u_{\epsilon}(z,s)| \le C( |x-z|+ |t-s|^{\frac{1}{2}} + \epsilon) $$ for some constant $C>0$ and any $(x,t), (z,s) $ in a parabolic cylinder of a given domain. The motivation to study this DPP partly stems from its connection to stochastic games. 
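In dimension $n=2$, where $B_\epsilon^\nu$ is a segment of length $2\epsilon$ through the origin orthogonal to $\nu$, the right-hand side of \eqref{dpp} can be evaluated numerically for any given function. The sketch below is only an illustration of the structure of the operator; the discretization choices (a uniform angular grid for the sup/inf over $S^1$, midpoint quadrature for the segment average) are ours:

```python
import numpy as np

def dpp_rhs(u, x, t, eps, alpha, n_dirs=64, n_quad=32):
    """Right-hand side of the DPP at (x, t) in dimension n = 2:
    the midrange over nu in S^1 of
      alpha * u(x + eps*nu, s) + beta * (average of u over the segment
      of length 2*eps through x orthogonal to nu),
    with beta = 1 - alpha and s = t - eps**2/2.  The sup/inf over
    directions is approximated on a uniform angular grid and the
    segment average by midpoint quadrature."""
    beta = 1.0 - alpha
    s = t - eps ** 2 / 2.0
    a = (np.arange(n_quad) + 0.5) / n_quad * 2.0 * eps - eps  # midpoints
    vals = []
    for th in np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False):
        nu = np.array([np.cos(th), np.sin(th)])
        perp = np.array([-nu[1], nu[0]])      # spans B_eps^nu
        avg = np.mean([u(x + c * perp, s) for c in a])
        vals.append(alpha * u(x + eps * nu, s) + beta * avg)
    return 0.5 * (max(vals) + min(vals))

# Affine functions are fixed points of the spatial part of the operator:
v = dpp_rhs(lambda y, s: y[0], np.array([0.3, -0.2]), 0.0, 0.1, 0.5)
```

For an affine $u$ the sup and inf over directions cancel in the midrange, so \texttt{v} returns the value at the base point, consistent with the DPP holding for linear functions.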
On the other hand, our work is also linked to a normalized parabolic $p$-Laplace equation \begin{equation} \label{nple} \partial_{t} u = \Delta_{p}^{N} u = \Delta u + (p-2) \Delta_{\infty}^{N} u. \end{equation} There have been many recent results regarding mean value characterizations for $p$-Laplace type equations (see, for example, \cite{MR2376662,MR2451291,MR2588596,MR2684311,MR2875296,MR2566554,MR3011990}). We can formally justify, by using the Taylor expansion, that a solution of \eqref{nple} asymptotically satisfies \eqref{dpp}. In \cite{MR3494400}, Parviainen and Ruosteenoja proved Lipschitz type regularity for functions satisfying a DPP related to the PDE \eqref{nple}, but their DPP is different from ours and only covers the case $2 < p< \infty$. They also showed a H\"{o}lder type estimate for another DPP, associated with the normalized parabolic $p(x,t)$-Laplace equation. They used an analytic method to show the H\"{o}lder type regularity, while for the Lipschitz regularity when $p$ is constant, the core of their proof is based on game theory. The aim of this paper is to extend the regularity results in \cite{MR3494400} from the case $2 < p< \infty$ to the case $1 < p< \infty$. It is hard to apply the game theoretic argument in that paper to our DPP. Therefore, here we extend the proof of the H\"{o}lder regularity results in \cite{MR3494400} to obtain the main result, Theorem \ref{mainthm}. The proof of our main theorem is divided into two parts. In the first part, we provide an estimate for the function $u_{\epsilon}$ with respect to $t$. More precisely, it gives a relation between the oscillation of $u_{\epsilon}$ in the time direction and the oscillation in the spatial direction. Next, we concentrate on proving regularity results with respect to $x$. We first obtain a H\"{o}lder type estimate and then turn to a Lipschitz estimate. Comparison arguments play a key role in the proof of the main theorem. 
As we mentioned earlier, our work is closely related to the $p$-Laplace type equations. The DPP can be understood as a discretization of the related PDE. Therefore, we can expect that key ideas in studying DPP would be useful in order to analyze the PDE. On the other hand, our work is in close connection with game theory. One can understand the DPP \eqref{dpp} in the spirit of tug-of-war games. This interpretation is quite useful in that it allows us to see the problem from a different angle. Actually, game theoretic arguments have played an important role in proving results in several previous studies. The notion of a `harmonious function' was introduced in \cite{MR1617186,MR2346452}. A harmonious function $v$ satisfies the following DPP \begin{align} \label{hsdpp} v(x) =\frac{1}{2} \big\{ \sup_{y \in D} v(y) + \inf_{y \in D} v(y) \big\} , \end{align} where $D$ is a fixed neighborhood of $x$. In \cite{MR2449057}, some properties of harmonious functions were deduced by using tug-of-war games. A relation between the tug-of-war with noise and $p$-Laplace operator was shown in \cite{MR2451291}. Moreover, similar connections for general fully nonlinear equations were covered in \cite{MR2588596}. In \cite{MR2684311,MR2566554,MR2875296}, the authors derived asymptotic mean value characterizations for solutions to $p$-Laplace operators. The coincidence of game values of tug-of-war games and functions satisfying related DPPs as well as the existence and uniqueness of these functions were shown in \cite{MR3161602}. Studies on DPPs and associated tug-of-war games are ongoing under various settings, for example in nonlocal and Heisenberg group setting, as in \cite{MR2471139, MR2868849, MR3177660}. Many regularity results are also known for functions defined through a DPP. In \cite{MR3011990}, a Lipschitz type estimate was proved for a DPP connected to the elliptic $p$-Laplace problem. 
A local approach for the regularity was developed in \cite{MR3169768} (see also \cite{MR3441079}). It is based on cancellation strategies which as an application give a new and straightforward proof for the Lipschitz continuity for the corresponding PDEs. On the other hand, in \cite{MR3623556}, interior H\"{o}lder regularity was shown for a space-varying DPP based on the method in \cite{MR3846232}. Lipschitz regularity for this DPP was proved in \cite{alpr2018lipschitz}. The paper is organized as follows. In the next section, some notations and background knowledge are presented. We prove the main theorem in the remaining sections. In Section 3, we establish the estimate for our function $u_{\epsilon}$ with respect to $t$. After that, regularity for $u_{\epsilon}$ in spatial direction is covered. We derive the H\"{o}lder regularity in Section 4 and the Lipschitz regularity in Section 5. \section{Preliminaries} Fix $ n \ge 2 $ and let $ \Omega \subset \mathbb{R}^{n} $ be a bounded domain. We consider a parabolic cylinder $\Omega_{T} := \Omega \times (0, T]$ for $T>0$ and its parabolic boundary $$ \partial_{p}\Omega_{T} = ( \partial \Omega \times [0,T] ) \cup ( \Omega \times \{ 0 \} ) .$$ Let $\epsilon > 0$ and define a parabolic $\epsilon$-strip of $\Omega_{T}$ as follows: $$ \Gamma_{\epsilon, T} = \bigg( \Gamma_{\epsilon} \times \bigg( -\frac{\epsilon^{2}}{2},T \bigg] \bigg) \cup \bigg( \Omega \times \bigg( -\frac{\epsilon^{2}}{2},0 \bigg] \bigg) ,$$ where $\Gamma_{\epsilon} = \{ x \in \mathbb{R}^{n} \backslash \Omega : \dist (x, \partial \Omega) \le \epsilon \}$ is an $\epsilon$-strip of $\Omega$. Let $F$ be a given function defined in $\Gamma_{\epsilon, T} $. \begin{definition} Let $\alpha, \beta \in (0,1)$ with $\alpha + \beta = 1$. We say that a function $u_{\epsilon}$ satisfies the $\alpha$-parabolic DPP (with boundary data $F$) if \eqref{dpp} holds in $\Omega_{T}$ and $u_{\epsilon}=F$ in $\Gamma_{\epsilon, T} $. 
\end{definition} Here, we remark that if the boundary data $F$ is bounded, then one can show that $u_{\epsilon}$ is also bounded, since $ ||u_{\epsilon}||_{L^{\infty}(\Omega_{ T})} \le ||F||_{L^{\infty}(\Gamma_{\epsilon,T})}$ (cf. \cite{MR3011990,MR3494400}). We can heuristically interpret these functions in terms of a `time-dependent tug-of-war game with noise'. This game is a two-player zero-sum game in $ \Omega_{T} $. The procedure of the game is as follows. When the game is started, a token is located at some point $ (x_{0}, t_{0}) \in \Omega_{T} $. First, Player I and Player II choose directions $\nu_{I}, \nu_{II} $, respectively, in the $(n-1) $-dimensional unit sphere $S^{n-1}$ centered at the origin. Next, one tosses a fair coin and the winner of the toss moves the token. With probability $\alpha$, the winner Player $i \ (\in \{ I, II \})$ moves the token to the point $ x_{1} = x_{0}+ \epsilon \nu_{i} \in B_{\epsilon}(x_{0}) $ and simultaneously the time changes to $ t_{1}=t_{0}- \epsilon^{2}/2$. On the other hand, with probability $\beta$, the token is moved to a point $x_{1} $ chosen from the uniform probability distribution on the $(n-1)$-dimensional $\epsilon$-ball $B_{\epsilon}^{\nu_{i}}(x_{0})$, which is centered at $x_{0}$ and orthogonal to $\nu_{i}$, and simultaneously the time again changes to $t_{1}=t_{0}-\epsilon^{2}/2$. If $(x_{1}, t_{1}) \in \Gamma_{\epsilon, T} $, the game ends and Player II pays Player I the payoff $F(x_{1},t_{1})$. Otherwise, the above process is repeated and the token is moved to a point $ (x_{2}, t_{2}) \in B_{\epsilon}(x_{1}) \times \{ t_{1} - \epsilon^{2}/2 \}$. The game ends when the token lands in the parabolic strip $\Gamma_{\epsilon, T}$ for the first time. Since $ t_{k} = t_{0} - k \epsilon^{2}/2 < 0 $ for sufficiently large $ k$, the game must terminate in finite time. Let $(x_{\tau}, t_{\tau}) $ be the end point of the game. 
We are concerned with the expectation of the payoff $ F(x_{\tau}, t_{\tau}) $. Player I tries to maximize $ F(x_{\tau}, t_{\tau}) $ while Player II tries to minimize it. The value functions of Player I and II are defined as $$ u_{\epsilon}^{I}(x_{0},t_{0}) = \sup_{\mathcal{S}_{I}} \inf_{\mathcal{S}_{II}} \mathbb{E}_{\mathcal{S}_{I},\mathcal{S}_{II}}^{(x_{0},t_{0})}[F(x_{\tau},t_{\tau})]$$ and $$ u_{\epsilon}^{II}(x_{0},t_{0}) = \inf_{\mathcal{S}_{II}} \sup_{\mathcal{S}_{I}} \mathbb{E}_{\mathcal{S}_{I},\mathcal{S}_{II}}^{(x_{0},t_{0})}[F(x_{\tau},t_{\tau})] ,$$ where $ \mathcal{S}_{I}$ and $\mathcal{S}_{II} $ are strategies for Player I and Player II, respectively. From the definition of the game, one expects that $ u_{\epsilon}^{I}$ and $ u_{\epsilon}^{II}$ satisfy \eqref{dpp}, since the value of these functions at every point should coincide with the expected value after the next turn. Although we will not prove the relation between $ u_{\epsilon}^{I}$, $ u_{\epsilon}^{II}$ and $u_{\epsilon}$ in this paper, the above description of the game gives some intuition for the proof of our main result, Theorem \ref{mainthm}. We will use the notation $B_{\epsilon}^{\nu} $ and $S^{n-1}$ as in the previous section. Let $r$ be a fixed positive number. 
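To make the game description concrete, one can simulate it directly. The sketch below (in $n=2$, with an assumed payoff and the deliberately naive strategies ``always pull along $\pm e_1$''; none of this is claimed to be optimal play) estimates the expected payoff by Monte Carlo. For these strategies and $F(x,t)=x_1$, pulls along $e_1$ and $-e_1$ cancel in expectation, so the estimate stays near the starting coordinate:

```python
import numpy as np

def play_game(x0, t0, eps, alpha, F, strategy_I, strategy_II, rng):
    """One run of the time-dependent tug-of-war game with noise (n = 2).
    Each turn a fair coin picks the winner; with probability alpha the
    winner moves the token by eps along the chosen direction, otherwise
    the token moves uniformly on the segment orthogonal to it.  Time
    decreases by eps**2/2 per turn; for simplicity the domain is taken
    large enough that the game ends only at the time boundary."""
    x, t = np.array(x0, float), t0
    while t > 0.0:
        nu = strategy_I(x, t) if rng.random() < 0.5 else strategy_II(x, t)
        if rng.random() < alpha:
            x = x + eps * nu
        else:
            c = rng.uniform(-eps, eps)
            x = x + c * np.array([-nu[1], nu[0]])
        t -= eps ** 2 / 2.0
    return F(x, t)

# Naive strategies: Player I pulls along e1, Player II along -e1.
rng = np.random.default_rng(0)
pull_right = lambda x, t: np.array([1.0, 0.0])
pull_left = lambda x, t: np.array([-1.0, 0.0])
payoffs = [play_game([0.3, 0.0], 0.02, 0.1, 0.5, lambda x, t: x[0],
                     pull_right, pull_left, rng) for _ in range(4000)]
est = float(np.mean(payoffs))     # close to the starting coordinate 0.3
```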
For $a>0$, set $$ Q_{ar} = B_{ar}(0) \times (-ar^{2}, 0),$$ $$ Q_{ar, \epsilon} = B_{ar + \epsilon}(0) \times (-ar^{2}-\epsilon^{2}/2 , 0) $$ and $$ \Sigma_{a} = \{ (x,z,t,s) : x,z \in B_{ar}(0), -ar^{2}< t < 0 , |t-s|<\epsilon^{2}/2 \} $$ and we write $\Lambda_{t, \epsilon}$ for an $\epsilon$-time slice $$ \Lambda_{t, \epsilon} = B_{r+\epsilon}(0) \times (t-\epsilon^{2}/{2}, t] .$$ Furthermore, let $$ \midrg_{i \in I}A_{i}= \frac{1}{2} \bigg( \sup_{i \in I}A_{i} +\inf_{i \in I}A_{i} \bigg) $$ and $$ \mathscr{A}u_{\epsilon} (x, \nu,t) = \alpha u_{\epsilon}(x+ \epsilon \nu,t) + \beta \kint_{B_{\epsilon}^{\nu} }u_{\epsilon}(x+ h,t) d \mathcal{L}^{n-1}(h) , $$ where $\nu \in S^{n-1} $ and $\kint_{A}$ denotes the average integral over a set $A$. Then we can rewrite (\ref{dpp}) as \begin{align} \label{dppref} u_{\epsilon}(x,t) = \midrg_{\nu \in S^{n-1}} \mathscr{A}u_{\epsilon} \bigg(x, \nu,t-\frac{\epsilon^2}{2} \bigg). \end{align} We also define a set $ \mathbf{R}_{\nu}$ by $$ \mathbf{R}_{\nu} = \{ M \in \mathbf{O}(n) : Me_{1} = \nu \} ,$$ where $\mathbf{O}(n) $ is the orthogonal group in dimension $n$ and $e_{1} $ is the first vector of the standard orthonormal basis. For simplicity, we abbreviate $$\sup_{\substack{\nu_{x},\nu_{z} \in S^{n-1} \\ (P_{\nu_{x}}, P_{\nu_{z}}) \in \mathbf{R}_{\nu_{x}} \times \mathbf{R}_{\nu_{z}}} } $$ to $$\sup_{\nu_{x},\nu_{z} \in S^{n-1} } $$ throughout the paper. \section{Regularity with respect to time} First we investigate regularity for the function $u_{\epsilon}$ with respect to $t$. The aim of this section is to prove Lemma \ref{lem2} below. This lemma provides a relation between the oscillation in a time slice and that in the whole cylinder. We use a comparison argument in the proof of the lemma. We will first find an appropriate function $ \bar{v}$ ($\underline{v}$, respectively) which plays a role similar to that of a supersolution (subsolution, respectively) in PDE theory. 
After that, we will deduce the desired result by estimating the difference of those functions. The method used here is motivated by \cite[Lemma 4.3]{MR3660769}. Our proof may be regarded as a discrete version of this lemma. From now on, we fix $0<r<1$ and $T>0$. Since we only consider interior regularity, it is sufficient to show the regularity result in a cylinder $Q_{r}= B_{r}(0) \times (-r^{2}, 0)$ after a proper translation. We still use the notation $ \Omega_{T}$ after the translation. \begin{lemma}\label{lem2} Let $\bar{Q}_{2r} \subset \Omega_{T}$ and $ -r^{2} < s < t < 0 $, and suppose that $u_{\epsilon}$ satisfies the $\alpha$-parabolic DPP with boundary data $F \in L^{\infty}(\Gamma_{\epsilon, T} )$ for a given $0< \alpha<1$. Then, for given $\epsilon > 0$, $u_{\epsilon}$ satisfies the estimate $$ |u_{\epsilon} (x,t) - u_{\epsilon} (x,s) | \le 18 \sup_{-r^{2} <\tau<0} \osc_{\Lambda_{\tau, \epsilon}} u_{\epsilon} $$ for any $x \in B_{r}$. \end{lemma} \begin{proof} We set $$ A = \sup_{-r^{2} <\tau<0} \osc_{\Lambda_{\tau, \epsilon}} u_{\epsilon} $$ and $$ \bar{v}_{c}(x,t) = c + 7 r^{-2} A t + 2 r^{-2} A |x|^{2}, $$ where $c \in \mathbb{R}$. Define $$ \bar{c} = \inf \{ c \in \mathbb{R} : \bar{v}_{c} \ge u_{\epsilon} \ \textrm{in} \ \Lambda_{-r^{2}, \epsilon} \} $$ and write $ \bar{v} = \bar{v}_{\bar{c}} $. Then for any $\eta >0$, we can choose $ (x_{\eta}, t_{\eta}) \in \Lambda_{-r^{2}, \epsilon} $ so that $$ u_{\epsilon} (x_{\eta}, t_{\eta}) \ge \bar{v} (x_{\eta}, t_{\eta}) - \eta .$$ In this case, these points have an accumulation point $(\bar{x}, \bar{t}) \in \bar{\Lambda}_{-r^{2}, \epsilon} $ as $ \eta \to 0 $. Furthermore, $ \bar{x} $ must satisfy $ |\bar{x}| \le r $, since otherwise $$ 2A \le \bar{v}(x_{\eta}, t_{\eta}) - \bar{v}(0, t_{\eta}) \le u_{\epsilon}(x_{\eta}, t_{\eta}) - u_{\epsilon}(0, t_{\eta})+\eta \le A+\eta $$ for any $\eta > 0$, which is a contradiction when $A > 0$. 
Now we compare $ \midrg_{\nu \in S^{n-1}} \mathscr{A} \bar{v} (x, \nu,t-\epsilon^{2}/2 ) $ with $\bar{v}(x,t) $. First, observe that \begin{align*}& \midrg_{\nu \in S^{n-1}} \mathscr{A} \bar{v} \bigg(x, \nu,t-\frac{\epsilon^2}{2} \bigg) \\ & \hspace{-0.15em} \le \alpha \midrg_{\nu \in S^{n-1}} \bar{v} \bigg(x\hspace{-0.15em}+\hspace{-0.15em} \epsilon \nu,t-\frac{\epsilon^2}{2} \bigg) \hspace{-0.15em} + \hspace{-0.15em} \beta \hspace{-0.35em} \sup_{\nu \in S^{n-1}} \hspace{-0.15em} \kint_{B_{\epsilon}^{ e_{1} } }\bar{v}\bigg(x+ P_{\nu} h,t-\frac{\epsilon^{2}}{2} \bigg) d \mathcal{L}^{n-1}(h) \end{align*} for some $P_{\nu} \in \mathbf{R}_{\nu} $. We see that \begin{align*} \kint_{B_{\epsilon}^{e_{1}}} |x+ P_{\nu}h|^{2} d\mathcal{L}^{n-1}(h) &= \kint_{B_{\epsilon}^{\nu}} (|x|^{2} + 2 \langle x , P_{\nu} h \rangle + | P_{\nu}h|^{2} ) d\mathcal{L}^{n-1}(h) \\ & \le |x|^{2} + \epsilon^{2} \end{align*} for any $ \nu \in S^{n-1} $. Next we need to show that \begin{align*} \midrg_{\nu \in S^{n-1}} |x+ \epsilon \nu |^{2} \le |x|^{2}+\epsilon^{2}. \end{align*} Observe that \begin{align*} \sup_{\kappa \in B_{\epsilon}} |x+\kappa|^{2} &=\sup_{\nu \in S^{n-1}} \sup_{-\epsilon \le a \le \epsilon} |x+ a \nu|^{2} \\&=\sup_{\nu \in S^{n-1}} \sup_{-\epsilon \le a \le \epsilon} ( a^{2} + 2a\langle x, \nu \rangle + |x|^{2} ). \end{align*} Since $a^{2} + 2a\langle x, \nu \rangle + |x|^{2} $ is convex in $a$, we observe that $$ \sup_{-\epsilon \le a \le \epsilon} ( a^{2} + 2a\langle x, \nu \rangle + |x|^{2} ) = \epsilon^{2} + 2\epsilon |\langle x, \nu \rangle| + |x|^{2}.$$ We also see that there is a unit vector $ \mu $ so that $$\sup_{\nu \in S^{n-1}}( \epsilon^{2} + 2\epsilon |\langle x, \nu \rangle| + |x|^{2}) = |x+ \epsilon \mu |^{2}, $$ as $S^{n-1}$ is compact. Then we get \begin{align*} \midrg_{\nu \in S^{n-1}} |x+ \epsilon \nu |^{2} \le \frac{1}{2}(|x+ \epsilon \mu |^{2}+|x- \epsilon \mu |^{2}) = |x|^{2}+ \epsilon^{2}. 
\end{align*} Therefore, we discover \begin{align*}& \midrg_{\nu \in S^{n-1}} \mathscr{A} \bar{v} \bigg(x, \nu,t-\frac{\epsilon^2}{2} \bigg) \\ & \ \le \bar{c} + 7 r^{-2}A \bigg( t - \frac{\epsilon^{2}}{2} \bigg)+ 2r^{-2}A \{ \alpha(|x|^{2}+\epsilon^{2}) + \beta (|x|^{2}+\epsilon^{2}) \} \\ & \ \le \bar{c} + 7r^{-2}A t + 2r^{-2} A |x|^{2} - \frac{3}{2} r^{-2} A \epsilon^{2} = \bar{v}(x,t) - \frac{3}{2} r^{-2} A \epsilon^{2} . \end{align*} Thus, \begin{align} \label{ie1} \midrg_{\nu \in S^{n-1}}\mathscr{A} \bar{v} \bigg(x, \nu,t-\frac{\epsilon^2}{2} \bigg) \le \bar{v} (x,t) \end{align} for all $ (x, t) \in Q_{r} $. Let $ M = \sup_{Q_{r, \epsilon} \backslash \Lambda_{-r^{2}, \epsilon}}(u_{\epsilon} - \bar{v}) $ and suppose $M >0$. In this case, we see that $u_{\epsilon} \le \bar{v} + M $ in $Q_{r, \epsilon} $. For any $\eta ' > 0$, we can choose a point $ (x_{\eta '}, t_{\eta '}) \in Q_{r, \epsilon} $ such that $$ u_{\epsilon}(x_{\eta'}, t_{\eta'}) > \bar{v}(x_{\eta'}, t_{\eta'}) + M - \eta' .$$ We have to show that $ (x_{\eta '}, t_{\eta '})$ must be in $\bar{Q}_{r} $ for any sufficiently small $\eta' > 0 $. By the definition of $M$, $t_{\eta'}> -r^{2}$. Note that we cannot assert this when $M \le 0$. On the other hand, for any $|x| \ge r $, $$ \bar{v}(x, t)- \bar{v}(0, t) \ge 2A . $$ We also observe that $ u_{\epsilon}(x,t)-u_{\epsilon}(0,t) \le A$. Hence it is always true that $$ (u_{\epsilon}-\bar{v})(x,t) \le (u_{\epsilon}-\bar{v})(0,t) .$$ Thus, $(x_{\eta '}, t_{\eta '}) \in \bar{Q}_{r}$. Then we obtain that \begin{align*} \midrg_{\nu \in S^{n-1}} \mathscr{A} \bigg\{ \bar{v} \bigg( x_{\eta'}, \nu,t_{\eta'}-\frac{\epsilon^2}{2} \bigg) + M \bigg\} & \ge \midrg_{\nu \in S^{n-1}} \mathscr{A} u_{\epsilon} \bigg(x_{\eta'}, \nu,t_{\eta'}-\frac{\epsilon^2}{2} \bigg) \\ & = u_{\epsilon}(x_{\eta'},t_{\eta'}) \\ & > \bar{v}(x_{\eta'},t_{\eta'}) + M -\eta' . \end{align*} In the first inequality, we have used that $ \bar{v}+M \ge u_{\epsilon} $ in $ Q_{r, \epsilon}$. 
Therefore, \begin{align} \label{ie2} \midrg_{\nu \in S^{n-1}} \mathscr{A} \bar{v} \bigg(x_{\eta'}, \nu,t_{\eta'}-\frac{\epsilon^2}{2} \bigg) >\bar{v}(x_{\eta'},t_{\eta'}) -\eta' \end{align} for any $\eta' > 0$. We combine (\ref{ie1}) with (\ref{ie2}) to discover that $A = 0$, and so $ \bar{v} = u_{\epsilon}= \bar{c} $. If $u_{\epsilon}$ is not a constant function, then we have a contradiction to $ A > 0 $. Hence $ M \le 0$ and therefore $u_{\epsilon} \le \bar{v}$ in $Q_{r, \epsilon} $. On the other hand, consider $$ \underline{v}(x,t) = \underline{c} - 7 r^{-2} A t - 2 r^{-2} A |x|^{2},$$ where $$ \underline{c} = \sup \{ c \in \mathbb{R} : \underline{v}_{c} \le u_{\epsilon} \ \textrm{in} \ \Lambda_{-r^{2}, \epsilon} \}. $$ Following the above procedure, we can show that $u_{\epsilon} \ge \underline{v}$ in $Q_{r, \epsilon} $. For arbitrary $\eta>0$, we can choose $(\bar{x}_{\eta}, \bar{t}_{\eta}), (\underline{x}_{\eta}, \underline{t}_{\eta}) \in \bar{\Lambda}_{-r^{2}, \epsilon}$ such that $$u_{\epsilon} (\bar{x}_{\eta}, \bar{t}_{\eta}) \ge \bar{v} (\bar{x}_{\eta}, \bar{t}_{\eta}) - \eta $$ and $$ u_{\epsilon} (\underline{x}_{\eta}, \underline{t}_{\eta}) \le \bar{v} (\underline{x}_{\eta}, \underline{t}_{\eta}) +\eta.$$ Then $$ \bar{v}(\bar{x}_{\eta}, \bar{t}_{\eta})-\underline{v}(\underline{x}_{\eta}, \underline{t}_{\eta}) \le \osc_{\Lambda_{t, \epsilon}} u_{\epsilon} + 2 \eta , $$ and hence $$ \bar{c}- \underline{c} \le 3A+ \frac{7}{2}r^{-2}A \epsilon^{2} \le 7A. $$ Therefore, we obtain $$ \osc_{Q_{r}}u_{\epsilon} \le \sup_{Q_{r}} \bar{v}- \inf_{Q_{r}} \underline{v} \le \bar{c} - \underline{c}+7A+4A \le 18A. $$ This completes the proof. \end{proof} \begin{remark} \label{remark} We showed in the proof of Lemma \ref{lem2} that the oscillation of $u_{\epsilon}$ in time direction is uniformly estimated by the oscillation of $u_{\epsilon}$ in spatial direction on $(\epsilon^{2}/2)$-time slices. 
Note that an $(\epsilon^{2}/2)$-time slice $\Lambda_{t, \epsilon}$ shrinks to $B_{r} \times \{ t \}$ as $\epsilon \to 0$ for any $t$. Thus, we can see that the regularity of $u_{\epsilon}$ with respect to $t$ essentially reduces to the regularity with respect to $x$, provided $\epsilon$ is small enough. \end{remark} \section{H\"{o}lder regularity} The aim of this section is to show that $u_{\epsilon}$ satisfies H\"{o}lder type regularity. This result will be essential in proving Lipschitz regularity with respect to $x$ in the next section. We will use a comparison argument arising from game interpretations to obtain regularity results in the spatial direction. This argument plays an important role in obtaining the desired estimate. Several regularity results for functions satisfying various time-independent DPPs were proved by calculations based on this argument (see \cite{MR3846232,MR3623556,alpr2018lipschitz}). It was proved in \cite{MR3494400} that functions satisfying another time-dependent DPP have H\"{o}lder regularity. Our proof differs from that in \cite{MR3494400} due to the different setting of the DPP. Our argument depends on the distance between the two points. If the two points are relatively far apart, we consider a `multidimensional DPP' (for a more detailed explanation, see \cite{MR3846232}). We divide this argument into two subcases. For each case, we obtain the desired estimate by choosing proper behavior of an auxiliary function. In addition, we can derive our estimate by direct calculation when the two points are close enough. \begin{lemma} \label{lem1} Let $\bar{B}_{2r}(0) \times [-2r^{2}-\epsilon^{2} / 2, \epsilon^{2}/2] \subset \Omega_{T}$, let $ 0< \alpha < 1 $ and let $ \epsilon > 0 $ be small. Suppose that $u_{\epsilon}$ satisfies the $\alpha$-parabolic DPP with boundary data $F \in L^{\infty}(\Gamma_{\epsilon, T} )$. 
Then for any $0 < \delta < 1 $, $$ |u_{\epsilon} (x,t) - u_{\epsilon} (z,s) | \le C ||u_{\epsilon}||_{\infty} ( |x-z|^{\delta} + \epsilon^{\delta}) ,$$ whenever $x, z \in B_{r}(0)$, $ -r^{2}<t<0$ and $ |t-s| < \epsilon^{2} / 2 $, where $C>0$ is a constant depending only on $r, \delta, \alpha$ and $n$. \end{lemma} \begin{proof} First, we can assume that $ ||u_{\epsilon}||_{\infty} \le r^{\delta} $ by scaling. Let us construct an auxiliary function. Define \begin{align} \label{f1} f_{1}(x,z) = C | x - z |^{\delta} + M | x+ z|^{2} ,\end{align} \begin{equation} \label{f2} f_{2}(x, z) = \left\{ \begin{array}{ll} C^{2(N-i)} \epsilon ^ {\delta} & \textrm{if $(x, z) \in A_{i}$}\\ 0 & \textrm{if $|x-z|>N \epsilon / 10 $} \end{array} \right. \end{equation} and \begin{align} \label{gg} g(t, s) = \max \{ M ( |t- r^{2}|^ { {\delta}/2} - r^{\delta}), M ( |s- r^{2}|^ {{\delta}/2} - r^{\delta}) \}\end{align} where $N = N( r, \delta, \alpha,n) \in \mathbb N $, $ C = C(r, \delta, \alpha, n) >1$ and $M = M(r) > 1$ are constants to be determined, and $$A_{i} = \{ (x , z) \in \mathbb R ^ {2n} : (i-1) \epsilon / 10 < |x-z| \le i \epsilon / 10 \} $$ for $ i = 0, 1, \dots , N$. Now we define \begin{align} \label{comfn} H(x, z, t, s) = f_{1}(x,z) - f_{2}(x,z) + g(t, s) . \end{align} We first show that $$ |u_{\epsilon}(x,t) - u_{\epsilon}(z,s) | \le C ( |x-z|^ {\delta}+ \epsilon^ {\delta} ) $$ for every $ x,z \in B_{2r} (0)$ with $x \neq z$, $ -2r^{2}<t<0$ and $ |t-s| < \epsilon^{2} / 2 $. To this end, choose $M$ sufficiently large so that $$ u_{\epsilon}(x,t) - u_{\epsilon}(z,s) - H(x,z,t,s) \le C ^ {2N} \epsilon ^ {\delta}+ C \epsilon^ {\delta} \qquad \textrm{in} \ \ \ \Sigma_{2} \backslash \Sigma_{1} .
$$ So, if we prove that \begin{align} \label{kees} u_{\epsilon}(x,t) - u_{\epsilon}(z,s) - H (x,z,t,s) \le C ^ {2N} \epsilon^ {\delta} +C \epsilon^ {\delta} \qquad \textrm{in} \ \ \ \Sigma_{1} \backslash \Upsilon \end{align} where $ \Upsilon = \{ (x,z,t,s) \in \mathbb{R}^{2n} \times \mathbb{R}^{2} : x = z, \ -r^{2}< t < 0 , \ |t-s|<\epsilon^{2}/2 \}$, then Lemma \ref{lem1} holds in $ \Sigma_{2} \backslash \Upsilon $. Since we can obtain this estimate for $ u_{\epsilon}(z,s) - u_{\epsilon}(x,t) $, we have $$ | u_{\epsilon}(x,t) - u_{\epsilon}(z,s) | \le C ^ {2N} \epsilon^ {\delta} +C \epsilon^ {\delta} + H (x,z,t,s) \qquad \textrm{in} \ \ \ \Sigma_{2} \backslash \Upsilon .$$ Now we can assume that $z= -x $ by a suitable scaling and transformation, and then we get $$ |u_{\epsilon}(x,t) - u_{\epsilon}(-x,s) | \le C|x|^{\delta}+ C' \epsilon^ {\delta} $$ for some universal constant $C'>0$. This gives the result of Lemma \ref{lem1}. Suppose that \eqref{kees} is not true. Then \begin{align} \label{ca1} K := \sup_{(x,z,t,s) \in \Sigma_{1} \backslash \Upsilon } (u_{\epsilon}(x,t) - u_{\epsilon}(z,s) - H(x,z,t,s)) > C ^ {2N} \epsilon ^{\delta}+C \epsilon ^{\delta} . \end{align} Let $\eta >0$. We can choose $(x', z', t', s') \in \Sigma_{1} \backslash \Upsilon $ such that $$ u_{\epsilon}(x',t') - u_{\epsilon}(z',s') -H(x',z',t',s') \ge K - \eta . $$ Recall the DPP (\ref{dppref}).
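Before carrying out the computation, it may help to see the midrange structure of the DPP in action. The following numerical sketch (not part of the proof) implements one step of the operator in the plane, under the assumption, consistent with the displays that follow, that $\mathscr{A}u(x,\nu,t)$ combines a point evaluation in direction $\nu$ (weight $\alpha$) with an integral average over the $(n-1)$-disc orthogonal to $\nu$ (weight $\beta = 1-\alpha$), and that $\midrg$ denotes the average of supremum and infimum over directions; the sample values of $\alpha$ and $\epsilon$ are arbitrary.

```python
import numpy as np

def midrange(vals):
    # "midrg": the average of the supremum and the infimum
    return 0.5 * (vals.max() + vals.min())

def A_op(u, x, nu, eps, alpha, m=25):
    # A u(x, nu) = alpha * u(x + eps*nu) + beta * (average of u over the
    # disc of radius eps orthogonal to nu); in the plane the disc is a
    # segment, sampled symmetrically so linear functions average exactly
    beta = 1.0 - alpha
    perp = np.array([-nu[1], nu[0]])          # basis of nu^perp in R^2
    ts = np.linspace(-eps, eps, 2 * m + 1)
    avg = np.mean([u(x + t * perp) for t in ts])
    return alpha * u(x + eps * nu) + beta * avg

def dpp_step(u, x, eps, alpha, k=360):
    # midrange over directions nu in S^1 (k even, so nu and -nu both occur)
    angles = 2 * np.pi * np.arange(k) / k
    vals = np.array([A_op(u, x, np.array([np.cos(a), np.sin(a)]), eps, alpha)
                     for a in angles])
    return midrange(vals)

# one DPP step preserves a linear function: the extremal directions come
# in opposite pairs, so the sup and the inf cancel in the midrange
a = np.array([0.7, -1.3])
u_lin = lambda y: a @ y
x0 = np.array([0.2, 0.5])
v = dpp_step(u_lin, x0, eps=0.1, alpha=0.5)
assert abs(v - u_lin(x0)) < 1e-9
```

The cancellation seen in the final assertion is the mechanism exploited repeatedly below when extremal directions are paired with their opposites.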
Using this together with the previous inequality, we know that \begin{align*} K & \le u_{\epsilon}(x',t') - u_{\epsilon}(z',s') -H(x',z',t',s') + \eta \\ & \\ & \le \frac{1}{2} \bigg[ \sup_{\nu_{x'},\nu_{z'} \in S^{n-1}} \bigg\{ \mathscr{A}u_{\epsilon} \bigg( x', \nu_{x'}, t'- \frac{\epsilon^{2}}{2} \bigg) - \mathscr{A}u_{\epsilon} \bigg( z', \nu_{z'}, s'- \frac{\epsilon^{2}}{2} \bigg) \bigg\} \\ & \qquad + \inf_{\nu_{x'},\nu_{z'} \in S^{n-1}} \bigg\{ \mathscr{A}u_{\epsilon} \bigg( x', \nu_{x'}, t'- \frac{\epsilon^{2}}{2} \bigg) - \mathscr{A}u_{\epsilon} \bigg( z', \nu_{z'}, s'- \frac{\epsilon^{2}}{2} \bigg) \bigg\} \bigg] \\ & \qquad - H(x', z', t', s') + 2\eta. \end{align*} Let $$ \mathbf{[I]} = \frac{1}{2} \sup_{\nu_{x'},\nu_{z'} \in S^{n-1}} \bigg\{ \mathscr{A}u_{\epsilon} \bigg( x', \nu_{x'}, t'- \frac{\epsilon^{2}}{2} \bigg) - \mathscr{A}u_{\epsilon} \bigg( z', \nu_{z'}, s'- \frac{\epsilon^{2}}{2} \bigg) \bigg\}$$ and $$ \mathbf{[II]} = \frac{1}{2} \inf_{\nu_{x'},\nu_{z'} \in S^{n-1}} \bigg\{ \mathscr{A}u_{\epsilon} \bigg( x', \nu_{x'}, t'- \frac{\epsilon^{2}}{2} \bigg) - \mathscr{A}u_{\epsilon} \bigg( z', \nu_{z'}, s'- \frac{\epsilon^{2}}{2} \bigg) \bigg\}.$$ We see that \begin{align*}\ u_{\epsilon}&(x',t') - u_{\epsilon}(z',s') \\ & = \midrg_{\nu_{x'} \in S^{n-1}} \mathscr{A}u_{\epsilon} \bigg(x', \nu_{x'},t'-\frac{\epsilon^2}{2} \bigg) - \midrg_{\nu_{z'} \in S^{n-1}} \mathscr{A}u_{\epsilon} \bigg(z', \nu_{z'},s'-\frac{\epsilon^2}{2} \bigg) \\ & \le \mathbf{[I]} + \mathbf{[II]} + \eta . 
\end{align*} By the definition of $\mathscr{A}$, we see that \begin{align*} & \mathbf{[I]} = \frac{1}{2} \sup_{\nu_{x'},\nu_{z'} \in S^{n-1}} \bigg[ \alpha \bigg\{ u_{\epsilon} \bigg( x'+\epsilon \nu_{x'}, t'- \frac{\epsilon^{2}}{2} \bigg) - u_{\epsilon} \bigg( z'+ \epsilon \nu_{z'},s'- \frac{\epsilon^{2}}{2} \bigg) \bigg\} \\ & + \beta \kint_{B_{\epsilon}^{e_{1}} } \bigg\{ u_{\epsilon}\bigg(x'+ P_{\nu_{x'}} h,t'-\frac{\epsilon^{2}}{2} \bigg) - u_{\epsilon}\bigg(z'+ P_{\nu_{z'}} h,s'-\frac{\epsilon^{2}}{2} \bigg) \bigg\} d \mathcal{L}^{n-1}(h) \bigg]. \end{align*} Now we estimate $ \mathbf{[I]}$ (and $ \mathbf{[II]}$) by $H$-related terms. Let $$ \mathbf{[III]} = \alpha H(x+ \epsilon\nu_{x},z+ \epsilon \nu_{z}, t,s) + \beta \kint_{B_{\epsilon}^{e_{1}} }H(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h,t,s) d \mathcal{L}^{n-1}(h) . $$ Set $ f(x,z)=f_{1}(x,z)- f_{2}(x,z) $, so that $ H(x,z,t,s)=f(x,z)+g(t,s) $. Then we see that $$H(x+ \epsilon\nu_{x},z+ \epsilon \nu_{z}, t,s)= f(x+ \epsilon\nu_{x},z+ \epsilon \nu_{z}) + g(t,s)$$ and \begin{align*} \kint_{B_{\epsilon}^{e_{1}} }&H(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h,t,s) d\mathcal{L}^{n-1}(h) \\ & =\kint_{B_{\epsilon}^{e_{1}} } \big\{ f(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h) + g(t,s) \big\} d \mathcal{L}^{n-1}(h) \\ & = \kint_{B_{\epsilon}^{e_{1}} }f(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) + g(t,s). \end{align*} Then we can write $ \mathbf{[III]}$ as $$ \alpha f(x+ \epsilon\nu_{x},z+ \epsilon \nu_{z}) + \beta \kint_{B_{\epsilon}^{e_{1}} }f(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) + g(t,s) .
$$ Here we define an operator $T$ as \begin{align*} Tf&( x, z, P_{\nu_{x}} , P_{\nu_{z}}) \\ & = \alpha f(x+ \epsilon \nu_{x},z+ \epsilon \nu_{z}) + \beta \kint_{B_{\epsilon}^{e_{1}} }f(x+P_{\nu_{x}} h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h).\end{align*} Since \begin{align} \label{ineq} u_{\epsilon}(y,t) - u_{\epsilon}(\tilde{y},\tilde{t}) \le K + H(y,\tilde{y},t,\tilde{t}) = K + f(y, \tilde{y}) + g(t,\tilde{t}) \end{align} by the definition of $K$, we obtain that \begin{align*} \mathbf{[I]}& \le \frac{1}{2} \sup_{\nu_{x'},\nu_{z'} \in S^{n-1} } \bigg[ \alpha \bigg\{ K+ H \bigg( x'+ \epsilon \nu_{x'}, z'+ \epsilon \nu_{z'}, t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \bigg\} \\ & + \beta \kint_{B_{\epsilon}^{e_{1}} } \bigg\{ K + H \bigg( x'+ P_{\nu_{x'}}h, z'+ P_{\nu_{z'}}h, t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \bigg\} d \mathcal{L}^{n-1}(h) \bigg] \\ & \le \frac{1}{2} \bigg[ K +\sup_{\nu_{x'},\nu_{z'} \in S^{n-1} } Tf ( x', z',P_{\nu_{x'}} , P_{\nu_{z'}} ) + g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \bigg]. \end{align*} Next we have to estimate $ \mathbf{[II]} $. Choose $ \rho_{x'}, \rho_{z'} \in S^{n-1} $ so that $$\inf_{\nu_{x'},\nu_{z'} \in S^{n-1} } Tf ( x', z',P_{\nu_{x'}} ,P_{\nu_{z'}} ) \ge Tf ( x', z',P_{\rho_{x'}} , P_{\rho_{z'}} ) - 2\eta . 
$$ Then we calculate that \begin{align*} & \mathbf{[II]} \le \frac{1}{2} \bigg[ \alpha \bigg\{ u_{\epsilon} \bigg( x'+\epsilon \rho_{x'}, t'- \frac{\epsilon^{2}}{2} \bigg) - u_{\epsilon} \bigg( z'+ \epsilon \rho_{z'}, s'- \frac{\epsilon^{2}}{2} \bigg) \bigg\} \\ & + \beta \kint_{B_{\epsilon}^{e_{1}} } \bigg\{ u_{\epsilon}\bigg(x'+ P_{\rho_{x'}} h,t'-\frac{\epsilon^{2}}{2} \bigg) - u_{\epsilon}\bigg(z'+ P_{\rho_{z'}} h,s'-\frac{\epsilon^{2}}{2} \bigg) \bigg\} d \mathcal{L}^{n-1}(h) \bigg] \\ & \le \frac{1}{2} \bigg[ K + \alpha f(x'+ \epsilon\rho_{x'},z'+ \epsilon \rho_{z'}) \\ & \qquad + \beta \kint_{B_{\epsilon}^{e_{1}} }f(x'+ P_{\rho_{x'}}h,z'+P_{\rho_{z'}} h) d \mathcal{L}^{n-1}(h)+g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg)\bigg] \\& \le \frac{1}{2} \bigg[ K + Tf(x', z',P_{\rho_{x'}} , P_{\rho_{z'}} ) +g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \bigg] \\ & \le \frac{1}{2} \bigg[ K +\inf_{\nu_{x'},\nu_{z'} \in S^{n-1} } Tf ( x', z',P_{\nu_{x'}} , P_{\nu_{z'}} ) + g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \bigg] +\eta . \end{align*} We used (\ref{ineq}) again in the second inequality. Combining the estimates for $ \mathbf{[I]}$ and $ \mathbf{[II]}$, we obtain \begin{align*} K \le & \ u_{\epsilon}(x',t') - u_{\epsilon}(z',s') - H(x',z',t',s') + \eta \\ \le & \ K +\midrg_{\nu_{x'},\nu_{z'} \in S^{n-1} } Tf ( x', z',P_{\nu_{x'}} , P_{\nu_{z'}} ) +g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \\ & \ \ \ -H(x', z', t', s') + 2\eta .
\end{align*} Since $\eta$ is arbitrarily chosen, if we show that \begin{align*}\midrg_{\nu_{x'},\nu_{z'} \in S^{n-1} } Tf ( x', z',P_{\nu_{x'}} , P_{\nu_{z'}}) + g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) < H(x', z', t', s') ,\end{align*} that is, \begin{align} \begin{split} \label{eq:hfirst} \midrg_{\nu_{x'},\nu_{z'} \in S^{n-1} } Tf ( x', z',& P_{\nu_{x'}} , P_{\nu_{z'}} ) - f(x', z') \\ & < g(t',s') - g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) , \end{split} \end{align} then the proof is completed. Now we need to estimate \eqref{eq:hfirst}. Without loss of generality, we assume that $t' \ge s'$. Then we see that \begin{align*}g(t',s') - & g\bigg( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \bigg) \\ &= M ( |s'- r^{2}|^ {{\delta}/2} - r^{\delta}) - M \bigg( \bigg|s'-\frac{\epsilon^{2}}{2}- r^{2}\bigg|^ {{\delta}/2} - r^{\delta} \bigg) \\ & = M |s'- r^{2}|^ {{\delta}/2} -M \bigg|s'-\frac{\epsilon^{2}}{2}- r^{2}\bigg|^ {{\delta}/2} . \end{align*} Note that $$ M |s'- r^{2}|^ {{\delta}/2} -M \bigg|s'-\frac{\epsilon^{2}}{2}- r^{2}\bigg|^ {{\delta}/2} \ge M \bigg\{ r^{\delta} - \bigg( r^{2} + \frac{\epsilon^{2}}{2} \bigg)^{\frac{\delta}{2}} \bigg\} $$ and $$ \bigg( r^{2} + \frac{\epsilon^{2}}{2} \bigg)^{\frac{\delta}{2}} \le r^{\delta} + \bigg( \frac{\epsilon^{2}}{2} \bigg)^{\frac{\delta}{2}} \le r^{\delta} + \epsilon^{\delta}$$ for $0 <\delta \le 1$. We also deduce that \begin{align*}M |s'- r^{2}|^ {{\delta}/2} -M \bigg|s'-\frac{\epsilon^{2}}{2}- r^{2}\bigg|^ {{\delta}/2} \ge -M \frac{\delta}{2} | s' - r^{2}|^{\frac{\delta}{2}-1} \frac{\epsilon^{2}}{2} \ge -M r^{\frac{\delta}{2}-1} \epsilon^{2} , \end{align*} since $h(t)=|t|^{\delta / 2}$ is concave. Therefore, we see that $$ g(t',s') - g\big( t'-\frac{\epsilon^{2}}{2}, s'-\frac{\epsilon^{2}}{2} \big) \ge \min \{ -M \epsilon^{\delta} , -M \tilde{C}(r) \epsilon^{2} \} =: \sigma .$$ To establish (\ref{eq:hfirst}), we will distinguish several cases. 
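Before distinguishing the cases, we record a quick numerical confirmation (not part of the proof) of the two lower bounds for the time decrement of $g$ derived above; the sample values of $r$, $\delta$, $\epsilon$ and $M$ are arbitrary, and $s'$ ranges over a grid in $(-r^{2}, 0)$:

```python
import numpy as np

# sample values; any 0 < delta < 1, small eps > 0, and r, M > 0 will do
r, delta, eps, M = 1.5, 0.4, 0.01, 1.0
s = np.linspace(-r**2 + 1e-6, -1e-6, 2000)   # admissible s' in (-r^2, 0)

# decrement of g over one half time step (the s-branch attains the max)
dec = (M * np.abs(s - r**2)**(delta / 2)
       - M * np.abs(s - eps**2 / 2 - r**2)**(delta / 2))

bound1 = -M * eps**delta                     # from subadditivity of t^(delta/2)
bound2 = -M * r**(delta / 2 - 1) * eps**2    # from concavity of t^(delta/2)
assert np.all(dec >= bound1)
assert np.all(dec >= bound2)
```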
From now on, we will write $ (x, z, t, s) $ instead of $ (x', z', t', s')$ in our calculations for convenience. \subsection{Case \texorpdfstring{$|x-z| > N \epsilon / 10$}{|x-z| > N epsilon / 10}} In this case, $f(x,z)=f_{1}(x,z)$ as $ f_{2}(x,z)=0 $. Thus we can write (\ref{eq:hfirst}) as \begin{align} \label{eq:hsecond} \midrg_{\nu_{x},\nu_{z} \in S^{n-1} } T f_{1}( x, z, P_{\nu_{x}} , P_{\nu_{z}} ) - f_{1}(x, z) < \sigma . \end{align} For any $\eta > 0$, we can choose some vectors $\nu_{x}, \nu_{z} \in S^{n-1} $ and related rotations $P_{\nu_{x}} \in \mathbf{R}_{\nu_{x}}$, $P_{\nu_{z}} \in \mathbf{R}_{\nu_{z}}$ so that \begin{align*}\sup_{h_{x},h_{z} \in S^{n-1} } T f_{1} ( x, z, P_{h_{x}}, P_{h_{z}} ) \le T f_{1} ( x, z,P_{\nu_{x}} , P_{\nu_{z}} )+\eta.\end{align*} Hence if we find some unit vectors $ \mu_{x}, \mu_{z} $ and rotations $P_{\mu_{x}} , P_{\mu_{z}} $ such that \begin{align} \begin{split} \label{eq:ff} \midrg_{h_{x},h_{z} \in S^{n-1} }& T f_{1} ( x, z, P_{h_{x}}, P_{h_{z}} )\\ & \le \frac{1}{2} \big\{ T f_{1} ( x, z, P_{\nu_{x}} , P_{\nu_{z}} ) + T f_{1} ( x, z, P_{\mu_{x}} , P_{\mu_{z}}) + \eta \big\} , \end{split} \end{align} then we obtain (\ref{eq:hsecond}) by showing \begin{align} \begin{split} \frac{1}{2} \big\{ T f_{1} ( x, z, P_{\nu_{x}} , P_{\nu_{z}} ) + T f_{1} ( x, z, P_{\mu_{x}} , P_{\mu_{z}}) \big\} - f_{1}(x,z) < \sigma - \eta . \end{split} \end{align} Denote $\mathbf{v}=\frac{x-z}{|x-z|}$, $ y_{V} = \big\langle y, \mathbf{v} \big\rangle $ and $ y_{V_{\perp}}= y- y_{V}\mathbf{v}$. Then $y$ is orthogonally decomposed into $ y_{V}\mathbf{v}$ and $ y_{V_{\perp}}$.
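For concreteness, the decomposition just introduced can be written out in a few lines (a plain numerical illustration with arbitrary sample vectors, not part of the argument):

```python
import numpy as np

x, z = np.array([1.0, 2.0, -0.5]), np.array([0.0, 1.0, 0.5])
v = (x - z) / np.linalg.norm(x - z)    # the unit vector v above
y = np.array([0.3, -1.2, 2.0])
y_V = y @ v                            # scalar component along v
y_perp = y - y_V * v                   # the part orthogonal to v
assert abs(y_perp @ v) < 1e-12         # orthogonality
assert np.allclose(y_V * v + y_perp, y)  # y is recovered from the two parts
```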
Using the Taylor expansion, we see that for any $h_{x}$ and $h_{z}$, \begin{align*} f_{1}&(x+\epsilon h_{x}, z+\epsilon h_{z} ) \\ & = f_{1}(x, z) + C \delta |x-z|^{\delta -1}(h_{x}-h_{z})_{V} \epsilon + 2M\langle x+z , h_{x}+h_{z} \rangle \epsilon \\ & + \frac{1}{2}C \delta |x-z|^{\delta -2} \big\{ (\delta -1)(h_{x}-h_{z})_{V}^{2} + |(h_{x}-h_{z})_{V^{\perp}}|^{2} \big\} \epsilon^{2} \\ & + M | h_{x}+h_{z}|^{2} \epsilon^{2} + \mathcal{E}_{x,z}( \epsilon h_{x}, \epsilon h_{z}) , \end{align*} where $ \mathcal{E}_{x,z}(h_{x},h_{z})$ is the error term of the second-order Taylor expansion. Now we estimate the error term by Taylor's theorem as follows: $$ | \mathcal{E}_{x,z}( \epsilon h_{x}, \epsilon h_{z}) | \le C |( \epsilon h_{x} , \epsilon h_{z})^{t} |^{3} (|x-z|-2\epsilon )^{\delta -3} $$ if $|x-z| > 2 \epsilon$. Thus if we choose $ N \ge \frac{100C}{\delta} $, we get $$ | \mathcal{E}_{x,z}( \epsilon h_{x}, \epsilon h_{z}) | \le 10 |x-z|^{\delta -2} \epsilon^{2}. $$ Now we establish \eqref{eq:hsecond}. We first consider a small constant $0<\Theta<4$ to be determined later and we again divide this case into two subcases. In the first subcase, we consider the case when $\nu_{x}$, $\nu_{z}$ are in almost opposite directions and nearly parallel to the vector $ x- z $. The remaining situation is covered in the second subcase. In each case, we will choose proper rotations and investigate changes in the value of the auxiliary function $f_{1}$. The concavity of $f_{1}$ plays a key role in both cases.
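As a sanity check on the expansion above (not needed for the argument), one can compare $f_{1}$ with its second-order Taylor polynomial numerically; with the sample constants and vectors below, the discrepancy is indeed of third order in $\epsilon$:

```python
import numpy as np

C, M, delta = 5.0, 2.0, 0.6            # sample constants, delta in (0,1)
x, z = np.array([0.8, 0.1]), np.array([-0.2, 0.4])
hx, hz = np.array([0.6, 0.8]), np.array([-1.0, 0.0])

def f1(p, q):
    return C * np.linalg.norm(p - q)**delta + M * np.linalg.norm(p + q)**2

d = np.linalg.norm(x - z)
v = (x - z) / d
u = hx - hz
uV = u @ v                              # component of h_x - h_z along v
uperp = u - uV * v                      # orthogonal component

for eps in (1e-2, 1e-3):
    exact = f1(x + eps * hx, z + eps * hz)
    # the second-order expansion from the text, term by term
    taylor = (f1(x, z)
              + C * delta * d**(delta - 1) * uV * eps
              + 2 * M * (x + z) @ (hx + hz) * eps
              + 0.5 * C * delta * d**(delta - 2)
                * ((delta - 1) * uV**2 + uperp @ uperp) * eps**2
              + M * (hx + hz) @ (hx + hz) * eps**2)
    assert abs(exact - taylor) < 100 * eps**3   # third-order remainder
```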
\subsubsection{Case \texorpdfstring{$(\nu_{x}-\nu_{z})_{V}^{2} \ge 4 - \Theta$}{(nu x - nu z) V squared >= 4 - Theta}} Observe that \begin{align*} \midrg_{\nu_{x},\nu_{z} \in S^{n-1}} & T f_{1}( x, z, P_{\nu_{x}} , P_{\nu_{z}} )\\ & \le \frac{1}{2} \big\{ T f_{1}( x, z, P_{\nu_{x}} , P_{\nu_{z}} ) + T f_{1}( x, z, -P_{\nu_{x}} , -P_{\nu_{z}} ) + \eta \big\} \end{align*} and \begin{align*} \frac{1}{2} & \big\{ T f_{1}( x, z,P_{\nu_{x}} , P_{\nu_{z}} ) + T f_{1}( x, z, -P_{\nu_{x}} , -P_{\nu_{z}} ) \big\} - f_{1}(x, z) \\ & = \frac{\alpha}{2} \big\{ f_{1}(x+\epsilon \nu_{x}, z+\epsilon \nu_{z}) + f_{1}(x-\epsilon \nu_{x}, z-\epsilon \nu_{z}) - 2 f_{1}(x, z) \big\} \\ & \ + \frac{\beta}{2} \bigg\{ \kint_{B_{\epsilon}^{e_{1}} }f_{1}(x+P_{\nu_{x}} h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) \\ & \qquad + \kint_{B_{\epsilon}^{e_{1}} }f_{1}(x- P_{\nu_{x}}h,z-P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) - 2 f_{1}(x, z) \bigg\}. \end{align*} We first estimate the $\alpha$-term. Using the Taylor expansion of $f_{1}$ and the above estimates, we get \begin{align*} & f_{1}(x+\epsilon\nu_{x}, z+\epsilon\nu_{z}) + f_{1}(x-\epsilon \nu_{x}, z-\epsilon \nu_{z}) - 2 f_{1}(x, z) \\ & = C \delta |x-z|^{\delta -2} \big\{ (\delta -1)(\nu_{x}-\nu_{z})_{V}^{2} + |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} \big\}\epsilon^{2} +2 M | \nu_{x}+\nu_{z}|^{2}\epsilon^{2} \\ & \ \ \ + \mathcal{E}_{x, z}(\epsilon \nu_{x},\epsilon \nu_{z}) + \mathcal{E}_{x,z}(-\epsilon \nu_{x},-\epsilon \nu_{z}) \\& \le C \delta |x-z|^{\delta -2} \{ (\delta -1) (4- \Theta) + \Theta \}\epsilon^{2} +2 M (2\epsilon)^{2} + 20|x-z|^{\delta -2} \epsilon^{2} \\& \le \big[ C \delta |x-z|^{\delta -2} \{ (\delta -1) (4- \Theta) + \Theta \} + 8M+ 20 |x-z|^{\delta -2} \big] \epsilon^{2}.
\end{align*} Note that \begin{align} \label{eq:hfourth} |P_{\nu_{x}}h -P_{\nu_{z}}h | \le |\nu_{x}+\nu_{z}|, \end{align} for some proper $P_{\nu_{x}}, P_{\nu_{z}} $ and for any $ h \in B_{1}^{e_{1}} $ (see \cite[Appendix A]{alpr2018lipschitz}), to see that \begin{align*} & \kint_{B_{\epsilon}^{e_{1}} } f_{1}(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) -f_{1}(x, z) \\& \ = \kint_{B_{\epsilon}^{e_{1}} } \bigg[ C \delta |x-z|^{\delta -1}(P_{\nu_{x}} h-P_{\nu_{z}} h )_{V}+2M\langle x+z ,P_{\nu_{x}} h+P_{\nu_{z}} h \rangle \\& \qquad \qquad +\frac{C}{2} |x-z|^{\delta -2}\big\{ (\delta -1)(P_{\nu_{x}}h-P_{\nu_{z}} h)_{V}^{2} + |(P_{\nu_{x}} h-P_{\nu_{z}} h)_{V^{\perp}}|^{2} \big\} \\& \qquad \qquad +M| P_{\nu_{x}}h+P_{\nu_{z}} h|^{2} + \mathcal{E}_{x,z}(P_{\nu_{x}}h,P_{\nu_{z}} h) \bigg] d \mathcal{L}^{n-1}(h) \\& \ = \frac{1}{2} \kint_{B_{\epsilon}^{e_{1}} } \bigg[ C |x-z|^{\delta -2} \big\{ (\delta -1)(P_{\nu_{x}}h-P_{\nu_{z}} h)_{V}^{2} + |(P_{\nu_{x}}h-P_{\nu_{z}} h)_{V^{\perp}}|^{2} \big\} \\& \qquad \qquad + 2M| P_{\nu_{x}}h+P_{\nu_{z}} h|^{2} + 2 \mathcal{E}_{x,z}(P_{\nu_{x}}h,P_{\nu_{z}} h) \bigg] d \mathcal{L}^{n-1}(h) \\& \ \le \frac{1}{2} \big\{ |x-z|^{\delta -2} (C \Theta + 20) + 8M \big\} \epsilon ^{2}. \end{align*} The last inequality follows from $ |\nu_{x}+\nu_{z}| ^{2} \le \Theta $. In the same way, one also obtains \begin{align*}\kint_{B_{\epsilon}^{e_{1}} } f_{1}(x-P_{\nu_{x}}h,&z-P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) -f_{1}(x, z) \\ & \le \frac{1}{2} \big\{ |x-z|^{\delta -2} (C \Theta + 20) + 8M \big\} \epsilon ^{2} .
\end{align*} These estimates give \begin{align*} \frac{1}{2} & \big\{ T f_{1}( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + T f_{1}( x, z, -P_{\nu_{x}}, -P_{\nu_{z}} ) \big\} - f_{1}(x, z) \\ & \le \frac{\alpha}{2} \big[ C \delta |x-z|^{\delta -2} \{ (\delta -1) (4- \Theta) + \Theta \} + 8M+ 20 |x-z|^{\delta -2} \big] \epsilon^{2} \\ & \ + \frac{\beta}{2} \big\{ C \Theta |x-z|^{\delta -2} + 8M + 20|x-z|^{\delta -2} \big\} \epsilon ^{2} \\ & \le \bigg[ \frac{C}{2} \big\{ \Theta + \alpha \delta (\delta - 1)(4 - \Theta) \big\} +10 \bigg] |x-z|^{\delta -2} \epsilon ^{2} + 4M \epsilon ^{2}. \end{align*} Observe that $ \Theta + \alpha \delta (\delta - 1)(4 - \Theta) < 0 $ if $\Theta < 4\alpha \delta (1-\delta)/\{ 1- \alpha \delta (\delta -1) \}$. Then we can choose a sufficiently large $C$ depending only on $ r, \delta, \alpha$ and $n$ so that \begin{align*} \midrg_{\nu_{x},\nu_{z} \in S^{n-1}} Tf_{1}( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) - f_{1}(x, z) < -M \tilde{C} \epsilon^{2}. \end{align*} Thus, we get (\ref{eq:hsecond}). \subsubsection{Case \texorpdfstring{$(\nu_{x}-\nu_{z})_{V}^{2} \le 4 - \Theta$}{(nu x - nu z) V squared <= 4 - Theta}} It is clear that $ |(\nu_{x}-\nu_{z})_{V}| \le 2- \Theta/4 $ in this case. Furthermore, we check that \begin{align} \label{show2} \begin{split} \midrg_{h_{x},h_{z} \in S^{n-1}} & Tf_{1}( x, z, P_{h_{x}}, P_{h_{z}} ) \\ & \le \frac{1}{2} \big\{ T f_{1}( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + T f_{1} ( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}}) \big\} + \eta . \end{split} \end{align} Now we estimate the right-hand side.
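Two elementary facts used here can be confirmed numerically (a quick check with sample values, not part of the proof): the sign condition $\Theta + \alpha\delta(\delta-1)(4-\Theta) < 0$ from the previous subcase holds exactly for $\Theta$ strictly below the stated threshold, and $(\nu_{x}-\nu_{z})_{V}^{2} \le 4-\Theta$ indeed forces $|(\nu_{x}-\nu_{z})_{V}| \le 2-\Theta/4$ for all $0 \le \Theta \le 4$:

```python
import numpy as np

alpha, delta = 0.4, 0.6                       # sample values in (0, 1)
thr = 4 * alpha * delta * (1 - delta) / (1 - alpha * delta * (delta - 1))
assert 0 < thr < 4
bracket = lambda th: th + alpha * delta * (delta - 1) * (4 - th)
for th in (0.0, 0.5 * thr, 0.99 * thr):
    assert bracket(th) < 0                    # negative strictly below thr
assert bracket(1.01 * thr) > 0                # and the sign flips at thr

# sqrt(4 - Theta) <= 2 - Theta/4, since squaring gives 4 - Theta + Theta^2/16
Theta = np.linspace(0.0, 4.0, 1000)
assert np.all(np.sqrt(4.0 - Theta) <= 2.0 - Theta / 4.0 + 1e-12)
```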
By the DPP, it can be written as \begin{align*} \frac{1}{2} & \big\{ T f_{1}( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + T f_{1} ( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}})\big\} - f_{1}(x,z) \\ & = \frac{\alpha}{2} \big\{ f_{1}(x+ \epsilon \nu_{x}, z+\epsilon \nu_{z}) + f_{1}(x-\epsilon \mathbf{v}, z+\epsilon \mathbf{v}) - 2 f_{1}(x, z) \big\} \\ & \ + \frac{\beta}{2} \bigg\{ \kint_{B_{\epsilon}^{e_{1}} }f_{1}(x+P_{\nu_{x}} h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) \\ & \qquad \qquad + \kint_{B_{\epsilon}^{e_{1}} }f_{1}(x+ P_{-\mathbf{v}}h,z+ P_{\mathbf{v}} h) d \mathcal{L}^{n-1} (h) - 2 f_{1}(x, z) \bigg\} . \end{align*} We will continue in a similar way to the previous case. For the $\alpha$-term, we deduce that \begin{align*} &f_{1}(x+ \epsilon \nu_{x}, z+\epsilon \nu_{z}) + f_{1}(x-\epsilon \mathbf{v}, z+\epsilon \mathbf{v}) - 2 f_{1}(x, z) \\ & =\frac{1}{2} \bigg[ C \delta |x-z|^{\delta -1} \{ (\nu_{x}-\nu_{z})_{V} - 2 \}\epsilon + 2M\langle x+z , \nu_{x}+\nu_{z} \rangle \epsilon \\ & \ \ \ + \frac{C}{2} \delta |x-z|^{\delta -2} \{ (\delta -1)( (\nu_{x}-\nu_{z})_{V}^{2}\epsilon^{2} + (2 \epsilon )^{2} ) + |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2}\epsilon^{2} \} \\ & \ \ \ + 4M \epsilon^{2}+ M | \nu_{x}+\nu_{z}|^{2}\epsilon^{2} + \mathcal{E}_{x,z}(\epsilon \nu_{x},\epsilon \nu_{z}) + \mathcal{E}_{x,z}(-\epsilon \mathbf{v},\epsilon \mathbf{v} ) \bigg] \\ & \hspace{-0.25em} \le \frac{1}{2} \bigg\{ \hspace{-0.25em} - \hspace{-0.25em} \frac{\Theta}{4} C \delta |x-z|^{\delta -1} \epsilon \hspace{-0.15em} + \hspace{-0.15em} 8M\epsilon r \hspace{-0.15em}+\hspace{-0.15em} 2C \delta |x-z|^{\delta -2}\epsilon^{2}\hspace{-0.15em} +\hspace{-0.15em} 20|x-z|^{\delta -2}\epsilon^{2}\hspace{-0.15em} +\hspace{-0.15em} 2M\epsilon^{2} \bigg\} .
\end{align*} Then we see that \begin{align*} 2C & \delta |x-z|^{\delta -2}\epsilon^{2} + 20|x-z|^{\delta -2}\epsilon^{2}+2M\epsilon^{2} \\ & \le \frac{10}{N} (2C \delta + 20 + 2M \diam(\Omega)^{2-\delta }) |x-z|^{\delta -1}\epsilon \\ & \le \delta^{2} |x-z|^{\delta -1}\epsilon \end{align*} for sufficiently large $C$ and $N \ge 100C / \delta$, since $|x-z| > N \epsilon / 10$ and $\Omega$ is bounded. Thus, \begin{align*} f_{1}(x+\epsilon \nu_{x}, z+\epsilon \nu_{z})& + f_{1}(x-\epsilon \mathbf{v}, z+\epsilon \mathbf{v}) - 2 f_{1}(x, z) \\ & \le \bigg\{\frac{\delta}{2} |x-z|^{\delta -1}\bigg( \delta - C\frac{\Theta}{4} \bigg) + 4Mr \bigg\}\epsilon . \end{align*} Next, we estimate the $\beta$-term. By a direct calculation, we see that \begin{align*} & \hspace{-0.2em} \kint_{B_{\epsilon}^{e_{1}} } \hspace{-0.3em} \big\{ f_{1}(x+P_{\nu_{x}} h,z+P_{\nu_{z}} h)+f_{1}(x+ P_{-\mathbf{v}}h,z+P_{\mathbf{v}} h) - 2 f_{1}(x, z) \big\} d \mathcal{L}^{n-1}(h) \\ & =\kint_{B_{\epsilon}^{e_{1}}} \{ f_{1}(x+ P_{\nu_{x}}h,z+P_{\nu_{z}} h) - f_{1}(x, z) \} d \mathcal{L}^{n-1}(h) \\ & \qquad \qquad + \kint_{B_{\epsilon}^{e_{1}} } \{ f_{1}(x+ P_{-\mathbf{v}}h,z+P_{\mathbf{v}}h) - f_{1}(x, z) \} d \mathcal{L}^{n-1}(h) \\ & \le \kint_{B_{\epsilon}^{e_{1}}} \bigg[ \frac{C}{2} \delta |x-z|^{\delta-2} \big\{(\delta -1)(P_{\nu_{x}}h-P_{\nu_{z}} h)_{V}^{2} + |(P_{\nu_{x}}h-P_{\nu_{z}} h)_{V^{\perp}}|^{2} \big\} \\& \qquad \qquad +M|P_{\nu_{x}} h+P_{\nu_{z}} h|^{2} + \mathcal{E}_{x,z}(P_{\nu_{x}}h,P_{\nu_{z}} h) \bigg] d \mathcal{L}^{n-1}(h) \\& \ \ \ + \kint_{B_{\epsilon}^{e_{1}} } \bigg[ \frac{C}{2} \delta |x-z|^{\delta-2}|2h|^{2} + \mathcal{E}_{x,z}(-\epsilon \mathbf{v},\epsilon\mathbf{v} ) \bigg] d \mathcal{L}^{n-1}(h) \\& \le C\delta |x-z|^{\delta-2}(2 \epsilon)^{2} + M(2 \epsilon)^{2} + 20|x-z|^{\delta -2} \epsilon^{2} . \end{align*} We have used (\ref{eq:hfourth}) in the last inequality.
Now we observe that $$ C\delta |x-z|^{\delta-2}(2 \epsilon)^{2} + M(2 \epsilon)^{2} + 20|x-z|^{\delta -2} \epsilon^{2} \le 2 \delta^{2} |x-z|^{\delta -1}\epsilon. $$ Therefore, the $\beta$-term is estimated by $ 2 \delta^{2} |x-z|^{\delta -1}\epsilon $. Combining these estimates, we conclude \begin{align*} \frac{1}{2} & \big\{ Tf_{1}( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + T f_{1} ( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}}) \big\} - f_{1}(x,z) \\ & \le \frac{\alpha}{2} \bigg\{\frac{\delta}{2} |x-z|^{\delta -1}\bigg( \delta - C\frac{\Theta}{4} \bigg) + 4Mr \bigg\}\epsilon + \beta \delta^{2} |x-z|^{\delta -1}\epsilon \\& \le -M \tilde{C} \epsilon^{2} \end{align*} for sufficiently large $C=C(r,\delta, \alpha, n)$. Combining this with \eqref{show2}, we obtain (\ref{eq:hfirst}). \subsection{Case \texorpdfstring{$0 < |x-z| \le N \epsilon / 10$}{0 < |x-z| <= N epsilon / 10}} We observe that \begin{align*} | f_{1}&(x+h_{x}, z+h_{z}) - f_{1}(x,z) | \\ & \le C \big| |x -z +h_{x} - h_{z}|^{\delta} - |x-z|^{\delta} \big| + M \big| |x +z +h_{x} + h_{z}|^{2} - |x+z|^{2} \big| \\& \le C|h_{x} - h_{z}|^{\delta} + 2M|x+z| \ |h_{x} +h_{z}|+M|h_{x} +h_{z}|^{2} \\& \le 2C \epsilon^{\delta} + 8Mr\epsilon + 4M \epsilon^{2} \\& \le 3C\epsilon^{\delta} \end{align*} for any $x , z \in B_{r}$ and $h_{x}, h_{z} \in B_{\epsilon}$ if $C=C(r, \delta)$ is sufficiently large.
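The short-distance bound just derived is easy to test numerically. The sketch below (with sample values of $r$, $\delta$, $\epsilon$, $M$, and a choice of $C$ large enough that $8Mr\epsilon + 4M\epsilon^{2} \le C\epsilon^{\delta}$; not part of the proof) checks it on random admissible points and shifts:

```python
import numpy as np

rng = np.random.default_rng(0)
r, delta, eps, M = 1.0, 0.5, 1e-2, 1.0
# any C with 8*M*r*eps + 4*M*eps**2 <= C*eps**delta works; add 1 for margin
C = (8 * M * r * eps + 4 * M * eps**2) / eps**delta + 1.0

def f1(p, q):
    return C * np.linalg.norm(p - q)**delta + M * np.linalg.norm(p + q)**2

def rand_ball(rad):
    # a random point in the ball of radius rad
    d = rng.normal(size=2)
    return rad * rng.random() * d / np.linalg.norm(d)

for _ in range(1000):
    x = rng.uniform(-r / 2, r / 2, 2)          # points inside B_r
    z = rng.uniform(-r / 2, r / 2, 2)
    hx, hz = rand_ball(eps), rand_ball(eps)    # shifts inside B_eps
    assert abs(f1(x + hx, z + hz) - f1(x, z)) <= 3 * C * eps**delta
```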
Therefore, we see that \begin{align*} & \sup_{h_{x},h_{z} \in S^{n-1}} T f_{1}( x, z, P_{h_{x}}, P_{h_{z}} ) - f_{1}(x,z) \\& \ = \sup_{h_{x},h_{z} \in S^{n-1}} \bigg[ \alpha\{ f_{1}(x+ \epsilon h_{x},z+ \epsilon h_{z})-f_{1}(x,z) \} + \\ & \qquad \qquad \beta \kint_{B_{\epsilon}^{e_{1}} }\{f_{1}(x+ P_{h_{x}}h,z+P_{h_{z}} h) - f_{1}(x,z) \} d \mathcal{L}^{n-1}(h) \bigg] \\& \ \le 3\alpha C\epsilon^{\delta}+ 3\beta C\epsilon^{\delta} = 3C\epsilon^{\delta} \end{align*} and \begin{align} \begin{split} \label{anes} \sup_{h_{x},h_{z} \in S^{n-1}} T f( x, z,P_{h_{x}}, P_{h_{z}} )& =\sup_{h_{x},h_{z} \in S^{n-1}} T( f_{1}-f_{2})(x, z, P_{h_{x}}, P_{h_{z}} ) \\ & \le \sup_{h_{x},h_{z} \in S^{n-1}} Tf_{1}( x, z, P_{h_{x}}, P_{h_{z}} ) . \end{split} \end{align} By the assumption, we can find $i \in \{1, 2, \dots , N\}$ such that $$ (i-1) \frac{ \epsilon}{10} < |x-z| \le i\frac{ \epsilon}{10}. $$ We deduce that \begin{align*} & \inf_{h_{x},h_{z} \in S^{n-1}} T f( x, z, P_{h_{x}}, P_{h_{z}} ) \\ & \le \sup_{h_{x},h_{z} \in S^{n-1}} T f_1( x, z, P_{h_{x}}, P_{h_{z}} ) - \sup_{h_{x},h_{z} \in S^{n-1}} T f_{2}( x, z, P_{h_{x}}, P_{h_{z}} ) \\& \le \sup_{h_{x},h_{z} \in S^{n-1}} Tf_1( x, z,P_{h_{x}}, P_{h_{z}} ) - \alpha C^{2(N-i+1)} \epsilon^{\delta} \\& = \sup_{h_{x},h_{z} \in S^{n-1}} T f_1( x, z, P_{h_{x}}, P_{h_{z}} ) - \alpha \bigg( C^{2}- \frac{2}{\alpha} \bigg) C^{2(N-i)} \epsilon^{\delta} - 2C^{2(N-i)} \epsilon^{\delta} \\& \le \sup_{h_{x},h_{z} \in S^{n-1}} T f_1( x, z,P_{h_{x}}, P_{h_{z}} ) - 2f_{2}(x,z) - 8C \epsilon ^{\delta}. \end{align*} The last inequality holds if $C$ is sufficiently large.
Therefore, we calculate that \begin{align*} \midrg_{h_{x},h_{z} \in S^{n-1}} T f( x, z, P_{h_{x}}, P_{h_{z}} ) & \le \hspace{-0.45em} \sup_{h_{x},h_{z} \in S^{n-1}} \hspace{-0.25em} T f_1( x, z,P_{h_{x}}, P_{h_{z}} ) - f_{2}(x,z)-4C\epsilon^{\delta} \\& \le f_{1}(x,z) + 3C\epsilon^{\delta} - f_{2}(x,z)-4C\epsilon^{\delta} \\& \le f(x,z) - C\epsilon^{\delta}, \end{align*} and then we get \eqref{eq:hfirst} by choosing $C= C(r, \delta, \alpha, n)$ sufficiently large. \subsection{Case \texorpdfstring{$|x-z| = 0$}{|x-z| = 0}} According to the results in the previous subsections, we observe that $$ |u_{\epsilon} (x,t) - u_{\epsilon} (z,s) | \le C_{1} ||u_{\epsilon}||_{\infty} ( |x-z|^{\delta} + \epsilon^{\delta}) ,$$ for any $x, z \in B_{r}(0)$ with $x \neq z$, $ -r^{2}<t<0$, $|t-s| < \epsilon^{2} / 2 $ and some $C_{1}= C_{1}(r, \delta, \alpha , n)>0$. Fix $x \in B_{r}(0)$ and $t,s \in (-r^{2},0)$ with $|t-s| < \epsilon^{2} / 2$. Then we can choose a point $y \in B_{\epsilon}(x) \backslash \{ x \}$ and deduce that \begin{align*} |u_{\epsilon} (x,t) - u_{\epsilon} (x,s) | & \le |u_{\epsilon} (x,t) - u_{\epsilon} (y,s) | +|u_{\epsilon} (y,s) - u_{\epsilon} (x,s) | \\ & \le C_{1} ||u_{\epsilon}||_{\infty} ( |x-y|^{\delta} + \epsilon^{\delta}) \\ & \le 2C_{1} ||u_{\epsilon}||_{\infty} \epsilon^{\delta}. \end{align*} Now set $C =2C_{1}$. This concludes the proof of the lemma. \end{proof} For any $x \in B_{r}$ and $ -r^{2} < s < t < 0 $, consider a cylinder $B_{\sqrt{t-s}}(x) \times [s,t] $. Applying Lemma \ref{lem1}, we find that \begin{align*} \osc_{B_{\sqrt{t-s}}(x) \times \big(\tau-\frac{ \epsilon^{2}}{2},\tau \big) } u_{\epsilon} \le C(r, \delta, \alpha, n) ||u_{\epsilon}||_{\infty} (|t-s|^{\frac{\delta}{2}} + \epsilon^{\delta}) \end{align*} for any $ \tau \in (s,t)$.
Then we obtain the following estimate \begin{align*} |u_{\epsilon} (x,t) - u_{\epsilon} (x,s) | \le C(r, \delta, \alpha, n) ||u_{\epsilon}||_{\infty} (|t-s|^{\frac{\delta}{2}} + \epsilon^{\delta}) \end{align*} by virtue of Lemma \ref{lem2}. Combining this and Lemma \ref{lem1}, we get the desired regularity. \begin{theorem} \label{thm1} Let $ \bar{Q}_{2r} \subset \Omega_{T}$, let $ 0 < \delta, \alpha < 1 $ and let $ \epsilon > 0 $ be small. Suppose that $u_{\epsilon}$ satisfies the $\alpha$-parabolic DPP with boundary data $F \in L^{\infty}(\Gamma_{\epsilon, T} )$. Then for any $x, z \in B_{r}(0)$ and $ -r^{2} < t, s< 0 $, $$ |u_{\epsilon} (x,t) - u_{\epsilon} (z,s) | \le C ||u_{\epsilon}||_{\infty} ( |x-z|^{\delta}+ |s-t|^{\frac{\delta}{2}} + \epsilon^{\delta}), $$ where $C>0$ is a constant depending only on $r, \delta, \alpha$ and $n$. \end{theorem} \section{Lipschitz regularity} We will prove Lipschitz type regularity for the function $u_{\epsilon}$ in this section. In the previous section, we exploited the concavity of the auxiliary function in the distance between the two points to get the result. In order to prove the Lipschitz estimate, the auxiliary function again needs to have this property. However, we no longer have the strong concavity that was helpful in the proof there. Therefore, we need to build the proof in a different manner in several places. For this reason, we will construct another (concave) auxiliary function for proving the Lipschitz estimate. This causes some difficulties compared to the H\"{o}lder case. As in the proof of Lemma \ref{lem1}, we will distinguish two subcases. More delicate calculations are needed when the two points are sufficiently far apart. Note that we will exploit the H\"{o}lder regularity result here. When the two points are sufficiently close, the proof is quite similar to that of the previous section.
\begin{lemma} \label{lem3} Let $\bar{B}_{2r}(0) \times [-2r^{2}-\epsilon^{2} / 2, \epsilon^{2}/2] \subset \Omega_{T}$, let $ 0< \alpha <1 $ and let $\epsilon > 0 $ be small. Suppose that $u_{\epsilon}$ satisfies the $\alpha$-parabolic DPP with boundary data $F \in L^{\infty}(\Gamma_{\epsilon, T} )$. Then, $$ |u_{\epsilon} (x,t) - u_{\epsilon} (z,s) | \le C||u_{\epsilon}||_{\infty} ( |x-z| + \epsilon) ,$$ whenever $x, z \in B_{r}(0)$, $ -r^{2}<t<0$ and $ |t-s| < \epsilon^{2} / 2 $, where $C>0$ is a constant depending only on $r, \alpha$ and $n$. \end{lemma} \begin{proof} We can expect that $ |x-z| $ will play the same role as $f_{1}$ in the H\"{o}lder case. However, for a Lipschitz type estimate, we cannot deduce the desired result using the function $ |x-z| $ itself. Therefore, we need to define a new auxiliary function $\omega : [0, \infty) \to [0, \infty)$. First define $$\omega(t) = t- \omega_{0}t^\gamma \qquad \textrm{for} \ \ 0 \le t \le \omega_{1} := (2\gamma \omega_{0})^{-1/(\gamma-1)},$$ where $\gamma \in (1,2)$ is a constant and $ \omega_{0} >0$ will be determined later. Observe that $$ \omega '(t) = 1-\gamma\omega_{0}t^{\gamma-1} \in [ 1/2,1 ] \qquad \textrm{for} \ \ 0 \le t \le \omega_{1}$$ and $$ \omega ''(t) = -\gamma(\gamma-1)\omega_{0}t^{\gamma-2} <0 \qquad \textrm{for} \ \ 0 \le t \le \omega_{1}.$$ Then we can extend $\omega$ so that it is increasing, strictly concave and $C^{2}$ in $(0, \infty)$. Assume that $ ||u_{\epsilon}||_{\infty} \le r $ by scaling as in the previous section, and define $$f_{1}(x,z) = C \omega( | x - z | ) + M | x+ z|^{2}. $$ Consider the functions $ f_{2} $ and $g$ for $ \delta = 1$ as in (\ref{f2}) and (\ref{gg}), respectively. Now we again set the auxiliary function $H$ by $$ H(x, z, t, s) = f_{1}(x,z) - f_{2}(x,z) + g(t,s)$$ and let $$ f(x,z)= f_{1}(x,z) - f_{2}(x,z).$$ As in the previous section, we will first deduce that $$ |u_{\epsilon}(x,t) - u_{\epsilon}(z,s) | \le C( |x-z|+ \epsilon ) \qquad \textrm{in} \ \ \ \ \Sigma_{2} \backslash \Upsilon .
$$ We can choose $M$ sufficiently large so that $$ u_{\epsilon}(x,t) - u_{\epsilon}(z,s) - H (x,z,t,s) \le C ^ {2N} \epsilon + C \epsilon \qquad \textrm{in} \ \ \ \Sigma_{2} \backslash \Sigma_{1} . $$ Thus, for proving the lemma, it is sufficient to show that $$ u_{\epsilon}(x,t) - u_{\epsilon}(z,s) - H(x,z,t,s) \le C ^ {2N} \epsilon +C \epsilon \qquad \textrm{in} \ \ \ \Sigma_{1}\backslash \Upsilon . $$ Suppose not. Then \begin{align*} K := \sup_{(x,z,t,s) \in \Sigma_{1} \backslash \Upsilon} (u_{\epsilon}(x,t) - u_{\epsilon}(z,s) - H(x,z,t,s)) > C ^ {2N} \epsilon+C \epsilon . \end{align*} In this case, we can choose $(x', z', t', s') \in \Sigma_{1} \backslash \Upsilon$ such that \begin{align} \label{counter} u_{\epsilon}(x',t') - u_{\epsilon}(z',s') -H (x',z',t',s') \ge K - \eta \end{align} for any $ \eta > 0 $. As in Section 4, we need to establish (\ref{eq:hfirst}) in order to prove Lemma \ref{lem3}. The only difference is the right-hand side of the inequality. In this case, it is sufficient to deduce that the left-hand side of (\ref{eq:hfirst}) is less than $ \sigma = \min \{ -M \epsilon , -M \tilde{C} \epsilon^{2} \} $, where $\tilde{C}$ depends only on $r$. We again use the notation $ (x, z, t, s) $ instead of $ (x', z', t', s') $. \subsection{Case \texorpdfstring{$|x-z| > N \epsilon / 10$}{|x-z| > N epsilon / 10}} For the same reason as in the previous section, we shall deduce (\ref{eq:hsecond}). To do this, it is sufficient to show (\ref{eq:ff}) for any $\eta > 0$ and some $P_{\nu_{x}} \in \mathbf{R}_{\nu_{x}}$, $P_{\nu_{z}} \in \mathbf{R}_{\nu_{z}}$. Now we calculate the Taylor expansion of $f_{1}$.
We see \begin{align} \label{eq:third} \begin{split} f_{1}&(x+ \epsilon h_{x}, z+ \epsilon h_{z} ) - f_{1}(x, z) \\ & \le C \omega ' {(|x-z|)}(h_{x}-h_{z})_{V} \epsilon + 2M\langle x+z , h_{x}+h_{z} \rangle \epsilon \\ & + \frac{1}{2}C \omega ''(|x-z|) (h_{x}-h_{z})_{V}^{2} \epsilon^{2} + \frac{1}{2}C \frac{\omega'(|x-z|)}{|x-z|} |(h_{x}-h_{z})_{V^{\perp}}|^{2} \epsilon^{2} \\ & + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}\end{split} \end{align} for any $h_{x} ,h_{z} \in \mathbb{R}^{n}$. Then we check that $$ | \mathcal{E}_{x,z}( h_{x}, h_{z}) | \le C |(h_{x} , h_{z})^{t} |^{3} (|x-z|-2\epsilon )^{\gamma -3}$$ if $|x-z| > 2 \epsilon$ and $h_{x}, h_{z} \in B_{\epsilon}$, because the third derivatives satisfy $D_{(x,z)}^{3} \omega (| x-z|) \le C |x-z|^{\gamma -3}$ for some constant $C>0$. Thus if we choose $ N \ge \frac{100C}{\delta} $, we get $$ | \mathcal{E}_{x,z}( h_{x}, h_{z}) | \le 10 |x-z|^{\gamma -2} \epsilon^{2}. $$ For estimating the $\alpha$-term in $T f_{1}(x, z, P_{\nu_{x}} , P_{\nu_{z}})$, we can use (\ref{eq:third}) directly. On the other hand, further observations about $P_{\nu_{x}} , P_{\nu_{z}}$ are needed to estimate the $\beta$-term. First we see that \begin{align*} f_{1}&(x+ \epsilon P_{\nu_{x}} \zeta, z+ \epsilon P_{\nu_{z}} \zeta ) - f_{1}(x, z) \\ & = C \omega ' {(|x-z|)}( P_{\nu_{x}}\zeta-P_{\nu_{z}} \zeta )_{V} \epsilon + 2M\langle x+z , P_{\nu_{x}} \zeta +P_{\nu_{z}} \zeta \rangle \epsilon \\ & + \frac{1}{2}C \omega ''(|x-z|) (P_{\nu_{x}} \zeta -P_{\nu_{z}} \zeta )_{V}^{2} \epsilon^{2} + \frac{1}{2}C \frac{\omega'(|x-z|)}{|x-z|} |(P_{\nu_{x}} \zeta -P_{\nu_{z}} \zeta )_{V^{\perp}}|^{2} \epsilon^{2} \\& + M |P_{\nu_{x}} \zeta+P_{\nu_{z}}\zeta|^{2}\epsilon^{2} + \mathcal{E}_{x,z}(\epsilon h_{x}, \epsilon h_{z}) \end{align*} from (\ref{eq:third}). By rotational symmetry, the integral of the first-order terms vanishes. 
Using $ \omega '' < 0 $ and (\ref{eq:hfourth}), we see that \begin{align*} \kint_{B_{\epsilon}^{e_{1}} } & f_{1}(x+P_{\nu_{x}} h,z+P_{\nu_{z}} h) d \mathcal{L}^{n-1}(h) - f_{1}(x,z) \\ & \le \frac{C}{2} \frac{\omega'(|x-z|)}{|x-z|}|\nu_{x}+\nu_{z}|^{2} \epsilon^{2} + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}. \end{align*} Therefore, \begin{align*} Tf_{1} &( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) - f_{1}(x,z) \\ & \le \alpha C \omega ' {(|x-z|)}(\nu_{x}-\nu_{z})_{V} \epsilon + 2\alpha M\langle x+z , \nu_{x}+\nu_{z} \rangle \epsilon \\ & + \frac{\alpha}{2}C \omega ''(|x-z|) (\nu_{x}-\nu_{z})_{V}^{2} \epsilon^{2} \\ & + \frac{1}{2}C \frac{\omega'(|x-z|)}{|x-z|}( \alpha |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} +\beta|\nu_{x}+\nu_{z}|^{2} ) \epsilon^{2} \\ & + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}. \end{align*} Now we set $ \Theta = |x-z|^{s} $ for some $s \in (0,1]$ to be chosen later. In order to deduce (\ref{eq:hsecond}), we again divide this case into two separate subcases. \subsubsection{Case $(\nu_{x}-\nu_{z})_{V}^{2} \ge 4 - \Theta $} Consider two rotations $P_{\nu_{x}}, P_{\nu_{z}} $ which satisfy (\ref{eq:hfourth}). Observe that \begin{align} \label{eq:fifth} \begin{split} \frac{1}{2} & \big\{ Tf_{1} ( x, z,P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, -P_{\nu_{x}}, -P_{\nu_{z}} ) \big\} - f_{1}(x, z) \\ & \le \frac{\alpha}{2}C \omega ''(|x-z|) (\nu_{x}-\nu_{z})_{V}^{2} \epsilon^{2} \\ & + \frac{1}{2}C \frac{\omega'(|x-z|)}{|x-z|}( \alpha |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} + \beta|\nu_{x}+\nu_{z}|^{2} ) \epsilon^{2} \\ & + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}. 
\end{split} \end{align} Since $ \Theta \le 1 $ for sufficiently small $r$, $ \frac{1}{2} \le \omega ' \le 1 $ and $ \omega '' < 0 $, \begin{align*} \frac{1}{2} & \big\{ Tf_{1} ( x, z,P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, -P_{\nu_{x}}, -P_{\nu_{z}} ) \big\} - f_{1}(x, z) \\ & \le \frac{3}{2}\alpha C \omega ''(|x-z|) \epsilon^{2} + \frac{C}{2} \frac{1}{|x-z|} ( \alpha |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} +\beta|\nu_{x}+\nu_{z}|^{2} ) \epsilon^{2} \\ & + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}. \end{align*} We know that $ |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} \le \Theta $ by the assumption and we also see $$ |\nu_{x}+\nu_{z}|^{2} = 4 - |(\nu_{x} -\nu_{z} )|^{2} \le 4 - |(\nu_{x} -\nu_{z} )_{V}|^{2} \le \Theta .$$ Thus, \begin{align*} \frac{1}{2} & \big\{ Tf_{1} ( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, -P_{\nu_{x}}, -P_{\nu_{z}} ) \big\} - f_{1}(x, z) \\ & \le \bigg\{ \frac{3}{2}\alpha C \omega ''(|x-z|) + \frac{C}{2} \frac{\Theta}{|x-z|} + 4M +10 |x-z|^{\gamma -2} \bigg\} \epsilon^{2} . \end{align*} By the definition of $\omega$, $\omega''(|x-z|)= -\gamma(\gamma-1)\omega_{0}|x-z|^{\gamma-2}$ if $|x-z| < \omega_{1} $. Choose $\gamma = 1+s$; since $|x-z|<1$, we get \begin{align*} \frac{1}{2} & \big\{ Tf_{1} ( x, z,P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, -P_{\nu_{x}}, -P_{\nu_{z}} ) \big\} - f_{1}(x, z) \\ & \le \bigg[ C \bigg\{- \frac{3}{2}\alpha s(s+1) \omega_{0} + 11 \bigg\} |x-z|^{s-1}+ 4M \bigg] \epsilon^{2} . \end{align*} Note that if $|x-z|< \omega_{1}$ (see the definition of $\omega$), $$- \frac{3}{2}\alpha s(s+1) \omega_{0} + 11 < 0$$ for sufficiently large $\omega_{0}$. Now we select $C= C(r, \alpha, n)$ sufficiently large so that \begin{align*} \bigg[ C \bigg\{- \frac{3}{2}\alpha s(s+1) \omega_{0} + 11 \bigg\} |x-z|^{s-1}+ 4M \bigg] \epsilon^{2} \le -M \tilde{C} \epsilon ^{2} , \end{align*} and then we obtain (\ref{eq:hfirst}). 
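As a side check, the constraints imposed on $\omega$ and $\omega_{0}$ above can be verified numerically. In the following sketch the values $\gamma = 5/4$ (so $s = 1/4$), $\alpha = 1/2$ and $\omega_{0} = 50$ are illustrative assumptions, not values fixed by the proof:

```python
# Numerical sanity check of the auxiliary function omega(t) = t - omega0 * t**gamma
# on [0, omega1] with omega1 = (2*gamma*omega0)**(-1/(gamma-1)).
# gamma = 5/4, alpha = 1/2 and omega0 = 50 are illustrative choices only.
gamma, alpha, omega0 = 1.25, 0.5, 50.0
s = gamma - 1.0
omega1 = (2.0 * gamma * omega0) ** (-1.0 / (gamma - 1.0))

def omega_prime(t):
    # omega'(t) = 1 - gamma * omega0 * t**(gamma - 1)
    return 1.0 - gamma * omega0 * t ** (gamma - 1.0)

def omega_second(t):
    # omega''(t) = -gamma * (gamma - 1) * omega0 * t**(gamma - 2)
    return -gamma * (gamma - 1.0) * omega0 * t ** (gamma - 2.0)

ts = [omega1 * k / 1000.0 for k in range(1, 1001)]
assert all(0.5 - 1e-9 <= omega_prime(t) <= 1.0 for t in ts)  # omega' in [1/2, 1]
assert all(omega_second(t) < 0.0 for t in ts)                # strict concavity
# Sign condition used to absorb the lower-order terms:
assert -1.5 * alpha * s * (s + 1.0) * omega0 + 11.0 < 0.0
```

Any $\omega_{0} > 22/(3\alpha s(s+1))$ makes the last condition hold, matching the requirement that $\omega_{0}$ be sufficiently large.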
\subsubsection{Case $(\nu_{x}-\nu_{z})_{V}^{2} < 4 - \Theta $} Consider two rotations $P_{\mathbf{v}}$ and $P_{-\mathbf{v}}$ defined as follows: the first column vectors of $P_{\mathbf{v}}$ and $P_{-\mathbf{v}}$ are ${\mathbf{v}}$ and ${-\mathbf{v}}$, respectively, and the other column vectors are the same. Then we observe that \begin{align*} Tf_{1} &( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}} ) - f_{1}(x,z) \\ & \le -2 \alpha C \omega ' {(|x-z|)} \epsilon + 2\alpha C \omega ''(|x-z|) \epsilon^{2} + (4M +10 |x-z|^{\gamma -2})\epsilon^{2} \\& \le -2 \alpha C \omega ' {(|x-z|)} \epsilon + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}, \end{align*} and thus \begin{align*} \frac{1}{2} \big\{ Tf_{1} ( x, z,& P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z,P_{-\mathbf{v}}, P_{\mathbf{v}}) \big\} - f_{1}(x,z) \\ & \le \alpha C \omega ' {(|x-z|)}\{(\nu_{x}-\nu_{z})_{V} - 2 \} \epsilon + 2\alpha M\langle x+z , \nu_{x}+\nu_{z} \rangle \epsilon \\ & + \frac{1}{2}C \frac{\omega'(|x-z|)}{|x-z|} \{ \alpha |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} + \beta|\nu_{x}+\nu_{z}|^{2} \} \epsilon^{2} \\ & + (4M +10 |x-z|^{\gamma -2})\epsilon^{2}. \end{align*} Set $$ \kappa = \frac{ |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2}}{\Theta} .$$ Then $ 1 < \kappa \le \frac{4}{\Theta} $ by the assumption. Observe that $$|(\nu_{x}-\nu_{z})_{V}| \le \sqrt{|\nu_{x}-\nu_{z}|^{2}-\kappa \Theta} \le \sqrt{4-\kappa \Theta} \le 2 - \frac{\kappa}{4}\Theta$$ and hence $$ |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} \le 4( 2 - (\nu_{x}-\nu_{z})_{V}).$$ On the other hand, we have \begin{align*} |\nu_{x}+\nu_{z}|^{2} & = 4 - |\nu_{x}-\nu_{z}|^{2} \\ & \le 4 - (\nu_{x}-\nu_{z})_{V}^{2} \\ & \le 4 ( 2- (\nu_{x}-\nu_{z})_{V}). 
\end{align*} We observe that \begin{align*} \frac{1}{2}& \big\{ Tf_{1} ( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z,P_{-\mathbf{v}}, P_{\mathbf{v}} ) \big\} - f_{1}(x,z) \\ & \le 2\alpha M\langle x+z , \nu_{x}+\nu_{z} \rangle \epsilon+ (2 - (\nu_{x}-\nu_{z})_{V} ) C \omega ' {(|x-z|)} \bigg( - \alpha + \frac{20}{N} \bigg) \epsilon \\ & \qquad + (4M +10 |x-z|^{\gamma -2})\epsilon^{2} , \end{align*} as $|x-z| > N\epsilon / 10$. Next we estimate $ M\langle x+z , \nu_{x}+\nu_{z} \rangle \epsilon$. We already know that $u_{\epsilon} $ satisfies a H\"{o}lder type estimate for any exponent $ \delta \in (0,1) $ by Theorem \ref{thm1}. Now by the counter assumption (\ref{counter}), $$ u_{\epsilon}(x,t)-u_{\epsilon}(z,s)-C \omega(|x-z|) - M|x+z|^{2}- g(t,s) \ge K - \eta > 0.$$ Then we see \begin{align*} M|x+z|^{2} < u_{\epsilon}(x,t)-u_{\epsilon}(z,s) \le C_{u_{\epsilon}} (|x-z|^{1/2}+ \epsilon^{1/2}) . \end{align*} Note that $ C_{u_{\epsilon}}$ is a constant depending only on $r, \alpha$ and $n$. Thus, we obtain that \begin{align*} |x+z| & < \sqrt{ \frac{ C_{u_{\epsilon}}}{M}} (|x-z|^{1/2}+ \epsilon^{1/2})^{1/2} \\& \le \sqrt{ \frac{ C_{u_{\epsilon}}}{M}} \bigg[ |x-z|^{1/4}+ \frac{1}{2}|x-z|^{-1/4}\epsilon^{1/2} + o (\epsilon^{1/2})\bigg] \\& \le \sqrt{ \frac{ C_{u_{\epsilon}}}{M}} \bigg[ |x-z|^{1/4}+ \frac{1}{2}\bigg( \frac{10}{N} \bigg)^{1/4}\epsilon^{1/4} + o (\epsilon^{1/2})\bigg]. \end{align*} Hence we observe \begin{align*}M\langle x+z &, \nu_{x}+\nu_{z} \rangle \epsilon \le 2M|x+z|\epsilon \\ & \le 2 \sqrt{MC_{u_{\epsilon}}} |x-z|^{1/4}\epsilon+ \sqrt{MC_{u_{\epsilon}}} \bigg( \frac{10}{N} \bigg)^{1/4}\epsilon^{5/4} + o (\epsilon^{3/2}) \\& \le 3 \sqrt{MC_{u_{\epsilon}}} |x-z|^{1/4}\epsilon \end{align*} since $ \sqrt{MC_{u_{\epsilon}}} (10/N )^{1/4}\epsilon^{5/4} + o (\epsilon^{3/2}) $ is bounded by $\sqrt{MC_{u_{\epsilon}}} |x-z|^{1/4}\epsilon$. 
Therefore, if we choose $ \gamma = 1+ s = 5/4 $, \begin{align*} & \frac{1}{2} \big\{ Tf_{1} ( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z,P_{-\mathbf{v}}, P_{\mathbf{v}} ) \big\} - f_{1}(x,z) \\ & \le 6\alpha \sqrt{MC_{u_{\epsilon}}} |x-z|^{s} \epsilon + C \omega ' {(|x-z|)} \times \\ & \qquad \ \ \ \bigg[ - \alpha \kappa \frac{ |x-z|^{s} }{4} + \frac{5}{N} \bigg\{ \alpha |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2} + \frac{\beta}{n+1}|(\rho_{x} - \rho_{z} )_{V^{\perp}}|^{2} \bigg\} \bigg] \epsilon \\ & + (4M +10 |x-z|^{s-1})\epsilon^{2}. \end{align*} Note that $$ (4M +10 |x-z|^{s-1})\epsilon^{2} \le (4M+10) \frac{10}{N} |x-z|^{s} \epsilon. $$ Then \begin{align*} \frac{1}{2} \big\{ & Tf_{1} ( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}}) \big\} - f_{1}(x,z) \\ & \le 6\alpha \sqrt{MC_{u_{\epsilon}}} |x-z|^{s} \epsilon + (2 - (\nu_{x}-\nu_{z})_{V} ) C \omega ' {(|x-z|)} \bigg( - \alpha + \frac{20}{N} \bigg) \epsilon \\ & \qquad + (4M+10) \frac{10}{N} |x-z|^{s} \epsilon. \end{align*} Since we already know that $ \kappa \Theta / 4 \le 2 -(\nu_{x}-\nu_{z})_{V} $ and $ \omega ' \in [1/2, 1] $, we see that \begin{align*} \frac{1}{2} \big\{ & Tf_{1} ( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}} ) \big\} - f_{1}(x,z) \\ & \le \bigg[ 6\alpha \sqrt{MC_{u_{\epsilon}}} + C \bigg( - \frac{ \alpha }{8} + \frac{5}{N} \bigg) \frac{ |(\nu_{x}-\nu_{z})_{V^{\perp}}|^{2}}{|x-z|^{s}} + (4M+10) \frac{10}{N} \bigg] |x-z|^{s} \epsilon \\ & \le \bigg[ 6\alpha \sqrt{MC_{u_{\epsilon}}} + C \bigg( - \frac{ \alpha }{8} + \frac{5}{N} \bigg) + (4M+10) \frac{10}{N} \bigg] |x-z|^{s} \epsilon . 
\end{align*} Fix $N > 100/\alpha$ and choose $C = C( r, \alpha, n)$ large enough so that $$ 6\alpha \sqrt{MC_{u_{\epsilon}}} + C \bigg( - \frac{ \alpha }{8} + \frac{5}{N} \bigg) + (4M+10) \frac{10}{N} < 0 .$$ Then we conclude that \begin{align*} \frac{1}{2} \big\{ & Tf_{1} ( x, z, P_{\nu_{x}}, P_{\nu_{z}} ) + Tf_{1} ( x, z, P_{-\mathbf{v}}, P_{\mathbf{v}} ) \big\} - f_{1}(x,z) \\ & \le \frac{N}{10} \bigg[ 6\alpha \sqrt{MC_{u_{\epsilon}}} + C \bigg( - \frac{ \alpha }{8} + \frac{5}{N} \bigg) + (4M+10) \frac{10}{N} \bigg] |x-z|^{s-1} \epsilon^{2} \\ & \le -M \tilde{C} \epsilon^{2} , \end{align*} since $ |x-z| > N \epsilon /10$. This gives the desired result. \subsection{Case \texorpdfstring{$0<|x-z| \le N \epsilon / 10$}{0 < |x-z| <= N epsilon / 10}} This case is quite similar to the H\"{o}lder case. First, we see that for any $x , z \in B_{r}$ and $h_{x}, h_{z} \in S^{n-1}$, \begin{align*} & |f_{1}(x+\epsilon h_{x},z+\epsilon h_{z})-f_{1}(x,z)| \\& \le C \big|\omega(|x+\epsilon h_{x}-z-\epsilon h_{z}|)-\omega(|x-z|) \big| \\ & \qquad + M\big| |x+\epsilon h_{x}+z+\epsilon h_{z}|^{2}-|x+z|^{2} \big| \\& \le C \big( \big||x+\epsilon h_{x}-z-\epsilon h_{z}|- |x-z| \big|+ \omega_{0}\big| |x+\epsilon h_{x}-z-\epsilon h_{z}|^{\gamma} -|x-z |^{\gamma} \big| \big) \\ & \qquad +M\big| |x+\epsilon h_{x}+z+\epsilon h_{z}|^{2}-|x+z|^{2} \big| \\& \le 2C\epsilon + 2C \omega_{0} \gamma (2r)^{\gamma-1}(2\epsilon) + 8Mr\epsilon+ 4M\epsilon^{2}. \end{align*} Then we can choose a constant $C >0$ such that $$|f_{1}(x+\epsilon h_{x},z+\epsilon h_{z})-f_{1}(x,z)| \le 20C \epsilon. 
$$ As in the previous section, \begin{align*} &\sup_{h_{x},h_{z} \in S^{n-1} } Tf_{1}( x, z,P_{h_{x}}, P_{h_{z}} ) - f_{1}(x,z) \\& \qquad = \sup_{h_{x},h_{z} \in S^{n-1} } \bigg[ \alpha\{ f_{1}(x+ \epsilon h_{x},z+ \epsilon h_{z})-f_{1}(x,z) \} \\& \qquad \qquad \qquad \qquad + \beta \kint_{B_{\epsilon}^{e_{1}} }\{f_{1}(x+ P_{h_{x}}h,z+P_{h_{z}} h) - f_{1}(x,z) \} d \mathcal{L}^{n-1}(h) \bigg] \\& \qquad \le 20\alpha C\epsilon+ 20\beta C\epsilon= 20C\epsilon \end{align*} and note that (\ref{anes}) is still valid here. We can find $i \in \{1, 2, \cdots , N\}$ such that $ (i-1) \frac{ \epsilon}{10} < |x-z| \le i\frac{ \epsilon}{10} $ as in the previous section. Now, if $C$ is large enough, \begin{align*} &\inf_{h_{x},h_{z} \in S^{n-1}} Tf( x, z, P_{h_{x}}, P_{h_{z}} ) \\ & \le \sup_{h_{x},h_{z} \in S^{n-1}} Tf_{1}( x, z, P_{h_{x}}, P_{h_{z}} ) - \sup_{h_{x},h_{z} \in S^{n-1}} T f_{2}( x, z, P_{h_{x}}, P_{h_{z}} ) \\& \le \sup_{h_{x},h_{z} \in S^{n-1}} T f_{1}( x, z,P_{h_{x}}, P_{h_{z}} ) - \alpha C^{2(N-i+1)} \epsilon\\& = \sup_{h_{x},h_{z} \in S^{n-1}} T f_{1}( x, z,P_{h_{x}}, P_{h_{z}} ) - \alpha \bigg( C^{2}- \frac{2}{\alpha} \bigg) C^{2(N-i)} \epsilon - 2C^{2(N-i)} \epsilon \\& \le \sup_{h_{x},h_{z} \in S^{n-1}} T f_{1}( x, z,P_{h_{x}}, P_{h_{z}} ) - 2f_{2}(x,z) - 50C \epsilon . \end{align*} Therefore, we calculate that \begin{align*} & \midrg_{h_{x},h_{z} \in S^{n-1}} Tf( x, z,P_{h_{x}}, P_{h_{z}} ) \\& \qquad \le \sup_{h_{x},h_{z} \in S^{n-1}} T f_{1}( x, z, P_{h_{x}}, P_{h_{z}} ) - f_{2}(x,z)-25C\epsilon \\& \qquad \le f_{1}(x,z) + 20C\epsilon - f_{2}(x,z)-25C\epsilon. \end{align*} We finally choose a large constant $C > M$ depending only on $r, \alpha$ and $n$ to obtain (\ref{eq:hfirst}). 
\subsection{Case \texorpdfstring{$|x-z| =0$}{|x-z| = 0}} As in the previous section, we have already shown that $$ |u_{\epsilon} (x,t) - u_{\epsilon} (z,s) | \le C_{2} ||u_{\epsilon}||_{\infty} ( |x-z| + \epsilon) ,$$ for any $x, z \in B_{r}(0)$ with $x \neq z$, $ -r^{2}<t<0$, $|t-s| < \epsilon^{2} / 2 $ and some $C_{2}= C_{2}(r, \alpha , n)>0$. Then we can obtain the desired result by using the same argument as in Section 4.3. \end{proof} Now Lemma \ref{lem2} and Lemma \ref{lem3} yield the Lipschitz type regularity in the whole cylinder. \begin{theorem} \label{mainthm} Let $\bar{Q}_{2r} \subset \Omega_{T}$, $0 < \alpha <1$ and $\epsilon > 0 $ be small. Suppose that $u_{\epsilon}$ satisfies the $\alpha$-parabolic DPP with boundary data $F \in L^{\infty}(\Gamma_{\epsilon, T} )$. Then for any $x, z \in B_{r}(0)$ and $ -r^{2} < t, s< 0 $, $$ |u_{\epsilon} (x,t) - u_{\epsilon} (z,s) | \le C ||u_{\epsilon}||_{\infty} ( |x-z|+ |s-t|^{\frac{1}{2}} + \epsilon), $$ where $C>0$ is a constant which only depends on $r, \alpha$ and $n$. \end{theorem} {\bf Acknowledgments}. This work was supported by NRF-2015R1A4A1041675. The author would like to thank M. Parviainen for introducing this topic, for valuable discussions and for constant support throughout this work. Part of this work was done while the author was visiting the University of Jyv\"{a}skyl\"{a} (JYU) in Finland; the author thanks JYU for its hospitality. \bibliographystyle{alpha}
\subsection{Importance of Different Facial Regions} \label{sec:analysis} \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/heatmap_illumination_error_bright.pdf} \caption{Region importance maps and corresponding mean face patches based on a clustering of face patches according to illumination conditions for the MPIIGaze dataset: from directional light on the right side of the face (left), via frontal light (center), to directional light on the left side of the face (right). Bar plots show the estimation error for the two eye model (baseline) and the proposed spatial weights CNN (ours), and the performance gain in percent in the top right corner. Error bars indicate standard deviations.} \label{fig:heatmap_illumination} \end{figure} To further analyse how different facial regions contribute to the overall performance, we generated region importance maps of the full-face model with respect to different factors for 3D gaze estimation. As proposed in~\cite{zeiler2014visualizing}, region importance maps were generated by evaluating the estimation error after masking parts of the input image. Specifically, given the $448 \times 448$ input face image, we used a grey-coloured mask with a size of $64 \times 64$ pixels and moved this mask over the whole image in a sliding window fashion with a $32$ pixel stride. The per-image region importance maps were obtained by smoothing the resulting error distribution with a box filter. The larger the resulting drop in gaze estimation accuracy, the higher the importance of that region of the face. Individual face images and their importance maps were then aligned by warping the whole image using three facial landmark locations (centres of both eye corners and mouth corners). Finally, mean face patches and mean region importance maps were computed by averaging over all images. 
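The masking procedure described above can be sketched as follows. This is an illustrative re-implementation, not the authors' code; `estimate_error` stands in for running the trained CNN and comparing against ground truth.

```python
import numpy as np

# Occlusion-based region importance: slide a grey mask over the face image
# and record the gaze estimation error for every mask position.
def region_importance_map(image, estimate_error, mask_size=64, stride=32):
    h, w = image.shape[:2]
    rows = (h - mask_size) // stride + 1
    cols = (w - mask_size) // stride + 1
    errors = np.zeros((rows, cols))
    grey = image.mean()  # grey fill value for the occluding mask
    for i in range(rows):
        for j in range(cols):
            masked = image.copy()
            y, x = i * stride, j * stride
            masked[y:y + mask_size, x:x + mask_size] = grey
            errors[i, j] = estimate_error(masked)
    return errors  # larger error => masking hurt more => more important region

# A 448x448 input with a 64x64 mask and stride 32 yields a 13x13 error grid,
# which is then smoothed, aligned via the three landmarks, and averaged.
face = np.zeros((448, 448))
imp = region_importance_map(face, lambda img: float(img.mean()))
assert imp.shape == (13, 13)
```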
To illustrate the effect of the face image input, we compare these region importance maps with a quantitative performance comparison between two eyes ({\em Baseline}) and our proposed full-face model ({\em Ours}). \vspace{-1em} \paragraph{Illumination Conditions} The original MPIIGaze paper characterised the dataset with respect to different illumination conditions as well as gaze ranges~\cite{zhang2015appearance}. We therefore first explored whether and which facial regions encode information on these illumination conditions. As in the original paper, we used the difference in mean intensity values of the right and left half of the face as a proxy to infer directional light. We clustered all $15 \times 3,000$ images according to the illumination difference using $k$-means clustering, and computed the mean face image and mean importance map for each cluster. \autoref{fig:heatmap_illumination} shows resulting sample region importance maps with respect to illumination conditions. As can be seen from the figure, under strong directional lighting (leftmost and rightmost example), more widespread regions around the eyes are required on the brighter side of the face. The proposed method consistently performed better than the two eye model over all lighting conditions. \vspace{-1em} \paragraph{Gaze Directions} \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/heatmap_gaze_h_error_bright.pdf} \par \vspace{0.2cm} \includegraphics[width=\columnwidth]{figures/heatmap_gaze_v_error_bright.pdf} \caption{Region importance maps and corresponding mean face patches based on a clustering of images according to ground-truth horizontal (top) and vertical (bottom) gaze direction for the MPIIGaze dataset. Bar plots show the estimation error in the same manner as in \autoref{fig:heatmap_illumination}.} \label{fig:heatmap_gaze} \end{figure} Another factor that potentially influences the importance of different facial regions is the gaze direction. 
We therefore clustered images according to gaze direction in the same manner as before. The top two rows of \autoref{fig:heatmap_gaze} show the corresponding region importance maps depending on horizontal gaze direction while the bottom two rows show maps depending on vertical gaze direction. As shown, different parts of the face become important depending on the gaze direction to be inferred. The eye region is most important if the gaze direction is straight ahead while the model puts higher importance on other regions if the gaze direction becomes more extreme. \vspace{-1em} \paragraph{Head Pose} \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/EYEDIAP_heat_map_headpose_horizontal.pdf} \par \vspace{0.2cm} \includegraphics[width=\columnwidth]{figures/EYEDIAP_heat_map_headpose_vertical.pdf} \caption{Region importance maps based on a clustering of images according to ground-truth horizontal (top) and vertical (bottom) head pose for the EYEDIAP dataset. Bar plots show the estimation error in the same manner as in \autoref{fig:heatmap_illumination}.} \label{fig:heatmap_headpose} \end{figure} While the head pose range in MPIIGaze is limited due to the recording setting, the EYEDIAP dataset contains a wide head pose range. We therefore finally clustered images in EYEDIAP according to head pose in the same manner as before. The top two rows of \autoref{fig:heatmap_headpose} show the corresponding region importance maps depending on horizontal head pose while the bottom two rows show maps depending on vertical head pose. In these cases, it can be clearly seen that the full-face input is particularly beneficial to improving estimation performance for extreme head poses. Non-eye facial regions also have in general higher importance compared to MPIIGaze, which indicates the benefit of using full-face input for low-resolution images. 
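All three analyses rely on the same preparatory step: grouping images by a single scalar (illumination difference, gaze angle, or head pose angle) with $k$-means and averaging the per-image importance maps within each cluster. A minimal one-dimensional sketch of this grouping (the deterministic initialisation is our simplification):

```python
# One-dimensional k-means, as used to cluster images by a scalar such as the
# left/right intensity difference, the gaze angle, or the head pose angle.
def kmeans_1d(values, k, iters=50):
    vs = sorted(values)
    # deterministic initialisation: spread initial centres over the sorted data
    centers = [vs[(len(vs) - 1) * c // (k - 1)] for c in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return centers, labels

# Toy example: three well-separated illumination-difference groups.
vals = [-10.1, -9.9, 0.2, -0.1, 9.8, 10.3]
centers, labels = kmeans_1d(vals, k=3)
assert labels == [0, 0, 1, 1, 2, 2]
```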
\section{Conclusion} In this work we studied full-face appearance-based gaze estimation and proposed a spatial weights CNN method that leveraged information from the full face. We demonstrated that, compared to current eye-only and multi-region methods, our method is more robust to facial appearance variation caused by extreme head pose and gaze directions as well as illumination. Our method achieved an accuracy of $4.8^\circ$ and $6.0^\circ$ for person-independent 3D gaze estimation on the challenging in-the-wild MPIIGaze and EYEDIAP datasets, respectively -- a significant improvement of 14.3\% and 27.7\% over the state of the art. We believe that full-face appearance-based gaze estimation is closely related to other computer vision tasks, such as face and facial feature detection, facial expression analysis, or head pose estimation. This work therefore points towards future learning-based methods that address several of these tasks jointly. \section{Evaluation}\label{sec:experiments} To evaluate our architecture for the 2D and 3D gaze estimation tasks, we conducted experiments on two current gaze datasets: MPIIGaze~\cite{zhang2015appearance} and EYEDIAP~\cite{mora2014eyediap}. For the MPIIGaze dataset, we performed a leave-one-person-out cross-validation on all 15 participants. In order to eliminate the error caused by face alignment, we manually annotated the six facial landmarks for data normalization and image cropping. In the original evaluation, there were 1,500 left and 1,500 right eye samples randomly taken from each participant. For a direct comparison, we obtained face images corresponding to the same evaluation set and flipped the face images when they came from the right eye. Our face patch-based setting took the middle point of the face (the center of all six landmarks) as the origin of gaze direction. 
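Throughout the evaluation, 3D performance is reported as the angular error between estimated and ground-truth gaze vectors. A small helper making this metric explicit (written for illustration; not taken from any released code):

```python
import math

# Angular error in degrees between an estimated and a ground-truth
# 3D gaze vector.
def angular_error_deg(g_est, g_true):
    dot = sum(a * b for a, b in zip(g_est, g_true))
    n1 = math.sqrt(sum(a * a for a in g_est))
    n2 = math.sqrt(sum(b * b for b in g_true))
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp against rounding
    return math.degrees(math.acos(cosang))

assert angular_error_deg((0.0, 0.0, -1.0), (0.0, 0.0, -1.0)) == 0.0
assert abs(angular_error_deg((1.0, 0.0, 0.0), (0.0, 1.0, 0.0)) - 90.0) < 1e-9
```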
For the EYEDIAP dataset, we used the screen target session for evaluation and sampled one image per 15 frames from four VGA videos of each participant. We used the head pose and eye centre annotations provided by the dataset for image normalization, and reference points were set to the midpoint of the two eye centres. The eye images were cropped in the same way as for the MPIIGaze dataset. We randomly separated the 14 participants into 5 groups and performed 5-fold cross-validation. We compared our full-face gaze estimation method with two state-of-the-art baselines: a single eye method~\cite{zhang2015appearance} that only uses information encoded from one eye, and a multi-region method~\cite{krafka2016eye} that takes eye images, the face image, and a face grid as input. \vspace{-1em} \paragraph{Single Eye} One of the baseline methods is the state-of-the-art single eye appearance-based gaze estimation method~\cite{zhang2015appearance}, which originally used the LeNet~\cite{jia2014caffe,lecun1998gradient} architecture. For a fair comparison, we instead used the AlexNet architecture as in our proposed model (see \autoref{sec:model_details}). Eye images were cropped by taking the center of the eye corners as the image center and with a width of 1.5 times the distance between the corners, and resized to $60\times36$ pixels as proposed in~\cite{zhang2015appearance}. In this case, each individual eye became the input to the model, and the reference point $\bm{x}$ was set to the middle of the inner and outer eye corners. \vspace{-1em} \paragraph{iTracker} Since neither code nor models were available, we re-implemented the iTracker architecture~\cite{krafka2016eye} according to the description provided in the paper. Face images were cropped in the same manner as for our proposed method and resized to $224\times224$ pixels. 
Eye images were cropped by taking the middle point of the inner and outer eye corners as the image center and with a width of 1.7 times the distance between the corners, and resized to $224\times224$ pixels. For the 2D gaze estimation task, we also used the face grid feature~\cite{krafka2016eye} with a size of $25\times25$ pixels. The face grid encodes the face size and location inside the original image. For a fair comparison with our proposed architecture, we also evaluated the model using the same AlexNet CNN architecture as {\em iTracker (AlexNet)}. To validate the effect of the face input, we also tested the iTracker (AlexNet) architecture taking only the two eye images as the {\em Two eyes} model. \subsection{2D Gaze Estimation} \label{sec:results_2D} \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/results_mpii_2D.pdf} \caption{Error for 2D gaze estimation on the MPIIGaze dataset in millimetres (Euclidean error) and degrees (angular error). The face grid was used as additional input. Error bars indicate standard deviations.} \label{fig:performance_2d} \end{figure} \autoref{fig:performance_2d} summarises the results for the 2D gaze estimation task. Each row corresponds to one method, and if not noted otherwise, the face grid feature was used in addition to the image input. The left axis shows the Euclidean error between estimated and ground-truth gaze positions in the screen coordinate system in millimetres. The right axis shows the corresponding angular error that was approximately calculated from the camera and monitor calibration information provided by the dataset, using the same reference position as for the 3D gaze estimation task. As can be seen from~\autoref{fig:performance_2d}, all methods that take full-face information as input significantly outperformed the single eye baseline. The single face image model achieved results competitive with the iTracker and the iTracker (AlexNet) models. 
Performance was further improved by incorporating the proposed spatial weights network. The proposed spatial weights network achieved a statistically significant 7.2\% performance improvement (paired t-test: $p < 0.01$) over the second-best single face model. These findings are in general mirrored for the EYEDIAP dataset shown in~\autoref{fig:performance_2d_eyediap}, although the overall performance is worse, most likely due to the lower resolution and the limited amount of training images. While the iTracker architecture performed worse than the two eyes model, our proposed model still performed the best. \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/results_eyediap_2D.pdf} \caption{Error for 2D gaze estimation on the EYEDIAP dataset in millimetres (Euclidean error) and degrees (angular error). Error bars indicate standard deviations.} \label{fig:performance_2d_eyediap} \end{figure} \subsection{3D Gaze Estimation} \label{sec:results_3D} \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/results_mpii_3D.pdf} \caption{Error for 3D gaze estimation on the MPIIGaze dataset in degrees (angular error) and millimetres (Euclidean error). Error bars indicate standard deviations.} \label{fig:performance_3d} \end{figure} \autoref{fig:performance_3d} summarises the results for the 3D gaze estimation task. The left axis shows the angular error that was directly calculated from the estimated and ground-truth 3D gaze vectors. The right axis shows the corresponding Euclidean error that was approximated by intersecting the estimated 3D gaze vector with the screen plane. Compared to the 2D gaze estimation task, the performance gap between iTracker and the single face model is larger (0.7 degrees). Since the AlexNet-based iTracker model achieved performance similar to the single face model, the performance drop seems to be partly due to its network architecture. 
Our proposed model achieved a significant performance improvement of 14.3\% (paired t-test: $p < 0.01$) over iTracker, and a performance consistent with the 2D case. \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/results_eyediap_3D.pdf} \caption{Error for 3D gaze estimation on the EYEDIAP dataset in degrees (angular error) and millimetres (Euclidean error). Error bars indicate standard deviations.} \label{fig:performance_3d_eyediap} \end{figure} As shown in~\autoref{fig:performance_3d_eyediap}, the proposed model also achieved the best performance for the 3D gaze estimation task on the EYEDIAP dataset. \subsection{Head Pose and Facial Appearance} \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/results_headpose.pdf} \caption{Gaze estimation error from the different models related to head pose. The numbers are angular error for 3D gaze estimation in degrees. Error bars indicate standard deviations.} \label{fig:performance_head_pose} \end{figure} One natural hypothesis about why full-face input can help the gaze estimation task is that it brings head pose information, which can serve as a prior for inferring gaze direction. In this section, we provide more insights on this hypothesis by comparing performance using face images {\em without} eye regions with a simple head pose-based baseline. More specifically, using the MPIIGaze dataset, we created face images where both eye regions were blocked with a gray box according to the facial landmark annotation. We compared the estimation performance using eye-blocked face images with: 1) a naive estimator directly treating the head pose as gaze direction, and 2) a linear regression function trained to output gaze directions from head pose input. The angular error of these methods for the 3D estimation task is shown in~\autoref{fig:performance_head_pose}. 
While the error using eye-blocked face images was larger than with the original single face architecture (5.5 degrees), the performance was better than that of the baseline head pose-based estimators. This indicates, somewhat surprisingly, that the impact of taking full-face input goes beyond head pose information alone, and that the facial appearance itself is beneficial for inferring gaze direction. \section{Full-Face Gaze Estimation with a Spatial Weights CNN} \label{sec:spatial_weights} For both the 2D and 3D gaze estimation tasks, the core challenge is to learn the regression function $f$. While a large body of work has only considered the use of the eye region for this task, we instead aim to explore the potential of extracting information from the full face. Our hypothesis is that other regions of the face beyond the eyes contain valuable information for gaze estimation. \begin{figure*}[t] \center \includegraphics[width=0.9\textwidth]{figures/model.pdf} \caption{Spatial weights CNN for full-face appearance-based gaze estimation. The input image is passed through multiple convolutional layers to generate a feature tensor $\bm{U}$. The proposed spatial weights mechanism takes $\bm{U}$ as input to generate the weight map $\bm{W}$, which is applied to $\bm{U}$ using element-wise multiplication. The output feature tensor $\bm{V}$ is fed into the following fully connected layers to -- depending on the task -- output the final 2D or 3D gaze estimate.} \label{fig:model} \end{figure*} As shown in~\autoref{fig:model}, to this end we propose a CNN with spatial weights (spatial weights CNN) for full-face appearance-based 2D and 3D gaze estimation. To efficiently use the information from full-face images, we propose to use additional layers that learn spatial weights for the activation of the last convolutional layer. The motivation behind this spatial weighting is two-fold. 
First, there can be image regions that do not contribute to the gaze estimation task, such as background regions, and activations from such regions have to be suppressed for better performance. Second, and more importantly, compared to the eye region, which is expected to always contribute to the gaze estimation performance, activations from other facial regions are expected to be more subtle. The role of facial appearance also depends on various input-dependent conditions such as head pose, gaze direction and illumination, and thus has to be properly enhanced according to the input image appearance. Although, theoretically, such differences can be learned by a normal network, we opted to introduce a mechanism that more explicitly forces the network to learn that different regions of the face can have different importance for estimating gaze for a given test sample. To implement this stronger supervision, we used the concept of three $1 \times 1$ convolutional layers plus rectified linear unit layers from~\cite{tompson2015efficient} as a basis and adapted it to our full-face gaze estimation task. Specifically, instead of generating multiple heatmaps (one to localise each body joint), we only generated a single heatmap encoding the importance across the whole face image. We then performed an element-wise multiplication of this weight map with the feature map of the previous convolutional layer. An example weight map, averaged over all samples from the MPIIGaze dataset, is shown in~\autoref{fig:model}. \subsection{Spatial Weights Mechanism} The proposed spatial weights mechanism includes three additional convolutional layers with filter size $1\times1$ followed by a rectified linear unit layer (see \autoref{fig:model}). 
Given an activation tensor $\bm{U}$ of size $N \times H \times W$ as input from the convolutional layer, where $N$ is the number of feature channels and $H$ and $W$ are the height and width of the output, the spatial weights mechanism generates a $H \times W$ spatial weight matrix $\bm{W}$. Weighted activation maps are obtained from element-wise multiplication of $\bm{W}$ with the original activation $\bm{U}$ as \begin{equation} \bm{V}_c=\bm{W}\odot\bm{U}_c, \end{equation} where $\bm{U}_c$ is the $c$-th channel of $\bm{U}$, and $\bm{V}_c$ corresponds to the weighted activation map of the same channel. These maps are stacked to form the weighted activation tensor $\bm{V}$, which is fed into the next layer. Different from spatial dropout~\cite{tompson2015efficient}, the spatial weights mechanism weights the information continuously and keeps the information from different regions. The same weights are applied to all feature channels, and thus the estimated weights directly correspond to the facial region in the input image. During training, the filter weights of the first two convolutional layers are initialized randomly from a Gaussian distribution with 0 mean and 0.01 variance, and a constant bias of 0.1. The filter weights of the last convolutional layer are initialized randomly from a Gaussian distribution with 0 mean and 0.001 variance, and a constant bias of 1. The gradients with respect to $\bm{U}_c$ and $\bm{W}$ are \begin{equation} \frac{\partial \bm{V}_c}{\partial \bm{U}_c}=\bm{W}, \end{equation} and \begin{equation} \frac{\partial \bm{V}}{\partial \bm{W}}=\frac{1}{N}\sum_c^N{\bm{U}_c}. \end{equation} The gradient with respect to $\bm{W}$ is normalised by the total number of feature maps $N$, since the weight map $\bm{W}$ affects all feature maps in $\bm{U}$ equally. 
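The forward pass and the channel-normalised backward pass of this weighting can be sketched in NumPy (an illustrative sketch, not our training code; the $1\times1$ layers that produce $\bm{W}$ are omitted and the weight map is taken as given):

```python
import numpy as np

def spatial_weights_forward(U, W):
    """Apply the H x W weight map W to every channel of U (shape N x H x W)."""
    # V_c = W (element-wise) U_c for each channel c; broadcasting covers all channels.
    return W[None, :, :] * U

def spatial_weights_backward(U, W, dV):
    """Gradients of the element-wise weighting, given the upstream gradient dV."""
    N = U.shape[0]
    dU = W[None, :, :] * dV          # dL/dU_c = W (element-wise) dL/dV_c
    dW = (U * dV).sum(axis=0) / N    # averaged over the N channels, as in the text
    return dU, dW
```

Broadcasting applies the single $H \times W$ map to all $N$ channels, matching the fact that the same weights are shared across feature channels.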
\subsection{Implementation Details} \label{sec:model_details} As the baseline CNN architecture we used AlexNet~\cite{krizhevsky2012imagenet}, which consists of five convolutional layers and two fully connected layers. We trained an additional linear regression layer on top of the last fully connected layer to predict the gaze location $\bm{p}$ in screen coordinates for 2D gaze estimation or the normalized gaze vector $\bm{\hat g}$ for the 3D gaze estimation task. We used the pre-training result on the LSVRC-2010 ImageNet training set~\cite{krizhevsky2012imagenet} to initialize the five convolutional layers, and fine-tuned the whole network on the MPIIGaze dataset~\cite{zhang2015appearance}. The input image size of our networks was $448\times448$ pixels, which results in an activation $\bm{U}$ of size $256\times13\times13$ after the pooling layer of the 5-th convolutional layer. For 2D gaze estimation, input face images were cropped according to the six facial landmark locations (four eye corners and two mouth corners). While in practice this is assumed to be done with face alignment methods such as~\cite{baltruvsaitis2014continuous}, in the following experiments we used dataset-provided landmark locations. The centroid of the six landmarks was used as the center of the face, and a rectangle with a width of 1.5 times the maximum distance between landmarks was used as the face bounding box. The loss function was the $\ell 1$ distance between the predicted and ground-truth gaze positions in the target screen coordinate system. For 3D gaze estimation, the reference point $\bm{x}$ was defined as the center of the 3D locations of the same six facial landmarks. We fit the generic 3D face model provided with MPIIGaze to the landmark locations to estimate the 3D head pose. During image normalization, we defined $d_s$ and $\bm{C}_s$ so that the input face image size became 448$\times$448 pixels. 
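The face-cropping rule described above (landmark centroid as the face centre, box side 1.5 times the maximum inter-landmark distance) can be sketched as follows; the function name and array layout are illustrative choices, not part of our released code:

```python
import numpy as np

def face_bounding_box(landmarks, scale=1.5):
    """Square face box from six landmarks (four eye corners, two mouth corners).

    landmarks: (6, 2) array of (x, y) pixel coordinates.
    Returns (cx, cy, size): the box centre and its side length.
    """
    cx, cy = landmarks.mean(axis=0)  # centroid of the six landmarks
    # maximum pairwise distance between landmarks
    dists = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
    return cx, cy, scale * dists.max()
```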
In preliminary experiments we noticed that the additional head pose feature proposed by Zhang et al.~\cite{zhang2015appearance} did not improve the performance in the full-face case. In this work we therefore only used image features. The loss function was the $\ell 1$ distance between the predicted and ground-truth gaze angle vectors in the normalized space. \section{Introduction} A large number of works in computer vision have studied the problem of estimating human eye gaze~\cite{hansen2010eye} given its importance for different applications, such as human-robot interaction~\cite{mutlu2009footing}, affective computing~\cite{d2012gaze}, and social signal processing~\cite{vinciarelli2008social}. While early methods typically required settings in which lighting conditions or head pose could be controlled~\cite{lu2014adaptive,pomerleau1993non,tan2002appearance,williams2006sparse}, the latest appearance-based methods using convolutional neural networks (CNN) have paved the way for gaze estimation in everyday settings that are characterised by significant amounts of lighting and appearance variation~\cite{zhang2015appearance}. Despite these advances, previous appearance-based methods have only used image information encoded from one or both eyes. \begin{figure}[t] \center \includegraphics[width=\columnwidth]{figures/pipeline_new.pdf} \caption{Overview of the proposed full-face appearance-based gaze estimation pipeline. Our method only takes the face image as input and performs 2D and 3D gaze estimation using a convolutional neural network with spatial weights applied on the feature maps.} \label{fig:pipeline} \end{figure} Recent results by Krafka et al.\ indicated that a multi-region CNN architecture that takes both eye and face images as input can benefit gaze estimation performance~\cite{krafka2016eye}. 
While, intuitively, human gaze is closely linked to eyeball pose and eye images should therefore be sufficient to estimate gaze direction, it is indeed conceivable that especially machine learning-based methods can leverage additional information from other facial regions. These regions could, for example, encode head pose or illumination-specific information across larger image areas than those available in the eye region. However, it is still an open question whether a (more efficient and elegant) face-only approach can work, which facial regions are most important for such a full-face appearance-based method, and whether current deep architectures can encode the information in these regions. In addition, the gaze estimation task in~\cite{krafka2016eye} was limited to a simple 2D screen mapping and the potential of the full-face approach for 3D gaze estimation thus remains unclear. The goal of this work is to shed light on these questions by providing a detailed analysis of the potential of the full-face approach for 2D and 3D appearance-based gaze estimation (see \autoref{fig:pipeline}). The specific contributions of this work are two-fold. First, we propose a full-face CNN architecture for gaze estimation that, in stark contrast to a long-standing tradition in gaze estimation, takes the full face image as input and directly regresses to 2D or 3D gaze estimates. We quantitatively compare our full-face method with existing eye-only~\cite{zhang2015appearance} and multi-region~\cite{krafka2016eye} methods and show that it can achieve a person-independent 3D gaze estimation accuracy of 4.8$^\circ$ on the challenging MPIIGaze dataset, thereby improving by 14.3\% over the state of the art. Second, we propose a {\em spatial weights} mechanism to efficiently encode information about different regions of the full face into a standard CNN architecture. 
The mechanism learns spatial weights on the activation maps of the convolutional layers, reflecting that the information contained in different facial regions can be of different importance for the gaze estimation task. Through further quantitative and qualitative evaluations we show that the proposed spatial weights network facilitates the learning of estimators that are robust to significant variation in illumination conditions as well as head pose and gaze directions available in current datasets. \section{Acknowledgements} This work was partly funded by the Cluster of Excellence on Multimodal Computing and Interaction (MMCI) at Saarland University, Germany, and a JST CREST Research Grant (JPMJCR14E1), Japan. \bibliographystyle{ieee} \section{Gaze Estimation Tasks} Before detailing our model architecture for full-face appearance-based gaze estimation, we first formulate and discuss two different gaze estimation tasks: 2D and 3D gaze estimation. A key contribution of this work is to investigate full-face appearance-based gaze estimation for both tasks. This not only leads to a generic model architecture but also provides valuable insights into the differences between and benefits gained from full-face information for both task formulations. Although the 3D task formulation poses additional technical challenges to properly handle the complex 3D geometry, it can be applied to different devices and setups without assuming a fixed camera-screen relationship. This formulation is therefore the most general and practically most relevant. If the application scenario can afford a fixed screen position, the 2D formulation is technically less demanding and therefore expected to show better accuracy. \subsection{2D Gaze Estimation} As the most straightforward strategy, the 2D gaze estimation task is formulated as a regression from the input image $\bm{I}$ to a 2-dimensional on-screen gaze location $\bm{p}$ as $\bm{p} = f(\bm{I})$, where $f$ is the regression function. 
Usually $\bm{p}$ is directly defined in the coordinate system of the target screen~\cite{lu2014adaptive,sugano2015appearance,tan2002appearance,valenti2012combining} or, more generally, a virtual plane defined in the camera coordinate system~\cite{krafka2016eye}. Since the relationship between eye appearance and gaze location depends on the position of the head, the regression function usually requires 3D head poses~\cite{valenti2012combining} or face bounding box locations~\cite{huang2015tabletgaze,krafka2016eye} in addition to eye and face images. It is important to note that, in addition to the fixed target plane, another important assumption in this formulation is that the input image $\bm{I}$ is always taken from the same camera with fixed intrinsic parameters. Although no prior work explicitly discussed this issue, trained regression functions cannot be directly applied to different cameras without proper treatment of the difference in projection models. \subsection{3D Gaze Estimation} In contrast, the 3D gaze estimation task is formulated as a regression from the input image $\bm{I}$ to a 3D gaze vector $\bm{g} = f(\bm{I})$. Similarly as for the 2D case, the regression function $f$ typically takes the 3D head pose as an additional input. The gaze vector $\bm{g}$ is usually defined as a unit vector originating from a 3D reference point $\bm{x}$ such as the center of the eye~\cite{funes2015gaze,lu2014learning,lu2015gaze,wood2015rendering,zhang2015appearance}. Assuming a calibrated camera and given the 3D pose of the target plane, the 3D gaze vector $\bm{g}$ can be converted into an on-screen gaze location: as in the 2D case, $\bm{p}$ is obtained by intersecting the 3D gaze vector $\bm{g}$ with the target plane. 
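The conversion from $\bm{g}$ to $\bm{p}$ described above amounts to a ray-plane intersection. A minimal sketch, assuming the target plane is given by a point and a normal in camera coordinates (argument names are illustrative):

```python
import numpy as np

def gaze_point_on_plane(x, g, plane_point, plane_normal):
    """Intersect the gaze ray p(t) = x + t*g with the target plane.

    x: 3D reference point (gaze origin); g: gaze vector;
    plane_point, plane_normal: 3D pose of the target plane.
    All quantities are given in camera coordinates.
    """
    denom = np.dot(plane_normal, g)
    if abs(denom) < 1e-9:
        raise ValueError("gaze ray is parallel to the target plane")
    t = np.dot(plane_normal, plane_point - x) / denom
    return x + t * g
```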
\vspace{-1em} \paragraph{Image Normalization} To both handle different camera parameters and address the task of cross-person training efficiently, Sugano et al.\ proposed a data normalization procedure for 3D appearance-based gaze estimation~\cite{sugano2014learning}. The basic idea is to apply a perspective warp to the input image so that the estimation can be performed in a normalized space with fixed camera parameters and reference point location. Given the input image $\bm{I}$ and the location of the reference point $\bm{x}$, the task is to compute the conversion matrix $\bm{M} = \bm{S}\bm{R}$. $\bm{R}$ is the inverse of the rotation matrix that rotates the camera so that it looks at the reference point and so that the $x$-axes of both the camera and head coordinate systems become parallel. The scaling matrix $\bm{S}$ is defined so that the reference point is located at a distance $d_s$ from the origin of the normalized camera coordinate system. The conversion matrix $\bm{M}$ rotates and scales any 3D points in the input camera coordinate system to the normalized coordinate system, and the same conversion can be applied to the input image $\bm{I}$ via perspective warping using the image transformation matrix $\bm{W} = \bm{C}_s\bm{M}\bm{C}^{-1}_r$. $\bm{C}_r$ is the projection matrix corresponding to the input image obtained from a camera calibration, and $\bm{C}_s$ is another predefined parameter that defines the camera projection matrix in the normalized space. During training, all training images $\bm{I}$ with ground-truth gaze vectors $\bm{g}$ are normalized to or directly synthesized~\cite{sugano2014learning,wood2015rendering} in the training space, which is defined by $d_s$ and $\bm{C}_s$. Ground-truth gaze vectors are also normalized as $\bm{\hat g} = \bm{M}\bm{g}$, while in practice they are further converted to an angular representation (horizontal and vertical gaze direction) assuming a unit length. 
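The construction of $\bm{M} = \bm{S}\bm{R}$ can be sketched as follows. This is a simplified reading of the normalization procedure described above: the rotated camera looks along its $z$-axis at the reference point, the $x$-axes of camera and head are aligned via the head rotation matrix, and the scaling fixes the distance $d_s$; the default value of $d_s$ and the function signature are illustrative assumptions:

```python
import numpy as np

def normalization_matrix(x, head_R, d_s=600.0):
    """Conversion matrix M = S R for gaze normalization (simplified sketch).

    x: 3D reference point in camera coordinates; head_R: 3x3 head rotation
    matrix whose first column is the x-axis of the head coordinate system;
    d_s: target distance of the reference point in the normalized space.
    """
    z = x / np.linalg.norm(x)            # normalized camera looks along z at x
    y = np.cross(z, head_R[:, 0])        # align x-axes of camera and head
    y /= np.linalg.norm(y)
    x_axis = np.cross(y, z)
    R = np.stack([x_axis, y, z])         # rows are the rotated camera axes
    S = np.diag([1.0, 1.0, d_s / np.linalg.norm(x)])
    return S @ R
```

With this $\bm{M}$, the reference point maps to $(0,0,d_s)$, ground-truth vectors normalize as $\bm{\hat g} = \bm{M}\bm{g}$, and the image warp is $\bm{W} = \bm{C}_s\bm{M}\bm{C}^{-1}_r$ for given projection matrices.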
At test time, test images are normalized in the same manner and their corresponding gaze vectors in the normalized space are estimated via the regression function trained in the normalized space. Estimated gaze vectors are then transformed back to the input camera coordinates by $\bm{g} = \bm{M}^{-1}\bm{\hat g}$. \section{Related Work} Our work is related to previous works on appearance-based gaze estimation for both the 2D and 3D gaze estimation tasks, in particular recent multi-region methods, and means to encode spatial information in CNNs. \vspace{-1em} \paragraph{Appearance-Based Gaze Estimation} Gaze estimation methods are typically categorised as either model-based or appearance-based. While model-based methods estimate gaze direction using geometric models of the eyes and face~\cite{chen20083d,valenti2012combining,wood2014eyetab}, appearance-based methods directly regress from eye images to gaze direction. Early appearance-based methods assumed a fixed head pose and training data for each user~\cite{baluja1994non,tan2002appearance,williams2006sparse}. Later works focused on pose-independent gaze estimation either from monocular RGB~\cite{lu2014learning,sugano2015appearance} or depth images~\cite{funes2015gaze} but still required person-specific training. A promising direction to achieve pose- and person-independence is learning-based methods, but these require large amounts of labelled training data~\cite{krafka2016eye,mora2013person,sugano2014learning,zhang2015appearance}. Consequently, recent years have seen an increasing number of gaze estimation datasets collected in everyday settings~\cite{he2015omeg,mora2014eyediap,smith2013gaze}, including some at large scale~\cite{krafka2016eye,zhang2015appearance}, or consisting of synthetic data~\cite{sugano2014learning,wood16_etra,wood2015rendering}. In this work, we also focus on this most challenging pose- and person-independent gaze estimation task using a leave-one-person-out cross-validation scheme. 
\vspace{-1em} \paragraph{2D vs. 3D Gaze Estimation} Appearance-based gaze estimation methods can be further categorised depending on whether the regression target is in 2D or 3D. Early works assumed a fixed head pose of the target person~\cite{baluja1994non,tan2002appearance,valenti2012combining,williams2006sparse}, and consequently focused on the 2D gaze estimation task where the estimator is trained to output on-screen gaze locations. While more recent methods use 3D head pose~\cite{lu2015gaze,sugano2015appearance} or the size and location of the face bounding box~\cite{krafka2016eye} to allow for free head movement, they still formulate the task as a direct mapping to 2D on-screen gaze locations. The underlying assumption behind these 2D approaches is that the target screen plane is fixed in the camera coordinate system. They therefore do not allow for free camera movement after training, which can be a practical limitation, especially for learning-based person-independent estimators. In contrast, in 3D gaze estimation, the estimator is trained to output 3D gaze directions in the camera coordinate system~\cite{funes2015gaze,lu2014learning,lu2015gaze,mora2013person,wood2015rendering,zhang2015appearance}. The 3D formulation is closely related to pose- and person-independent training approaches, and the most important technical challenge is how to efficiently train estimators without requiring too much training data. To facilitate model training, Sugano et al.\ proposed a data normalisation technique to restrict the appearance variation into a single, normalized training space~\cite{sugano2014learning}. Although this requires additional technical components, such as 3D head pose estimation, 3D methods have a technical advantage in that they can estimate gaze locations for any target object and camera setup. Since these two approaches handle geometry information differently, the role of the full-face input can also be different between 2D and 3D approaches. 
\vspace{-1em} \paragraph{Multi-Region Gaze Estimation} Despite these advances, most previous works used a single eye image as input to the regressor, and only few considered alternative approaches, such as using two images, one of each eye~\cite{huang2015tabletgaze}, or a single image covering both eyes~\cite{he2015omeg}. Krafka et al.\ recently presented a multi-region 2D gaze estimation method that took individual eye images, the face image, and a face grid as input~\cite{krafka2016eye}. Their results suggested that adding the face image can be beneficial for appearance-based gaze estimation. Our work is the first to explore the potential of using information from the full face for both 2D and 3D appearance-based gaze estimation. Pushing this idea forward, we further propose the first method that learns a gaze estimator only from the full face image in a truly end-to-end manner. \vspace{-1em} \paragraph{Spatial Encoding in CNNs} Convolutional neural networks have been successful not only for classification~\cite{krizhevsky2012imagenet} but also for regression~\cite{Simonyan14c}, including gaze estimation~\cite{zhang2015appearance}. Several previous works encoded spatial information more efficiently, for example by cropping sub-regions of the image~\cite{girshickICCV15fastrcnn,jaderberg2015spatial} or treating different regions of the image equally~\cite{he2014spatial}. Tompson et al.\ used a spatial dropout before the fully connected layer to avoid overfitting during training, but the dropout extended to entire feature maps instead of single units~\cite{tompson2015efficient}. We instead propose a spatial weights mechanism that encodes the weights for the different regions of the full face, suppressing noisy regions and enhancing the contribution from regions with low activation.
\section{Introduction} The $q$\nobreakdash-deformations of $\textup{SU}(2)$ for real deformation parameters $0<q<1$ discovered in~\cite{Woronowicz:Twisted_SU2} are among the first and most important examples of compact quantum groups. Here we construct a family of $q$\nobreakdash-deformations of~$\textup{SU}(2)$ for \emph{complex} parameters $q\in{\mathbb C}^*={\mathbb C}\setminus\{0\}$. For $q\notin \mathbb{R}$, $\textup{SU}_q(2)$ is not a compact quantum group, but a braided compact quantum group in a suitable tensor category. A compact quantum group~$\mathbb{G}$ as defined in~\cite{Woronowicz:CQG} is a pair $\mathbb{G} = (A,\Delta)$ where $\Delta\colon A\rightarrow A\otimes A$ is a coassociative morphism satisfying the cancellation law~\eqref{cancellation} below. The \(\textup{C}^*\)\nobreakdash-algebra~$A$ is viewed as the algebra of continuous functions on~$\mathbb{G}$. The theory of compact quantum groups is formulated within the category~$\mathcal{C}^*$ of $\textup{C}^*$-algebras. This category with the minimal tensor functor~$\otimes$ is a monoidal category (see~\cite{MacLane:Categories}). A more general theory may be formulated within a monoidal category $(\mathcal{D}^*,\boxtimes)$, where~$\mathcal{D}^*$ is a suitable category of $\textup{C}^*$\nobreakdash-algebras with additional structure and $\boxtimes\colon \mathcal{D}^*\times\mathcal{D}^*\rightarrow\mathcal{D}^*$ is a monoidal bifunctor on~$\mathcal{D}^*$. Braided Hopf algebras may be defined in braided monoidal categories (see \cite{Majid:Quantum_grp}*{Definition 9.4.5}). The braiding becomes unnecessary when we work in categories of \(\textup{C}^*\)\nobreakdash-algebras. Let $A$ and~$B$ be $\textup{C}^*$\nobreakdash-algebras. The multiplier algebra of~$B$ is denoted by~$\operatorname{M}(B)$. A \emph{morphism} $\pi\in\operatorname{Mor}(A,B)$ is a $^*$\nobreakdash-homomorphism $\pi\colon A\rightarrow\operatorname{M}(B)$ with $\pi(A)B = B$. 
If \(A\) and~\(B\) are unital, a morphism is simply a unital $^*$\nobreakdash-homomorphism. Let~$\mathbb{T}$ be the group of complex numbers of modulus~$1$ and let~$\mathcal{C}^*_{\mathbb{T}}$ be the category of $\mathbb{T}$\nobreakdash-$\textup{C}^*$-algebras; its objects are $\textup{C}^*$\nobreakdash-algebras with an action of~$\mathbb{T}$, arrows are \(\mathbb{T}\)\nobreakdash-equivariant morphisms. We shall use a family of monoidal structures~$\boxtimes_{\zeta}$ on~$\mathcal{C}^*_{\mathbb{T}}$ parametrised by $\zeta\in\mathbb{T}$, which is defined as in~\cite{Meyer-Roy-Woronowicz:Twisted_tensor}. The $\textup{C}^*$\nobreakdash-algebra~${A}$ of~$\textup{SU}_q(2)$ is defined as the universal unital $\textup{C}^*$\nobreakdash-algebra generated by two elements $\alpha,\gamma$ subject to the relations \[ \etyk{SU2q} \left\{ \begin{array}{r@{\;=\;}l} \alpha^{*}\alpha+\gamma^{*}\gamma&\mathds{1},\\ \alpha\alpha^{*}+\modul{q}^{2}\gamma^{*}\gamma&\mathds{1},\\ \gamma\gamma^{*}&\gamma^{*}\gamma,\\ \alpha\gamma&\overline{q}\gamma\alpha,\\ \alpha\gamma^{*}&q\gamma^{*}\alpha. \end{array} \right. \] For real~$q$, the algebra~\({A}\) coincides with the algebra of continuous functions on the quantum $\textup{SU}_q(2)$ group described in~\cite{Woronowicz:Twisted_SU2}: ${A} =\C(\textup{SU}_{q}(2))$. Then there is a unique morphism $\Delta\colon {A}\to {A} \otimes {A}$ with \[ \etyk{Delta} \begin{array}{r@{\;=\;}l} \Delta(\alpha)&\alpha\otimes\alpha-q\gamma^{*}\otimes\gamma,\\ \Delta(\gamma)&\gamma\otimes\alpha+\alpha^{*}\otimes\gamma. 
\end{array} \] It is coassociative, that is, \[ \etyk{coassociative} (\Delta\otimes\textup{id}_{{A}})\,{\raisebox{.5mm}{\tiny o}}\,\Delta = (\textup{id}_{{A}}\otimes\Delta)\,{\raisebox{.5mm}{\tiny o}}\,\Delta, \] and has the following cancellation property: \[ \etyk{cancellation} \begin{aligned} A\otimes A &= \Delta(A)(A\otimes\mathds{1}),\\ A\otimes A &= \Delta(A)(\mathds{1}\otimes A); \end{aligned} \] here and below, $EF$ for two closed subspaces $E$ and~$F$ of a $\textup{C}^*$\nobreakdash-algebra denotes the norm-closed linear span of the set of products~$ef$ for $e\in E$, $f\in F$. If~$q$ is not real, then the operators on the right hand sides of~\eqref{Delta} do not satisfy the relations~\eqref{SU2q}, so there is no morphism~$\Delta$ satisfying~\eqref{Delta}. Instead, \eqref{Delta} defines a \(\mathbb{T}\)\nobreakdash-equivariant morphism \({A}\to {A}\boxtimes_\zeta {A}\) for the monoidal functor~$\boxtimes_\zeta$ with $\zeta = q/\overline{q}$. This morphism in~$\mathcal{C}^*_{\mathbb{T}}$ satisfies appropriate analogues of the coassociativity and cancellation laws \eqref{coassociative} and~\eqref{cancellation}, so we get a braided compact quantum group. Here the action of~\(\mathbb{T}\) on~\(A\) is defined by \(\rho_z(\alpha)=\alpha\) and \(\rho_z(\gamma)=z\gamma\) for all \(z\in\mathbb{T}\). For $X,Y\in\Obj(\mathcal{C}^*)$, $X\otimes Y$ contains commuting copies $X\otimes\mathds{1}_{Y}$ of~$X$ and $\mathds{1}_{X}\otimes Y$ of~$Y$ with \(X\otimes Y=(X\otimes\mathds{1}_{Y})(\mathds{1}_{X}\otimes Y)\). Similarly, $X\boxtimes_\zeta Y$ for $X,Y\in\mathcal{C}^*_{\mathbb{T}}$ is a $\textup{C}^*$\nobreakdash-algebra with injective morphisms $j_{1}\in\operatorname{Mor}(X,X\boxtimes_\zeta Y)$ and $j_{2}\in\operatorname{Mor}(Y,X\boxtimes_\zeta Y)$ such that $X\boxtimes_\zeta Y = j_{1}(X)j_{2}(Y)$. 
For $\mathbb{T}$\nobreakdash-homogeneous elements $x\in X_k$ and $y\in Y_l$ (as defined in~\eqref{homel}), we have the commutation relation \[ \etyk{com} j_{1}(x)j_{2}(y)=\zeta^{k l}j_{2}(y)j_{1}(x). \] The following theorem contains the main result of this paper: \begin{Thm} \label{main} Let \(q\in\mathbb{C}\setminus\{0\}\) and \(\zeta=q/\overline{q}\). Then \begin{enumerate} \item there is a unique \(\mathbb{T}\)\nobreakdash-equivariant morphism $\Delta\in\operatorname{Mor}({A},{A}\boxtimes_\zeta {A})$ with \[ \etyk{Delta1} \begin{aligned} \Delta(\alpha)&= j_{1}(\alpha)j_{2}(\alpha)-qj_{1}(\gamma)^{*}j_{2}(\gamma),\\ \Vs{5}\Delta(\gamma)&= j_{1}(\gamma)j_{2}(\alpha)+j_{1}(\alpha)^{*}j_{2}(\gamma); \end{aligned} \] \item $\Delta$ is coassociative, that is, \[ (\Delta\boxtimes_\zeta\textup{id}_{{A}})\,{\raisebox{.5mm}{\tiny o}}\,\Delta = (\textup{id}_{{A}}\boxtimes_\zeta\Delta)\,{\raisebox{.5mm}{\tiny o}}\,\Delta; \] \item $\Delta$ obeys the cancellation law \[ j_{1}({A})\Delta({A}) = \Delta({A})j_{2}({A}) = {A}\boxtimes_\zeta {A}. \] \end{enumerate} \end{Thm} We also describe some important features of the representation theory of~$\textup{SU}_q(2)$ to explain the definition of~\(\textup{SU}_q(2)\), and we relate~\(\textup{SU}_q(2)\) to the quantum \(\textup{U}(2)\) groups defined by Zhang and Zhao in~\cite{Zhang-Zhao:Uq2}. Braided Hopf algebras that deform \(\textup{SL}(2,{\mathbb C})\) are already described in~\cite{Majid:Examples_braided}. We could, however, find no precise relationship between Majid's braided Hopf algebra \(\textup{BSL}(2)\) and our braided compact quantum group~\(\textup{SU}_q(2)\). 
\section{The algebra of \texorpdfstring{$\textup{SU}_{q}(2)$}{SUq(2)}} The following elementary lemma explains what the defining relations~\eqref{SU2q} mean: \begin{Lem} \label{un} Two elements $\alpha$ and~$\gamma$ of a $\textup{C}^*$\nobreakdash-algebra satisfy the relations~\eqref{SU2q} if and only if the following matrix is unitary: \[ \begin{pmatrix}\alpha&-q\gamma^{*}\\\gamma&\alpha^{*}\end{pmatrix} \] \end{Lem} There are at least two ways to introduce a $\textup{C}^*$\nobreakdash-algebra with given generators and relations. One may consider the algebra~$\mathcal{{A}}$ of all non-commutative polynomials in the generators and their adjoints and take the largest $\textup{C}^*$\nobreakdash-seminorm on~$\mathcal{{A}}$ vanishing on the given relations. The set~$\mathfrak{N}$ of elements with vanishing seminorm is an ideal in~$\mathcal{{A}}$. The seminorm becomes a norm on~$\mathcal{{A}}/\mathfrak{N}$. Completing $\mathcal{{A}}/\mathfrak{N}$ with respect to this norm gives the desired $\textup{C}^*$\nobreakdash-algebra~$A$. Another way is to consider the operator domain consisting of all families of operators satisfying the relations. Then~${A}$ is the algebra of all continuous operator functions on that domain (see~\cite{Kruszynski-Woronowicz:Gelfand-Naimark}). 
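Before applying either procedure, one can convince oneself that operators satisfying~\eqref{SU2q} exist: for \(0<\modul{q}<1\), a standard family acts on \(\ell^2({\mathbb N})\) by \(\alpha e_n=\sqrt{1-\modul{q}^{2n}}\,e_{n-1}\) and \(\gamma e_n=\overline{q}^{\,n}e_n\). The following numerical check with truncated matrices is our own illustration, not part of the paper's argument; the truncation is exact except for the second relation at the cut-off index:

```python
import numpy as np

def suq2_generators(q, N=60):
    """Truncated matrices of the standard l^2(N) representation:
    alpha e_n = sqrt(1 - |q|^(2n)) e_(n-1),  gamma e_n = conj(q)^n e_n."""
    n = np.arange(N)
    gamma = np.diag(np.conj(q) ** n)
    alpha = np.zeros((N, N), dtype=complex)
    alpha[n[:-1], n[1:]] = np.sqrt(1.0 - np.abs(q) ** (2 * n[1:]))
    return alpha, gamma

q = 0.5 * np.exp(0.7j)               # a non-real deformation parameter
alpha, gamma = suq2_generators(q)
adj = lambda m: m.conj().T           # Hermitian adjoint
I = np.eye(alpha.shape[0])

assert np.allclose(adj(alpha) @ alpha + adj(gamma) @ gamma, I)
assert np.allclose(alpha @ gamma, np.conj(q) * (gamma @ alpha))
assert np.allclose(alpha @ adj(gamma), q * (adj(gamma) @ alpha))
# The second relation holds away from the truncation edge:
M = alpha @ adj(alpha) + np.abs(q) ** 2 * (adj(gamma) @ gamma)
assert np.allclose(M[:-1, :-1], I[:-1, :-1])
```

The defect in the second relation is confined to the last basis vector of the truncated space and moves away as the truncation size grows.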
Applying one of these procedures to the relations~\eqref{SU2q} gives a $\textup{C}^*$\nobreakdash-algebra~${A}$ with two distinguished elements $\alpha,\gamma\in {A}$ that is universal in the following sense: \begin{Thm} \label{universal} Let~$\widetilde{{A}}$ be a $\textup{C}^*$\nobreakdash-algebra with two elements ${\widetilde \alpha},{\widetilde\gamma}\in\widetilde{{A}}$ satisfying \[ \etyk{SU2qtil} \left\{ \begin{array}{r@{\;=\;}l} \tilde\alpha^{*}\tilde\alpha+\tilde\gamma^{*}\tilde\gamma&\mathds{1},\\ \tilde\alpha\tilde\alpha^{*}+\modul{q}^{2}\tilde\gamma^{*}\tilde\gamma&\mathds{1},\\ \tilde\gamma\tilde\gamma^{*}&\tilde\gamma^{*}\tilde\gamma,\\ \tilde\alpha\tilde\gamma&\overline{q}\tilde\gamma\tilde\alpha,\\ \tilde\alpha\tilde\gamma^{*}&q\tilde\gamma^{*}\tilde\alpha. \end{array} \right. \] Then there is a unique morphism $\rho\in\operatorname{Mor}({A},\widetilde{{A}})$ with \(\rho(\alpha)={\widetilde \alpha}\) and \(\rho(\gamma)={\widetilde\gamma}\).\qed \end{Thm} The elements ${\widetilde \alpha}=\mathds{1}_{\C(\mathbb{T})}\otimes \alpha$ and ${\widetilde\gamma}=z\otimes\gamma$ of $\C(\mathbb{T})\otimes{A}$ satisfy~\eqref{SU2qtil}. Here $z\in\C(\mathbb{T})$ denotes the coordinate function on~$\mathbb{T}$. (Later, we also denote elements of~\(\mathbb{T}\) by~\(z\).) Theorem~\ref{universal} gives a unique morphism $\rho^A\in\operatorname{Mor}({A},\C(\mathbb{T})\otimes{A})$ with \[ \etyk{rho} \begin{array}{r@{\;=\;}l} \rho^A(\alpha)&\mathds{1}_{\C(\mathbb{T})}\otimes\alpha,\\ \rho^A(\gamma)&z\otimes\gamma. \end{array} \] This is a continuous \(\mathbb{T}\)\nobreakdash-action, so we may view $(A,\rho^A)$ as an object in the category~$\mathcal{C}^*_{\mathbb{T}}$ described in detail in the next section. \begin{Thm} \label{the:compare_q} The $\textup{C}^*$\nobreakdash-algebras~${A}$ for different~\(q\) with \(\modul{q}\neq0,1\) are isomorphic. 
\end{Thm} \begin{proof} During this proof, we write~\({A}_q\) for our \(\textup{C}^*\)\nobreakdash-algebra with parameter~\(q\). First, \({A}_q \cong {A}_{q'}\) for \(q'= q^{-1}\) by mapping \({A}_q\ni \alpha\mapsto \alpha'= \alpha^*\in {A}_{q'}\) and \({A}_q\ni \gamma\mapsto \gamma' = q^{-1} \gamma \in {A}_{q'}\). Routine computations show that \(\alpha'\) and~\(\gamma'\) satisfy the relations~\eqref{SU2q}, so that Theorem~\ref{universal} gives a unique morphism \({A}_q\to{A}_{q'}\) mapping \(\alpha\mapsto\alpha'\) and \(\gamma\mapsto\gamma'\). Doing this twice gives \(q''=q\), \(\alpha''=\alpha\) and \(\gamma''=\gamma\), so we get an inverse for the morphism \({A}_q\to{A}_{q'}\). This completes the first step. It reduces to the case \(0<\modul{q}<1\), which we assume from now on. Secondly, we claim that \({A}_q \cong {A}_{\modul{q}}\) if \(0<\modul{q}<1\). Equation~\eqref{SU2q} implies that~$\gamma$ is normal with \(\lVert\gamma\rVert \le 1\). So we may use the functional calculus for continuous functions on the closed unit disc $\mathbb{D}^{1}=\set{\lambda\in{\mathbb C}}{\modul{\lambda}\leq 1}$. We claim that \[ \etyk{Halmosh} \alpha f(\gamma)=f(\overline{q}\gamma)\alpha \] for all $f\in\C(\mathbb{D}^{1})$. Indeed, the set $B\subseteq \C(\mathbb{D}^1)$ of functions satisfying~\eqref{Halmosh} is a norm-closed, unital subalgebra of~$\C(\mathbb{D}^{1})$. The last two equations in~\eqref{SU2q} say that~$B$ contains the functions $f(\lambda)=\lambda$ and $f^*(\lambda)=\overline{\lambda}$. Since these separate the points of~$\mathbb{D}^{1}$, the Stone--Weierstrass Theorem gives $B=\C(\mathbb{D}^{1})$. Let $q=e^{i\theta}\modul{q}$ be the polar decomposition of~$q$. For $\lambda\in \mathbb{D}^{1}$, let \[ g(\lambda)= \begin{cases} \lambda \textup{e}^{\textup{i}\theta\log_{\modul{q}}\modul{\lambda}}& \text{for }\lambda\neq 0,\\ 0&\text{for }\lambda= 0. 
\end{cases} \] This is a homeomorphism of~$\mathbb{D}^{1}$ because we get the map~\(g^{-1}\) if we replace~$\theta$ by~$-\theta$. Thus $\gamma$ and $\gamma'=g(\gamma)$ generate the same $\textup{C}^*$\nobreakdash-algebra. We also get $g(\overline{q}\lambda)=\modul{q}g(\lambda)$, so inserting $f=g$ and $f=\overline{g}$ in~\eqref{Halmosh} gives \[ \alpha\gamma' =\modul{q}\gamma'\alpha,\qquad \alpha(\gamma')^{*} = \modul{q} (\gamma')^{*}\alpha. \] Moreover, $\modul{g(\lambda)}=\modul{\lambda}$ and hence $\modul{\gamma'}=\modul{\gamma}$. Thus we may replace~$\gamma$ by~$\gamma'$ in the first three equations of~\eqref{SU2q}. As a result, $\alpha$ and $\gamma'$ satisfy the relations~\eqref{SU2q} with~$\modul{q}$ instead of~$q$. Since~\(g\) is a homeomorphism, an argument as in the first step now shows that ${A}_q\cong {A}_{\modul{q}}$. Finally, \cite{Woronowicz:Twisted_SU2}*{Theorem A2.2, page 180} shows that the \(\textup{C}^*\)\nobreakdash-algebras~${A}_q$ for \(0<q<1\) are isomorphic. \end{proof} \section{Monoidal structure on \texorpdfstring{$\mathbb{T}$-$\textup{C}^*$}{T-C*}-algebras} \label{tcat} We are going to describe the monoidal category $(\mathcal{C}^*_{\mathbb{T}},\boxtimes_{\zeta})$ for $\zeta\in\mathbb{T}$ that is the framework for our braided quantum groups. Monoidal categories are defined in~\cite{MacLane:Categories}. The \(\textup{C}^*\)\nobreakdash-algebra \(\C(\mathbb{T})\) is a compact quantum group with comultiplication \[ \delta\colon \C(\mathbb{T}) \to \C(\mathbb{T}) \otimes \C(\mathbb{T}), \qquad z\mapsto z\otimes z.
\] An object of $\mathcal{C}^*_{\mathbb{T}}$ is, by definition, a pair~$(X,\rho^X)$ where~$X$ is a $\textup{C}^*$\nobreakdash-algebra and $\rho^{X}\in\operatorname{Mor}(X,\C(\mathbb{T})\otimes X)$ makes the diagram \[\etyk{dziacop} \begin{gathered} \xymatrix{ X\ar[rr]^{\rho^{X}}\ar[d]_{\rho^{X}}&& \C(\mathbb{T})\otimes X\ar[d]^{\delta\otimes\textup{id}} \\ \C(\mathbb{T})\otimes X \ar[rr]_-{\textup{id}_{\C(\mathbb{T})}\otimes\rho^{X}}&& \C(\mathbb{T})\otimes\C(\mathbb{T})\otimes X } \end{gathered} \] commute and satisfies the \emph{Podle\'s condition} \[\etyk{Podl} \rho^{X}(X)(\C(\mathbb{T})\otimes\mathds{1}_{X})=\C(\mathbb{T})\otimes X. \] This is equivalent to a continuous $\mathbb{T}$\nobreakdash-action on~$X$ by \cite{Soltan:Non_cpt_grp_act}*{Proposition 2.3}. Let $X,Y$ be $\mathbb{T}$\nobreakdash-\(\textup{C}^*\)-algebras. The set of morphisms from~$X$ to~$Y$ in~$\mathcal{C}^*_{\mathbb{T}}$ is the set $\operatorname{Mor}_{\mathbb{T}}(X,Y)$ of \(\mathbb{T}\)\nobreakdash-equivariant morphisms \(X\to Y\). By definition, $\varphi\in\operatorname{Mor}(X,Y)$ is \(\mathbb{T}\)\nobreakdash-equivariant if and only if the following diagram commutes: \[\etyk{tmor} \begin{gathered} \xymatrix{ X\ar[r]^-{\rho^{X}}\ar[d]_{\varphi}& \C(\mathbb{T})\otimes X\ar[d]^{\textup{id}_{\C(\mathbb{T})}\otimes\varphi} \\ Y\ar[r]_-{\rho^{Y}}& \C(\mathbb{T})\otimes Y } \end{gathered} \] Let $X\in\mathcal{C}^*_{\mathbb{T}}$. An element $x\in X$ is \emph{homogeneous of degree $n\in{\mathbb Z}$} if \begin{equation} \label{homel} \rho^{X}(x)=z^{n}\otimes x. \end{equation} The degree of a homogeneous element~$x$ is denoted by $\mathrm{deg}(x)$. Let~$X_{n}$ be the set of homogeneous elements of~$X$ of degree~$n$. This is a closed linear subspace of~$X$, and $X_{n}X_{m}\subseteq X_{n+m}$ and $X_{n}^{*}=X_{-n}$ for \(n,m\in{\mathbb Z}\). Moreover, finite sums of homogeneous elements are dense in~$X$. Let $\zeta\in{\mathbb T}$. 
The monoidal functor $\boxtimes_{\zeta}\colon \mathcal{C}^*_{\mathbb{T}}\times \mathcal{C}^*_{\mathbb{T}}\rightarrow \mathcal{C}^*_{\mathbb{T}}$ is introduced as in~\cite{Meyer-Roy-Woronowicz:Twisted_tensor}. We describe $X\boxtimes_\zeta Y$ using quantum tori. By definition, the $\textup{C}^*$\nobreakdash-algebra $\C({\mathbb T}^{2}_{\zeta})$ of the quantum torus is the universal $\textup{C}^*$\nobreakdash-algebra generated by two unitary elements~$U,V$ subject to the relation \(UV=\zeta\, VU\). There are unique injective morphisms $\iota_1,\iota_2\in\operatorname{Mor}(\C(\mathbb{T}),\C(\mathbb{T}^{2}_{\zeta}))$ with $\iota_1(z) = U$ and $\iota_2(z) = V$. Define $j_1\in\operatorname{Mor}(X,\C(\mathbb{T}^{2}_{\zeta})\otimes X\otimes Y)$ and $j_2\in\operatorname{Mor}(Y,\C(\mathbb{T}^{2}_{\zeta})\otimes X\otimes Y)$ by \begin{alignat*}{2} j_1(x) &= \bigl((\iota_1\otimes\textup{id}_X)\circ\rho^X(x)\bigr)\otimes\mathds{1}_Y &\qquad &\text{for all }x\in X,\\ j_2(y) &= \bigl((\iota_2\otimes\textup{id}_Y)\circ\rho^Y(y)\bigr)_{13} &\qquad &\text{for all }y\in Y, \end{alignat*} where the subscript~$13$ places the two tensor factors into the first and third legs of $\C(\mathbb{T}^{2}_{\zeta})\otimes X\otimes Y$. Let $x\in X_k$ and $y\in Y_l$. Then $j_1(x) = U^k\otimes x\otimes 1$ and $j_2(y) = V^l\otimes 1\otimes y$, so that we get the commutation relation~\eqref{com}. This implies $j_1(X)j_2(Y) =j_2(Y) j_1(X)$, so that $j_1(X)j_2(Y)$ is a $\textup{C}^*$\nobreakdash-algebra. We define \[ X\boxtimes_{\zeta} Y = j_1(X)j_2(Y). \] This construction agrees with the one in~\cite{Meyer-Roy-Woronowicz:Twisted_tensor} because \(\C(\mathbb{T}^{2}_{\zeta}) \cong \C(\mathbb{T}) \boxtimes_{\zeta} \C(\mathbb{T})\), see also the end of \cite{Meyer-Roy-Woronowicz:Twisted_tensor}*{Section 5.2}. There is a unique continuous $\mathbb{T}$\nobreakdash-action $\rho^{X\boxtimes_\zeta Y}$ on~\(X\boxtimes_\zeta Y\) for which \(j_1\) and~\(j_2\) are \(\mathbb{T}\)\nobreakdash-equivariant, that is, $j_1\in\operatorname{Mor}_{\mathbb{T}}(X,X\boxtimes_{\zeta}Y)$ and $j_2\in\operatorname{Mor}_{\mathbb{T}}(Y,X\boxtimes_{\zeta}Y)$.
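For the reader's convenience, we spell out the commutation relation between \(j_1\) and~\(j_2\): for \(x\in X_k\) and \(y\in Y_l\), the relation \(UV=\zeta\,VU\) gives \(U^kV^l=\zeta^{kl}V^lU^k\), and hence
\[
j_1(x)j_2(y) = U^kV^l\otimes x\otimes y = \zeta^{kl}\,V^lU^k\otimes x\otimes y = \zeta^{kl}\,j_2(y)j_1(x).
\]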
This action is constructed in a more general context in~\cite{Meyer-Roy-Woronowicz:Twisted_tensor_2}. We always equip~$X\boxtimes_\zeta Y$ with this \(\mathbb{T}\)\nobreakdash-action and thus view it as an object of~$\mathcal{C}^*_{\mathbb{T}}$. The construction~\(\boxtimes_\zeta\) is a bifunctor; that is, \(\mathbb{T}\)\nobreakdash-equivariant morphisms $\pi_1\in\operatorname{Mor}_{\mathbb{T}}(X_1,Y_1)$ and $\pi_2\in\operatorname{Mor}_{\mathbb{T}}(X_2,Y_2)$ induce a unique \(\mathbb{T}\)\nobreakdash-equivariant morphism $\pi_1\boxtimes_\zeta\pi_2\in\operatorname{Mor}_{\mathbb{T}}(X_1\boxtimes_\zeta X_2,Y_1\boxtimes_\zeta Y_2)$ with \begin{equation} \label{funct} (\pi_1\boxtimes_\zeta\pi_2)(j_{X_1}(x_1)j_{X_2}(x_2)) = j_{Y_1}(\pi_1(x_1))j_{Y_2}(\pi_2(x_2)) \end{equation} for all $x_1\in X_1$ and $x_2\in X_2$. \begin{Prop} \label{8.44} Let $x\in X$ and $y\in Y$ be homogeneous elements. Then \begin{align*} j_{1}(x)j_{2}(Y)&= j_{2}(Y)j_{1}(x),\\ j_{1}(X)j_{2}(y)&= j_{2}(y)j_{1}(X). \end{align*} \end{Prop} \begin{proof} Equation~\eqref{com} shows that \[ j_{1}(x)j_{2}(y)=j_{2}(y)j_{1}(\rho^X_{\zeta^{\mathrm{deg}(y)}}(x)) \] for any $x\in X$ and any homogeneous $y\in Y$. Since~\(\rho^X_{\zeta^{\mathrm{deg}(y)}}\) is an automorphism of~\(X\), this implies \(j_1(X)j_2(y) = j_2(y) j_1(X)\). Similarly, \(j_{1}(x)j_{2}(y) = j_{2}(\rho^Y_{\zeta^{\mathrm{deg}(x)}}(y))j_{1}(x)\) for homogeneous $x\in X$ and any $y\in Y$ implies \(j_{1}(x)j_{2}(Y) = j_{2}(Y)j_{1}(x)\). \end{proof} \section{Proof of the main theorem} Let $\alpha$ and~$\gamma$ be the distinguished elements of~${A}$. Let ${\widetilde \alpha}$ and~${\widetilde\gamma}$ be the elements of ${A}\boxtimes_\zeta {A}$ appearing on the right hand side of~\eqref{Delta1}: \[ \etyk{Deltatilde} \begin{array}{r@{\;=\;}l} {\widetilde \alpha}&j_{1}(\alpha)j_{2}(\alpha)-qj_{1}(\gamma)^{*}j_{2}(\gamma),\\ \Vs{5}{\widetilde\gamma}&j_{1}(\gamma)j_{2}(\alpha)+j_{1}(\alpha)^{*}j_{2}(\gamma). 
\end{array} \] We have $\mathrm{deg}(\alpha)=\mathrm{deg}(\alpha^{*})=0$, $\mathrm{deg}(\gamma)=1$ and $\mathrm{deg}(\gamma^{*})=-1$ by~\eqref{rho}. Assume \(\overline{q}\zeta=q\). Using~\eqref{com} we may rewrite~\eqref{Deltatilde} in the following form: \[ \begin{array}{r@{\;=\;}l} {\widetilde \alpha}&j_{2}(\alpha)j_{1}(\alpha)-\overline{q} j_{2}(\gamma)j_{1}(\gamma)^{*},\\ \Vs{5}{\widetilde\gamma}&j_{2}(\alpha)j_{1}(\gamma)+j_{2}(\gamma)j_{1}(\alpha)^{*}. \end{array} \] Therefore, \[ \etyk{Deltatilde1} \begin{array}{r@{\;=\;}l} {\widetilde \alpha}^{*}&j_{1}(\alpha)^{*}j_{2}(\alpha)^{*}-q j_{1}(\gamma)j_{2}(\gamma)^{*},\\ \Vs{5}{\widetilde\gamma}^{*}&j_{1}(\gamma)^{*}j_{2}(\alpha)^{*}+j_{1}(\alpha)j_{2}(\gamma)^{*}. \end{array} \] The four equations \eqref{Deltatilde} and~\eqref{Deltatilde1} together are equivalent to \[ \etyk{mmm} \begin{pmatrix} {\widetilde \alpha}&-q{\widetilde\gamma}^{*}\\{\widetilde\gamma}&{\widetilde \alpha}^{*} \end{pmatrix} = \begin{pmatrix} j_{1}(\alpha)&-qj_{1}(\gamma)^{*}\\j_{1}(\gamma)&j_{1}(\alpha)^{*} \end{pmatrix} \begin{pmatrix} j_{2}(\alpha)&-qj_{2}(\gamma)^{*}\\j_{2}(\gamma)&j_{2}(\alpha)^{*} \end{pmatrix}. \] Lemma~\ref{un} shows that the matrix \[ \etyk{u} u= \begin{pmatrix}\alpha&-q\gamma^{*}\\\gamma&\alpha^{*} \end{pmatrix} \in\operatorname{M}_2({A}) \] is unitary. Hence so is the matrix \(j_1(u)j_2(u)\) on the right hand side of~\eqref{mmm}. Now Lemma~\ref{un} shows that ${\widetilde \alpha},{\widetilde\gamma}\in {A}\boxtimes_\zeta {A}$ satisfy~\eqref{SU2qtil}. So the universal property of~${A}$ in Theorem~\ref{universal} gives a unique morphism~$\Delta$ with $\Delta(\alpha)={\widetilde \alpha}$ and $\Delta(\gamma)={\widetilde\gamma}$. The elements $\alpha$ and~$\gamma$ are homogeneous of degrees \(0\) and~\(1\), respectively, by~\eqref{rho}. Hence ${\widetilde \alpha}$ and~${\widetilde\gamma}$ are homogeneous of degree \(0\) and~\(1\) as well. 
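Indeed, \(j_1\) and~\(j_2\) are \(\mathbb{T}\)\nobreakdash-equivariant and therefore preserve degrees, degrees add under multiplication, and \(\mathrm{deg}(x^*)=-\mathrm{deg}(x)\) for homogeneous~\(x\); hence
\[
\mathrm{deg}\bigl(j_{1}(\alpha)j_{2}(\alpha)\bigr)=0+0=0,\qquad
\mathrm{deg}\bigl(j_{1}(\gamma)^{*}j_{2}(\gamma)\bigr)=(-1)+1=0,
\]
\[
\mathrm{deg}\bigl(j_{1}(\gamma)j_{2}(\alpha)\bigr)=1+0=1,\qquad
\mathrm{deg}\bigl(j_{1}(\alpha)^{*}j_{2}(\gamma)\bigr)=0+1=1.
\]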
Since \(\alpha\) and~\(\gamma\) generate~\({A}\), it follows that~\(\Delta\) is \(\mathbb{T}\)\nobreakdash-equivariant. This proves statement~(1) in Theorem~\ref{main}. Here we use the action $\rho^{A\boxtimes_\zeta A}$ of~$\mathbb{T}$ with $\rho_z^{A\boxtimes_\zeta A} (j_1(a_1)j_2(a_2)) = j_1(\rho^A_z(a_1))j_2(\rho^A_z(a_2))$. We may rewrite~\eqref{mmm} as \[ \begin{pmatrix} \Delta(\alpha)&-q\Delta(\gamma)^{*}\\\Delta(\gamma)&\Delta(\alpha)^{*} \end{pmatrix} = \begin{pmatrix}j_{1}(\alpha)&-qj_{1}(\gamma)^{*}\\j_{1}(\gamma)&j_{1}(\alpha)^{*} \end{pmatrix} \begin{pmatrix}j_{2}(\alpha)&-qj_{2}(\gamma)^{*}\\j_{2}(\gamma)&j_{2}(\alpha)^{*} \end{pmatrix}. \] Identifying $\operatorname{M}_2({A})$ with $\operatorname{M}_2({\mathbb C})\otimes {A}$, we may further rewrite this as \[ \etyk{Dmm3} (\textup{id}\otimes\Delta)(u) = (\textup{id}\otimes j_{1})(u)\;(\textup{id}\otimes j_{2})(u), \] where~$\textup{id}$ is the identity map on~$\operatorname{M}_2({\mathbb C})$. Now we prove statement (2) in Theorem~\ref{main}. Let $j_{1},j_{2},j_{3}$ be the natural embeddings of~${A}$ into ${A}\boxtimes_\zeta {A}\boxtimes_\zeta {A}$. Since~\(\Delta\) is \(\mathbb{T}\)\nobreakdash-equivariant, we may form $\Delta\boxtimes_{\zeta}\textup{id}$ and $\textup{id}\boxtimes_{\zeta}\Delta$. The values of $\textup{id}\otimes\left(\Delta\boxtimes_\zeta\textup{id}_{{A}}\right)$ and $\textup{id}\otimes\left(\textup{id}_{{A}}\boxtimes_\zeta\,\Delta\right)$ on the right hand side of~\eqref{Dmm3} are equal: \begin{align*} \left(\textup{id}\otimes(\Delta\boxtimes_\zeta\textup{id}_{{A}})\circ\Delta\right)(u) &= (\textup{id}\otimes j_{1})(u)\,(\textup{id}\otimes j_{2})(u)\,(\textup{id}\otimes j_{3})(u),\\ \left(\textup{id}\otimes(\textup{id}_{{A}}\boxtimes_\zeta\,\Delta)\circ\Delta\right)(u) &= (\textup{id}\otimes j_{1})(u)\,(\textup{id}\otimes j_{2})(u)\,(\textup{id}\otimes j_{3})(u). \end{align*} Thus $(\Delta\boxtimes_\zeta\textup{id}_{{A}})\circ\Delta$ and $(\textup{id}_{{A}}\boxtimes_\zeta\,\Delta)\circ\Delta$ coincide on $\alpha,\gamma,\alpha^{*},\gamma^{*}$.
Since the latter generate~${A}$, this proves statement~(2) of Theorem~\ref{main}. Now we prove statement~(3). Let \[ S=\set{x\in {A}}{j_{1}(x)\in\Delta({A})j_{2}({A})}. \] This is a closed subspace of~${A}$. We may also rewrite~\eqref{Dmm3} as \[ \etyk{Dmm1} \begin{pmatrix} j_{1}(\alpha)&-qj_{1}(\gamma)^{*}\\j_{1}(\gamma)&j_{1}(\alpha)^{*} \end{pmatrix} = \begin{pmatrix} \Delta(\alpha)&-q\Delta(\gamma)^{*}\\\Delta(\gamma)&\Delta(\alpha)^{*} \end{pmatrix} \begin{pmatrix} j_{2}(\alpha)&-qj_{2}(\gamma)^{*}\\j_{2}(\gamma)&j_{2}(\alpha)^{*} \end{pmatrix}^{*}. \] Thus $\alpha,\gamma,\alpha^{*},\gamma^{*}\in S$. Let $x,y\in S$ with homogeneous~\(y\). Proposition~\ref{8.44} gives \begin{multline*} j_{1}(xy) =j_{1}(x)j_{1}(y) \in \Delta({A})j_{2}({A})j_{1}(y) = \Delta({A})j_{1}(y)j_{2}({A}) \\\subseteq\Delta({A})\Delta({A}) j_{2}({A})j_{2}({A}) =\Delta({A})j_{2}({A}). \end{multline*} That is, $xy\in S$. Therefore, all monomials in $\alpha,\gamma,\alpha^{*},\gamma^{*}$ belong to $S$, so that $S={A}$. Hence $j_{1}({A})\subseteq \Delta({A})j_{2}({A})$. Now ${A}\boxtimes_\zeta {A} = j_{1}({A}) j_{2}({A}) \subseteq \Delta({A}) j_{2}({A}) j_{2}({A}) = \Delta({A}) j_{2}({A})$, which is one of the Podle\'s conditions. Similarly, let \[ R=\set{x\in {A}}{j_{2}(x)\in j_{1}({A})\Delta({A})}. \] Then~$R$ is a closed subspace of~${A}$. We may also rewrite~\eqref{Dmm3} as \[ \etyk{Dmm2} \begin{pmatrix} j_{2}(\alpha)&-qj_{2}(\gamma)^{*}\\j_{2}(\gamma)&j_{2}(\alpha)^{*}\\ \end{pmatrix} = \begin{pmatrix} j_{1}(\alpha)&-qj_{1}(\gamma)^{*}\\j_{1}(\gamma)&j_{1}(\alpha)^{*} \end{pmatrix}^{*} \begin{pmatrix} \Delta(\alpha)&-q\Delta(\gamma)^{*}\\\Delta(\gamma)&\Delta(\alpha)^{*} \end{pmatrix}. \] Thus $\alpha,\gamma,\alpha^{*},\gamma^{*}\in R$. Let $x,y\in R$ with homogeneous~\(x\). 
Proposition~\ref{8.44} gives \begin{multline*} j_{2}(xy) = j_{2}(x)j_{2}(y)\in j_{2}(x)j_{1}({A})\Delta({A}) = j_{1}({A}) j_{2}(x) \Delta({A}) \\ \subseteq j_{1}({A}) j_{1}({A}) \Delta({A}) \Delta({A}) = j_{1}({A})\Delta({A}). \end{multline*} Thus $xy\in R$. Therefore, all monomials in $\alpha,\gamma,\alpha^{*},\gamma^{*}$ belong to~$R$, so that $R={A}$, that is, $j_{2}({A})\subseteq j_{1}({A}) \Delta({A})$. This implies ${A}\boxtimes_\zeta {A} = j_{1}({A}) j_{2}({A}) \subseteq j_{1}({A}) j_{1}({A}) \Delta({A}) = j_{1}({A}) \Delta({A})$ and finishes the proof of Theorem~\ref{main}. \section{The representation theory of \texorpdfstring{$\textup{SU}_{q}(2)$}{SUq(2)}} For real~\(q\), the relations defining the compact quantum group~\(\textup{SU}_q(2)\) are forced once we stipulate that the unitary matrix in Lemma~\ref{un} is a representation and that a certain vector in the tensor square of this representation is invariant. Here we generalise this to the complex case. This is how we found $\textup{SU}_q(2)$. Let~$\mathcal{H}$ be a $\mathbb{T}$\nobreakdash-Hilbert space, that is, a Hilbert space with a unitary representation $U\colon \mathbb{T}\rightarrow\mathcal{U}(\mathcal{H})$. For $z\in\mathbb{T}$ and $x\in\mathcal{K}(\mathcal{H})$ we define \[ \rho^{\mathcal{K}(\mathcal{H})}_z(x) = U_z xU_z^*. \] Thus $(\mathcal{K}(\mathcal{H}),\rho^{\mathcal{K}(\mathcal{H})})$ is a \(\mathbb{T}\)\nobreakdash-\(\textup{C}^*\)-algebra. Let $(X,\rho^X)\in\Obj(\mathcal{C}^*_{\mathbb{T}})$. Since $\rho^{\mathcal{K}(\mathcal{H})}$ is inner, the braided tensor product $\mathcal{K}(\mathcal{H})\boxtimes_\zeta X$ may (and will) be identified with $\mathcal{K}(\mathcal{H})\otimes X$ -- see \cite{Meyer-Roy-Woronowicz:Twisted_tensor}*{Corollary 5.18} and \cite{Meyer-Roy-Woronowicz:Twisted_tensor}*{Example 5.19}.
\begin{Def} \label{fd} Let $\mathcal{H}$ be a $\mathbb{T}$\nobreakdash-Hilbert space and let $v\in \operatorname{M}(\mathcal{K}(\mathcal{H})\otimes {A})$ be a unitary element which is $\mathbb{T}$-invariant, that is, \((\rho^{\mathcal{K}(\mathcal{H})}_z\otimes\rho^{A}_z)(v) = v\) for all \(z\in\mathbb{T}\). We call~$v$ a \emph{representation} of~$\textup{SU}_{q}(2)$ on~$\mathcal{H}$ if \[ (\textup{id}_{\mathcal{H}}\otimes\Delta)(v) = (\textup{id}_{\mathcal{H}}\otimes j_{1})(v)\; (\textup{id}_{\mathcal{H}}\otimes j_{2})(v). \] \end{Def} Theorem~\ref{the:repr_SU_U} below will show that representations of~\(\textup{SU}_q(2)\) are equivalent to representations of a certain compact quantum group. This allows us to carry over all the usual structural results about representations of compact quantum groups to~\(\textup{SU}_q(2)\). In particular, we may tensor representations. To describe this directly, we need the following result: \begin{Prop} \label{komutacja} Let $X,Y,U,T$ be $\mathbb{T}$\nobreakdash-$\textup{C}^*$-algebras. Let $v\in X\otimes T$ and $w\in Y\otimes U$ be homogeneous elements of degree~$0$. Denote the natural embeddings by \begin{alignat*}{2} i_{1}\colon X&\to X\boxtimes_\zeta Y,&\qquad i_{2}\colon Y&\to X\boxtimes_\zeta Y,\\ j_{1}\colon U&\to U\boxtimes_\zeta T,&\qquad j_{2}\colon T&\to U\boxtimes_\zeta T. \end{alignat*} Then $(i_{1}\otimes j_{2})(v)$ and $(i_{2}\otimes j_{1})(w)$ commute in $(X\boxtimes_\zeta Y) \otimes (U\boxtimes_\zeta T)$. \end{Prop} \begin{proof} We may assume that $v=x\otimes t$ and $w=y\otimes u$ for homogeneous elements $x\in X$, $t\in T$, $y\in Y$ and $u\in U$. Since $\mathrm{deg}(v)=\mathrm{deg}(w)=0$, we get $\mathrm{deg}(x)=-\mathrm{deg}(t)$ and $\mathrm{deg}(y)=-\mathrm{deg}(u)$.
The following computation completes the proof: \begin{align*} & \phantom{{}={}}(i_{1}\otimes j_{2})(v)\,(i_{2}\otimes j_{1})(w) = \left(i_{1}(x)\otimes j_{2}(t)\right)\left(i_{2}(y)\otimes j_{1}(u)\right) \\&= i_{1}(x)i_{2}(y)\otimes j_{2}(t)j_{1}(u) = \zeta^{\mathrm{deg}(x)\mathrm{deg}(y)-\mathrm{deg}(t)\mathrm{deg}(u)}i_{2}(y)i_{1}(x)\otimes j_{1}(u)j_{2}(t) \\& = \left(i_{2}(y)\otimes j_{1}(u)\right)\left(i_{1}(x)\otimes j_{2}(t)\right) = (i_{2}\otimes j_{1})(w)\;(i_{1}\otimes j_{2})(v).\qedhere \end{align*} \end{proof} \begin{Prop} \label{tensprod} Let $\mathcal{H}_1$ and~$\mathcal{H}_2$ be $\mathbb{T}$\nobreakdash-Hilbert spaces and let $v_i\in \operatorname{M}(\mathcal{K}(\mathcal{H}_i)\otimes A)$ for \(i=1,2\) be representations of~$\textup{SU}_q(2)$. Define \[ v = (\iota_1\otimes\textup{id}_{A})(v_1) (\iota_2\otimes\textup{id}_{A})(v_2) \in\operatorname{M}(\mathcal{K}(\mathcal{H}_1)\boxtimes_\zeta \mathcal{K}(\mathcal{H}_2)\otimes A) \] and identify $\mathcal{K}(\mathcal{H}_1)\boxtimes_\zeta \mathcal{K}(\mathcal{H}_2) \cong \mathcal{K}(\mathcal{H}_1\otimes\mathcal{H}_2)$. Then $v \in \operatorname{M}(\mathcal{K}(\mathcal{H}_1\otimes\mathcal{H}_2)\otimes A)$ is a representation of~$\textup{SU}_q(2)$ on $\mathcal{H}_1\otimes \mathcal{H}_2$. It is denoted $v_1\mathbin{\xymatrix{*+<.7ex>[o][F-]{\scriptstyle\top}}} v_2$ and called the \emph{tensor product} of $v_1$ and~$v_2$. \end{Prop} \begin{proof} It is clear that~\(v\) is \(\mathbb{T}\)\nobreakdash-invariant.
We compute \begin{align*} (\textup{id}_{\mathcal{H}_1\otimes \mathcal{H}_2}\otimes\Delta)(v) &= (\textup{id}_{\mathcal{H}_1\otimes \mathcal{H}_2}\otimes\Delta) ((\iota_1\otimes\textup{id}_{A}) (v_1)(\iota_2\otimes\textup{id}_{A})(v_2))\\ &= (\iota_1\otimes j_{1})(v_1)\;(\iota_1\otimes j_{2})(v_1)\; (\iota_2\otimes j_{1})(v_2)\;(\iota_2\otimes j_{2})(v_2) \\&= (\iota_1\otimes j_{1})(v_1)\;(\iota_2\otimes j_{1})(v_2)\; (\iota_1\otimes j_{2})(v_1)\; (\iota_2\otimes j_{2})(v_2) \\&= (\textup{id}_{\mathcal{H}_1\otimes \mathcal{H}_2}\otimes j_1)(v)\; (\textup{id}_{\mathcal{H}_1\otimes \mathcal{H}_2}\otimes j_2)(v), \end{align*} where the third step uses Proposition~\ref{komutacja}. \end{proof} Now consider the Hilbert space~${\mathbb C}^2$ and let $\{e_0,e_1\}$ be its canonical orthonormal basis. We equip it with the representation $U\colon \mathbb{T}\rightarrow\mathcal{U}({\mathbb C}^2)$ defined by $U_z e_0 = ze_0$ and $U_z e_1 = e_1$. Let $\rho^{\operatorname{M}_2({\mathbb C})}$ be the action implemented by~$U$: \[ \rho^{\operatorname{M}_2({\mathbb C})}_z \begin{pmatrix} a_{11}&a_{12}\\a_{21}&a_{22} \end{pmatrix} = \begin{pmatrix} a_{11}&za_{12} \\ \overline{z}a_{21} &a_{22} \end{pmatrix}, \] where $a_{ij}\in{\mathbb C}$. We claim that \[ u = \begin{pmatrix} \alpha&-q\gamma^{*}\\\gamma&\alpha^{*} \end{pmatrix} \in\operatorname{M}_2({\mathbb C})\otimes A \] is a representation of~$\textup{SU}_q(2)$ on~${\mathbb C}^2$. By Lemma~\ref{un}, the relations defining~$A$ are equivalent to~$u$ being unitary. The \(\mathbb{T}\)\nobreakdash-action on~\(A\) is defined so that~\(u\) is \(\mathbb{T}\)\nobreakdash-invariant. The comultiplication is defined exactly so that~\(u\) is a corepresentation, see~\eqref{Dmm3}. The particular shape of~$u$ contains further assumptions, however.
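The \(\mathbb{T}\)\nobreakdash-invariance of~\(u\) is a direct check: the action on \(\operatorname{M}_2({\mathbb C})\) multiplies the upper right entry by~\(z\) and the lower left entry by~\(\overline{z}\), while \(\rho^A_z\) multiplies \(\gamma\) by~\(z\) and \(\gamma^*\) by~\(\overline{z}\), so the two twists cancel:
\[
(\rho^{\operatorname{M}_2({\mathbb C})}_z\otimes\rho^{A}_z)(u)
= \begin{pmatrix} \alpha&-q\,z\overline{z}\,\gamma^{*}\\ \overline{z}z\,\gamma&\alpha^{*} \end{pmatrix}
= u.
\]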
To explain these, we consider an arbitrary compact quantum group $\mathbb{G}=(\C(\mathbb{G}),\Delta_\mathbb{G})$ in~$\mathcal{C}^*_{\mathbb{T}}$ with a unitary representation \[ u = \begin{pmatrix} a&b\\c&d \end{pmatrix} \in \operatorname{M}_2(\C(\mathbb{G})), \] such that $a,b,c,d$ generate the $\textup{C}^*$-algebra $\C(\mathbb{G})$. We assume that~$u$ is $\mathbb T$\nobreakdash-invariant for the above $\mathbb T$\nobreakdash-action on~${\mathbb C}^2$. Thus $\mathrm{deg}(a) = \mathrm{deg}(d) =0$, $\mathrm{deg}(b)=-1$, $\mathrm{deg}(c) =1$. \begin{Thm} \label{the:SU_q_invariant_universal} Let~$\mathbb{G}$ be a braided compact quantum group with a unitary representation~$u$ as above. Assume $b\neq0$ and that the vector $e_0\otimes e_1-qe_1\otimes e_0 \in {\mathbb C}^2\otimes{\mathbb C}^2$ for \(q\in{\mathbb C}\) is invariant for the representation $u\mathbin{\xymatrix{*+<.7ex>[o][F-]{\scriptstyle\top}}} u$. Then \(q\neq0\), $\overline{q}\zeta=q$, $d = a^*$, $b = -qc^*$, and there is a unique morphism $\pi\colon \C(\textup{SU}_q(2)) \rightarrow\C(\mathbb{G})$ with $\pi(\alpha) =a$ and $\pi(\gamma) =c$. This is $\mathbb{T}$\nobreakdash-equivariant and satisfies $(\pi\boxtimes_\zeta\pi)\circ\Delta_{\textup{SU}_q(2)} = \Delta_{\mathbb{G}}\circ\pi$. \end{Thm} \begin{proof} The representation $u\mathbin{\xymatrix{*+<.7ex>[o][F-]{\scriptstyle\top}}} u\in \operatorname{M}_4(\C(\mathbb{G}))$ is given by Proposition~\ref{tensprod}, which uses a canonical isomorphism $\operatorname{M}_2({\mathbb C}) \boxtimes_\zeta\operatorname{M}_2({\mathbb C}) \cong \operatorname{M}_4({\mathbb C})$. This comes from the following standard representation of $\operatorname{M}_2({\mathbb C})\boxtimes_\zeta \operatorname{M}_2({\mathbb C})$ on ${\mathbb C}^2\otimes {\mathbb C}^2$. For $T,S\in \operatorname{M}_2({\mathbb C})$ of degree $k,l$ and $x,y\in{\mathbb C}^2$ of degree $m,n$, we let $\iota_1(T) \iota_2(S) (x\otimes y) = \overline\zeta^{l m}Tx\otimes Sy$. 
By construction, $u\mathbin{\xymatrix{*+<.7ex>[o][F-]{\scriptstyle\top}}} u$ is $(\iota_1\otimes\textup{id}_{\C(\mathbb{G})})(u)\cdot (\iota_2\otimes \textup{id}_{\C(\mathbb{G})})(u)$. So we may rewrite the invariance of $e_0\otimes e_1-qe_1\otimes e_0$ as \begin{equation} \label{eq:invariant_vector} (\iota_1\otimes \textup{id}_{\C(\mathbb{G})})(u^*)(e_0\otimes e_1-qe_1\otimes e_0) = (\iota_2\otimes \textup{id}_{\C(\mathbb{G})})(u)(e_0\otimes e_1-qe_1\otimes e_0) \end{equation} in ${\mathbb C}^2\otimes{\mathbb C}^2\otimes \C(\mathbb{G})$. The left and right hand sides of~\eqref{eq:invariant_vector} are \begin{gather*} e_0\otimes e_1 \otimes a^*+e_1\otimes e_1\otimes b^* - q e_0\otimes e_0\otimes c^* - qe_1\otimes e_0\otimes d^*,\\ e_0\otimes e_0\otimes b + e_0\otimes e_1\otimes d - qe_1\otimes e_0\otimes a - q \overline{\zeta} e_1\otimes e_1\otimes c, \end{gather*} respectively. These are equal if and only if $b = -q c^*$, $d = a^*$, and $b^* = -q\overline\zeta c$. Since $b\neq0$, this implies \(q\neq0\) and $\overline{q}\zeta=q$, and~\(u\) has the form in Lemma~\ref{un}. Since~$u$ is a representation, it is unitary. So $a,c$ satisfy the relations defining $\textup{SU}_q(2)$ and Theorem~\ref{universal} gives the unique morphism~$\pi$. The conditions on~$u$ in Definition~\ref{fd} imply that~$\pi$ is $\mathbb{T}$\nobreakdash-equivariant and compatible with comultiplications. \end{proof} The proof also shows that~\(q\) is uniquely determined by the condition that \(e_0\otimes e_1 -qe_1\otimes e_0\) should be \(\textup{SU}_q(2)\)\nobreakdash-invariant. Up to scaling, the basis \(e_0,e_1\) is the unique one consisting of joint eigenvectors of the \(\mathbb{T}\)\nobreakdash-action with degrees \(1\) and~\(0\). Hence the braided quantum group \((\C(\textup{SU}_q(2)),\Delta)\) determines~\(q\) uniquely. An invariant vector for~\(\textup{SU}_q(2)\) should also be homogeneous for the \(\mathbb{T}\)\nobreakdash-action. 
There are three cases of homogeneous vectors in \({\mathbb C}^2\otimes{\mathbb C}^2\): multiples of \(e_0\otimes e_0\), multiples of \(e_1\otimes e_1\), and linear combinations of \(e_0\otimes e_1\) and \(e_1\otimes e_0\). If a non-zero multiple of~\(e_i\otimes e_j\) for \(i,j\in\{0,1\}\) is invariant, then the representation~$u$ is reducible. Ruling out such degenerate cases, we may normalise the invariant vector to have the form \(e_0\otimes e_1 -qe_1\otimes e_0\) assumed in Theorem~\ref{the:SU_q_invariant_universal}. Roughly speaking, $\textup{SU}_q(2)$ is the universal family of braided quantum groups generated by a $2$\nobreakdash-dimensional representation with an invariant vector in \(u\mathbin{\xymatrix{*+<.7ex>[o][F-]{\scriptstyle\top}}} u\). There is, however, one extra symmetry that changes the \(\mathbb{T}\)\nobreakdash-action on~\(\C(\textup{SU}_q(2))\) and that corresponds to the permutation of the basis \(e_0,e_1\). Given a \(\mathbb{T}\)\nobreakdash-\(\textup{C}^*\)-algebra~\(A\), let \(S(A)\) be the same \(\textup{C}^*\)\nobreakdash-algebra with the \(\mathbb{T}\)\nobreakdash-action by \(\rho^{S(A)}_z = (\rho^A_z)^{-1}\). Since the commutation relation~\eqref{com} is symmetric in \(k,l\), there is a unique isomorphism \[ S(A\boxtimes_\zeta B) \cong S(A) \boxtimes_\zeta S(B),\qquad j_1(a)\mapsto j_1(a),\quad j_2(b)\mapsto j_2(b). \] Hence the comultiplication on~\(\C(\textup{SU}_q(2))\) is also a comultiplication on~\(S(\C(\textup{SU}_q(2)))\). \begin{Prop} \label{pro:Aq_symmetry} The braided quantum groups \(S(\C(\textup{SU}_q(2)))\) and \(\C(\textup{SU}_{\tilde{q}}(2))\) for \(\tilde{q} = \overline{q}^{-1}\) are isomorphic. \end{Prop} \begin{proof} Let \(\alpha,\gamma\) be the standard generators of \(A_q = \C(\textup{SU}_q(2))\) and let \(\tilde{\alpha},\tilde{\gamma}\) be the standard generators of~\(A_{\tilde{q}}\).
We claim that there is an isomorphism \(\varphi\colon A_q\to A_{\tilde{q}}\) that maps \(\alpha\mapsto \tilde\alpha^*\) and \(\gamma\mapsto \tilde{q}\tilde\gamma^*\) and that is an isomorphism of braided quantum groups from \(S(A_q)\) to~\(A_{\tilde{q}}\). Lemma~\ref{un} implies that the matrix \[ \begin{pmatrix} 0&1\\-1&0 \end{pmatrix} \begin{pmatrix} \tilde\alpha&-\tilde{q}\tilde\gamma^*\\ \tilde\gamma&\tilde\alpha^* \end{pmatrix} \begin{pmatrix} 0&-1\\1&0 \end{pmatrix} = \begin{pmatrix} \tilde\alpha^*&-\tilde\gamma\\ \tilde{q}\tilde\gamma^*&\tilde\alpha \end{pmatrix} = \begin{pmatrix} \varphi(\alpha)&\varphi(-q \gamma^*)\\ \varphi(\gamma)&\varphi(\alpha^*) \end{pmatrix} \] is unitary. Now Lemma~\ref{un} and Theorem~\ref{universal} give the desired morphism~\(\varphi\). Since the inverse of~\(\varphi\) may be constructed in the same way, \(\varphi\) is an isomorphism. On generators, it reverses the grading, so it is \(\mathbb{T}\)\nobreakdash-equivariant as a map \(S(A_q)\to A_{\tilde{q}}\). Let \(\Delta\) and \(\tilde\Delta\) denote the comultiplications on \(S(A_q)\) and~\(A_{\tilde{q}}\). We compute \begin{align*} (\varphi\boxtimes_\zeta\varphi)\Delta(\alpha) &= (\varphi\boxtimes_\zeta\varphi) (j_1(\alpha)j_2(\alpha) - qj_1(\gamma^*) j_2(\gamma)) \\&= j_1(\varphi(\alpha)) j_2(\varphi(\alpha)) - qj_1(\varphi(\gamma^*)) j_2(\varphi(\gamma)) \\&= j_1(\tilde\alpha^*) j_2(\tilde\alpha^*) - \tilde{q} j_1(\tilde\gamma) j_2(\tilde\gamma^*),\\ \tilde\Delta(\varphi(\alpha)) &= \tilde\Delta(\tilde\alpha^*) = j_2(\tilde\alpha)^* j_1(\tilde\alpha)^* - q^{-1} j_2(\tilde\gamma)^* j_1(\tilde\gamma) \\&= j_1(\tilde\alpha)^* j_2(\tilde\alpha)^* - q^{-1} \zeta j_1(\tilde\gamma) j_2(\tilde\gamma)^*. \end{align*} These are equal because \(\tilde{q} = \overline{q}^{-1} = q^{-1}\zeta\). Similarly, \((\varphi\boxtimes_\zeta\varphi)\Delta(\gamma) =\tilde\Delta(\varphi(\gamma))\). Thus~\(\varphi\) is an isomorphism of braided quantum groups. 
\end{proof} \section{The semidirect product quantum group} \label{sec:U2} A quantum analogue of the semidirect product construction for groups turns the braided quantum group~$\textup{SU}_q(2)$ into a genuine compact quantum group $(B,\Delta_B)$; we will publish details of this construction separately. Here~\(B\) is the universal \(\textup{C}^*\)\nobreakdash-algebra generated by three elements $\alpha,\gamma,z$ subject to the $\textup{SU}_q(2)$-relations for $\alpha$ and~$\gamma$ and \begin{align*} z\alpha z^* &=\alpha,\\ z\gamma z^* &= \zeta^{-1}\gamma,\\ z z^* &=z^*z = \mathds{1}; \end{align*} the comultiplication is defined by \begin{align*} \Delta_B(z)&= z\otimes z,\\ \Delta_B(\alpha) &= \alpha\otimes\alpha-q\gamma^*z\otimes\gamma,\\ \Delta_B(\gamma) &=\gamma\otimes\alpha+\alpha^*z\otimes\gamma. \end{align*} There are two embeddings $\iota_1, \iota_2\colon A \rightrightarrows B\otimes B$ defined by \begin{alignat*}{2} \iota_1(\alpha) &= \alpha\otimes \mathds{1}&\qquad \iota_2(\alpha) &= \mathds{1}\otimes\alpha, \\ \iota_1(\gamma) &= \gamma\otimes \mathds{1}&\qquad \iota_2(\gamma) &= z\otimes\gamma. \end{alignat*} Homogeneous elements $x,y\in A$ satisfy \[ \etyk{SU2_commutation} \iota_1(x)\iota_2(y) = \zeta^{\mathrm{deg}(x)\mathrm{deg}(y)}\iota_2(y)\iota_1(x). \] Thus we may rewrite the comultiplication as \begin{align*} \Delta_B(z)&= z\otimes z,\\ \Delta_B(\alpha) &= \iota_1(\alpha)\iota_2(\alpha)-q\iota_1(\gamma)^*\iota_2(\gamma),\\ \Delta_B(\gamma) &= \iota_1(\gamma)\iota_2(\alpha)+\iota_1(\alpha)^*\iota_2(\gamma). \end{align*} In particular, $\Delta_B$ respects the commutation relations for $(\alpha,\gamma,z)$, so it is a well-defined $^*$\nobreakdash-\hspace{0pt}homomorphism \(B\to B\otimes B\). It is routine to check the cancellation conditions~\eqref{cancellation} for \(B\), so $(B,\Delta_B)$ is a compact quantum group. This is a compact quantum group with a projection as in~\cite{Roy:Qgrp_with_proj}.
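As a sample of the verification that \(\Delta_B\) respects the relations, consider \(z\gamma z^*=\zeta^{-1}\gamma\): using \(z\alpha z^*=\alpha\) (so that \(z\alpha^*=\alpha^*z\)) and \(z\gamma z^*=\zeta^{-1}\gamma\) in each tensor factor, we get
\[
\Delta_B(z)\Delta_B(\gamma)\Delta_B(z)^*
= (z\otimes z)(\gamma\otimes\alpha+\alpha^*z\otimes\gamma)(z^*\otimes z^*)
= \zeta^{-1}\gamma\otimes\alpha+\alpha^*z\otimes\zeta^{-1}\gamma
= \zeta^{-1}\Delta_B(\gamma).
\]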
Here the projection $\pi\colon B\rightarrow B$ is the unique $^*$\nobreakdash-homomorphism with $\pi(\alpha) = 1_{B}$, $\pi(\gamma) = 0$ and $\pi(z) = z$; this is an idempotent bialgebra morphism. Its ``image'' is the copy of \(\C(\mathbb{T})\) generated by~\(z\); its ``kernel'' is the copy of \(A\) generated by \(\alpha\) and~\(\gamma\). For \(q=1\), \(B\cong \C(\mathbb{T}\times \textup{SU}(2))\) as a \(\textup{C}^*\)\nobreakdash-algebra, which is commutative. The representation on~\({\mathbb C}^2\) combines the standard embedding of~\(\textup{SU}(2)\) and the representation of~\(\mathbb{T}\) mapping~\(z\) to the diagonal matrix with entries \(z,1\). This gives a homeomorphism \(\mathbb{T}\times\textup{SU}(2) \cong \textup{U}(2)\). So \((B,\Delta_B)\) is the group~\(\textup{U}(2)\), written as a semidirect product of~\(\textup{SU}(2)\) and~\(\mathbb{T}\). For \(q\neq1\), \((B,\Delta_B)\) is the coopposite of the quantum $\textup{U}_q(2)$ group described previously by Zhang and Zhao in~\cite{Zhang-Zhao:Uq2}: the substitutions \(a=\alpha^*\), \(b=\gamma^*\) and \(D=z^*\) turn our generators and relations into those in~\cite{Zhang-Zhao:Uq2}, and the comultiplications differ only by a coordinate flip. \begin{Thm} \label{the:repr_SU_U} Let $U\in \operatorname{M}(\mathcal{K}(\mathcal{H})\otimes\C(\mathbb{T}))$ be a unitary representation of~$\mathbb{T}$ on a Hilbert space~$\mathcal{H}$. There is a bijection between representations of \(\textup{SU}_q(2)\) on~\(\mathcal{H}\) and representations of~\((B,\Delta_B)\) on~\(\mathcal{H}\) that restrict to the given representation~\(U\) of~$\mathbb{T}$. \end{Thm} \begin{proof} Let $v\in \operatorname{M}(\mathcal{K}(\mathcal{H})\otimes A)$ be a unitary representation of~$\textup{SU}_q(2)$ on~$\mathcal{H}$. Since $B$ contains copies of $A$ and $\C(\mathbb{T})$, we may view $u=vU^*$ as an element of $\operatorname{M}(\mathcal{K}(\mathcal{H})\otimes B)$.
The $\mathbb{T}$\nobreakdash-invariance of~$v$, \[ (\textup{id}\otimes\rho^A)(v) = U_{12}^*v_{13}U_{12} \] and the formula for~$\iota_2$ (which is basically given by the action~$\rho^A$) show that \[ U_{12}(\textup{id}\otimes\iota_2)(v)U_{12}^* = v_{13}. \] Using $(\textup{id}\otimes\iota_1)(v)=v_{12}$, we conclude that~$u$ is a unitary representation of~$(B,\Delta_B)$: \[ (\textup{id}\otimes\Delta_B)(u) = v_{12}(\textup{id}\otimes\iota_2)(v)U^*_{12}U^*_{13} = v_{12}U^*_{12}v_{13}U^*_{13} = u_{12}u_{13}. \] Going back and forth between \(u\) and~\(v\) is the desired bijection. \end{proof} \begin{bibdiv} \begin{biblist} \bib{Kruszynski-Woronowicz:Gelfand-Naimark}{article}{ author={Kruszy\'nski, Pawe\l }, author={Woronowicz, Stanis\l aw Lech}, title={A noncommutative Gelfand--Na\u \i mark theorem}, journal={J. Operator Theory}, volume={8}, date={1982}, number={2}, pages={361--389}, issn={0379-4024}, review={\MRref {677419}{84b:46068}}, eprint={http://www.theta.ro/jot/archive/1982-008-002/1982-008-002-009.html}, } \bib{MacLane:Categories}{book}{ author={MacLane, Saunders}, title={Categories for the working mathematician}, note={Graduate Texts in Mathematics, Vol. 5}, publisher={Springer}, place={New York}, date={1971}, pages={ix+262}, review={\MRref {0354798}{50\,\#7275}}, doi={10.1007/978-1-4757-4721-8}, } \bib{Majid:Examples_braided}{article}{ author={Majid, Shahn}, title={Examples of braided groups and braided matrices}, journal={J. Math.
Phys.}, volume={32}, date={1991}, number={12}, pages={3246--3253}, issn={0022-2488}, review={\MRref {1137374}{93i:17019}}, doi={10.1063/1.529485}, } \bib{Majid:Quantum_grp}{book}{ author={Majid, Shahn}, title={Foundations of quantum group theory}, publisher={Cambridge University Press}, place={Cambridge}, date={1995}, pages={x+607}, isbn={0-521-46032-8}, review={\MRref {1381692}{97g:17016}}, doi={10.1017/CBO9780511613104}, } \bib{Meyer-Roy-Woronowicz:Twisted_tensor}{article}{ author={Meyer, Ralf}, author={Roy, Sutanu}, author={Woronowicz, Stanis\l aw Lech}, title={Quantum group-twisted tensor products of \(\textup C^*\)\nobreakdash -algebras}, journal={Internat. J. Math.}, volume={25}, date={2014}, number={2}, pages={1450019, 37}, issn={0129-167X}, review={\MRref {3189775}{}}, doi={10.1142/S0129167X14500190}, } \bib{Meyer-Roy-Woronowicz:Twisted_tensor_2}{article}{ author={Meyer, Ralf}, author={Roy, Sutanu}, author={Woronowicz, Stanis\l aw Lech}, title={Quantum group-twisted tensor products of \(\textup {C}^*\)\nobreakdash -algebras II}, journal={J. Noncommut. Geom.}, date={2015}, issn={1661-6952}, status={accepted}, note={\arxiv {1501.04432}}, } \bib{Roy:Qgrp_with_proj}{thesis}{ author={Roy, Sutanu}, title={\(\textup C^*\)\nobreakdash -Quantum groups with projection}, date={2013}, type={phdthesis}, institution={Georg-August Universit\"at G\"ottingen}, eprint={http://hdl.handle.net/11858/00-1735-0000-0022-5EF9-0}, } \bib{Soltan:Non_cpt_grp_act}{article}{ author={So\l tan, Piotr Miko\l aj}, title={Examples of non-compact quantum group actions}, journal={J. Math. Anal. Appl.}, volume={372}, date={2010}, number={1}, pages={224--236}, issn={0022-247X}, review={\MRref {2672521}{2012d:46178}}, doi={10.1016/j.jmaa.2010.06.045}, } \bib{Woronowicz:Twisted_SU2}{article}{ author={Woronowicz, Stanis\l aw Lech}, title={Twisted $\mathrm {SU}(2)$ group. An example of a noncommutative differential calculus}, journal={Publ. Res. Inst. Math. 
Sci.}, volume={23}, date={1987}, number={1}, pages={117--181}, issn={0034-5318}, review={\MRref {890482}{88h:46130}}, doi={10.2977/prims/1195176848}, } \bib{Woronowicz:CQG}{article}{ author={Woronowicz, Stanis\l aw Lech}, title={Compact quantum groups}, conference={ title={Sym\'etries quantiques}, address={Les Houches}, date={1995}, }, book={ publisher={North-Holland}, place={Amsterdam}, }, date={1998}, pages={845--884}, review={\MRref {1616348}{99m:46164}}, } \bib{Zhang-Zhao:Uq2}{article}{ author={Zhang, Xiao Xia}, author={Zhao, Ervin Yunwei}, title={The compact quantum group $U_q(2)$. I}, journal={Linear Algebra Appl.}, volume={408}, date={2005}, pages={244--258}, issn={0024-3795}, review={\MRref {2166867}{2007b:46126}}, doi={10.1016/j.laa.2005.06.004}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Single-transverse-spin asymmetries (SSAs) have received much attention in recent years, both experimentally and theoretically. Large SSAs have been consistently observed in various experiments at different collision energies \cite{SSA-fixed-tgt,SSA-dis,SSA-rhic}. As a consequence of the parity and time-reversal invariance of the strong interaction, the SSA is directly connected to the transverse motion of partons inside a polarized hadron. Understanding the QCD dynamics behind the measured asymmetries should have a profound impact on our knowledge of the strong interaction and hadron structure \cite{SSA-review}. Two QCD mechanisms for generating SSAs have been proposed \cite{Siv90, Efremov, qiu} and applied \cite{qiu,Kanazawa:2000hz,Vogelsang:2005cs,siverscompare,Ans94,MulTanBoe,Boer:2003tx,Qiu:2007ar} extensively in phenomenological studies. Both of these mechanisms connect the SSA to the parton's transverse motion inside a transversely polarized hadron. One mechanism relies on the transverse momentum dependent (TMD) factorization for the observed polarized cross sections, and explicitly expresses the SSA in terms of ``asymmetric'' TMD parton distributions, known as the Sivers functions \cite{Siv90}. The other follows the QCD collinear factorization approach for cross sections when all observed momentum scales are much larger than the non-perturbative hadronic scale $1/{\rm fm}\sim \Lambda_{\rm QCD}$, and attributes the SSA to the twist-three transverse-spin dependent multi-parton correlation functions \cite{Efremov, qiu}. Unlike the first mechanism, in which the SSA measures the TMD parton distribution and the spin-dependence of the parton's transverse motion at a given momentum, the twist-three transverse-spin-dependent correlation functions reveal the net spin-dependence of the parton's transverse motion when its transverse momentum is integrated over.
Naturally, the moments of the spin-dependent TMD parton distributions could be related to the twist-three multi-parton correlation functions \cite{Boer:2003cm}. Although the two mechanisms each have their own kinematic domain of validity, they describe the same physics in the region where they overlap \cite{UnifySSA}. Most studies in both mechanisms have concentrated on the SSAs generated by either the quark Sivers function \cite{Vogelsang:2005cs,siverscompare,Ans94,MulTanBoe,Boer:2003tx} or the twist-three quark-gluon correlation function \cite{qiu,Kanazawa:2000hz,Qiu:2007ar,Kouvaris:2006zy}, which is defined as \begin{eqnarray} T_F(x, x)=\int\frac{dy_1^- dy_2^-}{4\pi}e^{ixP^+y_1^-} \langle P,s_\perp|\bar{\psi}(0)\gamma^+\left[ \epsilon^{s_\perp\sigma n\bar{n}}F_\sigma^{~ +}(y_2^-)\right] \psi(y_1^-)|P,s_\perp\rangle \, , \end{eqnarray} with the gauge links suppressed. Possibilities of accessing the transverse motion of gluons, i.e., the gluon Sivers functions, have also been investigated recently \cite{Anselmino:2004nk, Boer:2003tx}. Likewise, the spin-dependence of the gluon's transverse motion in the QCD collinear factorization approach is represented by the twist-three tri-gluon correlation function, $T_G(x,x)$, defined as \begin{eqnarray} T_G(x, x)=\int\frac{dy_1^- dy_2^-}{2\pi}e^{ixP^+y_1^-}\frac{1}{xP^+}\langle P,s_\perp|F^+_{~~\alpha}(0)\left[ \epsilon^{s_\perp\sigma n\bar{n}}F_\sigma^{~ +}(y_2^-)\right] F^{\alpha+}(y_1^-)|P,s_\perp\rangle \, . \label{TG_correlation} \end{eqnarray} Its contribution to the SSA was first studied by Ji in the context of direct photon production in hadronic collisions \cite{Ji:1992eu}. Although direct-photon production provides a nice possibility to access $T_G(x,x)$, the extraction of $T_G(x,x)$ could be difficult due to the contribution from the quark-initiated subprocesses and the limited knowledge of $T_F(x,x)$.
In this paper, we study the single-transverse spin asymmetry for open charm production in semi-inclusive deep inelastic scattering (SIDIS) and argue that the SSA in SIDIS is a clean observable from which to extract the twist-three transverse-spin dependent tri-gluon correlation function, $T_G(x,x)$. The $D$-meson production at large enough transverse momentum $P_{h\perp}$ in SIDIS is dominated by the photon-gluon fusion subprocess, $\gamma^*+g\to c+\bar{c}$, since the intrinsic charm contribution \cite{Brodsky:ic} is less relevant at large $P_{h\perp}$ and the photon-charm subprocess, $\gamma^*+c\to c+g$, is suppressed by the small charm quark distribution at the collision energies of interest in this paper. We calculate the SSA for $D$-meson production in lepton-proton SIDIS in the QCD collinear factorization approach, and find that the asymmetry is directly proportional to the diagonal part of the twist-three tri-gluon correlation function in the polarized proton, $T_G(x,x)$, because the photon-gluon fusion subprocess at this order does not have the so-called ``hard-pole'' contribution to the asymmetry \cite{UnifySSA}. Therefore, the measurement of the SSA for $D$-meson production in SIDIS is a direct measurement of the tri-gluon correlation function, $T_G(x,x)$. With a simple model for the tri-gluon correlation function, obtained under an assumption similar to the one guiding the modeling of the quark-gluon correlation function, $T_F(x,x)$, we find that the asymmetry for both COMPASS \cite{compass} and eRHIC \cite{Deshpande:2005wd} kinematics is sizable and could be measured experimentally. Recently, the COMPASS experiment successfully measured the gluon polarization, $\Delta G$, in a longitudinally polarized proton based on the photon-gluon fusion process by tagging charmed mesons \cite{compass}.
This certainly makes the measurement of the SSA of open charm production in the same experimental setting, and the extraction of the transverse-spin dependent tri-gluon correlation function, $T_G(x,x)$, promising. We find that the SSA of $D$-meson production in SIDIS has a minimum at $z_h\sim 0.5$, and it increases as $z_h$ moves away from this central value. This increase of the SSA away from $z_h\sim 0.5$ has the same physics origin as the observed increase of the magnitude of the SSA in hadronic pion production as a function of increasing $x_F$ (or rapidity $y$), and is a prediction of the twist-three formalism of the QCD collinear factorization approach to the SSA. We also find that the twist-three gluonic contribution to the SSA shows both similarities to and differences from the twist-three fermionic contributions. Both gluonic and fermionic twist-three contributions to the SSA have the so-called ``non-derivative'' and ``derivative'' terms, which correspond to the terms that are proportional to the twist-three correlation functions and to the derivative of the correlation functions, respectively. As noticed in Refs.~\cite{Kouvaris:2006zy,Koike:proof}, the fermionic ``non-derivative'' and ``derivative'' terms can be combined together, and the dependence on the non-perturbative twist-three correlation function, $T_F(x,x)$, is proportional to a simple combination, $T_F(x,x)-xT_F'(x,x)$. However, our explicit calculation shows that the ``non-derivative'' and ``derivative'' gluonic contributions cannot be combined into the same simple form, due to the difference in the partonic hard parts of these two terms. The same approach discussed in this paper can be applied to studying the SSA in open charm production in hadronic collisions, which is dominated by gluon-gluon fusion if $T_G(x,x)$ is not too small compared to $T_F(x,x)$ \cite{Yuan:2008it,Vitev:2006bi,Kang:hadron}.
With the extraction of the tri-gluon correlation function, $T_G(x,x)$, and the knowledge of $T_F(x,x)$, we enter a new era of exploring non-perturbative physics beyond the parton distribution functions (PDFs), which have been well studied in the past thirty years. The rest of our paper is organized as follows. In Sec. \ref{ssa calculation}, we present our calculation of the SSAs for open charm production in SIDIS. We first introduce the relevant kinematics of open charm production in SIDIS and present the formula for the unpolarized cross section. We then derive the twist-three formula for the SSA in the QCD collinear factorization approach and express the asymmetry in terms of the tri-gluon transverse-spin dependent correlation function, $T_G(x,x)$. We close the section with a discussion of the calculation of the color factor of the partonic hard part, which depends on the color contraction of the three gluon fields in the definition of the tri-gluon correlation function. In principle, there could be two gauge invariant tri-gluon correlation functions, $T_G(x,x)$ and $\widetilde{T}_G(x,x)$ defined later, due to the two independent ways to neutralize the color of the three gluon fields in the matrix elements. We point out that only one of them, $T_G(x,x)$, could be related to the gluon Sivers function \cite{Anselmino:2004nk}. In Sec. \ref{numerical}, we estimate the production rate of open charm mesons in SIDIS for both COMPASS and eRHIC kinematics. We choose a simple ansatz for the tri-gluon correlation function, $T_G(x,x)$, and present our predictions for the SSAs of open charm production at the existing COMPASS experiment and the planned eRHIC experiment. Finally, we conclude our paper in Sec. \ref{conclusion}. \section{Calculation of single-spin asymmetry} \label{ssa calculation} We start this section by specifying our notation and the kinematics of SIDIS.
We consider the scattering process of an unpolarized lepton, $e$, on a polarized hadron, $p$, \begin{eqnarray} e(\ell)+p(P, s_\perp)\to e(\ell')+h(P_h)+X, \end{eqnarray} where $s_\perp$ is the transverse spin vector defined below, and $h$ represents the observed $D$ meson with momentum $P_h$ and mass $m_h$. For the collision energies of interest in this paper, we work in the approximation of one-photon exchange, and define the virtual photon momentum $q=\ell-\ell'$ and its invariant mass $Q^2=-q^2$. We adopt the usual SIDIS variables: \begin{eqnarray} S_{ep}=(P+\ell)^2, \qquad x_B=\frac{Q^2}{2P\cdot q},\qquad y=\frac{P\cdot q}{P\cdot \ell}=\frac{Q^2}{x_B S_{ep}},\qquad z_h=\frac{P\cdot P_h}{P\cdot q}. \end{eqnarray} It is also convenient to introduce the ``transverse'' component of the virtual photon momentum, $q$, as \begin{eqnarray} q_t^\mu=q^\mu-\frac{q\cdot P_h}{P\cdot P_h}P^\mu-\frac{q\cdot P}{P\cdot P_h}P_h^\mu, \end{eqnarray} which is a space-like vector, orthogonal to $P$ and (up to corrections of order $m_h^2$) to $P_h$. We define \begin{eqnarray} \vec{q}^{\,2}_\perp\equiv -q_t^\mu q_{t\mu}=Q^2\left[1+\frac{1}{x_B}\frac{q\cdot P_h}{P\cdot P_h}\right]-\frac{m_h^2}{z_h^2}. \label{qT} \end{eqnarray} To completely specify the kinematics, we will work in the so-called {\it hadron frame} \cite{sidis}, where the virtual photon and the polarized proton are taken to have only one spatial component, along the $z$-direction: \begin{eqnarray} P^{\mu}=P^+\bar{n}^{\mu}, \quad\quad q^{\mu}=-x_B P^+ \bar{n}^{\mu} +\frac{Q^2}{2x_B P^+}n^{\mu}, \end{eqnarray} where the light-cone momentum components are defined as $P^{\pm}=(P^0\pm P^3)/\sqrt{2}$, and $\bar{n}^{\mu}=(1^+,0^-,0_\perp)$, $n^{\mu}=(0^+,1^-,0_\perp)$ are two light-like vectors with $\bar{n}\cdot n=1$.
The momentum of the final-state $D$-meson can be written as \begin{eqnarray} P_h^\mu=\frac{x_B P^+}{z_h Q^2}m_{h\perp}^2 \bar{n}^{\mu}+\frac{z_h Q^2}{2x_B P^+}n^\mu+P_{h\perp}^\mu, \end{eqnarray} where $m_{h\perp}^2=m_h^2+P_{h\perp}^2$ with $P_{h\perp}=\sqrt{\vec{P}_{h\perp}^2}$. From Eq. (\ref{qT}) one can show that $q_\perp\equiv \sqrt{\vec{q}_\perp^{\,2}}=P_{h\perp}/z_h$ in this hadron frame, independent of the mass $m_h$. In this hadron frame, one usually chooses the coordinate system such that the virtual photon has a vanishing energy component, corresponding to $P^+=Q/(\sqrt{2}x_B)$, and $P_{h}$ lies in the $xz$-plane (known as the {\it hadron plane}), as shown in Fig.~\ref{frame}. The lepton momenta, $\ell$ and $\ell'$, define the {\it lepton plane} and can be expressed in terms of the variables $\psi$ and $\phi$ as follows \cite{sidis}, \begin{eqnarray} \ell^\mu&=&\frac{Q}{2}\left(\cosh\psi, \sinh\psi \cos\phi, \sinh\psi \sin\phi, -1\right),\nonumber\\ \ell'^\mu&=&\frac{Q}{2}\left(\cosh\psi, \sinh\psi \cos\phi, \sinh\psi \sin\phi, +1\right), \end{eqnarray} where $\phi$ is the azimuthal angle between the hadron and lepton planes, as indicated in Fig.~\ref{frame}, and \begin{eqnarray} \cosh\psi=\frac{2x_B S_{ep}}{Q^2}-1=\frac{2}{y}-1. \end{eqnarray} We parametrize the transverse spin vector of the initial proton, $s_\perp$, as \begin{eqnarray} s_\perp=(0,\cos\phi_s, \sin\phi_s,0), \end{eqnarray} where $\phi_s$ is the azimuthal angle of $s_\perp$ measured from the hadron plane, as shown in Fig.~\ref{frame}. If one instead uses the lepton plane as the reference to define the azimuthal angle of $s_\perp$ as $\Phi_S$, and that of the hadron plane as $\Phi_h$, one has the relations $\phi_s=\Phi_S-\Phi_h$ and $\phi=-\Phi_h$.
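The hadron-frame kinematics above can be cross-checked numerically. The following pure-Python sketch (all parameter values are illustrative, not taken from the text) verifies that the parametrized lepton momenta are light-like, that $q=\ell-\ell'$ has a vanishing energy component and $q^2=-Q^2$ for $P^+=Q/(\sqrt{2}x_B)$, that the $D$-meson momentum is on shell, and that $q_\perp=P_{h\perp}/z_h$ follows from the definition of $q_t$:

```python
import math

def mdot(a, b):
    """Minkowski product with metric (+,-,-,-)."""
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# Illustrative kinematics (GeV units), not from the text.
Q, x_B, z_h, phi = 2.0, 0.1, 0.4, 0.7
m_h, Ph_perp, S_ep = 1.87, 1.5, 300.0

y = Q**2 / (x_B * S_ep)
psi = math.acosh(2.0 / y - 1.0)          # cosh(psi) = 2/y - 1

# Lepton momenta in the hadron frame.
l  = [Q/2*math.cosh(psi), Q/2*math.sinh(psi)*math.cos(phi),
      Q/2*math.sinh(psi)*math.sin(phi), -Q/2]
lp = [l[0], l[1], l[2], +Q/2]
q  = [a - b for a, b in zip(l, lp)]

assert abs(mdot(l, l)) < 1e-9 and abs(mdot(lp, lp)) < 1e-9   # massless leptons
assert abs(q[0]) < 1e-9 and abs(mdot(q, q) + Q**2) < 1e-9    # q = (0,0,0,-Q)

# Light-cone decomposition with nbar = (1,0,0,1)/sqrt(2), n = (1,0,0,-1)/sqrt(2).
r2 = math.sqrt(2.0)
def from_lc(ap, am, ax, ay):
    return [(ap + am)/r2, ax, ay, (ap - am)/r2]

Pp = Q / (r2 * x_B)                       # P^+ chosen so that q^0 = 0
P  = from_lc(Pp, 0.0, 0.0, 0.0)
mhT2 = m_h**2 + Ph_perp**2
Ph = from_lc(x_B*Pp*mhT2/(z_h*Q**2), z_h*Q**2/(2*x_B*Pp), Ph_perp, 0.0)
assert abs(mdot(Ph, Ph) - m_h**2) < 1e-9  # D meson on shell

# q_t = q - (q.Ph/P.Ph) P - (q.P/P.Ph) Ph, and q_perp = Ph_perp/z_h.
a, b = mdot(q, Ph)/mdot(P, Ph), mdot(q, P)/mdot(P, Ph)
qt = [q[i] - a*P[i] - b*Ph[i] for i in range(4)]
q_perp = math.sqrt(-mdot(qt, qt))
assert abs(q_perp - Ph_perp / z_h) < 1e-9
```

The check confirms, in particular, that $q_\perp$ is independent of $m_h$, since changing `m_h` only shifts $m_{h\perp}^2$ and drops out of $-q_t^2$.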
\begin{figure}[htb]\centering \psfig{file=SIDIS_Frame.eps,width=3.7in} \caption{Kinematics of the SIDIS process in the hadron frame.} \label{frame} \end{figure} The single transverse-spin asymmetry is defined as \begin{eqnarray} A_N=\frac{\sigma(s_\perp)-\sigma(-s_\perp)}{\sigma(s_\perp)+\sigma(-s_\perp)} =\frac{d\Delta\sigma(s_\perp)}{dx_B dy dz_h dP_{h\perp}^2 d\phi}\left/ \frac{d\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi}\right.. \label{AN} \end{eqnarray} In the following subsections, we will first review the unpolarized cross section at leading order, and then derive the single-transverse polarized cross section, $\Delta\sigma(s_\perp)$. \subsection{Unpolarized cross section} The unpolarized differential SIDIS cross section may be calculated from the formula \begin{eqnarray} \frac{d\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi}=\frac{\pi \alpha_{em}^2 y}{Q^4} L_{\mu\nu}(\ell,\,q)W^{\mu\nu}(P,\,q,\, P_h), \label{LW} \end{eqnarray} where $L_{\mu\nu}$ and $W^{\mu\nu}$ are the leptonic and hadronic tensors, respectively. The leptonic tensor is given by \begin{eqnarray} L_{\mu\nu}(\ell,\,q)=2\left(\ell_\mu \ell'_\nu+\ell'_\mu \ell_\nu-g_{\mu\nu}Q^2/2\right). \end{eqnarray} The hadronic tensor has the following expression in QCD: \begin{eqnarray} W^{\mu\nu}(P,\,q,\, P_h)=\frac{1}{4z_h}\sum_{X}\int\frac{d^4\xi}{(2\pi)^4}e^{iq\cdot \xi}\langle P|J_\mu(\xi)|X\,P_h\rangle\langle X\,P_h|J_\nu(0)|P\rangle, \end{eqnarray} where $J^\mu$ is the quark electromagnetic current and $X$ represents all final-state hadrons other than the observed open charm meson $h$. The hadronic tensor can be further decomposed in terms of five parity and current conserving tensors ${\cal V}_i^{\mu\nu}$ \cite{sidis}: \begin{eqnarray} W^{\mu\nu}=\sum_{i=1}^5 {\cal V}_i^{\mu\nu} W_i, \end{eqnarray} where the $W_i$ are structure functions which may be projected out from $W^{\mu\nu}$ by $W_i=W_{\rho\sigma}\tilde{{\cal V}}_i^{\rho\sigma}$, with the corresponding inverse tensors $\tilde{{\cal V}}_i$.
Both ${\cal V}_i$ and $\tilde{{\cal V}}_i$ can be constructed from four orthonormal basis vectors: \begin{eqnarray} T^\mu&=&\frac{1}{Q}\left( q^\mu+2x_B P^\mu \right),\nonumber\\ X^\mu&=&\frac{1}{q_\perp}\left[ \frac{P_h^\mu}{z_h}-q^\mu-\left( 1+\frac{q_\perp^2+m_h^2/z_h^2}{Q^2} \right)x_B P^\mu \right],\nonumber\\ Y^\mu&=&\epsilon^{\mu\nu\rho\sigma}Z_\nu X_\rho T_\sigma,\nonumber\\ Z^\mu&=&-\frac{q^\mu}{Q}, \end{eqnarray} with normalization $T^2=1$ and $X^2=Y^2=Z^2=-1$, which reduce to those in \cite{sidis} when $m_h=0$. The tensor ${\cal V}_5$ does not contribute to the cross section when contracted with the symmetric $L_{\mu\nu}$; the other four tensors and their inverses are given as \cite{sidis}: \begin{eqnarray} &&{\cal V}^{\mu\nu}_1=X^\mu X^\nu+Y^\mu Y^\nu, \qquad {\cal V}^{\mu\nu}_2=g^{\mu\nu}+Z^\mu Z^\nu, \qquad \nonumber\\ &&{\cal V}^{\mu\nu}_3=T^\mu X^\nu+T^\nu X^\mu, \qquad {\cal V}^{\mu\nu}_4=X^\mu X^\nu-Y^\mu Y^\nu, \\ &&\tilde{{\cal V}}^{\mu\nu}_1=\frac{1}{2}\left(2T^\mu T^\nu+X^\mu X^\nu+Y^\mu Y^\nu\right), \qquad \tilde{{\cal V}}^{\mu\nu}_2=T^\mu T^\nu, \qquad \nonumber\\ &&\tilde{{\cal V}}^{\mu\nu}_3=-\frac{1}{2}\left(T^\mu X^\nu+T^\nu X^\mu\right), \qquad \tilde{{\cal V}}^{\mu\nu}_4=\frac{1}{2}\left(X^\mu X^\nu-Y^\mu Y^\nu\right). \end{eqnarray} The contraction of $L_{\mu\nu}$ and ${\cal V}^{\mu\nu}_i$ leads to various angular distributions. Defining ${\cal A}_i=L_{\mu\nu}{\cal V}_i^{\mu\nu}/Q^2$, we have \begin{eqnarray} {\cal A}_1=1+\cosh^2\psi, \qquad {\cal A}_2=-2, \qquad {\cal A}_3=-\cos\phi \sinh{2\psi}, \qquad {\cal A}_4=\cos{2\phi} \sinh^2{\psi}. \end{eqnarray} We can then write the cross section in Eq.~(\ref{LW}) as \begin{eqnarray} \frac{d\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi}=\frac{\pi\alpha_{em}^2 y}{Q^2} \sum_{i=1}^4 {\cal A}_iW_i.
\end{eqnarray} At large $P_{h\perp}\sim Q$, the collinear factorization is expected to be valid, and $W_i$ can be factorized into a convolution of the parton distribution function, the fragmentation function for the produced $D$ meson, and a short-distance partonic hard part. The lowest-order (LO) contribution to the partonic hard part comes from the photon-gluon fusion subprocess $\gamma^*+g\to Q(p_c)+\bar{Q}(p_{\bar{c}})$, which gives the leading order cross section as \begin{eqnarray} \frac{d\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi} &=& \sigma_0 \int_{x_{min}}^1\frac{dx}{x} \int \frac{dz}{z}\, G(x)D(z)\, \delta\left(\frac{P_{h\perp}^2}{z_h^2} -\frac{(1-\hat{x})(1-\hat{z})}{\hat{x}\hat{z}}Q^2 +\frac{m_c^2}{\hat{z}^2}\right) \left(\frac{1}{2}\right)\sum_{i=1}^4{\cal A}_i\hat{W}_i, \label{unpolarized} \end{eqnarray} where $\sigma_0=e_c^2\alpha_{em}^2\alpha_s y/(8\pi z_h^2 Q^2)$, $\hat{x}=x_B/x$, $\hat{z}=z_h/z$, and $e_c$ and $m_c$ are the fractional charge and mass of the charm quark, respectively. The $P_{h\perp}^2/z_h^2$ in the $\delta$-function could be replaced by $q_\perp^2$, and the $1/2$ is the color factor. In Eq.~(\ref{unpolarized}), $G(x)$ is the unpolarized gluon distribution function with gluon momentum fraction $x$, and $D(z)$ is the fragmentation function for the charm quark to become a $D$ meson with $z=P\cdot P_h/P\cdot p_c$. We have suppressed the dependence on the factorization and renormalization scales for simplicity. We have used $P_{h\perp}\approx z p_{c\perp}$ inside the $\delta$-function, which fixes the $z$ integration. The lower limit of the $x$ integration, $x_{min}$, is given by: \begin{eqnarray} x_{min} = \left\{ \begin{array}{ll} x_B\left[1+\frac{P_{h\perp}^2+m_c^2}{z_h(1-z_h)Q^2}\right], & \quad \mbox{if } z_h+\sqrt{z_h^2+\frac{P_{h\perp}^2}{m_c^2}}\geq 1;\\ \\ x_B\left[1+\frac{2m_c^2}{Q^2}\left(1+\sqrt{1+\frac{P_{h\perp}^2} {z_h^2m_c^2}}\right)\right], & \quad \mbox{if } z_h+\sqrt{z_h^2+\frac{P_{h\perp}^2}{m_c^2}}\leq 1.\\ \end{array} \right.
\label{xmin} \end{eqnarray} The short-distance parts $\hat{W}_i$ are calculated from the photon-gluon scattering and are given by \begin{eqnarray} \hat{W}_1 &=& 2\left[\frac{\hat{u}}{\hat{t}}+\frac{\hat{t}}{\hat{u}} -\frac{2\hat{s}Q^2}{\hat{t}\hat{u}} +\frac{4\hat{x}^2\hat{s}}{Q^2}\right] +4m_c^2\left[\frac{Q^2-2\hat{t}}{\hat{t}^2}+\frac{Q^2-2\hat{u}}{\hat{u}^2} -\frac{2\hat{x}^2}{Q^2}\left(\frac{\hat{u}}{\hat{t}} +\frac{\hat{t}}{\hat{u}}+2\right)\right] -8m_c^4\left[\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right]^2,\nonumber\\ \hat{W}_2 &=& \frac{16\hat{x}^2}{Q^2}\left[\hat{s}-m_c^2\left(\frac{\hat{u}}{\hat{t}} +\frac{\hat{t}}{\hat{u}}+2\right)\right],\nonumber\\ \hat{W}_3 &=& 4\hat{x}\hat{z}\frac{q_\perp}{Q}(\hat{u}-\hat{t}) \left[\frac{\hat{s}-Q^2}{\hat{t}\hat{u}} -2m_c^2\left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right)^2 \right], \nonumber\\ \hat{W}_4 &=& 8\hat{z}^2q_\perp^2\left[\frac{Q^2}{\hat{t}\hat{u}} +m_c^2\left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right)^2 \right], \label{WLO} \end{eqnarray} where $\hat{s}, \hat{t}, \hat{u}$ are defined at the partonic level as \begin{eqnarray} \hat{s}\equiv (xP+q)^2=\frac{1-\hat{x}}{\hat{x}}Q^2, \qquad \hat{t}\equiv (p_c-q)^2-m_c^2=-\frac{1-\hat{z}}{\hat{x}}Q^2, \qquad \hat{u}\equiv (xP-p_c)^2-m_c^2=-\frac{\hat{z}}{\hat{x}}Q^2\, , \label{stu} \end{eqnarray} which differ from some definitions used in the literature. We find that this definition makes the expressions of $\hat{W}_i$ for massive quark production simpler. Taking $m_c=0$ in Eqs.~(\ref{WLO}) and (\ref{stu}), one recovers the results for massless quark production derived in \cite{Mendez:1978zx, Koike}. \subsection{Twist-three polarized cross section} We now proceed to derive the single transverse-spin dependent cross section by applying the same method developed in Refs.~\cite{qiu,Kouvaris:2006zy}.
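Before turning to the polarized case, two consistency checks on the leading-order kinematics above can be scripted directly (parameter values below are illustrative, not from the text): the partonic invariants of Eq.~(\ref{stu}) obey $\hat{s}+\hat{t}+\hat{u}=-Q^2$, and the two branches of $x_{min}$ in Eq.~(\ref{xmin}) match continuously at the boundary $z_h+\sqrt{z_h^2+P_{h\perp}^2/m_c^2}=1$:

```python
import math

# Partonic invariants from Eq. (stu): for any (x_hat, z_hat) they sum to -Q^2.
Q2 = 4.0
for x_hat in (0.2, 0.5, 0.9):
    for z_hat in (0.3, 0.6, 0.8):
        s_hat = (1 - x_hat) / x_hat * Q2
        t_hat = -(1 - z_hat) / x_hat * Q2
        u_hat = -z_hat / x_hat * Q2
        assert abs(s_hat + t_hat + u_hat + Q2) < 1e-12

# Piecewise lower limit of the x integration, Eq. (xmin).
def x_min(x_B, Q2, z_h, Ph_perp2, mc2):
    if z_h + math.sqrt(z_h**2 + Ph_perp2 / mc2) >= 1.0:
        return x_B * (1.0 + (Ph_perp2 + mc2) / (z_h * (1.0 - z_h) * Q2))
    return x_B * (1.0 + 2.0 * mc2 / Q2
                  * (1.0 + math.sqrt(1.0 + Ph_perp2 / (z_h**2 * mc2))))

# On the branch boundary, P_hperp^2 = m_c^2 (1 - 2 z_h) for z_h < 1/2,
# and both expressions reduce to x_B [1 + 2 m_c^2/(z_h Q^2)].
x_B, z_h, mc2 = 0.05, 0.3, 1.69          # m_c = 1.3 GeV, illustrative
Ph2_b = mc2 * (1.0 - 2.0 * z_h)
lo = x_B * (1.0 + (Ph2_b + mc2) / (z_h * (1.0 - z_h) * Q2))
hi = x_B * (1.0 + 2.0 * mc2 / Q2
            * (1.0 + math.sqrt(1.0 + Ph2_b / (z_h**2 * mc2))))
assert abs(lo - hi) < 1e-12
assert abs(lo - x_B * (1.0 + 2.0 * mc2 / (z_h * Q2))) < 1e-12
```

The continuity at the boundary is what makes the piecewise definition of $x_{min}$ unambiguous when scanning over $P_{h\perp}$ at fixed $z_h$.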
When both physically observed scales $Q, P_{h\perp}\gg \Lambda_{\rm QCD}$, the spin-dependent cross section for $D$-meson production is expected to be factorized in terms of the twist-three transverse-spin dependent tri-gluon correlation function \cite{Qiu:1990cu}, \begin{eqnarray} d\Delta\sigma(s_\perp)\propto \frac{1}{2 S_{ep}}\int dz D(z) \int dx_1 dx_2 {\cal T}_G(x_1,x_2)\ i \epsilon^{\rho s_\perp n\bar{n}} \lim_{k_\perp\to 0}\frac{\partial}{\partial k_\perp^\rho}H(x_1,x_2,k_\perp), \label{Dsig_form} \end{eqnarray} where $1/2S_{ep}$ is the flux factor and $\epsilon^{\rho s_\perp n\bar{n}}=\epsilon^{\rho\sigma\mu\nu}s_{\perp\sigma}n_\mu\bar{n}_\nu$, \begin{eqnarray} {\cal T}_G(x_1,x_2)=\int \frac{P^+dy_1^- dy_2^-}{2\pi}e^{ix_1P^+y_1^- + i (x_2-x_1)P^+y_2^-}d_{\alpha\beta}\langle P,s_\perp| A^{\alpha}(0)\left[ \epsilon^{s_\perp\sigma n\bar{n}}F_\sigma^{~ +}(y_2^-)\right] A^{\beta}(y_1^-)|P, s_\perp\rangle, \end{eqnarray} where $d_{\alpha\beta}=-g_{\alpha\beta}+\bar{n}_\alpha n_\beta+\bar{n}_\beta n_\alpha$. ${\cal T}_G(x_1,x_2)$ is related to the tri-gluon correlation function through $T_G(x,x)=x{\cal T}_G(x,x)$. Since ${\cal T}_G(x_1,x_2)$ is real, we need an imaginary part of the hard-scattering function $H(x_1, x_2, k_\perp)$ to contract with $i \epsilon^{\rho s_\perp n\bar{n}}$ in order to obtain a real $\Delta\sigma(s_\perp)$. This imaginary part comes from the interference between the real part of the scattering amplitude with a single-gluon initial state and the imaginary part of the partonic scattering amplitude with an extra gluon, see Fig. \ref{phase}. Technically, the imaginary part, or the phase ``$i$'', arises when the virtual momentum integral of the extra gluon is evaluated by the residue of an unpinched pole from a propagator in the amplitude with the extra gluon. Such a propagator is indicated by a short bar in the diagrams in Fig.~\ref{twist3_LO}.
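The way the phase ``$i$'' emerges from an unpinched pole is the distributional identity $1/(x+i\epsilon)\to {\rm P}(1/x)-i\pi\delta(x)$, so the imaginary part of $\int dx\, g(x)/(x+i\epsilon)$ tends to $-\pi g(0)$ as $\epsilon\to 0$. A small numerical illustration of this limit (a Gaussian test function and grid parameters chosen here for illustration only):

```python
import math

def smeared_integral(eps, h=1e-3, L=12.0):
    """Midpoint-rule evaluation of  integral of exp(-x^2)/(x + i*eps) dx."""
    n = int(2 * L / h)
    total = 0.0 + 0.0j
    for k in range(n):
        x = -L + (k + 0.5) * h           # symmetric midpoint grid
        total += math.exp(-x * x) / (x + 1j * eps) * h
    return total

g0 = 1.0                                  # g(0) for g(x) = exp(-x^2)
I1, I2 = smeared_integral(0.1), smeared_integral(0.01)

# Real part vanishes by symmetry (principal value of an odd integrand);
# imaginary part converges to -pi * g(0) as eps -> 0.
assert abs(I1.real) < 1e-6 and abs(I2.real) < 1e-6
assert abs(I2.imag + math.pi * g0) < abs(I1.imag + math.pi * g0)
assert abs(I2.imag + math.pi * g0) < 0.05
```

The same mechanism is at work in the propagator above: the principal-value piece drops out of the spin asymmetry, and only the $-i\pi\delta(\cdots)$ term survives to fix the loop momentum fraction.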
\begin{figure}[htb]\centering \psfig{file=twist3.eps,width=1.4in} \caption{A typical diagram that gives a non-vanishing contribution to the SSA.} \label{phase} \end{figure} There are a total of eight partonic diagrams contributing to the twist-three polarized cross section, $\Delta\sigma(s_\perp)$. Four of them are shown in Fig.~\ref{twist3_LO}, and the other four are obtained by attaching the extra gluon in the same way on the right side of the final-state cut. When the extra gluon is attached to the left side of the final-state cut, as shown in Fig.~\ref{twist3_LO}, the phase from the propagator marked by the bar arises effectively as \begin{eqnarray} \frac{1}{\left(p_c-(x_2-x_1)P-k_\perp\right)^2-m_c^2+i\epsilon}&=&\frac{1}{2P\cdot p_c}\frac{1}{x_1-x_2+v_1\cdot k_\perp+i\epsilon}+{\cal O}(k_\perp^2)\nonumber\\ &\rightarrow& \frac{-i\pi}{2P\cdot p_c}\delta(x_1-x_2+v_1\cdot k_\perp), \end{eqnarray} to fix the virtual loop momentum fraction $x_1=x_2-v_1\cdot k_\perp$ with $v_1^\mu=-2p_c^\mu/(2P\cdot p_c)$. \begin{figure}[htb]\centering \psfig{file=figaa.eps,width=1.65in} \hskip 0.1in \psfig{file=figbb.eps,width=1.65in} \hskip 0.1in \psfig{file=figcc.eps,width=1.65in} \hskip 0.1in \psfig{file=figdd.eps,width=1.65in} \caption{Feynman diagrams that give the twist-three contribution to the spin-dependent cross section. The short bar indicates the propagator that produces the pole.
The letters, $A,B$ and $C$, represent the color of the initial-state gluons.} \label{twist3_LO} \end{figure} On the other hand, the on-shell condition associated with the unobserved anti-charm quark fixes the momentum fraction of the active initial-state gluon as \begin{eqnarray} \delta(p_{\bar{c}}^2-m_c^2)&=&\delta\left((x_2 P+k_\perp+q-p_c)^2-m_c^2\right)\nonumber\\ &=&\frac{1}{2P\cdot (q-p_c)}\delta(x_2-x-v_2\cdot k_\perp), \end{eqnarray} where terms at ${\cal O}(k_\perp^2)$ and higher are neglected and \begin{eqnarray} x=-\frac{(q-p_c)^2-m_c^2}{2P\cdot (q-p_c)}, \quad\quad v_2^\mu=\frac{2p_c^\mu}{2P\cdot (q-p_c)}. \end{eqnarray} When the extra gluon is attached to the right hand side of the cut, the phase arises as \begin{eqnarray} \frac{1}{\left(p_c+(x_2-x_1)P+k_\perp\right)^2-m_c^2-i\epsilon}&=&\frac{1}{2P\cdot p_c}\frac{1}{x_2-x_1-v_1\cdot k_\perp-i\epsilon}+{\cal O}(k_\perp^2)\nonumber\\ &\rightarrow& \frac{i\pi}{2P\cdot p_c}\delta(x_2-x_1-v_1\cdot k_\perp), \end{eqnarray} and the on-shell condition of the unobserved anti-charm quark gives \begin{eqnarray} \delta(p_{\bar{c}}^2-m_c^2)=\frac{1}{2P\cdot (q-p_c)}\delta(x_1-x), \end{eqnarray} which has no $k_\perp$-dependence. Applying the so-called ``master formula'' in Ref.~\cite{Kouvaris:2006zy}, we have from Eq.~(\ref{Dsig_form}) the following general expression: \begin{eqnarray} &&\lim_{k\perp\to 0}\frac{\partial}{\partial k_\perp^\rho}\int dx_1\int dx_2\, {\cal T}_G(x_1,x_2) \left[ H_L(x_1, x_2, k_\perp)\delta(x_1-x_2+v_1\cdot k_\perp)\delta(x_2-x-v_2\cdot k_\perp) \right. \nonumber\\ &&\left. 
-H_R(x_1, x_2, k_\perp)\delta(x_2-x_1-v_1\cdot k_\perp)\delta(x_1-x)\right]\nonumber\\ &&=(v_2-v_1)^\rho H_L(x,x,0)\frac{d}{dx}\left(\frac{T_G(x,x)}{x}\right)+\frac{T_G(x,x)}{x} \nonumber\\ &&\times\lim_{k_\perp\to 0}\frac{\partial}{\partial k_\perp^\rho} \left[ H_L(x+(v_2-v_1)\cdot k_\perp, x+v_2\cdot k_\perp, k_\perp)-H_R(x, x+v_1\cdot k_\perp, k_\perp)\right], \label{master} \end{eqnarray} where we have already used the facts that $H_L(x,x,0)=H_R(x,x,0)$ and $T_G(x,x)=x{\cal T}_G(x,x)$. The fact that Eq.~(\ref{master}) depends only on the diagonal part of the tri-gluon correlation function, ${\cal T}_G(x_1,x_2)$ with $x_1=x_2=x$, is a consequence of the fact that the photon-gluon fusion subprocess at this order has only the so-called ``soft-pole'' contribution to the SSA \cite{qiu,UnifySSA}. Therefore, the measurement of the SSA in $D$-meson production in SIDIS is a direct measurement of the tri-gluon correlation function, $T_G(x,x)$. In terms of $\hat{s},\hat{t},\hat{u}$ defined in the previous subsection, we have \begin{eqnarray} v_1^\mu=\frac{2x}{\hat{u}}p_c^\mu, \qquad v_2^\mu=-\frac{2x}{\hat{t}} p_c^\mu, \qquad \left(v_2-v_1\right)^\mu=-\frac{2x}{\hat{t}}\left(1+\frac{\hat{t}}{\hat{u}}\right)p_c^\mu. \end{eqnarray} Using Eqs.
(\ref{Dsig_form}), (\ref{master}), and adding the contributions from the eight diagrams together, we find the final expression for the fully differential single-transverse-spin-dependent cross section: \begin{eqnarray} \frac{d\Delta\sigma(s_\perp)}{dx_B dy dz_h dP_{h\perp}^2 d\phi} &=& \sigma_0\int_{x_{min}}^1 dx\int \frac{dz}{z}D(z) \delta\left(\frac{P_{h\perp}^2}{z_h^2} -\frac{(1-\hat{x})(1-\hat{z})}{\hat{x}\hat{z}}Q^2 +\frac{m_c^2}{\hat{z}^2}\right) \left(\frac{1}{4}\right) \nonumber\\ &\times& \left[\epsilon^{P_h s_\perp n \bar{n}} \left(\frac{\sqrt{4\pi\alpha_s}}{z\hat{t}}\right) \left(1+\frac{\hat{t}}{\hat{u}}\right)\right] \sum_{i=1}^{4} {\cal A}_i \left[-x\frac{d}{d x}\left(\frac{T_G(x,x)}{x}\right)\hat{W}_i+\left(\frac{T_G(x,x)}{x}\right)\hat{N}_i\right], \label{polarized} \end{eqnarray} where $1/4$ is the color factor, $T_G(x, x)$ is the tri-gluon correlation function defined in Eq.~(\ref{TG_correlation}), $\hat{W}_i$ are given in Eq.~(\ref{WLO}), and the hard parts for the ``non-derivative'' term, $\hat{N}_i$, are given by \begin{eqnarray} \hat{N}_1 &=& 4\left[\frac{2m_c^2-Q^2}{\hat{t}\hat{u}}+\frac{6\hat{x}^2}{Q^2}\right]\left[\left(\hat{s}-Q^2\right) -2m_c^2\left(\frac{\hat{u}}{\hat{t}} +\frac{\hat{t}}{\hat{u}}+2\right)\right], \nonumber\\ \hat{N}_2 &=& \frac{16\hat{x}^2}{Q^2}\left[\left(\hat{s}-Q^2\right) -2m_c^2\left(\frac{\hat{u}}{\hat{t}} +\frac{\hat{t}}{\hat{u}}+2\right)\right], \nonumber\\ \hat{N}_3 &=& \frac{2Q}{\hat{z}q_\perp}\left(\hat{u}-\hat{t}\right) \left[\left(\frac{4\hat{z}^2q_\perp^2}{\hat{t}\hat{u}} -\frac{1}{Q^2+\hat{s}}\right) \left(2m_c^2\left(\frac{1}{\hat{t}}+\frac{1}{\hat{u}}\right) -\frac{Q^2-\hat{s}}{Q^2+\hat{s}}\right)-2\hat{z}q_\perp^2\right], \nonumber\\ \hat{N}_4 &=& 8\left[2\hat{z}q_\perp^2-\frac{\hat{t}\hat{u}}{Q^2+\hat{s}}\right] \left[\frac{Q^2}{\hat{t}\hat{u}}+m_c^2\left(\frac{1}{\hat{t}} +\frac{1}{\hat{u}}\right)^2\right].
\label{NLO} \end{eqnarray} Eq.~(\ref{polarized}) is our main result for the leading order twist-three $T_G(x,x)$ contribution to the fully differential polarized cross section, $\Delta\sigma(s_\perp)$, of $D$-meson production in SIDIS. The single transverse-spin asymmetry for $D$-meson production in SIDIS is obtained by substituting Eqs.~(\ref{unpolarized}) and (\ref{polarized}) into Eq.~(\ref{AN}). Similar to the twist-three contributions to the SSAs generated by the fermionic quark-gluon correlation function, $T_F(x,x)$, the gluonic twist-three contribution to the SSA of $D$-meson production in Eq.~(\ref{polarized}) has both ``derivative'' and ``non-derivative'' terms, a unique feature of twist-three contributions. It was found that the fermionic ``non-derivative'' and ``derivative'' terms can be combined into the simple form $T_F(x,x)-xT'_F(x,x)$ \cite{Kouvaris:2006zy,Koike:proof}. However, from Eqs.~(\ref{WLO}) and (\ref{NLO}), it is clear that $\hat{W}_i\neq \hat{N}_i$. That is, our explicit calculation shows that such a simple combination does not hold for the contributions from the tri-gluon correlation function $T_G(x,x)$, and is therefore not universal. We close this section with a discussion of the calculation of the color factor $1/4$ in Eq.~(\ref{polarized}). The color factor in Eq.~(\ref{polarized}) depends on how the colors of the three gluon fields in the matrix element of the tri-gluon correlation function in Eq.~(\ref{TG_correlation}) are neutralized. If the color of the operator, $F^A(0)\,F^C(y_2^-)\,F^B(y_1^-)$, in Eq.~(\ref{TG_correlation}), with the Lorentz indices suppressed, is neutralized by $({\cal F}^C)_{AB}=-if^{CAB}$, with $f^{CAB}$ the fully antisymmetric structure constant of the color SU(3) group, we refer to this tri-gluon correlation function as $T_G(x,x)$, as expressed in Eq.~(\ref{TG_correlation}).
The corresponding color factor for the partonic part in Eq.~(\ref{polarized}) is calculated by contracting the color indices of the Feynman diagrams in Fig.~\ref{twist3_LO} with $\frac{i}{N(N^2-1)}\,f^{ABC}$, which gives the color factor $1/4$ in Eq.~(\ref{polarized}). On the other hand, if the color of the operator, $F^A(0)\,F^C(y_2^-)\,F^B(y_1^-)$, is neutralized by $({\cal D}^C)_{AB}=d^{CAB}$, which is symmetric under the interchange of any two indices, we have a different tri-gluon correlation function, which we refer to as $\widetilde{T}_G(x,x)$; it has the same expression as $T_G(x,x)$ except for the difference in the contraction of the gluon color. The color factor for the corresponding partonic hard part is calculated by contracting the color indices of the same Feynman diagrams with $\frac{N}{(N^2-1)(N^2-4)}\,d^{ABC}$, which also gives the color factor $1/4$. That is, there could be two tri-gluon correlation functions depending on how the colors of the three gluon fields are neutralized \cite{Yuan:private}. We find that both correlation functions are gauge invariant after inserting the necessary gauge links in the adjoint representation between the gluon field strengths in the matrix element in Eq.~(\ref{TG_correlation}), and they contribute to the SSAs with the same partonic hard parts and, potentially, different color factors \cite{Kang:hadron}. Since the color factors are the same in our case, adding the potential contribution from $\widetilde{T}_G(x,x)$ amounts to replacing the $T_G(x,x)$ in Eq.~(\ref{polarized}) by $T_G(x,x)+\widetilde{T}_G(x,x)$. We notice that all Feynman diagrams for producing a charm quark at this order in Fig.~\ref{twist3_LO} have the same color structure, ${\rm Tr}[T^A T^C T^B]=(d^{ACB}+if^{ACB})/4$, which leads to the overall color factor 1/4 for the contributions from both $T_G(x,x)$ and $\widetilde{T}_G(x,x)$.
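Both color contractions described above can be checked numerically. The following sketch (not from the paper; it only assumes the standard Gell-Mann matrices) builds the SU(3) generators, forms $f^{ABC}$ and $d^{ABC}$, contracts them with ${\rm Tr}[T^A T^C T^B]$ using the normalizations quoted in the text, and recovers the factor $1/4$ in both channels:

```python
import numpy as np

# SU(3) generators T^A = lambda^A / 2 built from the Gell-Mann matrices.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7][0, 0] = l[7][1, 1] = 1 / np.sqrt(3); l[7][2, 2] = -2 / np.sqrt(3)
T = l / 2
N = 3

# f^{ABC} = -2i Tr([T^A,T^B] T^C),  d^{ABC} = 2 Tr({T^A,T^B} T^C)
f = np.zeros((8, 8, 8), dtype=complex)
d = np.zeros((8, 8, 8), dtype=complex)
for a in range(8):
    for b in range(8):
        for c in range(8):
            f[a, b, c] = -2j * np.trace((T[a] @ T[b] - T[b] @ T[a]) @ T[c])
            d[a, b, c] = 2 * np.trace((T[a] @ T[b] + T[b] @ T[a]) @ T[c])

# Color trace of the hard diagrams: tr[a,b,c] = Tr[T^A T^C T^B]
tr = np.einsum('aij,cjk,bki->abc', T, T, T)

cf_f = (1j / (N * (N**2 - 1))) * np.einsum('abc,abc->', f, tr)
cf_d = (N / ((N**2 - 1) * (N**2 - 4))) * np.einsum('abc,abc->', d, tr)
# both cf_f and cf_d come out equal to 1/4 (to machine precision)
```

The $f$-channel contraction picks up only the $if^{ACB}$ piece of the trace (the $d$-piece drops by antisymmetry), and vice versa for the $d$-channel, which is why both normalizations yield the same $1/4$.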
However, for the SSAs of producing a $\bar{D}$ meson, which is fragmented from an anticharm quark, both the partonic part and the antisymmetric part of the color structure change sign, while the symmetric part of the color structure is unchanged. Therefore, the SSAs for $\bar{D}$-meson production in SIDIS at the leading order have the same functional form as that for $D$-meson production, except that the sum of the tri-gluon correlation functions, $T_G(x,x)+\widetilde{T}_G(x,x)$, is replaced by $T_G(x,x)-\widetilde{T}_G(x,x)$. That is, by comparing the SSAs for producing $D$ and $\bar{D}$ mesons in SIDIS, we could gain valuable information on both tri-gluon correlation functions. However, the relation could be complicated in $D$-meson production in hadronic collisions due to the additional color flow from the other colliding hadron \cite{Kang:hadron}. We also notice that the correlation function, $T_G(x,x)$, with the color neutralized by $({\cal F}^C)_{AB}=-if^{CAB}$, could potentially be related to the spin-dependent TMD gluon distribution, or the gluonic Sivers function \cite{Anselmino:2004nk}, since the middle gluon field strength of the operator, $F^A(0)\,F^C(y_2^-)\,F^B(y_1^-)$, could be related to the gauge link in the adjoint representation that is needed to define the TMD gluon distribution. Without knowing the size and sign of either tri-gluon correlation function, we will treat them as one combined tri-gluon correlation function, labeled by $T_G(x,x)$, in the following numerical estimates of the SSAs. \section{Phenomenology} \label{numerical} In this section we first evaluate the inclusive $D$-meson production rate at large $P_{h\perp}$ in SIDIS. We then propose a simple model for the tri-gluon correlation function $T_G(x, x)$ and estimate the size of the SSA for $D$-meson production in SIDIS.
The charm meson's transverse momentum, $P_{h\perp}$, is chosen to be along the $x$-direction in the {\it hadron frame}, and therefore, \begin{eqnarray} \epsilon^{P_h s_\perp n \bar{n}}=-P_{h\perp}\sin\phi_s. \end{eqnarray} The fully differential cross sections in Eqs.~(\ref{unpolarized}) and (\ref{polarized}) can be decomposed in terms of the independent angular distributions as follows, \begin{eqnarray} \frac{d\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi}&=&\sigma_0^U+\sigma_1^U\cos{\phi}+\sigma_2^U \cos{2\phi}, \nonumber\\ \frac{d\Delta\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi}&=&\sin{\phi_s}\left(\Delta\sigma_0+\Delta\sigma_1\cos{\phi}+\Delta\sigma_2 \cos{2\phi}\right). \label{angulardis} \end{eqnarray} Before evaluating the SSA, we first estimate the $D$-meson production rate in the unpolarized SIDIS by using our LO formula in Eq.~(\ref{unpolarized}). For the following numerical evaluations we use CTEQ6L parton distribution functions \cite{Pumplin:2002vw}, and charm-to-$D$ fragmentation functions from Ref.~\cite{Kneesch:2007ey}. We choose the factorization scale to be equal to the renormalization scale and set $\mu=\sqrt{Q^2+m_c^2+P_{h\perp}^2}$ with charm quark mass $m_c=1.3$~GeV. In the following plots, we choose two sets of kinematic variables. The first one is $S_{ep}=300$ GeV$^2$, $x_B=0.01$ and $Q=1$ GeV, which is close to the COMPASS kinematics. The other is $S_{ep}=2500$ GeV$^2$, $x_B=0.01$ and $Q=4$ GeV, which is more relevant to the planned eRHIC experiment \cite{Deshpande:2005wd}. \begin{figure}[htb]\centering \psfig{file=compass_lo_zh.eps,height=2.5in} \hskip 0.2in \psfig{file=compass_lo_pt.eps,height=2.5in} \caption{The fully differential unpolarized cross section for $D^0$ production in SIDIS for COMPASS kinematics. 
The curves represent: $\sigma_0^U$(solid), $\sigma_1^U$(dashed), and $\sigma_2^U$(dotted) in Eq.~(\protect\ref{angulardis}).} \label{com-LO} \end{figure} In Fig.~\ref{com-LO}, we show the individual coefficients of the angular distribution, $\sigma_0^U$, $\sigma_1^U$, and $\sigma_2^U$, of the {\it fully differential} unpolarized cross section for $D^0$ production in Eq.~(\ref{angulardis}) as a function of both $z_h$ and $P_{h\perp}$ for the kinematics relevant to the COMPASS experiment. It is clear that the angular-dependent pieces satisfy $\sigma_1^U, \sigma_2^U\ll \sigma_0^U$, and might be too small to be significant. Without worrying about the detection efficiency, the $D$-meson production at $P_{h\perp}\sim 1$~GeV could be measurable. Likewise, Fig.~\ref{erhic-LO} shows the {\it fully differential} unpolarized cross section for $D^0$ production for eRHIC kinematics. With larger $Q$ and $P_{h\perp}$, the production rate is smaller, but there may still be enough events at high luminosity. \begin{figure}[htb]\centering \psfig{file=erhic_lo_zh.eps,height=2.5in} \hskip 0.2in \psfig{file=erhic_lo_pt.eps,height=2.5in} \caption{The fully differential unpolarized cross section for $D^0$ production in SIDIS at the future eRHIC. The curves represent: $\sigma_0^U$(solid), $\sigma_1^U$(dashed), and $\sigma_2^U$(dotted) in Eq.~(\protect\ref{angulardis}).} \label{erhic-LO} \end{figure} In order to obtain a numerical estimate for the SSAs of $D$-meson production, we have to model the unknown, but universal, tri-gluon correlation function $T_G(x, x)$. Similar to the ansatz for the quark-gluon correlation function $T_F(x, x)$, which was originally introduced in \cite{qiu} and found to be consistent with the latest experimental data \cite{Kouvaris:2006zy}, we model the tri-gluon correlation function $T_G(x, x)$ as \begin{eqnarray} T_G(x, x)=\lambda_g\, G(x) \end{eqnarray} with $G(x)$ the ordinary unpolarized gluon distribution function.
Because of its non-perturbative nature, $T_G(x,x)$ should be extracted from experiment, and the value and sign of $\lambda_g$ should be fixed by the data. For the following numerical estimate, we assume that $\lambda_g$ has a positive sign and the same size as that for the quark-gluon correlation function $T_F(x, x)$ \cite{qiu}, and choose $\lambda_g=0.07$~GeV. In order to present the SSA and its angular dependence on $\phi$, the angle between the hadron plane and the lepton plane, we define the $\phi$-integrated single-spin azimuthal asymmetries as \begin{eqnarray} \langle \cos(n\phi) \rangle=\frac{1}{\sin{\phi_s}}\, \frac{\int_0^{2\pi}d\phi \cos(n\phi) \frac{d\Delta\sigma(s_\perp)}{dx_B dy dz_h dP_{h\perp}^2 d\phi}} {\int_0^{2\pi}d\phi \frac{d\sigma}{dx_B dy dz_h dP_{h\perp}^2 d\phi}}\, , \end{eqnarray} which gives \begin{eqnarray} \langle 1 \rangle = \frac{\Delta\sigma_0}{\sigma_0^U}, \qquad \langle \cos\phi \rangle = \frac{\Delta\sigma_1}{2\sigma_0^U}, \qquad \langle \cos2\phi \rangle = \frac{\Delta\sigma_2}{2\sigma_0^U}. \label{moment} \end{eqnarray} In Fig.~\ref{com-ssa} we plot the SSAs as a function of $z_h$ (left) and $P_{h\perp}$ (right) for the COMPASS kinematics. The asymmetries, $\langle 1 \rangle$, $\langle \cos\phi \rangle$, and $\langle \cos2\phi \rangle$, defined in Eq.~(\ref{moment}), are shown by the solid, dot-dashed, and dotted curves, respectively. For a comparison between the size of the ``derivative'' and the ``non-derivative'' terms, we also show, by the dashed curves, the contribution to the SSA, $\langle 1 \rangle$, from the derivative term only. It is clear that the derivative term dominates over the whole kinematic region. The asymmetries, $\langle \cos\phi \rangle$ and $\langle \cos2\phi \rangle$, are too small to be observed experimentally. The SSA, $\langle 1 \rangle$, is of the order of $10\%$, and could be measurable at the COMPASS experiment.
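The relations in Eq.~(\ref{moment}) follow from integrating the angular decomposition of Eq.~(\ref{angulardis}) against $\cos(n\phi)$, and can be verified numerically. In the sketch below the coefficient values are arbitrary placeholders (not fitted numbers); only the algebra of the moments is being exercised:

```python
import numpy as np

# Uniform grid without the endpoint: the Riemann sum is exact for
# trigonometric polynomials of low order.
nphi = 4096
phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
dphi = 2.0 * np.pi / nphi

# Hypothetical coefficients of Eq. (angulardis); sin(phi_s) divided out.
s0U, s1U, s2U = 1.0, 0.05, 0.02   # unpolarized sigma_n^U
d0, d1, d2 = 0.10, 0.03, 0.01     # polarized Delta sigma_n

unpol = s0U + s1U * np.cos(phi) + s2U * np.cos(2 * phi)
pol = d0 + d1 * np.cos(phi) + d2 * np.cos(2 * phi)

denom = np.sum(unpol) * dphi      # = 2 pi sigma_0^U
moment = lambda n: np.sum(np.cos(n * phi) * pol) * dphi / denom
# moment(0) = d0/s0U, moment(1) = d1/(2 s0U), moment(2) = d2/(2 s0U)
```

The $n=0$ moment keeps only $\Delta\sigma_0$, while the $\cos\phi$ and $\cos 2\phi$ weights pick out $\Delta\sigma_1$ and $\Delta\sigma_2$ with the extra factor $1/2$ from $\int_0^{2\pi}\cos^2(n\phi)\,d\phi=\pi$.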
Fig.~\ref{com-ssa} indicates that the SSA hits a minimum at $z_h\sim 0.5$ and increases very fast when $z_h$ becomes very large or very small. This is because the SSA behaves as $\langle 1 \rangle \sim 1/(1-x_{min})$, owing to the derivative of $T_G(x, x)$ \cite{qiu}. From the definition of $x_{min}$ in Eq.~(\ref{xmin}), $z_h(1-z_h)$ has a maximum at $z_h=0.5$. Therefore, $x_{min}$ increases and, equivalently, the SSA increases when $z_h$ becomes either smaller or larger than $0.5$. When $z_h$ is much further away from the central value 0.5, $x_{min}$ becomes so large that the perturbatively calculated asymmetry could increase sharply, which could signal a breakdown of the twist-three approximation and a need for higher-power corrections. Nevertheless, the increase of the SSA when $z_h$ moves away from the central value 0.5 has the same physics origin as the observed increase of the SSA as a function of increasing $x_F$ (or rapidity $y$) in hadronic pion production \cite{SSA-fixed-tgt,SSA-rhic}, and it could be tested in the COMPASS experiment. Fig.~\ref{com-ssa} also indicates a monotonic increase of the SSA as a function of $P_{h\perp}$. Although we expect the SSA to fall when $P_{h\perp}$ increases, a natural behavior of the twist-three effect in QCD collinear factorization, the enhancement from the derivative of $T_G(x,x)$ at large $x$ wins over the suppression from large $P_{h\perp}$ due to the limited phase space at COMPASS kinematics. As we will see below, the decrease of the SSA with increasing $P_{h\perp}$ is clearly seen at the eRHIC kinematics. \begin{figure}[htb]\centering \psfig{file=compass_an_zh.eps,height=2.5in} \hskip 0.2in \psfig{file=compass_an_pt.eps,height=2.5in} \caption{Single-transverse-spin asymmetries defined in Eq.~(\ref{moment}) for $D^0$ production in SIDIS for COMPASS kinematics.
The curves are: solid-$\langle 1 \rangle$, dashed-$\langle 1 \rangle$ with derivative-term only, dot-dashed-$\langle \cos\phi \rangle$, and dotted-$\langle \cos{2\phi} \rangle$.} \label{com-ssa} \end{figure} Similarly, we plot the SSAs for $D^0$ production for the eRHIC kinematics in Fig. \ref{rhic-ssa}. Due to the higher collision energy, the effective gluon momentum fraction $x$ that dominates the SSAs is smaller, which leads to a smaller derivative of $T_G(x, x)$ and smaller SSAs. A similar feature has been seen in the SSA for hadronic pion production when we compare the data from the fixed-target experiments with that from the RHIC experiments. The $5\%$ SSA for $D$-meson production at eRHIC could be significant. The slightly different shape of the SSA as a function of $z_h$ is purely a consequence of the difference in the effective range of the parton momentum fraction $x$. That is, the $z_h$-dependence of the SSA provides a good measurement of the $x$-dependence of the correlation function, $T_G(x,x)$. On the other hand, the slow falloff of the SSA as a function of $P_{h\perp}$ is natural due to the asymptotic $\lambda_g/P_{h\perp}$ behavior of the twist-three contribution when $P_{h\perp}$ increases. Of course, as discussed above, the $1/(1-x_{min})$ dependence of the twist-three formalism compensates some of the $1/P_{h\perp}$ falloff due to the phase space limit on the parton momentum fraction $x$. \begin{figure}[htb]\centering \psfig{file=erhic_an_zh.eps,height=2.5in} \hskip 0.2in \psfig{file=erhic_an_pt.eps,height=2.5in} \caption{Single-transverse-spin asymmetries defined in Eq.~(\ref{moment}) for $D^0$ production in SIDIS for eRHIC kinematics.
The curves are: solid-$\langle 1 \rangle$, dashed-$\langle 1 \rangle$ with derivative-term only, dot-dashed-$\langle \cos\phi \rangle$, and dotted-$\langle \cos{2\phi} \rangle$.} \label{rhic-ssa} \end{figure} \section{Conclusions} \label{conclusion} In summary, we have studied the single transverse-spin asymmetry for $D$-meson production in SIDIS. Within the QCD collinear factorization approach, we calculated both the derivative and the non-derivative contributions to the SSAs. At large enough transverse momentum, $P_{h\perp}$, the intrinsic charm contribution to the asymmetry might be neglected, and the SSA is directly proportional to the transverse-spin dependent tri-gluon correlation function, $T_G(x,x)$ (or $T_G(x,x)\pm \widetilde{T}_G(x,x)$ if we include both color structures), which has not been studied experimentally. We pointed out that by comparing the SSAs for producing $D$ and $\bar{D}$ mesons in SIDIS, we could gain valuable information on both tri-gluon correlation functions. With a simple model for $T_G(x,x)$, we presented our estimates of the SSAs for the kinematics relevant to both the COMPASS and the future eRHIC experiments. From the inclusive $D$-meson production rate and the estimated size of the SSAs, we argue that the SSAs of $D$-meson production in SIDIS could be a direct and clean probe of the unknown tri-gluon correlation function, $T_G(x,x)$, which provides important information on the spin-dependence of the gluon's transverse motion inside a polarized hadron. However, we stress that the SSAs shown in all figures are directly proportional to the value and the sign of $\lambda_g$ and to our model for the twist-three tri-gluon correlation function, $T_G(x,x)$. A different $x$-dependence from that of $G(x)$ could lead to a different derivative of $T_G(x,x)$ and a different prediction for the SSA.
The actual sign and size of the SSA, and the function $T_G(x,x)$, should be determined by experimental measurements, just like the PDFs. However, our calculation does predict the short-distance dynamics and the kinematic dependence of the SSAs, such as the increase of the SSA when $z_h$ moves away from the central value $0.5$. Finally, we emphasize that the QCD collinear factorization approach to the SSAs allows us to calculate the SSAs of open charm production or other particle production in hadronic collisions. With the experimental extraction of the tri-gluon correlation function, $T_G(x,x)$, as well as $\widetilde{T}_G(x,x)$, and the existing and new knowledge of $T_F(x,x)$, we will be able to explore non-perturbative physics, in particular, the multi-parton quantum correlations, beyond what we have learned from the parton distribution functions. \section*{Acknowledgments} We thank Werner Vogelsang and Feng Yuan for helpful discussions and thank G. Kramer for providing us with their Fortran code for the $D$-meson fragmentation functions. This work was supported in part by the U. S. Department of Energy under Grant No.~DE-FG02-87ER40371.
\section{Current correlator} The starting point is the analysis of the three-current correlator in configuration space, \begin{equation} \Pi_\mu(x,y,z)=-\langle 0|\,{\cal T}\,\eta_p(x)A_\mu(y)\,\bar\eta_n(z)\,|0\rangle, \end{equation} where $\eta_N$ denotes the interpolating current of the nucleons and $A_\mu$ the axial-vector current. In the hadronic sector, the nucleonic and axial currents are defined by \begin{align} \langle 0|\,\eta_p(x)\,|p',s'\rangle &= \lambda_p\, u_p^{s'}(p')\,e^{-ip'\cdot x},\label{eq.eta_p} \\ \langle p,s|\,\bar\eta_n(z)\, |0\rangle &= \lambda_n\, \bar u_n^s(p)\, e^{ip\cdot z},\\ \langle p',s'|A_\mu(y)|p,s\rangle &= \bar u_p^{s'}(p')\,T_\mu(q)\,u_n^s(p) \,e^{iq\cdot y},\label{eq.A} \end{align} with $q=p'-p$, and where $\lambda_p$ and $\lambda_n$ are the current-proton coupling and the current-neutron coupling, respectively. The function $T_\mu$ is defined as \begin{equation} T_\mu(q) = G_A(t)\gamma_\mu\gamma_5 +G_P(t)\gamma_5 \frac{q_\mu}{2 m_N} +G_T(t)\sigma_{\mu\nu}\gamma_5 \frac{q^\nu}{2 m_N}, \label{T_mu} \end{equation} with $t=q^2$, and where $m_N$ is the vacuum nucleon mass. The axial coupling is defined as \begin{equation} g_A\equiv G_A(0). \end{equation} In the QCD sector, the nucleon interpolating currents and the axial-vector current are defined in terms of the quark fields as \begin{align} \eta_p(x) &=\epsilon^{abc}\left[u^a(x)^T\,C\gamma^\mu \,u^b(x)\right]\gamma_\mu\gamma_5 \,d^c(x),\\ \bar\eta_n(z) &= \epsilon^{abc}\left[\bar d^b(z)\,\gamma^\mu C\,\bar d^a(z)^T\right]\bar u^c(z)\,\gamma_\mu\gamma_5, \\ A_\mu(y) &= \bar d(y)\,\gamma_\mu\gamma_5\, u(y), \end{align} where $C=i\gamma_0\gamma_2$ is the charge conjugation operator. The correlator in momentum space is defined as \begin{equation} \Pi_\mu(p,p')=\int d^4y\, d^4z\, e^{-i(q\cdot y+p\cdot z)}\,\Pi_\mu(0,y,z),\label{Eq.Pi-momentum} \end{equation} where the energy-momentum is conserved as shown in the diagram on the left of Fig.\,\ref{diagrams}.
The idea is to obtain $g_A$ by relating the hadronic sector to the QCD sector through the FESR using the quark-hadron duality principle. But first we need to isolate the contribution of $G_A$ from the other contributions. \subsection{Hadronic sector} \begin{figure} \includegraphics[scale=0.9]{diagrams.pdf} \caption{Feynman diagrams representing the current correlator in the hadronic sector (left) and in the QCD sector (right).} \label{diagrams} \end{figure} Inserting a complete set of intermediate nucleon states, the correlator in momentum space in the hadronic sector becomes \begin{equation} \Pi^\text{\tiny had}_\mu(p,p')=\lambda_n\lambda_p\frac{ (\slashed{p}+m_n)T_\mu(q)(\slashed{p}'+m_p)}{(p^2-m_n^2)(p'^2-m_p^2)}. \end{equation} This correlator is described by the left diagram in Fig.\,\ref{diagrams}, corresponding to an incoming neutron current with momentum $p$, an outgoing axial current with momentum $q$, and an outgoing proton current with momentum $p'$. To isolate the contribution of the axial coupling, we can decompose the correlator into the different Dirac structures by tracing it multiplied by the various Dirac matrices, {\it i.e.,} $\mathrm{tr}[\Pi_\mu\,\Gamma]$ with $\Gamma =I,\gamma_5,\gamma_\mu,\gamma_\mu\gamma_5,\sigma_{\mu\nu}$. As a result, the different structures include combinations of $G_A$, $G_P$ and $G_T$. In particular, \begin{equation} \mathrm{tr}\,[\Pi_\mu(p,p')\,\gamma_\nu]=-4i\epsilon_{\mu\nu\alpha\beta}p^\alpha p'^\beta \Pi(s,s',t), \label{trace_corr} \end{equation} where $\Pi$ in the case of the hadronic correlator is \begin{equation} \Pi^\text{\tiny had}(s,s',t)=\lambda_n\lambda_p\frac{G_A(t)+G_T(t)(m_n-m_p)/ m_N}{(s-m_n^2)(s'-m_p^2)}, \label{Pi_had} \end{equation} with $s=p^2$, $s'=p'^2$. The difference in nucleon masses can be neglected in vacuum.
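The Dirac-trace projection used in Eq.~(\ref{trace_corr}) can be checked numerically for the $\gamma_\mu\gamma_5$ structure of $T_\mu$ (the propagator denominators and the scalar $\Pi$ factor out). The sketch below uses the standard Dirac representation with metric $(+,-,-,-)$ and hypothetical momenta; the overall sign of $\epsilon$ depends on conventions, so only proportionality and antisymmetry are checked:

```python
import numpy as np
from itertools import permutations

# Dirac matrices in the Dirac representation, metric g = diag(+,-,-,-).
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]])] + \
        [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(p):
    # p-slash = p_mu gamma^mu, with the index lowered by the metric
    return sum((metric @ p)[m] * gamma[m] for m in range(4))

def perm_sign(perm):
    # parity of a permutation via inversion count
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if perm[i] > perm[j]:
                s = -s
    return s

eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    eps[perm] = perm_sign(perm)

pn = np.array([1.0, 0.2, 0.3, 0.4])    # hypothetical neutron momentum
pp = np.array([0.9, -0.1, 0.5, 0.2])   # hypothetical proton momentum
mn, mp = 0.9396, 0.9383

# M^{mu nu} = tr[(pslash_n + m_n) gamma^mu gamma_5 (pslash_p + m_p) gamma^nu]
M = np.array([[np.trace((slash(pn) + mn * np.eye(4)) @ gamma[mu] @ g5
                        @ (slash(pp) + mp * np.eye(4)) @ gamma[nu])
               for nu in range(4)] for mu in range(4)])

# epsilon^{mu nu alpha beta} p_alpha p'_beta (indices lowered by the metric)
E = np.einsum('mnab,a,b->mn', eps, metric @ pn, metric @ pp)
```

Numerically $M$ is antisymmetric in $\mu\nu$, and $M = cE$ with a purely imaginary constant $c=\pm 4i$ (sign depending on the $\epsilon$ convention), consistent with the structure of Eq.~(\ref{trace_corr}) up to the scalar factor $\Pi$; the mass terms drop because $\gamma_5$ traces with fewer than four $\gamma$'s vanish.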
If different nucleon masses are assumed, it is possible to completely isolate the axial coupling part with an appropriate combination of the other correlator structures; however, this is more complicated and unnecessary, even in the presence of magnetic fields. \subsection{QCD sector} Once the appropriate operation to isolate $g_A$ is obtained in Eq.\,(\ref{trace_corr}), it can be applied in the QCD sector. The projected correlator for the perturbative part of QCD in the chiral limit produces a two-loop contribution: \begin{multline} \mathrm{tr}\,[\Pi^\text{\tiny pQCD}_\mu(p,p')\gamma_\nu]= 4i\epsilon_{\mu\nu\alpha\beta}N_c(N_c-1)\times\\ \int\frac{d^4k}{(2\pi)^4}\frac{d^4k'}{(2\pi)^4}\, \frac{32\,q^\alpha k^\beta\,k'\!\!\cdot\!(p'-k-k')}{k^2\, k'^2\,(k-q)^2 \,(p'-k-k')^2} \end{multline} which is described diagrammatically on the right side of Fig.\,\ref{diagrams}. After integrating over the internal momenta in the frame where $t=0$, the result is \begin{multline} \Pi^\text{\tiny pQCD}(s,s',0)= \frac{s^2\ln(-s/\mu^2)-s'^2\ln(-s'/\mu^2)}{(2\pi)^4\,(s'-s)}\\ +\text{regular terms}, \label{PI-pQCD} \end{multline} where $\mu$ is the $\overline{\text{MS}}$ scale. Terms without discontinuities on the real axis or singularities are omitted because they vanish when the FESR are applied. The next contribution comes from the non-perturbative sector. Considering the operator product expansion, the next contribution in the chiral limit corresponds to the dimension-3 operator: the quark condensate. However, this term vanishes when performing the projection described in Eq.\,(\ref{trace_corr}), as do all diagrams with odd-dimensional operators in the chiral limit. Therefore, the next non-vanishing contribution comes from the dimension-4 operator, which in the chiral limit corresponds to the gluon condensate. This diagram is complicated to handle and its contribution is not very significant. Therefore, only the leading term, which corresponds to the perturbative part, will be considered.
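The contour moments of the logarithmic terms in Eq.~(\ref{PI-pQCD}) can be checked numerically. With the principal branch of the logarithm (cut on the positive real $s$ axis), one has $\oint \frac{ds}{2\pi i}\, s^n \ln(-s/\mu^2) = s_0^{n+1}/(n+1)$ over a circle of radius $s_0$; the $n=2$ case, divided by the $(2\pi)^4$ in Eq.~(\ref{PI-pQCD}), is the origin of the $s_0^3/(48\pi^4)$ structure in the FESR. A minimal check (not from the paper):

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule for complex integrands
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def contour_moment(n, s0, mu2=1.0, npts=200001, eps=1e-8):
    """Integrate s^n * log(-s/mu2) over |s| = s0, counterclockwise,
    staying just off the cut along the positive real axis."""
    theta = np.linspace(eps, 2.0 * np.pi - eps, npts)
    s = s0 * np.exp(1j * theta)
    integrand = s**n * np.log(-s / mu2) * (1j * s)   # ds = i s dtheta
    return trap(integrand, theta) / (2j * np.pi)

# n = 2, s0 = 1.5: the analytic value is s0^3/3 = 1.125,
# independent of mu2 (the constant ln(s0/mu2) piece integrates to zero).
val = contour_moment(2, 1.5)
```

Only the discontinuity of the logarithm across the cut contributes; the $\mu$-dependent constant multiplies an entire function of $s$ and drops out, which is why the "regular terms" in Eq.~(\ref{PI-pQCD}) can be omitted.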
\section{Axial coupling constant from FESR} \begin{figure} \includegraphics[scale=0.8]{pacman.pdf} \caption{The FESR contour. The integration is performed over the variables $s$ and $s'$.} \label{pacman} \end{figure} The FESR in this case are applied in two momentum variables: $s$ and $s'$. The usual procedure is to integrate the correlator multiplied by an analytical kernel $K(s)$ along the {\it pacman} contour described in Fig.\,\ref{pacman}. The radius of the circle is the hadronic continuum threshold. Using Cauchy's theorem, quark-hadron duality is introduced by placing the hadronic sector at the discontinuity on the positive real axis, and the QCD sector on the complex circle: \begin{equation} \int_0^{s_n}\frac{ds}{\pi}\,\mathrm{Im}_s\,\Pi^\text{\tiny had}(s,s',t)=-\oint\frac{ds}{2\pi i}\Pi^\text{\tiny QCD}(s,s',t) \label{FESR} \end{equation} where the variable $s$ is first integrated with the weight function $K(s)=1$, and where $s_n$ denotes the continuum threshold associated with the neutron current. The subscript introduced in the imaginary part is defined as \begin{equation} \mathrm{Im}_s f(s)\equiv\lim_{\epsilon\to 0}\mathrm{Im}f(s+i\epsilon). \end{equation} Of course, Eq.\,(\ref{FESR}) is valid in the absence of poles within the contour, otherwise the residues must be incorporated into the equation. In particular, we can see from Eq.\,(\ref{PI-pQCD}) that the pole at $s=s'$ in the denominator cancels with the numerator, so there is no singularity at all within the contour. Proceeding in the same way, but now with the variable $s'$, the double FESR gives \begin{multline} \int_0^{s_p}\frac{ds'}{\pi} \,\mathrm{Im}_{s'}\!\!\int_0^{s_n}\frac{ds}{\pi}\,\mathrm{Im}_s\Pi^\text{\tiny had} (s,s',t)\\ =\oint_{s_p}\frac{ds'}{2\pi i}\oint_{s_n}\frac{ds}{2\pi i}\,\Pi^\text{\tiny QCD}(s,s',t), \label{eq.FESR_had=QCD} \end{multline} where $s_p$ is the continuum threshold associated with the proton current. Once FESR are applied to the hadronic sector in Eq.
\,(\ref{Pi_had}) and to the QCD sector in Eq. \,(\ref{PI-pQCD}), after setting $t=0$, the above equation gives the relation \begin{multline} g_A\lambda_n\lambda_p\,\theta(s_n-m_n^2)\theta(s_p-m_p^2)\\ =\frac{1}{48\pi^4}\left[s_n^3\,\theta(s_p-s_n) +s_p^3\,\theta(s_n-s_p)\right]. \label{FESR_had=QCD} \end{multline} In vacuum, all parameters are practically the same for protons and neutrons, so $s_p\approx s_n\equiv s_0$ and $\lambda_p\approx\lambda_n \equiv\lambda_N$, giving \begin{equation} g_A=\frac{1}{48\pi^4}\frac{s_0^3}{\lambda_N^2}. \label{eq.gA-vac} \end{equation} The nucleon-current coupling can be obtained from the nucleon-nucleon channel, through the nucleon-nucleon current correlator \cite{Ioffe:1983ju,Chung:1981wm,Chung:1981cc,Chung:1982rd,Chung:1984gr,Dominguez:2020sdf} \begin{equation} \Pi_N(x)=\langle 0|\,{\cal T}\,\eta_N(x)\,\bar\eta_N(0)\,|0\rangle. \end{equation} In vacuum there are only two Dirac structures, so the FESR \cite{Dominguez:2020sdf} provide two equations: \begin{align} \lambda_N^2 &= \frac{s_0^3}{192\pi^4}+\frac{s_0}{32\pi^2}\langle G^2\rangle+\frac{2}{3}\langle\bar qq\rangle^2\\ \lambda_N^2m_N &=-\frac{s_0^2}{8\pi^2}\langle\bar qq\rangle+ \frac{1}{12}\langle G^2\rangle\langle\bar qq\rangle, \label{Nuclear_FESR} \end{align} where vacuum dominance was assumed in the last expression. The quark and gluon condensates are defined as \begin{align} \langle\bar qq\rangle &\equiv \frac{1}{N_f}\sum_f\langle 0| \bar q_fq_f |0\rangle\\ \langle G^2\rangle &\equiv \langle 0 |\frac{\alpha_s}{\pi}G_{\mu\nu}^a G^{a\mu\nu}|0\rangle. \end{align} The axial coupling constant depends strongly on the quark and gluon condensates. The values most often used for these operators are $ \langle\bar qq\rangle= -(0.24\text{ GeV})^3$ and $\langle G^2\rangle= (0.33\text{ GeV})^4$. With these values, the axial coupling comes out as $g_A=1.52$.
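The number quoted above can be reproduced by solving the two FESR of Eq.~(\ref{Nuclear_FESR}) for $s_0$ and $\lambda_N^2$ and inserting them into Eq.~(\ref{eq.gA-vac}). The sketch below is a minimal check, not the paper's code; the nucleon mass value and the bracket used to select the lower (physical) root of the resulting cubic relation are assumptions:

```python
import numpy as np

qq = -(0.24)**3           # quark condensate <qbar q> in GeV^3
G2 = (0.33)**4            # gluon condensate <G^2> in GeV^4
mN = 0.9385               # vacuum nucleon mass in GeV (assumed value)
pi2, pi4 = np.pi**2, np.pi**4

# The two FESR of Eq. (Nuclear_FESR) as functions of s0:
lam2 = lambda s0: s0**3 / (192 * pi4) + s0 * G2 / (32 * pi2) + (2 / 3) * qq**2
lam2mN = lambda s0: -s0**2 * qq / (8 * pi2) + G2 * qq / 12

# Bisection for the lower (physical) root of  lam2mN(s0) = mN * lam2(s0).
f = lambda s0: lam2mN(s0) - mN * lam2(s0)
a, b = 0.5, 2.0           # bracket chosen to pick the lower root
for _ in range(100):
    m = 0.5 * (a + b)
    a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
s0 = 0.5 * (a + b)

gA = s0**3 / (48 * pi4 * lam2(s0))   # Eq. (eq.gA-vac): s0 ~ 1.26, gA ~ 1.52
```

The ratio of the two sum rules eliminates $\lambda_N^2$, leaving a single equation for $s_0$; the cubic structure admits a second, much larger root that is discarded by the bracket choice.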
Recent lattice results for 2+1 flavors at the $\overline{\text{MS}}$ renormalization scale of 2 GeV obtain on average $\langle -\bar qq\rangle^{1/3}\approx 0.272\text{ GeV}$ \cite{Gubler:2018ctz}. Similar results were obtained from two-flavor FESR, which give $\langle -\bar qq\rangle^{1/3}\approx 0.267\text{ GeV}$ \cite{Bordes:2010wy}. The gluon condensate is a scale-invariant quantity. In this case the situation is not so clear, since different estimates show sizable variations: $\langle G^2\rangle^{1/4} =$ 0.3 -- 0.5 GeV \cite{Dominguez:2018zzi}. Current estimates of the axial coupling fluctuate around $g_A\approx 1.275$ \cite{Czarnecki:2018okw}. Figure\,\ref{gAfixed} shows the relation between the gluon condensate and the quark condensate obtained by fixing the axial coupling. We can see that the range of values for the chiral condensate covers those frequently used in the literature, and the values obtained for the gluon condensate also lie in the accepted range. The inclusion of more terms in the operator product expansion, as well as radiative corrections, is expected to constrain the solutions more precisely. \bigskip \begin{figure} \includegraphics[scale=0.65]{gAfixed.pdf} \caption{ Range of values for the gluon condensate and the quark condensate for a fixed value of the axial coupling.} \label{gAfixed} \end{figure} \section{Axial coupling constant at finite external magnetic field} The FESR result $ \langle -\bar qq\rangle^{1/3}= 0.267$\,GeV will be considered, and the gluon condensate will be set to $\langle G^2\rangle^{1/4} = 0.3775$~GeV in order to obtain $g_A= 1.275$. This choice of condensates generates, from Eq. (\ref{Nuclear_FESR}), $\lambda_N = 0.022\text{ GeV}^3$ and $s_0=1.429\text{ GeV}^2$. As can be seen in \cite{Ayala:2015qwa,Dominguez:2018njv,Dominguez:2020sdf}, the magnetic field can be treated by considering the expanded fermion propagator in powers of $eB$, even for light quarks.
The series is truncated by the contour integral, depending on the analytical kernel used in the FESR. The expansion in the magnetic field allows reaching values of $eB$ higher than $10\,m_\pi^2$, which is enough to cover the phenomenology of the strong magnetic fields produced in relativistic HIC experiments and in the interior of magnetars. In this sense, the lowest-order contribution is the result obtained in vacuum in Eq. (\ref{FESR_had=QCD}), but replacing the different parameters by those dependent on the magnetic field. The aforementioned approximation is evident in the QCD sector, and the result at the lowest order is just the right-hand side of Eq.\,(\ref{FESR_had=QCD}), but what about the hadronic sector? Let us describe the currents in the hadronic sector in another way. The nucleon current is the nucleon field times the nucleon-current coupling. To reproduce the matrix elements described in Eqs. (\ref{eq.eta_p})-(\ref{eq.A}), the axial-vector current can be expressed through the nucleon fields in configuration space in the following way \begin{align} \eta_N(x) &= \lambda_N \psi_N(x) \label{eta_N} \\ A_\mu(y) &= \int d^4\xi\, \bar\psi_p(\xi)T_\mu(\xi-y)\psi_n(\xi), \end{align} where $T_\mu(q)$ in Eq. (\ref{eq.A}) is the Fourier transform to momentum space of the function $T_\mu(x)$ in configuration space appearing in the previous equation. The correlator in configuration space is therefore \begin{equation} \Pi_\mu(x,y,z) = -\!\!\int \!\!d^4\xi~ e^{i\Phi(x,\xi)} \, S_p^B(x-\xi)\, T_\mu(\xi-y)\, S_n^B(\xi-z) \end{equation} where the magnetic-field-dependent proton propagator is given by the local part multiplied by the Schwinger phase. The neutron propagator in the presence of the magnetic field contains the anomalous magnetic moment contribution. The definition of the correlator in momentum space in Eq. (\ref{Eq.Pi-momentum}) is not arbitrary.
In fact, choosing the frame where $x=0$, the Schwinger phase disappears if the Fock-Schwinger gauge ${\cal A}_\mu(x) = -\frac{1}{2}F_{\mu\nu}x^\nu$ is considered for the external electromagnetic vector field. The correlator in momentum space is therefore \begin{align} \Pi_\mu(p',p) &=-S_p^B(p')T_\mu(q)S_n^B(p). \end{align} Expanding the propagators in a magnetic-field power series \cite{Dominguez:2020sdf}, it is not difficult to see that, in the sum rule considered in Eq. (\ref{FESR}), only the lowest-order term in the expansion will survive if we keep the same $T_\mu$ structure described in Eq. (\ref{T_mu}). However, when a magnetized medium is present, the overall structure of $T_\mu$ must incorporate the external electromagnetic tensor $F_{\mu\nu}$, which contributes to other structures, splitting the axial form factor contribution as \begin{equation} G_A\gamma_\mu\to G_A^\parallel\gamma_\mu^\parallel + G_A^\perp\gamma_\mu^\perp + \tilde G_A F_{\mu\nu}\gamma^\nu, \end{equation} and likewise the other form factor terms in Eq. (\ref{T_mu}) are divided into several substructures. The difference between $G_A^\parallel$ and $G_A^\perp$ will be of order $(eB)^2/s_0$. Since we are considering the lowest-order term of the expansion, there will be no difference between $G_A^\parallel$ and $G_A^\perp$. \begin{figure} \includegraphics[scale=0.6]{s0-lambda.pdf} \caption{Hadronic threshold (upper panel) and nucleon-current coupling (lower panel) for the proton (solid line) and the neutron (dashed line).} \label{fig.s0-lambda} \end{figure} With all the above considerations, the axial coupling constant at finite magnetic field is given by Eq.\,(\ref{FESR_had=QCD}). The hadronic thresholds and nucleon-current couplings are obtained from \cite{Dominguez:2020sdf}, changing the values of the quark and gluon condensates in vacuum to the values defined at the beginning of this section. The resulting hadronic thresholds and nucleon-current couplings are plotted in Fig.\,\ref{fig.s0-lambda}.
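Given B-dependent inputs like those of Fig.\,\ref{fig.s0-lambda}, evaluating the axial coupling from Eq.\,(\ref{FESR_had=QCD}) is straightforward. The sketch below uses invented linear parametrizations (the growth coefficients are placeholders, not the fitted curves of the figure) merely to illustrate how a proton-current coupling that grows faster than the neutron threshold drives the coupling down:

```python
import numpy as np

pi4 = np.pi**4
s0, lamN = 1.429, 0.022      # vacuum values quoted in the text (GeV^2, GeV^3)

# Invented linear parametrizations in eB (GeV^2), mimicking the trends of
# Fig. fig.s0-lambda: thresholds and couplings all grow, the proton
# coupling fastest. The slopes are placeholders, not extracted values.
sn  = lambda eB: s0 * (1 + 0.8 * eB)
sp  = lambda eB: s0 * (1 + 1.0 * eB)
lp  = lambda eB: lamN * (1 + 2.8 * eB)
ln_ = lambda eB: lamN * (1 + 1.0 * eB)

def gA(eB):
    # Eq. (FESR_had=QCD): the smaller threshold cubed over the couplings
    return min(sn(eB), sp(eB))**3 / (48 * pi4 * lp(eB) * ln_(eB))

ratio = gA(0.1) / gA(0.0)    # ~0.9 with these placeholder slopes
```

With the rounded vacuum inputs, gA(0) comes out near 1.29 (close to the quoted 1.275), and the toy slopes produce a roughly 10\% drop at $eB = 0.1\text{ GeV}^2$, in line with the trend described below; the actual curves must of course come from the magnetic-field FESR of \cite{Dominguez:2020sdf}.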
The first thing to note is that $s_p>s_n$, and therefore the axial coupling constant from the relation in Eq.\,(\ref{FESR_had=QCD}) can be written as \begin{align} g_A=\frac{1}{48\pi^4}\frac{s_n^3}{\lambda_p\lambda_n}. \end{align} Both the hadronic thresholds of the nucleons and the nucleon-current couplings increase with the magnetic field, as can be seen in Fig.\,\ref{fig.s0-lambda}. The evolution of the axial coupling constant as a function of the external magnetic field is shown in Fig.\,\ref{fig.gA_B}. The axial coupling constant decreases as the magnetic field increases. For the lowest values of the magnetic field, this decrease is linear. The decrease in the axial nucleon coupling is due to flavor asymmetry: in the competition between the smaller neutron threshold and the growing nucleon-current couplings, the proton coupling dominates. On the contrary, if in Eq.\,(\ref{eq.gA-vac}) we use the average nucleon-current coupling $(\lambda_n+\lambda_p)/2$ and the average nucleon hadronic threshold $(s_n+s_p)/2$, the result is completely different: the coupling increases quadratically. \begin{figure} \includegraphics[scale=0.65]{gA_B.pdf} \caption{Axial coupling constant as a function of an external magnetic field (solid line). The dashed line corresponds to the case of Eq.\, (\ref{eq.gA-vac}) considering the magnetic field dependence through the average nucleon-current couplings and the average nucleon hadronic thresholds.} \label{fig.gA_B} \end{figure} \section{Conclusions and discussion} The effects of an external uniform magnetic field on the axial coupling constant were obtained by means of finite energy sum rules. By isolating the axial structure in the proton-axial-neutron current correlator, and performing a double contour integral, it is possible to match the hadronic part with the perturbative QCD part.
The relevant parameters in this case, the nucleon-current couplings and the hadronic thresholds, are obtained through the nucleon-nucleon correlator at finite external magnetic field calculated in \cite{Dominguez:2020sdf}. The axial coupling is then proportional to the neutron threshold cubed and inversely proportional to the proton-current and neutron-current couplings. Both thresholds and couplings are increasing quantities as a function of the magnetic field, but the proton-current coupling dominates, and the axial coupling constant decreases with $B$. The change with the magnetic field is about 10\% for $eB=0.1\text{ GeV}^2$. In particular, the factor $1+3g_A^2$, which is proportional to the neutron decay width as well as to the neutrino emissivity in the Urca process, decreases by 16\%. This is an effect to take into consideration. Unfortunately, since this is the first attempt to find the magnetic evolution of the axial coupling constant, there are no other results to compare with. The axial coupling of pions with constituent quarks has been calculated for $eB\sim 0.01$\,GeV$^2$, showing a linear behavior \cite{Braghin:2018drl}. This could be a related form factor, but it is not the same one, so it is necessary to verify the behavior of the nucleon axial coupling constant under an external magnetic field using other models and techniques. It would be interesting to see what happens in high-temperature scenarios, such as relativistic heavy-ion collision experiments, and in high-density scenarios, such as magnetars. Apparently, temperature and baryon-density effects tend to reduce the axial coupling, but it is not clear what may happen when temperature or density effects are combined with the external magnetic field. The case of combined baryon density and magnetic field effects will be addressed soon. \begin{acknowledgments} I would like to thank Ces\'areo Dom\'inguez and Marcelo Loewe for their fruitful discussions. This article was funded by Fondecyt under grants 1190192, 1200483 and 1220035. \end{acknowledgments}
\subsubsection*{\bibname}} \bibliographystyle{abbrvnat} \usepackage{graphicx} \graphicspath{{./Figures/}} \usepackage{url} \usepackage{etoolbox} \gappto{\UrlBreaks}{\UrlOrds} \usepackage{xr} \makeatletter \newcommand*{\addFileDependency}[1]{ \typeout{(#1)} \@addtofilelist{#1} \IfFileExists{#1}{}{\typeout{No file #1.}} } \makeatother \newcommand*{\myexternaldocument}[1]{% \externaldocument{#1}% \addFileDependency{#1.tex}% \addFileDependency{#1.aux}% } \myexternaldocument{arxiv_supplement} \title{Model updating after interventions paradoxically introduces bias} \renewcommand*{\Affilfont}{\normalsize\normalfont} \renewcommand*{\Authfont}{\bfseries} \author[1,2,*]{James~Liley} \author[3]{Samuel~R.~Emerson} \author[2,4,5]{Bilal~A.~Mateen} \author[1,2,*]{Catalina~A.~Vallejos} \author[2,3,*]{Louis~J.~M.~Aslett} \author[2,6,*]{Sebastian~J.~Vollmer} \affil[1]{MRC Human Genetics Unit, University of Edinburgh, UK} \affil[2]{The Alan Turing Institute, London, UK} \affil[3]{Department of Mathematical Sciences, Durham University, UK} \affil[4]{Kings College Hospital, London, UK} \affil[5]{Wellcome Trust, London, UK} \affil[6]{Warwick Mathematics Institute, University of Warwick, UK} \affil[*]{Co-corresponding authors} \begin{document} \maketitle \begin{abstract} Machine learning is increasingly being used to generate prediction models for use in a number of real-world settings, from credit risk assessment to clinical decision support. Recent discussions have highlighted potential problems in the updating of a predictive score for a binary outcome when an existing predictive score forms part of the standard workflow, driving interventions. In this setting, the existing score induces an additional causative pathway which leads to miscalibration when the original score is replaced. We propose a general causal framework to describe and address this problem, and demonstrate an equivalent formulation as a partially observed Markov decision process.
We use this model to demonstrate the impact of such `naive updating' when performed repeatedly. Namely, we show that successive predictive scores may converge to a point where they predict their own effect, or may eventually tend toward a stable oscillation between two values, and we argue that neither outcome is desirable. Furthermore, we demonstrate that even if model-fitting procedures improve, actual performance may worsen. We complement these findings with a discussion of several potential routes to overcome these issues. \vspace{5pt} \emph{Note: Sections of this preprint on `Successive adjuvancy' (section~\ref{sec:successive_adjuvancy}, theorem~\ref{thm:successive_adjuvancy}, figures~\ref{fig:chaos},~\ref{fig:causality_sa}, and associated discussions) were not included in the originally submitted version of this paper due to length. This material does not appear in the published version of this manuscript, and the reader should be aware that these sections did not undergo peer review.} \end{abstract} \clearpage \section{Introduction} A common machine learning task concerns the prediction of an outcome $Y$ given a known set of predictors $X$~\citep{friedman01}. Usually, the intent is to anticipate the value of $Y$ in situations in which only $X$ is known. Often, the ultimate goal is to avoid or encourage certain values of $Y$, with interventions guided by the predictions provided by the algorithm. We focus on the standard setting, often seen in healthcare, where $X$ is first observed and used to make predictions about $Y$, then interventions occur before outcomes are observed. This setting can lead to prediction scores being `victims of their own success'~\citep{lenert19,sperrin19}. Interventions driven by the score can change the distribution of the data and outcomes, leading to a decay in observed performance, particularly if the intervention is successful. 
Analysis of this effect requires consideration of the causal processes governing $X$, $Y$, and the potential interventions driven by the score~\citep{sperrin19}. Predictive scores are often implemented by direct dissemination to agents that are capable of modifying these causal processes~\citep{rahimian18,hyland20}, which leads to vulnerability to this problem. This problem also exists if predictions influence discrete actions; initial progress in this direction has been made using bandits~\citep{Shi2020-na}. The phenomenon in which a predictive model influences its own effect has been called `performative prediction'~\citep{perdomo20}, and is of interest in model fairness~\citep{liu18,elzayn19}, in that actions taken in response to a model may pervert fairness metrics under which the model was designed. This problem is particularly critical in settings where existing predictive scores are to be replaced by an updated version. In many real-world contexts, the underlying phenomena represented by the predictive model will change over time~\citep{wallace14}; statistical procedures for prediction may also improve (particularly for complex tasks); and researchers may wish to include further predictors or increase the scope of predictive scores. In general, we may expect that most predictive algorithms will need to be updated or replaced over time. Up-to-date models should generally be trained on the most recent available data which, as described above, will be contaminated by interventions based on existing scores. Should a new predictive model be fitted to new observations of $X$ and $Y$, it will consequently also model the impact of the existing score. Removal of the existing score will introduce bias into predictions made by the new score, as will insertion of the new score in place of the old. We term such an operation a `naive model replacement'. Our main aim is to introduce a general causal framework under which this phenomenon can be quantitatively studied.
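As a minimal numerical illustration of this contamination (everything below — the data-generating process, the intervention rule, and the one-parameter logistic fit — is a hypothetical sketch, not the paper's setup): a score fitted to intervention-free data recovers the true effect, while a score refitted to data gathered under score-driven interventions underestimates it.

```python
import math
import random

random.seed(0)
logistic = lambda z: 1.0 / (1.0 + math.exp(-z))

def simulate(intervene, n=5000, beta=2.0):
    """Hypothetical process: X ~ N(0,1) raises risk via logistic(beta*X).
    If `intervene`, an oracle score drives an intervention that halves X
    whenever the score exceeds 1/2, *before* Y is drawn; the analyst only
    ever records the pre-intervention X."""
    data = []
    for _ in range(n):
        x = random.gauss(0, 1)
        x1 = 0.5 * x if (intervene and logistic(beta * x) > 0.5) else x
        y = 1 if random.random() < logistic(beta * x1) else 0
        data.append((x, y))
    return data

def fit_slope(data, steps=200, lr=1.0):
    """Crude one-parameter logistic regression by gradient ascent."""
    b, n = 0.0, len(data)
    for _ in range(steps):
        b += lr * sum(x * (y - logistic(b * x)) for x, y in data) / n
    return b

b_clean = fit_slope(simulate(intervene=False))  # recovers roughly beta = 2
b_naive = fit_slope(simulate(intervene=True))   # attenuated slope: the refit
                                                # underestimates the risk
```

Dropping the old score after refitting on the contaminated data would therefore deploy a score that is biased toward under-prediction, which is exactly the naive-model-replacement hazard described above.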
We use this framework to draw attention to the hazards of naive model replacement, especially when it occurs repeatedly. We introduce these hazards in the context of a generalised ultimate aim of the model, formulated as a constrained optimisation problem in which the occurrence of undesirable values of $Y$ is to be minimised with limited intervention. We also use our model to describe a second replacement strategy, `successive adjuvancy', in which new predictive scores are `added' to previous scores, with different emergent properties. A simple parable of this phenomenon concerns yearly influenza vaccinations. In a vaccination-naive population, risk assessments for influenza motivate widespread vaccination. However, in a later `epoch', the risk may appear much lower, and could naively suggest vaccination is no longer required, introducing risks to public health\footnote{See for example \url{https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019}}. More generally, updated risk scores for clinical outcomes may be biased due to the interventions motivated by the scores themselves. As a second example, consider risk scores used to predict future emergency hospital admissions $Y$, on the basis of covariates $X$~\citep{rahimian18}. Suppose that prescription of some drug $D \in X$ confers increased risk, and this is established by the risk score. Should such risk scores be distributed at time $t=0$ to agents able to modify these factors (e.g., doctors), they may intervene by taking patients off $D$, thereby reducing emergency admission risk $\mathbb{E}[Y]$ at time $t=1$. If a new score is naively fitted to $X$ at $t=0$ and $Y$ at $t=1$, it would underestimate the danger of $D$. Section~\ref{sec:model} describes the problem in terms of causal effects. We develop this into a full model specification in Section~\ref{sec:general}, along with a description of the constrained optimisation problem the model/intervention pair aims to solve in Section~\ref{sec:aim}.
In Section~\ref{sec:naiveupdating}, we analyse the short- and long-term effects of repeated naive replacement and show that they are generally undesirable, and in Section~\ref{sec:successive_adjuvancy} we describe successive adjuvancy and examine long-term effects in a simplified setting. In Section~\ref{sec:solution}, we discuss three classes of solutions: more complex modelling, routine maintenance of a `hold-out' set, and controlled interventions. In Section~\ref{sec:control} we describe a reformulation of the model as a control theory problem. Finally, in Section~\ref{sec:discussion}, we discuss limitations and implications of our approach. Our supplementary material contains relevant examples and proofs, an exposition of the problem in a real-world example, and a list of open problems in this setting. \section{Model} \label{sec:model} \subsection{Overview} \label{sec:overview} Assume that we are attempting to predict an outcome $Y$ given a known set of covariates $X$. For simplicity, we assume $Y$ is binary (e.g.~admission versus non-admission to an Intensive Care Unit) and model it as a Bernoulli random variable. If $Y = 1$ is considered to be a negative outcome, often the eventual aim is to reduce $\mathbb{P}(Y = 1 | X) = \mathbb{E}[Y|X]$; we will discuss this in Section~\ref{sec:general} once we have defined terms formally. For the moment, we assume the causal structure shown in Figure~\ref{fig:causality}. We denote by $\rho_0(X)$ an initial predictive model for $\mathbb{E}[Y|X]$, fitted to observations of $(X,Y)$ generated under the causal structure in Figure~\ref{fig:causality}A. During deployment, we compute $\rho_0(X)$ for all members of a population and disseminate it to \textit{agents who can intervene} on $X$ (e.g.~doctors) based on those predictions, aiming to prevent $Y = 1$. Replacing or updating $\rho_0$ will typically involve fitting a new predictive model $\rho_1(X)$ to new observations of $(X,Y)$.
It is clear that while $\rho_0(X)$ is an estimator of $\mathbb{E}[Y|X]$, the new predictive function $\rho_1(X)$ is instead an estimator of \begin{equation} \mathbb{E}\left[Y|X,\textrm{do}\left[\rho_0(X)\right]\right] \label{eq:f1quantity} \end{equation} where $\textrm{do}\left[\rho_0(X)\right]$ indicates the action `compute and disseminate $\rho_0(X)$'. Although $\rho_0(X)$ is determined by $X$, the computation $\textrm{do}\left[\rho_0(X)\right]$ makes $\rho_0$ actionable. This opens a second causal pathway from $X$ to $Y$, affecting the setting in which $\rho_1$ is fitted (Figure~\ref{fig:causality}B). If the initial score $\rho_0(X)$ is universally disseminated, the distribution of $Y$ given $X$ (without the $\textrm{do}\left[\rho_0(X)\right]$) now becomes a counterfactual which we cannot observe. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{causality_general.pdf} \caption{Causal structure under which $\rho_0$ (panel A) and $\rho_1$ (panel B) are fitted. Dashed lines indicate a model-fitting process.} \label{fig:causality} \end{figure} \subsection{General notation and assumptions} \label{sec:general} Here, we use a causal model to illustrate potential emergent behaviour resulting from repeated naive model updating, expanding out the `do'-operator used in Section~\ref{sec:overview}. We do not aim to cover the complexities of \emph{all} real-world applications, yet our simplified setup is sufficient to demonstrate the dangers arising in this context. As $\rho_0$ is deployed and drives interventions, covariate values $X$ may change, as may the dependence of $Y$ on $X$. Here, we partition $X$ into three sets: \begin{align} X^s &\textrm{: Fixed or `set' covariates; $\textrm{dim}(X^s)=p^s$}, \nonumber \\ X^a &\textrm{: Actionable covariates; $\textrm{dim}(X^a)=p^a$}, \nonumber \\ X^{\ell} &\textrm{: Latent covariates; $\textrm{dim}(X^{\ell})=p^{\ell}$}.
\label{eq:saldef} \end{align} Although $X^{\ell}$ may influence the causal mechanism between $X$ and $Y$ and may be intervened on, we assume it is unobserved. Hence, only $X^s$ and $X^a$ are known when evaluating a risk score, and $X^s$ cannot be intervened on (e.g.~`Age'). We also define two time indices $t,e$ (time, epoch): \begin{align} t \in \{0,1\}: &\begin{cases} t=0\textrm{: predictive score is computed} \\ t=1\textrm{: $Y$ observed, after possible} \nonumber \\ \phantom{t=1:}\textrm{intervention} \end{cases} \nonumber \\ e \in \mathbb{N}: &\begin{cases} e=0\textrm{: no predictive score is used} \nonumber \\ e>0\textrm{: model from epoch $e-1$ is used.} \end{cases} \end{align} We allow the values of $X$ to depend on $t$ and $e$, writing $X_e(t)=(X^s_e(t),X^a_e(t),X^{\ell}_e(t))\in \Omega^s \times \Omega^a \times \Omega^\ell = \Omega$. As $Y$ is only observed at $t=1$, $Y$ at epoch $e$ is denoted as $Y_e$. At each epoch, we assume that values of $X_e(t)$ across individuals in the population are \emph{iid} with probability measure $\mu_e$.
We introduce the following functions \begin{align} f_e(x^s,x^a,x^{\ell}) &= \mathbb{E}\left[Y_e|X_e(1)=(x^s,x^a,x^{\ell})\right] \nonumber \\ &= \textrm{Causal mechanism determining} \nonumber \\ &\phantom{=} \textrm{probability of $Y_e = 1$ given $X_e(1)$} \nonumber \\ g^{a}_e(\rho,x^a) &\in \{g:[0,1] \times \Omega^a \to \Omega^a\} \nonumber \\ &=\textrm{Intervention process on $X^a$ in } \nonumber \\ &\phantom{=} \textrm{response to a predictive score $\rho$} \nonumber \\ &\phantom{=} \textrm{updating } X^a_e(0)\to X^a_e(1)\nonumber\\ g^{\ell}_e(\rho,x^{\ell}) &\in \{g:[0,1] \times \Omega^{\ell} \to \Omega^{\ell}\} \nonumber \\ &=\textrm{Intervention process on $X^{\ell}$ in } \nonumber \\ &\phantom{=} \textrm{response to a predictive score $\rho$} \nonumber \\ &\phantom{=} \textrm{updating } X^\ell_e(0)\to X^\ell_e(1)\nonumber\\ \rho_e(x^s,x^a) &\in \{\rho:\Omega^s \times \Omega^a \to [0,1]\} \nonumber \\ &= \textrm{Predictive score trained at epoch} \nonumber \\ &\phantom{=} \textrm{$e$, evaluated at observed covariates.} \nonumber \end{align} Our main model is based on the following assumptions \begin{enumerate} \item $\forall e \hspace{5pt} X^s_e(0)=X^s_e(1)$: `set' covariates do not change from $t=0$ to $t=1$ \label{asm:first_main_assumption} \item $X^a_0(0)=X^a_0(1)$, $X^{\ell}_0(0)=X^{\ell}_0(1)$: `actionable' and `latent' covariates do not change at epoch 0 \item $X^{\ell}_e(t)$ is unobserved, but may be modified from $t=0$ to $t=1$ in response to $\rho_{e-1}$ \item Values of $X_{e}(0)$ are independent across epochs, i.e. we do not track the same subjects over time. \label{asm:ident_dist} \item At epoch $e$, the predictive score uses only $X^a_e(0)$, $X^s_e(0)$ and $Y_e$ as training data; previous epochs are ignored and $X^a_e(1)$, $X^s_e(1)$ are not observed. \label{asm:fourth_main_assumption} \item $\forall e \hspace{5pt} \mathbb{E}[Y_e|X_e]=\mathbb{E}[Y_e|X_e(1)]$: $Y_e$ depends only on $X_e(1)$; that is, after any potential interventions. 
\label{asm:last_main_assumption} \end{enumerate} Besides these core assumptions, for the applications in this work, we variably assume some of the following \begin{enumerate}[resume] \item $f_e$, $g^a_e$, $g^{\ell}_e$ and $\mu_e$ remain fixed across epochs\footnote{In practice, we may assume $f_e$ changes slightly between epochs, but that this change is negligible.}, so values $\{ X^s_{\cdot}\}$ are \emph{iid}, as are $\{ X^a_{\cdot}\}$ and $\{ X^{\ell}_{\cdot}\}$ (within an epoch they may be correlated). Where we make this assumption, we will omit the epoch subscript for clarity. We also use the shorthand $X^{\ell} \equiv X^{\ell}_e(0)|(X^s_e(0),X^a_e(0))=(x^s,x^a)$ \label{asm:equally_distributed} \item We allow $\rho_e$ to be an arbitrary function, but generally presume it is an estimator of \begin{align} &\rho_e(x^s,x^a) \approx \mathbb{E}\left[Y_e|X^s_e(0)=x^s,X^a_e(0)=x^a\right] \nonumber \\ &=\mathbb{E}_{X^{\ell}} \left[f_e\left(x^s,g^a_e(\rho_{e-1},x^a),g^{\ell}_e(\rho_{e-1},X^{\ell})\right)\right] \nonumber \\ &\triangleq \tilde{f}_e(x^s,x^a) \label{eq:rho_oracle} \end{align} noting that $\tilde{f}_e$ depends on $e$ even if $f_e$ does not. \item The function $f_e$ is $C^1$ in all arguments, and covariates are coded such that increases in covariate values increase risk \label{asm:fderiv} \item $g^{\ell}_e$, $g^a_e$ are $C^1$ in all arguments, and a higher value of $\rho$ means a larger intervention is made (we assume $g^{\ell}_e$ and $g^a_e$ to be deterministic, but random valued functions may more accurately capture the uncertainty linked to real-world interventions).\label{asm:gderiv} \end{enumerate} This extended causal model is shown in Figure~\ref{fig:diagram_setup}. To aid interpretation, a real-world example is described using this notation in Supplementary Section~\ref{supp_sec:realistic_exposition}. 
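To make the notation concrete, here is a minimal single-epoch simulation of this causal model. All functional forms below — a logistic $f$, additive interventions $g^a$ and $g^{\ell}$, standard normal covariates — are illustrative assumptions, not part of the model specification.

```python
import math
import random

random.seed(0)
logistic = lambda z: 1.0 / (1.0 + math.exp(-z))

# Hypothetical instances of the model's ingredients:
f   = lambda xs, xa, xl: logistic(xs + xa + xl)  # causal mechanism for E[Y|X(1)]
g_a = lambda rho, xa: xa - rho                   # intervention on actionable X^a
g_l = lambda rho, xl: xl - 0.5 * rho             # intervention on latent X^ell

def rho0(xs, xa, n=2000):
    """Epoch-0 oracle score: E_{X^ell}[f(x^s, x^a, X^ell)] with X^ell ~ N(0,1),
    estimated by Monte Carlo."""
    return sum(f(xs, xa, random.gauss(0, 1)) for _ in range(n)) / n

def risk_epoch1(xs, xa, n=2000):
    """Epoch 1: rho_0 is disseminated, so g^a and g^ell shift the covariates
    from t=0 to t=1 before Y is drawn."""
    r = rho0(xs, xa)
    return sum(f(xs, g_a(r, xa), g_l(r, random.gauss(0, 1))) for _ in range(n)) / n

baseline = rho0(0.0, 1.0)        # risk with no score in use (epoch 0)
after    = risk_epoch1(0.0, 1.0) # lower: the score drove risk-reducing action
```

Note that the pairs $(X_1(0), Y_1)$ recorded at epoch 1 reflect `after`, not `baseline`, which is precisely why a score refitted to them also models the old score's effect.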
\subsection{Aim of predictive score} \label{sec:aim} The aim of the predictive score is generally to estimate $\mathbb{E}[Y_e|X_e(0)]$ accurately, presuming that we take $X_e(0)$ to be identically distributed over the population concerned. However, if action is to be taken on the score, we may presume the ultimate goal is to minimise $\mathbb{E}[Y_e]$, i.e.\ to minimise \begin{align} &\mathbb{E}\left[Y_e\right] = \mathbb{E}_{X_e(0)}\left[ \mathbb{E}\left[ Y_e|X_e(1) \right] \right] \nonumber \\ &= \mathbb{E}_{X_e(0)}\left[f_e(X^s,g^a_e(\rho,X^a_e(0)),g^{\ell}_e(\rho,X^{\ell}_e(0)))\right] \label{eq:minimisethis} \end{align} At the same time, we presume that we cannot afford to maximally intervene in all cases. Suppose the cost of lowering $X^a$ and $X^{\ell}$ by $x$ is $c^a(X^a,x)$ and $c^{\ell}(X^{\ell},x)$, respectively. The total intervention must then satisfy \newpage \begin{align} &\mathbb{E}_{X_e(0)}\left[c^a\Big(X^a_e(0),X^a_e(0)-g^a_e(\rho,X^a_e(0))\Big) + \right. \nonumber \\ &\phantom{\mathbb{E}_{X_e(0)}} \left. c^{\ell}\Big(X^{\ell}_e(0),X^{\ell}_e(0)-g^{\ell}_e(\rho,X^{\ell}_e(0))\Big)\right] \leq C \label{eq:subjecttothis} \end{align} for a known constant $C$, representing the maximum cost. Thus we want to minimise~\eqref{eq:minimisethis} subject to~\eqref{eq:subjecttothis}. We have allowed $f_e$, $\mu_e$, $g^a_e$, $g^{\ell}_e$ and $\rho_e$ to vary across epochs. Of these, we can consider $f_e$ and $\mu_e$ to vary as a consequence of underlying processes, and $g^a_e$, $g^{\ell}_e$ and $\rho_e$ to be (somewhat) under our control. Depending on the problem, we may either consider $g^a_e$ and $g^{\ell}_e$ as fixed, and choose an optimal function $\rho_e$; or consider $\rho_e$ as fixed, and choose optimal functions $g^a_e$, $g^{\ell}_e$. If both are optimised, this corresponds to a general problem of resource allocation; see Supplementary Section~\ref{supp_sec:optimiseboth}.
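A toy version of this constrained problem can be solved by direct search. Everything here is a hypothetical sketch: a quadratic cost standing in for $c^a$, a one-parameter family of uniform shifts standing in for $g^a$, a Monte Carlo population for the outer expectation, and no latent covariates.

```python
import math
import random

random.seed(1)
logistic = lambda z: 1.0 / (1.0 + math.exp(-z))

f = lambda xs, xa: logistic(xs + xa)     # risk as a function of covariates
cost = lambda x0, delta: delta ** 2      # hypothetical quadratic cost c^a

pop = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2000)]
C = 0.25                                 # maximum average intervention cost

best = None
for i in range(21):
    d = i / 20                           # candidate uniform shift: x^a -> x^a - d
    avg_cost = sum(cost(xa, d) for _, xa in pop) / len(pop)
    if avg_cost > C:                     # the constraint E[cost] <= C
        continue
    risk = sum(f(xs, xa - d) for xs, xa in pop) / len(pop)  # objective E[Y]
    if best is None or risk < best[1]:
        best = (d, risk)
# risk falls monotonically in d here, so the optimum is the largest
# affordable shift, d = 0.5
```

Within this one-parameter family the constraint binds at the optimum, illustrating why the choice of $\rho_e$ and of $g^a_e$, $g^{\ell}_e$ amounts to a resource-allocation problem.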
\section{Naive model updating} \label{sec:naiveupdating} We consider a `naive' process in which a new score $\rho_e$ is fitted in each epoch, and then used as a drop-in replacement for an existing score $\rho_{e-1}$. We show that this procedure does not generally solve the constrained optimisation problem in Section~\ref{sec:aim}, can lead to `worse' performance of `better' models, and may lead to wide oscillation of predictions for fixed inputs across epochs. \subsection{Worse performance of better models} Here, we show that naive updating can lead to a loss in observed performance --- even when the procedure to infer $\rho_e$ is more accurate. We adopt assumptions~\ref{asm:first_main_assumption}--\ref{asm:gderiv}, taking the approximation in equation~\eqref{eq:rho_oracle} to be imperfect. Although most model elements are conserved across epochs (assumption~\ref{asm:equally_distributed}), we presume that the procedure used to infer $\rho_{e}$ changes, leading to better estimators of the function $\tilde{f}_e$. At epoch $e$, the training data is denoted by $(X_e^\star,Y_e^\star)$ and consists of $n$ samples of $(X_e(0),Y_e)$, with the latent covariate information removed. In the absence of interventions, we assert that model performance will improve over epochs. Since performance under non-intervention is equivalent to performance at epoch 0, this can be stated as: \begin{align} &\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{e}|X_0^\star,Y_0^\star)\right] > \nonumber \\ &\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{e+1}|X_0^\star,Y_0^\star)\right], \label{eq:true_expectation2} \end{align} \begin{figure}[p] \centering \includegraphics[width=0.5\textwidth]{causality_diagram_small.pdf} \caption{Causal diagram of the model across epochs. An `epoch' is a new model-fitting cycle. Covariates for a sample at the start of an epoch are modelled by $X^{\cdot}_e(0)$.
We presume $\left\{X^s_e(0), e\geq 0\right\}$ are independent (as are $X^a_{\cdot}(0)$ and $X^{\ell}_{\cdot}(0)$). We start with a sample at $t=0$, $e=0$. The values $X^s_0(0)$, $X^a_0(0)$ are observed and sent to analysts (arrow 1). No predictive score is present and no interventions are made based on it, so values remain the same to $t=1$ (arrows 2). $\mathbb{E}[Y_0]$ depends only on covariates at $t=1$, through $f_0$ (arrows 3). $Y_0$ is observed and sent to analysts (arrow 4) who decide a function $\rho_{0}$, which is retained into epoch 1 (arrow 5). We start epoch 1 with a new independent sample. At $t=0$, we observe $X^s_1(0)$, $X^a_1(0)$ and send them to analysts (arrow 6) who compute $\rho_{0} \left( X^s_1(0),X^a_1(0)\right)$ which is used to inform interventions $g^a_1$, $g^{\ell}_1$ (arrow 7) to change values $X^a_1(0), X^{\ell}_1(0)$ to $X^a_1(1), X^{\ell}_1(1)$ respectively (arrows 8). $X^s_1(0)$ cannot be intervened on and passes unchanged to $X^s_1(1)$ (arrow 9). $\mathbb{E}[Y_1]$ is determined by covariates at $t=1$ (arrows 10). Analysts use the values of $X^s_1(0)$, $X^a_1(0)$ (arrows 11), and $Y_1$ (arrow 12) to decide a function $\rho_{1}$, which is retained (arrow 13) for epoch 2. Subsequent epochs proceed similarly to epoch 1. } \label{fig:diagram_setup} \end{figure} where $m_{\tilde{f}}(\rho | X,Y)$ denotes a metric for closeness of $\rho$ to $\tilde{f}$, given observed data $(X,Y)$\footnote{In practice, $m_{\tilde{f}_e}$ is unknown but (assuming latent covariates have a small influence on $f$) estimates of $m_{\tilde{f}_0}$ can be calculated through a holdout test data set.}.
However, if interventions are in place, the improvement in equation~\eqref{eq:true_expectation2} does not imply that the actual performance improves across epochs, that is: \newpage \begin{align} &\mathbb{E}_{(X_e^\star,Y_e^\star)}\left[m_{\tilde{f}_e}(\rho_{e}|X_e^\star,Y_e^\star)\right] \not> \nonumber \\ &\mathbb{E}_{(X_{e+1}^\star,Y_{e+1}^\star)}\left[m_{\tilde{f}_{e+1}}(\rho_{e+1}|X_{e+1}^\star,Y_{e+1}^\star)\right]. \label{eq:false_expectation2} \end{align} This is proved by counterexample: see Supplementary Section~\ref{supp_sec:models_worse}. A critical consequence of this artefact is that stakeholders may decide not to update an existing score, even if an apparently better one is available.\footnote{We note that practically (if a holdout test data set was used) the conclusions on performance made by stakeholders would be based on a risk score's closeness to $\tilde{f}_0$ instead of $\tilde{f}_e$, but the results are the same, which we show in Supplementary Section~\ref{supp_sec:models_worse}.} \subsection{Dynamics of repeated naive updating} Here, we analyse the dynamics of repeated naive model updating. For this purpose, we make assumptions~\ref{asm:first_main_assumption}--\ref{asm:gderiv} and assume that $\rho_e$ is an oracle: the `$\approx$' in equation~\eqref{eq:rho_oracle} is replaced by an `$=$'. At epoch 0, there are no interventions, hence the risk of observing $Y = 1$ is $\mathbb{E}[Y_0|X_0(0)=(x^s,x^a,x^{\ell})] = f(x^s,x^a,x^{\ell})$. The score $\rho_0$ is therefore defined as \begin{equation} \rho_0(x^s,x^a)=\mathbb{E}_{X^{\ell}}[f(x^s,x^a,X^{\ell})], \label{eq:rho0def} \end{equation} where $X^{\ell}$ is denoted as in assumption~\ref{asm:equally_distributed}.
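This oracle score, and the effect of repeatedly refitting it under naive replacement, can be sketched numerically. The forms below follow the worked example of Figure~\ref{fig:main_example} (reading the figure's logit as the logistic function, and disregarding latent covariates so that the expectation over $X^{\ell}$ drops out); they are illustrative, not canonical.

```python
import math

logistic = lambda z: 1.0 / (1.0 + math.exp(-z))

# f and g^a as in the worked figure: risk rises with x^s + x^a, and
# interventions lower x^a when the score exceeds 1/2 but let it rise otherwise.
f   = lambda xs, xa: logistic(xs + xa)
g_a = lambda rho, xa: 0.5 * ((3 - 2 * rho) * xa
                             + (1 - 2 * rho) * math.sqrt(1 + xa * xa))

def naive_updates(xs, xa, epochs=40):
    """rho_0 = f(x^s, x^a); thereafter rho_e = f(x^s, g^a(rho_{e-1}, x^a)),
    since each epoch draws a fresh sample with the same initial x^a."""
    rhos = [f(xs, xa)]
    for _ in range(epochs):
        rhos.append(f(xs, g_a(rhos[-1], xa)))
    return rhos

rhos = naive_updates(0.0, 2.0)
# successive scores overshoot alternately above and below the fixed point of
# h(z) = f(xs, g_a(z, xa)); at this (xs, xa) the map is contracting, so the
# oscillation damps out
```

At other starting covariates the same map can have $|h'| > 1$ at its fixed point, in which case the oscillation never settles; this is the convergence/divergence boundary mapped in the figure.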
In subsequent epochs, $\rho_e$ is used to modify $x^a$ and $x^{\ell}$ via $g^a$ and $g^{\ell}$, leading to the following recursive relation: \begin{align} \rho_0(x^s,x^a) &= \mathbb{E}_{X^{\ell}}[f(x^s,x^a,X^{\ell})] \nonumber \\ \rho_e(x^s,x^a) &= \mathbb{E}_{X^{\ell}}[f(x^s,g^a(\rho_{e-1}(x^s,x^a),x^a), \phantom{)]} \nonumber \\ &\phantom{=\mathbb{E}_{X^{\ell}}[f(x^s,} g^{\ell}(\rho_{e-1}(x^s,x^a),X^{\ell}))] \nonumber \\ &\triangleq h(\rho_{e-1}(x^s,x^a)) \label{eq:hdef} \end{align} We briefly explore the dynamics of this recursion. Let $z \in [0,1]$ be arbitrary and denote by $S$ the substitution $(x^s,x^a,x^{\ell})=\left(x^s,g^a(z,x^a),g^\ell(z,X^{\ell})\right)$. Recalling definitions of $p^s$, $p^a$ from~\eqref{eq:saldef}, we set (for $i$ across the dimensions of $(x^a, x^{\ell})$) \begin{align} \delta^{g^a}_i &= \frac{\partial [g^{a}(z,x^a)]_i}{\partial z} \hspace{20pt} &\delta^{g^{\ell}}_i &= \frac{\partial [g^{\ell}(z,x^{\ell})]_i}{\partial z} \nonumber \\ \delta^{f^a}_i &= (\nabla f|_{S})_{p^s + i} \hspace{20pt} &\delta^{f^{\ell}}_i &= (\nabla f|_{S})_{p^s + p^a + i} \nonumber \end{align} recalling assumptions~\ref{asm:fderiv} and~\ref{asm:gderiv} to assert that these partial derivatives exist. Assumptions~\ref{asm:fderiv} and~\ref{asm:gderiv} further imply $\delta^{f^{\ell}}_i>0$, $\delta^{f^a}_i > 0$ and $\delta^{g^a}_i < 0$, $\delta^{g^{\ell}}_i < 0$ respectively, so \begin{align} h'(z) &= \mathbb{E}_{X^{\ell}}\left[ \sum_{i}^{p^a} \delta^{g^a}_i \delta^{f^a}_i + \sum_{i}^{p^{\ell}} \delta^{g^{\ell}}_i \delta^{f^{\ell}}_i \right] < 0 \label{eq:hderiv} \end{align} and thus the recursion $\rho_{e+1} = h(\rho_{e})$ has exactly one fixed point. Call this $z_0$, so $z_0=h(z_0)$. We now note \begin{theorem} \label{thm:naive_updating_behaviour} If $h'(z_0) \leq -1$ then the recursion does not converge unless $\rho_0=z_0$, and will tend toward a stable oscillation between two values.
If, for some (possibly unbounded) interval $R$, we have $\rho_e \in R$ for some $e$ and, for all $z \in R$, $h(z) \in R$ and \begin{align} \sum_{i}^{p^a} \left( \delta_i^{g^a} \right)^2 &\leq k_1, & \sum_{i}^{p^{\ell}} \mathbb{E}_{X^{\ell}} \left[ \left( \delta_i^{g^{\ell}} \right)^2\right] &\leq k_2 \label{eq:gcond} \\ \sum_{i}^{p^a} \mathbb{E}_{X^{\ell}}\left[ |\delta_i^{f^a} |\right]^2 &\leq k_3, & \sum_{i}^{p^{\ell}} \mathbb{E}_{X^{\ell}}\left[ \left(\delta_i^{f^{\ell}}\right)^2 \right] &\leq k_4 \label{eq:fcond} \end{align} where $\sqrt{k_1 k_3} + \sqrt{k_2 k_4} < 1$, then \begin{equation} |\rho_e(x^s,x^a)-\rho_{e+1}(x^s,x^a)| \to 0 \nonumber \end{equation} as $e \to \infty$. \end{theorem} This is proved in Supplementary Appendix~\ref{supp_sec:thm1proof}. Alternative conditions for convergence (`performative stability') are proved in~\cite{perdomo20}. Condition~\eqref{eq:gcond} states that, on average, interventions make only small changes to $x^a$ and $x^{\ell}$ in response to small changes in $\rho$. Condition~\eqref{eq:fcond} states that, on average, the actual risk changes little with small changes in covariates. These conditions are sufficient but not necessary. Since $h'(z)<0$, successive estimates of $\rho_e$ will oscillate around their limit. In general, requiring convergence of $\rho_e$ restricts the type of interventions which can be in place. A simple scenario in which $\rho_e$ cannot converge is provided in Supplementary Section~\ref{supp_sec:oscillation}, and we illustrate an example showing convergence and divergence of $\rho_e$ in Figure~\ref{fig:main_example}. We produced a simple web app illustrating this problem at \url{https://ajl-apps.shinyapps.io/universal_replacement/}. \begin{figure}[p] \centering \includegraphics[width=0.7\textwidth]{convergence_plot.jpg} \caption{Example showing convergence and divergence of $\rho_e$ across epochs. We disregard $x^{\ell}$, $g^{\ell}$ in this example.
We choose $f(x^s,x^a)=\textrm{logit}(x^s+x^a)$ (top left). We choose $g^a$ with the rationale that we intervene by lowering $X^a(0)$ when $\rho_e> 1/2$, but allow $X^a(0)$ to increase when $\rho_e< 1/2$ (that is, resources for intervention are redistributed rather than introduced), and assume that we can intervene more effectively when $X^a(0)$ is high (strictly, $g^a(\rho,x^a) = \frac{1}{2}\left((3-2\rho)x^a + (1-2\rho)\sqrt{1+(x^a)^2}\right)$, top right panel). The bottom panel shows whether $\rho_e(x^s,x^a)$ converges or diverges, and how long it takes (num. epochs until $\Delta_e \triangleq |\rho_e-\rho_{e-1}|<0.01$ or $(|\Delta_e|>0.05 \cup |\Delta_e-\Delta_{e-1}|<0.01)$; $|e|\leq 10$). Insets show cobweb plots for relevant recursions, and plots of $\rho_e$.} \label{fig:main_example} \end{figure} We may hope that naive updating, when it converges, solves the optimisation problem in Section~\ref{sec:aim}. It does not, and we give a specific counterexample in Supplementary Section~\ref{supp_sec:nonoptimal}. Finally, we note that the dynamics above also model a related setting, where samples are tracked across epochs and interventions are permanent (Supplementary Section~\ref{supp_sec:alternative}). In summary, naive updating can readily lead to wide oscillation of successive risk estimates, and even if $\rho_e$ does converge, the limit does not generally correspond to an optimal outcome in terms of minimising incidence of $Y$. \section{Successive adjuvancy} \label{sec:successive_adjuvancy} \emph{Note: This section and associated content (theorem~\ref{thm:successive_adjuvancy}, figures~\ref{fig:chaos},~\ref{fig:causality_sa}, and associated discussions) were not included in the originally submitted version of this paper due to length.
This material does not appear in the published version of this manuscript, and the reader should be aware that these sections did not undergo peer review.} \vspace{10pt} We propose a second strategy for updating risk scores in which interventions are `built' across successive epochs, effectively using new risk scores as adjuvants to risk scores from previous epochs, rather than replacements. We retain assumptions~\ref{asm:first_main_assumption} through~\ref{asm:gderiv} except assumption~\ref{asm:equally_distributed}: we assume that $f_e$ and $\mu_e$ remain fixed across epochs, but $g^a_e$ and $g^{\ell}_e$ do not. Although we no longer consider $g^a_e$ and $g^{\ell}_e$ fixed across epochs, we consider fixed functions $g^a$ and $g^{\ell}$ which will be used as `building blocks' for $g^a_e$ and $g^{\ell}_e$. In epoch $e$, we observe initial values $x_e(0)=\left(x^a_e(0),x^s_e(0),x^{\ell}_e(0)\right)=(x^a_e,x^s_e,x^{\ell}_e)$ at $t=0$, and compute $\rho_0(x^a_e,x^s_e)$, $\rho_1(x^a_e,x^s_e)$, $\dots$, $\rho_{e-1}(x^a_e,x^s_e)$. We build $g^a_e$, $g^{\ell}_e$ as follows. We begin by intervening on $x^a_e(0),x^{\ell}_e(0)$ according to $\rho_0$ and the building block functions $g^a$, $g^{\ell}$ to get $g^a(\rho_0,x^a_e)$, $g^{\ell}(\rho_0,x^{\ell}_e)$. We then intervene on these new values according to $\rho_1$, to get $g^a\left(\rho_1,g^a(\rho_0,x^a_e)\right)$, $g^{\ell}\left(\rho_1,g^{\ell}(\rho_0,x^{\ell}_e)\right)$. We then intervene on these values according to $\rho_2$, and so on. The intervention functions at epoch $e$ are thus defined as \begin{align} g^a_e(\rho,x^a) &= g^a\left(\rho_{e-1},g^a\left(\rho_{e-2},\dots, g^a(\rho_0,x^a)\dots\right)\right)\nonumber \\ g^{\ell}_e(\rho,x^{\ell}) &= g^{\ell}\left(\rho_{e-1},g^{\ell}\left(\rho_{e-2},\dots, g^{\ell}(\rho_0,x^{\ell})\dots\right)\right)\nonumber \end{align} taking $x^s$ at some fixed value, and $\rho_0$, $\rho_1$, $\dots$, $\rho_{e-1}$ as fixed functions.
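A minimal numerical sketch of this composed-intervention scheme (treating each $\rho_e$ as an oracle, with univariate $x^a$ and latent covariates disregarded; the logistic $f$ and the additive $g^a$ below are illustrative assumptions):

```python
import math

logistic = lambda z: 1.0 / (1.0 + math.exp(-z))

f   = lambda xs, xa: logistic(xs + xa)   # illustrative causal mechanism
g_a = lambda rho, xa: xa - (rho - 0.5)   # lowers x^a iff the score exceeds 1/2;
                                         # leaves x^a unchanged exactly at 1/2

def successive_adjuvancy(xs, xa, epochs=100):
    """Interventions compose across epochs: each new oracle score acts on the
    already-intervened covariate value, x <- g^a(rho, x)."""
    rho, x = f(xs, xa), xa
    for _ in range(epochs):
        x = g_a(rho, x)
        rho = f(xs, x)
    return rho, x

rho_lim, x_lim = successive_adjuvancy(0.0, 2.0)
# the score settles at the 'equivocal' value 1/2, at which g^a makes no
# further change, and x^a is driven to the level where f equals 1/2
```

In contrast to naive replacement, here the cumulative intervention keeps shifting $x^a$ until the risk itself no longer warrants action.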
We also presume again that $\rho_e$ is an oracle; that is, that the approximation in equation~\ref{eq:rho_oracle} is perfect. This enables construction of a recursive definition: \begin{align} g^a_0(\cdot,x^a) &= x^a \nonumber \\ g^{\ell}_0(\cdot,x^{\ell}) &= x^{\ell} \nonumber \\ \rho_0 = \rho_0(x^s,x^a) &= \mathbb{E}_{X^{\ell}} [f\left(x^s,x^a,X^{\ell}\right)] \nonumber \\ & \nonumber \\ g^a_1(\rho_0,x^a) &= g^a(\rho_0,x^a) \nonumber \\ g^{\ell}_1(\rho_0,x^{\ell}) &= g^{\ell}(\rho_0,x^{\ell}) \nonumber \\ \rho_1 = \rho_1(x^s,x^a) &= \mathbb{E}_{X^{\ell}} [f\left(x^s,g^a_1(\rho_0,x^a),g^{\ell}_1(\rho_0,X^{\ell})\right)] \nonumber \\ & \nonumber \\ g^a_{e+1}(\rho_e,x^a) &= g^a\left(\rho_{e},g^a_e(\rho_{e-1},x^a)\right) \nonumber \\ g^{\ell}_{e+1}(\rho_e,x^{\ell}) &= g^{\ell}\left(\rho_{e},g^{\ell}_e(\rho_{e-1},x^{\ell})\right) \nonumber \\ \rho_{e+1} = \rho_{e+1}(x^s,x^a) &= \mathbb{E}_{X^{\ell}} [f\left(x^s,g^a_{e+1}(\rho_e,x^a),g^{\ell}_{e+1}(\rho_e,X^{\ell})\right)] \label{eq:sarecursion} \end{align} \subsection{Dynamics of successive adjuvancy} The dynamics of this system are more complex than those of naive updating. However, under much simplified circumstances (a univariate $x^a$, disregarding $x^{\ell}$), we show the following: \begin{theorem} \label{thm:successive_adjuvancy} Assume the following: \begin{enumerate} \item $g^{\ell}(\cdot,x^{\ell})=g^{\ell}_e(\cdot,x^{\ell})=x^{\ell}$, and $X^{\ell}_e \sim \delta_0$ (so all terms involving $\ell$ can be omitted from recursion~\ref{eq:sarecursion}) \item $x^a$ is univariate ($p^a=1$) \label{asm:univ} \item $\frac{\partial}{\partial x^a}f(x^s,x^a)>0$ \label{asm:fpartial} \label{asm:partial} \item For some unique $\rho_{eq}$ we have $\forall x \hspace{5pt} g^a(\rho_{eq},x) =x$, and $\forall (x,\rho \neq \rho_{eq}) \hspace{5pt} g^a(\rho,x)\neq x$ \end{enumerate} For brevity we define $f(x)=f(x^s,x)$ and denote by $S_2$ the substitution $(\rho,x^a) \to (r,f^{-1}(r))$.
Now if, for some interval $I$ containing $\rho_{eq}$, we have $\rho_e \in I$ for some $e < \infty$, and for all $r \in I$, we have \begin{equation} \left|\frac{\partial}{\partial r} f(g^a(r,f^{-1}(r))) \right| = \left|f'\left(g^a(r,f^{-1}(r))\right)\left(\frac{\frac{\partial g^a}{\partial x^a}|_{S_2}}{f'(f^{-1}(r))} + \frac{\partial g^a}{\partial r}|_{S_2} \right) \right| < 1 \label{eq:h2deriv} \end{equation} then \begin{align} \rho_e(x^s,x^a) &\to \rho_{eq} \nonumber \\ P\left(Y_e|(X^s_e(0),X^a_e(0))=(x^s,x^a)\right) &\to \rho_{eq} \nonumber \\ g^a_e(\rho_{e-1},x^a) &\to \{x: f(x^s,x)=\rho_{eq}\}= f^{-1}(\rho_{eq}) \end{align} as $e \to \infty$. \end{theorem} This is proved in Supplementary Section~\ref{supp_sec:thm2proof}. Although limited to simplified circumstances, the results of this theorem warrant some interpretation. We may consider $\rho_{eq}$ to be an `equivocal risk': that is, a risk at which the value of $x^a$ remains the same. The theorem roughly states that, for sufficiently slowly-changing $f$ and $g^a$, interventions build towards a point at which everyone is brought to almost the same (equivocal) risk level. For certain reasonable values of $f$ and $g^a$, including those used for figure~\ref{fig:main_example}, the derivative of $h_2$ can change sign, leading to chaotic behaviour of $\rho_e$ (figure~\ref{fig:chaos}). \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{chaos_plot_1.jpg} \includegraphics[width=0.45\textwidth]{chaos_plot_2.jpg} \caption{Dynamics of successive adjuvancy. In both panels, $f(x^s,x^a)=\textrm{logit}(x^s+x^a)$, we have $\rho_{eq}=1/2$, and the colour indicates the difference $\rho_e-\rho_{eq}$ for $e=20$. The left panel shows dynamics in which $g^a$ has the same form as for figure~\ref{fig:main_example}, and can be seen to lead to chaotic behaviour of $\rho_e$ for some values of $(x^s,x^a)$.
The right-hand panel uses $g^a(\rho,x^a) = x^a - 4(1-\rho)\textrm{logit}(x^a)$, and $\rho_e$ can be seen to converge to $\rho_{eq}$ everywhere, albeit at different rates.} \label{fig:chaos} \end{figure} An advantage of successive adjuvancy over naive replacement is that the risk scores from previous epochs $\rho_0$, $\rho_1$, $\dots$, $\rho_{e-1}$ have an immediate interpretation as unbiased estimates of risk at successive stages of the intervention process. If we consider the interventions $g^a_e$, $g^{\ell}_e$ as a series of interventions of type $g^a$, $g^{\ell}$ applied in succession, then $\rho_0$ is the true risk ($P(Y)$) before applying $g^a$, $g^{\ell}$ at all, $\rho_1$ is the true risk after applying $g^a$, $g^{\ell}$ once in response to $\rho_0$, $\rho_2$ is the true risk after applying $g^a$, $g^{\ell}$ firstly in response to $\rho_0$ and subsequently in response to $\rho_1$, and so on. Specifically, $\rho_{e-1}$ is the risk of $Y$ immediately before applying $g^a$, $g^{\ell}$ for the final ($e$th) time. When used for this final time, $g^a$ and $g^{\ell}$ are applied in response to $\rho_{e-1}$ itself. Figure~\ref{fig:causality_sa} illustrates this idea for epochs 0 and 1 using the format of figure~\ref{fig:causality}. Seen in this way, repeatedly adjusting covariates on the basis of new risk estimates resembles a `boosting' strategy, in which each new $\rho_e$ captures the residual risk remaining after $\rho_0$ through $\rho_{e-1}$, which seems a logical approach in a real-world situation. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{causality_nu_sa.pdf} \caption{Underlying causality structure of successive adjuvancy. Panel A shows structure in a score-naive setting where $\rho_0$ is fitted. Panel B shows the causal structure when $\rho_1$ is fitted; as for figure~\ref{fig:causality}, $\rho_1$ models a setting (coloured red) in which $\rho_0$ forms an additional causal pathway from $X$ to $Y$.
Under naive updating (panel C), $\rho_1$ replaces $\rho_0$, and is used to model a scenario distinct from that to which it was fitted. However, in successive adjuvancy (panel D), $\rho_1$ is an adjuvant to $\rho_0$, and thus still operates on the same system to which it was fitted. In panel D, $\rho_1$ is used to guide interventions after interventions have already been made on the basis of $\rho_0$.} \label{fig:causality_sa} \end{figure} An implementation of successive adjuvancy is included in our web app at \url{https://ajl-apps.shinyapps.io/universal_replacement/}. \section{Strategies to avoid this problem} \label{sec:solution} Naive updating is an appropriate method for updating risk scores if no interventions are being made (that is, $g^a(\rho,x^a)=x^a$ and $g^{\ell}(\rho,x^{\ell})=x^{\ell}$), as may be the case if a risk score is used for prognosis only, rather than to guide actions\footnote{EUROscore2~\citep{nashef12} (a risk predictor for cardiac surgery) can be used in this way, by giving patients prognostic estimates but without being used to recommend for or against surgery}. It may also be appropriate if we do not aim to solve the constrained optimisation problem in Section~\ref{sec:aim}, and are only concerned with accuracy of the model: in that case, under at least the conditions of Theorem~\ref{thm:naive_updating_behaviour}, naive updating will lead to estimates $\rho_e(x^s,x^a)$ converging as $e \to \infty$ to a setting in which $\rho_e$ accurately estimates its own effect: conceptually, $\rho_e(x^s,x^a)$ estimates the probability of $Y$ \emph{after} interventions have been made on the basis of $\rho_e(x^s,x^a)$ itself~\citep{perdomo20}. Naive updating is otherwise generally not advisable, although a range of alternative modelling strategies do not lead to the same problems. We demonstrate three general strategies for avoiding the naive updating problem below.
We describe how each of these accomplishes this and compare their advantages in Supplementary Section~\ref{supp_sec:solution_comparison}. We describe how an implementation of each strategy may look in the context of a toy example in Supplementary Section~\ref{supp_sec:solution_illustration}. Successive adjuvancy may be an appropriate method for updating risk scores if eventual convergence can be proven and a progression of all samples towards the same risk level is a desirable outcome. Such an outcome clearly does not generally solve the constrained optimisation problem in Section~\ref{sec:aim}, as the cost may be arbitrarily large. Although $g^a_e$ and $g^{\ell}_e$ are variable, they are entirely built from successive applications of $g^{a}$ and $g^{\ell}$, which may not be practical. \subsection{More complex modelling and more data} \label{sec:solution_modelling} An obvious way to avoid the problem is to model the setting completely, including the effect of any interventions. Methods of this type would include explicit causal modelling, as used in related problems~\citep{sperrin18}, or counterfactual inference, which has been suggested as a direct approach to the problem~\citep{sperrin19}. These approaches would require knowledge or accurate inference of $g^{\ell}$ and $g^a$, or observation of covariates at several points in each epoch~\citep{sperrin18}. A second approach is to consider data from previous epochs alongside the current data when fitting $\rho_e$. Such data can be used as a prior on the fitted model~\citep{alaa18} and could be used to infer model elements: $\mu_e$, $g^{\ell}$, $g^a$, and $f$. If accurate data were available, oscillatory effects could even be detected and avoided. A difficulty with this approach in a realistic setting is in distinguishing whether inaccuracies in older models are due to drift in the underlying system \citep{Quionero-Candela2009-al} (in our case, in $f$ and $\mu_e$) or due to the effects of intervention.
Indeed, the problems with naive updating can be seen as treating model inaccuracies as though they are due to the first effect, when they are in fact due to the second. Definitive assertion of the cause of inaccuracies will, again, generally require more frequent observation of covariates. \subsection{Hold-out set} \label{sec:solution_holdout} A straightforward and potentially practical means to avoid the problems associated with naive updating is to retain a set of samples in each epoch for which $\rho_e$ is not calculated, and hence cannot guide intervention. For such samples, $X_e(0)=X_e(1)$, so a regression of $Y$ on $X_e(0)$ restricted to these `held-out' samples can be used as an unbiased estimate for $f_e$. If the hold-out set is randomly selected, this would emulate a \emph{clinical trial} which enables us to assess the effect of predictive scores (and their associated interventions) across epochs. A problem with this approach is that any benefit of the risk score-guided intervention is lost for individuals in the hold-out set. Careful consideration of the ethical consequences of this strategy is therefore required. \subsection{Control interventions} \label{sec:solution_control} A radically different option is the direct specification of the interventions $g^{\ell}_e$ and $g^a_e$ in each epoch, considering $\rho_e$ and $\mu_e$ constant, and $f_e$ as changing only slightly with $e$. This enables us to directly address the constrained optimisation problem in Section~\ref{sec:aim}. If $X^{\ell}$ can be disregarded, and we may regard $f_{e-1}$ as an unbiased estimate of $f_e$\footnote{This assumption underlies the fundamental point of a risk score}, then we may take a simple inductive approach: \begin{enumerate} \item At the end of epoch 0, infer $f_0$ and $\mu_0$. Given some fixed functions $\rho$, $c^a$, find a function $g^a_1$ which solves the constrained optimisation problem in Section~\ref{sec:aim} assuming $f_1=f_0$, $\rho_1=\rho_0$. Implement this intervention.
\item At the end of epoch $e>0$, regress $Y_e$ on \begin{equation} X_e(1)=\left(X^s_e(0),g^a_e\Big(\rho(X^s_e(0),X^a_e(0)),X^a_e(0)\Big)\right) \nonumber \end{equation} to obtain an unbiased estimate of $f_e$. Now solve the constrained optimisation problem to optimise $g^a_{e+1}$, assuming $f_{e+1}=f_e$ and $\rho_{e+1}=\rho_{e}$. \end{enumerate} Thus in each epoch an unbiased update of $f_e$ can be made, and the constrained optimisation problem can be directly solved. If $X^{\ell}$ is present, the problem is more complex. We suggest this general case as an open problem (see Supplementary Section~\ref{supp_sec:open}). A problem with this approach in a medical setting is that specification of $g^a_e$ may cause the procedure to be subject to medical device regulation~\citep{mhra19}. The implications of these regulatory processes map onto our potential solutions; for example, countries in the EU~\citep{eu17} have only developed regulatory processes to the point of accommodating static risk scores, and by extension currently treat updated scores as new tools. In these cases a separate evaluation exercise, such as testing on a hold-out set, is necessary to demonstrate efficacy prior to dissemination, which would also remedy the problems of naive updating (although the costs of repeated formal evaluations of effectiveness, and the ethics of a hold-out set, may be a concern). However, the US FDA has proposed an alternative `total-life-cycle' approach~\citep{fda19} which allows for model updating (contingent on defining a performance monitoring mechanism), which, given the problems of naive updating, is potentially seriously flawed. \section{Formulation as a control-theoretic/reinforcement learning problem} \label{sec:control} Control theory \citep{Bertsekas1995-hr} and its modern incarnation, reinforcement learning \citep{Sutton2018-ng}, study temporal problems where multiple actions are available at each time step.
The aim of the field is to derive an optimal policy either from the start or, in the partially observable case, a mechanism that quickly converges to the optimal policy. In the latter case, the regret is the utility lost compared to using the optimal policy from the start. The methods underlying this, such as dynamic programming, are used in a variety of fields, including game playing (Go~\citep{Silver2018-alphago}), dynamic treatment strategies~\citep{alaa18}, and mechanical and electrical engineering. Here we use the formulation of a Partially Observable Markov Decision Process (POMDP) \citep{Yuksel2017-ni}, and adopt the notation from \citep{Wang_undated-wi}, whereby we consider the POMDP as a 7-tuple $\left(\mathcal{S},\mathcal{A},\mathcal{T},\mathcal{R},\Omega,\mathcal{Z},\gamma\right)$: \begin{itemize} \item $\mathcal{S}$, $\mathcal{A}$ and $\Omega$ are spaces of states, actions and observations. \item $\mathcal{T}$ is the transition kernel that describes the evolution given state and action, e.g.\ $s_{e+1}\sim\mathcal{T}(\cdot\mid s_{e},a_{e})$ (i.e.\ a set of conditional transition probabilities between states and actions). \item $\mathcal{Z}$ is a kernel for the observation given the state, e.g.\ $o_{e+1}\sim\mathcal{Z}(\cdot\mid s_{e},a_{e})$\footnote{Note that here future observations depend on current states and actions and not on future states and actions}. \item $r_{e}$ represents our reward for being in state $s$ and taking action $a$ at time (or equivalently epoch) $e$, and is sampled from $\mathcal{R}$, i.e.\ $r_e\sim\mathcal{R}(s_{e},a_{e})$. \item $\gamma$ is a discount factor that down-weights future rewards if $0<\gamma<1$. \end{itemize} A solution candidate is a policy $$a_{e}\sim\pi\left(\left\{ o_{s},r_{s},a_{s}\right\}_{s=1}^{e-1}\right)$$ which aims to maximise \[\mathbb{E} \sum_{e=1}^{M}\gamma^{e-1} r(s_e,a_e) \] where $M$ represents the maximum number of time/epoch steps. Other reward/utility parametrisations are possible, e.g.
to include a final payoff or an infinite-time-horizon payoff. Several options for reward function construction are detailed in~\citep{liu14,yu19,wirth17}. The beauty of this framework is its flexibility: aspects such as optimisation under uncertainty can be included by incorporating parameters of the reward, transition and observation processes into the (unobserved) state variable. We cast the above in this framework: \begin{align} s_{e}&=\left(X_{e}(0),X_{e}(1),Y_{e}\right) \nonumber \\ a_{e}&=\rho_e \nonumber \\ o_{e}&=\left((X_{e}^{s}(0),X_{e}^{a}(0)),Y_{e}\right) \nonumber \\ r_{e}&=\mathbb{P}\left(\bar{Y}_{e+1}\mid s_e,a_{e}\right) \nonumber \end{align} with $\bar{Y}$ corresponding to the rate of events in the total population. The transition kernel from $s_e$ to $s_{e+1}$ consists of sampling $X_{e+1}(0)$ (note that this sampling is independent of $s_e$), intervening on this sample using $\rho_e$ to form $X_{e+1}(1)$, and then using these values to sample $Y_{e+1}$ from the resulting conditional distribution. Finally, we note that given Assumption~\ref{asm:fourth_main_assumption} our policy reduces to $a_e \sim \pi\left(o_{e-1},r_{e-1},a_{e-1}\right)$, as earlier epochs are ignored. Indeed, this assumption also implies that $s_{e+1}$, $o_{e+1}$ and $r_e$ depend on the previous state only through $a_e = \rho_e$. In the control viewpoint it is also easy to formulate the longitudinal problem (this corresponds to setting $X_{e+1}(0)=X_{e}(1)$). The description above allows one to use methods of the field such as Q-learning, approximate dynamic programming, PDE-based approaches such as the Hamilton--Jacobi--Bellman equation, and many more. These methods create a policy which maps the historical observations to an action (for the problem at hand, a risk score function). Most of the rigorous methods require a low-dimensional state space \citep{Powell2007-xe}.
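To make this casting concrete, the sketch below simulates a single transition of the POMDP. It is our own illustration: $f$, $g^a$ and $\mu$ are toy assumptions (not quantities fixed by the text), and the reward is simplified to the observed non-event rate as a stand-in for $r_e$.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Assumed toy dynamics: outcome model f, intervention g^a,
# and mu = standard normal on (X^s, X^a).
def f(xs, xa):
    return sigmoid(xs + xa)

def g_a(rho, xa):
    return xa - 2.0 * (rho - 0.5)

def step(policy_rho, n=10000, rng=random.Random(0)):
    """One POMDP transition: sample X_e(0) (independently of s_{e-1}),
    act with the score (a_e = rho_e), form X_e(1), then sample Y_e."""
    events = 0
    obs = []
    for _ in range(n):
        xs, xa0 = rng.gauss(0, 1), rng.gauss(0, 1)  # X_e(0) ~ mu
        rho = policy_rho(xs, xa0)                   # action a_e
        xa1 = g_a(rho, xa0)                         # X_e(1)
        y = rng.random() < f(xs, xa1)               # Y_e
        events += y
        obs.append(((xs, xa0), y))                  # observation o_e
    reward = 1.0 - events / n                       # stand-in for r_e
    return obs, reward

obs, reward = step(lambda xs, xa: f(xs, xa))  # oracle-style naive policy
print(round(reward, 2))
```

A reinforcement learning method would search over policies `policy_rho` to maximise the (discounted) sum of such rewards across epochs, rather than simply refitting the score to the most recent data.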
\section{Discussion} \label{sec:discussion} In this work, we elaborate on the issue raised by Lenert and Sperrin~\citep{lenert19,sperrin19} and propose a framework for quantitatively modelling its effects, with a particular focus on a model which is updated repeatedly. We demonstrate some consequences of ignoring this problem, and note that they occur even in highly idealised circumstances. Although the problem can generally be avoided by more complex and complete modelling, we consider that this is often impractical: the setting in which a model will eventually be used is not generally considered until the model is to be implemented~\citep{lipton18}. The formulation of the constrained optimisation problem in Section~\ref{sec:aim} makes it clear that for fixed $g^{\ell}$, $g^a$, the best possible $\rho_e$ is not necessarily the oracle estimator in equation~\ref{eq:rho_oracle}. However, many machine learning models tend to focus on accurate prediction of outcomes~\citep{nashef12}, rather than directly solving problems of the type in Section~\ref{sec:aim}; hence, the naive updating setting considers a $\rho_e$ which does exactly this, and assumes an analyst who ignores the effect of intervention. The model presented here is not a full description of modern predictive scoring systems; however, it is extensible in various ways (some detailed in Supplementary Section~\ref{supp_sec:open}). In particular, $g^{\ell}$ and $g^a$ could be random-valued rather than deterministic. We also note that we assume a covariate value after intervention confers the same contribution to risk of $Y$ as it does when it takes the same value `naturally', which may not be realistic. We assume we are `starting over' with new samples at the beginning of each epoch, and for naive updating, we assume that covariate values are identically distributed.
The basis for this assumption is that we generally expect interventions to be zero-sum: that is, the risk score guides a redistribution of intervention rather than an introduction of interventions, so the total effect on the sample population remains roughly the same in each epoch. In this assumption, we differ from the analysis of Lenert~\citep{lenert19}. We can alternatively interpret this assumption as taking all interventions to be short-term, having `worn off' by the start of the next epoch. The problem raised here also exists in the more general setting where interventions have long-term effects and we consider longitudinal effects. An important consideration in model updating is `stability' of successive predictions: in our setting, whether successive values of $\rho_e$ converge. Colloquially, we can take `stability' to mean that if the underlying system being modelled does not change, then updating a model will leave it unchanged; the model predicts its own effect. General conditions for stability are considered in~\cite{perdomo20}, who differentiate between stability, in which $\rho$ optimises a loss given its own effect, and `performative optimality', in which $\rho$ globally optimises a loss. Although we highlight that stability does not generally guarantee that the model is achieving the best outcome (according to the constrained optimisation problem in Section~\ref{sec:aim}), we note that stability has real-world advantages: in particular, trust in a model will generally be better if it appears to be stable. In the setting where models change at each epoch, if $m_{\tilde{f}_{e}}$ is known at the current epoch $e$, we note that a fair comparison of models is one which compares models built using the training data available at the current epoch\footnote{This is not to say that the performance of models will not deteriorate over epochs, just that the issue may not lie with the model structure.}.
If $m_{\tilde{f}_{e}}$ is not known, then a holdout set of test data must be used so that a fair comparison can be made using an estimate of $m_{\tilde{f}_0}$ (assuming $\tilde{f}_0 \approx f$). This is because at epoch $e$ we only have access to $(X_e(0),Y_e)$ and not $X_e(1)$, and so we are not able to properly gain insight into the behaviour of $\tilde{f}_e$ needed to provide an estimate of $m_{\tilde{f}_e}$. An attempt to estimate $m_{\tilde{f}_e}$ using $(X_e(0),Y_e)$ implicitly assumes that $Y_e$ directly depends on $X_e(0)$, and as a result $\rho_e$ would appear much closer to $\tilde{f}_e$ than is the case. Put simply, by implementing naive model updating not only may performance severely worsen (even if better models were used), but in not providing a holdout test set, stakeholders may not even be able to recognise that performance is worsening as the number of epochs increases. In essence, we provide a causal framework within which to understand a crucial issue in the regulation of machine learning and AI-based tools in health and further afield, demonstrating that approaches which incorporate naive updating are unlikely to be fit for purpose. Moreover, even where solutions are available to address the bias introduced by updating on `real-world' data in which outcomes represent (at least in part) the effects of an algorithm, these restrict the potential of `online' and frequently updated solutions. We hope that our work will foster discussion of this interesting problem, which is becoming increasingly pertinent as machine-learning based predictive scores become widely used to guide decision making, and as policymakers act to address how to regulate these tools to ensure safety and effectiveness. \subsection*{Code availability} Code to reproduce relevant plots and examples is available at~\url{github.com/jamesliley/model_updating}.
\subsubsection*{Acknowledgements} We thank the Alan Turing Institute, MRC Human Genetics Unit at the University of Edinburgh, Durham University, University of Warwick, Wellcome Trust, Health Data Research UK, and Kings College Hospital, London for their support of the authors. This problem was first identified in our circumstance by LJMA. We thank Dr Ioanna Manolopoulou for helping to draw our attention to the imminence of this problem. JL, CAV and LJMA were partially supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1, particularly the ``Health'' theme within that grant and The Alan Turing Institute; JL, BAM, CAV, LJMA and SJV were partially supported by Health Data Research UK, an initiative funded by UK Research and Innovation, Department of Health and Social Care (England), the devolved administrations, and leading medical research charities; SRE is funded by the EPSRC doctoral training partnership (DTP) at Durham University, grant reference EP/R513039/1; LJMA was partially supported by a Health Programme Fellowship at The Alan Turing Institute; CAV was supported by a Chancellor's Fellowship provided by the University of Edinburgh. 
\clearpage \bibliographystyle{apalike}

% --- Supplementary Materials: Model updating after interventions paradoxically introduces bias ---
\setcounter{section}{7} \setcounter{equation}{16} \setcounter{figure}{4} \setcounter{theorem}{2} \setcounter{footnote}{8}

\section{Example of functions and variables in a realistic setting} \label{supp_sec:realistic_exposition} We consider the model proposed by~\cite{rahimian18} for prediction of emergency admission to a hospital in a given time period on the basis of electronic health records (EHRs). Such a model is not in common use in the location considered (England), so the data in the original paper are not affected by the problems we describe in the main manuscript. For clarity\footnote{Analogous times and variables can be described for other prediction periods and updating patterns}, we presume a prediction window of ten months (February--November), and that predictions are made and distributed to primary health practitioners in January, with a new model being trained on the basis of each year's data in December, to be implemented the following January. In this setting, distribution of the score may open a second causal pathway between covariates and outcome, as shown in figure~\ref{fig:causality}, and is thus susceptible to the problems of naive updating.
In this setting, variables and functions may be interpreted as follows: \begin{enumerate} \item $Y$: the event `an emergency admission in the following year' \item $X_e(0)$: the values of all variables which affect $\mathbb{E}(Y)$ at the time when the predictive score is computed (the start of each year) \item An `epoch': the time in which a given model is in use; e.g., each year. \item `Time': $t=0$ when the predictive score is computed (the start of January); $t=1$ represents the time after which any interventions have been made (the start of February). \item $X^s_e$: covariates affecting $\mathbb{E}(Y)$ which are included in the predictive score but which cannot be directly modified in the time frame: age, time since most recent emergency admission \item $X^a_e$: covariates affecting $\mathbb{E}(Y)$ included in the predictive score which can be modified in the time frame: current medications. \item $X^{\ell}_e$: covariates affecting $\mathbb{E}(Y)$ which are not included in the predictive score, and possibly can be modified in the time frame: blood pressure, cardiac function \item $f_e$: the underlying causal process for $Y$ given patient status; that is, the probability of admission in the subsequent year, given covariates. \item $g^a_e$: hypothetical prescribed interventions made on $X^a$ in response to a predictive score; for instance, reducing drug dosages. We roughly assume that this intervention is symmetric; for a patient at low emergency risk, a higher drug dose is acceptable. \item $g^{\ell}_e$: hypothetical prescribed interventions made on $X^{\ell}$ in response to a predictive score; for instance, treating low or high blood pressure. \end{enumerate} It is clear that if such a risk score were used universally, and data were collected from the period in which the model was in place, then the data would be affected by the effect of the predictive score itself. The model does not fully describe this setting.
The trichotomisation into $X^{\ell}$, $X^a$, and $X^s$ is not perfect; intervention on $X^{\ell}$ could also affect some variables in $X^a$ and vice versa. Interventions are likely to be random-valued to some extent. \section{Alternative system described by naive updating} \label{supp_sec:alternative} We note that the definition of $h$ (equation~\eqref{eq:hdef} in the main text), and hence the following comments on recursion dynamics, can be used to describe a related setting in which we track the same samples over epochs, and the effects of the interventions $g^a$, $g^{\ell}$ remain in place. Formally, we retain the definitions of $X^s,X^a,X^{\ell},e,t,f_e,g^a_e,g^{\ell}_e,\rho_e$ and all assumptions except~\ref{asm:ident_dist} and~\ref{asm:equally_distributed} from the main text. In their place, we assume that $f_e$, $g^a_e$, $g^{\ell}_e$ are fixed across epochs, but instead of resampling $X_e(0)$ from $\mu_e$, we have \begin{equation} X_{e+1}(0)=X_e(1) \end{equation} thus, while the values $X_0(0)$ are sampled from the distribution $\mu_0$, the values $X_e(0)$ are then determined for $e>0$. We illustrate this in figure~\ref{fig:alternative}. Now formulas~\eqref{eq:rho0def}, \eqref{eq:hdef} in the main text will hold, and the recursion will proceed as detailed in theorem~\ref{thm:naive_updating_behaviour} in the main text. \begin{figure}[h] \centering \includegraphics[width=0.75\textwidth]{alternative_diagram.pdf} \caption{Diagram showing the alternative setup for naive updating. Values $x^s,x^a,x^{\ell}$ are sampled at $(e,t)=(0,0)$, and used to determine $\rho_0$. Values are conserved until $t=1$, and remain the same at the start of epoch 1 ($(e,t)=(1,0)$). Values are intervened on by $g^a$, $g^{\ell}$ according to $\rho_0\left(x^s_1(0),x^a_1(0)\right)$, and the resultant values at $(e,t)=(1,1)$ are conserved until the start of the next epoch at $(e,t)=(2,0)$. Lowercase letters indicate that, while these quantities are random-valued, they inherit all randomness from their values at $(e,t)=(0,0)$.
Colour and line conventions are as for figure~\ref{fig:diagram_setup} in the main text.} \label{fig:alternative} \end{figure} \section{Proofs and counterexamples} \label{supp_sec:proofs} \subsection{Optimising both $\rho$ and $g^a$, $g^{\ell}$ is equivalent to a general resource allocation problem} \label{supp_sec:optimiseboth} Consider the constrained optimisation problem in Section~\ref{sec:aim} in the main text. We show that if we allow $\rho$ and $g^a$, $g^{\ell}$ to vary independently, then the constrained optimisation is equivalent to the solution of a problem in which the use of a predictive score is redundant. \begin{theorem} Suppose that the triple $(\rho_{opt},g^a_{opt},g^{\ell}_{opt})$ minimises quantity~\eqref{eq:minimisethis} subject to constraint~\eqref{eq:subjecttothis} in Section~\ref{sec:aim} in the main text, where all are arbitrary functions of two variables in the appropriate range. Let $h^a_{opt}$ and $h^{\ell}_{opt}$ be solutions to a second constrained optimisation problem: find $h^a(x^s,x^a)$ and $h^{\ell}(x^s,x^a,x^{\ell})$ which minimise \begin{align} \mathbb{E}_{X_e(0)}\{&f(X^s_e(0), \nonumber \\ &h^a(X^s_e(0),X^a_e(0)), \nonumber \\ &h^{\ell}(X^s_e(0),X^a_e(0),X^{\ell}_e(0)))\} \label{eq:altminimise} \end{align} subject to \begin{align} \mathbb{E}_{X_e(0)}\{ c^a(&X^a_e(0), \nonumber \\ &X^a_e(0)-h^a(X^s_e(0),X^a_e(0))) + \nonumber \\ c^{\ell}(&X^{\ell}_e(0), \nonumber \\ &X^{\ell}_e(0)-h^{\ell}(X^s_e(0),X^a_e(0),X^{\ell}_e(0)))\} \leq C & \label{eq:altsubject} \end{align} with $c^a,c^{\ell},f$ as for Section~\ref{sec:aim}. Then the minima of quantity~\eqref{eq:minimisethis} in the main text and of quantity~\eqref{eq:altminimise} achieved by $(\rho_{opt},g^a_{opt},g^{\ell}_{opt})$ and $(h^a_{opt},h^{\ell}_{opt})$ are the same. \end{theorem} \begin{proof} Given a tuple $(\rho_{opt},g^a_{opt},g^{\ell}_{opt})$, we explicitly construct an $(h^a_{opt},h^{\ell}_{opt})$ which attains the same minimum, and vice versa.
Given $(\rho_{opt},g^a_{opt},g^{\ell}_{opt})$, the corresponding forms of $h^a_{opt}$, $h^{\ell}_{opt}$ are simply \begin{align} h^a_{opt}(x^s,x^a) &= g^a_{opt}\left(\rho_{opt}(x^s,x^a),x^a\right) \nonumber \\ h^{\ell}_{opt}(x^s,x^a,x^{\ell}) &= g^{\ell}_{opt}\left(\rho_{opt}(x^s,x^a),x^{\ell}\right) \nonumber \end{align} Given $h^a_{opt}$, $h^{\ell}_{opt}$, the correspondence is slightly more complex. Set $\rho_{opt}$ as a bijective function from $\mathbb{R}^{n_s+n_a}$ to $\mathbb{R}$; for instance, set it to `splice' the decimal digits of its arguments together. Now set $g^a_{opt}$, $g^{\ell}_{opt}$ first to `decrypt' the value of $\rho_{opt}$ back into its constituent parts ($x^s$ and $x^a$), and then to compute $h^a_{opt}(x^s,x^a)$ and $h^{\ell}_{opt}(x^s,x^a,x^{\ell})$ as outputs. This shows that the two constrained optimisation problems are equivalent. \end{proof} We note that this implies that optimising $(\rho,g^a,g^{\ell})$ jointly is equivalent to a more general treatment-allocation problem which does not involve a predictive score.
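The digit-splicing construction above can be sketched concretely. The following is only an illustrative toy (restricted to scalar arguments in $[0,1)$ at a fixed finite precision, so it is a bijection only up to that precision), not part of the formal proof:

```python
def splice(x, y, ndigits=6):
    """Interleave the decimal digits of two numbers in [0, 1)."""
    xs = f"{x:.{ndigits}f}"[2:]  # digit string after "0."
    ys = f"{y:.{ndigits}f}"[2:]
    return float("0." + "".join(a + b for a, b in zip(xs, ys)))

def unsplice(z, ndigits=6):
    """Recover the two constituents: the `decrypt' step inside g^a, g^l."""
    zs = f"{z:.{2 * ndigits}f}"[2:]
    return float("0." + zs[0::2]), float("0." + zs[1::2])

# The spliced value carries both arguments, so any function of (x, y)
# can be evaluated after decryption, making the score itself redundant.
print(splice(0.123456, 0.654321))  # -> 0.162534435261
```

Here `unsplice` plays the role of the decryption performed inside $g^a_{opt}$ and $g^{\ell}_{opt}$ before evaluating $h^a_{opt}$ and $h^{\ell}_{opt}$.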
\subsection{Counterexample showing naive updating can cause better models to appear worse} \label{supp_sec:models_worse} For this counterexample we shall use the following setup: \begin{align} f(x^s,x^a,x^\ell) =& f(x^s,x^a) = (1+e^{-x^s-x^a})^{-1}\\ \rho_{0}(x^s,x^a \mid X^\star_{0},Y^\star_0) =& \begin{cases} \frac{\sum_{i=1}^{n}(Y^\star_0)_{i}\mathds{1}\{\sum_{j=1}^{2}(X^\star_{0})_{ij}>0\}}{\sum_{i=1}^{n}\mathds{1}\{\sum_{j=1}^{2}(X^\star_{0})_{ij}>0\}} & x^s + x^a > 0 \\ \frac{\sum_{i=1}^{n}(Y^\star_0)_{i}\mathds{1}\{\sum_{j=1}^{2}(X^\star_{0})_{ij} \leq 0\}}{\sum_{i=1}^{n}\mathds{1}\{\sum_{j=1}^{2}(X^\star_{0})_{ij} \leq 0\}} & x^s + x^a \leq 0 \end{cases}\\ \rho_{1}(x^s,x^a \mid X^\star_{1},Y^\star_1) =& (1+e^{-\hat{\beta}_{0} - x^s\hat{\beta}_{1} - x^a\hat{\beta}_{2}})^{-1} \mbox{ where } \hat{\beta} = \mbox{argmax}\{\mathcal{L}(\beta|X^\star_{1},Y^\star_1)\}\\ m_{\tilde{f}_e}(\rho_{e}|X^\star_{e},Y^\star_e) =& \mathbb{E}_{\mu}\left[|f(X^s,g^a(\rho_{e-1},X^a))-\rho_e(X^s,X^a \mid X^\star_{e},Y^\star_e)|\right]\\ g^{a}(\rho,x^a) =& (1-\rho)(x^a+3) + \rho(x^a-3) \end{align} For simplicity, we shall view the latent variables as having no effect on the true risk score $f$; this corresponds to the scenario where, if no interventions are made, it is possible to fully specify $f$ with the data we observe. For the purpose of the counterexample it is reasonable to do this, as model performance only requires $m_{\tilde{f}_e}$, which has no dependence on latent covariates.
We also note that, due to the omission of latent covariates, $X_{e}(0) = (X^s_e(0),X^a_e(0)) \sim N_{2}(0,I_{2})$; this is used to generate (through the statistical software R) an initial training data set at epoch 0, of size $n = 100$, which is summarised below: \begin{center} \begin{tabular}{| c |c | c | c|} \hline index & $\mathbf{(X^{\star}_0)_{\cdot 1}}$ & $\mathbf{(X^{\star}_0)_{\cdot 2}}$ & $\mathbf{Y^\star_0}$ \\ \hline 1 & 1.185 & 1.272 & 1 \\ \hline 2 & 0.881 & -0.995 & 0 \\ \hline 3 & 0.122 & -0.956 & 0 \\ \hline \multicolumn{4}{|c|}{$\vdots$} \\ \hline 98 & -0.826 & 1.779 & 1 \\ \hline 99 & 0.853 & 0.151 & 1 \\ \hline 100 & 0.177 & 0.805 & 1 \\ \hline \end{tabular} \end{center} This training data can then be input into $\rho_0$ to give the following function: \begin{equation}\label{eq:p0initial} \rho_{0}(x^s,x^a \mid X^\star_{0},Y^\star_0) = \begin{cases} 0.733 & x^s + x^a > 0 \\ 0.200 & x^s + x^a \leq 0 \end{cases} \end{equation} When intervening on any covariates at epoch 1, the function given in equation \eqref{eq:p0initial} will be used to produce $X_1(1)$ and subsequently $Y_1$. We now consider $\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right]$, which we approximate using a Monte Carlo estimate with 1000 samples. However, $m_{\tilde{f}_0}(\rho_{0}|X^\star_{0},Y^\star_0)$ also requires approximation, and so a Monte Carlo estimate with the same number of samples is also used for this function. The procedure is as follows: \begin{enumerate} \item For $i$ from 1 to 1000: \begin{enumerate} \item Obtain a training data set, $(X^{\star}_0,Y^{\star}_0)_i$, by taking $n$ samples of $(X_0(0),Y_0)$. \item Use this training data set to obtain a $(\rho_0)_i$ of the form given in equation \eqref{eq:p0initial}. \item For $j$ from 1 to 1000: \begin{enumerate} \item Sample $(x^s,x^a)_j \sim X_0(0)$.
\end{enumerate} \item $m_{\tilde{f}_0}(\rho_{0}|(X^\star_{0},Y^\star_0)_i) \approx \frac{1}{1000}\sum_{j=1}^{1000}|f((x^s,x^a)_j) - \rho_0((x^s,x^a)_j \mid (X^\star_{0},Y^\star_0)_i)|$ \end{enumerate} \item $\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] \approx \frac{1}{1000}\sum_{i=1}^{1000}m_{\tilde{f}_0}(\rho_{0}|(X^\star_{0},Y^\star_0)_i)$ \end{enumerate} With this in mind, we give the following approximation: $\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] \approx 0.124$. If we assert that interventions never take place, then we can use the same procedure described above to obtain $\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{1}|X^\star_0,Y^\star_0)\right] \approx 0.056$. So here we can clearly see that, in the setting where interventions are never made, $\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] > \mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{1}|X^\star_0,Y^\star_0)\right]$, and so the model closer to the truth is the logistic regression model at epoch 1. If agents were allowed to make interventions (based on \eqref{eq:p0initial}), however, we would consider $\mathbb{E}_{(X_1^\star,Y_1^\star)}\left[m_{\tilde{f}_1}(\rho_{1}|X^\star_1,Y^\star_1)\right] \approx 0.197$ instead. Now, since $\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] < \mathbb{E}_{(X_1^\star,Y_1^\star)}\left[m_{\tilde{f}_1}(\rho_{1}|X^\star_1,Y^\star_1)\right]$, we would come to the incorrect conclusion that the model closer to the truth is the model used at epoch 0.
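The nested Monte Carlo procedure above can be sketched in code. This is a minimal reimplementation of the no-intervention case, i.e.\ the estimate of $\mathbb{E}_{(X_0^\star,Y_0^\star)}[m_{\tilde{f}_0}(\rho_0|X^\star_0,Y^\star_0)]$; the outer and inner sample counts are reduced from 1000 for speed and the seed is arbitrary, so the result only roughly reproduces the value $\approx 0.124$ quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(xs, xa):
    # True risk score f(x^s, x^a) = (1 + exp(-x^s - x^a))^{-1}
    return 1.0 / (1.0 + np.exp(-xs - xa))

def fit_rho0(Xs, Xa, Y):
    # Two-level estimator rho_0: mean outcome on each side of x^s + x^a = 0
    pos = Xs + Xa > 0
    p_hi = Y[pos].mean() if pos.any() else 0.5
    p_lo = Y[~pos].mean() if (~pos).any() else 0.5
    return lambda xs, xa: np.where(xs + xa > 0, p_hi, p_lo)

n, outer, inner = 100, 200, 500
m_vals = []
for _ in range(outer):
    # (a)-(b): draw a training set of size n and fit rho_0
    Xs, Xa = rng.standard_normal(n), rng.standard_normal(n)
    Y = rng.random(n) < f(Xs, Xa)
    rho0 = fit_rho0(Xs, Xa, Y)
    # (c): inner Monte Carlo estimate of m(rho_0)
    xs, xa = rng.standard_normal(inner), rng.standard_normal(inner)
    m_vals.append(np.mean(np.abs(f(xs, xa) - rho0(xs, xa))))

est = float(np.mean(m_vals))
print(est)  # typically close to the ~0.124 quoted above
```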
Consequently we can state that, given the setup provided in section 3.1, \begin{align} &\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] > \mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{1}|X^\star_0,Y^\star_0)\right] \centernot\implies \nonumber\\ &\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] > \mathbb{E}_{(X_1^\star,Y_1^\star)}\left[m_{\tilde{f}_1}(\rho_{1}|X^\star_1,Y^\star_1)\right] \end{align} Additionally, we show that for this example: \begin{align} &\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] > \mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{1}|X^\star_0,Y^\star_0)\right] \centernot\implies \nonumber\\ &\mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right] > \mathbb{E}_{(X_1^\star,Y_1^\star)}\left[m_{\tilde{f}_0}(\rho_{1}|X^\star_1,Y^\star_1)\right] \label{eq:estimatableinequalities} \end{align} as $\mathbb{E}_{(X_1^\star,Y_1^\star)}\left[m_{\tilde{f}_0}(\rho_{1}|X^\star_1,Y^\star_1)\right] \approx 0.215 > 0.124 \approx \mathbb{E}_{(X_0^\star,Y_0^\star)}\left[m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)\right]$. This statement is given here because for $\tilde{f}_0$, and therefore $m_{\tilde{f}_0}$, it is possible to gain estimates through a holdout test data set. Whilst the comparison is not between a risk score ($\rho_e$) and the function it is trying to estimate ($\tilde{f}_e$), the effect of deteriorating performance as epochs increase is still captured. Going further, if stakeholders were implementing naive model updating, they would presumably assume that $\rho_e$ is estimating $\tilde{f}_0$ for all epochs, as the belief is that interventions do not affect the model.
Therefore, comparison with $\tilde{f}_0$ will heighten the impression to stakeholders that using an updated model structure is causing performance to deteriorate, especially for epoch 0 to epoch 1, where for this comparison $\rho_0$ is actually estimating $\tilde{f}_0$. We expect that, from a stakeholder's view, comparison (using estimates) between the two models at successive epochs usually leads to the inequality $m_{\tilde{f}_0}(\rho_{e-1} \mid X^\star_{e-1},Y^\star_{e-1}) < m_{\tilde{f}_0}(\rho_e \mid X^\star_e,Y^\star_e)$, and therefore to the conclusion that the new model leads to worse performance. We advise that a conclusion is only reached after further comparison is done between $m_{\tilde{f}_0}(\rho_{e-1} \mid X^\star_e,Y^\star_e)$ and $m_{\tilde{f}_0}(\rho_{e} \mid X^\star_e,Y^\star_e)$, as this gives an indication of whether the drop in performance is due to the model structure or the intervention effect. Finally, we advise caution when considering the effect of latent variables when estimating $m_{\tilde{f}_0}(\rho_{e}|X^\star_e,Y^\star_e)$. This is due to the fact that when holdout test data is used to obtain an estimate, it is an estimate of $f$ rather than an estimate of $\tilde{f}_0$. If the latent variables have a small influence on $f$ then $f \approx \tilde{f}_0$ and we can make inferences as shown above, but if latent variables have a large influence on $f$ then our comparison is not based on $m_{\tilde{f}_0}$ but instead on $m_f$. This creates a problem, as how well we perceive our model's performance can then be determined largely by how well a model arbitrarily captures the latent covariate information using just the set and actionable covariates. It therefore becomes substantially more difficult to determine whether the cause of a model's poor performance is the model itself, the intervention effect, or insufficient data.
As a general rule, however, large values of $m_{\tilde{f}_0}(\rho_{0}|X^\star_0,Y^\star_0)$ should indicate that either the initial model is very poor or that there is insufficient data; in either case, careful consideration of what could possibly influence the underlying mechanism should be made before a risk score is built and given to agents, to ensure that latent variables affect the model as little as possible. \subsection{Proof of theorem~\ref{thm:naive_updating_behaviour}} \label{supp_sec:thm1proof} If $h'(z_0) \leq -1$ then the single fixed point of $h$ is unstable and $\rho_e$ cannot converge to it unless it was always equal to $z_0$. There can be no other $z$ with $h(z)=z_0$ since $h'(z)<0$ by assumption. Since $\rho_e \in [0,1]$ and $h'(z)<0$, $\rho_e$ must tend toward a stable oscillation between two values, or converge to a single value. If the bounds on partial derivatives hold, then from the triangle and Cauchy-Schwarz inequalities, for $z \in R$ \begin{align} |h'(z)| &\leq \mathbb{E}_{X^{\ell}}\left[ \sum_{i}^{p^a} |\delta^{g^a}_i \delta^{f^a}_i| + \sum_{i}^{p^{\ell}} |\delta^{g^{\ell}}_i \delta^{f^{\ell}}_i| \right] \nonumber \\ &= \sum_{i}^{p^a} |\delta^{g^a}_i| \mathbb{E}_{X^{\ell}}\left[ | \delta^{f^a}_i|\right] + \sum_{i}^{p^{\ell}} \mathbb{E}_{X^{\ell}}\left[|\delta^{g^{\ell}}_i \delta^{f^{\ell}}_i| \right] \nonumber \\ &\leq \sqrt{\sum_{i}^{p^a} (\delta^{g^a}_i)^2 \sum_i^{p^a} \mathbb{E}_{X^{\ell}}\left[ |\delta^{f^a}_i| \right]^2} \nonumber \\ &\phantom{\leq} + \sqrt{\sum_{i}^{p^{\ell}} \mathbb{E}_{X^{\ell}}\left[\left(\delta^{g^{\ell}}_i \right)^2\right] \sum_{i}^{p^{\ell}} \mathbb{E}_{X^{\ell}}\left[ \left(\delta^{f^{\ell}}_i \right)^2 \right]} \nonumber \\ &\leq \sqrt{k_1 k_3} + \sqrt{k_2 k_4} < 1 \end{align} so the map $h:\rho_e \to \rho_{e+1}$ is a contraction, and the convergence of the sequence $(\rho_e)$ follows from the Banach fixed-point theorem, as long as $\rho_e \in R$ for some value of $e$.
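The contraction step of the argument can be illustrated numerically. The map below is an invented toy example, not the $h$ of the main text: it is decreasing with $\sup_z|h'(z)| \le 1/8 < 1$, so iterates started from different points collapse onto the unique fixed point at a geometric rate, exactly as the Banach fixed-point theorem guarantees:

```python
import math

def h(z):
    # Toy decreasing update map; |h'(z)| = 0.5 * p(1-p) <= 1/8 < 1,
    # where p = h(z), so h is a contraction on the whole line.
    return 1.0 / (1.0 + math.exp(0.5 * z))

def iterate(z, n):
    for _ in range(n):
        z = h(z)
    return z

# Two different starting points converge to the same fixed point z* = h(z*)
a, b = iterate(0.0, 60), iterate(1.0, 60)
print(a, b)
```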
\iffalse \subsection{Temporary: successive adjuvancy with $dim(x^a)>1$} \begin{theorem} Let $x^s \in \mathbb{R}^{p^s}$, $x^a \in \mathbb{R}^{p^a}$, and $f$ and $g^a$ be continuous functions \begin{align} f:\mathbb{R}^{p^s} \times \mathbb{R}^{p^a} &\to (0,1) \nonumber \\ g^a:(0,1) \times \mathbb{R}^{p^s} &\to\mathbb{R}^{p^s} \nonumber \end{align} where $f$ and has limits $1$ and $0$ as any element of $x^s$, $x^a$ approaches $\infty$ or $-\infty$ respectively (holding other elements constant). Suppose that there is a unique $\rho_{eq} \in (0,1)$ such that \begin{align} \forall x &: g^a(\rho_{eq},x) \equiv x \nonumber \\ \forall x,\rho<\rho_{eq} &: g^a(\rho,x)>x \nonumber \\ \forall x,\rho>\rho_{eq} &: g^a(\rho,x)<x \nonumber \end{align} Define $\rho_{-1}=\rho_0=f(x^s,x^a)$, $g^a_0(\rho_{-1},x^a)=x^a$ and for $e \geq 0$ define \begin{align} g^a_{e+1}(\rho_e,x^a) &= g^a(\rho_e,g^a_e(\rho_{e-1},x^a)) \label{eq:g_evolution} \\ \rho_{e+1}(x^s,x^a) &= f\left(x^s,g^a_{e+1}(\rho_{e},x^a)\right) \label{eq:rho_evolution} \end{align} Then if $\lim_{e \to \infty} g^e(\rho_e,x^a)$ exists and is finite, we have $\lim_{e \to \infty} \rho_{e}(x^s,x^a)= \rho_{eq}$. \end{theorem} \begin{proof} Set $x_0=x^a$ and $x_e = g^a_e(\rho_{e-1},x^a)$. Given~\eqref{eq:rho_evolution} at $e$ and~\eqref{eq:g_evolution} at $e+1$, we have \begin{align} x_{e+1} &= g^a\left(f(x^s,x_e),x_e \right) \nonumber \\ &\triangleq k(x_e) \end{align} Let $c_{eq}=\{x:f(x^s,x)=\rho_{eq}\}$. This is non-empty from the limit properties and continuity of $f$. Now for any $x \in c_{eq}$ we have \begin{align} k(x) &= g^a\left(f(x^s,x),x\right) \nonumber \\ &= g^a(\rho_{eq},x) \nonumber \\ &= x \nonumber \end{align} For $x \notin c_{eq}$ we have either $f(x^s,x) >\rho_{eq}$ and $k(x) &= g^a(f(x^s,x),x) < x$ or $f(x^s,x) < \rho_{eq}$ and $k(x)>x$. Thus $c_{eq}$ is the set of fixed points of $k$. If $x_{e} \to x$ with $\infty<x<\infty$, then $k(x_{e}) \to x$. 
Since $f$ and $g$ are continuous, $k$ is as well, and $x$ must be a fixed point of $k$. Since $g^a$ is continuous, we must have $\rho_e \to \rho_{eq}$. \end{proof} \begin{corollary} Under various conditions on the partial derivatives of $k$, we can guarantee that $x_e$ will converge. \end{corollary} \fi \subsection{Proof of theorem~\ref{thm:successive_adjuvancy} in main text} \label{supp_sec:thm2proof} \begin{proof} The function $f^{-1}(x)$ is well-defined and one-to-one given assumptions~\ref{asm:univ}, \ref{asm:partial} from the theorem statement. Now \begin{align} \rho_{e+1} &= \rho_{e+1}(x^s,x^a) \nonumber \\ &= f\left(x^s,g^a_{e+1}(\rho_e,x^a)\right) \label{eq:rho_ge} \\ &= f\left(x^s,g^a(\rho_e,g^a_e(\rho_{e-1},x^a))\right) \nonumber \\ &= f\left(x^s,g^a(\rho_e,f^{-1}(\rho_e))\right) \nonumber \\ &\triangleq h_{2}(\rho_e) \label{eq:h2def} \end{align} and we have \begin{align} h_2(\rho_{eq}) &= f\left(x^s, g^a(\rho_{eq},f^{-1}(\rho_{eq}))\right) \nonumber \\ &= f\left(x^s, f^{-1}(\rho_{eq})\right) \nonumber \\ &= \rho_{eq} \label{eq:rho_fixed_point} \end{align} so $\rho_{eq}$ is a fixed point of the recursion for $\rho_{e}$. It must be the only fixed point: by definition there is only one value of $\rho$ with $g^a(\rho,x)=x$, and hence for $\rho \neq \rho_{eq}$ we have $g^a(\rho,f^{-1}(\rho)) \neq f^{-1}(\rho)$. But from assumption~\ref{asm:fpartial} in the theorem statement this must mean that $h_2(\rho)=f(x^s,g^a(\rho,f^{-1}(\rho))) \neq f(x^s,f^{-1}(\rho))=\rho$. Given the condition on the derivative of $h_2(\rho)$ for $\rho \in I$, the first result follows from the Banach fixed-point theorem. The second is immediate as the LHS is simply $\rho_e(x^s,x^a)$. The third follows from an inversion of equation~\eqref{eq:rho_ge}.
\end{proof} \subsection{Counterexample showing failure of naive updating to generally solve constrained optimisation problem} \label{supp_sec:nonoptimal} For this counterexample, we do not need to consider latent covariates, and will assume they do not exist. Under the setting in section~\ref{sec:general} in the main text, if $\rho_n$ converges to $\rho_{\infty}(x^s,x^a)$ for some $x^s,x^a$ under naive updating, then we have \begin{equation} \rho_{\infty}(x^s,x^a)=h(\rho_{\infty}(x^s,x^a))=f(g(\rho_{\infty}(x^s,x^a),x^a),x^s) \label{eq:rhoinfinity} \end{equation} Suppose $x^s$ and $x^a$ each have dimension 1, and consider the example: \begin{align} f(x^a,x^s) &= \textrm{logit}(x^a + x^s) = \frac{1}{1+\exp\left(-(x^a + x^s)\right)} \nonumber \\ g(\rho,x^a) &= x^a - \log(1+\rho) \nonumber \\ c^a(x) &= x \nonumber \end{align} For a given function $\rho$, the objective and cost are, respectively \begin{align} \textrm{obj}\{\rho\} &= E\left\{(1+(1+\rho)\exp(-(X^s+X^a)))^{-1}\right\}\nonumber \\ \textrm{cost}\{\rho\} &= E\left\{ \log(1+\rho)\right\} \end{align} Using an oracle predictor of $Y|X$, as in the previous section, $\rho_n$ converges to the fixed point of the recursion $z \to f(g(z,x^a),x^s)$, which is \begin{equation} \rho_{\infty}(x^s,x^a) = \frac{1}{2}\left(\sqrt{\left(e^{x^s+x^a}+1\right)^2 + 4 e^{x^s+x^a}}- \left(e^{x^s+x^a}+1\right) \right) \end{equation} To see why this is not optimal, suppose $X^a,X^s$ have a discrete distribution taking either of the values $(0,-1)$, $(0,1)$ with probability $1/2$. Then \begin{align} \textrm{cost}\{\rho_{\infty}\} &= \frac{\log(2)}{2} \approx 0.346 \nonumber \\ \textrm{obj}\{\rho_{\infty}\} &= \frac{1+e}{1+e+ \sqrt{1+6e + e^2}} \approx 0.428 \nonumber \end{align} However, consider some $\rho_{0}$ with $\rho_0(0,-1)=0$, $\rho_0(0,1)=1$.
Now \begin{align} \textrm{cost}\{\rho_{0}\} &= \frac{\log(2)}{2} = \textrm{cost}\{\rho_{\infty}\} \nonumber \\ \textrm{obj}\{\rho_0\} &= \frac{1}{2}\left(\frac{1}{1+e} + \frac{e}{2+e}\right) \approx 0.423 < \textrm{obj}\{\rho_{\infty}\} \end{align} \subsection{Simple example of updating leading to oscillation} \label{supp_sec:oscillation} Define $g(\rho,x^a)$ as above, and instead define \begin{equation} f(x^a,x^s) = \textrm{logit}\left(k(x^a+x^s)\right) \label{eq:fdef1} \end{equation} As usual, we presume that to estimate $\rho$, we regress $Y$ on $X^s_0$, $X^a_0$, and we do it accurately enough to presume $\rho$ is an oracle. Now \begin{align} h(x) &= \frac{1}{1+ (1+x)^k \exp \left(-k(x^s+x^a)\right)} \nonumber \\ h'(x) &= -k\frac{e^{k(x^s+x^a)}(1+x)^{k-1}}{\left( e^{k(x^s+x^a)}+(1+x)^k \right)^2} \end{align} Consider a setting where $x^s=x^a=0$ and $k=8$. Now $h(0)=1/2>0$ and $h(1/5) \approx 0.189 < 1/5$. For $x \in (0,1)$ we have $h'(x)<0$, so the equation $h(x)=x$ has a single solution in $(0,1/5)$. But on $(0,1/5)$, we have $h'(x)< -1$. So if $x_0$ is the unique root of $h(x)-x$ on $x \in (0,1)$, then $h'(x_0)<-1$. Now, as long as $\rho_0(x^s,x^a)$ is not exactly the value of $x$ for which $h(x)=x$, if we update $\rho_n$ using $h$, it can never converge, as the fixed point of the map $h$ is unstable. Conceptually, although no intervention changes $x^a$ very much, the function $f$ is very sensitive to small changes in $x^a$ when $k=8$, so a small change in $x^a$ will necessarily cause a larger change in $f(x^a,x^s)$ when $\rho$ is near the fixed point of $h$.
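These claims can be checked numerically, using the stated $h$ with $x^s=x^a=0$ and $k=8$ (the bisection depth and iteration counts below are arbitrary): the fixed point $x_0$ lies in $(0,1/5)$ with $h'(x_0)<-1$, and the iteration settles into a stable 2-cycle instead of converging.

```python
def h(x, k=8):
    # h(x) = 1 / (1 + (1+x)^k * exp(-k(x^s + x^a))) with x^s = x^a = 0
    return 1.0 / (1.0 + (1.0 + x) ** k)

# Locate the unique fixed point x0 of h on (0, 1) by bisection on h(x) - x,
# which is strictly decreasing since h'(x) < 0
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if h(mid) - mid > 0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)

# Central-difference derivative at the fixed point: expect h'(x0) < -1
eps = 1e-7
slope = (h(x0 + eps) - h(x0 - eps)) / (2 * eps)

# Iterating from a nearby start never returns to x0; it approaches a 2-cycle
x = 0.3
for _ in range(2000):
    x = h(x)
a, b = x, h(x)
print(x0, slope, a, b)
```

The two cycle values $a$ and $b=h(a)$ remain well separated, which is the oscillation described above.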
Any of the three strategies can be used to avoid the naive updating problem if they enable an unbiased estimate of \begin{equation} \mathbb{E} \left[f_e\left(x^s,x^a,X^{\ell}\right)\right] \label{eq:critical_quantity} \end{equation} to be obtained, where the expectation is over $X^{\ell}$ either before or after intervention. The expectation~\eqref{eq:critical_quantity} can be recognised as the quantity for which $\rho_e$ is treated as an estimator. More frequent covariate observation as per section~\ref{sec:solution_modelling} allows this by enabling observation of $X_e(1)$, so such an unbiased estimate may be obtained by regression of $Y_e$ on observed $X_e(1)$. The strategy in section~\ref{sec:solution_holdout} defines a hold-out subset of samples $X_e^{\star}$, $Y_e^{\star}$ for which $X_e^{\star}(1)=X_e^{\star}(0)$, so an unbiased estimate of~\eqref{eq:critical_quantity} can be obtained by regression of $Y_e^{\star}$ on (observed) $X_{e}^{\star}(0)$. Finally, the strategy in section~\ref{sec:solution_control} specifies $g_e^a$ and $g_e^{\ell}$, so an unbiased estimate of~\eqref{eq:critical_quantity} can be made by regressing $Y_e$ on $X_e^s(0)$ and $g_e^a(\rho_e,X_e^a(0))$. Although all three solutions avoid the problems of naive updating, they `solve' somewhat different problems and require different experimental designs. The class of strategies described in section~\ref{sec:solution_modelling} (a range of modelling approaches generally requiring more frequent covariate observation) can solve the constrained optimisation problem in section~\ref{sec:aim} over $\rho$. The strategy described in section~\ref{sec:solution_holdout} (retention of a `hold-out' set on which no interventions are made) simply enables unbiased observation of $f_e$. The strategy described in section~\ref{sec:solution_control} (explicit control of interventions $g^a$, $g^{\ell}$) solves the constrained optimisation problem over $g^a$, $g^{\ell}$.
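The bias these strategies remove can be seen in a small simulation. All functional forms below are invented for illustration (a linear outcome $Y = X^s + X^a(1) + \varepsilon$ with an oracle score driving the intervention $X^a(1)=X^a(0)-\rho$): regressing $Y_e$ on the pre-intervention covariates, as naive updating implicitly does, attenuates the covariate effects, while regressing on the post-intervention $X_e(1)$ recovers them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

Xs, Xa0 = rng.standard_normal(n), rng.standard_normal(n)
rho = expit(Xs + Xa0)            # oracle risk score at t = 0
Xa1 = Xa0 - rho                  # intervention lowers the actionable covariate
Y = Xs + Xa1 + 0.1 * rng.standard_normal(n)  # true coefficients are (1, 1)

def ols(X, y):
    # Least-squares fit with intercept; returns the slope coefficients
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

naive = ols(np.column_stack([Xs, Xa0]), Y)      # regress on pre-intervention X(0)
corrected = ols(np.column_stack([Xs, Xa1]), Y)  # regress on post-intervention X(1)
print(naive, corrected)  # naive slopes are attenuated; corrected ones are near 1
```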
However, the solutions may be compared with the aim of recommending which (if any) might be most appropriate in a given circumstance. The strategy in section~\ref{sec:solution_modelling} should be used if possible, as it enables the greatest flexibility in approach. The strategy in section~\ref{sec:solution_control} should be used alternatively or additionally if appropriate. The strategy in section~\ref{sec:solution_holdout} is advisable as a general approach if covariates cannot be observed more frequently and interventions cannot be controlled (that is, if neither of the other strategies is actionable). \subsection{Illustration of solutions} \label{supp_sec:solution_illustration} We consider how each strategy may appear in the context of the setting described in Supplementary section~\ref{supp_sec:realistic_exposition}. The strategy in section~\ref{sec:solution_modelling} would comprise re-observing covariates in February ($t=1$) after interventions are made. Under this closer observation (allowing inference of $g^a$ and $\mathbb{E}(f)$), $\rho_e$ could be set so as to optimise healthcare provision. The strategy in section~\ref{sec:solution_holdout} would require nominating a random sample of the population for which scores would not be calculated, and hence on which no intervention could be made on the basis of a risk score. This would enable observation of `native' covariate effects on risk. The strategy in section~\ref{sec:solution_control} would implement specific interventions: for instance, `if $\rho_e>50\%$, stop drug $X$'. Interventions could then be tuned to optimise healthcare provision. \section{Open problems} \label{supp_sec:open} We propose the following short list of open problems in this area. \begin{enumerate} \item Determine a framework to modulate both $g^{\ell}$ and $g^a$ with the aim of solving the constrained optimisation problem in section~\ref{sec:aim} in the main text.
\item Determine the dynamics and consequences of other model-updating strategies. What happens if training data is aggregated at each step, rather than only the most recent data being used? \item Derive results of successive adjuvancy in more general circumstances. \item How do the dynamics of the model change when assumptions differ? Can $f$, $g^{\ell}$ and $g^a$ be extended to be random-valued, and possibly agglomerated into a single intervention function? \item How can assumptions be changed to approximate more general machine learning settings? \end{enumerate} \bibliographystyle{plain}
\section{Introduction} Graphene is a unique material since its electron motion is governed by a special equation similar to the relativistic massless Dirac equation, whereas nonrelativistic equations are the norm in condensed matter physics.~\cite{novoselov05,zhang05} The electron motion is modified by the electron-phonon (el-ph) and electron-light interactions, which are fundamental issues in discussing the transport,~\cite{novoselov05,zhang05} electronic,~\cite{bostwick07} and optical properties~\cite{ferrari06,yan07} of graphene. The goal of this paper is to show that an asymmetry of the Raman spectra for the $\Gamma$ point longitudinal and transverse optical phonon (LO and TO) modes, both of which are known as the Raman G band, appears near the edge of graphene. There are two fundamental orientations for the edge of graphene, zigzag and armchair edges, and a general edge shape is considered to be a mixture of them.~\cite{kosynkin09,jiao09} The asymmetry is useful in identifying the orientation of the edge of graphene by Raman spectroscopy. In Raman spectroscopy, we irradiate a sample with laser light and observe the intensity of the inelastically scattered light. The energy difference between the incident laser light and the inelastically scattered light corresponds, by energy conservation, to the energy of a Raman-active phonon mode. The el-ph interaction is essential for the Raman process.
Further, the el-ph interaction can modify the energy and lifetime of the phonon mode, which is known as the Kohn anomaly.~\cite{kohn59} Evidence for Kohn anomalies is found in the phonon dispersions of carbon nanotubes,~\cite{dubay02} graphene,~\cite{ando06-ka,lazzeri06prl} and graphite.~\cite{piscanec04} By examining the Kohn anomaly for the G band of carbon nanotubes,~\cite{farhat07} features of the el-ph interaction, such as the chirality dependence of the Kohn anomaly, have been clarified.~\cite{sasaki08_curvat} In this paper, we calculate the el-ph matrix elements relevant to the Raman intensity and the Kohn anomaly of the G band of graphene within the effective-mass approximation, by including the effects of the edge of graphene and of the polarization direction of the incident laser (and scattered) light. This paper is organized as follows. In \S~\ref{sec:hamiltonian}, we show the Hamiltonian including the el-ph interaction with respect to the $\Gamma$ point optical phonon modes and the electron-light interaction. In \S~\ref{sec:zig} and \S~\ref{sec:arm}, we calculate the matrix elements for the el-ph and electron-light interactions by taking into account the presence of the zigzag and armchair edges, respectively. The self-energy of the LO mode is estimated in \S~\ref{sec:selfene}, and the phonon self-energy for a general edge shape is discussed. Finally, we propose two models representing the electronic states in the interior of a graphene sample and calculate the self-energies for those models in \S~\ref{sec:bulk}. In \S~\ref{sec:discussion}, we discuss the relationship between our result and experimental results, and summarize the results.
\section{Hamiltonian}\label{sec:hamiltonian} Let $\Psi_{\rm K}({\bf r})$ [$\Psi_{\rm K^\prime}({\bf r})$] be the wave function for an electron near the K [K$'$] point; then the energy eigenvalue equation for an electron near the Fermi energy of graphene is written as \begin{align} {\hat H} \begin{pmatrix} \Psi_{\rm K}({\bf r}) \cr \Psi_{\rm K^\prime}({\bf r}) \end{pmatrix} = E \begin{pmatrix} \Psi_{\rm K}({\bf r}) \cr \Psi_{\rm K^\prime}({\bf r}) \end{pmatrix}. \end{align} The wave function $\Psi_{\rm K}({\bf r})$ [$\Psi_{\rm K^\prime}({\bf r})$] has a two-component structure, which results from the fact that the hexagonal unit cell contains two carbon atoms [A atom ($\bullet$) and B atom ($\circ$) in Fig.~\ref{fig:zigLOTO}]. The total Hamiltonian ${\hat H}$ including the el-ph interaction with respect to the $\Gamma$ point LO and TO modes, and the electron-light interaction, is given by~\cite{sasaki08ptps} \begin{align} {\hat H} = v_{\rm F} \begin{pmatrix} \mbox{\boldmath $\sigma$} \cdot ({\bf {\hat p}}+{\bf A}^{\rm q}-e{\bf A}) & 0 \cr 0 & \mbox{\boldmath $\sigma$}' \cdot ({\bf {\hat p}}-{\bf A}^{\rm q}-e{\bf A}) \end{pmatrix}. \label{eq:HKK'} \end{align} Here $v_{\rm F}$ is the Fermi velocity, the momentum operator is ${\bf {\hat p}}=-i\hbar(\partial_x,\partial_y)$, and $\mbox{\boldmath $\sigma$} \equiv (\sigma_x,\sigma_y)$ and $\mbox{\boldmath $\sigma$}' \equiv (-\sigma_x,\sigma_y)$, where $\sigma_x$, $\sigma_y$ and $\sigma_z$ are Pauli matrices. We take the $x$ and $y$ axes as shown by the inset in Fig.~\ref{fig:zigLOTO}(a). The electromagnetic gauge field ${\bf A}$ enters into the Hamiltonian through the substitution ${\bf {\hat p}} \to {\bf {\hat p}}-e{\bf A}$, where $-e$ is the charge of the electron. A uniform field ${\bf A}$ can represent the incident laser light and the scattered light in the Raman process.
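As a quick numerical check of the unperturbed K-point block (setting ${\bf A}^{\rm q}={\bf A}=0$ and $\hbar=v_{\rm F}=1$ purely for illustration), $\mbox{\boldmath $\sigma$}\cdot{\bf p}$ has the expected Dirac-cone eigenvalues $\pm|{\bf p}|$:

```python
import numpy as np

# Pauli matrices sigma_x and sigma_y
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

px, py = 0.3, -0.7                 # arbitrary momentum components
H = px * sx + py * sy              # v_F * sigma . p with v_F = 1
evals = np.sort(np.linalg.eigvalsh(H))
print(evals)  # approximately [-|p|, +|p|]
```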
The el-ph interaction is represented by the deformation-induced gauge field ${\bf A}^{\rm q}=(A^{\rm q}_{x},A^{\rm q}_{y})$.~\cite{sasaki08ptps} It can be shown that $A^{\rm q}_{x}$ and $A^{\rm q}_{y}$ are expressed in terms of a change of the nearest-neighbor hopping integral from an average value $-\gamma_0$, $\delta \gamma_{0,a}$, as~\cite{kane97,sasaki05,katsnelson08} \begin{align} \begin{split} & v_{\rm F} A^{\rm q}_x = \delta \gamma_{0,1} - \frac{1}{2} \left( \delta \gamma_{0,2}+ \delta \gamma_{0,3} \right), \\ & v_{\rm F} A^{\rm q}_y = \frac{\sqrt{3}}{2} \left( \delta \gamma_{0,2}- \delta \gamma_{0,3} \right). \end{split} \label{eq:gauge} \end{align} Here $a$ $(=1,2,3)$ for $\delta \gamma_{0,a}$ denotes the direction of the bond (see the inset of Fig.~\ref{fig:zigLOTO}), and $\delta \gamma_{0,a}$ is caused by atomic displacements by the $\Gamma$ point optical phonon modes. Note that ${\bf A}^{\rm q}$ is uniform for the $\Gamma$ point ${\bf q}=0$ phonons, while ${\bf A}^{\rm q}$ depends on the position ${\bf r}$ as ${\bf A}^{\rm q}({\bf r})$ for phonons with ${\bf q} \ne 0$.~\cite{sasaki08_curvat} Although an additional deformation-induced gauge field due to a local modulation of the hopping integral originating from a defect appears in a realistic situation, we ignore it in eq.~(\ref{eq:HKK'}) for simplicity. \section{Zigzag Edge}\label{sec:zig} First, we calculate the matrix element relevant to the Raman intensity near the zigzag edge. The scattering or reflection of an electron at the zigzag edge is intravalley scattering,~\cite{pimenta07} and therefore we can consider the K and K$'$ points separately. Let us consider the electrons near the K point. The Hamiltonian is given by \begin{align} {\hat H}_{\rm K}=v_{\rm F} \mbox{\boldmath $\sigma$} \cdot \left( {\bf {\hat p}}+{\bf A}^{\rm q}-e{\bf A} \right). \label{eq:HK} \end{align} We specify the deformation-induced gauge field ${\bf A}^{\rm q}$ for the LO and TO modes near the zigzag edge. 
The vibrations of carbon atoms corresponding to the $\Gamma$ point LO and TO modes are shown in Figs.~\ref{fig:zigLOTO}(a) and~\ref{fig:zigLOTO}(b). By assuming that the perturbation $\delta \gamma_{0,a}$ is proportional to the change in the bond length, we have $\delta \gamma_{0,1} = 0$ and $\delta \gamma_{0,2}=-\delta \gamma_{0,3}$ for the LO mode, while $\delta \gamma_{0,1} = -2 \delta \gamma_{0,2}$ and $\delta \gamma_{0,2}=\delta \gamma_{0,3}$ for the TO mode. Using eq.~(\ref{eq:gauge}), we see that ${\bf A}^{\rm q}$ for the LO mode is written as ${\bf A}^{\rm q}_{\rm LO}=(0,A_y^{\rm q})$ with $v_{\rm F}A_y^{\rm q}=\sqrt{3}\delta\gamma_{0,2}$, while ${\bf A}^{\rm q}$ for the TO mode is written as ${\bf A}^{\rm q}_{\rm TO}=(A_x^{\rm q},0)$ with $v_{\rm F}A_x^{\rm q}=-3\delta\gamma_{0,2}$. Note that the direction of ${\bf A}^{\rm q}$ for the LO (TO) mode is perpendicular (parallel) to the zigzag edge. The direction of ${\bf A}^{\rm q}$ is perpendicular to the direction of atom displacement.~\cite{dubay02,ishikawa06} Thus, the el-ph interaction in eq.~(\ref{eq:HK}), $H^{\rm zig}_{\rm LO/TO}\equiv v_{\rm F} \mbox{\boldmath $\sigma$} \cdot {\bf A}^{\rm q}_{\rm LO/TO}$, is rewritten as \begin{align} \begin{split} & H^{\rm zig}_{\rm LO} = v_{\rm F} A^{\rm q}_y \sigma_y, \\ & H^{\rm zig}_{\rm TO} = v_{\rm F} A^{\rm q}_x \sigma_x, \end{split} \label{eq:Hzig} \end{align} for the LO and TO modes, respectively. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{zigLOTO.eps} \end{center} \caption{The displacement vectors for the LO and TO modes are shown in (a) and (b), respectively. The displacement vectors of the LO (TO) mode are parallel (perpendicular) to the zigzag edge. The direction of the deformation-induced gauge field ${\bf A}^{\rm q}$ is perpendicular to the direction of atom displacement. 
} \label{fig:zigLOTO} \end{figure} The el-ph matrix element is given as the expectation value of the el-ph interaction with respect to the energy eigenstate for the unperturbed Hamiltonian, $H_{\rm K}^0=v_{\rm F} \mbox{\boldmath $\sigma$} \cdot {\bf {\hat p}}$. The energy eigenstate with wave vector ${\bf k}$ in the conduction energy band is written in terms of the plane wave $e^{i{\bf k}\cdot {\bf r}}$ and the Bloch function $\Phi^{\rm c}_{\bf k}$ as $\Phi^{\rm c}_{\bf k}({\bf r}) = N e^{i{\bf k}\cdot {\bf r}} \Phi^{\rm c}_{\bf k}$, where $N$ is a normalization constant satisfying $N^2 V=1$, $V$ is the area (volume) of the system, and \begin{align} \Phi^{\rm c}_{\bf k} \equiv \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \cr e^{i\theta({\bf k})} \end{pmatrix}. \label{eq:wf} \end{align} Here $\theta({\bf k})$ is the angle between the vector ${\bf k}$ and the $k_x$-axis (see Fig.~\ref{fig:zigPhase}). The expectation values of $\sigma_x$, $\sigma_y$, and $\sigma_z$ with respect to $\Phi^{\rm c}_{\bf k}$ define the pseudospin. Since $\bar{\sigma}_x=\langle \Phi^{\rm c}_{\bf k}| \sigma_x |\Phi^{\rm c}_{\bf k} \rangle =\cos \theta({\bf k})$, $\bar{\sigma}_y=\langle \Phi^{\rm c}_{\bf k}| \sigma_y |\Phi^{\rm c}_{\bf k} \rangle =\sin \theta({\bf k})$, and $\bar{\sigma}_z=\langle \Phi^{\rm c}_{\bf k}| \sigma_z |\Phi^{\rm c}_{\bf k} \rangle =0$, the direction of the pseudospin of $\Phi^{\rm c}_{\bf k}$, \begin{align} (\bar{\sigma}_x,\bar{\sigma}_y,\bar{\sigma}_z)= (\cos\theta({\bf k}),\sin\theta({\bf k}),0), \end{align} is within the $(k_x,k_y)$ plane and parallel to the vector ${\bf k}$ (see Fig.~\ref{fig:zigPhase}). 
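The pseudospin expectation values quoted above follow directly from the spinor of eq.~(\ref{eq:wf}); a short numerical check (Python, for an arbitrary angle $\theta$) is:

```python
import numpy as np

# Pauli matrices acting on the sublattice (pseudospin) space
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(theta):
    """Conduction-band Bloch spinor of eq. (wf)."""
    return np.array([1.0, np.exp(1j * theta)], dtype=complex) / np.sqrt(2.0)

theta = 0.7  # arbitrary angle between k and the k_x-axis
phi = bloch(theta)
spin = np.array([np.real(phi.conj() @ s @ phi) for s in (sx, sy, sz)])
# spin equals (cos(theta), sin(theta), 0): in-plane and parallel to k
```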
Owing to the presence of the zigzag edge parallel to the $x$-axis, the wave function near the zigzag edge is a standing wave given by a sum of the incident wave $\Phi^{\rm c}_{\bf k}({\bf r})$ and the reflected wave $\Phi^{\rm c}_{\bf k'}({\bf r})$ with ${\bf k}'\equiv (k_x,-k_y)$ as \begin{align} \Psi^{\rm c}_{\bf k}({\bf r}) = \frac{1}{\sqrt{2}} \left( \Phi^{\rm c}_{\bf k}({\bf r}) + \Phi^{\rm c}_{\bf k'}({\bf r}) \right). \label{eq:wfzig} \end{align} Strictly speaking, it is necessary to add a relative phase between $\Phi^{\rm c}_{\bf k}({\bf r})$ and $\Phi^{\rm c}_{\bf k'}({\bf r})$ so that $\Psi^{\rm c}_{\bf k}({\bf r})$ satisfies the boundary condition at the zigzag edge. However, this phase gives no contribution to the matrix elements of interest in the present investigation, and therefore we omit it. Note that the normalization of eq.~(\ref{eq:wfzig}) is adopted for $k_y\ne 0$. Some complications arise when $k_y =0$. For example, when $k_y=0$ and $k_x <0$, localized wave functions of edge states~\cite{fujita96} should be used, as explained in Appendix~\ref{app:a}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{zigPhase.eps} \end{center} \caption{ The zigzag edge reflects the wave vector ${\bf k}=(k_x,k_y)$ to ${\bf k'}=(k_x,-k_y)$, and the two wave functions $\Phi^{\rm c}_{\bf k}({\bf r})$ and $\Phi^{\rm c}_{\bf k'}({\bf r})$ form a standing wave. Note that the $y$-component of the pseudospin is flipped by the zigzag edge. } \label{fig:zigPhase} \end{figure} The el-ph matrix element from a state in the conduction band to the same state is given as $\langle \Psi^{\rm c}_{\bf k}|H^{\rm zig}_{\rm LO/TO} |\Psi^{\rm c}_{\bf k} \rangle$.
Using eqs.~(\ref{eq:Hzig}) and (\ref{eq:wfzig}), the pseudospin, and $\theta({\bf k}')=-\theta({\bf k})$, we obtain \begin{align} & \langle \Psi^{\rm c}_{\bf k}|H^{\rm zig}_{\rm LO} |\Psi^{\rm c}_{\bf k} \rangle = 0, \label{eq:zig-LO} \\ & \langle \Psi^{\rm c}_{\bf k}|H^{\rm zig}_{\rm TO} |\Psi^{\rm c}_{\bf k} \rangle = v_{\rm F} A^{\rm q}_x \cos \theta({\bf k}). \label{eq:zig-TO} \end{align} This result shows that the Raman intensity of the LO mode is negligible compared with that of the TO mode at zigzag edges. For eq.~(\ref{eq:zig-LO}), $\langle \Psi^{\rm c}_{\bf k}|\sigma_y |\Psi^{\rm c}_{\bf k} \rangle$ can be rewritten as a sum of two components, $\langle \Phi^{\rm c}_{\bf k}|\sigma_y |\Phi^{\rm c}_{\bf k} \rangle+ \langle \Phi^{\rm c}_{\bf k'}|\sigma_y |\Phi^{\rm c}_{\bf k'} \rangle$, since cross terms such as $\langle \Phi^{\rm c}_{\bf k}|\sigma_y |\Phi^{\rm c}_{\bf k'} \rangle$ vanish. Because $\langle \Phi^{\rm c}_{\bf k}|\sigma_y |\Phi^{\rm c}_{\bf k} \rangle =\sin\theta({\bf k})$ and $\langle \Phi^{\rm c}_{\bf k'}|\sigma_y |\Phi^{\rm c}_{\bf k'} \rangle =\sin\theta({\bf k'})=-\sin\theta({\bf k})$, the $y$-component of the pseudospin for the incident wave $\Phi^{\rm c}_{\bf k}({\bf r})$ is flipped upon reflection, as shown in Fig.~\ref{fig:zigPhase}. Thus, we have \begin{align} \langle \Psi^{\rm c}_{\bf k}|\sigma_y |\Psi^{\rm c}_{\bf k} \rangle = 0, \label{eq:sig_y} \end{align} due to the cancellation of the $y$-component of the pseudospin between the incident ${\bf k}$-state and the reflected ${\bf k'}$-state. Similarly, in eq.~(\ref{eq:zig-TO}), $\langle \Psi^{\rm c}_{\bf k}|\sigma_x |\Psi^{\rm c}_{\bf k} \rangle$ is written as a sum of two components, $\langle \Phi^{\rm c}_{\bf k}|\sigma_x |\Phi^{\rm c}_{\bf k} \rangle+ \langle \Phi^{\rm c}_{\bf k'}|\sigma_x |\Phi^{\rm c}_{\bf k'} \rangle$.
Because $\langle \Phi^{\rm c}_{\bf k}|\sigma_x |\Phi^{\rm c}_{\bf k} \rangle =\cos\theta({\bf k})$ and $\langle \Phi^{\rm c}_{\bf k'}|\sigma_x |\Phi^{\rm c}_{\bf k'} \rangle =\cos\theta({\bf k'})=\cos\theta({\bf k})$, we obtain eq.~(\ref{eq:zig-TO}). The Raman intensity depends on the polarization of the incident laser light~\cite{grueneis03} and on that of the scattered light. The electron-light interaction is given by ${H}^{\rm em}_{\rm K}=-v_{\rm F} e \mbox{\boldmath $\sigma$} \cdot {\bf A}$ in eq.~(\ref{eq:HK}). The optical absorption occurs with amplitude $M^{\rm opt}({\bf A})=\langle \Psi^{\rm c}_{\bf k}|{H}^{\rm em}_{\rm K} | \Psi^{\rm v}_{\bf k}\rangle$, where $\Psi^{\rm v}_{\bf k}({\bf r})$ is the wave function in the valence energy band, which is related to $\Psi^{\rm c}_{\bf k}({\bf r})$ via $\Psi^{\rm v}_{\bf k}({\bf r})= \sigma_z \Psi^{\rm c}_{\bf k}({\bf r})$. On the other hand, the optical emission occurs with amplitude $\langle \Psi^{\rm v}_{\bf k}|{H}^{\rm em}_{\rm K}| \Psi^{\rm c}_{\bf k}\rangle$, which is simply the complex conjugate of $M^{\rm opt}({\bf A})$. Thus, the polarization dependences of the incident and scattered light are the same. Here, let us examine the polarization dependence of the incident light. The direction of ${\bf A}=(A_x,A_y)$ corresponds to the direction of the polarization of the electric field. The polarization of the incident laser light should be perpendicular to the zigzag edge within a graphene plane, i.e., ${\bf A}_\perp=(0,A_y)$, in order to populate photoexcited electrons effectively. 
This argument follows from $M^{\rm opt}({\bf A}_\perp) = -i v_{\rm F} e A_y\cos \theta({\bf k})$ for ${\bf A}_\perp=(0,A_y)$, while $M^{\rm opt}({\bf A}_\parallel) = 0$ for ${\bf A}_\parallel=(A_x,0)$ because \begin{align} M^{\rm opt}({\bf A}_\perp) &\equiv -v_{\rm F} e A_y \langle \Psi^{\rm c}_{\bf k}|\sigma_y |\Psi^{\rm v}_{\bf k} \rangle \nonumber \\ &= -v_{\rm F} e A_y \langle \Psi^{\rm c}_{\bf k}|\sigma_y \sigma_z |\Psi^{\rm c}_{\bf k} \rangle \nonumber \\ &= -i v_{\rm F} e A_y \langle \Psi^{\rm c}_{\bf k}|\sigma_x |\Psi^{\rm c}_{\bf k} \rangle \nonumber \\ &= -i v_{\rm F} e A_y\cos \theta({\bf k}), \label{eq:zig-perp} \end{align} and \begin{align} M^{\rm opt}({\bf A}_\parallel) &\equiv -v_{\rm F} e A_x \langle \Psi^{\rm c}_{\bf k}|\sigma_x |\Psi^{\rm v}_{\bf k} \rangle \nonumber \\ &= -v_{\rm F} e A_x \langle \Psi^{\rm c}_{\bf k}|\sigma_x \sigma_z |\Psi^{\rm c}_{\bf k} \rangle \nonumber \\ &= i v_{\rm F} e A_x \langle \Psi^{\rm c}_{\bf k}|\sigma_y |\Psi^{\rm c}_{\bf k} \rangle \nonumber \\ &= 0. \label{eq:zig-para} \end{align} Here, we have used $| \Psi^{\rm v}_{\bf k}\rangle = \sigma_z |\Psi^{\rm c}_{\bf k} \rangle$, $\sigma_y \sigma_z = i\sigma_x$, $\sigma_x \sigma_z = -i\sigma_y$, and eq.~(\ref{eq:sig_y}). It is noteworthy that it is mainly the electrons near the $k_x$-axis [$\theta({\bf k})\approx 0$ or $\pi$] that can participate in the Raman process taking place near the zigzag edge since both the el-ph matrix element [eq.~(\ref{eq:zig-TO})] and the optical transition amplitude [eq.~(\ref{eq:zig-perp})] are proportional to $\cos\theta({\bf k})$. If we define the angle between the laser polarization and the zigzag edge as $\Theta$ (see the inset in Fig.~\ref{fig:Pdepend}), then $A_y=|{\bf A}|\sin\Theta$ and the Raman intensity is proportional to $|M^{\rm opt}({\bf A})|^2 \propto \sin^2 \Theta$. The $\Theta$-dependence of the square of the optical transition amplitude is plotted as the dashed curve in Fig.~\ref{fig:Pdepend}.
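All of the zigzag-edge selection rules above reduce to pseudospin averages over the standing wave of eq.~(\ref{eq:wfzig}). A minimal numerical sketch (Python; the cross terms between the ${\bf k}$ and ${\bf k'}$ plane waves are dropped because they integrate to zero over the sample):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def bloch(theta):
    """Conduction-band Bloch spinor of eq. (wf)."""
    return np.array([1.0, np.exp(1j * theta)], dtype=complex) / np.sqrt(2.0)

def standing_avg(sigma, theta):
    """<Psi|sigma|Psi> for the zigzag standing wave of eq. (wfzig):
    theta(k') = -theta(k), and only the diagonal spinor terms
    (k with k, k' with k') survive the spatial integration."""
    phi_k, phi_kp = bloch(theta), bloch(-theta)
    return 0.5 * np.real(phi_k.conj() @ sigma @ phi_k
                         + phi_kp.conj() @ sigma @ phi_kp)

theta = 0.4
avg_sy = standing_avg(sy, theta)  # 0: kills the LO el-ph element and M_opt(A_parallel)
avg_sx = standing_avg(sx, theta)  # cos(theta): TO el-ph element and M_opt(A_perp)
```

The vanishing $\langle\sigma_y\rangle$ reproduces eqs.~(\ref{eq:zig-LO}) and (\ref{eq:zig-para}), and the $\cos\theta({\bf k})$ average reproduces eqs.~(\ref{eq:zig-TO}) and (\ref{eq:zig-perp}).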
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.7]{raman_int.eps} \end{center} \caption{ The polarization dependence of the square of the optical transition amplitude ($|M^{\rm opt}({\bf A})|^2$) is plotted as a function of the angle of laser polarization ($\Theta$) with respect to the orientation of the edge. For a pure zigzag (armchair) edge, the intensity is maximum when the laser polarization is perpendicular (parallel) to the edge. ``zigzag@armchair'' denotes the case when zigzag edges are introduced into part of a perfect armchair edge. We have used eq.~(\ref{eq:opt-A}) with $r=0$ (armchair only), $r=0.5$ (partial), and $r=1$ (random: a mixture of zigzag and armchair edges). } \label{fig:Pdepend} \end{figure} The Kohn anomaly is relevant to the el-ph matrix element for electron-hole pair creation, i.e., $\langle \Psi^{\rm c}_{\bf k}|H^{\rm zig}_{\rm LO/TO}| \Psi^{\rm v}_{\bf k}\rangle$. Using $\Psi^{\rm v}_{\bf k}({\bf r})= \sigma_z \Psi^{\rm c}_{\bf k}({\bf r})$, we rewrite the matrix element as $\langle \Psi^{\rm c}_{\bf k}|H^{\rm zig}_{\rm LO/TO}\sigma_z| \Psi^{\rm c}_{\bf k}\rangle$. From eq.~(\ref{eq:Hzig}), we have \begin{align} \begin{split} & H^{\rm zig}_{\rm LO} \sigma_z = iv_{\rm F} A^{\rm q}_y \sigma_x, \\ & H^{\rm zig}_{\rm TO} \sigma_z = -iv_{\rm F} A^{\rm q}_x \sigma_y, \end{split} \end{align} where $\sigma_x \sigma_z = -i\sigma_y$ and $\sigma_y \sigma_z = i\sigma_x$ have been used. We have thus shown that $H^{\rm zig}_{\rm TO} \sigma_z$ is proportional to $\sigma_y$ as well as that $H^{\rm zig}_{\rm LO}$ is proportional to $\sigma_y$. From eq.~(\ref{eq:sig_y}), we see that the TO mode is unable to transfer an electron in the valence band into the conduction band, that is, the TO mode does not decay into an electron-hole pair, and therefore the Kohn anomaly for the TO mode is negligible compared with that for the LO mode. 
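The Pauli-algebra step behind this conclusion, $H^{\rm zig}_{\rm LO}\sigma_z = i v_{\rm F}A^{\rm q}_y\sigma_x$ and $H^{\rm zig}_{\rm TO}\sigma_z = -i v_{\rm F}A^{\rm q}_x\sigma_y$, can be verified in a few lines (Python):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Pair-creation vertices H * sigma_z for the zigzag-edge el-ph couplings
# (H_LO ~ sigma_y and H_TO ~ sigma_x, eq. (Hzig)):
lo_pair = sy @ sz  # = i*sigma_x  -> couples through <sigma_x> = cos(theta)
to_pair = sx @ sz  # = -i*sigma_y -> couples through <sigma_y> = 0 at a zigzag edge
```

Since the standing-wave average of $\sigma_y$ vanishes [eq.~(\ref{eq:sig_y})], only the LO mode can create electron-hole pairs near the zigzag edge.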
\section{Armchair Edge}\label{sec:arm} Next, we calculate the matrix element relevant to the Raman intensity near the armchair edge. Suppose that the armchair edge is located along the $y$-axis, then the armchair edge reflects an electron with ${\bf k}=(k_x,k_y)$ near the K point into the state with ${\bf k'}=(k'_x,k'_y)=(-k_x,k_y)$ near the K$'$ point, where ${\bf k}$ and ${\bf k'}$ are measured from the K and K$'$ points, respectively. The negative sign in front of $k_x$ for ${\bf k'}$ is due to the momentum conservation. One may consider that an intervalley process is unconnected with the $\Gamma$ point LO and TO phonons. Note, however, that we should consider the K and K$'$ points simultaneously in the case of an armchair edge since the reflection of an electron by the armchair edge is an intervalley scattering process as shown in Fig.~\ref{fig:armPhase}. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{armPhase.eps} \end{center} \caption{ The armchair edge reflects the wave vector ${\bf k}=(k_x,k_y)$ of one valley into ${\bf k'}=(-k_x,k_y)$ of another valley, and the two wave functions of the different valleys form a standing wave. The pseudospin is unchanged by the armchair edge. Note that the pseudospin for states near the K$'$ point is not parallel to the vector ${\bf k'}$, while the pseudospin for states near the K point is parallel to the vector ${\bf k}$. } \label{fig:armPhase} \end{figure} We specify the deformation-induced gauge field ${\bf A}^{\rm q}$ for the LO and TO modes near the armchair edge. The vibrations of carbon atoms for the $\Gamma$ point LO and TO modes are shown in Fig.~\ref{fig:armLOTO}. We have $\delta \gamma_{0,1} = -2 \delta \gamma_{0,2}$ and $\delta \gamma_{0,2}=\delta \gamma_{0,3}$ for the LO mode, while $\delta \gamma_{0,1} = 0$ and $\delta \gamma_{0,2}=-\delta \gamma_{0,3}$ for the TO mode. 
Using eq.~(\ref{eq:gauge}), we see that ${\bf A}^{\rm q}$ for the LO mode is written as ${\bf A}^{\rm q}_{\rm LO}=(A_x^{\rm q},0)$ with $v_{\rm F}A_x^{\rm q}=-3\delta\gamma_{0,2}$, while ${\bf A}^{\rm q}$ for the TO mode is written as ${\bf A}^{\rm q}_{\rm TO}=(0,A_y^{\rm q})$ with $v_{\rm F}A_y^{\rm q}=\sqrt{3}\delta\gamma_{0,2}$. Thus, from eq.~(\ref{eq:HKK'}), we see that the el-ph interaction \begin{align} H^{\rm arm}_{\rm LO/TO} = v_{\rm F} \begin{pmatrix} \mbox{\boldmath $\sigma$} \cdot {\bf A}^{\rm q}_{\rm LO/TO} & 0 \cr 0 & - \mbox{\boldmath $\sigma$}' \cdot {\bf A}^{\rm q}_{\rm LO/TO} \end{pmatrix} \end{align} is rewritten as \begin{align} \begin{split} & H^{\rm arm}_{\rm LO} = v_{\rm F} A^{\rm q}_x \begin{pmatrix} \sigma_x & 0 \cr 0 & \sigma_x \end{pmatrix}, \\ & H^{\rm arm}_{\rm TO} = v_{\rm F} A^{\rm q}_y \begin{pmatrix} \sigma_y & 0 \cr 0 & - \sigma_y \end{pmatrix}, \end{split} \label{eq:Harm} \end{align} for the LO and TO modes, respectively. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.4]{armLOTO.eps} \end{center} \caption{The displacement vectors for the LO and TO modes are shown in (a) and (b), respectively. The displacement vectors of the LO (TO) mode are parallel (perpendicular) to the armchair edge. The direction of ${\bf A}^{\rm q}$ is perpendicular to the direction of atom displacement. } \label{fig:armLOTO} \end{figure} The wave function is given by a sum of the plane wave at the K point and the reflected wave at the K$'$ point as \begin{align} \Psi^{\rm c}_{\bf k}({\bf r}) = \frac{e^{ik_y y}}{\sqrt{2}} \begin{pmatrix} \Phi^{\rm c}_{\bf k} e^{+ik_x x} \cr \Phi^{\rm c}_{\bf k} e^{-ik_x x} \end{pmatrix}. \label{eq:wfarm} \end{align} Note that the Bloch function is the same ($\Phi^{\rm c}_{\bf k}$) for both the K and K$'$ points. 
In fact, the Bloch function for a state near the K$'$ point can be expressed as \begin{align} \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \cr -e^{-i\theta'({\bf k'})} \end{pmatrix}, \label{eq:wfK'} \end{align} where $\theta'({\bf k'})$ is defined through $k'_x+ik'_y=|{\bf k'}|e^{i\theta'({\bf k'})}$. Since the armchair edge reflects the state with ${\bf k}=(k_x,k_y)$ into the state with ${\bf k'}=(-k_x,k_y)$, we have the relation $\theta'({\bf k'})=\pi-\theta({\bf k})$ (see Fig.~\ref{fig:armPhase}). By substituting this into eq.~(\ref{eq:wfK'}), we see that the Bloch function of eq.~(\ref{eq:wfK'}) becomes $\Phi^{\rm c}_{\bf k}$ of eq.~(\ref{eq:wf}), which explains eq.~(\ref{eq:wfarm}). The pseudospin for the eigenstate near the K$'$ point is given by $\langle \Phi^{\rm c}_{\bf k'}| \sigma_x |\Phi^{\rm c}_{\bf k'} \rangle =-\cos \theta'({\bf k'})$ and $\langle \Phi^{\rm c}_{\bf k'}| \sigma_y |\Phi^{\rm c}_{\bf k'} \rangle =\sin \theta'({\bf k'})$. Thus, the pseudospin for the K$'$ point is not parallel to the vector ${\bf k'}$, as shown in Fig.~\ref{fig:armPhase}, although the pseudospin for states near the K point is parallel to the vector ${\bf k}$. Using $\theta'({\bf k'})=\pi-\theta({\bf k})$, one can see that the pseudospin is preserved under the reflection at the armchair edge (see Fig.~\ref{fig:armPhase}). Using eqs.~(\ref{eq:Harm}) and (\ref{eq:wfarm}), it is straightforward to check that \begin{align} \begin{split} & \langle \Psi^{\rm c}_{\bf k}|H^{\rm arm}_{\rm LO} |\Psi^{\rm c}_{\bf k}\rangle = v_{\rm F} A^{\rm q}_x \cos \theta({\bf k}), \\ & \langle \Psi^{\rm c}_{\bf k}|H^{\rm arm}_{\rm TO} |\Psi^{\rm c}_{\bf k} \rangle = 0. \end{split} \label{eq:arm-LO} \end{align} This result shows that the Raman intensity of the TO mode is negligible compared with that of the LO mode.
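The intervalley interference behind eq.~(\ref{eq:arm-LO}) can be made explicit with the four-component standing wave of eq.~(\ref{eq:wfarm}). A numerical sketch (Python, arbitrary angle; the el-ph matrices are block-diagonal in valley space, so no cross-valley spatial terms arise):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
zero = np.zeros((2, 2), dtype=complex)

def bloch(theta):
    """Conduction-band Bloch spinor of eq. (wf)."""
    return np.array([1.0, np.exp(1j * theta)], dtype=complex) / np.sqrt(2.0)

theta = 0.3
phi = bloch(theta)
# Armchair standing wave: the same Bloch spinor in both valleys (eq. (wfarm))
psi = np.concatenate([phi, phi]) / np.sqrt(2.0)

H_LO = np.block([[sx, zero], [zero, sx]])   # same sign in both valleys
H_TO = np.block([[sy, zero], [zero, -sy]])  # opposite valley signs

lo_elem = np.real(psi.conj() @ H_LO @ psi)  # cos(theta): LO is Raman active
to_elem = np.real(psi.conj() @ H_TO @ psi)  # 0: the two valleys interfere destructively
```

The opposite valley signs in $H^{\rm arm}_{\rm TO}$ are exactly what cancel the TO matrix element.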
The absence of the Raman intensity of the TO mode results from the interference between two valleys, namely, the opposite signs in front of $\sigma_y$ for the K and K$'$ points of $H^{\rm arm}_{\rm TO}$ in eq.~(\ref{eq:Harm}). The interaction between the light and the electronic states is given by \begin{align} {H}^{\rm em}({\bf A}) = -v_{\rm F} e \begin{pmatrix} \mbox{\boldmath $\sigma$} \cdot {\bf A} & 0 \cr 0 & \mbox{\boldmath $\sigma$}' \cdot {\bf A} \end{pmatrix}, \end{align} from eq.~(\ref{eq:HKK'}). The optical absorption amplitude is given by $M^{\rm opt}({\bf A})=\langle \Psi^{\rm c}_{\bf k}|{H}^{\rm em}({\bf A}) | \Psi^{\rm v}_{\bf k}\rangle$, where \begin{align} \Psi^{\rm v}_{\bf k}({\bf r}) = \begin{pmatrix} \sigma_z & 0 \cr 0 & \sigma_z \end{pmatrix} \Psi^{\rm c}_{\bf k}({\bf r}). \end{align} If the polarization of the incident laser light is perpendicular to the armchair edge, ${\bf A}_\perp=(A_x,0)$, then $\langle \Psi^{\rm c}_{\bf k}|{H}^{\rm em}({\bf A}_\perp)|\Psi^{\rm v}_{\bf k}\rangle$ vanishes owing to the cancellation between the K and K$'$ points. The polarization of the incident laser should be parallel to the armchair edge, i.e., ${\bf A}_\parallel=(0,A_y)$, in order to populate photoexcited electrons effectively because $\langle \Psi^{\rm c}_{\bf k}|{H}^{\rm em}({\bf A}_\parallel)|\Psi^{\rm v}_{\bf k}\rangle=-iv_{\rm F} eA_y \cos\theta({\bf k})$. Note that it is mainly the electrons near the $k_x$-axis [$\theta({\bf k})\approx 0$ or $\pi$] that can participate in the Raman process taking place near the armchair edge since both the el-ph matrix element [eq.~(\ref{eq:arm-LO})] and the optical transition amplitude are proportional to $\cos\theta({\bf k})$. By defining the angle between the laser polarization and the armchair edge as $\Theta$ (see the inset in Fig.~\ref{fig:Pdepend}), we have $A_y=|{\bf A}|\cos\Theta$, and we see that the Raman intensity is proportional to $|M^{\rm opt}({\bf A})|^2 \propto \cos^2 \Theta$.
The polarization dependence of the Raman intensity for the armchair edge is opposite that for the zigzag edge, as shown in Fig.~\ref{fig:Pdepend}, from which the orientation of the edge may be determined experimentally. The el-ph matrix element for the Kohn anomaly is given by $\langle \Psi^{\rm c}_{\bf k}|H^{\rm arm}_{\rm LO/TO}\sigma_z| \Psi^{\rm c}_{\bf k}\rangle$. From eq.~(\ref{eq:Harm}), we have \begin{align} \begin{split} & H^{\rm arm}_{\rm LO} \sigma_z = -i v_{\rm F} A^{\rm q}_x \begin{pmatrix} \sigma_y & 0 \cr 0 & \sigma_y \end{pmatrix}, \\ & H^{\rm arm}_{\rm TO} \sigma_z= i v_{\rm F} A^{\rm q}_y \begin{pmatrix} \sigma_x & 0 \cr 0 & - \sigma_x \end{pmatrix}. \end{split} \label{eq:arm-LO-KA} \end{align} It has thus been shown that the TO mode does not undergo a Kohn anomaly because the matrix element vanishes owing to the sign difference between the K and K$'$ points with respect to $\sigma_x$. \section{Energy Difference Between LO and TO Modes}\label{sec:selfene} In this section we calculate the energy difference between the LO and TO modes. The renormalized phonon energy is written as a sum of the unrenormalized energy $\hbar \omega$ and the self-energy. Since the TO mode does not undergo a Kohn anomaly, the self-energy of the TO mode vanishes. 
Thus, the energy difference between the LO and TO modes is the self-energy of the LO mode, which is given by time-dependent second-order perturbation theory as \begin{align} \Pi(\omega,E_{\rm F}) = & 2 \sum_{\bf k} \left( \frac{|\langle \Psi^{\rm c}_{\bf k}|H^{\rm arm}_{\rm LO} | \Psi^{\rm v}_{\bf k} \rangle|^2}{\hbar \omega -E^{\rm eh}_{\bf k}+i\delta} - \frac{|\langle \Psi^{\rm c}_{\bf k}|H^{\rm arm}_{\rm LO} | \Psi^{\rm v}_{\bf k} \rangle|^2}{\hbar \omega +E_{\bf k}^{\rm eh}+i\delta} \right) \nonumber \\ & \times \left(f_{\rm h}-f_{\rm e}\right), \label{eq:PI} \end{align} where the factor of 2 originates from the spin degeneracy, $f_{\rm e,h}=\left[1+\exp\left((E^{\rm e,h}-E_{\rm F})/k_{\rm B}T\right)\right]^{-1}$ is the Fermi distribution function, $E_{\rm F}$ is the Fermi energy, $\delta$ is a positive infinitesimal, $E^{\rm e}$ ($E^{\rm h}$) is the energy of an electron (a hole), and $E^{\rm eh}_{\bf k}\equiv E_{\bf k}^{\rm e}-E_{\bf k}^{\rm h}=2\hbar v_{\rm F}|{\bf k}|$ ($\ge 0$) is the energy of an electron-hole pair. Note that the summation over ${\bf k}$ in eq.~(\ref{eq:PI}) is not restricted to interband ($E^{\rm eh} \ne 0$) processes but also includes intraband ($E^{\rm eh}= 0$) processes. Thus, the self-energy can be decomposed into two parts, $\Pi(\omega,E_{\rm F})=\Pi^{\rm inter}(\omega,E_{\rm F}) +\Pi^{\rm intra}(\omega,E_{\rm F})$, where $\Pi^{\rm inter}(\omega,E_{\rm F})$ includes only interband electron-hole pair creation processes satisfying $E^{\rm eh}\ne 0$.
In the adiabatic limit, i.e., when $\omega = 0$ and $\delta=0$ in eq.~(\ref{eq:PI}), by substituting eq.~(\ref{eq:arm-LO}) into eq.~(\ref{eq:PI}), it is straightforward to show that, at $T=0$, \begin{align} \begin{split} & \Pi^{\rm intra}(0,E_{\rm F}) = -\frac{V}{\pi} \left( \frac{A_x^{\rm q}}{\hbar} \right)^2|E_{\rm F}|, \\ & \Pi^{\rm inter}(0,E_{\rm F})= -\frac{V}{\pi} \left( \frac{A_x^{\rm q}}{\hbar} \right)^2 \left( E_c - |E_{\rm F}| \right), \end{split} \label{eq:ad-pi} \end{align} where $E_c$ is a cutoff energy. Note that $\Pi^{\rm intra}(0,E_{\rm F})$ does not vanish because $(f_{\rm h}-f_{\rm e})/E^{\rm eh}_{\bf k} \ne 0$ in the limit of $E^{\rm eh}_{\bf k}\to 0$, while in the nonadiabatic case, $\Pi^{\rm intra}(\omega,E_{\rm F})$ vanishes since $(f_{\rm h}-f_{\rm e})/\hbar \omega=0$ in this limit. It is only the interband process that contributes to the self-energy in the nonadiabatic case. Lazzeri and Mauri~\cite{lazzeri06prl} pointed out that $\Pi(0,E_{\rm F})$ does not depend on $E_{\rm F}$ in the adiabatic limit owing to the cancellation between $\Pi^{\rm intra}(0,E_{\rm F})$ and $\Pi^{\rm inter}(0,E_{\rm F})$. This shows that the adiabatic approximation is not appropriate for discussing the $E_{\rm F}$ dependence of the self-energy. In the nonadiabatic case, at $T=0$, it is straightforward to obtain (see Appendix~\ref{app:b} for the derivation) \begin{align} & {\rm Re} \left[ \Pi(\omega,E_{\rm F}) \right] = \nonumber \\ &- \frac{V}{\pi} \left( \frac{A_x^{\rm q}}{\hbar} \right)^2 \left[ E_c - |E_{\rm F}|- \frac{\hbar \omega}{4} \ln \left| \frac{|E_{\rm F}| - \frac{\hbar \omega}{2}}{|E_{\rm F}|+ \frac{\hbar \omega}{2} }\right| \right]. \label{eq:repi} \end{align} The Fermi energy dependence is given by the last two terms.~\cite{lazzeri06prl,ando06-ka} Of these, the first is linear in $|E_{\rm F}|$ and the second produces a singularity at $E_{\rm F}=\pm\hbar \omega/2$.
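The $E_{\rm F}$ dependence of eq.~(\ref{eq:repi}), linear in $|E_{\rm F}|$ with a logarithmic singularity at $|E_{\rm F}|=\hbar\omega/2$, is easy to inspect numerically. A sketch in units of the prefactor $(V/\pi)(A^{\rm q}_x/\hbar)^2$, with an assumed phonon energy $\hbar\omega=0.2$ eV and cutoff $E_c=10$ eV (illustrative values only):

```python
import numpy as np

def re_pi(E_F, hw=0.2, E_c=10.0):
    """Re[Pi(omega, E_F)] divided by (V/pi)(A_x^q/hbar)^2, in eV,
    following eq. (repi); hw and E_c are assumed parameter values."""
    a = abs(E_F)
    log_term = (hw / 4.0) * np.log(abs((a - hw / 2.0) / (a + hw / 2.0)))
    return -(E_c - a - log_term)

shift_at_zero = re_pi(0.0)     # -E_c, recovering eq. (pi_inter)
near_singular = re_pi(0.0999)  # the softening grows as |E_F| approaches hw/2
```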
These terms express the nonadiabatic effects.~\cite{pisana07} Recently, Saitta {\it et al.}~\cite{saitta08} have pointed out that large nonadiabatic effects are ubiquitous in layered metals such as CaC$_6$ and MgB$_2$. For the case of $E_{\rm F}=0$, eq.~(\ref{eq:repi}) becomes \begin{align} {\rm Re}\left[\Pi(\omega,0) \right] = - \frac{V}{\pi} \left( \frac{A_x^{\rm q}}{\hbar} \right)^2 E_c. \label{eq:pi_inter} \end{align} The self-energy depends on the cutoff energy $E_c$. The value of $E_c$ cannot be determined within the effective-mass model. We assume that $E_c$ is of the order of half of the $\pi$ bandwidth (10 eV); see \S\ref{sec:discussion} for a detailed discussion of the value of $E_c$. Using the harmonic approximation for the displacement of the carbon atoms,~\cite{sasaki09} we obtain $\sqrt{N_u}|A_x^{\rm q}/\hbar| \approx 2\times 10^{-2}$~\AA$^{-1}$ (see Appendix~\ref{app:b}), where $N_u$ denotes the number of hexagonal unit cells. Since $V$ can be written as $N_u S$, where $S$ is the area of a hexagonal unit cell, we obtain ${\rm Re}\left[\Pi(\omega,0) \right] \approx -6$ meV. Thus, the difference in the Raman shift between the (Raman-active) TO mode near the zigzag edge and the (Raman-active) LO mode near the armchair edge is approximately 50 cm$^{-1}$. In a realistic system, the actual magnitude of the self-energy may be much smaller than this value. For example, a typical edge is a mixture of zigzag and armchair edges, for which the energy difference between the LO and TO modes is smaller. Here, let us introduce zigzag edges into part of a perfect armchair edge at $x=0$ and examine the effect of the randomness of the edge shape on the Raman intensity and phonon self-energies.
Then the standing wave near the rough edge is approximated by \begin{align} \Psi^{\rm c}_{\bf k}({\bf r}) = e^{i{\bf k}\cdot {\bf r}} \begin{pmatrix} \Phi^{\rm c}_{\bf k} \cr 0 \end{pmatrix} + a e^{i{\bf k'}\cdot {\bf r}} \begin{pmatrix} 0 \cr \Phi^{\rm c}_{\bf k} \end{pmatrix} + z e^{i{\bf k'}\cdot {\bf r}} \begin{pmatrix} \Phi^{\rm c}_{\bf k'} \cr 0 \end{pmatrix}, \label{eq:random} \end{align} where ${\bf k'}=(-k_x,k_y)$ and $|a|^2 + |z|^2 = 1$. The wave function $\Psi^{\rm c}_{\bf k}({\bf r})$ reproduces eq.~(\ref{eq:wfarm}) for the case when $(a,z)=(1,0)$. Note that $|z|^2/|a|^2$ ($\equiv r \le 1$) can be considered phenomenologically as the ratio of the number of zigzag edges to that of armchair edges in the rough edge, and $r=1$ [$(|a|,|z|)=(1/\sqrt{2},1/\sqrt{2})$] represents the case that armchair and zigzag edges are equally distributed along the $y$-axis. It is a straightforward calculation to obtain \begin{align} \begin{split} & \langle \Psi^{\rm c}_{\bf k}|H^{\rm arm}_{\rm LO}| \Psi_{\bf k}^{\rm c} \rangle=v_{\rm F} A_x^{\rm q} \cos\theta({\bf k}) \frac{1+|a|^2-|z|^2}{1+|a|^2+|z|^2}, \\ & \langle \Psi_{\bf k}^{\rm c}|H^{\rm arm}_{\rm TO}|\Psi_{\bf k}^{\rm c} \rangle= v_{\rm F} A_y^{\rm q} \sin\theta({\bf k}) \frac{1-|a|^2+|z|^2}{1+|a|^2+|z|^2}. \end{split} \end{align} These matrix elements show that the self-energy for the LO mode becomes ${\rm Re}\left[\Pi(\omega,0) \right]/4$ for the case of $r=1$. On the other hand, the self-energy of the TO mode, which is zero for the case of $r=0$, becomes ${\rm Re}\left[\Pi(\omega,0) \right]/4$ for the case of $r=1$. The differences in the Kohn anomalies for the LO and TO modes disappear for the case of $r=1$. Moreover, the Raman intensity of the TO mode increases, while the Raman intensity of the LO mode decreases. As a result, the G band exhibits a single peak. The intensity of the G band is given as the sum of the LO and TO modes. 
Since the intensity of each mode is four times smaller than that of the LO mode near the pure armchair edge, the total intensity of the G band should be two times smaller than the Raman intensity near the pure armchair edge. Note that for a general value of $(a,z)$, the energy difference between the LO and TO modes is given by $(|a|^2-|z|^2) {\rm Re}\left[\Pi(\omega,0) \right]$. It is also a straightforward calculation to obtain the polarization dependence of the optical transition amplitude, \begin{align} |M^{\rm opt}({\bf A})|^2 \propto \frac{\cos^2\Theta}{(1+r)^2} + \frac{r^2\sin^2\Theta}{(1+r)^2}. \label{eq:opt-A} \end{align} This dependence is plotted for two cases, $r=1$ and $r=0.5$, in Fig.~\ref{fig:Pdepend}. \section{Bulk and Edge}\label{sec:bulk} In the case of an infinite periodic graphene system without an edge, the self-energies of the LO and TO modes are the same and given by ${\rm Re}\left[\Pi(\omega,E_{\rm F}) \right]$ in eq.~(\ref{eq:repi}). Moreover, no asymmetry between the LO and TO modes in the Raman intensity is expected. The reason why the LO and TO modes do not exhibit any difference in Raman spectra is that graphene is a homo-polar crystal with two atoms per unit cell, and hence there is no polar mode, similar to the case of Si. Thus, the LO and TO modes are degenerate and contribute equally to the single peak of the G band (see ``Periodic'' in Fig.~\ref{fig:spectrum}). Note that a slight change in the spring force constant due to a uniaxial strain applied to a graphene sample can resolve the degeneracy between the LO and TO modes. In this case, the unrenormalized energy $\hbar \omega$ for the LO mode is not identical to that for the TO mode. However, even for this case, we can expect that the self-energies and Raman intensities for the LO and TO modes are similar to each other. 
Thus, we can see two peaks for the LO and TO modes with similar intensity, as was observed by Mohiuddin {\it et al.}~\cite{mohiuddin09} Since an actual sample is always surrounded by an edge, it is interesting to consider whether or not the interior of a graphene sample can be considered as an infinite periodic graphene system without the edge. If the wave function in the interior region is given by a superposition of the incident and reflected states, then it is reasonable to assume that the wave function is approximated by eq.~(\ref{eq:random}) with $(|a|,|z|)=(1/\sqrt{2},1/\sqrt{2})$, since it is probable that the edge is a random mixture of zigzag and armchair edges. The peak positions of the LO and TO modes in the Raman shift are indicated by ``Random'' in Fig.~\ref{fig:spectrum}. We speculate that the peak position for an actual sample appears between the peaks labeled ``Periodic'' and ``Random''. An estimation of the effective distance from the edge at which the effect of interference on the pseudospin discussed so far can survive will be a subject of further investigation. \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.5]{spectrum.eps} \end{center} \caption{The horizontal lines indicate the Raman shift for the case of $E_{\rm F}=0$. (top and bottom lines) The Raman peak taken near the zigzag (armchair) edge appears only for the TO (LO) mode indicated by the solid circle. The peak for the TO mode does not accompany the broadening because the TO mode decouples from the electron-hole pairs. (middle line) The Raman peak appears at $\hbar \omega +{\rm Re}\left[\Pi(\omega,0) \right]$ in the case of an infinite periodic graphene system without an edge (``Periodic''). If the effect of the electron reflection at the edge survives in the interior of a graphene sample (``Random''), the Raman peak is expected to appear at $\hbar \omega +{\rm Re}\left[\Pi(\omega,0) \right]/4$. 
} \label{fig:spectrum} \end{figure} \section{Discussion and Conclusions}\label{sec:discussion} Here, we discuss the relationship between our result and experimental results. Can\c{c}ado {\it et al.} observed that the Raman intensity of the G band for a nanoribbon has a strong dependence on the incident light polarization.~\cite{cancado04} They showed that the Raman intensity is maximum when the polarization is parallel to the edge of a nanoribbon. Their result is consistent with our result for the armchair edge, but not consistent with our result for the zigzag edge. We speculate that the sample used in their experiment is similar to a nanoribbon with an armchair edge. This speculation is reasonable because armchair edges are more frequently observed in experiments than zigzag edges.~\cite{kobayashi05} Casiraghi {\it et al.} performed Raman spectroscopy on graphene edges and observed a small redshift of the G peak near the edge accompanied by a decrease in the linewidth of the G peak.~\cite{casiraghi09} This behavior of the G peak is consistent with that of the zigzag edge since it is only the TO mode without broadening (which is related to the imaginary part of the self-energy) that can be Raman active. The cutoff energy $E_c$ appearing in eq.~(\ref{eq:pi_inter}) may be determined from a tight-binding lattice model. For periodic graphene, by taking into account the contribution of all the possible electron-hole intermediate states in the Brillouin zone, we obtain $E_c \approx 7\gamma_0$ (20 eV), which is larger than the value adopted in eq.~(\ref{eq:pi_inter}). The value of $E_c$ for a graphene sample with an edge may be different from that for a periodic graphene sample without an edge. In fact, for a nanoribbon, a tight-binding calculation~\cite{sasaki09} shows that the energy difference between the LO and TO modes is approximately 30 cm$^{-1}$, which corresponds to $E_c\approx 6$ eV.
Thus the value of $E_c$ depends on the geometry of the system. Since we have considered a large graphene sample with an edge, we assumed that an appropriate value of $E_c$ is between 6 and 20 eV, and we chose 10 eV, which is of the order of half of the $\pi$ bandwidth. Because $E_c$ is not an experimentally controllable parameter, we consider that, in order to verify our results, it is essential to observe the $E_{\rm F}$ dependence of the G band spectra near the edge. In conclusion, the el-ph matrix elements for the Raman intensity and Kohn anomaly near the edge of graphene were derived by an adiabatic calculation, and then a perturbation treatment was applied to the nonadiabatic parts of the phonon self-energies. The zigzag edge causes intravalley scattering, and the $y$-component of the pseudospin vanishes, $\langle \sigma_y \rangle = 0$, for the standing wave. The Raman intensity of the LO mode and the Kohn anomaly of the TO mode are negligible owing to $\langle \sigma_y \rangle = 0$. On the other hand, the armchair edge causes intervalley scattering and the pseudospin does not change its direction. However, owing to the interference between the two valleys originating from the el-ph interaction, the Raman intensity and Kohn anomaly are negligible only for the TO mode. The Raman intensity is enhanced when the polarization of the incident laser is parallel (perpendicular) to the armchair (zigzag) edge. The difference in the behavior of the pseudospin with respect to the zigzag and armchair edges is the origin of the asymmetry between the LO and TO modes. Our results are summarized in Table~\ref{tab:1}. \begin{table}[htbp] \caption{\label{tab:1} Dependences of the Raman intensities and Kohn anomalies on the $\Gamma$ point optical phonon modes. The symbols $\bigcirc$ and $\times$ for Raman intensity and the Kohn anomaly represent `occurrence' and `absence', respectively. 
There is asymmetry between the Raman intensity and Kohn anomaly, that is, the Kohn anomaly occurs only for the LO mode, while the mode with a strong Raman intensity changes according to the edge shape. Raman intensity is enhanced when the polarization of the incident laser light is parallel (LO) to the armchair edge or when it is perpendicular (TO) to the zigzag edge. } \begin{tabular}{c|cccc} {\bf Position} & {\bf Mode} & {\bf Raman} & {\bf Kohn} & {\bf Polarization}\\ \hline {\bf zigzag} & LO & $\times$ & $\bigcirc$ & $\times$ \\ & TO & $\bigcirc$ & $\times$ & $\bigcirc$ \\ \hline {\bf armchair} & LO & $\bigcirc$ & $\bigcirc$ & $\bigcirc$ \\ & TO & $\times$ & $\times$ & $\times$ \\ \hline {\bf bulk} & LO & $\bigcirc$ & $\bigcirc$ & $\bigcirc$ \\ & TO & $\bigcirc$ & $\bigcirc$ & $\bigcirc$ \\ \end{tabular} \end{table} \section*{Acknowledgments} K. S. would like to thank T. Osada (Institute for Solid State Physics, University of Tokyo) for a useful comment on the pseudospin for states near the K$'$ point. R. S. acknowledges a MEXT Grant (No.~20241023). This work was supported by a Grant-in-Aid for Specially Promoted Research (No.~20001006) from MEXT.
\subsection{Elementary functional analytic results} Recall the definition of the Banach space $\ell^1_\nu$ given in \eqref{eq:ell_nu_one}. \begin{lem}\label{l:dualbound} The dual space $(\ell^1_\nu)^*$ is isometrically isomorphic to \[ \ell_{\nu^{-1}}^\infty = \left\{ c = (c_k)_{k \ge 0} : \left\| c \right\|_{\infty,\nu^{-1}} \bydef \max \left( |c_0|, \tfrac{1}{2} \sup_{k \geq 1} |c_k| \nu^{-k} \right) < \infty \right\}. \] For all $b \in \ell^1_\nu$ and $c \in \ell^\infty_{\nu^{-1}}$ we have \begin{equation}\label{e:dualbound} \Bigl|\sum_{k\geq 0} c_k b_k \Bigr| \leq \|c\|_{\infty,\nu^{-1}} \|b\|_{1,\nu}. \end{equation} \end{lem} Given a sequence in $\ell^1_\nu$ we extend it symmetrically to negative indices. The discrete convolution product~\eqref{e:convolution} then acts naturally on $\ell^1_\nu$ by \[ (b*\tilde{b})_k = \sum_{k_1+k_2 = k \atop k_1,k_2 \in \mathbb{Z}} b_{|k_1|} \tilde{b}_{|k_2|}. \] Lemma~\ref{lem:Banach_algebra_conv} below states that $\ell_\nu^1$ is a Banach algebra under this discrete convolution product, a fact which is useful for the analysis of nonlinear problems. \begin{rem}\label{r:Q} We use the bound~\eqref{e:dualbound} to estimate the convolution \[ \sup_{\|v\|_{1,\nu} \leq 1} | (b \ast v)_k | = \sup_{\|v\|_{1,\nu} \leq 1} \left| \sum_{k' \in \mathbb{Z}} v_{|k'|} b_{|k-k'|} \right| \leq \max \left\{ |b_k| ,\sup_{k'\geq1} \frac{|b_{|k-k'|} + b_{|k+k'|}|}{2 \nu^{k'}} \right\} \bydef \mathcal{Q}_k(b). \] Given $v = (v_k)_{k \ge 0} \in \ell_\nu^1$ and a truncation index $m \in \mathbb{N}$, define $\widehat{v} \in \ell_\nu^1$ as follows: \[ \widehat{v}_k \bydef \begin{cases} 0 & \text{if } k<m,\\ v_k & \text{if } k \geq m. \end{cases} \] An estimate similar to the one above leads to \begin{equation} \label{eq:hQ} \sup_{\|v\|_{1,\nu} \leq 1} | (b \ast \widehat{v})_k | \leq \sup_{k'\geq m} \frac{|b_{|k-k'|} + b_{|k+k'|}|}{2 \nu^{k'}} \bydef \hat{\mathcal{Q}}_k(b). \end{equation} Inequality \eqref{eq:hQ} is useful when computing the $Z_1$ bound (e.g. see Section~\ref{sec:Z1_CR}). 
\end{rem} \begin{lem} \label{lem:Banach_algebra_conv} If $\nu \geq 1$ and $b, \tilde{b} \in \ell_{\nu}^1$, then $b \ast \tilde{b} \in \ell_{\nu}^1$ and \begin{equation} \label{eq:Banach_algebra_conv} \| b \ast \tilde{b} \|_{1,\nu} \le \| b \|_{1,\nu} \| \tilde{b} \|_{1,\nu}. \end{equation} \end{lem} The final results of this short section concern the computation of norms of bounded linear operators defined on $\ell_\nu^1$, and are useful when computing the bounds $Z_0$ and $Z_2$. \begin{lem}\label{l:Blnu1norm} Let $\Gamma \in B(\ell^1_{\nu})$, the space of bounded linear operators from $\ell^1_\nu$ to itself, acting as $(\Gamma b)_k =\sum_{m\geq 0} \Gamma_{k,m} b_m$ for $k \geq 0$. Define the weights $\omega=(\omega_k)_{k\geq0}$ by $\omega_0=1$ and $\omega_k = 2 \nu^k$ for $k\geq 1$. Then \[ \| \Gamma \|_{B(\ell^1_\nu)} = \sup_{m \geq 0} \frac{1}{\omega_m} \sum_{k\geq 0} | \Gamma_{k,m} | \omega_k . \] \end{lem} The following consequence of Lemma~\ref{l:Blnu1norm} provides an explicit bound on norms of bounded linear operators on $\ell_\nu^1$ with a specific structure, namely as in \eqref{eq:Gamma_blocks}. \begin{cor}\label{cor:OperatorNorm} Let $\Gamma^{(m)}$ be an $m \times m$ matrix, $\{\mu_n\}_{n=m}^{\infty}$ be a sequence of numbers with \[ |\mu_n| \leq |\mu_{m}|, \qquad\text{for all } n \geq m, \] and $\Gamma \colon \ell_{\nu}^1 \to \ell_{\nu}^1$ be the linear operator defined by \begin{equation} \label{eq:Gamma_blocks} \Gamma b = \begin{pmatrix} \Gamma^{(m)} & & 0 \\ & \mu_{m} & \\ 0 & & \mu_{m+1} & \\ & & & \ddots \end{pmatrix} \begin{pmatrix} b^{(m)} \\ b_{m} \\ b_{m+1} \\ \vdots \end{pmatrix}. \end{equation} Here $b^{(m)} = (b_0, \ldots, b_{m-1})^T \in \mathbb{R}^{m}$. Then $\Gamma \in B(\ell_{\nu}^1)$ and \begin{equation} \label{eq:normA} \| \Gamma \|_{B(\ell_{\nu}^1)} = \max (K, |\mu_m|), \end{equation} where \[ K \bydef \max_{0 \leq j \leq m-1} \frac{1}{\omega_j}\sum_{i=0}^{m-1} |\Gamma_{i,j}| \omega_i. 
\] \end{cor} \subsection{The \boldmath $Y_0$ \unboldmath bound} The nonlinear term of $F_1(\bar{a})$ given in \eqref{eq:F=0_CR} involves the convolution product $(\bar{a}_1*\bar{a}_1*\bar{a}_1)_{k}$, which vanishes for $k \geq 3m-2$. This implies that $(F_1(\bar{a}))_k=0$ for all $k \ge 3m-2$. Also, $(F_2(\bar{a}))_k=0$, for all $k \ge m$. We set \begin{align*} Y_0^{(1)} &\bydef \biggl| \sum_{j=1}^2 \left( A^{(m)}_{1,j} F^{(m)}_j(\bar{a}) \right)_0 \biggr| + 2 \sum_{k=1}^{m-1} \biggl| \sum_{j=1}^2 \left( A^{(m)}_{1,j} F^{(m)}_j(\bar{a}) \right)_k \biggr| \nu^{k} \\ Y_0^{(2)} &\bydef \biggl| \sum_{j=1}^2 \left( A^{(m)}_{2,j} F^{(m)}_j(\bar{a}) \right)_0 \biggr| + 2 \sum_{k=1}^{m-1} \biggl| \sum_{j=1}^2 \left( A^{(m)}_{2,j} F^{(m)}_j(\bar{a}) \right)_k \biggr| \nu^{k} + 2 \sum_{k=m}^{3m-3} \biggl| \frac{1}{k} (F_1(\bar{a}))_{k} \biggr| \nu^{k} \end{align*} which is a collection of finite sums that can be evaluated with interval arithmetic. We infer that \[ \| [ AF(\bar{a})]_i \|_{1,\nu} = \biggl\| \sum_{j=1}^2 A_{i,j} F_j(\bar{a}) \biggr\|_{1,\nu} \le Y_0^{(i)}, \qquad\text{for } i=1,2, \] and we set \begin{equation} \label{eq:Y0_CR} Y_0 \bydef \max \left(Y_0^{(1)},Y_0^{(2)} \right). \end{equation} \subsection{The \boldmath $Z_0$ \unboldmath bound} We look for a bound of the form $ \| I - A A^{\dagger}\|_{B(X)} \le Z_0 $. Recalling the definitions of $A$ and $A^\dagger$ given in \eqref{eq:A_CR} and \eqref{eq:dagA_CR}, let $B \bydef I - A A^\dagger$ be the bounded linear operator represented as \begin{equation*} B = \begin{pmatrix} B_{1,1} & B_{1,2} \\ B_{2,1} & B_{2,2} \end{pmatrix}. \end{equation*} We remark that $( B_{i,j} )_{n_1,n_2}=0$ for any $i,j =1,2$ whenever $n_1 \ge m$ or $n_2 \ge m$. Hence we can compute the norms $\|B_{i,j}\|_{B(\ell^1_\nu)}$ using Lemma~\ref{l:Blnu1norm}. 
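For concreteness, the norm computation of Corollary~\ref{cor:OperatorNorm} can be sketched as follows in floating-point arithmetic. This is only a toy stand-in: in the actual proofs these quantities are evaluated with interval arithmetic, and the function names below are ours, not those of the accompanying code.

```python
import numpy as np

def omega(n, nu):
    # weights of the l^1_nu norm: omega_0 = 1, omega_k = 2 nu^k for k >= 1
    w = 2.0 * nu ** np.arange(n, dtype=float)
    w[0] = 1.0
    return w

def finite_block_norm(G, nu):
    # K = max over columns j of (1/omega_j) * sum_i |G_{i,j}| omega_i
    w = omega(G.shape[0], nu)
    return max(float(np.abs(G[:, j]) @ w) / w[j] for j in range(G.shape[1]))

def operator_norm(G, mu_m, nu):
    # ||Gamma||_{B(l^1_nu)} = max(K, |mu_m|) for the block operator (eq:Gamma_blocks)
    return max(finite_block_norm(G, nu), abs(mu_m))
```

For instance, with $\nu=1$ and the diagonal block ${\rm diag}(2,3)$, the weighted column sums are $2$ and $3$, so the finite part contributes $K=3$.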
Given $h =(h_1,h_2) \in X = \ell_\nu^1 \times \ell_\nu^1$ with $\|h\|_X = \max(\|h_1\|_{1,\nu},\|h_2\|_{1,\nu}) \leq 1$, we obtain \[ \|(Bh)_i \|_{1,\nu} = \biggl\| \sum_{j=1}^2 B_{i,j} h_j \biggr\|_{1,\nu} \le \sum_{j=1}^2 \| B_{i,j} \|_{B(\ell_\nu^1)}. \] Hence we define \begin{equation} \label{eq:Z0_CR} Z_0 \bydef \max\left( \|B_{1,1}\|_{B(\ell^1_\nu)} + \|B_{1,2}\|_{B(\ell^1_\nu)} , \|B_{2,1}\|_{B(\ell^1_\nu)} + \|B_{2,2}\|_{B(\ell^1_\nu)} \right), \end{equation} where each norm $\|B_{i,j}\|_{B(\ell^1_\nu)}$ can be computed using formula \eqref{eq:normA} with vanishing tail terms. \subsection{The \boldmath $Z_1$ \unboldmath bound} \label{sec:Z1_CR} Recall that we look for the bound $ \| A[DF(\bar{a}) - A^{\dagger} ] \|_{B(X)} \le Z_1 $. Given $h=(h_1,h_2) \in X$ with $\|h\|_X \le 1$, set \[ z \bydef [DF(\bar{a}) - A^{\dagger} ] h. \] Since in $z$ some of the terms involving $((h_1)_k)_{k=0}^{m-1}$ will cancel, it is useful to introduce $\widehat h_1$ as follows: \[ (\widehat h_1)_k \bydef \begin{cases} 0 & \text{if } k<m,\\ (h_1)_k & \text{if } k \geq m. \end{cases} \] Then, \begin{align*} (z_1)_k &= \begin{cases} \displaystyle -3 \lambda (\bar{a}_1*\bar{a}_1*\widehat h_1)_k & \text{for } k=0,\dots,m-1 \\ \displaystyle \lambda (h_1)_{k} - 3 \lambda (\bar{a}_1*\bar{a}_1* h_1)_k & \text{for } k \ge m \\ \end{cases} \\ (z_2)_k &= \begin{cases} 0& \text{for }k=0,\dots,m-1\\ (h_2)_{k} & \text{for }k \ge m. \end{cases} \end{align*} By \eqref{eq:hQ}, we get that \[ | (z_1)_k| \le 3 |\lambda| \hat{\mathcal{Q}}_k(\bar{a}_1*\bar{a}_1) , \qquad \text{for } k =0,\dots,m-1. 
\] Hence, \begin{align*} \|(Az)_1 \|_{1,\nu} & \le \sum_{j=1}^2 \| A_{1,j} z_j \|_{1,\nu} = \sum_{k=0}^{m-1} \bigl| \bigl(A^{(m)}_{1,1} z_1^{(m)} \bigr)_k \bigr| \nu^{k} + \sum_{k \ge m} \frac{1}{k} | (z_2)_k | \nu^{k} \\ & \le 3 |\lambda| \sum_{k=0}^{m-1} \bigl| \bigl( |A^{(m)}_{1,1}| \hat{\mathcal{Q}}^{(m)}(\bar{a}_1*\bar{a}_1) \bigr)_k \bigr| \nu^{k} + \frac{1}{2m} \left( 2 \sum_{k \ge m} | (z_2)_k | \nu^{k} \right) \\ & \le 3 |\lambda|\sum_{k=0}^{m-1} \bigl| \bigl( |A^{(m)}_{1,1}| \hat{\mathcal{Q}}^{(m)}(\bar{a}_1*\bar{a}_1) \bigr)_k \bigr| \nu^{k} + \frac{1}{2m} \| h_2\|_{1,\nu} \\ & \le 3 |\lambda| \sum_{k=0}^{m-1} \bigl| \bigl( |A^{(m)}_{1,1}| \hat{\mathcal{Q}}^{(m)}(\bar{a}_1*\bar{a}_1) \bigr)_k \bigr| \nu^{k} + \frac{1}{2m} \,\bydef Z_1^{(1)}, \end{align*} and similarly, now using the Banach algebra property of Lemma~\ref{lem:Banach_algebra_conv}, \begin{align*} \|(Az)_2 \|_{1,\nu} & \le \sum_{j=1}^2 \| A_{2,j} z_j \|_{1,\nu} = \| A_{2,1} z_1 \|_{1,\nu} = \sum_{k=0}^{m-1} \bigl| \bigl(A^{(m)}_{2,1} z_1^{(m)} \bigr)_k \bigr| \nu^{k} + \sum_{k \ge m} \frac{1}{k} | (z_1)_k | \nu^{k} \\ & \le 3 |\lambda| \sum_{k=0}^{m-1} \bigl| \bigl( |A^{(m)}_{2,1}| \hat{\mathcal{Q}}^{(m)}(\bar{a}_1*\bar{a}_1) \bigr)_k \bigr| \nu^{k} + \frac{1}{2m} \left( 2 \sum_{k \ge m} | (z_1)_k | \nu^{k} \right) \\ & \le 3 |\lambda|\sum_{k=0}^{m-1} \bigl| \bigl( |A^{(m)}_{2,1}| \hat{\mathcal{Q}}^{(m)}(\bar{a}_1*\bar{a}_1) \bigr)_k \bigr| \nu^{k} + \frac{|\lambda|}{2m} \left( \| h_1\|_{1,\nu} + 3 (\| \bar{a}_1 \|_{1,\nu})^2 \| h_1\|_{1,\nu} \right) \\ & \le 3 |\lambda| \sum_{k=0}^{m-1} \bigl| \bigl( |A^{(m)}_{2,1}| \hat{\mathcal{Q}}^{(m)}(\bar{a}_1*\bar{a}_1) \bigr)_k \bigr| \nu^{k} + \frac{|\lambda|}{2m} \left( 1 + 3 (\| \bar{a}_1 \|_{1,\nu})^2 \right) \,\bydef Z_1^{(2)}. \end{align*} We thus define \begin{equation} \label{eq:Z1_CR} Z_1 \bydef \max\left(Z_1^{(1)},Z_1^{(2)} \right). 
\end{equation} \subsection{The \boldmath $Z_2$ \unboldmath bound} Let $r>0$ and $c=(c_1,c_2) \in B_r(\bar{a})$, that is, $\| c- \bar{a} \|_{X} = \max( \|c_1- \bar{a}_1\|_{1,\nu},\|c_2- \bar{a}_2\|_{1,\nu} ) \le r$. Given $\| h \|_X \leq 1$, note that $\left( [DF_2(c)-DF_2(\bar{a})] h \right)_k=0$ and that \[ \left( [DF_1(c)-DF_1(\bar{a})] h \right)_k = - 3 \lambda \left( (c_1 * c_1 - \bar{a}_1 * \bar{a}_1) * h_1 \right)_{k} \] so that \begin{align*} \| A [DF(c) - DF(\bar{a})]\|_{B(X)} &= \sup_{\| h \|_X \leq 1} \| A [DF(c) - DF(\bar{a})] h \|_{X} \\ & \le \| A \|_{B(X)} \sup_{\| h \|_X \leq 1} \| [DF(c) - DF(\bar{a})] h \|_{X} \\ & = 3 |\lambda| \| A \|_{B(X)} \sup_{\| h \|_X \leq 1} \| (c_1- \bar{a}_1)*(c_1+\bar{a}_1) * h_1 \|_{1,\nu} \\ & \le 3 |\lambda| \| A \|_{B(X)} \sup_{\| h \|_X \leq 1} \| c_1- \bar{a}_1 \|_{1,\nu} \| c_1+ \bar{a}_1 \|_{1,\nu} \| h_1 \|_{1,\nu} \\ & \le 3 |\lambda| \| A \|_{B(X)} r (\| c_1 \|_{1,\nu}+ \| \bar{a}_1 \|_{1,\nu}) \\ & \le 3 |\lambda| \| A \|_{B(X)} r (r+ 2\| \bar{a}_1 \|_{1,\nu}). \end{align*} Then, assuming a loose a priori bound $r \le 1$ on the radius, we set \begin{align} \label{eq:Z2_CR} Z_2 &\bydef 3 |\lambda| \| A \|_{B(X)} (1+ 2\| \bar{a}_1 \|_{1,\nu}), \end{align} with \[ \| A \|_{B(X)} = \max\left( \|A_{1,1}\|_{B(\ell^1_\nu)} + \|A_{1,2}\|_{B(\ell^1_\nu)} , \|A_{2,1}\|_{B(\ell^1_\nu)} + \|A_{2,2}\|_{B(\ell^1_\nu)} \right), \] where each operator norm $\|A_{i,j}\|_{B(\ell^1_\nu)}$ can be computed using formula \eqref{eq:normA}. \subsection{Computation of the critical points} \label{sec:rig_comp_critical points} Studying a critical point $(u(x),v(x))$ of the action functional $\mathcal{A}_{\text{CR}}$ reduces to studying the steady states (time independent solutions) of Problem \eqref{e:CR}, that is \begin{equation}\label{eq:CR_BVP} \begin{cases} 0 = v_x + \lambda_1 u- \lambda_2 u^3 , \\ 0 = -u_x + v , \\ u_x(0)=u_x(\pi)=0,\\ v(0)=v(\pi)=0. 
\end{cases} \end{equation} Here we have added the redundant Neumann boundary conditions for $u$ (they follow immediately from the second equation) to make the symmetries more obvious. Due to the Neumann boundary conditions imposed on $u$ and the Dirichlet boundary conditions imposed on $v$, a solution $(u,v)$ of \eqref{eq:CR_BVP} can be expressed using the Fourier expansions \begin{subequations} \label{e:pairFourier} \begin{align} u(x) &= \sum_{k \in \mathbb{Z}} (a_1)_k e^{i k x}, && \quad (a_1)_k \in \mathbb{R} ~~ \text{and} ~~ (a_1)_{-k} = (a_1)_k ~\text{ for } k>0, \\ v(x) &= \sum_{k \in \mathbb{Z}} i (a_2)_k e^{i k x}, && \quad (a_2)_k \in \mathbb{R} ~~ \text{and} ~~ (a_2)_{-k} = -(a_2)_k ~\text{ for } k>0. \end{align} \end{subequations} There are several ways to transform the problem~\eqref{eq:CR_BVP} to the Fourier setting. Since the variational formulation is a crucial viewpoint, we choose to start by writing the action in terms of the Fourier (or rather cosine and sine) coefficients explicitly: \[ \mathcal{A}_{\text{CR}} (a_1,a_2) = 2 \sum_{k=1}^\infty k (a_1)_k (a_2)_k - \sum_{k=1}^\infty (a_2)_k^2 - \frac{\lambda_1}{2} (a_1^2)_0 + \frac{\lambda_2}{4} (a_1^4)_0 , \] where $a_1=\{(a_1)_k\}_{k \geq 0}$ and $a_2=\{(a_2)_k\}_{k > 0}$ are real variables. This creates a notationally inconvenient asymmetry between $a_1$ and $a_2$, and we use $(a_2)_0=0$ throughout without further ado. The convolution powers are the natural ones stemming from the convolution product \begin{equation}\label{e:convolution} (a_1 \ast \tilde{a}_1)_k = \sum_{k' \in \mathbb{Z}} (a_1)_{k'} (\tilde{a}_1)_{k-k'}, \end{equation} when taking into account the symmetries in~\eqref{e:pairFourier}, for example \begin{alignat*}{1} (a_1^2)_0 & = (a_1)_0^2 + 2 \sum_{k=1}^\infty (a_1)_k^2, \\ (a_1^4)_0 & = \sum_{k_1+k_2+k_3+k_4=0 \atop k_i \in \mathbb Z} (a_1)_{|k_1|} (a_1)_{|k_2|} (a_1)_{|k_3|} (a_1)_{|k_4|}. 
\end{alignat*} We have scaled out an irrelevant factor $\pi$ in the action compared to~\eqref{eq:functional_CR}. We choose as the inner product \begin{equation}\label{e:innerproduct} \bigl\langle (a_1,a_2), (\tilde{a}_1,\tilde{a}_2) \bigr\rangle \bydef \sum_{k \geq 0} (a_1)_k (\tilde{a}_1)_k + \sum_{k > 0} (a_2)_k (\tilde{a}_2)_k, \end{equation} so that the Hessian will have the straightforward appearance of a symmetric matrix (when restricted to natural finite dimensional projections). \begin{rem} We note that the alternative inner product \[ \bigl\langle\!\bigl\langle (a_1,a_2), (\tilde{a}_1,\tilde{a}_2) \bigr\rangle\!\bigr\rangle \bydef (a_1)_0 (\tilde{a}_1)_0 + 2 \sum_{k >0 } (a_1)_k (\tilde{a}_1)_k + 2 \sum_{k > 0} (a_2)_k (\tilde{a}_2)_k, \] is the one corresponding to the $L^2$ inner product in function space, which was used to interpret the Cauchy-Riemann equations~\eqref{e:CR} as the negative gradient flow of the action functional~\eqref{eq:functional_CR}. In terms of reading off symmetry properties from matrix representations, this alternative inner product is less convenient, although this could be remedied by rescaling $(a_i)_k$ for $k>0$ by a factor $\sqrt{2}$. On the other hand, a disadvantage of using such rescaled Fourier coefficients is that it would complicate the description of the convolution product. Since the relative index is independent of the particular choice of inner product, we choose to work with~\eqref{e:innerproduct} in the setup for the relative index computations. \end{rem} We write $a=(a_1,a_2)$. 
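The convolution powers with symmetric extension are easy to check numerically. The following sketch (with our own naming, and plain floats rather than the rigorous arithmetic used in the proofs) evaluates $(b*\tilde b)_k$ as in~\eqref{e:convolution} for one-sided sequences extended symmetrically to negative indices, so that, e.g., $(a_1^2)_0 = (a_1)_0^2 + 2\sum_{k\ge1}(a_1)_k^2$ can be verified directly.

```python
import numpy as np

def sym_conv(b, bt):
    # (b * bt)_k = sum_{k1 + k2 = k} b_{|k1|} bt_{|k2|} (symmetric extension),
    # returned for k = 0, 1, ..., len(b) + len(bt) - 2
    fb = np.concatenate([b[:0:-1], b])     # b_{N-1}, ..., b_1, b_0, b_1, ..., b_{N-1}
    fbt = np.concatenate([bt[:0:-1], bt])
    full = np.convolve(fb, fbt)
    center = (len(b) - 1) + (len(bt) - 1)  # array position of the index k = 0
    return full[center:]
```

For $a_1 = (1,2,3,0,0,\dots)$ this gives $(a_1^2)_0 = 1 + 2(4+9) = 27$, matching the displayed formula.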
Taking the negative gradient of $\mathcal{A}_{\text{CR}}$ with respect to the inner product~\eqref{e:innerproduct}, we arrive at the system \begin{equation}\label{eq:F=0_CR} \left\{ \begin{array}{rlll} (F_1(a))_0 &\!\!\!\!\bydef \lambda_1 (a_1)_0 - \lambda_2 (a_1^3)_0 &=0 \\[1mm] (F_1(a))_k &\!\!\!\!\bydef 2[-k (a_2)_k + \lambda_1 (a_1)_k - \lambda_2 (a_1^3)_k ] &=0& \qquad \text{for } k>0,\\[1mm] (F_2(a))_k &\!\!\!\!\bydef 2[-k (a_1)_k + (a_2)_k] &=0& \qquad \text{for } k > 0. \end{array}\right. \end{equation} We use~\eqref{eq:F=0_CR} as the Fourier equivalent of~\eqref{eq:CR_BVP}. The factors 2 in~\eqref{eq:F=0_CR} for $k>0$ are the result of the symmetries in~\eqref{e:pairFourier} in combination with the inner product choice~\eqref{e:innerproduct}. We set $F(a) = (\{F_1(a)\}_{k \ge 0},\{F_2(a)\}_{k > 0})$. Given a weight $\nu \ge 1$, consider the Banach spaces (whose norms are unrelated to the inner product~\eqref{e:innerproduct}) \begin{equation} \label{eq:ell_nu_one} \ell_\nu^1 \bydef \left\{ \tilde{a} = (\tilde{a}_k)_{k \ge 0} : \|\tilde{a}\|_{1,\nu} \bydef |\tilde{a}_0| + 2 \sum_{k=1}^\infty |\tilde{a}_k| \nu^k < \infty \right\}, \end{equation} and $\ell_\nu^{1,0} = \{ \tilde{a}\in \ell_\nu^1 : \tilde{a}_0=0 \}$, and define $X \bydef \ell_\nu^1 \times \ell_\nu^{1,0}$, with the induced norm, given $a=(a_1,a_2) \in X$, \begin{equation} \label{eq:normX_CR} \|a\|_X \bydef \max\{ \|a_1\|_{1,\nu},\|a_2\|_{1,\nu} \}. \end{equation} The problem of looking for solutions of \eqref{eq:CR_BVP} therefore reduces to finding $a \in X$ such that $F(a)=0$, where the map $F$ is defined component-wise in \eqref{eq:F=0_CR}. Solving the problem $F=0$ in $X$ is done using computer-assisted proofs. The following Newton-Kantorovich type theorem provides an efficient means of performing that task. Denote by $B_r(a) \bydef \{ x \in X : \| x - a \|_X \le r\}$ the closed ball of radius $r>0$ centered at a given $a \in X$. 
\begin{thm}[{\bf A Newton-Kantorovich type theorem}] \label{thm:radii_polynomials} Let $X$ and $X'$ be Banach spaces, $A^{\dagger} \in B(X,X')$ and $A \in B(X',X)$ be bounded linear operators. Assume $F \colon X \to X'$ is Fr\'echet differentiable at $\bar{a} \in X$, $A$ is injective and $A F \colon X \to X.$ Let $Y_0$, $Z_0$ and $Z_1$ be nonnegative constants, and let $Z_2:(0,\infty) \to (0,\infty)$ be a function satisfying \begin{align} \label{eq:general_Y_0} \| A F(\bar{a}) \|_X &\le Y_0 \\ \label{eq:general_Z_0} \| I - A A^{\dagger}\|_{B(X)} &\le Z_0 \\ \label{eq:general_Z_1} \| A[A^{\dagger} - DF(\bar{a})] \|_{B(X)} &\le Z_1, \\ \label{eq:general_Z_2} \| A[DF(c) - DF(\bar{a})]\|_{B(X)} &\le Z_2(r) r, \quad \text{for all } c \in B_r(\bar{a}), \end{align} where $\| \cdot \|_{B(X)}$ denotes the operator norm. Define the radii polynomial by \begin{equation} \label{eq:general_radii_polynomial} p(r) \bydef Z_2(r) r^2 - ( 1 - Z_1 - Z_0) r + Y_0. \end{equation} If there exists $r_0>0$ such that $p(r_0)<0$, then there exists a unique $\tilde{a} \in B_{r_0}(\bar{a})$ such that $F(\tilde{a}) = 0$. \end{thm} \begin{proof} The idea of the proof (for the details, see Appendix A in \cite{MR3612178}) is to show that the Newton-like operator $T(a) \bydef a-AF(a)$ maps $B_{r_0}(\bar{a})$ into itself and is a contraction mapping, that is, there exists $\kappa \in [0,1)$ such that $\|T(x)-T(y)\|_X \le \kappa \| x - y \|_X$, for all $x,y \in B_{r_0}(\bar{a})$. The result then follows from the Banach fixed point theorem. \end{proof} Proving the existence of a solution of $F=0$ using Theorem~\ref{thm:radii_polynomials} is often called the {\em radii polynomial approach} (see e.g.~\cite{MR2338393,MR2443030}). In practice, this approach consists of considering a finite dimensional projection of \eqref{eq:F=0_CR}, computing an approximate solution $\bar{a}$ (i.e. 
such that $F(\bar{a}) \approx 0$), considering an approximation $A^\dag$ of the derivative $DF(\bar{a})$ and an approximate inverse $A$ of $DF(\bar{a})$. Once the numerical approximation $\bar{a}$ and the linear operators $A$ and $A^\dag$ are obtained, formulas for the bounds $Y_0$, $Z_0$, $Z_1$ and $Z_2(r)$ are derived analytically and finally implemented in a computer program using interval arithmetic (see \cite{MR0231516}). The final step is to find (if possible) a radius $r_0>0$ for which $p(r_0)<0$. In case such an $r_0$ exists, it naturally provides a $C^0$ bound for the error between the approximate solution $(\bar u(x),\bar v(x))$ and the exact solution $(\tilde u(x),\tilde v(x))$, which have Fourier coefficients $\bar{a}$ and $\tilde{a}$, respectively, see~\eqref{e:pairFourier}. The following remark makes this statement explicit. \begin{rem} [{\bf Explicit error control}] \label{rmk:error_control} Assume that $r_0>0$ satisfies $p(r_0)<0$, where $p$ is the radii polynomial defined in \eqref{eq:general_radii_polynomial}. Then the unique $\tilde{a} \in B_{r_0}(\bar{a})$ such that $F(\tilde{a}) = 0$ satisfies \[ \| \tilde{a} - \bar{a} \|_X = \max \left\{ \| \tilde{a}_1 - \bar{a}_1 \|_{1,\nu}, \| \tilde{a}_2 - \bar{a}_2 \|_{1,\nu} \right\} \le r_0, \] which implies that \begin{align*} \| \tilde u - \bar u \|_{C^0} &= \sup_{x \in [0,\pi]} | \tilde u(x) - \bar u(x) | = \sup_{x \in [0,\pi]} \left| \sum_{k \in \mathbb{Z}} [ (\tilde a_1)_k - (\bar a_1)_k ] e^{i k x} \right| \\ & \le \sum_{k \in \mathbb{Z}} | (\tilde a_1)_k - (\bar a_1)_k | \le \sum_{k \in \mathbb{Z}} | (\tilde a_1)_k - (\bar a_1)_k | \nu^{|k|} = \| \tilde{a}_1 - \bar{a}_1 \|_{1,\nu} \le r_0. \end{align*} Analogously, we obtain the bound $\| \tilde v - \bar v \|_{C^0} \le r_0$. \end{rem} As mentioned previously, the radii polynomial approach begins by computing an approximate solution $\bar{a}$ of $F=0$. This first requires considering a finite dimensional projection. 
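Before turning to the projection, we note that the final step of the approach, finding $r_0$ with $p(r_0)<0$, amounts to checking the sign of a quadratic. A floating-point sketch (with hypothetical bound values; the actual verification uses interval arithmetic, and the function names are ours):

```python
import math

def radii_polynomial(r, Y0, Z0, Z1, Z2):
    # p(r) = Z2 r^2 - (1 - Z1 - Z0) r + Y0, here for an r-independent Z2
    return Z2 * r**2 - (1.0 - Z1 - Z0) * r + Y0

def negativity_interval(Y0, Z0, Z1, Z2):
    # Open interval of radii r on which p(r) < 0, or None if no such radius exists
    b = 1.0 - Z1 - Z0
    disc = b * b - 4.0 * Z2 * Y0
    if b <= 0.0 or disc <= 0.0:
        return None
    root = math.sqrt(disc)
    return ((b - root) / (2.0 * Z2), (b + root) / (2.0 * Z2))
```

Typically $Y_0$ is tiny (the defect of the numerical solution) and $Z_0+Z_1$ is well below $1$, so the interval of admissible radii is wide and any $r_0$ inside it yields both existence and the error bound of Remark~\ref{rmk:error_control}.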
Fixing a projection size $m \in \mathbb{N}$, denote a finite dimensional projection of $a \in X$ by $a^{(m)} = \big( ((a_1)_k)_{k=0}^{m-1} ,((a_2)_k)_{k=1}^{m-1} \big) \in \mathbb{R}^{2m-1}$. The finite dimensional projection of $F$ is then given by $F^{(m)}=(F_1^{(m)},F_2^{(m)}):\mathbb{R}^{2m-1} \to \mathbb{R}^{2m-1}$ defined by \begin{equation} \label{eq:projection_F_CR} F^{(m)}(a^{(m)}) \bydef \begin{bmatrix} \left( F_1(a^{(m)})_k \right)_{0\leq k<m} \\ \left( F_2(a^{(m)})_k \right)_{1\leq k<m} \end{bmatrix}. \end{equation} Assume that a solution $\bar{a}^{(m)}$ such that $F^{(m)}(\bar{a}^{(m)}) \approx 0$ has been computed (e.g.~using Newton's method). Given $i=1,2$, denote by $\bar{a}_i = \left( (\bar{a}_i)_0 ,\dots,(\bar{a}_i)_{m-1},0,0,0,\dots \right)$ the vector obtained by embedding $\bar{a}_i^{(m)} \in \mathbb{R}^m$ in the infinite dimensional space $\ell_\nu^1$, {\em padding} the tail with infinitely many zeroes. Recall that we set $(\bar{a}_2)_0=0$ by symmetry convention. Denote $\bar{a} = (\bar{a}_1,\bar{a}_2)$, and for the sake of simplicity of the presentation, we use the same notation $\bar{a}$ to denote $\bar{a} \in X$ and $\bar{a}^{(m)} \in \mathbb{R}^{2m-1}$. Denote by $DF^{(m)}(\bar{a})$ the Jacobian of $F^{(m)}$ at $\bar{a}$, and let us write it as \[ DF^{(m)}(\bar{a})= \begin{pmatrix} D_{a_1} F_1^{(m)}(\bar{a}) & D_{a_2} F_1^{(m)}(\bar{a})\\ D_{a_1} F_2^{(m)}(\bar{a}) & D_{a_2} F_2^{(m)}(\bar{a}) \end{pmatrix} \in M_{2m-1}(\mathbb{R}). \] The next step is to construct the linear operator $A^\dag$ (an approximation of the derivative $DF(\bar{a})$), and the linear operator $A$ (an approximate inverse of $DF(\bar{a})$). Let \begin{equation} \label{eq:dagA_CR} A^\dagger= \begin{pmatrix} A_{1,1}^\dagger & A_{1,2}^\dagger\\ A_{2,1}^\dagger & A_{2,2}^\dagger \end{pmatrix} , \end{equation} whose action on an element $h=(h_1,h_2) \in X$ is defined by $(A^\dagger h)_i = A_{i,1}^\dagger h_1 + A_{i,2}^\dagger h_2$, for $i=1,2$. 
Here the action of $A_{i,j}^\dagger$ is defined as \begin{align*} (A_{i,1}^\dagger h_1)_k &= \begin{cases} \bigl(D_{a_1} F_i^{(m)}(\bar{a}) h_1^{(m)} \bigr)_k &\quad\text{for } 0 \leq k < m , \\ -\delta_{i,2} k (h_1)_k &\quad\text{for } k \ge m, \end{cases} \\ (A_{i,2}^\dagger h_2)_k &= \begin{cases} \bigl(D_{a_2} F_i^{(m)}(\bar{a}) h_2^{(m)} \bigr)_k &\quad\text{for } 1 \leq k < m, \\ -\delta_{i,1} k (h_2)_k &\quad\text{for } k \ge m, \end{cases} \end{align*} where $\delta_{i,j}$ is the Kronecker $\delta$. Consider now a matrix $A^{(m)} \in M_{2m-1}(\mathbb{R})$ computed so that $A^{(m)} \approx {DF^{(m)}(\bar{a})}^{-1}$. We decompose it into four blocks: \[ A^{(m)}= \begin{pmatrix} A_{1,1}^{(m)} & A_{1,2}^{(m)}\\ A_{2,1}^{(m)} & A_{2,2}^{(m)} \end{pmatrix}. \] This allows defining the linear operator $A$ as \begin{equation} \label{eq:A_CR} A= \begin{pmatrix} A_{1,1} & A_{1,2}\\ A_{2,1} & A_{2,2} \end{pmatrix} , \end{equation} whose action on an element $h=(h_1,h_2) \in X$ is defined by $(Ah)_i = A_{i,1} h_1 + A_{i,2} h_2$, for $i=1,2$. The action of $A_{i,j}$ is defined as \begin{align*} (A_{i,1} h_1)_k &= \begin{cases} \left(A_{i,1}^{(m)} h_1^{(m)} \right)_k & \text{for } 0 \leq k < m \\ - \delta_{i,2} \frac{1}{k} (h_1)_k & \text{for } k \ge m \end{cases} \\ (A_{i,2} h_2)_k &= \begin{cases} \left( A_{i,2}^{(m)} h_2^{(m)} \right)_k & \text{for } 1 \leq k < m \\ -\delta_{i,1} \frac{1}{k} (h_2)_k & \text{for } k \ge m. \end{cases} \end{align*} Having obtained an approximate solution $\bar{a}$ and the linear operators $A^\dagger$ and $A$, the next step is to construct the bounds $Y_0$, $Z_0$, $Z_1$ and $Z_2(r)$ satisfying \eqref{eq:general_Y_0}, \eqref{eq:general_Z_0}, \eqref{eq:general_Z_1} and \eqref{eq:general_Z_2}, respectively. Their analytic derivation will be done explicitly in Section~\ref{s:CR} for the Cauchy-Riemann equations. Assume that using these explicit bounds we applied the radii polynomial approach and obtained $r_0>0$ such that $p(r_0)<0$. 
As the following remark shows, this implies that $A$ is an injective operator. \begin{rem}[{\bf Injectivity of the linear operator \boldmath$A$\unboldmath}] \label{remark:injectivity_of_A} If $r_0>0$ satisfies $p(r_0)<0$, then $Z_2(r_0) r_0^2 + (Z_0 + Z_1) r_0 + Y_0 < r_0$. Since $Y_0$, $Z_0$, $Z_1$ and $Z_2(r_0)$ are nonnegative, this implies that $\| I - A A^{\dagger}\|_{B(X)} \le Z_0 <1$. By construction of the linear operators $A$ and $A^\dagger$, this implies that \[ \| I_{\mathbb{R}^{2m-1}} - A^{(m)} DF^{(m)}(\bar{a}) \| < 1, \] which in turn implies that both $A^{(m)}$ and $DF^{(m)}(\bar{a})$ are invertible matrices in $M_{2m-1}(\mathbb{R})$. Since $A^{(m)}$ is invertible and the tail part of $A$ is invertible by construction, $A$ is injective. \end{rem} As a consequence of Remark~\ref{remark:injectivity_of_A}, if $r_0>0$ satisfies $p(r_0)<0$, then $A$ is injective, and therefore there exists a unique $\tilde{a} \in B_{r_0}(\bar{a})$ such that $F(\tilde{a})=0$. \begin{rem} Using a finite dimensional projection of size $m=100$ we computed two numerical approximations $\bar{a}^{(1)}$ and $\bar{a}^{(2)}$. In Figure~\ref{f:CR}, the approximate solutions $\bar{a}^{(1)}$ (left) and $\bar{a}^{(2)}$ (right) are plotted. For each approximation, the code {\tt script\_proofs\_CR.m} (available at \cite{codes_webpage}) computes with interval arithmetic (using INTLAB, see \cite{Ru99a}) the bounds $Y_0$, $Z_0$, $Z_1$ and $Z_2$ using the explicit formulas presented in Section~\ref{s:CR} with $\nu=1.01$. For each $\bar{a}^{(i)} $ ($i=1,2$), the code verifies that $p(r_0^i)<0$. From this, we conclude that there exists $\tilde{a}^{(i)} \in X$ such that $F(\tilde{a}^{(i)})=0$ and such that $\| \tilde{a}^{(i)} - \bar{a}^{(i)} \|_X \le r_0^i$, where $r_0^1 = 4.7 \cdot 10^{-11}$ and $r_0^2 = 1.1 \cdot 10^{-13}$. \end{rem} Having introduced the ingredients to compute a critical point $\tilde{a}$, we now turn to the question of controlling the spectrum of $DF(\tilde{a})$. 
\subsection{Controlling the spectrum of \boldmath $DF(\tilde{a})$ \unboldmath} \label{sec:rig_comp_spectrum} In this section, we assume that using the radii polynomial approach of Theorem~\ref{thm:radii_polynomials}, we have proven existence of a unique $\tilde{a} \in B_{r_0}(\bar{a}) \subset X$ such that $F(\tilde{a})=0$ for some $r_0>0$ satisfying $p(r_0)<0$. Denote by $DF(\tilde{a})$ the derivative at $\tilde{a}$. Recall that when we rigorously compute this solution we use an operator $A^\dagger$, defined by \eqref{eq:dagA_CR}, which approximates the Jacobian~$DF(\bar{a})$. Both the Hessian $DF(a)$ and the approximation $A^\dagger$ of $DF(\bar{a})$ are symmetric with respect to the inner product~\eqref{e:innerproduct}, hence their eigenvalues are real-valued. Given any $c \in X$, we define the homotopy between $DF(c)$ and $A^\dagger$ by \begin{equation} \label{eq:homotopy_dagA_DF} \mathcal{D}_{c}(\sigma) \bydef (1-\sigma) DF(c)+ \sigma A^\dagger , \qquad\text{for } \sigma \in [0,1]. \end{equation} \begin{thm} \label{thm:homotopy_dagA_DF} Assume that $r_0>0$ satisfies $p(r_0)<0$ with $p$ given in \eqref{eq:general_radii_polynomial}. For any $c \in B_{r_0}(\bar{a})$, we have \[ {\rm specflow}(\mathcal{D}_c(\sigma)) = 0. \] \end{thm} \begin{proof} From the hypothesis that $p(r_0)<0$, we obtain \begin{equation}\label{e:sumZ} Z_2(r_0) r_0 + Z_0 + Z_1 + \frac{Y_0}{r_0} = \frac{1}{r_0} ( Y_0 + (Z_0 + Z_1)r_0 + Z_2(r_0) r_0^2) < 1. 
\end{equation} Hence \begin{equation} \label{eq:bounds_less_than_one} \| I - A A^\dagger \|_{B(X)} \le Z_0 <1 \quad \text{and} \quad \sup_{c \in B_{r_0}(\bar{a})} \| I - A DF(c) \|_{B(X)} <1, \end{equation} where the first inequality follows from the fact that $Z_0<1$, and the second inequality holds since, for any $c \in B_{r_0}(\bar{a})$, \begin{align*} \| I - A DF(c) \|_{B(X)} &= \| I - A A^{\dagger} + A[A^{\dagger} - DF(\bar{a})] + A[DF(\bar{a}) - DF(c)]\|_{B(X)} \\ & \le \| I - A A^{\dagger}\|_{B(X)} + \| A[A^{\dagger} - DF(\bar{a})] \|_{B(X)} + \| A[DF(\bar{a}) - DF(c)]\|_{B(X)} \\ & \le Z_0 + Z_1 + Z_2(r_0) r_0 < 1, \end{align*} where the final inequality follows from~\eqref{e:sumZ}. Hence, given any $c \in B_{r_0}(\bar{a})$ and any $\sigma \in [0,1]$, \begin{align*} \|I - A \mathcal{D}_c(\sigma) \|_{B(X)} &= \| I - A (\sigma A^\dagger + (1-\sigma) DF(c)) \|_{B(X)} \\ &= \| \sigma (I - A A^\dagger) + (1-\sigma) (I - A\, DF(c)) \|_{B(X)} \\ &\le \sigma \| I - AA^\dagger \|_{B(X)} + (1-\sigma) \|I - A\, DF(c) \|_{B(X)} \\ &< \sigma + (1-\sigma) = 1. \end{align*} By a standard Neumann series argument, the composition $A \mathcal{D}_c(\sigma)$ is invertible. This implies that ${\rm ker}(\mathcal{D}_c(\sigma)) = \{0\}$ for every $\sigma \in [0,1]$. Hence $ {\rm specflow}(\mathcal{D}_c(\sigma)) = 0$. \end{proof} Assume that we have proven the existence of two critical points $\tilde{a}$ and $\tilde{b}$ of the Cauchy-Riemann problem \eqref{eq:F=0_CR} using the radii polynomial approach (Theorem~\ref{thm:radii_polynomials}). Denote by $\bar{a}$ and $\bar{b}$ the numerical approximations of $\tilde{a}$ and $\tilde{b}$, and by $A^\dagger_{\bar{a}}$ and $A^\dagger_{\bar{b}}$ the approximate derivatives used to obtain the computer-assisted proofs. 
In addition to the paths $\mathcal{D}_{\tilde{a}}(\sigma)$ and $\mathcal{D}_{\tilde{b}}(\sigma)$ discussed above, we introduce the following paths of linear operators: \begin{alignat*}{2} \mathcal{D}_{\tilde{a} \to \tilde{b}} (\sigma) &= (1-\sigma) DF(\tilde{a})+ \sigma DF(\tilde{b}) , &\quad&\text{for } \sigma \in [0,1],\\ \mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma) &= (1-\sigma) A^\dagger_{\bar{a}} + \sigma A^\dagger_{\bar{b}}, &\quad&\text{for } \sigma \in [0,1]. \end{alignat*} To compute the relative index of $\tilde{a}$ and $\tilde{b}$ we use the identity \begin{alignat}{1} i\bigl(\tilde{a},\tilde{b}\bigr) &= {\rm specflow}\bigl(\mathcal{D}_{\tilde{a} \to \tilde{b}} (\sigma)\bigr) \nonumber\\ &={\rm specflow}\bigl(\mathcal{D}_{\tilde{a}}(\sigma)\bigr) + {\rm specflow}\bigl(\mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma)\bigr)- {\rm specflow}\bigl(\mathcal{D}_{\tilde{b}}(\sigma)\bigr) \nonumber \\ &= {\rm specflow}\bigl(\mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma)\bigr), \label{e:specflowdagger} \end{alignat} where we have used independence with respect to the chosen path, as well as Theorem~\ref{thm:homotopy_dagA_DF}. In the next section we discuss how to compute the spectral flow in the right-hand side of~\eqref{e:specflowdagger}. \subsection{Computing the relative indices} \label{sec:rig_comp_relative_indices} To continue the discussion from Section~\ref{sec:rig_comp_spectrum}, we assume that we have proven the existence of two critical points $\tilde{a}$ and $\tilde{b}$ of the Cauchy-Riemann problem \eqref{eq:F=0_CR} using the radii polynomial approach (Theorem~\ref{thm:radii_polynomials}) in balls around the numerical approximations $\bar{a}$ and $\bar{b}$. We denote by $m_{\bar{a}}$ and $m_{\bar{b}}$ the dimensions of the finite dimensional projections used, and we set $m=\max \{ m_{\bar{a}} , m_{\bar{b}} \}$. 
Ordering the components of $a$ as \[ a = \left( (a_1)_0, (a_1)_1, (a_2)_1,\dots,(a_1)_k, (a_2)_k,\dots \right) \] leads to the following representation of the linear operator $A^\dagger_{\bar{a}}$: \begin{equation*} A^\dagger_{\bar{a}} = \begin{pmatrix} DF^{(m_{\bar{a}})}(\bar{a}) & & & \\ & \Lambda_{m_{\bar{a}}} & & \\ & & \Lambda_{m_{\bar{a}}+1} & \\ & & & \ddots \end{pmatrix}, \qquad \Lambda_k \bydef \begin{pmatrix} 0 & -k \\ -k & 0 \end{pmatrix}, \end{equation*} and similarly for $A^\dagger_{\bar{b}}$. Alternatively, we may write \begin{equation*} A^\dagger_{\bar{a}} = \begin{pmatrix} (A^\dagger_{\bar{a}})^{(m)} & & & \\ & \Lambda_{m} & & \\ & & \Lambda_{m+1} & \\ & & & \ddots \end{pmatrix}, \end{equation*} and similarly for $A^\dagger_{\bar{b}}$. The latter representation allows us to write the homotopy \[ \mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma) = \begin{pmatrix} (1-\sigma) (A^\dagger_{\bar{a}})^{(m)} + \sigma (A^\dagger_{\bar{b}})^{(m)} & & & \\ & \Lambda_{m} & & \\ & & \Lambda_{m+1} & \\ & & & \ddots \end{pmatrix}. \] The tail of $\mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma)$ is independent of $\sigma$ and has eigenvalues $\{ \pm k : k \ge m\}$. Hence, any crossing of eigenvalues of $\mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma)$ must come from the finite dimensional part \[ (\mathcal{D}^\dagger_{\bar{a} \to \bar{b}})^{(m)} (\sigma) = (1-\sigma) (A^\dagger_{\bar{a}})^{(m)} + \sigma (A^\dagger_{\bar{b}})^{(m)} . \] We may perturb this finite dimensional path to a generic one to conclude that \begin{alignat}{1} {\rm specflow}\bigl(\mathcal{D}^\dagger_{\bar{a} \to \bar{b}} (\sigma)\bigr) & = {\rm specflow}\bigl( (\mathcal{D}^\dagger_{\bar{a} \to \bar{b}})^{(m)} (\sigma) \bigr) \nonumber \\ & = n_{2m-1}\bigl((A^\dagger_{\bar{a}})^{(m)} \bigr) - n_{2m-1}\bigl((A^\dagger_{\bar{b}})^{(m)} \bigr), \label{e:counteigenvalues} \end{alignat} where $n_{2m-1}(Q)$ denotes the number of positive eigenvalues of a $(2m-1) \times (2m-1)$ matrix~$Q$. 
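The count $n_{2m-1}$, and the eigenvalue-sign verification discussed next, are easy to prototype in floating point. The sketch below (illustrative matrices, not the paper's operators; a stand-in for the interval-arithmetic computation) counts positive eigenvalues of a symmetric matrix by conjugating with approximate eigenvectors and checking Gershgorin discs, and then forms the difference in~\eqref{e:counteigenvalues}.

```python
import numpy as np

# Floating-point stand-in (illustrative matrices, not the paper's operators)
# for the eigenvalue-sign count n_{2m-1}: conjugate Q by a matrix V of
# approximate eigenvectors; if no Gershgorin disc of V^{-1} Q V touches the
# imaginary axis, the diagonal signs of V^{-1} Q V give the signs of the
# eigenvalues of Q.

def n_plus(Q):
    """Number of positive eigenvalues of a symmetric matrix Q."""
    _, V = np.linalg.eigh(Q)                    # approximate eigenvectors
    Q0 = np.linalg.solve(V, Q @ V)              # similar to Q, nearly diagonal
    c = np.diag(Q0)
    r = np.sum(np.abs(Q0), axis=1) - np.abs(c)  # Gershgorin radii
    assert np.all(np.abs(c) > r), "a Gershgorin disc meets the imaginary axis"
    return int(np.sum(c > 0))

# Two symmetric "finite parts" sharing an orthogonal basis; Qa has a
# repeated eigenvalue, a case where naive eigenvalue enclosures are delicate.
W, _ = np.linalg.qr(np.arange(1.0, 10.0).reshape(3, 3) + np.eye(3))
Qa = W @ np.diag([2.0, 2.0, -1.0]) @ W.T    # n_plus(Qa) = 2
Qb = W @ np.diag([-1.0, -2.0, 3.0]) @ W.T   # n_plus(Qb) = 1

assert n_plus(Qa) - n_plus(Qb) == 1         # spectral flow of the homotopy
```

In the rigorous computation every step is carried out in interval arithmetic, so the Gershgorin check becomes a proof rather than a heuristic.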
Using interval arithmetic and the contraction mapping theorem (e.g.\ via the method of \cite{MR3204427}), one can rigorously enclose all eigenvalues of $(A^\dagger_{\bar{a}})^{(m)}$ and therefore compute $n_{2m-1}\bigl((A^\dagger_{\bar{a}})^{(m)} \bigr)$. A convenient alternative, especially when there are repeated eigenvalues (for example due to symmetry, such as in Problem~\eqref{e:cylinder} posed on the square $[0,\pi]^2$, see also Section~\ref{s:TW}) or when $m$ is large, is to determine $n_{2m-1}(Q)$ via a similarity argument (cf.~\cite{OKspacegroups}). Namely, one can determine a basis transformation $V$ using approximate eigenvectors of $Q$, enclose the inverse of $V$ by interval arithmetic methods, and compute the (interval-valued) matrix $Q_0 = V^{-1} Q V$. Then $Q_0$ has the same eigenvalues as~$Q$, and it is approximately diagonal. When none of the Gershgorin circles associated to~$Q_0$ intersect the imaginary axis, one may read off $n_{2m-1}(Q_0) = n_{2m-1}(Q)$ from the diagonal of~$Q_0$. In conclusion, by combining~\eqref{e:specflowdagger} and~\eqref{e:counteigenvalues} we obtain the (computable) formula \begin{equation*} i\bigl(\tilde{a},\tilde{b}\bigr) =n_{2m-1}\bigl((A^\dagger_{\bar{a}})^{(m)} \bigr) - n_{2m-1}\bigl((A^\dagger_{\bar{b}})^{(m)} \bigr) \end{equation*} for the relative index of~$\tilde{a}$ and $\tilde{b}$. \subsection*{Three example problems} To illustrate the central ideas of this paper, we will use three problems, for which we have implemented the computer-assisted computations to obtain the indices of critical points. The first example is the classical application of Floer theory: the Cauchy-Riemann equations \begin{equation}\label{e:CR} \begin{cases} u_t = v_x + \psi_\lambda(u) ,\\ v_t = -u_x + v, \\ v(t,0)=v(t,\pi)=0 , \end{cases} \end{equation} where $\psi_\lambda : \mathbb{R} \to \mathbb{R}$ is some smooth nonlinear function. 
Throughout this paper we will restrict attention to \[ \psi_\lambda(u) \bydef \lambda_1 u - \lambda_2 u^3, \] where $\lambda_1,\lambda_2 \in \mathbb{R}$ are parameters, but the method works for much more general nonlinearities. Although rescaling could reduce the number of parameters when the signs of $\lambda_1$ and $\lambda_2$ are fixed, keeping two parameters turns out to be advantageous when capitalizing on continuation arguments. In particular, while from the viewpoint of pattern formation and forcing results the interesting case to consider is when both parameters are positive, when homotoping it is convenient to allow $\lambda_1$ to change sign and then allow $\lambda_2$ to vanish when $\lambda_1$ is negative. We come back to this later. In~\eqref{e:CR} we have chosen Neumann boundary conditions on $u$ and Dirichlet boundary conditions on $v$. In this and all other examples we choose Neumann/Dirichlet boundary conditions rather than periodic ones in order to avoid the issues related to shift invariance (which would make all critical points degenerate). The time variable is $t$, but this problem is ill-posed, hence there is no flow in forward or backward time. The equation has a variational structure, as~\eqref{e:CR} is the formal (negative) $L^2$-gradient flow of the action functional \begin{equation} \label{eq:functional_CR} \mathcal{A}_{\text{CR}} \bydef \int_0^{\pi} \Bigl[ v u_x - \frac{1}{2} v^2 - \Psi_\lambda(u) \Bigr] dx, \end{equation} where $\Psi_\lambda(u)=\frac{\lambda_1}{2}u^2-\frac{\lambda_2}{4}u^4$ is an anti-derivative of $\psi_\lambda(u)$. Our second example is \begin{equation}\label{e:TW} \begin{cases} u_{tt} - c u_t + u_{x_1 x_1}+u_{x_2 x_2} + \psi_\lambda(u) =0 \qquad\text{for } x=(x_1,x_2) \in [0,\pi] \times [0,\pi],\\ u_{x_1}(t,0,x_2)=u_{x_1}(t,\pi,x_2)=0,\\ u_{x_2}(t,x_1,0)=u_{x_2}(t,x_1,\pi)=0. 
\end{cases} \end{equation} Here $c > 0$ is a parameter that has the interpretation of the wave speed, since~\eqref{e:TW} results from substituting a travelling wave Ansatz into the parabolic equation \begin{equation}\label{e:cylinder} u_t = \Delta u +\psi_\lambda(u) = u_{x_1 x_1} + u_{x_2 x_2} + u_{x_3 x_3} +\psi_\lambda(u), \qquad\text{for } t,x_3 \in \mathbb{R}, \, x_1,x_2 \in [0,\pi], \end{equation} with Neumann boundary conditions on the ``cylindrical'' spatial domain $[0,\pi]^2 \times \mathbb{R}$. Hence solutions of~\eqref{e:TW} correspond to travelling wave solutions of~\eqref{e:cylinder} on the infinite cylinder, see e.g.~\cite{BakkervdBergvdVorst,FSV,Gardner,Mielke}. The problem~\eqref{e:TW} is not quite a gradient flow, but rather it is gradient-like. This still suffices for a Morse-Floer homology construction. Indeed, for the problem~\eqref{e:TW} the details of this construction can be found in~\cite{BakkervdBergvdVorst}. The functional \begin{equation} \label{eq:functional_TW} \mathcal{A}_{\text{TW}} \bydef \int_0^\pi \int_0^\pi \Bigl[ -\frac{1}{2} (u_t)^2 + \frac{1}{2} ((u_{x_1})^2+(u_{x_2})^2) - \Psi_\lambda(u) \Bigr] dx_1 dx_2 \end{equation} serves as Lyapunov function for solutions of~\eqref{e:TW} for any $c>0$. Our third example is the Ohta-Kawasaki equation~\cite{Ohta-Kawasaki} \begin{equation}\label{e:OK} \begin{cases} u_t = - u_{xxxx} - (\psi_\lambda(u))_{xx} - \lambda_3 u, &\quad\text{for } x\in [0,\pi],\\ u_x(t,0)=u_x(t,\pi)=0, \\ u_{xxx}(t,0)=u_{xxx}(t,\pi)=0,\\ \int_0^\pi u(0,x) dx =0 , \end{cases} \end{equation} which is used to model diblock copolymers~\cite{MR1334695,MR2496714,MR2685742}. The extra parameter $\lambda_3 \geq 0$ describes the strength of the (attractive) long range interactions in the mixture. 
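The $\lambda_3$ term is the local footprint of a nonlocal interaction: in the energy for~\eqref{e:OK} it enters through the solution $\phi$ of a Neumann problem $-\phi_{xx}=u$ (made precise below). Since cosine modes diagonalize this problem on $[0,\pi]$, the computation can be sketched in a few lines (hypothetical mean-zero data, floating point only):

```python
import numpy as np

# Hypothetical mean-zero data u(x) = cos(x) + 0.5*cos(3x) on [0, pi].
# Cosine modes satisfy the Neumann conditions and diagonalize -d^2/dx^2,
# so -phi_xx = u (with zero mean) is solved mode-by-mode by phi_k = u_k/k^2.
x = np.linspace(0.0, np.pi, 20001)
h = x[1] - x[0]
modes = {1: 1.0, 3: 0.5}                                   # u_k for k = 1, 3

u = sum(c * np.cos(k * x) for k, c in modes.items())
phi = sum(c / k**2 * np.cos(k * x) for k, c in modes.items())
phi_x = sum(-c / k * np.sin(k * x) for k, c in modes.items())

def trapezoid(f):
    """Composite trapezoid rule on the uniform grid x."""
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

# Parseval: int_0^pi phi_x^2 dx = (pi/2) * sum_k (u_k/k)^2, which (up to the
# factor lambda_3/2) is the long-range energy contribution.
exact = (np.pi / 2) * sum((c / k) ** 2 for k, c in modes.items())
assert abs(trapezoid(phi_x**2) - exact) < 1e-6
assert abs(trapezoid(u)) < 1e-6                            # zero mean
assert abs(phi_x[0]) < 1e-12 and abs(phi_x[-1]) < 1e-12    # Neumann for phi
```

The $1/k^2$ decay of the $\phi$-modes is what makes the long-range term a compact perturbation in the Fourier setting used later.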
The space of functions $u$ satisfying $\int_0^\pi u(x) dx =0$ is invariant (the general Ohta-Kawasaki model has a parameter $m$ that denotes the mass ratio of the two constituents in the mixture; for simplicity we consider the case $m=0$ only, corresponding to a 50\%-50\% mixture). Equation~\eqref{e:OK} does not have an ill-posed initial value problem, but generates a semi-flow. Indeed, we use it to illustrate that the computation of a Morse and a relative index can be treated in a unified framework. The flow generated by~\eqref{e:OK} is the negative gradient flow in $H^{-1}$ for the functional \begin{equation} \label{eq:functional_OK} \mathcal{A}_{\text{OK}} \bydef \int_0^\pi \Bigl[ \frac{1}{2} (u_x) ^2 - \Psi_\lambda(u) + \frac{\lambda_3}{2} (\phi_x)^2 \Bigr] dx, \end{equation} where $\phi$ is the unique solution of the elliptic problem \[ \begin{cases} -\phi_{xx} = u, & \quad\text{for } x\in [0,\pi], \\ \phi_x(0)=\phi_x(\pi)=0,\\ \int_0^\pi \phi(x) dx =0 . \end{cases} \] \subsection*{Sample results} In gradient(-like) systems the only type of (bounded) solutions that exist for all time $t \in \mathbb{R}$ are equilibria and heteroclinic connections. We will not assume all the equilibria to be nondegenerate (since that is very difficult to check). Therefore, we define connecting orbits as orbits for which the $\alpha$ and $\omega$ limit sets are disjoint and consist of equilibria only. Although generically these are classical connecting orbits between nondegenerate equilibria (indeed, the definition of Morse-Floer homology is built on that), this broader definition allows one to draw more general conclusions. For the Cauchy-Riemann problem~\eqref{e:CR} we determine the Floer homology by continuation to the linear case $\widetilde{\psi}(u)=-u$. In that case there is a unique equilibrium solution $(u,v)(x) \equiv (0,0)$. This stationary point is hyperbolic and we use the associated linear operator as the base point relative to which we define indices. 
\begin{thm}\label{t:CR} For $\lambda_1=\lambda_2=6$ the Cauchy-Riemann problem~\eqref{e:CR} has at least seven equilibrium solutions with relative indices $0,0,1,1,2,2,3$. Moreover, there are at least three connecting orbits, of which at least two have nontrivial spatial dependence. \end{thm} \begin{proof}[Outline of proof] Continuation of the nonlinearity $\psi_\lambda$ for $\lambda=(6,6)$ to the base point at $\lambda=(-1,0)$ can be performed within the class of coercive nonlinearities (i.e.\ $\psi(u)u<0$ as $|u| \to\infty$) by using a piecewise linear homotopy in parameter space via the intermediate point $\lambda=(-1,6)$. This guarantees the necessary compactness properties, see Proposition~\ref{compact1}. We obtain $\beta_0=1$ and $\beta_k=0$ for $k\neq 0$, where $\beta_k$ are the Betti numbers of the Floer homology ${\mathrm{HF}}_k\bigl(\mathcal{S}^\infty,\psi\bigr)$, and $\mathcal{S}^\infty$ is the maximal invariant set in $\mathcal{N} = C^1([0,\pi])$, cf.\ Section~\ref{construction1}. At $\lambda=(6,6)$ the indices of the homogeneous equilibria $(u,v)=(\pm 1,0)$ and $(u,v)=(0,0)$ are $0$ and~$3$, respectively, as can be verified by hand or computer. Two of the other equilibria are depicted in Figure~\ref{f:CR}; see Section~\ref{sec:rig_comp_critical points} for an explanation about the rigorous error control on the distance between the graphs depicted and the true solutions. Their relative indices are 1 and 2. The remaining two equilibria are related to these via the transformation $(u,v) \mapsto (-u,-v)$. The results on the number and type of connecting orbits follow from the forcing Lemma~\ref{l:forcing}. As mentioned when we chose the base point, the only nonzero Betti number is $\beta_0=1$. On the other hand, the relative index information on the seven equilibria implies that we may set \[ \zeta_0=2, \qquad \zeta_1=2, \qquad \zeta_2=2, \qquad \zeta_3=1. \] The multiplicity result then follows directly from the forcing Lemma~\ref{l:forcing}. 
Additionally, it implies that each of the four nonhomogeneous equilibria (the ones with relative index $1$ and~$2$) forms the $\alpha$ or $\omega$ limit set of at least one connecting orbit. The remaining details of the proof are provided in Sections~\ref{s:setuphomotopy} (existence theorem for equilibria and computation of the relative indices) and~\ref{s:CR} (bounds needed for the computer-assisted part of the proof). \end{proof} We note that large parts of the analysis of~\eqref{e:CR} can be done by hand, since the equilibria for the particular choice of the right-hand side coincide with those of the \emph{Allen-Cahn} or \emph{Chaffee-Infante} parabolic problem \[ u_t= u_{xx} + \psi_{\lambda}(u). \] This (bifurcation) problem is analyzed in detail in~\cite{MR804887,MR1347417}. Furthermore, by using the symmetry one could obtain somewhat stronger forcing results, but we do not pursue that here as it is beside the point of this paper. For the other two examples we obtain similar results, but here no alternative pencil-and-paper analysis is available. For the problem~\eqref{e:TW} we again first select a base point, relative to which we define the indices. Namely, as for the problem~\eqref{e:CR}, for the linear case $\widetilde{\psi}(u)=-u$ there is a unique, hyperbolic equilibrium solution $u(x) \equiv 0$. We choose the associated linear operator (where we may pick any $c>0$) as our base point. We can now formulate the following sample results. \begin{thm}\label{t:TW} For $\lambda_1=\lambda_2=12$ the travelling wave problem~\eqref{e:TW} has at least 71 equilibrium solutions with relative indices $0$ (2$\times$), $2$ (8$\times$), $3$ (8$\times$), $4$ (8$\times$), $5$ (8$\times$), $6$ (12$\times$), $7$ (8$\times$), $8$ (6$\times$), $10$ (4$\times$), $11$ (4$\times$), $12$ (2$\times$), and $13$ (1$\times$). Moreover, for any $c>0$ there are at least 35 connecting orbits, corresponding to travelling waves of~\eqref{e:cylinder}. 
Each of the 68 nonhomogeneous equilibria is the $\alpha$ or $\omega$ limit set of a connecting orbit. \end{thm} The nonhomogeneous equilibria are depicted in Figure~\ref{f:TW}. The problem admits a symmetry group of order $16$, generated by the operations \[ x_1 \mapsto \pi-x_1, \qquad x_2 \mapsto \pi-x_2, \qquad (x_1,x_2) \mapsto (x_2,x_1), \qquad u \mapsto -u. \] For each equilibrium represented in Figure~\ref{f:TW} there are additional ones generated by these operations (the orbit under the action of the symmetry group). The number of such symmetry-related equilibria is indicated in Figure~\ref{f:TW}. The proof of Theorem~\ref{t:TW} is essentially the same as that of Theorem~\ref{t:CR}, although of course the estimates and computational details are somewhat different (see Section~\ref{s:TW}) and the Floer theory constructed in this case has some less classical aspects, see~\cite{BakkervdBergvdVorst}. The result in Theorem~\ref{t:TW} complements the ones obtained in~\cite{FSV}, where a result similar to Lemma~\ref{l:forcing} is proven using the Conley index, but without the information on the existence of equilibria provided by our computer-assisted approach. For the problem~\eqref{e:OK} choosing a base point is not an issue. Since the problem is not ill-posed, one may just use the classical Morse index. Nevertheless, it is useful to note that for the linear case $\widetilde{\psi}(u)=-u$ and any $\lambda_3 \geq 0$ there is a unique, hyperbolic equilibrium solution $u(x) \equiv 0$, which is a (global) minimizer, i.e., it has Morse index $0$. Hence indices can also be interpreted as relative to the linearization at this equilibrium. \begin{thm}\label{t:OK} For $\lambda_1=\lambda_2=9$ and $\lambda_3=4.5$ the Ohta-Kawasaki problem~\eqref{e:OK} has at least $9$ equilibrium solutions with Morse indices $0$ (4$\times$), $1$ (4$\times$) and $2$ (1$\times$). Moreover, there are at least 4 connecting orbits, each having nontrivial spatial dependence. 
\end{thm} The nontrivial equilibria are depicted in Figure~\ref{f:OK}. The proof is analogous to those discussed above, with some computational details for this particular problem provided in Section~\ref{s:OK}. This complements results from \cite{MR2136516}, where constructive computer-assisted proofs of existence of connecting orbits in Ohta-Kawasaki are obtained. \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{figures/x1-eps-converted-to} ~~ \includegraphics[width=0.4\textwidth]{figures/x2-eps-converted-to} } \caption{Equilibrium solutions of~\eqref{e:CR} for $\lambda_1=\lambda_2=6$. The error in the plots in the $C^0$ norm is no more than $5 \cdot 10^{-11}$. To each equilibrium $(u(x),v(x))$ corresponds another equilibrium $(-u(x),-v(x))$ with the same relative index. Moreover, the homogeneous states $(-1,0)$, $(0,0)$ and $(1,0)$ are also equilibria. The states $(\pm 1,0)$ have relative index $0$, the equilibrium on the left has index $1$, the one on the right has index $2$, and the state $(0,0)$ has relative index $3$.} \label{f:CR} \end{figure} \begin{figure}[p] \vspace*{-17.1pt} \centerline{\includegraphics[width=0.33\textwidth]{figures/sol12no4}\includegraphics[width=0.33\textwidth]{figures/sol12no10}\includegraphics[width=0.33\textwidth]{figures/sol12bif4}} \centerline{\includegraphics[width=0.33\textwidth]{figures/sol12bif5}\includegraphics[width=0.33\textwidth]{figures/sol12no5}\includegraphics[width=0.33\textwidth]{figures/sol12no9}} \centerline{\includegraphics[width=0.33\textwidth]{figures/sol12no13}\includegraphics[width=0.33\textwidth]{figures/sol12no14}\includegraphics[width=0.33\textwidth]{figures/sol12bif2}} \centerline{\includegraphics[width=0.33\textwidth]{figures/sol12bif1}\includegraphics[width=0.33\textwidth]{figures/sol12no6}\includegraphics[width=0.33\textwidth]{figures/sol12no7}} 
\centerline{\includegraphics[width=0.33\textwidth]{figures/sol12no12}\includegraphics[width=0.33\textwidth]{figures/sol12no8}\includegraphics[width=0.33\textwidth]{figures/sol12no15}} \caption{Nonhomogeneous equilibrium solutions of~\eqref{e:TW} for $\lambda_1=\lambda_2=12$. The relative index is indicated above each graph. The error in the plots in the $C^0$ norm is less than $3\cdot10^{-5}$. Additionally, the homogeneous solutions $u \equiv \pm 1$ and $u \equiv 0$ have indices 0 and 13, respectively. The multiplicity mentioned above each graph is the number of symmetry-related equilibria, as explained in the main text.} \label{f:TW} \end{figure} \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{figures/bif_diagram3-eps-converted-to} \includegraphics[width=0.4\textwidth]{figures/ok.pdf} } \caption{(Left) Bifurcation diagram of equilibria of \eqref{e:OK} when $\lambda_1=\lambda_2$ varies over the interval $[0,9]$ with Morse indices $0$ (blue), $1$ (red) and $2$ (green). (Right) Equilibrium solutions of~\eqref{e:OK} for $\lambda_1=\lambda_2=9$ and $\lambda_3=4.5$. The trivial solution (in green) has index $2$. The equilibria in blue have index $0$ while the one in red has index $1$. To each blue equilibrium solution $u(x)$ corresponds a solution $-u(x)$ having the same index. Moreover, to the red solution $u(x)$ correspond the three other equilibria $-u(x)$ and $\pm u(\pi-x)$. The error in the plots in the $C^0$ norm is no more than $6 \cdot 10^{-14}$.} \label{f:OK} \end{figure} \subsection*{Outline of the paper} The outline of this paper is as follows. In Section~\ref{s:theory} we give a concise outline of the construction of Morse-Conley-Floer homology, and we discuss the forcing relation in Morse-Conley-Floer theory between critical points (with their indices) and connecting orbits. 
In Section~\ref{s:setuphomotopy} we introduce the computational setup for computing and rigorously proving the equilibria and their (relative, Morse) indices, as well as the spectral-flow and homotopy arguments that turn the computational results into rigorous ones. We use the fact that in a Fourier series setting, into which the three problems~\eqref{e:CR}, \eqref{e:TW} and~\eqref{e:OK} all fit, the spectral flow properties that we need are particularly convenient from a computational point of view. This is due to the lack of explicit boundary conditions (which are absorbed in the Banach spaces we choose to work in) and the fact that the dominant differential operators are diagonal in the Fourier basis. In Sections~\ref{s:CR},~\ref{s:TW} and~\ref{s:OK} we provide computational details for each of the example problems~\eqref{e:CR}, \eqref{e:TW} and~\eqref{e:OK}, respectively. Since the computational particulars are not the core of the present paper, and many of the estimates are available elsewhere, we keep those sections brief, using citations to the literature where appropriate. All computer-assisted parts of the proofs are performed with code available at~\cite{codes_webpage}. \section{Introduction} \label{s:intro} \input{introduction} \section{Morse-Conley-Floer homology} \label{s:theory} \input{theory} \section{Computing the equilibria and their relative indices} \label{s:setuphomotopy} \input{computations} \section{The bounds for the Cauchy-Riemann equations} \label{s:CR} \input{cauchyriemann} \section{The travelling wave problem} \label{s:TW} \input{travwave} \section{The Ohta-Kawasaki problem} \label{s:OK} \input{ohtakawasaki} \bibliographystyle{abbrv} \subsection{The relative Morse index} As pointed out in the introduction, strongly indefinite problems have infinite Morse (co)-index. This complication defies a standard counting definition for an index. Instead we use the approach proposed by Floer in his treatment of the Hamiltonian action (e.g. see \cite{MR1703347,MR987770}). 
Let $z$ be a solution of Problem \eqref{eq:CR_BVP1e} with $\lim_{t\to\pm \infty} z(t,\cdot) = w_\pm$, where $w_\pm$ are \emph{hyperbolic} critical points of $\mathcal{A}^\epsilon_{\text{CR}}$. Linearizing the equations in \eqref{eq:CR_BVP1e} yields a linear operator \begin{equation}\label{eq:CR_BVP12} \left( \begin{array}{c} \xi \\ \eta \end{array} \right) \mapsto \left( \begin{array}{c} \xi_t - \eta_x - \psi'_\lambda(u)\xi -\epsilon h_{uu}(x,u,v)\xi -\epsilon h_{uv}(x,u,v)\eta \\ \eta_t +\xi_x - \eta -\epsilon h_{uv}(x,u,v)\xi -\epsilon h_{vv}(x,u,v)\eta \end{array} \right) . \end{equation} Such linearized Cauchy-Riemann equations are written compactly as \begin{equation}\label{e:LK} L_K \bydef \partial_t-J\partial_x - K(t,x), \end{equation} where $J= \begin{psmallmatrix} 0 & 1 \\ -1 & 0\end{psmallmatrix}$ is the standard symplectic $2\times 2$-matrix and $K$ is a ($2\times 2$) matrix-valued function with asymptotic limits $\lim_{t\to\pm\infty} K(t,x) = K_{\pm}(x)$. The operators $L_K$ of this type are Fredholm operators on $W^{1,2}(\mathbb{R}\times [0,\pi])$ and the Fredholm index ${\rm ind}(L_K)$ only depends on the limits $K_\pm$, which we denote by \[ {\rm ind} (L_K)= \iota(K_-,K_+) , \] cf.\ \cite{MR1331677}. When $J\partial_x + K_\pm = -d^2\mathcal{A}^\epsilon_{\text{CR}}(w_\pm)$ we define the \emph{relative Morse index} $i(w_-,w_+)$ of $w_-$ and~$w_+$ as the Fredholm index \[ i(w_-,w_+) \bydef \iota(K_-,K_+) , \] where $K_\pm = - J\partial_x -d^2\mathcal{A}^\epsilon_{\text{CR}}(w_\pm)$. The Fredholm index satisfies the co-cycle property, which expresses that concatenation of paths corresponds to addition of Fredholm indices. In particular, if $w$, $w'$ and $w''$ are critical points of $\mathcal{A}^\epsilon_{\text{CR}}$ then \[ i(w,w') + i(w',w'') = i(w,w''). \] This property implies that the relative index function on the critical points is well-defined. 
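As a short consistency check (ours, using the boundary conditions of~\eqref{e:CR}), the principal part $-J\partial_x$ of $L_K$ reproduces the tail blocks $\Lambda_k$ used in Section~\ref{sec:rig_comp_relative_indices}. Expanding $u$ in cosine modes (Neumann) and $v$ in sine modes (Dirichlet),

```latex
\begin{equation*}
-J\partial_x \begin{pmatrix} (a_1)_k \cos(kx) \\ (a_2)_k \sin(kx) \end{pmatrix}
= \begin{pmatrix} -\partial_x \bigl[(a_2)_k \sin(kx)\bigr] \\ \partial_x \bigl[(a_1)_k \cos(kx)\bigr] \end{pmatrix}
= \begin{pmatrix} -k\,(a_2)_k \cos(kx) \\ -k\,(a_1)_k \sin(kx) \end{pmatrix},
\end{equation*}
so on the coefficient pair $\bigl((a_1)_k,(a_2)_k\bigr)$ the operator acts as the
symmetric block $\Lambda_k=\begin{pmatrix} 0 & -k \\ -k & 0 \end{pmatrix}$, with
eigenvalues $\pm k$.
```

This is why, in the Fourier setting, the tail of the approximate derivatives is explicitly invertible and plays no role in the spectral flow computations.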
One may normalize the index, for example by setting \[ \mu(w) \bydef \iota(K, K_0), \] where $J \partial_x + K = -d^2\mathcal{A}^\epsilon_{\text{CR}}(w)$ and $K_0 = \begin{psmallmatrix} 1 & 0 \\ 0 & 1\end{psmallmatrix}$. With the normalized Morse index we obtain \[ i(w,w') = \mu(w) - \mu(w'). \] In the Fredholm theory for operators $L_K$ of the form~\eqref{e:LK}, the Fredholm index can be related to another characteristic of self-adjoint operators: spectral flow. Let $\sigma \mapsto B(\sigma)$, $\sigma\in [-1,1]$, be a smooth path of self-adjoint operators such that \[ B(\pm 1) = -d^2\mathcal{A}^\epsilon_{\text{CR}}(w_\pm). \] A path can be deformed slightly to be a \emph{generic} path, that is, $B(\sigma)$ is singular only for finitely many values of $\sigma$, \cite[Sect.\ 4]{MR1331677}. We denote $I =\{ \sigma \in (-1,1) : B(\sigma) \text{ is singular} \}$, where we assume that the end points $B(- 1)$ and $B(+1)$ are regular ($w_\pm$ are hyperbolic critical points). Moreover, for a generic path, at any $\sigma_0 \in I$ the kernel of $B(\sigma_0)$ is 1-dimensional, and the unique eigenvalue branch $\lambda(\sigma)$ with $\lambda(\sigma_0)=0$ crosses zero transversally: $\lambda'(\sigma_0)\not =0$. One defines the \emph{spectral flow} of a generic path as \[ {\rm specflow}(B(\sigma)) \bydef \sum_{\sigma_0 \in I} {\rm sign}(\lambda'(\sigma_0)). \] The spectral flow does not depend on the chosen (generic) path, hence the definition of the spectral flow can be extended to nongeneric paths. The spectral flow is related to the Fredholm operators $L_K$ as follows: \[ i(w_-,w_+) = {\rm ind}\bigl( L_K \bigr) = {\rm specflow}( J \partial_x + K(\sigma,x)). \] The link between the relative index and the Fredholm operator is used again in the next section to determine the dimension of sets of bounded solutions. \subsection{Isolating neighborhoods} \label{geninv12} Problem \eqref{eq:CR_BVP1} is ill-posed when viewed as an initial value problem. 
It does however make sense to consider the \emph{set of bounded solutions} \[ \mathcal{W}_{\epsilon,h} \bydef \Bigl\{ z=(u,v) : \mathbb{R}\times [0,\pi] \to \mathbb{R}^2 \Bigm| z\text{ solves~\eqref{eq:CR_BVP1e} and }\int_{\mathbb{R}\times [0,\pi]} |z_t|^2 <\infty\Bigr\}. \] When $\psi(u)=\Psi'(u)$ is a coercive nonlinearity, that is $\psi(u)u<0$ for $|u|\to\infty$, then we have the following compactness result: \begin{prop} \label{compact1} Suppose $\psi$ is coercive. Then the set $\mathcal{W}_{\epsilon,h}$ is compact in $C^1_{\rm loc}(\mathbb{R}\times[0,\pi])$, for all $\epsilon$ and all $h$. \end{prop} The compactness result is based on elliptic estimates and the ``geometric type'' of the nonlinearity $\psi$ (coercivity). The latter provides an a priori $L^\infty$-bound on complete trajectories $z(t,x)$. The global compactness result is a crucial pillar for defining invariants, cf.\ \cite{MR1703347,MR3349416,MR987770}. Nonetheless, when such a property is not available, for example when $\psi (u)u>0$ for $|u|\to\infty$, one way to circumvent this problem is to consider bounded solutions restricted to a subset $\mathcal{N}\subset C^1([0,\pi])$. In fact, we will use isolating neighborhoods $\mathcal{N}$ (see Definition~\ref{def:isolating} below) even for coercive nonlinearities. We define \[ \mathcal{W}_{\epsilon,h}(\mathcal{N}) \bydef \bigl\{ z\in \mathcal{W}_{\epsilon,h} \bigm| z(t,\cdot)\in \mathcal{N} \text{ for all } t\in \mathbb{R}\bigr\}. \] Bounded solutions define the equivalent of an invariant set since $t$-translation of bounded solutions of the nonlinear Cauchy-Riemann equations in $\mathcal{W}_{\epsilon,h}(\mathcal{N})$ induces a (continuous) $\mathbb{R}$-flow on the (metric) space $\mathcal{W}_{\epsilon,h}$ (compact-open topology). 
We define \begin{equation} \label{invset} \mathcal{S}_{\epsilon,h}(\mathcal{N}) \bydef \bigl\{z(0,\cdot) \bigm| z\in \mathcal{W}_{\epsilon,h}(\mathcal{N}) \bigr\}\subset \mathcal{N}, \end{equation} which is called the \emph{maximal invariant set} in $\mathcal{N}$. Points in $\mathcal{S}_{\epsilon,h}(\mathcal{N})$ will be denoted by $w$. The Cauchy-Riemann equations are special in the sense that the unique continuation property yields an $\mathbb{R}$-flow on $\mathcal{S}(\mathcal{N})$, cf.\ \cite{MR92067,MR1181727}. This induced $\mathbb{R}$-flow on $\mathcal{S}_{\epsilon,h}(\mathcal{N})$ is denoted by $\phi_{\epsilon,h}$. In the case $\epsilon=0$ we write $\mathcal{W}(\mathcal{N})$ and $\phi\colon \mathbb{R}\times\mathcal{S}(\mathcal{N}) \to \mathcal{S}(\mathcal{N})$ for the set of bounded solutions and the induced $\mathbb{R}$-flow, respectively. In general, while the $t$-translation flow on $\mathcal{W}(\mathcal{N})$ always defines an $\mathbb{R}$-flow, for parabolic equations such as~\eqref{e:OK} the induced flow on $\mathcal{S}(\mathcal{N})$ may be only a semi-flow. The most important reason for using the induced flow $\phi$ is to have a straightforward definition of isolation and isolating neighborhood, leading to a compact metric space $\mathcal{S}(\mathcal{N})$, on which the theory of attractors and Morse representations from \cite{KMV1,KMV3} can be used. \begin{prop}[\cite{MR3349416,MR987770}] \label{compact2} Let $\mathcal{N}\subset C^1([0,\pi])$ be a closed set. If $\mathcal{S}_{\epsilon,h}(\mathcal{N})\subset \mathcal{N}$ is bounded, then the set $\mathcal{W}_{\epsilon,h}(\mathcal{N})$ is compact in $C^1_{\rm loc}(\mathbb{R}\times[0,\pi])$. \end{prop} As a consequence $\mathcal{S}_{\epsilon,h}(\mathcal{N})$ is a compact subset in $C^1([0,\pi])$. 
\begin{defn}[\cite{MR1703347,MR3349416,MR987770}]\label{def:isolating} A subset $\mathcal{N} \subset C^1([0,\pi])$ is called an \emph{isolating neighborhood} for Problem \eqref{eq:CR_BVP1e}, if \begin{enumerate} \item[(i)] $\mathcal{S}_{\epsilon,h}(\overline{\mathcal{N}})$ is compact; \item[(ii)] $\mathcal{S}_{\epsilon,h}(\overline{\mathcal{N}})\subset {\rm int} (\mathcal{N})$. \end{enumerate} \end{defn} A sufficient condition to guarantee the boundedness of $\mathcal{S}_{\epsilon,h}(\mathcal{N})$ is an action bound: \[ a\le \mathcal{A}^\epsilon_{\text{CR}}(w) \le b,\quad \forall w\in \mathcal{N}, \quad a<b\in \mathbb{R}, \] cf.\ \cite{MR987770,MR1045282}. A second important pillar for defining intrinsic invariants is a generic structure theorem for gradient systems. We say that Problem \eqref{eq:CR_BVP1e} is \emph{generic} if (i): the critical points $w$ of $\mathcal{A}^\epsilon_{\text{CR}}$ are non-degenerate, i.e. $d^2\mathcal{A}^\epsilon_{\text{CR}}(w)$ is an invertible operator, and (ii): the \emph{adjoint} of the linearized problem \eqref{eq:CR_BVP12} is onto for every bounded trajectory $z\in \mathcal{W}_{\epsilon,h}(\mathcal{N})$. The pair $(\epsilon,h)$ is called \emph{generic} in this case. \begin{prop} \label{generic12} For every $\epsilon\not =0$ and for almost every (in a well-defined sense) perturbation $h$, Problem \eqref{eq:CR_BVP1e} is generic. \end{prop} When $\epsilon=0$ and $\mathcal{N}$ is an isolating neighborhood, then $\mathcal{N}$ is also isolating for $\epsilon\not=0$ sufficiently small. For isolating neighborhoods and generic pairs $(\epsilon,h)$ we have the following structure theorem: \begin{thm} \label{struc12} Let $\mathcal{N}$ be an isolating neighborhood and let $(\epsilon,h)$ be a generic pair. 
Then, \[ \mathcal{W}_{\epsilon,h}(\mathcal{N}) = \bigcup_{w_-,w_+}\mathcal{W}_{\epsilon,h}(w_-,w_+;\mathcal{N}), \] where \[ \mathcal{W}_{\epsilon,h}(w_-,w_+;\mathcal{N}) \bydef \bigl\{z\in \mathcal{W}_{\epsilon,h}(\mathcal{N}) \bigm| \lim_{t\to\pm\infty}z(t,\cdot)=w_\pm\bigr\} , \] and $w_\pm$ are (the finitely many) critical points of $\mathcal{A}^\epsilon_{\text{CR}}$ in $\mathcal{N}$. The sets $\mathcal{W}_{\epsilon,h}(w_-,w_+;\mathcal{N})$ are smoothly embedded manifolds (without boundary) and \[ \dim \mathcal{W}_{\epsilon,h}(w_-,w_+;\mathcal{N}) = \iota(w_-,w_+). \] \end{thm} \subsection{The homology construction} \label{construction1} Theorem \ref{struc12} states that, generically, bounded solutions are connecting orbits or critical points. This allows us to carry out a standard construction of chain complexes. To reduce notational clutter we fix a base point for the index and consider the normalized index $\mu(w)$. Given an isolating neighborhood~$\mathcal{N}$ and a generic pair $(\epsilon,h)$ we define \[ C_k(\epsilon,h;\mathcal{N}) \bydef \bigoplus_{\substack{d\mathcal{A}^\epsilon_{\text{CR}}(w)=0 \\ \mu(w) =k}} \mathbb{Z}_2 \langle w\rangle , \] called the $k$-dimensional \emph{chain groups} over $\mathbb{Z}_2$. The latter are finite dimensional since $\mathcal{W}_{\epsilon,h}(\mathcal{N})$ is compact. Also by compactness $\mathcal{W}_{\epsilon,h}(w_-,w_+;\mathcal{N}) $ is a finite set of trajectories whenever $\iota(w_-,w_+)=\mu(w_-)-\mu(w_+) =1$. This allows us to define the boundary operator \[ \partial_k(\epsilon,h;\mathcal{N})\colon C_k(\epsilon,h;\mathcal{N})\to C_{k-1}(\epsilon,h;\mathcal{N}), \] given by \[ \partial_k\langle w\rangle \bydef \sum_{\mu(w') =k-1}n(w,w') \langle w'\rangle, \] where $n(w,w')\in \mathbb{Z}_2$ is the number of trajectories in $\mathcal{W}_{\epsilon,h}(w,w';\mathcal{N})$ modulo $2$.
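The chain-complex construction just described can be illustrated with a toy computation: the counts $n(w,w')$ are encoded as matrices over $\mathbb{Z}_2$, and ranks are obtained by Gaussian elimination over the two-element field. The critical points and trajectory counts below are hypothetical and serve only to make the bookkeeping concrete.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over the two-element field Z_2."""
    M = M.copy() % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# Hypothetical complex: one critical point A of index 2, two points B, C of
# index 1, one point D of index 0, with a single connecting orbit along each
# of A->B, A->C, B->D, C->D (so every count n(w, w') equals 1).
d2 = np.array([[1], [1]])   # boundary of <A> in the basis (<B>, <C>)
d1 = np.array([[1, 1]])     # boundaries of <B>, <C> in the basis (<D>,)

# The two 2-chains from A to D cancel modulo 2: boundary of a boundary is zero.
assert (d1 @ d2 % 2 == 0).all()

def betti(d_k, d_k1, dim_k):
    """dim ker(d_k) - rank(d_k1) over Z_2, for a chain group of dimension dim_k."""
    return (dim_k - rank_gf2(d_k)) - rank_gf2(d_k1)

print(betti(d1, d2, 2))  # -> 0: the toy complex has trivial homology in degree 1
```

The same elimination routine yields all Betti numbers of a finite $\mathbb{Z}_2$ chain complex, which is how the ranks of \eqref{FH12} can be computed once the counts $n(w,w')$ are known.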
In order to justify the terminology \emph{boundary operator} we observe that \begin{equation} \label{sum12} \bigl(\partial_{k-1}\circ \partial_k\bigr) \langle w\rangle = \sum_{\mu(w'')=k-2}\sum_{\mu(w')=k-1} n(w,w')n(w',w'')\langle w''\rangle. \end{equation} The inner sum counts the number of 2-chain connections between $w$ and $w''$. The structure theorem can be supplemented with the statement that each of the finitely many components of $\mathcal{W}_{\epsilon,h}(w_-,w_+;\mathcal{N})$ with $\iota(w_-,w_+)=2$ is either a circle of trajectories or an open interval of trajectories with distinct ends. The latter implies that the sum in~\eqref{sum12} is even and therefore $\partial_{k-1}\circ \partial_k=0$ for all $k$, proving that $\partial_k$ is indeed a boundary operator. Hence \[ \Bigl( C_k(\epsilon,h;\mathcal{N}),\partial_k(\epsilon,h;\mathcal{N})\Bigr),\quad k\in \mathbb{Z}, \] is a finite dimensional chain complex over the critical points of $\mathcal{A}^\epsilon_{\text{CR}}$ in $\mathcal{N}$. The homology of the chain complex is defined as \begin{equation} \label{FH12} {\mathrm{HF}}_k(\epsilon,h;\mathcal{N}) \bydef\frac{ \ker \partial_k(\epsilon,h;\mathcal{N})}{{\rm im~} \partial_{k+1}(\epsilon,h;\mathcal{N})}, \end{equation} which is the \emph{Floer homology} of the triple $(\epsilon,h;\mathcal{N})$. A priori the Floer homology depends on the three parameters $\epsilon$, $h$ and $\mathcal{N}$. Basic properties of the Cauchy-Riemann equations can be used to show various invariance properties of the Floer homology. \begin{prop}[\cite{MR987770}] \label{homotopy1} Let $\mathcal{N} \subset C^1([0,\pi])$ be a closed set and let $(\epsilon_s,h_s)_{s\in [0,1]}$ be a homotopy. Suppose that \begin{enumerate} \item[(i)] $\mathcal{N}$ is an isolating neighborhood for every pair $(\epsilon_s,h_s)$, $s\in [0,1]$; \item [(ii)] the pairs $(\epsilon_0,h_0)$ and $(\epsilon_1,h_1)$ are generic.
\end{enumerate} Then, ${\mathrm{HF}}_k(\epsilon_0,h_0;\mathcal{N}) \cong {\mathrm{HF}}_k(\epsilon_1,h_1;\mathcal{N})$ for all $k$, and concatenations of homotopies yield compositions of isomorphisms. \end{prop} We may thus interpret the Floer homology as a Conley-Floer index ${\mathrm{HF}}_*(\mathcal{N})$ of the isolating neighborhood $\mathcal{N}$. Note that for isolating neighborhoods $\mathcal{N}$ and $\mathcal{N}'$ with $\mathcal{S}(\mathcal{N}) = \mathcal{S}(\mathcal{N}')$ the index is the same, which motivates the definition as an index for $\mathcal{S}$: \[ {\mathrm{HF}}_k(\mathcal{S}) = {\mathrm{HF}}_k(\mathcal{N}) \cong {\mathrm{HF}}_k(\mathcal{N}') , \qquad\text{for all } k \in \mathbb{Z}, \] which can be formalized via the usual inverse limit construction. The next step is to see how the Conley-Floer index depends on the nonlinearity $\psi$. Consider a homotopy $\psi^s$, $s\in [0,1]$, which represents a continuous family of functions $\psi^s(u)$ of superlinear polynomial growth. \begin{thm}[Continuation, cf.\ \cite{MR1703347,MR987770, MR1181727}] \label{homotopy2} Let $\mathcal{N} \subset C^1([0,\pi])$ be a closed set and let $(\psi^s)_{s\in [0,1]}$ be a homotopy. Suppose $\mathcal{N}$ is isolating for all $s\in [0,1]$. Then \[ {\mathrm{HF}}_k(\mathcal{S}_0,\psi^0) \cong {\mathrm{HF}}_k(\mathcal{S}_1,\psi^1), \qquad\text{for all } k\in \mathbb{Z}, \] where $\mathcal{S}_0$ and $\mathcal{S}_1$ are the isolated invariant sets in $\mathcal{N}$ with respect to $\psi^0$ and $\psi^1$ respectively. \end{thm} In advantageous circumstances, the continuation theorem can be used to compute the Conley-Floer index, e.g.\ by continuation to a situation where there is just a single critical point (or none). We denote the Betti numbers by \[ \beta_k \bydef {\rm rank~} {\mathrm{HF}}_k\bigl(\mathcal{S}(\mathcal{N}),\psi\bigr). 
\] Furthermore, to formulate a forcing result for connecting orbits, we assume that the number of hyperbolic critical points of relative index $k$ is bounded below by $\zeta_k$. If $\zeta_k > \beta_k$ for some~$k$, then there must be at least one connecting orbit. We can be a bit more precise in the context where we have computationally found a finite set of hyperbolic critical points $U = \bigcup_k U_k = \bigcup_k \{u_{k,i}\}_{i=1}^{\zeta_k}$, where $u_{k,i}$ has relative index $k$. We use the notation $n_+ = \max\{n,0\}$. \begin{lem}\label{l:forcing} The number of points in $U_k$ that are not the $\omega$ or $\alpha$ limit set of any connecting orbit is at most $\beta_k$. The number of connecting orbits with $\omega$ or $\alpha$ limit set in $U$ is bounded below by \begin{equation}\label{e:forced} \frac{1}{2} \sum_{k} (\zeta_k - \beta_k)_+. \end{equation} \end{lem} \begin{proof} We outline the proof, cf.~\cite[Theorem 10.2]{BakkervdBergvdVorst}. We first consider small perturbations to a generic pair, with a perturbed set of hyperbolic critical points $U^\epsilon=\bigcup_k U^\epsilon_k = \bigcup_k \{u^\epsilon_{k,i}\}_{i=1}^{\zeta_k}$. Let $\xi^\epsilon_k$ be the number of critical points in $U_k^\epsilon$ without a connecting orbit attached to them (i.e.\ not in the $\omega$ or $\alpha$ limit set of any connecting orbit). It follows from the homology construction that $\xi^\epsilon_k \leq \beta_k$. Taking the limit $\epsilon \to 0$ we find that the number of points in $U_k$ that are not the $\omega$ or $\alpha$ limit set of any connecting orbit for $\epsilon=0$ is at most $\beta_k$. Furthermore, for all sufficiently small $\epsilon>0$ there must be at least $(\zeta_k - \beta_k)_+$ critical points in $U_k^\epsilon$ with a connecting orbit ``attached'' to them.
Taking the limits of these connecting orbits (for all $k$) as $\epsilon \to 0$, and noticing that no more than two of the points in $U$ can be in the union of the $\omega$ and $\alpha$ limit set of a single connecting orbit (for $\epsilon=0$), we arrive at the lower bound~\eqref{e:forced}. \end{proof} While for the cases encountered in our applications we are satisfied with the forcing result provided by this lemma, there is definitely room for improvement. For example, the ordering in terms of energy of the critical points contains additional information that can lead to stronger forcing results. In such situations one will need a refinement of the setup in terms of Morse representations, which we may indeed introduce in the current (Morse-Conley-Floer) context along the lines presented in~\cite{KMV1,KMV3}. We leave this for future work. \subsection{Spectral properties} \label{s:TWspectralflow} Equation~\eqref{e:TW} may be written as a system of first order equations \begin{equation}\label{e:TWsystem} \left\{ \begin{aligned} u_t &= v,\\ v_t &= cv - u_{x_1 x_1} -u_{x_2 x_2} - \psi_\lambda(u), \end{aligned} \right. \end{equation} with Neumann boundary conditions on the square. The spectral problem for~\eqref{e:TWsystem} is directly related to the spectral problem for the parabolic equation \begin{equation}\label{e:parabolic} u_t = u_{x_1 x_1} + u_{x_2 x_2} + \psi_\lambda(u), \end{equation} again with Neumann boundary conditions on the square. First, we note that any equilibrium of~\eqref{e:TW} is of the form $(u,v)=(u_*,0)$, with $u_*$ an equilibrium of~\eqref{e:parabolic}. Furthermore, the eigenvalue problems of the linearized operators at these equilibria are \begin{equation}\label{e:rho} \left\{ \begin{aligned} \rho u &= v,\\ \rho v &= cv - u_{x_1 x_1}-u_{x_2 x_2} - \psi'_\lambda(u_*) u, \end{aligned} \right. 
\end{equation} and \begin{equation}\label{e:sigma} \sigma u = u_{x_1 x_1} + u_{x_2 x_2} + \psi'_\lambda(u_*) u, \end{equation} respectively, both with Neumann boundary conditions. Hence eigenvalues $\rho$ of~\eqref{e:rho} and eigenvalues $\sigma$ of~\eqref{e:sigma} are related through \begin{equation}\label{e:sigmamu} \sigma = c \rho - \rho^2. \end{equation} Since the elliptic operator in~\eqref{e:sigma} is self-adjoint, all eigenvalues $\sigma$ are real. Each negative eigenvalue $\sigma$ of~\eqref{e:sigma}, of which there are infinitely many, corresponds to a pair of eigenvalues \[ \rho = \rho_{\pm}(\sigma) = \frac{c}{2} \pm \biggl(\frac{c^2}{4} - \sigma \biggr)^{\!1/2} , \] one positive and one negative (which is of course consistent with~\eqref{e:TWsystem} being strongly indefinite). For each positive $\sigma$, of which there are at most finitely many, there are two eigenvalues $\rho=\rho_\pm$ (a double eigenvalue for $\sigma=\frac{c^2}{4}$), both with positive real part. In particular, all eigenvalues of~\eqref{e:rho} lie in the union $\{ \text{Im}(z)=0 \} \cup \{ \text{Re}(z)=\frac{c}{2} \} \subset \mathbb{C}$. Hence, when parameters are varied eigenvalues can only pass from the left half-plane to the right half-plane through the origin. It is thus reasonable to expect that for $c>0$ (or $c<0$) the spectral flow for the linearization of~\eqref{e:TWsystem} is well-defined, and this is indeed the case, see~\cite{BakkervdBergvdVorst,MR1331677}. Furthermore, it follows from~\eqref{e:sigmamu} that along a homotopy eigenvalues $\rho$ of~\eqref{e:rho} and $\sigma$ of~\eqref{e:sigma} cross the origin simultaneously and in the same direction. The spectral flows for~\eqref{e:TWsystem} and~\eqref{e:parabolic} are thus ``the same'' in the sense that the relative index of a pair of equilibria for~\eqref{e:TWsystem} is equal to the relative index of this pair for~\eqref{e:parabolic}.
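Inverting the relation $\sigma = c\rho - \rho^2$ explicitly gives $\rho_\pm = \tfrac{c}{2} \pm \bigl(\tfrac{c^2}{4}-\sigma\bigr)^{1/2}$, and the case distinction above can be checked numerically. The values of $c$ and $\sigma$ below are illustrative only.

```python
import cmath

def rho_pm(sigma, c):
    """The two roots of rho**2 - c*rho + sigma = 0, i.e. of sigma = c*rho - rho**2."""
    disc = cmath.sqrt(c**2 / 4 - sigma)
    return c / 2 + disc, c / 2 - disc

c = 1.0
# Every root must satisfy the original relation sigma = c*rho - rho**2.
for sigma in (-3.0, 0.2, 2.0):
    for rho in rho_pm(sigma, c):
        assert abs(c * rho - rho**2 - sigma) < 1e-12

rp, rm = rho_pm(-3.0, c)  # sigma < 0: one positive and one negative real eigenvalue
assert rp.real > 0 > rm.real and rp.imag == rm.imag == 0

rp, rm = rho_pm(0.2, c)   # 0 < sigma < c^2/4: two real eigenvalues, both positive
assert rp.real > 0 and rm.real > 0 and rp.imag == 0

rp, rm = rho_pm(2.0, c)   # sigma > c^2/4: complex pair on the line Re(z) = c/2
assert abs(rp.real - c / 2) < 1e-12 and abs(rm.real - c / 2) < 1e-12
```

The three regimes correspond exactly to the union $\{\text{Im}(z)=0\}\cup\{\text{Re}(z)=\tfrac{c}{2}\}$ containing all eigenvalues of~\eqref{e:rho}.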
Since the latter is easier to analyse (it is scalar), we compute relative indices using the parabolic equation~\eqref{e:parabolic} and then draw conclusions for the strongly indefinite system~\eqref{e:TWsystem}, or, equivalently, the travelling wave problem~\eqref{e:TW}. \subsection{Problem reformulation} As explained in Section~\ref{s:TWspectralflow}, to draw conclusions about~\eqref{e:TW}, we compute equilibria and associated Morse indices of the parabolic equation \begin{equation} \label{eq:nonlinear_parabolic} u_t = \Delta u + \psi_\lambda(u) = \Delta u + \lambda (u-u^3), \end{equation} with Neumann boundary conditions on the square $[0,\pi] \times [0,\pi]$. We perform the cosine transform \begin{equation*} u(x) = \sum_{k \in \mathbb{Z}^2} a_k e^{i k \cdot x} = \sum_{k \in \mathbb{N}^2} m_k a_k \cos(k_1 x_1) \cos(k_2 x_2) \end{equation*} where the multiplicities are \[ m_k=m_{k_1,k_2} \bydef \begin{cases} 1 & \text{for } k_1=k_2=0\\ 2 & \text{for } k_1=0, k_2>0\\ 2 & \text{for } k_1>0, k_2=0\\ 4 & \text{for } k_1>0, k_2>0. \end{cases} \] We will from now on assume $a_{k_1,k_2} = a_{|k_1|,|k_2|} \in \mathbb{R}$. The equilibrium equations for the unknowns $(a_k)_{k\in\mathbb{N}^2}$ become $F_k(a)=0$, where \begin{equation} \label{e:eqa} F_k(a) \bydef m_k \bigl[ (- (k_1^2+k_2^2) +\lambda) a_k - \lambda (a*a*a)_k \bigr], \end{equation} with $*$ the usual (discrete) convolution. Here the choice to include the factor $m_k$ is for the same reason as the factor $2$ in~\eqref{eq:F=0_CR}: it makes the symmetry of the Jacobian $DF$ apparent. We denote $F(a) = \{F_k(a)\}_{k \in \mathbb{N}^2}$. For the norm in Fourier space we select an (exponentially) weighted $\ell^1$-norm: \begin{equation}\label{e:ell1nu2D} \|a\|_{1,\nu} \bydef \sum_{k \in \mathbb{N}^2} m_k |a_k| \, \nu^{|k|} \end{equation} with $ |k| \bydef \max\{|k_1|,|k_2|\}$ and $\nu \geq 1$ (one may alternatively use another norm on $k$ in the exponent of $\nu$, e.g.~$|k|=|k_1|+|k_2|$).
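For a small truncation the components $F_k$ can be evaluated directly from the definition. The sketch below uses an illustrative truncation $m=3$ and parameter $\lambda=2$ (both made up for this example) and a naive quadruple loop for the truncated cubic convolution; it is meant only to make the indexing, the symmetric extension $a_{k_1,k_2}=a_{|k_1|,|k_2|}$ and the multiplicities $m_k$ concrete, not to reflect the actual implementation.

```python
import itertools
import numpy as np

m, lam = 3, 2.0  # illustrative truncation and parameter value

def extend(a):
    """Symmetric extension a_{k1,k2} = a_{|k1|,|k2|} from {0..m}^2 to {-m..m}^2."""
    full = np.zeros((2 * m + 1, 2 * m + 1))
    for k1, k2 in itertools.product(range(-m, m + 1), repeat=2):
        full[k1 + m, k2 + m] = a[abs(k1), abs(k2)]
    return full

def cubic(a):
    """(a*a*a)_k for k in {0..m}^2, by direct (truncated) triple convolution."""
    full, rng = extend(a), range(-m, m + 1)
    out = np.zeros((m + 1, m + 1))
    for k1, k2 in itertools.product(range(m + 1), repeat=2):
        s = 0.0
        for j1, j2, l1, l2 in itertools.product(rng, repeat=4):
            i1, i2 = k1 - j1 - l1, k2 - j2 - l2
            if abs(i1) <= m and abs(i2) <= m:
                s += full[j1 + m, j2 + m] * full[l1 + m, l2 + m] * full[i1 + m, i2 + m]
        out[k1, k2] = s
    return out

def mult(k1, k2):
    """The multiplicities m_k from the cosine transform."""
    return (1 if k1 == 0 else 2) * (1 if k2 == 0 else 2)

def F(a):
    c3, out = cubic(a), np.zeros_like(a)
    for k1, k2 in itertools.product(range(m + 1), repeat=2):
        out[k1, k2] = mult(k1, k2) * (
            (-(k1**2 + k2**2) + lam) * a[k1, k2] - lam * c3[k1, k2])
    return out

# Sanity check: the constant states u = 0 and u = 1 are equilibria of the PDE,
# so F vanishes at a = 0 and at a with a_{0,0} = 1 and all other modes zero.
assert np.allclose(F(np.zeros((m + 1, m + 1))), 0.0)
e0 = np.zeros((m + 1, m + 1)); e0[0, 0] = 1.0
assert np.allclose(F(e0), 0.0)
```

Since $a$ is supported on the truncated index set, the truncated triple convolution agrees with the exact one for the modes retained.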
One nice thing about the weighted $\ell^1$-norm~\eqref{e:ell1nu2D} is that $ \|a*b\|_{1,\nu} \leq \|a\|_{1,\nu} \|b\|_{1,\nu}$. This makes our space \begin{equation} \label{e:X1} X = \{ a=(a_k)_{k\in\mathbb{N}^2} \,:\, a_k \in \mathbb{R} \,,\, \|a\|_{1,\nu} < \infty \} \end{equation} into a Banach algebra. Computing equilibria of \eqref{eq:nonlinear_parabolic} reduces to finding $a \in X$ such that $F(a)=0$, where $F$ is given component-wise by \eqref{e:eqa}. The Newton-Kantorovich approach of Theorem~\ref{thm:radii_polynomials} is applied to achieve this task. Following a similar approach as in Section~\ref{s:CR}, we compute an approximate solution $\bar{a}$ of $F=0$, define the linear operators $A^\dagger$ and $A$, and compute the bounds $Y_0$, $Z_0$, $Z_1$ and $Z_2(r)$ satisfying \eqref{eq:general_Y_0}, \eqref{eq:general_Z_0}, \eqref{eq:general_Z_1} and \eqref{eq:general_Z_2}. The derivation of the detailed expressions for $Y_0$, $Z_0$, $Z_1$ and $Z_2(r)$ is omitted, as this analysis is analogous to Section~\ref{s:CR}; see also~\cite{OK2D} for a similar (but more involved) problem in two space dimensions. Defining the radii polynomial $p(r)$ as in \eqref{eq:general_radii_polynomial}, if there is an $r_0>0$ such that $p(r_0)<0$, then there exists a unique $\tilde{a}$ with $\| \tilde{a} - \bar{a}\| < r_0$ such that $F(\tilde{a}) = 0$. After having obtained computer-assisted proofs of solutions $\tilde{a}$ of $F=0$, we use the theory of Section~\ref{sec:rig_comp_spectrum} and Section~\ref{sec:rig_comp_relative_indices} to compute their relative indices. Using this approach, we proved the existence of the solutions, and computed the relative indices, depicted in Figure~\ref{f:TW}. All proofs used weight $\nu=1+10^{-8}$ and truncation dimension $m=20$ in Fourier space. The code {\tt proveall12.m}, available at~\cite{codes_webpage}, performs all the computations with interval arithmetic.
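The final step of such a proof can be mimicked in a few lines. The sketch below assumes the common convention $p(r) = Z_2\, r^2 + (Z_0+Z_1-1)\, r + Y_0$, with the $r$-dependent bound $Z_2(r)$ replaced by a constant upper bound $Z_2>0$; the exact form of \eqref{eq:general_radii_polynomial} may differ in detail, and the numerical bounds used below are made up for illustration (the actual proofs evaluate these bounds with interval arithmetic).

```python
import math

def radii_polynomial(r, Y0, Z0, Z1, Z2):
    """p(r) = Z2*r**2 + (Z0 + Z1 - 1)*r + Y0 (one common convention)."""
    return Z2 * r**2 + (Z0 + Z1 - 1.0) * r + Y0

def certification_interval(Y0, Z0, Z1, Z2):
    """Open interval of radii r with p(r) < 0, or None if the proof attempt
    fails; any r0 in this interval certifies a unique zero of F within
    distance r0 of the numerical approximation (assumes Z2 > 0)."""
    b = Z0 + Z1 - 1.0
    disc = b * b - 4.0 * Z2 * Y0
    if b >= 0.0 or disc <= 0.0:
        return None
    return ((-b - math.sqrt(disc)) / (2.0 * Z2),
            (-b + math.sqrt(disc)) / (2.0 * Z2))

# Made-up bounds: a tiny residual Y0 and a modest contraction defect.
iv = certification_interval(Y0=1e-10, Z0=0.05, Z1=0.30, Z2=8.0)
assert iv is not None
r_lo, r_hi = iv
assert radii_polynomial(0.5 * (r_lo + r_hi), 1e-10, 0.05, 0.30, 8.0) < 0.0
```

A small $r_0$ near the lower end of the interval gives a tight enclosure of the solution, while the upper end gives the radius of the uniqueness ball.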
\section{Introduction} Automatic Emotion Recognition is an area of research that has been active for many decades. Researchers are not only trying to mimic human logical-mathematical intelligence, but have for many years also been exploring emotional functions. However, despite many impressive results in terms of predictive ability, automatic methods still have a long way to go when it comes to offering a satisfying model that can capture the complexity, heterogeneity, and subjectivity of the emotional experience. The interest in this topic stems from different concerns. Facial Expression Recognition (FER) has applications in various fields. For instance, given that understanding the emotional state of our interlocutor is essential for deep and rich communication, recognizing emotions from facial expressions can be seen as a step toward richer human-machine interaction. Also, an automatic FER system that offers reliable results on complex emotional states could be a breakthrough in the fields of healthcare \citep{Kashif} and psychology \citep{Tadalagi}. A key element for building a robust machine learning model is the availability and quality of data. In the context of emotion recognition using images of facial expressions, collecting a large amount of data is a manageable task, as proven by the multitude of available FER datasets. The samples in these datasets are collected from different sources. For example, images of faces can be collected from the internet and considered as expressions captured "in-the-wild". The samples can be pre-processed to keep only the facial region~\citep{Barsoum,Mollahosseini}, or one can choose to also incorporate and analyze the context in which the image is captured ~\citep{Kosti}. Other datasets contain images taken in the lab, where subjects are asked to act out facial expressions in a controlled environment ~\citep{Jaffe, Lucey}.
Other differences between datasets can come from the presence of noisy data, the lighting, the encoding of the samples, and the diversity of the faces in terms of age, gender, race, ethnicity, etc. It is also important to emphasize the subjectivity involved in expressing and identifying emotions, which introduces bias and ambiguity \citep{Tran} both in the annotation step and in the interpretation of the results. In fact, one way to build a more accurate and precise emotion recognition model is to combine different types of data that can be informative about the emotional state of a person, such as facial expressions, speech, tone, body posture, physiological signals, etc. In this situation, processing and combining data from different sources creates an additional challenge for the design of a robust model. As a first step toward building such systems, we focus this work on methods that predict emotions from facial expression images only. Three state-of-the-art models for FER tasks have been selected for our experiments. These models differ not only in their architectures but also in the level at which they intervene to improve prediction quality. Our experiments provide a fair comparison of their performance on three datasets that differ in size, in the setting in which the samples are captured, and in the distribution of classes. The paper starts by giving an overview of the three existing facial expression datasets chosen for our experiments: FER+, AffectNet and CK+. Then, the three neural network architectures are described in Section 3, followed by a presentation of the experimental protocol in Section 4. Results are presented and analyzed in the subsequent sections. The code used for the experiments is publicly available at \url{https://anonymous.4open.science/r/FE-rec-0F0B/}.
\section{Dataset Description} \label{db_sec} This section describes the three datasets, FER+, AffectNet and CK+, used for our experiments. \subsection{FER+} FER+, introduced by ~\cite{Barsoum}, is one of the best known and most widely used datasets for FER tasks. FER+ is annotated with 8 emotions: Neutral, Happiness, Surprise, Sadness, Anger, Disgust, Fear, and Contempt, as well as Unknown and NF (Not a Face). The images are collected from the internet and contain only the facial region. They are gray-scale and of size 48x48 pixels, which can be considered very low resolution. To avoid affecting the quality of predictions, we resized the images to 96x96 pixels. FER+ can be considered an improvement of the FER-2013 dataset, introduced by ~\cite{Goodfellow}. While the labeling of FER-2013 was done by only two persons (the authors of the dataset), FER+ took advantage of the increasingly popular scheme of crowdsourcing to collect ground-truth labels, and the images of FER-2013 were relabeled by 10 crowdsourced taggers. Each image is thus given an emotion probability distribution, based on the annotations of the 10 taggers. The FER+ dataset is therefore a multi-labeled set of images that better reflects the ambiguity and diversity of facial emotions. In our single-label classification experiments, the label assigned to a sample is the one with the highest number of votes. If the label with the highest number of votes is Unknown or NF, we exclude the sample from the experiments. \subsection{AffectNet} The AffectNet dataset~\citep{Mollahosseini} provides 291,651 images of faces annotated with 8 categories of emotions: Neutral, Happy, Sad, Surprise, Fear, Disgust, Anger and Contempt. A larger version of AffectNet provides additional annotations, such as None, Uncertain, and No-Face, that were not taken into account in our implementation but are important to ensure the quality of the annotated dataset.
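The FER+ single-label selection rule described above can be sketched in a few lines; the vote counts below are hypothetical, and ties between categories are broken arbitrarily in this sketch.

```python
EXCLUDED = {"Unknown", "NF"}

def majority_label(votes):
    """votes: category -> number of tagger votes (10 in total for FER+).
    Returns the majority emotion, or None when the winning category is
    Unknown or NF, in which case the sample is excluded."""
    winner = max(votes, key=votes.get)
    return None if winner in EXCLUDED else winner

# A clear-cut sample and one dominated by non-face votes (hypothetical counts).
assert majority_label({"Happiness": 7, "Neutral": 2, "Surprise": 1}) == "Happiness"
assert majority_label({"NF": 6, "Neutral": 4}) is None
```

The same per-image vote histograms could alternatively be normalized and used as soft targets, which is how FER+ supports multi-label training.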
The images present in AffectNet were collected from the internet by means of queries such as "joyful girl", "astonished senior", etc. Only the face region was kept, and 12 annotators were asked to annotate the samples. The authors provide the agreement percentages between two annotators over the annotations. It is interesting to note that all classes resulted in agreements ranging from 50\% to 70\%, except Happy with 79.6\%. These percentages are similar to the accuracies reported in the FER literature for recognition models trained on this dataset~\citep{Siqueira,Farzaneh}. The images are in RGB and of size 224x224 pixels. In addition to the discrete emotion categories, AffectNet gives a continuous annotation of the faces in a two-dimensional valence-arousal space. The valence dimension is an indicator of the pleasantness of the emotion, and arousal is a measure of the emotion intensity. These annotations were not used in our experiments, but they provide valuable information that could enable extending our research to emotion recognition with continuous modeling. \subsection{CK+} The Extended Cohn-Kanade, or CK+, dataset~\citep{Lucey} contains 593 sequences of facial expressions captured from 123 subjects in a lab. A sequence starts with a neutral expression and ends with the peak expression, where the facial Action Units (AUs) are coded. Action Units were proposed by ~\cite{Ekman} as a way to model facial expressions by encoding the movements of facial muscles. Of these 593 sequences, only 327 are labeled with a category of emotions: Anger, Contempt, Disgust, Fear, Happy, Sadness and Surprise. The unlabeled sequences are considered not to fit the prototypical definition of the emotions taken into account and are not used for supervised training. The labeling is done by assigning an emotion to the facial expression if one or more AUs are detected, with respect to the Facial Action Coding System manual ~\citep{Ekman}.
Given the small number of available sequences, we chose to take three images from each sequence instead of only the peak expression. This way, the number of samples is increased for each class of emotions. Moreover, the neutral class is created by taking the first frame of each sequence, so that the set of emotions used for CK+ is the same as for the other two datasets. The images are of size either 640x490 or 640x480 pixels. Some are gray-scale while others are RGB. In our experiments, the images were fed to the networks as gray-scale 640x490 pixel arrays. \section{Studied networks} This section describes the methods chosen for our experiments, in order to compare their performance on the previously presented datasets. Each method was chosen because it provides a different approach to improving facial emotion recognition; the availability of implementations of these models was also an important factor. \subsection{ESR} In the paper titled “Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks”, ~\cite{Siqueira} introduced the Ensembles with Shared Representations (ESR) network. Ensemble learning combines the predictions obtained by different classifiers that are usually trained in an individual and independent manner, hence producing a more accurate output than a single model would. The ESR network makes use of the translation-invariance property of the patterns learned by each convolutional layer. In fact, the patterns learned in the early layers of the network (low-level features) can be considered as common to all the images that the network might encounter. These patterns can be oriented lines, edges, or colors. As we go deeper into the layers, the patterns that the network learns become more complex and specific to each image. These features, in the case of facial expressions, could be the shape of the eyes, of the mouth, and of the nose.
Therefore, in ESR we find two main building blocks: \begin{enumerate} \item The base of the network: a line-up of convolutional layers responsible for learning the low-level features. As mentioned before, patterns learned at this level of the network are very general and can therefore be shared by multiple branches. This base uses a transfer learning mechanism that speeds up the learning process as the ensemble grows, while improving the performance, since the best configuration from the training of each branch is reloaded as the base. \item The independent convolutional branches: a branching of the convolutional layers allows each branch to learn its own individual high-level features. \end{enumerate} Finally, the optimization of the network consists in minimizing a loss function that combines the loss functions of the individual branches. In the case of in-the-lab databases, the ESR network contains only 4 independent convolutional branches, as the input data is expected to be of good quality (good lighting, adjusted head pose, etc.). By contrast, when dealing with in-the-wild datasets, the number of branches is increased to 9. \subsection{SCN} In the Self-Cure Network~\citep{Wang}, the authors addressed the problem of uncertainty that comes with the labeling of facial expression datasets. One of the reasons behind this uncertainty is the subjectiveness of categorizing the human emotional experience. Furthermore, in datasets whose images are captured in-the-wild, the uncontrolled setting is definitely a source of inconsistency and uncertainty. To overcome this problem, the SCN network introduces a relabeling step before the recognition task, in order to perform feature learning that is robust to uncertainty. First, the features are extracted from the images using a “backbone” convolutional network, which can be any traditional CNN.
In the implementation provided by the authors, as well as in our experiments, the backbone used is ResNet-18~\citep{He} pre-trained on the MS-Celeb-1M face recognition dataset~\citep{Guo}. Second, if a sample is considered uncertain, it is relabeled. The relabeling can be summarized in three steps: \begin{enumerate} \item Self-attention importance weighting: importance weights are computed for each sample using a linear fully-connected layer with a sigmoid activation function. The assigned importance weight reflects the contribution of each image to the classifier training. \item Ranking regularization: the weights are regularized in order to reduce the importance given to uncertain samples. The samples are ranked by importance and two groups are created: the low-importance (30\%) and the high-importance (70\%) images. \item Relabeling: this module assigns a new emotion label (the one with the highest predicted probability produced by the model) to images from the low-importance group. For that purpose, softmax probabilities are used to determine which images are actually incorrectly labeled. If the difference between the highest predicted probability produced by the model and the predicted probability for the label initially given to the sample is higher than a given threshold (set to 0.2 by the authors), then the sample is relabeled according to the highest probability found by the model. \end{enumerate} \subsection{DACL} DACL, or Deep Attentive Center Loss~\citep{Farzaneh}, is a facial expression recognition method that uses an attention mechanism to estimate attention weights correlated with feature importance. In fact, in the training phase, learning irrelevant features is harmful to the performance of the network. Therefore, the authors proposed the integration of a Deep Metric Learning (DML) approach that enhances the learning of discriminative features by the model.
The first step is to feed a convolutional neural network with the input images in order to generate the feature maps, followed by the DACL component, composed of two building blocks: \begin{enumerate} \item Context Encoder Unit: this generates latent representations from the spatial feature maps output by the backbone CNN. All these feature maps represent the context, and the obtained latent feature vector is therefore reduced in dimension and devoid of noise, containing only relevant information from the initial features extracted by the CNN. The linear layer weights are initialized according to the Kaiming initialization~\citep{He_delving}, which is appropriate when using the ReLU activation function, as it helps to capture non-linear relationships between layers. \item Multi-head binary classification module: this component estimates the attention weights from the latent representation output by the Context Encoder Unit. At this point, the problem is treated as a multi-label classification problem, where an attention weight (softmax) is computed for each component of the latent feature vector. \end{enumerate} Finally, these attention weights are used to compute the sparse center loss, which is combined with the loss computed by the CNN backbone to form the final loss. The authors empirically set the contribution of the sparse center loss to 1\% of the final loss. \section{Experimental Setting} In order to fairly compare the performance of the three networks mentioned above on the FER+, AffectNet and CK+ datasets, we followed an identical experimental protocol for training and testing them. A 5-fold cross-validation is used to evaluate the models. In fact, the three datasets are constructed in different ways: FER+ has training, validation and testing subsets, AffectNet has training and validation subsets, and CK+ has no subsets. Therefore, in the case of FER+ and AffectNet the subsets are merged before performing cross-validation.
The dataset is split as follows: 80\% for training, 10\% for validation, and 10\% for test. Table \ref{tab:dbs} shows the distribution of the eight emotions for the three datasets. The training is performed over 60 epochs, with Adam optimizer and a 0.001 learning rate. \begin{table}[ht] \centering \begin{adjustbox}{width=\columnwidth,center} \begin{tabular}{|c | c c c c | c c c c | c c c c|}\hline &\multicolumn{4}{c|}{\textbf{FER+}} &\multicolumn{4}{c|}{\textbf{AffectNet}} &\multicolumn{4}{c|}{\textbf{CK+}} \\\cline{2-13} &Train &Valid &Test &Total &Train &Valid &Test &Total &Train &Valid &Test &Total \\\hline Neutral &8796 &1148 &1052 &10996 &3737 &464 &471 &4672 &262 &28 &37 &327 \\ Happy &7231 &912 &896 &9039 &6283 &756 &815 &7854 &166 &16 &25 &207 \\ Sad &3001 &351 &399 &3751 &1509 &192 &185 &1886 &67 &9 &8 &84 \\ Surprise &3153 &378 &410 &3941 &1043 &147 &113 &1303 &199 &31 &19 &249 \\ Fear &546 &67 &69 &682 &691 &81 &92 &864 &60 &6 &9 &75 \\ Disgust &126 &15 &17 &158 &587 &72 &75 &734 &141 &23 &13 &177 \\ Anger &2125 &253 &278 &2656 &1449 &188 &174 &1811 &108 &14 &13 &135 \\ Contempt &120 &13 &17 &150 &563 &83 &58 &704 &43 &4 &7 &54 \\ \hline Total &25098 &3137 &3138 &31373 &15862 &1983 &1983 &19828 &1046 &131 &131 &1308 \\ \hline \end{tabular} \end{adjustbox} \caption{Class distribution in the three studied datasets.}\label{tab:dbs} \end{table} \subsection{Evaluation Metrics} \label{metrics} This subsection presents the metrics used for model evaluation. We denote by TP and FP the numbers of true positives and false positives, respectively. \paragraph{Accuracy and Balanced Accuracy} The overall accuracy is reported in our results. However, it does not take into account the highly imbalanced label distribution found in the three datasets. For this reason, we report the balanced accuracy, defined as the average of the recall (see further in this section) obtained on each class.
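A minimal sketch of the balanced accuracy as the average of per-class recalls, on a made-up imbalanced example, shows why it differs from plain accuracy:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean over classes of the per-class recall (correct_c / support_c)."""
    correct, support = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        support[t] += 1
        correct[t] += (t == p)
    recalls = [correct[c] / support[c] for c in support]
    return sum(recalls) / len(recalls)

# Imbalanced toy example: plain accuracy is flattered by the majority class.
y_true = ["Happy"] * 8 + ["Fear"] * 2
y_pred = ["Happy"] * 8 + ["Happy", "Fear"]
plain = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
assert plain == 0.9
assert balanced_accuracy(y_true, y_pred) == 0.75  # (1.0 + 0.5) / 2
```

On a perfectly balanced test set the two scores coincide; the gap between them grows with the class imbalance.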
This definition, proposed by~\cite{Mosley}, is equivalent to the most commonly used formula for accuracy where each sample is weighted by the prevalence of its true label. The chosen formula computes an aggregated score of the measurements of the predictive quality for each class independently. \paragraph{Precision and Recall} For our experiments, we compute the precision score, which measures the predictor's ability not to label a negative sample as positive, weighted by the support of the labels. Similarly, the weighted recall score is computed by averaging the recall score for each label after weighting it by the support of the label. \paragraph{F1-score} We also report the F1-score as the harmonic mean of the weighted precision and the weighted recall. \paragraph{AUC ROC} The AUC ROC score reflects the discrimination ability of the classifier between the different classes. In our experiments, we report the prevalence-weighted average of AUC ROC scores computed for each class against all the others. \section {Results} In this section, we present the results of the five-fold cross-validation. First, we evaluate and compare the models on the whole dataset in terms of predictive performance. Second, the models are compared by taking into account their capability to discriminate each emotion class. \subsection{Overall classification} Table \ref{tab:res} reports the computed metrics on the test subsets averaged over the 5 folds. This fairly quantifies the predictive performance of the models while taking into account the imbalance in the datasets. When trained on FER+, DACL shows a better accuracy than ESR and SCN. Balanced accuracy is lower than overall accuracy, which is expected given the imbalanced class distribution in FER+. Also, DACL shows better precision and recall compared to the other models, as well as a higher ability to discriminate between labels (AUC ROC).
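The balanced accuracy and support-weighted recall described in the metrics subsection can be computed directly from a confusion matrix; the following pure-Python helpers are illustrative sketches (not the evaluation code actually used), where `cm[i][j]` counts samples of true class `i` predicted as class `j`:

```python
def balanced_accuracy(cm):
    """Mean of per-class recall (classes with zero support are skipped)."""
    recalls = []
    for i, row in enumerate(cm):
        support = sum(row)
        if support:
            recalls.append(row[i] / support)
    return sum(recalls) / len(recalls)

def weighted_recall(cm):
    """Per-class recall averaged with class-support weights; for
    single-label classification this coincides with overall accuracy."""
    total = sum(sum(row) for row in cm)
    return sum(row[i] for i, row in enumerate(cm)) / total
```

Note that support-weighted recall equals overall accuracy, which is why balanced accuracy serves as the imbalance-aware complement.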
For AffectNet, DACL also shows the best performance in terms of recall, precision and AUC ROC scores. However, ESR achieves the best balanced accuracy, with DACL remaining relatively close. Regarding CK+, ESR shows the best performance, with a high accuracy of 91.5\% and a considerably higher balanced accuracy than the other models. The F1-score is also the highest for ESR on this dataset. However, SCN produces the best AUC ROC score, which shows that it is the best model in terms of discrimination between the categories. \begin{table}[ht] \begin{center} \begin{tabular}{|c | c | c | c | c | c | c | c |} \hline & & acc & bal acc & pr & rec & F1 & AUC ROC \\ [0.5ex] \hline \multirow {6}{*}{FER+} & {ESR} & \makecell{0.857 \\ ± 0.013} & \makecell{0.617 \\ ± 0.029} & \makecell{0.855 \\ ± 0.015} & \makecell{0.857 \\ ± 0.013} & \makecell{0.856 \\ ± 0.014} & \makecell{0.937 \\ ± 0.023} \\\cline{2-8} & {SCN} & \makecell{0.810 \\ ± 0.012} & \makecell{0.520 \\ ± 0.028} & \makecell{0.808 \\ ± 0.010} & \makecell{0.810 \\ ± 0.012} & \makecell{0.809 \\ ± 0.011} & \makecell{0.956 \\ ± 0.002}\\\cline{2-8} & {DACL} & \makecell{\textbf{0.867} \\ ± 0.005} & \makecell{\textbf{0.647} \\ ± 0.018} & \makecell{\textbf{0.863} \\ ± 0.005} & \makecell{\textbf{0.867} \\ ± 0.005} & \makecell{\textbf{0.865} \\ ± 0.005} & \makecell{\textbf{0.973} \\ ± 0.002}\\\cline{2-8} \hline \multirow {6}{*}{AffectNet} & {ESR} & \makecell{0.648 \\ ± 0.002} & \makecell{\textbf{0.439} \\ ± 0.001} & \makecell{0.626 \\ ± 0.006} & \makecell{0.648 \\ ± 0.002} & \makecell{0.637 \\ ± 0.004} & \makecell{0.821 \\ ± 0.002} \\\cline{2-8} & {SCN} & \makecell{0.651 \\ ± 0.002} & \makecell{0.390 \\ ± 0.011} & \makecell{0.622 \\ ± 0.005} & \makecell{0.651 \\ ± 0.002} & \makecell{0.636 \\ ± 0.003} & \makecell{0.894 \\ ± 0.004}\\\cline{2-8} & {DACL} & \makecell{\textbf{0.664} \\ ± 0.016} & \makecell{0.429 \\ ± 0.033} & \makecell{\textbf{0.633} \\ ± 0.017} & \makecell{\textbf{0.664} \\ ±
0.016} & \makecell{\textbf{0.648} \\ ± 0.017} & \makecell{\textbf{0.901} \\ ± 0.007}\\\cline{2-8} \hline \multirow {6}{*}{CK+} & {ESR} & \makecell{\textbf{0.915} \\ ± 0.018} & \makecell{\textbf{0.888} \\ ± 0.034} & \makecell{\textbf{0.922} \\ ± 0.014} & \makecell{\textbf{0.915} \\ ± 0.018} & \makecell{\textbf{0.918} \\ ± 0.016} & \makecell{0.945 \\ ± 0.008} \\\cline{2-8} & {SCN} & \makecell{0.820 \\ ± 0.030} & \makecell{0.703 \\ ± 0.072} & \makecell{0.798 \\ ± 0.043} & \makecell{0.820 \\ ± 0.030} & \makecell{0.808 \\ ± 0.032} & \makecell{\textbf{0.962} \\ ± 0.010} \\\cline{2-8} & {DACL} & \makecell{0.846 \\ ± 0.125} & \makecell{0.790 \\ ± 0.175} & \makecell{0.843 \\ ± 0.154} & \makecell{0.846 \\ ± 0.125} & \makecell{0.844 \\ ± 0.140} & \makecell{0.951 \\ ± 0.051}\\\cline{2-8} \hline \end{tabular} \caption{Average performance metrics computed on the test subsets after the five-fold cross-validation. The value after the (±) symbol represents the standard deviation across five folds. "acc" is accuracy, "bal acc" is balanced accuracy, "pr" is weighted precision, "rec" is weighted recall, "F1" is the F1-score, "AUC ROC" is the Area under the ROC Curve.}\label{tab:res} \end{center} \end{table} \subsection{Emotion-specific classification} To take a closer look at the performance of each model for each of the 8 emotion categories, normalized confusion matrices are shown in figure \ref{fig:conf_mats}. When training on FER+, we find a higher TP value for the classes that are over-represented. For example, a high percentage of images that are labeled "Happy" in the dataset were correctly classified. However, no more than 1.2\% of "Contempt" images were predicted as such by the three models. We can clearly see that, for FER+, the imbalance in the training set impacts the models' learning process. Also, "Contempt" expressions are more often than not misclassified as "Neutral" by all three models.
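The row-normalized, fold-averaged confusion matrices of figure \ref{fig:conf_mats} can be reproduced with a short sketch; the helper names are our own and we assume one raw confusion matrix per fold:

```python
def normalize_rows(cm):
    """Row-normalize so each true-class row sums to 1 (0 rows stay 0)."""
    return [[c / sum(row) if sum(row) else 0.0 for c in row] for row in cm]

def average_confusions(cms):
    """Element-wise mean of the per-fold normalized confusion matrices."""
    norm = [normalize_rows(cm) for cm in cms]
    k, n = len(norm), len(norm[0])
    return [[sum(m[i][j] for m in norm) / k for j in range(n)]
            for i in range(n)]
```

Normalizing before averaging keeps folds with different test-set sizes on an equal footing.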
When training on AffectNet, we can see that "Neutral" and "Happy" have a high classification accuracy for the three models. This is not the case for all other classes, which have a high probability of being misclassified as "Neutral" (and as "Happy" for "Contempt" samples). As the class distribution in AffectNet shows, "Happy" occurs more frequently than "Neutral"; this translates into higher true positive rates for "Happy" than for "Neutral". Finally, when training on CK+, ESR predicts correct labels for over 80\% of the samples for all classes except for "Contempt", which is accurately predicted in 77.1\% of cases. However, this is not the case for SCN and DACL, where a high accuracy is only found for classes that are over-represented in CK+ (Happy, Neutral and Surprise). That said, all three models trained on CK+ still achieve better accuracy on almost all under-represented classes than when trained on the other two datasets. \begin{figure}[H]% \makebox[\textwidth][c]{ \subfloat [\footnotesize \centering Confusion matrices of ESR predictions on FER+(left), AffectNet(middle) and CK+(right)] {{\includegraphics[width=.4\textwidth]{cm/esr_fer_norm.png}} {\includegraphics[width=.4\textwidth]{cm/esr_an_norm.png}} {\includegraphics[width=.4\textwidth]{cm/esr_ck_norm.png}} }% } \makebox[\textwidth][c]{ \subfloat [\footnotesize \centering Confusion matrices of SCN predictions on FER+(left), AffectNet(middle) and CK+(right)] {{\includegraphics[width=.4\textwidth]{cm/scn_fer_norm.png}} {\includegraphics[width=.4\textwidth]{cm/scn_an_norm.png}} {\includegraphics[width=.4\textwidth]{cm/scn_ck_norm.png}} }% } \makebox[\textwidth][c]{ \subfloat [\footnotesize \centering Confusion matrices of DACL predictions on FER+(left), AffectNet(middle) and CK+(right)] {{\includegraphics[width=.4\textwidth]{cm/dacl_fer_norm.png}} {\includegraphics[width=.4\textwidth]{cm/dacl_an_norm.png}} {\includegraphics[width=.4\textwidth]{cm/dacl_ck_norm.png}} }% } \caption{Normalized
confusion matrices, each computed by averaging the confusion matrices obtained over the 5 folds. } \label{fig:conf_mats}% \end{figure} \section {Discussion} \texttt{DACL} provides the best scores when training on FER+ and \texttt{AffectNet}. Both these datasets contain facial expressions captured in-the-wild, where the subjects expressed their emotions in a natural way. This is why samples from FER+ and \texttt{AffectNet} might require a representation that takes into account the characteristics of the whole face region. The Context Encoder unit of DACL arguably helps represent the overall context of the face region in these samples, and the multi-head attention mechanism allows the network to retain the most important information for inferring the emotion. ESR gives the best accuracy and F1-score for CK+. Therefore, we can say that the exploitation of shared representations that is characteristic of ESR proves effective when the samples are taken in a controlled environment and when the expressions are posed and intentional. In figure \ref{fig:conf_mats}, it is noteworthy that training on FER+ induces many errors in which samples labeled "sad" are predicted as "neutral" (using ESR, 59.4\% of "sad" are correctly predicted and 31.9\% are predicted as "neutral"). This is not the case for samples that are labeled "surprise" (using ESR, 87.4\% of "surprise" are correctly classified), despite the fact that both classes, "sad" and "surprise", are represented in the dataset in very close proportions (3751 and 3941 samples respectively). This raises the question of why "sad" expressions are less discernible than "surprise" expressions. However, both ESR and DACL manage to correctly classify 31.9\% of the "disgust" samples, which is an under-represented class. This is not the case for SCN.
In fact, SCN's mechanism for dealing with uncertainty in the annotations of the training set does not translate into better predictions on the test set. As for training on AffectNet, the fact that more samples from other classes are classified as "neutral" than as "happy" suggests that this misclassification stems from the ambiguity of emotions and not necessarily from the class distribution in the dataset. We can say that "neutral" is a safe choice to categorize an emotion when one is somewhat uncertain. Similar to AffectNet, we find that when SCN and DACL are trained on CK+, a high number of misclassified samples are in fact predicted as "neutral". This shows that even a controlled capture setting leads models to fall back on the safe "neutral" label. In our experiments, we chose to compare the performance of three different neural networks, \texttt{ESR}, \texttt{SCN} and \texttt{DACL}. They all have different architectures and different approaches to deal with FER tasks. However, they cannot represent the whole class of methods to which they belong (attention-based, dealing with uncertainty, or ensemble learning). Other architectures for each method class~\citep{Li,Hao} could be studied in order to have a more comprehensive view of their performance in the context of FER applications. The same could be said about datasets, where we can extend our experiments to other datasets~\citep{Jaffe,Li_raf} that share characteristics with the ones we discussed in this paper. \section {Conclusion} This paper compares the performance of three neural networks that have different approaches to tackle FER challenges: the first uses ensemble learning and transfer learning to learn facial emotion patterns, the second addresses the problem of subjectivity and uncertainty when labelling emotions, and the third makes use of an attention mechanism to learn from relevant features.
To evaluate these models, we used three datasets that differ in terms of data collection and capture setting. FER+ and AffectNet are both in-the-wild datasets with facial expressions captured in an uncontrolled setting. The model that uses an attention mechanism provides the best results on images captured in the wild, which was expected, as these images are very noisy and recognizing the emotion would be difficult if the model were not guided to focus on the relevant parts of the images. On CK+, containing images with posed expressions, the model based on ensemble and transfer learning is the one that performs best: the network is built to exploit low-level features, which is suitable in a setting where noisy parameters are mitigated (face in front of the camera with a neutral background). Moreover, our experiments show that models often confuse some emotion classes (e.g., "contempt" and "neutral") in in-the-wild datasets, showing that emotion ambiguity alters the models' discrimination abilities. In our comparative study, many challenges were identified, such as the under-represented emotion classes found in many FER datasets and the ambiguity of facial expressions. Also, extending the experiments to more models and datasets would provide a reliable benchmark for choosing an adapted FER model for the application at hand. \bibliographystyle{rnti}
\section{Introduction} Traditional power grids consume fossil fuels to generate electricity and transmit electricity over long distances, which results in quick depletion of fossil fuel resources and serious environmental pollution. This motivates the study of distributed microgrids (MGs), which can efficiently realize investment deferral \cite{ArmendCoordinated}, local balance \cite{ZhangPeer}, resiliency advancement \cite{RenEnabling}, security reinforcement \cite{Zhu2015Microgrid} and reduce greenhouse gas emissions and energy losses by using renewable energy sources \cite{ZhangCredit}. However, renewable energy generation is stochastic, which may influence energy reliability and quality. Meanwhile, considering the heat demands of users, the combined heat and power (CHP) system, which can efficiently generate both electricity and heat simultaneously by consuming natural gas, is introduced. Energy storage also plays a key role in improving energy reliability by storing extra energy to be used in the future. However, it is neither efficient nor economical for an individual MG to serve its users alone, because of the mismatch between renewable energy generation and electricity demand. Geographically distributed MGs can improve energy reliability and efficiency by sharing energy. However, each MG is selfish and wants to minimize its own cost when sharing energy; an MG can be incentivized to join energy trading only if its benefit is not reduced. This calls for an effective method to carry out energy scheduling and trading among multiple MGs in order to achieve benefit maximization for individual MGs. Several inter-related decisions are involved: (1) Energy pricing: what method should be adopted for energy sale and purchase among multiple MGs, and at what prices? (2) Energy scheduling: with time-varying demand and renewable generation of each MG, should a MG serve the demand using its own energy storage or by trading with other MGs?
When local energy storage and energy trading cannot satisfy the demand, should a MG serve the demand by purchasing energy from utility companies or the CHP system, and is it necessary to exploit time-varying electricity prices? These decisions should be optimally and efficiently made online while guaranteeing each MG's benefits over a long period. Therefore, a joint algorithm for energy scheduling and trading for MGs is designed. A double-auction mechanism is proposed to determine the purchase price and selling price, increase the economic benefits of MGs, and ensure the truthfulness of the information that MGs submit in energy trading. However, owing to the limited battery capacity, the MG might not fully exploit the time-diversity of renewable energy generation. In order to improve the utilization of renewable energy generation, we can introduce hydrogen into the MG and use excess renewable energy to electrolyze water to produce and store hydrogen in hydrogen storage tanks. Fuel cell vehicles can convert hydrogen into electricity to supply the MG's energy demand when the MG is short of energy, and fuel cell vehicles can be used for transportation. We introduce hydrogen for the following reasons. First, for the same size of energy storage, hydrogen storage can provide larger amounts of energy than batteries and can be filled in a few minutes. A number of facilities that integrate renewable energy and energy storage are under operation all over the world and most of them use hydrogen for energy storage in both stand-alone and grid-tied power generation systems \cite{KyriakarakosPolygeneration}. In these facilities, the hydrogen storage system is often coupled with a battery bank for short-term energy storage, thus achieving a hybrid poly-generation system.
Proper integration of hydrogen storage systems and batteries increases bus stability and enhances the management of intermittent power peaks and transient loads \cite{Little2007Electrical}. Second, the entire electricity-hydrogen conversion process only utilizes water and is carbon free. Last but not least, hydrogen can be purchased from a hydrogen-producing company and used for the transportation of fuel cell vehicles. Fuel cell vehicles are particularly suited to provide spinning reserves and peak power to the grid \cite{Lipman2004Fuel}. In contrast to plug-in electric vehicles, fuel cell vehicles can be operated continuously and have very low emissions \cite{Lipman2004Fuel}. Hydrogen, as a clean energy carrier with high calorific value, is attracting wide attention. Therefore, the car as a power plant (CaPP) \cite{Wijk2014Our} is presented to introduce a controllable energy system, which uses fuel cell vehicles as dispatchable power plants \cite{Fernandes2016Fuel}. Considering that the average driving time of vehicles is less than 10\% of the whole day, vehicles, when parked, can generate electricity from hydrogen in a cleaner way than other power systems, and there is a huge potential for fuel cell vehicles to replace traditional power plants or reduce the number of new plants in the future. Therefore, the synergies between hydrogen and electricity can be explored to increase the benefits of MGs. In particular, the main contributions of this paper are as follows: \begin{itemize} \item A multi-energy management framework that includes fuel cell vehicles, energy storage, CHP system, and renewable energy is proposed. The synergies between hydrogen and electricity can further improve the local absorption of the excess renewable energy and the economic benefits of MGs.
\item A joint energy scheduling and trading algorithm based on Lyapunov optimization and a double-auction mechanism is designed to optimize the long-term energy cost of each MG. \item Through theoretical analysis, the proposed algorithm is shown to achieve a better trade-off among energy trading cost, energy storage and users' satisfaction. Moreover, by using practical data sets, the effectiveness of the proposed algorithm is verified. \end{itemize} In the rest of the paper, Section II introduces related works. Section III describes the system model and cost functions. Then, Section IV proposes a joint algorithm based on Lyapunov optimization and a double-auction mechanism for the energy scheduling and trading problem, and proves the theoretical performance of this algorithm. Section V shows the numerical results, and Section VI concludes the paper. \section{Related Works} Energy sharing is a way to reduce the imbalance of supply and demand of MGs and improve the local consumption of renewable energy. A number of research efforts have been conducted. In \cite{2}, it is shown that energy sharing allows participants to exchange energy in order to lower reliance on the utility company. In \cite{3}, the authors demonstrate that the development of peer-to-peer energy sharing offers significant advantages to prosumers in both earning revenues and reducing energy costs. In \cite{4}, because of stochastic renewable energy generation, nanogrids form a cluster that shares renewable energy. In \cite{5}, a real-time demand response model is presented to assist the energy sharing provider, which maximizes the energy sharing provider's utility. { In \cite{Chen2018Analyzing}, energy trading and sharing schemes for multiple energy hubs are proposed to increase system flexibility and reduce the cost of the system.
} However, owing to the randomness of renewable energy, it is difficult to schedule renewable energy sharing among multiple MGs and investigate the economic aspect. There are two types of market-based models that are applicable for resource management of energy sharing. The first one is the market model, where resource owners decide the price based on users' demands using a game approach. For the first type, two different models are proposed: 1) the prosumers decide the price of energy together \cite{6,8,Chen2019An}; 2) a leader--follower structure decides the price \cite{7,9,Motalleb2019Networked}. Liu et al. \cite{6} formulate a dynamical internal pricing model for the energy sharing of prosumers who decide the price of energy. Lu et al. \cite{8} establish an informative game vector to perform price-based energy interactions among multiple MGs that decide the price. Chen et al. \cite{Chen2019An} propose a novel energy sharing game for prosumers to determine the role of buyer or seller and the sharing price. Liu et al. \cite{7} propose a Stackelberg game approach, in which the MG operator acts as the leader and prosumers act as followers to decide the price together. Tushar et al. \cite{9} formulate a non-cooperative Stackelberg game to capture the interaction between the shared facility controller and the residential units that decide the price of energy to minimize the cost. Motalleb et al. \cite{Motalleb2019Networked} propose a networked Stackelberg competition among firms to determine their optimal bids for the price of a market transaction. The second one is the auction model, where every player acts independently and agrees privately on the price. According to the type of interactions between buyers and sellers, auctions can be divided into two classes: one-sided auction \cite{11} and two-sided auction \cite{10}. The auction mechanism helps players benefit from cooperation and energy trading with little global information.
The auction mechanism can make every player autonomously share the energy and automatically guarantee the truthfulness of the energy information. Therefore, an auction mechanism is used to determine the price of energy sharing in this study. Energy storage and CaPP are also effective ways to reduce the imbalance of supply and demand in MGs, and improve the local consumption of renewable energy. In \cite{Huang2013Optimal}, Huang et al. develop a low-complexity algorithm with energy storage management to minimize the average cost of a power-consuming entity. In \cite{Gatzianas2010Control}, Gatzianas et al. explicitly take the actual energy storage into account and construct an algorithm for energy management based on the Lyapunov optimization technique. In \cite{Gayme2011Optimal}, Gayme et al. investigate distributed energy storages and illustrate their effects using an example along with time-varying demand profiles. In \cite{Good2019A}, Good et al. propose an aggregation modeling method for multi-energy conversion, storage, and demand to take advantage of distributed energy flexibility and provide multiple services. The scheduling of vehicles and electrolyzers is the main aspect to be considered in the operational control of the CaPP. Centralized optimization approaches, such as minimizing operating costs \cite{Battistelli2013Generalized} or power losses \cite{Khodr2012Intelligent}, are used to address the scheduling problem of vehicles. In \cite{Shinoda2016Optimization}, the scheduling problem in the MG among renewable energy sources (RES), electrolyzer, and vehicle-to-grid (V2G) power aims to minimize the power purchased from the grid.
In \cite{Jaramillo2016Optimal}, a multi-objective mixed-integer linear programming model is proposed for the scheduling in a grid-connected MG, and the startup constraints of the alkaline electrolyzer are explicitly modeled. In \cite{Chiesa2011Dynamic}, the electrolyzer levels out voltage fluctuations in a weak grid and improves the power quality of the MG based on a dynamic electrolyzer model. However, the existing works do not consider the coordinated operation and multi-energy demands of MGs after introducing hydrogen storage and fuel cell vehicles. In this paper, a multi-energy management framework including fuel cell vehicles, energy storage, CHP system, and renewable energy is proposed. The characteristics and scheduling arrangements of fuel cell vehicles are considered to further improve the local absorption of renewable energy and enhance the economic benefits of MGs. This paper designs a joint energy scheduling and trading algorithm based on Lyapunov optimization and a double-auction mechanism. A dynamic and computationally efficient energy scheduling and trading algorithm is developed that determines the valuations of energy in the auction, optimally schedules energy distribution, and strategically purchases and sells energy at the current electricity prices. The implementation of the algorithm only depends on the current system states, without the need for any a priori information. Finally, simulations based on real data are conducted to investigate the performance of the multi-energy management framework and demonstrate the effectiveness of the proposed algorithm. \section{System Model} This paper considers a system consisting of $n$ interconnected MGs, an electricity utility company, a gas utility company, and a hydrogen-producing company. Each MG is equipped with renewable energy, CHP system, fuel cell vehicles, battery, hydrogen storage, boiler, and water tank, as shown in Fig.~\ref{fig1}.
MGs can harvest renewable energy, such as wind and solar power. Fuel cell vehicles can generate electricity by consuming hydrogen. The CHP system can consume natural gas to generate electricity, and at the same time, the generated heat follows its electricity production with fixed ratios. In addition, each MG can store extra energy for future demand. \begin{figure*} \centering \begin{minipage}[c]{1\textwidth} \centerline{\includegraphics[width=\textwidth]{mgg1.jpg}} \end{minipage} \caption{Energy flows of system.} \label{fig1} \end{figure*} \subsection{Energy Purchase} MG $i$ harvests $N_{i}(t)$ units of energy generated by renewable energy sources during one time slot. Here, one time slot is set to one hour, consistent with the simulation. The electricity utility company uses fossil energy to generate electricity, so its generation within one time slot is effectively unconstrained; hence, no generation constraints are imposed on the electricity utility company. The same assumption is applied to the gas utility company and the hydrogen-producing company. MG $i$ purchases $E_{i}(t)$ units of energy from the electricity utility company at price $p_{e}(t)$. From the gas utility company, MG $i$ purchases $P_{i}^{CHP}(t)$ and $H_{i}^{CHP}(t)$ units of gas to generate $\eta_{pg}P_{i}^{CHP}(t)$ units of electricity and $\eta_{hg}H_{i}^{CHP}(t)$ units of hot water by the CHP system at time slot $t$. $\eta_{pg}$ and $\eta_{hg}$ are the conversion efficiencies of the CHP system from natural gas to electricity and heat, respectively. Moreover, MG $i$ purchases $H_{i}^{b}(t)$ units of gas to produce $\eta_{bg}H_{i}^{b}(t)$ units of hot water by the boiler at time slot $t$. $\eta_{bg}$ is the conversion efficiency of the boiler from natural gas to heat. The price of the gas is $p_{g}(t)$. When there is not enough hydrogen for fuel cell vehicles, MG $i$ will purchase $d_{i}(t)$ units of hydrogen from the hydrogen-producing company at price $p_{y}(t)$.
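As an illustrative sketch of the purchase quantities and conversions above (a toy in Python under the assumption that purchase costs are linear in the amounts bought; the function names are our own, not the paper's formal cost functions):

```python
def purchase_cost(E, P_chp, H_chp, H_b, d, p_e, p_g, p_y):
    """Per-slot purchase cost of MG i: electricity at p_e(t), natural gas
    (CHP electricity input, CHP heat input, boiler input) at p_g(t),
    and hydrogen at p_y(t)."""
    return p_e * E + p_g * (P_chp + H_chp + H_b) + p_y * d

def chp_outputs(P_chp, H_chp, eta_pg, eta_hg):
    """Electricity and hot-water output of the CHP from purchased gas:
    eta_pg * P_chp units of electricity, eta_hg * H_chp units of heat."""
    return eta_pg * P_chp, eta_hg * H_chp
```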
\subsection{Energy Demands} {\color{black}{MG $i$ needs to meet the electricity $L_{ie}(t)$, hydrogen $L_{iy}(t)$, and heat $L_{ih}(t)$ demands. Although these demands are stochastic, they still need to be met quickly and precisely.}} \subsubsection{Electricity Demands} First, MG $i$ uses renewable energy to meet its users' electricity demands, $L_{ie}(t)$. If $N_{i}(t)>L_{ie}(t)$, extra renewable energy can be used for energy storage, water electrolysis, and energy trading. Otherwise, MG $i$ uses all renewable energy to serve its loads. The unsatisfied electricity loads are expressed as $L_{ie}(t)-N_{i}(t)$ and are served by the following methods: \begin{itemize} \item Discharge the battery. MG $i$ can draw $D_{ie}(t)$ units of electricity from the battery to serve unsatisfied electricity loads. \item Generate electricity using hydrogen. Fuel cell vehicles can use hydrogen to generate $\eta_{f}hY_{if}(t)$ units of electricity. \item Generate electricity using the CHP system. The CHP system can consume natural gas to generate $\eta_{pg}P_{i}^{CHP}(t)$ units of electricity to meet electricity demands. \item Purchase electricity by energy trading. MG $i$ may acquire $X_{i}(t)$ units of electricity by trading with other MGs. \item Purchase electricity from the electricity utility company. MG $i$ can purchase $E_{i}(t)$ units of electricity from the electricity utility company. \end{itemize} \subsubsection{Hydrogen Demands} First, vehicle $l_i$ uses $Y_{il}(t-1)$ units of hydrogen stored in the vehicle to meet its driving demands $h_{il}(t)$, which can be estimated from historical data. If $Y_{il}(t-1)>h_{il}(t)+Y_{il,min}$, the vehicle can drive normally. If $Y_{il}(t-1)\leq h_{il}(t)+Y_{il,min}$, the vehicle $l_i$ uses all hydrogen in the vehicle for driving. The hydrogen deficit is covered by MG $i$ or purchased from the hydrogen-producing company. MG $i$ purchases $d_{i}(t-1)$ units of hydrogen to meet the total hydrogen demand $L_{iy}(t)$ of all vehicles at time slot $t$.
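The per-vehicle hydrogen dispatch rule above, with the threshold $Y_{il}(t-1)>h_{il}(t)+Y_{il,min}$, can be sketched as follows (an illustrative helper of our own, returning the hydrogen drawn from the on-board tank and the remaining deficit to be covered by the MG or by purchase):

```python
def vehicle_hydrogen_for_driving(Y_prev, h_need, Y_min):
    """Hydrogen a vehicle draws from its own tank for driving, plus the
    deficit to be covered externally, per the threshold rule."""
    if Y_prev > h_need + Y_min:
        return h_need, 0.0              # drive normally on stored hydrogen
    used = min(Y_prev, h_need)          # otherwise use all on-board hydrogen
    return used, h_need - used          # deficit: MG supply or purchase d
```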
\subsubsection{Heat Demands} MG $i$ uses the hot water stored in the water tank to meet its heat demands. If this water cannot meet the heat demands, MG $i$ uses both the CHP system and the boiler to produce $\eta_{hg}H_{i}^{CHP}(t)+\eta_{bg}H_{i}^{b}(t)$ units of hot water to meet its heat demands $L_{ih}(t)$ at time slot $t$. \subsection{Dynamic Model for Energy Storages} Each MG has a battery that can store extra electricity generated from renewable sources, and a hot water tank to supply hot water. Meanwhile, hydrogen storage is introduced, and the dynamic model for the three types of energy storages is considered. For MG $i$, the electricity of the battery, hydrogen of the storage, and equivalent thermal energy of the hot water tank are denoted by $B_{i}(t)$, $Y_{i}(t)$, and $W_{i}(t)$ at the end of one time slot, respectively. The electricity, hydrogen, and equivalent thermal energy are charged in the amounts of $C_{ie}(t)$, $C_{iy}(t)$, and $C_{ih}(t)$, and discharged in the amounts of $D_{ie}(t)$, $D_{iy}(t)$, and $D_{ih}(t)$, respectively. Then, the energy storage dynamics can be obtained as: \begin{equation} B_{i}(t+1)=B_{i}(t)+C_{ie}(t)-D_{ie}(t) \label{A1} \end{equation} \begin{equation} Y_{i}(t+1)=Y_{i}(t)+C_{iy}(t)-D_{iy}(t) \label{Y1} \end{equation} \begin{equation} W_{i}(t+1)=W_{i}(t)+C_{ih}(t)-D_{ih}(t) \label{A3} \end{equation} where $C_{iy}(t)$ denotes the amount of hydrogen injected into hydrogen storage, which is generated by the electrolyzer during one time slot. $\frac{hC_{iy}(t)}{\eta_e}$ is the energy consumed by the electrolyzer during one time slot, $\eta_e$ is the conversion efficiency of the electrolyzer from electricity to hydrogen, and $h$ is the heating value of hydrogen, which is $1.4 \times 10^{8}$~J/kg. Hydrogen needs to be compressed and stored. The compression energy is $c_1C_{iy}(t)$, and $c_1$ is the specific energy consumption of the compressor.
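The storage dynamics (\ref{A1})--(\ref{A3}) and the electrolyzer energy accounting can be sketched as follows (illustrative helpers; the default heating value $h$ and the linear compression term $c_1 C_{iy}(t)$ follow the text):

```python
def storage_step(B, Y, W, Ce, De, Cy, Dy, Ch, Dh):
    """One-slot update of battery B, hydrogen storage Y, and hot-water
    tank W, following X(t+1) = X(t) + C(t) - D(t) for each storage."""
    return B + Ce - De, Y + Cy - Dy, W + Ch - Dh

def electrolyzer_power(Cy, eta_e, h=1.4e8, c1=0.0):
    """Electricity consumed to produce and compress Cy kg of hydrogen:
    h*Cy/eta_e for electrolysis plus c1*Cy for compression."""
    return h * Cy / eta_e + c1 * Cy
```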
To be specific, the operations of the battery, hydrogen storage, and hot water tank of MG $i$ are subject to several constraints. First, electricity charging and discharging will not happen simultaneously: \begin{equation} 1_{C_{ie}(t)>0}+1_{D_{ie}(t)>0}\leq{1}, \label{cd1} \end{equation} where \begin{center} $1_{f(x)>0}=\left\{\begin{array}{cc} 1 & \mbox{if}\ f(x)>0\\ 0 & \mbox{otherwise} \end{array}\right.$ \end{center} The battery, hydrogen storage, and water tank of MG $i$ have finite capacities: \begin{equation} 0\leq B_{i}(t) \leq B_{i,max} \label{Bm} \end{equation} \begin{equation} 0\leq Y_{i}(t) \leq Y_{i,max} \label{Ym} \end{equation} \begin{equation} 0\leq W_{i}(t) \leq W_{i,max} \label{Wm} \end{equation} where $B_{i,max}$, $Y_{i,max}$ and $W_{i,max}$ are the upper bounds of the battery, the hydrogen storage, and the hot water tank's thermal energy. There are maximum electricity, hydrogen, and equivalent thermal energy charging amounts $C_{ie,max}$, $C_{iy,max}$, $C_{ih,max}$ and discharging amounts $D_{ie,max}$, $D_{iy,max}$, $D_{ih,max}$ during one time slot. Thus, the charging and discharging constraints of the energy storage are denoted by \begin{equation} 0\leq C_{ie}(t) \leq C_{ie,max}, 0\leq D_{ie}(t) \leq D_{ie,max} \label{Cem} \end{equation} \begin{equation} 0\leq C_{iy}(t) \leq C_{iy,max}, 0\leq D_{iy}(t) \leq D_{iy,max} \label{Cym} \end{equation} \begin{equation} 0\leq C_{ih}(t) \leq C_{ih,max}, 0\leq D_{ih}(t) \leq D_{ih,max} \label{Chm} \end{equation} The feasible control decisions on $C_{ie}(t)$ and $D_{ie}(t)$ should meet constraints (\ref{cd1}), (\ref{Bm}), and (\ref{Cem}) simultaneously. {\color{black}{Since electricity charging and discharging will not happen simultaneously, the energy level of the battery cannot exceed the capacity of the battery, which means that $B_{i}(t)+C_{ie}(t)\leq B_{i,max}$. Meanwhile, the energy level of the battery cannot be negative, which means $B_{i}(t)-D_{ie}(t) \geq 0$.
}}Therefore, the charging and discharging constraints of the battery are denoted as: \begin{equation} 0\leq{C_{ie}(t)}\leq{\min[B_{i,max}-B_{i}(t),C_{ie,max}]} \label{C1} \end{equation} \begin{equation} 0\leq{D_{ie}(t)}\leq{\min[B_{i}(t),D_{ie,max}]} \label{C2} \end{equation} \subsection{Dynamic Model for Fuel Cell Vehicles} Because fuel cell vehicles can act as controllable power plants, they are introduced here and a dynamic model for them is considered. The model includes the transportation features and power generation of fuel cell vehicles. The transportation features are information about the departure time, arrival time, and driving distance of each vehicle, which can be estimated. Power generation is determined by the transportation features and hydrogen storage of the vehicles. The hydrogen in vehicle $l_i$ at the end of one time slot is $Y_{il}(t)$. The number of vehicles in MG $i$ is $L_{i}$. Then, the model of fuel cell vehicle $l_i$ is as follows: \begin{equation} Y_{il}(t+1)=\left\{\begin{array}{cc} Y_{il}(t)+D_{iyl}(t)+d_{il}(t) & \mbox{injection} \\ Y_{il}(t)-Y_{ifl}(t) & \mbox{generation} \\ Y_{il}(t)-h_{il}(t) & \mbox{driving} \end{array}\right. \label{Yc} \end{equation} \begin{equation} \begin{split} \sum_{l=1}^{L_i}D_{iyl}(t)=D_{iy}(t); \sum_{l=1}^{L_i}d_{il}(t)=d_{i}(t)\\ \sum_{l=1}^{L_i}Y_{ifl}(t)=Y_{if}(t); \sum_{l=1}^{L_i}h_{il}(t)=h_{i}(t)\\ \end{split} \label{DdYh} \end{equation} \begin{equation} h_{il}(t)=\eta_d h_{ild}(t) \label{hil} \end{equation} The model in (\ref{Yc}) is a hybrid piecewise affine model with three modes. The injection mode denotes that the vehicle is being injected with hydrogen. The generation mode represents that the vehicle is available for power generation. The driving mode denotes that the vehicle is driving. The three modes will not happen simultaneously. $D_{iyl}(t)+d_{il}(t)$ is the hydrogen injected into vehicle $l_i$ at time slot $t$.
Fuel cell vehicle $l_i$ obtains hydrogen $D_{iyl}(t)$ from MG $i$ and purchases hydrogen $d_{il}(t)$ from the hydrogen station of the hydrogen-producing company. $Y_{ifl}(t)$ is the hydrogen consumed for generation by vehicle $l_i$ at time slot $t$. The power generated by fuel cell vehicle $l_i$ is denoted as $\eta_f h Y_{ifl}(t)$, where $\eta_f$ is the conversion efficiency of the fuel cell from hydrogen to electricity. $h_{il}(t)$ is the hydrogen used for transportation by vehicle $l_i$ at time slot $t$, $h_{ild}(t)$ is the travel distance, and $\eta_d$ is the amount of hydrogen each vehicle consumes per kilometer. For fuel cell vehicle $l_i$, there are maximum amounts of hydrogen storage $Y_{il,max}$, hydrogen injection $D_{iyl,max}$ and $d_{il,max}$, hydrogen consumed for generation $Y_{ifl,max}$, and hydrogen used for transportation $h_{il,max}$ during one time slot: \begin{equation} 0\leq Y_{il}(t)\leq Y_{il,max} \label{Yil} \end{equation} \begin{equation} 0\leq D_{iyl}(t)\leq D_{iyl,max}, 0\leq d_{il}(t)\leq d_{il,max} \label{Yi1} \end{equation} \begin{equation} 0\leq Y_{ifl}(t)\leq Y_{ifl,max} \label{Yi2} \end{equation} \begin{equation} 0\leq h_{il}(t)\leq h_{il,max} \label{Yi3} \end{equation} \subsection{Cost Function} The cost function of MG $i$ consists of the payments and revenues, which is denoted as \begin{equation} \begin{aligned} C_{i}(t)&=C_{ihy}(t)+C_{ip}(t)+C_{ig}(t)+C_{iX}(t)-R_{iS}(t)-R_{ip}(t) \end{aligned} \label{micro1} \end{equation} \begin{equation} \begin{aligned} C_{ihy}(t)&=p_{y}(t)d_{i}(t), C_{ip}(t)=E_{i}(t)p_{e}(t) \\ C_{ig}(t)&=(P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t) \\ C_{iX}(t)&=\beta_i(t)X_{i}(t), R_{iS}(t)=\alpha_i(t)S_{i}(t), R_{ip}(t)=E_{io}(t)p_{eo}(t) \end{aligned} \end{equation} where $C_{ihy}(t)$ is the cost of the hydrogen purchased from the hydrogen-producing company by all vehicles of MG $i$ at time slot $t$.
$C_{ip}(t)$ and $C_{ig}(t)$ are the costs of purchasing electricity and gas from the electricity and gas utility companies at time slot $t$. $C_{iX}(t)$ and $R_{iS}(t)$ are the cost of purchasing electricity from other MGs and the revenue of selling electricity to other MGs in energy trading at time slot $t$. $R_{ip}(t)$ is the revenue from selling electricity to the electricity utility company at time slot $t$. $p_{y}(t)$ is the hydrogen price of the hydrogen-producing company. $d_{i}(t)$ is the amount of hydrogen purchased from the hydrogen-producing company by all vehicles of MG $i$ at time slot $t$. $E_{i}(t)$ is the amount of electricity purchased from the electricity utility company by MG $i$ at time slot $t$. When MG $i$ purchases electricity from other MGs, $\beta_i(t)$ is the purchase price and $X_{i}(t)$ is the amount of electricity at time slot $t$. When MG $i$ sells electricity to other MGs, $\alpha_i(t)$ is the selling price of MG $i$ and $S_{i}(t)$ is the amount of electricity at time slot $t$. $E_{io}(t)$ and $p_{eo}(t)$ are the amount and price of electricity sold to the electricity utility company by MG $i$ at time slot $t$. Note that the electricity demand $L_{ie}(t)$, hydrogen demand $L_{iy}(t)$, and heat demand $L_{ih}(t)$ of MG $i$ should be satisfied when they arrive, i.e., \begin{equation} \begin{aligned} L_{ie}(t)&=E_{i}(t)+N_{i}(t)+X_{i}(t)-S_{i}(t) -C_{ie}(t)+D_{ie}(t)\\&+\eta_f h Y_{if}(t)-\frac{hC_{iy}(t)}{\eta_e}-c_1C_{iy}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)\\ L_{iy}(t)&=h_{i}(t)\\ L_{ih}(t)&=\eta_{hg}H_{i}^{CHP}(t)+\eta_{bg}H_{i}^b(t)-C_{ih}(t)+D_{ih}(t) \end{aligned} \label{lg} \end{equation} \section{Solution Methodology} \subsection{Optimization Method} The strategy set of MG $i$ is $\boldsymbol{M}_{i}(t)$=\{$C_{ie}(t)$, $D_{ie}(t)$, $C_{iy}(t)$, $D_{iy}(t)$, $C_{ih}(t)$, $D_{ih}(t)$, $D_{iyl}(t)$, $d_{il}(t)$, $Y_{ifl}(t)$, $h_{il}(t)$, $E_{i}(t)$, $P^{CHP}_{i}(t)$, $H^{CHP}_{i}(t)$, $H_{i}^{b}(t)$, $X_{i}(t)$, $S_{i}(t)$, $E_{io}(t)$\}.
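The per-slot cost (\ref{micro1}) is a plain sum of payments minus revenues; a minimal sketch (illustrative Python, with argument names of our choosing):

```python
def mg_cost(p_y, d_i, E_i, p_e, P_chp, H_chp, H_b, p_g,
            beta_i, X_i, alpha_i, S_i, E_io, p_eo):
    """Per-slot cost of MG i: payments minus revenues."""
    C_hy = p_y * d_i                    # hydrogen purchased for the vehicles
    C_p = E_i * p_e                     # electricity from the utility company
    C_g = (P_chp + H_chp + H_b) * p_g   # gas for the CHP system and boiler
    C_X = beta_i * X_i                  # electricity bought from other MGs
    R_S = alpha_i * S_i                 # electricity sold to other MGs
    R_p = E_io * p_eo                   # electricity sold to the utility
    return C_hy + C_p + C_g + C_X - R_S - R_p
```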
According to the system model, the optimization problem of MG $i$ is to find a control policy that schedules the electricity, hydrogen, and heat at each time slot to minimize the time-average energy cost, which can be denoted as a stochastic network optimization problem: \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} \lim_{T \rightarrow \infty}\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{C_{i}(t)\} \end{aligned} \label{eq30} \end{equation} subject to (\ref{A1}) - (\ref{Yi3}) and (\ref{lg}). {\color{black}{Lyapunov optimization gives simple online solutions based on the current information of the system state, as opposed to traditional approaches such as Markov decision processes and dynamic programming, which have very high computational complexity and require a priori information about all the random processes in the system. The performance of the Lyapunov optimization algorithm can be made arbitrarily close to the optimal value \cite{Lakshminarayana2014Cooperation}. The underlying assumption about the availability of future information renders offline approaches ill-suited for energy storage system applications with high uncertainty, whereas dynamic programming solutions are impractical for multiple networked energy storage systems \cite{Sarthak2018Optimal}.}} The time-average expected values under any feasible control policy of the original problem are denoted as follows: \begin{equation} \begin{split} \overline{C_{ie}}&=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{C_{ie}(t)\}, \overline{D_{ie}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{ie}(t)\}\\ \overline{C_{iy}}&=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{C_{iy}(t)\} , \overline{D_{iy}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{iy}(t)\}\\ \overline{C_{ih}}&=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{C_{ih}(t)\}, \overline{D_{ih}} =\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{ih}(t)\}\\
\overline{D_{iyl}}&+\overline{d_{il}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{iyl}(t)+d_{il}(t)\}\\ \overline{Y_{ifl}}&+\overline{h_{il}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{Y_{ifl}(t)+h_{il}(t)\} \end{split} \label{sto1} \end{equation} The above stochastic network optimization problem (\ref{eq30}) cannot be solved directly owing to the capacity constraints of the battery, hydrogen storage, and water tank (\ref{Bm}) - (\ref{Wm}) of MG $i$ and the hydrogen storage (\ref{Yil}) of fuel cell vehicle $l_i$. To be specific, stochastic network optimization can ensure that the average energy consumption equals the average energy generation over a long period, but cannot provide a hard constraint on the difference between consumption and generation at any time slot. In order to solve this issue, the problem is relaxed as follows: the optimization problem (\ref{eq30}) is subject to \begin{equation} \begin{split} \overline{C_{ie}} &= \overline{D_{ie}}\\ \overline{C_{iy}} &= \overline{D_{iy}}\\ \overline{C_{ih}} &= \overline{D_{ih}}\\ \overline{D_{iyl}}+\overline{d_{il}} &= \overline{Y_{ifl}}+\overline{h_{il}}\\ \end{split} \label{cd} \end{equation} and (\ref{Cem}) - (\ref{Chm}), (\ref{DdYh}), (\ref{hil}), (\ref{Yi1}) - (\ref{Yi3}), (\ref{lg}). Let $C_{i}^{opt}$ denote the optimal value of the cost function for the original problem and $C_{ir}^{opt}$ denote the optimal value of the cost function for the relaxed problem. Any feasible solution to the original problem is also a feasible solution to the relaxed problem, because the relaxed problem is less constrained than the original problem. Therefore, $C_{ir}^{opt} \leq C_{i}^{opt}$.
The optimal solution to the relaxed problem can be obtained by a stationary and randomized policy $\Pi$, stated as follows: \begin{equation} \begin{aligned} \mathbb{E}\{C_{i}^{\Pi}(t)\}=C_{ir}^{opt} \end{aligned} \end{equation} subject to: \begin{equation} \begin{split} C^{\Pi}_{ie}(t)& = D^{\Pi}_{ie}(t)\\ C^{\Pi}_{iy}(t) &= D^{\Pi}_{iy}(t)\\ C^{\Pi}_{ih}(t) &= D^{\Pi}_{ih}(t)\\ D^{\Pi}_{iyl}(t)+d^{\Pi}_{il}(t) &= Y^{\Pi}_{ifl}(t)+h^{\Pi}_{il}(t)\\ \end{split} \label{cd2} \end{equation} and (\ref{Cem}) - (\ref{Chm}), (\ref{DdYh}), (\ref{hil}), (\ref{Yi1}) - (\ref{Yi3}), (\ref{lg}). The existence of the stationary and randomized policy can be proved by the Caratheodory theorem \cite{georgiadis2006resource}. Obviously, the solutions to the relaxed problem are also feasible for the original problem only if they meet constraints (\ref{Bm}) - (\ref{Wm}) and (\ref{Yil}). To reach this objective, the constants $\theta_{i}$, $\xi_{i}$, $\varepsilon_{i}$ and $\gamma_{il}$ are defined. These constants are adjusted appropriately to make the solutions to the relaxed problem also feasible for the original problem. To start, the virtual queues $A_{i}(t)$, $F_{i}(t)$, $Z_{i}(t)$, and $I_{il}(t)$ for the battery, hydrogen storage, and water tank of MG $i$, and the hydrogen storage of fuel cell vehicle $l_i$ are defined as follows, respectively: \begin{equation} \begin{aligned} &A_{i}(t)=B_{i}(t)-\theta_{i} , F_{i}(t)=Y_{i}(t)-\xi_{i} \\ &Z_{i}(t)=W_{i}(t)- \varepsilon_{i}, I_{il}(t)=Y_{il}(t)- \gamma_{il} \end{aligned} \label{vir} \end{equation} where $\theta_{i}$, $\xi_{i}$, $\varepsilon_{i}$ and $\gamma_{il}$ are perturbations that are used to guarantee the bounds of $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. The Lyapunov function is defined as $Q_{i}(t)=\frac{1}{2}A_{i}(t)^{2}+\frac{1}{2}F_{i}(t)^{2}+\frac{1}{2}Z_{i}(t)^{2}+\frac{1}{2}\sum^{L_i}_{l=1}I_{il}(t)^{2}$.
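The virtual queues (\ref{vir}) and the Lyapunov function $Q_i(t)$ follow directly from the storage levels; a brief sketch (illustrative Python, names ours):

```python
def virtual_queues(B, Y, W, Y_l, theta, xi, eps, gamma):
    """Perturbed (shifted) queues A, F, Z, I_l from the storage levels."""
    A = B - theta
    F = Y - xi
    Z = W - eps
    I = [y - g for y, g in zip(Y_l, gamma)]
    return A, F, Z, I

def lyapunov(A, F, Z, I):
    """Q_i(t) = (A^2 + F^2 + Z^2 + sum_l I_l^2) / 2."""
    return 0.5 * (A * A + F * F + Z * Z + sum(i * i for i in I))
```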
The conditional Lyapunov drift, which represents the change in the Lyapunov function, is defined as \begin{equation} \Delta_{i}(t)=\mathbb{E}\{Q_{i}(t+1)-Q_{i}(t)|B_{i}(t),Y_{i}(t),W_{i}(t), Y_{il}(t)\} \end{equation} where the expectation is taken over the random processes of the system, given the values $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. According to the virtual queues (\ref{vir}) associated with the evolution of the battery, hydrogen storage, and water tank of MG $i$ in (\ref{A1}) - (\ref{A3}), and the hydrogen storage of the fuel cell vehicle in (\ref{Yc}), the Lyapunov drift is bounded as \begin{equation} \begin{aligned} \Delta_{i}(t)&=\mathbb{E}\{Q_{i}(t+1)-Q_{i}(t)|B_{i}(t),Y_{i}(t),W_{i}(t), Y_{il}(t)\} \\&\leq G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))]\} \end{aligned} \label{rightmin} \end{equation} where $G_{i}$ is a constant and $G_{i}=\frac{1}{2}\{\max(C_{ie,max}^2,D_{ie,max}^2)+\max(C_{iy,max}^2,D_{iy,max}^2)+\max(C_{ih,max}^2,D_{ih,max}^2)+\sum^{L_i}_{l=1}[\max((D_{iyl,max}+d_{il,max})^2,(Y_{ifl,max}+h_{il,max})^2)]\}$. The proof of this step is given in Appendix A. In order to make these queues stable, MG $i$ needs to minimize the drift $\Delta_{i}(t)$. In addition, MG $i$ intends to minimize the energy cost. Hence, $V_{i}$ is used to represent the tradeoff between the two objectives.
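The constant $G_i$ in the drift bound depends only on the charging and discharging limits; as a sanity check, it can be evaluated as follows (illustrative Python, names ours; each vehicle contributes the pair $(D_{iyl,max}+d_{il,max},\,Y_{ifl,max}+h_{il,max})$):

```python
def drift_bound_constant(C_e, D_e, C_y, D_y, C_h, D_h, veh):
    """G_i = (1/2){max(C_e^2, D_e^2) + max(C_y^2, D_y^2) + max(C_h^2, D_h^2)
    + sum over vehicles of max(injection^2, consumption^2)}."""
    g = max(C_e**2, D_e**2) + max(C_y**2, D_y**2) + max(C_h**2, D_h**2)
    g += sum(max(inj**2, out**2) for inj, out in veh)
    return 0.5 * g
```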
Then, the drift-plus-penalty function is denoted as \begin{equation} \begin{aligned} \Delta_{i}(t)+V_{i}\mathbb{E}\{C_{i}(t)\} \leq & G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]\}+V_{i}\mathbb{E}\{C_{i}(t)\} \\=&G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t)) +F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+V_{i}(p_{y}(t)d_{i}(t)+E_{i}(t)p_{e}(t)\\&+(P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t)X_{i}(t)\\&-\alpha_i(t)S_{i}(t)-E_{io}(t)p_{eo}(t))\} \end{aligned} \label{p1} \end{equation} The relaxed problem can be viewed as minimizing the cost of the MG while maintaining the stability of the virtual queues. The drift-plus-penalty term consists of two terms: the Lyapunov drift term $\Delta_{i}(t)$ and the modified cost term $V_{i}\mathbb{E}\{C_{i}(t)\}$. A larger value of $V_{i}$ means that minimizing the energy cost has greater priority than minimizing the drift, and vice versa. The objective of Lyapunov optimization is to minimize the right-hand side of (\ref{p1}), i.e. \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+V_{i}(p_{y}(t)d_{i}(t)+E_{i}(t)p_{e}(t)\\&+(P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t)X_{i}(t)\\&-\alpha_i(t)S_{i}(t)-E_{io}(t)p_{eo}(t)) \end{aligned} \label{P2} \end{equation} subject to constraints (\ref{Cem}) - (\ref{Chm}), (\ref{DdYh}), (\ref{hil}), (\ref{Yi1}) - (\ref{Yi3}), (\ref{lg}). In the following section, the price and amount of energy in energy trading among multiple MGs are determined, and the optimal strategy of problem (\ref{P2}) is obtained by solving the linear programming problem.
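Problem (\ref{P2}) is linear in the decision variables, so its objective is fully described by one coefficient per variable. A hedged sketch of assembling them (illustrative Python, names ours; note that the purchased hydrogen $d_{il}(t)$ picks up both the queue weight $I_{il}(t)$ and the price term $V_i p_y(t)$, since $d_i(t)=\sum_l d_{il}(t)$):

```python
def p2_coefficients(A, F, Z, I, V, p_y, p_e, p_g, beta, alpha, p_eo):
    """Linear objective coefficients of the per-slot problem;
    minimizing their inner product with the decisions is an LP."""
    c = {
        "C_ie": A, "D_ie": -A,         # battery queue weights
        "C_iy": F, "D_iy": -F,         # hydrogen-storage queue weights
        "C_ih": Z, "D_ih": -Z,         # water-tank queue weights
        "E_i": V * p_e,                # electricity purchased from the utility
        "P_chp": V * p_g, "H_chp": V * p_g, "H_b": V * p_g,  # gas terms
        "X_i": V * beta,               # electricity bought from other MGs
        "S_i": -V * alpha,             # electricity sold to other MGs
        "E_io": -V * p_eo,             # electricity sold to the utility
    }
    for l, I_l in enumerate(I):        # per-vehicle hydrogen terms
        c[f"D_iyl[{l}]"] = I_l
        c[f"d_il[{l}]"] = I_l + V * p_y
        c[f"Y_ifl[{l}]"] = -I_l
        c[f"h_il[{l}]"] = -I_l
    return c
```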
\subsection{Double-Auction Mechanism} Optimization problem (\ref{P2}) involves two price variables. Owing to the decentralized structure of energy trading, the selling price and purchase price can be determined by the external auctioneer according to a double-auction mechanism. First, the selling price and purchase price that each MG submits in energy trading among multiple MGs are investigated. \textbf{Lemma 1.} MG $i$ decides the selling price $\widetilde{\alpha}_{i}(t)$ and purchase price $\widetilde{\beta}_{i}(t)$ based on the cost-minimization problem: \begin{equation} \widetilde{\alpha}_{i}(t)=\max[\frac{-A_{i}(t)}{V_{i}},\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}, p_{eo}(t)] \label{e11} \end{equation} \begin{equation} \widetilde{\beta}_{i}(t)=\min[\frac{\max(-A_{i}(t),0)}{V_{i}},\frac{p_{g}}{\eta_{pg}},p_{e}(t)] \label{e12} \end{equation} where $p_{eo}(t)$ is the price of energy sold to the electricity utility company by MGs, and $p_{eo}(t)<p_{e}(t)$. The proof of this step is presented in Appendix B. After determining $\widetilde{\alpha}_{i}(t)$ and $\widetilde{\beta}_{i}(t)$, the amounts of electricity $\widetilde{S}_{i}(t)$ and $\widetilde{X}_{i}(t)$ that MG $i$ will sell and purchase in energy trading are determined by solving (\ref{P2}). MGs are willing to sell their energy when their energy storages have enough energy. Moreover, they are willing to purchase energy when the cost of purchasing energy is lower than that of generating energy by themselves (such as generating electricity by the CHP system or using hydrogen). The maximum amounts of electricity that MG $i$ can sell, $S_{i,max}(t)$, and purchase, $X_{i,max}(t)$, at time slot $t$ are: \begin{equation} S_{i,max}(t)=N_{i}(t)-L_{ie}(t) \end{equation} \begin{equation} X_{i,max}(t)=L_{ie}(t)-N_{i}(t) \end{equation} A double-auction mechanism is designed to encourage MGs to actively trade energy and ensure the benefits of MGs.
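The submitted prices in Lemma 1 follow directly from (\ref{e11}) and (\ref{e12}); a small sketch (illustrative Python, names ours):

```python
def submitted_prices(A, F, V, h, eta_e, c1, p_g, eta_pg, p_e, p_eo):
    """Selling price alpha~ and purchase price beta~ submitted by MG i."""
    alpha = max(-A / V, -F / ((h / eta_e + c1) * V), p_eo)
    beta = min(max(-A, 0.0) / V, p_g / eta_pg, p_e)
    return alpha, beta
```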
The double-auction mechanism has two steps: \begin{itemize} \item MGs submit the selling price, purchase price, and the corresponding amount of energy to the external auctioneer. \item The external auctioneer decides the accepted selling price and purchase price by trading rules, and allocates energy to MGs to minimize the transmission loss. \end{itemize} The mechanism of the threshold-price double auction \cite{Kant2005Double} is shown in this section. First, the external auctioneer collects and sorts all received purchase prices in descending order and selling prices in ascending order: $\overline{\beta}_{1}(t)\geq \overline{\beta}_{2}(t)\geq\cdots\geq \overline{\beta}_{i}(t) \geq r>\overline{\beta}_{i+1}(t) \geq\cdots \geq\overline{\beta}_{n}(t)$ and $\overline{\alpha}_{1}(t)\leq \overline{\alpha}_{2}(t)\leq \cdots\leq \overline{\alpha}_{j}(t)\leq r< \overline{\alpha}_{j+1}(t)\leq \cdots \leq\overline{\alpha}_{n}(t)$. If $i=j$, the external auctioneer notifies MGs $l$, $l=1,2, \cdots , i$, that they can trade. The accepted selling price and purchase price are the same, i.e., $\alpha(t)=\beta(t)=r$. If $i>j$, the external auctioneer notifies MGs $l$, $l=1,2, \cdots , j$, that they can trade. The accepted selling price and purchase price are $\alpha(t)=r$ and $\beta(t)=\overline{\beta}_{j+1}(t)$, respectively. If $i<j$, the external auctioneer notifies MGs $l$, $l=1,2, \cdots , i$, that they can trade. The accepted selling price and purchase price are $\alpha(t)=\overline{\alpha}_{i+1}(t)$ and $\beta(t)=r$, respectively. The accepted purchase price and selling price for MG $i$ can be derived as \begin{equation} \hat{\beta}_{i}(t)=\left\{\begin{array}{cc} \beta(t) & \mbox{if MG $i$ purchases electricity}\\ 0 & \mbox{otherwise} \end{array}\right. \end{equation} and \begin{equation} \hat{\alpha}_{i}(t)=\left\{\begin{array}{cc} \alpha(t) & \mbox{if MG $i$ sells electricity}\\ 0 & \mbox{otherwise} \end{array}\right.
\end{equation} After determining the market clearing prices, the external auctioneer needs to match energy sellers and buyers to reduce energy losses. The total transmission loss at time slot $t$ is \begin{equation} Loss(t) = \sum_{i=1}^{k}\sum_{j=1}^{k} I_{ij}T_{ij}(t) \end{equation} where $T_{ij}(t)$ is the amount of energy transmitted from MG $i$ to MG $j$ at time $t$, and $I_{ij}$ is the energy loss coefficient, which is related to the transmission distance. The external auctioneer aims to minimize the energy losses during transmission: \begin{equation} \min_{T_{ij}, \forall i, j \in [1, k]} \quad Loss(t) \label{chap3equ:minLoss} \end{equation} subject to: \begin{equation} \sum_{j=1}^{k} T_{ij} \leq \overline{S}_{i}(t) \label{chap3equ:sell} \end{equation} \begin{equation} \sum_{i=1}^{k} (1-I_{ij})T_{ij} \geq \overline{X}_{j}(t) \label{chap3equ:buy} \end{equation} After determining $\alpha_{i}(t)$ and $\beta_{i}(t)$, the actual amounts of electricity that MG $i$ sells, $S^{*}_{i}(t)$, or purchases, $X^{*}_{i}(t)$, can be determined by linear programming to minimize the energy losses during transmission. The performance of the proposed trading mechanism is as follows: \textbf{Lemma 2.} Using the mechanism presented above, all MGs will submit their selling prices and purchase prices truthfully; otherwise, they will obtain lower benefits by deviating from the true values of the selling and purchase prices in (\ref{e11}) and (\ref{e12}).
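The threshold-price clearing rules above can be sketched compactly (illustrative Python, names ours; the function returns the number of admitted trading pairs and the accepted selling and purchase prices $\alpha(t)$, $\beta(t)$):

```python
def clear_auction(buy_prices, sell_prices, r):
    """Threshold-price double auction: sort bids, count buyers at or above
    the threshold r and sellers at or below it, then apply the three rules."""
    buys = sorted(buy_prices, reverse=True)   # purchase prices, descending
    sells = sorted(sell_prices)               # selling prices, ascending
    i = sum(1 for b in buys if b >= r)        # buyers with bid >= r
    j = sum(1 for s in sells if s <= r)       # sellers with ask <= r
    if i == j:
        return i, r, r                        # alpha = beta = r
    if i > j:
        return j, r, buys[j]                  # beta = (j+1)-th highest bid
    return i, sells[i], r                     # alpha = (i+1)-th lowest ask
```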
The proof of this step is shown in Appendix C. \subsection{Algorithm Design and Performance Analysis} After obtaining $X^{*}_{i}(t)$ and $S^{*}_{i}(t)$ by (\ref{chap3equ:minLoss})-(\ref{chap3equ:buy}) and the double-auction mechanism, an optimal strategy set of MG $i$ can be acquired by solving the linear programming problem (\ref{P2}): $\boldsymbol{M}_{i}^{*}(t)$=\{$C_{ie}^{*}(t)$, $D_{ie}^{*}(t)$, $C_{iy}^{*}(t)$, $D_{iy}^{*}(t)$, $C_{ih}^{*}(t)$, $D_{ih}^{*}(t)$, $D_{iyl}^{*}(t)$, $d_{il}^{*}(t)$, $Y_{ifl}^{*}(t)$, $h_{il}^{*}(t)$, $E_{i}^{*}(t)$, $P^{CHP,*}_{i}(t)$, $H^{CHP,*}_{i}(t)$, $H_{i}^{b,*}(t)$, $X^{*}_{i}(t)$, $S^{*}_{i}(t)$, $E_{io}^{*}(t)$\} to minimize the drift-plus-penalty. The implementation process of the algorithm is shown in Algorithm 1. \begin{algorithm}[h] \caption{Joint Energy Scheduling and Trading Algorithm} \begin{algorithmic}[1] \State Set $t=0$. \State Set the initial values $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. \For{each MG $i$} \State Calculate $\overline{\alpha}_{i}(t)$ and $\overline{\beta}_{i}(t)$ by (\ref{e11}) and (\ref{e12}), calculate $\overline{X}_{i}(t)$ and $\overline{S}_{i}(t)$ by (\ref{P2}), and then submit them to the external auctioneer. \State Calculate $\alpha_{i}(t)$, $\beta_{i}(t)$, $X_{i}(t)$, and $S_{i}(t)$ by the double-auction mechanism. \State Calculate $\boldsymbol{M}_i(t)$ using (\ref{P2}). \EndFor \State Update $B_{i}(t+1)$, $Y_{i}(t+1)$, $W_{i}(t+1)$ by (\ref{A1}) - (\ref{A3}), and $Y_{il}(t+1)$ by (\ref{Yc}). \label{code:recentEnd} \end{algorithmic} \end{algorithm} In the aforementioned design, the capacity constraints of the battery, hydrogen storage, and water tank of MG $i$ and the hydrogen storage of fuel cell vehicle $l_i$ are not considered.
In fact, the capacity constraints should be considered as follows: \textbf{Lemma 3.} If $\theta_{i}$, $\xi_{i}$, $\varepsilon_{i}$, $\gamma_{il}$ and $V_{i}$ satisfy the following conditions: \begin{equation} \theta_{i}=V_{i}p_{e,max}+D_{ie,max} \label{A2} \end{equation} \begin{equation} \xi_{i}=V_{i}p_{y,max}+D_{iy,max} \label{Y2} \end{equation} \begin{equation} \varepsilon_{i}=\frac{V_{i}p_{g,max}}{\eta_{bg}}+D_{ih,max} \label{A4} \end{equation} \begin{equation} \gamma_{il}=V_{i}p_{y,max}+Y_{ifl,max}+h_{il,max} \label{A5} \end{equation} \begin{equation} \begin{aligned} V_{i,max}= & \min \{\frac{B_{i,max}-C_{ie,max}-D_{ie,max}}{p_{e,max}}, \\&\frac{Y_{i,max}-C_{iy,max}-D_{iy,max}}{p_{y,max}}, \\& \frac{\eta_{bg}(W_{i,max}-C_{ih,max}-D_{ih,max})}{p_{g,max}}, \\&\frac{Y_{il,max}-D_{iyl,max}-d_{il,max}-Y_{ifl,max}-h_{il,max}}{p_{y,max}}\}\\ \end{aligned} \label{P3} \end{equation} where $0\leq V_{i} \leq V_{i,max}$, then the capacity constraints of the battery, hydrogen storage, and water tank of MG $i$ and the hydrogen storage of fuel cell vehicle $l_i$ are always satisfied. The proof of this step is presented in Appendix D. According to Lemma 3, the algorithm satisfies the capacity constraints in (\ref{Bm}) - (\ref{Wm}) and (\ref{Yil}). Hence, the algorithm is feasible for the original problem. Then, the performance of the algorithm based on Lyapunov optimization is characterized. \textbf{Theorem 1.} Under the algorithm in the previous section, the expected time-average energy cost is bounded: \begin{equation} \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ C_{i}(t) \} \leq C_{i}^{opt} + \frac{G_{i}}{V_{i}} \end{equation} The proof of this step is given in Appendix E. {\color{black}{In a sense, Theorem 1 describes the gap between the performance of the proposed algorithm, which is independent of the random distribution factors, and that of an optimization algorithm with accurate random information.
According to (\ref{P3}) and Theorem 1, as the battery, hydrogen storage, and water tank capacities of MG $i$ and the hydrogen storage capacity of fuel cell vehicle $l_i$ increase, the performance of the proposed algorithm can be made arbitrarily close to the optimal performance of the optimization algorithm with accurate random information.}} \section{Numerical Results} In this section, numerical results based on real data are presented to evaluate the algorithm proposed in the previous sections. \subsection{Experimental Setup} A network of three MGs is considered. Each MG includes renewable energy resources, a CHP system, fuel cell vehicles, a battery, hydrogen storage, a boiler, and a water tank. Wind-driven turbines and photovoltaic systems are the renewable energy generators, with maximum outputs of 750 kW for MGs $1$ and $2$, and 450 kW for MG $3$. For each MG's electricity load, the hourly load data provided by the PJM hourly load \cite{pjm} is shown in Fig. \ref{fig2}(a). For renewable energy generation, the hourly generation data provided by Renewables.ninja \cite{Institute} is shown in Fig. \ref{fig2}(b). For the price of the electricity utility company, the hourly energy price provided by the Power Smart Pricing program administered for Ameren Illinois \cite{Illinois} is shown in Fig. \ref{fig2}(c). \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{ed.eps}} \centerline{\scriptsize{(a) Electricity demand}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{6a1.eps}} \centerline{\scriptsize{(b) Renewable energy }} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{5a1.eps}} \centerline{\scriptsize{(c) Price of the electricity utility company}} \end{minipage} \caption{Data from website.} \label{fig2} \end{figure*} The maximum electricity consumption of the electrolysis system is 100 kW.
The total amount of hydrogen consumed by each MG's vehicles for driving takes a value in [25, 35] m$^3$ at random during a time slot. When the hydrogen in the hydrogen storage of the MG cannot supply the vehicles, the vehicles will purchase hydrogen from the hydrogen station of the hydrogen-producing company. In this paper, fuel cell vehicles refer to buses. Each MG has 10 fuel cell buses. Each bus consumes approximately 0.5 m$^3$ of hydrogen per kilometer, and the maximum generation of a bus is 45 kW. The buses depart every 20 minutes from 6:00 am to 10:00 pm. At 6:00 am, there are 5 buses at the starting point and at the bus terminal of each MG, respectively. The distance from the starting point to the bus terminal is about 10 km, and it takes 40-60 minutes to complete the journey. The heat and electricity generation of the CHP system satisfies $H_{i}^{CHP}(t)=P_{i}^{CHP}(t)$. The efficiency parameters are $\eta_{pg}=70\%$, $\eta_{hg}=70\%$, $\eta_{bg}=80\%$, $\eta_{e}=85\%$, and $\eta_{f}=50\%$, respectively. Other parameters are summarized as follows: $p_{y}(t)=10$ cents/m$^3$, $p_{g}(t)=15$ cents/m$^3$, $B_{i,max}=300$kWh, $C_{ie,max}=D_{ie,max}=75$kWh, $W_{i,max}=900$kWh, $C_{ih,max}=D_{ih,max}=225$kWh. $Y_{i,max}=300$m$^3$, $C_{iy,max}=D_{iy,max}=75$m$^3$. \subsection{Results} Fuel cell vehicles and hydrogen storage play important roles in relieving the storage stress of the battery and further using excess renewable energy. Fig. \ref{fig3} shows that the costs of the MGs are lower than those without hydrogen storage. {The existence of hydrogen storage obviously reduces the costs of MGs 1 and 2. The cost of MG 3, however, is only slightly reduced. The reason is that MGs 1 and 2 electrolyze water to supply hydrogen for fuel cell vehicles instead of selling electricity to the electricity company at a low price. Therefore, the costs of MGs 1 and 2 are obviously reduced. Because the renewable energy of MG 3 is not enough, MG 3 needs to purchase energy.
In Fig. \ref{fig6}(c), MG 3 with hydrogen storage electrolyzes water to supply a small amount of hydrogen for fuel cell vehicles, while MG 3 without hydrogen storage charges the battery. Both of them purchase a large amount of hydrogen from the hydrogen-producing company. Therefore, the cost of MG 3 with hydrogen storage is only slightly reduced in Fig. \ref{fig3}.} Then, the comparisons of the costs, energy trading, and battery dynamics with and without hydrogen storage for all three MGs across 24 time slots are presented in Figs. \ref{fig4}-\ref{fig6}. During energy trading, positive values denote purchasing energy, and negative values denote selling energy. Fig. \ref{fig4} shows that MGs achieve lower costs with hydrogen storage in most cases, where MGs electrolyze water to supply hydrogen for fuel cell vehicles or store hydrogen for future demand instead of selling electricity to the electricity utility company at a low price. Fig. \ref{fig5} shows the comparison of energy trading dynamics with and without hydrogen storage. MG $1$ sells more electricity to other MGs with hydrogen storage. This is because MG $3$ needs more electricity to electrolyze water to generate hydrogen for fuel cell vehicles, and MG $3$ purchases more electricity from MG $1$. Fig. \ref{fig6} shows the comparison of battery dynamics with and without hydrogen storage. All MGs charge less electricity into the battery with hydrogen storage. This is because MGs with hydrogen storage use some electricity to electrolyze water to generate hydrogen.
\begin{figure} \centering \centerline{\includegraphics[height=42mm,width=56mm]{h.eps}} \caption{Comparisons of all MGs' total costs with and without hydrogen storage.} \label{fig3} \end{figure} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{t1h.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{t2h.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{t3h.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Costs of each MG with and without hydrogen storage.} \label{fig4} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{wh1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{wh2.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{wh3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Energy trading of each MG with and without hydrogen storage.} \label{fig5} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b1h.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b2h.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b3h.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Battery dynamics of each MG with and without hydrogen storage.} \label{fig6} \end{figure*} Energy trading plays an important role in relieving
the imbalance of supply and demand for a single MG. Fig. \ref{fig7} shows that the costs of MGs with energy trading are lower than those without trading. {Energy trading obviously reduces the cost of MG 1, whereas the costs of MGs 2 and 3 are only slightly reduced. The reason is that the renewable energy of MG 1 exceeds its demand in most cases. MG 1 sells electricity to MGs 2 and 3 instead of selling electricity to the electricity company at a low price in most cases, so the cost of MG 1 is obviously reduced. MG 3 without energy trading cannot purchase electricity from MG 1 at a low price, but it can generate electricity by the CHP system at a cost lower than that of purchasing electricity from the electricity company, and MG 2 trades little energy with other MGs. Therefore, the costs of MGs 2 and 3 with trading are only slightly reduced in Fig. \ref{fig7}.} Then, the comparisons of the costs, battery, and hydrogen storage dynamics with and without energy trading for all three MGs across 24 time slots are presented in Figs. \ref{fig8}-\ref{fig10}. Fig. \ref{fig8} shows that MGs achieve lower costs with energy trading in most cases, since MGs acquire electricity from other MGs through energy trading instead of from the electricity utility company. Fig. \ref{fig2}(b) shows that MG $1$ has higher renewable energy output than the other MGs, and hence MG $1$ sells excess energy to other MGs in most cases except the last four hours. This is because MG $1$ has a drop in renewable energy output during the last four hours, while MG $2$ has adequate renewable energy output during that period. Fig. \ref{fig9} shows the comparison of battery dynamics with and without energy trading. MG $1$ charges less electricity into the battery with energy trading, because MG $1$ sells electricity to other MGs instead of storing it in the battery. The same holds for the hydrogen storage in Fig. \ref{fig10}.
{With or without trading, MG 3, which lacks abundant renewable energy to electrolyze water, has to purchase hydrogen from a hydrogen-producing company to supply vehicles. Therefore, MG 3 has no surplus energy to store, and the dynamics of its storage levels are the same in Fig. \ref{fig9} and Fig. \ref{fig10}.}
\begin{figure*} \centering \centerline{\includegraphics[height=42mm,width=56mm]{s.eps}} \caption{Comparisons of all MGs' total costs with and without energy trading.} \label{fig7} \end{figure*}
\begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{t1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{t2.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{t3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Costs of each MG with and without energy trading.} \label{fig8} \end{figure*}
\begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b2.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Battery dynamics of each MG with and without energy trading.} \label{fig9} \end{figure*}
\begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{y1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{y2.eps}} \centerline{\scriptsize{(b) MG 2}}
\end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{y3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Hydrogen storage dynamics of each MG with and without energy trading.} \label{fig10} \end{figure*}
Table 1 shows the costs of the three MGs under the proposed method and under the methods without hydrogen storage and without energy trading. {Three cases of different initial energy storages are studied as follows. 1) The initial energy of storage is 10\% of its capacity (Figs. \ref{fig3}-\ref{fig10} are generated in this case). The total cost of the three MGs is reduced by up to 26.53\%, from 28563 cents without hydrogen storage to 20984 cents, and by 13.16\%, from 24163 cents without energy trading to 20984 cents. 2) The initial energy of storage is 50\% of its capacity. The total cost of the three MGs is reduced by up to 29.68\%, from 26121 cents without hydrogen storage to 18367 cents, and by 15.92\%, from 21844 cents without energy trading to 18367 cents. 3) The initial energy of storage equals its capacity. The total cost of the three MGs is reduced by up to 35.50\%, from 22888 cents without hydrogen storage to 14763 cents, and by 19.55\%, from 18350 cents without energy trading to 14763 cents. These results show that the cost decreases, and the extent of the cost reduction grows, as the initial energy of storage increases. This is because more initial stored energy means a lower cost of purchasing energy and more energy available for trading or electrolyzing water.} The introduction of hydrogen storage and energy trading reduces the costs of all MGs. Therefore, MGs benefit from energy trading, hydrogen storage, and fuel cell vehicles. This verifies the effectiveness of the proposed algorithm.
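The percentage reductions quoted above follow directly from the cost totals in Table 1; a minimal Python check, using only the figures stated in the text (all values in cents), is:

```python
# Cost totals quoted in the text for each initial-storage case:
# (cost without hydrogen storage, cost without energy trading, proposed cost).
cases = {
    "10% of capacity": (28563, 24163, 20984),
    "50% of capacity": (26121, 21844, 18367),
    "100% of capacity": (22888, 18350, 14763),
}

def reduction(before, after):
    """Percentage saved relative to the 'before' cost."""
    return 100.0 * (before - after) / before

for name, (no_h2, no_trade, proposed) in cases.items():
    print(f"{name}: {reduction(no_h2, proposed):.2f}% vs no hydrogen, "
          f"{reduction(no_trade, proposed):.2f}% vs no trading")
```

Running this reproduces the 26.53\%/13.16\%, 29.68\%/15.92\%, and 35.50\%/19.55\% reductions reported above.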
\begin{table} \small \caption{Costs of three MGs under different methods.} \begin{center} \begin{tabular}{l|l|l|l|l|l} \hline Initial energy storage & Costs (cent) & MG $1$ & MG $2$ & MG $3$ & System \\ \hline \multirow{3}*{10\% of capacity} & Cost & 3091 & 3887 & 14006 & 20984 \\ \cline{2-6} {} & Cost (without trading) & 5419 & 4319 & 14425 & 24163 \\ \cline{2-6} {} & Cost (without hydrogen) & 7327 & 7005 & 14231 & 28563 \\ \hline \multirow{3}*{50\% of capacity} & Cost & 2060 & 3377 & 12931 & 18367 \\ \cline{2-6} {} & Cost (without trading) & 4570 & 3738 & 13536 & 21844 \\ \cline{2-6} {} & Cost (without hydrogen) & 6392 & 6491 & 13239 & 26121 \\ \hline \multirow{3}*{100\% of capacity} & Cost & 807 & 2198 & 11758 & 14763 \\ \cline{2-6} {} & Cost (without trading) & 3419 & 2567 & 12364 & 18350 \\ \cline{2-6} {} & Cost (without hydrogen) & 5509 & 5317 & 12061 & 22888 \\ \hline \end{tabular} \end{center} \end{table}
\section{Conclusion} In this paper, the energy scheduling and energy trading problem with real-time pricing among multiple microgrids is studied, which is an urgent issue for today's cyber-physical-energy systems. A multi-energy management framework including fuel cell vehicles, energy storage, combined heat and power system, and renewable energy is presented, where fuel cell vehicles and energy storage further improve the absorption of renewable energy. A joint algorithm based on Lyapunov optimization and a double-auction mechanism is designed to optimize the long-term energy cost of each microgrid. Finally, the results based on real data show that microgrids' costs can be decreased under the management of the proposed algorithm. Comparative analysis of energy storage and energy trading demonstrates the necessity of including both. In this paper, fuel cell vehicles refer to buses that follow a specific route; in general, fuel cell vehicles can be cars, buses, and so on.
In this case, the trip characteristics of vehicles need to be considered. Investigating control schemes, e.g., Ref. \cite{AlaviFuel}, to optimize the dispatch of fuel cell vehicles is a significant research direction. Another direction is to design scheduling methods, e.g., Ref. \cite{ZhouIndirect}, to further realize multi-energy coupled peak load shifting in realistic scenarios, such as industrial parks. \begin{appendix} \section{Proof of (\ref{rightmin})} According to (\ref{A1})--(\ref{A3}), (\ref{Yc}), and (\ref{vir}), the Lyapunov drift term $\Delta_{i}(t)$ satisfies \begin{equation} \begin{aligned} &\Delta_{i}(t)=\mathbb{E}\{Q_{i}(t+1)-Q_{i}(t)|B_{i}(t),Y_{i}(t),W_{i}(t),Y_{il}(t)\} \\&=\frac{1}{2}\mathbb{E}\{2A_{i}(t)(C_{ie}(t)-D_{ie}(t))+2F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+2Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[2I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+(C_{ie}(t)-D_{ie}(t))^2\\&+(C_{iy}(t)-D_{iy}(t))^2+(C_{ih}(t)-D_{ih}(t))^2\\&+\sum^{L_i}_{l=1}[(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))^2]\} \\& \leq \frac{1}{2}\mathbb{E}\{2A_{i}(t)(C_{ie}(t)-D_{ie}(t))+2F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+2Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[2I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+\max(C_{ie,max}^2,D_{ie,max}^2)\\&+\max(C_{iy,max}^2,D_{iy,max}^2)+\max(C_{ih,max}^2,D_{ih,max}^2)\\&+\sum^{L_i}_{l=1}[\max((D_{iyl,max}+d_{il,max})^2,(Y_{ifl,max}+h_{il,max})^2)]\} \\&=\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]\}+G_{i} \end{aligned} \end{equation} where $G_{i}=\frac{1}{2}\{\max(C_{ie,max}^2,D_{ie,max}^2)+\max(C_{iy,max}^2,D_{iy,max}^2)+\max(C_{ih,max}^2,D_{ih,max}^2)+\sum^{L_i}_{l=1}[\max((D_{iyl,max}+d_{il,max})^2,(Y_{ifl,max}+h_{il,max})^2)]\}$. \section{Proof of Lemma 1} The following four cases are considered to determine the price of energy trading: \begin{enumerate} \item
Case 1: $A_{i}(t)\geq0$. In this case, MG $i$ has too much energy in its battery. According to (\ref{lg}), $C_{ie}(t)-D_{ie}(t)=E_{i}(t)+N_{i}(t)+X_{i}(t)-S_{i}(t) +\eta_fhY_{if}(t)-\frac{hC_{iy}(t)}{\eta_e}-c_1C_{iy}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)-L_{ie}(t)$. According to (\ref{P2}), i.e., \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))] +V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t) X_{i}(t)\\&-\alpha_i(t) S_{i}(t)-E_{io}(t)p_{eo}(t))\\ =&\min_{\boldsymbol{M}_{i}(t)} -(A_{i}(t)+V_{i}\alpha_i(t))S_{i}(t)+(A_{i}(t)+V_{i}\beta_i(t))X_{i}(t)\\&+A_{i}(t)(E_{i}(t)+N_{i}(t)+\eta_fhY_{if}(t)-\frac{hC_{iy}(t)}{\eta_e}\\&-c_1C_{iy}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)-L_{ie}(t))\\&+F_{i}(t)(C_{iy}(t)-D_{iy}(t))+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))\\&+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))] \\&+V_{i}(p_{y}(t)d_{i}(t)+E_{i}(t)p_{e}(t)+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)\\&+H_{i}^b(t))p_{g}(t)-E_{io}(t)p_{eo}(t)) \end{aligned} \label{cde} \end{equation} and since $-\alpha_i(t)V_{i}-A_{i}(t)<0$, MG $i$ tends to increase $S_i(t)$, and $C_{ie}(t)=0$, $D_{ie}(t)=D_{ie,max}$. \item Case 2: $A_{i}(t)<0$. In this case, six situations are considered. \begin{itemize} \item If $0<\alpha_i(t)<\frac{-A_{i}(t)}{V_{i}}$, then $-\alpha_i(t)V_{i}-A_{i}(t)>0$. Therefore, MG $i$ tends to decrease $S_i(t)$ and increase $C_{ie}(t)$. \item If $\alpha_i(t)>\frac{-A_{i}(t)}{V_{i}}$, then $-A_{i}(t)-\alpha_i(t)V_{i}<0$. Therefore, MG $i$ tends to increase $S_i(t)$ and decrease $C_{ie}(t)$. \item If $\alpha_i(t)=\frac{-A_{i}(t)}{V_{i}}$, then $-A_{i}(t)-\alpha_i(t)V_{i}=0$. It makes no difference for MG $i$ whether it increases $S_i(t)$ or $C_{ie}(t)$. \item If $0<\beta_i(t)<\frac{-A_{i}(t)}{V_{i}}$, then $\beta_i(t)V_{i}+A_{i}(t)<0$.
Therefore, MG $i$ tends to increase $X_i(t)$ and decrease $D_{ie}(t)$. \item If $\beta_i(t)>\frac{-A_{i}(t)}{V_{i}}$, then $A_{i}(t)+\beta_i(t)V_{i}>0$. Therefore, MG $i$ tends to decrease $X_i(t)$ and increase $D_{ie}(t)$. \item If $\beta_i(t)=\frac{-A_{i}(t)}{V_{i}}$, then $-A_{i}(t)=\beta_i(t)V_{i}$. It makes no difference for MG $i$ whether it increases $X_i(t)$ or $D_{ie}(t)$. \end{itemize} \item Case 3: $F_{i}(t)\geq0$. In this case, MG $i$ has too much hydrogen in its hydrogen storage. According to (\ref{lg}), $(\frac{h}{\eta_e}+c_1)C_{iy}(t)=E_{i}(t)+N_{i}(t)+X_{i}(t)-S_{i}(t)-C_{ie}(t)+D_{ie}(t)+\eta_fhY_{if}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)-L_{ie}(t)$. According to (\ref{P2}), i.e., \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t) X_{i}(t)\\&-\alpha_i(t) S_{i}(t)-E_{io}(t)p_{eo}(t))\\ =&\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))-(\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}+V_i\alpha_i(t))S_{i}(t)\\&+(\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}+V_i\beta_i(t))X_{i}(t)+\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}(E_{i}(t)+N_{i}(t)\\&-C_{ie}(t)+D_{ie}(t)+\eta_fhY_{if}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)\\&-L_{ie}(t))-F_{i}(t)D_{iy}(t)+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))\\&+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))]\\&+V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)-E_{io}(t)p_{eo}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)) \end{aligned} \label{cdh} \end{equation} and since $-\alpha_i(t)V_{i}-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}<0$, MG $i$ tends to increase $S_i(t)$, and $C_{iy}(t)=0$, $D_{iy}(t)=D_{iy,max}$. \item Case 4: $F_{i}(t)<0$. In this case, three situations are considered.
\begin{itemize} \item If $0<\alpha_i(t)<\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}$, then $-\alpha_i(t)V_{i}-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}>0$. Therefore, MG $i$ tends to decrease $S_i(t)$ and increase $C_{iy}(t)$. \item If $\alpha_i(t)>\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}$, then $-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}-\alpha_i(t)V_{i}<0$. Therefore, MG $i$ tends to increase $S_i(t)$ and decrease $C_{iy}(t)$. \item If $\alpha_i(t)=\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}$, then $-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}-\alpha_i(t)V_{i}=0$. It makes no difference for MG $i$ whether it increases $S_i(t)$ or $C_{iy}(t)$. \end{itemize} \end{enumerate} \section{Proof of Lemma 2} All MGs are assumed to be rational: each chooses a strategy that minimizes its cost. The purchase price and selling price submitted by MG $i$ are $\beta_{i}(t)$ and $\alpha_{i}(t)$, and the purchase price and selling price determined by the double-auction mechanism in the actual energy trading are $\hat{\beta}(t)$ and $\hat{\alpha}(t)$. Next, the benefit of MG $i$ when it cheats is analyzed. \begin{enumerate} \item Case 1: $\alpha_{i}(t) > \hat{\alpha}(t)$. In this case, MG $i$ is not allowed to sell energy in the double-auction mechanism. \begin{itemize} \item If MG $i$ increases $\alpha_{i}(t)$, the situation does not change. \item If MG $i$ reduces $\alpha_{i}(t)$ and $\alpha_{i}(t) > \hat{\alpha}(t)$ still holds, the situation does not change. \item If MG $i$ reduces $\alpha_{i}(t)$ and $\alpha_{i}(t) \leq \hat{\alpha}(t)$, the MG will be forced to sell energy at a price lower than expected, and its benefit will decrease owing to cheating. \end{itemize} \item Case 2: $\alpha_{i}(t) \leq \hat{\alpha}(t)$. In this case, MG $i$ sells energy in the double-auction mechanism. \begin{itemize} \item If MG $i$ reduces $\alpha_{i}(t)$, the situation does not change. \item If MG $i$ increases $\alpha_{i}(t)$ and $\alpha_{i}(t) \leq \hat{\alpha}(t)$ still holds, the situation does not change.
\item If MG $i$ increases $\alpha_{i}(t)$ and $\alpha_{i}(t) > \hat{\alpha}(t)$, the MG is not allowed to sell energy in the double-auction mechanism. However, since its energy is excessive, MG $i$ may have to sell the excess energy to the electricity utility company at a lower price, and its benefit will decrease owing to cheating. \end{itemize} \end{enumerate} The analysis for $\beta_{i}(t)$ is similar. Therefore, the double-auction mechanism can prevent MGs from cheating. \section{Proof of Lemma 3} Induction is used to prove the bounds of $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. The conditions hold at time slot 1; suppose they still hold at time slot $t$. Then, the following four cases are considered: \begin{enumerate} \item Case 1: $B_{i}(t)<\theta_{i}$. In this case, $C_{ie}(t) \leq{C_{ie,max}}$, and $\theta_{i}=V_ip_{e,max}+D_{ie,max} \leq B_{i,max}-C_{ie,max}$. Therefore, $B_{i}(t+1) \leq B_{i}(t)+C_{ie,max} < \theta_{i} + C_{ie,max} \leq B_{i,max}$. \item Case 2: $B_{i}(t)\geq \theta_{i}$. In this case, $C_{ie}(t)=0$, i.e., the battery will not be charged at time slot $t$. Therefore, $B_{i}(t+1) \leq B_{i}(t) \leq B_{i,max}$. \item Case 3: $B_{i}(t)<D_{ie,max}$. In this case, $A_{i}(t)<D_{ie,max}-\theta_{i}=-V_ip_{e,max}$. Then, $A_{i}(t)+V_i\beta_{i}(t)<A_{i}(t)+V_ip_{e,max}<0$. According to (\ref{lg}) and (\ref{cde}), $D_{ie}(t)=0$. Therefore, $B_{i}(t+1) \geq B_{i}(t) \geq 0$. \item Case 4: $B_{i}(t)\geq D_{ie,max}$. In this case, $B_{i}(t+1)= B_{i}(t)+C_{ie}(t)-D_{ie}(t)\geq B_{i}(t)-D_{ie}(t)\geq 0$. \end{enumerate} The bounds of $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$ can be analyzed similarly. \section{Proof of Theorem 1} The optimal solution of problem (\ref{P2}) is obtained to minimize the drift-plus-penalty.
Comparing this optimal solution with the result of the stationary randomized policy $\Pi$, the drift-plus-penalty term satisfies \begin{equation} \begin{aligned} &\Delta_{i}(t)+V_{i}\mathbb{E}\{C_{i}(t)\} \\ & \leq G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}^{*}(t)-D_{ie}^{*}(t))+F_{i}(t)(C_{iy}^{*}(t)-D_{iy}^{*}(t))\\&+Z_{i}(t)(C_{ih}^{*}(t)-D_{ih}^{*}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}^{*}(t)+d_{il}^{*}(t)\\&-Y_{ifl}^{*}(t)-h_{il}^{*}(t))] +V_{i}(p_{y}(t)d_{i}^{*}(t) +E_{i}^{*}(t)p_{e}(t)\\&+ (P_{i}^{CHP,*}(t)+H_{i}^{CHP,*}(t)+H_{i}^{b,*}(t))p_{g}(t)+\beta_{i}(t) X_{i}^{*}(t)\\&-\alpha_{i}(t) S_{i}^{*}(t)-E_{io}^{*}(t)p_{eo}(t))\} \\ & \leq G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}^{\Pi}(t)-D_{ie}^{\Pi}(t))+F_{i}(t)(C_{iy}^{\Pi}(t)-D_{iy}^{\Pi}(t))\\&+Z_{i}(t)(C_{ih}^{\Pi}(t)-D_{ih}^{\Pi}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}^{\Pi}(t)+d_{il}^{\Pi}(t)\\&-Y_{ifl}^{\Pi}(t)-h_{il}^{\Pi}(t))] +V_{i}(p_{y}(t)d_{i}^{\Pi}(t) +E_{i}^{\Pi}(t)p_{e}(t)\\&+(P_{i}^{CHP,\Pi}(t)+H_{i}^{CHP,\Pi}(t)+H_{i}^{b,\Pi}(t))p_{g}(t)+\beta_{i}(t) X_{i}^{\Pi}(t)\\&-\alpha_{i}(t) S_{i}^{\Pi}(t)-E_{io}^{\Pi}(t)p_{eo}(t))\} \end{aligned} \end{equation} According to (\ref{cd2}) and the stationary randomized policy that achieves the optimal cost $C_{ir}^{opt}$, the drift-plus-penalty term satisfies \begin{equation} \Delta_{i}(t)+V_{i}\mathbb{E}\{C_{i}(t)\} \leq G_{i} + V_{i}C_{ir}^{opt} \leq G_{i} + V_{i}C_{i}^{opt} \end{equation} Summing over $t \in \{1,2,...,T\}$, the sum satisfies \begin{equation} \mathbb{E}\{Q_{i}(T)-Q_{i}(1)\}+\sum_{t=1}^{T}V_{i}\mathbb{E}\{C_{i}(t)\} \leq TG_{i} + TV_{i}C_{i}^{opt} \end{equation} Dividing both sides by $TV_{i}$ and letting $T \rightarrow \infty$, the time-average cost satisfies \begin{equation} \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ C_{i}(t) \} \leq C_{i}^{opt} + \frac{G_{i}}{V_{i}} \end{equation} \end{appendix} \balance \section*{References} \biboptions{square, numbers, sort&compress} \bibliographystyle{elsarticle-num}
\section{Introduction} Traditional power grids consume fossil fuels to generate electricity and transmit electricity over long distances, which results in quick depletion of fossil fuel resources and serious environmental pollution. This motivates the study of distributed microgrids (MGs), which can efficiently realize investment deferral \cite{ArmendCoordinated}, local balance \cite{ZhangPeer}, resiliency advancement \cite{RenEnabling}, and security reinforcement \cite{Zhu2015Microgrid}, and can reduce greenhouse gas emissions and energy losses by using renewable energy sources \cite{ZhangCredit}. However, renewable energy generation is stochastic, which may affect energy reliability and quality. Meanwhile, considering the heat demands of users, the combined heat and power (CHP) system, which can efficiently generate both electricity and heat simultaneously by consuming natural gas, is introduced. Energy storage also plays a key role in improving energy reliability by storing extra energy for future use. However, it is not efficient and economical for an individual MG to serve its users alone because of the mismatch between renewable energy generation and electricity demand. Geographically distributed MGs can improve energy reliability and efficiency by sharing energy. However, each MG is selfish and wants to minimize its own cost when sharing energy; a MG can be incentivized to join energy trading only if its benefit is not reduced. This calls for an effective method of energy scheduling and trading among multiple MGs that achieves benefit maximization for individual MGs. Several inter-related decisions are involved: (1) Energy pricing: what method should be adopted for energy sale and purchase among multiple MGs, and at what prices? (2) Energy scheduling: with time-varying demand and renewable generation, should a MG serve the demand using its own energy storage or by trading with other MGs?
When local energy storage and energy trading cannot satisfy the demand, should a MG serve the demand by purchasing energy from utility companies or the CHP system, and is it necessary to exploit time-varying electricity prices? These decisions should be made online, optimally and efficiently, while guaranteeing each MG's benefits over a long period. Therefore, a joint algorithm for energy scheduling and trading among MGs is designed. A double-auction mechanism is proposed to determine the purchase price and selling price, increase the economic benefits of MGs, and ensure the truthfulness of the information that MGs submit in energy trading. However, owing to the limitation of battery storage, a MG might not fully exploit the time diversity of renewable energy generation. To improve the utilization of renewable energy generation, we can introduce hydrogen into the MG and use excess renewable energy to electrolyze water, producing hydrogen that is stored in hydrogen storage tanks. Fuel cell vehicles can convert hydrogen into electricity to supply the MG's energy demand when the MG is short of energy, and they can also be used for transportation. The following advantages motivate the introduction of hydrogen. First, for the same size of energy storage, hydrogen storage can provide larger amounts of energy than batteries and can be refilled in a few minutes. A number of facilities that integrate renewable energy and energy storage are in operation all over the world, and most of them use hydrogen for energy storage in both stand-alone and grid-tied power generation systems \cite{KyriakarakosPolygeneration}. In these facilities, the hydrogen storage system is often coupled with a battery bank for short-term energy storage, thus achieving a hybrid poly-generation system.
Proper integration of hydrogen storage systems and batteries increases bus stability and enhances the management of intermittent power peaks and transient loads \cite{Little2007Electrical}. Second, the entire electricity-hydrogen conversion process only utilizes water and is carbon free. Last but not least, hydrogen can be purchased from a hydrogen-producing company and used for the transportation of fuel cell vehicles. Fuel cell vehicles are particularly suited to provide spinning reserves and peak power to the grid \cite{Lipman2004Fuel}. In contrast to plug-in electric vehicles, fuel cell vehicles can be operated continuously and have very low emissions \cite{Lipman2004Fuel}. Hydrogen, as a clean energy carrier with a high calorific value, is attracting wide attention. Therefore, the car as a power plant (CaPP) \cite{Wijk2014Our} has been presented to introduce a controllable energy system, which uses fuel cell vehicles as dispatchable power plants \cite{Fernandes2016Fuel}. Considering that the average driving time of vehicles is less than 10\% of the whole day, vehicles can generate electricity from hydrogen in a cleaner way than other power systems while they are parked, and there is huge potential for fuel cell vehicles to replace traditional power plants or reduce the number of new plants in the future. Therefore, the synergies between hydrogen and electricity can be explored to increase the benefits of MGs. In particular, the main contributions of this paper are as follows: \begin{itemize} \item A multi-energy management framework that includes fuel cell vehicles, energy storage, CHP system, and renewable energy is proposed. The synergies between hydrogen and electricity can further improve the local absorption of the excess renewable energy and the economic benefits of MGs.
\item A joint energy scheduling and trading algorithm based on Lyapunov optimization and a double-auction mechanism is designed to optimize the long-term energy cost of each MG. \item Through theoretical analysis, the proposed algorithm is shown to achieve a better trade-off among energy trading cost, energy storage, and users' satisfaction. Moreover, the effectiveness of the proposed algorithm is verified using practical data sets. \end{itemize} In the rest of the paper, Section II introduces related works. Section III describes the system model and cost functions. Then, Section IV proposes a joint algorithm based on Lyapunov optimization and a double-auction mechanism for the energy scheduling and trading problem, and proves the theoretical performance of this algorithm. Section V shows the numerical results, and Section VI concludes the paper. \section{Related Works} Energy sharing is a way to reduce the imbalance of supply and demand in MGs and improve the local consumption of renewable energy. A number of research efforts have been conducted. In \cite{2}, it is shown that energy sharing allows participants to exchange energy in order to lower reliance on the utility company. In \cite{3}, the authors demonstrate that the development of peer-to-peer energy sharing has a significant advantage for prosumers in both earning revenues and reducing energy costs. In \cite{4}, because of stochastic renewable energy generation, nanogrids form a nanogrid cluster that shares renewable energy. In \cite{5}, a real-time demand response model is presented to assist the energy sharing provider, which realizes the maximization of the energy sharing provider's utility. { In \cite{Chen2018Analyzing}, energy trading and sharing schemes for multiple energy hubs are proposed to increase system flexibility and reduce the cost of the system.
} However, owing to the randomness of renewable energy, it is difficult to schedule renewable energy sharing among multiple MGs and investigate the economic aspects. There are two types of market-based models applicable to resource management for energy sharing. The first is the market model, where resource owners decide the price based on users' demands via a game approach. For this situation, two different models are proposed: 1) the prosumers decide the price of energy together \cite{6,8,Chen2019An}; 2) a leader--follower structure decides the price \cite{7,9,Motalleb2019Networked}. Liu et al. \cite{6} formulate a dynamical internal pricing model for the energy sharing of prosumers, who decide the price of energy. Lu et al. \cite{8} establish an informative game vector to perform price-based energy interactions among multiple MGs that decide the price. {Chen et al. \cite{Chen2019An} propose a novel energy sharing game for prosumers to determine the role of the buyer or seller and the sharing price.} Liu et al. \cite{7} propose a Stackelberg game approach, in which the MG operator acts as the leader and prosumers act as followers to decide the price together. Tushar et al. \cite{9} formulate a non-cooperative Stackelberg game to capture the interaction between the shared facility controller and the residential units, which decide the price of energy to minimize the cost. Motalleb et al. \cite{Motalleb2019Networked} propose a networked Stackelberg competition among firms to determine their optimal bids for the price of a market transaction. The second is the auction model, where every player acts independently and agrees privately on the price. According to the type of interaction between buyers and sellers, auctions can be divided into two classes: one-sided auctions \cite{11} and two-sided auctions \cite{10}. The auction mechanism helps players benefit from cooperation and energy trading with little global information.
The auction mechanism can make every player autonomously share energy and automatically guarantees the truthfulness of the energy information. Therefore, an auction mechanism is used to determine the price of energy sharing in this study. Energy storage and CaPP are also effective ways to reduce the imbalance of supply and demand in MGs and improve the local consumption of renewable energy. In \cite{Huang2013Optimal}, Huang et al. develop a low-complexity algorithm with energy storage management to minimize the average cost of a power-consuming entity. In \cite{Gatzianas2010Control}, Gatzianas et al. explicitly take the actual energy storage into account and construct an algorithm for energy management based on the Lyapunov optimization technique. In \cite{Gayme2011Optimal}, Gayme et al. investigate distributed energy storages and illustrate their effects using an example with time-varying demand profiles. {In \cite{Good2019A}, Good et al. propose an aggregation modeling method for multi-energy conversion, storage, and demand to take advantage of distributed energy flexibility and provide multiple services.} The scheduling of vehicles and electrolyzers is the main aspect to be considered in the operational control of the CaPP. Centralized optimization approaches, such as minimizing operating costs \cite{Battistelli2013Generalized} or power losses \cite{Khodr2012Intelligent}, are used to address the scheduling problem of vehicles. In \cite{Shinoda2016Optimization}, the scheduling problem in the MG among renewable energy sources (RES), the electrolyzer, and vehicle-to-grid (V2G) power is formulated to minimize the power purchased from the grid.
{In \cite{Jaramillo2016Optimal}, a multi-objective mixed-integer linear programming model is proposed for the scheduling in a grid-connected MG, and the startup constraints of the alkaline electrolyzer are explicitly modeled.} In \cite{Chiesa2011Dynamic}, the electrolyzer levels out voltage fluctuations in a weak grid and improves the power quality of the MG based on a dynamic electrolyzer model. {However, the existing works do not consider the coordinated operation and multi-energy demands of MGs after introducing hydrogen storage and fuel cell vehicles. In this paper, a multi-energy management framework including fuel cell vehicles, energy storage, CHP system, and renewable energy is proposed. The characteristics and scheduling arrangements of fuel cell vehicles are considered to further improve the local absorption of renewable energy and enhance the economic benefits of MGs. This paper designs a joint energy scheduling and trading algorithm based on Lyapunov optimization and a double-auction mechanism. The resulting dynamic and computationally efficient algorithm determines the valuations of energy in the auction, optimally schedules energy distribution, and strategically purchases and sells energy at the current electricity prices. The implementation of the algorithm depends only on the current system states, without the need for any a priori information. Finally, simulations based on real data are conducted to investigate the performance of the multi-energy management framework and demonstrate the effectiveness of the proposed algorithm.} \section{System Model} This paper considers a system consisting of $n$ interconnected MGs, an electricity utility company, a gas utility company, and a hydrogen-producing company. Each MG is equipped with renewable energy, CHP system, fuel cell vehicles, a battery, hydrogen storage, a boiler, and a water tank, as shown in Fig. \ref{fig1}.
MGs can harvest renewable energy, such as wind and solar power. Fuel cell vehicles can generate electricity by consuming hydrogen. The CHP system consumes natural gas to generate electricity, and at the same time, its heat output follows its electricity production with fixed ratios. In addition, each MG can store extra energy for future demand. \begin{figure*} \centering \begin{minipage}[c]{1\textwidth} \centerline{\includegraphics[width=\textwidth]{mgg1.jpg}} \end{minipage} \caption{Energy flows of system.} \label{fig1} \end{figure*} \subsection{Energy Purchase} MG $i$ harvests $N_{i}(t)$ units of energy generated by renewable energy during one time slot. Here, one time slot is set to one hour, consistent with the simulations. The electricity utility company uses fossil energy to generate electricity, so its generation within one time slot is assumed to be effectively unlimited; hence, constraints on the energy generation of the electricity utility company are not considered. The same assumption is applied to the gas utility company and the hydrogen-producing company. MG $i$ purchases $E_{i}(t)$ units of energy from the electricity utility company with price $p_{e}(t)$. From the gas utility company, MG $i$ purchases $P_{i}^{CHP}(t)$ and $H_{i}^{CHP}(t)$ units of gas to generate $\eta_{pg}P_{i}^{CHP}(t)$ units of electricity and $\eta_{hg}H_{i}^{CHP}(t)$ units of hot water by the CHP system at time slot $t$. $\eta_{pg}$ and $\eta_{hg}$ are the conversion efficiencies of the CHP system from natural gas to electricity and heat, respectively. Moreover, MG $i$ purchases $H_{i}^{b}(t)$ units of gas to produce $\eta_{bg}H_{i}^{b}(t)$ units of hot water by the boiler at time slot $t$. $\eta_{bg}$ is the conversion efficiency of the boiler from natural gas to heat. The price of the gas is $p_{g}(t)$. When there is not enough hydrogen for fuel cell vehicles, MG $i$ will purchase $d_{i}(t)$ units of hydrogen from the hydrogen-producing company with price $p_{y}(t)$. 
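As an illustrative sketch of the conversion relations above, the following Python snippet evaluates the CHP and boiler outputs for purchased gas (the function names are ours; the default efficiencies mirror the values $\eta_{pg}=\eta_{hg}=0.7$ and $\eta_{bg}=0.8$ used later in the simulations):

```python
def chp_output(gas_for_power, gas_for_heat, eta_pg=0.7, eta_hg=0.7):
    """CHP system: P_CHP gas units -> eta_pg*P_CHP electricity,
    H_CHP gas units -> eta_hg*H_CHP hot water."""
    return eta_pg * gas_for_power, eta_hg * gas_for_heat

def boiler_output(gas, eta_bg=0.8):
    """Boiler: H_b gas units -> eta_bg*H_b hot water."""
    return eta_bg * gas

# Example: MG i buys 100 gas units for CHP power, 50 for CHP heat, 20 for the boiler
power, chp_heat = chp_output(100.0, 50.0)
total_heat = chp_heat + boiler_output(20.0)
```

This is only a bookkeeping sketch of the fixed-ratio conversions; prices and purchase decisions are treated in the cost function later.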
\subsection{Energy Demands} {\color{black}{MG $i$ needs to meet the electricity $L_{ie}(t)$, hydrogen $L_{iy}(t)$, and heat $L_{ih}(t)$ demands. Although these demands are stochastic, they still need to be met quickly and precisely.}} \subsubsection{Electricity Demands} First, MG $i$ uses renewable energy to meet its users' electricity demands, $L_{ie}(t)$. If $N_{i}(t)>L_{ie}(t)$, extra renewable energy can be used for energy storage, water electrolysis, and energy trading. Otherwise, MG $i$ uses all renewable energy to serve its loads. The unsatisfied electricity loads are expressed as $L_{ie}(t)-N_{i}(t)$ and are served by the following methods: \begin{itemize} \item Discharge the battery. MG $i$ can draw $D_{ie}(t)$ units of electricity from the battery to serve unsatisfied electricity loads. \item Generate electricity using hydrogen. Fuel cell vehicles can use hydrogen to generate $\eta_f h Y_{if}(t)$ units of electricity. \item Generate electricity using the CHP system. The CHP system can consume natural gas to generate $\eta_{pg}P_{i}^{CHP}(t)$ units of electricity to meet electricity demands. \item Purchase electricity by energy trading. MG $i$ may acquire $X_{i}(t)$ units of electricity by trading with other MGs. \item Purchase electricity from the electricity utility company. MG $i$ can purchase $E_{i}(t)$ units of electricity from the electricity utility company. \end{itemize} \subsubsection{Hydrogen Demands} First, vehicle $l_i$ uses $Y_{il}(t-1)$ units of hydrogen stored in the vehicle to meet its driving demands $h_{il}(t)$, which can be estimated from historical data. If $Y_{il}(t-1)>h_{il}(t)+Y_{il,min}$, the vehicle can drive normally. If $Y_{il}(t-1)\leq h_{il}(t)+Y_{il,min}$, the vehicle $l_i$ uses all hydrogen in the vehicle for driving. The hydrogen deficit is obtained from MG $i$ or purchased from a hydrogen-producing company. MG $i$ purchases $d_{i}(t-1)$ units of hydrogen to meet the total hydrogen demand $L_{iy}(t)$ of all vehicles at time slot $t$. 
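The serving order of the electricity demand can be sketched as follows. This is a simplified illustration with hypothetical function names; it ignores the mutual-exclusion and capacity constraints treated later:

```python
def electricity_balance(L_ie, N_i, D_ie, fc_power, chp_power, X_i):
    """Serve the load first with renewable energy N_i; cover any residual
    with battery discharge D_ie, fuel-cell generation, CHP output, and
    traded electricity X_i; the remainder E_i is bought from the utility."""
    residual = max(L_ie - N_i, 0.0)
    E_i = max(residual - (D_ie + fc_power + chp_power + X_i), 0.0)
    surplus = max(N_i - L_ie, 0.0)  # usable for storage, electrolysis, trading
    return E_i, surplus
```

For example, with a 100-unit load, 60 units of renewable output, and 30 units supplied by battery, fuel cell, and CHP together, 10 units remain to be purchased from the utility company.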
\subsubsection{Heat Demands} MG $i$ uses the hot water stored in the water tank to meet its heat demands. If this water cannot meet its heat demands, MG $i$ will use both the CHP system and the boiler to produce $\eta_{hg}H_{i}^{CHP}(t)+\eta_{bg}H_{i}^{b}(t)$ units of hot water to meet its heat demands $L_{ih}(t)$ at time slot $t$. \subsection{Dynamic Model for Energy Storages} Each MG has a battery that can store extra electricity generated by renewable energy generation, and a hot water tank to supply hot water. Meanwhile, hydrogen storage is introduced, and the dynamic model for the three types of energy storage is considered. For MG $i$, the electricity of the battery, the hydrogen of the storage, and the equivalent thermal energy of the hot water tank are denoted by $B_{i}(t)$, $Y_{i}(t)$, and $W_{i}(t)$ at the end of one time slot, respectively. The electricity, hydrogen, and equivalent thermal energy are charged in the amounts of $C_{ie}(t)$, $C_{iy}(t)$, and $C_{ih}(t)$, and discharged in the amounts of $D_{ie}(t)$, $D_{iy}(t)$, and $D_{ih}(t)$, respectively. Then, the energy storage dynamics can be obtained as: \begin{equation} B_{i}(t+1)=B_{i}(t)+C_{ie}(t)-D_{ie}(t) \label{A1} \end{equation} \begin{equation} Y_{i}(t+1)=Y_{i}(t)+C_{iy}(t)-D_{iy}(t) \label{Y1} \end{equation} \begin{equation} W_{i}(t+1)=W_{i}(t)+C_{ih}(t)-D_{ih}(t) \label{A3} \end{equation} where $C_{iy}(t)$ denotes the amount of hydrogen injected into hydrogen storage, which is generated by the electrolyzer during one time slot. $\frac{hC_{iy}(t)}{\eta_e}$ is the energy consumed by the electrolyzer during one time slot, $\eta_e$ is the conversion efficiency of the electrolyzer from electricity to hydrogen, and $h$ is the heating value of hydrogen, which is {$1.4 \times 10^{8}$ J/kg}}. Hydrogen needs to be compressed and stored. The compression energy is $c_1C_{iy}(t)$, and $c_1$ is the specific energy consumption of the compressor. 
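A minimal Python sketch of the per-slot storage updates (\ref{A1})--(\ref{A3}) and of the electrolyzer energy accounting above; the function names are ours, and the default value of `c1` is a placeholder for the specific compression energy, not a value from the paper:

```python
H = 1.4e8  # heating value of hydrogen, J/kg

def step_storage(level, charge, discharge, capacity):
    """One-slot storage update: level(t+1) = level(t) + C(t) - D(t).
    A feasible decision must keep the level within [0, capacity]."""
    new_level = level + charge - discharge
    if not 0.0 <= new_level <= capacity:
        raise ValueError("infeasible charge/discharge decision")
    return new_level

def electrolyzer_electricity(c_iy, eta_e=0.85, c1=2.0e6):
    """Electricity used to produce and store c_iy kg of hydrogen:
    H*C_iy/eta_e for electrolysis plus c1*C_iy for compression."""
    return H * c_iy / eta_e + c1 * c_iy
```

The same update applies to the battery, the hydrogen storage, and the water tank, each with its own capacity.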
To be specific, the operations of the battery, hydrogen storage, and hot water tank of MG $i$ are subject to several constraints. First, electricity charging and discharging will not happen simultaneously. \begin{equation} 1_{C_{ie}(t)>0}+1_{D_{ie}(t)>0}\leq{1}, \label{cd1} \end{equation} where the indicator function is \begin{center} $1_{f(x)>0}=\left\{\begin{array}{cc} 1 & \mbox{if}\ f(x)>0\\ 0 & \mbox{otherwise} \end{array}\right.$ \end{center} Battery, hydrogen storage, and water tank of MG $i$ have finite capacities: \begin{equation} 0\leq B_{i}(t) \leq B_{i,max} \label{Bm} \end{equation} \begin{equation} 0\leq Y_{i}(t) \leq Y_{i,max} \label{Ym} \end{equation} \begin{equation} 0\leq W_{i}(t) \leq W_{i,max} \label{Wm} \end{equation} where $B_{i,max}$, $Y_{i,max}$, and $W_{i,max}$ are the upper bounds of the battery, the hydrogen storage, and the thermal energy of the hot water tank. The electricity, hydrogen, and equivalent thermal energy charged during one time slot are bounded by $C_{ie,max}$, $C_{iy,max}$, and $C_{ih,max}$, and the corresponding discharges by $D_{ie,max}$, $D_{iy,max}$, and $D_{ih,max}$. Thus, the charging and discharging constraints of the energy storage are denoted by \begin{equation} 0\leq C_{ie}(t) \leq C_{ie,max}, 0\leq D_{ie}(t) \leq D_{ie,max} \label{Cem} \end{equation} \begin{equation} 0\leq C_{iy}(t) \leq C_{iy,max}, 0\leq D_{iy}(t) \leq D_{iy,max} \label{Cym} \end{equation} \begin{equation} 0\leq C_{ih}(t) \leq C_{ih,max}, 0\leq D_{ih}(t) \leq D_{ih,max} \label{Chm} \end{equation} The feasible control decision on $C_{ie}(t)$ and $D_{ie}(t)$ should meet constraints (\ref{cd1}), (\ref{Bm}), and (\ref{Cem}) simultaneously. {\color{black}{Since electricity charging and discharging will not happen simultaneously, the energy level of the battery cannot exceed the capacity of the battery, which means that $B_{i}(t)+C_{ie}(t)\leq B_{i,max}$. Meanwhile, the energy level of the battery cannot be negative, which means $B_{i}(t)-D_{ie}(t) \geq 0$. 
}}Therefore, the charging and discharging constraints of the battery are denoted as: \begin{equation} 0\leq{C_{ie}(t)}\leq{\min[B_{i,max}-B_{i}(t),C_{ie,max}]} \label{C1} \end{equation} \begin{equation} 0\leq{D_{ie}(t)}\leq{\min[B_{i}(t),D_{ie,max}]} \label{C2} \end{equation} \subsection{Dynamic Model for Fuel Cell Vehicles} Because fuel cell vehicles can act as controllable power plants, they are introduced and a dynamic model for them is considered. The model includes the transportation features and power generation of fuel cell vehicles. The transportation features are information about the departure time, arrival time, and driving distance of each vehicle, which can be estimated. Power generation is determined by the transportation features and hydrogen storage of vehicles. The hydrogen in the vehicle $l_i$ is $Y_{il}(t)$ at the end of one time slot. The number of vehicles in MG $i$ is $L_{i}$. Then, the model of fuel cell vehicle $l_i$ is as follows: \begin{equation} Y_{il}(t+1)=\left\{\begin{array}{cc} Y_{il}(t)+D_{iyl}(t)+d_{il}(t) & \mbox{injection} \\ Y_{il}(t)-Y_{ifl}(t) & \mbox{generation} \\ Y_{il}(t)-h_{il}(t) & \mbox{driving} \end{array}\right. \label{Yc} \end{equation} \begin{equation} \begin{split} \sum_{l=1}^{L_i}D_{iyl}(t)=D_{iy}(t); \sum_{l=1}^{L_i}d_{il}(t)=d_{i}(t)\\ \sum_{l=1}^{L_i}Y_{ifl}(t)=Y_{if}(t); \sum_{l=1}^{L_i}h_{il}(t)=h_{i}(t)\\ \end{split} \label{DdYh} \end{equation} \begin{equation} h_{il}(t)=\eta_dh_{ild}(t) \label{hil} \end{equation} The model in (\ref{Yc}) is a hybrid piecewise affine model with three modes. The injection mode denotes that the vehicle is being injected with hydrogen. The generation mode means that the vehicle is available for power generation. The driving mode denotes that the vehicle is driving. The three modes will not happen simultaneously. $D_{iyl}(t)+d_{il}(t)$ is the hydrogen injected into the vehicle $l_i$ at time slot $t$. 
Fuel cell vehicle $l_i$ obtains hydrogen $D_{iyl}(t)$ from MG $i$ and purchases hydrogen $d_{il}(t)$ from the hydrogen station of the hydrogen-producing company. $Y_{ifl}(t)$ is the hydrogen consumed for generation by vehicle $l_i$ at time slot $t$. The power generated by fuel cell vehicle $l_i$ is denoted as $\eta_fhY_{ifl}(t)$, where $\eta_f$ is the conversion efficiency of the fuel cell from hydrogen to electricity. $h_{il}(t)$ is the hydrogen used for transportation by vehicle $l_i$ at time slot $t$, $h_{ild}(t)$ is the travel distance, and $\eta_d$ is the amount of hydrogen each vehicle consumes per kilometer. For fuel cell vehicle $l_i$, the hydrogen storage $Y_{il,max}$, the hydrogen injected $D_{iyl,max}$ and $d_{il,max}$, the hydrogen consumed for generation $Y_{ifl,max}$, and the hydrogen used for transportation $h_{il,max}$ are bounded during one time slot: \begin{equation} 0\leq Y_{il}(t)\leq Y_{il,max} \label{Yil} \end{equation} \begin{equation} 0\leq D_{iyl}(t)\leq D_{iyl,max}, 0\leq d_{il}(t)\leq d_{il,max} \label{Yi1} \end{equation} \begin{equation} 0\leq Y_{ifl}(t)\leq Y_{ifl,max} \label{Yi2} \end{equation} \begin{equation} 0\leq h_{il}(t)\leq h_{il,max} \label{Yi3} \end{equation} \subsection{Cost Function} The cost function of MG $i$ consists of the payment and revenue, which is denoted as \begin{equation} \begin{aligned} C_{i}(t)&=C_{ihy}(t)+C_{ip}(t)+C_{ig}(t)+C_{iX}(t)-R_{iS}(t)-R_{ip}(t) \end{aligned} \label{micro1} \end{equation} \begin{equation} \begin{aligned} C_{ihy}(t)&=p_{y}(t)d_{i}(t), C_{ip}(t)=E_{i}(t)p_{e}(t) \\ C_{ig}(t)&=(P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t) \\ C_{iX}(t)&=\beta_i(t)X_{i}(t), R_{iS}(t)=\alpha_i(t)S_{i}(t), R_{ip}(t)=E_{io}(t)p_{eo}(t) \end{aligned} \end{equation} where $C_{ihy}(t)$ is the hydrogen cost of purchasing hydrogen from the hydrogen-producing company by all vehicles of MG $i$ at time slot $t$. 
$C_{ip}(t)$ and $C_{ig}(t)$ are the costs of purchasing electricity and gas from the electricity and gas utility company at time slot $t$. $C_{iX}(t)$ and $R_{iS}(t)$ are the cost of purchasing electricity from other MGs and the revenue of selling electricity to other MGs in energy trading at time slot $t$. $R_{ip}(t)$ is the revenue from selling electricity to the electricity utility company at time slot $t$. $p_{y}(t)$ is the hydrogen price of the hydrogen-producing company. $d_{i}(t)$ is the amount of hydrogen purchased from the hydrogen-producing company by all vehicles of MG $i$ at time slot $t$. $E_{i}(t)$ is the amount of electricity purchased from the electricity utility company by MG $i$ at time slot $t$. When MG $i$ purchases electricity from other MGs, $\beta_i(t)$ is the purchase price and $X_{i}(t)$ is the amount of electricity at time slot $t$. When MG $i$ sells electricity to other MGs, $\alpha_i(t)$ is the selling price of MG $i$ and $S_{i}(t)$ is the amount of electricity at time slot $t$. $E_{io}(t)$ and $p_{eo}(t)$ are the amount and price of electricity sold to the electricity utility company by MG $i$ at time slot $t$. Note that the electricity demand $L_{ie}(t)$, hydrogen demand $L_{iy}(t)$, and heat demand $L_{ih}(t)$ of MG $i$ should be satisfied when they arrive, i.e. \begin{equation} \begin{aligned} L_{ie}(t)&=E_{i}(t)+N_{i}(t)+X_{i}(t)-S_{i}(t) -C_{ie}(t)+D_{ie}(t)\\&+\eta_fhY_{if}(t)-\frac{hC_{iy}(t)}{\eta_e}-c_1C_{iy}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)\\ L_{iy}(t)&=h_{i}(t)\\ L_{ih}(t)&=\eta_{hg}H_{i}^{CHP}(t)+\eta_{bg}H_{i}^b(t)-C_{ih}(t)+D_{ih}(t) \end{aligned} \label{lg} \end{equation} \section{Solution Methodology} \subsection{Optimization Method} The strategy set of MG $i$ is $\boldsymbol{M}_{i}(t)$=\{$C_{ie}(t)$, $D_{ie}(t)$, $C_{iy}(t)$, $D_{iy}(t)$, $C_{ih}(t)$, $D_{ih}(t)$, $D_{iyl}(t)$, $d_{il}(t)$, $Y_{ifl}(t)$, $h_{il}(t)$, $E_{i}(t)$, $P^{CHP}_{i}(t)$, $H^{CHP}_{i}(t)$, $H_{i}^{b}(t)$, $X_{i}(t)$, $S_{i}(t)$, $E_{io}(t)$\}. 
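The per-slot cost (\ref{micro1}) is a linear combination of payments and revenues. The following sketch (with argument names of our choosing) evaluates it directly:

```python
def mg_cost(d_i, E_i, P_chp, H_chp, H_b, X_i, S_i, E_io,
            p_y, p_e, p_g, beta_i, alpha_i, p_eo):
    """C_i(t): payments for hydrogen, grid electricity, gas, and traded
    purchases, minus trading revenue and feed-in revenue."""
    payment = p_y * d_i + p_e * E_i + p_g * (P_chp + H_chp + H_b) + beta_i * X_i
    revenue = alpha_i * S_i + p_eo * E_io
    return payment - revenue
```

Each term corresponds one-to-one with $C_{ihy}$, $C_{ip}$, $C_{ig}$, $C_{iX}$, $R_{iS}$, and $R_{ip}$ above.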
According to the system model, the optimization problem of MG $i$ is to find a control policy that schedules the electricity, hydrogen, and heat at each time slot to minimize the time average energy cost, which can be denoted as a stochastic network optimization problem: \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} \lim_{T \rightarrow \infty}\frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{C_{i}(t)\} \end{aligned} \label{eq30} \end{equation} subject to (\ref{A1}) - (\ref{Yi3}) and (\ref{lg}). {\color{black}{Lyapunov optimization gives simple online solutions based on the current system state, as opposed to traditional approaches such as Markov decision processes and dynamic programming, which have very high computational complexity and require a priori information of all the random processes in the system. The performance of the Lyapunov optimization algorithm can be made arbitrarily close to the optimal value \cite{Lakshminarayana2014Cooperation}. The underlying assumption about the availability of future information renders offline approaches ill-suited for energy storage system applications with high uncertainty, whereas dynamic programming solutions are impractical for multiple networked energy storage systems \cite{Sarthak2018Optimal}.}} The time average expected values under any feasible control policy of the original problem are denoted as follows: \begin{equation} \begin{split} \overline{C_{ie}}&=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{C_{ie}(t)\}, \overline{D_{ie}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{ie}(t)\}\\ \overline{C_{iy}}&=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{C_{iy}(t)\} , \overline{D_{iy}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{iy}(t)\}\\ \overline{C_{ih}}&=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{C_{ih}(t)\}, \overline{D_{ih}} =\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{ih}(t)\}\\ 
\overline{D_{iyl}}&+\overline{d_{il}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{D_{iyl}(t)+d_{il}(t)\}\\ \overline{Y_{ifl}}&+\overline{h_{il}}=\lim_{T\rightarrow\infty}\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\{Y_{ifl}(t)+h_{il}(t)\} \end{split} \label{sto1} \end{equation} The above stochastic network optimization problem (\ref{eq30}) cannot be solved directly owing to the capacity constraints of the battery, hydrogen storage, and water tank (\ref{Bm}) - (\ref{Wm}) of MG $i$ and the hydrogen storage (\ref{Yil}) of fuel cell vehicle $l_i$. To be specific, stochastic network optimization can ensure that the average energy consumption equals the average energy generation over a long period, but cannot provide a hard constraint on the difference between consumption and generation at any time slot. To address this issue, the problem is relaxed as follows: the optimization problem (\ref{eq30}) is subject to \begin{equation} \begin{split} \overline{C_{ie}} &= \overline{D_{ie}}\\ \overline{C_{iy}} &= \overline{D_{iy}}\\ \overline{C_{ih}} &= \overline{D_{ih}}\\ \overline{D_{iyl}}+\overline{d_{il}} &= \overline{Y_{ifl}}+\overline{h_{il}}\\ \end{split} \label{cd} \end{equation} and (\ref{Cem}) - (\ref{Chm}), (\ref{DdYh}), (\ref{hil}), (\ref{Yi1}) - (\ref{Yi3}), (\ref{lg}). $C_{i}^{opt}$ is denoted as the optimal value of the cost function for the original problem and $C_{ir}^{opt}$ is denoted as the optimal value of the cost function for the relaxed problem. Any feasible solution to the original problem is also a feasible solution to the relaxed problem, that is, the relaxed problem is less constrained than the original problem. Therefore, $C_{ir}^{opt} \leq C_{i}^{opt}$. 
The optimal solution to the relaxed problem can be obtained by the stationary and randomized policy $\Pi$, stated as follows: \begin{equation} \begin{aligned} \mathbb{E}\{C_{i}^{\Pi}(t)\}=C_{ir}^{opt} \end{aligned} \end{equation} subject to: \begin{equation} \begin{split} C^{\Pi}_{ie}(t)& = D^{\Pi}_{ie}(t)\\ C^{\Pi}_{iy}(t) &= D^{\Pi}_{iy}(t)\\ C^{\Pi}_{ih}(t) &= D^{\Pi}_{ih}(t)\\ D^{\Pi}_{iyl}(t)+d^{\Pi}_{il}(t) &= Y^{\Pi}_{ifl}(t)+h^{\Pi}_{il}(t)\\ \end{split} \label{cd2} \end{equation} and (\ref{Cem}) - (\ref{Chm}), (\ref{DdYh}), (\ref{hil}), (\ref{Yi1}) - (\ref{Yi3}), (\ref{lg}). The existence of the stationary and randomized policy can be proved by Carath\'eodory's theorem \cite{georgiadis2006resource}. Clearly, the solutions to the relaxed problem are feasible for the original problem only if they also meet constraints (\ref{Bm}) - (\ref{Wm}) and (\ref{Yil}). To reach this objective, the constants $\theta_{i}$, $\xi_{i}$, $\varepsilon_{i}$ and $\gamma_{il}$ are defined. These constants are adjusted appropriately to make the solutions to the relaxed problem also feasible for the original problem. To start, the virtual queues $A_{i}(t)$, $F_{i}(t)$, $Z_{i}(t)$, and $I_{il}(t)$ for the battery, hydrogen storage, and water tank of MG $i$, and the hydrogen storage of fuel cell vehicle $l_i$ are defined as follows, respectively: \begin{equation} \begin{aligned} &A_{i}(t)=B_{i}(t)-\theta_{i} , F_{i}(t)=Y_{i}(t)-\xi_{i} \\ &Z_{i}(t)=W_{i}(t)- \varepsilon_{i}, I_{il}(t)=Y_{il}(t)- \gamma_{il} \end{aligned} \label{vir} \end{equation} where $\theta_{i}$, $\xi_{i}$, $\varepsilon_{i}$, and $\gamma_{il}$ are perturbations that are used to guarantee the bounds of $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. The Lyapunov function is defined as $Q_{i}(t)=\frac{1}{2}A_{i}(t)^{2}+\frac{1}{2}F_{i}(t)^{2}+\frac{1}{2}Z_{i}(t)^{2}+\frac{1}{2}\sum^{L_i}_{l=1}I_{il}(t)^{2}$. 
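The shifted queue states of (\ref{vir}) and the quadratic Lyapunov function can be sketched as follows (function names are ours):

```python
def virtual_queues(B_i, Y_i, W_i, Y_il, theta, xi, eps, gamma):
    """A = B - theta, F = Y - xi, Z = W - eps, I_l = Y_l - gamma_l,
    where Y_il and gamma are per-vehicle lists."""
    A, F, Z = B_i - theta, Y_i - xi, W_i - eps
    I = [y - g for y, g in zip(Y_il, gamma)]
    return A, F, Z, I

def lyapunov(A, F, Z, I):
    """Q(t) = (A^2 + F^2 + Z^2 + sum_l I_l^2) / 2."""
    return 0.5 * (A * A + F * F + Z * Z + sum(i * i for i in I))
```

Keeping $Q_i(t)$ small keeps each storage level close to its perturbation constant, which is what makes the capacity bounds enforceable later.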
The conditional Lyapunov drift, which represents the change in the Lyapunov function, is defined as \begin{equation} \Delta_{i}(t)=\mathbb{E}\{Q_{i}(t+1)-Q_{i}(t)|B_{i}(t),Y_{i}(t),W_{i}(t), Y_{il}(t)\} \end{equation} where the expectation is related to the random processes of the system, given the values $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. According to the equation for the virtual queue (\ref{vir}) associated with the evolution of battery, hydrogen storage, and water tank of MG $i$ in (\ref{A1}) - (\ref{A3}), and the hydrogen storage of the fuel cell vehicle in (\ref{Yc}), the Lyapunov drift is bounded as \begin{equation} \begin{aligned} \Delta_{i}(t)&=\mathbb{E}\{Q_{i}(t+1)-Q_{i}(t)|B_{i}(t),Y_{i}(t),W_{i}(t), Y_{il}(t)\} \\&\leq G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))]\} \end{aligned} \label{rightmin} \end{equation} where $G_{i}$ is a constant given by $G_{i}=\frac{1}{2}\{\max(C_{ie,max}^2,D_{ie,max}^2)+\max(C_{iy,max}^2,D_{iy,max}^2)+\max(C_{ih,max}^2,D_{ih,max}^2)+\sum^{L_i}_{l=1}[\max((D_{iyl,max}+d_{il,max})^2,(Y_{ifl,max}+h_{il,max})^2)]\}$. The proof of this step is given in Appendix A. In order to keep these queues stable, MG $i$ needs to minimize the drift $\Delta_{i}(t)$. In addition, MG $i$ intends to minimize the energy cost. Hence, $V_{i}$ is used to represent the tradeoff between the two objectives. 
Then, the drift-plus-penalty function is denoted as \begin{equation} \begin{aligned} \Delta_{i}(t)+V_{i}\mathbb{E}\{C_{i}(t)\} \leq & G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]\}+V_{i}\mathbb{E}\{C_{i}(t)\} \\=&G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t)) +F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)\\&+(P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t) +\beta_i(t)X_{i}(t) \\&-\alpha_i(t)S_{i}(t)-E_{io}(t)p_{eo}(t))\} \end{aligned} \label{p1} \end{equation} The relaxed problem can be viewed as minimizing the cost of the MG while maintaining the stability of virtual queues. The drift-plus-penalty term consists of two terms: the Lyapunov drift term $\Delta_{i}(t)$ and the modified cost term $V_{i}\mathbb{E}\{C_{i}(t)\}$. A larger value of $V_{i}$ means that minimizing the energy cost has greater priority than minimizing the drift, and vice versa. The objective of Lyapunov optimization is to minimize the right-hand side of (\ref{p1}), i.e. \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))] +V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t) X_{i}(t)\\&-\alpha_i(t) S_{i}(t)-E_{io}(t)p_{eo}(t)) \end{aligned} \label{P2} \end{equation} subject to constraints (\ref{Cem}) - (\ref{Chm}), (\ref{DdYh}), (\ref{hil}), (\ref{Yi1}) - (\ref{Yi3}), (\ref{lg}). In the following section, the price and amount of energy in energy trading among multiple MGs are determined, and the optimal strategy of problem (\ref{P2}) is obtained by solving the linear programming problem. 
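For concreteness, the linear per-slot objective of (\ref{P2}) can be evaluated as below. This is a sketch with hypothetical dictionary keys; in practice the objective is minimized by a linear programming solver subject to the stated constraints:

```python
def per_slot_objective(dec, q, prices, V):
    """Queue-weighted storage terms plus V times the instantaneous cost,
    matching problem (P2): dec holds the decision variables, q the virtual
    queue values A, F, Z, I (I is a per-vehicle list)."""
    queues = (q['A'] * (dec['C_ie'] - dec['D_ie'])
              + q['F'] * (dec['C_iy'] - dec['D_iy'])
              + q['Z'] * (dec['C_ih'] - dec['D_ih'])
              + sum(q['I'][l] * (dec['D_iyl'][l] + dec['d_il'][l]
                                 - dec['Y_ifl'][l] - dec['h_il'][l])
                    for l in range(len(q['I']))))
    cost = (prices['p_y'] * dec['d_i'] + prices['p_e'] * dec['E_i']
            + prices['p_g'] * (dec['P_chp'] + dec['H_chp'] + dec['H_b'])
            + prices['beta'] * dec['X_i'] - prices['alpha'] * dec['S_i']
            - prices['p_eo'] * dec['E_io'])
    return queues + V * cost
```

Because the objective is linear in the decision variables and all constraints are linear as well, each slot reduces to a small linear program.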
\subsection{Double-Auction Mechanism} Optimization problem (\ref{P2}) has two price variables. Owing to the decentralized structure of energy trading, the selling price and purchase price can be determined by the external auctioneer according to a double-auction mechanism. First, the selling price and purchase price submitted by each MG in energy trading among multiple MGs are investigated. \textbf{Lemma 1.} MG $i$ decides the selling price $\widetilde{\alpha}_{i}(t)$ and purchase price $\widetilde{\beta}_{i}(t)$ based on the cost-minimization problem: \begin{equation} \widetilde{\alpha}_{i}(t)=\max[\frac{-A_{i}(t)}{V_{i}},\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}, p_{eo}(t)] \label{e11} \end{equation} \begin{equation} \widetilde{\beta}_{i}(t)=\min[\frac{\max(-A_{i}(t),0)}{V_{i}},\frac{p_{g}}{\eta_{pg}},p_{e}(t)] \label{e12} \end{equation} where $p_{eo}(t)$ is the price of energy sold to the electricity utility company by MGs, and $p_{eo}(t)<p_{e}(t)$. The proof of this step is presented in Appendix B. After determining $\widetilde{\alpha}_{i}(t)$ and $\widetilde{\beta}_{i}(t)$, the amounts of electricity $\widetilde{S}_{i}(t)$ and $\widetilde{X}_{i}(t)$ that MG $i$ will sell and purchase in energy trading are determined by solving (\ref{P2}). MGs are willing to sell their energy when their energy storages have enough energy. Moreover, they are willing to purchase energy when the cost of purchasing energy is lower than that of generating energy by themselves (such as generating electricity with the CHP system or from hydrogen). The maximum amounts of electricity that MG $i$ can sell $S_{i,max}(t)$ and purchase $X_{i,max}(t)$ at time slot $t$ are: \begin{equation} S_{i,max}(t)=N_{i}(t)-L_{ie}(t) \end{equation} \begin{equation} X_{i,max}(t)=L_{ie}(t)-N_{i}(t) \end{equation} A double-auction mechanism is designed to encourage MGs to actively trade energy and ensure the benefits of MGs. 
The double-auction mechanism has two steps: \begin{itemize} \item MGs submit the selling price, purchase price, and the corresponding amount of energy to the external auctioneer. \item The external auctioneer decides the accepted selling price and purchase price by trading rules, and allocates energy to MGs to minimize the transmission loss. \end{itemize} The threshold-price double auction mechanism \cite{Kant2005Double} is described in this section. First, the external auctioneer collects and sorts all received purchase prices in descending order and selling prices in ascending order: $\overline{\beta}_{1}(t)\geq \overline{\beta}_{2}(t)\geq\cdots\geq \overline{\beta}_{i}(t) \geq r>\overline{\beta}_{i+1}(t) \geq\cdots \geq\overline{\beta}_{n}(t)$ and $\overline{\alpha}_{1}(t)\leq \overline{\alpha}_{2}(t)\leq \cdots\leq \overline{\alpha}_{j}(t)\leq r< \overline{\alpha}_{j+1}(t)\leq \cdots \leq\overline{\alpha}_{n}(t)$, where $r$ is the threshold price. If $i=j$, the external auctioneer notifies MGs $l$, $l=1,2, \cdots , i$, that they can trade. The accepted selling price and purchase price are the same, i.e., $\alpha(t)=\beta(t)=r$. If $i>j$, the external auctioneer notifies MGs $l$, $l=1,2, \cdots , j$, that they can trade. The accepted selling price and purchase price are $\alpha(t)=r$ and $\beta(t)=\overline{\beta}_{j+1}(t)$, respectively. If $i<j$, the external auctioneer notifies MGs $l$, $l=1,2, \cdots , i$, that they can trade. The accepted selling price and purchase price are $\alpha(t)=\overline{\alpha}_{i+1}(t)$ and $\beta(t)=r$, respectively. The accepted purchase price and selling price for MG $i$ can be derived as \begin{equation} \hat{\beta}_{i}(t)=\left\{\begin{array}{cc} \beta(t) & \mbox{if MG $i$ purchases electricity}\\ 0 & \mbox{otherwise} \end{array}\right. \end{equation} and \begin{equation} \hat{\alpha}_{i}(t)=\left\{\begin{array}{cc} \alpha(t) & \mbox{if MG $i$ sells electricity}\\ 0 & \mbox{otherwise} \end{array}\right. 
\end{equation} After determining the market clearing prices, the external auctioneer needs to match energy sellers and buyers to reduce energy losses. The total transmission loss is \begin{equation} Loss(t) = \sum_{i=1}^{k}\sum_{j=1}^{k} I_{ij}T_{ij}(t) \end{equation} where $T_{ij}(t)$ is the amount of energy transmitted from MG $i$ to MG $j$ at time $t$. $I_{ij}$ is the energy loss coefficient, which is related to the transmission distance. The external auctioneer aims to minimize the energy losses during transmission: \begin{equation} \min_{T_{ij}, \forall i, j \in [1, k]} \quad Loss(t) \label{chap3equ:minLoss} \end{equation} subject to: \begin{equation} \sum_{j=1}^{k} T_{ij} \leq \overline{S}_{i}(t) \label{chap3equ:sell} \end{equation} \begin{equation} \sum_{i=1}^{k} (1-I_{ij})T_{ij} \geq \overline{X}_{j}(t) \label{chap3equ:buy} \end{equation} After determining $\alpha_{i}(t)$ and $\beta_{i}(t)$, the actual amount of electricity that MG $i$ sells $S^{*}_{i}(t)$ or purchases $X^{*}_{i}(t)$ can be determined by linear programming to minimize the energy losses during transmission. The performance of the proposed trading mechanism is as follows: \textbf{Lemma 2.} Using the mechanism presented above, all MGs will submit the selling prices and purchase prices truthfully; otherwise, they will get lower benefits owing to deviating from the true values of the selling and purchase prices in (\ref{e11}) and (\ref{e12}). 
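The clearing rule of the threshold-price double auction described above can be sketched as follows. The function name is ours; `r` is the threshold price, and the function returns the number of trading pairs together with the accepted purchase price $\beta(t)$ and selling price $\alpha(t)$:

```python
def threshold_double_auction(buy_bids, sell_bids, r):
    """Threshold-price double auction clearing (sketch)."""
    buys = sorted(buy_bids, reverse=True)   # beta_1 >= beta_2 >= ...
    sells = sorted(sell_bids)               # alpha_1 <= alpha_2 <= ...
    i = sum(1 for b in buys if b >= r)      # buyers bidding at least r
    j = sum(1 for s in sells if s <= r)     # sellers asking at most r
    if i == j:                  # both sides clear at the threshold price
        return i, r, r
    if i > j:                   # j pairs trade; buyers pay beta_{j+1}
        return j, buys[j], r
    return i, r, sells[i]       # i pairs trade; sellers receive alpha_{i+1}
```

Paying buyers more than sellers receive is never required here; when $i \neq j$ the rejected bid sets one side's price, which preserves the truthfulness property stated in Lemma 2.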
The proof of this step is shown in Appendix C. \subsection{Algorithm Design and Performance Analysis} After obtaining $X^{*}_{i}(t)$ and $S^{*}_{i}(t)$ by (\ref{chap3equ:minLoss})-(\ref{chap3equ:buy}) and the double-auction mechanism, an optimal strategy set of MG $i$ can be acquired by solving the linear programming problem (\ref{P2}): $\boldsymbol{M}_{i}^{*}(t)$=\{$C_{ie}^{*}(t)$, $D_{ie}^{*}(t)$, $C_{iy}^{*}(t)$, $D_{iy}^{*}(t)$, $C_{ih}^{*}(t)$, $D_{ih}^{*}(t)$, $D_{iyl}^{*}(t)$, $d_{il}^{*}(t)$, $Y_{ifl}^{*}(t)$, $h_{il}^{*}(t)$, $E_{i}^{*}(t)$, $P^{CHP,*}_{i}(t)$, $H^{CHP,*}_{i}(t)$, $H_{i}^{b,*}(t)$, $X^{*}_{i}(t)$, $S^{*}_{i}(t)$, $E_{io}^{*}(t)$\} to minimize the drift-plus-penalty. The implementation process of the algorithm is shown in Algorithm 1. \begin{algorithm}[h] \caption{Joint Energy Scheduling and Trading Algorithm} \begin{algorithmic}[1] \State Set $t=0$. \State Set the initial values $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. \For{each MG $i$} \State Calculate $\overline{\alpha}_{i}(t)$ and $\overline{\beta}_{i}(t)$ by (\ref{e11}) and (\ref{e12}), calculate $\overline{X}_{i}(t)$ and $\overline{S}_{i}(t)$ by (\ref{P2}), and then submit them to the external auctioneer. \State Calculate $\alpha_{i}(t)$, $\beta_{i}(t)$, $X_{i}(t)$, and $S_{i}(t)$ by the double-auction mechanism. \State Calculate $\boldsymbol{M}_i(t)$ using (\ref{P2}). \EndFor \State Update $B_{i}(t+1)$, $Y_{i}(t+1)$, $W_{i}(t+1)$ by (\ref{A1}) - (\ref{A3}), and $Y_{il}(t+1)$ by (\ref{Yc}). \label{code:recentEnd} \end{algorithmic} \end{algorithm} In the aforementioned design, the capacity constraints of the battery, hydrogen storage, and water tank of MG $i$ and the hydrogen storage of fuel cell vehicle $l_i$ are not considered. 
In fact, the capacity constraints should be considered as follows: \textbf{Lemma 3.} If $\theta_{i}$, $\xi_{i}$, $\varepsilon_{i}$, $\gamma_{il}$ and $V_{i}$ satisfy the following conditions: \begin{equation} \theta_{i}=V_{i}p_{e,max}+D_{ie,max} \label{A2} \end{equation} \begin{equation} \xi_{i}=V_{i}p_{y,max}+D_{iy,max} \label{Y2} \end{equation} \begin{equation} \varepsilon_{i}=\frac{V_{i}p_{g,max}}{\eta_{bg}}+D_{ih,max} \label{A4} \end{equation} \begin{equation} \gamma_{il}=V_{i}p_{y,max}+Y_{ifl,max}+h_{il,max} \label{A5} \end{equation} \begin{equation} \begin{aligned} V_{i,max}= & \min \{\frac{B_{i,max}-C_{ie,max}-D_{ie,max}}{p_{e,max}}, \\&\frac{Y_{i,max}-C_{iy,max}-D_{iy,max}}{p_{y,max}}, \\& \frac{\eta_{bg}(W_{i,max}-C_{ih,max}-D_{ih,max})}{p_{g,max}}, \\&\frac{Y_{il,max}-D_{iyl,max}-d_{il,max}-Y_{ifl,max}-h_{il,max}}{p_{y,max}}\}\\ \end{aligned} \label{P3} \end{equation} where $0\leq V_{i} \leq V_{i,max}$, the capacity constraints of the battery, hydrogen storage, and water tank of MG $i$ and the hydrogen storage of fuel cell vehicle $l_i$ are always satisfied. The proof of this step is presented in Appendix D. According to Lemma 3, the algorithm satisfies the capacity constraints in (\ref{Bm}) - (\ref{Wm}) and (\ref{Yil}). Hence, the algorithm is feasible for the original problem. Then, a performance result for the algorithm based on Lyapunov optimization is provided. \textbf{Theorem 1.} According to the algorithm in the previous section, the expected time average energy cost has a bound: \begin{equation} \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ C_{i}(t) \} \leq C_{i}^{opt} + \frac{G_{i}}{V_{i}} \end{equation} The proof of this step is given in Appendix E. {\color{black}{In a sense, Theorem 1 characterizes the gap between the performance of the proposed algorithm, which is independent of the random distributions, and that of an optimization algorithm with accurate information about the random processes. 
According to (\ref{P3}) and Theorem 1, as the battery, hydrogen storage, and water tank capacities of MG $i$ and the hydrogen storage capacity of fuel cell vehicle $l_i$ increase, the performance of the proposed algorithm can be made arbitrarily close to the optimal performance of an optimization algorithm with accurate random information.}} \section{Numerical Results} In this section, numerical results based on real data are presented to evaluate the algorithm proposed in the previous sections. \subsection{Experimental Setup} A network of three MGs is considered. Each MG includes renewable energy resources, CHP system, fuel cell vehicles, battery, hydrogen storage, boiler, and water tank. Wind-driven turbines and photovoltaic systems are the renewable energy generators, with maximum outputs of 750 kW for MGs $1$ and $2$, and 450 kW for MG $3$. For each MG's electricity load, the hourly load data provided by the PJM hourly load \cite{pjm} is shown in Fig. \ref{fig2}(a). For renewable energy generation, the hourly generation data provided by Renewables.ninja \cite{Institute} is shown in Fig. \ref{fig2}(b). For the price of the electricity utility company, the hourly energy price provided by the Power Smart Pricing program administered for Ameren Illinois \cite{Illinois} is shown in Fig. \ref{fig2}(c). \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{ed.eps}} \centerline{\scriptsize{(a) Electricity demand}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{6a1.eps}} \centerline{\scriptsize{(b) Renewable energy }} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{5a1.eps}} \centerline{\scriptsize{(c) Price of the electricity utility company}} \end{minipage} \caption{Data from website.} \label{fig2} \end{figure*} The maximum electricity consumption of the electrolysis system is 100 kW. 
The total amount of hydrogen consumed for driving by each MG's vehicles takes a random value in [25, 35] m$^3$ during a time slot. When the hydrogen storage of the MG cannot supply the vehicles, the vehicles purchase hydrogen from the hydrogen station of the hydrogen-producing company. In this paper, fuel cell vehicles refer to buses. Each MG has 10 fuel cell buses. Each bus consumes approximately 0.5 m$^3$ of hydrogen per km, and the maximum generation of a bus is 45 kW. The buses depart every 20 minutes from 6:00 am to 10:00 pm. At 6:00 am, there are 5 buses at the starting point and at the bus terminal of each MG, respectively. The distance from the starting point to the bus terminal is about 10 km, and it takes 40-60 minutes to complete the journey. The heat and electricity generation of the CHP system satisfies $H_{i}^{CHP}(t)=P_{i}^{CHP}(t)$. The efficiency parameters are $\eta_{pg}=70\%$, $\eta_{hg}=70\%$, $\eta_{bg}=80\%$, $\eta_{e}=85\%$, and $\eta_{f}=50\%$, respectively. Other parameters are summarized as follows: $p_{y}(t)=10$ cents/m$^3$, $p_{g}(t)=15$ cents/m$^3$, $B_{i,max}=300$kWh, $C_{ie,max}=D_{ie,max}=75$kWh, $W_{i,max}=900$kWh, $C_{ih,max}=D_{ih,max}=225$kWh, $Y_{i,max}=300$m$^3$, $C_{iy,max}=D_{iy,max}=75$m$^3$. \subsection{Results} Fuel cell vehicles and hydrogen storage play important roles in relieving the storage stress of the battery and in further using excess renewable energy. Fig. \ref{fig3} shows that the costs of the MGs with hydrogen storage are lower than those without it. {The existence of hydrogen storage obviously reduces the costs of MGs 1 and 2. The cost of MG 3, however, is only slightly reduced. The reason is that MGs 1 and 2 electrolyze water to supply hydrogen for fuel cell vehicles instead of selling electricity to the electricity company at a low price. Therefore, the costs of MGs 1 and 2 are obviously reduced. Because the renewable energy of MG 3 is insufficient, MG 3 needs to purchase energy.
In Fig. \ref{fig6}(c), MG 3 with hydrogen storage electrolyzes water to supply a little hydrogen for fuel cell vehicles, and MG 3 without hydrogen storage charges the battery. Both of them purchase much hydrogen from the hydrogen-producing company. Therefore, the cost of MG 3 with hydrogen storage is only slightly reduced in Fig. \ref{fig3}.} Then, the comparisons of the costs, energy trading, and battery dynamics with and without hydrogen storage for all three MGs across 24 time slots are presented in Figs. \ref{fig4}-\ref{fig6}. During energy trading, positive values denote purchasing energy, and negative values denote selling energy. Fig. \ref{fig4} shows that MGs achieve lower costs with hydrogen storage in most cases, where MGs electrolyze water to supply hydrogen for fuel cell vehicles or store hydrogen for future demand instead of selling electricity to the electricity utility company at a low price. Fig. \ref{fig5} shows the comparison of energy trading dynamics with and without hydrogen storage. MG $1$ sells more electricity to other MGs with hydrogen storage. This is because MG $3$ needs more electricity to electrolyze water to generate hydrogen for fuel cell vehicles, and hence purchases more electricity from MG $1$. Fig. \ref{fig6} shows the comparison of battery dynamics with and without hydrogen storage. All MGs charge less electricity into the battery with hydrogen storage. This is because MGs with hydrogen storage use some electricity to electrolyze water to generate hydrogen.
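The claim of Lemma 3 that the charging threshold keeps the battery level within $[0, B_{i,max}]$ can be illustrated with a toy simulation. The policy below is only a caricature of the real algorithm: it charges at most $C_{ie,max}$ when $B_i(t)<\theta_i$, never charges above the threshold, and never discharges more than $D_{ie,max}$ or more than the stored energy, matching the four cases in the proof of Lemma 3 (Appendix D); the random draws stand in for the actual price-driven decisions.

```python
import random

random.seed(0)
B_max, C_max, D_max = 300.0, 75.0, 75.0
theta = B_max - C_max  # Lemma 3 requires theta_i <= B_max - C_max

B = 0.1 * B_max        # initial level: 10% of capacity, as in the experiments
for t in range(10_000):
    # charge only strictly below the threshold (cases 1-2 of the proof)
    C = random.uniform(0.0, C_max) if B < theta else 0.0
    # never discharge when the level is below D_max (cases 3-4 of the proof)
    D = random.uniform(0.0, min(D_max, B)) if B >= D_max else 0.0
    B = B + C - D
    assert 0.0 <= B <= B_max, (t, B)
print("battery level stayed in [0, B_max] for all 10000 slots")
```

The assertion never fires: below the threshold the level can rise by at most $C_{ie,max}$ and so stays under $B_{i,max}$, above it no charging occurs, and discharging is blocked whenever it could drive the level negative, exactly mirroring the induction in Appendix D.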
\begin{figure} \centering \centerline{\includegraphics[height=42mm,width=56mm]{h.eps}} \caption{Comparisons of all MGs' total costs with and without hydrogen storage.} \label{fig3} \end{figure} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{t1h.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{t2h.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{t3h.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Costs of each MG with and without hydrogen storage.} \label{fig4} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{wh1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{wh2.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=3cm,width=4cm]{wh3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Energy trading of each MG with and without hydrogen storage.} \label{fig5} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b1h.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b2h.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b3h.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Battery dynamics of each MG with and without hydrogen storage.} \label{fig6} \end{figure*} Energy trading plays an important role in relieving
the imbalance of supply and demand for a single MG. Fig. \ref{fig7} shows that the costs of MGs with energy trading are lower than those without it. {Energy trading obviously reduces the cost of MG 1. The costs of MGs 2 and 3, however, are only slightly reduced. The reason is that the renewable energy of MG 1 exceeds the demand in most cases. MG 1 sells electricity to MGs 2 and 3 instead of selling electricity to the electricity company at a low price in most cases. Therefore, the cost of MG 1 is obviously reduced. MG 3 without energy trading cannot purchase electricity from MG 1 at a low price, but it can generate electricity by its CHP system at a cost lower than that of purchasing electricity from the electricity company, and MG 2 trades little energy with other MGs. Therefore, the costs of MGs 2 and 3 with trading are only slightly reduced in Fig. \ref{fig7}.} Then, the comparisons of the costs, battery, and hydrogen storage dynamics with and without energy trading for all three MGs across 24 time slots are presented in Figs. \ref{fig8}-\ref{fig10}. Fig. \ref{fig8} shows that MGs achieve lower costs with energy trading in most cases, where MGs acquire electricity from other MGs in energy trading instead of from the electricity utility company. Fig. \ref{fig2}(b) shows that MG $1$ has higher renewable energy output than the other MGs, and hence MG $1$ sells excess energy to other MGs in most cases except the last four hours. The reason is that MG $1$ has a drop in renewable energy output during the last four hours, while MG $2$ has adequate renewable energy output during that period. Fig. \ref{fig9} shows the comparison of battery dynamics with and without energy trading. MG $1$ charges less electricity into the battery with energy trading. This is because MG $1$ sells electricity to other MGs instead of storing electricity in the battery. The same holds for the hydrogen storage in Fig. \ref{fig10}.
{With or without trading, MG 3, lacking abundant renewable energy to electrolyze water, has to purchase hydrogen from the hydrogen-producing company to supply its vehicles. Therefore, MG 3 has no surplus energy to store, and its storage level dynamics are the same in Fig. \ref{fig9} and Fig. \ref{fig10}.} \begin{figure*} \centering \centerline{\includegraphics[height=42mm,width=56mm]{s.eps}} \caption{Comparisons of all MGs' total costs with and without energy trading.} \label{fig7} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{t1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{t2.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{t3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Costs of each MG with and without energy trading.} \label{fig8} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b2.eps}} \centerline{\scriptsize{(b) MG 2}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{b3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Battery dynamics of each MG with and without energy trading.} \label{fig9} \end{figure*} \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{y1.eps}} \centerline{\scriptsize{(a) MG 1}} \end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{y2.eps}} \centerline{\scriptsize{(b) MG 2}}
\end{minipage} \hspace{-5pt} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[height=30mm,width=40mm]{y3.eps}} \centerline{\scriptsize{(c) MG 3}} \end{minipage} \caption{Hydrogen storage dynamics of each MG with and without energy trading.} \label{fig10} \end{figure*} Table 1 shows the costs of the three MGs under the proposed method and under the methods without hydrogen storage and without energy trading. {Three cases with different initial storage levels are studied as follows. 1) The initial energy of storage is 10\% of its capacity (Figs. \ref{fig3}-\ref{fig10} are generated in this case). The total cost of the three MGs is reduced by up to 26.53\% from 28563 cents without hydrogen storage to 20984 cents, and by 13.16\% from 24163 cents without energy trading to 20984 cents. 2) The initial energy of storage is 50\% of its capacity. The total cost of the three MGs is reduced by up to 29.68\% from 26121 cents without hydrogen storage to 18367 cents, and by 15.92\% from 21844 cents without energy trading to 18367 cents. 3) The initial energy of storage equals its capacity. The total cost of the three MGs is reduced by up to 35.50\% from 22888 cents without hydrogen storage to 14763 cents, and by 19.55\% from 18350 cents without energy trading to 14763 cents. According to these results, the cost decreases as the initial stored energy increases, and the extent of the cost reduction grows as well. This is because more initial stored energy means a lower cost of purchasing energy and more energy available for trading or electrolyzing water.} The introduction of hydrogen storage and energy trading reduces the costs of all MGs. Therefore, MGs benefit from energy trading, hydrogen storage, and fuel cell vehicles. This verifies the effectiveness of the proposed algorithm.
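The percentage reductions quoted above follow directly from the system totals in Table 1; a short script makes the arithmetic explicit and also confirms that each system total is the sum of the three MG costs.

```python
# System totals (cents) from Table 1: (proposed, without trading, without hydrogen)
cases = {
    "10%":  (20984, 24163, 28563),
    "50%":  (18367, 21844, 26121),
    "100%": (14763, 18350, 22888),
}
for name, (cost, no_trade, no_h2) in cases.items():
    red_h2 = 100 * (no_h2 - cost) / no_h2      # reduction vs. no hydrogen storage
    red_tr = 100 * (no_trade - cost) / no_trade  # reduction vs. no energy trading
    print(f"{name}: vs no hydrogen {red_h2:.2f}%, vs no trading {red_tr:.2f}%")

# The per-MG rows of Table 1 add up to the System column, e.g. for the 10% case:
assert 3091 + 3887 + 14006 == 20984
assert 5419 + 4319 + 14425 == 24163
assert 7327 + 7005 + 14231 == 28563
```

Running this reproduces exactly the 26.53\%, 13.16\%, 29.68\%, 15.92\%, 35.50\%, and 19.55\% figures quoted in the text.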
\begin{table} \small \caption{Costs of three MGs under different methods.} \begin{center} \begin{tabular}{l|l|l|l|l|l} \hline Initial energy storage & Costs (cent) & MG $1$ & MG $2$ & MG $3$ & System \\ \hline \multirow{3}*{10\% of capacity}&Cost & 3091 & 3887 & 14006 & 20984 \\ \cline{2-6} {}&Cost (without trading) & 5419 & 4319 & 14425 & 24163 \\ \cline{2-6} {}&Cost (without hydrogen) & 7327 & 7005 & 14231 & 28563 \\ \hline \multirow{3}*{50\% of capacity}&Cost & 2060 & 3377 & 12931 & 18367 \\ \cline{2-6} {}&Cost (without trading) & 4570 & 3738 & 13536 & 21844 \\ \cline{2-6} {}&Cost (without hydrogen) & 6392 & 6491 & 13239 & 26121 \\ \hline \multirow{3}*{100\% of capacity}&Cost & 807 & 2198 & 11758 & 14763 \\ \cline{2-6} {}&Cost (without trading) & 3419 & 2567 & 12364 & 18350 \\ \cline{2-6} {}&Cost (without hydrogen) & 5509 & 5317 & 12061 & 22888\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} In this paper, the energy scheduling and energy trading problem under real-time pricing among multiple microgrids is studied, an urgent issue for today's cyber-physical-energy systems. A multi-energy management framework including fuel cell vehicles, energy storage, a combined heat and power system, and renewable energy is presented, where fuel cell vehicles and energy storage further improve the absorption of renewable energy. A joint algorithm based on Lyapunov optimization and a double-auction mechanism is designed to optimize the long-term energy cost of each microgrid. Finally, results based on real data show that the microgrids' costs can be decreased under the proposed algorithm. Comparative analysis demonstrates the necessity of including energy storage and energy trading. In this paper, fuel cell vehicles refer to buses that have a specific route. From the transportation perspective, fuel cell vehicles can be cars, buses, and so on.
In this case, the trip characteristics of vehicles need to be considered. Investigating some control schemes, e.g., Ref. \cite{AlaviFuel}, to optimize dispatch of fuel cell vehicles is a significant research direction. Another direction is how to design the scheduling method, e.g., Ref. \cite{ZhouIndirect}, to further realize the multi-energy coupled peak load shifting in realistic scenarios, such as industrial parks. \begin{appendix} \section{Proof of (\ref{rightmin}) } According to (\ref{A1}) - (\ref{A3}), (\ref{Yc}), and (\ref{vir}), the Lyapunov drift term $\Delta_{i}(t)$ is denoted by \begin{equation} \begin{aligned} &\Delta_{i}(t)=\mathbb{E}\{Q_{i}(t+1)-Q_{i}(t)|B_{i}(t),Y_{i}(t),W_{i}(t),Y_{il}(t)\} \\&=\frac{1}{2}\mathbb{E}\{2A_{i}(t)(C_{ie}(t)-D_{ie}(t))+2F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+2Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[2I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+(C_{ie}(t)-D_{ie}(t))^2\\&+(C_{iy}(t)-D_{iy}(t))^2+(C_{ih}(t)-D_{ih}(t))^2\\&+\sum^{L_i}_{l=1}[(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))^2]\} \\& \leq \frac{1}{2}\mathbb{E}\{2A_{i}(t)(C_{ie}(t)-D_{ie}(t))+2F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+2Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[2I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+\max(C_{ie,max}^2,D_{ie,max}^2)\\&+\max(C_{iy,max}^2,D_{iy,max}^2)+\max(C_{ih,max}^2,D_{ih,max}^2)\\&+\sum^{L_i}_{l=1}[\max((D_{iyl,max}+d_{il,max})^2,(Y_{ifl,max}+h_{il,max})^2)]\} \\&=\mathbb{E}\{A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]\}+G_{i} \end{aligned} \end{equation} where $G_{i}=\frac{1}{2}\{\max(C_{ie,max}^2,D_{ie,max}^2)+\max(C_{iy,max}^2,D_{iy,max}^2)+\max(C_{ih,max}^2,D_{ih,max}^2)+\sum^{L_i}_{l=1}[\max((D_{iyl,max}+d_{il,max})^2,(Y_{ifl,max}+h_{il,max})^2)]\}$ \section{Proof of Lemma 1 } The following four cases are considered to determine the price of energy trading: \begin{enumerate} \item 
Case 1: $A_{i}(t)\geq0$. In this case, MG $i$ has too much energy in its battery. According to (\ref{lg}), $C_{ie}(t)-D_{ie}(t)=E_{i}(t)+N_{i}(t)+X_{i}(t)-S_{i}(t) +\eta_fhY_{if}(t)-\frac{hC_{iy}(t)}{\eta_e}-c_1C_{iy}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)-L_{ie}(t)$. According to (\ref{P2}), i.e. \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))] +V_{i} p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t) X_{i}(t)\\&-\alpha_i(t) S_{i}(t)-E_{io}(t)p_{eo}(t))\\ =&\min_{\boldsymbol{M}_{i}(t)} -(A_{i}(t)+V_{i}\alpha_i(t))S_{i}(t)+(A_{i}(t)+V_{i}\beta_i(t))X_{i}(t)\\&+A_{i}(t)(E_{i}(t)+N_{i}(t)+\eta_fhY_{if}(t)-\frac{hC_{iy}(t)}{\eta_e}\\&-c_1C_{iy}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)-L_{ie}(t))\\&+F_{i}(t)(C_{iy}(t)-D_{iy}(t))+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))\\&+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))] \\&+V_{i}(p_{y}(t)d_{i}(t)+E_{i}(t)p_{e}(t)+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)\\&+H_{i}^b(t))p_{g}(t)-E_{io}(t)p_{eo}(t)) \end{aligned} \label{cde} \end{equation} and $-\alpha_i(t)V_{i}-A_{i}(t)<0$, MG $i$ tends to increase $S_i(t)$, and $C_{ie}(t)=0$, $D_{ie}(t)=D_{ie,max}$. \item Case 2: $A_{i}(t)<0$. In this case, six situations are considered. \begin{itemize} \item If $0<\alpha_i(t)<\frac{-A_{i}(t)}{V_{i}}$, then$-\alpha_i(t)V_{i}-A_{i}(t)>0$. Therefore, MG $i$ tends to decrease $S_i(t)$ and increase $C_{ie}(t)$. \item If $\alpha_i(t)>\frac{-A_{i}(t)}{V_{i}}$, then $-A_{i}(t)-\alpha_i(t)V_{i}<0$. Therefore, MG $i$ tends to increase $S_i(t)$ and decrease $C_{ie}(t)$. \item If $\alpha_i(t)=\frac{-A_{i}(t)}{V_{i}}$, then $-A_{i}(t)-\alpha_i(t)V_{i}=0$. This is the same for MG $i$ to increase $S_i(t)$ or increase $C_{ie}(t)$. \item If $0<\beta_i(t)<\frac{-A_{i}(t)}{V_{i}}$, then $\beta_i(t)V_{i}+A_{i}(t)<0$. 
Therefore, MG $i$ tends to increase $X_i(t)$ and decrease $D_{ie}(t)$. \item If $\beta_i(t)>\frac{-A_{i}(t)}{V_{i}}$, then $A_{i}(t)+\beta_i(t)V_{i}>0$. Therefore, MG $i$ tends to decrease $X_i(t)$ and increase $D_{ie}(t)$. \item If $\beta_i(t)=\frac{-A_{i}(t)}{V_{i}}$, then $-A_{i}(t)=\beta_i(t)V_{i}$. It is same for MG $i$ to increase $X_i(t)$ or increase $D_{ie}(t)$. \end{itemize} Case 3: $F_{i}(t)\geq0$. In this case, MG $i$ has too much hydrogen in its hydrogen storage. According to (\ref{lg}), $(\frac{h}{\eta_e}+c_1)C_{iy}(t)=E_{i}(t)+N_{i}(t)+X_{i}(t)-S_{i}(t)-C_{ie}(t)+D_{ie}(t)+\eta_fhY_{if}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)-L_{ie}(t)$. According to (\ref{P2}), i.e. \begin{equation} \begin{aligned} &\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))+F_{i}(t)(C_{iy}(t)-D_{iy}(t))\\&+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)\\&-Y_{ifl}(t)-h_{il}(t))]+V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)+\beta_i(t) X_{i}(t)\\&-\alpha_i(t) S_{i}(t)-E_{io}(t)p_{eo}(t))\\ =&\min_{\boldsymbol{M}_{i}(t)} A_{i}(t)(C_{ie}(t)-D_{ie}(t))-(\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}+V_i\alpha_i(t))S_{i}(t)\\&+(\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}+V_i\beta_i(t))X_{i}(t)+\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}(E_{i}(t)+N_{i}(t)\\&-C_{ie}(t)+D_{ie}(t)+\eta_fhY_{if}(t)+\eta_{pg}P_{i}^{CHP}(t)-E_{io}(t)\\&-L_{ie}(t))-F_{i}(t)D_{iy}(t)+Z_{i}(t)(C_{ih}(t)-D_{ih}(t))\\&+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}(t)+d_{il}(t)-Y_{ifl}(t)-h_{il}(t))]\\&+V_{i}(p_{y}(t)d_{i}(t) +E_{i}(t)p_{e}(t)-E_{io}(t)p_{eo}(t)\\&+ (P_{i}^{CHP}(t)+H_{i}^{CHP}(t)+H_{i}^b(t))p_{g}(t)) \end{aligned} \label{cdh} \end{equation} and $-\alpha_i(t)V_{i}-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}<0$, MG $i$ tends to increase $S_i(t)$, and $C_{iy}(t)=0$, $D_{iy}(t)=D_{iy,max}$. \item Case 4: $F_{i}(t)<0$. In this case, three situations are considered. 
\begin{itemize} \item If $0<\alpha_i(t)<\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}$, then $-\alpha_i(t)V_{i}-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}>0$. Therefore, MG $i$ tends to decrease $S_i(t)$ and increase $C_{iy}(t)$. \item If $\alpha_i(t)>\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}$, then $-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}-\alpha_i(t)V_{i}<0$. Therefore, MG $i$ tends to increase $S_i(t)$ and decrease $C_{iy}(t)$. \item If $\alpha_i(t)=\frac{-F_{i}(t)}{(\frac{h}{\eta_{e}}+c_1)V_{i}}$, then $-\frac{F_{i}(t)}{\frac{h}{\eta_e}+c_1}-\alpha_i(t)V_{i}=0$. It makes no difference for MG $i$ whether to increase $S_i(t)$ or $C_{iy}(t)$. \end{itemize} \end{enumerate} \section{Proof of Lemma 2} All MGs are assumed to be rational: they will choose a strategy that minimizes their costs. The purchase price and selling price submitted by MG $i$ are $\beta_{i}(t)$ and $\alpha_{i}(t)$, and the purchase price and selling price determined by the double-auction mechanism in the actual energy trading are $\hat{\beta}(t)$ and $\hat{\alpha}(t)$. Then, the benefit of MG $i$ is analyzed when it cheats. \begin{enumerate} \item Case 1: $\alpha_{i}(t) > \hat{\alpha}(t)$. In this case, MG $i$ is not allowed to sell energy in the double-auction mechanism. \begin{itemize} \item If MG $i$ increases $\alpha_{i}(t)$, the situation does not change. \item If MG $i$ reduces $\alpha_{i}(t)$ and $\alpha_{i}(t) > \hat{\alpha}(t)$, the situation does not change. \item If MG $i$ reduces $\alpha_{i}(t)$ and $\alpha_{i}(t) \leq \hat{\alpha}(t)$, the MG will be forced to sell energy at a price lower than expected, and its benefit will decrease owing to cheating. \end{itemize} \item Case 2: $\alpha_{i}(t) \leq \hat{\alpha}(t)$. In this case, MG $i$ sells energy in the double-auction mechanism. \begin{itemize} \item If MG $i$ reduces $\alpha_{i}(t)$, the situation does not change. \item If MG $i$ increases $\alpha_{i}(t)$ and $\alpha_{i}(t) \leq \hat{\alpha}(t)$, the situation does not change.
\item If MG $i$ increases $\alpha_{i}(t)$ and $\alpha_{i}(t) > \hat{\alpha}(t)$, the MG is not allowed to sell energy in the double-auction mechanism. However, since its energy is excessive, MG $i$ may have to sell the excess to the electricity utility company at a lower price, and its benefit will decrease owing to cheating. \end{itemize} \end{enumerate} The analysis of $\beta_{i}(t)$ is similar. Therefore, the double-auction mechanism can prevent MGs from cheating. \section{Proof of Lemma 3} Induction is used to prove the bounds of $B_{i}(t)$, $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$. First, the bounds hold at time slot 1. Assuming they hold at time slot $t$, the following four cases show that they also hold at time slot $t+1$. \begin{enumerate} \item Case 1: $B_{i}(t)<\theta_{i}$. In this case, $C_{ie}(t) \leq{C_{ie,max}}$, and $\theta_{i}=V_ip_{e,max}+D_{ie,max} \leq B_{i,max}-C_{ie,max}$. Therefore, $B_{i}(t+1) \leq B_{i}(t)+C_{ie,max} < \theta_{i} + C_{ie,max} \leq B_{i,max}$. \item Case 2: $B_{i}(t)\geq \theta_{i}$. In this case, $C_{ie}(t)=0$. The battery will not be charged at time slot $t$. Therefore, $B_{i}(t+1) \leq B_{i}(t) \leq B_{i,max}$. \item Case 3: $B_{i}(t)<D_{ie,max}$. In this case, $A_{i}(t)<D_{ie,max}-\theta_{i}=-V_ip_{e,max}$. Then, $A_{i}(t)+V_i\beta_{i}(t)<A_{i}(t)+V_ip_{e,max}<0$. According to (\ref{lg}) and (\ref{cde}), $D_{ie}(t)=0$. Therefore, $B_{i}(t+1) \geq B_{i}(t) \geq 0$. \item Case 4: $B_{i}(t)\geq D_{ie,max}$. In this case, $B_{i}(t+1)= B_{i}(t)+C_{ie}(t)-D_{ie}(t)\geq B_{i}(t)-D_{ie}(t)\geq 0$. \end{enumerate} The bounds of $Y_{i}(t)$, $W_{i}(t)$, and $Y_{il}(t)$ are analyzed similarly. \section{Proof of Theorem 1} The optimal solution of problem (\ref{P2}) is obtained to minimize the drift-plus-penalty.
Comparing this optimal solution with the result of the stationary randomized policy ($\Pi$), the drift-plus-penalty term satisfies \begin{equation} \begin{aligned} &\Delta_{i}(t)+V_{i}\mathbb{E}\{C_{i}(t)\} \\ & \leq G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}^{*}(t)-D_{ie}^{*}(t))+F_{i}(t)(C_{iy}^{*}(t)-D_{iy}^{*}(t))\\&+Z_{i}(t)(C_{ih}^{*}(t)-D_{ih}^{*}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}^{*}(t)+d_{il}^{*}(t)\\&-Y_{ifl}^{*}(t)-h_{il}^{*}(t))] +V_{i} p_{y}(t)d_{i}^{*}(t) +E_{i}^{*}(t)p_{e}(t)\\&+ (P_{i}^{CHP,*}(t)+H_{i}^{CHP,*}(t)+H_{i}^{b,*}(t))p_{g}(t)+\beta_{i}(t) X_{i}^{*}(t)\\&-\alpha_{i}(t) S_{i}^{*}(t)-E_{io}^{*}(t)p_{eo}(t)\} \\ & \leq G_{i}+\mathbb{E}\{A_{i}(t)(C_{ie}^{\Pi}(t)-D_{ie}^{\Pi}(t))+F_{i}(t)(C_{iy}^{\Pi}(t)-D_{iy}^{\Pi}(t))\\&+Z_{i}(t)(C_{ih}^{\Pi}(t)-D_{ih}^{\Pi}(t))+\sum^{L_i}_{l=1}[I_{il}(t)(D_{iyl}^{\Pi}(t)+d_{il}^{\Pi}(t)\\&-Y_{ifl}^{\Pi}(t)-h_{il}^{\Pi}(t))] +V_{i} p_{y}(t)d_{i}^{\Pi}(t) +E_{i}^{\Pi}(t)p_{e}(t)\\&+(P_{i}^{CHP,\Pi}(t)+H_{i}^{CHP,\Pi}(t)+H_{i}^{b,\Pi}(t))p_{g}(t)+\beta_{i}(t) X_{i}^{\Pi}(t)\\&-\alpha_{i}(t) S_{i}^{\Pi}(t)-E_{io}^{\Pi}(t)p_{eo}(t)\} \end{aligned} \end{equation} According to (\ref{cd2}) and the stationary randomized policy that achieves the optimal cost $C_{ir}^{opt}$, the drift-plus-penalty term satisfies \begin{equation} \Delta_{i}(t)+V_{i}\mathbb{E}\{C_{i}(t)\} \leq G_{i} + V_{i}C_{ir}^{opt} \leq G_{i} + V_{i}C_{i}^{opt} \end{equation} Summing over $t \in \{1,2,...,T\}$, the sum term satisfies \begin{equation} \mathbb{E}\{Q_{i}(T)-Q_{i}(1)\}+\sum_{t=1}^{T}V_{i}\mathbb{E}\{C_{i}(t)\} \leq TG_{i} + TV_{i}C_{i}^{opt} \end{equation} Dividing both sides by $TV_{i}$ and taking $T \rightarrow \infty$, the time-average cost term satisfies \begin{equation} \lim_{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\{ C_{i}(t) \} \leq C_{i}^{opt} + \frac{G_{i}}{V_{i}} \end{equation} \end{appendix} \balance \section*{References} \biboptions{square, numbers, sort&compress} \bibliographystyle{elsarticle-num}
\section{Introduction} Toward the end of the twentieth century, quantum groups began to draw great attention in mathematics and mathematical physics. Quantum groups were first defined in \cite{9}. Shortly thereafter, quantum groups were generalized to quantum supergroups, which led to an innovative mathematical field; \cite{12} applied this subject to Lie groups and Lie algebras. The noncommutative differential geometry of quantum groups was introduced by Woronowicz in \cite{19}. In this approach the quantum group is taken as the basic noncommutative space, and the differential calculus on the group is deduced from the properties of the group. The other approach, initiated by Wess-Zumino \cite{18}, succeeded in extending Manin's emphasis \cite{15} on the quantum spaces as the primary objects; they defined differential forms in terms of noncommuting (quantum) coordinates, and the differential and algebraic properties of quantum groups acting on these spaces are obtained from the properties of the spaces. The natural extension of their scheme to superspace \cite{16} was introduced in \cite{3} and \cite{17}, for example. Recently, there have been many attempts to generalize Z$_2$-graded constructions to the Z$_3$-graded case [\cite{1},\cite{4},\cite{7},\cite{10},\cite{11},\cite{13},\cite{14}]. Chung \cite{7} studied the Z$_3$-graded quantum space that generalizes the Z$_2$-graded space called a superspace, using the methods of \cite{18}. The first author of this paper investigated the noncommutative geometry of the Z$_3$-graded quantum superplane in \cite{4}. This work follows the same pattern, with one difference: here, the differential geometry of the $h$-deformed Z$_3$-graded quantum superplane is investigated. In $q$-differential calculus, the exterior differential operator {\sf d} has two properties: nilpotency (that is, ${\sf d}^2=0$) and the Leibniz rule.
In this work, it is assumed that ${\sf d}^2\neq{0}$ and ${\sf d}^3=0$ while constructing a calculus on the Z$_3$-graded $h$-superplane; hence second order differentials are also considered in addition to the relations obtained in $q$-differential calculus. Thus, while $q$-differential calculus gives the $q$-commutation relations between the coordinate functions and their differentials and the relations among the differentials, additional relations appear on the Z$_3$-graded $h$-superplane, since second order differentials must be considered. In this work, we shall build up the noncommutative differential calculus on the Z$_3$-graded $h$-superplane. This calculus involves functions on the superplane, first and second differentials, and differential forms. The purpose of this paper is to present a differential calculus on the Z$_3$-graded $h$-superplane. The paper is organized as follows. In Section 2 we obtain the Z$_3$-graded $h$-superplane via a contraction of the Z$_3$-graded $q$-superplane, using the approach of \cite{2}. In Section 3 we explicitly set up a differential calculus on the Z$_3$-graded $h$-superplane. Some of these relations are obtained in \cite{6}. In Section 4 we find a {\bf new} Z$_3$-graded quantum supergroup, denoted by GL$_{h,j}(1|1)$. \section{The Algebra of Functions on the Z$_3$-graded $h$-Superplane} It is well known that \cite{16} defined the Z$_2$-graded quantum superplane as an associative algebra whose even coordinate $x$ and odd (Grassmann) coordinate $\theta$ satisfy $$x\theta = q \theta x, \qquad \theta^2 = 0$$ where $q$ is a nonzero complex deformation parameter. One possible way to generalize the quantum superplane is to change the power of nilpotency of its odd generator. This fact gives the motivation for the following definition. \begin{defn} Let $K\{x',\theta'\}$ be a free algebra and let $I_q$ be the two-sided ideal generated by $x'\theta'-q\theta'x'$ and $\theta'^3$.
The Z$_3$-graded quantum superplane $K_q[x',\theta']$ is defined as the quotient algebra $K\{x',\theta'\}/I_q$. \end{defn} Here, with respect to the Z$_3$-grading, the coordinate $x'$ is of grade 0 and the coordinate $\theta'$ is of grade 1. Using the approach given in \cite{2}, the $h$-deformation of the Z$_3$-graded superplane will be described, and afterwards a differential calculus on the $h$-deformed structure will be constructed. Recalling Definition 2.1, the commutation relations between the coordinate functions of the Z$_3$-graded superplane can be given as follows: \begin{equation} \label{eq1} x'\theta' = q\theta'x' \qquad \theta'^3=0. \end{equation} We consider a non-singular deformation matrix $g$ defined as in \cite{2}, \begin{equation} \label{eq2} g = \left(\begin{matrix} 1 & 0 \\ \frac{h}{q-1} & 1 \end{matrix}\right) \end{equation} where $h$ is a new quantity of grade two. If we set \begin{equation} \label{eq3} \left(\begin{matrix} x' \\ \theta' \end{matrix}\right) = g\left(\begin{matrix} x \\ \theta \end{matrix}\right), \end{equation} then the new coordinates $x$ and $\theta$ are \begin{equation} \label{eq4} x = x' \quad \mbox{and} \quad \theta = \theta' - \frac{h}{q-1} \, x'. \end{equation} If the relations (\ref{eq1}) are used in order to obtain the commutation relation between $x$ and $\theta$, one easily finds \begin{equation}\label{eq5} x\theta = q\theta x+hx^2. \end{equation} While obtaining relation (\ref{eq5}) it is assumed that the parameter $h$ commutes with the coordinate $x$. Now assume that \begin{equation} \label{eq6} \theta h = qjh\theta \quad \mbox{and} \quad h^3=0 \end{equation} where $j=e^\frac{2\pi i}{3}$ $(i^2 = - 1)$, so that $$j^3 = 1 \quad \mbox{and} \quad j^2 + j +1 = 0, \quad \mbox{or} \quad (j + 1)^2 = j.$$ If the coordinate $\theta'$ from (\ref{eq4}) is substituted into the second equation in (\ref{eq1}), it can be found that \begin{equation} \label{eq7} \theta^3=0.
\end{equation} Consequently, in the limit $q\to1$, the relations that define the Z$_3$-graded $h$-superplane are obtained, as defined in \cite{5}: \begin{equation} \label{eq8} x\theta=\theta x+hx^2, \quad \theta^3=0, \quad h^3=0. \end{equation} Now we can define the Z$_3$-graded $h$-superplane. \begin{defn} Let $K\{x,\theta,h\}$ be a free algebra and let $I_h$ be the two-sided ideal generated by $x\theta-\theta x-hx^2$, $\theta^3$ and $h^3$. The Z$_3$-graded $h$-superplane $K_h[x,\theta,h]$ is defined as the quotient algebra $K\{x,\theta,h\}/I_h$. \end{defn} \section{A Differential Calculus on the Z$_3$-graded $h$-Superplane} In this section, we construct a differential calculus on the Z$_3$-graded $h$-superplane. This calculus involves functions on the $h$-superplane, first and second differentials, and differential forms. We begin with the definition of a Z$_3$-graded differential calculus. Let $\hat{\alpha}$ denote the grade of $\alpha$. \begin{defn} Let $A$ be an arbitrary associative (in general, noncommutative) algebra and let $\Gamma^{\wedge n}$ be the space of $n$-forms $(n=0,1,2)$, an $A$-bimodule. A Z$_3$-graded differential calculus on the algebra $A$ is a Z$_3$-graded algebra $\Gamma^\wedge=\bigoplus_{n=0}^2 \Gamma^{\wedge n}$ with a ${\mathbb C}$-linear exterior differential operator ${\sf d}$ which defines the map ${\sf d}:\Gamma^\wedge \longrightarrow \Gamma^\wedge$ of grade one. A generalization of the usual differential calculus leads to the rules: \begin{eqnarray} \label{eq9} {\sf d}^3 & =& 0, \qquad ({\sf d}^2\ne0) \nonumber\\ {\sf d}(\alpha\wedge\beta) &=& ({\sf d}\alpha)\wedge\beta + j^{\hat{\alpha}} \, \alpha\wedge({\sf d}\beta), \nonumber\\ {\sf d}^2(\alpha\wedge\beta) &=& ({\sf d}^2\alpha)\wedge\beta + (j^{\hat{\alpha}}+j^{\hat{{\sf d}\alpha}}) \, ({\sf d}\alpha)\wedge({\sf d}\beta) + j^{2\hat{\alpha}} \, \alpha\wedge({\sf d}^2\beta) \end{eqnarray} for $\alpha\in\Gamma^{\wedge n}$ $(n=0,1,2)$ and $\beta\in\Gamma^{\wedge}$.
\end{defn} \subsection{Some conventions and assumptions} The differential calculus on the Z$_3$-graded quantum superplane is a noncommutative analogue of the calculus on a smooth manifold, with an exterior differential {\sf d} satisfying ${\sf d}^3=0$. So, in order to construct the differential calculus on the Z$_3$-graded quantum superplane, a linear operator {\sf d} acting on the functions of the coordinates of the Z$_3$-graded quantum superplane must be defined. For this, it is sufficient to define the action of {\sf d} on the coordinates and on their products. The linear operator {\sf d} applied to $x$ produces a 1-form whose Z$_3$-grade is one, by definition. Similarly, applying {\sf d} to $\theta$ produces a 1-form whose Z$_3$-grade is two. We denote the resulting quantities by ${\sf d}x$ and ${\sf d}\theta$, respectively. When the linear operator {\sf d} is applied to ${\sf d}x$ (i.e., twice by iteration to $x$), it produces a new entity which we call a 1-form of grade two, denoted by ${\sf d}^2x$; applied to ${\sf d}\theta$, it produces a 1-form of grade zero, modulo 3, denoted by ${\sf d}^2\theta$. Finally, we require that ${\sf d}^3=0$. A simple calculation from (\ref{eq4}) gives \begin{equation} \label{eq11} x' = x \quad \mbox{and} \quad \theta' = \theta + \frac{h}{q-1} \, x. \end{equation} Acting with the exterior differential operator {\sf d} on both sides of the relations in (\ref{eq11}) and using the Leibniz rule defined in (\ref{eq9}) gives \begin{equation} \label{eq12} {\sf d}x' ={\sf d}x \quad \mbox{and} \quad {\sf d}\theta' = {\sf d}\theta + j \, \frac{h}{q-1} \, {\sf d}x. \end{equation} Acting with {\sf d} on (\ref{eq12}) once more, we get \begin{equation} \label{eq13} {\sf d}^2 x' = {\sf d}^2x \quad \mbox{and} \quad {\sf d}^2\theta' = {\sf d}^2\theta + j^2 \, \frac{h}{q-1} \, {\sf d}^2x.
\end{equation} In order to obtain the commutation relations between $h$ and the differentials of the coordinate functions, we use the assumption \begin{equation} \label{eq14} x \,h=h \,x \quad \mbox{and} \quad \theta \,h = qjh \,\theta \end{equation} together with the further assumption \begin{equation} \label{eq15} {\sf d} \, h = jh \, {\sf d}. \end{equation} If we apply the exterior differential operator {\sf d} to the relations in (\ref{eq14}) and use (\ref{eq15}), we find \begin{equation} \label{eq16} {\sf d}x \, h = jh \, {\sf d}x \quad \mbox{and} \quad {\sf d}\theta \, h = qj^2h \, {\sf d}\theta. \end{equation} Applying {\sf d} to the relations in (\ref{eq16}) gives \begin{equation} \label{eq17} {\sf d}^2x \, h = j^2h \, {\sf d}^2x \quad \mbox{and} \quad {\sf d}^2\theta \, h = qh \, {\sf d}^2\theta. \end{equation} Equations (\ref{eq14})--(\ref{eq17}) will be used in the following sections, wherever commutation relations between $x, \theta, {\sf d}x, {\sf d}\theta,{\sf d}^2x,{\sf d}^2\theta$ and ${\sf d}$ are needed. \subsection{Relations between coordinate functions and their first order differentials} In this subsection, the possible relations between the coordinate functions of the Z$_3$-graded $h$-superplane and their differentials will be obtained with the help of the ansatz (\ref{eq18}) below. We assume that the commutation relations between the coordinates of the $q$-superplane and their differentials have the following form: \begin{eqnarray} \label{eq18} x' \,{\sf d}x' &=& A \, {\sf d}x' \,x', \nonumber\\ x' \,{\sf d}\theta' &=& F_{11} \, {\sf d}\theta' \,x' + F_{12} \, {\sf d}x' \,\theta', \nonumber\\ \theta' \,{\sf d}x' &=& F_{21} \, {\sf d}x' \,\theta' + F_{22} \, {\sf d}\theta' \,x', \nonumber\\ \theta' \,{\sf d}\theta' &=& B \, {\sf d}\theta' \,\theta'. \end{eqnarray} The coefficients $A$, $B$ and $F_{ik}$ are related to the complex deformation parameters $q$ and $j$.
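Before determining these coefficients, note that the relations (\ref{eq14}), (\ref{eq16}) and (\ref{eq17}) obtained above can be summarized in a single grading rule (a reformulation of those relations, not an additional assumption): an element $\xi$ of Z$_3$-grade $\hat{\xi}$ satisfies
\[
\xi\,h \;=\; q^{\varepsilon}\, j^{\hat{\xi}}\, h\,\xi ,
\]
with $\varepsilon=0$ for $\xi\in\{x,{\sf d}x,{\sf d}^2x\}$ and $\varepsilon=1$ for $\xi\in\{\theta,{\sf d}\theta,{\sf d}^2\theta\}$. Each application of ${\sf d}$ raises the grade by one and hence multiplies the $j$-factor by $j$, consistently with (\ref{eq15}).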
In this work, we shall determine these coefficients by finding new relations on the Z$_3$-graded $h$-superplane. \begin{thm} The $(q,j,h)$-deformed relations between the coordinate functions of the Z$_3$-graded $h$-superplane and their differentials have the form \begin{eqnarray} \label{eq19} x \,{\sf d}x &=& j^2 \, {\sf d}x \,x, \nonumber\\ x \,{\sf d}\theta &=& q \, {\sf d}\theta \, x + (j^2-1) \, {\sf d}x \, \theta + hj \, {\sf d}x \,x, \nonumber\\ \theta \,{\sf d}x &=& jq^{-1} \, {\sf d}x \,\theta - q^{-1}hj^2 \, {\sf d}x \,x, \nonumber\\ \theta \,{\sf d}\theta &=& j \, {\sf d}\theta \,\theta. \end{eqnarray} These relations will be rewritten in the limit $q\to1$ later. \end{thm} \begin{proof} To complete the proof, the relations (\ref{eq11}) and (\ref{eq12}) are substituted into (\ref{eq18}) step by step. After some tedious calculations, the relations (\ref{eq18}) yield \begin{eqnarray}\label{eq20} x \,{\sf d}x &=& A \, {\sf d}x \,x, \nonumber\\ x \,{\sf d}\theta &=& F_{11} \, {\sf d}\theta \,x + F_{12} \, {\sf d}x \,\theta + \frac{h}{q-1} \, (F_{11}j+F_{12}j-Aj) \, {\sf d}x \,x, \nonumber\\ \theta \,{\sf d}x &=& F_{21} \, {\sf d}x \,\theta + F_{22} \, {\sf d}\theta \,x + \frac{h}{q-1} \, (F_{21}j+F_{22}j-A) \, {\sf d}x \,x, \\ \theta \,{\sf d}\theta &=& B \, {\sf d}\theta \,\theta + \frac{h}{q-1} \, K_1 \, {\sf d}\theta \,x + \frac{h}{q-1} \, K_2 \, {\sf d}x \,\theta + \left(\frac{h}{q-1}\right)^2 \, K_3 \, {\sf d}x \,x \nonumber \end{eqnarray} where \begin{eqnarray} \label{eq21} K_{1} &=& Bj^2q-F_{22}j^2q-F_{11}, \nonumber\\ K_{2} &= & Bj-F_{21}j^2q-F_{12}, \nonumber\\ K_{3} &=& Bj^2-F_{21}q-F_{22}q+Aj^2q-F_{11}j-F_{12}j. \end{eqnarray} So our problem reduces to finding the coefficients in relations (\ref{eq20}) and (\ref{eq21}). In order to do that, we act with {\sf d} on (\ref{eq5}) and (\ref{eq7}).
Applying {\sf d} to (\ref{eq5}) leads to \begin{eqnarray} \label{eq22} x \, {\sf d}\theta &=& (q+qjF_{22}){\sf d}\theta \, x+(qjF_{21}-1){\sf d}x \, \theta+\left[\frac{qjh}{q-1}(F_{21}j+F_{22}j-A)\right.\nonumber\\ && +hj(A+1)\Big]{\sf d}x \, x. \end{eqnarray} Comparing equation (\ref{eq22}) with the second equation in (\ref{eq20}) yields the equations \begin{equation} \label{eq23} F_{11}=q(1+jF_{22}) \quad \mbox{and} \quad F_{12}=qjF_{21}-1. \end{equation} If the exterior differential operator {\sf d} is acted on (\ref{eq7}), after some tedious calculations one finds \[1+jB+j^2B^2=0.\] Hence $B=1$ or $B=j$. Since taking $B=1$ does not lead to a solution, we take $B=j$. The other coefficients can then be found by using $B=j$. Therefore, the coefficients in (\ref{eq20}) are determined in terms of $q$ and $j$, and the relations given in (\ref{eq19}) are obtained. \end{proof} \subsection{Relations between coordinate functions and their second order differentials} In this subsection, the possible relations between the coordinate functions of the Z$_3$-graded $h$-superplane and their second order differentials will be obtained. \begin{lem} The relation between ${\sf d}x$ and ${\sf d}\theta$ is \begin{equation} \label{eq24} {\sf d}x\wedge{\sf d}\theta = F \, {\sf d}\theta\wedge{\sf d}x+\frac{h}{q-1}(Fj-j^2)({\sf d}x\wedge{\sf d}x) \end{equation} where $F$ depends on $q$ and $j$. \end{lem} \begin{proof} In the Z$_3$-graded $q$-superplane this relation has the form \begin{equation*} {\sf d}x'\wedge{\sf d}\theta' = F \, {\sf d}\theta'\wedge{\sf d}x'. \end{equation*} Using (\ref{eq12}), the left side becomes \begin{equation*} {\sf d}x'\wedge{\sf d}\theta'={\sf d}x\wedge\left({\sf d}\theta+j\frac{h}{q-1}{\sf d}x\right)={\sf d}x\wedge{\sf d}\theta+j^2\frac{h}{q-1}({\sf d}x\wedge{\sf d}x).
\end{equation*} and the right side becomes \begin{equation*} F \, {\sf d}\theta'\wedge{\sf d}x' = F \, \left({\sf d}\theta + \frac{jh}{q-1} \, {\sf d}x\right)\wedge{\sf d}x = F \, {\sf d}\theta\wedge{\sf d}x + Fj \frac{h}{q-1} \, ({\sf d}x\wedge{\sf d}x). \end{equation*} Equating the two sides gives the relation \begin{equation*} {\sf d}x\wedge{\sf d}\theta = F \, {\sf d}\theta\wedge{\sf d}x + \frac{h}{q-1} \, (Fj-j^2) \, ({\sf d}x\wedge{\sf d}x). \end{equation*} Here $F$ will be determined in Theorem 3.4. \end{proof} \begin{thm} The $(q,j,h)$-deformed relations between the coordinate functions of the Z$_3$-graded $h$-superplane and their second order differentials have the form \begin{eqnarray} \label{eq25} x \,{\sf d}^2x &=& j^2 \, {\sf d}^2x \,x,\nonumber\\ x \,{\sf d}^2\theta &=& q \, {\sf d}^2\theta \,x + (j^2-1) \, {\sf d}^2x \, \theta + hj^2 \, {\sf d}^2x \,x,\nonumber\\ \theta \,{\sf d}^2x &=& q^{-1} \, {\sf d}^2x \,\theta - q^{-1}hj^2 \, {\sf d}^2x \,x, \nonumber\\ \theta \,{\sf d}^2\theta &=& {\sf d}^2\theta \,\theta. \end{eqnarray} and the differentials satisfy \begin{equation} \label{eq26} {\sf d}x\wedge{\sf d}\theta = qj \, {\sf d}\theta\wedge{\sf d}x + hj^2 \, ({\sf d}x\wedge{\sf d}x). \end{equation} \end{thm} \begin{proof} Applying the exterior differential operator {\sf d} to (\ref{eq19}) gives the desired results. For the first equation in (\ref{eq19}), the left side is $${\sf d}\wedge(x{\sf d}x) = {\sf d}x\wedge{\sf d}x + x \,{\sf d}^2x,$$ and the right side is $$j^2 \, {\sf d}\wedge({\sf d}x \,x) = j^2 \, {\sf d}^2x \,x + j^3 \, ({\sf d}x\wedge{\sf d}x).$$ From the equality of the two sides, $$x \,{\sf d}^2x = j^2 \, {\sf d}^2x \,x$$ is obtained. A similar computation for the second equation in (\ref{eq19}) yields \begin{eqnarray*} x \,{\sf d}^2\theta &=& q \, {\sf d}^2\theta \,x + (j^2-1) \, {\sf d}^2x \,\theta+j^2h \, {\sf d}^2x \,x + (-Fj+qj^2) \, {\sf d}\theta\wedge{\sf d}x\\ &&+\left[\frac{h}{q-1}(1-Fj^2)+h\right]({\sf d}x\wedge{\sf d}x).
\end{eqnarray*} The presence of first order differentials in a relation between coordinate functions and second order differentials violates homogeneity. In order to have a homogeneous relation, the coefficients of ${\sf d}\theta\wedge{\sf d}x$ and ${\sf d}x\wedge{\sf d}x$ must vanish. Taking $F=qj$ makes both coefficients zero. Hence the desired equation becomes $$x \,{\sf d}^2\theta = q \, {\sf d}^2\theta \,x + (j^2-1) \, {\sf d}^2x \,\theta + hj^2 \, {\sf d}^2x \,x.$$ Moreover, the relation (\ref{eq24}) given in Lemma 3.3 reduces to (\ref{eq26}) upon taking $F=qj$. The third and fourth equations in (\ref{eq25}) are found by applying the exterior differential operator ${\sf d}$ to the third and fourth equations in (\ref{eq19}). \end{proof} \subsection{Relations between first order differentials and second order differentials} In this subsection, the relations between the first order differentials and the second order differentials of the coordinate functions will be obtained by using the relations in (\ref{eq25}). \begin{lem} The $(q,j,h)$-deformed relations between the first order and second order differentials of the coordinate functions of the Z$_3$-graded $h$-superplane have the form \begin{eqnarray} \label{eq27} {\sf d}x\wedge{\sf d}^2x &=& j \, {\sf d}^2x\wedge{\sf d}x, \nonumber\\ {\sf d}x\wedge{\sf d}^2\theta &=& q \, {\sf d}^2\theta\wedge{\sf d}x + (j-j^2) \, {\sf d}^2x\wedge{\sf d}\theta + hj^2 \, {\sf d}^2x\wedge{\sf d}x, \nonumber\\ {\sf d}\theta\wedge{\sf d}^2x &=& q^{-1} j^2 \, {\sf d}^2x\wedge{\sf d}\theta - q^{-1}hj^2 \, {\sf d}^2x\wedge{\sf d}x, \nonumber\\ {\sf d}\theta\wedge{\sf d}^2\theta &=& {\sf d}^2\theta\wedge{\sf d}\theta. \end{eqnarray} \end{lem} \begin{proof} To complete the proof, we apply the exterior differential operator ${\sf d}$ to the relations given in (\ref{eq25}).
For the first equation, the left side is \[{\sf d}\wedge(x \,{\sf d}^2x) = {\sf d}x\wedge{\sf d}^2x,\] and the right side is \[j^2 \, {\sf d}\wedge({\sf d}^2x \,x) = j \, {\sf d}^2x\wedge{\sf d}x.\] From the equality of these expressions, one obtains \[{\sf d}x\wedge{\sf d}^2x = j \, {\sf d}^2x\wedge{\sf d}x.\] The other equations can be found by the same approach. \end{proof} \begin{cor} The relationship between ${\sf d}^2x$ and ${\sf d}^2\theta$ is as follows: \begin{equation} \label{eq28} {\sf d}^2x\wedge{\sf d}^2\theta = qj^2 \, {\sf d}^2\theta\wedge{\sf d}^2x + jh \, {\sf d}^2x\wedge{\sf d}^2x. \end{equation} \end{cor} \subsubsection{$Z_3$-graded $h$-superplane and some $(h,j)$-deformed relations} In this subsection, we obtain commutation relations on the $Z_3$-graded $h$-superplane by taking the limit $q\to1$ in the previously found relations. \begin{itemize} \item In equation (\ref{eq8}), the relations between the coordinate functions of the $Z_3$-graded $h$-superplane were found to be \begin{equation*} x\theta = \theta x+hx^2, \quad \theta^3 = 0, \quad h^3 = 0. \end{equation*} \end{itemize} Following the same approach and taking $q\to1$ in the equations (\ref{eq19}), (\ref{eq25})--(\ref{eq28}) gives the following relations. \begin{itemize} \item Relations between coordinate functions and their first order differentials \begin{eqnarray*} x \,{\sf d}x &=& j^2 \, {\sf d}x \,x,\\ x \,{\sf d}\theta &=& {\sf d}\theta \,x + (j^2-1) \, {\sf d}x \,\theta + hj \, {\sf d}x \,x,\\ \theta \,{\sf d}x &=& j \, {\sf d}x \,\theta - hj^2 \, {\sf d}x \,x, \\ \theta \,{\sf d}\theta &=& j \, {\sf d}\theta \,\theta.
\end{eqnarray*} \item Relations between coordinate functions and their second order differentials \begin{eqnarray*} x \,{\sf d}^2x &=& j^2 \, {\sf d}^2x \,x,\\ x \,{\sf d}^2\theta &=& {\sf d}^2\theta \,x + (j^2-1) \, {\sf d}^2x \,\theta + hj^2 \, {\sf d}^2x \,x,\\ \theta \,{\sf d}^2x &=& {\sf d}^2x \,\theta - hj^2 \, {\sf d}^2x \,x, \\ \theta \,{\sf d}^2\theta &=& {\sf d}^2\theta \,\theta. \end{eqnarray*} \item Relations between first order differentials \begin{eqnarray*} {\sf d}x\wedge{\sf d}\theta = j \, {\sf d}\theta\wedge{\sf d}x + hj^2 \, ({\sf d}x\wedge{\sf d}x). \end{eqnarray*} \item Relations between first order differentials and second order differentials \begin{eqnarray*} {\sf d}x\wedge{\sf d}^2x &=& j \, {\sf d}^2x\wedge{\sf d}x,\\ {\sf d}x\wedge{\sf d}^2\theta &=& {\sf d}^2\theta\wedge{\sf d}x + (j-j^2) \, {\sf d}^2x\wedge{\sf d}\theta + hj^2 \, {\sf d}^2x\wedge{\sf d}x,\\ {\sf d}\theta\wedge{\sf d}^2x &=& j^2 \, {\sf d}^2x\wedge{\sf d}\theta - hj^2 \, {\sf d}^2x\wedge{\sf d}x, \\ {\sf d}\theta\wedge{\sf d}^2\theta &=& {\sf d}^2\theta\wedge{\sf d}\theta. \end{eqnarray*} \item Relations between second order differentials \[{\sf d}^2x\wedge{\sf d}^2\theta = j^2 \, {\sf d}^2\theta\wedge{\sf d}^2x + jh \, {\sf d}^2x\wedge{\sf d}^2x.\] \end{itemize} \subsection{The Relations Between Partial Derivatives and First and Second Order Differentials} In this section, we obtain the relations between the coordinate functions and their partial derivatives, as well as the relations between the first order differentials and the partial derivatives, on the $Z_3$-graded $h$-superplane. \begin{defn} If $f$ is a differentiable function of $x$ and $\theta$, then the first order differential of $f$ is defined as \begin{equation} \label{eq29} {\sf d}f=({\sf d}x\partial_x+{\sf d}\theta\partial_\theta)f.
\end{equation} \end{defn} \subsubsection{Relations between the coordinate functions and partial derivatives} In this subsection, the commutation relations between the coordinate functions and the partial derivatives will be given. \begin{thm} The commutation relations between the coordinate functions and the partial derivatives are given by \begin{eqnarray} \label{eq30} \partial_xx &=& 1 + j^2x\partial_x + (j^2-1)\theta\partial_\theta+hx\partial_\theta, \nonumber\\ \partial_\theta x &=& qx\partial_\theta, \nonumber\\ \partial_x\theta &=& j^2q^{-1}(\theta-hx)\partial_x, \nonumber\\ \partial_\theta \theta &=& 1 + j^2\theta\partial_\theta. \end{eqnarray} \end{thm} \begin{proof} Replacing $f$ by $xf$ in (\ref{eq29}) gives, for the left side, \begin{eqnarray*} {\sf d}(xf) &=& {\sf d}x\,f+x\,{\sf d}f = {\sf d}x\,f+x({\sf d}x\partial_x + {\sf d}\theta\partial_\theta)f \\ &=& \left[ {\sf d}x(1 + j^2x\partial_x + (j^2-1)\theta \partial_\theta + hx\partial_\theta) + q\,{\sf d}\theta\, x\partial_\theta\right] f \end{eqnarray*} and, for the right side, \begin{equation*} {\sf d}(xf) = ({\sf d}x\partial_x x + {\sf d}\theta\partial_\theta x)f. \end{equation*} From the equality of these two expressions, the desired relations are obtained. \end{proof} \subsubsection{Relations between partial derivatives and first order differentials} In this subsection, the commutation relations between the first order differentials and the partial derivatives will be given. \begin{thm} The commutation relations between the first order differentials and the partial derivatives are given by \begin{eqnarray} \label{eq31} \partial_x{\sf d}x &=& j{\sf d}x\partial_x - j^2h{\sf d}x\partial_\theta, \nonumber\\ \partial_x{\sf d}\theta &=& q^{-1}{\sf d}\theta\partial_x + q^{-1}jh{\sf d}x\partial_x, \nonumber\\ \partial_\theta{\sf d}x &=& qj^2{\sf d}x\partial_\theta, \nonumber\\ \partial_\theta{\sf d}\theta &=& (j^2-j){\sf d}x\partial_x + j^2{\sf d}\theta\partial_\theta.
\end{eqnarray} \end{thm} \begin{proof} First assume that these relations have the form \begin{eqnarray} \label{eq32} \partial_x{\sf d}x &=& A_1{\sf d}x\partial_x + A_2{\sf d}\theta\partial_\theta + A_3{\sf d}x\partial_\theta + A_4{\sf d}\theta\partial_x, \nonumber\\ \partial_x{\sf d}\theta &=& A_5{\sf d}\theta\partial_x + A_6{\sf d}x\partial_\theta + A_7{\sf d}x\partial_x + A_8{\sf d}\theta\partial_\theta, \nonumber\\ \partial_\theta{\sf d}x &=& A_9{\sf d}x\partial_\theta + A_{10}{\sf d}\theta\partial_x + A_{11}{\sf d}x\partial_x+A_{12}{\sf d}\theta\partial_\theta,\nonumber\\ \partial_\theta{\sf d}\theta &=& A_{13}{\sf d}x\partial_x + A_{14}{\sf d}\theta\partial_\theta + A_{15}{\sf d}x\partial_\theta + A_{16}{\sf d}\theta\partial_x. \end{eqnarray} From the definition of the partial derivative operator, we know that \begin{equation} \label{eq33} \partial_j(x^i\,{\sf d}x^k) = \delta_j^i\,{\sf d}x^k, \quad (x^1 = x, \quad x^2 = \theta). \end{equation} Acting with the partial derivative operator on the first equation in (\ref{eq19}) yields $\partial_x(x~{\sf d}x-j^2 {\sf d}x~x)=0$. Using (\ref{eq33}) gives ${\sf d}x-j^2\partial_x{\sf d}x~x=0$. Inserting (\ref{eq32}) into this equation, one finds \begin{eqnarray*} {\sf d}x - j^2\left[A_1{\sf d}x\partial_x+A_2{\sf d}\theta\partial_\theta+A_3{\sf d}x\partial_\theta+A_4{\sf d}\theta\partial_x\right] x &=& 0 \nonumber\\ {\sf d}x - j^2A_1{\sf d}x - j^2A_4{\sf d}\theta &=& 0 \nonumber\\ (1 - j^2A_1){\sf d}x - j^2A_4{\sf d}\theta &=& 0. \end{eqnarray*} From here it is easily seen that $A_1=j$ and $A_4=0$. All the coefficients $A_i$ can be obtained, after some messy calculations, by acting with both $\partial_x$ and $\partial_\theta$ on all the equations in (\ref{eq19}). \end{proof} \subsubsection{Relations between partial derivatives} In this subsection, the commutation relations between the partial derivatives will be given.
\begin{thm} The relations between the partial derivatives are \begin{eqnarray} \label{eq34} \partial_x\partial_\theta &=& jq\partial_\theta\partial_x, \nonumber\\ \partial_\theta^3 &=& 0. \end{eqnarray} \end{thm} \begin{proof} In the $Z_3$-graded space we know that ${\sf d}^3=0$. Hence, \begin{eqnarray*} {\sf d}^2f &=& \left[({\sf d}x\partial_x + {\sf d}\theta\partial_\theta)({\sf d}x\partial_x + {\sf d}\theta\partial_\theta)\right] f\\ &=& \left[{\sf d}x\partial_x{\sf d}x\partial_x + {\sf d}x\partial_x{\sf d}\theta\partial_\theta + {\sf d}\theta\partial_\theta{\sf d}x\partial_x + {\sf d}\theta\partial_\theta{\sf d}\theta\partial_\theta\right] f.\end{eqnarray*} and \begin{eqnarray*} 0 &=& {\sf d}^3f = \left[({\sf d}x\partial_x+{\sf d}\theta\partial_\theta)({\sf d}x\partial_x+{\sf d}\theta\partial_\theta)({\sf d}x\partial_x + {\sf d}\theta\partial_\theta)\right] f \nonumber\\ &=&\left[ ({\sf d}x\partial_x{\sf d}x\partial_x + {\sf d}x\partial_x{\sf d}\theta\partial_\theta + {\sf d}\theta\partial_\theta{\sf d}x\partial_x + {\sf d}\theta\partial_\theta{\sf d}\theta\partial_\theta)({\sf d}x\partial_x + {\sf d}\theta\partial_\theta)\right] f \nonumber\\ &=&[{\sf d}x\partial_x{\sf d}x\partial_x{\sf d}x\partial_x + {\sf d}x\partial_x{\sf d}x\partial_x{\sf d}\theta\partial_\theta + {\sf d}x\partial_x{\sf d}\theta\partial_\theta{\sf d}x\partial_x + {\sf d}x\partial_x{\sf d}\theta\partial_\theta{\sf d}\theta\partial_\theta \nonumber\\ && + {\sf d}\theta\partial_\theta{\sf d}x\partial_x{\sf d}x\partial_x + {\sf d}\theta\partial_\theta{\sf d}x\partial_x{\sf d}\theta\partial_\theta + {\sf d}\theta\partial_\theta{\sf d}\theta\partial_\theta{\sf d}x\partial_x + {\sf d}\theta\partial_\theta{\sf d}\theta\partial_\theta{\sf d}\theta\partial_\theta] f. \end{eqnarray*} Hence, using (\ref{eq17}) and (\ref{eq31}) in this equation, assuming ${\sf d}x\wedge{\sf d}x\wedge{\sf d}x=0$, and with the help of homogeneity, one obtains the desired results.
\end{proof} \subsubsection{Some $(h,j)$-deformed relations for partial derivatives} Taking the limit $q\to1$ in the equations (\ref{eq30}), (\ref{eq31}) and (\ref{eq34}) gives new relations. \begin{itemize} \item Relations of the coordinate functions with the partial derivatives, and relations between the partial derivatives \begin{eqnarray} \label{eq35} \partial_xx &=& 1 + j^2x\partial_x + (j^2-1)\theta\partial_\theta + hx\partial_\theta, \nonumber\\ \partial_\theta x &=& x\partial_\theta, \nonumber\\ \partial_x\theta &=& j^2(\theta-hx)\partial_x,\nonumber\\ \partial_\theta \theta &=& 1 + j^2\theta\partial_\theta, \nonumber\\ \partial_x\partial_\theta &=& j\partial_\theta\partial_x, \nonumber\\ \partial_\theta^3 &=& 0. \end{eqnarray} \item Relations between the first order differentials of the coordinate functions and the partial derivatives \begin{eqnarray*} \partial_x{\sf d}x &=& j{\sf d}x\partial_x - j^2h{\sf d}x\partial_\theta,\\ \partial_x{\sf d}\theta &=& {\sf d}\theta\partial_x + jh{\sf d}x\partial_x,\\ \partial_\theta{\sf d}x &=& j^2{\sf d}x\partial_\theta,\\ \partial_\theta{\sf d}\theta &=& (j^2-j){\sf d}x\partial_x + j^2{\sf d}\theta\partial_\theta. \end{eqnarray*} \end{itemize} \begin{defn} The Z$_3$-graded quantum Weyl algebra ${\mathcal A}_{h,j}(2)$ is the unital algebra with four generators $x$, $\theta$, $\partial_x$, $\partial_\theta$ and defining relations (\ref{eq8}) and (\ref{eq35}). \end{defn} \subsection{Cartan-Maurer Forms} In this section the Cartan-Maurer forms will be described and the necessary commutation relations will be obtained. In the $Z_3$-graded $q$-deformation, the two forms below were described with the help of the generators of an algebra $\mathcal{A}$ in \cite{4}: \begin{eqnarray*} w' &=& {\sf d}x'(x')^{-1},\\ u' &=& {\sf d}\theta'(x')^{-1}-{\sf d}x'(x')^{-1}\theta'(x')^{-1}.
\end{eqnarray*} {\bf Note:} The Cartan-Maurer forms in the Z$_3$-graded $h$-deformation are \begin{eqnarray}\label{eq36} w &=& {\sf d}x \, x^{-1}, \nonumber\\ u &=& {\sf d}\theta \, x^{-1}-{\sf d}x \, x^{-1}\theta \, x^{-1}\end{eqnarray} under the assumptions \begin{equation} \label{eq37} u \, h = qj^2h \, u, \quad w \, h = jh \, w. \end{equation} \subsubsection{Relations between the coordinate functions and Cartan-Maurer forms} In this subsection, the relations between the coordinate functions and the Cartan-Maurer forms will be obtained. \begin{lem} The relations between the coordinate functions and the Cartan-Maurer forms are \begin{eqnarray} \label{eq38} xw &=& j^2 \, wx, \nonumber\\ xu &=& q \, ux, \nonumber\\ \theta w &=& j \, w\theta, \nonumber\\ \theta u &=& qj \, u\theta + qh \, ux. \end{eqnarray} \end{lem} \begin{proof} Multiplying both sides of the first equation in (\ref{eq36}) by $x$ gives $$xw = x \,{\sf d}x \,x^{-1}.$$ Using the appropriate equation in the relation system (\ref{eq19}) gives $$xw = j^2 wx.$$ The other relations can be obtained by using the equations in (\ref{eq19}) and applying the necessary transformations. \end{proof} \subsubsection{Relations between the Cartan-Maurer forms and first order differentials} In this subsection, the relations between the Cartan-Maurer forms and the first order differentials will be given. \begin{lem} The relations between the Cartan-Maurer forms and the first order differentials are \begin{eqnarray}\label{eq39} w\wedge{\sf d}x &=& j \, {\sf d}x\wedge w, \nonumber\\ u\wedge{\sf d}x &=& q^{-1} \, {\sf d}x\wedge u, \nonumber\\ w\wedge{\sf d}\theta &=& j \, {\sf d}\theta\wedge w + (1-j) \, \theta x^{-1} {\sf d}x\wedge w, \nonumber\\ u\wedge{\sf d}\theta &=& q^{-1} \, {\sf d}\theta\wedge u + q^{-1}[(1-j) \, \theta x^{-1}-h] \,{\sf d}x\wedge u. \end{eqnarray} \end{lem} \begin{proof} These relations can be found by using (\ref{eq36}), (\ref{eq37}), (\ref{eq19}) and (\ref{eq25}), and making the necessary arrangements.
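As an illustration, consider the first relation (the others are obtained similarly). From $x\,{\sf d}x = j^2\,{\sf d}x\,x$ in (\ref{eq19}) one finds $x^{-1}\,{\sf d}x = j\,{\sf d}x\,x^{-1}$, and therefore
\[
w\wedge{\sf d}x \;=\; {\sf d}x\,x^{-1}\wedge{\sf d}x \;=\; j\,{\sf d}x\wedge{\sf d}x\,x^{-1} \;=\; j\,{\sf d}x\wedge w .
\]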
\end{proof} \subsubsection{Relations between the Cartan-Maurer forms and second order differentials} In this subsection, the relations between the Cartan-Maurer forms and the second order differentials will be given. \begin{lem} The relations between the Cartan-Maurer forms and the second order differentials are \begin{eqnarray}\label{eq40} w\wedge{\sf d}^2x &=& j^2 \, {\sf d}^2x\wedge w, \nonumber\\ u\wedge{\sf d}^2x &=& q^{-1} \, {\sf d}^2x\wedge u, \nonumber\\ w\wedge{\sf d}^2\theta &=& (j-j^2)q^{-1} \, {\sf d}^2x\wedge u + {\sf d}^2\theta\wedge w, \nonumber\\ u\wedge{\sf d}^2\theta &=& q^{-1} \, {\sf d}^2\theta\wedge u + \left[(j-j^2)x^{-1}\theta - q^{-1}j^2h\right] {\sf d}^2x\wedge u. \end{eqnarray} \end{lem} \begin{proof} These relations can be found by using (\ref{eq36}), (\ref{eq37}), (\ref{eq19}), (\ref{eq25}) and (\ref{eq26}), and making the necessary arrangements. \end{proof} \subsubsection{Relations between the Cartan-Maurer forms} In this subsection, the relations between the Cartan-Maurer forms will be given. \begin{thm} The relations between the Cartan-Maurer forms are \begin{eqnarray}\label{eq41} w\wedge u&=& u\wedge w, \nonumber\\ w\wedge w\wedge w &=& 0. \end{eqnarray} \end{thm} \begin{proof} These relations can be found by using (\ref{eq19}) and (\ref{eq26}) and making the necessary arrangements. \end{proof} \begin{cor} The Cartan-Maurer forms are closed, in the sense that \begin{equation}\label{eq42} {\sf d}^2\wedge w = 0, \quad {\sf d}^2\wedge u =0. \end{equation} \end{cor} \subsubsection{Some $(h,j)$-deformed relations for Cartan-Maurer forms} Taking the limit $q\to1$ in the equations (\ref{eq38})--(\ref{eq40}) gives new relations. \begin{itemize} \item Relations between the coordinate functions and Cartan-Maurer forms \begin{eqnarray*} xw &=& j^2wx, \\ xu &=& ux, \\ \theta w &=& j w\theta, \\ \theta u &=& j u\theta + h ux.
\end{eqnarray*} \item Relations between the Cartan-Maurer forms and first order differentials \begin{eqnarray*} w\wedge{\sf d}x &=& j {\sf d}x\wedge w, \\ u\wedge{\sf d}x &=& {\sf d}x\wedge u, \\ w\wedge{\sf d}\theta &=& j{\sf d}\theta\wedge w + (1-j) \, \theta x^{-1} {\sf d}x\wedge w, \\ u\wedge{\sf d}\theta &=& {\sf d}\theta\wedge u + [(1-j) \, \theta x^{-1}-h] \,{\sf d}x\wedge u. \end{eqnarray*} \item Relations between the Cartan-Maurer forms and second order differentials \begin{eqnarray*} w\wedge{\sf d}^2x &=& j^2 \, {\sf d}^2x\wedge w, \\ u\wedge{\sf d}^2x &=& {\sf d}^2x\wedge u, \\ w\wedge{\sf d}^2\theta &=& (j-j^2) \, {\sf d}^2x\wedge u+{\sf d}^2\theta\wedge w, \\ u\wedge{\sf d}^2\theta &=& {\sf d}^2\theta\wedge u + \left[(j-j^2) \, x^{-1}\theta - j^2h\right] {\sf d}^2x\wedge u. \end{eqnarray*} \item Relations between the Cartan-Maurer forms \begin{eqnarray*} w\wedge u&=& u\wedge w, \\ w\wedge w\wedge w &=& 0. \end{eqnarray*} \end{itemize} \section{A Z$_3$-graded $(h,j)$-deformed quantum (super)group} In this section, we consider the $Z_3$-graded structure of the $(h,j)$-deformed quantum $2\times2$ supermatrices. We gave the commutation relations between the coordinates of the $h$-superplane in (\ref{eq8}). Here the coordinate $x$ is of grade 0 and the coordinate $\theta$ is of grade 1 with respect to the $Z_3$-grading. The noncommutative space ${\mathbb R}_h(1|1)$ with the function algebra \[O({\mathbb R}_h(1|1))=K\{x,\theta\}/(x\theta-\theta x-hx^2, \quad \theta^3, \quad h^3)\] is called the Z$_3$-graded $h$-superplane. The noncommutative space ${\mathbb R}_{h,j}^*(1|1)$ with the function algebra \[O({\mathbb R}_{h,j}^*(1|1))=K\{\varphi,y\}/(\varphi y-jy\varphi-hj^2\varphi^2, \quad \varphi^3, \quad h^3)\] is called the dual Z$_3$-graded $h$-superplane.
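Note that the defining relation of the dual superplane mirrors the $q\to1$ relation between the first order differentials obtained above, under the (heuristic) identification $\varphi\leftrightarrow{\sf d}x$, $y\leftrightarrow{\sf d}\theta$:
\[
\varphi y = jy\varphi + hj^2\varphi^2
\qquad\longleftrightarrow\qquad
{\sf d}x\wedge{\sf d}\theta = j \, {\sf d}\theta\wedge{\sf d}x + hj^2 \, ({\sf d}x\wedge{\sf d}x),
\]
and $\varphi^3=0$ corresponds to ${\sf d}x\wedge{\sf d}x\wedge{\sf d}x=0$.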
Under these definitions, we have \begin{equation} \label{eq43} {\mathcal R}_h(1|1)=\left\{\begin{pmatrix} x \\ \theta \end{pmatrix}: x\theta=\theta x+hx^2, \quad \theta^3=0, \quad h^3=0\right\} \end{equation} and \begin{equation} \label{eq44} {\mathcal R}^*_{h,j}(1|1) = \left\{\begin{pmatrix} \varphi \\ y \end{pmatrix}: \varphi y=jy\varphi+hj^2\varphi^2, \quad \varphi^3=0, \quad h^3=0\right\}. \end{equation} Here, $$ \left[{\mathcal R}_h(1|1)\right]^* = {\mathcal R}^*_{h,j}(1|1).$$ Let $T$ be a $2\times2$ supermatrix in the $Z_3$-graded superspace, \begin{equation} \label{eq45} T=\begin{pmatrix} a & \beta \\ \gamma & d \end{pmatrix} \end{equation} where $a$ and $d$ are of grade $0$, and $\beta$ and $\gamma$ are of grade $2$ and grade $1$, respectively, with respect to the $Z_3$-grading. We now consider linear transformations with the following properties: \begin{equation} \label{eq46} T: {\mathcal R}_h(1|1)\longrightarrow{\mathcal R}_h(1|1), \quad T:{\mathcal R}^*_{h,j}(1|1)\longrightarrow{\mathcal R}^*_{h,j}(1|1). \end{equation} We assume that the entries of $T$ are $j$-commutative with the elements of ${\mathcal R}_h(1|1)$ and ${\mathcal R}^*_{h,j}(1|1)$; for example, \[ax=xa, \quad \theta\beta=j^2\beta\theta, \quad \mbox{etc.}\] As a consequence of the linear transformations in (\ref{eq46}), the elements \begin{equation} \label{eq47} \tilde{x}=ax+\beta\theta, \quad \tilde{\theta}=\gamma x+d\theta \end{equation} should satisfy the relations in (\ref{eq43}): \[\tilde{x}\tilde{\theta}=\tilde{\theta}\tilde{x}+h\tilde{x}^2, \quad \tilde{\theta}^3=0. \] Using these relations one finds \[a\gamma=\gamma a+h[a^2-ad+\gamma\beta+j^2ha\beta], \quad d\gamma=\gamma d,\] \[\beta d=j^2[d\beta+h\beta^2], \quad \gamma^3=-hj\left[(j-1)\gamma^2d+2jh\gamma d^2\right].\] Similarly, the elements \begin{equation} \label{eq48} \tilde{\varphi}=a\varphi+j^2\beta y, \quad \tilde{y}=j\gamma\varphi+dy \end{equation} must satisfy the relations in (\ref{eq44}).
Using these relations, one finds \[a\beta=j\beta a, \quad \beta^3=0.\] Also, if we use the second relation in (\ref{eq19}), \[\tilde{x}\tilde{y}=\tilde{y}\tilde{x}+(j^2-1)\tilde{\varphi}\tilde{\theta}+hj\tilde{\varphi}\tilde{x},\] we obtain \[ad=da+(1-j)\beta\gamma+h\beta a, \quad \beta \gamma= \gamma\beta+ha\beta. \] Consequently, we have the following commutation relations between the matrix elements of $T$: \begin{eqnarray} \label{eq49} a\beta &= & j \,\beta a, \nonumber\\ a\gamma &=& \gamma a+h\left[a^2-ad+\gamma\beta+j^2ha\beta\right], \nonumber\\ d\beta &= & j \, \beta d + jh \, \beta^2, \nonumber\\ d\gamma &=& \gamma d, \nonumber\\ \beta^3 &=& 0,\qquad \gamma^3 = -hj\left[(j-1)\gamma^2d + 2jh \,\gamma d^2\right], \nonumber\\ \beta \gamma &=& \gamma\beta + h \, a\beta, \nonumber\\ ad &=& da + (1-j) \, \beta\gamma +h \, \beta a. \end{eqnarray} The superinverse and the superdeterminant of $T$ are defined as in \cite{5}: \begin{equation} \label{eq50} T^{-1}=\begin{pmatrix} A & -a^{-1}\beta d^{-1}-a^{-1}\beta d^{-1}\gamma a^{-1}\beta d^{-1} \cr -d^{-1}\gamma a^{-1}-d^{-1}\gamma a^{-1}\beta d^{-1}\gamma a^{-1} & D \end{pmatrix}\end{equation} where \[A= a^{-1}+a^{-1}\beta d^{-1}\gamma a^{-1}+a^{-1}\beta d^{-1}\gamma a^{-1} \beta d^{-1}\gamma a^{-1}, \] \[D=d^{-1}+d^{-1}\gamma a^{-1}\beta d^{-1}+d^{-1} \gamma a^{-1}\beta d^{-1}\gamma a^{-1}\beta d^{-1}\] and \begin{equation} \label{eq51} D_{h,j}(T)=ad^{-1}+ad^{-1}\gamma a^{-1}\beta d^{-1}+ad^{-1}\gamma a^{-1}\beta d^{-1}\gamma a^{-1}\beta d^{-1}. \end{equation} \begin{defn} The Z$_3$-graded $(h,j)$-deformed supergroup is the group consisting of the matrices $T$ that satisfy the following three conditions: \begin{itemize} \item the entries of $T$ satisfy the relations given in (\ref{eq49}); \item $T$ has the inverse given in (\ref{eq50}); \item $T$ has the superdeterminant given in (\ref{eq51}). \end{itemize} \end{defn} This group will be denoted by GL$_{h,j}(1|1)$.
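Note that the entries $A$ and $D$ of (\ref{eq50}) and the superdeterminant (\ref{eq51}) can be written compactly as geometric series truncated at second order; for instance,
\[
A \;=\; a^{-1}\sum_{n=0}^{2}\big(\beta d^{-1}\gamma a^{-1}\big)^{n},
\qquad
D_{h,j}(T) \;=\; ad^{-1}\sum_{n=0}^{2}\big(\gamma a^{-1}\beta d^{-1}\big)^{n}.
\]
The truncation at $n=2$ reflects the cubic relations of the Z$_3$-graded case ($\beta^3=0$ and the cubic relation for $\gamma$ in (\ref{eq49})), in place of the quadratic nilpotency of the Z$_2$-graded setting.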
It can be shown that the Z$_3$-graded quantum supergroup GL$_{h,j}(1|1)$ is a Z$_3$-graded Hopf (super)algebra. A study of this group and of its differential geometry is in progress. \section*{Acknowledgments} This work was supported in part by {\bf T\"{U}B\.{I}TAK}, the Turkish Scientific and Technical Research Council.
\section{Introduction} Weakly Interacting Massive Particles (WIMPs) $\chi$ arising in several extensions of the Standard Model of electroweak interactions are one of the leading candidates for Dark Matter. Currently, direct searches for different candidates for WIMP Dark Matter based on measuring the recoil energy deposited by elastic scattering of ambient WIMPs on the target nuclei \cite{Smith90, Lewin96} are one of the most promising methods for understanding the nature of Dark Matter particles, identifying them among new particles hopefully produced at colliders in the near future, as well as reconstructing the (sub)structure of our Galactic halo. However, for the conventional data analyses used in direct detection experiments one needs assumptions not only about the Galactic halo from astrophysics but also about the WIMP properties from particle physics \cite{SUSYDM96}. Therefore, a few years ago we started to develop new methods for analyzing data, i.e., measured recoil energies, from (future) direct detection experiments as model--independently as possible. So far we can in principle reconstruct the (moments of the) one--dimensional velocity distribution function of halo WIMPs \cite{DMDDf1v}, as well as determine the WIMP mass \cite{DMDDmchi} and (ratios of) their couplings to nucleons \cite{DMDDfp2-IDM2008, DMDDidentification-Dark2009}. Following the development of these model--independent data analysis procedures, we combined the simulation programs into a compact system: {\tt AMIDAS} (A Model--Independent Data Analysis System). For users' convenience and in collaboration with the ILIAS Project \cite{ILIAS}, an online system \cite{AMIDAS-web} has also been established at the same time. In this article, I give an overview of the functions of the {\tt AMIDAS} code based on the use of its website. In Sec.~2 I will describe {\tt AMIDAS}' functions and the different working modes for these functions.
In Sec.~3 I will describe the options for the input parameters for simulations which users can modify. In Sec.~4 the use of the {\tt AMIDAS} website for analyzing user--uploaded data sets will be described. I will conclude and give some future prospects of the {\tt AMIDAS} code and website in Sec.~5. \section{Functions, modes, targets} \subsection{{\tt AMIDAS}' Functions} Based on our works on the model--independent data analysis methods for extracting the nature of halo WIMPs \cite{DMDDf1v, DMDDmchi, DMDDfp2-IDM2008, DMDDidentification-Dark2009}, {\tt AMIDAS} has so far the following functions: \begin{enumerate} \item reconstruction of the one--dimensional velocity distribution function of halo WIMPs; \item determination of the WIMP mass; \item determinations of ratios of different WIMP--nucleon couplings/cross sections; \item estimation of the spin--independent (SI) WIMP--proton coupling. \end{enumerate} \subsection{Reconstruction modes} For reconstructing the one--dimensional WIMP velocity distribution and estimating the SI WIMP--proton coupling, one needs the WIMP mass $\mchi$ as an input parameter \cite{DMDDf1v, DMDDfp2-IDM2008}. This information could be obtained either from, e.g., collider experiments or from two direct detection experiments \cite{DMDDmchi}. To cover both cases, {\tt AMIDAS} has three options for the input WIMP mass for the reconstruction mode: \begin{enumerate} \item with only an input WIMP mass from other/collider experiments; \item with only a reconstructed WIMP mass from other direct detection experiments; \item with both of them. \end{enumerate} In addition, {\tt AMIDAS} offers two modes for determining the WIMP mass \cite{DMDDmchi}: \begin{enumerate} \item only the combined result from the estimators for different moments; \item both the combined result from and each of the estimators for different moments.
\end{enumerate} \subsection{Target(s)} So far {\tt AMIDAS} uses $\rmXA{Si}{28}$, $\rmXA{Ge}{76}$, $\rmXA{Ar}{40}$, and $\rmXA{Xe}{136}$ for simulations and can analyze data sets with these four targets. For the determination of the WIMP mass, two combinations have been programmed \cite{DMDDmchi}: \begin{enumerate} \item $\rmXA{Si}{28}$ + $\rmXA{Ge}{76}$; \item $\rmXA{Ar}{40}$ + $\rmXA{Xe}{136}$. \end{enumerate} \subsection{Data type} The most important and powerful ability of the {\tt AMIDAS} code is that this system and its website can {\em not only} do simulations with self--generated events based on the Monte Carlo method, {\em but also} analyze user--uploaded data set(s) generated by other event generators or {\em recorded} in direct Dark Matter detection experiments {\em without} modifying the source code. Users thus have two choices for the data type: \begin{enumerate} \item a simulation (events will be generated by {\tt AMIDAS}); \item real (user--uploaded) data. \end{enumerate} A sample file for the uploaded data sets can be downloaded from the {\tt AMIDAS} website. \subsection{Simulation mode} All {\tt AMIDAS}' functions can be simulated based on the Monte Carlo method. Considering the current experimental sensitivity and the required execution time for these simulations, the {\tt AMIDAS} website offers full Monte Carlo simulations with at most 2,000 experiments and at most 2,500 events on average per experiment. However, since the algorithmic procedure for the determination of the WIMP mass, needed also for the reconstruction of the one--dimensional WIMP velocity distribution function and for the estimation of the SI WIMP--proton coupling, takes a (much) longer time than what usual Monte Carlo simulations require, {\tt AMIDAS} also offers users faster theoretical estimations as an alternative option: \begin{enumerate} \item a Monte Carlo simulation; \item a theoretical estimation.
\end{enumerate} Here integrals over the theoretically predicted recoil spectrum will be used. Note that, firstly, since for these estimations the statistical fluctuations {\em have not} been taken into account, these purely theoretical estimates, especially for cases with (very) few events, could be (fairly) different from results obtained by more realistic simulations. Secondly, as the alternative option to the Monte Carlo simulation with a much shorter required execution time, the total event number used for theoretical estimations is fixed\footnote{ The actual event number for each Monte Carlo simulated experiment is Poisson--distributed around the expected value set by users. } and the calculations are done only a few times. These restrictions can sometimes cause unexpected zigzags in the result curves. \section{Running simulations} In this section, I describe the options for the input parameters needed for predicting the recoil spectrum used {\em only} for generating events. Note that some commonly used standard values from our works presented in Refs.~\cite{DMDDf1v, DMDDmchi, DMDDfp2-IDM2008, DMDDidentification-Dark2009} have been given as default choices, but all these parameters can be modified by users. \subsection{WIMP properties} The following information on the WIMP properties is required for predicting the recoil spectrum and/or analyzing user--uploaded data. Note that {\em not} all of it is needed for every {\tt AMIDAS} function. \begin{enumerate} \item the input WIMP mass $\mchi$; \item an overall uncertainty on the input WIMP mass $\sigma(\mchi)$; \item the SI WIMP--proton cross section $\sigmapSI$.
\end{enumerate} \subsection{Astronomical setup} {\tt AMIDAS} requires the following astronomical parameters for the velocity distribution function of halo WIMPs: \begin{enumerate} \item the WIMP density near the Earth $\rho_0$; \item the Sun's orbital velocity in the Galactic frame $v_0$; \item the escape velocity from our Galaxy at the position of the Solar system $\vesc$; \item the date on which the Earth's velocity relative to the WIMP halo is maximal, $t_{\rm p}$; \item the experimental running date $t_{\rm expt}$. \end{enumerate} \subsection{Velocity distribution of halo WIMPs} So far users have two options for the one--dimensional WIMP velocity distribution function \cite{DMDDf1v}: \begin{enumerate} \item the simple Maxwellian velocity distribution $f_{1, \Gau}(v)$; \item the shifted Maxwellian velocity distribution $f_{1, \sh}(v)$. \end{enumerate} Note that the analytical forms of these velocity distributions can be checked on the website, once one hovers the cursor over the ``{\tt analytical form}''. By clicking it one can also open another page and get more detailed information and some references about the velocity distribution. \subsection{Nuclear form factor for the SI cross section} So far users have four options for the nuclear form factor for the SI WIMP--nucleus cross section: \begin{enumerate} \item the exponential form factor $F_{\rm ex}(Q)$ \cite{Ahlen87, Freese88, SUSYDM96}; \item the Woods-Saxon form factor $F_{\rm WS}(Q)$ \cite{Engel91, SUSYDM96}; \item the Woods-Saxon form factor with a modified nuclear radius $F_{\rm WS, Eder}(Q)$ \cite{Eder68, Lewin96}; \item the Helm form factor $F_{\rm Helm}(Q)$ \cite{Helm56, Lewin96}. \end{enumerate} As for the velocity distribution function, the analytical forms of these form factors can be checked on the website, by hovering the cursor over the ``{\tt analytical form}'' or by clicking it to open another page for more detailed information and references.
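For orientation, the following Python sketch evaluates the analytic forms usually taken for these two distributions in the direct-detection literature (an assumption here, since the precise conventions are documented on the website itself): the simple Maxwellian $f_{1, \Gau}(v)\propto v^2 e^{-v^2/v_0^2}$ and the shifted Maxwellian obtained from it by the Earth's motion through the halo. It only checks that both are normalized to unity; it is not part of the {\tt AMIDAS} code:

```python
import numpy as np

v0 = 220.0   # km/s; illustrative value for the Sun's orbital velocity
ve = 231.0   # km/s; illustrative value for the Earth's velocity in the halo frame

def f1_gau(v):
    # Simple Maxwellian velocity distribution, normalized over 0 <= v < infinity
    return 4.0 / (np.sqrt(np.pi) * v0**3) * v**2 * np.exp(-(v / v0)**2)

def f1_sh(v):
    # Shifted Maxwellian: the observer moves with velocity ve through the halo
    return (v / (np.sqrt(np.pi) * v0 * ve)
            * (np.exp(-((v - ve) / v0)**2) - np.exp(-((v + ve) / v0)**2)))

v = np.linspace(0.0, 1500.0, 200001)
dv = v[1] - v[0]
n_gau = f1_gau(v).sum() * dv   # Riemann sum; both integrands vanish at the ends
n_sh = f1_sh(v).sum() * dv
print(round(n_gau, 3), round(n_sh, 3))  # → 1.0 1.0
```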
\subsection{Nuclear form factor for the SD cross section} For the nuclear form factor for the spin--dependent (SD) WIMP cross section, due to its dependence on the SD WIMP--nucleon couplings as well as on the individual spin structure of target nuclei, {\tt AMIDAS} offers so far only one analytic form for the SD cross section, namely, \begin{enumerate} \item the thin-shell form factor, $F_{\rm TS}(Q)$ \cite{Lewin96, Klapdor05}. \end{enumerate} \subsection{Experimental setup} Finally, one needs to set the following experimental information: \begin{enumerate} \item the minimal and maximal cut--off energies, $\Qmin$ and $\Qmax$; \item the width of the first $Q-$bin, $b_1$; \item the (expected) total event number between $\Qmin$ and $\Qmax$, $N_{\rm tot}$; \item the number of simulated experiments or uploaded data sets; \item the number of $Q-$bins between $\Qmin$ and $\Qmax$. \end{enumerate} Note that, as for the WIMP velocity distribution function and the elastic nuclear form factors, users can hover the cursor over each notation in the setup tables to check its definition. \subsection{Running simulations} After giving all the required information for the aimed simulation, users have one more chance to check their choices, modify some of them, and then resubmit the whole setup. If any required datum is missing, this omission will be detected automatically after the (re)submission; users will be reminded of it with a {\em red} block around the table. Note that {\em all data} in this table will be {\em reset} to the default values and should therefore be {\em checked again} and, where necessary, modified to the users' own choices. Once all the required data have been checked, users have only to click the ``{\tt Simulation start}'' button and wait for the simulation results for a few minutes. \subsection{Output results} Simulation results will be given in the form of plot(s) and/or, where appropriate, table(s).
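The Monte Carlo simulation mode described above can be mimicked in a few lines: each simulated experiment draws its actual event number from a Poisson distribution around the expected total, and recoil energies are generated between $\Qmin$ and $\Qmax$. The following Python sketch uses a toy exponential spectrum in place of the full predicted recoil spectrum; it is an illustration only, not the {\tt AMIDAS} event generator:

```python
import numpy as np

rng = np.random.default_rng(0)

Qmin, Qmax = 0.0, 100.0   # keV; illustrative cut-off energies
N_tot = 500               # expected event number between Qmin and Qmax
Q0 = 40.0                 # characteristic energy of the toy spectrum dR/dQ ~ exp(-Q/Q0)

def one_experiment():
    # The actual event number is Poisson-distributed around the expectation,
    # as noted for the Monte Carlo mode above.
    n = rng.poisson(N_tot)
    events = []
    while len(events) < n:
        Q = rng.exponential(Q0)       # draw from the toy spectrum
        if Qmin <= Q <= Qmax:         # keep only events inside the energy window
            events.append(Q)
    return np.sort(np.array(events))

Q = one_experiment()
print(len(Q), Q.min() >= Qmin, Q.max() <= Qmax)
```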
In order to let users understand the output results more conveniently and clearly, each output plot or table will be accompanied by a short caption. On the other hand, for users who wish to produce their own presentations of the results, the original file of output results with the users' personal simulation setup will also be provided and downloadable from the website. Please note that we would be grateful if credit were given to the {\tt AMIDAS} program and website when the output results are used. \section{Analyzing (real) data} The most useful and powerful function of the {\tt AMIDAS} website is the ability to analyze user--uploaded data set(s) directly. \subsection{Preparing data set(s)} As mentioned above, on the {\tt AMIDAS} website users can find and download a sample file for the uploaded data sets. Note that each comment line {\em has to begin} with a ``0'' (zero), and all words in a comment line must be connected by ``\_'' (underscores). For instance, {\footnotesize \begin{verbatim} 0 sigma_[chi,_p]^SI_=_1e-8_pb 0 m_[chi]_=_50_GeV 1 dataset, 43 events, 12817.27 kg-day: 1 1 5.25 keV; 1 2 5.37 keV; : 0 m_[chi]_=_100_GeV 2 dataset, 53 events, 15322.9 kg-day: 2 1 6.16 keV; 2 2 4.25 keV; : \end{verbatim} } \noindent Note also that it is {\em unnecessary} to output generated/recorded recoil energies in ascending or descending order in your uploaded data file(s). {\tt AMIDAS} will order the events in each data set after reading them. \subsection{Uploading data file(s)} Users can upload their data file(s) as usual. Note only that the maximal size of each uploaded file is {\em 2 MB}. \subsection{Analyzing uploaded data} As for simulations, after giving all the required information for the aimed analysis, users have one more chance to check their choices and the {\em original} name(s) of their data file(s), modify some of them and/or replace the uploaded data file(s), and then resubmit the whole setup.
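The sample format shown above can be read with a short parser. The following Python sketch is a hypothetical helper, not the actual {\tt AMIDAS} reader: it skips comment lines beginning with ``0'', skips data-set header lines, collects the event lines, and sorts the events of each set afterwards (the upload order being free, as noted above):

```python
from collections import defaultdict

def parse_recoil_data(lines):
    """Parse recoil energies given in the sample format above.

    Lines starting with '0' are comments; 'N dataset, ...' lines open
    data set N; 'N i E keV;' lines record event i of set N with recoil
    energy E in keV.  (Hypothetical helper, not the AMIDAS reader.)
    """
    data = defaultdict(list)
    for line in lines:
        tokens = line.split()
        if not tokens or tokens[0] == '0':
            continue                 # blank line or comment
        if 'dataset,' in tokens:
            continue                 # data-set header line
        set_no = int(tokens[0])
        energy = float(tokens[2])    # tokens[1] is the event index
        data[set_no].append(energy)
    # events need not be uploaded in order, so sort each set
    return {k: sorted(v) for k, v in data.items()}

sample = [
    "0 m_[chi]_=_50_GeV",
    "1 dataset, 2 events, 12817.27 kg-day:",
    "1 1 5.37 keV;",
    "1 2 5.25 keV;",
]
print(parse_recoil_data(sample))  # → {1: [5.25, 5.37]}
```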
If any required datum or {\em data file} is missing, this omission will be detected automatically after the (re)submission; users will be reminded of it with a {\em red} block around the table. Note that, while {\em all data} in this table will be {\em reset} to the default values and should therefore be checked again and, where necessary, modified to the users' own choices, {\em only} the {\em missing} data file(s) and the {\em replacement(s)} of the uploaded file(s) will need to be uploaded. Once all the required data and uploaded data file(s) have been checked, users have only to click the ``{\tt Data analysis start}'' button and wait for the analyzed results for a few minutes. \section{Summary} In this article, I introduced a new simulation/data analysis code and its website for direct Dark Matter detection experiments. So far users have only a few options for the WIMP velocity distribution function as well as for the nuclear form factors; as a planned improvement, user--defined velocity distributions and form factors for users' own targets will be readable from an uploaded plain-text file in the future. Moreover, the choice of targets will also be extended to (at least) most of the currently running and projected experiments. \begin{theacknowledgments} The author would like to thank the ILIAS Project and the Physikalisches Institut der Universit\"at T\"ubingen for kindly providing the opportunity for this collaboration and the technical support of the {\tt AMIDAS} website. This work was partially supported by the BK21 Frontier Physics Research Division under project no.~BA06A1102 of the Korea Research Foundation. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} The tree-depth of a graph $G$ is the smallest size of a set $\{1,\dots,n\}$ of labels with which the vertices of $G$ may be labeled so that any path between two vertices with the same label contains a vertex receiving a higher label. Equivalently, the tree-depth is the minimum number of steps needed to delete all vertices from $G$ if at each step at most one vertex may be deleted from each current component. Tree-depth is denoted $\operatorname{td}(G)$ and has also been called the vertex ranking number or ordered chromatic number of $G$. For a sampling of results on tree-depth, and bibliographic references to other sources, see~\cite{BarNoyEtAl12,path,BodlaenderEtAl98,ChangEtAl10,IyerEtAl88,NesetrilOssonadeMendez12,NesetrilOssonadeMendez06,NesetrilOssonadeMendez08}. One fundamental property of tree-depth is its monotonicity under the graph minor relationship; as noted in~\cite{NesetrilOssonadeMendez12}, if $G$ is a minor of $H$, then $\operatorname{td}(G) \leq \operatorname{td}(H)$. If $M$ is a graph with tree-depth $k$ such that every proper minor of $M$ has tree-depth less than $k$, we say that $M$ is \emph{minor-critical}, or simply \emph{critical} or \emph{$k$-critical}. (For clarity, we note here that some authors use ``critical'' when discussing the context of subgraphs, rather than our present context of minors.) Because of the monotonicity of tree-depth under the minor relationship, it follows that every graph with tree-depth $k$ has a $k$-critical minor, and the graphs with tree-depth at most $k$ are characterized in terms of a list of forbidden minors; the minimal such list consists of all $(k+1)$-critical graphs. In~\cite{DGT}, Dvo{\v{r}}{\'a}k, Giannopoulou, and Thilikos initiated the study of critical graphs having small tree-depth. 
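The deletion characterization of tree-depth recalled above translates directly into a brute-force recursion: the tree-depth of a connected graph is one more than the minimum tree-depth over all single-vertex deletions, and the tree-depth of a disconnected graph is the maximum over its components. The following Python sketch (exponential-time, so only usable on small graphs, and purely illustrative) computes $\operatorname{td}$ this way:

```python
def components(vertices, adj):
    # Connected components of the subgraph induced on `vertices`.
    vertices, comps = set(vertices), []
    while vertices:
        stack, comp = [next(iter(vertices))], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend((adj[v] & vertices) - comp)
        vertices -= comp
        comps.append(comp)
    return comps

def treedepth(vertices, adj):
    # td(empty) = 0; td of a disconnected graph is the max over components;
    # td of a connected graph is 1 + min over single-vertex deletions.
    if not vertices:
        return 0
    comps = components(vertices, adj)
    if len(comps) > 1:
        return max(treedepth(c, adj) for c in comps)
    return 1 + min(treedepth(vertices - {v}, adj) for v in vertices)

# Examples: the path P4 has tree-depth 3 and the complete graph K4 has 4.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
K4 = {v: {u for u in range(4) if u != v} for v in range(4)}
print(treedepth(set(P4), P4), treedepth(set(K4), K4))  # → 3 4
```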
They determined all $k$-critical graphs for $k \leq 4$ and exhibited a construction of critical graphs from smaller ones; this construction is sufficient to construct all critical trees of any tree-depth. In examining the critical graphs with small tree-depth, a number of apparent patterns suggest themselves. We mention two conjectured relationships. \begin{conj} \label{conj: order, max degree} Let $G$ be a $k$-critical graph. \begin{enumerate} \item[\textup{(a)}] \textup{(\cite{DGT})} $G$ has at most $2^{k-1}$ vertices. \item[\textup{(b)}] \textup{(\cite{BarrusSinkovic15})} $G$ has maximum degree at most $k-1$. \end{enumerate} \end{conj} \noindent Both conjectures presently remain open in their full generality. In~\cite{BarrusSinkovic15}, the authors observed that item (b) above is true for a special class of graphs, the 1-unique graphs. A graph $G$ is \emph{1-unique} if for every vertex $v$ in $G$, there exists a labeling of the vertices of $G$ with labels from $\{1,\dots,\operatorname{td}(G)\}$ having the defining requirement of tree-depth and also having the property that $v$ is the unique vertex of $G$ receiving the label 1. A 1-unique $k$-critical graph must have maximum degree $k-1$, since the neighbors of any vertex labelled 1 must receive distinct labels. In~\cite{BarrusSinkovic15} the authors established some similarities between tree-depth criticality and 1-uniqueness. In particular, they showed that though 1-unique graphs need not be critical, if $G$ is any 1-unique graph, then $G$ has a subset of edges that can be removed so as to leave a spanning subgraph of $G$ that is $\operatorname{td}(G)$-critical. Furthermore, every critical graph with tree-depth at most 4 is 1-unique, and the authors noted that every critical graph constructed through a certain generalization of the algorithm in~\cite{DGT} is also 1-unique. These facts led the authors to the following. \begin{conj} \label{conj: false} Every critical graph is 1-unique. 
\end{conj} As it happens, Conjecture~\ref{conj: false} is false infinitely often, as we will shortly show. In Section 2 we discuss a computer search that found a number of counterexamples, one of which we generalize to an infinite family of non-1-unique critical graphs. In Section 3 we show that the edge subset that can be deleted from a 1-unique graph to leave a critical spanning subgraph may be arbitrarily large. These results have the effect of somewhat separating the properties of criticality and 1-uniqueness, which appeared in~\cite{BarrusSinkovic15} to be closely related. However, we will also show that the property of 1-uniqueness can be used to good effect in questions on criticality. In Section 4 we use 1-uniqueness to efficiently show that the Andr\'{a}sfai graphs are (1-unique) critical graphs. The examples and counterexamples considered in this paper follow a theme of dense graphs (where we take `dense' here to mean that the number of edges in $n$-vertex members of the family is at least a constant fraction of $\binom{n}{2}$). This is true for the family of counterexamples to Conjecture~\ref{conj: false} presented in Section 2, as well as for the families of graphs in Section 3 that illustrate differences between the number of edges in 1-unique graphs and their critical subgraphs. These examples stand in contrast to the 1-unique critical trees and other often-sparse critical graphs shown and constructed in~\cite{DGT} and~\cite{BarrusSinkovic15}, which motivated Conjecture~\ref{conj: false}. On the other hand, the relationship between tree-depth criticality and 1-uniqueness cannot be simply explained by sparseness, as in Section 2 we show that the $(n-1)$-critical graphs are both dense and 1-unique, and the Andr\'{a}sfai graphs in Section 4 are likewise examples of dense, 1-unique, critical graphs. We define a \emph{labeling} of a graph $G$ to be an assignment of the vertices of $G$ with the positive integers. 
If every path between any two vertices with the same label contains a vertex having a higher label, we call the labeling \emph{feasible}; thus $\operatorname{td}(G)$ is the smallest number of labels necessary for a feasible labeling. We use $V(G)$ and $N_G(v)$ to denote the vertex set of $G$ and the (open) neighborhood in $G$ of vertex $v$, respectively. The complete graph and cycle with $n$ vertices are referred to respectively as $K_n$ and $C_n$. The Cartesian product of graphs $G$ and $H$ is denoted by $G \Box H$, the disjoint union of $G$ and $H$ is denoted by $G+H$, and the disjoint union of $m$ copies of $G$ is denoted by $mG$. \section{Critical graphs with high tree-depth} In this section we consider graphs whose tree-depth differs from the number of vertices by at most a fixed constant. The extremal example is the complete graph $K_n$, which is the unique graph on $n$ vertices where these parameters are equal. We show that graphs with similar relatively high tree-depths form hereditary classes of graphs. These results will help at the end of this section, where we prove a special case of Conjecture~\ref{conj: false}, that critical graphs with tree-depth almost as high as that of $K_n$ are 1-unique. The results will also be useful in the next section, where we separate the notions of tree-depth criticality and 1-uniqueness. A graph class is hereditary, i.e., closed under taking induced subgraphs, if and only if it can be characterized by a collection of forbidden induced subgraphs. We show that this is true of graphs with relatively high tree-depths compared to their respective numbers of vertices. \begin{thm} \label{thm: high td hereditary} Given a nonnegative integer $k$, let $\mathcal{F}_k$ denote the set of minimal elements, under the induced subgraph ordering, of all graphs $H$ for which $n(H)- \operatorname{td}(H) = k+1$. For all graphs $G$, the graph $G$ satisfies $\operatorname{td}(G) \geq n(G) - k$ if and only if $G$ is $\mathcal{F}_k$-free.
\end{thm} To illustrate this theorem, we identify a few special cases, some of which will be useful later in the paper. \begin{lem} \label{lem: high td forb sub lists} Let $G$ be a graph with $n$ vertices. \begin{enumerate} \item[\textup{(a)}] The graph $G$ satisfies $\operatorname{td}(G) = n$ if and only if $G$ is $\{2K_1\}$-free. \item[\textup{(b)}] The graph $G$ satisfies $\operatorname{td}(G) \geq n-1$ if and only if $G$ is $\{3K_1,2K_2\}$-free. \item[\textup{(c)}] The graph $G$ satisfies $\operatorname{td}(G) \geq n-2$ if and only if $G$ is $\{4K_1,2K_2+K_1,P_3+K_2,2K_3\}$-free. \end{enumerate} \end{lem} We remark in passing that Lemma~\ref{lem: high td forb sub lists} suggests that the graphs with nearly equal tree-depths and orders are necessarily dense graphs. To prove Theorem~\ref{thm: high td hereditary} and Lemma~\ref{lem: high td forb sub lists}, we first develop some terminology and a few preliminary results. For any graph $G$, define the \emph{surplus} $s(G)$ by $s(G) = n(G) - \operatorname{td}(G)$. Clearly $\operatorname{td}(G) \geq n-k$ if and only if $s(G) \leq k$, and the graphs in $\mathcal{F}_k$ are the minimal graphs under induced subgraph inclusion for which the surplus is $k+1$. We now show that $s(G)$ is well-behaved under taking induced subgraphs. \begin{proof}[Proof of Theorem~\ref{thm: high td hereditary}] We prove the equivalent statement that $G$ has no induced subgraph with surplus $k+1$ if and only if $s(G)\leq k$. Suppose first that $s(G) > k$. If $s(G)=k+1$ then $G$ clearly has at least one induced subgraph with surplus $k+1$. If $s(G)>k+1$, then let $v_1,\dots,v_n$ denote the vertices of $G$ under some arbitrary ordering. If we define $G_0=G$ and $G_i = G - \{v_1,\dots,v_i\}$ for each $i \in \{1,\dots,n-1\}$, then for $i\geq 1$ each $G_i$ is obtained by deleting a vertex from $G_{i-1}$.
Since deleting a vertex from a graph either leaves the tree-depth unchanged or lowers it by one, it follows that $s(G_i) = s(G_{i-1})-1$ or $s(G_i)=s(G_{i-1})$. Since $s(G) > k+1$ and $G_{n-1}\cong K_1$ and hence $s(G_{n-1})=0$, it follows that $s(G_j)=k+1$ for some $j \in \{1,\dots,n-2\}$. Hence $G$ has an induced subgraph with surplus $k+1$. Suppose instead that $G$ has an induced subgraph $H$ with surplus $k+1$. As described in the previous paragraph, deleting a single vertex from a graph either maintains the present value of the surplus or decreases it by 1, so if we imagine deleting vertices from $V(G)\setminus V(H)$ to arrive at the induced subgraph $H$, we have $s(G) \geq s(H)=k+1$, so $s(G)>k$. \end{proof} Before proving Lemma~\ref{lem: high td forb sub lists}, we develop a useful type of labeling. Call a feasible labeling of a graph \emph{reduced} if no label appearing on more than one vertex has a higher value than a label appearing on at most one vertex. In other words, in a reduced labeling, the repeated labels are the lowest. \begin{lem}\label{lem: reduced labelings} \mbox{} \begin{enumerate} \item[(1)] Every graph $G$ has an optimal reduced labeling. \item[(2)] If $\gamma$ is an optimal reduced labeling of $G$, and $H$ is the induced subgraph of $G$ consisting of all vertices sharing their label with some other vertex of $G$, then the restriction of $\gamma$ to $H$ is an optimal labeling of $H$; hence $s(H) = s(G)$ and $\operatorname{td}(H) \leq s(H)$. \end{enumerate} \end{lem} \begin{proof} Let $\gamma$ be a feasible labeling of $G$ using $\operatorname{td}(G)$ labels, and let $\ell_1,\dots,\ell_k$ denote the labels appearing on more than one vertex in $\gamma$, ordered so that $\ell_1<\dots<\ell_k$. 
Construct an alternate labeling $\delta$ of the vertices of $G$ as follows: for $i \in \{1,\dots,k\}$, assign label $i$ to each of the vertices having label $\ell_i$ in $\gamma$; then label the remaining unlabeled vertices arbitrarily but injectively with labels from $\{k+1,\dots,\operatorname{td}(G)\}$. We claim that $\delta$ is an optimal feasible labeling of $G$. First, note that $\delta$ uses the same number of labels as the optimal labeling $\gamma$. Next note that any path in $G$ between vertices with the same label from $\delta$ corresponds to a path joining vertices with the same label from $\gamma$. In $\gamma$ some vertex $v$ on this path received a higher label. If $v$ did not share its $\gamma$-label with another vertex in $G$, then in the labeling $\delta$ the vertex $v$ receives a label higher than $k$, while the endpoints of the path receive a label less than or equal to $k$. If $v$ does share its $\gamma$-label with another vertex in $G$, then by construction since $v$'s label in $\gamma$ was higher than that of the path endpoints', this same relationship holds between the vertex labels in $\delta$. We conclude that $\delta$ is a feasible labeling of $G$; hence $G$ has an optimal labeling that is reduced, establishing (1). Supposing now that $\gamma$ is a reduced optimal labeling of $G$, let $H$ be the graph formed by deleting from $G$ all vertices not sharing their label with some other vertex of $G$. Let $\gamma'$ denote the restriction of $\gamma$ to the graph $H$. We claim that $\gamma'$ is an optimal labeling of $H$. First, observe that if $H$ has an optimal labeling $\beta$ using fewer colors than $\gamma'$, then we may modify the labeling $\gamma$ of $G$ by replacing the labels on vertices of $H$ with the labels from $\beta$. Since $\gamma$ was a reduced labeling, the resulting modification is still a feasible labeling and uses fewer than $\operatorname{td}(G)$ labels, a contradiction. 
On the other hand, it is simple to verify that $\gamma'$ is a feasible labeling of $H$, so in fact $\operatorname{td}(H) = \operatorname{td}(G)-(n(G)-n(H))$, and $s(H)=s(G)$. Finally, consider a subset $W$ of $V(G)$ containing, for each label used by $\gamma$, a single vertex of $G$ receiving that label. The labels appearing on vertices in $H$ are precisely the labels $\gamma$ assigns to vertices in $V(G)-W$, so $\operatorname{td}(H) \leq s(G) = s(H)$, establishing (2). \end{proof} Given a graph $G$ and a feasible labeling $\gamma$ of $G$ as in Lemma~\ref{lem: reduced labelings}, let the graph $H$ described in the lemma be the \emph{irreducible core of $G$ under $\gamma$}. Further call $G$ \emph{irreducible under $\gamma$} if every label of $\gamma$ appears on at least two vertices of $G$. Note that any graph that is irreducible under $\gamma$ is disconnected, since there is at most one vertex per component having the maximum label in any feasible labeling. \begin{proof}[Proof of Lemma~\ref{lem: high td forb sub lists}] Using the language of Theorem~\ref{thm: high td hereditary}, we need simply show that $\mathcal{F}_0=\{2K_1\}$, $\mathcal{F}_1=\{3K_1,2K_2\}$, and $\mathcal{F}_2=\{4K_1,2K_2+K_1,P_3+K_2,2K_3\}$. Note first that $s(2K_1) = 1$, that $s(3K_1) = s(2K_2) = 2$, and that for each $G \in \{4K_1,2K_2+K_1,P_3+K_2,2K_3\}$ we have $s(G) = 3$. It is straightforward to check that if $H$ is any proper induced subgraph of any one of the graphs listed in the statement of the lemma, then the surplus of $H$ is less than the surplus of the graph. Hence $\{2K_1\}\subseteq \mathcal{F}_0$, $\{3K_1,2K_2\}\subseteq \mathcal{F}_1$, and $\{4K_1,2K_2+K_1,P_3+K_2,2K_3\} \subseteq \mathcal{F}_2$. Suppose now that $G$ is a graph for which $s(G) \in \{1,2,3\}$, and let $H$ be the irreducible core of $G$ under some optimal labeling $\gamma$. 
By Lemma~\ref{lem: reduced labelings} the restriction of $\gamma$ to $H$ is an optimal labeling of $H$, and $\operatorname{td}(H)\leq s(H)=s(G)$. Observe that $H$ contains at least $s(G)+1$ vertices, and every label in $H$ appears on at least two vertices. If $s(G)=1$, then we claim that $H$ contains $2K_1$ as an induced subgraph. Indeed, $H$ contains at least two vertices receiving the same label under $\gamma$. Since $\gamma$ is a proper coloring of the vertices of $H$, these two vertices are nonadjacent, and thus $H$ and $G$ induce $2K_1$. We conclude that $\mathcal{F}_0=\{2K_1\}$. If $s(G)=2$, then we claim that $H$ contains an induced subgraph from $\{3K_1,2K_2\}$. If $\operatorname{td}(H)=1$, then since $H$ has at least three vertices, $H$ contains $3K_1$ as an induced subgraph. If instead $\operatorname{td}(H)=2$, then since every label appears on at least two vertices of $H$, the graph $H$ contains four vertices $a,b,c,d$ such that $a$ and $b$ both receive label 1 and $c$ and $d$ receive label 2. We also know that $H$ is $\{P_4,C_4,K_3\}$-free, since $\operatorname{td}(H)<3$. Considering all 4-vertex graphs directly, we see that $H[\{a,b,c,d\}]$ either induces $3K_1$ or is isomorphic to $2K_2$. Hence $\mathcal{F}_1=\{3K_1,2K_2\}$. If $s(G)=3$, we claim that $H$ induces an element of $\{4K_1,2K_2+K_1,P_3+K_2,2K_3\}$. Observe that if $\operatorname{td}(H)=1$, then as $H$ has at least four vertices, $4K_1$ is an induced subgraph. If $\operatorname{td}(H)=2$, then $H$ is a forest of stars; we can also say that $H$ has five vertices, and it has at least two components that have an edge. The only two such graphs are $2K_2+K_1$ and $P_3+K_2$, so $H$ is one of these. If $\operatorname{td}(H)=3$, then $H$ has six vertices. 
Since $H$ is the irreducible core of $G$ under $\gamma$, it follows that $H$ consists of exactly two components, each having tree-depth 3; the only such graph on six vertices is $2K_3$, so $\mathcal{F}_2=\{4K_1,2K_2+K_1,P_3+K_2,2K_3\}$, and the proof is complete. \end{proof} We turn our attention now to critical graphs. As we do so, it is interesting to observe that Theorem~\ref{thm: high td hereditary} provides a contrasting result on how substructures affect tree-depth. Critical graphs are the minors that force a graph to have \emph{higher} tree-depth, while as in the proof of Theorem~\ref{thm: high td hereditary} the graphs in $\mathcal{F}_k$ are precisely the subgraphs that allow for labelings with a \emph{smaller} number of labels. We recall a definition and result from~\cite{BarrusSinkovic15}. Given a vertex $v$ in a graph $G$, a \emph{star-clique transform at $v$} removes $v$ from $G$ and adds edges between the vertices in $N_G(v)$ so as to make them a clique. \begin{thm}\label{thm:starclique}(\cite{BarrusSinkovic15}) Let $v$ be a vertex of a graph $G$, and let $H$ be the graph obtained through the star-clique transform at $v$ of $G$. Vertex $v$ is $1$-unique in $G$ if and only if $\operatorname{td}(H)<\operatorname{td}(G)$. \end{thm} \begin{lem} \label{lem: star-clique on 3K1,2K2 free} If $G$ is a $\{3K_1,2K_2\}$-free graph and $G'$ is obtained via a star-clique transform of $G$, then $G'$ is also $\{3K_1,2K_2\}$-free. \end{lem} \begin{proof} We prove the contrapositive. Let $G'$ be a graph that is not $\{3K_1,2K_2\}$-free and that is obtained from $G$ by a star-clique transform at vertex $v$. By the definition of a star-clique transform, $V(G')=V(G-v)$ and $E(G-v)\subset E(G')$. Thus, if $3K_1$ is induced in $G'$, then it is induced in $G-v$. Since any induced subgraph of $G-v$ is also induced in $G$, $3K_1$ is induced in $G$ as well. Similarly, if $2K_2$ is induced in $G'$ and $G-v$, then it is induced in $G$. 
Thus we consider the case where $2K_2$ is induced in $G'$ but not in $G-v$. Let $H$ be such a subgraph of $G'$. At least one edge of $H$ is in $E(G')\setminus E(G-v)$. Since $E(G')$ and $E(G-v)$ differ only in the edges joining vertices in $N_G(v)$, $|N_G(v)\cap V(H)|\geq 2$. Since $N_G(v)$ induces a clique in $G'$, $|N_G(v)\cap V(H)|\leq 2$. Thus $H$ has two adjacent vertices in $N_G(v)$ and two adjacent vertices not in $N_G(v)$. The latter two vertices, together with a third vertex from $H$ and the vertex $v$, induce $2K_2$ in $G$. \end{proof} \begin{thm} \label{thm: n-1 critical is 1-unique} If $G$ is a critical graph with $n$ vertices and $\operatorname{td}(G) \geq n-1$, then $G$ is 1-unique. \end{thm} \begin{proof} The only $n$-vertex graph with tree-depth $n$ is $K_n$, which is 1-unique, so suppose that $G$ is $(n-1)$-critical. If $G$ is not 1-unique, then by Theorem~\ref{thm:starclique} there exists a vertex $v$ in $G$ such that performing a star-clique transform on $G$ at $v$ results in a graph $G'$ such that $\operatorname{td}(G') \geq \operatorname{td}(G)$. Then $n-1=\operatorname{td}(G) \leq \operatorname{td}(G') \leq n(G')=n-1$, so $\operatorname{td}(G')=n(G')$, and in fact $G'$ is a complete graph. It follows that in $G$ each vertex not in $N_G(v)$ is adjacent to every vertex of $N_G(v)$. Now let $e$ be an edge incident with $v$ in $G$. Since $G$ is $(n-1)$-critical, $\operatorname{td}(G-e)=n-2$, and by Lemma~\ref{lem: high td forb sub lists}, $G-e$ contains an induced $3K_1$ or $2K_2$ while $G$ contains neither. In $G$ the edge $e$ is either the central edge in an induced $P_4$ or the edge in an induced $K_2+K_1$. Both of these possibilities lead to contradictions, however; if $v$ is a midpoint of an induced $P_4$, then the non-neighbor of $v$ on the path belongs to $V(G)-N_G(v)-\{v\}$ and hence is adjacent to every vertex of $N_G(v)$, including both neighbors of $v$ on the path, a contradiction.
If instead $v$ is an endpoint of the edge in an induced $K_2+K_1$, then $v$'s neighbor and non-neighbor are non-adjacent, a contradiction, since vertices in $N_G(v)$ and vertices outside of $N_G(v)$ are adjacent to each other. We conclude that $n$-vertex, $(n-1)$-critical graphs are 1-unique. \end{proof} \section{Separating 1-uniqueness and criticality} As mentioned in the introduction, the property of 1-uniqueness leads a graph to have many properties in common with critical graphs. In particular, although the converse of Conjecture~\ref{conj: false} is false, in~\cite{BarrusSinkovic15} the authors showed that the following is true. \begin{thm}(\cite{BarrusSinkovic15}) Let $G$ be a connected 1-unique graph with tree-depth $k$. \begin{enumerate} \item[\textup{(a)}] If $v$ is any vertex of $G$, then $\operatorname{td}(G-v) < k$. \item[\textup{(b)}] If $e$ is any edge of $G$, and $G'$ is obtained from $G$ by contracting edge $e$, then $\operatorname{td}(G') < k$. \item[\textup{(c)}] There exists a set $S$ of edges of $G$ such that $G-S$ is a $k$-critical graph. \end{enumerate} \end{thm} Given that the operations in parts (a) and (b) are two of the operations allowed in obtaining a minor of a graph, with the third operation being edge deletion, as touched on in part (c), it is natural to wonder if some limit can be imposed on the size of the edge set $S$; if so, 1-unique graphs would in one sense be ``close'' to being critical. Additional developments detailed in~\cite{BarrusSinkovic15}, as well as the positive result in Theorem~\ref{thm: n-1 critical is 1-unique} above, suggest the question posed in Conjecture~\ref{conj: false} of whether every critical graph is 1-unique. In this section we provide negative answers to both questions. 
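Before proceeding, we note that the star-clique transform used above is easy to experiment with computationally. The following sketch is our own illustration (not code from~\cite{BarrusSinkovic15}), operating on graphs stored as dictionaries mapping each vertex to its set of neighbors; the function name and representation are ours.

```python
# A minimal sketch of the star-clique transform at v:
# delete v and join all former neighbors of v into a clique.
from itertools import combinations

def star_clique_transform(adj, v):
    """adj: dict mapping each vertex to the set of its neighbors."""
    nbrs = adj[v]
    # Fresh neighbor sets, with v removed everywhere.
    new_adj = {u: adj[u] - {v} for u in adj if u != v}
    # Make the former neighborhood of v a clique.
    for a, b in combinations(nbrs, 2):
        new_adj[a].add(b)
        new_adj[b].add(a)
    return new_adj

# Transforming the star K_{1,3} at its center yields the triangle K_3.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
triangle = star_clique_transform(star, 0)
```

Combined with any tree-depth computation, this gives a direct test of 1-uniqueness of a vertex via Theorem~\ref{thm:starclique}.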
We begin in Section~\ref{subsec: cycle comp} by showing that no constant bound on $|S|$ is possible, in the sense that we show that there exist dense 1-unique graphs with arbitrarily many more edges than certain critical spanning subgraphs. In Section~\ref{subsec: counterexamples} we then exhibit an infinite family demonstrating the existence of non-1-unique, $k$-critical graphs for all $k \geq 5$. \subsection{1-uniqueness and critical spanning subgraphs} \label{subsec: cycle comp} To see that 1-unique graphs exist from which arbitrarily many edges may be deleted without lowering the tree-depth, consider the complements of cycles. \begin{thm} For every integer $n \geq 5$, the graph $\overline{C_n}$ has tree-depth $n-1$. Moreover, $\overline{C_n}$ is 1-unique. \end{thm} \begin{proof} Observe that for $n \geq 5$, the graph $\overline{C_n}$ is not complete and contains no induced $3K_1$ or $2K_2$; from Lemma~\ref{lem: high td forb sub lists}, we conclude that $\operatorname{td}(\overline{C_n})=n-1$. We now exhibit a 1-unique labeling of $\overline{C_n}$. With the vertices of $\overline{C_n}$ denoted by $1,\dots,n$ as before, assign vertices 1 and 2 the labels 1 and 2, respectively, and for $3 \leq i \leq n$, assign vertex $i$ the label $i-1$. Vertices 2 and 3 are the only vertices receiving the same label, and it is easy to verify that every path between these two vertices passes through a vertex receiving a higher label. Thus the labeling described is feasible, and since $\overline{C_n}$ is vertex-transitive, $\overline{C_n}$ may be feasibly labeled so that any desired vertex receives the unique 1 in the labeling. \end{proof} Having established that $\overline{C_n}$ is 1-unique, we now define another class of graphs. Let $n=4k$, where $k$ is an integer greater than 1. Let $G_n$ denote the graph obtained from $\overline{C_n}$, with vertices denoted by $1,\dots,n$ as before, by deleting all edges of the form $\{2j,2j+2k\}$, where $j \in \{1,\dots,k\}$. 
In words, we form $G_n$ by proceeding along the vertices in order, alternately deleting and leaving alone the edges of $\overline{C_n}$ that correspond to pairs of antipodal vertices in $C_n$. The graph $G_{12}$ is illustrated in Figure~\ref{fig: G3}. \begin{figure} \centering \includegraphics[width=2in]{C12CompDel3Edges.pdf} \caption{The graph $G_{12}$.} \label{fig: G3} \end{figure} It is straightforward to verify that the complement of $G_n$, which is a cycle in which chords join alternate pairs of antipodal vertices, contains no triangle or 4-cycle; hence $G_n$ is $\{3K_1,2K_2\}$-free, and by Lemma~\ref{lem: high td forb sub lists} we conclude that $\operatorname{td}(G_n) = n-1 = \operatorname{td}(\overline{C_n})$. Noting that $G_n$ has exactly $k$ edges fewer than $\overline{C_n}$ yields the following. \begin{thm} \label{thm: arbitrarily many edges} For any $k \geq 2$ there exists a 1-unique graph and a spanning subgraph with the same tree-depth but having at least $k$ fewer edges. Hence it is possible to delete arbitrarily many edges from a 1-unique graph before obtaining a critical subgraph. \end{thm} \subsection{Non-1-unique critical graphs} \label{subsec: counterexamples} Having shown in Section~\ref{subsec: cycle comp} that in at least one sense there may be a big difference between 1-unique graphs and critical graphs, we now present a number of counterexamples to Conjecture~\ref{conj: false} that all critical graphs are 1-unique. Portions of these results were presented in~\cite{unpub}. \begin{figure} \centering \includegraphics[width=3.5in]{FourNon1UniqueGraphs.pdf} \caption{Some 5-critical graphs with non-1-unique vertices.} \label{fig: non-1-unique examples} \end{figure} The four graphs in Figure~\ref{fig: non-1-unique examples} are each 5-critical, but in each graph the labeled vertex is not 1-unique. (Instead, the label indicates the smallest label the indicated vertex $v$ can receive in an optimal labeling where $v$ shares its label with no other vertex.)
These graphs and others were found using the open source software SageMath, based on an algorithm that uses the ideas of $t$-uniqueness to search for 5-critical graphs, which we now describe briefly. The algorithm considers a graph $G$ from SageMath's database of small graphs. After determining that $G$ has tree-depth 5, for each vertex $v$ in $G$ the algorithm finds the smallest value of $t$ for which $v$ is $t$-unique, if such a value exists; it does this by examining every feasible labeling of $G$ with 5 labels. If each vertex of $G$ has such a value $t$, then the graph is induced-subgraph-critical~\cite{BarrusSinkovic15}. The subgraph-critical graphs are found from the induced-subgraph-critical graphs by determining whether the tree-depth decreases upon deletion of any single edge. Critical graphs are found among the subgraph-critical graphs by testing edge contractions; our tests are simplified by a result in~\cite{BarrusSinkovic15} that assures us that it suffices to restrict our tests to edges not incident with any $1$-unique vertex. Note now that if a graph $G$ is subgraph-critical with at most one vertex which is not $1$-unique, then $G$ is critical. Indeed, all 5-critical counterexamples to Conjecture~\ref{conj: false} with 9 or fewer vertices, like the ones in Figure~\ref{fig: non-1-unique examples}, have exactly one vertex which is not $1$-unique. We can expand the families of counterexamples to include graphs with tree-depths other than 5; in fact, the family we will present contains a non-1-unique $k$-critical graph for any $k \geq 5$; furthermore, the tree-depth of such a graph can differ from the order of the graph by as little as 2, showing that the bound in Theorem~\ref{thm: n-1 critical is 1-unique} cannot be improved. Before describing the family of counterexamples we need a few simple preliminaries.
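Searches of this kind rest on computing tree-depth exactly for small graphs. A hedged sketch of such a computation (our own, not the authors' SageMath code) uses the standard recursion: $\operatorname{td}(K_1)=1$; $\operatorname{td}(G)$ is the maximum over components when $G$ is disconnected; and $\operatorname{td}(G)=1+\min_v \operatorname{td}(G-v)$ when $G$ is connected. This is exponential in the number of vertices but adequate for the graph orders considered here.

```python
# Exponential-time tree-depth for small graphs, via the recursion
# td(K1) = 1; td = max over components if disconnected;
# td(G) = 1 + min over vertices v of td(G - v) if connected.
from functools import lru_cache

def tree_depth(adj):
    def components(vs):
        vs, comps = set(vs), []
        while vs:
            stack, comp = [next(iter(vs))], set()
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(adj[u] & vs - comp)
            vs -= comp
            comps.append(frozenset(comp))
        return comps

    @lru_cache(maxsize=None)      # memoize on frozen vertex sets
    def td(vs):
        if not vs:
            return 0
        comps = components(vs)
        if len(comps) > 1:
            return max(td(c) for c in comps)
        if len(vs) == 1:
            return 1
        return 1 + min(td(vs - {v}) for v in vs)

    return td(frozenset(adj))

p4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}  # td(P4) = 3
```

Deleting or contracting each edge in turn and recomputing tree-depth then gives a direct (if slow) criticality test.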
First, for any positive integer $k$, define a \emph{$k$-net} to be the graph constructed by attaching a single pendant vertex to each vertex of the complete graph $K_k$. The following fact is a special case of Lemma~2.7 in~\cite{unpub}. \begin{lem}\label{lem:k-net} A $k$-net has tree-depth $k+1$. \end{lem} Next is a result on the tree-depth of the Cartesian product of a complete graph with $K_2$. \begin{lem}\label{lem: Ka Box K2} For any positive integer $a$, the graph $K_a \Box K_2$ has tree-depth $\lceil 3a/2\rceil$. \end{lem} \begin{proof} The claim is easily verified for $a \in \{1,2\}$, so suppose $a \geq 3$. Let $V_1$ and $V_2$ denote the disjoint vertex sets of the two induced copies of $K_a$ in $K_a \Box K_2$. We may group the vertices of $K_a \Box K_2$ into pairs $\{u,u'\}$, where $uu'$ is an edge and $u$ and $u'$ are elements of $V_1$ and $V_2$, respectively. Any cutset $T$ in $K_a \Box K_2$ must contain at least one vertex from each such pair, so $|T| \geq a$. Moreover, since $K_a \Box K_2$ has independence number 2, after deleting any cutset the resulting graph has exactly two components, which must be complete subgraphs, the larger of which has at least $a - |T|/2$ vertices. It follows that the tree-depth of $K_a \Box K_2$ is at least $a + |T|/2$, which is at least $\lceil 3a/2\rceil$. To demonstrate equality, let $T$ be a subset of $V(K_a \Box K_2)$ consisting of $\lfloor a/2 \rfloor$ vertices from $V_1$ and $\lceil a/2 \rceil$ vertices from $V_2$, with no vertex in $T \cap V_1$ adjacent to any vertex in $T \cap V_2$. Label the vertices in $T$ injectively with labels from $\{\lceil a/2 \rceil+1, \dots, \lceil 3a/2\rceil\}$, and in each of $V_1 - T$ and $V_2 - T$, injectively label the vertices with $\{1,\dots,\lceil a/2 \rceil\}$. It is straightforward to verify that this is a feasible labeling using the appropriate number of colors. 
\end{proof} \begin{thm} For any $n \geq 4$, let $H_n$ be the graph obtained by subdividing (once) all edges incident with a single vertex $v$ of $K_{n}$. The graph $H_n$ is $(n+1)$-critical but not $1$-unique; moreover, $v$ is the only non-1-unique vertex in $H_n$. \end{thm} \begin{proof} In the following, let $A_n$ denote the vertices of degree $2$ adjacent to $v$ in $H_n$, and let $B_n$ denote $V(H_n)-v-A_n$; note that the $n-1$ vertices in $B_n$ form a clique in $H_n$, and each vertex in $B_n$ is adjacent exactly to the other vertices of $B_n$ and to a single vertex in $A_n$ (with each vertex in $A_n$ having a single neighbor in $B_n$). To see that $\operatorname{td}(H_n) \leq n+1$, injectively label the vertices of $B_n$ with labels $2,\dots,n$, label each vertex in $A_n$ with $1$, and label $v$ with $n+1$. Under this labeling only vertices in $A_n$ receive a common label, and each path joining two vertices in $A_n$ contains a vertex outside $A_n$, which has a higher label than $1$. For convenience in proving that $\operatorname{td}(H_n) \geq n+1$, we now construct a graph $H_3$ in the same way that $H_n$ is defined for $n \geq 4$; note that $H_3$ is isomorphic to $C_5$. By induction we show that $\operatorname{td}(H_n) \geq n+1$ for all $n \geq 3$. Observe that $H_3$ has tree-depth $4$, as desired. Now suppose that for some integer $k \geq 3$ we have $\operatorname{td}(H_k) \geq k+1$, and consider the result of deleting a vertex from $H_{k+1}$. If the vertex deleted is $v$, the remaining graph is isomorphic to a $k$-net, which by Lemma~\ref{lem:k-net} has tree-depth $k+1$. Deleting any vertex from $A_{k+1}$ or from $B_{k+1}$, along with its neighbor in the other set, leaves a copy of $H_k$, which by our induction hypothesis has tree-depth at least $k+1$. Thus $\operatorname{td}(H_{k+1}) \geq 1 + \operatorname{td}(H_k) \geq (k+1)+1$, as desired. We now show that $H_n$ is critical.
Note that if $u$ is any vertex in $A_n$, and if $w$ is the neighbor of $u$ in $B_n$, then each of $H_n - uv$ and $H_n-uw$ may be feasibly colored by labeling $v$ and $w$ with $2$, labeling all of $A_n$ with $1$, and injectively labeling the vertices of $B_n - w$ with colors from $\{3,\dots,n\}$. If $w,w'$ are vertices in $B_n$, we feasibly color $H_n - ww'$ by labeling $w$ and $w'$ with $1$, labeling all vertices in $A_n$ with $2$, labeling $v$ with $3$, and injectively labeling the vertices of $B_n - \{w,w'\}$ with colors from $\{4,\dots,n\}$. Contracting an edge of $H_n$ that is incident with a vertex in $A_n$ yields a graph isomorphic to that obtained by adding to $H_{n-1}$ a vertex $w'$ adjacent to the analogous vertex $v$ and to all vertices of $B_{n-1}$; we feasibly color this graph by labeling $w'$ and all vertices in $A_{n-1}$ with $1$, labeling $v$ with $2$, and injectively labeling vertices in $B_{n-1}$ with $\{3,\dots,n\}$. Contracting an edge $w_1w_2$ of $H_n$, where $w_1,w_2 \in B_n$, yields a graph that can be feasibly colored in the following way: label all vertices of $A_n$ with 1, label the vertex replacing $w_1$ and $w_2$ with 2, label $v$ with 3, and injectively label the vertices of $B_n-\{w_1,w_2\}$ with $\{4,\dots,n\}$. Having shown now that deleting or contracting any edge results in a graph with smaller tree-depth (and, it follows, the same holds if we delete any vertex), we conclude that $H_n$ is $(n+1)$-critical. Now by Theorem~\ref{thm:starclique}, $v$ will be 1-unique if and only if performing a star-clique transform at $v$ in $H_n$ yields a graph with a lower tree-depth. Observe that a star-clique transform on $v$ actually yields $K_{n-1} \Box K_2$. By Lemma~\ref{lem: Ka Box K2}, $\operatorname{td}(K_{n-1}\Box K_2) = \lceil \frac{3}{2}(n-1)\rceil$, which is at least $n+1$ for $n \geq 4$, rendering $v$ non-1-unique.
Note, however, that a star-clique transform on any vertex of $A_n$ or $B_n$ has the same effect as contracting an edge between $A_n$ and $B_n$ in $H_n$, which lowers the tree-depth, as we verified above; hence all vertices of $H_n$ other than $v$ are 1-unique. \end{proof} \section{A family of dense 1-unique critical graphs} In this section we present a pleasing family of graphs that have appeared in the literature but were previously not known to be critical with respect to tree-depth. Though the previous two sections have established results separating criticality from 1-uniqueness, our proof in this section will use 1-uniqueness to efficiently establish criticality. For any positive integer $k$, the \emph{Andr\'{a}sfai graph} $\And(k)$ is defined to be the graph with vertex set $V=\{0,\dots,3k-2\}$ where edges are defined to be pairs $i,j$ (assume that $i>j$) such that $i-j$ is congruent to 1 modulo 3. The graph $\And(5)$ is shown in Figure~\ref{fig: And(5)}. Andr\'{a}sfai graphs are discussed in~\cite{Andrasfai64,GodsilRoyle}. It is easy to see that $\And(k)$ is a circulant graph and a Cayley graph. As we will see, these graphs also have pleasing properties regarding tree-depth. \begin{figure} \centering \includegraphics[width=2in]{AND5.pdf} \caption{The Andr\'{a}sfai graph $\And(5)$.} \label{fig: And(5)} \end{figure} In the following, let $\And(k)-v$ denote the graph obtained by deleting a vertex from $\And(k)$; since $\And(k)$ is vertex-transitive, this graph is well-defined up to isomorphism. \begin{lem}\label{lem: Andrasfai connectivity} For all $k \geq 1$, the graph $\And(k)$ is $k$-connected. \end{lem} \begin{proof} By Menger's Theorem it suffices to show that between any two vertices in $\And(k)$ there are at least $k$ pairwise internally disjoint paths. We prove this by induction on $k$. When $k=1$, the graph $\And(1) = K_2$, and there is clearly a path between the two vertices. 
Suppose now that for some integer $j\geq 1$, the graph $\And(j)$ has $j$ pairwise internally disjoint paths between any two distinct vertices. We now consider the number of internally disjoint paths between an arbitrary pair of vertices in $\And(j+1)$. Since this graph is vertex-transitive, without loss of generality we may assume that 0 is one of the vertices of the pair; if $a$ denotes the other vertex, then by symmetry we may assume that $1 \leq a \leq 3j-2$. Observe that the induced subgraph on vertices $\{0,\dots,3j-2\}$ is isomorphic to $\And(j)$, so by the induction hypothesis there exists a set of at least $j$ pairwise internally disjoint paths joining $0$ and $a$ and using only vertices from $\{0,\dots,3j-2\}$. Now note that vertices $0$ and $a$ each have a neighbor in $\{3j-1,3j,3j+1\}$, so we may find a path from $0$ to $a$ whose internal vertices are drawn from this set; this path is necessarily internally disjoint from each of the earlier $j$ paths. Thus $\And(j+1)$ contains at least $j+1$ pairwise internally disjoint paths between any two vertices; by induction our proof is complete. \end{proof} We now define a useful labeling of the vertices of $\And(k)$. \begin{defn} The \emph{standard labeling} of $\And(k)$ is a function $r: V(\And(k)) \to \{1,2,\dots,2k\}$ given as follows: \[r(x) = \begin{cases} 1 & \textup{if } x = 0;\\ 2 & \textup{if }x>0 \text{ and }x \equiv 0 \!\!\!\!\pmod 3;\\ \frac{2x+4}{3} & \textup{if }x \equiv 1 \!\!\!\!\pmod 3;\\ \frac{2x+5}{3} & \textup{if }x \equiv 2 \!\!\!\!\pmod 3. \end{cases}\] In words, the standard labeling assigns label 1 to vertex 0, labels vertices $1,2,4,5,7,8,\dots,3k-2$ (i.e. all vertices that are not multiples of 3) in order, injectively, with the labels $2,3,\dots,2k$, and assigns label 2 to all other vertices.
\end{defn} \begin{figure} \centering \includegraphics[width=2in]{AND5StandardLabel.pdf} \caption{The standard labeling of $\And(5)$.} \label{fig: And(5) labeled} \end{figure} Figure~\ref{fig: And(5) labeled} shows a standard labeling of $\And(5)$. Observe that in the standard labeling of $\And(k)$ the only repeated label is 2, and no two vertices with label 2 are adjacent (since the difference of any two vertices labeled 2 is a multiple of 3 or is 1 less than a multiple of 3). Moreover, the only neighbor of vertex 0 that receives label 2 is vertex 1, so any path joining two vertices with label 2 and passing through vertex 0 must enter or leave vertex 0 at a vertex with a higher label (since the neighbors of vertex 0 are precisely the vertices congruent to 1 modulo 3, and all of these other than vertex 1 receive labels greater than 2). It follows that any path between two vertices with the same label must include a vertex having a higher label than that of the endpoints, so the standard labeling is feasible, though we have not yet shown that it is optimal. \begin{thm} \label{thm: td Andrasfai} For all $k \geq 1$, \[\operatorname{td}(\And(k)) = 2k \quad {\text and } \quad \operatorname{td}(\And(k)-v) = 2k-1.\] \end{thm} \begin{proof} Note that if $\operatorname{td}(\And(k)) = 2k$, it easily follows that $\operatorname{td}(\And(k)-v) = 2k-1$, so we restrict our attention to the graphs $\And(k)$. Since the standard labeling of $\And(k)$ has $2k$ as its highest label value, $\operatorname{td}(\And(k)) \leq 2k$. To show that $\operatorname{td}(\And(k)) \geq 2k$ we proceed by induction. Note that $\And(1)$ has tree-depth $2$, and suppose that $\operatorname{td}(\And(j))=2j$ for some positive integer $j$. Now fix an optimal labeling of $\And(j+1)$. By Lemma~\ref{lem: Andrasfai connectivity}, $\And(j+1)$ is $(j+1)$-connected and hence the highest $j+1$ labels each appear only once in the labeling.
By an application of the pigeonhole principle, there must exist three consecutive vertices in $\{0,\dots,3j+1\}$ (where consecutivity is determined modulo $3j+2$) such that the labels on these vertices include two values from the highest $j+1$ labels. Exchange the labels on these vertices with those on the vertices receiving the highest two labels; since the only repeated labels in the labeling have a value not from the $j+1$ highest values, the resulting labeling is still a feasible, optimal labeling of $\And(j+1)$. Now by symmetry, we may assume that the highest two labels occur among the vertices $\{3j-1, 3j, 3j+1\}$. Since the induced subgraph on vertices $\{0,\dots,3j-2\}$ is isomorphic to $\And(j)$, by the induction hypothesis the labeling must use at least $2j$ labels on this subgraph, which means that the optimal labeling of $\And(j+1)$ uses at least $2j+2$ distinct labels, and the induction is complete. \end{proof} \begin{thm} \label{thm: And is 1-unique} For all $k \geq 1$, both $\And(k)$ and $\And(k)-v$ are 1-unique. \end{thm} \begin{proof} We observe that the standard labeling of $\And(k)$ places the label 1 only on the vertex 0; since $\And(k)$ is vertex-transitive, it follows that $\And(k)$ is 1-unique. To show that $\And(k)-v$ is 1-unique, we may assume that $k \geq 2$, since the claim is clearly true when $k=1$. It suffices by symmetry to show that for all $v \in \{1,\dots,3k-2\}$ there is a labeling of $\And(k)-v$ using $2k-1$ distinct labels and placing a unique label of 1 on vertex 0. We prove this in cases. \textit{Case: $v$ is not 1 and not a multiple of 3.} In this case we begin by labeling $\And(k)$ with the standard labeling. We then delete vertex $v$ and reduce all labels higher than that of $v$ by 1. The result is a 1-unique labeling of $\And(k)-v$ using $2k-1$ labels, as desired. \textit{Case: $v$ is 1 or a multiple of 3.} In this case $3k-1-v$ is neither 1 nor a multiple of 3.
Since there is an automorphism $\phi:V(\And(k)) \to V(\And(k))$ mapping each vertex $x$ to $3k-1-x$ (modulo $3k-1$), we simply label each vertex $x$ of $\And(k)-v$ with $r(\phi(x))$, where $r$ is the standard labeling of $\And(k)$, producing a labeling with the desired properties. \end{proof} \begin{thm} For all $k \geq 1$, both $\And(k)$ and $\And(k)-v$ are critical. \end{thm} \begin{proof} Since both $\And(k)$ and $\And(k)-v$ are 1-unique by Theorem~\ref{thm: And is 1-unique}, it suffices to show that deleting any edge from $\And(k)$ or from $\And(k)-v$ lowers the tree-depth. We consider $\And(k)$ first. Suppose that the deleted edge is $uw$. Let $x$ be a vertex adjacent to neither $u$ nor $w$ in $\And(k)-uw$ (since $u$ and $w$ are each adjacent to exactly one of every three consecutive vertices, such a vertex exists). Label all neighbors of $x$ with 1, label $w$ and $u$ both with 2, and label the remaining $2k-3$ vertices injectively with labels from $\{3,\dots,2k-1\}$. We claim that this labeling is feasible. Note that since $\And(k)$ contains no triangles, no two vertices with label 1 are adjacent, and $\And(k)-uw$ contains no path of length 2 joining $u$ and $w$. It follows that any path between vertices with the same label must be longer than 1 edge if the label is 1, and longer than 2 edges if the label is 2. Such a path must then contain an interior vertex whose label is higher than that of the endpoints. Thus the labeling is feasible, and $\operatorname{td}(\And(k) - uw) \leq 2k-1 < \operatorname{td}(\And(k))$. We now show that $\And(k)-v$ is critical. This is clearly verified for $k \leq 2$, so assume $k \geq 3$. As before, it suffices to show that deleting an arbitrary edge lowers the tree-depth. Equivalently, we show that for an arbitrary edge $e$ and vertex $v$ of $\And(k)$, where $e$ is not incident with $v$, the tree-depth of $\And(k) - e - v$ is at most $2k-2$.
As before, let $x$ be a vertex other than $v$ that is adjacent to neither endpoint of $e$ (such a vertex exists because $k \geq 3$). We label the neighbors of $x$ with $1$, the former endpoints of $e$ with $2$, and each of the remaining $2k-4$ vertices with a distinct label from $\{3,\dots,2k-2\}$. The same arguments used above for $\And(k)-uw$ are valid in showing that this labeling is feasible; hence $\operatorname{td}( \And(k)-e-v) \leq 2k-2 < \operatorname{td}(\And(k)-v)$, and our proof is complete. \end{proof} In conclusion we remark that the Andr\'{a}sfai graphs join the cycles of order $2^n+1$ and complete graphs as critical graphs having the remarkable property that deleting any vertex yields another critical graph, something that is not true of critical graphs in general. Interestingly, in each of the graphs in Figure~\ref{fig: non-1-unique examples} (the non-1-unique critical graphs), deleting the non-1-unique vertex from the critical graph yields another critical graph; this seems to hold for several similar counterexamples to Conjecture~\ref{conj: false} that have a single non-1-unique vertex. The Andr\'{a}sfai graphs, cycles of order $2^n+1$, and complete graphs, along with the graphs $\overline{C_6}$ and $\overline{C_7}$, are all 1-unique, critical circulant graphs (though cycle-complements in general are not critical, as shown in Section~\ref{subsec: cycle comp}). At present these graphs are the only critical graphs the authors are aware of that are circulant, vertex-transitive, or even regular, though there are almost surely other classes of examples. Though we have seen that Conjecture~\ref{conj: false} is false for graphs in general, it would be interesting to know whether it holds for circulant graphs (or vertex-transitive or regular graphs). With the failure of Conjecture~\ref{conj: false}, it also remains to find a different approach to prove the maximum degree property appearing in Conjecture~\ref{conj: order, max degree} for graphs in general.
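The constructions in this section are also easy to check by machine. The sketch below (our own illustration, not tied to any published code) builds $\And(k)$ from the definition, applies the standard labeling, and verifies feasibility via the equivalent condition that, for each label value $t$, every component of the subgraph induced on vertices with labels at most $t$ contains at most one vertex labeled $t$.

```python
# Build And(k) on vertices 0,...,3k-2: i ~ j iff the larger minus the
# smaller is congruent to 1 mod 3 (i.e., |i - j| % 3 == 1).
def andrasfai(k):
    n = 3 * k - 1
    return {i: {j for j in range(n) if j != i and abs(i - j) % 3 == 1}
            for i in range(n)}

# The standard labeling r of And(k), following the case formula above.
def standard_labeling(k):
    r = {0: 1}
    for x in range(1, 3 * k - 1):
        if x % 3 == 0:
            r[x] = 2
        elif x % 3 == 1:
            r[x] = (2 * x + 4) // 3
        else:
            r[x] = (2 * x + 5) // 3
    return r

# Feasibility check: for each label t, each component of the subgraph
# induced on {u : r[u] <= t} has at most one vertex with label t.
def is_feasible(adj, r):
    for t in set(r.values()):
        keep = {u for u in adj if r[u] <= t}
        seen = set()
        for s in keep:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend((adj[u] & keep) - comp)
            seen |= comp
            if sum(1 for u in comp if r[u] == t) > 1:
                return False
    return True

adj5 = andrasfai(5)               # 14 vertices, 5-regular
labels5 = standard_labeling(5)    # uses the 2k = 10 labels 1,...,10
```

Running this for small $k$ confirms the feasibility claims used in the proofs of Theorems~\ref{thm: td Andrasfai} and~\ref{thm: And is 1-unique}.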
\section*{Acknowledgments} The authors thank A.~Giannopoulou for mentioning the critical example of the triangular prism, which is $\overline{C_6}$, in a personal communication.
\section{Introduction} \label{sec:intro} The ATLAS and CMS experiments at the Large Hadron Collider (LHC) have discovered a neutral spinless particle that closely matches the description of the Higgs boson~\cite{Aad:2012tfa,Chatrchyan:2012ufa}, which, according to the standard model (SM) of electroweak interactions, is responsible for the masses of elementary particles. While this ties the final knot on the framework embodied in the SM, there are many reasons to believe that there is more fundamental physics at higher energies. This expectation can be traced to many issues, including the unexplained replication of fermion families, the source of dark matter in the universe, and the problems of naturalness and vacuum stability involving the Higgs boson itself. The LHC has not revealed any direct signature of new physics so far. However, one is led to suspect that such physics should affect the interaction Lagrangian of the Higgs boson. It would generate, for example, dimension-6 effective operators contributing to $HVV$ interactions, with $V=W,Z,\gamma$. Probing such effective couplings of the recently discovered scalar is therefore tantamount to opening a gateway to fundamental physics just beyond our present reach. Such `effective' interaction terms must be $SU(2) \times U(1)$ invariant if they arise from physics above the electroweak scale.
Constraints on such terms have already been studied, using precision electroweak data as well as global fits of the current Higgs data~\cite{Masso:2012eq,Corbett:2012ja,Falkowski:2013dza,Corbett:2013pja,Dumont:2013wma,Banerjee:2012xc,Gainer:2013rxa, Corbett:2013hia,Elias-Miro:2013mua,Pomarol:2013zra,Einhorn:2013tja,Banerjee:2013apa,Willenbrock:2014bja, Ellis:2014dva,Belusca-Maito:2014dpa,Gupta:2014rxa,Masso:2014xra,Biekoetter:2014jwa,Englert:2014cva, Ellis:2014jta,Edezhath:2015lga,Gorbahn:2015gxa,Han:2004az,Ciuchini:2013pca,Blas:2013ana,Chen:2013kfa, Alonso:2013hga,Englert:2014uua,Trott:2014dma,Falkowski:2014tna,Henning:2014wua,deBlas:2014mba, Berthier:2015oma,Efrati:2015eaa,Bhattacherjee:2015xra}. Recently, CMS has published an exhaustive study on anomalous $HVV$ couplings~\cite{Khachatryan:2014kca}. Many studies have considered anomalous Higgs couplings in the context of future lepton colliders~\cite{Amar:2014fpa,Kumar:2014zra,Craig:2014una,Beneke:2014vqa,Kumar:2015eea,Ren:2015uka}. The general conclusion, based on analyses of the 8 TeV data, is that several (though not all) of the gauge invariant, dimension-6 $HVV$ terms have been quite strongly constrained by the EW precision and LHC data (as discussed in section~\ref{sec:ratio})~\cite{Masso:2012eq,Corbett:2012ja,Falkowski:2013dza,Corbett:2013pja,Banerjee:2012xc,Dumont:2013wma,Gainer:2013rxa, Corbett:2013hia,Elias-Miro:2013mua,Pomarol:2013zra,Einhorn:2013tja,Banerjee:2013apa,Willenbrock:2014bja, Ellis:2014dva,Belusca-Maito:2014dpa,Gupta:2014rxa,Masso:2014xra,Biekoetter:2014jwa,Englert:2014cva, Ellis:2014jta,Edezhath:2015lga,Gorbahn:2015gxa,Han:2004az,Ciuchini:2013pca,Blas:2013ana,Chen:2013kfa, Alonso:2013hga,Englert:2014uua,Trott:2014dma,Falkowski:2014tna,Henning:2014wua,deBlas:2014mba, Berthier:2015oma,Efrati:2015eaa,Bhattacherjee:2015xra}. It still remains to be seen whether such small coefficients can be discerned with some ingeniously constructed kinematic distributions.
Some work has nonetheless been done to study such distributions~\cite{Plehn:2001nj,Bernaciak:2012nh,Bernaciak:2013dwa,Biswal:2012mp,Djouadi:2013yb}, in terms of either the gauge invariant operators themselves or the structures finally ensuing from them. At the same time, it is of interest to see if meaningful constraints do arise from the study of total rates at the LHC. The essence of any probe of these anomalous couplings, however, lies in pinning them down to much smaller values using the 14 TeV runs, since new physics, if it manifests itself through higher-dimensional operators (HDOs) at all, is expected to do so only with small coefficients. We show here that the relative rates of events of different kinds in the Higgs data can allow us to probe such effective interactions to levels of smallness not deemed testable otherwise~\cite{Djouadi:2012rh,Djouadi:2013qya}. This happens through (a) the cancellation of theoretical uncertainties, and (b) the fact that some ratios have the numerators and denominators shifting in opposite directions, driven by the additional interactions. Thus the cherished scheme of finding traces of new physics in Higgs phenomenology can be buttressed with one more brick. We organise our paper as follows: we summarise the relevant gauge invariant operators and the interaction terms in Sec.~\ref{sec:HDO}. In Sec.~\ref{sec:ratio}, we introduce three ratios of cross-sections as our observables. The results of our analysis are explained in Sec.~\ref{results}. We summarise and conclude in Sec.~\ref{summary}. \section{Higher dimensional operators} \label{sec:HDO} In order to see any possible deviations from the SM in the Higgs sector, we will follow the effective field theory (EFT) framework. We consider $SU(2)_L \times U(1)_Y$ invariant operators of dimension up to 6, which affect Higgs couplings to itself and/or a pair of electroweak vector bosons.
While a full list of such operators is found in~\cite{Buchmuller:1985jz,Hagiwara:1993qt,GonzalezGarcia:1999fq,Grzadkowski:2010es}, we have concentrated here on dimension-6 CP-conserving operators which affect Higgs phenomenology. They include: \begin{itemize} \item Operators which contain the Higgs doublet $\Phi$ and its derivatives: \begin{equation} \mathcal{O}_{\Phi,1} = (D_{\mu}\Phi)^{\dagger}\Phi\Phi^{\dagger}(D^{\mu}\Phi);~~~ \mathcal{O}_{\Phi,2} = \frac{1}{2}\partial_{\mu}(\Phi^{\dagger}\Phi)\partial^{\mu}(\Phi^{\dagger}\Phi);~~~ \mathcal{O}_{\Phi,3} = \frac{1}{3}(\Phi^{\dagger}\Phi)^{3} \end{equation} \item Those containing $\Phi$ (or its derivatives) and the bosonic field strengths: \begin{equation} \mathcal{O}_{GG} = \Phi^{\dagger}\Phi G_{\mu\nu}^a G^{a\,\mu\nu};~~~ \mathcal{O}_{BW} = \Phi^{\dagger}\hat{B}_{\mu \nu} \hat{W}^{\mu \nu} \Phi;~~~ \mathcal{O}_{WW} = \Phi^{\dagger}\hat{W}_{\mu \nu} \hat{W}^{\mu \nu} \Phi \nonumber \end{equation} \begin{equation} \mathcal{O}_{W} = (D_{\mu}\Phi)^{\dagger} \hat{W}^{\mu \nu} (D_\nu \Phi);~~~ \mathcal{O}_{BB} = \Phi^{\dagger}\hat{B}_{\mu \nu} \hat{B}^{\mu \nu} \Phi;~~~ \mathcal{O}_{B} = (D_{\mu}\Phi)^{\dagger} \hat{B}^{\mu \nu} (D_\nu \Phi), \end{equation} \end{itemize} where \begin{equation} \hat{W}^{\mu \nu}=i\,\frac{g}{2} \sigma_{a}W^{a \; \mu \nu};~~~ \hat{B}^{\mu \nu}=i\,\frac{g'}{2} B^{\mu \nu} \nonumber \end{equation} and $g$, $g'$ are respectively the $\tr{SU}(2)_\tr{L}$ and $\tr{U}(1)_\tr{Y}$ gauge couplings. $W^a_{\mu \nu} = \partial_{\mu}W^a_{\nu}-\partial_{\nu}W^a_{\mu} - g \epsilon^{abc}W^b_{\mu} W^c_{\nu}$, $B_{\mu \nu} = \partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}$ and $G^a_{\mu \nu} = \partial_{\mu}G^a_{\nu}-\partial_{\nu}G^a_{\mu} - g_s f^{abc}G^b_{\mu} G^c_{\nu}$. The covariant derivative of $\Phi$ is given as $D_{\mu}\Phi=(\partial_{\mu}+\frac{i}{2}g' B_{\mu} + i g \frac{\sigma_a}{2}W^a_{\mu})\Phi$.
The Lagrangian in the presence of the above operators can be generally expressed as: \begin{equation} \mathcal{L} \supset \kappa\left(\frac{2 m_W^2}{v} H W_{\mu}^+ W^{\mu -}+\frac{ m_Z^2}{v} H Z_{\mu} Z^{\mu } \right) + \sum_{i} \frac{f_{i}}{\Lambda^2}\mathcal{O}_{i} \label{Lag}, \end{equation} where in addition to the dimension-6 (D6) operators, we also allow for the SM-like $HWW$ and $HZZ$ couplings to be scaled by a factor $\kappa$. While $\kappa \neq 1$ is indicative of certain kinds of new physics, we are especially interested in this study in the new observable features associated with the HDOs. Therefore, we have set $\kappa = 1$ for simplicity.\footnote{Possible constraints on the departure of $\kappa$ from unity have been obtained in the literature from global fits of the Higgs data (see for example~\cite{Masso:2012eq,Corbett:2012ja,Falkowski:2013dza,Corbett:2013pja,Dumont:2013wma,Banerjee:2012xc,Gainer:2013rxa, Corbett:2013hia,Elias-Miro:2013mua,Pomarol:2013zra,Einhorn:2013tja,Banerjee:2013apa,Willenbrock:2014bja, Ellis:2014dva,Belusca-Maito:2014dpa,Gupta:2014rxa,Masso:2014xra,Biekoetter:2014jwa,Englert:2014cva, Ellis:2014jta,Edezhath:2015lga,Gorbahn:2015gxa,Han:2004az,Ciuchini:2013pca,Blas:2013ana,Chen:2013kfa, Alonso:2013hga,Englert:2014uua,Trott:2014dma,Falkowski:2014tna,Henning:2014wua,deBlas:2014mba, Berthier:2015oma,Efrati:2015eaa})} The operator $\mathcal{O}_{GG}$ is not included, since we are presently concerned with Higgs interactions with a pair of electroweak vector bosons only. The operator $\mathcal{O}_{\Phi,1}$ is severely constrained by the $T$-parameter (or equivalently the $\rho$ parameter), as it alters the $HZZ$ and $HWW$ couplings by unequal multiplicative factors. As far as $HZZ$ and $HWW$ interactions are concerned, $\mathcal{O}_{\Phi,2}$ only scales the standard model-like couplings ($\kappa$), without bringing in any new Lorentz structure. This amounts to a renormalization of the Higgs field.
It also alters the Higgs self-coupling, something that is the sole consequence of $\mathcal{O}_{\Phi,3}$ as well. In view of the above, we focus on the four operators $\mathcal{O}_{WW}$, $\mathcal{O}_{BB}$, $\mathcal{O}_W$ and $\mathcal{O}_B$. We do not include the operator $\mathcal{O}_{BW} = \Phi^{\dagger}\hat{B}_{\mu \nu} \hat{W}^{\mu \nu} \Phi$ in the present analysis, because it mixes the $Z$ and $\gamma$ fields at the tree level, violates custodial symmetry (by contributing only to the $Z$-boson mass) and is, therefore, highly constrained by the $S$ and $T$-parameters at the tree level~\cite{Corbett:2012ja}. The effective interactions that finally emerge and affect the Higgs sector are \begin{align} \label{eq:lagHVV} \mathcal{L}_{eff} &= g_{HWW}^{(1)}~(W_{\mu\nu}^{+}W^{-\mu}\partial^{\nu}H + h.c.) + g_{HWW}^{(2)}~HW_{\mu\nu}^{+}W^{-\mu\nu} \nonumber \\ &+ g_{HZZ}^{(1)}~Z_{\mu\nu}Z^{\mu}\partial^{\nu}H + g_{HZZ}^{(2)}~HZ_{\mu\nu}Z^{\mu\nu} \nonumber \\ &+ g_{HZ\gamma}^{(1)}~A_{\mu\nu}Z^{\mu}\partial^{\nu}H + g_{HZ\gamma}^{(2)}~HA_{\mu\nu}Z^{\mu\nu}+g_{H\gamma\gamma}H A_{\mu \nu} A^{\mu \nu}, \end{align} where \begin{align} \label{eq:lagHVVcoeff} g^{(1)}_{HWW}&=\left(\frac{g M_W}{\Lambda^2}\right) \frac{f_W}{2};~~~ g^{(2)}_{HWW}=-\left(\frac{g M_W}{\Lambda^2}\right)f_{WW} \nonumber \\ g^{(1)}_{HZZ}&=\left(\frac{g M_W}{\Lambda^2}\right) \frac{c^2 f_W + s^2 f_B}{2 c^2};~~~g^{(2)}_{HZZ}=-\left(\frac{g M_W}{\Lambda^2}\right) \frac{s^4 f_{BB} + c^4 f_{WW}}{2 c^2} \nonumber \\ g^{(1)}_{HZ\gamma}&=\left(\frac{g M_W}{\Lambda^2}\right)\frac{s(f_W-f_B)}{2 c};~~~g^{(2)}_{HZ\gamma}=\left(\frac{g M_W}{\Lambda^2}\right)\frac{s(s^2 f_{BB}-c^2 f_{WW})}{c} \nonumber \\ g_{H\gamma\gamma}&=-\left(\frac{g M_W}{\Lambda^2}\right)\frac{s^2(f_{BB}+f_{WW})}{2} \end{align} with $s\,(c)$ being the sine (cosine) of the Weinberg angle. 
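The coupling relations in Eq.~(\ref{eq:lagHVVcoeff}) are simple enough to encode directly. The following sketch is purely illustrative: the numerical values chosen for $g$, $M_W$, $\sin^2\theta_W$ and the scale $\Lambda = 1$ TeV are our assumptions, not inputs fixed by this paper. It also checks two properties implied by these expressions: $g_{H\gamma\gamma}$ vanishes along $f_{WW} = -f_{BB}$, and $g^{(1)}_{HZ\gamma}$ vanishes for $f_W = f_B$.

```python
import math

# Illustrative inputs (assumed, not from the paper): SU(2)_L coupling g,
# W mass in GeV, sin^2 of the Weinberg angle, and Lambda = 1 TeV.
G, MW, SW2, LAM = 0.653, 80.385, 0.2312, 1000.0

def hvv_couplings(fW, fB, fWW, fBB, Lam=LAM):
    """Effective HVV couplings of Eq. (lagHVVcoeff), in GeV^-1."""
    s2, c2 = SW2, 1.0 - SW2
    s, c = math.sqrt(s2), math.sqrt(c2)
    pref = G * MW / Lam ** 2
    return {
        "g1_HWW": pref * fW / 2.0,
        "g2_HWW": -pref * fWW,
        "g1_HZZ": pref * (c2 * fW + s2 * fB) / (2.0 * c2),
        "g2_HZZ": -pref * (s2 ** 2 * fBB + c2 ** 2 * fWW) / (2.0 * c2),
        "g1_HZgamma": pref * s * (fW - fB) / (2.0 * c),
        "g2_HZgamma": pref * s * (s2 * fBB - c2 * fWW) / c,
        "g_Hgammagamma": -pref * s2 * (fBB + fWW) / 2.0,
    }

# Along f_WW = -f_BB the H-gamma-gamma coupling vanishes identically.
blind = hvv_couplings(fW=1.0, fB=1.0, fWW=2.0, fBB=-2.0)
```

The vanishing of $g_{H\gamma\gamma}$ along $f_{WW} = -f_{BB}$ is precisely the blind direction discussed in Sec.~\ref{sec:ratio}.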
Besides, the operators $\mathcal{O}_W$, $\mathcal{O}_B$ and $\mathcal{O}_{WWW}$ also contribute to the anomalous triple gauge boson interactions which can be summarised as \begin{align} \label{eq:lagWWV} \mathcal{L}_{WWV}=-i g_{WWV}\left\{g_1^V\left(W_{\mu\nu}^+W^{-\mu}V^{\nu}-W_{\mu}^+V_{\nu}W^{-\mu \nu}\right)+\kappa_V W_{\mu}^+W_{\nu}^-V^{\mu \nu} + \frac{\lambda_V}{M_W^2}W_{\mu \nu}^+ W^{-\nu \rho} V_{\rho}^{\mu}\right\}, \end{align} where $g_{WW\gamma}=g \, s$, $g_{WWZ} = g \, c$, $\kappa_V=1+\Delta\kappa_V$ and $g_1^Z=1+\Delta g_1^Z$ with \begin{align} \label{eq:lagWWVcoeff} \Delta \kappa_{\gamma}&=\frac{M_W^2}{2 \Lambda^2}\left(f_W+f_B\right);~~~\lambda_{\gamma}=\lambda_Z=\frac{3g^2M_W^2}{2\Lambda^2} f_{WWW} \nonumber \\ \Delta g_1^Z&=\frac{M_W^2}{2 c^2 \Lambda^2} f_W;~~~\Delta \kappa_Z=\frac{M_W^2}{2 c^2 \Lambda^2}\left(c^2 f_W - s^2 f_B\right) \end{align} The already existing limits on the various operators discussed above are found in numerous references~\cite{Corbett:2013pja,Falkowski:2013dza,Corbett:2013hia,Masso:2012eq,Corbett:2012ja}. Even within their current limits, some of the operators are found to modify the efficiencies of the various kinematic cuts~\cite{Gainer:2013rxa,Banerjee:2013apa}. The question we address in the rest of the paper is: can these limits be improved in the next run(s) through careful measurement of the ratios of total rates in different channels? As we shall see below, the answer is in the affirmative. \section{Ratios of cross-sections as chosen observables} \label{sec:ratio} The four HDOs under consideration affect Higgs production as well as its decays, albeit to various degrees. For example, the HDO-dependent single-Higgs production processes are associated production with vector bosons ($VH$), \textit{i.e.} $pp\to VH$ (where $V=\{W,Z\}$), and vector-boson fusion ($VBF$).
We show the production cross-sections in these channels at 14 TeV in Fig.~\ref{fig:Hprod}, as functions of the four operator coefficients ($f_i$) taken one at a time.\footnote{We have used CTEQ6L1 parton distribution functions (PDFs) by setting the factorization ($\mu_F$) and renormalization scales ($\mu_R$) at the Higgs mass ($M_H=125$ GeV).} The relevant decay channels which are dependent on such operators are $H\to WW^*,ZZ^*,\gamma\gamma,Z\gamma$. Fig.~\ref{fig:BRHD} contains these branching ratios (BR) as functions of the four coefficients under consideration. The $VBF$ and $VH$ rates are sensitive to $f_{WW}$ and $f_W$, but depend very weakly on $f_{BB}$ and $f_B$, while the cross-section $\sigma(pp\to WH)$ is completely independent of $f_{BB}$ and $f_B$. The HDO effects in $H\to \gamma\gamma$ and $H\to Z \gamma$ for $f_i\sim \mathcal{O}(1)$ \footnote{If the operators arise from loop-induced diagrams which imply `loop factors' in denominators of the effective interactions, $O(1)$ TeV$^{-2}$ coefficients imply strongly coupled theories~\cite{Elias-Miro:2013mua,Einhorn:2013kja}. However, if such operators originate from tree-level diagrams, then $O(1)$ TeV$^{-2}$ coefficients imply weakly-coupled theories.} are of the same order as the loop-induced SM contribution, unlike in the case of the $HWW$ and $HZZ$ couplings. Therefore, $\textrm{BR}_{H\to \gamma\gamma}$ becomes highly sensitive to $f_{WW}$ and $f_{BB}$. Consequently, the 7$+$8 TeV data already restrict their magnitudes. Bounds on all these operators in a similar framework can be seen in Table VI of Ref.~\cite{Corbett:2012ja} and also in Ref.~\cite{Masso:2012eq}. In Ref.~\cite{Corbett:2012ja}, the bounds have been presented at 90\% CL by varying multiple operators at the same time. These bounds have been obtained by considering the LHC data as well as constraints from the oblique parameters, \textit{viz.}, $S,T$ and $U$.
Bounds coming from the oblique parameters are generally weaker than those obtained from the LHC data, as can be seen in Ref.~\cite{Masso:2012eq}. These limits may not be applicable when the analysis is performed varying one operator at a time. Based on the above information, we set out to find observables which are sensitive to $f_i \lesssim 5$ TeV$^{-2}$ in the high-luminosity run at the LHC. It is not completely clear yet how much statistics is required to probe such small values with various event shape variables. On the other hand, the more straightforward observables, namely, total rates in various channels, are also fraught with statistical, systematic and theoretical uncertainties, which must be reduced as far as possible when precision is at a premium. A helpful approach is to look at ratios of cross-sections in different channels. In this paper, we invoke two kinds of ratios. First, we take ratios of events in two different final states arising from a Higgs produced via the same channel (in our case, gluon fusion). Such a ratio enables one to get rid of correlated theoretical uncertainties (CThU) such as those in the PDFs and the renormalisation/factorisation scales. It also cancels the uncertainty in the total width, which is correlated in the calculation of the BRs into the two final states. Secondly, we consider the ratio of rates for the same final state for two different production channels (such as $VBF$ and $VH$). Although the uncertainty in the BR cancels here, the theoretical uncertainties at the production level do not. Moreover, since the final state is the same in this case, some systematic uncertainties which are correlated (related to identification, isolation, trigger etc.) will also get cancelled. However, this is helpful in another manner. For some of the operators, the $f_i$-dependent shifts with respect to the SM are in opposite directions for the numerator and the denominator in such ratios.
The result is that the net deviation adds up, as shown in subsection~\ref{subsec:R2}. We shall see that the use of both these kinds of ratios (including those involving the channel $Z\gamma$) can capture the HDO coefficients at an unprecedented level, going down to values where new physics can show up. \begin{figure*} \centering \subfloat[]{\includegraphics[width=4.8cm,height=4.8cm]{figures/vbf}\label{fig:VBF}}~~ \subfloat[]{\includegraphics[width=4.8cm,height=4.8cm]{figures/wh}\label{fig:WH}}~~ \subfloat[]{\includegraphics[width=4.8cm,height=4.8cm]{figures/zh}\label{fig:ZH}} \caption{Higgs production cross-sections for the $VBF$ and $VH$ channels in the presence of HDOs at 14 TeV. Here the operators are varied one at a time.} \label{fig:Hprod} \end{figure*} \begin{figure*} \centering \subfloat[]{\includegraphics[width=6cm,height=6cm]{figures/BRww}\label{fig:BRww}}~~~ \subfloat[]{\includegraphics[width=6cm,height=6cm]{figures/BRzz}\label{fig:BRzz}}\\ \subfloat[]{\includegraphics[width=6cm,height=6cm]{figures/BRyy}\label{fig:BRyy}}~~~ \subfloat[]{\includegraphics[width=6cm,height=6cm]{figures/BRzy}\label{fig:BRzy}} \caption{Branching ratios of $H\to WW^*,ZZ^*,\gamma\gamma,Z\gamma$ in the presence of HDOs. The operators are varied one at a time.} \label{fig:BRHD} \end{figure*} \subsection{Observable sensitive to $\mathcal{O}_{WW}$ and $\mathcal{O}_{BB}$: $\mathcal{R}_1$} As has been noted earlier, $\tr{BR}_{H\to\gamma\gamma}$ (Fig.~\ref{fig:BRyy}) is highly sensitive to two of the operators, namely, $\mathcal{O}_{BB}$ and $\mathcal{O}_{WW}$. Therefore, we propose to probe them in the $\gamma\gamma$ channel, with the Higgs produced through gluon-gluon fusion ($ggF$). This final state is clean for reconstruction, and has high statistics. We should mention here that if we consider the simultaneous presence of more than one operator, then there is a ``blind direction'' in the parameter space, $f_{WW}\approx -f_{BB}$, where $\tr{BR}_{H\to \gamma\gamma}$ mimics the SM value.
This is because the higher-dimensional part of the $H\gamma\gamma$ vertex is proportional to $f_{WW}+f_{BB}$. Also, for the non-trivial range $f_{WW}=f_{BB}\approx -3$, $\tr{BR}_{H\to \gamma\gamma}$ mimics the SM value, due to the parabolic dependence of the diphoton rate on the HDO coefficients. Therefore, the Higgs produced through $ggF$ followed by its decay to $\gamma\gamma$ cannot be used alone to probe these two `special' regions of the parameter space. We construct the observable \begin{equation} \mathcal{R}_1(f_i)=\frac{\sigma_{\tr{ggF}}\times \tr{BR}_{H\to \gamma \gamma}(f_i)}{\sigma_{\tr{ggF}}\times \tr{BR}_{H\to WW^* \to 2 \ell 2 \nu}(f_i)}, \end{equation} where $\ell=e,\mu$ and the $f_i$'s are the operator coefficients. As explained earlier, the CThU in the production as well as in the total width cancels here; so does the $K$-factor in the production rate. Clearly, $\mathcal{R}_1$ can also be expressed as the ratio of two signal strengths as follows, \begin{equation} \mathcal{R}_1(f_i) = \frac{\mu^{\tr{ggF}}_{\gamma\gamma}(f_i)}{\mu^{\tr{ggF}}_{WW^*}(f_i)} \times \frac{(\sigma_{\tr{ggF}}\times \tr{BR}_{H\to \gamma \gamma})^{\tr{SM}}}{(\sigma_{\tr{ggF}}\times \tr{BR}_{H\to WW^* \to 2 \ell 2 \nu})^{\tr{SM}}}\ . \end{equation} Therefore, the already measured $\gamma\gamma$ and $WW^*$ signal strengths can be used to constrain the operator coefficients affecting the ratio $\mathcal{R}_1$. The efficiency of the acceptance cuts does not affect the results for the values of $f_{WW}$ and $f_{BB}$ of relevance here, because for such small coefficients the change in the experimental cut-efficiencies is negligible. On top of that, for the $ggF$ production mode, these operators only affect the decay vertices, and hence the cut-efficiencies are modified only to a very small extent.
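Since $\mathcal{R}_1$ is proportional to the ratio of two measured signal strengths, its experimental uncertainty follows from standard error propagation. A minimal sketch, under the simplifying assumption that the two measurements are uncorrelated; the numerical values are the combined 7$+$8 TeV signal strengths quoted later in the paper, used here purely for illustration:

```python
import math

def mu_ratio_with_error(mu1, s1, mu2, s2):
    """Ratio of two signal strengths, with uncorrelated errors
    propagated in quadrature (a simplifying assumption)."""
    r = mu1 / mu2
    return r, r * math.hypot(s1 / mu1, s2 / mu2)

# Combined 7+8 TeV values: mu_gammagamma = 1.21 +- 0.26, mu_WW* = 0.88 +- 0.19.
r, dr = mu_ratio_with_error(1.21, 0.26, 0.88, 0.19)
print(f"mu_ratio = {r:.2f} +- {dr:.2f}")
```

With the present uncertainties, the relative error on this signal-strength ratio is roughly 30\%, which illustrates why the high-luminosity projections considered below are needed to reach small $f_i$.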
We must also note that in defining $\mathcal{R}_1$ a full jet-veto (0-jet category) has been demanded for both the numerator and the denominator, to reduce the uncertainties related to the different jet-requirements in the final state. Besides, in the denominator, the $WW^*$ pair is considered to decay into both same-flavour ($ee+\mu\mu$) and different-flavour ($e\mu + \mu e$) final states to improve the statistics. \subsection{Observable sensitive to $\mathcal{O}_{WW}$ and $\mathcal{O}_{W}$: $\mathcal{R}_2$} \label{subsec:R2} It turns out that $f_{WW}$ and $f_W$ affect (to one's advantage) the ratio of events in a particular Higgs decay mode in the $VBF$ and $VH$ channels. This captures the new physics at the production level. By considering the same final states from Higgs decay, some theoretical uncertainties in the decay part cancel out. The production-level uncertainties, including the $K$-factors, however, do not cancel here. In our calculation, the next-to-next-to-leading order (NNLO) $K$-factors have been assumed to be the same as in the SM, expecting that the presence of HDOs does not affect the $K$-factors much. For a precise estimate of the observed ratio, one of course has to incorporate the modified cut efficiencies due to the new operators, though such modifications may be small. The other important advantage in taking the above kind of ratio is that, for not-too-large $f_{WW}$ or $f_W$ (in the range $[-5,+5]$), the deviations of the $VBF$ and $VH$ cross-sections are in opposite directions. The generic deviation for the rate in any channel can be parametrized as \begin{equation} \sigma^{\textrm{HDO}}_{\textrm{prod.}}=\sigma^{\textrm{SM}}_{\textrm{prod.}}\times \left(1+\delta_{\textrm{prod.}}\right). \end{equation} From Fig.~\ref{fig:VBF}, $\delta_{\textrm{VBF}}$ is positive in the range $f_{WW},f_W>0$. On the other hand, in the same region of the parameter space, $\delta_{\textrm{VH}}$ is negative, as evident from Figs.~\ref{fig:WH} and~\ref{fig:ZH}.
Hence, on taking the ratio $\sigma_{\textrm{VBF}}^{\textrm{HDO}}/\sigma_{\textrm{VH}}^{\textrm{HDO}}$, the deviation from the SM is \begin{equation} \frac{\sigma_{\textrm{VBF}}}{\sigma_{\textrm{VH}}} =\frac{\sigma_{\textrm{VBF}}^{\textrm{SM}}}{\sigma_{\textrm{VH}}^{\textrm{SM}}}\times \left(1 + \delta_{\textrm{VBF}} - \delta_{\textrm{VH}} + \mathcal{O}({\delta^{2}})\right). \end{equation} Thus this ratio further accentuates the deviation from SM behaviour. As an example, if we consider the parameter choice $f_W=2$, then $\delta_{\tr{VBF}}\approx 3.6$\% and $\delta_{\tr{WH}}\approx -10$\%. However, from the ratio, the combined deviation is $\delta_{\tr{VBF+WH}} \approx 15$\%, which is a clear indication of why we should consider such ratios. We thus define our next observable \begin{equation} \label{eq:R2} \mathcal{R}_2(f_i)=\frac{\sigma_{\tr{VBF}}(f_i)\times \tr{BR}_{H\to \gamma \gamma}(f_i)}{\sigma_{\tr{WH}}(f_i) \times \tr{BR}_{H\to \gamma \gamma}(f_i) \times \tr{BR}_{W \to \ell \nu}}, \end{equation} where the $\gamma \gamma$ final state has been chosen because of its clean character and the reconstructibility of the Higgs mass. It should be remembered, however, that values of $f_{WW}, f_{BB}$ in the range $-3$ to 0 cause the diphoton branching ratio to undergo a further dip. This can adversely affect the statistics, and thus the high-luminosity run is required for an exhaustive scan of the admissible ranges of the above coefficients. \subsection{Observable sensitive to $\mathcal{O}_{B}$: $\mathcal{R}_3$} \label{subsec:R3} The decays $H\to ZZ^*$ and $H\to Z\gamma$ are sensitive to the operator $\mathcal{O}_{B}$. In the former mode, the sensitivity to $f_B$ is limited (see the green curve in Fig.~\ref{fig:BRzz}) and can be appreciable only for larger $f_B$.
The partial decay width $\Gamma_{H\to Z\gamma}$, on the other hand, is rather sensitive to all four operators under study (Fig.~\ref{fig:BRzy}), primarily because the new $HZ\gamma$ vertex contributes at practically the same order as the SM loop contribution. However, the present statistics in this channel is poor~\cite{Aad:2014fia,Chatrchyan:2013vaa}. We expect better bounds on $\mathcal{O}_{WW}$, $\mathcal{O}_{BB}$ and $\mathcal{O}_W$ from the measurements of $\mathcal{R}_1$ and $\mathcal{R}_2$. We use $\mathcal{R}_3$ for the 14 TeV 3000 fb$^{-1}$ run to constrain $f_B$ only, for which the other channels fail. In the same spirit as for $\mathcal{R}_1$, we thus define our third observable \begin{equation} \mathcal{R}_3(f_i)=\frac{\sigma_{\tr{ggF}}\times \tr{BR}_{H\to Z \gamma \to 2 \ell \gamma}(f_i)}{\sigma_{\tr{ggF}}\times \tr{BR}_{H\to WW^* \to 2 \ell 2 \nu}(f_i)}, \end{equation} where $\ell=e,\mu$ and here again the CThU cancels. Here also, we must note that in defining $\mathcal{R}_3$ a full jet-veto has been demanded for both the numerator and the denominator. For the numerator, the $Z$ boson's decay to both an electron pair and a muon pair is considered. Besides, in the denominator, the $WW^*$ pair is taken to decay as in the $\mathcal{R}_1$ case. \textbf{Comparison with the $\kappa$-framework:} In principle, studies in terms of ratios in different channels can also be carried out in the $\kappa$-framework~\cite{Banerjee:2012xc,Heinemeyer:2013tqa,Khachatryan:2014jba,ATLAS-CONF-2015-007}, in which couplings are modified just by scale factors. It should, however, be remembered that the present analysis involves new Lorentz structures and hence brings non-trivial interference terms into the squared amplitudes. Unlike the situation with an overall scaling, this prevents the cancellation of the modifying couplings when one considers ratios of events taking (SM+BSM) effects into account.
Even though the ratio $\mathcal{R}_1$ ($\mathcal{R}_3$), dominated by the $H \gamma\gamma$ ($H Z \gamma$) vertex, contains no new Lorentz structures, it is still sensitive to the HDOs due to the presence of the $HWW$ vertex in the denominator. Therefore, these ratios, although apparently similar to ratios employing the $\kappa$-framework, are different in practice. $\mathcal{R}_2$ is a ratio of $\sigma_{VBF}$ and $\sigma_{WH}$, which are sensitive to the operator coefficients as shown in Fig.~\ref{fig:Hprod}. In the $\kappa$-framework, $\sigma_{VBF}$ is dominated by the $WWH$ vertex and hence $\kappa_{WW}$ will approximately cancel in $\mathcal{R}_2$. On the other hand, there will be no trivial cancellations between the numerator and denominator in the HDO-framework. \section{Results of the analysis} \label{results} For our subsequent collider analysis, the chain we have used is as follows: first, we have implemented the relevant dimension-6 interaction terms as shown in Eq.~(\ref{eq:lagHVV}) in~\textsc{FeynRules}~\cite{Alloul:2013bka}, and generated the Universal FeynRules Output (UFO)~\cite{Degrande:2011ua} model files. These UFO model files have been used in the~\textsc{Monte-Carlo} (MC) event generator~\textsc{MadGraph}~\cite{Alwall:2014hca} to generate event samples. Next, the parton-showering and hadronisation are performed using \textsc{Pythia}~\cite{Sjostrand:2006za}, and finally the detector-level analysis is carried out using \textsc{Delphes}~\cite{deFavereau:2013fsa}. Before we discuss the phenomenological aspects of the aforementioned observables, we re-iterate below the various kinds of uncertainties considered. The two major classes of observables where these uncertainties arise are as follows: \begin{itemize} \item \underline{Same production channel but different final states:}\\ In such cases (as in $\mathcal{R}_1$ and $\mathcal{R}_3$), the correlated uncertainties lie in PDF+$\alpha_s$, the QCD-scale and the total Higgs decay width, $\Gamma_H$.
However, uncertainties in the partial decay widths are uncorrelated.\footnote{We must mention here that $\Gamma_{H \to \gamma \gamma}$ and $\Gamma_{H \to Z \gamma}$ have tiny correlations with $\Gamma_{H \to WW^*}$ because of the $W$-boson loop in the former two cases. However, in the present analysis we neglect such small correlations and consider these partial decay widths to be mostly uncorrelated.} Statistical uncertainties for distinct final states are always uncorrelated and are retained in our analysis. We also assume some systematic uncertainties, whenever shown, to be fully uncorrelated. All surviving uncertainties are added in quadrature to estimate the total uncertainties related to our observables. \item \underline{Different production channels but same final state:}\\ For such observables ($\mathcal{R}_2$ in our definition), the only correlated uncertainty is in $\tr{BR}_{H\to \gamma \gamma}$. All other uncertainties are uncorrelated and hence are added in quadrature (including the uncertainties in the numerator and the denominator of the ratio $\mathcal{R}_2$). Besides the already mentioned theoretical uncertainties, we also encounter some additional theoretical uncertainty related to the QCD-scale in the $WH$ mode, which we discuss separately in subsection~\ref{subsec:R214}. \end{itemize} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|} \hline SM Quantity & Value & +\textit{ve} uncert. \% & $-$\textit{ve} uncert.
\% \\ \hline $\tr{BR}_{H\to\gamma\gamma}$ & $2.28\times 10^{-3}$ & $+4.99$ & $-4.89$ \\ \hline $\tr{BR}_{H\to WW^*}$ & $2.15\times 10^{-1}$ & $+4.26$ & $-4.20$ \\ \hline $\tr{BR}_{W\to e \nu_e}$ & $1.07\times 10^{-1}$ & $+0.16$ & $-0.16$ \\ \hline $\tr{BR}_{W\to \mu \nu_{\mu}}$ & $1.06\times 10^{-1}$ & $+0.15$ & $-0.15$ \\ \hline $\tr{BR}_{H\to Z \gamma}$ & $1.54\times 10^{-3}$ & $+9.01$ & $-8.83$ \\ \hline $\tr{BR}_{Z\to ee}$ & $3.36 \times 10^{-2}$ & $+0.004$ & $-0.004$ \\ \hline $\tr{BR}_{Z\to \mu \mu}$ & $3.37 \times 10^{-2}$ & $+0.007$ & $-0.007$ \\ \hline Total $\Gamma_H$ & $4.07$ MeV & $+3.97$ & $-3.94$ \\ \hline \end{tabular} \caption{$\tr{BR}_{H\to\gamma\gamma}$, $\tr{BR}_{H\to WW^*}$, $\tr{BR}_{H\to Z \gamma}$, $\tr{BR}_{W\to \ell \nu}$, $\tr{BR}_{Z\to \ell \ell}$ and the total Higgs width $\Gamma_H$ (MeV), and their \% uncertainties ($+ve$ and $-ve$ refer to positive and negative uncertainties respectively), for a Higgs of mass 125 GeV ($m_W = 80.385$ GeV and $m_Z = 91.1876$ GeV). These numbers are taken from the LHC Higgs Cross Section Working Group page~\cite{twiki}.} \label{tab:BR-width-SM} \end{center} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Process & $\sigma$ (pb) & \footnotesize{$+$QCD-Scale \%} & \footnotesize{$-$QCD-Scale \%} & \footnotesize{$+$(PDF$+\alpha_S$) \%} & \footnotesize{$-$(PDF$+\alpha_S$) \%} \\ \hline $ggF$ & 49.47 & $+7.5$ & $-8.0$ & $+7.2$ & $-6.0$ \\ \hline $VBF$ & 4.233 & $+0.4$ & $-0.5$ & $+3.3$ & $-3.3$ \\ \hline $WH$ & 1.522 & $+0.8$ & $-1.6$ & $+3.2$ & $-3.2$ \\ \hline $ZH$ & 0.969 & $+4.0$ & $-3.9$ & $+3.5$ & $-3.5$ \\ \hline \end{tabular} \caption{The cross-sections of the relevant Higgs production ($m_H=125$ GeV) channels and their QCD-Scale and PDF$+\alpha_s$ uncertainties in \%.
These numbers are again taken from the LHC Higgs Cross Section Working Group page~\cite{twiki}.} \label{tab:sigma-SM} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline & $\mathcal{R}_1$ & $\mathcal{R}_2$ & $\mathcal{R}_3$ \\ \hline $N_S^{num}$ & 47724 ($\gamma\gamma$ in $ggF$) & 194 ($\gamma\gamma$ in $VBF$) & 1989 ($Z\gamma$ in $ggF$) \\ \hline $N_B^{num}$ & $3.16\times 10^6$ & 1041 & 691931 \\ \hline $N_S^{den}$ & 40850 ($WW^*$ in $ggF$) & 238 ($\gamma\gamma$ in $WH$) & 40850 ($WW^*$ in $ggF$) \\ \hline $N_B^{den}$ & 366450 & 995 & 366450 \\ \hline \end{tabular} \caption{Number of surviving events (taken from Refs.~\cite{ATLAS-HL-LHC,ATLAS-HL-LHC-gaga}) after the selection cuts in the SM at 14 TeV with 3000 fb$^{-1}$ integrated luminosity. These numbers are used to compute the statistical uncertainties (which go as $\sqrt{N_S+N_B}/N_S$, where $N_S$ and $N_B$ are respectively the number of surviving signal and background events after the selection cuts) related to the numerator and denominator of the three observables. The number of events in the $VBF$ ($\gamma\gamma$) channel is computed by applying a fixed $p_T$-cut (keeping other cuts the same as in Ref.~\cite{ATLAS-HL-LHC}) of 50 GeV on both the tagged jets instead of the $\eta$-dependent jet selection cuts used in the same reference. The numbers of events for $\gamma \gamma$ in $\mathcal{R}_1$, $Z \gamma$ in $\mathcal{R}_3$ and $WW^*$ in $\mathcal{R}_1$ and $\mathcal{R}_3$ are obtained after putting a 0-jet veto and demanding only $ggF$ events. The superscripts $num$ and $den$ signify the numerators and denominators of the three observables.} \label{tab:Nev} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline $\mathcal{R}_1$ & $\mathcal{R}_2$ & $\mathcal{R}_3$ \\ \hline 2.87 \% & 13.83 \% & 29.63 \% \\ \hline \end{tabular} \caption{Statistical uncertainty for the observables $\mathcal{R}_1$, $\mathcal{R}_2$ and $\mathcal{R}_3$.
The numbers are obtained after doubling the number of signal and background events given in Table~\ref{tab:Nev} in order to account for both the ATLAS and CMS experiments.} \label{tab:stat} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline & $\mathcal{R}_1$ & $\mathcal{R}_2$ & $\mathcal{R}_3$ \\ \hline Numerator & 2.5\% ($\gamma\gamma$ in $ggF$) & 9.1\% ($\gamma\gamma$ in $VBF$) & 3.1\% ($Z\gamma$ in $ggF$) \\ \hline Denominator & 3.4\% ($WW^*$ in $ggF$) & 5.0\% ($\gamma\gamma$ in $WH$) & 2.8\% ($WW^*$ in $ggF$) \\ \hline \end{tabular} \caption{Systematic uncertainties used in our analysis to compute the total uncertainties related to the three observables. The numbers shown here are combinations of various types of relevant systematic uncertainties, added in quadrature, taken from Refs.~\cite{Aad:2014eha,ATLAS:2014aga,Aad:2014fia}.} \label{tab:systU} \end{table} We further assume that the percentage uncertainties remain the same even after the inclusion of the anomalous couplings. In order to illustrate how the uncertainties are taken into consideration, we list the theoretical uncertainties related to the relevant Higgs BRs and the total width in Table~\ref{tab:BR-width-SM}, and those related to the various production cross-sections in Table~\ref{tab:sigma-SM}. In Table~\ref{tab:Nev}, we present the number of surviving events after the selection cuts in the SM at 14 TeV with 3000 fb$^{-1}$ integrated luminosity in the pure production modes. These numbers are taken from Refs.~\cite{ATLAS-HL-LHC,ATLAS-HL-LHC-gaga}, except for the $\gamma\gamma$ channel in the $VBF$ production mode, which we estimate by applying a fixed $p_T$-cut (keeping other cuts the same as in Ref.~\cite{ATLAS-HL-LHC}) of 50 GeV on both the tagged jets instead of the $\eta$-dependent jet selection cuts used in the same reference.
The number of events has been computed by removing the contaminations from other production mechanisms, which reduces the number of events and hence enhances the statistical uncertainties (which roughly go as $\sqrt{N_S+N_B}/N_S$, with $N_S$ and $N_B$ being respectively the number of surviving signal and background events after the selection cuts). For instance, the reported number of $\gamma\gamma$ events for an integrated luminosity of 3000 fb$^{-1}$ is $49200$, with a $3\%$ contamination from $VBF$ (Table 3 in Ref.~\cite{ATLAS-HL-LHC}). In our analysis we have used $N_S=47724$ ($=0.97 \times 49200$) to compute the statistical uncertainty. Similarly, a $30\%$ contamination in the $VBF$ category due to $ggF$ (Table 3 in Ref.~\cite{ATLAS-HL-LHC}) has also been taken into consideration. In doing so, we are giving conservative estimates of the statistical uncertainties; all entries in Table~\ref{tab:Nev} are shown after removing the contamination. We must note that, while computing the statistical uncertainties (as shown in Table~\ref{tab:stat}) for all three ratios, we double the number of events in Table~\ref{tab:Nev} to roughly accommodate the two independent experiments to be performed by ATLAS and CMS. Here, we assume that ATLAS and CMS will analyse the same channels with a similar set of selection cuts and will obtain roughly the same number of events in the actual experiment. It is also assumed that the overall performance of ATLAS and CMS will be similar, integrated over a large luminosity. In future, when the data become available, one will be able to compute the exact statistical uncertainties. However, we must note that one should actually take the number of events in the \textit{side-band} ($N_{side-band}$) in order to compute the statistical uncertainties. The procedure we follow gives conservative values for the statistical uncertainties.
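The statistical-uncertainty estimate described above can be sketched as follows. The counts are those of Table~\ref{tab:Nev}, doubled to mimic combining ATLAS and CMS, with the numerator and denominator contributions added in quadrature. This combination is our reading of the procedure: it closely reproduces the $\mathcal{R}_1$ and $\mathcal{R}_3$ entries of Table~\ref{tab:stat}, while the $\mathcal{R}_2$ entry depends more delicately on how the contaminated $VBF$ and $WH$ categories are treated.

```python
import math

# Signal/background counts from Table (tab:Nev): (numerator, denominator).
counts = {
    "R1": ((47724, 3.16e6), (40850, 366450)),
    "R2": ((194, 1041), (238, 995)),
    "R3": ((1989, 691931), (40850, 366450)),
}

def stat_uncert(NS, NB, n_expts=2):
    """Relative statistical uncertainty sqrt(NS + NB) / NS, with counts
    scaled by n_expts to mimic combining ATLAS and CMS."""
    NS, NB = n_expts * NS, n_expts * NB
    return math.sqrt(NS + NB) / NS

for name, (num, den) in counts.items():
    # Numerator and denominator uncertainties added in quadrature.
    total = math.hypot(stat_uncert(*num), stat_uncert(*den))
    print(f"{name}: {100 * total:.2f}%")
```

For $\mathcal{R}_1$ this yields about 2.87\%, in agreement with Table~\ref{tab:stat}.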
In future, the actual experiments will provide us with $N_{side-band}$, which will allow us to compute accurate statistical uncertainties. However, the \textit{side-band} analysis is beyond the scope of this paper, as the data for the 14 TeV run at 3000 fb$^{-1}$ are yet unavailable. We also use some systematic uncertainties in our analysis, as listed in Table~\ref{tab:systU} (Refs.~\cite{Aad:2014eha,ATLAS:2014aga,Aad:2014fia}). It is quite expected that in the future various systematic uncertainties will be reduced through improved modelling. To be conservative, we have used the various important uncorrelated systematic uncertainties as used in Refs.~\cite{Aad:2014eha,ATLAS:2014aga,Aad:2014fia} for the 7+8 TeV analyses. For the observable $\mathcal{R}_1$, since we are applying the same jet veto (\textit{i.e.} 0-jet category), the systematic uncertainties related to the jet energy scale, jet vertex fraction etc. will not be present. On the other hand, due to the different final states, systematic uncertainties related to the photon and lepton identification and isolation, missing energy trigger etc. will remain. In a similar fashion, for $\mathcal{R}_2$ and $\mathcal{R}_3$ various correlated systematic uncertainties will cancel between their respective numerators and denominators. Next, we consider the ratio $\mathcal{R}_1$ in the light of both the existing data and those predicted for the high-energy run. For $\mathcal{R}_2$ and $\mathcal{R}_3$, only a discussion in terms of 14 TeV rates is relevant, as the currently available results have insufficient statistics on these. \subsection{$\mathcal{R}_1$ @ 7$+$8 TeV} Before predicting the bounds from the 14 TeV HL run, let us form an idea about the constraints from the 7+8 TeV Higgs data in the $\gamma\gamma$ and $WW^*$ channels.
In Table~\ref{tab:mu-gagaWW}, we show the \emph{exclusive} signal strengths in the $\gamma\gamma$ and $WW^*$ final states through the $ggF$ production mode as reported by ATLAS~\cite{Aad:2014eha,ATLAS:2014aga} and CMS~\cite{Khachatryan:2014ira,CMS:2014ega}. We must emphasize that the categorization introduced by the ATLAS and CMS experiments is used to enhance the sensitivity to the Higgs boson signal (Tables II and III in Ref.~\cite{Aad:2014eha}). The signal strengths ($\mu$) shown in Fig.~17 of that reference include these contaminations. These signal strengths are further combined into specific production categories, as shown in Fig.~18. For instance, $\mu$ for the \textit{ggF categories} is the combination of four categories, \textit{viz.} central low $P_{T_t}$, central high $P_{T_t}$, forward low $P_{T_t}$ and forward high $P_{T_t}$. Therefore, the $\mu$ for specific categories in Fig.~18 is not \emph{exclusive}. However, in obtaining the $\mu$ for a specific production mode in Fig.~19, the effect of contamination is properly removed (the amount of contamination being known from Monte-Carlo simulation for the SM), and therefore these are the \emph{exclusive} signal strengths. The removal of contamination includes not only the subtraction of production mechanisms that are not of interest, but also the propagation of errors: the experiments have taken into account the impact on the statistical, systematic and theoretical errors in extracting the \emph{exclusive} signal strengths. Therefore, an \emph{exclusive} $\mu$ will generally carry a larger uncertainty. For example, one can see that the error on the global signal strength is significantly smaller than that extracted for individual production mechanisms. 
For instance, in Ref.~\cite{Aad:2014eha}, where ATLAS reports on signal strengths in the di-photon channel, the global signal strength is $\mu = 1.17 \pm 0.27$, corresponding to an accuracy of 23\%, whereas the signal strength for gluon-gluon fusion is $\mu_{ggF} = 1.32 \pm 0.38$, corresponding to an accuracy of 29\%. The same applies to the results reported by CMS in Ref.~\cite{Khachatryan:2014ira}. Here we statistically combine the signal strengths for a particular final state as reported by the two experiments, using the following relations \begin{equation} \frac{1}{\bar{\sigma}^2}=\sum_{i} \frac{1}{\sigma_i^2};~~~~~ \frac{\bar{\mu}}{\bar{\sigma}^2}=\sum_{i} \frac{\mu_i}{\sigma_i^2}, \label{eq:comb} \end{equation} where $\bar{\sigma}$ ($\bar{\mu}$) refers to the combined $1\sigma$ uncertainty (signal strength) and $\sigma_i$ ($\mu_i$) signifies the corresponding uncertainties (signal strengths) in different experiments. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline Experiment & $\mu(H\to \gamma \gamma)$ in $ggF$ & $\mu(H \to WW^* \to 2\ell \slashed{E}_T)$ in $ggF$ \\ \hline ATLAS (@ 7+8 TeV) & $1.32^{+0.38}_{-0.38}$ & $1.02^{+0.29}_{-0.26}$ \\ \hline CMS (@ 7+8 TeV) & $1.12^{+0.37}_{-0.32}$ & $0.75^{+0.29}_{-0.23}$ \\ \hline Combined & $1.21 \pm 0.26$ & $0.88 \pm 0.19$ \\ \hline \end{tabular} \caption{Measured Higgs signal strengths in the $\gamma \gamma$ and $WW^*$ modes, with the Higgs produced through the $ggF$ channel only, using $\sqrt{s} = 7 + 8$ TeV data from ATLAS~\cite{Aad:2014eha,ATLAS:2014aga} and CMS~\cite{Khachatryan:2014ira,Khachatryan:2014jba}. 
Here we have combined the ATLAS and CMS signal strengths for a particular final state and production mode using Eq.~\ref{eq:comb}.} \label{tab:mu-gagaWW} \end{center} \end{table} We compute all the surviving correlated theory errors and subtract them in quadrature from the errors in the numerator and denominator of the ratio $\mathcal{R}_1$, \textit{viz.} $\mathcal{R}_{1}^{num.}=\mu_{H\to \gamma \gamma}^{\tr{ggF}} \times (\sigma_{\tr{ggF}}\times \tr{BR}_{H\to \gamma \gamma})^{\tr{SM}}$ and $\mathcal{R}_{1}^{den.}=\mu_{H\to WW^*}^{\tr{ggF}} \times (\sigma_{\tr{ggF}}\times \tr{BR}_{H\to WW^*})^{\tr{SM}} \times \sum_{\ell} \tr{BR}^2_{W\to \ell\nu_{\ell}}$~\footnote{For instance, the error associated with the combined (ATLAS+CMS) $\mu^{ggF}(H\to \gamma \gamma)$, \textit{i.e.} $\pm 0.26$, consists of theoretical, statistical and systematic uncertainties; subtracting the CThU ($\pm 0.13$) in quadrature, we get $\pm 0.22$, which finally contributes to the uncertainty in the numerator of $\mathcal{R}_1$.}. In Fig.~\ref{fig:R18}, the red line is the theoretically computed $\mathcal{R}_1$, which is independent of the centre-of-mass energy since $\mathcal{R}_1$ is actually a ratio of two BRs. The outer (light green) band shows the uncertainty comprising the uncorrelated theoretical, statistical and systematic parts, while the inner (dark green) band represents the total uncorrelated theory uncertainty. The black dashed line gives the experimental central value of $\mathcal{R}_1$. The ratio $\mathcal{R}_1$ is almost completely dominated by $\tr{BR}_{H \to \gamma \gamma}$ (since $\tr{BR}_{H \to WW^*}$ is not very sensitive to HDOs) and is therefore highly sensitive to the operators $\mathcal{O}_{WW}$ and $\mathcal{O}_{BB}$. The parabolic dependence of $\tr{BR}_{H \to \gamma \gamma}$ on $f_{WW}$ and $f_{BB}$ leads to two disjoint allowed ranges of $f_{WW}=f_{BB} \approx [-3.32,-2.91] \cup [0.12,0.57]$, as shown in Fig.~\ref{fig:R18}. 
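The inverse-variance combination of Eq.~(\ref{eq:comb}), and the quadrature subtraction of the correlated theory uncertainty, can be checked with a short script. The function name is ours, and the asymmetric CMS errors are symmetrized to $\pm 0.345$ purely for this sketch; the result reproduces the combined $\gamma\gamma$ entry of Table~\ref{tab:mu-gagaWW}.

```python
import math

def combine(mus, sigmas):
    """Inverse-variance weighted combination of signal strengths."""
    w = [1.0 / s**2 for s in sigmas]
    sigma_bar = 1.0 / math.sqrt(sum(w))
    mu_bar = sum(m * wi for m, wi in zip(mus, w)) * sigma_bar**2
    return mu_bar, sigma_bar

# H -> gamma gamma in ggF: ATLAS 1.32 +/- 0.38, CMS 1.12 +0.37/-0.32
mu, sig = combine([1.32, 1.12], [0.38, 0.345])
print(round(mu, 2), round(sig, 2))  # 1.21 0.26

# subtract the correlated theory uncertainty (CThU ~ 0.13) in quadrature
sig_uncorr = math.sqrt(sig**2 - 0.13**2)
print(round(sig_uncorr, 2))         # 0.22
```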
We should mention that the region between these two allowed ranges exhibits extremely low values of $\tr{BR}_{H\to \gamma\gamma}$, because destructive interference between the SM and HDO contributions leads to poor statistics. If both $\mathcal{O}_{WW}$ and $\mathcal{O}_{BB}$ are present simultaneously with almost equal magnitudes and opposite signs, the observable $\mathcal{R}_1$ closely mimics the SM expectation; to probe that `special' region of parameter space we need other observables such as $\mathcal{R}_2$. The operators $\mathcal{O}_{W}$ and $\mathcal{O}_{B}$ are mostly insensitive to this observable, mainly because $\tr{BR}_{\gamma\gamma}$ is independent of these operators and the dependence of $\tr{BR}_{WW^*}$ on all four operators is comparatively weak (see Fig.~\ref{fig:BRww}). We compare our results with the existing bounds on these operators obtained in the literature. For instance, the limits obtained in Fig. 3 (left panel) of Ref.~\cite{Masso:2012eq} on $\mathcal{O}_{WW} \, \textrm{and} \, \mathcal{O}_{BB}$ at 68\% CL are $[-3.23,-2.61] \cup [-0.35,0.27]$ (in TeV$^{-2}$) for the ATLAS case. In obtaining these limits, they varied one operator at a time, an approach similar to ours. Our bounds are in very good agreement with their results; the slightly different limits we obtain are due to our use of more recent data. \begin{figure*}[ht!] \centering \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R1_8_fWW}\label{fig:BRyywwfWW}}~~~ \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R1_8_fWW_mag}\label{fig:BRyywwfBB}} \caption{(a) $\mathcal{R}_1$ versus $f_{WW}/\Lambda^2$ (TeV$^{-2}$) and (b) the same plot on a magnified scale. Plots (a) and (b) are identical for $f_{BB}/\Lambda^2$. The red line is the theoretical expectation in the presence of HDOs. 
The inner band (dark green) shows the uncorrelated theoretical uncertainty (UThU) and the outer (light green) band shows the total surviving uncorrelated uncertainty (UU) (uncorrelated theoretical + statistical + systematic) at 7+8 TeV, computed using the $\mu_{\gamma \gamma}$ and $\mu_{WW^*}$ (CMS+ATLAS) results. The black dotted line is the corresponding central value. The uncertainty bands correspond to 68\% CL.} \label{fig:R18} \end{figure*} \subsection{$\mathcal{R}_1$ @ 14 TeV} Next, we present a projected study of $\mathcal{R}_1$ for the 14 TeV run at 3000 fb$^{-1}$ of integrated luminosity. It should be noted that the systematic uncertainties used here are those of the 8 TeV run; we assume that they will not change significantly for the HL-LHC at 14 TeV. The inner bands, more clearly noticeable in Fig.~\ref{fig:BRyywwfBB14}, contain only the uncorrelated theoretical errors, while the statistical and systematic errors are compounded in the outer bands. Clearly, the uncertainty is reduced compared to $\mathcal{R}_1$ (@ 7+8 TeV), and we get an even smaller window, $f_{WW}$ and $f_{BB} \approx [-2.76, -2.65] \cup [-0.06,0.04]$ TeV$^{-2}$, as shown in Fig.~\ref{fig:R114}. The difference in this case is that the projected band is centred on the SM, in contrast to the 7+8 TeV case, where the ratio of the experimental signal strengths was treated as the reference. \begin{figure*}[ht!] \centering \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R1_14_fWW}\label{fig:BRyywwfWW14}}~~~ \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R1_14_fWW_mag}\label{fig:BRyywwfBB14}} \caption{(a) $\mathcal{R}_1$ versus $f_{WW}/\Lambda^2$ (TeV$^{-2}$) and (b) the same plot on a magnified scale. Plots (a) and (b) are identical for $f_{BB}/\Lambda^2$. The red line is the theoretical expectation in the presence of HDOs. 
The inner band (dark green) shows the uncorrelated theoretical uncertainty (UThU) and the outer band (light green) shows the total uncorrelated uncertainty (UU) (uncorrelated theoretical + statistical + systematic) at 14 TeV with 3000 fb$^{-1}$ of integrated luminosity. The black dotted line is the corresponding central value. The uncertainty bands correspond to 68\% CL.} \label{fig:R114} \end{figure*} \subsection{$\mathcal{R}_2$ @ 14 TeV} \label{subsec:R214} We now show the potential of $\mathcal{R}_2$ in deriving bounds on some of the operator coefficients at 14 TeV. As is evident from Eq.~(\ref{eq:R2}), this ratio has the capacity to probe $\mathcal{O}_{W}$, which cannot be constrained from $\mathcal{R}_1$. On the other hand, the operator $\mathcal{O}_{BB}$, though it can be probed via $\mathcal{R}_1$, fails to show any marked effect on $\mathcal{R}_2$, because $\tr{BR}_{H\to \gamma \gamma}$ cancels in the ratio as defined by us; moreover, $\mathcal{O}_{BB}$ does not modify $\sigma_{\tr{WH}}$. $\mathcal{R}_2$ is, however, sensitive to the operator $\mathcal{O}_{WW}$, since both $\sigma_{\tr{VBF}}$ and $\sigma_{\tr{WH}}$ depend on it. Closely following the ATLAS analyses in the context of the high-luminosity LHC run, we have used a trigger cut of 50 GeV on the jet $p_T$, instead of the $\eta$-dependent jet $p_T$ cut used in Ref.~\cite{ATLAS-HL-LHC}. The reason is that a flat $p_T$ cut will certainly yield a less pessimistic number of final-state events than the $\eta$-dependent cuts, while performing just as well in suppressing the background. We therefore estimate a slightly larger number of events, {\it i.e.} we obtain a better cut efficiency in the flat $p_T$ case than predicted by ATLAS. For the $WH$ production mode, we use a matched sample with $WH+0,1,2$ jets, with the $W$ decaying leptonically. Finally, we demand a maximum of one jet in our analysis. 
In selecting this $0+1$ jet sample from a matched two-jet sample, we encounter another theoretical scale uncertainty, as described in Ref.~\cite{Stewart:2011cf}. We have estimated this uncertainty as follows: \begin{equation} \label{eq.stwtack} \Delta^{th.}=\frac{\sigma(pp\to WH\, +\, \geq 2\, \textrm{jets})}{\sigma^{NNLO}(pp \to WH)}\Bigg\vert_{m_H} \times \Delta \sigma(pp\to WH\, +\, \geq 2\, \textrm{jets})(\mu_F,\mu_R), \end{equation} where $\Delta \sigma(pp\to WH\, +\, \geq 2\, \textrm{jets})$ is the maximum deviation of the exclusive 2-jet cross-section computed at $\mu_F=\mu_R=m_H$ from the ones computed by varying $\mu_F$ and $\mu_R$ between $m_H/2$ and $2 m_H$. \begin{figure*}[ht!] \centering \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R2_14_fWW}\label{fig:R2fWW}}~~~ \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R2_14_fW}\label{fig:R2fW}} \caption{The ratio $\mathcal{R}_2$ versus (a) $f_{WW}/\Lambda^2$ (TeV$^{-2}$), (b) $f_{W}/\Lambda^2$ (TeV$^{-2}$) for the 14 TeV analysis with 3000 fb$^{-1}$. The red line is the theoretical expectation in the presence of HDOs. The inner band (dark green) shows the uncorrelated theoretical uncertainty due to PDF+$\alpha_s$, QCD-scale and $\Delta^{th.}$, which is defined in Eq.~(\ref{eq.stwtack}). The outer band (light green) shows the statistical and systematic uncertainties compounded with the uncorrelated theoretical part. The black dotted line is the corresponding SM value. The uncertainty bands correspond to 68\% CL.} \label{fig:R214} \end{figure*} In constructing $\mathcal{R}_2$, we include the modified cut-efficiencies~\cite{Banerjee:2013apa,Gainer:2013rxa} for both the $VBF$ and $WH$ channels. Even though we stick to small values of $f_i$, for which the modifications of such efficiencies from the SM values are small, we still incorporate them to make the study more rigorous. 
In computing the statistical uncertainties, we take the relevant numbers from the 14 TeV projected study by ATLAS (see Refs.~\cite{ATLAS-HL-LHC,ATLAS-HL-LHC-gaga}). Besides, we also suggest tagging a single jet for $VBF$, which reduces the statistical uncertainty by a factor of $\sqrt{2}$~\cite{Kruse:2014pya}. The $\sqrt{2}$ factor takes into account the number of events as well as the contamination due to $ggF$, as can be seen in Table 1 of Ref.~\cite{Kruse:2014pya}. In Fig.~\ref{fig:R214}, we present $\mathcal{R}_2$ as a function of $f_{WW}$ and $f_W$, taken one at a time, for an integrated luminosity of $\mathcal{L}=3000$ fb$^{-1}$. The outer band (light green) shows the statistical and systematic uncertainties compounded with the uncorrelated theoretical part. The central black dashed line shows the SM expectation for $\mathcal{R}_2$. We can see in Fig.~\ref{fig:R214} that very small values of the HDO coefficients can be probed by measuring the observable $\mathcal{R}_2$. For $f_{WW}$, one can corner the allowed region to a small window of $[-1.96,+1.62]$, and for $f_W$ the range would be $[-2.10,+2.50]$. Probing such small values of the coefficients would be a definite improvement on existing knowledge. \subsection{$\mathcal{R}_3$ @ 14 TeV} \begin{figure*}[ht!] \centering \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R3_14_fB}\label{fig:R3fB}} \subfloat[]{\includegraphics[width=7cm,height=7cm]{figures/R3_14_fB_mag}\label{fig:R3fBmag}} \caption{The ratio $\mathcal{R}_3$ versus $f_{B}/\Lambda^2$ (TeV$^{-2}$) at 14 TeV with 3000 fb$^{-1}$. The red line is the theoretical expectation in the presence of HDOs. The inner band (dark green) shows the uncorrelated theoretical uncertainty (UThU) and the outer band (light green) shows the total uncorrelated uncertainty (UU) due to the statistical, systematic and uncorrelated theoretical parts. These uncertainty bands are for $\mathcal{R}_3$ at 14 TeV. 
The black dotted line is the corresponding SM value. The uncertainty bands correspond to 68\% CL.} \label{fig:R314} \end{figure*} The operator $\mathcal{O}_{B}$ appears only in the $HZZ$ and $HZ\gamma$ couplings. As seen in Fig.~\ref{fig:BRzz}, the sensitivity to $\mathcal{O}_{B}$ is too low, and hence $H\to ZZ^*$ will not give a useful bound on $f_{B}/\Lambda^2$. Recent measurements by ATLAS (CMS) put bounds on the observed signal strength of $H\to Z\gamma$ at about 11 (9.5) times the SM expectation at $95$\% confidence level~\cite{Aad:2014fia,Chatrchyan:2013vaa}. Instead of using these weak signal strengths, we perform an analogous projected study of $\mathcal{R}_3$ at 14 TeV in the same spirit as $\mathcal{R}_1$ at 14 TeV. From Fig.~\ref{fig:R314}, we find that the projected bounds on $f_{B}/\Lambda^2$ are $[-8.44,-7.17]\cup [-0.72,+0.56]$. The region in between is again inaccessible due to poor statistics: in this region $\tr{BR}_{H\to Z \gamma}$ becomes insignificant, for reasons similar to those mentioned for $H\to \gamma \gamma$. The inner band (dark green) includes the uncorrelated theoretical uncertainties due to the partial decay widths of $H\to Z \gamma$ and $H \to WW^*$. The outer band (light green), in addition to the theoretical uncertainties, contains the statistical and systematic uncertainties. As discussed earlier, a few types of correlated systematic uncertainties, related to the uncertainty in luminosity, lepton identification and isolation, etc., will cancel in the ratio $\mathcal{R}_3$. On the other hand, uncertainties related to photon identification, isolation, etc.\ will remain in the analysis. 
\begin{table} \centering \begin{tabular}{|c|c|c|c|c|} \hline Observable & $\mathcal{O}_{WW}$ & $\mathcal{O}_{BB}$ & $\mathcal{O}_{W}$ & $\mathcal{O}_{B}$ \\ \hline & $[-3.32,-2.91]$ & $[-3.32,-2.91]$ & Not & Not \\ $\mathcal{R}_1$ @ 7+8 TeV & $\cup$ & $\cup$ & bounded & bounded\\ & $[+0.12,+0.57]$ & $[+0.12,+0.57]$ & & \\ \hline & $[-2.76,-2.65]$ & $[-2.76,-2.65]$ & Not & Not \\ $\mathcal{R}_1$ @ 14 TeV & $\cup$ & $\cup$ & bounded & bounded \\ & $[-0.06,+0.04]$ & $[-0.06,+0.04]$ & & \\ \hline $\mathcal{R}_2$ @ 14 TeV & $[-1.96,+1.62]$ & Not & $[-2.10,+2.50]$ & Not \\ & & bounded & & bounded \\ \hline & Not & Not & Not & $[-8.44,-7.17]$ \\ $\mathcal{R}_3$ @ 14 TeV & used & used & used & $\cup$\\ &&&& $[-0.72,+0.56]$ \\ \hline \end{tabular} \caption{Summary of the allowed regions of the HDO coefficients obtained from the three observables. $\mathcal{R}_3$ is not used to constrain the operators $\mathcal{O}_{WW},\mathcal{O}_{BB}$ and $\mathcal{O}_W$, as discussed in Sec.~\ref{subsec:R3}.} \label{tab:limit} \end{table} In Table~\ref{tab:limit}, we summarize the regions of parameter space allowed by the three ratios $\mathcal{R}_1$, $\mathcal{R}_2$ and $\mathcal{R}_3$. We present $\mathcal{R}_1$ using the combined ATLAS+CMS data for the 7+8 TeV run, together with a projected study for all three observables at 14 TeV with an integrated luminosity of 3000 fb$^{-1}$. The allowed regions of $f_{WW}$ and $f_{BB}$ shrink at the 14 TeV 3000 fb$^{-1}$ run compared to the current data. Using the ratio $\mathcal{R}_2$, one can also put bounds on $f_{WW}$ and $f_{W}$. As mentioned earlier, there is a `special' region of parameter space where $\mathcal{R}_1$ mimics the SM expectation; therefore, $\mathcal{R}_2$ can also be used to infer the presence of $\mathcal{O}_{WW}$ with `special' values of the coefficient $f_{WW}$. 
The operator $\mathcal{O}_B$ does not show any appreciable effect in any Higgs production or decay mode except in $\tr{BR}_{H\to Z\gamma}$. Therefore, the ratio $\mathcal{R}_3$ is constructed to constrain $f_{B}$, and it does so significantly, as is evident from Table~\ref{tab:limit}. \section{Summary and conclusions} \label{summary} We have investigated how well one can constrain dimension-6 gauge-invariant operators inducing anomalous $HVV$ interactions. Probing the gauge-invariant operators individually is, we feel, important, since they can point to new physics above the electroweak symmetry breaking scale. While the operators contributing to $H\to \gamma\gamma$ are subject to the strongest limits so far from the (7+8) TeV data, the remaining ones are relatively loosely constrained, in spite of the bounds coming from precision electroweak observables. At any rate, it is necessary to reduce uncertainties as much as possible, since any realistically conceived new physics is likely to generate such operators with coefficients no greater than $\approx \mathcal{O}(1)$ TeV$^{-2}$. We show that a good opportunity to probe them at this level, and to improve spectacularly over the existing constraints, arises if event ratios in various channels are carefully studied. These include both ratios of events in different final states with the same Higgs production channel, and those where a Higgs produced by different production modes ends up decaying into the same final state. While a majority of the theoretical uncertainties cancel in the former category, the latter allow us to probe those cases where some dimension-6 operators shift the rates in the numerator and the denominator in opposite directions. We find that, after a thorough consideration of all uncertainties, all the couplings can be pinned down to intervals of width $\approx \mathcal{O}(1)$ TeV$^{-2}$ using 3000 fb$^{-1}$ of integrated luminosity at 14 TeV. 
Even with 300 fb$^{-1}$, a clear improvement over the existing constraints is expected, and the results are less affected by uncertainties than those of any method applied hitherto. However, we must mention that this approach should be complemented by a study of differential distributions, which is beyond the scope of this paper. \section*{Acknowledgements} The work of S.B., T.M. and B. Mukhopadhyaya was partially supported by funding available from the Department of Atomic Energy, Government of India for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute. B. Mellado acknowledges the hospitality of RECAPP, Harish-Chandra Research Institute, during the collaboration. \bibliographystyle{JHEP}
\section{Introduction} Deep submillimeter surveys provide a probe of galaxies which is almost independent of luminosity for a wide redshift range $1\,{<}\,z\,{<}\,5$, because of the negative K-correction in the infrared (e.g.~\citealp{blain2002}). The Submillimeter Common User Bolometer Array (SCUBA; \citealp{holland1999}) and Max-Planck Millimeter Bolometer (MAMBO, \citealp{kreysa1998}) have now discovered several hundred submillimeter galaxies (SMGs: e.g.~\citealp{smail1997}, \citealp{hughes1998}, \citealp{barger1998}, \citealp{borys2002}, \citealp{chapman2002}, \citealp{webb2003}, \citealp{greve2004}, \citealp{pope2005}, \citealp{coppin2006}). Our knowledge of SMGs has been hampered by the large beamsize of the submm telescopes, which makes it difficult to identify counterparts in the optical and near-infrared. For example, in the Great Observatories Origins Deep Survey (GOODS) North field there are typically ${\sim}\,10$ {\sl Hubble Space Telescope\/} ({\sl HST\/}) Advanced Camera for Surveys (ACS) optical galaxies within a single SCUBA 15 arcsec beam. Deep radio surveys have proven to be one of the best ways to identify the counterparts to SMGs (e.g.~\citealp{ivison2000}, \citealp{barger2000}, \citealp{pope2006}). The star formation processes that heat the dust responsible for the submm light also produce radio emission, as evidenced by the well known far-infrared--radio correlation. Taking advantage of this, SMGs have been matched to radio sources, which have much better positional accuracy, and the optical/near-IR counterparts can then be found after the radio identification has been made. This is possible because the density of sources in the deepest radio images is much less than that of optical galaxies, so the probability of a chance coincidence is lower. The current knowledge of SMGs is biased towards the radio identified sources. 
The SMGs have a median redshift of about 2.2 and bolometric infrared (IR) luminosities $L_{\rm IR}\,{>}\,10^{12}\,{\rm L}_\odot$ \citep{chapman2004, chapman2005}. The optically identified SMGs are optically faint ($i_{775}\,{\gtrsim}\,22$) and red ($i_{775}-K_{\rm s}\,{\simeq}\,2.3$ in AB magnitudes), with about 30\% having colors consistent with Extremely Red Objects (EROs, \citealp{pope2005}). Thus SMGs are thought to be high-redshift, dusty analogs of local ultraluminous infrared galaxies (ULIRGs). The spectral energy distributions (SEDs) of SMGs have traditionally been fit with local ULIRGs as templates, in particular Arp220 (e.g.~\citealp{barger2000}), which has an effective dust temperature of about 42--47$\,$K (\citealp{klaas1997}, \citealp{dunne2000}). Chapman et al.~(2005) found a typical dust temperature of $36\pm7\,$K and a median $L_{\rm IR}\,{=}\,8.5\times10^{12}\,{\rm L}_\odot$ for their SMG sample. This is cooler than local ULIRGs, which have an average dust temperature of $43\pm6\,$K (based on the Dunne et al.~2000 sample). The Far-IR BACKground (FIRBACK) study of 170$\,\mu$m selected galaxies found two ULIRGs between $0.5\,{<}\,z\,{<}\,1$ which have SEDs cooler and less luminous than that of Arp220 \citep{sajina2006}. A 350$\,\mu$m study of radio-detected SMGs found temperatures of $35\pm3\,$K (Kovacs et al.~2006). Using the $24\,\mu$m imaging from the {\sl Spitzer\/} Legacy Project GOODS, \cite{pope2006} securely identify 60\% of SMGs in this field in the mid-infrared (MIR), and have tentative counterparts for another 34\%. It was found that the observed MIR--submm--radio SEDs of the SMGs peak at longer wavelengths than those of local ULIRGs and are best fit by models with temperatures of about $30\,$K (Pope et al.~2006). There is thus an emerging picture that SMGs are cooler than previously thought. Dust temperature affects the inferred IR luminosity, and hence the star formation rate (SFR), derived for these galaxies. 
Studies of SMGs have often assumed dust temperatures of $40\,$K (see \citealp{blain2002}). A drop in temperature from $40\,$K to $35\,$K decreases the inferred IR luminosity by about a factor of two. Better knowledge of the temperature of SMGs is thus crucial for more accurate luminosity estimates. The advent of the {\sl Spitzer Space Telescope\/} makes it possible to study the MIR and FIR properties of SMGs in detail for the first time. At the median SMG redshift of $z\,{\sim}\,2$, rest-frame PAH and silicate features are redshifted into the 24$\,\mu$m band. This can make it difficult to determine the total infrared luminosities and to fit model SEDs well to the FIR, which does not always correlate with the MIR. The 70$\,\mu$m band of the Multiband Imaging Photometer for {\sl Spitzer\/} (MIPS) instrument is not affected by PAH or silicate features for redshifts less than about 3, while the 160$\,\mu$m band is not affected at all. Moreover, these MIPS bands are closer to the FIR peak than the MIR data points used in previous studies. The longer wavelength MIPS bands should therefore be extremely useful for studying SMGs. The sensitivities and confusion limits at 70$\,\mu$m make this feasible, but difficult, and hence only the deepest MIPS data are likely to lead to SMG detections. In this paper we present a study of the FIR properties and SEDs of submm galaxies using the deepest 70 and 160$\,\mu$m data available for the GOODS-N field. In particular, we use the 70 and 160$\,\mu$m data to check whether the SEDs of distant SMGs are consistent with those of local ULIRGs. We assume a Hubble constant of $71\,{\rm km}\,{\rm s}^{-1}{\rm Mpc}^{-1}$, together with matter and cosmological constant density parameters of $\Omega_{\rm M}=0.27$ and $\Omega_{\rm \Lambda}=0.73$ in this paper. We also use the notation $S_{70}$, $S_{850}$, $S_{1.4}$, etc., throughout for the flux densities at $70\,\mu$m, $850\,\mu$m and $1.4\,$GHz, respectively. 
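The factor-of-two claim above can be illustrated with a simple scaling. For an optically thin grey body of fixed dust mass, $L_{\rm IR} \propto T^{4+\beta}$, where $\beta$ is the dust emissivity index; the value $\beta \approx 1.5$ used here is a common assumption, not a number taken from this paper.

```python
def luminosity_ratio(t_hot, t_cold, beta=1.5):
    """L ~ T^(4+beta) for an optically thin grey body of fixed dust mass."""
    return (t_hot / t_cold) ** (4.0 + beta)

# a 40 K -> 35 K drop roughly halves the inferred IR luminosity
print(round(luminosity_ratio(40.0, 35.0), 2))  # 2.08
```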
\section{The Submillimeter Sample} The SCUBA `super-map' of GOODS-N contains 35 robust 850$\,\mu$m detections. Details of the data reduction and source extraction can be found in Borys et al.~(2003) and Pope et al.~(2005). This submm image contains all publicly available SCUBA mapping data in this field taken up until 2004. Although $450\,\mu$m data were also taken, they unfortunately have essentially no constraining power. Of the 35 SCUBA sources, 33 have likely radio and/or {\sl Spitzer\/} counterparts using the Very Large Array (VLA), InfraRed Array Camera (IRAC) and MIPS 24$\,\mu$m data in GOODS-N (see Pope et al.~2006). Thirty of these SCUBA galaxies are within the ultra-deep 70$\,\mu$m image area. SCUBA sources with low signal-to-noise ratios (SNRs) may have measured flux densities which are boosted above their true values by a factor which depends on the source brightness and the local noise in the SCUBA map, and which can be estimated if the source counts are known (see e.g.~\citealp{coppin2005}). Therefore in this study we use the deboosted 850$\,\mu$m flux densities given in Pope et al.~(2006). \section{Spitzer 70 and 160$\,\mu$\lowercase{m} Observations} The MIPS 70$\,\mu$m observations were carried out during Cycle 1 of the General Observer program ({\sl Spitzer\/} program ID 3325, Frayer et al.~2006). The inner $10^\prime\times10^\prime$ of GOODS-N was mapped to a depth of $10.6\,$ksec. The data were taken in the small-field photometry mode of MIPS with 10-second Data Collection Events (DCEs). Each Astronomical Observation Request (AOR) consisted of an 8-position cluster map, and the observations were completed with 12 AORs in total. In addition to our GO data, we used the MIPS Guaranteed Time Observers (GTO) data (program ID 81, \citealp{dole2004}). These GTO data have an integration time of $600\,$sec. The raw data were processed off-line using the Germanium Reprocessing Tools ({\tt GERT}, version S13.1), following the algorithms derived by the MIPS team \citep{gordon2005}. 
Instrumental artifacts in the Basic Calibrated Data (BCDs) were removed using the filtering techniques adopted for the extragalactic First Look Survey (xFLS, \citealp{frayer2006}). The data were calibrated assuming an absolute flux calibration factor of $702\,{\rm MJy}\,{\rm sr}^{-1}$ per MIPS-70$\mu$m unit for stellar SEDs. The flux densities were multiplied by 1.09 to apply a galactic SED color correction. This assumes a power law SED of the form $f_\nu \propto \nu^\alpha$ and $\alpha = -1$, but the color corrections are similar for $\alpha = 0$ to $-3$. A more detailed discussion of the data reduction can be found in \cite{frayer2006b}. The final image achieves a sensitivity of ${\sim}\,0.6\,$mJy rms and we have cataloged 101 sources (over $120\,{\rm arcmin}^2$) with $S_{70}\,{>}\,2.3\,$mJy (SNR$\,{>}\,3\sigma$). We catalog a region slightly larger than the area of deepest coverage to include some relatively bright 70 $\mu$m sources and improve statistics. The source counts are presented in \cite{frayer2006b} and a full catalog will be presented in Huynh et al.~(in preparation). The 70$\,\mu$m image has a beam size of 18\farcs5 FWHM, and in the presence of Gaussian noise the 1$\sigma$ positional error of sources is of the order $0.5\,\theta_{\rm FWHM}/{\rm SNR}$, i.e. 3\arcsec\ for the faintest sources. The 160$\,\mu$m observations of the GOODS-N region were taken as part of the MIPS GTO program in 2004. These data were taken in the scan mode of MIPS and we applied the standard filtering techniques to the 160$\,\mu$m BCDs similar to what was used for the xFLS \citep{frayer2006}. The data were calibrated using a factor of $44.7\,{\rm MJy}\,{\rm sr}^{-1}$ per MIPS-160$\,\mu$m unit. The 160$\,\mu$m data have an effective integration time of 120 seconds and the 160$\,\mu$m image reaches a sensitivity of ${\sim}\,15\,$mJy rms. A multiplication of 1.04 was applied to the 160$\,\mu$m flux densities to color correct for galaxy SEDs. 
We also note that the 160$\,\mu$m light leak from 1--1.6$\,\mu$m is not a problem for these data because there are no blue sources brighter than $m_{\rm J}\,{\simeq}\,5.5$ in the field. \section{Identifications at 70 and 160$\,\mu$\lowercase{m}} The negative K-correction at submillimeter wavelengths means that SMGs are detectable in deep SCUBA images over a wide redshift range, $1\,{<}\,z\,{<}\,5$ (e.g.~\citealp{blain2002}). However, at 70 and 160$\,\mu$m, the K-correction does not compensate for the distance dimming, and the flux density of a galaxy with a given intrinsic luminosity drops steeply as a function of redshift (e.g.~\citealp{lagache2005}). This is reflected in the high median redshift of the SMGs ($z\,{\simeq}\,2.0$, Pope et al.~2006) compared to the 70$\,\mu$m sources ($z\,{\simeq}\,0.5$, Huynh et al.~in preparation). Hence the SMG and 70$\,\mu$m samples are unlikely to have much overlap, and we only expect to detect the low-redshift SMGs in deep 70$\,\mu$m imaging. We examined the 70$\,\mu$m image for counterparts to the SCUBA sources. To do this, the 70$\,\mu$m catalogue positions were compared to the IRAC positions of the SCUBA counterparts (Pope et al.~2006), searching within 10\arcsec\ of each submm counterpart. This search radius was chosen in order to take into account the typical positional uncertainties of low-SNR 70$\,\mu$m and SCUBA sources, added in quadrature. This procedure uncovered two secure identifications of SMGs at 70$\,\mu$m, GN26 and GN13 (following the naming convention of Pope et al.~2005). These sources have 70$\,\mu$m SNRs of 12 and 8, respectively, and hence 70$\,\mu$m positional uncertainties of about 1\farcs3 and 1\farcs5 (including the {\sl Spitzer\/} 1\arcsec\ pointing uncertainty). The positional offsets of the 70$\,\mu$m sources relative to the IRAC positions are 0.6 and 0.3 arcsec for GN26 and GN13, respectively, well within the positional uncertainties. 
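The positional uncertainties quoted above follow directly from the rule $\sigma \approx 0.5\,\theta_{\rm FWHM}/{\rm SNR}$, with the 1\arcsec\ pointing uncertainty added in quadrature. A rough sketch (function and constant names ours):

```python
import math

FWHM_70 = 18.5   # arcsec, 70 micron beam FWHM
POINTING = 1.0   # arcsec, Spitzer pointing uncertainty

def pos_error(snr, fwhm=FWHM_70, pointing=0.0):
    """1-sigma positional error ~ 0.5 * FWHM / SNR, with the pointing
    uncertainty added in quadrature when requested."""
    sigma = 0.5 * fwhm / snr
    return math.hypot(sigma, pointing)

print(round(pos_error(3), 1))                 # 3.1 : faintest (3 sigma) sources
print(round(pos_error(12, pointing=1.0), 1))  # 1.3 : GN26 (SNR = 12)
print(round(pos_error(8, pointing=1.0), 1))   # 1.5 : GN13 (SNR = 8)
```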
The probability that one or more 70$\,\mu$m sources lies randomly within a distance $\theta$ of a SCUBA counterpart is: \begin{equation} P = 1 - \exp (-\pi n \theta^2) \, , \end{equation} given a surface density $n$ of 70$\,\mu$m sources (often called the $P$-statistic, e.g.~\citealp{downes1986}). Using the 70$\,\mu$m source density of 101 sources over 120 arcmin$^2$, the probability of any 70$\,\mu$m source lying within 1\arcsec\ of a SCUBA counterpart is 0.07\%, so the 70$\,\mu$m matches to GN13 and GN26 are likely to be real. Two more SMGs (GN12 and GN32) have a nearby 70$\,\mu$m source, at distances of 4.7\arcsec\ and 9.5\arcsec, respectively. The 70$\,\mu$m sources near GN12 and GN32 have IRAC and 24$\,\mu$m counterparts which do not match the identifications for the SCUBA source (Pope et al.~2006). Although it is likely that some fraction of the 70$\,\mu$m flux density is associated with the submm galaxy, it is difficult to determine this fraction because of the other 24$\,\mu$m sources in the vicinity. Using Equation~(1), the probability that exactly 2 of the 30 SCUBA sources have a random 70$\,\mu$m source within a distance of 10\arcsec\ is 28\%. This is consistent with the 70$\,\mu$m sources near GN12 and GN32 being random matches. At 160$\,\mu$m the only SMG detected is GN26. The beamsize at 160$\,\mu$m is 40\arcsec, so 70$\,\mu$m and/or SCUBA sources which are close together may be blended into one 160$\,\mu$m source. Examination of the 70$\,\mu$m MIPS image suggests that the 160$\,\mu$m flux density of GN26 has some contribution from another 70$\,\mu$m source (at $z\,{=}\,0.46$) not associated with GN26. We therefore deblended GN26 with a double Gaussian fit, fixing positions to the IRAC counterparts of the two 70$\,\mu$m sources which contribute to the 160$\,\mu$m flux density. We find that GN26 has $S_{160}\,{=}\,110\pm27\,$mJy, which is 60\% of the flux density of the 160$\,\mu$m complex. 
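Both chance probabilities quoted above can be reproduced from Equation~(1); the binomial step for the two-of-thirty case is our reading of how the 28\% figure was obtained (the probability of exactly two chance matches among the 30 sources):

```python
import math

n = 101 / 120.0  # 70 um source density, arcmin^-2

def p_chance(theta_arcsec):
    """P-statistic: probability of >= 1 random 70 um source within theta."""
    theta = theta_arcsec / 60.0  # arcmin
    return 1.0 - math.exp(-math.pi * n * theta**2)

print(f"{100 * p_chance(1.0):.2f}%")  # ~0.07% within 1 arcsec

# binomial probability of exactly 2 of 30 sources having a chance match
# within 10 arcsec
p = p_chance(10.0)
p2of30 = math.comb(30, 2) * p**2 * (1 - p)**28
print(f"{100 * p2of30:.0f}%")  # ~28%
```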
The uncertainty at 160$\,\mu$m is conservatively estimated to be 25\%, taking into account absolute calibration, fitting errors, and deblending issues. We also examined the SED of the source near GN26 to check the accuracy of the deblending. The $S_{70}/S_{160}$ ratio of the second source at $z\,{=}\,0.46$ is well fit by the quiescently star-forming galaxy SED templates of \cite{dh02}, so the deblending at 160$\,\mu$m seems reasonable. The multi-wavelength properties of the two SMGs which are detected at 70$\,\mu$m (Figure~1 and Table~1) are described in detail in Pope et al.~(2006). As expected, both sources (GN26 and GN13) are in the low redshift tail of the submm redshift distribution. Furthermore, we notice that these two sources are among the faintest at 850$\,\mu$m in the submm sample (both have $S_{850}\,{<}\,2.5\,$mJy) and therefore they are not typical of the full submm sample presented in Pope et al.~(2006) or indeed other samples of SMGs. GN13 and GN26 have 70$\,\mu$m flux densities of 6.5 and $13.9\,$mJy, respectively, while the full 70$\,\mu$m catalog has a median flux density of $5\,$mJy. About 80\% of the 70$\,\mu$m sources have spectroscopic redshifts, and the median redshift of these sources is $z\,{=}\,0.46$ (Huynh et al.~in preparation). Thus the 70$\,\mu$m counterpart to GN13 is typical of the full 70$\,\mu$m sample, but GN26 is unusual in that it has a bright 70$\,\mu$m counterpart, and is one of only seven 70$\,\mu$m sources in our sample currently confirmed to be at $z\,{>}\,1$. \section{Stacking Analysis} We performed a stacking analysis to derive an average 70$\,\mu$m flux density for the SMG population in the GOODS-N field. To begin with, we stacked the {\sl Spitzer\/} data at the positions of all SMGs, including sources with 70$\,\mu$m matches or with coincident flux. For each SMG a square image 132\arcsec\ on a side (approximately 7 MIPS beams) was extracted. 
We rotated each image by $90^\circ$ with respect to the previous one before co-adding to remove any large-scale background effects. The median level of the individual extracted images was subtracted to remove any small-scale offsets and yield better background removal. Flux densities were determined using an aperture of 12\arcsec\ radius, and we applied an aperture correction of 2.0, which was calculated empirically from bright sources in the image. The aperture photometry was done after the background was subtracted from the stacked images, and the results were verified by making measurements with different size sky annuli. To estimate the expected scatter, offset stacked images were generated by randomly choosing a position in the 70$\,\mu$m image for each stacked source. Five hundred such random stacks were generated, and the uncertainty in the stacked flux density is taken to be the standard deviation of these 500 measured values. The average 70$\,\mu$m flux density ($\left\langle S_{70}\right\rangle$) for all 30 SMGs is $2.00\pm0.48\,$mJy, and hence the stacked signal is detected at over $4\sigma$. The contribution from SMGs to the Extragalactic Background Light (EBL) at 70$\,\mu$m can be estimated from this stacked signal by multiplying by the appropriate source density. Assuming the SMG integrated source count of $2506\pm406\,{\rm deg}^{-2}$ for $S_{850}\,{>}\,2\,$mJy \citep{coppin2006}, we estimate the contribution to the 70$\,\mu$m EBL from SMGs to be $0.016\pm0.005\,{\rm MJy}\,{\rm sr}^{-1}$. From an extrapolation of the source counts, \cite{frayer2006b} find that the total 70$\,\mu$m EBL is $0.18\,{\rm MJy}\,{\rm sr}^{-1}$, so SMGs ($S_{850}\,{>}\,2\,$mJy) make up about $9\pm3$\% of the 70$\,\mu$m EBL. The 70$\,\mu$m stacked signal from all 30 SMGs is dominated by the 4 low redshift sources with coincident 70$\,\mu$m flux density. 
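The stacking procedure described above (median-subtracted square cutouts, successive $90^\circ$ rotations, co-addition) can be sketched in NumPy; the cutout size, array layout, and the use of a mean co-add are illustrative choices here, not the actual reduction pipeline:

```python
import numpy as np

def stack_cutouts(image, positions, half=33):
    """Co-add square cutouts (2*half+1 pixels on a side), rotating each
    successive stamp by an extra 90 deg and subtracting its median level."""
    stamps = []
    for k, (y, x) in enumerate(positions):
        stamp = image[y - half:y + half + 1, x - half:x + half + 1].copy()
        stamp -= np.median(stamp)          # remove small-scale offset
        stamps.append(np.rot90(stamp, k))  # k * 90 deg rotation
    return np.mean(stamps, axis=0)

# toy example: a point source at the centre of each cutout survives stacking
pos = [(50, 50), (50, 150), (150, 50), (150, 150)]
img = np.zeros((200, 200))
for y, x in pos:
    img[y, x] = 1.0
stacked = stack_cutouts(img, pos)
print(stacked[33, 33])  # central pixel retains the source flux
```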
To determine the average properties of the SMGs {\it not\/} detected at 70$\,\mu$m, which is more representative of the general SMG population, we also stacked the 26 sources without coincident 70$\,\mu$m flux density. This time we stacked the residual 70$\,\mu$m image, which was obtained by removing all sources brighter than 3$\sigma$ at 70$\,\mu$m. Sources were removed by subtracting their fitted point spread functions from the image. The residual image was used to obtain a better signal to noise ratio in the stacked image and in the measured stacked flux density. For these 26 sources we find $\left\langle S_{70}\right\rangle\,{=}\,0.70\pm0.27\,$mJy, and so this stacked signal has an SNR of about 3. To test whether the majority of the stacked 70$\,\mu$m flux density is from the lower redshift SMGs, we also looked at low and high redshift sub-samples. The low redshift sub-sample consists of 12 SMGs out of the 26 with $z\,{<}\,2$, and the remaining 14 SMGs with $z\,{\geq}\,2$ make up the high redshift sub-sample. For redshift completeness we included IRAC photometric redshifts from Pope et al.~(2006) for 8/26 SMGs, and these all have $z\,{\geq}\,1.8$ (and estimated accuracy of $\Delta z\,{\leq}\,0.4$). The aperture flux density in the central region of the high redshift stack is $0.22\pm0.44\,$mJy, while for the low redshift stack it is more positive at $1.0\pm0.4\,$mJy. This is consistent with the idea that the majority of the flux density from the full sample is coming from the lower redshift sources, although the measurements are too noisy to make a definitive statement. Similarly, we stacked the 160$\,\mu$m image at all SMG positions, excluding the one detected source, GN26. There is no significant flux density in the stacked image and the 3$\sigma$ upper limit is $13\,$mJy. A stack of the 12 low redshift SMGs gives a 3$\sigma$ upper limit at 160$\,\mu$m of $22\,$mJy. 
To study the FIR properties of SMGs we could use the average flux densities of the full sample of SMGs. However, the stacked 70$\,\mu$m flux densities from the separate high- and low-$z$ sub-samples clearly show that most of the signal is coming from $z\,{<}\,2$ sources (as expected from the K-correction). We therefore limit our analyses to the average properties of the low-$z$ sub-sample in the following sections. \section{Infrared Colors} The average IR colors, $S_{70}/S_{850}$ and $S_{70}/S_{24}$, are shown in Figure~2 as a function of redshift. GN26, at $z\,{\sim}\,1.5$, is consistent with a more active Dale \& Helou (2002, hereafter DH02) model, with dust intensity index $\gamma\,{=}\,1.5$, whereas GN13 has cooler colors, corresponding to $\gamma\,{=}\,2.5$. In the DH02 models $\gamma$ defines the amount of dust as a function of heating intensity \citep{dale2001}: \[ dM(U) \propto U^{-\gamma} dU \, ,\] where $M(U)$ is the dust mass heated by a radiation field of intensity $U$. These $\gamma$ values span the range expected for infrared luminous galaxies -- a low value around 1 implies a high contribution from photo-dissociation regions in an actively star-forming galaxy, while a significantly higher value represents the cirrus-dominated interstellar medium of a more quiescent or cooler galaxy. The infrared colors of the average low-$z$ SMG are consistent with $\gamma\,{\simeq}\,2$--2.5 DH02 SEDs. The average $S_{70}/S_{850}$ ratio of SMGs from the stacking analysis indicates that the SMGs are relatively cool galaxies for their high IR luminosities, which is consistent with Pope et al.~(2006). We also calculated upper limits to the $S_{70}/S_{850}$ ratio for SMGs {\it not\/} detected individually at 70$\,\mu$m (see Figure~2). The lower redshift SMGs are clearly inconsistent with lower values of $\gamma$. 
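The role of $\gamma$ can be made concrete with a toy calculation of the mass-weighted mean heating intensity under $dM(U) \propto U^{-\gamma}\,dU$; the integration limits for $U$ (in units of the local interstellar radiation field) are our assumption for illustration, not taken from the fits:

```python
import numpy as np

def _integrate(y, x):
    """Simple trapezoidal integral."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def mass_weighted_mean_u(gamma, u_min=0.3, u_max=1e5):
    """Mean heating intensity <U> for dM(U) propto U^-gamma dU."""
    u = np.logspace(np.log10(u_min), np.log10(u_max), 200000)
    w = u**(-gamma)
    return _integrate(u * w, u) / _integrate(w, u)

# low gamma weights the dust mass toward intensely heated (active) regions
print(mass_weighted_mean_u(1.0) > mass_weighted_mean_u(2.5))  # True
```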
For the higher redshift sources we expect much lower $S_{70}/S_{850}$ ratios, but with the current observational limits these sources can still be actively star-forming. For GN13, Figure~2 shows that the $S_{70}/S_{24}$ ratio is that of an actively star-forming galaxy, with $\gamma\,{\simeq}\,1$--1.5, while the $S_{70}/S_{850}$ ratio indicates it is a cooler galaxy. This may be due to broad silicate absorption falling into the 24$\,\mu$m band; the $S_{70}/S_{24}$ ratio can be strongly affected by PAH features and silicate absorption, which are not fully accounted for in the models. We find that the colors of GN26 are consistent with DH02 models with $\gamma\,{=}\,1$ and $z\,{\simeq}\,1$--2 (Figure~3). The colors of GN13 place it at $z\,{=}\,1$--2 if a $\gamma = 1.5$ model is assumed (Figure~3). However, GN13 is at $z\,{=}\,0.475$ and so this SMG has a warmer $S_{70}/S_{24}$ ratio than that suggested by its $S_{70}/S_{850}$ ratio, as mentioned earlier. The average colors of low-$z$ SMGs are also plotted in Figure~3. We find that the average IR colors are best represented with a DH02 model having $\gamma\,{\simeq}\,2$--2.5 at $z\,{=}\,1$--2. This is consistent with the median redshift of the low-$z$ SMG sub-sample, and suggests the color-color plot can be used as a crude redshift indicator. \section{Spectral Energy Distribution} The FIR photometry at 70 and 160$\,\mu$m provides valuable data points for constraining the FIR peak. Combined with the 850$\,\mu$m observations, the photometry spans both sides of the peak. Previous estimates of the SED of SMGs have relied on extrapolating the MIR or radio to fit the FIR peak -- at the redshifts of SMGs, the MIR can be affected by complex emission and absorption features, so this method may not be reliable. We fit a variety of models to the data. 
These include four DH02 models with $\gamma$ = 1, 1.5, 2, and 2.5, as well as the Chary \& Elbaz (2001, hereafter CE01) SEDs, which are templates derived from ISOCAM, {\sl IRAS\/} and SCUBA data of nearby galaxies. The CE01 models have luminosity dependent shapes, but since the local luminosity--temperature relation found for nearby galaxies may not hold for high-$z$ SMGs (e.g.~Pope et al.~2006), we fit CE01 models allowing them to scale arbitrarily in luminosity. For GN13 and for the low-$z$ SMG sub-sample, we constrain the fit with the 70 and 850$\,\mu$m observations only, since they are not detected at 160$\,\mu$m (although the 160$\,\mu$m data provide a useful upper limit). For GN26, the 70, 160 and 850$\,\mu$m observations are all used to constrain the fit. We summarize the fitting results in Table~2, while Figures~4 and 5 show the best fit SEDs for GN13, GN26, and the average low-$z$ SMG. Fitting CE01 SEDs without allowing the luminosity to vary freely results in relatively poor fits, so we exclude these from the Table. This confirms the Pope et al.~(2006) result that the local luminosity--temperature relationship does not hold for SMGs. For each best fit SED the total $L_{\rm IR}$ (between 8 and 1000 $\mu$m) was calculated and given in Table~2. Uncertainties in the luminosity were derived by scaling the best fit SED until the minimum $\chi^2$ value exceeded the 68\% ($\pm1\sigma$) confidence interval. The models provide good fits to the 70 and 850$\,\mu$m data for GN13 and for the low-$z$ average SMG, but they do not typically fit the 24$\,\mu$m (observed) data point, although it has been shown that the 24$\,\mu$m flux density can often be fit with additional extinction (Pope et al.~2006). The models do not provide a similarly good fit in the FIR for GN26; the 70 and 850$\,\mu$m data points of GN26 are well fit, but the 160$\,\mu$m measurement is underestimated by the models. Hence the luminosity for GN26 is probably higher than given by these models. 
Nevertheless, our derived IR luminosity for GN26 is 3 times greater than that estimated by Pope et al.~(2006) from fitting SEDs to 24$\,\mu$m, 850$\,\mu$m and $1.4\,$GHz radio data. It is possible that there are further deblending issues at 160$\,\mu$m for GN26 (even though we have already divided the total 160$\,\mu$m flux density between the two 70$\,\mu$m sources in the area). This demonstrates the power of 160$\,\mu$m photometry in constraining the total infrared luminosity of galaxies, but shows that higher resolution is required to study such faint sources individually. The SMGs have high luminosities, but their FIR spectral shape is different from that of local ULIRGs of the same luminosity. We find that the average low-$z$ SMG has a total IR luminosity of about $8.0\times10^{11}\,{\rm L}_{\odot}$. This is a factor ${\simeq}\,2$ less than the median SMG luminosity found by Pope et al.~(2006) and Chapman et al.~(2005) for their SMGs with $z\,{<}\,2$. Our calculated SMG luminosities are low compared to previous results because our best fit SEDs are cooler. The average SMG is best fit by a quiescent DH02 model with $\gamma\,{\simeq}\,2.5$, or with CE01 SED templates of normal spiral galaxies scaled up by a factor ${\simeq}\,300$, with a rest-frame peak at about $150\,\mu$m (i.e. $T\,{\simeq}\,20\,$K). Local ULIRGs of the same luminosity as the SMGs are therefore not the best spectral templates for this sample. Several recent studies have relied on the MIR data (at 24$\,\mu$m in particular) to derive luminosities and SED fits (e.g.~\citealp{perez2005}, \citealp{lefloch2005}). The 24$\,\mu$m flux density was used in SED fitting of SMGs by Pope et al.~(2006), who found that the typical SMG peaks at about $100\,\mu$m (corresponding to about $29\,$K). 
If the 24$\,\mu$m data point is included in the fitting of GN13 along with the 70 and 850$\,\mu$m data, we find the best fit DH02 SED peaks at a shorter wavelength, corresponding to warmer dust temperatures, and the total IR luminosity is decreased by a factor of about 2. There is no significant difference in the CE01 fits to GN13 with and without the 24$\,\mu$m data point. For GN26 we find the 24$\,\mu$m data point makes no significant difference to the best fit DH02 SED, while the best fit CE01 model is slightly warmer and 3 times less luminous. The fit to the average SMG including the 24$\,\mu$m data point is warmer than that with only the longer wavelength data, and the best fit DH02 and CE01 SEDs are about 2 times more luminous. At the median redshift of the average low-$z$ SMG, $\left\langle z\right\rangle\,{=}\,1.4$, PAH and silicate features fall into the 24$\,\mu$m band, and our fit here is driven by these features in the model SEDs. The DH02 and CE01 SEDs certainly contain PAH features, but it is not clear whether SMGs at this redshift have strong or weak PAH features, if any. Therefore we would argue that the fit to the 850 and 70$\,\mu$m flux densities alone gives a more reliable result for the average SMG total luminosity, at least for the moment, until we learn more about the MIR spectra of SMGs. \section{Dust Temperatures and Masses} As a phenomenological alternative, we also adopt a modified blackbody SED model to fit the temperature of these SMGs. The SED is described by $f_\nu \propto \nu^\beta B_\nu$, where $B_\nu(\nu, T)$ is the blackbody function for dust of temperature $T$, and $\beta$ is the dust emissivity index. The MIR is approximated as a power-law of the form $f_\nu\propto\nu^{-\alpha}$ which smoothly matches $\nu^\beta B_\nu$ at longer wavelengths \citep{blain2003}. 
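This model can be written down directly. In the sketch below the join frequency is taken where the logarithmic slope of $\nu^\beta B_\nu$ equals $-\alpha$, which is one common reading of the \citealp{blain2003} construction; the normalization and the frequency grid are arbitrary implementation choices:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
KB = 1.381e-23  # Boltzmann constant, J/K

def greybody(nu, T, beta=1.5):
    """nu^beta * B_nu, with constant prefactors dropped."""
    return nu**(3 + beta) / np.expm1(H * nu / (KB * T))

def sed(nu, T, alpha, beta=1.5):
    """Greybody joined smoothly to a nu^-alpha mid-IR power law at the
    frequency where the greybody log-slope equals -alpha."""
    grid = np.logspace(11, 14, 3000)
    slope = np.gradient(np.log(greybody(grid, T, beta)), np.log(grid))
    nu_m = grid[np.argmin(np.abs(slope + alpha))]
    return np.where(nu < nu_m, greybody(nu, T, beta),
                    greybody(nu_m, T, beta) * (nu / nu_m)**(-alpha))

# rest-frame 70-to-850 um colour rises with dust temperature, as expected
nu70, nu850 = 2.998e8 / 70e-6, 2.998e8 / 850e-6
ratio = lambda T: float(sed(nu70, T, 2.0) / sed(nu850, T, 2.0))
```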
Although this simple phenomenological model cannot describe the full complex dust properties of a galaxy, it can provide a good description of the general behavior of the SED. The range of parameters we consider is $15\,{\rm K}\,{<}\,T\,{<}\,90\,{\rm K}$ and $1\,{<}\,\alpha\,{<}\,4$, which is representative of galaxies ranging from normal spirals to AGN. We set $\beta\,{\equiv}\,1.5$ for our model fits, which is the value found for dust in the galactic plane \citep{masi95}, and a typical value for well-studied nearby galaxies \citep{dunne2000}. We fit for $T$ and $\alpha$ using the 70$\,\mu$m and 850$\,\mu$m data points for GN13 and the average low-$z$ SMG, but also include the 160$\,\mu$m detection for GN26. The results are summarized in Table~3. When fitting only two data points (and allowing the normalization to also be free) there is a strong degeneracy between $\alpha$ and $T$ (which would be complete except for the boundaries of the parameter ranges) -- so the fit parameters must be interpreted with caution. In the case of GN13, the full range of $\alpha$ is allowed by the 70 and 850$\,\mu$m data, with low values of $\alpha$ corresponding to low values of $T$. Because of the additional 160$\,\mu$m data point, the parameters are better constrained for GN26, with $T\,{\simeq}\,45\,$K and $\alpha\,{\simeq}\,3.5$ being preferred. For the average SMG, the 70 and 850$\,\mu$m flux densities alone cannot break the $T$ and $\alpha$ degeneracy -- a very low temperature of $15\,$K is allowed for $\alpha\,{\simeq}\,1.0$, while $T\,{\simeq}\,33\,$K for $\alpha\,{\simeq}\,4.0$. In a sample of 73 radio-detected SMGs the average $S_{450}/S_{850}$ ratio is measured to be $5.0\pm2.3$ \citep{chapman2005}, while 15 SMGs from this same sample have been detected with SHARC-II (Kovacs et al.~2006) and they have an average $S_{350}/S_{850}$ ratio of $4.0\pm1.3$. We find that models with $\alpha\,{<}\,1.6$ are inconsistent with these ratios. 
This implies that the allowed models for the average SMG have $21\,{\rm K}\,{<}\,T\,{<}\,33\,{\rm K}$ and $1.6\,{<}\,\alpha\,{<}\,4.0$, where the low values of $T$ require low $\alpha$. This shows that the low-$z$ SMGs have relatively cool dust temperatures. The best fit dust temperatures of 21--$33\,$K are consistent with the values previously derived for SMGs (e.g.~Pope et al.~2006). Chapman et al.~(2005) and Kovacs et al.~(2006) suggested average temperatures close to the upper end of our acceptable range. This implies that the average low-$z$ SMG in our sample has a relatively steep mid-IR SED, since our model fits with $T\,{\simeq}\,30\,$K require $\alpha\,{\simeq}\,3$. This suggests that the SMGs are star-forming galaxies, because a large $\alpha$ implies cool mid-IR colors, which are inconsistent with AGN-dominated sources. Assuming that the submm light is thermal emission from dust which is optically thin at $\lambda_{\rm rest}\,{\sim}\,200\,\mu$m, with a single dust temperature $T$, the dust mass $M_{\rm d}$ is given by: \begin{equation} M_{\rm d} = \frac{S_{850} \: D^2_{\rm L}} {(1 + z) \: \kappa_{\rm d}(\nu_{\rm rest}) \: B_\nu(\nu_{\rm rest}, T) } \; , \end{equation} (e.g.~\citealp{mcmahon1994}), where $D_{\rm L}$ is the cosmological luminosity distance at redshift $z$ and the dust absorption coefficient $\kappa$ is uncertain, even in the local Universe. We take a $\kappa$ value of $0.077\pm0.030\,{\rm m}^2{\rm kg}^{-1}$ \citep{hughes1993}, converting it to the rest-frame frequency $\nu_{\rm rest}$ with: \begin{equation} \kappa_{\rm d}(\nu_{\rm rest}) = 0.077 \left( \frac{\nu_{\rm rest}}{350\,{\rm GHz}} \right)^\beta. \end{equation} Here we again assume that the dust emissivity index $\beta$ is fixed at 1.5. The range of allowable dust masses is calculated from the range of temperatures in Table~3. 
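Equations~(2) and (3) are straightforward to evaluate once a luminosity distance is supplied. In this sketch $D_{\rm L}$ is passed in by hand (it would normally come from a cosmology package), and the example inputs are only loosely based on GN13, with the flux density and distance assumed for illustration:

```python
import math

H = 6.626e-34      # Planck constant, J s
KB = 1.381e-23     # Boltzmann constant, J/K
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # kg
MPC = 3.086e22     # m

def planck_bnu(nu, T):
    """Planck function B_nu in SI units (W m^-2 Hz^-1 sr^-1)."""
    return 2 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

def dust_mass(s850_mjy, z, d_l_mpc, T, beta=1.5, kappa350=0.077):
    """Dust mass in M_sun from Equation (2), with kappa scaled to the
    rest frequency via Equation (3)."""
    s = s850_mjy * 1e-29                # mJy -> W m^-2 Hz^-1
    nu_rest = (C / 850e-6) * (1 + z)    # rest-frame frequency, Hz
    kappa = kappa350 * (nu_rest / 350e9)**beta
    d_l = d_l_mpc * MPC
    return s * d_l**2 / ((1 + z) * kappa * planck_bnu(nu_rest, T)) / M_SUN

# illustrative inputs loosely based on GN13 (S_850 and D_L assumed here)
print(f"{dust_mass(1.9, 0.475, 2700.0, 35.0):.2e}")  # of order 1e8 M_sun
```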
The dust mass calculated from Equation~(2) is 1.0--1.6$\times10^8\,{\rm M}_\odot$ and 2.2--2.5$\times10^8\,{\rm M}_\odot$ for GN13 and GN26, respectively, while the dust mass found for the average low-$z$ SMG is 1.1--2.6$\times10^9\,{\rm M}_\odot$. This does not take into account the uncertainties in $\kappa$ and $S_{850}$; the dust mass uncertainty is about 50\% when these are added in quadrature. These dust masses are consistent, within uncertainties, with the molecular gas mass derived from CO observations (e.g.~\citealp{frayer1998}, \citealp{frayer99}, \citealp{neri2003}, \citealp{greve2005}), assuming a typical galactic gas mass to dust mass ratio $M_{\rm g}/M_{\rm d}\,{\simeq}\,100$ (e.g.~\citealp{hildebrand1983}). \section{FIR--Radio Correlation} Our sample can be used to test the FIR--radio correlation in submillimeter galaxies. The FIR--radio correlation is often expressed as (e.g.~\citealp{yun2001}): \begin{equation} q \equiv \log\left( \frac{{\rm FIR}}{ 3.75 \times 10^{12}\,{\rm W m}^{-2}} \right) - \log\left( \frac{S_{1.4}}{{\rm W}\,{\rm m}^{-2}{\rm Hz}^{-1}} \right) \; , \end{equation} where `FIR' here refers to the flux between 40 and 120 $\mu$m. The observed local value is $q\,{=}\,2.34\pm0.3$ (Yun, Reddy and Condon 2001). We use the best fit DH02 models to derive the conversion from $L_{40{-}120}$ to $L_{8{-}1000}$, which is 2.0 and 1.6 for GN13 and GN26, respectively. Based on their $L_{\rm IR}$ (Table \ref{lir_table}), we find $q$ parameters of $2.5^{+\,0.3}_{-\,0.1}$ and $2.4^{+\,0.1}_{-\,0.1}$ for GN13 and GN26, respectively. These values are consistent with the local value of $q$, suggesting that these two sources follow the local FIR--radio correlation. \section{Contribution from Active Galactic Nuclei} As mentioned in Section 8, the inferred high values of $\alpha$ suggest that AGN do not dominate the bolometric luminosity in our sample of SMGs. 
To quantify the contribution of AGN to the infrared luminosity of SMGs, we adopt the simple modified blackbody approach as described in Section~8. For our AGN model we use $\beta\,{=}\,1.5$, $T\,{=}\,90\,$K, and $\alpha\,{=}\,1.1$, as found for the xFLS AGN population (Frayer et al.~2006). We subtract this AGN component from the observed 70 and 850$\,\mu$m flux densities of the average SMG, and then repeat the fitting procedure with the DH02 and CE01 models, increasing the AGN component until the best fit $\chi^2$ value exceeds the previous minimum by $1\sigma$. This allows us to estimate the maximum AGN contribution to the FIR luminosity, with the results shown in Figure~5. It could be argued that the MIR waveband is the best discriminator of AGN, and therefore we should be focussing on the 24$\,\mu$m data point (e.g.~\citealp{sajina2005}). However, we are here calculating the percentage contribution of AGN to $L_{\rm IR}$, which is dominated by the FIR peak, and the contribution from AGN emission in the MIR is only a very small proportion of the total IR luminosity. For the average low-$z$ SMG, an AGN which contributes up to 14\% of the IR luminosity is allowed, using only the previous best fit CE01 SED model. If all CE01 models are used in the refitting, an AGN contribution of up to 23\% is allowed. Together these fitting procedures imply that the average (low-$z$ sub-sample) SMG is dominated by a starburst. This is consistent with X-ray studies, which find that AGN contribute on average 10\% to the infrared luminosity of SMGs \citep{alexander2005}. For the 70$\,\mu$m detected SMGs, GN13 and GN26, we find that 32\% and 21\%, respectively, of the total IR luminosity can be attributed to an AGN from the same analysis applied to the best fit templates. However, if the whole suite of CE01 SEDs is used, a larger proportion of the IR luminosity can be subtracted from the observed data points, with lower luminosity CE01 models still fitting. 
This shows that with the current data a low luminosity starburst model from CE01 with an additional dominant AGN component is indistinguishable from a high luminosity CE01 ULIRG model, and that further photometry (or spectroscopy) is needed. Another method to estimate the AGN contribution is to make use of the 24$\,\mu$m data. Here, we consider the extreme case where all the 24$\,\mu$m flux density is due to an AGN, and determine its contribution to the total IR luminosity. For GN13 and GN26 such an AGN would contribute 11\% and 5\%, respectively, to the total IR luminosity, assuming the $T\,{=}\,90\,$K, $\alpha\,{=}\,1.1$ AGN model used above, and the CE01 best fit SEDs. For the average low-$z$ SMG, 21\% of the total IR luminosity could be attributed to an AGN in this extreme case. This again supports the hypothesis that SMGs are dominated by star formation processes. \section{Concluding Remarks} We have presented the 70$\,\mu$m properties of submillimeter galaxies in the GOODS-N field. Out of 30 SMGs (with $S_{850}\,{>}\,1.7\,$mJy) in the overlap region of this field, 2 are detected at relatively high significance in ultra-deep (${\sim}\,0.6\,$mJy rms) 70$\,\mu$m imaging. Both of these detected SMGs lie at relatively low redshift. One SMG, GN26 at redshift $z\,{=}\,1.2$, has infrared colors which indicate it is actively star forming. The second SMG, GN13 ($z\,{=}\,0.47$), has infrared colors similar to normal spirals, but with a much higher luminosity. We confirm that these two SMGs detected at 70$\,\mu$m follow the locally derived FIR--radio correlation. To determine the average properties of SMGs (most of which lie at $z\,{>}\,2$ and are not detected individually at 70$\,\mu$m), we performed a stacking analysis and find that the average SMG has a 70$\,\mu$m flux density of $0.70\pm0.27\,$mJy. Most of the 70$\,\mu$m flux density comes from the lower redshift SMGs, however. 
We analysed the average properties of the twelve SMGs with $z\,{<}\,2$, which have a stacked 70$\,\mu$m flux density of $1.0\pm0.4\,$mJy. From a stack of all 30 SMGs in the ultra-deep 70$\,\mu$m image, we find that the contribution of SMGs (with $S_{850}\,{>}\,2\,$mJy) to the total Extragalactic Background Light at 70$\,\mu$m is $9\pm3$\%. The average low-$z$ SMG ($\left\langle z\right\rangle\,{=}\,1.4$) has cool infrared colors and an FIR SED that is best fit by a scaled up (about 300$\times$) normal spiral galaxy. We also found that an AGN contributes less than 23\% of the total IR luminosity in SMGs. We find that the average low-$z$ SMG has an IR luminosity, $L_{8{-}1000}$, of $8.0(\pm 2.2)\times10^{11}\,{\rm L}_{\odot}$. The average low-$z$ SMG therefore has a star formation rate of $135\pm35\,{\rm M}_\odot\,{\rm yr}^{-1}$, using the relationship between SFR and IR luminosity for starburst galaxies given by \cite{kennicutt1998}. GN13 and GN26 have star formation rates of $30\pm10$ and $800\pm270\,{\rm M}_\odot\,{\rm yr}^{-1}$, respectively. The median IR luminosity of $z\,{<}\,2$ SMGs in Chapman et al.~(2005) is twice the value we find for our average low-$z$ SMG, suggesting that theirs may have been an overestimate, suffering from a lack of FIR data. The next generation submillimeter bolometer camera, SCUBA-2, will come online in 2007 \citep{holland2006} and is expected to yield a major improvement at 450$\,\mu$m, where it should reach a confusion limit of ${\sim}\,1\,$mJy ($5\sigma$). This depth is well-matched to that of deep {\sl Spitzer\/} 70$\,\mu$m imaging and should provide much more overlap than the 850$\,\mu$m selected sources. In addition, SCUBA-2 will produce much larger samples of fainter 850$\,\mu$m sources, much like the two 70$\,\mu$m detected submm sources in this study, and therefore 70$\,\mu$m data will be a valuable addition to understanding their SEDs. 
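The star formation rates quoted above follow from the \cite{kennicutt1998} starburst calibration, which can be written as ${\rm SFR} = 1.72\times10^{-10}\,(L_{\rm IR}/{\rm L}_\odot)\,{\rm M}_\odot\,{\rm yr}^{-1}$:

```python
def sfr_kennicutt(l_ir_lsun):
    """Star formation rate (M_sun/yr) from the Kennicutt (1998)
    starburst calibration, SFR = 1.72e-10 * L_IR [L_sun]."""
    return 1.72e-10 * l_ir_lsun

print(round(sfr_kennicutt(8.0e11)))  # 138, i.e. the quoted 135 +/- 35
```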
Also in the near future, the {\sl Herschel Space Observatory\/} will produce confusion limited images across the wavelength range 75--500$\,\mu$m. {\sl Herschel\/} will therefore enable detailed study of the far-IR SEDs of large samples of SMGs, which we have shown here to be feasible for a sub-set of SMGs using {\sl Spitzer}. \acknowledgements We thank the anonymous referee for helpful comments that improved this paper. MTH would like to thank Anna Sajina for helpful discussions. AP and DS acknowledge support from the Natural Sciences and Engineering Research Council of Canada and the Canadian Space Agency. This work is based in part on observations made with the {\sl Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech.
\section{Introduction} Quark Gluon Plasma (QGP), a deconfined phase of strongly interacting matter whose existence was predicted by lattice Quantum Chromodynamics calculations (see, e.g., Ref.~\cite{Ratti:2018ksb} for a recent review), is now routinely produced in relativistic heavy ion collisions at the BNL Relativistic Heavy Ion Collider and the CERN Large Hadron Collider. It is believed to have filled the early universe until a few microseconds after the Big Bang. The study of QGP has remained one of the most rewarding disciplines of modern nuclear physics for more than three decades. The observation of a large elliptic flow of hadrons~\cite{Adler:2002ct, Adler:2003kt} and of jet-quenching~\cite{Adcox:2001jp, Adams:2003kv}, due to the energy loss of high energy partons in the hot and dense medium, are the most prominent early signatures of the formation of QGP in these collisions. Additional confirmation has been provided by the detection of thermal radiation of photons from these collisions~\cite{Aggarwal:2000th, Adare:2008ab, Adam:2015lda}. Unexpected surprises have been provided by parton recombination as a mechanism of hadronization~\cite{Fries:2003vb} and by the very low viscosity (see, e.g., Refs.~\cite{Song:2007ux, Luzum:2008cw, Song:2010mg}). Coming years will explore its properties to a great degree of accuracy and, once the Facility for Antiproton and Ion Research, Darmstadt (FAIR) and the Nuclotron-based Ion Collider Facility, Dubna (NICA) start working, the next frontier of this fascinating field, namely QGP at high baryon density and low temperature, which perhaps forms the core of large neutron stars~\cite{Annala:2019puf}, will be open for exploration. The Future Circular Collider (FCC) will provide an opportunity to study $pp$ and $AA$ collisions at unprecedentedly high centre of mass energies~\cite{Dainese:2016gch,Armesto:2016qyo,Armesto:2014iaa}. Charm quarks have emerged as a valuable probe of the evolution dynamics of the quark gluon plasma. 
This was realized quite early in the literature. The large mass of charm quarks ensures that they are produced only in processes involving a sufficiently large $Q^2$, which makes these interactions amenable to perturbative QCD calculations. The later interactions conserve charm, and the hadrons containing charm are easily identified. More than three decades ago, Svetitsky~\cite{Svetitsky:1987gq} obtained the drag and diffusion coefficients for charm quarks by considering that they execute a Brownian motion in the QGP. A first estimate of the radiative energy loss of heavy quarks was also obtained by the authors of Ref.~\cite{Mustafa:1997pm} using some simplifying assumptions. These early studies have been brought to a very high degree of sophistication by now. The energy loss suffered by heavy quarks due to scatterings and the radiation of gluons has been estimated and its consequences have been explored in detail (see, e.g., \cite{GolamMustafa:1997id, vanHees:2004gq, Moore:2004tg, vanHees:2005wb, Peigne:2008nd, Gossiaux:2008jv, Gossiaux:2009mk, He:2011qa, Cao:2011et, Abir:2012pu, Meistrenko:2012ju, Alberico:2013bza, Berrehrah:2014kba,Song:2015sfa, Song:2015ykw, Horowitz:2015dta, Nahrgang:2016lst, Scardina:2017ipo, Plumari:2017ntm, Sheikh:2017ock}). The temperature dependence of the drag coefficient has also been calculated using lattice QCD~\cite{Banerjee:2011ra, Ding:2012sp, Kaczmarek:2014jga}. A phenomenological extension of the Wang, Huang, and Sarcevic model~\cite{Wang:1996yh, Wang:1996pe} was used by the authors of Ref.~\cite{Younus:2012yi} to extract the energy loss of charm quarks in the quark gluon plasma and their azimuthal correlations~\cite{Younus:2013be}, and a comparative study of different energy loss mechanisms for heavy quarks at central and forward rapidities was performed by the authors of Ref.~\cite{Jamil:2010te}. 
However, all the above studies mostly start with the assumption of a thermally and chemically equilibrated plasma at some formation time $\tau_0 \approx$ 0.2--1.0 fm/$c$. The assumption of chemical equilibration has been relaxed in some studies for the determination of the drag and diffusion~\cite{GolamMustafa:1997id}. The consequences of this relaxation for interactions~\cite{Lin:1994xma,Younus:2010sx} following their initial production in prompt collisions~\cite{Levai:1994dx} have also been studied. The drag and diffusion coefficients for heavy quarks in the pre-equilibrium phase have been studied by replacing the distributions of quarks and gluons by Colour Glass Condensate model inspired distributions~\cite{Das:2015aga}. The thermalization of heavy quarks by assuming that they perform Brownian motion in a thermal bath of gluons was studied quite some time ago~\cite{Alam:1994sc}. Heavy quark thermalization and flow have been studied in considerable detail within the partonic transport model BAMPS (Boltzmann Approach of Multi Parton Scatterings) by the authors of Refs.~\cite{Uphoff:2010sh, Uphoff:2011ad}, where the initial distribution of charm quarks was sampled from {\tt {PYTHIA}}. The parton cascade model proposed by Geiger and Muller~\cite{Geiger:1991nj} has been refined by Bass, Muller, and Srivastava~\cite{Bass:2002fh} with an improved implementation of the dynamics of the collision, with several interesting insights and results. It was further extended to implement the production of heavy quarks~\cite{Srivastava:2017bcm}. A box-mode implementation was shown to provide an accurate description of the energy loss suffered by charm quarks in QGP at a given temperature due to collisions and radiations~\cite{Younus:2013rja}. Recently it was employed to study the transport dynamics of parton interactions in $pp$ collisions at the LHC energies~\cite{Srivastava:2018dye}.
These studies are of interest because of the QGP-like features observed in high-multiplicity events of these collisions. The studies reported in Refs.~\cite{Srivastava:2017bcm,Srivastava:2018dye} were performed neglecting the Landau Pomeranchuk Migdal (LPM) effect, a neglect which results in enhanced parton multiplication. The authors of Ref.~\cite{Srivastava:2018nfu} have reported results for charm production in $pp$ collisions taking the LPM effect into account. Their results indicate that $pp$ collisions at the higher LHC energies may lead to the formation of a dense medium. This, in turn, triggers a suppression of radiations (and parton multiplication) due to the LPM effect. However, it was reported that, even after these suppressions, multiple scatterings occur among the partons and the transverse momentum distribution of charm quarks is rather sensitive to such scatterings. These calculations also provided a reasonably good description of the charm distribution measured at LHC energies. The bottom quarks, on the other hand, due to their very large mass, are not likely to be produced in multiple scatterings after the initial collisions and are not affected by this suppression~\cite{Chatterjee:2018ulb}, at least for $pp$ collisions. These considerations presage a considerable influence of the LPM effect in $AA$ collisions. In the present work, we aim to study this pre-equilibrium dynamics of charm production in $AA$ collisions. We briefly discuss the details of our formulation in the next section, give our results in Sec. III, and conclusions in Sec. IV.
\begin{figure} \centerline{\includegraphics*[width=8.0 cm]{au200.eps}} \centerline{\includegraphics*[width=8.0 cm]{pb2.76.eps}} \centerline{\includegraphics*[width=8.0 cm]{pb5.02.eps}} \caption{(Colour on-line) The $p_T$ distribution of charm quarks at the end of the pre-equilibrium phase for nucleus-nucleus collisions (for $b=0$ fm) and $pp$ collisions scaled by the number of collisions, at the same $\sqrt{s_{\text{NN}}}$ for $y=0$. Results are given for Au+Au collisions at 200 AGeV (upper panel), Pb+Pb collisions at 2.76 ATeV (middle panel) and 5.02 ATeV (lower panel).} \label{pt} \end{figure} \section{Formulation} First of all, let us state the limited objective of our work in a little more detail. We realize that charm quarks, after their initial production in hard partonic scatterings, will witness the emergence of a thermally and possibly chemically equilibrated QGP followed by its expansion and cooling. They will also see the onset of flow and hadronization. The heavy mesons containing charm quarks may also undergo scatterings during the hadronic phase. They are thus valuable chroniclers of the system. As mentioned earlier, the drag, diffusion, energy loss and flow experienced by them need to be understood in quantitative detail so that we can use them to determine the properties of the medium precisely. We realize that the charm quarks will experience a considerable turmoil before the thermally and chemically equilibrated plasma sets in at some formation time $\tau_0$ of the order of 0.2--1.0 fm/$c$. This suggests that we should understand their dynamics {\it before} the system enters the so-called QGP phase, as some amount of medium modification of their momentum distribution could already happen by then. In the absence of this, the medium modification already sustained during the pre-equilibrium phase will have to be, perforce, accounted for by adjusting the values for drag, diffusion and radiative energy loss during the QGP phase and later.
The parton cascade model~\cite{Geiger:1991nj,Bass:2002fh} is eminently suited for this study for the following reasons. It starts from experimentally measured parton distribution functions and proceeds to pursue a Monte Carlo implementation of the Boltzmann equation to study the time evolution of the parton density, due to semi-hard perturbative Quantum Chromo Dynamics (pQCD) interactions including scatterings and radiations within a leading log approximation~\cite{Altarelli:1977zs}. The $2\rightarrow 2$ scatterings among massless partons use the well-known matrix elements (see, e.g.,~\cite{Owens:1986mp}) at leading order pQCD. The singularities present in the matrix elements are regularized by introducing a transverse momentum cut-off ($p_T^{\text{cut-off}}$ fixed at 2 GeV in the present work). The radiation processes ($g\rightarrow gg$ and $q \rightarrow qg$) are, in turn, regularized by introducing a virtuality cut-off, $\mu_0^i =\sqrt{m_i^2+ \mu_0^2}$, where $m_i$ is the current mass of the quark (zero for gluons) and $\mu_0$ is taken as 1 GeV. This is implemented using the well-tested procedure implemented in ${\tt {PYTHIA}}$~\cite{Sjostrand:2006za}. It has been reported earlier that the introduction of the LPM effect minimizes the dependence of the results on the precise value of $\mu_0$~\cite{Renk:2005yg,Srivastava:2018nfu}. The matrix elements for the $gg\rightarrow Q\overline{Q}$ and $q\overline{q} \rightarrow Q\overline{Q}$ processes do not have a singularity and the minimum $Q^2$ for them is $4M_Q^2$, which for charm quarks is more than 7 GeV$^2$ and is amenable to calculations using pQCD. The $qQ\rightarrow qQ$ and $gQ\rightarrow gQ$ processes will require a $p_T^\text{cut-off}$ to avoid becoming singular, and it is taken as 2.0 GeV as before. The matrix elements for these are taken from Combridge~\cite{Combridge:1978kx}. For more details, the readers are referred to earlier publications~\cite{Srivastava:2017bcm}.
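The kinematic thresholds quoted above can be checked with a short numerical sketch. The charm mass used here (1.35 GeV) is an assumed representative value for illustration, not a parameter quoted from the PCM implementation:

```python
import math

M_C = 1.35   # assumed charm current mass in GeV (illustrative value)
MU_0 = 1.0   # virtuality cut-off mu_0 in GeV, as quoted in the text

def min_q2_pair_production(m_q):
    """Minimum Q^2 for gg -> Q Qbar and q qbar -> Q Qbar: the pair threshold 4 M_Q^2."""
    return 4.0 * m_q ** 2

def virtuality_cutoff(m_i, mu_0=MU_0):
    """mu_0^i = sqrt(m_i^2 + mu_0^2): radiation stops once the parton
    virtuality drops below this value."""
    return math.sqrt(m_i ** 2 + mu_0 ** 2)

print(min_q2_pair_production(M_C))   # ~7.29 GeV^2, i.e. "more than 7 GeV^2"
print(virtuality_cutoff(0.0))        # 1.0 GeV for gluons
print(virtuality_cutoff(M_C))        # ~1.68 GeV for charm quarks
```

With this mass the charm-pair threshold indeed exceeds 7 GeV$^2$, consistent with the statement above.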
The scatterings among partons and radiation of gluons will lead to a rapid formation of a dense partonic medium for $AA$ collisions, even though the PCM involves only those partons which participate in collisions leading to momentum transfers larger than $p_T^\text{cut-off}$, with radiation continuing only until the virtuality of the mother parton drops to $\mu_0^i$ (see above). This necessitates the introduction of the LPM effect. We have already noted its importance for $pp$ collisions~\cite{Srivastava:2018nfu}. We have implemented the LPM effect by assigning a formation time $\tau$ to the radiated particle: \begin{equation} \tau = \frac{\omega}{k_T^2}, \end{equation} where $\omega$ is its energy and $k_T$ is its transverse momentum with respect to the emitter. We further implement a scheme such that during the formation time, the radiated particle does not interact. The emitter, though, continues to interact, and if that happens, the radiated particle is removed from the list of participants and is thus excluded from the later evolution of the system~\cite{Renk:2005yg} (see \cite{Srivastava:2018nfu} for more detail). These aspects are incorporated in the Monte Carlo implementation of the parton cascade model, {\tt {VNI/BMS}}, which we use for the results given in the following. Before proceeding, we emphasize that the PCM does not include the soft scatterings which lead to flow, etc. We have already mentioned that a good description of charm production at LHC energies, using this procedure, was reported earlier~\cite{Srivastava:2018nfu}. \begin{figure} \centerline{\includegraphics*[width=8.0 cm]{200.eps}} \centerline{\includegraphics*[width=8.0 cm]{2.76.eps}} \centerline{\includegraphics*[width=8.0 cm]{5.02.eps}} \caption{(Colour on-line) The $p_T$ integrated rapidity distribution of charm quarks at the end of the pre-equilibrium phase for nucleus-nucleus collisions (for $b=0$ fm) and $pp$ collisions scaled by the number of collisions, at the same $\sqrt{s_{\text{NN}}}$.
Results are given for Au+Au collisions at 200 AGeV (upper panel), Pb+Pb collisions at 2.76 ATeV (middle panel) and 5.02 ATeV (lower panel).} \label{dy} \end{figure} \section{Results} We have calculated the production of charm quarks for Au+Au collisions at 200 AGeV and for Pb+Pb collisions at 2.76 ATeV and 5.02 ATeV for zero impact parameter. Results for $pp$ collisions at the same centre of mass energy have also been included for comparison and for estimating the medium modification factor $R_{\text{AA}}$, defined as \begin{equation} R_\textrm{AA}(p_T)=\frac{dN_\textrm{AA}/d^2p_T dy}{N_\text{coll} \times dN_\text{pp}/d^2p_T dy} \end{equation} where $N_\text{coll}$ is the number of binary nucleon-nucleon collisions for the given centrality. \begin{figure} \centerline{\includegraphics*[width=8.0 cm]{R.eps}} \caption{(Colour on-line) The $p_T$ and rapidity integrated production of charm quarks at the end of the pre-equilibrium phase for nucleus-nucleus collisions (for $b=0$ fm) and $pp$ collisions scaled by the number of collisions, at the same $\sqrt{s_{\text{NN}}}$. Results are given for Au+Au collisions at 200 AGeV, Pb+Pb collisions at 2.76 ATeV and 5.02 ATeV. The lines are drawn to guide the eye. } \label{R} \end{figure} We shall also use the ratio of the $p_T$ and $y$ integrated results to quantify the possible deviation of the production of charm quarks from $N_{\text{coll}}$-scaled $pp$ values: \begin{equation} R =\frac{N_\textrm{AA}}{N_\text{coll} \times N_\text{pp}} \end{equation} We expect the final results for the medium modification to deviate substantially from what is reported here, which is due only to the pre-equilibrium dynamics of the system. Charm will be conserved during the later evolution of the system. Thus, a rich structure should emerge for the final modification, once the energy loss suffered by the charm quarks and the consequences of the collective flow are accounted for as they traverse the quark gluon plasma.
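The two modification factors defined above are straightforward to evaluate from binned spectra. A minimal sketch follows; the numbers are purely illustrative (a flat 20\% suppression with an assumed $N_\text{coll}$), not PCM output:

```python
import numpy as np

def r_aa(dn_aa, dn_pp, n_coll):
    """Bin-by-bin R_AA(p_T) = (dN_AA/d^2p_T dy) / (N_coll * dN_pp/d^2p_T dy)."""
    return np.asarray(dn_aa, float) / (n_coll * np.asarray(dn_pp, float))

def r_total(n_aa, n_pp, n_coll):
    """p_T- and y-integrated ratio R = N_AA / (N_coll * N_pp)."""
    return n_aa / (n_coll * n_pp)

# Illustrative numbers only: four p_T bins, N_coll = 1000 assumed for b = 0.
n_coll = 1000
dn_pp = np.array([10.0, 4.0, 1.5, 0.5])   # charm yield per pp collision, per bin
dn_aa = 0.8 * n_coll * dn_pp              # AA yield 20% below N_coll scaling
print(r_aa(dn_aa, dn_pp, n_coll))         # flat R_AA of 0.8 in every bin
```

The same ratio applied to the integrated yields gives the quantity $R$ shown in Fig.~\ref{R}.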
The depletion of charm quarks at larger momenta should be accompanied by an enhancement at lower momenta. This enhancement may depend strongly on the transverse momentum, as the $p_T$ spectrum falls rapidly with increasing $p_T$. Further, as charm quarks participate in the collective expansion of the medium, their flow would lead to a depletion of charm having very low momenta, which would result in an enhancement at intermediate momenta. In Fig.~\ref{pt} we plot the $p_T$ distribution of charm quarks for central $AA$ collisions at $y=0$ along with the same for $pp$ collisions at the corresponding $\sqrt{s_{\text{NN}}}$ scaled by the number of collisions, as appropriate for production mechanisms involving hard interactions. We notice a $p_T$ dependent suppression of charm production in $AA$ collisions, increasing with the centre of mass energy of the collisions. The $p_T$ integrated rapidity distributions shown in Fig.~\ref{dy} bring out this fact even more clearly. They additionally suggest that these modifications are limited to central rapidities at the lowest centre of mass energy (200 AGeV) considered here but extend to more forward (backward) rapidities as the energy rises to those available at the LHC. \begin{figure} \centerline{\includegraphics*[width=8.0 cm]{raa_200.eps}} \centerline{\includegraphics*[width=8.0 cm]{raa_2.76.eps}} \centerline{\includegraphics*[width=8.0 cm]{raa_5.02.eps}} \caption{ The medium modification of charm production due to the pre-equilibrium dynamics for Au+Au collisions at 200 AGeV (upper panel), Pb+Pb collisions at 2.76 ATeV (middle panel) and 5.02 ATeV (lower panel) along with experimental data by STAR~\cite{Adamczyk:2014uip,Adam:2018inb}, ALICE~\cite{Adam:2015sza} and CMS~\cite{Sirunyan:2017xss} Collaborations respectively.} \label{raa} \end{figure} The medium modification of total charm production, $R$, is shown in Fig.~\ref{R} as a function of centre of mass energy per nucleon.
We note that the suppression increases with $\sqrt{s_\text{NN}}$ and tends to saturate at LHC energies. We are not sure that the importance of this has been sufficiently emphasized in the literature. Let us discuss this in a little more detail. The experimentally measured $R_\text{AA}$ at 200 AGeV~\cite{Adamczyk:2014uip}, 2.76~ATeV~\cite{Adam:2015sza}, and 5.02 ATeV~\cite{Sirunyan:2017xss} for the central rapidity is always less than one. We know that the charm production during the thermally equilibrated phase is rather negligible. This trend should persist at larger rapidities. The authors of Ref.~\cite{Srivastava:2018dye} reported the emergence of the LPM effect already in $pp$ collisions, signalling the formation of a dense medium. As stated earlier, it was found that even with this suppression of scatterings and parton multiplications, there were multiple parton scatterings beyond the primary-primary collisions, followed by fragmentations, which provided a reasonable explanation of the experimental data. In $AA$ collisions the LPM effect should be quite strong. This will result in a large scale suppression of partonic collisions, as parton multiplication is arrested due to the delayed fragmentations of off-shell partons following scatterings. This should then lead to an overall suppression of charm production beyond that expected from a straightforward scaling of the results of $pp$ collisions by the number of collisions, as seen here. It is also expected that this effect would get stronger as the centre of mass energy rises. We recall that this was not seen in calculations performed with the neglect of the LPM effect~\cite{Srivastava:2017bcm}, where $R_{\text{AA}} \geq \, 1$ was seen for low $p_T$ (recall also that the $p_T$ distributions drop rapidly with increasing $p_T$).
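The delayed-fragmentation mechanism invoked here is controlled by the formation time $\tau=\omega/k_T^2$ introduced in the Formulation section. A minimal sketch of that veto logic (natural units, with illustrative numbers; function names are ours, not those of {\tt VNI/BMS}):

```python
# Units: energies and momenta in GeV, times in GeV^-1
# (multiply by hbar*c ~ 0.197 fm*GeV to convert times to fm/c).

def formation_time(omega, k_t):
    """tau = omega / k_T^2 for a radiated parton of energy omega and
    transverse momentum k_T relative to the emitter."""
    return omega / k_t ** 2

def lpm_keep_radiated(t_emission, t_next_emitter_scattering, omega, k_t):
    """The radiated parton survives only if the emitter does not re-scatter
    before the radiated parton has formed; otherwise it is discarded."""
    tau = formation_time(omega, k_t)
    return t_next_emitter_scattering >= t_emission + tau

# A 5 GeV gluon with k_T = 1 GeV has tau = 5 GeV^-1 (~1 fm/c):
print(formation_time(5.0, 1.0))               # 5.0
print(lpm_keep_radiated(0.0, 2.0, 5.0, 1.0))  # False: emitter scatters too soon
print(lpm_keep_radiated(0.0, 6.0, 5.0, 1.0))  # True
```

In a dense medium the emitter re-scatters frequently, so soft, small-$k_T$ gluons (long $\tau$) are preferentially discarded, which is exactly the arrest of parton multiplication discussed above.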
This implies that in the absence of the LPM effect, the parton multiplications and multiple scatterings would lead to a charm production in $AA$ collisions well beyond that obtained from a scaling of the corresponding results for $pp$ collisions. We have verified that it is indeed so, for all the three energies considered here. This has one interesting and important consequence. While the final $R_\text{AA}$ will result from a rich interplay of collective flow and energy loss of the charm quarks, its value at lower $p_T$ would already be well below unity. An effort to attribute this entire suppression to energy loss during the QGP phase alone would necessarily require a larger value for $dE/dx$. We give our results for the medium modification of charm production at $y=0$ for most central collisions in Fig.~\ref{raa}. We emphasize that we neither intend nor expect to explain the experimental $R_\text{AA}$. These are shown only to indicate how this rich input of medium modification of the charm phase space distribution, due to the pre-equilibrium dynamics, provides the platform for the launch of their journey through the hydrodynamic evolution of the system. During this period they will be subjected to the collective flow and further energy loss. We do note one interesting and possibly valuable result of these studies. The distribution of charm quarks having large $p_T \geq$ 6 GeV or so does not seem to be affected strongly by the pre-equilibrium dynamics discussed here. Thus we feel that a charm distribution whose momenta are sampled from those for $pp$ collisions, with the points of production distributed according to the nuclear thickness function $T_{\text{AA}}(x,y)$, is not quite adequate as an input for the study of charm propagation during the QGP phase of the system produced in relativistic collisions of nuclei.
\section{Summary and Discussion} We have calculated the dynamics of the production of charm quarks using the parton cascade model, which should provide a reasonable description of the parton dynamics during the pre-equilibrium phase, albeit limited to $p_T \geq p_T^\text{cut-off}$ and a reasonably modest virtuality $\geq \mu_0$, defined earlier. The LPM effect provides a rich structure to the so-called medium modification factor for charm quarks, defined as the ratio of the results for $AA$ collisions to the results for $pp$ collisions (at the same $\sqrt{s_\text{NN}}$) scaled by the number of collisions. We noticed an overall suppression of charm production, which we attribute to the LPM effect. The medium modification factor as a function of $p_T$ shows a rich structure which evolves with energy, deviating (a suppression) from unity by about 10\% at low $p_T$ and approaching unity at intermediate $p_T$ at $\sqrt{s_\text{NN}}=$ 200 GeV. This deviation (suppression) is seen to rise to about 40\% at LHC energies. An interesting result seems to be the suppression of large $p_T$ charm at 2.76 TeV, but not at 5.02 TeV, which we are unable to understand. Realizing that this should form an input to calculations using hydrodynamics and collisional and radiative energy loss of charm quarks to determine $dE/dx$, one expects some interesting deviations from results obtained with the neglect of these suppressions. A future study, which is under way, will use more modern structure functions (we have used GRV HO~\cite{Gluck:1994uf} in these preliminary studies) and account for straightforward corrections like nuclear shadowing, which will further suppress the production of charm quarks beyond what is reported here. The results for the phase space distribution of charm quarks at the end of the pre-equilibrium phase will then be used as inputs to hydrodynamics based calculations, as indicated above.
In brief, we see that the pre-equilibrium dynamics of parton scatterings and fragmentations, along with the LPM effect, provides a rich structure to the production of charm quarks. We suggest that this effect should be taken into account to get a precise value for the energy loss suffered by charm quarks and the modification of their $p_T$ distributions due to the flow. \section*{Acknowledgments} DKS gratefully acknowledges the grant of a Raja Ramanna Fellowship by the Department of Atomic Energy. We thankfully acknowledge the High Performance Computing Facility of the Variable Energy Cyclotron Centre, Kolkata for all the help. We thank S. A. Bass for a valuable discussion which triggered this study. \newpage
\section{Introduction}\label{sec1} AM CVn stars are short-period binary stars in which a white dwarf accretes helium-rich material from a low-mass donor star. Their orbital periods range between 5 and 65 minutes \citep{2003MNRAS.340.1214P}, \citep{2010PASP..122.1133S}. The AM CVn stars are rare and unusual objects. Their study can give us information about the properties of the accretion flow and of the compact objects themselves. By their short orbital periods and variations in photometry and spectroscopy, the AM CVn stars can also be classified as interacting binary white dwarfs or Double White Dwarf binaries (DWDs). Their main feature is that both binary components are degenerate dwarfs; the white dwarf accretes from another white dwarf companion \citep{bib31}, \citep{paczynski1967gravitational}, \citep{bib7}. The formation of AM CVn stars currently follows two known models. In the first model \citep{1979AcA....29..665T}, \citep{bib30}, valid after common envelope phases, the configuration consists of two degenerate dwarfs forming a close binary star \citep{paczynski_1976}, \citep{bib13}, \citep{1984ApJ...277..355W}. In the second model, a detached system with a semi-degenerate helium star and a white dwarf companion is formed after more than two mass-transfer phases. There is also a third, less probable channel of formation \citep{2010PASP..122.1133S}, in which the donor evolves from a low-mass main sequence star \citep{2003MNRAS.340.1214P}. The mass transfer between white dwarfs is a defining process in the evolution of AM CVn stars. Its stability or destabilization has a significant effect on the configuration of the white dwarf binary \citep{bib27}. As the mass transfer proceeds and the mass ratio decreases, the orbital separation increases. At a further evolutionary point of the AM CVn objects, the direct accretion (i.e.
the matter that flows directly onto the primary, or accretor, star) stops and this leads to the formation of an accretion disc. The mass transfer rate changes over time, according to the binary orbital separation, and also depends on the angular momentum loss in the system \citep{bib27}, \citep{bib8}. According to some theoretical calculations \citep{1989SvA....33..606T}, AM CVn stars could be a sub-part of the close binary evolution branch. Thus, they can provide observational information to investigate the physics of helium accretion discs \citep{bib17}. It has been possible to detect the AM CVn stars mainly by the existence of helium emission lines in their spectra, since they are faint objects with an average magnitude of 12 - 17 (\cite{1987ApJ...313..757W}; \cite{1997ApJ...480..383P}; \cite{patterson1997superhumps}; \cite{bib17}). The AM CVn stars manifest brightness variability usually in the range of 2-4 magnitudes at optical wavelengths, detected by observational analyses (\cite{bib14}; \cite{bib16}; \cite{bib17}; \cite{bib33}) or by theoretical models \citep{1997PASJ...49...75T}. Outbursts are also reported for the AM CVn objects with orbital periods ranging from 20 to 50 min \citep{2021AJ....162..113V}. The AM CVn stars can be categorized by the different phases of their evolution \citep{1995Ap&SS.225..249W}, \citep{1996MNRAS.280.1035T}, \citep{bib31}: when they have short periods, their mass-transfer rates are high and the systems are in a high state; quiescent systems have longer periods and lower mass-transfer rates. In this paper, we study the AM CVn star member CR Boo. With our results on an object like CR Boo, we contribute to improving the knowledge of the star’s color index and to enlarging the observational database. We report our observational results for CR Boo in Sects. \ref{subsec3.1} and \ref{subsec3.2} and the detected observational effects are described in Sect. \ref{subsec3.3}. In Sect.
\ref{sec4} the color index and the temperature are calculated at the maximum brightness. We discuss the appearance of humps and superhumps, and the variations of the parameters, in Sect. \ref{sec5}. \section{Target details}\label{sec2} CR Boo was discovered in 1986 in the Palomar-Green survey \citep{bib9} and cataloged as PG 1346+082. The observations of Wood \citep{1987ApJ...313..757W} show brightness variability in the range 13.0 - 18.0 mag in the V band. The spectrum of CR Boo shows broad, shallow He I absorption in the active state and He I emission in the quiescent state \citep{1987ApJ...313..757W}. The average orbital period of CR Boo is determined as $P_{orb}=0.0170290(6)$ days \citep{1997ApJ...480..383P}, \citep{bib14}, which is about $24.5$ min $\approx 1471.3$ s. We apply these values of $P_{orb}$ in all calculations of this paper. The masses of the two components were estimated to be in the ranges $M_{1} = 0.7-1.1 M_\odot$ for the primary and $M_{2} = 0.044-0.09 M_\odot$ for the secondary \citep{2010PASP..122.1133S}, \citep{2007ApJ...666.1174R}. CR Boo is an interacting double white dwarf object, in which the white dwarf primary accretes from the helium white dwarf companion \citep{paczynski1967gravitational}, \citep{bib7}, \citep{bib17}, \citep{bib32}. By its helium rich atmosphere, CR Boo is classified as a DB spectral type object \citep{sion1983proposed}. In the classification of Solheim \citep{2010PASP..122.1133S}, the AM CVn objects are divided into groups by their orbital periods and the disc\textquoteright s properties. Since the orbital period of CR Boo is in the range $20 < P_{orb} < 40$ min, it can be assigned to the third group, with a variable size of the disc, producing outbursts or occasional super-outbursts. CR Boo is categorized in the group of outburst systems \citep{bib17}, \citep{bib10} with amplitude variations in brightness of $ > 1$ mag, lasting from days to months.
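The unit conversions for the orbital period used throughout the paper are elementary but worth making explicit (a trivial numerical check, using only the period value cited above):

```python
P_ORB_DAYS = 0.0170290            # average orbital period of CR Boo, in days

p_orb_sec = P_ORB_DAYS * 86400.0  # days -> seconds
p_orb_min = p_orb_sec / 60.0      # seconds -> minutes

print(round(p_orb_sec, 1))        # 1471.3
print(round(p_orb_min, 2))        # 24.52
```

This reproduces the quoted values of $\approx 1471.3$ s and $\approx 24.5$ min.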
Two individual states of CR Boo have been observed: a faint state, with normal outbursts usually lasting one to five days \citep{2021MNRAS.502.4953D} and regular super-outbursts lasting for several weeks \citep{bib14}, \citep{bib18}; and a bright state, with frequent outburst activity, sometimes prolonged for months \citep{bib18}, \citep{bib12}. The high outbursts produced during the faint state recur in a super-cycle of about $46$ days \citep{bib16}, \citep{bib15}. Such behavior is similar to that of SU UMa type dwarf novae - a class of cataclysmic variables (CVs) \citep{1995Ap&SS.225..249W}. \section{Observations and effects}\label{sec3} \subsection{Technical details and data reduction}\label{subsec3.1} We performed $\approx 20$ hours of observations of CR Boo, distributed over 5 nights, during different observational campaigns: 1 July, 2019; 5 July, 2019; 16 April, 2020; 12 February, 2021; 4 February, 2022 \footnotemark[0]. We report observational data obtained with four different telescopes: the 2.0 m telescope of the National Astronomical Observatory (NAO) Rozhen, Bulgaria (hereafter 2m Roz), the 50/70 cm Schmidt telescope (hereafter Sch) of NAO Rozhen, the 60 cm telescope of the Belogradchik Observatory, Bulgaria and the 1.4 m Astronomical Station Vidojevica (hereafter ASV) telescope, Serbia. The 2m telescope with the two-channel focal reducer FoReRo2 and two identical CCD cameras Andor iKON-L was used on the nights of: July 1, 2019 in the V band, July 5, 2019 in the UVR bands, February 12, 2021 in the B, R bands, and February 4, 2022 in the UBVR bands. On the night of July 5, the 50/70 cm Schmidt telescope, equipped with a CCD camera FLI PL16803, and the 1.4 m telescope at ASV, equipped with a CCD camera Andor iKON-L, were used, both in the B and R bands. The observations in the B band were also obtained with the 60 cm telescope of the Belogradchik Observatory (with a CCD camera FLI PL16803), performed on 16 April, 2020 (hereafter 60 Bel).
Data reduction was performed with standard tools for the processing of CCD images and aperture photometry. Photometric standards were applied. Six comparison stars were chosen in the field of CR Boo (see Fig. A1 in the Appendix). Based on the B and V magnitudes published in the APASS9 catalog, the photometry of these comparison stars was performed over all available frames in these bands. Using the r’ and i’ data from APASS9, Rc and Ic were calculated using the three different relations from \cite{1996AJ....111.1748F}, and in the same way we obtained the average magnitudes of the comparison stars. The magnitudes in the U band for 3 standard stars were obtained using both the new (B-V) and (V-Rc) values and the color indices of normal dwarfs published by \cite{2013ApJS..208....9P}. The obtained stellar magnitudes of the standard stars are given in the Appendix (Table A1). The stellar magnitude of CR Boo (in a given band) was obtained by ensemble aperture photometry on all field-visible comparison stars. \footnotetext{Note: Date formats used in this paper: YYYY-MM-DD and DD Month, Year } \subsection{Results from observations}\label{subsec3.2} During the time of our observations, the apparent magnitude of CR Boo varies between 14.06 and 17.04 on average in B. We see that CR Boo changes its brightness state during the time of our observations. To better assess the brightness measurements, we plot all the observational data for all 5 nights in Fig. \ref{fig:1}. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.75\columnwidth]{fig1.eps} \caption{Light curves of CR Boo for the period of all observations, 2019 - 2022, in UBVR. The data in the I band were obtained by estimation. The data were obtained with four different telescopes. } \label{fig:1} \end{center} \end{figure} Further, we give the observational data in detail, separately for each day. The journal of all observations is shown in Table \ref{tab1}.
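The ensemble aperture photometry described above can be sketched as follows. This is a simplified scheme in which the target magnitude is tied to the mean zero-point defined by all comparison stars; the function, the variable names and the numbers are ours, for illustration only, and do not reproduce the actual reduction pipeline:

```python
import numpy as np

def ensemble_magnitude(m_inst_target, m_inst_comp, m_cat_comp):
    """Tie the target's instrumental magnitude to the catalog scale using
    the mean zero-point of all field-visible comparison stars."""
    zp = np.mean(np.asarray(m_cat_comp) - np.asarray(m_inst_comp))
    return m_inst_target + zp

# Illustrative frame: six comparison stars, instrumental vs catalog B magnitudes.
m_inst_comp = np.array([-8.10, -7.45, -6.90, -7.80, -7.10, -6.50])
m_cat_comp = m_inst_comp + 22.30          # a common zero-point of 22.30 mag
print(round(ensemble_magnitude(-8.25, m_inst_comp, m_cat_comp), 2))  # 14.05
```

Averaging the zero-point over all comparison stars suppresses the error contributed by any single one of them.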
In \citep{bib4} we reported some initial observational results from July 1, 2019 and July 5, 2019. In the current paper, we perform a more precise observational analysis of these two nights. We also add data in the R band to the observations of July 5. The obtained light curves are presented in Fig. \ref{fig:2} and Fig. \ref{fig:3}. It is seen that the star\textquoteright s average brightness in V decreases by 0.7 magnitudes (Fig. \ref{fig:3}) over a period of 5 days. The average amplitude variation of the magnitude on these two nights is $\approx 0.23$ and the standard deviation varies in the range $0.04 - 0.1$. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.70\columnwidth]{fig2.eps} \caption{Light curves of CR Boo: 1 July, 2019 in V band} \label{fig:2} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=0.70\columnwidth]{fig3.eps} \caption{Light curves of CR Boo: 5 July, 2019 in UBVR bands. The data are obtained with three different telescopes } \label{fig:3} \end{center} \end{figure} On the 3rd night (16 April, 2020), the observed brightness of CR Boo increased by 2-2.5 magnitudes in the B band, compared to the dates in July 2019. The star\textquoteright s magnitude reaches $13.95 (\pm 0.02)$ in B (Table \ref{tab1}), with $\approx 0.15$ amplitude variations and a standard deviation of $0.04$. According to the object\textquoteright s details (Section 2) and the observed magnitude of CR Boo, we can suggest that on this night the star was in its outburst state (Fig. \ref{fig:4}). \begin{figure}[!htb] \begin{center} \includegraphics[width=0.70\columnwidth]{fig4.eps} \caption{Light curve of CR Boo: 16 April, 2020, in B band. The data are obtained with 60 cm telescope of the Belogradchik Observatory, Bulgaria } \label{fig:4} \end{center} \end{figure} The observational data obtained on 12 February, 2021 show that CR Boo was in its high state again (Fig.
\ref{fig:5}) - in comparison to the magnitudes of the nights of the July 2019 campaign. The amplitude variations of the magnitude are in the range $0.06 - 0.08$ and the standard deviation is $0.002 - 0.02$. \begin{figure}[!htb] \centering \includegraphics[width=0.70\textwidth]{fig5.eps} \caption{Light curves of CR Boo: 12 February, 2021 in BR bands. Superhump activity is detected in both bands. The data are obtained with the 2m telescope of NAO Rozhen. } \label{fig:5} \end{figure} Our latest observations of CR Boo were performed on 04 February, 2022 (Fig. \ref{fig:6}), in the UBVR bands. The magnitudes for all bands are given in Table \ref{tab1}. The amplitudes of the variations for all bands are $0.22 - 0.26$ mag, and the corresponding standard deviations are $0.009 - 0.04$. A trend of increasing brightness is seen, more clearly expressed in the U and V bands at the end of the night. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.70\columnwidth]{fig6.eps} \caption{Light curves of CR Boo: 04 February, 2022 in UBVR bands.
The data are obtained with the 2m telescope of NAO Rozhen } \label{fig:6} \end{center} \end{figure} \begin{sidewaystable}[!h] \sidewaystablefn% \begin{center} \caption{Journal of observations of CR Boo in UBVR} \label{tab1} \begin{tabular}{@{}cccccccc@{}} \toprule Date & Band & Telescope & Exp.time [s] & start / end [UT] & max/min [mag] & Avr [mag] & Error \\ \hline 2019-07-01 & V & 2m Roz & 60 & 19:41 / 21:03 & 16.17 / 16.35 & 16.26 & $\pm 0.01$ \\ \hline 2019-07-05 & U & 2m Roz & 300 & 19:50 / 21:19 & 15.85 / 16.11 & 15.98 & $\pm 0.04$\\ & B & 50/70 Sch & 120 & 20:15 / 20:30 & 16.78 / 17.23 & 17.04 & $\pm 0.1$ \\ & B & Vid & 30/60 & 20:28 / 21:22 & 16.87 / 17.08 & 16.98 & $\pm 0.02$ \\ & V & 2m Roz & 60 & 19:42 / 21:22 & 16.74 / 17.00 & 16.89 & $\pm 0.02$ \\ & R & Sch & 90 & 20:08 / 21:22 & 16.51 / 16.93 & 16.74 & $\pm 0.1$ \\ \hline 2020-04-16 & B & 60 Bel & 90 & 23:58 / 02:05 & 13.95 / 14.15 & 14.06 & $\pm 0.02$ \\ \hline 2021-02-12 & B & 2m Roz & 60/50/40 & 23:55 / 02:03 & 14.13 / 14.21 & 14.17 & $\pm 0.01$ \\ & V & 2m Roz & 40 & 02:26 / 02:40 & 14.08 / 14.14 & 14.12 & $ \pm 0.005$ \\ & R & 2m Roz & 20 & 23:54 / 02:04 & 14.14 / 14.26 & 14.22 & $\pm 0.01$ \\ \hline 2022-02-04 & U & 2m Roz & 300 & 02:40 / 03:56 & 14.23 / 14.49 & 14.33 & $\pm 0.01$ \\ & B & 2m Roz & 50 & 23:20 / 02:32 & 15.24 / 15.49 & 15.39 & $\pm 0.01$ \\ & V & 2m Roz & 60 & 02:40 / 03:56 & 14.98 / 15.21 & 15.08 & $\pm 0.005$ \\ & R & 2m Roz & 20 & 23:20 / 02:32 & 15.11 / 15.36 & 15.26 & $\pm 0.01$ \\ \botrule \end{tabular} \end{center} \end{sidewaystable} \subsection{Observational effects. Humps and superhumps}\label{subsec3.3} During our observations, two observational effects are clearly distinguished. They appear as short-period, low-amplitude brightness variations. We identify them as humps and superhumps \citep{bib14}, \citep{bib17}. Humps are observed in the quiescent low state of cataclysmic variables and AM CVn stars.
They appear with a periodicity $P_{h}$ similar to the binary orbital period. The superhumps\textquoteright\ periodicity $P_{sh}$, on the other hand, is a few percent longer than the binary period, and they can be observed during the outburst state of the objects \citep{1995Ap&SS.225..249W}. Using the observational data of CR Boo (Section 3.2), we obtain the periodicity of the maxima of the brightness variations for each observational night. To analyze these periodicities, we apply the PDM (Phase Dispersion Minimization) method of \cite{1978ApJ...224..953S}. We also check our results with additional software packages, such as the online periodogram service PGRAM (exoplanetarchive.ipac.caltech.edu) and PerSea \citep{2005BaltA..14..205M}, based on the fast and statistically optimal period search method for unevenly sampled observations by A. Schwarzenberg-Czerny \citep{1996ApJ...460L.107S}. The measured period of the amplitude variations on July 1 and July 5 (see Table \ref{tab3}) is approximately the same (within $0.07 - 0.2$ min) as the orbital period of $24.5$ min. In the previous section, we established that on the first two nights the object was in its quiescent state. Following the above terminology, these periodic small-scale amplitude variations in the magnitude of CR Boo could be related to the manifestation of humps, seen most clearly in the U and V bands. The humps are also called orbital humps, since they usually appear regularly with the orbital period. On the next two nights, 16 April, 2020 and 12 February, 2021, CR Boo is in its outburst state (see Subsection 3.2). The estimated average periodicities of the maximum brightness (see Table \ref{tab3}) on 2020-04-16 and 2021-02-12 are slightly ($\approx 1.5 \% $) longer than the orbital period of the binary. The observed brightness variations are therefore interpreted as a manifestation of superhumps.
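The PDM period search applied above can be sketched in a few lines of Python. This is a minimal illustration on a synthetic light curve, not the actual PGRAM/PerSea pipelines; the 24.5 min signal, the sampling, and the noise level are assumed purely for the demonstration:

```python
import numpy as np

def pdm_theta(t, y, period, n_bins=10):
    """Phase Dispersion Minimization statistic (Stellingwerf 1978):
    pooled in-bin variance of the folded light curve divided by the
    total variance; ~1 for a wrong trial period, small near the true one."""
    phase = (t / period) % 1.0
    total_var = np.var(y, ddof=1)
    num, den = 0.0, 0
    for k in range(n_bins):
        in_bin = y[(phase >= k / n_bins) & (phase < (k + 1) / n_bins)]
        if in_bin.size > 1:
            num += (in_bin.size - 1) * np.var(in_bin, ddof=1)
            den += in_bin.size - 1
    return (num / den) / total_var

# synthetic light curve: a 24.5 min "hump" signal with photometric noise
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 490.0, 800))              # minutes
y = 0.1 * np.sin(2 * np.pi * t / 24.5) + rng.normal(0.0, 0.02, t.size)

trial = np.linspace(20.0, 30.0, 401)                   # trial periods [min]
theta = np.array([pdm_theta(t, y, p) for p in trial])
best = trial[np.argmin(theta)]
print(f"best period ~ {best:.2f} min")
```

The minimum of the $\Theta$ statistic over the trial grid recovers the injected period; in practice the dispersion of $\Theta$ near the minimum also gives a rough period uncertainty.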
These values are close to the superhump periods of CR Boo estimated in the analyses of \cite{patterson1997superhumps} and \cite{bib14}. On the last date, 04 February 2022, the star\textquoteright s brightness was rising during the night. Within $\approx 90$ minutes its magnitude increased by $\approx 0.26$ mag in the U and V bands. It was probably transitioning to an outburst state. Brightness variations with small amplitudes of $0.023 - 0.110$ mag are observed in the B and R bands. These variations have a very short periodicity, in the range $4.42 - 10.30$ min. We find that they look more like quasi-periodic oscillations in a stage before the star turns to the higher state. \begin{table*}[!h] \begin{center} \caption{Periodicity of the maximum brightness variations, calculated for each observational night. } \label{tab3} \begin{tabular}{@{} ccc @{}} \toprule Parameter & $P_{h}$ [min] & $P_{sh}$ [min] \\ Date & & \\ \hline \\ 2019-07-01 & $23.41-24.40$ & - \\ & $(\pm 0.05)$ & \\ \hline \\ 2019-07-05 & $ 23.71-24.60 $ & - \\ & $(\pm 0.035 )$ & \\ \hline \\ 2020-04-16 & - & $24.76$ \\ & & $(\pm 0.023)$ \\ \hline \\ 2021-02-12 & - & $24.92$ \\ & & $(\pm 0.0012)$ \\ \hline \\ 2022-02-04 & $4.42-10.30$ & - \\ & $(\pm 0.45)$ & - \\ \hline \\ Ref. values [min] & $23.21$ \footnotemark[1] & $(24.79-25.44)$ \footnotemark[2] \\ & & $24.78$ \footnotemark[3] \\ \botrule \end{tabular}\\ \footnotetext[1]{\citep{bib4}} \footnotetext[2]{\citep{bib14}} \footnotetext[3]{\citep{patterson1997superhumps}} \end{center} \end{table*} \section{System parameters: Color index and temperature }\label{sec4} The observations in UBVR filters on July 5, April 16, February 12 and February 4 allow us to estimate the color indices of CR Boo. From these data, we obtain the index values at the average brightness of each of these four nights.
The dereddened color indices $(B-V)_{0}$ are obtained using the color excess $E(B-V) = 0.013 \pm 0.006$, calculated on the basis of the field-averaged selective extinction $<A_{V}> = 0.04 \pm 0.02$ \citep{2007ApJ...666.1174R}, excluding the negative results for the Ref.3 star of CR Boo, and the standard extinction law \citep{1989ApJ...345..245C}, \citep{1999PASP..111...63F}, \citep{2005ApJ...619..931I}. The observed color index on July 5 is slightly larger than zero ($\approx 0$), which shows a tendency for the source to become redder at maximum brightness. The relatively rare U-band observations of CR Boo give negative values of the color index $(U-B)_{0}$ on both nights: 2019-07-05 and 2022-02-04. Further, using the dereddened color index $(B-V)_{0}$, we can calculate the color temperature, applying the formula of \cite{bib1}: \begin{equation} T[K] = 4600 [{\frac {1}{0.92(B-V)_{0}+1.7}}+{\frac {1}{0.92(B-V)_{0}+0.62}}] \end{equation} The obtained value for the temperature on July 5 is then: $T(B-V)_{0} \approx 8900 \pm 400$ K. The $B-V$ index on 16 April, 2020 goes from zero to a slightly negative value, which is an indication of a bluer and hotter source of the superhump events on this night. This is reflected in the temperature, which at maximum brightness reaches $T(B-V)_{0}\approx 11700 \pm 400$ K. The observations during the 12 February, 2021 campaign show that the source of the observed superhumps is rather redder in the B and U colors. Based on the $B-V$ index on 2021-02-12, the estimation of the average temperature gives a lower value: $T(B-V)_{0} \approx 9600 \pm 200$ K, which is in accordance with the reddening of the source. The light curves obtained on February 4, 2022 display a clear trend in brightness. Such excursions are typical for CR Boo and were described by \cite{patterson1997superhumps}.
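The dereddening and the color-temperature formula of Eq. (1) can be sketched as follows. This is a minimal illustration; the observed $B-V$ values are taken from Table \ref{tab4}, and small differences from the tabulated $(B-V)_{0}$ values are due to rounding:

```python
import math

def color_temperature(bv, ebv=0.013):
    """Deredden B-V with the adopted color excess E(B-V) = 0.013 mag
    and apply the color-temperature formula of Eq. (1)."""
    bv0 = bv - ebv
    return 4600.0 * (1.0 / (0.92 * bv0 + 1.7) + 1.0 / (0.92 * bv0 + 0.62))

# observed B-V values for the four nights
for date, bv in [("2019-07-05", 0.12), ("2020-04-16", -0.094),
                 ("2021-02-12", 0.054), ("2022-02-04", 0.27)]:
    print(date, round(color_temperature(bv)), "K")
```

The four computed temperatures agree with the tabulated values of $8900$, $11700$, $9600$ and $7700$ K within the quoted uncertainties.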
For that reason, in order to estimate the color indices cited above, we use the average trend values in the UBVR bands. The obtained temperature is then: $T(B-V)_{0} \approx 7700 \pm 220$ K. The obtained average values of the color indices for each date and the corresponding color temperatures are given in Table \ref{tab4}. \begin{table*}[!h] \begin{center} \caption{Color index and color temperature for the four observational dates.} \label{tab4} \begin{tabular}{@{}ccccc@{}} \hline Date / & 2019-07-05 & 2020-04-16 & 2021-02-12 & 2022-02-04 \\ Parameter & \\ \hline & & & & \\ $U-B$ & $- 1.09 \pm 0.04$ & $- 0.772 \pm 0.04$ & - & $- 1.06 \pm 0.04$\\ \hline $(U-B)_{0}$ & $- 1.097 $ & $ - 0.782 $ & - & $- 1.071 $\\ & $(\pm 0.04)$ & $(\pm 0.04)$ & - & $(\pm 0.05)$\\ \hline $B-V$ & $0.12 $ & $- 0.094 $ & $0.054 $ & $0.27 $\\ & $(\pm 0.02)$ & $(\pm 0.023)$ & $(\pm 0.021)$ & $(\pm 0.03)$\\ \hline $(B-V)_{0}$ & $0.109$ & $ - 0.107 $ & $0.041$ & $0.257$ \\ & $(\pm 0.04)$ & $(\pm 0.03)$ & $(\pm 0.02)$ & $(\pm 0.04)$\\ \hline $B-R$ & $0.27 \pm 0.03$ & - & $-0.042 \pm 0.021$ & $0.13 \pm 0.01$ \\ \hline $ V-R$ & $0.15 \pm 0.05$ & $0.110 \pm 0.05$ & - & $-0.15 \pm 0.02$ \\ \hline $T_{col}(B-V)_{0} [K]$ & $8900$ & $11700$ & $9600$ & $7700$ \\ & $(\pm 400)$ & $(\pm 400)$ & $(\pm 200)$ & $(\pm 220)$ \\ \hline \\ \hline \end{tabular} \end{center} \end{table*} In Fig. \ref{fig:7} we present the color evolution and the temperature gradient through the different states of CR Boo for the time of our observations. The figure shows their variations during the hump and superhump activity (see also Fig. \ref{fig:1}). \begin{figure}[!htb] \begin{center} \includegraphics[width=0.60\textwidth]{fig7a.eps} \hfill \includegraphics[width=0.55\textwidth]{fig7b.eps} \hfill \includegraphics[width=0.55\textwidth]{fig7c.eps} \caption{The color--magnitude diagram B: B-R (upper) presents CR Boo in outburst state. It is seen that the star becomes bluer and brighter.
The color evolution $(B-V)_{0}$ (middle) and temperature variations (lower) for the period from 2019-07-05 to 2022-02-04, in MJD. } \label{fig:7} \end{center} \end{figure} \section{Discussion}\label{sec5} AM CVn stars are objects in the final stage of binary star evolution. They provide a laboratory to study the physical properties of such systems, to obtain more information about white dwarf stars and to understand the evolution of double white dwarfs. \subsection{Humps and superhumps production}\label{sec5.1} As we have seen in Section 3, the observational results show small-amplitude semi-periodic variations during the low state of CR Boo on the first two nights of observations (2019-07-01 and 2019-07-05). Similar variations, with a longer period, are observed on the third and fourth nights, when the star is in its outburst state (2020-04-16 and 2021-02-12). Here, we discuss the probable sources of their production and their probable positions in the accretion disc. Following the results in Sects. \ref{subsec3.2} and \ref{subsec3.3}, our first assumption is that the humps on the dates 2019-07-01 and 2019-07-05 are a manifestation of orbital humps \citep{2002A&A...383..574O}. The assumption is that the source of these humps is the periodic appearance of the hot spot, located in the outer part of the accretion disc. As the CR Boo binary system rotates, periodic disturbances in the light curve are produced when the hot spot is at the position facing the direction of observation. This happens once every full rotational cycle, while CR Boo is in its quiescent state. From the geometrical point of view, the orbital inclination of CR Boo, $i = 30^{\circ}$ \citep{2001A&A...373..222N}, could affect the line of sight available to us. As a result, this configuration could produce a partial effect on the light curve, detected as minor disturbances in the brightness, or humps.
It is known that the superhumps during the outburst activity could be produced by a tidal instability, combined with the effect of disc precession (see \citep{bib11}, \citep{bib19}, \cite{2011ApJ...741..105W}, \citep{bib21}, \citep{1995Ap&SS.225..249W}). Superhumps could also be produced by other mechanisms. The formation of spiral density waves at the outer disc edge could have a significant effect on the light curve and, correspondingly, on the superhump production (\citep{1998ApJ...506..360S}, \citep{bib22}). In this case, the brightness increases due to the energy released by the density enhancement at the places where the spiral density waves interact with other shock formations in the disc. Along with the mechanisms mentioned above, if a blob exists on the accretion disc surface, its interaction with the spiral arms could lead to the production of short-time oscillations in the light curve. On the other hand, the tidal wave coming from the secondary star through the Lagrange point $L_{1}$ could make the hot spot parameters (such as its size and density) unstable. This may cause further irregular or fading hump production. \subsection{Variations in parameters}\label{sec5.2} According to the calculated indices (Section \ref{sec4}), the color of the star varies from U and B to R in the different states and dates. It shows an ultraviolet excess and at the same time appears red (on 2019-07-05), when it is in a low state with hump activity. When CR Boo is in a low to rising state (on 2022-02-04), with a manifestation of quasi-periodic oscillations, an ultraviolet excess is observed and the star stays redder. We find two different color states of the star during the outbursts on two different nights (2020-04-16 and 2021-02-12).
Comparing the results for the two observational dates on which superhumps are observed (2020-04-16 and 2021-02-12), it is notable that on the first night (2020-04-16) the object is bluer and its color temperature is higher by $\approx 2100 \pm 450$ K than on the second night (Table \ref{tab4}). The source of these temperature variations in the B band has probably changed its parameters by the later date (2021-02-12), when the star is redder and the temperature values are lower. Many cataclysmic variables in low brightness states show analogous behavior of their color indices, e.g. MV Lyr \citep{1981ApJ...251..611R}, KR Aur \citep{2006ASPC..349..197B}, etc. When they reach a very low state (such as a deep state or a deep minimum), they show a higher temperature and a bluer color compared to the high state. This could be caused by the contribution to the energy distribution of the white dwarf or of the hot inner parts of a weak accretion disc. Comparing our results for CR Boo with other AM CVn objects \citep{2021MNRAS.505..215R}, we see that on the nights of 2019-07-05 and 2020-04-16 the star contrasts with the behavior of the binary SDSS 0807 and is similar to the AM CVn star SDSS 1411. On the night of 2021-02-12, CR Boo\textquoteright s color is closely consistent with SDSS 1137 and SDSS 0807. The behavior of our object on 2022-02-04 looks more like SDSS 1411. In our observations, CR Boo generally gets redder as its brightness weakens, except on the night of February 12, 2022. This is probably because the disc is sufficiently bright, even in the lowest states we have observed, to dominate the system\textquoteright s emission. The negative value of the color index ($U-B < 0$) during our observations is an indication of a hotter radiating zone in the accretion disc around the primary star.
The heated parts of the accretion disc could also be responsible for the production of humps and superhumps. \section{Conclusion}\label{sec6} We presented our observational results for the AM CVn star CR Boo in the UBVR bands, obtained over five nights during different observational campaigns. The data were obtained with the Rozhen National Astronomical Observatory, Belogradchik and AS Vidojevica telescopes. The brightness of the system varied between $13.95$ and $17.23$ mag in the B band. We confirmed the appearance of humps during the quiescent state and of superhumps during the active state in CR Boo type objects. The superhump periodicity $P_{sh} \approx 24.76 - 24.92$ min is obtained for the nights when the object was in its probable outburst state. During the superhumps, the color was $ -0.094 < B-V < 0.054$. We calculated the color temperature using the dereddened color indices; it is in the range $9600~K < T(B-V)_{0} < 11700~K$. The temperature during the superhump observations is increased compared to the values in the hump period, and differs on average by $2100 \pm 450$ K between the two superhump nights. The star becomes bluer when it is brighter in times of superhumps. We found that during the two nights with superhump activity CR Boo behaves differently: the star was bluer on the first night but redder on the second. \bmhead{Acknowledgments} The authors thank the anonymous referee for the useful comments that helped us to improve the paper. The authors acknowledge support from the grant: Binary stars with compact objects, $K\Pi -06-H28/2$ $08.12.2018$ (Bulgarian National Science Fund). This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program.
\section*{Declarations} \begin{itemize} \item This work was supported by the grant [Binary stars with compact objects, $K\Pi -06-H28/2$ $08.12.2018$ (Bulgarian National Science Fund)]. \item The authors declare that they have no conflicts of interest. \item Ethics approval \item All authors contributed to the study conception and design. Observations were performed by [Svetlana Boeva], [Daniela Boneva], [Georgi Latev], [Yanko Nikolov] and [Zorica Cvetkovi\'{c}]. Material preparation, data collection and analysis were performed by [Svetlana Boeva], [Daniela Boneva], [Radoslav Zamanov] and [Georgi Latev]. Text preparation and correction were performed by [Daniela Boneva], [Svetlana Boeva], [Georgi Latev] and [Wojciech Dimitrov]. The first draft of the manuscript was written by [Daniela Boneva] and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. \end{itemize} \begin{appendices} \section{}\label{secA} In this Appendix, we present information about the comparison stars used in the photometric analysis of the recent observations of CR Boo (Section 3.1). The chart in Fig. A1 shows the configuration of the stars in the field of CR Boo, where the comparison stars are denoted with numbers and blue circles. The obtained magnitudes of the standard stars are shown in Table A1. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.75\columnwidth]{fig8.eps} \caption{A chart of comparison stars. The stars are denoted with blue circles and numbers} \label{fig:8} \end{center} \end{figure} \begin{table}[!htb] \caption{Standard stars photometry (Johnson - Cousins). The numbered stars refer to those in Fig.
8} \label{tab5} \begin{tabular}{@{}c c c c c c @{}} \toprule Star / Band & U & B & V & R & I \\ \toprule \\ (3) & - & $16.377$ $\pm 0.020$ & $15.545$ $\pm 0.010$ & $15.088$ $\pm 0.010$ & $14.628 \pm 0.05$ \\ \hline \\ (4) & $16.30$ $\pm 0.05$ & $14.959$ $\pm 0.030$ & $13.742$ $\pm 0.025$ & $12.991$ $\pm 0.020$ & $12.373 \pm 0.005$ \\ \hline \\ (7) & - & $16.335$ $\pm 0.030$ & $15.758$ $\pm 0.010$ & $15.399$ $\pm 0.010$ & $15.038 \pm 0.05$ \\ \hline \\ (8) & - & $14.911$ $\pm 0.020$ & $14.152$ $\pm 0.015$ & $13.762$ $\pm 0.030$ & $13.357 \pm 0.02$ \\ \hline \\ (16) & $14.07$ $\pm 0.03$ & $13.229$ $\pm 0.015$ & $12.259$ $\pm 0.015$ & $11.670$ $\pm 0.005$ & $11.147 \pm 0.01$ \\ \hline \\ (22) & $12.64$ $\pm 0.02$ & $12.621$ $\pm 0.015$ & $12.070$ $\pm 0.005$ & $11.781$ $\pm 0.010$ & $11.472 \pm 0.005$ \\ \\ \botrule \end{tabular} \end{table} \end{appendices}
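As an illustration of how such comparison-star magnitudes enter the photometric reduction, differential photometry reduces to a flux ratio. This is a minimal sketch, not the authors' pipeline; the instrumental fluxes below are hypothetical, while 13.742 is the V magnitude of comparison star (4) from Table A1:

```python
import math

def calibrated_mag(flux_target, flux_comp, mag_comp):
    """Differential photometry: target magnitude from its instrumental
    flux ratio to a comparison star of known catalogue magnitude."""
    return mag_comp - 2.5 * math.log10(flux_target / flux_comp)

# hypothetical instrumental fluxes for the target and comparison star (4)
print(round(calibrated_mag(1500.0, 120000.0, 13.742), 3))
```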
\section{Introduction} The aim of the MODEST collaboration \citep{article:modest1} is to model and understand dense stellar systems, which requires a good understanding of what happens when two single stars or binary systems undergo a close encounter. A possible outcome of such an encounter is a collision followed by the merging of two or more stars. This is a possible formation channel for blue straggler stars (\emph{e.g.} \citet{article:sills_on_axis}). Understanding the formation and evolution of blue stragglers is important for understanding the Hertzsprung-Russell diagram of clusters. \section{Method} We have developed a version of the Eggleton stellar evolution code \citep{article:eggleton_evlowmass, article:pols_approxphys} that can import the output of a collision calculation and calculate the subsequent evolution of the remnant, in principle without human intervention. We have used this code to calculate detailed evolution models of collision remnants from the $N$-body simulation of M67 by \citet{article:hurley_m67} and compared these with normal main-sequence stars of the same mass as well as homogeneous models with the same average abundances. The post-merger profiles were calculated with the parametrised code of \citet{article:lombardi_mmas}. The collisions in the $N$-body simulation span a range of total masses $M = M_1+M_2$, mass ratios $q=M_2/M_1$ and collision times $t$. We have also calculated a grid of models spanning four collision times ($t = 2800, 3100, 3400 \mbox{ and } 3700\,\mathrm{Myr}$), four mass ratios ($q = 0.4, 0.6, 0.8 \mbox{ and } 1.0$) and six total masses ($M = 1.5, 1.6, 1.7, 1.8, 1.9 \mbox{ and } 2.0$), which covers the parameter space of collisions found in the $N$-body simulation.
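The model grid described above can be enumerated programmatically. This is a minimal sketch; the mass split assumes $q = M_2/M_1 \le 1$, and the parameter values are those quoted in the text:

```python
from itertools import product

# grid parameters from the text: collision times [Myr], mass ratios, total masses
times = [2800, 3100, 3400, 3700]
ratios = [0.4, 0.6, 0.8, 1.0]
totals = [1.5, 1.6, 1.7, 1.8, 1.9, 2.0]

def component_masses(M, q):
    """Split the total mass M = M1 + M2 using q = M2/M1 <= 1."""
    m1 = M / (1.0 + q)
    return m1, q * m1

grid = [(t, q, M, *component_masses(M, q))
        for t, q, M in product(times, ratios, totals)]
print(len(grid), "grid models; first:", grid[0])
```

The full grid comprises $4 \times 4 \times 6 = 96$ collision models.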
\section{Results and Conclusions} \begin{figure} \ifpdf \includegraphics[width=\textwidth]{glebbeek_fig_hrd} \else \includegraphics[angle=270,width=\textwidth]{glebbeek_fig_hrd} \fi \caption{Colour-magnitude diagram of the open cluster M67 ($\blacklozenge$). Overplotted are the locations of our models at $4 \mathrm{Gyr}$, the age of M67. The black ({\Large$\bullet$, $\blacktriangle$}) symbols are collisions from the M67 simulation. Two of these are double collisions, which are indicated by {\Large$\blacktriangle$}. The grey ({\Large$\bullet$}) symbols are from our larger grid.} \label{fig:hrd_m67} \end{figure} Compared to normal stars, collision products are helium enhanced. Most of the helium enhancement is in the interior and does not affect the opacity of the envelope. The helium enhancement does increase the mean molecular weight and therefore the luminosity of the star. This decreases the remaining lifetime of collision products compared to normal stars of the same mass and can be important for the predicted number of blue stragglers from cluster simulations. The increased luminosity changes the distribution of blue stragglers in the colour-magnitude diagram, moving it above the extension of the main sequence. The evolution track of a fully mixed model can be significantly bluer than a self-consistently calculated evolution track of a merger remnant. Fully mixed models are closer to the zero-age main sequence. Our grid of models covers most of the observed blue straggler region of M67 (Figure \ref{fig:hrd_m67}). Better coverage of the blue part of the region can be achieved by increasing the upper mass limit of the grid. The brightest observed blue straggler falls outside the region covered by our grid because at least a double collision is required to explain it.
\section{Introduction}\label{sec:introduction} When releasing information publicly from a database or sharing data with collaborators, data collectors are always concerned about exposing sensitive personal information of individuals who contribute to the data. Even with key identifiers removed, data users may still identify a participant in a data set, for example via linkage with public information. Differential privacy (DP) provides a strong privacy guarantee to data release without making assumptions about the background knowledge or behavior of data users \cite{dwork2006calibrating, dwork2008, dwork2011differential}. For a given privacy budget, information released via a differentially private mechanism guarantees that no additional personal information of an individual in the data can be inferred, regardless of how much background information data users already possess about the individual. DP has spurred a great amount of work in the development of differentially private mechanisms to release results and data, including the Laplace mechanism \cite{dwork2006calibrating}, the Exponential mechanism \cite{mcsherry2007mechanism, mcsherry2009privacy}, the median mechanism \cite{roth2010median}, the multiplicative weights mechanism \cite{multiplicative}, the geometric mechanism \cite{geometric}, the staircase mechanism \cite{staircase}, the Gaussian mechanism \cite{privacybook}, and applications of DP for private and secure inference in a Bayesian setting \cite{privateBayes}, among others. In this paper, we unify the Laplace mechanism and the Gaussian mechanism in the framework of a general family, referred to as the generalized Gaussian (GG) mechanism. The GG mechanism is based on the $l_p$ global sensitivity (GS) of queries, a generalization of the $l_1$ GS.
We demonstrate the nonexistence of a scale parameter that would lead to a GG mechanism of pure $\epsilon$-DP in the case of $p\ne1$ if the results to be released are unbounded, but suggest the GG mechanism of $(\epsilon, \delta)$-probabilistic DP (pDP) as an alternative in such cases. For bounded data, we introduce the truncated GG mechanism and the boundary inflated truncated GG mechanism, which satisfy pure $\epsilon$-DP. We investigate the connections between the GG mechanism and the Exponential mechanism when the utility function in the latter is based on the Minkowski distance, and establish the relationship between the sensitivity of the utility function in the Exponential mechanism and the $l_p$ GS of queries. We then take a closer look at the Gaussian mechanism (the GG mechanism of order 2), and derive a lower bound on the scale parameter that delivers $(\epsilon,\delta)$-pDP. The bound is tighter than the bound required to satisfy $(\epsilon,\delta)$-approximate DP (aDP) in the Gaussian mechanism \cite{privacybook}, implying that less noise is injected into the sanitized results. We compare the utility of sanitized results, in terms of the tail probability and dispersion or mean squared errors (MSE), from independent applications of the Gaussian mechanism and the Laplace mechanism. Finally, we run three experiments on the mildew, Czech, and adult data, respectively, and sanitize the count data via the Laplace mechanism and the Gaussian mechanisms of $(\epsilon,\delta)$-pDP and $(\epsilon,\delta)$-aDP. We compare the accuracy of the sanitized results in terms of the $l_1$ distance and the Kullback-Leibler divergence from the original results, and examine how sanitization affects the prediction accuracy of support vector machines constructed with the sanitized data in the adult experiment. The rest of the paper is organized as follows.
Section \ref{sec:GGM} defines the $l_p$ GS and presents the GG mechanism of $(\epsilon,\delta)$-pDP, the truncated GG mechanism, and the boundary inflated truncated GG mechanism that satisfy pure $\epsilon$-DP. It also connects and differentiates between the GG mechanisms and the Exponential mechanism when the utility function in the latter is based on the Minkowski distance. Section \ref{sec:gaussian} takes a closer look at the Gaussian mechanism of $(\epsilon,\delta)$-pDP, and compares it with the Gaussian mechanism of $(\epsilon,\delta)$-aDP. It also compares the tail probability and the dispersion of the noises injected via the Gaussian mechanism of $(\epsilon,\delta)$-pDP and the Laplace mechanism. Section \ref{sec:experiments} presents the findings from the three experiments. Concluding remarks are given in Section \ref{sec:discussion}. \section{Generalized Gaussian Mechanism} \label{sec:GGM} \subsection{differential privacy (DP)} DP was proposed and formulated in Dwork \cite{dwork2006} and Dwork et al. \cite{dwork2006calibrating}. A perturbation algorithm $\mathcal{R}$ gives $\epsilon$-differential privacy if for all data sets $(\mathbf{x},\mathbf{x}')$ that differ by only one individual ($d(\mathbf{x},\mathbf{x}')=1$), and all possible query results $Q\subseteq \mathcal{T}$ to query $\mathbf{s}$ ($\mathcal{T}$ denotes the output range of $\mathcal{R}$), \begin{equation}\label{eqn:dp} \left|\log\left(\frac{\Pr(\mathcal{R}( \mathbf{s}(\mathbf{x})) \in Q)}{\Pr(\mathcal{R}( \mathbf{s}(\mathbf{x}'))\in Q)} \right)\right|\le\epsilon, \end{equation} \noindent where $\epsilon>0$ is the privacy ``budget'' parameter. $\mathbf{s}$ refers to queries about data $\mathbf{x}$ and $\mathbf{x}'$; we also use it to denote the query results (unless stated otherwise, the domain of the query results is the set of all real numbers).
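As a numeric illustration of Eqn (\ref{eqn:dp}), the standard Laplace mechanism with scale $b=\Delta_1/\epsilon$ attains the bound exactly for a scalar count query. This is a minimal sketch; the budget $\epsilon=\log 3$ and the neighboring counts are chosen arbitrarily for the demonstration:

```python
import numpy as np

def laplace_logpdf(z, mu, b):
    # log density of the Laplace(mu, b) distribution
    return -np.log(2.0 * b) - np.abs(z - mu) / b

eps = np.log(3.0)                   # privacy budget (arbitrary choice)
delta1 = 1.0                        # l1 GS of a scalar count query
b = delta1 / eps                    # Laplace scale calibrated to eps-DP

s_x, s_xp = 41.0, 42.0              # query results on neighboring data sets
z = np.linspace(-50.0, 150.0, 100001)
log_ratio = np.abs(laplace_logpdf(z, s_x, b) - laplace_logpdf(z, s_xp, b))
print("max |log density ratio| =", log_ratio.max(), "vs eps =", eps)
```

The maximum absolute log density ratio equals $|s(\mathbf{x})-s(\mathbf{x}')|/b = \epsilon$, attained for outputs outside the interval between the two query values.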
$d(\mathbf{x},\mathbf{x}')=1$ is often defined in two ways in the DP community: $\mathbf{x}$ and $\mathbf{x}'$ are of the same size and differ in exactly one record (row) in at least one attribute (column); or $\mathbf{x}$ is exactly the same as $\mathbf{x}'$ except that it has one less (more) record. Mathematically, Eqn (\ref{eqn:dp}) states that the probabilities of obtaining the same query result perturbed via $\mathcal{R}$ are roughly the same regardless of whether the query is sent to $\mathbf{x}$ or $\mathbf{x}'$. In layman's terms, DP implies that the chance an individual will be identified based on the perturbed query result is very low, since the query result would be about the same with or without the individual in the data. The degree of ``roughly the same'' is determined by the privacy budget $\epsilon$. The lower $\epsilon$ is, the more similar the probabilities of obtaining the same query results from $\mathbf{x}$ and $\mathbf{x}'$ are. DP provides a strong and robust privacy guarantee in the sense that it does not assume anything regarding the background knowledge or the behavior of data users. In addition to the ``pure'' $\epsilon$-DP in Eqn (\ref{eqn:dp}), there are softer versions of DP, including the $(\epsilon,\delta)$-approximate DP (aDP) \cite{dwork2006delta}, the $(\epsilon,\delta)$-probabilistic DP (pDP) \cite{onthemap}, the $(\epsilon,\delta)$-random DP (rDP) \cite{randomDP}, and the $(\epsilon,\tau)$-concentrated DP (cDP) \cite{cPD}. In all the relaxed versions of DP, one additional parameter is employed to characterize the amount of relaxation on top of the privacy budget $\epsilon$. Both the $(\epsilon,\delta)$-aDP and the $(\epsilon,\delta)$-pDP reduce to $\epsilon$-DP when $\delta=0$, but they differ with respect to the interpretation of $\delta$.
In $(\epsilon,\delta)$-aDP, \begin{equation}\label{eqn:adp} \Pr(\mathcal{R}( \mathbf{s}(\mathbf{x})) \in Q)\le e^{\epsilon}\Pr(\mathcal{R}( \mathbf{s}(\mathbf{x}'))\in Q) + \delta; \end{equation} while a perturbation algorithm $\mathcal{R}$ satisfies $(\epsilon,\delta)$-pDP if \begin{equation}\label{eqn:pdp} \Pr\left(\left|\log\left(\frac{\Pr(\mathcal{R}( \mathbf{s}(\mathbf{x})) \in Q)}{\Pr(\mathcal{R}( \mathbf{s}(\mathbf{x}'))\in Q)} \right)\right|>\epsilon\right)\le\delta; \end{equation} that is, the probability that $\mathcal{R}$ generates an output belonging to the disclosure set is bounded above by $\delta$, where the disclosure set contains all the possible outputs that leak information for a given privacy budget $\epsilon$. The fact that probabilities lie within $[0,1]$ puts constraints on the values of $\epsilon$, $\Pr(\mathcal{R}(\mathbf{s}(\mathbf{x}'))\in Q)$, and $\delta$ in the framework of $(\epsilon,\delta)$-aDP. By contrast, $(\epsilon,\delta)$-pDP is less constrained and more intuitive with its probabilistic flavor. When $\delta$ is small, $(\epsilon,\delta)$-aDP and $(\epsilon,\delta)$-pDP are roughly the same. The $(\epsilon,\delta)$-rDP is also a probabilistic relaxation of DP, but it differs from $(\epsilon,\delta)$-pDP in that the probabilistic relaxation is with respect to the data generation. In $(\epsilon,\tau)$-cDP, the privacy cost is treated as a random variable with an expectation of $\epsilon$, and the probability that the actual cost exceeds $\epsilon$ by more than $a$ is bounded by $e^{-(a/\tau)^2/2}$. The $(\epsilon,\tau)$-cDP, similar to the $(\epsilon,\delta)$-pDP, relaxes the satisfaction of DP with respect to $\mathcal{R}$ and is broader in scope.
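For reference, the classical Gaussian mechanism of $(\epsilon,\delta)$-aDP \cite{privacybook} calibrates the noise scale as $\sigma \ge \sqrt{2\ln(1.25/\delta)}\,\Delta_2/\epsilon$ for $\epsilon\in(0,1)$. A minimal sketch follows; the query value and the privacy parameters are arbitrary choices for illustration:

```python
import math
import numpy as np

def gaussian_sigma_adp(delta2, eps, delta):
    """Classical noise scale of the Gaussian mechanism of (eps, delta)-aDP
    (Dwork and Roth): sigma >= sqrt(2 ln(1.25/delta)) * Delta_2 / eps,
    valid for eps in (0, 1)."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * delta2 / eps

sigma = gaussian_sigma_adp(delta2=1.0, eps=0.5, delta=1e-5)
rng = np.random.default_rng(1)
sanitized = 42.0 + rng.normal(0.0, sigma)   # sanitize a count of 42
print(f"sigma = {sigma:.3f}, sanitized count = {sanitized:.2f}")
```

The tighter pDP-based bound derived later in the paper would permit a smaller $\sigma$ for the same $(\epsilon,\delta)$.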
\subsection{ \texorpdfstring{$l_p$}{} global sensitivity} \begin{defn}\label{def:Lpgs} For all $(\mathbf{x},\mathbf{x}')$ with $d(\mathbf{x},\mathbf{x}')=1$, the $l_p$-global sensitivity (GS) of query $\mathbf{s}$ is \begin{equation}\label{eqn:Lpgs} \Delta_p=\maxx\|\mathbf{s}(\mathbf{x})-\mathbf{s}(\mathbf{x}')\|_p=\left(\textstyle\sum_{k=1}^r\!\left|s_k(\mathbf{x})-s_k(\mathbf{x}')\right|^p\right)^{1/p}\mbox{for integer } p\!>\!0. \end{equation} \end{defn} \noindent In layman's terms, $\Delta_{p}$ is the maximum difference, measured by the Minkowski distance, in query results $\mathbf{s}$ between two neighboring data sets $\mathbf{x},\mathbf{x}'$ with $d(\mathbf{x},\mathbf{x}')=1$. The sensitivity is $``$global$"$ since it is defined over all possible data sets and all possible ways that $\mathbf{x}$ and $\mathbf{x}'$ differ by one. The higher $\Delta_{p}$ is, the more disclosure risk there is on the individuals from releasing the original query results $\mathbf{s}$. The $l_p$ GS is a key concept in the construction of the generalized Gaussian mechanism in Section \ref{sec:GGM}. The $l_p$ GS is a generalization of the $l_1$ GS \cite{dwork2006, dwork2006calibrating} and the $l_2$ GS \cite{privacybook}. The $``$difference$"$ between $\mathbf{s}(\mathbf{x})$ and $\mathbf{s}(\mathbf{x}')$ measured by $\Delta_{1}$ is the largest among all $\Delta_{p}$ for $p\ge1$ since $\|\mathbf{s}\|_{p+a} \le \|\mathbf{s}\|_p$ for any real-valued vector $\mathbf{s}$ and $a \ge 0$. In addition, $\Delta_{1}$ is also the most $``$sensitive$"$ measure given that the rate of change with respect to any $s_k$ is the largest among all $p\ge1$. When $s$ is a scalar, $\Delta_{p}=\Delta_{1}$ for all $p>0$. When $\mathbf{s}$ is multi-dimensional, an easy upper bound for the $l_1$ GS $\Delta_{1}$ is $\sum_{k=1}^r\Delta_{1,k},$ the sum of the $l_1$ GS of each element $k$ in $\mathbf{s}$, by the triangle inequality. 
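To make Definition \ref{def:Lpgs} concrete, the $l_p$ GS of a small query can be checked by brute force; the sketch below (a hypothetical histogram query over a tiny domain, not part of any mechanism) enumerates all data sets of size $n$ and all neighbors obtained by deleting one record.

```python
from itertools import product

def lp_distance(s, s2, p):
    # Minkowski distance between two query-result vectors
    return sum(abs(a - b) ** p for a, b in zip(s, s2)) ** (1.0 / p)

def histogram(x, domain):
    # r-bin histogram query: the count of each domain value in x
    return [sum(1 for v in x if v == d) for d in domain]

def histogram_gs(domain, n, p):
    """Brute-force l_p GS of the histogram query, with d(x, x') = 1
    meaning x' has one record less than x."""
    worst = 0.0
    for x in product(domain, repeat=n):
        s = histogram(x, domain)
        for i in range(n):               # drop record i to form a neighbor x'
            x2 = x[:i] + x[i + 1:]
            worst = max(worst, lp_distance(s, histogram(x2, domain), p))
    return worst
```

For any $p$, `histogram_gs` returns 1: deleting one record changes exactly one bin by one count, illustrating why the GS of an $r$-bin histogram is 1 rather than $r^{1/p}$.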
Lemma \ref{lem:GSp1} gives an upper bound on $\Delta_{p}$ for a general $p$ that includes $p=1$ as a special case (the proof is provided in Appendix \ref{app:GSp1}). \begin{lem}\label{lem:GSp1} $\left(\sum_{k=1}^r\Delta_{1,k}^p\right)^{1/p}$ is an upper bound for $\Delta_{p},$ where $\Delta_{1,k}$ is the $l_1$ GS of $s_k$. \end{lem} \noindent The upper bound given in Lemma \ref{lem:GSp1} can be conservative in cases where the change from $\mathbf{x}$ to $\mathbf{x}'$ does not necessarily alter every entry in the multidimensional $\mathbf{s}$. For example, the $l_p$ GS of releasing a histogram with $r$ bins is 1 (if $d(\mathbf{x},\mathbf{x}')=1$ is defined as $\mathbf{x}'$ having one record less/more than $\mathbf{x}$). In other words, the GS is not $r^{1/p}$ even though there are $r$ counts in the released histogram, but is the same as in releasing a single cell because removing one record only alters the count in a single bin. It is obvious that each element $s_k$ in $\mathbf{s}$ for $k=1,\ldots,r$ needs to be bounded to obtain a finite $\Delta_p$. The most extreme case is when the change from $\mathbf{x}$ to $\mathbf{x}'$ makes $s_k$ jump from one extreme to the other, implying the range of $s_k$ can be used as an upper bound for $\Delta_{1,k}$, which, combined with Lemma \ref{lem:GSp1}, leads to the following claim. \begin{claim}\label{cla:bounds vs GS} Denote the bounds of statistic $s_k$ by $[c_{k0},c_{k1}]$, both of which are finite. The GS $\Delta_{1,k}\le c_{k1}-c_{k0}$ and the GS for $\mathbf{s}=\{s_k\}_{k=1,\ldots,r}$ is $\Delta_p\le \left(\sum_{k=1}^r (c_{k1}-c_{k0})^p\right)^{1/p}$. \end{claim} \subsection{generalized Gaussian distribution} The GG mechanism is defined based on the GG distribution GG$(\mu,b,p)$ with location parameter $\mu$, scale parameter $b>0$, and shape parameter $p>0$. 
The probability density function (pdf) is $$f(x|\mu,b,p) = \frac{p}{2b\Gamma(p^{-1})}\exp\left\{-\left(\frac{|x-\mu|}{b}\right)^p\right\}.$$ The mean and variance of $x$ are $\mu$ and $b^2\Gamma(3/p)/\Gamma(1/p)$, respectively ($\Gamma(t)=\int_0^{\infty} x^{t-1}e^{-x}dx$ is the Gamma function). When $p=1$, the GG distribution is the Laplace distribution with mean $\mu$ and variance $2b^2$; when $p=2$, the GG distribution becomes the Gaussian distribution with mean $\mu$ and variance $b^2/2$. \begin{figure}[!hbt] \begin{center} \includegraphics[scale=0.6]{GG.png} \caption{Density of GG distributions}\label{fig:GG} \end{center}\end{figure} Figure \ref{fig:GG} presents some examples of the GG distributions at different $p$. All the distributions in the left plot have the same scale $b=\sqrt{2}$ and location $0$, and those in the right plot have the same variance $1$ and location $0$. When the scale parameter is the same (the left plot), the distributions become less spread out as $p$ increases, and the Laplace distribution ($p=1$) looks very different from the rest. When the variance is the same (the right plot), the Laplace distribution is the most likely to generate values that are close to the mean, followed by the Gaussian distribution ($p=2$). \subsection{GG mechanism of \texorpdfstring{$\epsilon$}{}-DP}\label{sec:GGMDP} We first examine the GG mechanism of $\epsilon$-DP with the domain for $s_k^*$ defined on $(-\infty,\infty)$ for $k=1,\ldots,r$. $\mathbf{s}$ needs to be bounded to calculate the $l_p$ GS, but the bounding requirement does not necessarily go into formulating the GG distribution for the GG mechanism in the first place. If bounding for $\mathbf{s}^*$ is necessary, it can be incorporated in a post-hoc manner after being generated from the GG mechanism. A well-known example is the Laplace mechanism. 
It employs a Laplace distribution defined on $(-\infty,\infty)$, though its scale parameter $b=\Delta_1/\epsilon$ requires $\mathbf{s}$ to be bounded for $\Delta_1$ to be calculated. Eqn (\ref{eqn:GGD}) presents the GG distribution from which sanitized $\mathbf{s}^*$ would be generated to satisfy $\epsilon$-DP, assuming $b$ exists. \begin{align} f(\mathbf{s}^*)&\propto e^{-\left(\|\mathbf{s}^*-\mathbf{s}\|_p/b\right)^p}\propto\textstyle\prod_{k=1}^r \exp\{-(|s^*_k-s_k|/b)^p\}\notag\\ &=\textstyle\prod_{k=1}^r \frac{p}{2b\Gamma(p^{-1})}\exp\{-(|s^*_k-s_k|/b)^p\} = \textstyle\prod_{k=1}^r \mbox{GG}(s_k,b,p) \label{eqn:GGD} \end{align} \begin{claim}\label{cla:lowerb} There does not exist a lower bound on $b$ for the GG distribution in Eqn (\ref{eqn:GGD}) when $p\ne1$ that generates $\mathbf{s}^*$ with $\epsilon$-DP. When $p=1$, the lower bound on $b$ that leads to $\epsilon$-DP is $\epsilon^{-1}\Delta_1$. \end{claim} Appendix \ref{app:GGM} lists the detailed steps that lead to Claim \ref{cla:lowerb}. In brief, to achieve $\epsilon$-DP, we need $b^{-p} \left(\textstyle\sum_{k=1}^r\sum_{j=1}^{p-1}\!(_j^p)|s^*_k-s_k|^{p-j}\Delta_{1,k}^j+\Delta_{p}^p\right)\le\epsilon$ (Eqn \ref{eqn:fourth}). However, this inequality depends on the random GG noise $e_k=s^*_k-s_k$ for $k=1,\ldots,r$, the support of which is $(-\infty,\infty)^r$. In other words, there does not exist a noise-free solution on $b$, unless $p=1$, in which case the inequality no longer involves the error terms and the GG mechanism reduces to the familiar Laplace mechanism of $\epsilon$-DP. We propose two approaches to fix the problem and achieve DP through the GG mechanism. The first approach leverages the bounding requirement for $\mathbf{s}$ and builds the requirement into the GG distribution in the first place to generate $\mathbf{s}^*$ with $\epsilon$-DP, assuming that $\mathbf{s}^*$ and $\mathbf{s}$ share the same bounded domain (Section \ref{sec:boundedGGM}). 
The second approach still uses the GG distribution in Eqn (\ref{eqn:GGD}) to sanitize $\mathbf{s}$, only satisfying $(\epsilon,\delta)$-pDP instead of the pure $\epsilon$-DP (Section \ref{sec:GGMpDP}). The sanitized $\mathbf{s}^*$ can be bounded in a post-hoc manner, as needed. \subsection{truncated GG mechanism and boundary inflated truncated GG mechanism of \texorpdfstring{$\epsilon$}{}-DP}\label{sec:boundedGGM} \begin{defn} \label{def:tGGM} Denote the bounds on query result $\mathbf{s}$ by $[c_{k0},c_{k1}]_{k=1,\ldots,r}$. For integer $p\ge1$, the truncated GG mechanism of order $p$ generates $\mathbf{s}^*\!\!\in\!\![c_{k0},c_{k1}]_{k=1,\ldots,r}$ with $\epsilon$-DP by drawing from the truncated GG distribution \begin{align} &f(\mathbf{s}^*|c_{k0}\!\le\!s^*_k\! \le\! c_{k1},\forall\; k=1,\ldots,r)=\prod_{k=1}^r\frac{p\exp\{-(|s^*_k-s_k|/b)^p\}}{2b\Gamma(p^{-1}) A(s_k, b, p)} \mbox{ with scale parameter }\label{eqn:tGGD}\\ &\qquad\qquad b \ge \left(2\epsilon^{-1}\!\!\left(\!\sum_{k=1}^r\sum_{j=1}^{p-1}(_j^p)|c_{k1}-c_{k0}|^{p-j}\Delta_{1,k}^j+\Delta_{p}^p\right)\!\right)^{1/p}\!\!\!\!\!\!\!, \label{eqn:b1} \end{align} where $A(s_k, b,p)\!=\!\Pr(c_{k0}\!\le\!s^*_k\!\le\! c_{k1}; s_k, b,p)\!=\!(\Gamma(p^{-1}))^{-1}(\gamma[p^{-1},((c_{k1}-s_k)/b)^p] +\gamma[p^{-1},((s_k-c_{k0})/b)^p])$ ($\gamma$ is the lower incomplete gamma function), $\Delta_{1,k}$ is the $l_1$ GS of $s_k$, and $\Delta_{p}$ is the $l_p$ GS of $\mathbf{s}$. \end{defn} The proof of $\epsilon$-DP of the truncated GG mechanism is given in Appendix \ref{app:tGGM}. The truncated GG mechanism perturbs each element in $\mathbf{s}$ independently; thus Eqn (\ref{eqn:tGGD}) involves the product of $r$ independent density functions. Though the closed interval $[c_{k0},c_{k1}]$ is used to denote the bounds on $s_k$, Definition \ref{def:tGGM} remains the same regardless of whether the interval is closed, open, or half-closed since the GG distribution is defined on a continuous domain. 
If $s_k$ is discrete in nature, such as counts, post-hoc rounding on the perturbed $s_k^*$ can be applied. The lower bound on $b$ in Eqn (\ref{eqn:b1}) depends on $\Delta_{p}$. We may apply Lemma \ref{lem:GSp1} and set $\Delta_{p}^p$ at its upper bound $\sum_{k=1}^r\Delta^p_{1,k}$ to obtain a less tight bound on $b$. \begin{align}\label{eqn:b2} \!\!\!\!\!b\! \ge\!\left(2\epsilon^{-1}\textstyle\!\left(\!\sum_{k=1}^r\sum_{j=1}^p(_j^p)|c_{k1}-c_{k0}|^{p-j}\Delta_{1,k}^j\right)\!\right)^{1/p}\!\!\!. \end{align} \begin{defn} \label{def:bitGGM} Denote the bounds on query result $s_k$ by $[c_{k0},c_{k1}]$ for $k=1,\ldots,r$. For integer $p\ge1$, the $p\mbox{\textsuperscript{th}}$ order boundary inflated truncated (BIT) GG mechanism sanitizes $\mathbf{s}$ with $\epsilon$-DP by drawing perturbed $\mathbf{s}^*$ from the following piecewise distribution \begin{align}\label{eqn:thGGD} f(\mathbf{s}^*|c_{k0}\!\le\!s^*_k\! \le\! c_{k1},\forall\; k=1,\!\ldots,\!r)\!=\! \textstyle\prod_{k=1}^r\!\left\{\!p_k^{\mathrm{I}(s_k^*=c_{k0})}q_k^{\mathrm{I}(s_k^*=c_{k1})} \!\left(\frac{p\exp\{-(|s^*_k-s_k|/b)^p\}}{2b\Gamma(p^{-1})}\right)^{\!\!\mathrm{I}(c_{k0}<s_k^*<c_{k1})}\right\}\!,\!\! \end{align} where $p_k\!=\!\Pr(s_k^*\!<\! c_{k0}; s_k,p,b)\!=\!\frac{1}{2}\!-\!\gamma(p^{-1}, ((s_k\!-\!c_{k0})/b)^p)(2\Gamma(p^{-1}))^{-1}$ and $q_k=\Pr(s_k^*> c_{k1}; s_k,p,b) =\frac{1}{2}-\gamma(p^{-1}, ((c_{k1}-s_k)/b)^p)(2\Gamma(p^{-1}))^{-1}$, $\gamma$ is the lower incomplete gamma function, and $\Gamma$ is the gamma function; $\mathrm{I}()$ is the indicator function that equals 1 if the argument in the parentheses is true, and 0 otherwise. \end{defn} In brief, the BIT GG distribution replaces out-of-bound values with the boundary values and keeps the within-bound values as they are, leading to a piecewise distribution. This is in contrast to the truncated GG distribution, which throws away out-of-bound values. 
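A minimal sketch of the truncated GG mechanism of Definition \ref{def:tGGM}: the scale is set at the looser lower bound of Eqn (\ref{eqn:b2}) (i.e., with $\Delta_p^p$ replaced by its Lemma \ref{lem:GSp1} upper bound $\sum_k\Delta_{1,k}^p$), the GG draw uses the gamma-transform representation $|X-\mu|/b = G^{1/p}$ with $G\sim$ Gamma$(1/p,1)$, and the truncation is implemented by simple rejection sampling; all numeric inputs are illustrative.

```python
import math, random

def gg_draw(mu, b, p, rng):
    # One GG(mu, b, p) draw: |X - mu|/b = G^(1/p) with G ~ Gamma(1/p, 1)
    g = rng.gammavariate(1.0 / p, 1.0)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return mu + b * sign * g ** (1.0 / p)

def b_lower_bound(c0, c1, delta1, p, eps):
    # Eqn (b2): b >= (2/eps * sum_k sum_{j=1..p} C(p,j) w_k^{p-j} Delta_{1,k}^j)^{1/p}
    tot = 0.0
    for k, d in enumerate(delta1):
        w = abs(c1[k] - c0[k])
        tot += sum(math.comb(p, j) * w ** (p - j) * d ** j for j in range(1, p + 1))
    return (2.0 * tot / eps) ** (1.0 / p)

def truncated_gg_mechanism(s, c0, c1, delta1, p, eps, rng=None):
    """Sanitize s elementwise with GG(s_k, b, p) truncated to [c0_k, c1_k]."""
    rng = rng or random.Random()
    b = b_lower_bound(c0, c1, delta1, p, eps)
    out = []
    for k, sk in enumerate(s):
        x = gg_draw(sk, b, p, rng)
        while not (c0[k] <= x <= c1[k]):   # rejection step implements truncation
            x = gg_draw(sk, b, p, rng)
        out.append(x)
    return out
```

The rejection loop draws from the untruncated GG and keeps only in-bound values, which samples exactly from the truncated density in Eqn (\ref{eqn:tGGD}) without evaluating the incomplete gamma normalizer $A(s_k,b,p)$.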
The challenge with perturbing $\mathbf{s}$ directly via Eqn (\ref{eqn:thGGD}) lies in solving for a lower bound on $b$ that satisfies $\epsilon$-DP from \begin{equation}\label{eqn:bit.ineq1} \log\left|\frac{f(\mathbf{s}^*|c_{k0}\!\le\!s^*_k\! \le\! c_{k1},\forall\; k=1,\ldots,r)}{f(\mathbf{s}^{'*}|c_{k0}\!\le\!s^*_k\! \le\! c_{k1},\forall\; k=1,\ldots,r)}\right|\le\epsilon \end{equation} where $\mathbf{s}^*=\{s^*_k\}$ and $\mathbf{s}^{'*}=\{s^{'*}_k\}$ are the sanitized results from data $\mathbf{x}$ and $\mathbf{x}'$ with $d(\mathbf{x},\mathbf{x}')=1$, respectively. The lower bound given in Eqns (\ref{eqn:b1}) and (\ref{eqn:b2}) can be used when the output subset $Q$ is a subset of $(c_{10}, c_{11})\times\cdots\times(c_{r0}, c_{r1})$ (open intervals). However, when $Q$ is $\{s_k=c_{k0}\;\forall\; k=1,\ldots,r\}$ or $\{s_k=c_{k1}\;\forall\; k=1,\ldots,r\}$, respectively, there are no analytical solutions on $b$ in either Eqn (\ref{eqn:bit.ineq2}) or (\ref{eqn:bit.ineq3}) \begin{align} \!\!\!\!\!&\log\!\left|\textstyle\prod_{k=1}^r\!\frac{1/2\!-\!\gamma(p^{-1}, ((s_k-c_{k0})/b)^p)(2\Gamma(p^{-1}))^{-1}}{1/2-\gamma(p^{-1}, ((s'_k-c_{k0})/b)^p)(2\Gamma(p^{-1}))^{-1}}\!\right|\!\le\!\epsilon\label{eqn:bit.ineq2}\\ \!\!\!\!\!&\log\!\left|\textstyle\prod_{k=1}^r\!\frac{1/2\!-\!\gamma(p^{-1}, ((c_{k1}-s_k)/b)^p)(2\Gamma(p^{-1}))^{-1}}{1/2-\gamma(p^{-1}, ((c_{k1}-s'_k)/b)^p)(2\Gamma(p^{-1}))^{-1}}\!\right|\!\le\!\epsilon.\label{eqn:bit.ineq3} \end{align} The most challenging situation is when $Q$ is a mixture set of $(c_{k0}, c_{k1})$, $c_{k0}$, and $c_{k1}$ for different $k=1,\ldots,r$. In summary, the BIT GG mechanism is not very appealing from a practical standpoint. \subsection{GG mechanism of \texorpdfstring{$(\epsilon, \delta)$}{}-pDP}\label{sec:GGMpDP} The second approach to obtain a lower bound on the scale parameter $b$ for the GG distribution in Eqn (\ref{eqn:GGD}) when $p\ge 2$ is to employ a soft version of DP. 
Corollary \ref{cor:GGMpDP} presents a solution on $b$ that satisfies $(\epsilon,\delta)$-pDP. \begin{cor} \label{cor:GGMpDP} If the scale parameter $b$ in the GG distribution in Eqn (\ref{eqn:GGD}) satisfies \begin{align} \!\!&\Pr\!\left(\!\textstyle\sum_{k=1}^r\!\sum_{j=1}^{p-1}(_j^p)|s^*_k\!-\!s_k|^{p-j}\Delta_{1,k}^j\! >\!b^p\epsilon\!-\!\Delta_{p}^p\!\right)\!<\! \delta,\label{eqn:GGMpDP} \end{align} then the GG mechanism satisfies $(\epsilon,\delta)$-pDP when $p\ge2$. \end{cor} \noindent The proof is straightforward. Specifically, rather than setting the left side of Eqn (\ref{eqn:fourth}) $\le\epsilon$ (i.e., with probability 1), we attach a probability to achieving the inequality, that is, Pr(Eqn (\ref{eqn:fourth})$<\epsilon)>1-\delta$, leading to Eqn (\ref{eqn:GGMpDP}). The $(\epsilon,\delta)$-pDP does not apply to the Laplace mechanism ($p=1$), at least in the framework laid out in Corollary \ref{cor:GGMpDP}. When $p\!=\!1$, Eqn (\ref{eqn:first}) becomes $b^{-1}\sum_{k=1}^r\!\big||e_k|\!-\!|e_k+d_k|\big|\!\le\! b^{-1}\sum_{k=1}^r|d_k|\!\le\! b^{-1}\Delta_{1}$, which does not involve the random variable $\mathbf{s}^*$; in other words, as long as $b^{-1}\Delta_{1}\le\epsilon$, the pure $\epsilon$-DP is guaranteed. Corollary \ref{cor:GGMpDP} does not list a closed-form solution on $b$ as it is likely that only numerical solutions exist in most cases. Given that $s^*_k$ is independent across $k=1,\ldots,r$, $a_k= \textstyle\sum_{j=1}^{p-1}(_j^p)|s^*_k-s_k|^{p-j}\Delta_{1,k}^j$, a function of $s^*_k$, is also independent across $k$. Therefore, the problem becomes searching for a lower bound on $b$ where the probability of a sum of $r$ independent variables ($a_1,\ldots,a_r$) exceeding $b^p\epsilon-\Delta_p^p$ is smaller than $\delta$. If there exists a closed-form distribution function for $\sum_{k=1}^r a_k$, an exact solution on $b$ can be obtained. 
When $p=2$, an analytical lower bound on $b$ can be obtained (see Section \ref{sec:gaussian}); when $p> 2$, we have only managed to obtain the distribution function for $(_j^p)|s^*_k-s_k|^{p-j}\Delta_{1,k}^j$, but not for $a_k$ or $\sum_{k=1}^r a_k$ at the current stage. A relatively simple case is when the elements of statistics $\mathbf{s}$ are calculated on disjoint subsets of the original data; thus removing one individual from the data only affects one element out of $r$, and $\Delta_1=\Delta_p= \Delta_{1,k'}$, leading to Corollary \ref{cor:disjoint}. \begin{cor} \label{cor:disjoint} When all $r$ elements in $\mathbf{s}$ are based on disjoint subsets of the data, the lower bound on $b$ satisfies $\Pr(\sum_{j=1}^p(_j^p)|s^*_{k'}\!-\!s_{k'}|^{p-j}\Delta_{1,k'}^j\!>\!b^p\epsilon)\!<\! \delta$, where $k'=\mbox{argmax}_k\Delta_{1,k}$. \end{cor} When the query is a histogram, $\Delta_1=\Delta_p= \Delta_{1,k'}=1$, and the lower bound on $b$ for $(\epsilon,\delta)$-pDP can be derived from $\Pr(\sum_{j=1}^p(_j^p)|e_{k'}|^{p-j}\!>\!b^p\epsilon)\!<\! \delta$. The proof of Corollary \ref{cor:disjoint} is trivial. With disjoint queries, only one element in $\mathbf{s}$ is affected by changing from $\mathbf{x}$ to $\mathbf{x}'$ while the other $r-1$ terms in Eqn (\ref{eqn:second}) in Appendix \ref{app:GGM} are 0 as $s_k(\mathbf{x})=s_k(\mathbf{x}')$, and Eqn (\ref{eqn:second}) $=b^{-p}\sum_{j=1}^p(_j^p)|e_{k'}|^{p-j}|d_{k'}|^j\le b^{-p}\sum_{j=1}^p(_j^p)|e_{k'}|^{p-j}\Delta_{1,k'}^j$. Numerical approaches can be applied to obtain a lower bound on $b$ when closed-form solutions are difficult to attain. Figure \ref{fig:lowerbound} depicts the lower bounds on $b$ at different $p$ and $(\epsilon,\delta)$ obtained via the Monte Carlo approach. We set $\Delta_{1,k}$ at $1,0.1,0.05$ for $k=1,2,3$, respectively, and applied Lemma \ref{lem:GSp1} to obtain an upper bound on $\Delta_{p}$ for a given $p$ value. 
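A sketch of such a Monte Carlo search: for a candidate $b$, the left-hand probability in Eqn (\ref{eqn:GGMpDP}) is estimated by simulating $e_k\sim$ GG$(0,b,p)$, and $b$ is increased on a multiplicative grid until the estimate drops below $\delta$; $\Delta_p^p$ is replaced by its Lemma \ref{lem:GSp1} upper bound, and the grid factor and simulation size are arbitrary choices.

```python
import math, random

def pdp_b_lower_bound(delta1, p, eps, delta, n=20000, seed=0):
    """Monte Carlo lower bound on b for (eps, delta)-pDP: smallest grid value
    of b with Pr( sum_k sum_{j<p} C(p,j)|e_k|^{p-j} Delta_{1,k}^j
                  > b^p * eps - Delta_p^p ) < delta."""
    rng = random.Random(seed)
    dp_p = sum(d ** p for d in delta1)          # Lemma upper bound on Delta_p^p

    def exceed_prob(b):
        bad = 0
        for _ in range(n):
            tot = 0.0
            for d in delta1:
                # |e_k| = b * G^(1/p) with G ~ Gamma(1/p, 1)
                e = b * rng.gammavariate(1.0 / p, 1.0) ** (1.0 / p)
                tot += sum(math.comb(p, j) * e ** (p - j) * d ** j
                           for j in range(1, p))
            if tot > b ** p * eps - dp_p:
                bad += 1
        return bad / n

    b = (dp_p / eps) ** (1.0 / p)               # below this the bound is vacuous
    while exceed_prob(b) >= delta:              # coarse multiplicative grid
        b *= 1.05
    return b
```

For a scalar query with $\Delta_1=1$, $p=2$, $\epsilon=1$, $\delta=0.1$, the search lands near the analytical Gaussian-mechanism bound of Section \ref{sec:gaussian} ($b\approx2.7$).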
As expected, the lower bound on $b$ increases with decreased $\epsilon$ (lower privacy budget) and decreased $\delta$ (reduced chance of failing the pure $\epsilon$-DP). The results also suggest $b$ increases with $p$ to maintain $(\epsilon,\delta)$-pDP in the examined scenarios. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.6]{b_lowerbound.pdf} \caption{Numerical lower bound on $b$ from Corollary \ref{cor:GGMpDP}}\label{fig:lowerbound} \end{center}\end{figure} $\mathbf{s}^*$ sampled from the GG mechanism of $(\epsilon,\delta)$-pDP in Eqn (\ref{eqn:GGD}), once $b$ is determined -- analytically or numerically -- ranges over $(-\infty,\infty)$. To bound $\mathbf{s}^*$, it is straightforward to apply a post-processing procedure such as the truncation or the boundary inflated truncation (BIT) procedure \cite{liu2016b}. The truncation procedure throws away the out-of-bounds values and only keeps those in bounds, while the BIT procedure sets the out-of-bounds values at the bounds. If the bounds are noninformative in the sense that the bounds are global and do not contain any data-specific information, then neither of the two post-hoc bounding procedures will leak the original information or compromise the established $(\epsilon,\delta)$-pDP. \subsection{Connection between GG mechanism and Exponential Mechanism}\label{sec:connection} The exponential mechanism was introduced by McSherry and Talwar \cite{mcsherry2007mechanism}. We paraphrase the original definition as follows, covering both discrete and continuous outcomes. Let $\mathcal{S}$ denote the set containing all possible output $\mathbf{s}^{\ast}$. The exponential mechanism releases $\mathbf{s}^{\ast}$ with probability \begin{equation}\label{eqn:exp} f(\mathbf{s}^{\ast})=\exp\left(u(\mathbf{s}^{\ast}|\mathbf{x})\frac{\epsilon}{2\Delta_u}\right)(A(\mathbf{x}))^{-1} \end{equation} \noindent to ensure $\epsilon$-DP. 
$A(\mathbf{x})$ is a normalizing constant so that $f(\mathbf{s}^{\ast})$ sums or integrates to 1, and equals $\sum_{\mathbf{s}^{\ast}\in\mathcal{S}} \exp\left(\!u(\mathbf{s}^{\ast}|\mathbf{x})\frac{\epsilon}{2\Delta_u}\!\right)$ or $\int_{\mathbf{s}^{\ast}\in\mathcal{S}} \exp\left(\!u(\mathbf{s}^{\ast}|\mathbf{x})\frac{\epsilon}{2\Delta_u}\!\right)\!d\mathbf{s}^{\ast}$, depending on whether $\mathcal{S}$ is a countable/discrete sample space or a continuous set, respectively. $u$ is the utility function and assigns a $``$utility$"$ score to each possible outcome $\mathbf{s}^*$ conditional on the original data $\mathbf{x}$, and $\Delta_u=\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1, \mathbf{s}^*\in\mathcal{S}}|u(\mathbf{s}^{\ast}|\mathbf{x})-u(\mathbf{s}^{\ast}|\mathbf{x}')|$ is the maximum change in the utility score across all possible output $\mathbf{s}^*$ and all possible data sets $\mathbf{x}$ and $\mathbf{x}'$ with $d(\mathbf{x},\mathbf{x}')=1$. From a practical perspective, the scores should properly reflect the $``$usefulness$"$ of $\mathbf{s}^*$. For example, $``$usefulness$"$ can be measured by the similarity between perturbed $\mathbf{s}^{\ast}$ and original $\mathbf{s}$ if $\mathbf{s}$ is numerical. The closer $\mathbf{s}^{\ast}$ is to the original $\mathbf{s}$, the larger $u(\mathbf{s}^{\ast}|\mathbf{x})$ is, and the higher the probability that $\mathbf{s}^{\ast}$ will be released. The Exponential mechanism can be conservative (see Appendix \ref{app:exp}), in the sense that the actual privacy cost is lower than the nominal privacy budget $\epsilon$, or more perturbation than necessary is injected to preserve $\epsilon$-DP. Despite the conservativeness, the Exponential mechanism is widely used in DP owing to its generality and flexibility, as long as the utility function $u$ is properly designed. 
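The release step in Eqn (\ref{eqn:exp}) is easy to sketch for a finite candidate set $\mathcal{S}$; the candidate set and utility function in the usage below are hypothetical placeholders.

```python
import math, random

def exponential_mechanism(candidates, utility, delta_u, eps, rng=None):
    """Release one candidate with probability proportional to
    exp(u(candidate | x) * eps / (2 * delta_u))."""
    rng = rng or random.Random()
    w = [math.exp(utility(c) * eps / (2.0 * delta_u)) for c in candidates]
    tot = sum(w)
    r, acc = rng.random() * tot, 0.0
    for c, wc in zip(candidates, w):    # inverse-CDF draw over the weights
        acc += wc
        if r <= acc:
            return c
    return candidates[-1]               # guard against floating-point rounding
```

With a utility that rewards closeness to the original result, e.g. `utility = lambda c: -abs(c - s)`, candidates nearer $s$ become exponentially more likely to be released as $\epsilon$ grows.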
When $u$ is defined as the negative $p\mbox{\textsuperscript{th}}$ power of the $p\mbox{\textsuperscript{th}}$-order Minkowski distance between $\mathbf{s}^{\ast}$ and $\mathbf{s}$, that is, $u(\mathbf{s}^{\ast}|\mathbf{s})\!=\!-\|\mathbf{s}^{\ast}-\mathbf{s}\|_p^p$, the Exponential mechanism generates perturbed $\mathbf{s}^*$ from the GG distribution \begin{align}\label{eqn:explp0} &\textstyle f(\mathbf{s}^{\ast}|\mathbf{s})\!=\!(A(\mathbf{s}))^{-1}\!\exp\!\left(\!\!-\|\mathbf{s}^{\ast}-\mathbf{s}\|_p^p\frac{\epsilon}{2\Delta_u}\!\right) =\textstyle(A(\mathbf{s}))^{-1}\!\prod_{k=1}^r\exp\!\left(\!-\frac{|s_k^{\ast}-s_k|^p}{2\Delta_u\epsilon^{-1}}\!\right)\!=\!\prod_{k=1}^r\mbox{GG}(s_k,b,p) \end{align} with $A(\mathbf{s})\!=\!\left(p^{-1}2b\Gamma(p^{-1})\right)^{\!r}$ and $b^p\!=\!2\Delta_u\epsilon^{-1}$. The scale parameter $b$ in Eqn (\ref{eqn:explp0}) is a function of the GS of the utility function $\Delta_u$ and the privacy budget $\epsilon$. For bounded data $s_k^*\in[c_{k0},c_{k1}]$ for $k=1,\ldots,r$, the Exponential mechanism based on the GG distribution is \begin{align}\label{eqn:explp1} &f(\mathbf{s}^{\ast}|\mathbf{s}^*\in[\mathbf{c}_0,\mathbf{c}_1])= (A(\mathbf{s}))^{-1}\textstyle\prod_{k=1}^r(B(s_k))^{-1}\exp\!\left(\!-\frac{|s_k^{\ast}-s_k|^p}{2\Delta_u\epsilon^{-1}}\!\right), \end{align} where $B(s_k)=\Pr(s_k^*\in[c_{k0},c_{k1}])$ is calculated from the pdf $\mbox{GG}(s_k,b,p)$. Compared to the truncated GG mechanism in Definition \ref{def:tGGM}, the only difference in the Exponential mechanism in Eqn (\ref{eqn:explp1}) is how the scale parameter $b$ is defined. In Definition \ref{def:tGGM}, $b$ depends on the GS of $\mathbf{s}$ ($\Delta_p$), while it is a function of the GS of the utility function $u$ ($\Delta_u$) in the Exponential mechanism. Specifically, $b^p\ge2\epsilon^{-1}\Delta_u$ in the Exponential mechanism, and the lower bound on $b$ is given in Eqn (\ref{eqn:b1}) in the GG mechanism. 
While both mechanisms lead to the satisfaction of $\epsilon$-DP, the one with a smaller $b$ is preferable at the same $\epsilon$. The magnitude of $b$ in each case depends on the bounds of $\mathbf{s}$ and the order $p$, in addition to $\Delta_u$ or $\Delta_{p}$. Though not a direct comparison on $b$, Lemma \ref{lem:deltau.s.relationship} explores the relationship between $\Delta_u$ and $\Delta_{p}$, in the hope of shedding light on the comparison of $b$ (the proof is in Appendix \ref{app:deltau.s.relationship}). \begin{lem}\label{lem:deltau.s.relationship} Let $[c_{k0},c_{k1}]$ denote the bounds on $s_k$ for $k=1,\ldots,r$. \begin{enumerate} \item[a)] When $u=-\|\mathbf{s}^{\ast}-\mathbf{s}\|_1$, $\Delta_u\le\Delta_{1}$. Both the GG mechanism and the GG-distribution based Exponential mechanism reduce to the truncated Laplace mechanism with the same $b$. \item[b)] When $u=-\|\mathbf{s}^{\ast}-\mathbf{s}\|_2^2$, $\Delta_u\le2\sum_{k=1}^r\Delta_{1,k}|c_{k1}-c_{k0}|$. \item[c)] When $u\!=-\|\mathbf{s}^{\ast}-\mathbf{s}\|_p^p$ for $p\ge3$, $\Delta_u\!\le\!\sum_{k=1}^r\!\sum_{j=1}^p\! (^p_j)\!\left(\mbox{max}\{|c_{k0}|, |c_{k1}|\}\right)^{p-j}\!\Delta_{1,k}^{(j)}$, where $\Delta^{(j)}_{1,k}=\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}|(s_k(\mathbf{x}))^j-(s_k(\mathbf{x}'))^j|$ is the $l_1$ GS of $(s_k)^j$. \end{enumerate} \end{lem} As a final note on the GG-distribution based Exponential mechanism, we did not use the negative Minkowski distance itself directly as the utility function due to a couple of potential practical difficulties with this approach. First, $\Delta_u$ can be difficult to obtain. Second, $f(\mathbf{s}^*)\!\propto\!\textstyle\exp\{-\left(\sum_{k=1}^r |s_k^*-s_k|^p\right)^{1/p}\epsilon(2\Delta_u)^{-1}\}$ does not appear to be associated with any known distribution (except when $p=1$), and additional effort would be required to study the properties of $f(\mathbf{s}^*)$ and to develop an efficient algorithm to draw samples from it. 
\section{Gaussian Mechanism}\label{sec:gaussian} A special case of the GG mechanism is the Gaussian mechanism when $p=2$ that draws $s_k^*$ independently from a Gaussian distribution with mean $s_k$ and variance $\sigma^2=b^2/2$ for $k=1,\ldots,r$. Applying Eqn (\ref{eqn:tGGD}) with $b$ defined in Eqns (\ref{eqn:b1}) and (\ref{eqn:b2}), we can obtain the truncated Gaussian mechanism of $\epsilon$-DP for bounded $\mathbf{s}\in[c_{10},c_{11}]\times\cdots\times[c_{r0},c_{r1}]$ \begin{align}\label{eqn:GaussianDP} f(\mathbf{s}^*|\mathbf{s}) &=\textstyle \prod_{k=1}^r\left\{\left(\Phi(c_{k1};\mu, \sigma^2)-\Phi(c_{k0};\mu, \sigma^2)\right)^{-1}\phi(s_k^*;\mu=s_k, \sigma^2=b^2/2)\right\},\mbox{ where}\\ b^2 &\ge 2\epsilon^{-1}\textstyle\left(2\sum_{k=1}^r|c_{k1}-c_{k0}|\Delta_{1,k}+\Delta_{2}^2\right)\ge 2\epsilon^{-1}\textstyle\sum_{k=1}^r\left(2|c_{k1}-c_{k0}|\Delta_{1,k}+\Delta_{1,k}^2\right),\notag \end{align} \noindent where $\phi$ and $\Phi$ are the pdf and the CDF of the Gaussian distribution, respectively. An analytical solution on the lower bound of $b$ for the Gaussian mechanism of $(\epsilon,\delta)$-pDP is provided in Lemma \ref{lem:lowerbound2} (the proof is provided in Appendix \ref{app:lowerbound2}). \begin{lem}\label{lem:lowerbound2} The lower bound on the scale parameter $b$ from the Gaussian mechanism of $(\epsilon,\delta)$-pDP is $b\ge2^{-1/2}\epsilon^{-1}\!\Delta_2\!\left(\!\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)\!\right)$. \end{lem} \noindent Given the relationship between $b$ and the standard deviation of the Gaussian distribution $\sigma=b/\sqrt{2}$, the lower bound can also be expressed in $\sigma$, \begin{eqnarray}\label{eqn:lowerbound2} \sigma \ge (2\epsilon)^{-1}\Delta_2 \left(\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)\right). 
\end{eqnarray} The pDP lower bound given in Eqn (\ref{eqn:lowerbound2}) is different from the lower bound \begin{equation}\label{eqn:privacybook} \!\!\!\!\sigma\!>\!\epsilon^{-1}\Delta_2c, \mbox{ with $\epsilon\!\in\!(0,1)$ and $c^2\! >\! 2 \ln(1.25/\delta)$}. \end{equation} in Dwork and Roth \cite{privacybook} for $(\epsilon,\delta)$-aDP (Eqn (\ref{eqn:adp})). The pDP bound in Eqn (\ref{eqn:lowerbound2}) is tighter than the aDP bound in Eqn (\ref{eqn:privacybook}) for the same set of $(\epsilon,\delta)$ (note the interpretation of $\delta$ in pDP and aDP is different, but the DP guarantee is roughly the same when $\delta$ is small). In addition, the pDP bound does not constrain $\epsilon$ to be $<1$ as required in the aDP bound. Figure \ref{fig:lowerbound2} compares the two lower bounds at several $\epsilon\in (0,1)$ and $\delta\in(0,0.5)$. As observed, the ratio of the pDP vs. the aDP lower bound is always $< 1$ for the same $(\epsilon,\delta)$. The smaller $\epsilon$ is, or the larger $\delta$ is, the smaller the ratio is and the larger the difference is between the two bounds. \begin{figure}[htb]\begin{center} \includegraphics[scale=0.85]{eqn9_10.pdf} \caption{Comparison of pDP lower bound (Eqn \ref{eqn:lowerbound2}) vs. aDP bound (Eqn \ref{eqn:privacybook}) on $\sigma$ in the Gaussian mechanism for $\epsilon<1$ (the aDP bound requires $\epsilon<1$)}\label{fig:lowerbound2} \end{center}\end{figure} Dwork and Roth \cite{privacybook} list several advantages of Gaussian noise, such as: Gaussian noise is a $``$familiar$"$ type of noise as many noise sources in real life can be well approximated by Gaussian distributions; the sum of Gaussian variables is still Gaussian; and finally, in the case of multiple queries or when $\delta$ is small, the pure-DP guarantee in the Laplace mechanism and the pDP guarantee in the Gaussian mechanism see minimal difference. 
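Both bounds are simple to evaluate with the standard normal quantile function; a sketch using only the Python standard library (`statistics.NormalDist`), with the aDP constant set at its threshold $c=\sqrt{2\ln(1.25/\delta)}$:

```python
import math
from statistics import NormalDist

def sigma_pdp(delta2, eps, delta):
    # pDP lower bound on sigma, Eqn (lowerbound2)
    q = NormalDist().inv_cdf(delta / 2.0)      # Phi^{-1}(delta/2), negative
    return delta2 / (2.0 * eps) * (math.sqrt(q * q + 2.0 * eps) - q)

def sigma_adp(delta2, eps, delta):
    # aDP bound of Dwork and Roth, Eqn (privacybook); requires eps < 1
    return delta2 / eps * math.sqrt(2.0 * math.log(1.25 / delta))
```

For example, at $\Delta_2=1$, $\epsilon=0.5$, $\delta=0.01$ the pDP bound is about 5.34 versus about 6.22 for the aDP bound, consistent with Figure \ref{fig:lowerbound2}.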
A theoretical disadvantage of Gaussian noise is that it does not guarantee DP in some cases (e.g., Report Noisy Max) \cite{privacybook}. We investigate the accuracy of $\mathbf{s}^*$ by examining the tail probability and the dispersion of the noises injected via the $\epsilon$-DP Laplace mechanism and the $(\epsilon,\delta)$-pDP Gaussian mechanism. Denote the noise drawn from the Laplace distribution by $e_1$ and that from the Gaussian distribution by $e_2$. The location parameters of both are $\mu=0$; the tail probability is $p_1=\Pr(|e_1|>|t|)=\exp(-|t|\epsilon/\Delta_1)$ in the Laplace distribution and $p_2=\Pr(|e_2|>|t|)=2\Phi(-|t|/\sigma)$ in the Gaussian distribution, where $\sigma$ is given in Eqn (\ref{eqn:lowerbound2}). Since the CDF $\Phi()$ does not have a closed-form expression, we examine several numerical examples to compare $p_1$ and $p_2$ (Figure \ref{fig:tail}). We set $\epsilon$ to be the same (0.1, 1, 2, respectively) between the two mechanisms and examine $\delta= (1\%, 5\%, 10\%, 20\%)$ for the $(\epsilon,\delta)$-pDP Gaussian mechanism. If the ratio $p_1:p_2$ is $<1$, it implies that the Laplace mechanism is less likely to generate more extreme $\mathbf{s}^*$ compared to the Gaussian mechanism at the same privacy specification of $\epsilon$. We focus on the meaningful cases where noise $|t|$ at least has a non-ignorable chance to occur in either mechanism. We used the cutoff $10^{-4}$; that is, either $p_1>10^{-4}$ or $p_2>10^{-4}$ (other cutoffs can be used, depending on how ``unlikely'' is defined). It is interesting to observe that after the initial value of 1 at $|t|=0$, the ratio decreases until it hits the bottom and then bounces back, with some cases eventually exceeding 1 at some value of $|t|$, depending on the privacy parameter specification. The smaller $\epsilon$ or $\delta$ is, the longer it takes for the bounce-back to occur. 
The observation suggests that the Laplace mechanism is in some cases more likely to generate sanitized results $\mathbf{s}^*$ that are far away from $\mathbf{s}$. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.75]{tailratio1.pdf} \caption{Ratio of the tail probabilities $p_1:p_2$ (the gray curves represent the unlikely cases where both $p_1$ and $p_2$ are $<10^{-4}$)} \label{fig:tail} \end{center}\end{figure} We also compare the privacy parameter $\epsilon$ between the two mechanisms when both have the same tail probability. Figure \ref{fig:epsGaussian} shows the calculated $\epsilon_2$ value associated with the Gaussian mechanism of $(\epsilon_2,\delta)$-pDP for a given $\delta$ that yields $\Pr(e_2<|t|)=\Pr(e_1<|t|)$ with the Laplace mechanism of $\epsilon_1$-DP. If the ratio $\epsilon_2:\epsilon_1<1$ at some $|t|$ and a small and somewhat ignorable $\delta$, it implies the same tail probability can be achieved at a lower privacy cost with the Gaussian mechanism compared to the Laplace mechanism. Figure \ref{fig:epsGaussian} suggests that at the same $|t|$, the more relaxation of the pure $\epsilon$-DP is allowed (i.e., the larger $\delta$ is), the smaller $\epsilon_2$ is (relative to the baseline $\epsilon_1$), which is expected as $\epsilon$ and $\delta$ together determine the noise released in the Gaussian mechanism. \begin{figure}[htb] \begin{center} \includegraphics[scale=0.8]{epsGaussian.pdf} \caption{Relative privacy cost $\epsilon_2:\epsilon_1$ (the gray curves represent the unlikely cases where both $p_1$ and $p_2$ are $<10^{-4}$)} \label{fig:epsGaussian} \end{center}\end{figure} Lemma \ref{lem:L1L2} presents the precision comparison of $\mathbf{s}^*$ between the Laplace mechanism of $\epsilon$-DP and the Gaussian mechanism of $(\epsilon,\delta)$-pDP. With the same location parameter in the Laplace and Gaussian distributions, a larger precision is equivalent to a smaller mean squared error (MSE). 
\begin{lem}\label{lem:L1L2} Between the Gaussian mechanism of $(\epsilon,\delta)$-pDP and the Laplace mechanism of $\epsilon$-DP for sanitizing a statistic $s$, when $\delta\!<\!2\Phi(-\sqrt{2})\!\approx\!0.157$, the variance of the Gaussian distribution in the Gaussian mechanism is always greater than the variance of the Laplace distribution associated with the Laplace mechanism. \end{lem} \noindent The proof is provided in Appendix \ref{app:L1L2}. Lemma \ref{lem:L1L2} suggests that there is more dispersion in the perturbed $s^*$ released by the Gaussian mechanism of $(\epsilon,\delta< 0.157)$-pDP than by the Laplace mechanism of $\epsilon$-DP. In other words, if there are multiple sets of $s^*$ released via the Gaussian and the Laplace mechanisms respectively, then the former sets would have a wider spread than the latter. Since $(\epsilon,\delta)$-pDP provides less privacy protection than $\epsilon$-DP, together with the larger MSE, it can be argued that the Laplace mechanism is superior to the Gaussian mechanism (which is also reflected in the 3 experiments in Section \ref{sec:experiments}). It should be noted that $\delta<0.157$ in Lemma \ref{lem:L1L2} is a sufficient but not necessary condition. In other words, the Gaussian mechanism may still be more dispersed than the Laplace mechanism when $\delta\ge0.157$. Furthermore, since $\delta$ needs to be small to provide sufficient privacy protection in the setting of $(\epsilon,\delta)$-pDP, it is very unlikely to have $\delta\ge0.157$ in practical applications. 
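Lemma \ref{lem:L1L2} can be checked numerically. In the variance ratio of Eqn (\ref{eqn:varratio}), the sensitivity cancels and the ratio reduces to $X^2/8$ with $X=\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)$; the sketch below (function name ours) verifies that the ratio exceeds 1 on a grid with $\delta<0.157$ and can drop below 1 for large $\delta$:

```python
# Sketch: ratio of (Gaussian variance at the (eps,delta)-pDP lower bound) to
# (Laplace variance 2*(Delta/eps)^2). The sensitivity Delta cancels; the ratio
# equals X^2/8 with X = sqrt(z^2 + 2*eps) - z and z = Phi^{-1}(delta/2).
import math
from statistics import NormalDist

def variance_ratio(eps, delta):
    z = NormalDist().inv_cdf(delta / 2)
    x = math.sqrt(z * z + 2 * eps) - z
    return x * x / 8

# Sufficient condition of the lemma: delta < 2*Phi(-sqrt(2)) ~ 0.157
for delta in (0.01, 0.05, 0.10, 0.15):
    for eps in (0.1, 0.5, 1.0, 2.0, 10.0):
        assert variance_ratio(eps, delta) > 1.0
# The condition is not necessary: for large delta the ratio can fall below 1.
print(variance_ratio(0.01, 0.9) < 1.0)
```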
Also noted is that the setting explored in Lemma \ref{lem:L1L2}, where the focus is on examining the precision (dispersion) of a single perturbed statistic given the specified privacy parameters and the original statistics when the sample size of a data set is public, is different from the recent work on the bounds of sample complexity (required sample size) to reach a certain level of statistical \emph{accuracy} in perturbed results with $\epsilon$-DP or $(\epsilon,\delta)$-aDP \cite{harvard} (more discussion is provided in Section \ref{sec:discussion} on this point). \section{Experiments}\label{sec:experiments} We run three experiments on the mildew data set, the Czech data set, and the Census Income data set (a.k.a. the adult data). The mildew data contains information on parental alleles at 6 loci on the chromosome for 70 strands of barley powder mildew \cite{charest}. Each locus has two levels, yielding a very sparse 6-way cross-tabulation (only 22 cells out of the 64 are non-empty, many with low frequencies). The Czech data were collected on 6 potential risk factors for coronary thrombosis for 1841 workers in a Czechoslovakian car factory \cite{charest}. Each risk factor has 2 levels (Y or N). The cross-tabulation is also 6-way with 64 cells, the same as the mildew data, but the table is not as sparse given the large $n$ (only one empty cell). The adult data was extracted from the 1994 US Census database to yield a set of reasonably clean records that satisfy a set of conditions \cite{adult}. The data set is often used to test classifiers by predicting whether a person makes over 50K a year. We used only the completers in the adult data (with no missing values on the attributes) and then split them into 2/3 training (20009 subjects) and 1/3 testing (10005 subjects). \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.55]{mildew-count-ggm3.pdf} \caption{sanitized vs. 
original cell counts in the mildew data}\label{fig:mildew.count} \end{center}\end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.6]{mildew-L1KL-ggm3.pdf} \caption{$l_1$ distance and KL divergence between sanitized and original counts in the mildew data}\label{fig:mildew.L1KL} \end{center}\end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.55]{czech-count.pdf} \caption{sanitized vs. original cell counts in the Czech data}\label{fig:czech.count} \end{center}\end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.6]{czech-L1KL.pdf} \caption{$l_1$ distance and KL divergence between sanitized and original counts in the Czech data}\label{fig:czech.L1KL} \end{center}\end{figure} In each experiment, we ran the Laplace mechanism of $\epsilon$-DP, the Gaussian mechanism of $(\epsilon,\delta)$-pDP presented in Section \ref{sec:gaussian}, and the Gaussian mechanism of $(\epsilon,\delta)$-aDP \cite{privacybook} to sanitize count data. We examined $\epsilon=0.5, 1, 2$ and $\delta=0.01, 0.05, 0.1, 0.25$. To examine the variation of noises, we ran 500 repeats and computed the means and standard deviations of the $l_1$ distances between the sanitized and the original counts and of the Kullback-Leibler (KL) divergence between the empirical distributions of the synthetic data and the original data over the 500 repeats. In addition, we tested the GG mechanism of order 3 ($p=3$) in the mildew data, and compared the classification accuracy of the income outcome in the testing data set in the adult experiment based on the support vector machines (SVMs) trained with the original training data and the sanitized training data, respectively. The KL divergence was calculated using the \texttt{KL.Dirichlet} command in the R package \texttt{entropy}, which computes a Bayesian estimate of the KL divergence. The SVMs were trained using the \texttt{svm} command in the R package \texttt{e1071}. 
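The two utility metrics used throughout the experiments can be sketched as follows. Note that the paper's KL estimate is the Bayesian \texttt{KL.Dirichlet} estimator from R's \texttt{entropy} package; the plug-in estimate below, with a small smoothing constant to handle empty bins, is a simplified stand-in of our own:

```python
# Sketch of the utility metrics: l1 distance between sanitized and original
# counts, and a plug-in KL divergence between the empirical distributions
# (smoothing constant 0.5 added to every cell to avoid log(0) on empty bins).
import math

def l1_distance(orig, sanitized):
    return sum(abs(o - s) for o, s in zip(orig, sanitized))

def kl_divergence(orig, sanitized, smooth=0.5):
    p_tot = sum(orig) + smooth * len(orig)
    q_tot = sum(sanitized) + smooth * len(sanitized)
    return sum((o + smooth) / p_tot *
               math.log(((o + smooth) / p_tot) / ((s + smooth) / q_tot))
               for o, s in zip(orig, sanitized))

orig = [12, 0, 7, 3]
sanitized = [10.5, 1.2, 6.8, 3.5]
print(l1_distance(orig, sanitized), kl_divergence(orig, sanitized))
```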
In all experiments, $\Delta_p=1$ for all $p$ since the released query is a histogram and the bin counts are based on disjoint subsets of data. The scale parameters of the Laplace mechanism and the Gaussian mechanisms were obtained analytically ($\Delta_1\epsilon^{-1}$, Eqns (\ref{eqn:lowerbound2}) and (\ref{eqn:privacybook}), respectively); the grid search and the MC approach were applied to obtain the lower bound $b$ for GGM-3 via Corollary \ref{cor:disjoint}. In the mildew and Czech experiments, we sanitized all bins in the histograms, including the empty bins, assuming all combinations of the 6 attributes in each case are practically meaningful (in other words, the empty cells are sample zeros rather than population zeros). In the adult data, there are 14 attributes and $\sim\!1.944\times10^{13}$ bins in the 14-attribute histogram, a non-ignorable portion of which do not make any practical sense (e.g., a 90-year-old working $>80$ hours per week). For simplicity, we only sanitized the 17,985 nonempty cells in the training data. After the sanitization, we set the out-of-bounds synthetic counts $<0$ at 0 and those $>n$ at $n$, respectively, and normalized the sanitized counts to sum up to the original sample size $n$ in all 3 experiments, assuming $n$ itself is public or does not carry privacy information. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.55]{adult-count-training.png} \caption{sanitized vs. 
original cell counts in the adult data}\label{fig:adult.count} \end{center}\end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.6]{adult-L1KL-training.pdf} \caption{$l_1$ distance and KL divergence between sanitized and original counts in the adult data} \label{fig:adult.L1KL} \end{center}\end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.6]{adult-acc-testing.pdf} \caption{Prediction accuracy in testing data via SVMs trained on sanitized and original data in the adult data}\label{fig:adult.acc} \end{center}\end{figure} The results are given in Figures \ref{fig:mildew.count} to \ref{fig:adult.acc}. In Figures \ref{fig:mildew.count}, \ref{fig:czech.count} and \ref{fig:adult.count}, the closer the points are to the identity line, the more similar are the original and sanitized counts. The Laplace sanitizer is the obvious winner in all 3 cases, producing the sanitized counts closest to the original with the smallest $l_1$ error and KL divergence, followed by the Gaussian mechanism of $(\epsilon,\delta)$-pDP, and GGM-3 of $(\epsilon,\delta)$-pDP in the mildew data; the Gaussian mechanism of $(\epsilon,\delta)$-aDP is the worst. In the mildew experiment, the performance of the Gaussian mechanism of $(\epsilon,\delta)$-pDP is similar when $\epsilon=2$ or $\delta\ge0.1$. The $l_1$ error and the KL divergence decrease more or less linearly as $\epsilon$ increases from 0.5 to 1 to 2, while $\delta$ has a less profound impact on the $l_1$ error and the KL divergence. In the Czech experiment, the sanitized counts approach the original counts more quickly than in the mildew case with increased $\epsilon$ and $\delta$, but there is significantly more variability for small $\epsilon$ (0.1); and the $l_1$ error and the KL divergence no longer decrease linearly, but drop drastically from $\epsilon=0.5$ to 1 and much less from $\epsilon=1$ to 2. 
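The sanitization and post-processing pipeline used in all three experiments (Laplace noise with scale $\Delta_1\epsilon^{-1}=\epsilon^{-1}$ per bin, clipping out-of-bounds counts to $[0,n]$, and renormalizing to the public total $n$) can be sketched as follows; the function name and seed handling are our own, and the Laplace draw uses the fact that the difference of two independent $\mathrm{Exp}(\epsilon)$ variables is Laplace with scale $1/\epsilon$:

```python
# Sketch of the pipeline: Laplace noise with scale 1/eps on each histogram
# bin, clip out-of-bounds counts to [0, n], renormalize to the public total n.
import random

def sanitize_histogram(counts, eps, seed=None):
    rng = random.Random(seed)
    n = sum(counts)
    # Laplace(0, 1/eps) noise as a difference of two Exp(eps) draws
    noisy = [c + rng.expovariate(eps) - rng.expovariate(eps) for c in counts]
    clipped = [min(max(v, 0.0), n) for v in noisy]
    total = sum(clipped)
    return [v * n / total for v in clipped] if total > 0 else clipped

counts = [22, 0, 5, 13, 0, 30]
out = sanitize_histogram(counts, eps=1.0, seed=7)
print([round(v, 2) for v in out])  # sums to n = 70, all entries in [0, n]
```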
The differences in the results between the mildew and the Czech experiments can be explained by the larger $n$ in the latter. In the adult experiment, Figure \ref{fig:adult.acc} suggests the prediction accuracy via the SVMs built on sanitized data is barely affected compared to the original accuracy, regardless of the mechanism. There are some decreases in the accuracy rates from the original, but they are largely ignorable (on the scale of 0.25\% to 1\%), even with the variation taken into account. In addition, the Gaussian mechanism of $(\epsilon,\delta)$-aDP, though the worst in preserving the original counts as measured by the $l_1$ distance and the KL divergence, is no worse than the other mechanisms in prediction. \section{Discussion}\label{sec:discussion} We introduced a new concept of the $l_p$ GS, and unified the Laplace mechanism and the Gaussian mechanism in the family of the GG mechanism. For bounded data, we discussed the truncated and the BIT GG mechanisms to achieve $\epsilon$-DP. We also proposed $(\epsilon,\delta)$-pDP as an alternative paradigm to the pure $\epsilon$-DP for the GG mechanism of order $p\ge2$. We showed the connections and distinctions between the GG mechanism and the Exponential mechanism when the utility function is defined as the negative $p\mbox{\textsuperscript{th}}$-power of the Minkowski distance between the original and sanitized results. We also presented the Gaussian mechanism as an example of the GG mechanism and derived a lower bound for the scale parameter of the associated Gaussian distribution to achieve $(\epsilon,\delta)$-pDP. The bound is tighter than the lower bound for the Gaussian mechanism of $(\epsilon,\delta)$-aDP. We compared the tail probability and the dispersion of the noise generated via the Gaussian mechanism of $(\epsilon,\delta)$-pDP and the Laplace mechanism. 
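For concreteness, GG noise of order $p$ (density $\propto \exp(-(|x|/b)^p)$) can be sampled through a Gamma draw. The sketch below is our own illustration: if $U\sim\mathrm{Gamma}(1/p,1)$, then $\pm\, b\,U^{1/p}$ with a random sign follows the GG distribution, recovering Laplace noise at $p=1$:

```python
# Sketch: draw generalized Gaussian noise with density proportional to
# exp(-(|x|/b)^p) via a Gamma transform; p = 1 recovers Laplace(0, b).
import random

def gg_noise(b, p, rng):
    u = rng.gammavariate(1.0 / p, 1.0)          # U ~ Gamma(1/p, 1)
    sign = 1.0 if rng.random() < 0.5 else -1.0  # symmetric about 0
    return sign * b * u ** (1.0 / p)

rng = random.Random(0)
draws = [gg_noise(b=2.0, p=3, rng=rng) for _ in range(100000)]
# Sanity check: E[(|X|/b)^p] = E[U] = 1/p
mean_up = sum((abs(x) / 2.0) ** 3 for x in draws) / len(draws)
print(round(mean_up, 3))  # close to 1/3
```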
We finally applied the Gaussian mechanisms of $(\epsilon,\delta)$-pDP and $(\epsilon,\delta)$-aDP and the Laplace mechanism of $\epsilon$-DP in three real-life data sets. The GG mechanism is based on the $l_p$ $``$global$"$ sensitivity of query results in the sense that the sensitivity is independent of any specific data. Though the employment of the GS is robust in terms of privacy protection, it could result in a large amount of noise being injected into query results. There is work that allows the sensitivity of a query to vary with data ($``$local$"$ sensitivity) \cite{smooth, robust} with the purpose of increasing the accuracy of sanitized results. How to develop the GG mechanism in the context of local sensitivity is a topic for future investigation. The setting for the examination of the tail probability and dispersion in Section \ref{sec:gaussian} is different from, though related to, the work on upper and lower bounds on sample complexity -- the required sample size $n$ to reach a certain level of accuracy $\alpha$ and privacy guarantee $(\epsilon,\delta)$ for count queries \cite{geometry, fingerprinting, harvard}. $\alpha$ often refers to the accuracy of perturbed results in the DP literature, such as the worst-case accuracy $L_\infty$ or average accuracy $L_1,$ and might also refer to the tail probability and the MSE of released data, among others. A differential privacy mechanism is characterized by $\epsilon$ (and $\delta$) for privacy guarantee, $\alpha$ to measure information preservation and utility of sanitized results, and the sample size $n$ of original data. The existing work on sample complexity focuses on bounding $n$ given $\epsilon$ (and $\delta)$ and $\alpha$, while the results in Section \ref{sec:gaussian} focus on the accuracy and precision of sanitized results given $\epsilon$ (and $\delta)$ and $n$. 
If the bias from perturbed results (relative to the original results) is the same between the two mechanisms, a larger precision is equivalent to a smaller MSE. \section*{Appendix} \begin{appendix} \renewcommand{\theequation}{\Alph{section}.\arabic{equation}} \setcounter{equation}{0} \section{\normalsize{Proof of Lemma \ref{lem:GSp1}}}\label{app:GSp1} \vspace{-12pt} \begin{align*} \Delta_{p}&\!=\!\textstyle\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\!\! \left(\sum_{k=1}^r\left|s_k(\mathbf{x})\!-\!s_k(\mathbf{x}')\right|^p\right)^{\!1/p} =\!\left(\textstyle\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\! \! \sum_{k=1}^r\left|s_k(\mathbf{x})\!-\!s_k(\mathbf{x}')\right|^p\!\right)^{\!1/p}\!\!\!\!.\\ &\mbox{\hspace{12pt}Since }\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1} \textstyle\sum_{k=1}^r\left|s_k(\mathbf{x})\!-\!s_k(\mathbf{x}')\right|^p \le\textstyle\sum_{k=1}^r\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\! \! \left|s_k(\mathbf{x})-s_k(\mathbf{x}')\right|^p\\ &=\textstyle\sum_{k=1}^r\! \! \left(\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1} \left|s_k(\mathbf{x})-s_k(\mathbf{x}')\right|\right)^p =\textstyle\sum_{k=1}^r\Delta_{1,k}^p. 
\end{align*} Therefore, $\left(\!\sum_{k=1}^r\!\Delta_{1,k}^p\!\right)^{\!1/p}$ is an upper bound for $\Delta_{p}.\blacksquare$ \section{\normalsize{Proof of Claim \ref{cla:lowerb}}}\label{app:GGM} \allowdisplaybreaks \begin{align} &\left|\log\!\!\left(\!\frac{\Pr(\mathbf{s}^{\ast} \in Q|\mathbf{x})}{\Pr(\mathbf{s}^{\ast}\in Q|\mathbf{x}')}\!\right)\right| =\left|\log\!\!\left(\frac{ \exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x})\|_p^p\right) }{\exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x}')\|_p^p\right) }\!\right)\right|\notag\\ &=b^{-p}\big| \|\mathbf{s}^*-\mathbf{s}(\mathbf{x})\|_p^p- \|\mathbf{s}^*-\mathbf{s}(\mathbf{x}')\|_p^p\big|=b^{-p}\big|\textstyle\sum_{k=1}^r\left(|s_k^*-s_k(\mathbf{x})|^p- |s_k^*-s_k(\mathbf{x}')|^p\right)\big|\notag\\ &\le b^{-p}\textstyle\sum_{k=1}^r\big||s_k^*-s_k(\mathbf{x})|^p- |s_k^*-s_k(\mathbf{x}')|^p\big| =b^{-p}\textstyle\sum_{k=1}^r\!\big||e_k|^p- |e_k+d_k|^p\big|,\label{eqn:first} \\ & \mbox{ where $e_k=s_k^*-s_k(\mathbf{x})$ and $d_k=s_k(\mathbf{x})-s_k(\mathbf{x}')$}\notag \\ &=b^{-p}\textstyle\sum_{k=1}^r\!\big||e_k^p|- |(e_k+d_k)^p|\big|\mbox{ for integers $p\ge1$}\notag\\ &\le b^{-p}\textstyle\sum_{k=1}^r\!\big|e_k^p-(e_k+d_k)^p\big|=b^{-p}\textstyle\sum_{k=1}^r\!\left|\sum_{j=1}^p (_j^p)e_k^{p-j}d_k^j\right| \mbox{ by reverse triangle inequality} \notag\\ & \le b^{-p}\textstyle\sum_{k=1}^r\sum_{j=1}^p(_j^p)|e_k|^{p-j}|d_k|^j\label{eqn:second}\\ &= b^{-p}\!\left(\!\textstyle p\sum_{k=1}^r\!|e_k|^{p-1}|d_k|\!+\!\frac{(p-1)p}{2}\sum_{k=1}^r\!|e_k|^{p-2} |d_k|^2\!+\cdots\!+\! (p-1)p\textstyle\sum_{k=1}^r\!|e_k|^2|d_k|^{p-2}/2\!\right.\notag\\ &\qquad+\textstyle\left.\! p\! \sum_{k=1}^r\!|e_k|\cdot|d_k|^{p-1} + \!\sum_{k=1}^r|d_k|^p \right)\notag\\ \!\!&\!\!\!\le\! b^{-p}\textstyle\left(p\!\sum_{k=1}^r\!|e_k|^{p-1}\!\Delta_{1,k}\! +\!\frac{(p-1)p}{2}\sum_{k=1}^r\!|e_k|^{p-2}\! \Delta_{1,k}^2\!+\cdots\right.\notag\\ &\qquad\quad\textstyle\left. 
\!+ \frac{(p-1)p}{2}\sum_{k=1}^r\!|e_k|^2 \Delta_{1,k}^{p-2}\!+\!p\! \sum_{k=1}^r\!|e_k|\Delta_{1,k}^{p-1}\!+\!\Delta_{p}^p\right),\label{eqn:third} \end{align} where $\Delta_{1,k}$ is the $l_1$ GS of $s_k$ and $\Delta_{p}$ is the $l_p$ GS of $\mathbf{s}$. To achieve $\epsilon$-DP, Eqn (\ref{eqn:third}) needs to be $\le\epsilon$; that is, \begin{equation}\label{eqn:fourth} \textstyle\Delta_{p}^p+\sum_{k=1}^r\sum_{j=1}^{p-1}(_j^p)|e_k|^{p-j}\Delta_{1,k}^j\le b^p\epsilon. \end{equation} A less tight bound can be obtained by applying Lemma \ref{lem:GSp1} ($\Delta_{p}^p\le\sum_{k=1}^r\Delta_{1,k}^p$), thus \begin{equation}\label{eqn:fifth} \textstyle\sum_{k=1}^r\sum_{j=1}^p(_j^p)|e_k|^{p-j}\Delta_{1,k}^j\le b^p\epsilon. \end{equation} The inequalities in Eqns (\ref{eqn:fourth}) and (\ref{eqn:fifth}) suggest that the lower bound on $b$ depends on the random GG noise $e_k=s^*_k-s_k$ for $k=1,\ldots,r$, the support of which is $(-\infty,\infty)^r$. In other words, there does not exist a random noise-free solution on $b$, unless $p=1$, in which case the inequality no longer involves the error terms and the GG mechanism reduces to the familiar Laplace mechanism of $\epsilon$-DP, leading to Claim \ref{cla:lowerb}. When $p=1$, Eqn (\ref{eqn:first}) $\!\!\le b^{-1}\!\sum_{k=1}^r\!|d_k|\!\le\! b^{-1}\textstyle\sum_{k=1}^r\!\Delta_{1,k}\!=\!b^{-1}\Delta_{1}\!<\!\epsilon$, and thus $b\!>\!\Delta_{1}\epsilon^{-1}$. 
$\blacksquare$ \section{\normalsize{Proof of \texorpdfstring{$\epsilon$}{}-DP of the truncated GG mechanism in Definition \ref{def:tGGM}}}\label{app:tGGM} To satisfy $\epsilon$-DP, we need \begin{align} &\left|\log\!\left(\!\frac{\Pr(\mathbf{s}^{\ast} \in Q|\mathbf{x}, \mathbf{s}^{\ast}\in[c_{10},c_{11}]\times\!\cdots\!\times[c_{r0},c_{r1}])}{\Pr(\mathbf{s}^{\ast}\in Q|\mathbf{x}', \mathbf{s}^{\ast}\in[c_{10},c_{11}]\times\!\cdots\!\times[c_{r0},c_{r1}])}\!\right)\right|\notag\\ =&\left|\log\!\left(\frac{ \exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x})\|_p^p\right)}{\prod_{k=1}^r\Pr(c_{k0}\! \le \!s^*_k\! \le\! c_{k1}; s_k, b,p)}\times\frac{\prod_{k=1}^r\Pr(c_{k0}\! \le \!s^*_k\! \le\! c_{k1}; s'_k, b,p)}{\exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x}')\|_p^p\right) }\!\right)\right|\notag\\ =&\left|\log\!\left(\!\frac{ \exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x})\|_p^p\right) }{\exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x}')\|_p^p\right) }\!\right)+\log\!\left(\frac{ \prod_{k=1}^r\Pr(c_{k0}\! \le\!s^*_k\! \le\! c_{k1}; s_k, b,p)}{\prod_{k=1}^r\Pr(c_{k0}\! \le\!s^*_k\!\le\! c_{k1}; s'_k, b,p)}\right)\right|\notag\\ \le&\left|\log\!\left(\!\frac{ \exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x})\|_p^p\right) }{\exp\left(-b^{-p}\|\mathbf{s}^*-\mathbf{s}(\mathbf{x}')\|_p^p\right) }\!\right)\right|+\label{eqn:tGGD1}\\ &\left|\log\!\left(\frac{ \prod_{k=1}^r\Pr(c_{k0}\! \le\!s^*_k\! \le\! c_{k1}; s_k, b,p)}{\prod_{k=1}^r\Pr(c_{k0}\! \le\!s^*_k\!\le\! c_{k1}; s'_k, b,p)}\right)\right|\le\epsilon\label{eqn:tGGD2} \end{align} If the term in Eqn (\ref{eqn:tGGD1}) is bounded by $\epsilon/2$, so is the term in Eqn (\ref{eqn:tGGD2}), and the sum is at most $\epsilon$. Appendix \ref{app:GGM} establishes that the term in Eqn (\ref{eqn:tGGD1}) is bounded by $\epsilon/2$ when $b^p(\epsilon/2)\ge \Delta_{p}^p+\sum_{k=1}^r\sum_{j=1}^{p-1}(_j^p)|s^*_k-s_k|^{p-j}\Delta_{1,k}^j$. Since $\mathbf{s}^*$ is bounded within $[c_{k0},c_{k1}]$ for $k=1,\ldots,r$, $|s^*_k-s_k|\le|c_{k1}-c_{k0}|$. 
Setting $b^p(\epsilon/2)\ge \Delta_{p}^p+\sum_{k=1}^r\sum_{j=1}^{p-1}(_j^p)|c_{k1}-c_{k0}|^{p-j}\Delta_{1,k}^j$, or equivalently $b^p \ge 2\epsilon^{-1}\!\!\textstyle\left(\!\Delta_{p}^p+\sum_{k=1}^r\sum_{j=1}^{p-1}(_j^p)|c_{k1}-c_{k0}|^{p-j}\Delta_{1,k}^j\!\right)$, ensures that the truncated GG mechanism is of $\epsilon$-DP. $\quad\blacksquare$ \section{\normalsize{Conservativeness of Exponential mechanism}}\label{app:exp} \begin{cor}\label{lem:conservative} The actual privacy cost of the Exponential mechanism of $\epsilon$-DP is always less than the nominal budget $\epsilon$. When the normalization factor $A(\mathbf{x})$ in Eqn (\ref{eqn:exp}) is independent of $\mathbf{x}$, the actual privacy cost is $\epsilon/2$. \end{cor} $A(\mathbf{x})$ being independent of $\mathbf{x}$ implies that increases and decreases in the utility scores upon the change from $\mathbf{x}$ to $\mathbf{x}'$ $``$cancel out$"$ when integrated or summed over all possible $\mathbf{s}^*$ in the form of $\exp\!\left(u(\mathbf{s}^{\ast}|\mathbf{x})\frac{\epsilon}{2\Delta_u}\right)$. 
\begin{proof} Since $ u(\mathbf{s}^{\ast}|\mathbf{x})-u(\mathbf{s}^{\ast}|\mathbf{x}')\le\Delta_u$, \begin{align} &\left|\log\!\left(\!\frac{\Pr(\mathbf{s}^{\ast}(\mathbf{x}) \in Q)}{\Pr(\mathbf{s}^{\ast}(\mathbf{x}')\in Q)}\!\right)\right|\!=\!\left|\log\!\!\left(\!\frac{\exp\left(u(\mathbf{s}^{\ast}|\mathbf{x}) \frac{\epsilon}{2\Delta_u}\!\right)}{\exp\left(u(\mathbf{s}^{\ast}|\mathbf{x}')\frac{\epsilon}{2\Delta_u}\right)}\!\!\times\!\frac{A(\mathbf{x}')}{A(\mathbf{x})}\!\right)\!\right| \le \!\left|\log\!\left(\!e^{\epsilon/2}\frac{A(\mathbf{x}') }{A(\mathbf{x})}\!\right)\!\right| \label{eqn:ratio1}\\ &=\left|\frac{\epsilon}{2}+ \log\left(\frac{A(\mathbf{x}') }{A(\mathbf{x})} \right)\right| \le \frac{\epsilon}{2}+ \left|\log\left(\frac{A(\mathbf{x}')}{A(\mathbf{x})}\right)\right|\label{eqn:ratio2} \end{align} by the triangle inequality, and \begin{align} A(\mathbf{x}') =& \int_{\mathbf{s}^*\in\mathcal{S}}\exp\!\left(u(\mathbf{s}^{\ast}|\mathbf{x}')\frac{\epsilon}{2\Delta_u}\right) d\mathbf{s}^{\ast} \le\int_{\mathbf{s}^*\in\mathcal{S}}\exp\!\left((u(\mathbf{s}^{\ast}|\mathbf{x})+\Delta_u)\frac{\epsilon}{2\Delta_u}\right) d\mathbf{s}^{\ast} \label{eqn:ratio3}\\ =&\exp\!\left(\frac{\epsilon}{2}\right)\!\!\int_{\mathbf{s}^*\in\mathcal{S}}\exp\!\left(u(\mathbf{s}^{\ast}|\mathbf{x})\frac{\epsilon}{2\Delta_u}\right)d\mathbf{s}^{\ast} = \exp\!\left(\frac{\epsilon}{2}\right)A(\mathbf{x}).\notag \end{align} Therefore, $\log\left(\frac{A(\mathbf{x}')}{A(\mathbf{x})}\right)\le\epsilon/2$, and Eqn (\ref{eqn:ratio2}) becomes \begin{align} \!\!\left|\log\!\!\left(\frac{\Pr(\mathbf{s}^{\ast}(\mathbf{x}) \in Q)}{\Pr(\mathbf{s}^{\ast}(\mathbf{x}')\in Q)}\right)\right| \le \frac{\epsilon}{2}\!+\!\left|\log\left(\frac{A(\mathbf{x})}{A(\mathbf{x}')} \right)\right|\!\le\!\epsilon\label{eqn:ratio4} \end{align} \normalsize The same result can be obtained by replacing the integral with summation when $\mathcal{S}$ is a discrete set in the equation set (\ref{eqn:ratio4}). 
The above results seem to suggest $\epsilon$ can be achieved exactly since $``$equality$"$ appears in all the inequalities above (Eqns (\ref{eqn:ratio1}) to (\ref{eqn:ratio4})); however, equality cannot occur simultaneously in Eqns (\ref{eqn:ratio1}) and (\ref{eqn:ratio3}) unless $\Delta_u$ were 0, which is meaningless in DP. In addition, $\Delta_u$ is defined as the maximum change in $u$ for all $d(\mathbf{x},\mathbf{x}')=1$. While it is likely that the maximum change occurs at more than a single value of $\mathbf{s}^*$, it is not possible that the utility scores at all values of $\mathbf{s}^*$ increase or decrease by the same amount $\Delta_{u}$. In other words, the $``$equality$"$ in Eqn (\ref{eqn:ratio3}) itself is unlikely to hold. All taken together, the actual privacy cost in the Exponential mechanism is always less than $\epsilon$ and never attains the exact upper bound $\epsilon$. In the extreme, the actual privacy cost can be down to $\epsilon/2$ when $A(\mathbf{x})\equiv A(\mathbf{x}')\; \forall\; \mathbf{x},\mathbf{x}'$ and $d(\mathbf{x},\mathbf{x}')=1$, as suggested by Eqn (\ref{eqn:ratio2}). \end{proof} \section{\normalsize{Proof of Lemma \ref{lem:deltau.s.relationship}}}\label{app:deltau.s.relationship} \begin{proof} \noindent \textbf{Part a)}. Denote $\mathbf{s}(\mathbf{x})$ by $\mathbf{s}$ and $\mathbf{s}(\mathbf{x}')$ by $\mathbf{s}'$. When $p=1$, $u(\mathbf{s}^*|\mathbf{x})=-\|\mathbf{s}^{\ast}-\mathbf{s}\|_1$, $|u(\mathbf{s}^{\ast}|\mathbf{x})-u(\mathbf{s}^{\ast}|\mathbf{x}')| =\!\big|\! \sum_{k=1}^r\!(|s^{\ast}_k-s_k|-|s^{\ast}_k-s'_k|)\big| \!\le\! \sum_{k=1}^r \!\big| |s^{\ast}_k-s_k|-|s^{\ast}_k-s'_k|\big| \!\le\! \sum_{k=1}^r \!\big| (s^{\ast}_k-s_k)- (s^{\ast}_k-s'_k)\big| \!= \! \sum_{k=1}^r\! |s_k-s'_k|\!=\!\|\mathbf{s}-\mathbf{s}'\|_1$. 
Therefore, $\Delta_u=\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1, \s^*\in\mathcal{S}}\!|u(\mathbf{s}^{\ast}|\mathbf{x})-u(\mathbf{s}^{\ast}|\mathbf{x}')|\!\le\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\!\|\mathbf{s}-\mathbf{s}'\|_1=\Delta_{\mathbf{s},1}$. \end{proof} \begin{proof} \noindent \textbf{Part b)}. When $p=2$, $u(\mathbf{s}^*|\mathbf{x})=-\|\mathbf{s}^{\ast}-\mathbf{s}\|^2_2$, $|u(\mathbf{s}^{\ast}|\mathbf{x})-u(\mathbf{s}^{\ast}|\mathbf{x}')|=\big|\sum_{k=1}^r(s_k-s_k^{\ast})^2-(s'_k-s_k^{\ast})^2\big|\le \sum_{k=1}^r \big|(s_k-s_k^{\ast})^2-(s'_k-s_k^{\ast})^2\big|= \sum_{k=1}^r |s_k-s'_k|\cdot|s_k-s_k^{\ast}+s'_k-s_k^{\ast}|\le \sum_{k=1}^r\Delta_{1,k}(|s_k-s_k^{\ast}|+|s'_k-s_k^{\ast}|)$. Suppose $s_k$ is bounded within $[c_{k0},c_{k1}]$, so is $s_k^{\ast}$, then \begin{align}\label{eqn:deltau2} \Delta_u&=\!\!\!\!\textstyle\maxxu \big|\sum_{k=1}^r (s_k(\mathbf{x})-s_k^{\ast})^2\!-\!\sum_{k=1}^r (s_k(\mathbf{x}')-s_k^{\ast})^2\big|\le2\textstyle\sum_{k=1}^r\Delta_{1,k}(c_{k1}-c_{k0}) \end{align} When $c_{k1}-c_{k0}\equiv b-a\;\forall\;k$, $\Delta_u\le2(b-a)\sum_{k=1}^r\Delta_{1,k} =2(b-a)\Delta_{1}$. \end{proof} \begin{proof} \noindent \textbf{Part c)}. When $u(\mathbf{s}^*|\mathbf{x})=-\|\mathbf{s}^{\ast}-\mathbf{s}\|^p_p$ for integer $p\ge1$, $|u(\mathbf{s}^{\ast}|\mathbf{x})-u(\mathbf{s}^{\ast}|\mathbf{x}')|= \big|\|\mathbf{s}^{\ast}-\mathbf{s}\|^p_p-\|\mathbf{s}^{\ast}-\mathbf{s}'\|^p_p\big|=\big|\sum_{k=1}^r|s_k-s_k^{\ast}|^p-\sum_{k=1}^r|s'_k-s_k^{\ast}|^p\big|\le\sum_{k=1}^r\big||(s_k-s_k^{\ast})^p|-|(s'_k-s_k^{\ast})^p|\big| \le\sum_{k=1}^r\big|(s_k-s_k^{\ast})^p-(s'_k-s_k^{\ast})^p\big| \!=\!\sum_{k=1}^r\big|\sum_{i=1}^p(^p_i)(-s_k^{\ast})^{p-i}\left[ s_k^i -(s'_k)^i\right]\big| \le\sum_{k=1}^r\sum_{i=1}^p (^p_i)\big|(s_k^{\ast})^{p-i}\left[ s_k^i -(s'_k)^i\right]\big|$. Suppose $s_k$ is bounded within $[c_{k0},c_{k1}]$, so is $s_k^{\ast}$. 
\\ Define $\Delta_{1,k}^{(i)}=\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}|s_k^i -(s'_k)^i|$, then \begin{align}\label{eqn:deltaup} &\Delta_u\!=\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1, \s^*\in\mathcal{S}}\big|\!\textstyle\sum_{k=1}^r\! |s_k-s_k^{\ast}|^p\!-\!|s'_k-s_k^{\ast}|^p\big|\notag\\ &\!\le\!\textstyle\sum_{k=1}^r\sum_{i=1}^p (^p_i)\Delta_{1,k}^{(i)}\left(\mbox{max}\{|c_{k0}|, |c_{k1}|\}\right)^{p-i}\! \end{align} \noindent When $p=1$, Eqn (\ref{eqn:deltaup}) reduces to $\Delta_u\le\sum_{k=1}^r\Delta_{1,k}$ in Part a). When $p=2$, Eqn (\ref{eqn:deltaup}) becomes $\sum_{k=1}^r\!\left(\Delta_{1,k}^{(2)}+2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}\right)$, not as tight an upper bound as Eqn (\ref{eqn:deltau2}). To see this, we can show $2\Delta_{1,k}(c_{k1}-c_{k0})\le \Delta_{1,k}^{(2)}+2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}$ or $2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}-2\Delta_{1,k}(c_{k1}-c_{k0})+\Delta_{1,k}^{(2)}\ge0$ holds for each $k$. When $c_{k0}c_{k1}\!\ge\!0$, $c_{k1}-c_{k0}\!\le\!\mbox{max}\{|c_{k0}|, |c_{k1}|\}$, $2\Delta_{1,k}(c_{k1}-c_{k0})\le 2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}<2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}+\Delta_{1,k}^{(2)}$. When $c_{k0}c_{k1}\le0$ and $\mbox{max}\{|c_{k1}|,|c_{k0}|\}=c_{k1}$, $2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}-2\Delta_{1,k}(c_{k1}-c_{k0})+ \Delta_{1,k}^{(2)} =2\Delta_{1,k}c_{k1} -2\Delta_{1,k}(c_{k1}-c_{k0})+\Delta_{1,k}^{(2)}=2\Delta_{1,k}c_{k0}+\Delta_{1,k}^{(2)}$. \\ Since $\Delta_{1,k}^{(2)}\!=\!\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}|s_k^2 -(s'_k)^2|\!=\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}|s_k -s'_k|\cdot|s_k +s'_k| \ge \!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\!|s_k -s'_k|\cdot|2c_{k0}|= 2\Delta_{1,k}|c_{k0}|$, $\Delta_{1,k}^{(2)}-2\Delta_{1,k}|c_{k0}|=\Delta_{1,k}^{(2)}+2\Delta_{1,k}c_{k0}\ge0$. 
When $c_{k0}c_{k1}\le0$ and $\mbox{max}\{|c_{k1}|,|c_{k0}|\}=|c_{k0}|$, $2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}+ \Delta_{1,k}^{(2)}-2\Delta_{1,k}(c_{k1}-c_{k0})=2\Delta_{1,k}|c_{k0}|-2\Delta_{1,k}(c_{k1}-c_{k0})+\Delta_{1,k}^{(2)}=\Delta_{1,k}^{(2)}-2\Delta_{1,k}c_{k1}$. Since $\Delta_{1,k}^{(2)}=\!\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\!|s_k^2 -(s'_k)^2|\ge \!\!\!\!\mbox{max}_{\mathbf{x},\mathbf{x}',d(\mathbf{x},\mathbf{x}')=1}\!|s_k -s'_k|\cdot|2c_{k1}|= 2\Delta_{1,k}c_{k1}$, $\Delta_{1,k}^{(2)}-2\Delta_{1,k}c_{k1}\ge0$. All taken together, $2\sum_{k=1}^r\Delta_{1,k}(c_{k1}-c_{k0})\le\sum_{k=1}^r\!\left(\Delta_{1,k}^{(2)}+2\Delta_{1,k}\mbox{max}\{|c_{k0}|, |c_{k1}|\}\right)$. \end{proof} \section{\normalsize{Proof of Lemma \ref{lem:lowerbound2}}}\label{app:lowerbound2} \begin{proof} When $r=1$ ($\mathbf{s}$ is a scalar), $\Delta_{p}\equiv\Delta$ for all $p\ge1$. To satisfy $(\epsilon,\delta)$-pDP, we set \begin{align}\label{eqn:pDPr=1} \!&\!\!\textstyle\Pr\!\left(|s^*\!-\!s|\!>\!\frac{\epsilon b^2\Delta^{-1}\!-\Delta}{2}\right) \!=\!2\Phi\!\left(\!\frac{\Delta/2-\epsilon b^2(2\Delta)^{-1} }{b/\sqrt{2}}\!\right)\!\!\le\!\delta\\ &\Rightarrow\;\Delta b^{-1}\!\!-\!\epsilon b\Delta^{-1}\!\!\le\!\!\sqrt{2}\Phi^{-1}(\delta/2) \Rightarrow\;b\!\ge\!2^{-1/2}\epsilon^{-1}\Delta\!\left(\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)\right).\notag \end{align} Together with the requirement $b^2\!-\!\epsilon^{-1}\Delta^2\!>\!0$, $b\!\ge\!\max\!\left\{\!\epsilon^{-1/2}\Delta, \left(\epsilon^{-1/2}\Delta\right)\!\frac{\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)}{\sqrt{2\epsilon}}\!\right\}$. Since $\delta\!<\!1$, $\Phi^{-1}(\delta/2)\!<\!0$, $\sqrt{(\Phi^{-1}(\delta/2))^2\!+\!2\epsilon}\!- \! \Phi^{-1}(\delta/2) \ge\sqrt{2\epsilon}$, and thus \\ $b\!\ge\! \left(\epsilon^{-1/2}\Delta\right)\!\frac{\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)}{\sqrt{2\epsilon}}$. 
When $r>1$, we leverage the proof in Appendix A (page 265) in \cite{privacybook} and obtain \begin{align*} &\left|\log\!\!\left(\!\frac{\Pr(\mathbf{s}^{\ast} \in Q|\mathbf{x})}{\Pr(\mathbf{s}^{\ast}\in Q|\mathbf{x}')}\!\right)\right|\! =\!\left|\log\!\!\left(\!\frac{\exp\!\left(\!-\|\mathbf{e}\|_2^2/b^2\right) }{\exp\left(\!-\|\mathbf{e}+\mathbf{d}\|_2^2/b^2\right) }\!\right)\right|\\ =&\left|b^{-2}\left(\|\mathbf{e}\|_2^2-\|\mathbf{e}+\mathbf{d}\|_2^2\right) \right| \le \left|b^{-2}\left(2\lambda\Delta_2+\Delta_2^2\right) \right| \le b^{-2}\left( 2\Delta_2 |\lambda|+\Delta_2^2\right), \end{align*} where $\mathbf{e}=\mathbf{s}^*-\mathbf{s}(\mathbf{x}), \mathbf{d}=\mathbf{s}(\mathbf{x})-\mathbf{s}(\mathbf{x}')$ defined in Eqn (\ref{eqn:second}), and $\lambda\sim N(0,b^2/2)$. To satisfy $(\epsilon,\delta)$-pDP, we set \begin{align*} &\Pr\!\left(b^{-2}\left( 2\Delta_2 |\lambda|+\Delta_2^2\right)<\epsilon\right)= \Pr\left(|\lambda|<(b^2\epsilon\Delta^{-1}_2-\Delta_2)/2\right)>1-\delta\\ \Rightarrow&\Pr\left(|\lambda|\!>\!(b^2\epsilon\Delta^{-1}_2\!\!-\!\Delta_2)/2\right) \textstyle=2\Phi\!\left(\!\frac{\Delta_2-\epsilon b^2\Delta_2^{-1} }{\sqrt{2}b}\!\right)\!\!\le\delta, \end{align*} which is the same as Eqn (\ref{eqn:pDPr=1}) for $r=1$. Similar to the case of $r=1$, we need $b^2\epsilon\Delta^{-1}_2-\Delta_2>0$, and the lower bound of $b$ for $r>1$ is $b\!\ge\!\max\!\left\{\!\epsilon^{-1/2}\Delta_2, \left(\epsilon^{-1/2}\Delta_2\right)\!\frac{\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)}{\sqrt{2\epsilon}}\!\right\}$. Since $\delta<1$, $\Phi^{-1}(\delta/2)<0$, thus $b\!\ge\! 
\left(\epsilon^{-1/2}\Delta_2\right)\!\frac{\sqrt{(\Phi^{-1}(\delta/2))^2+2\epsilon}-\Phi^{-1}(\delta/2)}{\sqrt{2\epsilon}}$. \end{proof} \section{\normalsize{Proof of Lemma \ref{lem:L1L2}}}\label{app:L1L2} \begin{proof} If $\sigma$ is set at the lower bound in Eqn (\ref{eqn:lowerbound2}), the ratio of the variance between the Gaussian distribution of the Gaussian mechanism of $(\epsilon,\delta)$-pDP and the Laplace distribution of the Laplace mechanism of $\epsilon$-DP is \begin{align} &\textstyle\left(\!(2\epsilon)^{-1}\Delta_s\!\! \left(\!\sqrt{(\Phi^{-1}(\frac{\delta}{2}))^2+2\epsilon}\!-\!\Phi^{-1}(\frac{\delta}{2})\!\right)\!/(\!\sqrt{2}\epsilon^{-1}\Delta_s)\!\right)^{\!2}\notag\\ &\textstyle=\left(\!\sqrt{(\Phi^{-1}(\frac{\delta}{2}))^2+2\epsilon}-\Phi^{-1}(\frac{\delta}{2})\!\right)^2\!/8 =4^{-1}\!\left(\!(\Phi^{-1}(\frac{\delta}{2}))^2\!+\!\epsilon\!-\! \Phi^{-1}(\frac{\delta}{2})\sqrt{(\Phi^{-1}(\frac{\delta}{2}))^2+2\epsilon}\right)\label{eqn:varratio} \end{align} Since $\delta\!\in\![0,1]$, $\delta/2\!\in\![0,0.5]$ and $\Phi^{-1}(\delta/2)\!\in\!(-\infty,0)$. Together with the fact $\epsilon\!>\!0$, Eqn (\ref{eqn:varratio}) $\!>\!(\Phi^{-1}(\delta/2))^2/2$. Setting $(\Phi^{-1}(\delta/2))^2/2\!>\!1$ gives $\delta/2\!<\!\Phi(-\sqrt{2})$, leading to $\delta<2\Phi(-\sqrt{2})\!\approx\!0.157$. \end{proof} \end{appendix} \bibliographystyle{ieeetran} \input{arxivv5.bbl} \end{document}
\section{Introduction} Half-BPS codimension two defect operators form a rich class of observables in supersymmetric quantum field theories. Their vacuum expectation values, like those of all defect operators, are diagnostic tools to identify the phase of the quantum field theory \cite{Wilson:1974sk,tHooft:1977nqb,Gukov:2013zka}. Various quantum field theoretic constructions of codimension two defects have been proposed and explored in the literature, see for example the review \cite{Gukov:2014gja}. First, one can engineer a defect by prescribing a singularity for the gauge fields (and additional vector multiplet scalars) along the codimension two surface, as in \cite{Gukov:2006jk}. Second, a defect operator can be constructed by coupling a quantum field theory supported on its worldvolume to the bulk quantum field theory. The coupling can be achieved by gauging lower-dimensional flavor symmetries with higher-dimensional gauge fields and/or by turning on superpotential couplings. Third, a codimension two defect in a theory $\mathcal T$ can be designed in terms of a renormalization group flow from a larger theory $\widetilde{\mathcal T}$ triggered by a position-dependent, vortex-like Higgs branch vacuum expectation value \cite{Gaiotto:2012xa,Gaiotto:2014ina}.\footnote{The gauged perspective of \cite{Gaiotto:2012xa} is equivalent to considering sectors with fixed winding in a `Higgs branch localization' computation. See \cite{Benini:2012ui,Doroud:2012xw,Fujitsuka:2013fga,Benini:2013yva,Peelaers:2014ima,Pan:2014bwa,Closset:2015rna,Chen:2015fta,Pan:2015hza} for such computations in various dimensions.} Naturally, some defects can be constructed in multiple ways. Nevertheless, it is important to study all constructions separately, as their computational difficulties and conceptual merits vary. 
Such study is helped tremendously by the fact that, when the theory is placed on a compact Euclidean manifold, all three descriptions are, in principle, amenable to an exact analysis using localization techniques. See \cite{Pestun:2016zxk} for a recent comprehensive review on localization techniques. The M-theory construction of four-dimensional $\mathcal N=2$ supersymmetric theories of class $\mathcal S$ (of type $A_{N-1}$) \cite{Gaiotto:2009we} allows one to identify the class of concrete defects of interest to this paper: adding additional stacks of M2-branes ending on the main stack of $N$ M5-branes can introduce surface defects in the four-dimensional theory. The M2-brane defects obtained in this way are labeled by a representation $\mathcal R$ of $SU(N)$. In \cite{Gomis:2014eya}, the two-dimensional quiver gauge theory residing on the support of the defect and its coupling to the bulk four-dimensional theory were identified in detail. In fact, for the case of defects labeled by symmetric representations, two different coupled systems were proposed. For the purposes of this paper, it is important to remark that one of these descriptions can alternatively be obtained from the third construction described in the previous paragraph.\footnote{The fact that the application of this Higgsing prescription introduces M2-brane defects labeled by symmetric representations was understood in the original paper \cite{Gaiotto:2012xa}, see for example also \cite{Alday:2013kda}.} Allowing for simultaneous insertions of multiple half-BPS defects, intersecting each other along codimension four loci, while preserving one quarter of the supersymmetry, enlarges the collection of defects considerably and is very well-motivated. 
Indeed, in \cite{Gomis:2016ljm} it was conjectured, with overwhelming supporting evidence, that the squashed four-sphere partition function of theories of class $\mathcal S$ in the presence of intersecting M2-brane defects, wrapping two intersecting two-spheres, corresponds, through the AGT dictionary \cite{Alday:2009aq,Wyllard:2009hg}, to the insertion of a generic degenerate vertex operator in the corresponding Liouville/Toda conformal field theory correlator, extending and completing \cite{Gomis:2014eya,Alday:2009fs}. Note that such defects are labeled by a pair of representations $(\mathcal R',\mathcal R)$, which is precisely the defining information of a generic degenerate vertex operator in Liouville/Toda theory.\footnote{A generic degenerate momentum reads $\alpha = -b \Omega_{\mathcal R} - b^{-1} \Omega_{\mathcal R'}$, in terms of the highest weight vectors $\Omega_\mathcal{R}, \Omega_{\mathcal R'}$ of irreducible representations $\mathcal R$ and $\mathcal R'$ respectively, and $b$ parametrizes the Virasoro central charge.} In \cite{Gomis:2016ljm}, the insertion of intersecting defects was engineered by considering a coupled 4d/2d/0d system. In this description, the defect is engineered by coupling quantum field theories supported on the respective codimension two worldvolumes, as well as additional degrees of freedom residing at their intersection, to each other and to the bulk quantum field theory. The precise 4d/2d/0d coupled systems describing intersecting M2-brane defects were conjectured. As was also the case for a single defect, intersecting defects labeled by symmetric representations can be described by two different coupled systems. 
A localization computation, performed explicitly in \cite{Gomis:2016ljm}, allows one to calculate the squashed four-sphere partition function of such a system.\footnote{See also \cite{Lamy-Poirier:2014sea} for a localization computation in the presence of a single defect.} Let $\mathcal T$ denote the four-dimensional theory and let $\tau^{\text{L/R}}$ denote the two-dimensional theories residing on the defects wrapping the two-spheres $S^2_{\text{L}}$ and $S^2_{\text{R}}$, which intersect each other at the north and south poles. The full partition function then takes the schematic form \begin{equation} Z^{(\mathcal{T},S^2_\text{L} \cup S^2_\text{R} \subset S^4_b)} = \SumInt \ Z_{\text{pert}}^{(\mathcal T,S^4_b)}\ Z_{\text{pert}}^{(\tau^{\text{L}},S^2_\text{L})} \ Z_{\text{pert}}^{(\tau^{\text{R}},S^2_\text{R})} \ Z^{+}_{\text{intersection}}\ Z^{-}_{\text{intersection}}\ \left|Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)}\right|^2 \;,\label{generalform4d2d0d} \end{equation} where the factors $Z_{\text{pert}}^{(T,\mathbb M)}$ denote the product of the classical action and one-loop determinant of the theory $T$ placed on the manifold $\mathbb M$ (in their Coulomb branch localized form). Furthermore, $Z^{\pm}_{\text{intersection}}$ are the one-loop determinants of the degrees of freedom at the two intersection points respectively, and $|Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)}|^2$ are two copies of the instanton partition function, one for the north pole and one for the south pole, describing instantons in the presence of the intersecting surface defects spanning the local coordinate planes $\mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}}$ in $\mathbb R^4$. In \cite{Gomis:2016ljm} the focus was on the already very rich dynamics of 4d/2d/0d systems without four-dimensional gauge fields, thus avoiding the intricacies of the instanton partition functions. 
In this paper we aim at considering intersecting defects in interacting four-dimensional field theories and addressing the problem of instanton counting in the presence of such defects.\footnote{By taking one of the intersecting defects to be trivial, one can always simplify our results to the case of a single defect. In \cite{Gomis:2014eya} an extensive study was performed of the squashed four-sphere partition function of theories of free hypermultiplets in the presence of a single defect.} Our approach, alternative to that of \cite{Gomis:2016ljm}, will be to construct theories $\mathcal T$ in the presence of intersecting M2-brane defects labeled by symmetric representations using the aforementioned third strategy, \textsl{i.e.\@}{}, by considering a renormalization group flow from a larger theory $\widetilde {\mathcal T}$ triggered by a position-dependent vacuum expectation value with an intersecting vortex-like profile.\footnote{To be more precise, the configuration that triggers the renormalization group flow is a solution to the (perturbed) Seiberg-Witten monopole equations \cite{Witten:1994cg}, see \cite{Pan:2015hza}.} When the theory $\widetilde{\mathcal{T}}$ is a Lagrangian theory on $S^4_b$, this Higgsing prescription offers a straightforward computational tool to calculate the partition function $Z^{(\mathcal{T},S^2_\text{L} \cup S^2_\text{R} \subset S^4_b)}$ of $\mathcal T$ in the presence of said intersecting defects. In more detail, it instructs one to consider the residue of a certain pole of the partition function $Z^{(\widetilde{\mathcal{T}}, S^4_b)}$, which can be calculated by considering pinching poles of the integrand of the matrix integral computing $Z^{(\widetilde{\mathcal{T}}, S^4_b)}$. 
The result involves intricate sums over a restricted set of Young diagrams, which we subsequently cast in the form of a coupled 4d/2d/0d system as in \eqref{generalform4d2d0d}, by reorganizing the sums over the restricted diagrams into integrals over gauge equivariant parameters and sums over magnetic fluxes of the partition functions of the two-dimensional theories $\tau^\text{L/R}$. This step heavily relies on factorization properties of the summand of instanton partition functions, which we derive in appendix \ref{appendix:IPF-factorization}, when evaluated at special values of their gauge equivariant parameter. More importantly, we obtain concrete expressions for the instanton partition function, computing the equivariant volume of the instanton moduli space in the presence of intersecting codimension two singularities, and their corresponding ADHM matrix model. The main result of the paper, thus obtained, is the $S^4_b$-partition function of a four-dimensional $\mathcal N=2$ $SU(N)$ gauge theory with $N$ fundamental and $N$ antifundamental hypermultiplets,\footnote{While there is no distinction between a fundamental and antifundamental hypermultiplet, it is useful terminology to keep track of the respective quiver gauge theory nodes. We choose to call the right/upper node of each link the fundamental one.\label{footnote:afund-fund}} \textsl{i.e.\@}{}, SQCD, in the presence of intersecting M2-brane surface defects, labeled by $n^{\text{R}}$- and $n^{\text{L}}$-fold symmetric representations respectively. It takes the form \eqref{generalform4d2d0d} and can be found explicitly in \eqref{def:SQCDA-defect-matrix-model}. To be more precise, the coupled system we obtain involves chiral multiplets as zero-dimensional degrees of freedom, \textsl{i.e.\@}{}, it coincides with the one described in conjecture 4 of \cite{Gomis:2016ljm} with four-dimensional $\mathcal{N} = 2$ SQCD. 
The left subfigure in figure \ref{fig:ADHM} depicts the 4d/2d/0d coupled system under consideration. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{./Figures-PDF/spacetime-ADHM-quiver-symmetric-no-brane} \caption{\label{fig:ADHM} On the left, the coupled 4d/2d/0d quiver gauge theory realizing the insertion, in four-dimensional $\mathcal N=2$ SQCD, of intersecting M2-brane surface defects labeled by symmetric representations of rank $n^{\text{R}}$ and $n^{\text{L}}$ respectively is depicted. The zero-dimensional multiplets are denoted using two-dimensional $\mathcal N=(0,2)$ quiver notation reduced to zero dimensions. Various superpotential couplings are turned on, in direct analogy to the ones given in detail in \cite{Gomis:2016ljm}. On the right, the ADHM model for $k$-instantons of the left theory is shown. The model preserves the dimensional reduction to zero dimensions of two-dimensional $\mathcal N=(0,2)$ supersymmetry. We used the corresponding quiver conventions. A J-type superpotential equal to the sum of the $U(k)$ adjoint bilinears formed out of the two pairs of chiral multiplets is turned on for the adjoint Fermi multiplet. 
The flavor charges carried by the various multiplets are also compatible with a quadratic J- or E-type superpotential for the Fermi multiplets charged under $U(n^{\text{L/R}})$.\protect\footnotemark{}} \end{figure} We derive the instanton partition function $Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)}$ in the presence of intersecting planar surface defects and find it to take the form \begin{equation} Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)} = \sum_{\vec Y} {{q}^{|\vec Y|}} \ z_{{\text{vect}}}^{{\mathbb{R}^4}}(\vec Y)\ z_{{\text{afund}}}^{{\mathbb{R}^4}}(\vec Y)\ z_{{\text{fund}}}^{{\mathbb{R}^4}}(\vec Y) \ z_{{\text{defect}}}^{\mathbb{R}_{\text{L}}^2}(\vec Y)\ z_{{\text{defect}}}^{\mathbb{R}_{\text{R}}^2}(\vec Y)\;, \end{equation} where we omitted all gauge and flavor equivariant parameters. It is expressed as the usual sum over $N$-tuples $\vec Y$ of Young diagrams. The summand contains the new factors $z_{{\text{defect}}}^{\mathbb{R}_{\text{L/R}}^2}$, which can be found explicitly in \eqref{zdefect}, capturing the contributions to the instanton counting of the additional zero-modes in the presence of intersecting surface defects, in addition to the standard factors $z_{{\text{vect}}}^{{\mathbb{R}^4}}$, $z_{{\text{fund}}}^{{\mathbb{R}^4}}$ and $z_{{\text{afund}}}^{{\mathbb{R}^4}}$ describing the contributions from the vector multiplet and $N+N$ hypermultiplets. The coefficient of $q^k$ of the above result can be derived from the ADHM model for $k$-instantons depicted in the right subfigure of figure \ref{fig:ADHM}. We have confirmed this ADHM model by analyzing the brane construction of said instantons, see section \ref{sec:IPFwDef} for all the details. In section \ref{sec:conclusions} we present conjectural generalizations of the instanton counting in the case of generic intersecting M2-brane defects. 
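The label set of this sum admits a simple combinatorial cross-check: the coefficient of $q^k$ receives contributions from all $N$-tuples $\vec Y$ with total box number $|\vec Y|=k$. As a purely illustrative sketch (the function names below are our own and not tied to any existing package), these labels can be enumerated as follows:

```python
def young_diagrams(k, max_row=None):
    """All Young diagrams with k boxes, as non-increasing tuples of row lengths."""
    if max_row is None:
        max_row = k
    if k == 0:
        return [()]
    out = []
    # choose the first (longest) row, then recurse with that row as a cap
    for first in range(min(k, max_row), 0, -1):
        for rest in young_diagrams(k - first, first):
            out.append((first,) + rest)
    return out

def young_tuples(N, k):
    """All N-tuples of Young diagrams with |Y_1| + ... + |Y_N| = k,
    i.e. the label set of the coefficient of q^k in the instanton sum."""
    if N == 1:
        return [(Y,) for Y in young_diagrams(k)]
    out = []
    for k1 in range(k + 1):
        for Y in young_diagrams(k1):
            for rest in young_tuples(N - 1, k - k1):
                out.append((Y,) + rest)
    return out
```

For instance, for $SU(2)$ at instanton number $k=2$ there are $p(0)p(2)+p(1)^2+p(2)p(0)=5$ tuples, with $p(k)$ the number of partitions of $k$.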
\footnotetext{The partition function is insensitive to the presence of superpotential couplings.}The paper is organized as follows. We start in section \ref{section:Higgsing_Prescription} by briefly recalling the Higgsing prescription to compute squashed sphere partition functions in the presence of (intersecting) M2-brane defects labeled by symmetric representations. We also present its brane realization. In section \ref{section:free-hyper} we implement the prescription for the case where $\mathcal T$ is a four- or five-dimensional theory of $N^2$ free hypermultiplets placed on a squashed sphere. The vacuum expectation value in $\mathcal T$ of intersecting M2-brane defects on the sphere has been computed in \cite{Gomis:2016ljm} from the point of view of the 4d/2d/0d or 5d/3d/1d coupled system and takes the form \eqref{generalform4d2d0d} (without the instanton contributions). For the case of symmetric representations, we reproduce this expression directly, and provide a derivation of a few details that were not addressed in \cite{Gomis:2016ljm}. We notice that the superpotential constraints of the coupled system on the parameters appearing in the partition function are reproduced effortlessly in the Higgsing computation thanks to the fact that they have a common origin in the theory $\widetilde{\mathcal T}$, which in this case is SQCD. These relatively simple examples allow us to show in some detail the interplay of the various ingredients of the Higgsed partition function of theory $\widetilde{\mathcal T}$, and how to cast it in the form \eqref{generalform4d2d0d}. In section \ref{section: interacting theories} we turn our attention to inserting defects in four-dimensional $\mathcal N=2$ SQCD. We apply the Higgsing prescription to an $SU(N)\times SU(N)$ gauge theory with bifundamental hypermultiplets and for each gauge group an additional $N$ fundamental hypermultiplets, and cast the resulting partition function in the form \eqref{generalform4d2d0d}. 
As a result we obtain a sharp prediction for the instanton partition function in the presence of intersecting surface defects. This expression provides concrete support for the ADHM matrix model that we obtain in section \ref{sec:IPFwDef} from a brane construction. We present our conclusions and some future directions in section \ref{sec:conclusions}. Five appendices contain various technical details and computations. \section{Higgsing and codimension two defects}\label{section:Higgsing_Prescription} In this section we start by briefly recalling the Higgsing prescription to compute the partition function of a theory $\mathcal T$ in the presence of (intersecting) defects placed on the squashed four/five-sphere \cite{Gaiotto:2012xa,Gaiotto:2014ina}. We also consider the brane realization of this prescription, which provides a natural bridge to the description of intersecting surface defects in terms of a 4d/2d/0d (or 5d/3d/1d) coupled system as in \cite{Gomis:2016ljm}. \subsection{The Higgsing prescription}\label{subsec:Higgsing_Prescription} We will be interested in four/five-dimensional quantum field theories with eight supercharges.\footnote{The localization computations we will employ throughout this paper rely on a Lagrangian description, but the Higgsing prescription is applicable outside the realm of Lagrangian theories. We will restrict attention to (Lagrangian) four-dimensional $\mathcal N=2$ supersymmetric quantum field theories of class $\mathcal S$ and their five-dimensional uplift.} Let us for concreteness start by considering four-dimensional $\mathcal N=2$ supersymmetric theories. Consider a theory $\mathcal T$ whose flavor symmetry contains an $SU(N)$ factor, and consider the theory of $N^2$ free hypermultiplets, which has flavor symmetry $USp(2N^2)\supset SU(N)\times SU(N)\times U(1)$. 
By gauging the diagonal subgroup of the $SU(N)$ flavor symmetry factor of the former theory with one of the $SU(N)$ factors of the latter theory, we obtain a new theory $\widetilde{\mathcal T}$. As compared to $\mathcal T$, the theory $\widetilde{\mathcal T}$ has an extra $U(1)$ factor in its flavor symmetry group. We denote the corresponding mass parameter as $\check M.$ The theory $\widetilde{\mathcal T}$ can be placed on the squashed four-sphere $S^4_b$,\footnote{\label{definitionsquashedfoursphere}We consider $S^4_b$ defined through the embedding equation in five-dimensional Euclidean space $\mathbb R^5 = \mathbb R\times \mathbb C^2$ with coordinates $x,z_1,z_2$ \begin{equation*} \frac{x^2}{r^2} + \frac{|z_1|^2}{ \ell^2} + \frac{|z_2|^2}{\tilde \ell^2} = 1\;, \end{equation*} in terms of parameters $r,\ell,\tilde \ell$ with dimension of length. The squashing parameter $b$ is defined as $b^2=\frac{\ell}{\tilde \ell}$. The isometries of $S^4_b$ are given by $U(1)^{\text{R}}\times U(1)^{\text{L}}$, which act by rotating the $z_1$ and $z_2$ plane respectively. The fixed locus of $U(1)^{\text{R}}$ is a squashed two-sphere: $S^2_{\text{R}} = S^4_b \big|_{z_1 = 0}$ and, similarly, the fixed locus of $U(1)^{\text{L}}$ is $S^2_{\text{L}} = S^4_b \big|_{z_2 = 0}$. The two-spheres $S^2_{\text{R}}$ and $S^2_{\text{L}}$ intersect at their north pole and south pole, \textsl{i.e.\@}{}, the points with coordinates $z_1=z_2=0$ and $x=\pm r$.} and its partition function can be computed using localization techniques \cite{Pestun:2007rz,Hama:2012bg}. Let us denote the supercharge used to localize the theory as $\mathcal Q$. 
Its square is given by \begin{equation} \mathcal Q^2 = b^{-1} \mathcal M_{\text{R}} + b \mathcal M_{\text{L}} - (b+b^{-1}) \mathcal R + i\sum_J M_J F_J + \text{gauge transformation}\;, \end{equation} where $\mathcal M_{\text{R/L}}$ are generators of the $U(1)^{\text{R/L}}$ isometries of $S^4_b$ (see footnote \ref{definitionsquashedfoursphere}), $\mathcal R$ is the $SU(2)_{\mathcal R}$ Cartan generator and $F_J$ are the Cartan generators of the flavor symmetry algebra. The coefficients $M_J$ are mass parameters rescaled by $\sqrt{\ell\tilde\ell}$, where $\ell$ and $\tilde \ell$ are two radii of the squashed sphere (see footnote \ref{definitionsquashedfoursphere}), to make them dimensionless. Localization techniques simplify the computation of the $S^4_b$ partition function to the calculation of one-loop determinants of quadratic fluctuations around the localization locus given by arbitrary constant values for $\Sigma^{\widetilde{\mathcal T}}$, the imaginary part of the vector multiplet scalar of the total gauge group.\footnote{More precisely, this is the ``Coulomb branch localization'' locus. 
Alternatively, one can perform a ``Higgs branch localization'' computation, see \cite{Chen:2015fta,Pan:2015hza}.} The final result for the $S^4_b$ partition function of the theory $\widetilde{\mathcal T}$ is then \begin{equation}\label{S4bpartitionfunction} Z^{(\widetilde{\mathcal T},S^4_b)}(M) = \int \text{d}\Sigma^{\widetilde{\mathcal T}}\ Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_b)}(\Sigma^{\widetilde{\mathcal T}})\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_b)}(\Sigma^{\widetilde{\mathcal T}},M) \ |Z_{\text{inst}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,\Sigma^{\widetilde{\mathcal T}} ,M^\epsilon )|^2 \;, \end{equation} where $Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_b)}$ denotes the classical action evaluated on the localization locus, $Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_b)}$ is the one-loop determinant and $|Z_{\text{inst}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,\Sigma^{\widetilde{\mathcal T}} ,M^\epsilon )|^2$ are two copies of the Nekrasov instanton partition function \cite{Nekrasov:2002qd,Nekrasov:2003rj}, capturing the contribution to the localized path integral of instantons residing at the north and south pole of $S^4_b$. In \cite{Gaiotto:2012xa,Gaiotto:2014ina}, it was argued, by considering the physics at the infrared fixed point of the renormalization group flow triggered by a position dependent Higgs branch vacuum expectation value for the baryon constructed out of the hypermultiplet scalars, which carries charges $\mathcal M_{\text{L}} = -n^{\text{L}}, \mathcal M_{\text{R}} = -n^{\text{R}}, \mathcal R = N/2$ and $\check F = N$, that the partition function $Z^{(\widetilde{\mathcal T},S^4_b)}(M)$ necessarily has a pole when \begin{equation}\label{poleposition} i \check M = \frac{b+b^{-1}}{2} + b^{-1} \frac{n^{\text{R}}}{N} + b \frac{n^{\text{L}}}{N}\;. 
\end{equation} Moreover, the residue of the pole precisely captures the partition function of the theory $\mathcal T$ in the presence of M2-brane surface defects labeled by $n^{\text{R}}$-fold and $n^{\text{L}}$-fold symmetric representations respectively up to the left-over contribution of the hypermultiplet that captures the fluctuations around the Higgs branch vacuum. These defects wrap two intersecting two-spheres $S^2_{\text{R/L}},$ the fixed loci of $U(1)^{\text{R/L}}$. The pole at \eqref{poleposition} of $Z^{(\widetilde{\mathcal T},S^4_b)}(M)$ finds its origin in the matrix integral \eqref{S4bpartitionfunction} because of poles of the integrand pinching the integration contour. To see this, let us separate out the $SU(N)$ gauge group that gauges the free hypermultiplet to $\mathcal T$, and split $\Sigma^{\widetilde{\mathcal T}}$ accordingly: $\Sigma^{\widetilde{\mathcal T}} = (\Sigma^{\mathcal T}, \Sigma),$ where $\Sigma^{\mathcal T}$ is the vector multiplet scalar of the full gauge group of theory $\mathcal T,$ and $\Sigma$ the $SU(N)$ vector multiplet scalar. We can then rewrite \eqref{S4bpartitionfunction} as \begin{multline}\label{S4bpartitionfunctionsplit} Z^{(\widetilde{\mathcal T},S^4_b)}(M) = \int \text{d}\Sigma^{\mathcal T}\ \int \text{d}\Sigma\ Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_b)}(\Sigma^{\mathcal T},\Sigma)\ Z_{\text{1-loop}}^{({\mathcal T},S^4_b)}(\Sigma^{\mathcal T},\Sigma,M) \ |Z_{\text{inst}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,\Sigma^{\mathcal T},\Sigma,M^\epsilon)|^2 \\ \times \prod_{\substack{A,B=1\\A\neq B}}^{N} \Upsilon_b(i(\Sigma_A - \Sigma_B))\ \prod_{A=1}^{N}\prod_{I=1}^{N}\Upsilon_b\left(i(\Sigma_A-M_I-\check M)+\frac{Q}{2}\right)^{-1}\;. 
\end{multline} The first factor in the second line is the one-loop determinant of the $SU(N)$ vector multiplet, while the second factor is the contribution of the $N^2$ extra hypermultiplets, organized into $N$ $SU(N)$ fundamental hypermultiplets.\footnote{See appendix \ref{app:Special functions} for the definition and some useful properties of the various special functions that are used throughout the paper.} Here $M_I, I=1,\ldots, N$ denote the mass parameters associated to the $SU(N)$ flavor symmetry (with $\sum_I M_I =0$). The integrand of the $\Sigma$-integral has poles (among many others) located at \begin{equation}\label{polepositionspinching} i\Sigma_A = i M_{\sigma(A)} + i \check M - n^{\text{R}}_A b^{-1} - n^{\text{L}}_A b - \frac{b+b^{-1}}{2} \qquad \text{with} \qquad n^{\text{R/L}}_A\geq 0\;, \quad A=1,\ldots,N\;, \end{equation} where $\sigma$ denotes a permutation of $N$ variables. These poles arise from the one-loop determinant of the extra hypermultiplets. When the $U(1)$ mass parameter $\check M$ takes the value of \eqref{poleposition}, they pinch the integration contour if \begin{equation}\label{partitions} n^{\text{R}} = \sum_{A=1}^{N} n^{\text{R}}_A\;, \qquad n^{\text{L}} = \sum_{A=1}^{N} n^{\text{L}}_A\;, \end{equation} since we only have $N-1$ independent $SU(N)$ integration variables. 
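The pinching poles are thus labeled by the ways of distributing $n^{\text{R}}$ and $n^{\text{L}}$ over the $N$ non-negative integers $n^{\text{R/L}}_A$ as in \eqref{partitions}, \textsl{i.e.\@}{}, by compositions of $n^{\text{R/L}}$ into $N$ non-negative parts, of which there are $\binom{n+N-1}{N-1}$ by stars and bars. A minimal sketch (the function names are ours) enumerating these configurations:

```python
from math import comb

def compositions(n, N):
    """Yield all (n_1, ..., n_N) with integer n_A >= 0 summing to n,
    i.e. the pinching-pole configurations for a single defect label."""
    if N == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, N - 1):
            yield (first,) + rest

def count_compositions(n, N):
    # stars and bars: choose the N-1 bar positions among n+N-1 slots
    return comb(n + N - 1, N - 1)
```

For $N=3$ and $n^{\text{R}}=2$, say, there are $\binom{4}{2}=6$ configurations, and the residue at \eqref{poleposition} involves a sum over the product of the $n^{\text{R}}$- and $n^{\text{L}}$-configurations.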
Indeed, summing \eqref{polepositionspinching} over $A$ and using $\sum_A \Sigma_A = \sum_I M_I = 0$ together with \eqref{partitions} reproduces precisely \eqref{poleposition}. Note that the residue of the pole of $Z^{(\widetilde{\mathcal T},S^4_b)}$ at \eqref{poleposition} is equal to the sum over all partitions of $n^{\text{R}},n^{\text{L}}$ in \eqref{partitions} of the residue of the $\Sigma$-integrand of $Z^{(\widetilde{\mathcal T},S^4_b)}$ at the pole position \eqref{polepositionspinching} when treating the $\Sigma_A$ as $N$ independent variables.\footnote{Upon gauging the additional $U(1)$ flavor symmetry and turning on a Fayet-Iliopoulos parameter, which coincides with the gauged setup of \cite{Gaiotto:2012xa,Gaiotto:2014ina}, the residues of precisely these poles were given meaning in the ``Higgs branch localization'' computation of \cite{Pan:2015hza} in terms of Seiberg-Witten monopoles.} A similar analysis can be performed for five-dimensional $\mathcal N=1$ theories. The theory $\widetilde {\mathcal T}$ can be put on the squashed five-sphere $S^5_{\vec\omega}$,\footnote{\label{definitionsquashedfivesphere}The squashed five-sphere $S^5_{\vec\omega = (\omega_1,\omega_2,\omega_3)}$ is given by the locus in $\mathbb C^3$ satisfying \begin{equation} \omega_1^2 |z_1|^2 + \omega_2^2 |z_2|^2 + \omega_3^2 |z_3|^2 = 1\;. \end{equation} Its isometries are $U(1)^{(1)}\times U(1)^{(2)} \times U(1)^{(3)},$ which act by rotations on the three complex planes respectively. 
The fixed locus of $U(1)^{(\alpha)}$ is the squashed three-sphere $S^3_{(\alpha)} = S^5_{\vec\omega} \big |_{z_\alpha=0},$ while the fixed locus of $U(1)^{(\alpha)}\times U(1)^{(\beta\neq \alpha)}$ is the circle $S^1_{(\alpha\cap \beta)} = S^5_{\vec\omega} \big |_{z_\alpha=z_\beta=0}.$ The notation indicates that it appears as the intersection of the three-spheres $S^3_{(\alpha)}$ and $S^3_{(\beta)}.$ A convenient visualization of the five-sphere and its fixed loci under one or two of the $U(1)$ isometries is as a $T^3$-fibration over a solid triangle, where on the edges one of the cycles shrinks and at the corners two cycles shrink simultaneously.} and its partition function can again be computed using localization techniques \cite{Hosomichi:2012ek,Kallen:2012va,Kim:2012ava,Imamura:2012bm,Lockhart:2012vp,Kim:2012qf}. The localizing supercharge $\mathcal Q$ squares to \begin{equation} \mathcal Q^2 = \sum_{\alpha=1}^3 \omega_\alpha \mathcal M_{(\alpha)} -(\omega_1+\omega_2+\omega_3) \mathcal R + i \sum_J M_J F_J + \text{gauge transformation}\;, \end{equation} where $\mathcal M_{(\alpha)}$ are the generators of the $U(1)^{(1)}\times U(1)^{(2)} \times U(1)^{(3)}$ isometry of the squashed five-sphere $S^5_{\vec\omega}$ (see footnote \ref{definitionsquashedfivesphere}). The localization locus consists of arbitrary constant values for the vector multiplet scalar $\Sigma^{\widetilde{\mathcal T}}$, hence the partition function reads \begin{equation}\label{S5omegapartitionfunction} Z^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(M) = \int \text{d}\Sigma^{\widetilde{\mathcal T}}\ Z_{\text{cl}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(\Sigma^{\widetilde{\mathcal T}})\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(\Sigma^{\widetilde{\mathcal T}},M) \ |Z_{\text{inst}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1)}(q,\Sigma^{\widetilde{\mathcal{T}}} ,M^\omega )|^3 \;. 
\end{equation} One can argue that $Z^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(M)$ has a pole at \begin{equation}\label{polepositionS5} i \check M = \frac{\omega_1+\omega_2+\omega_3}{2} + \sum_{\alpha=1}^3 \omega_\alpha \frac{n^{(\alpha)}}{N}\;, \end{equation} whose residue computes the $S^5_{\vec\omega}$ partition function of $\mathcal T$ in the presence of codimension two defects labeled by $n^{(\alpha)}$-fold symmetric representations and wrapping the three-spheres $S^3_{(\alpha)}$ obtained as the fixed loci of the $U(1)^{(\alpha)}$ isometries (see footnote \ref{definitionsquashedfivesphere}), respectively. These three-spheres intersect each other in pairs along a circle. Again, this pole arises from pinching the integration contour by poles of the one-loop determinant of the $N^2$ hypermultiplets located at \begin{equation}\label{polepositionsS5pinching} i\Sigma_A = i M_{\sigma(A)} + i \check M - \sum_{\alpha=1}^3 n_A^{(\alpha)} \omega_{\alpha} - \frac{\omega_1+\omega_2+\omega_3}{2} \qquad \text{with} \qquad n_A^{(\alpha)}\geq 0\;, \quad A=1,\ldots,N\;, \end{equation} if $n^{(\alpha)} = \sum_{A=1}^{N} n_A^{(\alpha)}$. The residue of $Z^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(M)$ at the pole given in \eqref{polepositionS5} equals the sum over partitions of the integers $n^{(\alpha)}$ of the residue of the integrand at the pole position \eqref{polepositionsS5pinching} with the $\Sigma_A$ treated as independent variables.\footnote{In \cite{Pan:2014bwa}, these residues were interpreted as the contribution to the partition function of K-theoretic Seiberg-Witten monopoles.} \subsection{Brane realization}\label{subsec:brane realization} To sharpen one's intuition of the Higgsing prescription outlined in the previous subsection, one may look at its brane realization \cite{Gaiotto:2014ina}. 
Consider a four-dimensional $\mathcal N=2$ gauge theory $\mathcal T$ described by the linear quiver and corresponding type IIA brane configuration\footnote{\label{branedirections}The branes in this figure as well as those in figure \ref{fig:manynodequiver_higgsed} and the following discussion span the following dimensions: \begin{center} \begin{tabular}{l | cccccccccc} & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\ \hline NS5 & --- & --- & --- & --- & --- & --- & & & & \\ \ \ D4 & --- & --- & --- & --- & & & --- & & & \\ \ \ D2$_\text{L}$ & --- & --- & & & & & & & & --- \\ \ \ D2$_\text{R}$ & & & --- & --- & & & & & & --- \\ \ \ D0 & & & & & & & --- & & & \\ \end{tabular} \end{center} } \begin{center} \includegraphics[width=0.8\textwidth]{./Figures-PDF/ManyNodeQuiver} \end{center} Gauging in a theory of $N^2$ hypermultiplets amounts to adding an additional NS5-brane on the right end of the brane array. The Higgsing prescription of the previous subsection is then trivially implemented by pulling away this additional NS5-brane (in the 10-direction of footnote \ref{branedirections}), while suspending $n^{\text{R}}$ D2$_\text{R}$ and $n^{\text{L}}$ D2$_\text{L}$-branes between the displaced NS5-brane and the right stack of D4-branes, see figure \ref{fig:manynodequiver_higgsed}. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{./Figures-PDF/Higgsing-brane-construction} \caption{\label{fig:manynodequiver_higgsed} Gauging the diagonal subgroup of the $SU(N)$ flavor symmetry carried by the right-hand stack of D4-branes and an $SU(N)$ subgroup of the flavor symmetry of an additional $N^2$ free hypermultiplets amounts to adding an additional NS5-brane on the right end of the brane array. This leads to the figure on the left. 
Higgsing the system as in subsection \ref{subsec:Higgsing_Prescription} corresponds to pulling away this NS5-brane from the main stack, while stretching $n^{\text{R}}$ D2$_\text{R}$ and $n^{\text{L}}$ D2$_\text{L}$-branes in between it and the D4-branes, producing the middle figure. The final figure represents the system in the Coulomb phase.} \end{figure} Various observations should be made. First of all, the brane picture in figure \ref{fig:manynodequiver_higgsed} was also considered in \cite{Gomis:2016ljm} to describe intersecting M2-brane surface defects labeled by $n^{\text{R}}$ and $n^{\text{L}}$-fold symmetric representations respectively. Its field theory realization is a coupled 4d/2d/0d system, described by the quiver in figure \ref{fig:InsertSymmetrics} (see \cite{Gomis:2016ljm}). \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{./Figures-PDF/InsertSymmetrics} \caption{\label{fig:InsertSymmetrics} Coupled 4d/2d/0d quiver gauge theory realizing intersecting M2-brane surface defects labeled by symmetric representations, of rank $n^{\text{R}}$ and $n^{\text{L}}$ respectively, in a four-dimensional $\mathcal N=2$ linear quiver gauge theory. The two-dimensional degrees of freedom, depicted in $\mathcal N=(2,2)$ quiver notation, are coupled to the four-dimensional ones through cubic and quartic superpotential couplings. The explicit superpotentials can be found in \cite{Gomis:2016ljm}. The zero-dimensional degrees of freedom, denoted using two-dimensional $\mathcal N=(0,2)$ quiver notation dimensionally reduced to zero dimensions, with solid lines representing chiral multiplets, participate in E- and J-type superpotentials. } \end{figure} Note that the two-dimensional theories, residing on the D2$_\text{R}$ and D2$_\text{L}$-branes, are in their Higgs phase, with equal Fayet-Iliopoulos parameter $\xi_{\text{FI}}$ proportional to the distance (in the 7-direction) between the displaced NS5-brane and the next right-most NS5-brane.
Before Higgsing, this distance was proportional to the inverse square of the gauge coupling of the extra $SU(N)$ gauge node: \begin{equation} \xi_{\text{FI}} = \frac{4\pi}{g_{\text{YM}}^2}\;. \end{equation} In particular, the Higgsing prescription will produce gauge theory results in the regime where $\xi_{\text{FI}}$ is positive, and where the defect is inserted at the right-most end of the quiver. In this paper we will restrict attention to this regime. Note however that sliding the displaced NS5-brane along the brane array in figure \ref{fig:manynodequiver_higgsed} implements hopping dualities \cite{Gadde:2013ftv,Gomis:2014eya} (see also \cite{Assel:2015oxa, Closset:2016arn}), which in the quiver gauge theory description of figure \ref{fig:InsertSymmetrics} translate to coupling the defect worldvolume theory to a different pair of neighboring nodes of the four-dimensional quiver, while not changing the resulting partition function. In \cite{Gomis:2016ljm}, a first-principles localization computation was performed to calculate the partition function of the coupled 4d/2d/0d system when placed on a squashed four-sphere, with the defects wrapping two intersecting two-spheres $S^2_{\text{R/L}},$ the fixed loci of $U(1)^{\text{R/L}}$, in the case of non-interacting four-dimensional theories. Our aim in the next section will be to reproduce these results from the Higgsing point of view. When the four-dimensional theory contains gauge fields, the localization computation needs as input the Nekrasov instanton partition function in the presence of intersecting planar surface defects, which non-trivially modify the ADHM data. The Higgsing prescription does not require such input, and in section \ref{section: interacting theories} we will apply it to $\mathcal N=2$ SQCD. This computation will allow us to extract the modified ADHM integral.
The brane realization of figure \ref{fig:manynodequiver_higgsed} already provides compelling hints about how the ADHM data should be modified. In this setup, instantons are described by D0-branes stretching between the NS5-branes. In the Coulomb phase, \textsl{i.e.\@}{}, when $\xi_{\text{FI}}=0$, their worldvolume theory is enriched by massless modes arising from open strings stretching between the D0-branes and the D2$_\text{R}$ and D2$_\text{L}$-branes. These give rise to the dimensional reduction of a two-dimensional $\mathcal N=(2,2)$ chiral multiplet to zero dimensions, or equivalently, the dimensional reduction of a two-dimensional $\mathcal N=(0,2)$ chiral multiplet and Fermi multiplet. We will provide more details about the instanton counting in the presence of defects in section \ref{sec:IPFwDef}. Our Higgsing computation of section \ref{section: interacting theories} will provide an independent verification of these arguments. \section{Intersecting defects in a theory of \texorpdfstring{$N^2$}{N2} free hypermultiplets}\label{section:free-hyper} In this section we work out in some detail the Higgsing computation for the case where $\mathcal T$ is a theory of free hypermultiplets. We will find perfect agreement with the description of intersecting M2-brane defects labeled by symmetric representations in terms of a 4d/2d/0d (or 5d/3d/1d) system \cite{Gomis:2016ljm}. Our computation also provides a derivation of the Jeffrey-Kirwan-like residue prescription used to evaluate the partition function of the coupled 4d/2d/0d (or 5d/3d/1d) system, and of the flavor charges of the degrees of freedom living on the intersection. In the next section we will consider the case of interacting theories $\mathcal T$.
\subsection{Intersecting codimension two defects on \texorpdfstring{$S^5_{\vec{\omega}}$}{S5}} As a first application of the Higgsing prescription of the previous section, we consider the partition function of a theory of $N^2$ free hypermultiplets on $S^5_{\vec{\omega}}$ in the presence of intersecting codimension two defects wrapping two of the three-spheres $S^3_{(\alpha)}$ fixed by the $U(1)^{(\alpha)}$ isometry (see footnote \ref{definitionsquashedfivesphere}, and also figure \ref{figure:T3fibration}), \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{./Figures-PDF/T3fibration} \caption{\label{figure:T3fibration} A convenient representation of $S^5_{\vec \omega}$ in terms of a $T^3$-fibration over a triangle. Each edge represents a three-sphere invariant point-wise under one of the $U(1)$ isometries, and each vertex represents an $S^1$, where two $S^3$'s intersect, invariant point-wise under two $U(1)$ isometries. Each $S^1$ has two tubular neighborhoods of the form $S^1 \times \mathbb{R}^2$ in the two intersecting $S^3$'s, with omega-deformation parameters given in terms of $b_{(\alpha)}^{\pm 1}$, as shown in the figure.} \end{figure} say $S^3_{(1)}$ and $S^3_{(2)}$. Our aim will be to cast the result in the manifest form of the partition function of a 5d/3d/1d coupled system, as in \cite{Gomis:2016ljm}. We consider this case first since the fact that the intersection $S^3_{(1)}\cap S^3_{(2)}=S^1_{(1\cap 2)}$ has a single connected component is a simplifying feature that will be absent in the example of $S^4_b$ in the next subsection. \subsubsection{\texorpdfstring{$S^5_{\vec\omega}$}{S5} partition function of \texorpdfstring{$\widetilde{\mathcal T}$}{\widetilde T}} Our starting point, the theory $\widetilde{\mathcal T}$, is described by the quiver \begin{center} \includegraphics[width=.2\textwidth]{./Figures-PDF/SQCDQuiver}\;. 
\end{center} That is, it is an $SU(N)$ gauge theory with $N$ fundamental and $N$ antifundamental hypermultiplets, \textsl{i.e.\@}{}, $\mathcal N=2$ SQCD.\footnote{Recall our terminology of footnote \ref{footnote:afund-fund}.} The $S^5_{\vec\omega}$-partition function of $\widetilde{\mathcal T}$ is computed by the matrix integral \eqref{S5omegapartitionfunction} \cite{Qiu:2013aga,Qiu:2014oqa,Hosomichi:2012ek,Kallen:2012va,Kim:2012ava,Imamura:2012bm,Lockhart:2012vp,Kim:2012qf} \begin{equation} Z^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(M,\tilde M) = \int \text{d}\Sigma \ Z_{\text{cl}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(\Sigma)\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(\Sigma, M,\tilde M)\ |Z_{\text{inst}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1)}(q,\Sigma ,M^\omega ,\tilde M^\omega )|^3 \;. \end{equation} The explicit expression for the classical action is given by \begin{equation}\label{S5classicalaction_SQCD} Z_{\text{cl}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(\Sigma) = \exp\left[-\frac{8\pi^3}{\omega_1\omega_2\omega_3 g^2_{\text{YM}}}\Tr \Sigma^{2}\right]\;, \end{equation} while the one-loop determinant $Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}$ is the product of the one-loop determinants of the $SU(N)$ vector multiplet, the $N$ fundamental hypermultiplets and the $N$ antifundamental hypermultiplets: \begin{align}\label{S5oneloop_SQCD} &Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}(\Sigma, M,\tilde M) = Z_{{\text{vect}}}^{S^5_{\vec\omega}}(\Sigma )\ Z_{\text{fund}}^{S^5_{\vec\omega}}(\Sigma ,M)\ Z_{\text{afund}}^{S^5_{\vec\omega}}(\Sigma ,\tilde M) \\ &=\frac{\prod_{\substack{A , B = 1\\A\neq B}}^N {{S_3}(i({\Sigma_A} - {\Sigma_B})\ |\ \vec\omega )}}{\prod_{A = 1}^N \prod_{I = 1}^N {{S_3}(i({\Sigma_A} - {M_I}) + | \vec\omega |/2\ |\ \vec\omega )} \ \prod_{A = 1}^N \prod_{J = 1}^N {{S_3}(i( - {\Sigma_A} + {{\tilde M}_J}) + |\vec\omega|/2\ |\ \vec\omega )} }\;, \end{align}
written in terms of the triple sine function. Here we used the notation $|\vec\omega|=\omega_1+\omega_2+\omega_3$. Note that we did not explicitly separate the masses for the $SU(N)\times U(1)$ flavor symmetry, but instead considered $U(N)$ masses. Finally, there are three copies of the K-theoretic instanton partition function, capturing contributions of instantons residing at the circles kept fixed by two out of three $U(1)$ isometries. Concretely, one has \begin{multline}\label{IPFS5_SQCD} |Z_{\text{inst}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1)}(q,\Sigma ,{M^\omega },{{\tilde M}^\omega }){|^3} \equiv \; Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(2\cap 3)})}\Big(q_1,\frac{\Sigma }{{{\omega _1}}},\frac{{{M^\omega }}}{{{\omega _1}}},\frac{{{{\tilde M}^\omega }}}{{{\omega _1}}},\frac{{2\pi }}{{{\omega _1}}},\frac{{{\omega _3}}}{{{\omega _1}}},\frac{{{\omega _2}}}{{{\omega _1}}}\Big) \\ \times Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(1\cap 3)})}\Big(q_2,\frac{\Sigma }{{{\omega _2}}},\frac{{{M^\omega }}}{{{\omega _2}}},\frac{{{{\tilde M}^\omega }}}{{{\omega _2}}},\frac{{2\pi }}{{{\omega _2}}},\frac{{{\omega _3}}}{{{\omega _2}}},\frac{{{\omega _1}}}{{{\omega _2}}}\Big) \ Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(1\cap 2)})}\Big(q_3,\frac{\Sigma }{{{\omega _3}}},\frac{{{M^\omega }}}{{{\omega _3}}},\frac{{{{\tilde M}^\omega }}}{{{\omega _3}}},\frac{{2\pi }}{{{\omega _3}}},\frac{{{\omega _1}}}{{{\omega _3}}},\frac{{{\omega _2}}}{{{\omega _3}}}\Big) \;, \end{multline} where $q_\alpha = \exp\left(-\frac{8\pi^2}{g_{YM}^2}\frac{2\pi}{\omega_\alpha}\right)$. 
Each factor can be written as a sum over an $N$-tuple of Young diagrams \cite{Nekrasov:2002qd, Nekrasov:2003rj} \begin{equation} \vec Y=(Y_1, Y_2,\ldots, Y_N)\;, \quad \text{with} \quad Y_A=(Y_{A1}\geq Y_{A2}\geq \ldots \geq Y_{AW_{Y_A}}\geq Y_{A(W_{Y_A}+1)} = \ldots = 0 ) \end{equation} of a product over the contributions of vector and matter multiplets: \begin{multline}\label{IPFR4S1_SQCD} Z_{{\text{inst}}}^{(\widetilde{\mathcal T},{\mathbb{R}^4} \times {S^1})}\Big( {q,\frac{\beta }{{2\pi }}\Sigma,\frac{\beta }{{2\pi }}{M^\omega },\frac{\beta }{{2\pi }}{{\tilde M}^\omega },\beta ,{\epsilon _1},{\epsilon _2}} \Big)\\ =\sum_{\vec Y} {q^{|\vec Y|}}z_{\text{vect}}^{{\mathbb{R}^4} \times {S^1}}\left(\vec Y, {\frac{\beta }{{2\pi }}\Sigma} \right) z_{\text{fund}}^{{\mathbb{R}^4} \times {S^1}}\left(\vec Y, {\frac{\beta }{{2\pi }}\Sigma,\frac{\beta }{{2\pi }}{M^\omega }} \right)z_{\text{afund}}^{{\mathbb{R}^4} \times {S^1}}\left(\vec Y, {\frac{\beta }{{2\pi }}\Sigma,\frac{\beta }{{2\pi }}{{\tilde M}^\omega }} \right) \;. \end{multline} Here we have omitted the explicit dependence on $\epsilon_1, \epsilon_2$ in all factors $z^{{\mathbb{R}^4} \times {S^1}}$. The instanton counting parameter $q$ is given by $q=\exp\left(-\frac{8\pi^2\beta}{g_{YM}^2}\right),$ and $|\vec Y|$ denotes the total number of boxes in the $N$-tuple of Young diagrams. 
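As a purely illustrative aside (not part of the original derivation), the combinatorial organization of \eqref{IPFR4S1_SQCD} is easy to make concrete: truncating the instanton sum at order $q^{k_{\max}}$ amounts to enumerating all $N$-tuples of Young diagrams with at most $k_{\max}$ boxes in total. A minimal Python sketch, with function names of our own choosing and a generic placeholder for the summand $z_{\text{vect}}\,z_{\text{fund}}\,z_{\text{afund}}$:

```python
def partitions(k, max_part=None):
    """Yield the partitions of k as non-increasing tuples of parts (Young diagrams)."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def young_tuples(N, k):
    """Yield all N-tuples of Young diagrams with |Y| = k boxes in total,
    i.e. the configurations contributing at order q^k to the instanton sum."""
    if N == 1:
        for Y in partitions(k):
            yield (Y,)
        return
    for k1 in range(k + 1):
        for Y1 in partitions(k1):
            for rest in young_tuples(N - 1, k - k1):
                yield (Y1,) + rest

def ipf_truncated(q, summand, N, kmax):
    """Truncated instanton partition function: sum over |Y| <= kmax of
    q^{|Y|} * summand(Y), with summand standing in for z_vect * z_fund * z_afund."""
    return sum(q**k * summand(Y) for k in range(kmax + 1)
               for Y in young_tuples(N, k))
```

For instance, with the trivial summand $\equiv 1$ and $N=1$, `ipf_truncated` reproduces the truncated generating function of partitions, $1 + q + 2q^2 + 3q^3 + 5q^4 + \cdots$.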
The expression for $z_{{\text{fund}}}$ reads \begin{equation}\label{IPFR4S1_fund} z_{\text{fund}}^{{\mathbb{R}^4} \times {S^1}}\left(\vec Y, {\frac{\beta }{{2\pi }}\Sigma,\frac{\beta }{{2\pi }}{M^\omega }} \right)=\prod_{A = 1}^N \prod_{I = 1}^N \prod_{r = 1}^\infty \prod_{s=1}^{Y_{Ar}} 2i\sinh \pi i \left(\frac{\beta }{{2\pi }}(i\Sigma _A - iM^\omega_I) + r\epsilon_1 + s\epsilon_2 \right) \;, \end{equation} while those of $z_{{\text{vect}}}^{{\mathbb{R}^4} \times {S^1}}$ and $z_{{\text{afund}}}^{{\mathbb{R}^4} \times {S^1}}$ are given in \eqref{zvectIPF}-\eqref{z(a)fundIPF} in appendix \ref{appendix:IPF-factorization}.\footnote{In appendix \ref{appendix:IPF-factorization} we have simultaneously performed manipulations of four-dimensional and five-dimensional instanton partition functions, which is possible after introducing the generalized factorial with respect to a function $f(x)$, defined in appendix \ref{subapp: generalized factorials}, with $f(x)$ in four and five dimensions given in \eqref{fin4dand5d}. } Note that the masses that enter in \eqref{IPFR4S1_SQCD} are slightly shifted (see \cite{Okuda:2010ke}): \begin{equation} {M^\omega } \equiv M - \frac{i}{2}(\omega_1 + \omega_2 + \omega_3)\;, \qquad {\tilde M^\omega } \equiv \tilde M - \frac{i}{2}(\omega_1 + \omega_2 + \omega_3)\;. 
\end{equation} \subsubsection{Implementing the Higgsing prescription} As outlined in the previous section, to introduce intersecting codimension two defects wrapping the three-spheres $S^3_{(1)}$ and $S^3_{(2)}$ and labeled by the $n^{(1)}$-fold and $n^{(2)}$-fold symmetric representation respectively, we should consider the residue at the pole position \eqref{polepositionsS5pinching} with $n^{(3)}=0$ (and hence $n_A^{(3)}=0$ for all $A=1,\ldots, N$)\footnote{Recall that we have regrouped the mass for the $U(1)$ flavor symmetry and those for the $SU(N)$ flavor symmetry into $U(N)$ masses.} \begin{equation}\label{poleS3S3} i\Sigma_A = i M_{\sigma(A)} - n_A^{(1)} \omega_{1} - n_A^{(2)} \omega_{2} - \frac{\omega_1+\omega_2+\omega_3}{2} \qquad \text{for} \qquad A=1,\ldots,N\;, \end{equation} while treating the $\Sigma_A$ as $N$ independent variables, and sum over all partitions $\vec n^{(1)}$ of $n^{(1)}$ and $\vec n^{(2)}$ of $n^{(2)}$. As before, $\sigma$ is a permutation of $\{1, \ldots, N\}$ which we take to be, without loss of generality, $\sigma(A) = A$. At this point let us introduce the notation that ``$\rightarrow$'' means evaluating the residue at the pole \eqref{poleS3S3} and removing some spurious factors. As we aim to cast the result in the form of a matrix integral describing the coupled 5d/3d/1d system, we try to factorize all contributions accordingly into pieces that depend only on data associated with either three-sphere $S^3_{(1)}$ or $S^3_{(2)}$. As we will see, the non-factorizable pieces nicely cancel against each other, except for a factor that will ultimately describe the one-dimensional degrees of freedom residing on the intersection. It is straightforward to work out the residue at the pole position \eqref{poleS3S3}.
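The ``partitions'' $\vec n^{(\alpha)}$ summed over here are ordered assignments of non-negative integers $n^{(\alpha)}_A$ to the $N$ poles, with $\sum_{A} n^{(\alpha)}_A = n^{(\alpha)}$. As a small illustration (ours, with a hypothetical function name), they can be enumerated as follows:

```python
def higgsing_vectors(n, N):
    """Yield all vectors (n_1, ..., n_N) of non-negative integers with
    n_1 + ... + n_N = n, labeling the pole positions one sums over."""
    if N == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in higgsing_vectors(n - first, N - 1):
            yield (first,) + rest
```

By stars and bars there are $\binom{n+N-1}{N-1}$ such vectors for each defect.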
The classical action \eqref{S5classicalaction_SQCD} and the one-loop determinant \eqref{S5oneloop_SQCD} become, using recursion relations for the triple sine functions (see \eqref{recursion-triple-sine}),\footnote{Here we omitted on the right-hand side the left-over hypermultiplet contributions mentioned in the previous section as well as the classical action evaluated on the Higgs branch vacuum at infinity, \textsl{i.e.\@}, on the position-independent Higgs branch vacuum.} \begin{equation}\label{S5cl1loop_atpole_SQCD} Z_{\text{cl}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})}\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^5_{\vec\omega})} \rightarrow Z_{\text{1-loop}}^{(\mathcal T,S^5_{\vec\omega})}\ \Big(Z^{S^3_{(1)}}_{\text{cl}|\vec n^{(1)}}\ Z^{S^3_{(1)}}_{\text{1-loop}|\vec n^{(1)}}\Big)\ \Big(Z^{S^3_{(2)}}_{\text{cl}|\vec n^{(2)}}\ Z^{S^3_{(2)}}_{\text{1-loop}|\vec n^{(2)}}\Big) \Big(Z^{\widetilde{\mathcal T};\vec n^{(1)},\vec n^{(2)}}_{\text{cl,extra}}\ Z^{\widetilde{\mathcal T};\vec n^{(1)},\vec n^{(2)}}_{\text{1-loop,extra}}\Big)\;. \end{equation} Let us unpack this expression a bit. First, $Z_{\text{1-loop}}^{(\mathcal T,S^5_{\vec\omega})}$ is the one-loop determinant of $N^2$ free hypermultiplets, which constitute the infrared theory $\mathcal T.$ It reads \begin{equation} Z_{\text{1-loop}}^{(\mathcal T,S^5_{\vec\omega})} = \prod_{A=1}^N\prod_{J=1}^N \frac{1}{S_3(-iM_A + i \tilde M_J + |\vec \omega| \ |\ \vec \omega)} = \prod_{A=1}^N\prod_{J=1}^N \frac{1}{S_3(iM_A - i \tilde M_J \ |\ \vec \omega)}\;. \end{equation} Note that the masses of the $N^2$ free hypermultiplets, represented by a two-flavor-node quiver, are $M_{AJ} = M_A - \tilde M_J + i \frac{|\vec \omega|}{2}$. Recall that $\frac{1}{N}\sum_{J=1}^N i \tilde M_J = i \check{ \tilde M},$ while $ \frac{1}{N}\sum_{A=1}^N iM_A = \frac{|\vec \omega|}{2} + \frac{n^{(1)}}{N} \omega_1 + \frac{n^{(2)}}{N}\omega_2$. 
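As a quick numerical consistency check (ours, not part of the original text), the stated average of the masses follows from the pole position \eqref{poleS3S3} combined with the $SU(N)$ tracelessness of $\Sigma$; a minimal sketch, working directly with the real values $u_A = iM_A$:

```python
def i_sigma_at_pole(iM, n1, n2, omega):
    """i*Sigma_A at the pole (poleS3S3): iM_A - n1_A*w1 - n2_A*w2 - |w|/2.
    iM is the list of values of i*M_A, n1 and n2 the defect vectors."""
    absw = sum(omega)
    return [u - a * omega[0] - b * omega[1] - absw / 2
            for u, a, b in zip(iM, n1, n2)]
```

Imposing $\sum_A \Sigma_A = 0$ on the output then forces $\frac{1}{N}\sum_A iM_A = \frac{|\vec\omega|}{2} + \frac{n^{(1)}}{N}\omega_1 + \frac{n^{(2)}}{N}\omega_2$, as stated above.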
Second, we find the classical action and one-loop determinant of squashed three-sphere partition functions of a three-dimensional $\mathcal N=2$ supersymmetric $U(n^{(\alpha)})$ gauge theory with $N$ fundamental and $N$ antifundamental chiral multiplets and one adjoint chiral multiplet, \textsl{i.e.\@}{}, the quiver gauge theory \begin{center} \includegraphics[width=.2\textwidth]{./Figures-PDF/SQCDA_quiver} \end{center} We will henceforth call this theory `SQCDA.'\footnote{Note that the rank of the gauge group is the rank of one of the symmetric representations labeling the defects supported on the codimension two surfaces, or in other words, it can be inferred from the precise coefficients of the pole of the $\widetilde{\mathcal T}$ partition function, see \eqref{polepositionS5}.} These quantities are in their Higgs branch localized form,\footnote{\label{footnoteHBLS3}The squashed three-sphere partition function of a theory $\tau$ can be computed using two different localization schemes. The usual ``Coulomb branch localization'' computes it as a matrix integral of the schematic form \cite{Kapustin:2009kz,Jafferis:2010un,Hama:2010av,Hama:2011ea} \begin{equation*} Z^{(\tau,S^3_{b})} = \int \text{d}\sigma\ Z_{\text{cl}}^{({\tau},S^3_{b})}(\sigma)\ Z_{\text{1-loop}}^{({\tau},S^3_{b})}(\sigma)\;, \end{equation*} while a ``Higgs branch localization'' computation brings it into the form \cite{Fujitsuka:2013fga,Benini:2013yva} \begin{equation*} Z^{(\tau,S^3_{b})} = \sum_{\text{HV}} Z_{\text{cl}|\text{HV}}^{({\tau},S^3_{b})}\ Z_{\text{1-loop}|\text{HV}}^{({\tau},S^3_{b})}\ Z_{\text{vortex}|\text{HV}}^{(\tau,\mathbb R^2\times S^1)}(b)\ Z_{\text{vortex}|\text{HV}}^{(\tau, \mathbb R^2\times S^1)}(b^{-1})\;. \end{equation*} Here the sum runs over all Higgs vacua $\text{HV}$ and the subscript $|\text{HV}$ denotes that the quantity is evaluated in the Higgs vacuum HV. 
Furthermore, one needs to include two copies of the K-theoretic vortex partition function $Z_{\text{vortex}}^{\mathbb R^2\times S^1}$. The two expressions for $Z$ are related by closing the integration contours in the former and summing over the residues of the enclosed poles. In the main text the theory $\tau$ will always be SQCDA and hence we omit the superscripted label. Note that for SQCDA, the sum over vacua is a sum over partitions of the rank of the gauge group. See appendix \ref{HBLS2S3} for all the details.} hence the additional subscript indicating the Higgs branch vacuum, \textsl{i.e.\@}, the partition $\vec n^{(\alpha)}$. Their explicit expressions can be found in appendix \ref{subapp: S3 partition function}. The Fayet-Iliopoulos parameter $\xi^{(\alpha)}_{\text{FI}}$, the adjoint mass $m_X^{(\alpha)}$, and the fundamental and antifundamental masses $m_I^{(\alpha)},\tilde m_I^{(\alpha)}$ entering the three-dimensional partition function on $S^{3}_{(\alpha)}$ are identified with the five-dimensional parameters as follows, with ${\lambda _{(\alpha)} } \equiv \sqrt {{\omega _{(\alpha)} }/({\omega _1}{\omega _2}{\omega _3})}$, \begin{align}\label{parameteridentifications_S5_SQCD} &\xi _{{\text{FI}}}^{(\alpha)} = \frac{{8{\pi ^2}{\lambda _{(\alpha)} }}}{{g_{{\text{YM}}}^2}},& & m_X^{(\alpha)} = i{\omega _{(\alpha)} }{\lambda _{(\alpha)} } \;, & \\ &m^{(\alpha)}_I = {\lambda _{(\alpha)} }\left( { {M_I} + \frac{i}{2}(|\vec\omega | + {\omega _{(\alpha)} })} \right), & & \tilde m^{(\alpha)} _J = -i{\omega _{(\alpha)} }{\lambda _{(\alpha)} } + \lambda _{(\alpha)} \left(\tilde M_J + \frac{i}{2}(|\vec\omega | + {\omega _{(\alpha)} })\right) \;.&\label{parameteridentifications2_S5_SQCD} \end{align} Note that the relation on the $U(1)$ mass $\frac{1}{N}\sum_{I=1}^N iM_I = \frac{|\vec \omega|}{2} + \frac{n^{(1)}}{N} \omega_1 + \frac{n^{(2)}}{N}\omega_2 $ translates into a relation on the $U(1)$ mass of the fundamental chiral multiplets. 
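The identifications \eqref{parameteridentifications_S5_SQCD}-\eqref{parameteridentifications2_S5_SQCD} form a direct dictionary between five-dimensional and three-dimensional data; as an illustration (ours, with a hypothetical function name), they transcribe to:

```python
import math

def sqcda_parameters(alpha, omega, M, Mtilde, g_ym):
    """Transcription of eqs. (parameteridentifications_S5_SQCD)-(...):
    returns (xi_FI, m_X, fundamental masses, antifundamental masses)
    on S^3_(alpha), with alpha = 1, 2 or 3."""
    w = omega[alpha - 1]
    lam = math.sqrt(w / (omega[0] * omega[1] * omega[2]))  # lambda_(alpha)
    absw = sum(omega)                                       # |omega|
    xi_FI = 8 * math.pi**2 * lam / g_ym**2
    m_X = 1j * w * lam
    m_fund = [lam * (MI + 0.5j * (absw + w)) for MI in M]
    m_afund = [-1j * w * lam + lam * (MJ + 0.5j * (absw + w)) for MJ in Mtilde]
    return xi_FI, m_X, m_fund, m_afund
```

For the round case $\vec\omega=(1,1,1)$ one has $\lambda_{(\alpha)}=1$, so that $\xi_{\text{FI}}=8\pi^2/g_{\text{YM}}^2$ and $m_X^{(\alpha)}=i$.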
Finally, both the classical action and the one-loop determinant produce extra factors which cannot be factorized in terms of information depending only on $\vec n^{(1)}$ or $\vec n^{(2)}$, \begin{equation} Z_\text{1-loop,extra}^{\widetilde{\mathcal{T}};{{\vec n}^{(1)}},{{\vec n}^{(2)}}} = Z_{{\text{vf,extra}}}^{{{\vec n}^{(1)}},{{\vec n}^{(2)}}}(M)\ Z_{{\text{afund,extra}}}^{{{\vec n}^{(1)}},{{\vec n}^{(2)}}}(\tilde M)\;, \qquad Z_\text{cl,extra}^{\vec n^{(1)}, \vec n^{(2)}} = (q_3\bar q_3)^{- \sum_{A=1}^N n^{(1)}_A n^{(2)}_A}\;, \end{equation} where $Z_\text{afund,extra}^{\vec n^{(1)}, \vec n^{(2)}}$ captures the non-factorizable factors from the antifundamental one-loop determinant, while $Z_\text{vf,extra}^{\vec n^{(1)}, \vec n^{(2)}}$ captures those from the vector multiplet and fundamental hypermultiplet one-loop determinants; both can be found in \eqref{def:Z-afund-extra}-\eqref{def:Z-vf-extra}. These factors will cancel against factors produced by the instanton partition functions, which we consider next. When employing the Higgsing prescription to compute the partition function in the presence of defects, the most interesting part of the computation is the analysis of the instanton partition functions \eqref{IPFS5_SQCD} evaluated at the value \eqref{poleS3S3} of their gauge equivariant parameter. We find that each term in the sum over Young diagrams can be brought into an almost factorized form. As mentioned before, certain non-factorizable factors cancel against the extra factors in \eqref{S5cl1loop_atpole_SQCD}, but a simple non-factorizable factor remains. When recasting the final expression in the form of a 5d/3d/1d coupled system, it is precisely this latter factor that captures the contribution of the degrees of freedom living on the intersection $S^1_{(1\cap 2)}$ of the three-spheres on which the defects live.
Let us start by analyzing the instanton partition functions $Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(2\cap 3)})}$ and $Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(1\cap 3)})}$. It is clear from \eqref{IPFR4S1_fund} that upon plugging in the gauge equivariant parameter \eqref{poleS3S3} in $Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(2\cap 3)})}$, the $N$-tuple of Young diagrams $\vec Y$ has zero contribution if any of the Young diagrams $Y_A$ has more than $n^{(2)}_A$ rows. Similarly, $Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(1\cap 3)})}$ does not receive contributions from $\vec Y$ if any of its members $Y_A$ has more than $n^{(1)}_A$ rows. Hence the sum over Young diagrams simplifies to a sum over all possible sequences of $n^{(\alpha)}$ non-decreasing integers. The summands of the instanton partition functions undergo many simplifications at the special value for the gauge equivariant parameter, and in fact one finds that they become precisely the K-theoretic vortex partition function for SQCDA upon using the parameter identifications \eqref{parameteridentifications_S5_SQCD} (see appendix \ref{subapp: reduction-to-vortex-partition-function} for more details):\footnote{This fact has for example also been observed in \cite{Bonelli:2011wx,Bonelli:2011fq,Nieri:2013vba,Aganagic:2013tta}, and can also be read off from the brane picture in figure \ref{fig:manynodequiver_higgsed}. Before Higgsing, the instantons of the extra $SU(N)$ gauge node are realized by D0-branes spanning in between the NS5-branes. After Higgsing, the D0-branes can still be present if they end on the D2$_\text{R}$ and D2$_\text{L}$-branes. 
If, say, $n^{\text{L}}=0$, they precisely turn into vortices of the two-dimensional theory living on the D2-branes.} \begin{equation} Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(2\cap 3)})} \rightarrow Z_{\text{vortex}|\vec{n}^{(2)}}^{\mathbb R^2\times S^1_{(2\cap 3)}}(b_{(2)}^{-1})\;, \qquad Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(1\cap 3)})} \rightarrow Z_{\text{vortex}|\vec{n}^{(1)}}^{\mathbb R^2\times S^1_{(1\cap 3)}}(b_{(1)}^{-1})\;, \end{equation} with the three-dimensional squashing parameters defined as \begin{equation} {b_{(1)}} \equiv \sqrt {{\omega _2}/{\omega _3}} , \qquad {b_{(2)}} \equiv \sqrt {{\omega _1}/{\omega _3}}, \qquad {b_{(3)}} \equiv \sqrt {{\omega _1}/{\omega _2}} \;. \end{equation} The third instanton partition function, $Z_{{\text{inst}}}^{(\widetilde{\mathcal T},\mathbb R^4\times S^1_{(1\cap 2)})}$, behaves more intricately upon substituting the value \eqref{poleS3S3} of the gauge equivariant parameter. From \eqref{IPFR4S1_fund} one immediately finds that $N$-tuples of Young diagrams $\vec Y$ have zero contribution if any of their constituent diagrams $Y_A$ contains the ``forbidden box'' with coordinates $\text{(column,row)}=(n_A^{(1)}+1, n_A^{(2)}+1)$. We split the remaining sum over $N$-tuples of Young diagrams into two, by defining \textit{large} $N$-tuples as those $N$-tuples all of whose members $Y_A$ contain the box with coordinates $(n_A^{(1)}, n_A^{(2)})$, and calling all other $N$-tuples \textit{small}. Let us focus on the former sum first. Given a large $N$-tuple $\vec Y$, we define $\vec Y^\text{L}$ and $\vec Y^\text{R}$ as the Young diagrams \begin{equation}\label{defYLYR} \begin{aligned} &Y^\text{L}_{Ar} = Y_{Ar} - n^{(2)}_A &&\quad \text{for} \quad 1\leq r \leq n^{(1)}_A\;, \qquad \text{and}\; \qquad Y^\text{L}_{Ar} =0 \quad &&\text{for} \quad n^{(1)}_A < r \\ &Y^\text{R}_{Ar} = Y_{A(n^{(1)}_A + r)} &&\quad \text{for} \quad 1 \le r\;.
&& \end{aligned} \end{equation} Furthermore, we define the non-decreasing sequences of integers \begin{equation}\label{definitionsms} \mathfrak{m}_{A\mu }^{\text{L}} \equiv Y_{A(n_A^{(1)} - \mu )}^{\text{L}},\quad\mu = 0,...,n_A^{(1)} - 1,\qquad \qquad \mathfrak{m}_{A\nu }^{\text{R}} \equiv \tilde Y_{A(n_A^{(2)} - \nu )}^{\text{R}},\quad\nu = 0,...,n_A^{(2)} - 1\;, \end{equation} where $\tilde Y^{\text{R}}_A$ denotes the transposed diagram of $Y^{\text{R}}_A$. Figure \ref{fig:Y} clarifies these definitions. \begin{figure}[t] \centering \includegraphics[width=.8\textwidth]{./Figures-PDF/Y} \caption{\label{fig:Y} A constituent $Y$ of a large $N$-tuple of Young diagrams $\vec Y$ for $n^{(1)}=4, n^{(2)}=8$. The red box denotes the ``forbidden box'' with coordinates $(n^{(1)}+1,n^{(2)}+1).$ The green and blue areas denote $Y^\text{L}$ and $Y^\text{R}$ respectively. The definitions of $\mathfrak m_\mu^{\text L}$ and $\mathfrak m_\nu^{\text R},$ see \eqref{definitionsms}, are also indicated.} \end{figure} With these definitions in place, one can show (see appendix \ref{subapp:factorization-of-instanton-partition-function-large}) the following factorization of the summand of the instanton partition function for large tuples of Young diagrams $\vec Y$ \begin{multline} q_3^{|\vec Y_{\text{large}}|}\ Z_{{\text{inst}}}^{(\widetilde{\mathcal T},{\mathbb{R}^4} \times {S^1_{(1\cap 2)}})} \left(\vec Y_{\text{large}}\right) \rightarrow\ q_3^{|\mathfrak m^{\text L}|+|\mathfrak m^{\text R}|}\ Z_{\text{vortex}|\vec n^{(1)}}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}(\mathfrak m^{\text L}|b_{(1)} )\ Z_{\text{vortex}|\vec n_2}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}(\mathfrak m^{\text R}| b_{(2)} ) \\ \times Z_{\text{intersection}}^{\text{large}|\vec n^{(1)},\vec n^{(2)}}(\mathfrak m^{\text L},\mathfrak m^{\text R})\ \Big(Z^{\widetilde{\mathcal T};\vec n^{(1)},\vec n^{(2)}}_{\text{cl,extra}}\ Z^{\widetilde{\mathcal T};\vec n^{(1)},\vec n^{(2)}}_{\text{1-loop,extra}}\Big)^{-1} 
\;.\label{factorizeLarge} \end{multline} Here we used $Z_{\text{vortex}|\vec n}^{\mathbb{R}^2\times S^1}(\mathfrak m|b )$ to denote the summand of the $U(n)$ SQCDA K-theoretic vortex partition function, \textsl{i.e.\@}{}, \begin{equation} Z_{\text{vortex}|\vec n}^{\mathbb{R}^2\times S^1}( b ) = \sum_{\substack{\mathfrak m_{A\mu}\geq 0\\\mathfrak m_{A\mu}\leq \mathfrak m_{A(\mu+1)}}} z_b^{|\mathfrak m|} Z_{\text{vortex}|\vec n}^{\mathbb{R}^2\times S^1}(\mathfrak m| b ) \;, \end{equation} where $|\mathfrak m|=\sum_A\sum_\mu \mathfrak m_{A\mu}.$ (See appendix \ref{subapp: S3 partition function} for concrete expressions.) The factor $Z_{\text{intersection}}^{\text{large}|\vec n^{(1)},\vec n^{(2)}}$ is given by \begin{align}\label{intersectionfactorlarge} &Z_{\text{intersection}}^{\text{large}|\vec n^{(1)},\vec n^{(2)}}(\mathfrak m^{\text L},\mathfrak m^{\text R})\nonumber\\ &\equiv \;\prod_{A,B = 1}^N \prod_{\mu = 0}^{n^{(1)}_A - 1} \prod_{\nu = 0}^{n^{(2)}_B - 1} {\Big(2i\sinh \pi i \Big[i\frac{\beta}{2\pi}(M_A-M_B) + \epsilon_2 (\mathfrak m_{A\mu}^{\text{L}}+\nu) - \epsilon_1(\mathfrak m_{B\nu}^{\text{R}}+\mu) - \epsilon_1\Big]\Big)^{-1} }\nonumber\\ &\qquad\qquad\qquad\quad\ \ \ \times{\Big(2i\sinh \pi i \Big[i\frac{\beta}{2\pi}(M_A-M_B) + \epsilon_2 (\mathfrak m_{A\mu}^{\text{L}}+\nu) - \epsilon_1(\mathfrak m_{B\nu}^{\text{R}}+\mu) + \epsilon_2 \Big]\Big)^{-1} }\;. \end{align} As announced, the extra factors in the second line of \eqref{factorizeLarge} cancel against those in \eqref{S5cl1loop_atpole_SQCD}. For small diagrams, we can still define $\vec Y^{\text{R}}$ as in the second line of \eqref{defYLYR}, but $\vec Y^{\text{L}}$ is not a proper $N$-tuple of Young diagrams due to the presence of negative entries. 
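Before treating the small diagrams, note that the decomposition \eqref{defYLYR}-\eqref{definitionsms} is purely combinatorial and can be checked directly. The sketch below (ours; the parts of $Y$ are read so that the ``large'' condition becomes $Y_{n^{(1)}} \geq n^{(2)}$ for each diagram) also verifies the box count $|Y| = n^{(1)} n^{(2)} + |\mathfrak m^{\text{L}}| + |\mathfrak m^{\text{R}}|$, consistent with the exponent in $Z^{\vec n^{(1)},\vec n^{(2)}}_{\text{cl,extra}}$ compensating the mismatch of $q_3$-powers in \eqref{factorizeLarge}:

```python
def transpose(Y):
    """Transpose of a Young diagram given as a non-increasing tuple of parts."""
    return tuple(sum(1 for p in Y if p > c) for c in range(Y[0])) if Y else ()

def decompose_large(Y, n1, n2):
    """Split a 'large' diagram Y into Y_L, Y_R of eq. (defYLYR) and the
    non-decreasing sequences m_L, m_R of eq. (definitionsms)."""
    Y = tuple(Y) + (0,) * n1                       # pad with empty parts if needed
    assert n1 == 0 or Y[n1 - 1] >= n2, "diagram is not large"
    Y_L = tuple(Y[r] - n2 for r in range(n1))      # first n1 parts, minus n2
    Y_R = tuple(p for p in Y[n1:] if p > 0)        # remaining parts
    assert all(p <= n2 for p in Y_R), "forbidden box is not avoided"
    Y_R_t = transpose(Y_R) + (0,) * n2
    m_L = tuple(Y_L[n1 - 1 - mu] for mu in range(n1))    # m_L[mu] = Y_L[n1 - mu]
    m_R = tuple(Y_R_t[n2 - 1 - nu] for nu in range(n2))  # m_R[nu] = (Y_R^t)[n2 - nu]
    return Y_L, Y_R, m_L, m_R
```

For instance, $Y=(10,9,9,8,4,3,3,1)$ with $n^{(1)}=4$, $n^{(2)}=8$ gives $\mathfrak m^{\text{L}}=(0,1,1,2)$ and $|Y| = 47 = 4\cdot 8 + 4 + 11$.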
Nevertheless, we can define sets of non-decreasing integers as \begin{equation} \mathfrak m_{A\mu}^{\text{L}} \equiv Y_{A(n_A^{(1)}-\mu)}-n^{(2)}_A\;, \quad \text{for} \quad 0\leq \mu \leq n_A^{(1)}-1\;, \qquad \mathfrak m_{A\nu}^{\text{R}}\equiv \tilde Y^{\text{R}}_{A(n_A^{(2)}-\nu)}\;, \quad \text{for} \quad 0\leq \nu \leq n_A^{(2)}-1\;.\label{smallms} \end{equation} It is clear that $\mathfrak{m}_{A\mu}^\text{L}$ can take negative values. Then one can show (see appendix \ref{subapp:factorization-for-small-young-diagrams}) that \begin{multline} q_3^{|\vec Y_{\text{small}}|}\ Z_{{\text{inst}}}^{(\widetilde{\mathcal T},{\mathbb{R}^4} \times {S^1_{(1\cap 2)}})} \left(\vec Y_{\text{small}}\right) \rightarrow\ q_3^{|\mathfrak m^{\text L}|+|\mathfrak m^{\text R}|}\ Z_{\text{(semi-)vortex}|\vec n^{(1)}}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}(\mathfrak m^{\text L}|b_{(1)} )\ Z_{\text{vortex}|\vec n^{(2)}}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}(\mathfrak m^{\text R}| b_{(2)} ) \\ \times Z_{\text{intersection}}^{\vec n^{(1)},\vec n^{(2)}}(\mathfrak m^{\text L},\mathfrak m^{\text R})\ \Big(Z^{\widetilde{\mathcal T};\vec n^{(1)},\vec n^{(2)}}_{\text{cl,extra}}\ Z^{\widetilde{\mathcal T};\vec n^{(1)},\vec n^{(2)}}_{\text{1-loop,extra}}\Big)^{-1} \;.\label{factorizeSmall} \end{multline} The intersection factor for generic (small) $N$-tuples of Young diagrams is a generalization of \eqref{intersectionfactorlarge} that can be found explicitly in \eqref{smallintersectionfactor}. The factor $Z_{\text{(semi-)vortex}|\vec n^{(1)}}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}(\mathfrak m^{\text L}|b_{(1)} )$ is a somewhat complicated expression generalizing $Z_{\text{vortex}|\vec n^{(1)}}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}$, which we present in \eqref{smallfactorization}.
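The small-diagram case is again a finite combinatorial statement that can be checked by brute force. The sketch below (ours, with hypothetical function names; diagram parts read as in the large-diagram case) implements \eqref{smallms} for an arbitrary diagram avoiding the forbidden box, allowing negative entries in $\mathfrak m^{\text{L}}$, and can be used to verify that the map $Y \mapsto (\mathfrak m^{\text{L}},\mathfrak m^{\text{R}})$ is one-to-one, so that the sum over diagrams may be traded for a sum over the $\mathfrak m$'s:

```python
def diagram_to_ms(Y, n1, n2):
    """Map a Young diagram avoiding the forbidden box to (m_L, m_R), eq. (smallms).
    For small diagrams the entries of m_L may be negative."""
    Y = tuple(Y) + (0,) * (n1 + 1)               # pad with empty parts
    assert Y[n1] <= n2, "forbidden box is not avoided"
    m_L = tuple(Y[n1 - 1 - mu] - n2 for mu in range(n1))
    Y_R = tuple(p for p in Y[n1:] if p > 0)      # parts beyond the first n1
    Y_R_t = tuple(sum(1 for p in Y_R if p > c) for c in range(n2))  # transpose
    m_R = tuple(Y_R_t[n2 - 1 - nu] for nu in range(n2))
    return m_L, m_R
```

The box count $|Y| = n^{(1)} n^{(2)} + |\mathfrak m^{\text{L}}| + |\mathfrak m^{\text{R}}|$ continues to hold, now with $|\mathfrak m^{\text{L}}|$ possibly negative.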
Putting everything together, and noting that summing over all $N$-tuples of Young diagrams avoiding the forbidden box is equivalent to summing over all possible values of $\mathfrak m^{\text{L/R}}_{A\mu}$ , we find the following result for the Higgsed partition function \begin{align}\label{totalS5n1n2} Z^{(\widetilde{\mathcal T},S^5_{\vec\omega})}\rightarrow Z_{\text{1-loop}}^{(\mathcal T,S^5_{\vec\omega})}\ &\Bigg(\ \sideset{}{^{\prime}}\sum_{\text{large }\vec Y} Z_{\vec n^{(1)}}(\mathfrak m^{\text{L}}| b_{(1)} )\;\; Z_{\text{intersection}}^{\text{large}|\vec n^{(1)},\vec n^{(2)}}(\mathfrak m^{\text{L}},\mathfrak m^{\text{R}}) \;\; Z_{\vec n^{(2)}}(\mathfrak m^{\text{R}}| b_{(2)} )\nonumber \\ & \quad + \sideset{}{^{\prime}}\sum_{\text{small }\vec Y} \hat Z_{\vec n^{(1)}}(\mathfrak m^{\text{L}}| b_{(1)} )\;\; Z_{\text{intersection}}^{\vec n^{(1)},\vec n^{(2)}}(\mathfrak m^{\text{L}},\mathfrak m^{\text{R}}) \;\; Z_{\vec n^{(2)}}(\mathfrak m^{\text R}| b_{(2)} )\Bigg) \end{align} where \begin{align}\label{Zn1} Z_{\vec n^{(1)}}(\mathfrak m^{\text{L}}| b_{(1)} ) &= \ Z^{S^3_{(1)}}_{\text{cl}|\vec n^{(1)}}\ Z^{S^3_{(1)}}_{\text{1-loop}|\vec n^{(1)}}\ q_3^{|\mathfrak m^{\text L}|}Z_{\text{vortex}|\vec n^{(1)}}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}(\mathfrak m^{\text L}|b_{(1)} ) \ Z_{\text{vortex}|\vec n^{(1)}}^{\mathbb{R}^2\times S^1_{(1\cap 3)}}(b_{(1)}^{-1} )\;, \end{align} and similarly for $Z_{\vec n^{(2)}}(\mathfrak m^{\text R}| b_{(2)} )$. The expression for $\hat Z_{n_1}(\mathfrak m^\text{L}|b_{(1)} )$ is obtained by replacing $Z_{\text{vortex}|n_1}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}$ with $Z_{\text{(semi-)vortex}|n_1}^{\mathbb{R}^2\times S^1_{(1\cap 2)}}$. The prime on the sums over Young diagrams in \eqref{totalS5n1n2} indicates that only $N$-tuples of Young diagrams avoiding the ``forbidden box'' are included. 
To obtain the final result of the Higgsed partition function, we need to sum the right-hand side of \eqref{totalS5n1n2} over all partitions $\vec n^{(1)}$ of $n^{(1)}$ and $\vec n^{(2)}$ of $n^{(2)}$. \subsubsection{Matrix model description and 5d/3d/1d coupled system}\label{subsubsection: matrixmodel5d/3d/1d} Our next goal is to write down a matrix model integral that reproduces the $S^5_{\vec \omega}$-partition function of the theory $\mathcal T$ of $N^2$ free hypermultiplets in the presence of intersecting codimension two defects, \textsl{i.e.\@}{}, a matrix integral that upon closing the integration contours appropriately reproduces the expression on the right-hand side of \eqref{totalS5n1n2}, summed over all partitions of $n^{(1)}$ and $n^{(2)}$, as its sum over residues of encircled poles. A candidate matrix model is obtained relatively easily by analyzing the contribution of the large tuples of Young diagrams in \eqref{totalS5n1n2}. It reads \begin{equation}\label{matrixmodelS3US3} Z^{(\mathcal T,S^3_{(1)}\cup S^3_{(2)}\subset S^5_{\vec \omega})} = \frac{Z_{\text{1-loop}}^{(\mathcal T,S^5_{\vec\omega})}}{n^{(1)}!n^{(2)}!} \int_{\mathrm{JK}} \prod_{a=1}^{n^{(1)}} \text{d}\sigma^{(1)}_a \ \prod_{b=1}^{n^{(2)}} \text{d}\sigma^{(2)}_b \ Z^{S^3_{(1)}}(\sigma^{(1)}) \ Z_{\text{intersection}}(\sigma^{(1)},\sigma^{(2)})\ Z^{S^3_{(2)}}(\sigma^{(2)})\;, \end{equation} where $Z^{S^3_{(1)}}(\sigma^{(1)})$ denotes the classical action times the one-loop determinant of the $S^3_{(1)}$ partition function of SQCDA, that is, of a three-dimensional $\mathcal N=2$ gauge theory with gauge group $U(n^{(1)})$, and $N$ fundamental, $N$ antifundamental and one adjoint chiral multiplet, and similarly for $Z^{S^3_{(2)}}(\sigma^{(2)})$.\footnote{See appendix \ref{subapp: S3 partition function} for concrete expressions for the integrand of the three-sphere partition function.} The contribution from the intersection $S^1_{(1\cap 2)}$ reads \begin{equation} 
Z_{\text{intersection}}(\sigma^{(1)},\sigma^{(2)}) = \prod_{a = 1}^{n^{(1)} } \prod_{b = 1}^{n^{(2)} } \prod_{\pm} \left[2i\sinh \pi i \Big(\Delta_{ab}\pm\frac{1}{2}\big(b_{(1)}^2 +b_{(2)}^2\big)\Big)\right]^{-1} \;, \end{equation} with $\Delta _{ab} =-i b_{(2)}\sigma ^{(2)}_b +i b_{(1)}\sigma ^{(1)}_a$. Note that from \eqref{parameteridentifications_S5_SQCD} we deduce that the Fayet-Iliopoulos parameters $\xi_\text{FI}^{(1)}$ and $\xi_\text{FI}^{(2)}$ are both positive. The mass and other parameters on \emph{both} three-spheres satisfy relations which follow from the identifications in \eqref{parameteridentifications_S5_SQCD}-\eqref{parameteridentifications2_S5_SQCD}. Concretely, we find \begin{equation}\label{parameterS3US3} \begin{aligned} &b_{(1)}\xi_{\text{FI}}^{(1)} = b_{(2)} \xi_{\text{FI}}^{(2)}\;, \qquad &&b_{(1)}\left(m^{(1)}_I + \frac{i}{2}b_{(1)}\right) = b_{(2)}\left(m^{(2)}_I +\frac{i}{2} b_{(2)}\right)\;,\qquad && m_X^{(1)} = i\frac{b_{(2)}^2}{b_{(1)}}\;,\\ & &&b_{(1)}\left(\tilde m^{(1)}_J-\frac{i}{2}b_{(1)} \right) = b_{(2)}\left(\tilde m^{(2)}_J -\frac{i}{2}b_{(2)} \right)\;, \qquad && m_X^{(2)} = i\frac{b_{(1)}^2}{b_{(2)}}\;, \end{aligned} \end{equation} where $m^{(\alpha)}_I, \tilde m^{(\alpha)}_J$ and $m^{(\alpha)}_X$ are the fundamental, antifundamental and adjoint masses on the respective spheres. Moreover, the differences of the relations in \eqref{parameteridentifications2_S5_SQCD}, for fixed $\alpha$, relate the three-dimensional mass parameters on $S^3_{(\alpha)}$ to the five-dimensional mass parameters of the $N^2$ free hypermultiplets, \textsl{i.e.\@}{}, to $M_{IJ}=M_I-\tilde M_J + i \frac{|\vec \omega|}{2}$: \begin{equation}\label{MIJ_S5_relation} M_{IJ} = \lambda _{(\alpha )}^{ - 1}\left( {m_I^{(\alpha )} - \tilde m_J^{(\alpha )}} \right) - i{\omega _\alpha } + i\frac{{|\vec \omega |}}{2}\;. 
\end{equation} The matrix integral \eqref{matrixmodelS3US3} is evaluated using a Jeffrey-Kirwan-like residue prescription\cite{1993alg.geom..7001J}. We have derived it explicitly by demanding that the integral \eqref{matrixmodelS3US3} reproduces the result of the Higgsing computation (see below). The prescription is fully specified by the following charge assignments: the matter fields that contribute to $Z_{S^3_{(1)}}(\sigma^{(1)})$ and $Z_{S^3_{(2)}}(\sigma^{(2)})$ are assigned their standard charges under the maximal torus $U(1)^{n^{(1)}}\times U(1)^{n^{(2)}}$ of the total gauge group $U(n^{(1)})\times U(n^{(2)})$, while \textit{all} factors contributing to $Z_{\text{intersection}}(\sigma^{(1)},\sigma^{(2)})$ are assigned charges of the form $(0,\ldots, 0, +b_{(1)},0\ldots,0 \ ;\ 0,\ldots, 0, -b_{(2)},0\ldots,0 )$. Furthermore, we pick the JK-vector $\eta = (\xi_{\text{FI}}^{(1)}; \xi_{\text{FI}}^{(2)})$, where we treat the Fayet-Iliopoulos parameters as an $n^{(1)}$-vector and $n^{(2)}$-vector respectively. Recall from \eqref{parameteridentifications_S5_SQCD} that both are positive. Before verifying that the matrix model \eqref{matrixmodelS3US3}, with the pole prescription just described, indeed faithfully reproduces the expression \eqref{totalS5n1n2} summed over all partitions $\vec n^{(1)},\vec n^{(2)}$, we remark that it takes precisely the form of the partition function of the 5d/3d/1d coupled system of figure \ref{fig:InsertSymmetrics5dfreeHM}, which is the trivial dimensional uplift of figure \ref{fig:InsertSymmetrics} specialized to the case of $N^2$ free hypermultiplets described by a two-flavor-node quiver. This statement can be verified by dimensionally uplifting the localization computation of \cite{Gomis:2016ljm}. \begin{figure}[t!] 
\centering \includegraphics[width=0.3\textwidth]{./Figures-PDF/InsertSymmetrics5dfreeHM} \caption{\label{fig:InsertSymmetrics5dfreeHM} Coupled 5d/3d/1d quiver gauge theory realizing intersecting M2-brane surface defects labeled by $n^{\text{R}}$- and $n^{\text{L}}$-fold symmetric representations in the five-dimensional theory of $N^2$ free hypermultiplets. The three-dimensional degrees of freedom are depicted in $\mathcal N=2$ quiver gauge notation, while the one-dimensional ones are denoted using one-dimensional $\mathcal N=2$ quiver notations, with solid lines representing chiral multiplets. Various superpotential couplings are turned on, as in figure \ref{fig:InsertSymmetrics} (see \cite{Gomis:2016ljm}). Applying the Higgsing prescription to SQCD precisely results in the partition function of this quiver gauge theory.} \end{figure} In some detail, $Z_{\text{1-loop}}^{(\mathcal T,S^5_{\vec\omega})}$ captures the contributions to the partition function of the five-dimensional degrees of freedom, \textsl{i.e.\@}{}, of the theory $\mathcal T$ consisting of $N^2$ free hypermultiplets, while $Z^{S^3_{(\alpha)}}$ encodes those of the degrees of freedom living on $S^3_{(\alpha)}$, described by $U(n^{(\alpha)})$ SQCDA, for $\alpha=1,2$, and the factor $Z_{\text{intersection}}$ precisely equals the one-loop determinant of the one-dimensional bifundamental chiral multiplets living on the intersection $S^3_{(1)} \cap S^3_{(2)} = S^1_{(1\cap 2)}$. Moreover, the mass relations \eqref{MIJ_S5_relation}, which we find straightforwardly from the Higgsing prescription, are the consequences of cubic superpotential couplings in the 5d/3d/1d coupled system, which were analyzed in detail in \cite{Gomis:2016ljm}. The mass relations among the (anti)fundamental chiral multiplet masses in \eqref{parameterS3US3} are in fact a solution of \eqref{MIJ_S5_relation} obtained by subtracting the equation for $\alpha=1$ and $\alpha=2$ and subsequently performing a separation of the indices $I,J$. 
The separation constants appearing in the resulting solutions can be shifted to arbitrary values by performing a change of variables in the three-dimensional integrals, up to constant prefactors stemming from the classical actions. The Higgsing prescription also fixes the classical actions and hence we find specific values for the separation constants. The adjoint masses in \eqref{parameterS3US3} are the consequence of a quartic superpotential. Also observe that our computation fixes the flavor charge of the one-dimensional chiral multiplets, which enter explicitly in $Z_{\text{intersection}}$, and for which no first-principles argument was provided in \cite{Gomis:2016ljm}. The integrand of \eqref{matrixmodelS3US3} has poles in each of the three factors; the Jeffrey-Kirwan-like residue prescription is such that, among others, it picks out classes of poles, which we refer to as poles of type-$\hat \nu$. They read, for partitions $\vec n^{(1)}$ and $\vec n^{(2)}$ of $n^{(1)}$ and $n^{(2)}$ respectively, over all of which we sum, and for sequences of integers $\{\hat \nu_A\}$ where $\hat \nu_A \in \{-1, 0, \ldots, n_A^{(2)} - 1\}$, \begin{equation}\label{polesnuhat_main} \begin{aligned} & \text{poles of type-}\hat{\nu}: & \sigma _{A\mu} ^{(1)} = & \;m^{(1)}_A + \mu m_X^{(1)} - i\mathfrak{m}_{A\mu} ^{\text{L}}{b_{(1)}} - i\mathfrak{n}_{A\mu} ^{\text{L}}b_{(1)}^{ - 1} , \qquad && \mu = 0, \ldots, n^{(1)}_A - 1\\ & ~ & \sigma _{B\nu} ^{(2)} = & \;{m_B^{(2)}} + \nu m_X^{(2)} - i\mathfrak{m}_{B\nu} ^{\text{R}}{b_{(2)}} - i\mathfrak{n}_{B\nu} ^{\text{R}}b_{(2)}^{ - 1} \;, \qquad &&\nu = 0, \ldots, n^{(2)}_B - 1\;. 
\end{aligned} \end{equation} where $\mathfrak{m}^{\text{L}}_\mu$, $\mathfrak{m}^{\text{R}}_\nu$, $\mathfrak{n}^{\text{L}}_\mu$, $\mathfrak{n}^{\text{R}}_\nu$ are non-decreasing sequences of integers, such that $\mathfrak{n}_{A\mu} ^{\text{L}},\mathfrak{n}_{B\nu} ^{\text{R}} \geqslant 0$ and (where $\hat \nu$ enters) \begin{equation}\label{polesnuhat_main2} \left\{ \begin{aligned} & \mathfrak{m}_{A\mu \ge 0} ^{\text{L}} \ge 0\qquad &&{\text{if }}\hat\nu_A = -1 \\ & \mathfrak{m}_{A\mu \ge 1}^{\text{L}} \ge \mathfrak{m}_{A0}^{\text{L}} = - \hat\nu_A - 1 \quad &&{\text{if }}\hat\nu_A \ge 0 \hfill \\ \end{aligned} \right.\;, \qquad \quad \mathfrak{m}_{0 \leqslant \nu \leqslant \hat \nu_A}^{\text{R}} = 0\;,\qquad\mathfrak{m}_{\nu \ge \hat \nu_A + 1 }^{\text{R}} \ge 0\;. \end{equation} Note that if all $\hat{\nu}_A = -1$ these poles are simply obtained by assigning to $\sigma^{(1)}$ a pole position of ${Z_{S_{(1)}^3}}$ and to $\sigma^{(2)}$ a pole position of ${Z_{S_{(2)}^3}}$, whose residues precisely reproduce the sum over large diagrams in \eqref{totalS5n1n2}. Precisely this fact motivated the candidate matrix model in \eqref{matrixmodelS3US3}. In appendix \ref{subapp:constructing-Young-diagrams}, we describe a simple algorithm to construct Young diagrams avoiding the ``forbidden box'' associated with poles of type-$\hat \nu$. Furthermore, we show in appendix \ref{subapp:residues-and-instanton-partition-function} that the sum over the corresponding residues precisely reproduce the sum over Young diagrams in \eqref{totalS5n1n2}. Finally, we show in appendix \ref{subapp:extra-poles-and-diagrams} that the residues of poles not of type-$\hat \nu$, but contained in the Jeffrey-Kirwan-like prescription, cancel among themselves by studying a simplified example. We thus conclude that the integral \eqref{matrixmodelS3US3} indeed faithfully reproduces the sum over Young diagrams in \eqref{totalS5n1n2}. 
\subsection{Intersecting surface defects on \texorpdfstring{$S^4_{b}$}{S4b}} Let us next study the partition function of $N^2$ free hypermultiplets on $S^4_{b}$ in the presence of intersecting codimension two defects wrapping the two-spheres $S^2_{\text{L/R}}$, the fixed loci of the $U(1)^{\text{L/R}}$ isometries (see footnote \ref{definitionsquashedfoursphere}). The intersection of $S^2_{\text{L}}$ with $S^2_{\text{R}}$ consists of two points. The analysis largely parallels the one in the previous subsection, so we will be more brief. \subsubsection{\texorpdfstring{$S^4_{b}$}{S4b} partition function of \texorpdfstring{$\widetilde{\mathcal T}$}{\widetilde T}} The theory $\widetilde{\mathcal T}$ is an $\mathcal N=2$ supersymmetric gauge theory with gauge group $SU(N)$ and $N$ fundamental and $N$ antifundamental hypermultiplets. Its squashed four-sphere partition function is computed by the matrix integral \eqref{S4bpartitionfunction} (or \eqref{S4bpartitionfunctionsplit}), \begin{equation}\label{S4b_SQCD_partitionfunction} Z^{(\widetilde{\mathcal T},S^4_{b})}(M,\tilde M) = \int \text{d}\Sigma \ Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma)\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma, M,\tilde M)\ |Z_{\text{inst.}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,\Sigma ,M^\epsilon ,\tilde M^\epsilon )|^2 \;. 
\end{equation} The classical action is given by \begin{equation}\label{classactionSQCD} Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma) = \exp\left[-\frac{8\pi^2}{g_{\text{YM}}^2} \Tr\Sigma^{2} \right] \end{equation} and the one-loop factor reads \begin{equation}\label{oneloopSQCD} Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S_b^4)}(\Sigma, M,\tilde M) = Z_{{\text{vect}}}^{S_b^4}(\Sigma )\ Z_{\text{fund}}^{S_b^4}(\Sigma ,M)\ Z_{\text{afund}}^{S_b^4}(\Sigma ,\tilde M)\;, \end{equation} where \begin{align} Z_{\text{fund}}^{S_b^4}(\Sigma ,M) = &\; \prod_{I = 1}^N {\prod_{A = 1}^N {\frac{1}{{{\Upsilon _b}(i{\Sigma _A}-i{M_I} + Q/2)}}} } \;,\qquad &Z_{{\text{vect}}}^{S_b^4}(\Sigma ) = &\; \prod_{\substack{A,B = 1\\A \ne B}}^N {{\Upsilon _b}(i{\Sigma _A}-i{\Sigma _B})}\;,\nonumber\\ Z_{\text{afund}}^{S_b^4}(\Sigma ,\tilde M) = &\; \prod_{J = 1}^N {\prod_{A = 1}^N {\frac{1}{{{\Upsilon _b}(-i{{\Sigma}_A} + i{{\tilde M}_J} + Q/2)}}} }\;.& \label{S4b_one_loop_SQCD} \end{align} We have denoted the masses associated with the $U(N)$ flavor symmetry of the $N$ fundamental hypermultiplets as $M_I$ and those of the $N$ antifundamental hypermultiplets as $\tilde M_J$. We also denote $Q = b + b^{-1}$. The instanton partition functions can be written as a sum over $N$-tuples of Young diagrams as \begin{equation}\label{SQCDIPF} Z_{\text{inst.}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,\Sigma ,M^\epsilon ,\tilde M^\epsilon )=\sum_{\vec Y} {{q^{|\vec Y|}}\ z_{{\text{vect}}}^{\mathbb R^4}(\vec Y,\Sigma,\epsilon_1,\epsilon_2 )\ z_{{\text{fund}}}^{\mathbb R^4}(\vec Y,\Sigma ,M^\epsilon,\epsilon_1,\epsilon_2)\ z_{{\text{afund}}}^{\mathbb R^4}(\vec Y,\Sigma ,\tilde M^\epsilon,\epsilon_1,\epsilon_2)} \;. \end{equation} The various factors in the summand are defined in \eqref{zvectIPF} and \eqref{z(a)fundIPF} in appendix \ref{appendix:IPF-factorization}. 
The $\Omega$-deformation parameters are identified as $\epsilon_1=b$ and $\epsilon_2=b^{-1}$, the superscript $^\epsilon$ denotes the usual shift of hypermultiplet masses \cite{Okuda:2010ke} \begin{equation} {M^\epsilon } \equiv M - \frac{i}{2}({\epsilon _1} + {\epsilon _2})\;,\qquad {\tilde M^\epsilon } \equiv \tilde M - \frac{i}{2}({\epsilon _1} + {\epsilon _2})\;, \end{equation} and the modulus squared simply entails sending $q = \exp(2\pi i \tau)\rightarrow \bar q$, with $\tau = \frac{\vartheta}{2\pi} + \frac{4\pi i}{g_{\text{YM}}^2}$. \subsubsection{Implementing the Higgsing prescription} The Higgsing prescription instructs us to consider the poles of the fundamental one-loop factor given by \begin{equation}\label{poleS2S2} i\Sigma_A = i M_{\sigma(A)} - n_A^{\text{L}} b - n_A^{\text{R}} b^{-1} - \frac{b+b^{-1}}{2} \qquad \text{for} \qquad A=1,\ldots,N\;, \end{equation} with $\sigma$ a permutation of $N$ elements, which we choose to be the identity. At the end of the computation, we should sum over all partitions $\vec n^{\text{L/R}}$ of $n^{\text{L/R}}$, \textsl{i.e.\@}{}, $n^{\text{L/R}}= \sum_A n_A^{\text{L/R}}$. The fact that the two two-spheres intersect at two disjoint points, namely their north poles and south poles, adds another layer of complication compared to the analysis in the previous subsection. Even so, when evaluating the residue at \eqref{poleS2S2}, the analysis of the classical action and one-loop determinants is straightforward. Both can be brought into a factorized form in terms of pieces depending only on information on either two-sphere, using the shift formula \eqref{recursion-upsilon} for the latter, up to extra factors which will cancel against certain non-factorizable factors coming from the instanton partition functions. 
Explicitly, \begin{equation}\label{S4cl1loop_atpole_SQCD} Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_{b})}\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_{b})} \rightarrow Z_{\text{1-loop}}^{(\mathcal T,S^4_{b})}\ \Big(Z^{S^2_{\text{L}}}_{\text{cl}|\vec n^{\text{L}}}\ Z^{S^2_{\text{L}}}_{\text{1-loop}|\vec n^{\text{L}}}\Big)\ \Big(Z^{S^2_{\text{R}}}_{\text{cl}|\vec n^{\text{R}}}\ Z^{S^2_{\text{R}}}_{\text{1-loop}|\vec n^{\text{R}}}\Big) \Big(Z^{\widetilde{\mathcal T};\vec n^{\text{L}},\vec n^{\text{R}}}_{\text{cl,extra}}\ Z^{\widetilde{\mathcal T};\vec n^{\text{L}},\vec n^{\text{R}}}_{\text{1-loop,extra}}\Big)^{2}\;, \end{equation} where $Z_{\text{1-loop}}^{({\mathcal T},S^4_{b})}$ is the one-loop determinant of $N^2$ hypermultiplets, which constitute the infrared theory $\mathcal T$, and have masses $M_{IJ} = M_I - \tilde M_J + i\frac{Q}{2}$. Furthermore, $Z^{S^2_{\text{L/R}}}_{\text{\ldots}|\vec n^{\text{L/R}}}$ denote factors in the Higgs branch localized two-dimensional $\mathcal N=(2,2)$ SQCDA two-sphere partition function (see footnote \ref{footnoteHBLS3} for the equivalent three-sphere discussion, and appendix \ref{subapp: S2 partition function} for explicit expressions). The two-dimensional FI-parameter $\xi_{\text{FI}}$, fundamental masses $m_I$, antifundamental masses $\tilde m_J$ and adjoint masses $m_X$ are related to the four-dimensional parameters as \begin{align}\label{realtionsS4_free} &\xi _{{\text{FI}}}^{\text{L}} = \xi _{{\text{FI}}}^{\text{R}} = \frac{{4\pi }}{{g_{{\text{YM}}}^2}}\;,& &m_I^\text{L} = b M_I + \frac{i}{2} + i b^2\;, & &\tilde m^\text{L}_J = b \tilde M_J + \frac{i}{2}\;, && m_X^\text{L} = ib^2\\\label{realtionsS4_free2} &{\vartheta ^{{\text{L/R}}}} = \vartheta \;, & &m_I^\text{R} = b^{-1} M_I + \frac{i}{2} + i b^{-2}\;, & &\tilde m^\text{R}_J = b^{-1} \tilde M_J + \frac{i}{2}\;, && m_X^\text{R} = ib^{-2}\;. 
\end{align} Finally the extra factors are \begin{equation} Z_\text{1-loop,extra}^{\vec n^\text{L}, \vec n^\text{R}} = Z_\text{vf,extra}^{\vec n^\text{L}, \vec n^\text{R}}(M)\ Z_\text{afund,extra}^{\vec n^\text{L}, \vec n^\text{R}}(\tilde M), \qquad Z_\text{cl,extra} = (q\bar q)^{-\sum_{A = 1}^N n^\text{L}_An^\text{R}_A}\;, \end{equation} where $Z^{\vec n^\text{L}, \vec n^\text{R}}_\text{vf,extra}$ and $Z^{\vec n^\text{L}, \vec n^\text{R}}_\text{afund,extra}$ are as before the non-factorizable pieces produced by applying the shift formulae to the vector and (anti)fundamental one-loop determinant and can be found in \eqref{def:Z-afund-extra}-\eqref{def:Z-vf-extra}. The massaging of each of the two instanton partition functions, which now both describe instantons located at intersection points, is completely similar to the one we performed above. First, the sum over $N$-tuples of Young diagrams $\vec Y$ can be restricted to a sum over tuples whose constituents $Y_A$ all avoid the ``forbidden'' box at $(n^{\text{L}}_A+1,n^{\text{R}}_A+1)$. Second, the left-over sum can be decomposed into sums over large and small diagrams, and moreover their summands can almost be factorized in terms of the summands of vortex partition functions, after canceling some overall factors with the extra factors from the classical action and one-loop determinants in \eqref{S4cl1loop_atpole_SQCD}. The remaining non-factorizable factor is an intersection factor, \begin{multline} Z_{\text{intersection}}^{\text{large}|\vec n^{(1)},\vec n^{(2)}}(\mathfrak m^{\text L},\mathfrak m^{\text R})= \;\prod_{A,B = 1}^N \prod_{\mu = 0}^{n^{(1)}_A - 1} \prod_{\nu = 0}^{n^{(2)}_B - 1} {\Big(i(M_A-M_B) + \epsilon_2 (\mathfrak m_{A\mu}^{\text{L}}+\nu) - \epsilon_1(\mathfrak m_{B\nu}^{\text{R}}+\mu) - \epsilon_1\Big)^{-1} }\\ \times{\Big( i(M_A-M_B) + \epsilon_2 (\mathfrak m_{A\mu}^{\text{L}}+\nu) - \epsilon_1(\mathfrak m_{B\nu}^{\text{R}}+\mu) + \epsilon_2 \Big)^{-1} }\;. 
\label{intersection_factor_HB_R4} \end{multline} for large diagrams, and \eqref{smallintersectionfactor} for generic diagrams. The full expression for the residue at the pole location \eqref{poleS2S2} thus involves the product of the two massaged instanton partition functions, together with the leftover classical action and one-loop determinant factors, \begin{align}\label{totalS4n1n2} Z^{(\widetilde{\mathcal T},S^4_b)} \rightarrow &\ Z_{\text{1-loop}}^{(\mathcal T,S^4_b)}\ \Big(Z^{S^2_{\text{L}}}_{\text{cl}|\vec n^{\text{L}}}\ Z^{S^2_{\text{L}}}_{\text{1-loop}|\vec n^{\text{L}}}\Big)\ \Big(Z^{S^2_{\text{R}}}_{\text{cl}|\vec n^{\text{R}}}\ Z^{S^2_{\text{R}}}_{\text{1-loop}|\vec n^{\text{R}}}\Big) \nonumber\\ &\times \Bigg|\ \sideset{}{^{\prime}}\sum_{\text{large }\vec Y} q^{|\mathfrak m^{\text{L}}|+|\mathfrak m^{\text{R}}|} Z_{\text{vortex}|\vec n^{\text{L}}}^{\mathbb{R}^2}(\mathfrak m^{\text L} ) \;\; Z_{\text{intersection}}^{\text{large}|\vec n^{\text{L}},\vec n^{\text{R}}}(\mathfrak m^{\text{L}},\mathfrak m^{\text{R}}) \;\; Z_{\text{vortex}|\vec n^{\text{R}}}^{\mathbb{R}^2}(\mathfrak m^{\text R} ) \\ &\qquad + \sideset{}{^{\prime}}\sum_{\text{small }\vec Y} q^{|\mathfrak m^{\text{L}}|+|\mathfrak m^{\text{R}}|} Z_{\text{semi-vortex}|\vec n^{\text{L}}}^{\mathbb{R}^2}(\mathfrak m^{\text L} ) \;\; Z_{\text{intersection}}^{\vec n^{\text{L}},\vec n^{\text{R}}}(\mathfrak m^{\text{L}},\mathfrak m^{\text{R}}) \;\; Z_{\text{vortex}|\vec n^{\text{R}}}^{\mathbb{R}^2}(\mathfrak m^{\text R} ) \Bigg|^2\;.\nonumber \end{align} The final result for the Higgsed partition function is obtained by summing the right-hand side of this expression over all partitions $\vec n^{\text{L/R}}$ of $n^{\text{L/R}}$. 
\subsubsection{Matrix model description and 4d/2d/0d coupled system} As in the previous subsection, the contribution of large tuples in both instanton partition functions suggests the following matrix integral \begin{multline} Z^{(\mathcal{T},S^2_\text{L} \cup S^2_\text{R} \subset S^4_b)} = Z^{({\mathcal T},S^4_b)}_{\text{1-loop}} \frac{1}{n^{\text{L}}!n^{\text{R}}!}\sum_{B^{\text{R}}\in\mathbb Z^{n^\text{R}}}\sum_{B^{\text{L}}\in\mathbb Z^{n^\text{L}}} \int_{\mathrm{JK}} \prod_{a=1}^{n^\text{R}} \frac{d\sigma^{\text{R}}_a}{2\pi} \ \prod_{c=1}^{n^\text{L}} \frac{d\sigma^{\text{L}}_c}{2\pi} \ Z^{S^2_{\text{R}}}(\sigma^{\text{R}},B^{\text{R}}) \ Z^{S^2_{\text{L}}}(\sigma^{\text{L}},B^{\text{L}})\\ \times\ \prod_\pm Z_{\text{intersection}}^\pm(\sigma^{\text{L}}, B^{\text{L}},\sigma^{\text{R}},B^{\text{R}}) \label{matrixmodelS2US2} \;, \end{multline} where $Z^{S^2_{\text{R}}}(\sigma^{\text{R}},B^{\text{R}})$ denotes the summand/integrand of the $S^2_{\text{R}}$ partition function for SQCDA with gauge group $U(n^{\text{R}})$, and similarly for $Z^{S^2_{\text{L}}}(\sigma^{\text{L}},B^{\text{L}})$.\footnote{Concrete expressions can be found in appendix \ref{subapp: S2 partition function}.} The intersection factors read \begin{equation} Z_{\text{intersection}}^\pm(\sigma^{\text{L}}, B^{\text{L}},\sigma^{\text{R}},B^{\text{R}}) = \prod_{a = 1}^{n^\text{R} } \prod_{c = 1}^{n^\text{L} } \left[ \left(\Delta_{ac}^\pm + \frac{b+b^{-1}}{2}\right)\left(\Delta_{ac}^\pm - \frac{b+b^{-1}}{2}\right)\right]^{-1} \;, \label{intersection-factor-CBL-S2} \end{equation} with $\Delta_{ac}^\pm =b^{-1}\left(i\sigma^{\text{R}}_a \pm \frac{B^{\text{R}}_a}{2}\right)- b \left(i\sigma^{\text{L}}_c \pm \frac{B^{\text{L}}_c}{2}\right)$ and where $b$ is the four-sphere squashing parameter. The factor labeled by the plus sign arises from the intersection point at the north pole, and the other factor from the south pole. 
The mass and other parameters on both two-spheres satisfy relations, which can be derived from \eqref{realtionsS4_free}-\eqref{realtionsS4_free2}, \begin{equation}\label{paramidentificationsS2US2} \begin{aligned} &\xi_{\text{FI}}^{\text{L}} = \xi_{\text{FI}}^{\text{R}}\;, \qquad &&b^{-1} \left(m^{\text{L}}_I + \frac{i}{2} \right) = b\left(m^{\text{R}}_I + \frac{i}{2}\right)\;,\qquad && m_X^\text{L} = ib^2\;,\\ &\vartheta^{\text{L}} = \vartheta^{\text{R}} \;, &&b^{-1}\left(\tilde m^{\text{L}}_J - \frac{i}{2} \right) = b\left(\tilde m^{\text{R}}_J-\frac{i}{2} \right)\;, \qquad && m_X^{\text{R}} = ib^{-2}\;, \end{aligned} \end{equation} while the hypermultiplet masses $M_{IJ} = M_I - \tilde M_J + i\frac{Q}{2}$ are related to the two-dimensional mass parameters as \begin{equation}\label{4dmassfreeHM} i{b^{ - 1}} = \left[ {{M_{IJ}} + \frac{i}{2}(b + {b^{ - 1}})} \right] - {b^{ - 1}}(m_I^{\text{L}} - \tilde m_J^{\text{L}}) \;, \qquad i{b} = \left[ {{M_{IJ}} + \frac{i}{2}(b + {b^{ - 1}})} \right] - {b}(m_I^{\text{R}} - \tilde m_J^{\text{R}})\;. \end{equation} The residue prescription used to evaluate the integrals in \eqref{matrixmodelS2US2} is completely similar to Jeffrey-Kirwan-like prescription introduced in the previous subsection: the matter fields contributing to $Z^{S^2_{\text{R/L}}}$ are assigned their natural charges under the Cartan subgroup $U(1)^{n^\text{R}}\times U(1)^{n^\text{L}}$ of the total gauge group, while all factors of the intersection factors are assigned charges of the form $(0,\ldots, 0, b^{-1},0\ldots,0 \ ;\ 0,\ldots, 0, -b,0\ldots,0 )$. The JK-vector is again given in terms of the FI-parameters, $\eta = (\xi_{\text{FI}}^{\text{R}},\xi_{\text{FI}}^{\text{L}})$. We have derived this prescription by demanding that the matrix integral reproduces the result of the Higgsing computation. 
It was shown in \cite{Gomis:2016ljm} that the partition function of the 4d/2d/0d coupled system of figure \ref{fig:InsertSymmetrics} for the case of $N^2$ free hypermultiplets described by a two-flavor-node quiver, reproduced in figure \ref{fig:InsertSymmetrics4dfreeHM} for convenience, precisely equals the matrix integral \eqref{matrixmodelS2US2}. \begin{figure}[t!] \centering \includegraphics[width=0.3\textwidth]{./Figures-PDF/InsertSymmetrics4dfreeHM} \caption{\label{fig:InsertSymmetrics4dfreeHM} Coupled 4d/2d/0d quiver gauge theory realizing intersecting M2-brane surface defects labeled by $n^{\text{R}}$- and $n^{\text{L}}$-fold symmetric representations in the four-dimensional theory of $N^2$ free hypermultiplets. Various superpotential couplings are turned on and are given in detail in \cite{Gomis:2016ljm}. The Higgsing prescription applied to SQCD precisely reproduces the partition function of this coupled system.} \end{figure} In particular, $\prod_\pm Z_{\text{intersection}}^\pm$ computes the one-loop determinant of the zero-dimensional bifundamental chiral multiplets at the two intersection points of the two-spheres $S^2_{\text{R}}$ and $S^2_{\text{L}}$. In the first-principles localization computation of \cite{Gomis:2016ljm}, the relations \eqref{4dmassfreeHM} are consequences of cubic superpotential couplings. Up to separation constants, their solutions can be found to be the mass relations in \eqref{paramidentificationsS2US2}. As explained in the previous subsection, the Higgsing computation fixes the separation constants to specific values. Note that our computations fixes the flavor symmetry charges of the zero-dimensional fields and provides a derivation of the residue prescription. The proof that the matrix integral reproduces the result of the Higgsing computation follows the same logic as the one in the previous subsection, but is substantially more involved due to the fact that two copies of the intersection factor are present. 
We present some of the details in appendix \ref{appendix:extra-pole-2d}. \section{Intersecting surface defects in interacting theories}\label{section: interacting theories} In the previous section, we have computed the expectation value of intersecting surface defects in four-/five-dimensional theories $\mathcal T$ of free hypermultiplets placed on the four-/five-sphere. In this section, we consider intersecting surface defects inserted in interacting theories. More precisely, we focus on $\mathcal T$ being an $\mathcal N=2$ supersymmetric theory with gauge group $SU(N)$ and $N$ fundamental and $N$ anti-fundamental hypermultiplets, \textsl{i.e.\@}{}, $\mathcal N=2$ SQCD. The partition function of SQCD on the four-sphere has appeared in our earlier computations, see \eqref{S4b_SQCD_partitionfunction}. In particular, it involves the contribution of instantons located at the north pole and south pole of the four-sphere. When decorating the computation with intersecting surface defects, which precisely have these points as their intersection locus, we should expect the instanton counting to be modified non-trivially. By performing the Higgsing procedure on a theory $\widetilde{\mathcal T}$ described by the $\mathcal N=2$ quiver \begin{center} \includegraphics[width=.3\textwidth]{./Figures-PDF/TwoNodeQuiver}\;, \end{center} we will be able to derive a precise description of the modified ADHM integral by casting both the Higgsed partition function as well as its instanton contributions in a matrix integral form. 
\paragraph{The $S^4_b$-partition function of $\widetilde{\mathcal T}$} is given by \begin{multline} Z^{(\widetilde{\mathcal T},S^4_{b})}(M,\tilde M,\hat M) = \int \prod_{A,B = 1}^N {d{\Sigma _A}\ d{\Sigma'_B}}\ Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma ,\Sigma')\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma ,\Sigma', M,\tilde M,\hat M) \\ \times \ {\left| {Z_{\text{inst.}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,q',\Sigma ,\Sigma',M^\epsilon,\tilde M^\epsilon,\hat M^\epsilon)} \right|}^2 \;, \end{multline} where $M_I$ and $\tilde M_J$ denote the masses associated to the $U(N)$ flavor symmetry of the $N$ fundamental and antifundamental hypermultiplets respectively, while $\hat M$ is the mass associated to the $U(1)$ flavor symmetry of the bifundamental hypermultiplet. The classical action reads \begin{equation} Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma,\Sigma') = \exp\left[-\frac{8\pi^2}{g_{\text{YM}}^2} \Tr\Sigma^{ 2} -\frac{8\pi^2}{g_{\text{YM}}^{\prime 2}} \Tr\Sigma^{\prime 2} \right]\;, \end{equation} while the one-loop determinant is given by \begin{equation} Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_{b})}(\Sigma ,\Sigma', M,\tilde M,\hat M) = Z_{{\text{vect}}}^{S_b^4}(\Sigma )\ Z_{{\text{vect}}}^{S_b^4}(\Sigma' )\ Z_{\text{fund}}^{S_b^4}(\Sigma,M)\ Z_{\text{afund}}^{S_b^4}(\Sigma' ,\tilde M)\ Z_{\text{bifund}}^{S_b^4}(\Sigma ,\Sigma',\hat M)\;, \end{equation} where all factors were defined in \eqref{S4b_one_loop_SQCD} but \begin{equation} Z_{\text{bifund}}^{S_b^4}(\Sigma ,\Sigma',\hat M) = \prod_{A = 1}^N {\prod_{B = 1}^N {\frac{1}{{{\Upsilon _b}(i\Sigma _B'-i{\Sigma _A} + i\hat M + Q/2)}}} } \;. 
\end{equation} The instanton partition function is given by a double sum over $N$-tuples of Young diagrams \begin{multline}\label{IPFtwonode} {Z_{\text{inst.}}^{(\widetilde{\mathcal T},\mathbb R^4)}(q,q',\Sigma ,\Sigma',M^\epsilon,\tilde M^\epsilon,\hat M^\epsilon)} = \sum_{\vec Y,\vec Y'} {q^{|\vec Y|}}{{q'}^{|\vec Y'|}}\ z_{{\text{vect}}}^{{\mathbb{R}^4}}(\vec Y,\Sigma )\ z_{{\text{vect}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma ')\ z_{{\text{fund}}}^{{\mathbb{R}^4}}(\vec Y,\Sigma ,M^\epsilon)\\ \times z_{{\text{afund}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma' , \tilde M^\epsilon)\ z_{{\text{bifund}}}^{{\mathbb{R}^4}}(\vec Y,\vec Y',\Sigma ,\Sigma ',\hat M^\epsilon) \;. \end{multline} The contributions of the various multiplets can be found in appendix \ref{appendix:IPF-factorization}. The superscripts $^\epsilon$ again denote the usual shift \cite{Okuda:2010ke} \begin{equation} M^\epsilon = M-\frac{i}{2}({\epsilon _1} + {\epsilon _2})\;, \qquad \tilde M^\epsilon = \tilde M-\frac{i}{2}({\epsilon _1} + {\epsilon _2})\;, \qquad \hat M^\epsilon = \hat M-\frac{i}{2}({\epsilon _1} + {\epsilon _2})\;. \end{equation} \paragraph{Implementing the Higgsing prescription} once again amounts to considering the poles of the fundamental one-loop factor given by \begin{equation}\label{poleS2S2_bis} i\Sigma_A = i M_{\sigma(A)} - n_A^{\text{L}} b - n_A^{\text{R}} b^{-1} - \frac{b+b^{-1}}{2} \qquad \text{for} \qquad A=1,\ldots,N\;, \end{equation} with $\sigma$ a permutation of $N$ elements, which we choose to be the identity. Here $\vec n^{\text{L/R}}$ is a partition of $n^{\text{L/R}}$, and we will sum over all such partitions.
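As an illustrative aside (not part of the localization computation itself), the combinatorial grading of the double sum in \eqref{IPFtwonode} by instanton number can be sketched in a few lines of code; the function names and the encoding of diagrams as non-increasing row tuples are our own conventions:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def young_diagrams(k):
    """All Young diagrams with exactly k boxes, as non-increasing row tuples."""
    if k == 0:
        return ((),)
    out = []

    def build(remaining, max_row, rows):
        if remaining == 0:
            out.append(tuple(rows))
            return
        for r in range(min(remaining, max_row), 0, -1):
            build(remaining - r, r, rows + [r])

    build(k, k, [])
    return tuple(out)

def tuples_at_order(N, k):
    """N-tuples of Young diagrams with k boxes in total: the labels of the
    order-q^k term in the instanton sum."""
    result = []

    def split(slot, remaining, acc):
        if slot == N:
            if remaining == 0:
                result.append(tuple(acc))
            return
        for boxes in range(remaining + 1):
            for Y in young_diagrams(boxes):
                split(slot + 1, remaining - boxes, acc + [Y])

    split(0, k, [])
    return result

print([len(tuples_at_order(2, k)) for k in range(5)])  # [1, 2, 5, 10, 20]
```

For $N=2$ the counts are the coefficients of the square of the partition generating function, as expected from the grading by $q^{|\vec Y|}$.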
It is straightforward to compute the residues of the one-loop determinant at \eqref{poleS2S2_bis}: \begin{equation}\label{S4cl1loop_atpole_twonode} Z_{\text{cl}}^{(\widetilde{\mathcal T},S^4_{b})}\ Z_{\text{1-loop}}^{(\widetilde{\mathcal T},S^4_{b})} \rightarrow Z_{\text{cl}}^{({\mathcal T},S^4_{b})}Z_{\text{1-loop}}^{(\mathcal T,S^4_{b})}\ \Big(Z^{S^2_{\text{L}}}_{\text{cl}|\vec n^{\text{L}}}\ Z^{S^2_{\text{L}}}_{\text{1-loop}|\vec n^{\text{L}}}\Big)\ \Big(Z^{S^2_{\text{R}}}_{\text{cl}|\vec n^{\text{R}}}\ Z^{S^2_{\text{R}}}_{\text{1-loop}|\vec n^{\text{R}}}\Big) \Big(Z^{\widetilde{\mathcal T};\vec n^{\text{L}},\vec n^{\text{R}}}_{\text{cl,extra}}\ Z^{\widetilde{\mathcal T};\vec n^{\text{L}},\vec n^{\text{R}}}_{\text{1-loop,extra}}\Big)^{2}\;. \end{equation} Here $Z_{\text{cl}}^{({\mathcal T},S^4_{b})}Z_{\text{1-loop}}^{(\mathcal T,S^4_{b})}$ are the classical action and one-loop determinant of the theory $\mathcal T$, \textsl{i.e.\@}{}, of four-dimensional $\mathcal N=2$ supersymmetric SQCD. Their expression can be found in \eqref{classactionSQCD} and \eqref{oneloopSQCD} respectively.\footnote{In the previous section the theory $\widetilde {\mathcal T}$ was SQCD.} The antifundamental masses of $\mathcal T$ are simply given by $\tilde M_J$, but the fundamental masses take the values \begin{equation} M'_A = M_A - \hat M + iQ/2 \end{equation} in terms of the fundamental and bifundamental masses of the quiver theory $\widetilde{\mathcal T}$. As before, $Z^{S^2_{\text{L/R}}}_{\text{\ldots}|\vec n^{\text{L/R}}}$ denote factors in the Higgs branch localized SQCDA two-sphere partition function.
The two-dimensional FI-parameters $\xi^\text{L/R}_{\text{FI}}$, fundamental masses $m^\text{L/R}_I$, antifundamental masses $\tilde m^\text{L/R}_J$ and adjoint masses $m^\text{L/R}_X$ are now related to the four-dimensional parameters of theory $\widetilde{\mathcal T}$ as \begin{align}\label{paramsS4_1} &\xi _{{\text{FI}}}^{\text{L}} = \frac{{4\pi }}{{g_{{\text{YM}}}^2}}\;,& &m_I^{\text{L}} = b({M_I} + iQ/2) + \frac{i}{2}{b^2}\;, & &\tilde m_J^\text{L} = b(\Sigma '_J + \hat M) + \frac{i}{2}\;, && m_X^{\text{L}} = ib^2\\ &\xi _{{\text{FI}}}^{\text{R}} = \frac{{4\pi }}{{g_{{\text{YM}}}^2}}\;,& &m_I^\text{R} = b^{ - 1}(M_I + iQ/2) + \frac{i}{2}b^{ - 2}\;, & &\tilde m_J^\text{R} = b^{-1}(\Sigma '_J + \hat M) + \frac{i}{2}\;, && m_X^{\text{R}} = ib^{-2}\;,\label{paramsS4_2} \end{align} together with $\vartheta^\text{L/R} = \theta $. Note that the two-dimensional masses depend on the four-dimensional gauge parameter. The explicit expressions for the extra one-loop factors, which now receive contributions from the fundamental hypermultiplet, vector multiplet and bifundamental hypermultiplet one-loop determinants, can be found in \eqref{def:Z-vf-extra}-\eqref{def:Z-bifund-extra}. Again, $Z_\text{cl,extra}^{\vec n^\text{L}, \vec n^\text{R}} = (q\bar q)^{-\sum_A n^\text{L}_A n^\text{R}_A}$. When substituting the gauge equivariant parameter \eqref{poleS2S2_bis} in the instanton partition functions \eqref{IPFtwonode}, the only non-vanishing contributions arise from $N$-tuples $\vec Y$ avoiding the ``forbidden box'' and arbitrary $N$-tuples $\vec Y'$. As before, we can split the sum over the former into one over large and one over small tuples. As we have learned in the previous section, the analysis of the large tuples is sufficient to derive the matrix model integral describing the infrared system, \textsl{i.e.\@}{}, the theory $\mathcal T$ with intersecting defects inserted. We thus focus only on such large tuples.
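The parameter identifications \eqref{paramsS4_1}-\eqref{paramsS4_2} above are purely algebraic; as a bookkeeping aid, they can be packaged as a small routine. This is a hedged sketch with our own naming (`twod_parameters`), encoding that $\epsilon = b$ on $S^2_{\text{L}}$ and $\epsilon = b^{-1}$ on $S^2_{\text{R}}$:

```python
import math

def twod_parameters(b, M, Sigma_p, Mhat, gYM2):
    """Map the 4d data (b, M_I, Sigma'_J, Mhat, g_YM^2) of the quiver theory
    to the 2d SQCDA parameters on the two spheres, following the
    identifications in the text; eps = b on S^2_L and eps = 1/b on S^2_R."""
    Q = b + 1 / b

    def one_sphere(eps):
        return {
            "xi_FI": 4 * math.pi / gYM2,                                 # FI parameter
            "m": [eps * (MI + 1j * Q / 2) + 1j * eps**2 / 2 for MI in M],  # fundamental
            "mtilde": [eps * (SJ + Mhat) + 1j / 2 for SJ in Sigma_p],      # antifundamental
            "m_X": 1j * eps**2,                                            # adjoint
        }

    return {"L": one_sphere(b), "R": one_sphere(1 / b)}
```

Note in particular that the antifundamental masses carry the four-dimensional gauge parameter $\Sigma'_J$, the point emphasized above.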
We find \begin{align} &q^{|\vec Y_{\text{large}}|} q^{\prime |\vec Y'|}\ Z_{{\text{inst}}}^{(\widetilde{\mathcal T},{\mathbb{R}^4} \times {S^1_{(1\cap 2)}})} (\vec Y',\vec Y_{\text{large}}) \nonumber \\ &\rightarrow\ q^{\prime |\vec Y'|}\ z_{{\text{vect}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma ')\ z_{{\text{afund}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma' ,\tilde M^\epsilon)\ z_{{\text{fund}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma' ,M^{\prime\epsilon}) \ z^{\mathbb{R}^2_{\text{L}}}_\text{defect}(\vec Y', \Sigma' , \mathfrak{m}^\text{L})\ z^{\mathbb{R}^2_{\text{R}}}_\text{defect}(\vec {\tilde Y}',\Sigma' , \mathfrak{m}^\text{R}) \nonumber\\ &\phantom{\rightarrow\ }\times q^{|\mathfrak m^\text{L}| + |\mathfrak m^\text{R}|}\ Z_{\text{vortex}|\vec n^{\text{L}}}^{\mathbb{R}^2}(\mathfrak m^{\text L} ) \ Z_{\text{intersection}}^{\text{large}|\vec n^{\text{L}},\vec n^{\text{R}}}(\mathfrak m^{\text{L}},\mathfrak m^{\text{R}}) \ Z_{\text{vortex}|\vec n^{\text{R}}}^{\mathbb{R}^2}(\mathfrak m^{\text R} ) \ \Big( Z^{\widetilde{\mathcal T};\vec n^{\text{L}},\vec n^{\text{R}}}_{\text{cl,extra}}\ Z^{\widetilde{\mathcal T};\vec n^{\text{L}},\vec n^{\text{R}}}_{\text{1-loop,extra}} \Big)^{-1}\label{S4inst_atpole_twonode}\;. \end{align} The intersection factor was already given in \eqref{intersection_factor_HB_R4}. The expression for the factors $z^{\mathbb{R}^2_{\text{L/R}}}_\text{defect}$ can be found in \eqref{zLdef_HB}. They clearly correspond to new ingredients in the instanton partition function of $\mathcal T$, arising due to the presence of the defects on the local $\mathbb R^2_{\text{L/R}}$. Momentarily, we will study the modified ADHM data and its corresponding ADHM integral computing this modified instanton partition function. Recall that to obtain the final expression for the Higgsed partition function, we need to sum over all partitions of $n^{\text{L/R}}$.
\paragraph{A matrix model integral} describing the $S^4_b$ partition function of $SU(N)$ SQCD in the presence of intersecting surface defects supported on $S^2_\text{L, R}$ can be inferred from \eqref{S4cl1loop_atpole_twonode} and \eqref{S4inst_atpole_twonode} to be \begin{small} \begin{align} &Z^{(\mathcal{T},S^2_\text{L} \cup S^2_\text{R} \subset S^4_b)}=\frac{1}{n^{\text{L}}!n^{\text{R}}!}\sum_{B^{\text{R}}\in\mathbb Z^{n^\text{R}}}\sum_{B^{\text{L}}\in\mathbb Z^{n^\text{L}}} \int d\Sigma ' \int_{\mathrm{JK}} \prod_{a=1}^{n^\text{R}} \frac{d\sigma^{\text{R}}_a}{2\pi} \ \prod_{b=1}^{n^\text{L}} \frac{d\sigma^{\text{L}}_b}{2\pi} \;Z^{({\mathcal T},S^4_b)}_{\text{cl}}(\Sigma')\ Z^{({\mathcal T},S^4_b)}_{\text{1-loop}}(\Sigma',M',\tilde M) \nonumber\\ &\times Z^{S^2_{\text{R}}}(\sigma^{\text{R}},B^{\text{R}};\Sigma') \ Z^{S^2_{\text{L}}}(\sigma^{\text{L}},B^{\text{L}};\Sigma')\ Z_{\text{intersection}}^+(\sigma^{\text{L}}, B^{\text{L}},\sigma^{\text{R}},B^{\text{R}}) \ Z_{\text{intersection}}^-(\sigma^{\text{L}}, B^{\text{L}},\sigma^{\text{R}},B^{\text{R}}) \label{def:SQCDA-defect-matrix-model} \\ & \times {\left| {\sum_{\vec Y'} {{q'}^{|\vec Y'|}} z_{{\text{vect}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma ')\, z_{{\text{afund}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma' ,\tilde M^\epsilon)\, z_{{\text{fund}}}^{{\mathbb{R}^4}}(\vec Y',\Sigma' ,M^{\prime\epsilon}) \, z_{{\text{defect}}}^{\mathbb{R}_{\text{L}}^2}(\vec Y', \Sigma', \sigma^\text{L}, B^\text{L})\, z_{{\text{defect}}}^{\mathbb{R}_{\text{R}}^2}(\vec Y', \Sigma', \sigma^\text{R}, B^\text{R}) } \right|^2}.\nonumber \end{align} \end{small}% Here the factors in the first line are the classical action and one-loop determinant of $\mathcal T$, \textsl{i.e.\@}{}, four-dimensional SQCD, and the factors in the second line are the $S^2_{\text{L/R}}$ partition functions for SQCDA as well as the intersection factors \eqref{intersection-factor-CBL-S2}.
The last line contains two copies of the instanton partition function, computed in the presence of the locally planar intersecting surface defects. The mass parameters on the two two-spheres are related as in \eqref{paramidentificationsS2US2}, while the parameters of the four-dimensional theory $\mathcal T$ are related to the two-dimensional ones as \begin{equation}\label{4dmassSQCD} i{b^{ - 1}} = \left[ {{M_{I}'-\Sigma'_J} + \frac{i}{2}(b + {b^{ - 1}})} \right] - {b^{ - 1}}(m_I^{\text{L}} - \tilde m_J^{\text{L}}) \;, \qquad i{b} = \left[ {{M_{I}'-\Sigma'_J} + \frac{i}{2}(b + {b^{ - 1}})} \right] - {b}(m_I^{\text{R}} - \tilde m_J^{\text{R}})\;. \end{equation} Note that when performing the integral over the four-dimensional gauge parameter $\Sigma'$, one should use \begin{equation} \tilde m_J^\text{L} = b\Sigma_J' + \tilde m^{\text{L}}_{U(1)} \;, \qquad \tilde m_J^\text{R}= b^{-1} \Sigma_J' + \tilde m^{\text{R}}_{U(1)}\;, \end{equation} where $\tilde m^{\text{L/R}}_{U(1)} = \frac{1}{N}\sum_{K=1}^N \tilde m_K^\text{L/R}$. These follow directly from \eqref{4dmassSQCD}. In the two-dimensional one-loop determinants we have made explicit this $\Sigma'$-dependence. 
Note that by performing the change of variables $\sigma^{\text{L/R}}_I \rightarrow \sigma^{\text{L/R}}_I + \tilde m^{\text{L/R}}_{U(1)}$ in the two-dimensional integrals, one effectively changes the $U(1)$ masses as $\tilde m^{\text{L/R}}_{U(1)}\rightarrow 0$ and $m^{\text{L/R}}_{U(1)}\rightarrow m^{\text{L/R}}_{U(1)} -\tilde m^{\text{L/R}}_{U(1)}$ in the matrix integral, up to an overall constant factor originating from the two-dimensional classical actions.\footnote{Note that in terms of the effective variables, the relation \eqref{4dmassSQCD} remains unaffected, but \eqref{paramidentificationsS2US2} is modified as \begin{equation} b^{-1} \left(m^{\text{L}}_I + \frac{i}{2} \right) = b\left(m^{\text{R}}_I + \frac{i}{2}\right) + c\;, \qquad b^{-1}\left(\tilde m^{\text{L}}_J - \frac{i}{2} \right) = b\left(\tilde m^{\text{R}}_J-\frac{i}{2} \right) + c\;, \end{equation} with $c = b^{-1} \tilde m^{\text{L}}_{U(1)}-b \tilde m^{\text{R}}_{U(1)}+\frac{i}{2}(b-b^{-1})$.} Henceforth, we choose to work with this effective new integral. The contribution of the locally planar surface defect, supported on the local $\mathbb R^2_{\text{L}}$, to the north pole copy of the instanton partition function is given by \begin{equation}\label{zdefect} z_{{\text{defect}}}^{\mathbb{R}_\text{L}^2}(\vec Y',\Sigma',\sigma ^{\text{L}}, {B^{\text{L}}}) = \prod_{a = 1}^{{n^{\text{L}}}} \prod_{B=1}^N \prod_{r=1}^{W_{Y'_B}} \prod_{s=1}^{Y'_{Br}} \frac{-\epsilon_2 (i\sigma_a + B_a/2 - i \tilde m_B ) + r\epsilon_1 + s \epsilon_2}{-\epsilon_2 (i\sigma_a + B_a/2 - i \tilde m_B ) + (r-1)\epsilon_1 + s \epsilon_2}\;, \end{equation} where $W_{Y_B'}$ denotes the width of the Young diagram $Y_B'$. Similarly, $z_{{\text{defect}}}^{\mathbb{R}_\text{R}^2}$ is obtained by swapping $\text{L}\leftrightarrow \text{R}$, $\epsilon_1 \leftrightarrow \epsilon_2$ and $Y_B \leftrightarrow \tilde Y_B$. 
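A minimal numerical sketch of the factor \eqref{zdefect} may be useful. It assumes diagrams are stored as non-increasing row lengths, with the product over the width implemented via the transposed column heights; all names (`z_defect_L`, `column_heights`) are our own:

```python
def column_heights(rows):
    """Transpose of a Young diagram stored as non-increasing row lengths."""
    width = rows[0] if rows else 0
    return [sum(1 for L in rows if L >= c) for c in range(1, width + 1)]

def z_defect_L(Yprime, mtilde, sigma, Bflux, eps1, eps2):
    """Sketch of the north-pole defect factor z^{R^2_L}_defect: for every
    U(n^L) variable (sigma_a, B_a), every flavor B and every box of Y'_B,
    a ratio of linear factors shifting the column label r by one unit."""
    val = complex(1)
    for a in range(len(sigma)):
        for B, rows in enumerate(Yprime):
            shift = -eps2 * (1j * sigma[a] + Bflux[a] / 2 - 1j * mtilde[B])
            for r, height in enumerate(column_heights(rows), start=1):
                for s in range(1, height + 1):
                    val *= ((shift + r * eps1 + s * eps2)
                            / (shift + (r - 1) * eps1 + s * eps2))
    return val
```

The factor for $\mathbb R^2_{\text{R}}$ follows by the swap $\epsilon_1 \leftrightarrow \epsilon_2$ and transposing the diagrams, as stated above.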
The combination $i \sigma + \frac{1}{2}B$ is valid for the north pole contributions; to get the south pole counterpart one replaces it with $i \sigma - \frac{1}{2}B$. One can verify that if we perform the integrations over $\sigma^\text{L/R}$ and the sums over $B^\text{L/R}$ using the same Jeffrey-Kirwan-like residue prescription as discussed in the previous section, the matrix model \eqref{def:SQCDA-defect-matrix-model} reproduces the result obtained from the Higgsing prescription. \paragraph{The 4d/2d/0d coupled system} whose partition function is computed by \eqref{def:SQCDA-defect-matrix-model} is depicted in figure \ref{fig:InsertSymmetricsSQCD}. \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{./Figures-PDF/InsertSymmetricsSQCD} \caption{\label{fig:InsertSymmetricsSQCD} Coupled 4d/2d/0d quiver gauge theory realizing the insertion, in four-dimensional $\mathcal N=2$ SQCD, of intersecting M2-brane surface defects labeled by symmetric representations of rank $n^{\text{R}}$ and $n^{\text{L}}$ respectively. Various superpotential couplings are turned on, in direct analogy to the ones given in detail in \cite{Gomis:2016ljm}. The Higgsing prescription applied to a linear quiver gauge theory with two gauge nodes reproduces the partition function of this coupled system.} \end{figure} The first line of \eqref{def:SQCDA-defect-matrix-model} captures the classical action and one-loop determinant of the four-dimensional theory, while the second line captures the contributions of the two-dimensional degrees of freedom residing on the intersecting two-spheres as well as the one-loop determinants of the zero-dimensional bifundamental chiral multiplets at their intersection points. The most salient new feature of this coupled system is the fact that part of the two-dimensional flavor symmetry is gauged by the four-dimensional gauge symmetry.
This fact is reflected in the relations in \eqref{4dmassSQCD}, relating the two-dimensional mass parameters $\tilde m$ to the gauge parameter $\Sigma'$, which are the consequence of the usual cubic superpotential couplings. As mentioned above, when computing the squashed four-sphere partition function of the coupled system, the instanton counting is modified non-trivially due to the presence of the intersecting surface defects. The argument of the modulus squared in the last line of \eqref{def:SQCDA-defect-matrix-model} provides a concrete expression for the modified instanton partition function. In the next section, we turn to a more detailed analysis of the degrees of freedom which give rise to this instanton partition function. \section{Instanton partition function and intersecting surface defects}\label{sec:IPFwDef} Let us start by considering the familiar brane realization of a $k$-instanton in $SU(N)$ $\mathcal N=2$ SQCD as an additional stack of $k$ D0-branes as depicted in the left part of figure \ref{fig:SQCD_Brane_Instanton}, ignoring the gray branes for the time being. \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{./Figures-PDF/SQCD_Brane_Instanton} \caption{\label{fig:SQCD_Brane_Instanton} The left part of the figure depicts the brane configuration realizing $k$-instantons in $\mathcal N=2$ SQCD, in the presence of intersecting surface defects, of M2-type and labeled by symmetric representations, represented by the gray branes.\protect\footnotemark{} The right part of the figure shows the quiver description of the worldvolume theory of the D0-branes. As the system preserves two-dimensional $\mathcal N=(0,2)$ supersymmetry dimensionally reduced to zero dimensions, the quiver is drawn using $\mathcal N=(0,2)$ notations, with full lines representing chiral multiplets and dashed lines Fermi multiplets. In the absence of the defects, the preserved supersymmetry is $\mathcal N=(0,4)$.
We thus learn that one needs to turn on a J-type superpotential $J_\Lambda$ for the adjoint Fermi multiplet $\Lambda$ consisting of the sum of the adjoint bilinears of the scalars of the two pairs of chiral multiplets. The charges in table \ref{table:ADHMstrings} are also compatible with quadratic E- or J-type superpotentials for the Fermi multiplets charged under $U(n^{\text{L/R}})$. } \end{figure} \footnotetext{The brane directions are as in footnote \ref{branedirections}.} The supersymmetry preserved by the worldvolume theory of the D0-branes is the dimensional reduction to zero dimensions of two-dimensional $\mathcal N=(0,4)$ supersymmetry. Its matter content can be straightforwardly read off by quantizing the open strings stretching between the D0-brane and the various D4-branes, as well as between the D0-branes themselves, see \cite{Douglas:1995bn,Douglas:1996uz}. We summarize it in table \ref{table:ADHMstrings}, and have depicted the resulting quiver gauge theory in the right part of figure \ref{fig:SQCD_Brane_Instanton} (omitting the gray quiver nodes and links). The partition function of this zero-dimensional theory computes the (non-perturbative) $k$-instanton partition function, which we denote as $Z_k^{\mathbb R^4}$.
\renewcommand{\arraystretch}{1.3} \begin{table}[ht] \begin{center} \begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c} strings & D0-D4$_1$ & \multicolumn{2}{|c|}{D0-D4$_2$} & D0-D4$_3$ & \multicolumn{4}{|c|}{D0-D0} & \multicolumn{2}{|c}{{\color{gray} D0-D2$_\text{R}$}} & \multicolumn{2}{|c}{{\color{gray} D0-D2$_\text{L}$}} \\ \hline\hline $\mathcal N=(0,4)$ & FM & \multicolumn{2}{|c|}{HM} & FM & \multicolumn{2}{|c|}{VM} & \multicolumn{2}{|c|}{HM} & \multicolumn{2}{|c}{{\color{gray}\tiny (not preserved)}} & \multicolumn{2}{|c}{{\color{gray}\tiny (not preserved)}}\\ \hline\hline $\mathcal N=(0,2)$ & FM & CM & CM & FM & VM & FM & CM & CM & {\color{gray}CM} & {\color{gray}FM} & {\color{gray}CM}&{\color{gray}FM}\\ \hline $J$ & 0 & $\frac{1}{2}$ & $\frac{1}{2}$ & 0 & 0&1 & $\frac{1}{2}$ & $\phantom{-}\frac{1}{2}$ & {\color{gray}0} & {\color{gray}$\frac{1}{2}$} & {\color{gray}$\phantom{-}$0} & {\color{gray}$\frac{1}{2}$}\\ \hline $J_l$ & 0& 0 & 0 & 0 & 0 & 0& $\frac{1}{2}$ & $-\frac{1}{2}$ & {\color{gray}$\frac{1}{2}$} & {\color{gray}0} & {\color{gray} $-\frac{1}{2}$} & {\color{gray}0}\\ \end{tabular} \caption{Massless excitations of strings stretching between the branes indicated in the first row organized in multiplets of the dimensional reduction of two-dimensional $\mathcal N=(0,4)$ and $\mathcal N=(0,2)$ supersymmetry to zero dimensions in the second and third row respectively. Here VM denotes vector multiplet, HM hypermultiplet, FM Fermi multiplet and CM chiral multiplet. Note that the system including the D2-branes only preserves $\mathcal N=(0,2)$, hence we leave the $\mathcal N=(0,4)$ entries corresponding to D0-D2 strings open. The last two rows list the charges of the multiplets under the flavor symmetry charges $J$ and $J_l$.
\label{table:ADHMstrings}} \end{center} \end{table} In some more detail, the instanton partition function of a four-dimensional $\mathcal N=2$ supersymmetric theory is computed by a localization computation on $\mathbb R^4_{\epsilon_1, \epsilon_2}$, \textsl{i.e.\@}{}, in the $\Omega$-background parametrized by $\epsilon_1,\epsilon_2$ \cite{Nekrasov:2002qd,Nekrasov:2003rj}. The localizing supercharge $\mathcal Q$ squares to \begin{equation}\label{QsqIPF} \mathcal Q^2 = (\epsilon_1 + \epsilon_2)(J_r + \mathcal R) + (\epsilon_1 - \epsilon_2)J_l + i\Sigma\cdot G + iM\cdot F\;, \end{equation} where $J_l,J_r$ are the Cartan generators of the $SU(2)_l\times SU(2)_r \simeq SO(4)$ rotational symmetries of $\mathbb R^4$,\footnote{The $\Omega$-deformation breaks the rotational symmetry to $SO(2)_1\times SO(2)_2$. In terms of their Cartan generators $J_1, J_2$ one has $J_l = \frac{1}{2}(J_1-J_2)$ and $J_r = \frac{1}{2}(J_1+J_2)$.} while $\mathcal R$ is the $SU(2)_{\mathcal R}$ generator. We define $J=J_r+\mathcal{R}$. Furthermore, $G$ denotes the collection of Cartan generators of the gauge symmetry and $F$ those of the flavor symmetry; $\Sigma$ and $M$ are their respective equivariant parameters. The localization locus consists of point-instantons located at the origin. The integration over the $k$-instanton moduli space is captured by the non-perturbative $k$-instanton partition function, which equals the partition function of the zero-dimensional ADHM model, read off as the worldvolume theory on the D0-branes, computed by localization with respect to the induced supercharge, that is, with respect to the supercharge in its $\mathcal N=(0,2)$ supersymmetry (sub)algebra satisfying the same square as in \eqref{QsqIPF} (up to gauge transformations). From the $\mathcal N=(0,2)$ zero-dimensional point of view, the charges $J=J_r + \mathcal R$ and $J_l$ appear as flavor charges, as do both $G$ and $F$. $\mathcal Q^2$ additionally includes $\phi\cdot G_{0d}$.
Dimensionally reducing the localization results of \cite{Hori:2014tda} and in particular \cite{Hwang:2014uwa}, it is now straightforward to compute $Z_k^{\mathbb R^4}$ as \begin{equation}\label{Zkintegral} Z_k^{\mathbb R^4}= \int_{\text{JK}} \prod_{I = 1}^k {d{\phi _I}\ {Z_{{\text{D0-D0}}}}(\phi)\ Z_{\text{D0-D4}_1} (\phi,\tilde M)\ {Z_{{\text{D0-D4}_2}}}(\phi ,\Sigma ')}\ Z_{\text{D0-D4}_3} (\phi,M)\;, \end{equation} where \begin{align} {Z_{{\text{D0-D0}}}}(\phi ) = &\; \prod_{I,J = 1}^k {\frac{{({\phi _{IJ}})'({\phi _{IJ}} + {\epsilon _1} + {\epsilon _2})}}{{({\phi _{IJ}} + {\epsilon _1})({\phi _{IJ}} + {\epsilon _2})}}} \;, \\ Z_{\text{D0-D4}_1} (\phi,\tilde M) = &\; \prod_{I = 1}^k {\prod_{A = 1}^N {({\phi _I} - i{\tilde M_A})} } \;, \qquad Z_{\text{D0-D4}_3} (\phi, M) = \; \prod_{I = 1}^k {\prod_{A = 1}^N {({\phi _I} - i{ M_A})} }\;,\\ {Z_{{\text{D0-D4}_2}}}(\phi ,\Sigma ') = & \; \prod_{I = 1}^k {\prod_{A = 1}^N {\frac{1}{{({\phi _I} - i\Sigma _A' + \frac{1}{2}({\epsilon _1} + {\epsilon _2}))( - {\phi _I} + i\Sigma _A' + \frac{1}{2}({\epsilon _1} + {\epsilon _2}))}}} } \;, \end{align} where ${\phi _{IJ}} = \phi_I - \phi_J$, and the prime on $({\phi_{IJ}})'$ indicates to omit the factors with $I=J$. Here we denoted the equivariant parameters for the various $SU(N)$ symmetries as in the previous section. The integral \eqref{Zkintegral} is computed using the Jeffrey-Kirwan residue prescription. We choose to select the contributions of negatively charged fields, and thus collect the residues of the poles defined by solving the equations \begin{equation} {\phi _I} = i\Sigma_A' + \frac{1}{2}({\epsilon _1} + {\epsilon _2}) , \qquad {\phi _I} = \phi _J + \epsilon_1 , \qquad {\phi _I} = {\phi _J} + {\epsilon _2} \;. 
\end{equation} The contributing poles are labeled by $N$-tuples of Young diagrams $\vec Y = \{Y_A\}$ such that $\sum_A |Y_A| = k$, \begin{equation} {\phi _I} = i{\Sigma _A'} - \frac{1}{2}({\epsilon _1} + {\epsilon _2}) + r{\epsilon _1} + s{\epsilon _2}\;, \qquad (r,s) \in Y_A\;. \end{equation} It is easy to convince oneself that summing over the residues precisely reproduces the $q^k$ term of the SQCD instanton partition function given in \eqref{SQCDIPF}. Let us now re-introduce the intersecting surface defects in the setup.\footnote{See also \cite{Gaiotto:2014ina} for an analysis of the equivariant integral of a five-dimensional theory in the presence of three-dimensional chiral multiplets.} The brane configuration was already depicted in the left part of figure \ref{fig:SQCD_Brane_Instanton}, now also considering the gray branes. Upon inserting the defects, the $\mathcal N=(0,4)$ symmetry, dimensionally reduced to zero dimensions, carried by the D0-branes is broken to the dimensional reduction of $\mathcal N=(0,2)$. We have used precisely this subalgebra in the previous paragraphs already. The zero-dimensional model is enriched by the modes arising from the quantization of the open strings stretching between the D0 and D2$_{\text{L}}$ and D2$_{\text{R}}$-branes. They each contribute an additional $\mathcal N=(0,2)$ Fermi and chiral multiplet,\footnote{The brane system consisting of a stack of D0-branes and one stack of D2-branes, each ending on an NS5-brane, preserves on the D0-brane the dimensional reduction to zero dimensions of $\mathcal N=(2,2)$ supersymmetry. The open string modes thus organize themselves in an $\mathcal N=(2,2)$ chiral multiplet.} and the final ADHM quiver theory is depicted in the right part of figure \ref{fig:SQCD_Brane_Instanton}. The additional multiplets carry charges under $J$ and $J_l$ as in table \ref{table:ADHMstrings}. It is then straightforward to include their contributions to the ADHM matrix model. 
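The labeling of the contributing poles by $N$-tuples of Young diagrams described above can be made concrete in a short sketch (our own conventions: boxes $(r,s)$ start at $(1,1)$, diagrams are stored as row-length tuples):

```python
def pole_locations(Ytuple, Sigma_p, eps1, eps2):
    """JK poles contributing to the ADHM integral: one phi per box (r, s)
    of each diagram Y_A, with (r, s) starting at (1, 1), so that the
    corner box sits at phi = i Sigma'_A + (eps1 + eps2)/2."""
    phis = []
    for A, rows in enumerate(Ytuple):
        for r, length in enumerate(rows, start=1):
            for s in range(1, length + 1):
                phis.append(1j * Sigma_p[A] - (eps1 + eps2) / 2
                            + r * eps1 + s * eps2)
    return phis
```

In particular, the number of poles equals the total number of boxes $k = \sum_A |Y_A|$, so the residue sum is graded exactly as the $q^k$ expansion of the instanton partition function.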
The D0-D2 contribution is given by \begin{multline} Z_\text{D0-D2}(\phi ,\Sigma ',\sigma ,B) \\ \equiv \prod_{I = 1}^k \prod_{A = 1}^N \Bigg[ \prod_{a = 0}^{n_A^{\text{L}} - 1} \frac{{\phi _I} - \epsilon_2(i\sigma_a^{\text{L}} + \frac{B_a^{\text{L}}}{2})+\frac{1}{2}(\epsilon_1+\epsilon_2)}{{\phi _I} - \epsilon_2(i\sigma_a^{\text{L}} + \frac{B_a^{\text{L}}}{2})-\frac{1}{2}(\epsilon_1-\epsilon_2)} \prod_{a = 0}^{n_A^{\text{R}} - 1} \frac{{\phi _I} - \epsilon_1(i\sigma_a^{\text{R}} + \frac{B_a^{\text{R}}}{2})+\frac{1}{2}(\epsilon_1+\epsilon_2)}{{\phi _I} - \epsilon_1(i\sigma_a^{\text{R}} + \frac{B_a^{\text{R}}}{2})+\frac{1}{2}(\epsilon_1-\epsilon_2)} \Bigg]\;, \end{multline} where we used $\sigma_a^{\text{L/R}} -i \frac{B_a^{\text{L/R}}}{2}$ as the gauge equivariant parameter of the $U(n^{\text{L/R}})$ symmetry, as this is the combination that enters in our computations on $S^4_b$ at the north pole. (The south pole contribution would be obtained by changing the sign in front of $B$.)\footnote{Note that we are using the effective description obtained by performing a change of variables in the two-dimensional integrals and omitting some irrelevant constant prefactor as explained below equation \eqref{def:SQCDA-defect-matrix-model}.} Noting that our JK-prescription does not select the poles of the above factor, it is straightforward to see that the matrix integral \begin{equation}\label{Zkintegraldefects} Z_k^{\mathbb R^2_{\text{L}} \cup \mathbb R^2_{\text{R}} \subset \mathbb R^4}= \int_{\text{JK}} \prod_{I = 1}^k {d{\phi _I}\, {Z_{{\text{D0-D0}}}}(\phi)\, Z_{\text{D0-D4}_1} (\phi,\tilde M)\, {Z_{{\text{D0-D4}_2}}}(\phi ,\Sigma ')}\, Z_{\text{D0-D4}_3} (\phi,M)\, Z_\text{D0-D2}(\phi ,\Sigma ',\sigma ,B) \end{equation} precisely reproduces the modified instanton partition function as it appeared in the last line of \eqref{def:SQCDA-defect-matrix-model}.
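As a schematic numerical cross-check of the D0-D2 factor entering \eqref{Zkintegraldefects}, one can tabulate it directly; the sketch below flattens the partition-dependent ranges into a single product over the $U(n^{\text{L/R}})$ variables and uses our own naming, so it is an illustration of the structure rather than a definitive implementation:

```python
def z_D0_D2(phi, sigmaL, BL, sigmaR, BR, eps1, eps2):
    """Schematic D0-D2 factor of the ADHM integrand: for every instanton
    variable phi_I, one ratio of linear factors per defect variable on
    each plane, built from the north-pole combination i sigma + B/2."""
    val = complex(1)
    half_sum, half_dif = (eps1 + eps2) / 2, (eps1 - eps2) / 2
    for p in phi:
        for a in range(len(sigmaL)):
            x = p - eps2 * (1j * sigmaL[a] + BL[a] / 2)
            val *= (x + half_sum) / (x - half_dif)   # defect on R^2_L
        for a in range(len(sigmaR)):
            x = p - eps1 * (1j * sigmaR[a] + BR[a] / 2)
            val *= (x + half_sum) / (x + half_dif)   # defect on R^2_R
    return val
```

The asymmetry in the sign of $\frac{1}{2}(\epsilon_1-\epsilon_2)$ between the two planes is exactly the one visible in the two denominators of the formula above, consistent with the $\epsilon_1 \leftrightarrow \epsilon_2$ exchange.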
\section{Discussion}\label{sec:conclusions} In this paper, we have extended the study of intersecting codimension two defects, initiated in \cite{Gomis:2016ljm}, to interacting four-dimensional theories. We have employed the Higgsing prescription of \cite{Gaiotto:2012xa,Gaiotto:2014ina} to compute the vacuum expectation value of intersecting M2-brane defects, labeled by $n^{\text{L}}$ and $n^{\text{R}}$-fold symmetric representations respectively, inserted in four-dimensional $\mathcal N=2$ SQCD. Subsequently we cast the result in the form of a partition function of a coupled 4d/2d/0d system, see \eqref{def:SQCDA-defect-matrix-model}, which takes the schematic form\footnote{Before tackling this computation, we also considered the theory of $N^2$ free hypermultiplets. Also for this case, we cast the result in the form of a partition function of a coupled system.} \begin{equation}\label{SQCD_conclusion} Z^{(\mathcal{T},S^2_\text{L} \cup S^2_\text{R} \subset S^4_b)} = \SumInt \ Z_{\text{pert}}^{(\mathcal T,S^4_b)}\ Z_{\text{pert}}^{(\tau^{\text{L}},S^2_\text{L})} \ Z_{\text{pert}}^{(\tau^{\text{R}},S^2_\text{R})} \ Z^{+}_{\text{intersection}}\ Z^{-}_{\text{intersection}}\ \left|Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)}\right|^2\;. \end{equation} The leftmost subfigure of figure \ref{fig:ADHM_conclusions} depicts the 4d/2d/0d coupled system under consideration. The theory $\mathcal T$ is four-dimensional $\mathcal N=2$ SQCD and $\tau^{\text{L/R}}$ are identified as two-dimensional $\mathcal N=(2,2)$ $U(n^{\text{L/R}})$ SQCDA. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{./Figures-PDF/spacetime-ADHM-quivers-symmetric} \caption{\label{fig:ADHM_conclusions} The type IIA brane-configuration in the middle describes the 4d/2d/0d coupled system on the left as the worldvolume theory of the D4/D2$_\text{L}$/D2$_\text{R}$-branes, see also figure \ref{fig:InsertSymmetricsSQCD}. 
The worldvolume theory of the $k$ D0-branes is shown on the right, see figure \ref{fig:SQCD_Brane_Instanton} for more details. Its partition function computes the $k$-instanton partition function of the 4d/2d/0d coupled system.} \end{figure} Our computation provides an explicit formula for the instanton partition function in the presence of the above-mentioned intersecting defects, $Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)}$, appearing in \eqref{SQCD_conclusion}, see equation \eqref{def:SQCDA-defect-matrix-model}. We also found the ADHM model whose partition function computes the $k$-instanton contribution to $Z_{\text{inst}}^{(\mathcal T, \mathbb R^2_{\text{L}}\cup \mathbb R^2_{\text{R}} \subset \mathbb R^4)}$, see the rightmost subfigure in figure \ref{fig:ADHM_conclusions}. This model can be read off from a D-brane construction, as also indicated in the figure. Starting from a theory $\mathcal T$ whose flavor symmetry contains an $SU(N)$ factor, one can successively gauge in multiple, say $p$, theories of $N^2$ free hypermultiplets. The resulting theory $\widetilde {\mathcal T}$ has $p$ additional $U(1)$ factors in its flavor symmetry group compared to the original theory $\mathcal T$. It is clear that one can apply the Higgsing prescription consecutively to each of these starting from the outermost one along the quiver. The associated type IIA brane-realization is a simple generalization of the one we have discussed in section \ref{subsec:brane realization}. We depict the case $p=2$ for $\mathcal T$ the theory of $N^2$ free hypermultiplets in figure \ref{figure:consecutive-Higgsing-brane-construction}.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{./Figures-PDF/consecutive-Higgsing-brane-construction} \caption{\label{figure:consecutive-Higgsing-brane-construction} One starts with the theory $\mathcal T$ of $N^2$ free hypermultiplets and successively gauges in two more theories of $N^2$ free hypermultiplets. The brane realization of the resulting theory $\widetilde {\mathcal T}$ is shown in the leftmost figure. One can then apply the Higgsing prescription twice, corresponding to pulling the two rightmost NS5-branes away from the main stack, and stretching $(n^\text{L}_2 - n^\text{L}_1, n^\text{R}_2 - n^\text{R}_1)$ (D2$_\text{L}$, D2$_\text{R}$) branes and ($n^\text{L}_1, n^\text{R}_1$) (D2$_\text{L}$, D2$_\text{R}$) branes respectively in between them and the flavor D4-branes. The two-dimensional part of the system is in its Higgs phase, and can be brought into its Coulomb phase by aligning the two displaced NS5-branes, as shown in the middle figure. The corresponding 4d/2d/0d coupled system can be read off easily, and is shown in the last figure. This system was also considered in \cite{Gomis:2016ljm}.} \end{figure} The corresponding 4d/2d/0d coupled system can be read off from the brane picture and is given in the rightmost subfigure in figure \ref{figure:consecutive-Higgsing-brane-construction}. We conjecture that the M-theory interpretation of this procedure corresponds to the insertion of multiple intersecting M2-branes ending on the main stack of M5-branes, describing theory $\mathcal T$, all labeled by symmetric representations. General intersecting M2-brane defects labeled by two generic irreducible representations $(\mathcal{R}^\text{L}, \mathcal{R}^\text{R})$ of $SU(N)$ can also be described by 4d/2d/0d coupled systems \cite{Gomis:2016ljm}; when the four-dimensional theory is $\mathcal N=2$ SQCD, we have depicted an example in the bottom left of figure \ref{figure:spacetime-ADHM-quivers-antisymmetric}.
The two-dimensional degrees of freedom are described by quiver gauge theories which encode the representation $\mathcal R$ through their gauge group ranks \cite{Gomis:2014eya}. The coupled system involves zero-dimensional Fermi multiplets, transforming in the bifundamental representation of the innermost two-dimensional gauge groups, as degrees of freedom living at the intersection points. Such a 4d/2d/0d coupled system can be engineered as the worldvolume theory of the D4/D2$_\text{L}$/D2$_\text{R}$-branes in the type IIA system shown in figure \ref{figure:spacetime-ADHM-quivers-antisymmetric}. When attempting to use this coupled system to compute the vacuum expectation value of general intersecting M2-brane defects, one needs as an input the instanton partition function in the presence of the defects. We propose that the structure of its $k$-instanton ADHM model can also in this case be read off from the D0-brane worldvolume theory in the type IIA system. The resulting quiver theory, which has two-dimensional $\mathcal{N} = (0,2)$ supersymmetry reduced to zero dimensions, is also included in figure \ref{figure:spacetime-ADHM-quivers-antisymmetric}. It is almost the same as the one in figure \ref{fig:ADHM_conclusions}, up to the orientation of an arrow. This new ADHM model however leads to a dramatically different ADHM integration: extra poles coming from the factor $Z_\text{D0-D2}$ will be selected by the JK-prescription, and the result of the ADHM integral is a double sum over $N$-tuples of Young diagrams and, separately, $n^\text{L}$-tuples of Young diagrams, together having $k$ boxes in total. It would be very interesting to study these new ADHM integrals in more detail. \begin{figure}[t!]
\centering \includegraphics[width=\textwidth]{./Figures-PDF/spacetime-ADHM-quivers-antisymmetric} \caption{\label{figure:spacetime-ADHM-quivers-antisymmetric} The type IIA brane realization of general intersecting M2-brane defects labeled by two irreducible representations $(\mathcal{R}^\text{L}, \mathcal{R}^\text{R})$ inserted in SQCD, as well as its corresponding 4d/2d/0d coupled system and ADHM model are depicted.} \end{figure} When $\mathcal{R}^\text{L/R}$ are both symmetric representations, the descriptions of figures \ref{figure:spacetime-ADHM-quivers-antisymmetric} and \ref{fig:ADHM_conclusions} are both valid. In \cite{Gomis:2016ljm}, the equality of the resulting squashed four-sphere partition functions was verified for four-dimensional theories without gauge fields, and for defects labeled by fundamental representations. It would be of interest to study the duality between the two descriptions in interacting theories. A construction for general intersecting M2-brane defects in terms of a renormalization group flow from a larger theory $\widetilde{\mathcal T}$ triggered by some position-dependent Higgs branch vacuum expectation value is currently unknown. Presumably it requires $\widetilde{\mathcal T}$ to be a non-Lagrangian theory of class $\mathcal S$. Reversing the logic, one might hope to recover information about the partition function of the UV non-Lagrangian theory $\widetilde {\mathcal{T}}$ by investigating the partition function of all intersecting M2-brane defects, for which we have a nice quiver description, and to which the UV theory can flow. Note that Higgs branch localized expressions of partition functions are a simple example of this aspiration \cite{Pan:2015hza}. It was conjectured in \cite{Gomis:2016ljm} that the partition function of $SU(N)$ SQCD in the presence of general intersecting M2-brane defects can be identified with Liouville/Toda five-point functions.
In particular, identifying $x' = qz$, $x = z$, with $|q|, |z| < 1$, along with other parameter identifications, one expects \begin{align} Z^{(\mathcal T, S_{\text{L}}^2 \cup S_{\text{R}}^2 \subset S_b^4)} (q,z,M',\tilde M) = \mathcal{A}(x, x') \left\langle {{{\hat V}_{{\alpha _0}}}(0)\ \hat V_{-b \Omega_{\mathcal{R}^\text{L}} - b^{-1} \Omega_{\mathcal{R}^\text{R}} }(x')\ {{\hat V}_\beta }(x)\ {{\hat V}_{{\alpha _1}}}(1)\ {{\hat V}_{{\alpha _\infty }}}(\infty )} \right\rangle \;, \label{AGT-correspondence-defect} \end{align} where $\mathcal{A}(x, x') \equiv A|x'{|^{2{\gamma _0}}}|1 - x'{|^{2{\gamma _1}}}|x{|^{2{\gamma _2}}}|1 - x{|^{2{\gamma _3}}}|x - x'{|^{2\gamma _4 }}$, for some $\gamma_i$. Furthermore $\alpha_0$, $\alpha_\infty$ are generic, while $\beta$, $\alpha_1$ are semi-degenerate momenta determined in terms of the masses of the gauge theory. Let us perform a few checks of this statement, leaving a more thorough analysis for the future. Consider the simple case with $\mathcal{R}^\text{L}=\mathbf 1$ and $\mathcal{R}^\text{R} = \operatorname{symm}^{n^\text{R}} \square$. In the OPE limit $1 > |q| > |z| \to 0$, the leading terms in $z$ read, up to the factor $\mathcal{A}$, \begin{equation} \sum\limits_{\mathfrak{t}} |z|^{2\Delta ({\alpha _0} - b^{-1}\mathfrak{t}) - 2\Delta ({\alpha _0}) - 2\Delta ( - {n^{\text{R}}}b^{-1}h_1)}\hat C_{{\alpha _0}, - {n^{\text{R}}}b^{-1}h_1}^{{\alpha _0} - b^{-1}\mathfrak{t}}\left\langle {{{\hat V}_{{\alpha _0} - b^{-1}\mathfrak{t}}}(0){{\hat V}_\beta }(x){{\hat V}_{{\alpha _1}}}(1){{\hat V}_{{\alpha _\infty }}}(\infty )} \right\rangle \;. \label{AGT-with-defect-small-z} \end{equation} Here $\mathfrak{t} = \sum_{A = 1}^N n^\text{R}_A h_A$ and $h_A$ are the weights of the fundamental representation of $SU(N)$. The set of natural numbers $\vec n^\text{R}$ is any partition of $n^\text{R}$, and the sum over $\mathfrak{t}$ means summing over all such partitions. 
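The sum over $\mathfrak{t}$ is, concretely, a sum over all tuples $\vec n^\text{R}=(n^\text{R}_1,\dots,n^\text{R}_N)$ of nonnegative integers with $n^\text{R}_1+\dots+n^\text{R}_N=n^\text{R}$. As a small illustration (not part of the derivation; the function name is ours), these tuples can be enumerated as follows:

```python
def weight_tuples(n_total, N):
    """All tuples (n_1, ..., n_N) of nonnegative integers summing to
    n_total; each tuple labels one term t = sum_A n_A h_A in the OPE
    sum over Higgs vacua."""
    if N == 1:
        return [(n_total,)]
    tuples = []
    for first in range(n_total + 1):
        for rest in weight_tuples(n_total - first, N - 1):
            tuples.append((first,) + rest)
    return tuples


# e.g. n^R = 2 units distributed over the N = 3 weights of SU(3)
print(weight_tuples(2, 3))
```

There are $\binom{n^\text{R}+N-1}{N-1}$ such tuples, matching the dimension of the symmetric representation $\operatorname{symm}^{n^\text{R}}\square$ of $SU(N)$, whose weights they label.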
On the gauge theory side, in the $z \to 0$ limit, one can close the contour of the integration over $\sigma^\text{R}$ in the partition function $Z^{(\mathcal{T}, S^2_\text{R} \subset S^4_b)}$, as in \eqref{def:SQCDA-defect-matrix-model} with $n^\text{L} = 0$, and obtain the Higgs branch localized expression as a sum over the two-dimensional Higgs vacua labeled by partitions $\vec n^\text{R}$. This sum is mapped to the sum over $\mathfrak{t}$ in \eqref{AGT-with-defect-small-z}. The leading terms in $z$ simply come from the zero-vortex sector. The four-dimensional matrix integral in each leading term, which now depends on the Higgs vacuum $\vec n^\text{R}$, is simply an $S^4_b$-partition function with shifted fundamental masses $M_A^{{n^{\text{R}}}} \equiv M'_A - \hat M + i(b + {b^{ - 1}})/2 + in_A^{\text{R}}b^{-1}$, thanks to \begin{equation} z^{\mathbb{R}^4}_\text{fund} (\vec Y',{{M'}^\varepsilon }) \ z_\text{defect}^{\mathbb{R}_{\text{R}}^2 \subset \mathbb{R}^4}(\vec Y',\Sigma ',i{\sigma ^{\text{R}}}, {B^{\text{R}}}) \to {z^{\mathbb{R}^4}_{{\text{fund}}}}(\vec Y',{({M^{{n^{\text{R}}}}})^\varepsilon })\;, \end{equation} where $\to$ indicates the evaluation at the Higgs vacuum $\vec n^\text{R}$. This $S^4_b$-partition function is mapped to the four-point function in \eqref{AGT-with-defect-small-z}. In particular, the four-dimensional one-loop determinant together with the two-dimensional one-loop determinant evaluated at the Higgs vacuum $\vec n^\text{R}$ is precisely equal to the structure constants $\hat C_{{\alpha _0}, - {n^{\text{R}}}b^{-1}h_1}^{{\alpha _0} - b^{-1}\mathfrak{t}} \hat C_{\alpha_0 - b^{-1} \mathfrak{t} , \beta}^{\alpha} \hat C_{\alpha, \alpha_1, \alpha_\infty}$, up to some uninteresting constants. 
In fact, the statement \eqref{AGT-correspondence-defect} in the case of $\mathcal{R}^\text{L/R}$ being both symmetric, can be viewed as a degeneration of the well-established AGT conjecture without surface defects, by considering the commuting diagram in figure \ref{degeneration-of-AGT} \cite{Bonelli:2011wx,Bonelli:2011fq,Nieri:2013vba}. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{Figures-PDF/degeneration-of-AGT.pdf} \caption{\label{degeneration-of-AGT} A commuting diagram showing the relation between the Higgsing prescription and degenerating semi-degenerate momentum.} \end{figure} We remind ourselves that $i\check M^{n^\text{L}, n^\text{R}} = (b{n^{\text{L}}} + {b^{ - 1}}{n^{\text{R}}})/N + (b + {b^{ - 1}})/2$, and in the AGT correspondence $\beta' = \left[ {N(b + {b^{ - 1}})/2 - \sum_{A = 1}^N {i{M_A}} } \right]{h_1}$. The Higgsing prescription sends the $U(1)$ mass $\check M \to \check M^{n^\text{L}, n^\text{R}}$, which is equivalent to degenerating the semi-degenerate momentum $\beta' \to - n^\text{L} b h_1 - n^\text{R} b^{-1} h_1$. The correspondence \eqref{AGT-correspondence-defect} is a great tool to discover and understand new dualities of the 4d/2d/0d coupled system. The Liouville/Toda correlation functions enjoy various symmetries, including, but not limited to, the invariance under conjugation and Weyl reflection of the momenta, and conformal and crossing symmetries. It would be interesting to translate these CFT symmetries to dualities on the gauge theory side, especially when the intersecting defects are coupled to four-dimensional interacting theories. \section*{Acknowledgments} The authors would like to thank Giulio Bonelli, Jaume Gomis, Bruno Le Floch, Fabrizio Nieri, Daniel Park, Massimiliano Ronzani, Alessandro Tanzini and Maxim Zabzine for helpful conversations and useful suggestions. We are also grateful to Bruno Le Floch for comments on a draft of this paper. Y.P. 
is supported in part by Vetenskapsr{\aa}det under grant {\#}2014-5517, by the STINT grant and by the grant ``Geometry and Physics'' from the Knut and Alice Wallenberg foundation. The work of W.P. is supported in part by the DOE grant DOE-SC0010008.
\section{Introduction} \subsection{Manin's conjecture} Manin's conjecture \cite{MR89m:11060} predicts an asymptotic formula for the number of rational points of bounded height on Fano varieties. Its most classical version is the following: let $X$ be a smooth Fano variety over $\mathbb{Q}$ whose set of rational points is Zariski dense. Let $H \colon X(\mathbb{Q}) \to \mathbb{R}$ be an anticanonical height function. For an open subset $U$ of $X$, let $N_{X,U,H}(B)$ denote the number of $x \in U(\mathbb{Q})$ with $H(x) \le B$. Then one expects that there is a dense open subset $U \subseteq X$ and a positive number $c$ such that \begin{equation}\label{MC} N_{X, U,H}(B) = (1+o(1)) c B(\log B)^{\rank \Pic X-1}. \end{equation} Peyre \cite{MR1340296} proposed a product formula for $c$, and in the sequel we refer to this predicted value of $c$ as Peyre's constant. When the dimension is large compared to the degree of the variety, one may apply the circle method to estimate $N_{X,U,H}(B)$. In this way, Browning and Heath-Brown \cite{MR3605019} confirmed Manin's conjecture whenever $X$ is geometrically integral and the inequality $\dim X \ge ((\deg X)-1)2^{\deg X}-1$ holds. The asymptotic formula \eqref{MC} is also known for several classes of equivariant compactifications of algebraic groups or homogeneous spaces, namely certain horospherical varieties (flag varieties \cite{MR89m:11060}, toric varieties \cite{MR1620682} and toric bundles over flag varieties \cite{MR1723811}), wonderful compactifications of semisimple groups of adjoint type \cite{MR2328719,MR2482443}, and equivariant $\mathbb{G}_\mathrm{a}^n$-compactifications \cite{MR1906155}. Here the proofs use harmonic analysis on adelic points, and the cases covered include some low-dimensional Fano varieties.
In absence of additional structure, we only know four more low-dimensional cases: Manin's conjecture was verified for two smooth quintic del Pezzo surfaces \cite{MR1909606,MR2099200}, for one smooth quartic del Pezzo surface \cite{MR2838351}, and (in the thin set version) for a quadric bundle in $\mathbb{P}^3 \times \mathbb{P}^3$ \cite{BHB}. Beyond that, there are many more results on versions of Manin's conjecture for \emph{singular} varieties. This is not surprising because usually analytic techniques are easier to implement in the presence of singularities. In this paper, we initiate a systematic study of Manin's conjecture for varieties for which we have access to the Cox ring, and where a universal torsor is given by a polynomial of the shape \eqref{torsor} below. This includes a fairly large class of cases, and we proceed to describe our main application. \subsection{Spherical varieties} Let $G$ be a connected reductive group. A normal $G$-variety $X$ is called spherical if a Borel subgroup of $G$ has a dense orbit in $X$. Spherical varieties have a rich theory. They include symmetric varieties, and the corresponding space $L^2(X)$ has been the subject of intense investigation from the point of view of (local) harmonic analysis and the (relative) Langlands program (e.\,g.,~ \cite{Sa, SV}). Spherical varieties also admit a combinatorial description. This is achieved by the recently completed Luna program \cite{lun01,bp15b,cf2,los09a} and the Luna--Vust theory of spherical embeddings \cite{lv83,kno91}. We recall the relevant theory in Section~\ref{sec2} and refer to \cite{bl11,per14,tim11} as general references. In this paper, we are interested in the size of smooth spherical varieties in the context of Manin's conjecture. If the acting group $G$ has semisimple rank zero, then $G$ is a torus and Manin's conjecture is known (\cite{MR1620682}, see also \cite{MR1679841}). The next interesting case is $G$ of semisimple rank one. 
Here we may assume $G = \mathrm{SL}_2\times \mathbb{G}_\mathrm{m}^r$ by passing to a finite cover (see Section~\ref{sec:ssr1} for more details). Let $G/H = (\mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}^r)/H$ be the open orbit in $X$. Let $H'\times \mathbb{G}_\mathrm{m}^r = H\cdot \mathbb{G}_\mathrm{m}^r \subseteq \mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}^r$. Then the homogeneous space $\mathrm{SL}_2/H'$ is spherical, and hence either $H'$ is a maximal torus (\emph{the case $T$}) or $H'$ is the normalizer of a maximal torus in $\mathrm{SL}_2$ (\emph{the case $N$}) or the homogeneous space $\mathrm{SL}_2/H'$ is horospherical, in which case $X$ is isomorphic (as an abstract variety, possibly with a different group action) to a toric variety, so we may exclude this case from our discussion. \subsection{Spherical Fano threefolds} We start our discussion with dimension 3, the smallest dimension where nonhorospherical spherical varieties of semisimple rank one exist. A complete classification of nontoric smooth spherical Fano threefolds over $\overline{\mathbb{Q}}$ was established by Hofscheier \cite{hofscheier}, cf.\ Table~\ref{tab:classification_spherical}. In this situation, the acting group always has semisimple rank one, so our present setup is in fact already the general picture, and the following discussion applies to all nontoric smooth spherical Fano threefolds. There are precisely four nonhorospherical examples of type $T$ that are not equivariant $\Bbb{G}_{\rm a}^3$-compactifications. They have natural split forms $X_1,\dots,X_4$ over $\mathbb{Q}$, which we describe in Section~\ref{sec:fano3folds} in detail; see Table~\ref{tab:v18} for an overview. In the classification of smooth Fano threefolds by Iskovskikh \cite{I1,I2} and Mori--Mukai \cite{MR641971}, they have types III.24, III.20 (of Picard number $3$), IV.8, IV.7 (of Picard number $4$), respectively. 
In Section~\ref{sec:height}, we will define natural anticanonical height functions $H_j \colon X_j(\mathbb{Q}) \to \mathbb{R}$ using the anticanonical monomials in their Cox rings. We establish the Manin--Peyre conjecture in all these cases. We write $N_j(B)$ for $N_{X_j, U_j, H_j}(B)$, where here and in all subsequent cases, the open subset $U_j$ will be the set of all points with nonvanishing Cox coordinates. \begin{theorem}\label{dim3} The Manin--Peyre conjecture holds for the varieties $X_1, \ldots, X_4$. More precisely, there exist explicit constants $C_1,\ldots, C_4$ such that $$N_j(B) = (1 + o(1))C_j B (\log B)^{\rank \Pic X_j -1} $$ for $1 \leq j \leq 4$. The values of $C_j$ are the ones predicted by Peyre. \end{theorem} It is a fun exercise to compute $C_j$ explicitly (cf.\ Appendix~\ref{A}), for which the interesting and apparently previously unknown integral identities involving sin-integrals and Fresnel integrals in Lemma~\ref{sin} play an important role. One obtains \begin{displaymath} \begin{split} & C_1 = \frac{40 -\pi^2}{12} \prod_{p} (1 - p^{-2})^3, \quad C_3 = \frac{5(258 - 4\pi^2)}{1296} \prod_p\left(1-\frac 1 p\right)^4\left(1+\frac 4 p+\frac 4{p^2}+\frac 1{p^3}\right),\\ & C_2 = \frac{170 - \pi^2 - 96\log 2}{36} \prod_{p} (1 - p^{-2})^3, \quad C_4 = \frac{94-2\pi^2 }{72}\prod_p\left(1-\frac 1 p\right)^4\left(1+\frac 4 p+\frac 4{p^2}+\frac 1{p^3}\right). \end{split} \end{displaymath} Theorem~\ref{dim3} is the first example where Manin's conjecture is established for smooth Fano threefolds that are not covered by general results concerning equivariant compactifications of algebraic groups or homogeneous spaces. It is also the first example of a Manin--Peyre formula for smooth spherical varieties that is not covered by general results. 
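The Euler products above converge rapidly and can be evaluated in closed form via $\prod_p(1-p^{-2})=1/\zeta(2)=6/\pi^2$; for instance $C_1=\frac{40-\pi^2}{12}\,(6/\pi^2)^3\approx 0.5641$. A quick numerical sanity check of this value (a sketch; the function names are ours):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def C1(prime_bound=10**4):
    """Truncated Euler product for C_1 = (40 - pi^2)/12 * prod_p (1 - p^-2)^3."""
    prod = 1.0
    for p in primes_up_to(prime_bound):
        prod *= (1.0 - p**-2) ** 3
    return (40.0 - math.pi**2) / 12.0 * prod
```

Already with primes below $10^4$, the truncated product agrees with the closed form $(40-\pi^2)/12\cdot(6/\pi^2)^3$ to four decimal places.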
Theorem~\ref{dim3} in fact confirms the Manin--Peyre conjecture for \emph{all smooth spherical Fano threefolds of semisimple rank one and type $T$.} Theorem~\ref{dim3} is an easy consequence of Theorem~\ref{manin-cor} that proves the Manin--Peyre conjecture for smooth split spherical Fano varieties of arbitrary dimension with semisimple rank one and type $T$, subject to a number of technical conditions that are straightforward to check in every given instance. Similar methods apply also to smooth spherical Fano varieties of type $N$, but these have some additional features to which we return in a subsequent paper. Previously the knowledge of the number of rational points on these varieties has been much less precise. Manin \cite{MR1199203} shows that smooth Fano threefolds have at least linear growth for rational points in Zariski dense open subsets of bounded anticanonical height over sufficiently large ground fields. A closer inspection of his arguments reveals in fact lower bounds of the correct order of magnitude: $N_j(B) \gg B(\log B)^{\rank \Pic X_j - 1}$ in the situation of Theorem~\ref{dim3} (cf.\ the proof of \cite[Proposition~1.4]{MR1199203} as the $X_j$ in Theorem~\ref{dim3} are blow-ups of toric varieties). Tanimoto \cite[\S 7]{arXiv:1812.03423} proves the upper bounds $N_j(B) \ll B^{5/2 + \varepsilon}$ for $j = 1, 2, 4$ and $N_3(B) \ll B^{2+ \varepsilon}$.
The four varieties $X_5, X_6, X_7, X_8$ that we investigate here are smooth spherical Fano varieties of semisimple rank one and type $T$ of dimension $4, 5, 6, 7$, respectively, with $\rank\Pic X_5=5$, $\rank\Pic X_6=3$, $\rank\Pic X_7=5$, and $\rank\Pic X_8=6$. We refer to Section~\ref{sec:geometry_X5_X6} for their combinatorial description and Table \ref{tab:v18} for a quick overview. Like all the previous examples, they are neither horospherical nor wonderful compactifications of semisimple groups of adjoint type nor equivariant compactifications of $\mathbb{G}_\mathrm{a}^n$; hence Manin's conjecture for these varieties does not follow from previous results (cf.\ Appendix~\ref{rem:not}). \begin{theorem}\label{dim4} The Manin--Peyre conjecture holds for the varieties $X_5, \ldots, X_8$. More precisely, there exist explicit constants $C_5, \ldots, C_8 > 0$ such that $$N_j(B) = (1 + o(1))C_j B (\log B)^{\rank\Pic X_j-1} $$ for $j = 5, \ldots, 8$. The values of $C_j$ are the ones predicted by Peyre. \end{theorem} \subsection{The methods} The starting point of the quantitative analysis of Fano varieties in this paper is a good understanding of their Cox ring. We use it to pass to a universal torsor and translate Manin's conjecture into an explicit counting problem whose structure we describe in a moment and that is amenable to analytic techniques. The descent to a universal torsor is a common technique in analytic approaches to Manin's conjecture, but in many cases it proceeds by ad hoc considerations. Here we take a more systematic approach and derive the passage from the Cox ring to the explicit counting problem in considerable generality. This is summarized in Proposition~\ref{prop:countingproblem_abstract}.
Next we take the opportunity to express Peyre's constant in terms of Cox coordinates in Proposition~\ref{prop:peyre} as a product of a surface integral, the volume of a polytope and an Euler product, so that a verification of the complete Manin--Peyre conjecture is possible without additional ad hoc computations. This first part of the paper is presented in greater generality than necessary for the direct applications to spherical varieties, and should prove to be useful in other situations. The second part of the paper is devoted to an explicit solution of counting problems having the structure required in Proposition~\ref{prop:countingproblem_abstract}. In many important cases, a universal torsor is given by a single equation of the shape \begin{equation}\label{torsor} \sum_{i=1}^k b_i \prod_{j=1}^{J_i} x_{ij}^{h_{ij}} = 0 \end{equation} with integral coefficients $b_i$ and certain exponents $h_{ij} \in \Bbb{N}$. In particular, this is the case for nontoric spherical varieties of semisimple rank one (regardless of the dimension), but also for several nonspherical smooth Fano threefolds \cite{MR3348473}, for most weak del Pezzo surfaces whose universal torsor is given by one equation \cite{MR3180592}, for numerous varieties with a torus action of complexity one or higher (see \cite{HS,fahrner,HHW} and the references therein, for example), and for many other varieties. We may have additional variables $x_{01}, \ldots, x_{0J_{0}}$ that do not appear in the torsor equation; for those we put formally $h_{0j} = 0$. Equation \eqref{torsor} is then to be solved in nonzero integers $x_{ij}$. This seemingly simple diophantine problem has to be analyzed with certain coprimality constraints on the variables, and the variables are restricted to a highly cuspidal region. 
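To see the shape of such a counting problem in the simplest case, consider the torsor equation of $X_1$ with the height conditions replaced by a plain box and no coprimality constraints; this toy count can be done by brute force (an illustration only, not the method of this paper):

```python
from itertools import product

def count_box(X):
    """Nonzero integer solutions of x11*x12 - x21*x22 - x31*x32 = 0
    with 0 < |xij| <= X.  The actual counting problem replaces the box
    by the cuspidal region cut out by the height conditions and adds
    gcd constraints."""
    nonzero = [v for v in range(-X, X + 1) if v != 0]
    count = 0
    for x11, x12, x21, x22, x31, x32 in product(nonzero, repeat=6):
        if x11 * x12 - x21 * x22 - x31 * x32 == 0:
            count += 1
    return count
```

Flipping the signs of both coordinates in any of the three pairs $(x_{i1},x_{i2})$ preserves solutions and the equation, so the count is always divisible by $8$.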
As specified in Proposition~\ref{prop:countingproblem_abstract}, the height condition translates into inequalities \begin{equation}\label{height} \prod_{i=0}^{k} \prod_{j=1}^{J_i} |x_{ij}|^{\alpha^\nu_{ij}} \leq B \quad ( 1\le \nu \le N) \end{equation} for certain nonnegative exponents\footnote{The superscript $\nu$ is not an exponent, but an index. This notation is chosen in accordance with the notation in Section~\ref{sec:charts_torsors}.} $\alpha^\nu_{ij}$. In order to describe the coprimality conditions on the variables $x_{ij}$ in \eqref{torsor}, let $S_{\rho} \subseteq \{(i,j) : i = 0, \ldots, k, j = 1, \ldots, J_i\}$ $(1\le \rho\le r)$ be a collection of sets that define $r$ conditions \begin{equation}\label{gcd} \gcd\{x_{ij}: (i,j)\in S_\rho\} = 1 \quad (1\le \rho\le r). \end{equation} Now fix a set of coefficients $b_i$ in \eqref{torsor}, and let $N_{\mathbf b}(B)=N(B)$ denote the number of $x_{ij} \in \Bbb{Z} \setminus \{0\}$ ($0 \leq i \leq k$, $1 \leq j \leq J_i$) satisfying \eqref{torsor}, \eqref{height} and \eqref{gcd}. We aim to establish an asymptotic formula of the shape \begin{equation}\label{manin} N(B) = (1+o(1)) c_1 B (\log B)^{c_2} \end{equation} for some constants $c_1 > 0$, $c_2 \in \Bbb{N}_0$, and our method succeeds subject to quite general conditions. Of course, for a proper solution of the Manin--Peyre conjecture, we not only have to establish \eqref{manin}, but to recover the geometric and arithmetic nature of $c_1$ and $c_2$ in terms of the Manin--Peyre predictions. This will require some natural consistency conditions involving the exponents $h_{ij}$ in the torsor equation \eqref{torsor} and $\alpha^\nu_{ij}$ in the height conditions \eqref{height}, cf.\ in particular \eqref{1c}, \eqref{1a} below. We now describe in more detail the analytic machinery that yields asymptotic formulas of type \eqref{manin} for the problem given by \eqref{torsor}, \eqref{height}, \eqref{gcd}. Input of two types is required.
On the one hand, we need a preliminary upper bound of the expected order of magnitude for the count in question. The precise requirements are formulated in the form of Hypothesis~\ref{H2} below. In many instances, the desired bounds can be verified by soft and elementary techniques. In particular, for smooth spherical Fano varieties of semisimple rank one and type $T$, this can be checked by computing dimensions and extreme points of certain polytopes, see Proposition~\ref{propH2}. On the other hand, we require an asymptotic formula for the number of integral solutions of \eqref{torsor} in potentially lopsided boxes, with variables restricted by $\frac{1}{2} X_{ij} \leq |x_{ij}| \leq X_{ij}$, say. As a notable feature of the method, the asymptotic information is required only when the $k$ products $\prod_j X_{ij}^{h_{ij}}$ $(1 \leq i \leq k)$ have roughly the same size. The circle method deals with this auxiliary counting problem in considerable generality, culminating in Proposition~\ref{circle-method} that comes with a power saving in the shortest variable $\min_{ij} X_{ij}$. The method described in Section~\ref{sec8} transfers the information obtained for counting in boxes to the strangely shaped region described by the conditions \eqref{height}. In \cite{BB} we presented a combinatorial method to achieve this for certain regions of hyperbolic type. Here we use complex analysis to do this work for us in a far more general context. A prototype of this idea, developed only in a special (and nonsmooth) case, can be found in \cite{BBS2}. The final result is Theorem~\ref{analytic-theorem} that we will state once the relevant notation has been developed. Again we are working in greater generality than needed for the immediate applications in this paper, with future applications in mind. 
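The simplest region of hyperbolic type is $\{x_1x_2\le B\}$ in positive integers, where the count is the classical Dirichlet divisor sum $\sum_{x\le B}\lfloor B/x\rfloor = B\log B+(2\gamma-1)B+O(\sqrt B)$ and already exhibits the $B(\log B)^{c_2}$ shape of \eqref{manin}. A numerical illustration of this model case (not part of the paper's argument):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler--Mascheroni constant

def hyperbolic_count(B):
    """Lattice points under the hyperbola:
    #{(x1, x2) : x1, x2 >= 1, x1 * x2 <= B}."""
    return sum(B // x for x in range(1, B + 1))

def dirichlet_main_term(B):
    """Main term of Dirichlet's divisor problem; the error is O(sqrt(B))."""
    return B * math.log(B) + (2 * EULER_GAMMA - 1) * B
```

For $B=10^5$ the two quantities already differ by far less than $\sqrt B$, in line with Dirichlet's classical error term.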
In the case of smooth spherical Fano threefolds of semisimple rank one and type $T$ (and in many other examples that can be found in \cite{MR3348473,fahrner,HS}, for example), the torsor equation \eqref{torsor} is of the shape ``2-by-2 determinant equals some monomial'', that is (up to changing signs) \begin{equation}\label{typeT} x_{11} x_{12} + x_{21} x_{22} + \prod_{j=1}^{J_3} x_{3j}^{h_{3j}} = 0. \end{equation} While the general transition method is independent of the shape of the torsor equation, for the particular case \eqref{typeT}, Theorem \ref{analytic-theorem} together with Propositions~\ref{circle-method} and \ref{propH2} offers a ``black box'' to obtain the Manin--Peyre conjecture in any given situation with a small amount of elementary computations. This is formalized in Theorem~\ref{manin-cor}, which readily yields the proofs of Theorems~\ref{dim3} and \ref{dim4} in Sections~\ref{appl1} and \ref{appl2}. The following table summarizes the analytic data discussed in this subsection for the varieties $X_1, \ldots, X_8$ featured in Theorems~\ref{dim3} and \ref{dim4}. Here $N$ is the number of height conditions in \eqref{height}; the total number of variables is $J = J_0+\dots+J_3 = \dim X_i + \rank \Pic X_i + 1$. 
\begin{table}[ht] \centering \begin{tabular}[ht]{ccccc} \hline & dim & rk Pic & torsor equation & $N$ \\ \hline\hline $X_1$ & $3$ & 3 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}$} & 13\\ $X_2$ & $3$ & 3 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2$} & 13\\ $X_3$ & $3$ & 4 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}$} & 14 \\ $X_4$ & $3$ & 4 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}$} & 17 \\ \hline $X_5$ & $4$ & 5 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}$} & 34 \\ $X_6$ & $5$ & 3 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}^2$} & 24\\ $X_7$ & $6$ & 5 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}x_{34}x_{35}^2$} & 80 \\ $X_8$ & $7$ & 6 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2x_{34}^2$} & 156\\ \hline $\widetilde{X}^\dagger$ & 3 & 4 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2$} & 13 \\ \hline \end{tabular} \caption{Analytic data for the varieties treated in this paper: dimension, Picard rank, torsor equation, and the number $N$ of height conditions.} \label{tab:v18} \end{table} \subsection{Another application} Theorem~\ref{manin-cor} offers a promising line of attack to establish Manin's conjecture in many instances, not only those covered by Theorems~\ref{dim3} and \ref{dim4}. As proof of concept, we include a somewhat different application featuring a singular spherical Fano threefold. The last two authors \cite{MR3853044} have studied some examples, and have confirmed Manin's conjecture for two families of singular spherical Fano threefolds. One family was given by the equation $ad-bc-z^{n+1}=0$ in weighted projective space $\mathbb{P}(1,n,1,n,1)$, the other was the family of hypersurfaces given by $ad-bc-y^n z^{n+1}=0$ in a certain toric variety ($n \ge 2$). For the counting problem on the torsor, elementary analytic techniques were enough. We believe that this is related to the fact that all the varieties have noncanonical (log terminal) singularities, with the exception of the first variety for $n = 2$, which is a slightly harder case with canonical singularities and a crepant resolution.
However, for similar varieties, the elementary counting techniques in \cite{MR3853044} do not seem to be of strength sufficient for a proof of Manin's conjecture. In Section~\ref{sec:Xdagger}, we use the much stronger technology developed in this paper to discuss one such case. Let $X^\dagger$ be the anticanonical contraction of the blow-up of the hypersurface $\mathbb{V}(z_{11}z_{12}-z_{21}z_{22}-z_{31}z_{32})$ in $\mathbb{P}^2_\mathbb{Q} \times \mathbb{P}^2_\mathbb{Q}$ (with coordinates $(z_{11}:z_{21}:z_{31})$ and $(z_{12}:z_{22}:z_{32})$) in the two curves $\mathbb{V}(z_{31}) \times \{(0:0:1)\}$ and $\mathbb{V}(z_{31}, z_{32})$. This is a singular Fano threefold admitting a crepant resolution. \begin{theorem}\label{thm2} There exists a positive number $C^{\dag}$ such that \begin{equation*} N^{\dag}(B) = (1+o(1)) C^\dag B(\log B)^3. \end{equation*} The value of $C^{\dag}$ is the one predicted by Peyre \cite{MR2019019}. \end{theorem} Further applications are postponed to a separate paper. \medskip \noindent \emph{Notational remarks.} This work draws on results from various areas of mathematics. Due to the large number of topics covered it seemed impracticable to aim for an entirely consistent notation. Any attempt to do so would be in conflict with traditions in the respective fields. We opt for a pragmatic approach and use notation that, locally, seems natural to working mathematicians. For example, almost everywhere in the paper, the letter $B$ signals the threshold for the height of points in several counting problems, but in Section~\ref{sec2}, a Borel subgroup of the group $G$ that occurs in the definition of a spherical variety is denoted by $B$. This is just one example of double booking for symbols that are often ``frozen'' in less interdisciplinary writings. We therefore introduce notation at the appropriate stage of the argument.
\part{Heights and Tamagawa measures in Cox coordinates}\label{part1} Given a variety whose Cox ring with precisely one relation is known explicitly, we show (under mild conditions) how to write down an anticanonical height function \eqref{eq:height_definition}, how to make the counting problem on a universal torsor explicit (Proposition~\ref{prop:countingproblem_abstract}), and how to express Peyre's constant (Proposition~\ref{prop:peyre}). This is achieved in terms of the Cox ring data, without constructing an anticanonical embedding in a projective space, widely generalizing results from \cite{MR1340296,MR1681100,MR1679841,BBS1,BBS2}. \section{Varieties and universal torsors in Cox coordinates}\label{sec:charts_torsors} Let $X$ be a smooth split projective variety over $\mathbb{Q}$ with big and semiample anticanonical class $\omega_X^\vee$ whose Picard group is free of finite rank. Assume that it has a finitely generated Cox ring $\mathscr{R}(X)$ with precisely one relation with integral coefficients. In other words, $X$ has a Cox ring over $\mathbb{Q}$ \cite{arXiv:1408.5358} of the form \begin{equation}\label{eq:cox_ring} \mathscr{R}(X) \cong \mathbb{Q}[x_1,\dots,x_J]/(\Phi), \end{equation} where $x_1,\dots,x_J$ is a system of pairwise nonassociated $\Pic X$-prime generators and the relation $\Phi \in \mathbb{Z}[x_1,\dots,x_J]$ is nonzero. According to \cite[\S 3.2.5]{adhl15}, \eqref{eq:cox_ring} defines a canonical embedding of $X$ into a (not necessarily complete) ambient toric variety $Y^\circ$. By \cite[Proposition~2.4.2.6]{adhl15}, $Y^\circ$ can be completed to a projective toric variety $Y$ such that the natural map $\Cl Y \to \Cl X=\Pic X$ is an isomorphism and $-K_X$ is big and semiample on $Y$. Its Cox ring is $\mathscr{R}(Y) = \mathbb{Q}[x_1,\dots,x_J]$. Let $\Sigma$ be the fan of $Y$, and let $\Sigma_\mathrm{max}$ be the set of maximal cones. 
The generators $x_1,\dots,x_J$ have the same grading as in $\mathscr{R}(X)$ and are in bijection with the rays $\rho \in \Sigma(1)$; we also write $x_\rho$ for the generator $x_i$ corresponding to $\rho$. We generally write \begin{equation}\label{JN} J = \#\Sigma(1), \quad N = \#\Sigma_\mathrm{max}, \end{equation} and we assume: \begin{equation}\label{eq:toric_smooth} \text{The projective toric variety $Y$ can be chosen to be regular.} \end{equation} \subsection{Affine charts in Cox coordinates}\label{sec:affine_charts} Since $\mathscr{R}(X) \cong \mathbb{Q}[x_\rho : \rho \in \Sigma(1)]/(\Phi)$ with $\Pic X$-homogeneous $\Phi$, our variety $X$ is a hypersurface defined by $\Phi$ (in Cox coordinates) in the toric variety $Y$ (with Cox ring $\mathscr{R}(Y)=\mathbb{Q}[x_\rho : \rho \in \Sigma(1)]$). On $Y$, we can regard $X$ as a prime divisor of class $\deg\Phi \in \Cl Y$. We introduce further notation for the toric variety $Y$. In Part~\ref{part1}, let $U$ be the open torus in $Y$. For each $\rho \in \Sigma(1)$, we have a $U$-invariant Weil divisor $D_\rho$ defined by $x_\rho$ of class $[D_\rho]=\deg(x_\rho) \in \Cl Y$. Let $D_0 \coloneqq \sum_{\rho \in \Sigma(1)} D_\rho$, which is an effective divisor of class $[D_0]=-K_Y$. For a $U$-invariant divisor $D=\sum_{\rho \in \Sigma(1)} \lambda_\rho D_\rho$, let \begin{equation}\label{eq:x^D} x^D \coloneqq \prod_{\rho \in \Sigma(1)} x_\rho^{\lambda_\rho} \end{equation} denote the corresponding monomial of degree $[D]$. For example, $x^{D_0} = \prod_{\rho \in \Sigma(1)} x_\rho$. For each $\sigma \in \Sigma_\mathrm{max}$, the set $\{[D_\rho] : \rho \notin \sigma(1)\}$ is a basis of $\Cl Y$; in other words, \begin{equation}\label{eq:sigma-basis} \{\deg(x_\rho) : \rho \notin \sigma(1)\} \end{equation} is a basis of $\Pic X$. 
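To see what \eqref{eq:sigma-basis} amounts to in practice, consider two standard toric cases (included only as an illustration):

```latex
% Y = P^2: Sigma(1) = {rho_1, rho_2, rho_3}, and each of the N = 3
% maximal cones is spanned by two rays; the omitted ray rho gives
% {deg(x_rho)} = {1}, a basis of Cl P^2 = Z. Here J = 3.
%
% Y = P^1 x P^1: rays rho_1 = e_1, rho_2 = -e_1, rho_3 = e_2,
% rho_4 = -e_2, and N = 4 maximal cones. For sigma = cone(e_1, e_2),
\[
  \{\deg(x_{\rho_2}),\deg(x_{\rho_4})\}=\{(1,0),(0,1)\}
  \subset\Cl(\mathbb{P}^1\times\mathbb{P}^1)=\mathbb{Z}^2
\]
% is a basis, as predicted; the other three maximal cones behave
% symmetrically. Here J = 4.
```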
\begin{lemma}\label{lem:monomials_degree_L} For each $\sigma \in \Sigma_\mathrm{max}$, there is a unique effective Weil divisor $D(\sigma)=\sum_{\rho \notin \sigma(1)} \alpha^\sigma_\rho D_\rho$ of class $-K_X$ whose support is contained in $\bigcup_{\rho \notin \sigma(1)} D_\rho$. \end{lemma} \begin{proof} For the existence, choose an effective $U$-invariant $\mathbb{Q}$-Weil divisor $D$ on $Y$ with $[D]=-K_X$. Let $M$ be the character lattice of the torus $U$. Choose $\chi_\sigma \in M_\mathbb{Q}$ such that $(\Div \chi_\sigma)_{|U_\sigma} = D_{|U_\sigma}$. Define $D(\sigma) \coloneqq D-\Div \chi_\sigma$. Then $D(\sigma)$ is of class $-K_X$ and its support is contained in $\bigcup_{\rho \notin \sigma(1)} D_\rho$. Moreover, since a multiple of $-K_X$ is globally generated, we have $\chi_\sigma \le \chi_{\sigma'}$ on $\sigma'$ for every $\sigma' \in \Sigma_\mathrm{max}$ \cite[Theorem 6.1.7]{MR2810322}. Hence $D(\sigma)$ is an effective $\mathbb{Q}$-divisor. Because of \eqref{eq:sigma-basis}, there is a unique $\mathbb{Z}$-linear combination of the $D_\rho$ with $\rho \notin \sigma(1)$ of class $-K_X$, which must be equal to $D(\sigma)$. \end{proof} For $\sigma \in \Sigma_\mathrm{max}$, notation \eqref{eq:x^D} gives \begin{equation}\label{eq:height_monomials} x^{D(\sigma)} = \prod_{\rho \notin \sigma(1)} x_\rho^{\alpha^\sigma_\rho}, \end{equation} where $\alpha^\sigma_\rho$ are the unique nonnegative integers satisfying $-K_X = \sum_{\rho \notin \sigma(1)} \alpha^\sigma_\rho \deg(x_\rho)$ in $\Pic X$ (as in Lemma~\ref{lem:monomials_degree_L}). Every $\sigma \in \Sigma_\mathrm{max}$ defines an affine chart on $Y$ as follows. For each $\rho' \in \Sigma(1)$, we can write \begin{equation}\label{eq:alpha_sigma_rho_rho'} \deg(x_{\rho'}) = \sum_{\rho \notin \sigma(1)} \alpha^\sigma_{\rho',\rho} \deg(x_{\rho}) \end{equation} with certain $\alpha^\sigma_{\rho',\rho} \in \mathbb{Z}$ by \eqref{eq:sigma-basis}. 
Then \begin{equation*} z^\sigma_{\rho'} \coloneqq x_{\rho'}/\prod_{\rho \notin \sigma(1)} x_\rho^{\alpha^\sigma_{\rho',\rho}} \end{equation*} is a rational section of degree $0 \in \Cl Y$, with $z^\sigma_{\rho'}=1$ for $\rho' \notin \sigma(1)$. The sections $z^\sigma_{\rho'}$ for $\rho' \in \sigma(1)$ define an isomorphism $U^\sigma \to \mathbb{A}^{\sigma(1)}_\mathbb{Q}$, where $U^\sigma$ is the open subset of $Y$ where $x_\rho \ne 0$ for all $\rho \notin \sigma(1)$ (i.\,e.,~ the complement of $\bigcup_{\rho \notin \sigma(1)} D_\rho$ in $Y$). We also obtain affine charts on the open subset $X^\sigma \coloneqq X \cap U^\sigma$ of $X$. The image of $X^\sigma$ in $\mathbb{A}^{\sigma(1)}_\mathbb{Q}$ is defined by \begin{equation*} \Phi^\sigma \coloneqq \Phi(z^\sigma_\rho)= \Phi(x_\rho)/\prod_{\rho\notin \sigma(1)}x_\rho^{\beta^\sigma_\rho} \end{equation*} (where $\beta^\sigma_\rho \in \mathbb{Z}$ satisfy $\deg\Phi=\sum_{\rho\notin \sigma(1)} \beta^\sigma_\rho\deg(x_\rho)$) since $x_\rho \ne 0$ on $U^\sigma$ for $\rho \notin \sigma(1)$. By the implicit function theorem, for every $P \in X^\sigma(\mathbb{Q}_v)$ with $\partial \Phi^\sigma/\partial z^\sigma_{\rho_0}(P) \ne 0$ for some $\rho_0 \in \sigma(1)$, there is an open $v$-adic neighborhood $U \subseteq X^\sigma(\mathbb{Q}_v)$ such that the composition of $X^\sigma \to \mathbb{A}_\mathbb{Q}^{\sigma(1)}$ with the natural projection $\pi^\sigma_{\rho_0} \colon \mathbb{A}_\mathbb{Q}^{\sigma(1)} \to \mathbb{A}_\mathbb{Q}^{\sigma(1) \setminus \{\rho_0\}}$ that drops the $\rho_0$-coordinate induces a chart $U \to \mathbb{Q}_v^{\sigma(1) \setminus \{\rho_0\}}$. Its inverse is obtained by computing the $\rho_0$-coordinate $z^\sigma_{\rho_0}=\phi((z^\sigma_\rho)_{\rho \in \sigma(1)\setminus\{\rho_0\}})$ using the implicit function $\phi$ obtained by solving $\Phi^\sigma$ for $z^\sigma_{\rho_0}$. 
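For $Y=\mathbb{P}^2$ (an illustration only), this construction recovers the usual dehomogenization:

```latex
% sigma = cone(rho_1, rho_2), so rho_3 is the only ray outside sigma(1).
% Since deg(x_{rho'}) = 1 = 1 * deg(x_{rho_3}) for every rho', we get
% alpha^sigma_{rho', rho_3} = 1 and
\[
  z^\sigma_{\rho_1}=\frac{x_1}{x_3},\qquad
  z^\sigma_{\rho_2}=\frac{x_2}{x_3},\qquad
  U^\sigma=\{x_3\ne 0\}\;\xrightarrow{\ \sim\ }\;\mathbb{A}^2_\mathbb{Q}.
\]
% For a hypersurface X of degree d in these Cox coordinates,
% Phi^sigma = Phi(x_1, x_2, x_3)/x_3^d is the usual dehomogenization
% with respect to x_3.
```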
\subsection{Universal torsors and models}\label{sec:torsors_models} Let $T \cong \mathbb{G}_{\mathrm{m},\mathbb{Q}}^{\rank \Pic X}$ be the N\'eron--Severi torus of $X$ (i.\,e.,~ the torus whose characters are $\Pic X =\Cl Y$). Cox's construction and the theory of Cox rings \cite[\S 8]{MR1679841} and \cite[\S 5.1]{MR2810322} give universal torsors $X_0 \subset Y_0$ (with inclusion morphism $\iota_0 \colon X_0\to Y_0$) over $X \subset Y$ (with inclusion $\iota : X\to Y$). Here $Y_0$ is the principal universal torsor over $Y$ under $T$. Both projections $X_0\to X$ and $Y_0 \to Y$ are called $\pi$. We have fans $\Sigma_1 \supset \Sigma_0 \to \Sigma$ (with the sets of rays $\Sigma_1(1)=\Sigma_0(1)$ in natural bijection to $\Sigma(1)$) corresponding to the toric varieties $\mathbb{A}_\mathbb{Q}^J = \mathbb{A}_\mathbb{Q}^{\Sigma(1)} = Y_1 \supset Y_0 \to Y$. Let $U_0$ be the open torus in $Y_0$. We have $Y_0 = Y_1 \setminus Z_Y$, where $Z_Y$ is defined by the \emph{irrelevant ideal} generated by the monomials \begin{equation}\label{eq:underline_sigma} x^{\underline\sigma} \coloneqq \prod_{\rho \notin \sigma(1)} x_\rho \end{equation} for all maximal cones $\sigma\in\Sigma_\mathrm{max}$. By \cite[Proposition~5.1.6]{MR2810322}, there are \emph{primitive collections} \begin{equation}\label{eq:primitive_collections} S_1,\dots,S_r \subseteq \Sigma(1) \end{equation} (i.\,e.,~ $S_j \not\subseteq \sigma(1)$ for all $\sigma \in \Sigma$, but for every proper subset $S_j'$ of $S_j$, there is a $\sigma \in \Sigma$ with $S_j' \subseteq \sigma(1)$) such that the $r$ irreducible components of $Z_Y$ are defined by the vanishing of $x_\rho$ for all $\rho \in S_j$. 
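In the simplest product case (again only an illustration), the primitive collections and $Z_Y$ can be written out completely:

```latex
% Y = P^1 x P^1 with rays rho_1 = e_1, rho_2 = -e_1, rho_3 = e_2,
% rho_4 = -e_2. The four maximal cones yield the irrelevant-ideal
% generators x^{underline{sigma}}:
\[
  x_2x_4,\qquad x_2x_3,\qquad x_1x_4,\qquad x_1x_3,
\]
% so Z_Y = V(x_1, x_2) \cup V(x_3, x_4), and the r = 2 primitive
% collections are S_1 = {rho_1, rho_2} and S_2 = {rho_3, rho_4}. On
% integral points of the torsor, they give rise to the coprimality
% conditions gcd(x_1, x_2) = gcd(x_3, x_4) = 1.
```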
The fans and their maps allow us to construct $\mathbb{Z}$-models $\widetilde{\pi}\colon \widetilde{Y}_1 \setminus \widetilde{Z}_Y = \widetilde{Y}_0 \to \widetilde{Y}$ with an action of $\widetilde{T} \cong \mathbb{G}_{\mathrm{m},\mathbb{Z}}^{\rank \Cl Y}$ on $\widetilde{Y}_0$ and $\widetilde{Y}_1$ (see \cite[Remark 8.6b]{MR1679841} and the subsequent discussion). The characteristic space $X_0$ is defined in $Y_0$ by $\Phi$ (interpreted as an affine equation). Then $X_0 = X_1 \setminus Z_X$, where $X_1 = \Spec\mathscr{R}(X)$ is defined by $\Phi$ in $Y_1$, and $Z_X = Z_Y \cap X_1$. We have $\widetilde{\pi} \colon \widetilde{X}_1 \setminus \widetilde{Z}_X = \widetilde{X}_0 \to \widetilde{X}$ for $\mathbb{Z}$-models of $X,X_0,X_1,Z_X$ defined in $\widetilde{Y},\widetilde{Y}_0,\widetilde{Y}_1,\widetilde{Z}_Y$ by $\Phi$ (regarded as an affine equation for $\widetilde{X}_0,\widetilde{X}_1,\widetilde{Z}_X$ and as $\Cl Y$-homogeneous for $\widetilde{X}$). \begin{prop}\label{prop:lift_to_torsor} We have \begin{align*} \widetilde{X}_0(\mathbb{Z}) &= \{\mathbf{x}=(x_\rho)_{\rho \in \Sigma(1)} \in \mathbb{Z}^{\Sigma(1)} : \Phi(\mathbf{x})=0,\ \gcd\{x_\rho : \rho \in S_j\}=1 \text{ {\rm for all }} j=1,\dots,r\},\\ \widetilde{X}_0(\mathbb{Z}_p) &= \{\mathbf{x}=(x_\rho)_{\rho \in \Sigma(1)} \in \mathbb{Z}_p^{\Sigma(1)} : \Phi(\mathbf{x})=0,\ p \nmid \gcd\{x_\rho : \rho \in S_j\} \text{ {\rm for all }}j=1,\dots,r\}. \end{align*} The map $\widetilde{\pi}$ induces a $2^{\rank \Pic X} : 1$-map $\widetilde{X}_0(\mathbb{Z}) \to \widetilde{X}(\mathbb{Z})=X(\mathbb{Q})$. \end{prop} \begin{proof} Arguing as in \cite[(11.5)]{MR1679841}, but using the description of $\widetilde{Z}_Y$ by the primitive collections, we obtain \begin{equation*} \widetilde{Y}_0(\mathbb{Z}) = \{\mathbf{y} \in \mathbb{Z}^{\Sigma(1)} : \gcd\{y_\rho : \rho \in S_j\}=1 \text{ for all } j=1,\dots,r\}. \end{equation*} Since $\widetilde{X}_0$ is defined by $\Phi$ in $\widetilde{Y}_0$, the first result follows. 
The description of $\widetilde{X}_0(\mathbb{Z}_p)$ is obtained similarly. By \cite[Lemma~11.4]{MR1679841}, $\widetilde{\pi}$ induces a $2^{\rank \Cl Y} : 1$-map $\widetilde{Y}_0(\mathbb{Z}) \to \widetilde{Y}(\mathbb{Z}) = Y(\mathbb{Q})$. Restricting to the points where $\Phi$ vanishes gives the result. \end{proof} \section{Heights in Cox coordinates}\label{sec:metrics_heights} We keep the assumptions and notation from Section~\ref{sec:charts_torsors}. \subsection{Adelic metrization of $\omega_X^{-1}$ via Poincar\'e residues}\label{sec:poincare} A special case of the following can be found in \cite[\S 5]{BBS1}. There is a global nowhere vanishing section $s_Y$ of $\omega_Y(D_0)$ whose restriction to every open subset $U^\sigma \subset Y$ for $\sigma \in \Sigma_\mathrm{max}$ is $\pm \bigwedge_{\rho \in \sigma(1)} \frac{\,{\mathrm d} z^\sigma_\rho}{z^\sigma_\rho}$ (see \cite[Proposition~8.2.3]{MR2810322}). For each $\sigma \in \Sigma_\mathrm{max}$, we define the nowhere vanishing global section \begin{equation*} \varpi^\sigma \coloneqq \frac{x^{D_0}}{x^{D(\sigma)}\Phi} s_Y \in \Gamma(Y,\omega_Y(D(\sigma)+X)). \end{equation*} On $U^\sigma$, we have \begin{equation*} \varpi^\sigma = \frac{\pm 1}{\Phi^\sigma} \bigwedge_{\rho \in \sigma(1)} \,{\mathrm d} z^\sigma_\rho \in \Gamma(U^\sigma, \omega_Y(X)) \end{equation*} since $\sum_{\rho \in \Sigma(1)} \deg(x_\rho) = -K_Y = -K_X+\deg\Phi = \sum_{\rho\notin \sigma} (\alpha^\sigma_\rho+\beta^\sigma_\rho)\deg(x_\rho)$ by adjunction. The Poincar\'e residue map $\Res \colon \omega_Y(X) \to \iota_*\omega_X$ is a homomorphism of $\mathscr{O}_Y$-modules. 
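In the classical case of one equation in two affine variables, this residue map takes a familiar form (a sketch, up to sign conventions):

```latex
% On A^2 with coordinates (z_1, z_2) and X = {Phi(z_1, z_2) = 0} smooth,
% the relation 0 = dPhi = Phi_{z_1} dz_1 + Phi_{z_2} dz_2 on X gives
\[
  \Res\left(\frac{\,{\mathrm d} z_1\wedge\,{\mathrm d} z_2}{\Phi}\right)
  =\pm\left.\frac{\,{\mathrm d} z_1}{\partial\Phi/\partial z_2}\right|_X
  =\mp\left.\frac{\,{\mathrm d} z_2}{\partial\Phi/\partial z_1}\right|_X
\]
% wherever the relevant partial derivative is nonzero; the two
% expressions agree on X because Phi_{z_1} dz_1 = -Phi_{z_2} dz_2 there.
```

The general formula for the residue of $\varpi^\sigma$ is the multivariable version of this computation.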
On the smooth open subset $U^\sigma$ of $Y$, it sends $\varpi^\sigma \in \Gamma(U^\sigma,\omega_Y(X))$ to $\Res \varpi^\sigma \in \Gamma(U^\sigma,\iota_*\omega_X) = \Gamma(X^\sigma,\omega_X)$, which is given by \begin{equation}\label{eq:residue} \Res \varpi^\sigma = \frac{\pm 1}{\partial \Phi^\sigma/\partial z^\sigma_{\rho_0}} \bigwedge_{\rho \in \sigma(1)\setminus\{\rho_0\}} \,{\mathrm d} z^\sigma_\rho \end{equation} on the open subset of $X^\sigma$ where $\partial \Phi^\sigma/\partial z^\sigma_{\rho_0} \ne 0$, for any $\rho_0 \in \sigma(1)$. \begin{lemma} The section $\Res \varpi^\sigma$ extends uniquely to a nowhere vanishing global section of $\omega_X(D(\sigma)\cap X)$. \end{lemma} \begin{proof} This is similar to \cite[Lemma~13]{BBS1}. Since $s_Y$ generates the $\mathscr{O}_Y$-module $\omega_Y(D_0)$, each $$\varpi^\sigma = \frac{x^{D_0}}{x^{D(\sigma)} \Phi} s_Y$$ generates the $\mathscr{O}_Y$-module $\omega_Y(X+D(\sigma))$. Since $\iota^*\mathscr{O}_Y(D(\sigma)) = \mathscr{O}_X(D(\sigma) \cap X)$ (using that $X \not\subseteq \Supp D(\sigma)$), the isomorphism $\iota^*\omega_Y(X) \to \omega_X$ adjoint to $\Res \colon \omega_Y(X) \to \iota_* \omega_X$ induces an isomorphism $\iota^*\omega_Y(X+D(\sigma)) \to \omega_X(D(\sigma) \cap X)$ that maps $\iota^*\varpi^\sigma$ to $\Res \varpi^\sigma$. Hence $\Res\varpi^\sigma$ generates $\omega_X(D(\sigma) \cap X)$, i.\,e.,~ it is a nowhere vanishing global section. \end{proof} Therefore, $\tau^\sigma \coloneqq (\Res \varpi^\sigma)^{-1}$ is a global nowhere vanishing section of $\omega_X^{-1}(-D(\sigma)\cap X)$, which we can also view as a global section of $\omega_X^{-1}$. \begin{lemma}\label{lem:tau} The section $\tau^\sigma \in \Gamma(X,\omega_X^{-1})$ does not vanish anywhere on $X^\sigma$. 
\end{lemma} \begin{proof} The previous lemma shows that $\tau^\sigma$, as a global section of $\omega_X^{-1}$, has corresponding divisor $D(\sigma) \cap X$, whose support is contained in $X \cap \bigcup_{\rho \notin \sigma} D_\rho$, which is the complement of $X^\sigma$. \end{proof} For any place $v$ of $\mathbb{Q}$, we define a $v$-adic norm (or metric) on $\omega_X^{-1}$ by \begin{equation}\label{eq:def_norm} \|\tau(P)\|_v \coloneqq \min_{\sigma \in \Sigma_\mathrm{max} : P \notin D(\sigma)} \left|\frac{\tau}{\tau^\sigma}(P)\right|_v \end{equation} for any local section $\tau$ of $\omega_X^{-1}$ not vanishing in $P \in X(\mathbb{Q}_v)$. The next result shows that our family of local norms $\|\cdot\|_v$ for all places $v$ is an adelic anticanonical norm as in \cite[D\'efinition~2.3]{MR2019019}; see also \cite[Lemma~8.5]{BBS2}. \begin{lemma} Let $p$ be a prime such that $\widetilde{X}$ is smooth over $\mathbb{Z}_p$. On $\omega_X^{-1}$, the $p$-adic norm $\|\cdot\|_p$ defined by \eqref{eq:def_norm} coincides with the model norm $\|\cdot\|_p^*$ determined by $\widetilde{X}$ over $\mathbb{Z}_p$ as in \cite[Definition~2.9]{MR1679841}. \end{lemma} \begin{proof} Let $P \in X(\mathbb{Q}_p)$, and let $\tau$ be a local section of $\omega_X^{-1}$ not vanishing in $P$. Choose $\xi \in \Sigma_\mathrm{max}$ such that $|(\tau^\xi/\tau)(P)|_p = \max_{\sigma \in \Sigma_\mathrm{max}} |(\tau^\sigma/\tau)(P)|_p$, which is positive by Lemma~\ref{lem:tau} and the fact that the sets $X^\sigma$ cover $X$; in particular, $\tau^\xi$ does not vanish in $P$. Hence we can compute \begin{equation*} \|\tau^\xi(P)\|_p^{-1} = \max_{\sigma \in \Sigma_\mathrm{max}} \left|\frac{\tau^\sigma}{\tau^\xi}(P)\right|_p = \max_{\sigma \in \Sigma_\mathrm{max}} \frac{|(\tau^\sigma/\tau)(P)|_p}{|(\tau^\xi/\tau)(P)|_p} = 1. 
\end{equation*} On the other hand, for each $\sigma \in \Sigma_\mathrm{max}$, the section $\tau^\sigma$ extends to a global section $\widetilde\tau^\sigma$ of $\omega_{\widetilde{X}/\mathbb{Z}_p}^{-1}$, and $\omega_{\widetilde{X}/\mathbb{Z}_p}^{-1}$ is generated by the set of all these $\widetilde\tau^\sigma$ as an $\mathscr{O}_{\widetilde{X}}$-module. The computation above shows for every $\sigma \in \Sigma_\mathrm{max}$ that $\left|\frac{\tau^\sigma}{\tau^\xi}(P)\right|_p \le 1$, hence $\tau^\sigma(P) = a_\sigma \tau^\xi(P)$ for some $a_\sigma \in \mathbb{Z}_p$ in the $\mathbb{Q}_p$-module $\omega_X^{-1}(P)$, and hence also $\widetilde\tau^\sigma(P) = a_\sigma \widetilde\tau^\xi(P)$ in the $\mathbb{Z}_p$-module ${\widetilde P}^*(\omega_{\widetilde{X}/\mathbb{Z}_p}^{-1})$. Therefore, ${\widetilde P}^*(\omega_{\widetilde{X}/\mathbb{Z}_p}^{-1})$ is generated by $\widetilde\tau^\xi(P)$ and consequently $\|\tau^\xi(P)\|_p^*=1$ by definition of the model norm. Finally, we have \begin{equation*} \|\tau(P)\|_p = |(\tau/\tau^\xi)(P)|_p \cdot \|\tau^\xi(P)\|_p = |(\tau/\tau^\xi)(P)|_p \cdot \|\tau^\xi(P)\|_p^* = \|\tau(P)\|_p^*. \qedhere \end{equation*} \end{proof} \subsection{Height function}\label{sec:height} As in \cite[D\'efinition~2.3]{MR2019019}, our adelic anticanonical norm $(\|\cdot\|_v)_v$ \eqref{eq:def_norm} allows us to define an anticanonical height $H : X(\mathbb{Q}) \to \mathbb{R}_{>0}$, namely \begin{equation}\label{eq:height_definition} H(P) \coloneqq \prod_v \|\tau(P)\|_v^{-1} \end{equation} for any local section $\tau$ of $\omega_X^{-1}$ not vanishing in $P \in X(\mathbb{Q})$; here and elsewhere, the product is taken over all places $v$ of $\mathbb{Q}$. This anticanonical height on $X(\mathbb{Q})$ depends only on the choice of Cox coordinates on $X$ \eqref{eq:cox_ring}. In the following lemma, $x^{D(\sigma)}$ and $F_0$ are homogeneous elements of $\mathbb{Q}[x_\rho : \rho \in \Sigma(1)]$ of the same degree in $\Pic X$. 
Therefore, $x^{D(\sigma)}/F_0$ can be regarded as a rational function on $X$ that can be evaluated in $P \in X(\mathbb{Q})$ if $F_0$ does not vanish in $P$. \begin{lemma}\label{lem:height} For any polynomial $F_0$ of degree $-K_X$ not vanishing in $P \in X(\mathbb{Q})$, one has \begin{equation*} H(P) = \prod_v \max_{\sigma \in \Sigma_\mathrm{max}} \left|\frac{x^{D(\sigma)}}{F_0}(P)\right|_v. \end{equation*} \end{lemma} \begin{proof} Since the sets $X^\sigma$ for $\sigma \in \Sigma_\mathrm{max}$ cover $X$, our point $P$ is contained in $X^{\xi}(\mathbb{Q})$ for some $\xi \in \Sigma_\mathrm{max}$. By Lemma~\ref{lem:tau}, we can compute $H(P)$ with $\tau \coloneqq \tau^{\xi}$. We have $\varpi^\sigma = x^{-D(\sigma)}x^{D(\xi)}\varpi^{\xi}$. Since $\Res$ is an $\mathscr{O}_Y$-module homomorphism, this implies $\tau^\sigma = x^{D(\sigma)}x^{-D(\xi)}\tau^{\xi}$. Therefore, \begin{equation}\label{eq:norm_max} \|\tau^{\xi}(P)\|_v^{-1} = \max_{\sigma \in \Sigma_\mathrm{max}} \left|\frac{\tau^\sigma}{\tau^{\xi}}(P)\right|_v = \max_{\sigma \in \Sigma_\mathrm{max}}\left|\frac{x^{D(\sigma)}}{x^{D(\xi)}}(P)\right|_v, \end{equation} hence our claim holds for $F_0 \coloneqq x^{D(\xi)}$. By the product formula, it follows for arbitrary $F_0$ not vanishing in $P$. \end{proof} \subsection{Heights on torsors} We lift the height function $H$ to the universal torsor $X_0$ as follows. Let \begin{equation*} H_0 \colon X_0(\mathbb{Q}) \to \mathbb{R}_{>0} \end{equation*} be the composition of $\pi \colon X_0(\mathbb{Q}) \to X(\mathbb{Q})$ and the height function $H$ defined in \eqref{eq:height_definition}. The following is analogous to \cite[Proposition~10.14]{MR1679841}. \begin{lemma}\label{lem:height_torsor} For $P_0 \in X_0(\mathbb{Q})$, we have \begin{equation*} H_0(P_0) = \prod_v \max_{\sigma \in \Sigma_\mathrm{max}} |x^{D(\sigma)}(P_0)|_v. \end{equation*} \end{lemma} \begin{proof} Let $P = \pi(P_0) \in X(\mathbb{Q})$. 
For $F_0$ of degree $-K_X$ not vanishing in $P$ and $\sigma \in \Sigma_\mathrm{max}$, we can compute $(x^{D(\sigma)}/F_0)(P)$ as in Lemma~\ref{lem:height}, but we can also regard $x^{D(\sigma)}$ and $F_0$ as regular functions on $X_0$ that can be evaluated in $P_0$. Here we have $x^{D(\sigma)}(P_0)/F_0(P_0) = (x^{D(\sigma)}/F_0)(P)$. Using Lemma~\ref{lem:height}, we obtain \begin{equation*} H_0(P_0)=H(P) = \prod_v \max_{\sigma \in \Sigma_\mathrm{max}} \left|\frac{x^{D(\sigma)}}{F_0}(P)\right|_v = \prod_v \max_{\sigma \in \Sigma_\mathrm{max}} \left|\frac{x^{D(\sigma)}(P_0)}{F_0(P_0)}\right|_v, \end{equation*} and $\prod_v |F_0(P_0)|_v = 1$ by the product formula. \end{proof} The next result is analogous to \cite[Proposition~11.3]{MR1679841}. \begin{cor}\label{cor:height_torsor_integral} For any prime $p$ and $P_0 \in \widetilde{X}_0(\mathbb{Z}_p)$, we have \begin{equation*} \max_{\sigma \in \Sigma_\mathrm{max}} |x^{D(\sigma)}(P_0)|_p=1. \end{equation*} For $P_0 \in \widetilde{X}_0(\mathbb{Z})$, we have \begin{equation*} H_0(P_0) = \max_{\sigma \in \Sigma_\mathrm{max}} |x^{D(\sigma)}(P_0)|_\infty. \end{equation*} \end{cor} \begin{proof} Let $p$ be a prime and $P_0 \in \widetilde{X}_0(\mathbb{Z}_p)$. Then $P_0 \bmod p$ is in $\widetilde{X}_0(\mathbb{F}_p)$. Since $\widetilde{X}_0$ is the complement in $\widetilde{X}_1$ of the zero set of the irrelevant ideal generated by the monomials \eqref{eq:underline_sigma}, there is a $\xi \in \Sigma_\mathrm{max}$ such that $x^{\underline{\xi}}(P_0 \bmod p) \ne 0 \in \mathbb{F}_p$. Since the support of $D(\xi)$ is as in Lemma~\ref{lem:monomials_degree_L}, we have $x^{D(\xi)}(P_0 \bmod p) \ne 0 \in \mathbb{F}_p$, and hence $|x^{D(\xi)}(P_0)|_p=1$. Using $x^{D(\sigma)}(P_0) \in \mathbb{Z}_p$ for all $\sigma \in \Sigma_\mathrm{max}$, we conclude $\max_{\sigma \in \Sigma_\mathrm{max}} |x^{D(\sigma)}(P_0)|_p=1$. Therefore, for $P_0 \in \widetilde{X}_0(\mathbb{Z})$, only the archimedean factor in Lemma~\ref{lem:height_torsor} remains. 
\end{proof} \subsection{Parameterization in Cox coordinates} The following proposition translates the analysis of $N_{X, U, H}(B)$ into a counting problem of the shape described in the introduction, which is amenable to methods of analytic number theory. It parameterizes the rational points on $X$ by integral points on the universal torsor $\widetilde{X}_0$ in terms of the torsor equation from the Cox ring \eqref{eq:cox_ring}, the height conditions from the anticanonical monomials \eqref{eq:height_monomials} and the coprimality conditions from the primitive collections \eqref{eq:primitive_collections}. \begin{prop}\label{prop:countingproblem_abstract} Let $X$ be a variety as in the first paragraph of Section~\ref{sec:charts_torsors} that satisfies the assumption~\eqref{eq:toric_smooth}. Let $U=X \setminus \bigcup_{\rho \in \Sigma(1)} D_\rho$ be the open subset of $X$ where all Cox coordinates $x_\rho$ are nonzero. Let $H$ be the anticanonical height function on $X(\mathbb{Q})$ defined in \eqref{eq:height_definition}. Then \begin{equation*} N_{X,U,H}(B) = \frac{1}{2^{\rank\Pic X}} \#\left\{\mathbf{x} \in \mathbb{Z}^{\Sigma(1)}_{\ne 0} : \begin{aligned} &\Phi(\mathbf{x})=0,\, \max_{\sigma \in \Sigma_\mathrm{max}}|\mathbf{x}^{D(\sigma)}|_\infty \le B,\\ &\gcd\{x_\rho : \rho \in S_j\} = 1 \text{ for every $j=1,\dots,r$} \end{aligned} \right\}\text{,} \end{equation*} using the notation \eqref{eq:cox_ring}, \eqref{eq:height_monomials}, \eqref{eq:primitive_collections}. \end{prop} \begin{proof} We combine the $2^{\rank\Pic X} : 1$-map and the description of $\widetilde{X}_0(\mathbb{Z})$ from Proposition~\ref{prop:lift_to_torsor} with the lifted height function in Corollary~\ref{cor:height_torsor_integral}. The preimage of $U(\mathbb{Q})$ in $\widetilde{X}_0(\mathbb{Z})$ is the set where $x_\rho \ne 0$ for all $\rho \in \Sigma(1)$. 
\end{proof} \subsection{Some linear algebra} The monomials $\mathbf{x}^{D(\sigma)}$ and the polynomial $\Phi$ that appear in Proposition~\ref{prop:countingproblem_abstract} are not independent. In this subsection, we analyze this dependence and describe it in the form of a rank condition on a certain matrix. This will be useful later when we apply methods from complex analysis to obtain an asymptotic formula for $N_{X, U, H}(B)$. We consider $\mathbb{Q}^J=\mathbb{Q}^{\Sigma(1)}$ with standard basis $(e_\rho)_{\rho \in \Sigma(1)}$ indexed by the rays of $\Sigma$. Let \begin{equation*} p\colon \mathbb{Q}^{\Sigma(1)} \to (\Pic X)_\mathbb{Q} \end{equation*} be the surjective linear map that sends $e_\rho$ to $[D_\rho]=\deg(x_\rho)$. For $\mathbf{x} = (x_\rho)_{\rho \in \Sigma(1)} \in \mathbb{Q}_v^{\Sigma(1)}$ for some place $v$ of $\mathbb{Q}$ and $\mathbf{v} = (v_\rho)_{\rho \in \Sigma(1)} \in \mathbb{Z}_{\ge 0}^{\Sigma(1)}$, let $\mathbf{x}^\mathbf{v} \coloneqq \prod_{\rho \in \Sigma(1)} x_\rho^{v_\rho}$. \begin{lemma}\label{lemma:lpp} The set $Q \coloneqq p^{-1}(-K_X) \cap \mathbb{Q}_{\ge 0}^{\Sigma(1)}$ is a bounded polytope of dimension $J-\rank\Pic X$. Its set $\mathscr{V}$ of vertices lies in $\mathbb{Z}_{\ge 0}^{\Sigma(1)}$. Let $v$ be a place of $\mathbb{Q}$. For all nonzero $\mathbf{x} \in \mathbb{Q}_v^{\Sigma(1)}$, we have \begin{equation*} \max_{\sigma \in \Sigma_\mathrm{max}} |\mathbf{x}^{D(\sigma)}|_v = \max_{\mathbf{v} \in \mathscr{V}} |\mathbf{x}^{\mathbf{v}}|_v. \end{equation*} \end{lemma} \begin{proof} In the notation of the proof of Lemma~\ref{lem:monomials_degree_L}, write $D = \sum_{\rho}a_\rho D_\rho$. Then the $-\chi_\sigma$ are the vertices, and possibly (if $-K_X$ is not ample) some other points, of the $\rank M$-dimensional polytope \begin{align*} P_D = \{\chi \in M_\mathbb{Q} : \langle n_\rho, \chi\rangle \ge -a_\rho \text{ for all $\rho$}\}\text{;} \end{align*} see \cite[\S 4.3 and after Lemma~9.3.9]{MR2810322}. 
Now consider the injective affine map $\phi\colon M_\mathbb{Q} \to \mathbb{Q}^{\Sigma(1)}$, $\chi \mapsto \sum_{\rho} (a_\rho + \langle n_\rho, \chi\rangle)e_\rho$ as well as the linear surjective map $p\colon \mathbb{Q}^{\Sigma(1)} \to (\Cl Y)_\mathbb{Q}$. We have $\rank M = J - \rank \Pic X$ and $\operatorname{im}(p \circ \phi) = \{-K_X\}$. Moreover, the condition $\phi(\chi) \in \mathbb{Q}^{\Sigma(1)}_{\ge 0}$ is equivalent to $\langle n_\rho, \chi\rangle \ge -a_\rho$ for all $\rho$. It follows that $\phi$ restricts to a bijection $P_D \to Q = p^{-1}(-K_X) \cap \mathbb{Q}^{\Sigma(1)}_{\ge 0}$. Hence $Q$ is bounded and of dimension $J - \rank \Pic X$. As we have $\phi(-\chi_\sigma) = D(\sigma)$ (where $D(\sigma)$ is interpreted as an element of $\mathbb{Z}^{\Sigma(1)}$ in the obvious way), and as the affine bijection $\phi \colon P_D \to Q$ maps vertices to vertices, we obtain $\mathscr{V} \subseteq \phi(\{-\chi_\sigma : \sigma \in \Sigma_\mathrm{max}\}) = \{D(\sigma) : \sigma \in \Sigma_\mathrm{max}\} \subseteq Q$. Hence the equality \begin{equation*} \max_{\sigma \in \Sigma_\mathrm{max}} |\mathbf{x}^{D(\sigma)}|_v = \max_{\mathbf{v} \in \mathscr{V}} |\mathbf{x}^{\mathbf{v}}|_v \end{equation*} holds: each $D(\sigma)$ is a convex combination of the vertices in $\mathscr{V}$, and for any convex combination $\mathbf{v}$ of exponent vectors $\mathbf{v}_i$, the weighted arithmetic--geometric mean inequality gives $|\mathbf{x}^{\mathbf{v}}|_v \le \max_i |\mathbf{x}^{\mathbf{v}_i}|_v$. Since $\phi(M) \subseteq \mathbb{Z}^{\Sigma(1)}$, we also obtain $\mathscr{V} \subset \mathbb{Z}_{\ge 0}^{\Sigma(1)}$. \end{proof} We recall \eqref{JN} and the notation \eqref{eq:height_monomials} for the exponents $\alpha_{\rho}^{\sigma}$ occurring in $\mathbf{x}^{D(\sigma)}$. We write $\Phi$ in the form \begin{equation}\label{Phi-sec3} \Phi = \sum_{i=1}^k b_i\prod_{\rho \in \Sigma(1)} x_\rho^{h_{i\rho}}, \end{equation} (i.\,e.,~ $k$ is the number of monomials, and $\mathbf{h}_i = (h_{i\rho})_{\rho \in \Sigma(1)} \in \mathbb{Z}_{\ge 0}^{\Sigma(1)}$ is the exponent vector of the $i$-th term of $\Phi$). We now consider the block matrix \begin{equation}\label{matrix} \mathscr{A} = \begin{pmatrix}\mathscr{A}_1&\mathscr{A}_2\\ \mathscr{A}_3&\mathscr{A}_4\end{pmatrix} \in \mathbb{R}^{(J+1)\times(N+k)}. 
\end{equation} Here $\mathscr{A}_1 = (\alpha_{\rho}^{\sigma})_{(\rho, \sigma) \in \Sigma(1) \times \Sigma_\mathrm{max}} \in \Bbb{R}^{J \times N}$ (with $\alpha^\sigma_\rho \coloneqq 0$ for $\rho \in \sigma(1)$, so that the column indexed by $\sigma$ is $D(\sigma)$, viewed as an element of $\mathbb{Z}^{\Sigma(1)}$) is the height matrix for the height function from Proposition~\ref{prop:countingproblem_abstract}. We let $\mathscr{A}_2 \in \mathbb{R}^{J \times k}$ be the matrix whose $i$-th column is $\mathbf{h}_i-\mathbf{h}_k$ for $i=1,\dots,k-1$ and whose $k$-th column is $\mathbf{h}_k-(1,\dots,1)^{\top}$. Furthermore, let $\mathscr{A}_3 = (1, \dots, 1) \in \mathbb{R}^{1 \times N}$ and $\mathscr{A}_4 = (0,\dots,0,-1) \in \mathbb{R}^{1 \times k}$. The definition of $\mathscr{A}_2$ may appear to be somewhat artificial. Its purpose will become clear in \eqref{zast} in Section~\ref{54}. \begin{lemma}\label{rank} We have $\rank \mathscr{A} = \rank \mathscr{A}_1 = J - \rank\Pic X + 1$. \end{lemma} \begin{proof} According to Lemma~\ref{lemma:lpp}, the polytope $Q$ spans an affine subspace of dimension $J-\rank\Pic X$ in $\mathbb{R}^{J}$, which does not contain $0$ since $-K_X \ne 0$. It follows that $Q$ spans a vector space of dimension $J-\rank\Pic X + 1$ in $\mathbb{R}^{J}$. Since the columns of $\mathscr{A}_1$ lie in $Q$ and include all vertices of $Q$, this shows $\rank \mathscr{A}_1 = J - \rank\Pic X + 1$. Since the columns of $\mathscr{A}_1$ lie in an affine subspace of $\mathbb{R}^J$ that does not contain $0$, a linear combination of these columns can be $0$ only if the sum of the coefficients is $0$. It follows that we have $ \rank \mathopen{}\mathclose\bgroup\left(\begin{smallmatrix}\mathscr{A}_1 \\ \mathscr{A}_3\end{smallmatrix}\aftergroup\egroup\right) = \rank \mathscr{A}_1$. Since $\Phi$ is $\Pic X$-homogeneous, the first $k-1$ columns of $\mathscr{A}_2$ lie in $p^{-1}(0)$. Moreover, note that the last column of $\mathscr{A}_2$ lies in $p^{-1}(K_X)$ since $\deg\Phi-\sum_{\rho \in \Sigma(1)} \deg(x_\rho)=K_X$ by \cite[Proposition~3.3.3.2]{adhl15}. 
Together with the fact that the columns of $\mathscr{A}_1$ lie in $p^{-1}(-K_X)$, we obtain $ \rank \mathscr{A} = \rank \mathopen{}\mathclose\bgroup\left(\begin{smallmatrix}\mathscr{A}_1 \\ \mathscr{A}_3\end{smallmatrix}\aftergroup\egroup\right)\text{.}$ \end{proof} Let ${\bm \zeta} = (\zeta_1, \ldots, \zeta_k) \in \Bbb{R}^{k}$ be a vector satisfying \begin{equation}\label{zeta} \zeta_i > 0 \text{ for all } 1 \leq i \leq k, \quad \sum_{i=1}^kh_{i\rho} \zeta_i < 1\text{ for all }\rho\in \Sigma(1), \quad \sum_{i=1}^k \zeta_i = 1. \end{equation} This condition will reappear in Part~\ref{part2} as \eqref{zeta1}. \begin{lemma}\label{pos} Let ${\bm \zeta}$ be as in \eqref{zeta}, ${\bm \tau}_1= (1-\sum_{i=1}^k h_{i\rho}\zeta_i)_{\rho \in \Sigma(1)} = (1,\dots,1)-\sum_{i=1}^k \zeta_i \mathbf{h}_i$, and let ${\bm\tau}= ({\bm \tau}_1, 1)^\top$. The system of $J+1$ linear equations \begin{align*} \begin{pmatrix}\mathscr{A}_1 \\ \mathscr{A}_3 \end{pmatrix}{\bm\sigma} = {\bm\tau} \end{align*} has a solution ${\bm\sigma} \in \mathbb{R}^N_{> 0}$. \end{lemma} \begin{proof} According to \cite[Proposition~3.3.3.2]{adhl15}, we have $\bm\tau_1 \in p^{-1}(-K_X)$. It follows from $Q = p^{-1}(-K_X) \cap \mathbb{Q}_{\ge 0}^{\Sigma(1)}$ that the relative interior of $Q$ satisfies $Q^\circ \supseteq p^{-1}(-K_X) \cap \mathbb{Q}_{> 0}^{\Sigma(1)}$. Since all coordinates of $\bm\tau_1$ are positive, we obtain $\bm\tau_1 \in Q^\circ$. Since the columns of $\mathscr{A}_1$ lie in $Q$ and include all its vertices, the relative interior point $\bm\tau_1$ can be written as a linear combination of these columns with strictly positive coefficients whose sum is $1$. The existence of ${\bm\sigma} \in \mathbb{R}^N_{> 0}$ as required follows. \end{proof} \section{Tamagawa numbers in Cox coordinates}\label{sec:tamagawa_cox} We continue to work in the setting of Sections~\ref{sec:charts_torsors} and \ref{sec:metrics_heights}. 
Additionally, we assume that $X$ is an almost Fano variety (e.\,g.,~ a smooth Fano variety) as in \cite[D\'efinition~3.1]{MR2019019} (i.\,e.,~ $X$ is smooth, projective and geometrically integral with $H^1(X,\mathscr{O}_X) = H^2(X,\mathscr{O}_X) = 0$, free geometric Picard group of finite rank, and big $\omega_X^\vee$). \subsection{Local measures} By \cite[(2.2.1)]{MR1340296}, \cite[Notations~4.3]{MR2019019} and \cite[Theorem~1.10]{MR1679841}, the $v$-adic norm $\|\cdot\|_v$ on $\omega_X^{-1}$ defined in \eqref{eq:def_norm} induces a measure $\mu_v$ on $X(\mathbb{Q}_v)$. We express it using the Poincar\'e residues from Section~\ref{sec:poincare} and the affine charts from Section~\ref{sec:affine_charts}. See \cite[(5.8), (5.9)]{BBS1} for an example of the next result. \begin{prop}\label{prop:local_measure} Let $\xi \in \Sigma_\mathrm{max}$. For a Borel subset $N_v$ of $X^{\xi}(\mathbb{Q}_v)$, we have \begin{equation}\label{eq:local_measure_abstract} \mu_v(N_v) =\int_{N_v} \frac{|\Res\varpi^{\xi}|_v}{\max_{\sigma \in \Sigma_\mathrm{max}} |\tau^\sigma\Res\varpi^{\xi}|_v} =\int_{N_v} \frac{|\Res\varpi^{\xi}|_v}{\max_{\sigma \in \Sigma_\mathrm{max}} |x^{D(\sigma)}/x^{D(\xi)}|_v}, \end{equation} where $|\Res\varpi^{\xi}|_v$ is the $v$-adic density on $X^{\xi}(\mathbb{Q}_v)$ of the volume form $\Res\varpi^{\xi}$ on $X^{\xi}$. Let $\rho_0 \in \xi(1)$. 
If $N_v$ is contained in a sufficiently small open $v$-adic neighborhood of a point $P$ in $X^{\xi}(\mathbb{Q}_v)$ with $\partial \Phi^{\xi}/\partial z^{\xi}_{\rho_0}(P) \ne 0$, then \begin{equation}\label{eq:local_measure_explicit} \mu_v(N_v)=\int_{\pi^{{\xi}}_{\rho_0}(N_v)} \frac{\bigwedge_{\rho \in {\xi(1)} \setminus \{\rho_0\}} \,{\mathrm d} z^{\xi}_\rho} {|\partial \Phi^{\xi}/\partial z^{\xi}_{\rho_0}(\mathbf{z}^{\xi})|_v \max_{\sigma \in \Sigma_\mathrm{max}}|x^{D(\sigma)}(\mathbf{z}^{\xi})|_v} \end{equation} in the affine coordinates $\mathbf{z}^{\xi} = (z^{\xi}_{\rho})_{\rho \in {\xi(1)}}$, where $\pi^{\xi}_{\rho_0} \colon U^{\xi}(\mathbb{Q}_v)=\mathbb{Q}_v^{{\xi(1)}} \to \mathbb{Q}_v^{\xi(1)\setminus\{\rho_0\}}$ is the natural projection and $z^{\xi}_{\rho_0}$ is expressed in terms of the other coordinates using the implicit function for $\Phi^{\xi}$. \end{prop} \begin{proof} The implicit function theorem gives a $v$-adic neighborhood $U \subseteq X^\xi(\mathbb{Q}_v)$ of $P$ and an implicit function $\phi \colon V \to \mathbb{Q}_v$ for $V = \pi^\xi_{\rho_0}(U) \subseteq \mathbb{Q}_v^{\xi(1)\setminus\{\rho_0\}}$ such that $\Phi^\xi(\mathbf{z}^\xi)=0$ for all $\mathbf{z}^\xi \in X^\xi(\mathbb{Q}_v)$ with $z^\xi_{\rho_0}$ the image of $(z^\xi_\rho)_{\rho \in \xi(1)\setminus\{\rho_0\}} \in V$ under $\phi$. We work with $\|\tau^\xi(P)\|_v$ and use $x^{D(\xi)}(\mathbf{z}^\xi)=1$ in our affine coordinates on $X^\xi(\mathbb{Q}_v)$. Then the formulas in \cite[(2.2.1)]{MR1340296} and \cite[Theorem~1.10]{MR1679841} give \eqref{eq:local_measure_explicit} for $N_v \subseteq U$. Indeed, our chart is $\pi \coloneqq \pi^\xi_{\rho_0} \colon U \to V \subseteq \mathbb{Q}_v^{\xi(1)\setminus\{\rho_0\}}$. 
In this chart, by \eqref{eq:residue}, the image of the local canonical section $\bigwedge_{\rho \in \xi(1) \setminus \{\rho_0\}} \,{\mathrm d} z^\xi_\rho$ under \begin{equation*} \omega(\pi) \colon \pi^*\omega_{\mathbb{A}_\mathbb{Q}^{\xi(1) \setminus \{\rho_0\}}} \to \omega_X \end{equation*} is $\partial \Phi^\xi/\partial z^\xi_{\rho_0} \cdot \Res\varpi^\xi$. This implies that the image of the local anticanonical section $\bigwedge_{\rho \in \xi(1) \setminus \{\rho_0\}} \frac{\partial}{\partial z^\xi_{\rho}}$ under \begin{equation*} {}^t\omega(\pi)^{-1} \colon \pi^*\omega^{-1}_{\mathbb{A}_\mathbb{Q}^{\xi(1) \setminus \{\rho_0\}}} \to \omega^{-1}_X \end{equation*} is $(\partial \Phi^\xi/\partial z^\xi_{\rho_0})^{-1} \cdot \tau^\xi$. Therefore, $\mu_v(N_v)$ for $N_v \subseteq U$ as defined in \cite[(2.2.1)]{MR1340296} is the integral over $\pi(N_v)$ of \begin{align*} \omega_v &= \|((\partial \Phi^\xi/\partial z^\xi_{\rho_0})^{-1}\cdot\tau^\xi)(\pi^{-1}((z^\xi_\rho)_{\rho \in \xi(1)\setminus\{\rho_0\}}))\|_v \bigwedge_{\rho \in \xi(1) \setminus \{\rho_0\}} \,{\mathrm d} z^\xi_\rho\\ &=|\partial \Phi^\xi/\partial z^\xi_{\rho_0}(\mathbf{z}^\xi)|_v^{-1} \cdot \|\tau^\xi(\mathbf{z}^\xi)\|_v\bigwedge_{\rho \in \xi(1) \setminus \{\rho_0\}} \,{\mathrm d} z^\xi_\rho. \end{align*} Using \eqref{eq:norm_max} together with $x^{D(\xi)}(\mathbf{z}^\xi)=1$, we obtain \eqref{eq:local_measure_explicit}. By \eqref{eq:residue}, we see that the right hand side of \eqref{eq:local_measure_abstract} coincides with \eqref{eq:local_measure_explicit} for $N_v \subseteq U$. Since $X$ is smooth, $X^\xi(\mathbb{Q}_v)$ can be covered with such $U$, hence $\mu_v(N_v)$ is equal to the right hand side of \eqref{eq:local_measure_abstract} for all $N_v \subseteq X^\xi(\mathbb{Q}_v)$. Since $\varpi^\sigma/\varpi^\xi = x^{D(\xi)}/x^{D(\sigma)}$, we have $\tau^\sigma \Res\varpi^\xi = \tau^\sigma/\tau^\xi = x^{D(\sigma)}/x^{D(\xi)}$, and hence the integrals in \eqref{eq:local_measure_abstract} are equal.
\end{proof} \subsection{Tamagawa number} Here we use some standard notation as in \cite[\S 2]{MR1340296}, \cite[\S 4]{MR2019019}. Let $S$ be a sufficiently large finite set of finite places of $\mathbb{Q}$ as in \cite[Notations~4.5]{MR2019019}. For any prime $p \notin S$, let $L_p(s,\Pic\overline{X}) \coloneqq \det(1-p^{-s}\Fr_p \mid \Pic(X_{\overline \mathbb{F}_p}) \otimes \mathbb{Q})^{-1}$. Since $X$ is split, $L_p(s, \Pic\overline{X}) = (1- p^{-s})^{-\rank \Pic X}$, hence \begin{equation*} L_S(s,\Pic\overline{X}) \coloneqq \prod_{p \notin S} L_p(s,\Pic\overline{X}) = \zeta(s)^{\rank \Pic X}\prod_{p \in S}(1- p^{-s})^{\rank \Pic X}. \end{equation*} Therefore, $\lim_{s \to 1} (s-1)^{\rank \Pic X} L_S(s,\Pic\overline{X}) = \prod_{p \in S}(1- p^{-1})^{\rank \Pic X}$, and the convergence factors are \begin{equation*} \lambda_p^{-1} \coloneqq L_p(1,\Pic\overline X)^{-1} = \left(1-p^{-1}\right)^{\rank\Pic X} \end{equation*} for $p \notin S$ and $\lambda_p^{-1} \coloneqq 1$ for $p \in S$. Hence Peyre's Tamagawa number \cite[D\'efinition~4.5]{MR2019019} is \begin{equation}\label{eq:tamagawa} \tau_H(X) =\mu_\infty(X(\mathbb{R})) \prod_p (1 - p^{-1})^{\rank\Pic X} \mu_p(X(\mathbb{Q}_p)). \end{equation} The Euler product converges by \cite[Remarque~4.6]{MR2019019}. \subsection{Measures on the torsor} By \cite[Proposition~8.2.3]{MR2810322}, we have a rational $\#\Sigma(1)$-form \begin{equation*} s_{Y_0} = \bigwedge_{\rho \in \Sigma_0(1)} \frac{\,{\mathrm d} y_\rho}{y_\rho} \end{equation*} on the toric principal universal torsor $Y_0 \subset Y_1 = \mathbb{A}^{\Sigma_0(1)}_\mathbb{Q}$ with coordinates $y_\rho$ for $\rho \in \Sigma_0(1)$, using our bijection $\Sigma_0(1) \to \Sigma(1)$. Now we regard $\Phi$ and $y^D$ (defined as in \eqref{eq:x^D} for $U$-invariant divisors $D$ on $Y$) as polynomials in $y_\rho$ and as functions on $Y_0$.
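Before defining measures on $Y_0$, we pause for a numerical aside, not needed for any of the proofs: the limit $\lim_{s \to 1} (s-1)^{\rank \Pic X} L_S(s,\Pic\overline{X}) = \prod_{p \in S}(1- p^{-1})^{\rank \Pic X}$ computed above can be checked directly. The following sketch uses the hypothetical toy values $\rank \Pic X = 2$ and $S = \{2,3\}$, and approximates $\zeta(s)$ by a truncated Dirichlet series with an integral tail correction.

```python
import math

def zeta(s, N=10**5):
    """Truncated Dirichlet series for the Riemann zeta function with an
    integral tail correction; adequate for real s slightly above 1."""
    return sum(n ** -s for n in range(1, N)) + N ** (1 - s) / (s - 1)

def L_S(s, r, S):
    """L_S(s, Pic Xbar) = zeta(s)^r * prod_{p in S} (1 - p^{-s})^r
    in the split case, with r = rank Pic X (toy values below)."""
    return zeta(s) ** r * math.prod((1 - p ** -s) ** r for p in S)

r, S = 2, [2, 3]                                   # hypothetical toy data
target = math.prod((1 - 1 / p) ** r for p in S)    # the predicted limit
for s in (1.1, 1.01, 1.001):
    print((s - 1) ** r * L_S(s, r, S))             # approaches target
```

As $s$ decreases toward $1$, the printed values approach $\prod_{p \in S}(1-p^{-1})^{\rank \Pic X} = 1/9$, in accordance with the limit computation above.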
As in \cite[(5.12)]{BBS1}, we define \begin{equation*} \varpi_{Y_0}^\sigma = \frac{y^{D_0}}{y^{D(\sigma)}\Phi} s_{Y_0} \end{equation*} for each $\sigma \in \Sigma_\mathrm{max}$, and \begin{equation*} \varpi_{Y_0} = \frac{1}{\Phi}\bigwedge_{\rho \in \Sigma_0(1)} \,{\mathrm d} y_\rho. \end{equation*} We have $\varpi_{Y_0}^\sigma = \varpi_{Y_0}/y^{D(\sigma)}$ on the open subset $Y_0^\sigma \coloneqq \pi^{-1}(U^\sigma)$ of $ Y_0$. We have $\varpi_{Y_0}^\sigma \in \Gamma(Y_0^\sigma, \omega_{Y_0}(X_0))$ with Poincar\'e residue $\Res\varpi_{Y_0}^\sigma \in \Gamma(X_0^\sigma,\omega_{X_0})$ on $X_0^\sigma = \pi^{-1}(X^\sigma) = X_0 \cap Y_0^\sigma$. As above, we obtain a $v$-adic measure $m_v$ on $X_0(\mathbb{Q}_v)$ defined by \begin{equation*} m_v(M_v) = \int_{M_v} \frac{|\Res\varpi_{Y_0}^\xi|_v} {\max_{\sigma \in \Sigma_\mathrm{max}} |y^{D(\sigma)}/y^{D(\xi)}|_v} \end{equation*} for a Borel subset $M_v$ of $X_0^\xi(\mathbb{Q}_v)$. Alternatively, we can write \begin{equation*} m_v(M_v) = \int_{M_v} \frac{|\Res\varpi_{Y_0}|_v}{\max_{\sigma \in \Sigma_\mathrm{max}} |y^{D(\sigma)}|_v} \end{equation*} because $\varpi_{Y_0} \in \Gamma(Y_0,\omega_{Y_0}(X_0))$ has a residue form $\Res\varpi_{Y_0} \in \Gamma(X_0,\omega_{X_0})$ that restricts to $y^{D(\xi)}\Res\varpi_{Y_0}^{\xi}$ on $X_0^{\xi}$. If $M_v$ is sufficiently small, this is explicitly \begin{equation}\label{eq:measure_torsor_explicit} m_v(M_v) = \int_{\pi_{\rho_0}(M_v)} \frac{\bigwedge_{\rho \in \Sigma_0(1) \setminus \{\rho_0\}} \,{\mathrm d} y_\rho} {|\partial \Phi/\partial x_{\rho_0}(\mathbf{y})|_v \max_{\sigma \in \Sigma_\mathrm{max}} |\mathbf{y}^{D(\sigma)}|_v} \end{equation} in the coordinates $\mathbf{y}=(y_\rho)_{\rho \in \Sigma_0(1)}$, where $\pi_{\rho_0}$ is the projection to all coordinates $y_\rho$ with $\rho\ne\rho_0$ and where $y_{\rho_0}$ is expressed in terms of these coordinates using the implicit function theorem. 
\begin{lemma} Let $D_0^{Y_0} = \pi^* D_0$ be the sum of the prime divisors defined by $y_\rho=0$ for $\rho \in \Sigma_0(1)$. Then there is a unique nowhere vanishing global section $s_{Y_0/Y} \in \Gamma(Y_0,\omega_{Y_0/Y})$ such that $s_{Y_0} = s_{Y_0/Y} \otimes \pi^*s_Y$ via the natural isomorphism $\omega_{Y_0}(D_0^{Y_0}) = \omega_{Y_0/Y} \otimes \pi^* \omega_Y(D_0)$. Let $s_{X_0/X}$ be the image of $\iota_0^*s_{Y_0/Y}$ under the isomorphism $\Gamma(X_0,\iota_0^*\omega_{Y_0/Y}) \to \Gamma(X_0,\omega_{X_0/X})$, and $s_{X_0/X}^\sigma$ be the restriction of $s_{X_0/X}$ to $X_0^\sigma$. Then $\Res\varpi_{Y_0}^\sigma = s_{X_0/X}^\sigma \otimes \pi^*\Res\varpi^\sigma$ under the canonical isomorphism $\omega_{X_0} = \omega_{X_0/X} \otimes \pi^*\omega_X$. \end{lemma} \begin{proof} See \cite[Lemma~16]{BBS1}. \end{proof} \begin{lemma}\label{lem:measure_variety_torsor_p-adic} For any prime $p$, we have $m_p(\widetilde{X}_0(\mathbb{Z}_p)) = (1-p^{-1})^{\rank \Pic X} \mu_p(X(\mathbb{Q}_p))$. \end{lemma} \begin{proof} Our proof follows \cite[Lemma~18]{BBS1}. By \cite[pp.~126--127]{MR1679841}, the map $\pi\colon X_0 \to X$ induces a $v$-adic analytic torsor $\pi_v \colon X_0(\mathbb{Q}_v) \to X(\mathbb{Q}_v)$ under $T(\mathbb{Q}_v)$. By \cite[Theorem~1.22]{MR1679841} and the previous lemma, the relative volume form $s_{X_0/X}$ defines $v$-adic measures on the fibers of $\pi_v$ over $X(\mathbb{Q}_v)$. Integrating along these fibers gives a linear map $\Lambda_v \colon C_c(X_0(\mathbb{Q}_v)) \to C_c(X(\mathbb{Q}_v))$. Let $\chi_p\colon X_0(\mathbb{Q}_p) \to \{0,1\}$ be the characteristic function of $\widetilde{X}_0(\mathbb{Z}_p) \subset \widetilde{X}_0(\mathbb{Q}_p) = X_0(\mathbb{Q}_p)$. Since $\chi_p \in C_c(X_0(\mathbb{Q}_p))$, we have $m_p(\widetilde{X}_0(\mathbb{Z}_p)) = \int_{X(\mathbb{Q}_p)} \Lambda_p(\chi_p) \mu_p$. We claim that $(\Lambda_p(\chi_p))(P) = (1-p^{-1})^{\rank \Pic X}$ for every $P \in X(\mathbb{Q}_p) = \widetilde{X}(\mathbb{Z}_p)$.
Indeed, we have $s_{\widetilde{Y}_0} = s_{\widetilde{Y}_0/\widetilde{Y}} \otimes \pi^* s_{\widetilde{Y}}$, where $s_{\widetilde{Y}_0/\widetilde{Y}}$ is the extension of $s_{Y_0/Y}$ to a $\widetilde{T}$-equivariant generator of $\omega_{\widetilde{Y}_0/\widetilde{Y}}$. Furthermore, $s_{X_0/X}$ extends to a $\widetilde{T}$-equivariant generator $s_{\widetilde{X}_0/\widetilde{X}}$ of $\omega_{\widetilde{X}_0/\widetilde{X}}$. For a point $P \in \widetilde{X}(\mathbb{Z}_p)$, the torsor $\widetilde{X}_0\to \widetilde{X}$ can be pulled back to $(\widetilde{X}_0)_P \to P$, and hence $s_{\widetilde{X}_0/\widetilde{X}}$ pulls back to a $\widetilde{T}_{\mathbb{Z}_p}$-equivariant global section $s_{(\widetilde{X}_0)_P}$ of $\omega_{(\widetilde{X}_0)_P/\mathbb{Z}_p}$. But the torsor over $P$ is trivial, and $\widetilde{T} \cong \mathbb{G}_{\mathrm{m}}^r$ with $r = \rank\Pic X$, hence there are affine coordinates $(t_1,\dots,t_r)$ for the affine $\mathbb{Z}_p$-scheme $(\widetilde{X}_0)_P$ with $s_{(\widetilde{X}_0)_P} = \,{\mathrm d} t_1/t_1 \wedge \dots \wedge \,{\mathrm d} t_r/t_r$. Therefore, \begin{equation*} (\Lambda_p(\chi_p))(P) = \int_{(\widetilde{X}_0)_P(\mathbb{Z}_p)} |s_{(\widetilde{X}_0)_P}|_p = \Big(\int_{\mathbb{Z}_p^\times} \frac{\,{\mathrm d} t}{t}\Big)^r = (1- p^{-1})^r.\qedhere \end{equation*} \end{proof} \subsection{Comparison to the number of points modulo $p^\ell$} In this section, we describe $ \mu_p(X(\mathbb{Q}_p))$ in terms of congruences. In the special case $Y=\mathbb{P}^n_\mathbb{Q}$, this was worked out in \cite[Lemma~3.2]{MR1681100}. Let $p$ be a prime.
For $\ell \in \mathbb{Z}_{>0}$, we have \begin{equation*} \widetilde{X}_0(\mathbb{Z}/p^\ell\mathbb{Z}) = \{\mathbf{x} \in (\mathbb{Z}/p^\ell\mathbb{Z})^{\Sigma(1)} : \Phi(\mathbf{x})=0 \in \mathbb{Z}/p^\ell \mathbb{Z}, \ p \nmid \gcd\{x_\rho : \rho \in S_j\} \text{ for all } j=1,\dots,r\} \end{equation*} as in Proposition~\ref{prop:lift_to_torsor} and define \begin{equation}\label{eq:c_p} c_p \coloneqq \lim_{\ell \to \infty} \frac{\# \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})}{(p^\ell)^{\#\Sigma(1)-1}} \,\, \text{ and } \,\, c_{\mathrm{fin}} \coloneqq \prod_p c_p. \end{equation} We will see in Proposition~\ref{prop:measure_torsor_mod_p^l} that the sequence defining $c_p$ becomes stationary; in particular, the limit $\ell \rightarrow \infty$ exists. The convergence of $c_{\mathrm{fin}}$ will follow from Proposition~\ref{prop:p-adic_density}; see \eqref{eq:tamagawa}. For $\mathbf{x} \in \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})$, let \begin{equation*} \widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x} \coloneqq \{\mathbf{y} \in \widetilde{X}_0(\mathbb{Z}_p)\mid \mathbf{y} \equiv \mathbf{x} \bmod{p^\ell}\}. \end{equation*} \begin{lemma}\label{lem:partial_derivatives} There is an $\ell_1 \in \mathbb{Z}_{>0}$ such that the following holds for all $\ell \ge \ell_1$: for any $\mathbf{x} \in \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})$, there is a nonnegative integer $c_\mathbf{x}<\ell_1$ and a $\rho_\mathbf{x} \in \Sigma(1)$ such that for all $\mathbf{y} \in \widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}$ one has \begin{equation*} \inf_{\rho \in \Sigma(1)}\{v_p(\partial \Phi/\partial x_\rho(\mathbf{y}))\} = v_p(\partial \Phi/\partial x_{\rho_\mathbf{x}}(\mathbf{y})) = c_\mathbf{x}. \end{equation*} \end{lemma} \begin{proof} Since $X$ is smooth, $X_0$ is also smooth. Hence for any $\mathbf{y} \in X_0(\mathbb{Q}_p)$, we have $\partial\Phi/\partial x_\rho(\mathbf{y}) \ne 0$ for some $\rho \in \Sigma(1)$.
In particular, for any $\mathbf{y} \in \widetilde{X}_0(\mathbb{Z}_p)$, the valuation $v_p(\partial\Phi/\partial x_\rho(\mathbf{y}))$ is finite for some $\rho$. Hence $I_p(\mathbf{y}) \coloneqq \inf_{\rho \in \Sigma(1)}\{v_p(\partial \Phi/\partial x_\rho(\mathbf{y}))\}$ is finite. There is an $\ell_1$ such that $I_p(\mathbf{y})<\ell_1$ for all $\mathbf{y} \in \widetilde{X}_0(\mathbb{Z}_p)$. To see this, assume the contrary. Then there is a sequence $\mathbf{y}_1,\mathbf{y}_2,\dots \in \widetilde{X}_0(\mathbb{Z}_p)$ with $I_p(\mathbf{y}_j)\ge j$ for all $j$. The description of $\widetilde{X}_0(\mathbb{Z}_p)$ in Proposition~\ref{prop:lift_to_torsor} shows that this sequence has an accumulation point $\mathbf{y}_0 \in \widetilde{X}_0(\mathbb{Z}_p)$: infinitely many $\mathbf{y}_j$ have the same first $p$-adic digits, infinitely many of these have the same second $p$-adic digits, and so on; we obtain $\mathbf{y}_0$ by using these $p$-adic digits; $\Phi(\mathbf{y}_0)=0$ since $\Phi$ is continuous, and $\mathbf{y}_0$ satisfies the coprimality conditions since these depend only on the first $p$-adic digits. Passing to a subsequence, we may assume that $\mathbf{y}_0$ is the limit of the sequence $(\mathbf{y}_j)_j$. Then $\partial \Phi/\partial x_\rho(\mathbf{y}_0) = \lim_{j \to \infty} \partial \Phi/\partial x_\rho(\mathbf{y}_j) = 0$ for all $\rho \in \Sigma(1)$. This contradicts the smoothness of $X_0$ over $\mathbb{Q}_p$. Let $\ell \ge \ell_1$ and $\mathbf{x} \in \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})$. For any $\mathbf{y} \in \widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}$, the first $\ell$ digits of $\partial \Phi/\partial x_\rho(\mathbf{y})$ depend only on $\mathbf{x}$, and since $I_p(\mathbf{y}) < \ell_1 \le \ell$, at least one of these digits is nonzero for some $\rho \in \Sigma(1)$.
We choose $c_\mathbf{x}$ and $\rho_\mathbf{x}$ such that digit number $c_\mathbf{x}$ (i.\,e.,~ the coefficient of $p^{c_\mathbf{x}}$ in the $p$-adic expansion) of $\partial \Phi/\partial x_{\rho_\mathbf{x}}(\mathbf{y})$ is nonzero, while all lower digits of $\partial \Phi/\partial x_\rho(\mathbf{y})$ for all $\rho \in \Sigma(1)$ are zero. \end{proof} \begin{prop}\label{prop:measure_torsor_mod_p^l} For every prime $p$, there is an $\ell_0 \in \mathbb{Z}_{>0}$ such that for all $\ell \ge \ell_0$ we have \begin{equation*} m_p(\widetilde{X}_0(\mathbb{Z}_p)) = \frac{\# \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})}{(p^\ell)^{\dim X_0}}. \end{equation*} \end{prop} \begin{proof} Let $\ell_1$ be as in Lemma~\ref{lem:partial_derivatives}. For $\mathbf{x} \in \widetilde{X}_0(\mathbb{Z}/p^{\ell_1}\mathbb{Z})$ and $\ell \ge \ell_1$, let \begin{equation*} \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})_\mathbf{x} \coloneqq \{\mathbf{y} \in (\mathbb{Z}/p^{\ell}\mathbb{Z})^{\Sigma(1)} \mid \Phi(\mathbf{y})=0 \in \mathbb{Z}/p^{\ell}\mathbb{Z},\ \mathbf{y} \equiv \mathbf{x} \bmod{p^{\ell_1}}\}. \end{equation*} We will see that \begin{equation}\label{eq:measure_eq_number_x} m_p(\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}) = \frac{\# \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})_\mathbf{x}}{(p^{\ell})^{\#\Sigma(1)-1}} \end{equation} for all $\ell \ge \ell_1+c_\mathbf{x}$ with $c_\mathbf{x}<\ell_1$ as in Lemma~\ref{lem:partial_derivatives}. Since $\widetilde{X}_0(\mathbb{Z}_p)$ is the disjoint union of the sets $\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}$ and $ \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})$ is the disjoint union of the sets $ \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})_\mathbf{x}$ for $\mathbf{x} \in \widetilde{X}_0(\mathbb{Z}/p^{\ell_1}\mathbb{Z})$, our result follows for all $\ell \ge \ell_0 \coloneqq 2\ell_1-1$. 
For the proof of \eqref{eq:measure_eq_number_x}, we fix $\mathbf{x} \in \widetilde{X}_0(\mathbb{Z}/p^{\ell_1}\mathbb{Z})$ and let $c_\mathbf{x}, \rho_\mathbf{x}$ be as in Lemma~\ref{lem:partial_derivatives}. We claim that $\Phi(\mathbf{y}) \bmod{p^{\ell_1+c_\mathbf{x}}}$ is the same for all $\mathbf{y} \in \mathbb{Z}_p^{\Sigma(1)}$ with $\mathbf{y} \equiv \mathbf{x} \bmod{p^{\ell_1}}$; we write $\Phi^*(\mathbf{x})$ for this value in $\mathbb{Z}/p^{\ell_1+c_\mathbf{x}}\mathbb{Z}$. Indeed, for $\mathbf{y},\mathbf{y}' \in \mathbb{Z}_p^{\Sigma(1)}$, we have \begin{equation*} \Phi(\mathbf{y}') = \Phi(\mathbf{y}) + \sum_{\rho \in \Sigma(1)} (y_\rho'-y_\rho)\cdot \partial\Phi/\partial x_\rho(\mathbf{y}) + \sum_{\rho',\rho'' \in \Sigma(1)} \Psi_{\rho',\rho''}(\mathbf{y},\mathbf{y}')(y_{\rho'}'-y_{\rho'})(y_{\rho''}'-y_{\rho''}) \end{equation*} for certain polynomials $\Psi_{\rho',\rho''} \in \mathbb{Z}_p[X_\rho,X'_\rho : \rho \in \Sigma(1)]$ by Taylor expansion. If $\mathbf{y}' \equiv \mathbf{y} \bmod{p^{\ell_1}}$, we conclude $\Phi(\mathbf{y}') \equiv \Phi(\mathbf{y}) \bmod{p^{\ell_1+c_\mathbf{x}}}$. If $\Phi^*(\mathbf{x}) \ne 0 \in \mathbb{Z}/p^{\ell_1+c_\mathbf{x}}\mathbb{Z}$, then there is no $\mathbf{y} \in \mathbb{Z}_p^{\Sigma(1)}$ with $\mathbf{y} \equiv \mathbf{x} \bmod{p^{\ell_1}}$ and $\Phi(\mathbf{y})=0$, hence the set $\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}$ is empty, and the same holds for $ \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})_\mathbf{x}$ for all $\ell \ge \ell_1+c_\mathbf{x}$ for similar reasons. Now assume $\Phi^*(\mathbf{x}) = 0 \in \mathbb{Z}/p^{\ell_1+c_\mathbf{x}}\mathbb{Z}$. 
By Hensel's lemma, the map $\pi_{\rho_\mathbf{x}}$ that drops the $\rho_\mathbf{x}$-coordinate defines a bijection from the integration domain $\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}$ to the set \begin{align*} &\{(y_\rho)_{\rho \in \Sigma(1) \setminus \{\rho_\mathbf{x}\}} \in \mathbb{Z}_p^{\Sigma(1) \setminus \{\rho_\mathbf{x}\}} \mid y_\rho \equiv x_\rho \bmod{p^{\ell_1}}\text{ for all }\rho \in \Sigma(1) \setminus \{\rho_\mathbf{x}\}\}\\ ={}&\{(x_\rho+z_\rho)_{\rho \in \Sigma(1) \setminus \{\rho_\mathbf{x}\}} \mid z_\rho \in p^{\ell_1}\mathbb{Z}_p\} \cong (p^{\ell_1}\mathbb{Z}_p)^{\Sigma(1) \setminus \{\rho_\mathbf{x}\}}. \end{align*} Therefore, by \eqref{eq:measure_torsor_explicit} and the first statement in Corollary~\ref{cor:height_torsor_integral}, \begin{equation*} m_p(\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}) = \int_{\pi_{\rho_\mathbf{x}}(\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x})} \frac{\bigwedge_{\rho \in \Sigma(1)\setminus \{\rho_\mathbf{x}\}} \,{\mathrm d} y_\rho} {|\partial\Phi/\partial x_{\rho_\mathbf{x}}(\mathbf{y})|_p}, \end{equation*} where $y_{\rho_\mathbf{x}}$ is expressed in terms of the other coordinates using $\pi_{\rho_\mathbf{x}}^{-1}$. We have $|\partial\Phi/\partial x_{\rho_\mathbf{x}}(\mathbf{y})|_p = p^{-c_\mathbf{x}}$ on the integration domain. Thus, \begin{equation*} m_p(\widetilde{X}_0(\mathbb{Z}_p)_\mathbf{x}) = \int_{(p^{\ell_1}\mathbb{Z}_p)^{\Sigma(1) \setminus \{\rho_\mathbf{x}\}} } \frac{\bigwedge_{\rho \in \Sigma(1) \setminus \{\rho_\mathbf{x}\}} \,{\mathrm d} z_\rho}{p^{-c_\mathbf{x}}} = p^{c_\mathbf{x}-\ell_1(\#\Sigma(1)-1)}. \end{equation*} On the other hand, by the discussion above, $\Phi^*(\mathbf{x})=0 \in \mathbb{Z}/p^{\ell_1+c_\mathbf{x}}\mathbb{Z}$ means $\Phi(\mathbf{y})=0 \in \mathbb{Z}/p^{\ell_1+c_\mathbf{x}}\mathbb{Z}$ for all $\mathbf{y} \equiv \mathbf{x} \bmod{p^{\ell_1}}$.
Therefore, \begin{align*} \frac{\# \widetilde{X}_0(\mathbb{Z}/p^{\ell_1+c_\mathbf{x}}\mathbb{Z})_\mathbf{x}}{(p^{\ell_1+c_\mathbf{x}})^{\#\Sigma(1)-1}} = \frac{p^{c_\mathbf{x} \#\Sigma(1)}}{(p^{\ell_1+c_\mathbf{x}})^{\#\Sigma(1)-1}} = p^{c_\mathbf{x}-\ell_1(\#\Sigma(1)-1)}. \end{align*} Using Hensel's lemma as before, we see that $\# \widetilde{X}_0(\mathbb{Z}/p^{\ell}\mathbb{Z})_\mathbf{x}/(p^\ell)^{\#\Sigma(1)-1}$ has the same value for all $\ell \ge \ell_1+c_\mathbf{x}$. This completes the proof of \eqref{eq:measure_eq_number_x}. \end{proof} \begin{prop}\label{prop:p-adic_density} We have \begin{equation*} (1 - p^{-1})^{\rank \Pic X}\mu_p(X(\mathbb{Q}_p)) = c_p. \end{equation*} \end{prop} \begin{proof} We combine Lemma~\ref{lem:measure_variety_torsor_p-adic} and Proposition~\ref{prop:measure_torsor_mod_p^l} with \eqref{eq:c_p}. \end{proof} \subsection{The real density} In this section, we compute the real density and Peyre's $\alpha$-constant in terms of quantities that come up naturally in the analytic method in Sections~\ref{sec8} and \ref{sec9}. For the case $Y = \mathbb{P}^n_\mathbb{Q}$, see \cite[\S 5.4]{MR1340296}. For any $\sigma \in \Sigma_\mathrm{max}$, we can write \begin{align*} -K_X = \sum_{\rho \notin \sigma(1)} \alpha^\sigma_\rho \deg(x_\rho) \end{align*} with $\alpha^\sigma_\rho \in \mathbb{Z}$ by Lemma~\ref{lem:monomials_degree_L}. 
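Before imposing further assumptions, here is a small numerical illustration (not part of the argument) of the congruence description of $c_p$ in \eqref{eq:c_p} and of the stationarity established in Proposition~\ref{prop:measure_torsor_mod_p^l}. It is a sketch for the hypothetical toy data $Y = \mathbb{P}^3_\mathbb{Q}$ and $\Phi = x_0x_1 - x_2x_3$, where, as in the special case $Y = \mathbb{P}^n_\mathbb{Q}$, the single coprimality condition is $p \nmid \gcd(x_0,\dots,x_3)$.

```python
from fractions import Fraction
from itertools import product

def c_p_approx(p, ell):
    """Brute-force ratio #X_0~(Z/p^ell Z) / (p^ell)^(#Sigma(1)-1), cf. the
    definition of c_p, for the hypothetical toy data Y = P^3 and
    Phi = x0*x1 - x2*x3; the only coprimality condition is that p must
    not divide gcd(x0, ..., x3)."""
    q = p ** ell
    count = 0
    for x in product(range(q), repeat=4):
        if (x[0] * x[1] - x[2] * x[3]) % q == 0 and any(xi % p != 0 for xi in x):
            count += 1
    return Fraction(count, q ** 3)  # dim X_0 = #Sigma(1) - 1 = 3 here
```

For this $\Phi$ the gradient $(x_1, x_0, -x_3, -x_2)$ vanishes modulo $p$ only at tuples excluded by the coprimality condition, so $\ell_1 = 1$ and the ratio is stationary already from $\ell = 1$ on; for instance, $c_2 = 9/8$ and $c_3 = 32/27$ in this toy case.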
In this section, we assume for convenience: \begin{equation}\label{eq:assumption_real_density_strong} \begin{aligned} &\text{Every variable $x_\rho$ appears in at most one monomial of $\Phi$.}\\ &\text{There are $\sigma \in \Sigma_\mathrm{max}$, $\rho_0 \in \sigma(1)$ and $\rho_1 \in \Sigma(1) \setminus \sigma(1)$ such that $\alpha^\sigma_{\rho_1} \ne 0$, }\\ &\text{the variable $x_{\rho_0}$ appears with exponent $1$ in $\Phi$, and }\\ &\text{no $x_\rho$ with $\rho \in \sigma(1) \cup \{\rho_1\} \setminus \{\rho_0\}$ appears in the same monomial of $\Phi$ as $x_{\rho_0}$.} \end{aligned} \end{equation} This assumption will be satisfied and easy to check in all our applications. It implies assumption~\eqref{simplifying} below and hence will allow us to compare Peyre's real density with $c_\infty$ as in Section~\ref{sec9}. We fix $\sigma,\rho_0,\rho_1$ as in \eqref{eq:assumption_real_density_strong}. Let $\sigma(1)' \coloneqq \sigma(1) \cup \{\rho_1\}$. When we write $\rho \notin \sigma(1)'$, we mean $\rho \in \Sigma(1) \setminus \sigma(1)'$. Because of $\alpha^\sigma_{\rho_1} \ne 0$ and \eqref{eq:sigma-basis}, $\{\deg(x_\rho) : \rho \notin \sigma(1)'\} \cup \{K_X\}$ is an $\mathbb{R}$-basis of $(\Pic X)_\mathbb{R}$. Hence we can define the real numbers $b_{\rho,\rho'}$ and $b_{\rho'}$ to satisfy \begin{align*} \deg(x_{\rho'}) = - b_{\rho'}K_X -\sum_{\rho \notin \sigma(1)'} b_{\rho,\rho'}\deg(x_\rho) \end{align*} for $\rho' \in \sigma(1)'$. We consider the height matrix $\mathscr{A}_1 = (a_{\rho}^{\sigma})_{(\rho, \sigma) \in \Sigma(1) \times \Sigma_\mathrm{max}} \in \mathbb{R}^{\Sigma(1) \times \Sigma_\mathrm{max}} = \mathbb{R}^{J \times N}$ as in \eqref{matrix}. Let $Z_\rho$ for $\rho \in \Sigma(1)$ be the rows of this matrix. The following shows that our definition of $b_{\rho,\rho'}$ and $b_{\rho'}$ is consistent with definitions \eqref{beta} and \eqref{beta0} that will be needed in Section~\ref{sec8}.
\begin{lemma}\label{lem19} We have \begin{align*} Z_{\rho} = \sum_{\rho' \in \sigma(1)'} b_{\rho,\rho'} Z_{\rho'} \quad \text{and} \quad (1,\dots,1) = \sum_{\rho' \in \sigma(1)'} b_{\rho'} Z_{\rho'} \end{align*} for all $\rho \notin \sigma(1)'$. In particular, with \begin{equation}\label{eq:rkPic} R = 2+\dim X = J-\rank\Pic X + 1, \end{equation} the $R$ rows $\{Z_{\rho'} : \rho' \in \sigma(1)'\}$ form a maximal linearly independent subset. \end{lemma} \begin{proof} As in \eqref{matrix}, let $\mathscr{A}_3 = (1,\dots,1) \in \mathbb{R}^{1\times \Sigma_\mathrm{max}} = \mathbb{R}^{1 \times N}$. Let $\{e_\rho : \rho \in \Sigma(1)\} \cup \{e_0\}$ be the standard basis of $\mathbb{R}^{\Sigma(1)} \times \mathbb{R}$. We define $\deg(e_\rho) = \deg(x_\rho)$ for $\rho \in \Sigma(1)$ and $\deg(e_0) = K_X$. Consider the sequence of linear maps \begin{align*} \mathbb{R}^{\Sigma_\mathrm{max}} \xlongrightarrow{\begin{pmatrix}\mathscr{A}_1\\\mathscr{A}_3\end{pmatrix}} \mathbb{R}^{\Sigma(1)} \times \mathbb{R} \xlongrightarrow{\deg} (\Pic X)_\mathbb{R}\xlongrightarrow{} 0\text{.} \end{align*} The second map is surjective, and the image of the first is contained in the kernel of the second. Since we have $\rank \mathscr{A}_1 = \#\Sigma(1) + 1 - \rank \Pic X$ by Lemma~\ref{rank}, this sequence is exact. It follows that the dual sequence \begin{align*} \mathbb{R}^{\Sigma_\mathrm{max}} \xlongleftarrow{\begin{pmatrix}\mathscr{A}_1^\top & \mathscr{A}_3^\top\end{pmatrix}} \mathbb{R}^{\Sigma(1)} \times \mathbb{R} \xlongleftarrow{\deg^\vee} (\Pic X)_\mathbb{R}^\vee\xlongleftarrow{} 0 \end{align*} is exact as well. Let $\{d^\vee_\rho : \rho \notin \sigma(1)'\} \cup \{K_X^\vee\}$ be the $\mathbb{R}$-basis of $(\Pic X)_\mathbb{R}^\vee$ dual to the $\mathbb{R}$-basis of $(\Pic X)_\mathbb{R}$ given above.
We have \begin{align*} \deg^\vee(d_\rho^\vee) = e_\rho - \sum_{\rho' \in \sigma(1)'} b_{\rho,\rho'} e_{\rho'} \quad \text{and} \quad \deg^\vee(K_X^\vee) = e_0 - \sum_{\rho' \in \sigma(1)'} b_{\rho'} e_{\rho'} \end{align*} for all $\rho \notin \sigma(1)'$. Since these elements lie in the kernel of the leftmost map in the dual exact sequence, this gives the required relations between the rows of the matrix $\mathscr{A}_1$ and the row $\mathscr{A}_3$. \end{proof} We compare the factor $\alpha(X)$ of Peyre's constant as in \cite[D\'efinition~2.4]{MR1340296} to \begin{equation}\label{cast-final-first} c^{\ast} \coloneqq \text{vol}\Big\{ \textbf{r} \in [0, \infty]^{\Sigma(1)\setminus\sigma(1)'} : b_{\rho'} -\sum_{\rho \notin \sigma(1)'} r_{\rho} b_{\rho,\rho'} \geq 0 \text{ for all } \rho' \in \sigma(1)' \Big\}, \end{equation} which will appear in \eqref{cast-final}. \begin{lemma}\label{lem:alpha} We have \begin{equation*} \alpha(X) = \frac{1}{|\alpha^\sigma_{\rho_1}|} c^{\ast}. \end{equation*} \end{lemma} \begin{proof} Let $\vol_{\mathbb{Z}}$ be the volume on $(\Pic X)_\mathbb{R}$ defined by the lattice $\Pic X$, and let $\vol_\mathbb{R}$ be the volume on $(\Pic X)_\mathbb{R}$ defined by the basis $\{K_X\} \cup \{\deg(x_\rho) : \rho \notin \sigma(1)'\}$. Since the determinant of the transformation matrix is $-\alpha^\sigma_{\rho_1}$, we have $\vol_\mathbb{Z} = |\alpha^\sigma_{\rho_1}|\vol_\mathbb{R}$. For the corresponding dual volumes on $(\Pic X)^\vee_\mathbb{R}$, we have $\vol^\vee_\mathbb{Z} = |\alpha^\sigma_{\rho_1}|^{-1}\vol^\vee_\mathbb{R}$. Peyre considers the unique $(\rank \Pic X-1)$-form $\vol_{\mathrm{P}}$ on $(\Pic X)_\mathbb{R}^\vee$ such that $\vol_{\mathrm{P}} \wedge K_X = \vol^\vee_\mathbb{Z}$. We also consider the form $\vol_V = \bigwedge_{\rho \notin \sigma(1)'} \deg(x_\rho)$. Note that we have $\vol_V \wedge K_X = \vol_\mathbb{R}^\vee$. It follows that we have $\vol_{\mathrm P} = |\alpha^\sigma_{\rho_1}|^{-1} \vol_V$. 
These forms can be restricted to volumes on any affine subspace parallel to the subspace $V = \{\phi \in (\Pic X)_\mathbb{R}^\vee : \langle\phi, K_X\rangle = 0\}$. Hence \begin{align*} \alpha(X) &= \vol_{\mathrm{P}}{}\{r \in (\Eff X)^\vee : \langle r, K_X\rangle = -1\}\\ &= |\alpha^\sigma_{\rho_1}|^{-1}\vol_V{}\{r \in (\Pic X)_\mathbb{R}^\vee : \langle r, K_X \rangle = -1, \langle r, \deg x_\rho\rangle \ge 0 \text{ for all $\rho \in \Sigma(1)$}\}\\ &= |\alpha^\sigma_{\rho_1}|^{-1}\vol_V{}\mathopen{}\mathclose\bgroup\left\{r_0 K_X^\vee + \sum_{\rho \notin \sigma(1)'} r_\rho d^\vee_\rho : \begin{aligned} &r_0 = -1, r_\rho \ge 0\text{ for all }\rho \notin \sigma(1)',\\ & b_{\rho'}-\textstyle\sum_{\rho \notin \sigma(1)'} r_{\rho}b_{\rho,\rho'} \ge 0 \text{ for all } \rho' \in \sigma(1)' \end{aligned} \aftergroup\egroup\right\}, \end{align*} and the claim follows. \end{proof} Next we analyze Peyre's real density $\mu_\infty(X(\mathbb{R}))$ as given in Proposition~\ref{prop:local_measure}. By our assumption \eqref{eq:assumption_real_density_strong}, the equation $\Phi = 0$ can be solved for $x_{\rho_0}$ when all $x_\rho$ with $\rho \notin \sigma(1)'$ are nonzero; here, the implicit function $\phi$ is a rational function in $\{x_\rho : \rho \in \Sigma(1) \setminus \{\rho_0\}\}$ whose total $\Pic X$-degree is $\deg(x_{\rho_0})$. Whenever $S \subseteq \sigma(1)' \setminus \{\rho_0\}$ and $\mathbf{u} = (u_\rho) \in \mathbb{R}^S$, we write $\phi(\mathbf{u},\mathbf{1})$ for $\phi((x_\rho)_{\rho \in \Sigma(1) \setminus \{\rho_0\}})$ with $x_\rho=u_\rho$ for $\rho \in S$ and $x_\rho=1$ otherwise; this is a polynomial expression in $\mathbf{u}$. We write \begin{equation*} H_\infty(\mathbf{x}) \coloneqq \max_{\sigma' \in \Sigma_\mathrm{max}}|\mathbf{x}^{D(\sigma')}| \end{equation*} for any $\mathbf{x} \in \mathbb{R}^{\Sigma(1)}$.
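As a numerical aside (again not needed for the proofs), the real density can be made concrete in a simple toy case. For the hypothetical data $Y = \mathbb{P}^3_\mathbb{Q}$, $\Phi = x_0x_1 - x_2x_3$, with $\sigma(1)$ the rays of $x_1, x_2, x_3$ and $\rho_0$ the ray of $x_1$, the chart sets $x_0 = 1$ and solves $\Phi = 0$ as $x_1 = z_2z_3$, so $\partial\Phi/\partial x_{\rho_0} = x_0 = 1$ and $H_\infty = \max(1, (z_2z_3)^2, z_2^2, z_3^2)$ under these identifications. The following sketch approximates the resulting density integral over $\mathbb{R}^2$:

```python
import math

def omega_infty_toy(n=400):
    """Midpoint-rule approximation of the toy real density
        I = \int_{R^2} dz2 dz3 / max(1, z2^2, z3^2, (z2*z3)^2)
    arising from the hypothetical data Y = P^3, Phi = x0*x1 - x2*x3,
    chart x0 = 1, x1 = z2*z3 (so dPhi/dx1 = 1 on the chart).
    The substitution z = tan(t) compactifies each axis to (-pi/2, pi/2)."""
    h = math.pi / n
    pts = []
    for i in range(n):
        t = -math.pi / 2 + (i + 0.5) * h
        pts.append((math.tan(t), math.cos(t) ** -2))  # (z, Jacobian dz/dt)
    total = 0.0
    for z2, j2 in pts:
        for z3, j3 in pts:
            total += h * h * j2 * j3 / max(1.0, z2 * z2, z3 * z3, (z2 * z3) ** 2)
    return total
```

Splitting $\mathbb{R}^2$ into the four regions $|z_2| \lessgtr 1$, $|z_3| \lessgtr 1$ shows that each region contributes $4$, so the toy integral equals $16$ exactly, and the quadrature converges to this value.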
For the computation of $\mu_\infty(X(\mathbb{R}))$, we work with \eqref{eq:local_measure_explicit} and the chart from the subset of $X^\sigma(\mathbb{R})$ to $\mathbb{R}^{\sigma(1) \setminus \{\rho_0\}}$ that drops the $\rho_0$-coordinate. Its inverse is induced by the map \begin{equation*} f\colon \mathbb{R}^{\sigma(1) \setminus \{\rho_0\}} \to \mathbb{R}^{\Sigma(1)},\quad \mathbf{z} =(z_\rho) \mapsto (x_\rho) \text{ with } x_\rho \coloneqq \begin{cases} \phi(\mathbf{z},\mathbf{1}), & \rho=\rho_0,\\ z_\rho, & \rho \in \sigma(1) \setminus \{\rho_0\},\\ 1, & \rho \notin \sigma(1) \end{cases} \end{equation*} if we interpret the right hand side in Cox coordinates. Since $f(\mathbb{R}^{\sigma(1) \setminus \{\rho_0\}})$ and $X(\mathbb{R})$ differ by a set of measure zero, Peyre's real density can be expressed as \begin{equation}\label{eq:omegaInftyPeyre} \omega_\infty \coloneqq \mu_\infty(X(\mathbb{R})) = \int_{\mathbf{z}\in \mathbb{R}^{\sigma(1) \setminus \{\rho_0\}}} \frac{\,{\mathrm d}\mathbf{z}}{|\partial\Phi/\partial x_{\rho_0}(f(\mathbf{z}))|\cdot H_\infty(f(\mathbf{z}))}. \end{equation} Using the map \begin{equation*} g\colon \mathbb{R}^{\sigma(1)' \setminus \{\rho_0\}} \to \mathbb{R}^{\Sigma(1)},\quad \mathbf{t} = (t_\rho) \mapsto (x_\rho) \text{ with }x_\rho \coloneqq \begin{cases} \phi(\mathbf{t},\mathbf{1}), & \rho=\rho_0,\\ t_\rho, & \rho \in \sigma(1)' \setminus \{\rho_0\},\\ 1, & \rho \notin \sigma(1)', \end{cases} \end{equation*} we define \begin{equation}\label{cinf-first} c_\infty \coloneqq 2^{\#\Sigma(1)-\#\sigma(1)-1} \int_{\mathbf{t} \in \mathbb{R}^{\sigma(1)' \setminus \{\rho_0\}},\ H_\infty(g(\mathbf{t})) \le 1} \frac{\,{\mathrm d} \mathbf{t}}{|\partial\Phi/\partial x_{\rho_0}(g(\mathbf{t}))|}, \end{equation} which will reappear in \eqref{cinf-final} and \eqref{eq:c_infty}. To compare $\omega_\infty$ and $c_\infty$, we use the following substitution. 
\begin{lemma}\label{lem:transform} Let $\Psi$ be a $\Pic X$-homogeneous rational function in $\{x_\rho : \rho \in \Sigma(1)\}$ of degree \begin{equation*} \sum_{\rho \notin \sigma(1)} \alpha^\sigma_{\Psi,\rho}\deg(x_\rho). \end{equation*} Let $\alpha^\sigma_{\rho',\rho} \in \mathbb{Z}$ for $\rho' \in \Sigma(1)$ and $\rho \notin \sigma(1)$ be as in \eqref{eq:alpha_sigma_rho_rho'}. Then the substitution $z_{\rho'}=t_{\rho_1}^{-\alpha^\sigma_{\rho',\rho_1}}t_{\rho'}$ for $\rho' \in \sigma(1) \setminus \{\rho_0\}$ gives $\Psi(f(\mathbf{z}))=t_{\rho_1}^{-\alpha^\sigma_{\Psi,\rho_1}}\Psi(g(\mathbf{t}))$. In particular, $\phi(\mathbf{z},\mathbf{1})=t_{\rho_1}^{-\alpha^\sigma_{\rho_0,\rho_1}}\phi(\mathbf{t},\mathbf{1})$. If $t_{\rho_1}$ appears in $\phi(\mathbf{t},\mathbf{1})$ with odd exponent, then there is another $t_\rho$ with odd exponent in the same monomial or there is a $t_\rho$ with odd exponent in each of the other monomials of $\phi(\mathbf{t},\mathbf{1})$. \end{lemma} \begin{proof} Consider the case $\Psi=x_\rho$ first. For $\rho \in \sigma(1)\setminus \{\rho_0\}$, the claim holds by definition of the substitution. For $\rho = \rho_1$, we have $\Psi(f(\mathbf{z}))=1=t_{\rho_1}^{-1}\cdot t_{\rho_1} = t_{\rho_1}^{-\alpha^\sigma_{\Psi,\rho_1}}\Psi(g(\mathbf{t}))$. For $\rho \notin \sigma(1)'$, we have $\Psi(f(\mathbf{z}))=1\cdot 1=t_{\rho_1}^{-\alpha^\sigma_{\Psi,\rho_1}}\Psi(g(\mathbf{t}))$. Therefore, the claim holds for all monomials and hence also for all homogeneous polynomials and all homogeneous rational functions in $\{x_\rho : \rho \in \Sigma(1) \setminus \{\rho_0\}\}$. In particular, in the case $\Psi=x_{\rho_0}$, since $\phi$ is such a rational function of degree $\deg(x_{\rho_0})$, the substitution gives $\Psi(f(\mathbf{z}))=\phi(\mathbf{z},\mathbf{1}) = t_{\rho_1}^{-\alpha^\sigma_{\rho_0,\rho_1}}\phi(\mathbf{t},\mathbf{1}) = t_{\rho_1}^{-\alpha^\sigma_{\Psi,\rho_1}}\Psi(g(\mathbf{t}))$. 
Now the claim follows for all monomials, homogeneous polynomials, and finally all homogeneous rational functions in $\{x_\rho : \rho \in \Sigma(1)\}$. Let $\psi$ be the numerator of $\phi$. Because of \eqref{eq:assumption_real_density_strong}, $t_{\rho_1}$ appears in at most one monomial of $\psi(\mathbf{t},\mathbf{1})$; we assume that it appears in the first monomial with odd exponent. Therefore, either the exponent of $t_{\rho_1}$ in the first monomial of $t_{\rho_1}^{-\alpha^\sigma_{\psi,\rho_1}}\psi(\mathbf{t},\mathbf{1})$ is odd, or the exponents of $t_{\rho_1}$ in all other monomials of this expression are odd. But since our substitution gives $\psi(\mathbf{z},\mathbf{1})=t_{\rho_1}^{-\alpha^\sigma_{\psi,\rho_1}}\psi(\mathbf{t},\mathbf{1})$, the exponent of $t_{\rho_1}$ in a certain monomial of $t_{\rho_1}^{-\alpha^\sigma_{\psi,\rho_1}}\psi(\mathbf{t},\mathbf{1})$ can only be odd if there is a $z_\rho$ with odd exponent in the corresponding monomial of $\psi(\mathbf{z},\mathbf{1})$, and then the exponent of $t_\rho$ in this monomial of $\psi(\mathbf{t},\mathbf{1})$ is also odd. \end{proof} \begin{prop}\label{prop:omega_infty-c_infty} We have \begin{equation*} \mu_\infty(X(\mathbb{R})) = \frac{|\alpha^\sigma_{\rho_1}|}{2^{\rank \Pic X}} c_\infty. \end{equation*} \end{prop} \begin{proof} Our starting point is \eqref{eq:omegaInftyPeyre}. We use the identity (for positive real $s$) \begin{equation*} \frac{1}{s} = \int_{z_{\rho_1} > 0,\ sz_{\rho_1}\le 1} \,{\mathrm d} z_{\rho_1} \end{equation*} to deduce \begin{equation*} \omega_\infty = \int_{(\mathbf{z},z_{\rho_1}) \in \mathbb{R}^{\sigma(1) \setminus \{\rho_0\}} \times \mathbb{R}_{>0},\ H_\infty(f(\mathbf{z}))\cdot z_{\rho_1} \le 1} \frac{\,{\mathrm d}\mathbf{z}\,{\mathrm d} z_{\rho_1}}{|\partial\Phi/\partial x_{\rho_0}(f(\mathbf{z}))|}. \end{equation*} We use the transformation $z_{\rho_1} = t_{\rho_1}^{\alpha^\sigma_{\rho_1}}$ (with positive $t_{\rho_1}$) and the transformations from Lemma~\ref{lem:transform}. 
The latter give $H_\infty(f(\mathbf{z})) = t_{\rho_1}^{-\alpha^\sigma_{\rho_1}}H_\infty(g(\mathbf{t}))$ since all monomials appearing in the definition of the anticanonical height function $H_\infty$ have degree $-K_X$; therefore, $H_\infty(f(\mathbf{z}))\cdot z_{\rho_1} = H_\infty(g(\mathbf{t}))$. Furthermore, $|\partial\Phi/\partial x_{\rho_0}(f(\mathbf{z}))| = |t_{\rho_1}^{-\alpha^\sigma_{\partial\Phi/\partial x_{\rho_0},\rho_1}}\partial\Phi/\partial x_{\rho_0}(g(\mathbf{t}))|$ (even without using the observation that these are the same constant by \eqref{eq:assumption_real_density_strong}). We obtain $\,{\mathrm d} z_{\rho_1} = |\alpha^\sigma_{\rho_1} t_{\rho_1}^{\alpha^\sigma_{\rho_1}-1}| \,{\mathrm d} t_{\rho_1}$ and \begin{equation*} \,{\mathrm d} \mathbf{z} = |t_{\rho_1}^{-\sum_{\rho' \in \sigma(1) \setminus \{\rho_0\}}\alpha^\sigma_{\rho',\rho_1}}|\bigwedge_{\rho' \in \sigma(1) \setminus \{\rho_0\}} \,{\mathrm d} t_{\rho'}. \end{equation*} The integration domain is unchanged. We have $-K_X=\sum_{\rho' \in \Sigma(1)} \deg(x_{\rho'}) - \deg(\Phi)$ by \cite[Proposition~3.3.3.2]{adhl15}, and $\deg(\partial\Phi/\partial x_{\rho_0})=\deg(\Phi)-\deg(x_{\rho_0})$. Therefore, $\alpha^\sigma_{\rho_1}=\sum_{\rho' \in \Sigma(1)} \alpha^\sigma_{\rho',\rho_1}-\alpha^\sigma_{\Phi,\rho_1}$ and $\alpha^\sigma_{\partial\Phi/\partial x_{\rho_0},\rho_1}=\alpha^\sigma_{\Phi,\rho_1}-\alpha^\sigma_{\rho_0,\rho_1}$. Since $\alpha^\sigma_{\rho',\rho}=\delta_{\rho'=\rho}$ for all $\rho',\rho \notin \sigma(1)$, we conclude that $$\alpha^\sigma_{\rho_1}=\sum_{\rho' \in \sigma(1) \setminus \{\rho_0\}}\alpha^\sigma_{\rho',\rho_1}+1-\alpha^\sigma_{\partial\Phi/\partial x_{\rho_0},\rho_1}.$$ This shows that the powers of $t_{\rho_1}$ cancel out, so that $\,{\mathrm d}\mathbf{z}\,{\mathrm d} z_{\rho_1}/|\partial\Phi/\partial x_{\rho_0}(f(\mathbf{z}))| = \,{\mathrm d} \mathbf{t}/|\partial\Phi/\partial x_{\rho_0}(g(\mathbf{t}))|$. 
Therefore, \begin{equation*} \omega_\infty = |\alpha^\sigma_{\rho_1}| \int_{\mathbf{t} \in \mathbb{R}^{\sigma(1) \setminus \{\rho_0\}} \times \mathbb{R}_{>0},\ H_\infty(g(\mathbf{t}))\le 1} \frac{\,{\mathrm d} \mathbf{t}}{|\partial\Phi/\partial x_{\rho_0}(g(\mathbf{t}))|}. \end{equation*} We claim that \begin{equation*} \omega_\infty^- \coloneqq |\alpha^\sigma_{\rho_1}| \int_{\mathbf{t} \in \mathbb{R}^{\sigma(1) \setminus \{\rho_0\}} \times \mathbb{R}_{<0},\ H_\infty(g(\mathbf{t}))\le 1} \frac{\,{\mathrm d} \mathbf{t}}{|\partial\Phi/\partial x_{\rho_0}(g(\mathbf{t}))|} \end{equation*} has the same value as $\omega_\infty$. Indeed, $\phi(\mathbf{t},\mathbf{1})$ (the $\rho_0$-component of $g(\mathbf{t})$) is the only place where the sign of $t_{\rho_1}$ might matter. Our claim is clearly true if $t_{\rho_1}$ does not appear in $\phi(\mathbf{t},\mathbf{1})$ or if $t_{\rho_1}$ has an even exponent in $\phi(\mathbf{t},\mathbf{1})$. If $t_{\rho_1}$ appears in $\phi(\mathbf{t},\mathbf{1})$ with odd exponent, then the change of variables $t_{\rho_1}' \coloneqq -t_{\rho_1}$ and $t_\rho' \coloneqq -t_\rho$ for all $t_\rho$ appearing in the final statement of Lemma~\ref{lem:transform} in $\omega_\infty^-$ shows that $\omega_\infty^-=\omega_\infty$. Therefore, \begin{equation*} \mu_\infty(X(\mathbb{R})) = \omega_\infty = \frac{1}{2}(\omega_\infty+\omega_\infty^-) = \frac{|\alpha^\sigma_{\rho_1}|}{2} \int_{\mathbf{t} \in \mathbb{R}^{\sigma(1) \setminus \{\rho_0\}} \times \mathbb{R}_{\ne 0},\ H_\infty(g(\mathbf{t}))\le 1} \frac{\,{\mathrm d} \mathbf{t}}{|\partial\Phi/\partial x_{\rho_0}(g(\mathbf{t}))|}. \end{equation*} Since $\rank \Pic X = \#\Sigma(1)-\#\sigma(1)$ and replacing $\mathbb{R}^{\sigma(1) \setminus \{\rho_0\}} \times \mathbb{R}_{\ne 0}$ by $\mathbb{R}^{\sigma(1)' \setminus \{\rho_0\}}$ does not change the integral, this completes the proof. 
\end{proof} \subsection{Peyre's constant in Cox coordinates} \begin{prop}\label{prop:peyre} Let $X$ be a split almost Fano variety over $\mathbb{Q}$ with semiample $\omega_X^\vee$ that has a finitely generated Cox ring $\mathscr{R}(X)$ with precisely one relation $\Phi$ with integral coefficients and satisfies the assumptions \eqref{eq:toric_smooth} and \eqref{eq:assumption_real_density_strong}. Then Peyre's constant for $X$ with respect to the anticanonical height $H$ as in \eqref{eq:height_definition} is \begin{equation*} c = \frac{1}{2^{\rank\Pic X}}c^\ast c_\infty c_{\mathrm{fin}}, \end{equation*} using the notation~\eqref{eq:c_p}, \eqref{cast-final-first}, \eqref{cinf-first}. \end{prop} \begin{proof} According to \cite[5.1]{MR2019019}, Peyre's constant for $X$ is $c = \alpha(X) \beta(X) \tau_H(X)$. Here the cohomological constant is \begin{equation*} \beta(X)=\#H^1(\Gal(\overline{\mathbb{Q}}/\mathbb{Q}), \Pic(X \otimes_{\mathbb{Q}} \overline{\mathbb{Q}})) = 1 \end{equation*} since $X$ is split. Recall \eqref{eq:tamagawa} for $\tau_H(X)$. By Lemma~\ref{lem:alpha} and Proposition~\ref{prop:omega_infty-c_infty}, $\alpha(X)\mu_\infty(X(\mathbb{R}))= c^\ast c_\infty$. Furthermore, we use Proposition~\ref{prop:p-adic_density} for the $p$-adic densities. \end{proof} \part{The asymptotic formula}\label{part2} This part, culminating in Theorem~\ref{analytic-theorem}, is devoted to a proof of the asymptotic formula \eqref{manin} for the counting problem described by \eqref{torsor}, \eqref{height} and \eqref{gcd}, subject to certain conditions to be specified in due course. This has the structure as given in Proposition~\ref{prop:countingproblem_abstract}, except that we specialize the general polynomial $\Phi$ to a polynomial of the shape \eqref{torsor}. In other words, every variable appears in at most one monomial, and for better readability in comparison with \eqref{Phi-sec3}, we relabel the variables and their exponents as in \eqref{torsor}. 
In the notation of \eqref{torsor}, we have $$J = J_0 + J_1 + \ldots + J_k$$ variables, where $J_0$ is the number of variables that do not occur in any of the monomials. As mentioned in the introduction, the particular shape \eqref{torsor} is not an atypical situation; it appears sufficiently often in practice that it deserves special treatment. In Section~\ref{sec9}, we will also show that if the conditions \eqref{torsor}--\eqref{gcd} come from an algebraic variety satisfying the hypotheses of Proposition~\ref{prop:peyre}, then the leading constant in \eqref{manin} agrees with Peyre's prediction as computed in Proposition~\ref{prop:peyre}. Before we begin, we fix some notation for use in the remainder of the paper. Vector operations are to be understood componentwise. In particular, just like the common addition of vectors, for $\mathbf{x} = (x_1, \ldots, x_n)\in \Bbb{C}^n$, $\mathbf{y} = (y_1, \ldots, y_n)\in \Bbb{C}^n$, we write $\mathbf{x} \cdot \mathbf{y} = (x_1y_1, \ldots, x_ny_n) \in \Bbb{C}^n$. If $\mathbf{x} \in \Bbb{R}^n_{>0}$, $\mathbf{y} \in \Bbb{C}^n$, we write $\mathbf{x}^{\mathbf{y}} = x_1^{y_1} \cdots x_n^{y_n}$. We also use this notation when $\mathbf{x} \in \Bbb{R}^n$ and $\mathbf y \in \mathbb N^n$. We put $\langle \mathbf x \rangle = x_1x_2\cdots x_n$. We write $|\,\cdot\,|_{1}$ for the usual 1-norm, and $|\,\cdot\,|$ denotes the maximum norm. For $q \in \Bbb{N}$, we write $\underset{a \bmod{q}}{\left.\sum \right.^{\ast}}$ for a sum over reduced residue classes modulo $q$. Finally, we apply the following convention concerning the letter $\varepsilon$: whenever $\varepsilon$ occurs in a statement, it is asserted that the statement is true for any positive real number $\varepsilon$. Note that this allows implicit constants in Landau or Vinogradov symbols to depend on $\varepsilon$, and that one may conclude from $A_1\ll B^\varepsilon$ and $A_2\ll B^\varepsilon$ that one has $A_1A_2\ll B^\varepsilon$, for example.
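The componentwise conventions just fixed are easy to confuse in computations, so we record them in a minimal sketch (the function names \texttt{cdot}, \texttt{power} and \texttt{angle} are ours, chosen for illustration only and not part of the formal argument):

```python
from functools import reduce

def cdot(x, y):
    """Componentwise product: x.y = (x_1 y_1, ..., x_n y_n)."""
    return tuple(a * b for a, b in zip(x, y))

def power(x, y):
    """The monomial x^y = x_1^{y_1} * ... * x_n^{y_n}."""
    return reduce(lambda p, t: p * t[0] ** t[1], zip(x, y), 1)

def angle(x):
    """<x> = x_1 x_2 ... x_n, the product of all coordinates."""
    return reduce(lambda p, a: p * a, x, 1)

# For instance, with x = (2, 3) and y = (3, 2) one has x^y = 2^3 * 3^2 = 72.
```

Note that $\mathbf x^{\mathbf y}$ is a scalar, while $\mathbf x\cdot\mathbf y$ is again a vector; the empty product conventions ($\mathbf x^{\mathbf 0}=1$, $\langle\,\rangle=1$) are also respected above.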
\section{Diophantine analysis of the torsor}\label{dioph} In this section and the next, we study the torsor equation \eqref{torsor} with its variables \emph{restricted to boxes}. For the number of its integral solutions, we seek an asymptotic expansion whose leading term features a product of local densities. All estimates are required to be uniform in the coefficients $b_1,\ldots,b_k\in {\mathbb Z}\setminus\{0\}$ that occur in \eqref{torsor}. We assume $k \geq 3$ throughout. The building blocks of the local densities are Gau\ss\ sums and their continuous analogues, and we begin by defining the former. Let $\mathbf h=(h_1,\ldots,h_n)\in\mathbb N^n$ be a ``chain of exponents''. Then, for $a\in\mathbb Z$ and $q\in\mathbb N$, let \begin{equation}\label{E1} E(q,a;{\mathbf h}) = q^{-n} \sum_{\substack{1\le x_j \le q\\ 1\le j\le n} } e\Big(\frac{ax_1^{h_1}x_2^{h_2}\cdots x_n^{h_n}}{q}\Big) = q^{-n} \sum_{\substack{1\le x_j \le q\\ 1\le j\le n} } e\Big(\frac{a {\mathbf x}^{\mathbf h}}{q}\Big) . \end{equation} For a continuous counterpart, let $\mathbf Y \in [\frac12, \infty)^n$, put ${\mathscr Y}=\{\mathbf y\in\mathbb R^n : \frac12 Y_j< |y_j| \le Y_j\;\; (1\le j\le n)\}$ and define \begin{equation}\label{E2} I(\beta, {\mathbf Y};\mathbf h) = \int_{\mathscr Y} e(\beta y_1^{h_1}y_2^{h_2}\cdots y_n^{h_n})\,\mathrm d\mathbf y. \end{equation} This exponential integral satisfies the simple bound \begin{equation}\label{E3} I(\beta, {\mathbf Y};\mathbf h) \ll \langle \mathbf Y\rangle (1+ {\mathbf Y}^{\mathbf h}|\beta|)^{-1}. \end{equation} Indeed, if $n=1$, then integration by parts yields \eqref{E3} immediately. If $n>1$, one uses the obvious relation $$ I(\beta, {\mathbf Y};\mathbf h) = \int_{\frac12 Y_1\le |y|\le Y_1} I(\beta y^{h_1}, (Y_2,\ldots, Y_{n}); (h_2,\ldots, h_{n}))\,\mathrm dy $$ together with induction.
With \eqref{E3} in hand for $n-1$ in place of $n$, one infers \eqref{E3} for $n$ from $$ I(\beta, {\mathbf Y};\mathbf h) \ll Y_2 Y_3\cdots Y_{n} \int_{\frac12 Y_1\le |y|\le Y_1} (1+Y_2^{h_2}\cdots Y_{n}^{h_{n}}|y^{h_1}\beta|)^{-1}\,\mathrm d y. $$ We now describe the counting problem at the core of this section. For $\mathbf{b} \in (\Bbb{Z}\setminus\{0\})^k$ and $\mathbf{X}=(X_{ij}) \in [1, \infty)^{J}$, let $\mathscr{N}_{\mathbf{b}}(\mathbf{X})$ denote the number of solutions $\mathbf{x} \in \Bbb{Z}^J$ to \eqref{torsor} satisfying $\frac{1}{2}X_{ij} \leq |x_{ij}| \leq X_{ij}$. Associated with the $i$th summand in \eqref{torsor} are a chain of exponents $\mathbf h_i=(h_{i1},\ldots,h_{iJ_i})$ and a boxing vector $\mathbf X_i=(X_{i1},\ldots,X_{iJ_i})$. In the interest of brevity, we now put \begin{equation}\label{E4} E_i(q,a) = E(q,a; \mathbf h_i), \quad I_i(\beta, \mathbf X)= I(\beta, \mathbf X_i; \mathbf h_i) \quad (1\le i \le k). \end{equation} The {\em singular integral} for this counting problem is then defined by \begin{equation}\label{E5} \mathscr{I}_{\mathbf{b}}(\mathbf{X}) = \langle\mathbf X_0\rangle\int_{-\infty}^\infty I_1(b_1\beta,\mathbf X) I_2(b_2\beta,\mathbf X)\cdots I_k(b_k\beta, \mathbf X)\,\mathrm d\beta, \end{equation} and the {\em singular series} is \begin{equation}\label{E6} {\mathscr E}_{\mathbf b} = \sum_{q=1}^{\infty} \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E_1(q,ab_1)E_2(q,ab_2)\cdots E_k(q,ab_k). \end{equation} By \eqref{E3}, the singular integral converges absolutely provided only that $k\ge 2$. Unfortunately, it is not as easy to determine whether the singular series converges; this depends on the chains of exponents in a subtle manner. However, we note that an argument paralleling that in the proof of \cite[Lemma~2.11]{Va} shows that the sum \begin{equation} \label{E7} \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E_1(q,ab_1)E_2(q,ab_2)\cdots E_k(q,ab_k) \end{equation} is a multiplicative function of $q$.
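The multiplicativity of \eqref{E7} can be checked numerically in small cases. The sketch below is our own illustration and not part of the argument; for speed it takes only two factors, with $b_1=1$, $b_2=-1$ and the single-variable chains $\mathbf h_1=\mathbf h_2=(2)$, evaluates $E(q,a;\mathbf h)$ by direct summation of \eqref{E1}, and compares the sum \eqref{E7} at $q=12$ with the product of its values at the coprime moduli $3$ and $4$.

```python
import cmath
from itertools import product
from math import gcd

def e(t):
    """The additive character e(t) = exp(2*pi*i*t)."""
    return cmath.exp(2j * cmath.pi * t)

def mono(x, h):
    """The monomial x^h = x_1^{h_1} * ... * x_n^{h_n}."""
    m = 1
    for xi, hi in zip(x, h):
        m *= xi ** hi
    return m

def E(q, a, h):
    """The normalized Gauss sum of (E1): q^{-n} times the sum of
    e(a * x^h / q) over all x in {1, ..., q}^n."""
    n = len(h)
    s = sum(e(a * mono(x, h) / q) for x in product(range(1, q + 1), repeat=n))
    return s / q ** n

def inner_sum(q, b, chains):
    """The sum (E7): sum over reduced residues a mod q of the product
    of E(q, a*b_i; h_i) over all chains."""
    total = 0j
    for a in range(1, q + 1):
        if gcd(a, q) == 1:
            term = 1 + 0j
            for bi, hi in zip(b, chains):
                term *= E(q, a * bi, hi)
            total += term
    return total

# Illustrative data: two conjugate factors, so the values are real.
b, chains = (1, -1), ((2,), (2,))
d3, d4, d12 = (inner_sum(q, b, chains) for q in (3, 4, 12))
```

With these data one finds $d_3 = 2/3$, $d_4 = 1$ and $d_{12} = d_3 d_4$, in line with the multiplicativity statement.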
Hence, based on the hypothesis that the singular series is absolutely convergent, one has the alternative representation \begin{equation*} {\mathscr E}_{\mathbf b} = \prod_p \sum_{l=0}^\infty \underset{a \bmod{p^l}}{\left.\sum \right.^{\ast}} E_1(p^l,ab_1)E_2(p^l,ab_2)\cdots E_k(p^l,ab_k). \end{equation*} By orthogonality of additive characters, the partial sums over $0 \leq l \leq L$ count solutions of the congruence modulo $p^L$, and (still under the assumption of absolute convergence) we can therefore express the singular series as a product of ``local densities'': \begin{equation}\label{localdensities} {\mathscr E}_{\mathbf b} = \prod_p \lim_{L \rightarrow \infty} \frac{1}{p^{L(J_1 + \ldots +J_k - 1)}} \#\Big\{(\textbf{x}_1, \ldots, \textbf{x}_k) \bmod{ p^L} : b_1 \textbf{x}_1^{\textbf{h}_1} + \ldots + b_k\textbf{x}_k^{\textbf{h}_k} \equiv 0 \bmod{ p^L}\Big\} . \end{equation} The transition method to be detailed in Section~\ref{sec8} works on the proviso that the product $ {\mathscr E}_{\mathbf b} \mathscr{I}_{\mathbf{b}}(\mathbf{X})$ is a good approximation to $\mathscr{N}_{\mathbf{b}}(\mathbf{X})$. We summarize these requirements as follows; note that \eqref{zeta1} is \eqref{zeta} specialized to the equation \eqref{torsor}. \begin{hyp}\label{H1} The singular series $\mathscr{E}_{\mathbf{b}}$ is absolutely convergent. There are real numbers $\beta_1,\dots,\beta_k \le 1$ with \begin{equation}\label{E} \mathscr{E}_{\mathbf{b}} \ll |b_1|^{\beta_1} |b_2|^{\beta_2}\cdots |b_k|^{\beta_k}.
\end{equation} Further, there exists $\bm\zeta\in \mathbb R^{k}$ satisfying \begin{equation}\label{zeta1} \zeta_i > 0 \text{ for all } 1 \leq i \leq k, \quad h_{ij} \zeta_i < 1\text{ for all }i, j, \quad \sum_{i=1}^k \zeta_i = 1 \end{equation} and real numbers $0 < \lambda \leq 1$, $\delta_1>0$ and $C\ge 0$ such that whenever $\mathbf{X} \in [1, \infty)^{J}$ obeys the condition that \begin{equation}\label{samesize} \min_{0 \leq i \leq k} \mathbf X_i^{\mathbf h_i} \geq \bigl(\max_{1 \leq i \leq k} \mathbf X_i^{\mathbf h_i}\bigr)^{1-\lambda}, \end{equation} then uniformly in $\mathbf b\in(\mathbb Z\setminus\{0\})^k$, one has \begin{equation}\label{errorterm} \mathscr{N}_{\mathbf{b}}(\mathbf{X}) - \mathscr{E}_{ \mathbf{b}} \mathscr{I}_{\mathbf{b}}(\mathbf{X}) \ll |b_1 \cdots b_k|^C (\min_{ij}X_{ij})^{-\delta_1} \prod_{i=0}^k \prod_{j=1}^{J_i} X_{ij}^{1 - h_{ij}\zeta_i + \varepsilon}. \end{equation} \end{hyp} In the situation of \eqref{typeT}, Hypothesis~\ref{H1} is in fact a theorem. \begin{prop}\label{circle-method} Suppose that $k=3$, $J_1\ge J_2\ge 2$ and $h_{ij} = 1$ for $i = 1, 2$, $1\le j \le J_i$. Then, for any $\bm \zeta = (\zeta_1, \zeta_2, \zeta_3)$ satisfying \eqref{zeta1}, Hypothesis~\ref{H1} is true with \begin{equation}\label{lambdabeta} \lambda = \Big(700 \Big(\sum_{ij} h_{ij}\Big)^2 \Big)^{-1} \quad \text{and} \quad {\bm \beta} = \Big(\frac{1}{2}(1-\mu)+\varepsilon, \frac{1}{2}(1-\mu)+\varepsilon, \mu\Big) \end{equation} for any $\varepsilon > 0$, where $\varepsilon < \mu < 1/ \max_j h_{3j}$. \end{prop} We prove this in the next section. Here we continue with some preliminary bounds for the local factors, and we begin with an upper bound for the singular integral. At the same time, we compare the singular integral with a truncated version of it. To define the latter, let $Z_0$ be the maximum of the numbers $\mathbf X_i^{\mathbf h_i}$ $(1\le i\le k)$, and let $Q\ge 1$.
Then put $$ \mathscr{I}_{\mathbf{b}}(\mathbf{X}, Q) = \langle\mathbf X_0\rangle\int_{-QZ_0^{-1}}^{QZ_0^{-1}} I_1(b_1\beta,\mathbf X) I_2(b_2\beta,\mathbf X)\cdots I_k(b_k\beta, \mathbf X)\,\mathrm d\beta. $$ \begin{lemma}\label{singint} Let $k\ge 2$, and let $\zeta_i$ be positive real numbers with $\zeta_1+\zeta_2+\ldots+\zeta_k=1$. Then $$ \mathscr{I}_{\mathbf{b}}(\mathbf{X}) \ll |b_1|^{-\zeta_1} \cdots |b_k|^{-\zeta_k} \prod_{i=0}^k \prod_{j=1}^{J_i} X_{ij}^{1 - h_{ij}\zeta_i}.$$ Further, there is a number $\delta>0$ such that whenever $Q\ge 1$ one has $$ \mathscr{I}_{\mathbf{b}}(\mathbf{X})-\mathscr{I}_{\mathbf{b}}(\mathbf{X},Q) \ll Q^{-\delta} \prod_{i=0}^k \prod_{j=1}^{J_i} X_{ij}^{1 - h_{ij}\zeta_i}.$$ \end{lemma} \begin{proof} By H\"older's inequality, $$ \int_{-\infty}^\infty \prod_{i=1}^k (1+\mathbf X_i^{\mathbf h_i} |b_i\beta|)^{-1}\,\mathrm d\beta \le \prod_{i=1}^k \Big( \int_{-\infty}^\infty (1+\mathbf X_i^{\mathbf h_i} |b_i\beta|)^{-1/\zeta_i}\,\mathrm d\beta\Big)^{\zeta_i}, $$ and by \eqref{E5} and \eqref{E3} the first statement in the lemma is immediate. For the second, one picks $\iota$ with $Z_0=\mathbf X_\iota^{\mathbf h_\iota}$ and observes that $$ \int_{QZ_0^{-1}}^\infty (1+\mathbf X_\iota^{\mathbf h_\iota} |b_\iota\beta|)^{-1/\zeta_\iota}\,\mathrm d\beta \ll Q^{1-(1/\zeta_\iota)} \mathbf X_\iota^{-\mathbf h_\iota}.$$ If this bound is used within the preceding application of H\"older's inequality, one arrives at the second statement in the lemma. \end{proof} We continue with some general remarks on Gau\ss\ sums. \begin{lemma}\label{Gauss} Let $\mathbf h\in\mathbb N^n$. Let $b\in\mathbb Z$, $q\in\mathbb N$ and $q'=q/(q,b)$, $b'=b/(q,b)$. Then $E(q,b;\mathbf h)= E(q',b';\mathbf h)$. If $n\ge 2$, $h_1=1$ and $(b,q)=1$ then $$ E(q,b,\mathbf h) = q^{1-n} \#\{x_2,\ldots,x_n:\, 1\le x_j\le q, \; x_2^{h_2}x_3^{h_3}\cdots x_n^{h_n}\equiv 0 \bmod q\}.
$$ Further $$ E(q,b, (1,\ldots, 1)) = q^{1-n}\sum_{\substack{ d_j\mid q \\ q\mid d_2d_3\cdots d_n}} \varphi\Big(\frac{q}{d_2}\Big)\cdots \varphi\Big(\frac{q}{d_n}\Big). $$ In particular, $E(q,b, (1,\ldots, 1)) \ll q^{\varepsilon-1}$ and $E(q,b,(1,1))=q^{-1}$. \end{lemma} \begin{proof} We have $b/q=b'/q'$ whence $e(bx_1^{h_1}\cdots x_n^{h_n}/q)$ has period $q'$ in all $x_j$. Summing over all $x_j$ modulo $q$ gives the first statement at once. The second statement follows from \eqref{E1} and orthogonality, after carrying out the sum over $x_1$. If we specialize the second statement to $h_j=1$ for all $j$ and sort the $x_j$ according to the values of $d_j=(x_j,q)$, then we arrive at the formula for $E(q,b, (1,\ldots, 1))$, from which the remaining claims are immediate. \end{proof} \begin{lemma}\label{Gaussaverage} Let $\mathbf h\in\mathbb N^n$ with $h_1\le h_2\le \ldots\le h_n$. Then, for each $b\in\mathbb Z$, the sum $$ D(q,b,\mathbf h) = \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E(q,ab,\mathbf h) $$ is multiplicative as a function of $q$, and one has $D(q,b,\mathbf h) \ll (q,b)^{1/h_n} q^{1+\varepsilon-1/h_n}$. \end{lemma} \begin{proof} Within this proof, we write $D(q)=D(q,b,\mathbf h)$. In view of \eqref{E7}, this function is multiplicative. Suppose that $q=p^l$ with $p$ prime and $l\in\mathbb N$, and let $\beta$ denote the exponent of $p$ in the prime factorization of $b$. By Lemma~\ref{Gauss}, $$ D(p^l) = \underset{a \bmod{p^l}}{\sum} E(p^l,ab,\mathbf h) - \underset{a \bmod{p^{l-1}}}{ \sum } E(p^{l-1},ab,\mathbf h). $$ Let $M(q)$ denote the number of $x_1,\ldots,x_n$ with $1\le x_j\le q$ $(1\le j\le n)$ and $bx_1^{h_1}\cdots x_n^{h_n} \equiv 0 \bmod q$. Then, by orthogonality, it follows from \eqref{E1} that $$ D(p^l) = p^{l(1-n)} M(p^l) - p^{(l-1)(1-n)} M(p^{l-1}). $$ Obviously, if $l\le \beta$, then $M(p^l) = p^{ln}$, while for $l>\beta$, we find that there is a constant $c$ depending only on $\mathbf h$ with the property that $M(p^l) \le cp^{ln-\lceil (l-\beta)/h_n \rceil}$.
It follows that whenever $l\le \beta$, then $D(p^l)=\varphi(p^l)$, whereas for $l>\beta$, we infer that $D(p^l) \le 2c p^{l-\lceil (l-\beta)/h_n \rceil}$, as is readily checked. The lemma follows by multiplicativity and a standard divisor function estimate. \end{proof} We now use these results to discuss the singular series that arises in Proposition~\ref{circle-method}. In that situation, we have $k=3$, $J_1\ge J_2\ge 2$, and we may use the last clause of Lemma~\ref{Gauss} with $\mathbf h_1$ and $\mathbf h_2$. Further, we put $h= \max h_{3j}$ and use Lemma~\ref{Gaussaverage} to confirm that \begin{equation}\label{innergauss} \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E_1(q,ab_1)E_2(q,ab_2)E_3(q,ab_3) \ll q^{\varepsilon-1-1/h} (q,b_1)(q,b_2)(q,b_3)^{1/h}. \end{equation} It is now immediate that the singular series converges absolutely, and with $\varepsilon < \mu < 1/h$, it is also readily shown that \begin{displaymath} \begin{split} & \sum_{q=1}^\infty \Big| \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E_1(q,ab_1)E_2(q,ab_2)E_3(q,ab_3)\Big| \ll \sum_{q=1}^\infty q^{\varepsilon-1-\mu} (q,b_1)(q,b_2)|b_3|^{\mu}\\ & \ll |b_3|^{\mu} \sum_{c_1 \mid b_1} \sum_{c_2 \mid b_2} \frac{c_1c_2}{{\rm lcm}(c_1, c_2)^{1+\mu - \varepsilon }} \leq |b_3|^{\mu} \sum_{c_1 \mid b_1} \sum_{c_2 \mid b_2} (c_1c_2)^{\frac{1}{2} (1 - \mu + \varepsilon)} \ll |b_3|^{\mu}|b_1b_2|^{\frac{1}{2}(1 - \mu) + \varepsilon}. \end{split} \end{displaymath} This establishes all the statements in Proposition~\ref{circle-method} that concern the singular series. \section{The circle method} \subsection{Weyl sums} In this section, we apply the circle method to establish Proposition~\ref{circle-method}. We prepare the ground with a discussion of the generalized Weyl sums \begin{equation*} W(\alpha,\mathbf Y;\mathbf h) = \sum_{\mathbf y \in \mathbb Z^n \cap \mathscr Y} e(\alpha {\mathbf y}^{\mathbf h}).
\end{equation*} Here and in the sequel, we continue to use the notation from the previous section, and in particular, $\mathbf h$, $\mathbf Y$ and $ \mathscr Y$ are as in \eqref{E2}. On applying orthogonality, an inspection of the underlying diophantine equation and a divisor function estimate reveal that \begin{equation}\label{W2} \int_0^1 |W(\alpha,\mathbf Y;\mathbf h)|^2 \,\mathrm d \alpha \ll \langle\mathbf Y\rangle^{1+\varepsilon}. \end{equation} The next result is a version of Weyl's inequality. \begin{lemma}\label{Weyl} Let $\alpha\in\mathbb R$, $a\in\mathbb Z$, $q\in\mathbb N$ and $|q\alpha -a |\le q^{-1}$. Suppose that $Y_1\ge Y_2\ge \ldots\ge Y_n$. Then $$ |W(\alpha,\mathbf Y;\mathbf h)|^{2^{|\mathbf h|_1 -n}} \ll \langle \mathbf Y\rangle ^{2^{|\mathbf h|_1 -n}+\varepsilon} \Big(\frac1{q} + \frac1{Y_n}+\frac{q}{\mathbf Y^{\mathbf h}}\Big).$$ \end{lemma} \begin{proof} If $n=1$, this is the familiar form of Weyl's inequality. If $n\ge 2$, then we apply repeated Weyl differencing. Let $h\in\mathbb N$. On combining \cite[Lemma 2.4]{Va} with \cite[Exercise 2.8.1]{Va}, one has $$ \Big| \sum_{X<x\le 2X} e(\beta x^h)\Big|^{2^{h-1}} \le (2X)^{2^{h-1}-h} \sum_{\substack{|u_j|\le X\\ 1\le j < h}} \sum_{x\in I(\mathbf u)} e\big(h!\beta u_1u_2\ldots u_{h-1}(x + {\textstyle \frac12}|\mathbf u|_1)\big)$$ where the $I(\mathbf u)$ are certain subintervals of $[X,2X]$. Further, one trivially has $$ \Big| \sum_{-2X\leq x< -X} e(\beta x^h)\Big|= \Big| \sum_{X<x\le 2X} e(\beta x^h)\Big|, $$ and hence it follows that \begin{equation}\label{wstep} \Big| \sum_{X<|x|\le 2X} e(\beta x^h)\Big|^{2^{h-1}} \ll X^{2^{h-1}-h} \sum_{\substack{|u_j|\le X\\ 1\le j < h}} \sum_{x\in I(\mathbf u)} e\big(h!\beta u_1u_2\ldots u_{h-1} (x + {\textstyle \frac12}|\mathbf u|_1)\big).
\end{equation} By H\"older's inequality, $$ |W(\alpha,\mathbf Y;\mathbf h)|^{2^{h_1 -1}} \le (Y_2\cdots Y_n)^{2^{h_1-1}-1} \sum_{\substack{\frac12 Y_\nu< |y_\nu|\le Y_\nu \\ 2\le \nu\le n}} \Big|\sum_{\frac12 Y_1<|y_1|\le Y_1} e(\alpha y_1^{h_1}y_2^{h_2}\cdots y_n^{h_n})\Big|^{ 2^{h_1 -1}}. $$ We apply \eqref{wstep} with $\beta=\alpha y_2^{h_2}\cdots y_n^{h_n}$ to the sum over $y _1$. We write $\mathbf h'=(h_2,h_3,\ldots, h_n)$, $\mathbf Y'=(Y_2,Y_3,\ldots, Y_n)$ and then find that $$ |W(\alpha,\mathbf Y;\mathbf h)|^{2^{h_1 -1}} \ll Y_1^{2^{h_1-1}-h_1} \langle \mathbf Y'\rangle^{2^{h_1 -1}-1} \sum_{\substack{|u_j|\le Y_1\\ 1\le j < h_1}} \sum_{y\in I_1(\mathbf u)} W(h_1! \alpha u_1u_2\cdots u_{h_1-1}(y+ {\textstyle \frac12}|\mathbf u|_1) , \mathbf Y'; \mathbf h') $$ where $I_1(\mathbf u)$ are certain subintervals of $[\frac12 Y_1, Y_1]$. Now we apply H\"older's inequality again to bring in $|W(\beta,\mathbf Y'; \mathbf h')|^{2^{h_2-1}}$. We may then estimate the sum over $y_2$ by \eqref{wstep}. Repeated use of this process produces the inequality \begin{equation}\label{W5} |W(\alpha,\mathbf Y;\mathbf h)|^{2^{h_1-1}\cdots 2^{h_n-1}} \ll \langle \mathbf Y\rangle^{2^{h_1+\ldots+h_n-n}} {\mathbf Y}^{-\mathbf h} \sum_{\mathbf u_1,\ldots, \mathbf u_n} \sum_{\substack{y_\nu\in I_\nu(\mathbf u_\nu) \\ 1\le \nu <n}} \Big| \sum_{y_n\in I_n(\mathbf u_n)} e(\alpha vy_n)\Big| \end{equation} in which $\mathbf u_\nu \in \Bbb{Z}^{h_{\nu} - 1}$ runs over integer vectors with $|\mathbf u_\nu|\le Y_\nu$ for $1\le \nu\le n$, the $I_\nu(\mathbf u_\nu)$ are certain subintervals of $[\frac12 Y_\nu, Y_\nu]$ and $$ v= h_1! h_2!\cdots h_n!\langle \mathbf u_1\rangle\cdots \langle \mathbf u_n\rangle y_1y_2\cdots y_{n-1}. $$ Note that $v=0$ will occur in \eqref{W5} only when one the $\mathbf u_\nu$ has a zero entry, so that the total contribution to \eqref{W5} from summands with $v=0$ does not exceed $ \langle \mathbf Y\rangle^{2^{h_1+\ldots+h_n-n}} Y_n^{-1}$, which is acceptable. 
For nonzero $v$, the innermost sum in \eqref{W5} does not exceed $\min(Y_n, \|\alpha v\|^{-1})$. Further, we have $v\ll \mathbf Y^{\mathbf h}Y_n^{-1}$, and a divisor estimate shows that there are no more than $O(|v|^\varepsilon)$ choices for $\mathbf u_\nu$, $y_\nu$ that correspond to the same $v$. This shows that $$ |W(\alpha,\mathbf Y;\mathbf h)|^{2^{h_1-1}\cdots 2^{h_n-1}} \ll \langle \mathbf Y\rangle^{2^{|\mathbf h|_1-n}} Y_n^{-1} + \langle \mathbf Y\rangle^{2^{|\mathbf h|_1-n}+\varepsilon} {\mathbf Y}^{-\mathbf h}\! \sum_{1\le v \ll \mathbf Y^{\mathbf h}Y_n^{-1}}\! \min(Y_n, \|\alpha v\|^{-1}). $$ Reference to \cite[Lemma~2.1]{Va} completes the proof. \end{proof} We complement this result with an approximate evaluation of $W$. \begin{lemma}\label{Weylapprox} Let $\alpha\in\mathbb R$, $a\in\mathbb Z$, $q\in\mathbb N$ and $\alpha = (a/q) +\beta$. Suppose that $Y_1\ge Y_2\ge \ldots\ge Y_n$. Then $$ W(\alpha,\mathbf Y;\mathbf h)= E(q,a;\mathbf h) I(\beta,\mathbf Y;\mathbf h) + O\big(Y_1Y_2\cdots Y_{n-1}q(1+\mathbf Y^{\mathbf h}|\beta|)\big). $$ \end{lemma} \begin{proof} The case $n=1$ is a rough and elementary version of \cite[Theorem 4.1]{Va}. We now induct on $n$ and suppose that the lemma is already available with $n-1$ in place of $n$. As before, we write $\mathbf Y'=(Y_2,Y_3,\ldots,Y_n)$ etc., isolate the sum over $y_1$ and invoke the induction hypothesis with $\alpha y_1^{h_1}$ for $\alpha$. This yields \begin{align*} W(\alpha,\mathbf Y;\mathbf h) &= \sum_{\frac12 Y_1 <|y_1|\le Y_1} \Big( E(q,ay_1^{h_1};\mathbf h')I(\beta y_1^{h_1},\mathbf Y'; \mathbf h') + O\big(Y_2\cdots Y_{n-1}q(1+{\mathbf Y'}^{\mathbf h'} |y_1|^{h_1}|\beta|)\big)\Big) \\ & = \sum_{\frac12 Y_1 <|y_1|\le Y_1} E(q,ay_1^{h_1};\mathbf h')I(\beta y_1^{h_1},\mathbf Y'; \mathbf h') + O\big(Y_1 Y_2\cdots Y_{n-1}q(1+{\mathbf Y}^{\mathbf h} |\beta|)\big). 
\end{align*} In view of \eqref{E1} and \eqref{E2}, we may rewrite the sum over $y_1$ on the right hand side as $$ q^{1-n}\sum_{\substack{1\le x_\nu\le q\\ 2\le \nu\le n}} \int_{{\mathscr Y}'} \sum_{\frac12 Y_1 <|y_1|\le Y_1} e\Big(y_1^{h_1}\Big(\beta \mathbf y'^{\mathbf h'} + \frac{a\mathbf x'^{\mathbf h'}}{q}\Big)\Big)\,\mathrm d\mathbf y', $$ where $\mathscr Y'$ is the analogue of $\mathscr Y$ in the coordinates $\mathbf y'$. We may now apply the case $n=1$ with $\beta \mathbf y'^{\mathbf h'}$ for $\beta$ and $a\mathbf x'^{\mathbf h'}$ for $a$ to conclude that \begin{align*} \sum_{\frac12 Y_1 <|y_1|\le Y_1} &e\Big(y_1^{h_1}\Big(\beta \mathbf y'^{\mathbf h'} + \frac{a\mathbf x'^{\mathbf h'}}{q}\Big)\Big)\\ & = q^{-1} \sum_{x_1=1}^q e\Big(\frac{ax_1^{h_1} \mathbf x'^{\mathbf h'}}{q}\Big) \int_{\frac12 Y_1<|y_1|\le Y_1} e(\beta y_1^{h_1}\mathbf y'^{\mathbf h'})\,\mathrm dy_1 + O\big(q+q Y_1^{h_1}|\mathbf y'^{\mathbf h'}\beta|\big). \end{align*} The induction is now completed by inserting this last formula into the two preceding displays. \end{proof} \subsection{Towards the circle method}\label{sec6.2} We are ready to embark on the proof of Proposition~\ref{circle-method}. We work in the broader framework of Hypothesis~\ref{H1} in large parts of the argument, but will restrict to the situation described in Proposition~\ref{circle-method} on several occasions. We hope that the wider scope of our presentation will be helpful in related investigations. We begin with a general remark concerning the ``dummy variables'' $x_{0j}$ that do not occur explicitly in the torsor equation. Suppose that Hypothesis~\ref{H1} has been established for a given torsor equation, without any dummy variables, that is, with $J_0=0$. Now consider the same torsor equation with $J_0\ge 1$ dummy variables.
For this new problem, the count $\mathscr N_{\mathbf b}(\mathbf X)$ factorizes as $\mathscr N_{\mathbf b}(\mathbf X)=W_0(\mathbf X_0)\mathscr N^*$, say, where $\mathscr N^*$ is the number of solutions counted by $\mathscr N_{\mathbf b}(\mathbf X)$ but with the variables $\mathbf x_0$ ignored, and $W_0(\mathbf X_0)$ is the number of $\mathbf x_0\in\mathbb Z^{J_0}$ with $\frac 12 X_{0j}<|x_{0j}| \le X_{0j}$ for $1\le j\le J_0$. A trivial lattice point count yields $$ W_0(\mathbf X_0) = \langle \mathbf X_0\rangle + O(\langle \mathbf X_0\rangle (\min X_{0j})^{-1}), $$ and if one multiplies this with the asymptotic formula for $\mathscr N^*$ that we have assumed to be available to us, then one derives the claims in Hypothesis~\ref{H1} with dummy variables. This shows that it suffices to address the problem of verifying Hypothesis~\ref{H1} only in the case where $J_0=0$, and we will assume this for the rest of this section. To launch the circle method argument, recall the definition of $\mathscr N_{\mathbf b}(\mathbf X)$ in the paragraph containing displays \eqref{E4}--\eqref{E6}. In the notation of that section, we define $$ W_i(\alpha, \mathbf X) = W(\alpha, \mathbf X_i; \mathbf h_i) \quad (1\le i\le k) . $$ By orthogonality, \begin{equation*} \mathscr N_{\mathbf b}(\mathbf X) = \int_0^1 W_1(b_1 \alpha, \mathbf X) \cdots W_k(b_k \alpha, \mathbf X)\,\mathrm d\alpha . \end{equation*} Our main parameters are \begin{equation*} Z= \min_{1\le i\le k} \mathbf X_i^{\mathbf h_i}, \quad Z_0 = \max_{1\le i\le k} \mathbf X_i^{\mathbf h_i}, \quad M = \min_{ij} X_{ij}, \end{equation*} and we find it convenient to renumber variables to ensure that \begin{equation}\label{C4} X_{i1}\le X_{i2}\le \ldots \le X_{iJ_i} \quad (1\le i\le k). \end{equation} Once and for all, fix positive numbers $\zeta_i$ as in \eqref{zeta1} and $\omega$ with \begin{equation}\label{omegabound} 0<\omega < \Bigl(100 \sum_{i,j} h_{ij}\Bigr)^{-1}.
\end{equation} Let $\mathfrak M(q,a)$ denote the set of $\alpha\in\mathbb R$ with $|\alpha-(a/q)|\le Z^{\omega-1}$. Since $\omega< 1/100$ these intervals with $1\le a\le q\le Z^\omega$, $(a,q)=1$ are disjoint, and we denote their union by $\mathfrak M$. Let $\mathfrak m = [Z^{\omega-1}, 1+Z^{\omega-1}]\setminus \mathfrak M$. On writing $$ \mathscr N_{\mathfrak A} = \int_{\mathfrak A} W_1(b_1 \alpha, \mathbf X) \cdots W_k(b_k \alpha, \mathbf X)\,\mathrm d\alpha $$ one has \begin{equation}\label{C5} \mathscr N_{\mathbf b}(\mathbf X) =\mathscr N_{\mathfrak M} +\mathscr N_{\mathfrak m}. \end{equation} The circle method treatment depends on the relative size of $M$ and $Z$. We first give a proof of Proposition~\ref{circle-method} in the case where $M\ge Z^{10k\omega}$ (the {\em tame} case). \subsection{The tame case: major arcs}\label{sec6.3} For $\alpha\in\mathfrak M$, there is a unique pair $a,q$ with $1\le a\le q\le Z^\omega$, $(a,q)=1$ and a number $\beta\in\mathbb R$ with $|\beta|\le Z^{\omega-1}$ and $\alpha =(a/q)+\beta$. By Lemma~\ref{Weylapprox}, \begin{equation}\label{Sapprox} W_i(b_i\alpha, \mathbf X) = E_i(q,ab_i)I_i(\beta b_i,\mathbf X_i) + O(\langle\mathbf X^\dag_i\rangle q (1+\mathbf X_i^{\mathbf h_i}|b_i\beta|)) \end{equation} where, temporarily, $\mathbf X^\dag_i=(X_{i2},\ldots,X_{iJ_i})$ is the vector that is $\mathbf X_i$ with its smallest entry deleted. Since we are in the tame case, this implies that $\langle\mathbf X^\dag_i\rangle \le \langle\mathbf X_i\rangle Z^{-10k\omega}$. Further, by hypothesis and \eqref{samesize}, $\mathbf X_i^{\mathbf h_i} \le Z_0\le Z^{1/(1-\lambda)}$. Now choose $\lambda$ so small that \begin{equation}\label{lam1} (1- \lambda)^{-1} \leq 1 + \omega. \end{equation} Then $\mathbf X_i^{\mathbf h_i} \le Z^{1+\omega}$ and $$W_i(b_i\alpha, \mathbf X) = E_i(q,ab_i)I_i(\beta b_i,\mathbf X_i) + O(\langle\mathbf X_i\rangle Z^{-9k\omega}|b_i|). 
$$ Noting the trivial bounds $$ W_i(b_i\alpha, \textbf{X})\ll \langle\mathbf X_i\rangle, \qquad E_i(q,ab_i)I_i(\beta b_i,\mathbf X_i) \ll \langle\mathbf X_i\rangle $$ and the trivial identity $$ W_1W_2 \cdots W_k - T_1T_2\cdots T_k = \sum_{i=1}^k (W_i-T_i)W_1\cdots W_{i-1}T_{i+1}\cdots T_k, $$ we conclude that $$ \prod_{i=1}^k W_i(b_i\alpha, \mathbf X) = \prod_{i=1}^k E_i(q,ab_i)I_i(\beta b_i,\mathbf X_i) +O( \langle\mathbf X_1\rangle\cdots \langle\mathbf X_k\rangle |\mathbf b|_1 Z^{-9k\omega}). $$ We integrate this over $\mathfrak M$. Since the measure of $\mathfrak M$ is $O(Z^{3\omega-1})$, the error will contribute an amount not exceeding $$ \langle\mathbf X_1\rangle\cdots \langle\mathbf X_k\rangle |\mathbf b|_1 Z^{-8k\omega-1}\le \langle\mathbf X_1\rangle\cdots \langle\mathbf X_k\rangle |\mathbf b|_1 M^{-1/5}Z^{-6k\omega-1}\le \langle\mathbf X_1\rangle\cdots \langle\mathbf X_k\rangle |\mathbf b|_1 M^{-1/5}Z_0^{-1}. $$ It follows that \begin{equation}\label{C6} {\mathscr N}_{\mathfrak M} = {\mathscr E}_{\mathbf b} (Z^{\omega}) {\mathscr I}_{\mathbf b}(\mathbf X, Z^\omega)+O(\langle\mathbf X_1\rangle\cdots \langle\mathbf X_k\rangle |\mathbf b|_1 M^{-1/5}Z_0^{-1}), \end{equation} where $$ {\mathscr E}_{\mathbf b} (Q) = \sum_{q\le Q} \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E_1(q,ab_1)E_2(q,ab_2)\cdots E_k(q,ab_k).$$ We are now required to complete the singular series. At this stage, we have to be content with the setup in Proposition~\ref{circle-method}, but then have recourse to \eqref{innergauss}, which shows that $$ {\mathscr E}_{\mathbf b} (Z^{\omega}) ={\mathscr E}_{\mathbf b} +O(Z^{-\omega/(2h)}|b_1b_2b_3|), $$ as well as to Lemma~\ref{singint}. 
Then we infer that $$ {\mathscr E}_{\mathbf b} (Z^{\omega}) {\mathscr I}_{\mathbf b}(\mathbf X,Z^\omega) = {\mathscr E}_{\mathbf b} {\mathscr I}_{\mathbf b}(\mathbf X) +O( |b_1b_2b_3|Z^{-\omega/(2h)} \langle\mathbf X_1\rangle \langle\mathbf X_2\rangle \langle\mathbf X_3\rangle \mathbf X_1^{-\zeta_1\mathbf h_1}\mathbf X_2^{-\zeta_2\mathbf h_2}\mathbf X_3^{-\zeta_3\mathbf h_3}). $$ It follows that in the tame case, there are numbers $\delta_1>0$, $0<\lambda<1$ such that \begin{equation}\label{C9} {\mathscr N}_{\mathfrak M}={\mathscr E}_{\mathbf b} {\mathscr I}_{\mathbf b}(\mathbf X) + O( |b_1b_2b_3|M^{-\delta_1} \langle\mathbf X_1\rangle\langle\mathbf X_2\rangle \langle\mathbf X_3\rangle \mathbf X_1^{-\zeta_1\mathbf h_1}\mathbf X_2^{-\zeta_2\mathbf h_2}\mathbf X_3^{-\zeta_3\mathbf h_3}). \end{equation} \subsection{The tame case: minor arcs}\label{sec6.4} In our treatment of the minor arcs, we will work subject to the conditions in Proposition~\ref{circle-method}. There are two cases. First suppose that $|b_3|\le Z^{\omega/2}$. We apply Weyl's inequality to $W_3(b_3\alpha, \textbf{X})$. Let $$ H= 2^{h_{31}+\ldots+h_{3J_3}-J_3} .$$ We claim that uniformly for $\alpha\in\mathfrak m$, one has \begin{equation}\label{C9a} W_3(b_3\alpha,\mathbf X) \ll \langle \mathbf X_3\rangle Z^{-\omega/(3H)}. \end{equation} Indeed, if $Z$ is large and $\alpha\in\mathbb R$ is such that $|W_3(b_3\alpha,\mathbf X)|\ge \langle \mathbf X_3\rangle Z^{-\omega/(3H)}$, then a familiar coupling of Lemma~\ref{Weyl} with Dirichlet's theorem on diophantine approximation shows that there are coprime numbers $a$, $q$ with $|qb_3\alpha -a |\le Z^{\omega/2}\mathbf X_3^{-\mathbf h_3} \le Z^{(\omega/2)-1}$ and $1\le q\le Z^{\omega/2}$. But then $1\le |b_3|q\le Z^{\omega}$, and hence $\alpha$ cannot be in $\mathfrak m$. By \eqref{W2} and an obvious substitution, $$ \int_0^1 |W_i(b_i\alpha,\mathbf X)|^2\,\mathrm d\alpha \ll \langle \mathbf X_i\rangle^{1+\varepsilon}.
$$ Hence, by Schwarz's inequality and \eqref{C9a}, $$ \mathscr N_{\mathfrak m} \ll \big( \langle \mathbf X_1\rangle \langle \mathbf X_2\rangle \big)^{1/2+\varepsilon} \sup_{\alpha\in\mathfrak m} |W_3(b_3\alpha,\mathbf X)| \ll \langle \mathbf X_1\rangle \langle \mathbf X_2\rangle \langle \mathbf X_3\rangle Z^{\varepsilon-1-\omega/(3H)}. $$ Provided that \begin{equation}\label{lam2} (1-\lambda) (1 + \omega/(3H)) \geq 1 + \omega/(6H), \end{equation} we have $Z^{-1-\omega/(3H)}\ll Z_0^{-1-\omega/(6H)}$, which shows that $\mathscr N_{\mathfrak m}$ is an acceptable error in Proposition~\ref{circle-method}. This combines with \eqref{C5} to complete the proof of Proposition~\ref{circle-method} in the case under consideration. Now consider the case where $|b_3|>Z^{\omega/2}$. Then we take $C>200/\omega$ and have $|b_1b_2b_3|^C>Z^{100}$. Hence, the claim in Proposition~\ref{circle-method} is true for trivial reasons. This completes the argument in the tame case, and an inspection of \eqref{lam1} and \eqref{lam2} confirms the value of $\lambda$ stated in Proposition~\ref{circle-method}. \subsection{Major arcs again} It remains to deal with the case where $M<Z^{10k\omega}$. We assume this inequality from now on. Again, we work in the broader framework of Sections~\ref{sec6.2} and~\ref{sec6.3}, and refine the circle method approach to cover the current situation as well. We say that a variable $x_{ij}$ is small if $X_{ij}<Z^{10k\omega}$. By hypothesis, there is at least one small variable. Also, by \eqref{C4}, there is a number $J'_i$ such that the $x_{ij}$ with $j\le J'_i$ are small, and those with $j>J'_i$ are not. We have $$ \prod_{j\le J'_i} X_{ij} \le Z^{10k\omega J_i} < \langle \mathbf X_i\rangle^{1/4}, $$ whence for each $i$, there are variables $x_{ij}$ that are not small. This is important throughout this section.
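Dirichlet's theorem on diophantine approximation, applied several times in this and the next subsection, is effective: the convergents of the continued fraction expansion of $\alpha$ produce the required fraction $a/q$. The following Python sketch (an illustration only, using floating point in place of exact arithmetic) makes this concrete:

```python
import math

def dirichlet_approx(alpha, Q):
    """Return coprime (a, q) with 1 <= q <= Q and |q*alpha - a| < 1/Q.
    If p_n/q_n are consecutive convergents with q_n <= Q < q_{n+1},
    then |q_n*alpha - p_n| < 1/q_{n+1} < 1/Q, as Dirichlet's theorem demands."""
    p_prev, q_prev = 1, 0
    p, q = math.floor(alpha), 1
    x = alpha
    while True:
        frac = x - math.floor(x)
        if frac == 0:          # alpha is (numerically) rational: exact convergent
            return p, q
        x = 1.0 / frac
        a = math.floor(x)
        p_next, q_next = a * p + p_prev, a * q + q_prev
        if q_next > Q:
            return p, q
        p_prev, q_prev = p, q
        p, q = p_next, q_next

a, q = dirichlet_approx(math.pi, 300)
assert (a, q) == (355, 113) and abs(q * math.pi - a) < 1 / 300
```

In the text, an approximation of this kind is combined with Lemma~\ref{Weyl} to show that a large exponential sum forces an approximation with small denominator, which is then incompatible with $\alpha\in\mathfrak m$.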
We put $$ \mathbf X'_i = (X_{i1},\ldots,X_{iJ'_i}), \quad \mathbf X''_i = (X_{i,J'_i+1},\ldots,X_{iJ_i}), \quad \mathbf X_i= (\mathbf X'_i,\mathbf X''_i), $$ where $\mathbf X'_i$ is void if $x_{i1}$ is not small. In the same way, we dissect the variable $\mathbf x_i = (\mathbf x'_i,\mathbf x''_i)$ and the chain of exponents $\mathbf h_i = (\mathbf h'_i,\mathbf h''_i)$. By orthogonality, we then have \begin{equation}\label{C10} \mathscr N_{\mathbf b}(\mathbf X) = \sum_{(\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'}\int_0^1 W(b_1\alpha {\mathbf x'_1}^{\mathbf h'_1}, \mathbf X''_1; \mathbf h''_1) \ldots W(b_k\alpha {\mathbf x'_k}^{\mathbf h'_k}, \mathbf X''_k; \mathbf h''_k) \,\mathrm d\alpha, \end{equation} where \begin{equation}\label{C11} \mathscr{Y}' \coloneqq \{ \textbf{x}' \in \Bbb{R}^{J_1' + \ldots +J_k'} : {\textstyle\frac12 } X_{ij} <|x_{ij}| \le X_{ij} \text{ for } 1\le i\le k, 1\le j\le J'_i\}. \end{equation} We apply the circle method to the integral in \eqref{C10}. By Lemma~\ref{Weylapprox}, when $\alpha = (a/q)+\beta$, one finds that subject to \eqref{C11}, one has \begin{equation*} W\big(b_i\alpha {\mathbf x'_i}^{\mathbf h'_i}, \mathbf X''_i; \mathbf h''_i\big)= E\big(q,ab_i {\mathbf x'_i}^{\mathbf h'_i}; \mathbf h''_i\big) I\big(\beta b_i {\mathbf x'_i}^{\mathbf h'_i}, \mathbf X''_i; \mathbf h''_i\big) +O\big(\langle \mathbf X''_i\rangle Z^{-10k\omega } q(1+|b_i \beta| {\mathbf X''_i}^{\mathbf h''_i})\big). \end{equation*} Here it is worth recalling that $ \mathbf X''_i$ is not void and has all its components at least as large as $Z^{10k\omega}$. 
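The orthogonality step behind \eqref{C10} is the identity $\int_0^1 e(\alpha n)\,\mathrm d\alpha = 1$ or $0$ according as $n=0$ or not. In a finite model, averaging over $M$-th roots of unity with $M$ larger than any value of the linear form plays the same role. A toy Python verification (illustrative only, using the hypothetical tiny equation $x_1+x_2-x_3=0$ rather than anything from the paper):

```python
import cmath

def count_via_orthogonality(X, M):
    """Count (x1, x2, x3) in {1..X}^3 with x1 + x2 - x3 = 0 by discrete
    orthogonality: (1/M) * sum_t S(t)^2 * conj(S(t)), where
    S(t) = sum_x e(t x / M).  Requires M > 2X, so that
    x1 + x2 - x3 = 0 (mod M) forces x1 + x2 - x3 = 0."""
    total = 0.0
    for t in range(M):
        S = sum(cmath.exp(2j * cmath.pi * t * x / M) for x in range(1, X + 1))
        total += (S * S * S.conjugate()).real
    return round(total / M)

X = 12
direct = sum(1 for x1 in range(1, X + 1) for x2 in range(1, X + 1)
             for x3 in range(1, X + 1) if x1 + x2 == x3)
assert count_via_orthogonality(X, 2 * X + 1) == direct
```

In the paper the role of the average over $t$ is played by the continuous integral over $\alpha\in[0,1]$, and the factors $W(\cdots)$ are the corresponding exponential sums over the variables that are not small.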
For $\alpha \in \mathfrak M$, the error in the preceding display does not exceed $$ \langle \mathbf X''_i\rangle Z^{-10k\omega }|b_i| Z^{2\omega-1} \mathbf X_i^{\mathbf h_i} \le \langle \mathbf X''_i\rangle |b_i|Z^{3\omega -10k\omega } \le \langle \mathbf X''_i\rangle |b_i|Z^{-9k\omega }.$$ Let ${\sf S}$ denote the integrand in \eqref{C10}, and let ${\sf M}$ denote the product of the $ E(q,ab_i {\mathbf x'_i}^{\mathbf h'_i}; \mathbf h''_i) I(\beta b_i {\mathbf x'_i}^{\mathbf h'_i}, \mathbf X''_i; \mathbf h''_i)$ with $1\le i\le k$. Then, following the discussion in Section~\ref{sec6.3}, we obtain \begin{equation}\label{C11a} {\sf S}- {\sf M}\ll \langle \mathbf X''_1\rangle\cdots \langle \mathbf X''_k\rangle |\mathbf b|_1 Z^{-9k\omega}. \end{equation} We integrate over $\mathfrak M$ and sum over the set \eqref{C11}. Then, again as in Section~\ref{sec6.3}, this gives \begin{equation}\label{C12} \mathscr N_{\mathbf b}(\mathbf X) = \sum_{ (\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'} \mathscr E' \mathscr I' + \mathscr N^\dag + O(\langle \mathbf X_1\rangle\cdots \langle \mathbf X_k\rangle |\mathbf b|_1 Z^{-8k\omega-1}), \end{equation} where \begin{align} \mathscr E' & = \sum_{q\le Z^\omega}\underset{a \bmod{q}}{\left.\sum \right.^{\ast}} E(q,ab_1 {\mathbf x'_1}^{\mathbf h'_1}; \mathbf h''_1)\cdots E(q,ab_k {\mathbf x'_k}^{\mathbf h'_k}; \mathbf h''_k)\label{C13}, \\ \mathscr I' &= \int_{-Z^{\omega-1}}^{Z^{\omega-1}} I(\beta b_1 {\mathbf x'_1}^{\mathbf h'_1}, \mathbf X''_1; \mathbf h''_1)\cdots I(\beta b_k {\mathbf x'_k}^{\mathbf h'_k}, \mathbf X''_k; \mathbf h''_k) \,\mathrm d\beta, \nonumber \end{align} and where $\mathscr N^\dag$ is the same expression as in \eqref{C10} but with integration over the minor arcs $\mathfrak m$. Exchanging the sum with the integral in \eqref{C10}, we see that $\mathscr N^\dag = \mathscr N_{\mathfrak m}$. Note that the error in \eqref{C12} also occurred in Section~\ref{sec6.3} and was shown to be of acceptable size.
The difficulty now is that the moduli $q$ in \eqref{C13} are too large for the small variables to be arranged in residue classes modulo $q$. We therefore prune the sum over $q$. In preparation for this manoeuvre, we bound $\mathscr I'$ uniformly in $\mathbf x'_i$. Whenever $\mathbf x'_i \in \mathscr{Y}'$, one finds from \eqref{E3} that \begin{equation*} I(\beta b_i {\mathbf x'_i}^{\mathbf h'_i}, \mathbf X''_i ; \mathbf h''_i) \ll \langle\mathbf X''_i\rangle (1+ {\mathbf X''_i}^{\mathbf h''_i}| {\mathbf x'_i}^{\mathbf h'_i} b_i\beta|)^{-1} \ll \langle \mathbf X''_i \rangle(1+ \mathbf X_i^{\mathbf h_i} |b_i\beta|)^{-1}. \end{equation*} Hence, by H\"older's inequality, \begin{equation}\label{C15} \mathscr I' \ll \prod_{i=1}^k \langle \mathbf X''_i\rangle \Big(\int_{-\infty}^\infty (1+ \mathbf X_i^{\mathbf h_i} |b_i\beta|)^{-1/\zeta_i}\,\mathrm d\beta\Big)^{\zeta_i} \ll \prod_{i=1}^k \langle \mathbf X''_i\rangle \mathbf X_i^{-\zeta_i \mathbf h_i}. \end{equation} Now let $\mathscr E^\dag$ be the portion of the sum defining $\mathscr E'$ in \eqref{C13} where $q\le M^{1/8}$, and let $\mathscr E^\ddag$ be the portion with $M^{1/8}<q\le Z^\omega$. Then $\mathscr E'= \mathscr E^\dag+\mathscr E^\ddag$, and \eqref{C13}--\eqref{C15} yield \begin{equation}\label{C16} \sum_{ (\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'} \mathscr E^\ddag \mathscr I' \ll \Big(\prod_{i=1}^3 \langle \mathbf X''_i\rangle \mathbf X_i^{-\zeta_i \mathbf h_i}\Big) \sum_{M^{1/8}<q<Z^{\omega}} \sum_{ (\mathbf x'_1,\ldots,\mathbf x'_k)\in \mathscr{Y}' } \Big|\underset{a \bmod{q}}{\left.\sum \right.^{\ast}} \prod_{i=1}^k E(q,ab_i {\mathbf x'_i}^{\mathbf h'_i}; \mathbf h''_i)\Big|. \end{equation} At this point, we require a workable upper bound for the innermost sum. In the situation of Proposition~\ref{circle-method}, such a bound is provided by \eqref{innergauss}.
With $h=\max h_{3j}$, this yields \begin{equation}\label{C17} \underset{a \bmod{q}}{\left.\sum \right.^{\ast}} \prod_{i=1}^k E(q,ab_i {\mathbf x'_i}^{\mathbf h'_i}; \mathbf h''_i) \ll \frac{(q,b_1 \langle \mathbf x'_1\rangle) (q,b_2 \langle \mathbf x'_2\rangle)(q,b_3 {\mathbf x'_3}^{\mathbf h'_3})^{1/h} }{q^{1+1/h}}. \end{equation} Now $ (q,b_1 \langle \mathbf x'_1\rangle) \le |b_1| (q,x_{11})\cdots(q,x_{1J'_1}) $ and likewise for $(q,b_2 \langle \mathbf x'_2\rangle)$. Similarly, $$ (q,b_3 {\mathbf x'_3}^{\mathbf h'_3})^{1/h}\le |b_3| (q,x_{31}^{h_{31}})^{1/h}\cdots(q,x_{3J'_3}^{h_{3J'_3}})^{1/h} \le |b_3| (q,x_{31})\cdots(q,x_{3J'_3}). $$ We may sum \eqref{C17} over $\mathbf x'_i \in \mathscr{Y}'$, using the simple bound $$ \sum_{x\le X} (q,x) \ll q^\varepsilon X. $$ It then follows that the right hand side of \eqref{C16} does not exceed \begin{equation}\label{C17a} \ll \Big(\prod_{i=1}^3 |b_i|\langle \mathbf X'_i\rangle \langle \mathbf X''_i\rangle \mathbf X_i^{-\zeta_i \mathbf h_i}\Big) \sum_{M^{1/8}<q<Z^{\omega}} q^{\varepsilon-1-1/h} \ll M^{-1/(9h)} |b_1b_2b_3| \prod_{i=1}^3 \langle \mathbf X_i\rangle \mathbf X_i^{-\zeta_i \mathbf h_i}. \end{equation} In the specific situation of Proposition~\ref{circle-method}, this is an acceptable error term. We now turn to the product $\mathscr E^\dag \mathscr I'$. Here we prune the range of integration. Let $$ \mathscr I^\dag = \int_{-M^{1/8}Z_0^{-1}}^{M^{1/8}Z_0^{-1}} I(\beta b_1 {\mathbf x'_1}^{\mathbf h'_1}, \mathbf X''_1; \mathbf h''_1)\cdots I(\beta b_k {\mathbf x'_k}^{\mathbf h'_k}, \mathbf X''_k; \mathbf h''_k)\,\mathrm d\beta, $$ and let $\mathscr I^\ddag$ be the complementary integral over $ M^{1/8}Z_0^{-1}< |\beta|\le Z^{\omega-1}$ so that $\mathscr I'=\mathscr I^\dag+\mathscr I^\ddag$. To obtain an upper bound for $\mathscr I^\ddag$, choose an index $\iota$ with $Z_0 = \mathbf X_\iota^{\mathbf h_\iota}$.
Then $$ \int_{M^{1/8}Z_0^{-1}}^\infty (1+\mathbf X_\iota^{\mathbf h_\iota}|b_\iota\beta|)^{-1/\zeta_\iota}\, \mathrm d\beta \ll \mathbf X_\iota^{-\mathbf h_\iota} M^{(\zeta_\iota-1)/8}, $$ and since $\zeta_\iota<1$, we observe that the exponent of $M$ is negative. With this adjustment, the argument in \eqref{C15} shows that uniformly for $\mathbf x'_i \in \mathscr{Y}'$ one has \begin{equation}\label{C18} \mathscr I^\ddag \ll M^{(\zeta_\iota-1)\zeta_\iota/8} \prod_{i=1}^k \langle \mathbf X''_i\rangle \mathbf X_i^{-\zeta_i \mathbf h_i}. \end{equation} We can now imitate the argument from \eqref{C16}--\eqref{C17a}, this time applying \eqref{C18} and summing over $q\le M^{1/8}$. In the cases covered by Proposition~\ref{circle-method}, this yields $$ \sum_{(\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'} \mathscr E^\dag \mathscr I^\ddag \ll M^{(\zeta_\iota-1)/9} |b_1b_2b_3| \prod_{i=1}^3 \langle \mathbf X_i\rangle \mathbf X_i^{-\zeta_i \mathbf h_i}, $$ which can be absorbed in the error term when $\delta_1 < (1-\zeta)/8$ where $\zeta = \max \zeta_i <1$. Collecting these estimates, we deduce from \eqref{C12} and the discussion above that \begin{equation}\label{C19} \mathscr N_{\mathbf b}(\mathbf X) = \sum_{(\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'} \mathscr E^\dag \mathscr I^\dag + \mathscr N_{\mathfrak m} +O(F), \end{equation} where $F$ is an acceptable error provided that $C>1$ and $\delta_1 $ is small enough. It would now be possible to exchange the sums over $\mathbf x'_i$ with the summations present in the definition of $\mathscr E^\dag$, and to evaluate these sums by arranging the $x_{ij}$ in arithmetic progressions, as suggested earlier. However, we prefer an indirect argument that is technically simpler. Let $\mathfrak N$ denote the union of the pairwise disjoint intervals $|\alpha-(a/q)|\le M^{1/8}Z_0^{-1}$ with $1\le a\le q\le M^{1/8}$ and $(a,q)=1$. Observe that $\mathfrak N \subset \mathfrak M$.
Hence, integrating \eqref{C11a} over $\mathfrak N$ we find that \begin{equation}\label{C20} \sum_{ (\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'}\int_{\mathfrak N} W(b_1\alpha {\mathbf x'_1}^{\mathbf h'_1}, \mathbf X''_1; \mathbf h''_1) \cdots W(b_k\alpha {\mathbf x'_k}^{\mathbf h'_k}, \mathbf X''_k; \mathbf h''_k) \,\mathrm d\alpha = \sum_{ (\mathbf x'_1,\ldots,\mathbf x'_k) \in \mathscr{Y}'} \mathscr E^\dag \mathscr I^\dag + O(F') \end{equation} where $F'$ is an error that certainly does not exceed the error present in \eqref{C12} because the measure of $\mathfrak N$ is smaller than that of $\mathfrak M$. Exchanging sum and integral, it transpires that the left hand side of \eqref{C20} is simply the major arc contribution $\mathscr N_{\mathfrak N}$. To evaluate the latter, we can run an argument from Section~\ref{sec6.3} with $\mathfrak N$ in place of $\mathfrak M$. The bound \eqref{Sapprox} becomes $$ W_i(b_i\alpha, \mathbf X) = E_i(q,ab_i)I_i(\beta b_i,\mathbf X_i) + O(\langle\mathbf X_i\rangle M^{-3/4} |b_i\beta|), $$ and then the result in \eqref{C6} changes to $$ {\mathscr N}_{\mathfrak N} = {\mathscr E}_{\mathbf b} (M^{1/8}) {\mathscr I}_{\mathbf b}(\mathbf X, M^{1/8})+O(\langle\mathbf X_1\rangle\cdots \langle\mathbf X_k\rangle |\mathbf b|_1 M^{-3/8}Z_0^{-1}). $$ We can now complete the singular series and the singular integral as in Section~\ref{sec6.3}. The argument that produced \eqref{C9} now delivers exactly the same asymptotics for $\mathscr N_{\mathfrak N}$. Via \eqref{C19} and \eqref{C20}, it follows that $ \mathscr N_{\mathbf b}(\mathbf X) = {\mathscr E}_{\mathbf b} {\mathscr I}_{\mathbf b}(\mathbf X) + \mathscr N_{\mathfrak m} + O(F'') $ where $F''$ is an error acceptable to Hypothesis~\ref{H1}. Consequently, it remains to estimate the contribution from the minor arcs. 
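The gcd sums used in the passage from \eqref{C17} to \eqref{C17a}, and again in the minor arc analysis below, are controlled by the elementary inequality $\sum_{x\le X}(q,x)\le \tau(q)X\ll q^{\varepsilon}X$, where $\tau$ is the divisor function: classify $x$ according to $d=(q,x)$ and note that at most $X/d$ values of $x$ occur for each divisor $d$ of $q$. A quick empirical Python check (illustrative only):

```python
from math import gcd

def gcd_sum(q, X):
    """Compute sum_{x <= X} gcd(q, x)."""
    return sum(gcd(q, x) for x in range(1, X + 1))

def tau(q):
    """Number of divisors of q (naive count)."""
    return sum(1 for d in range(1, q + 1) if q % d == 0)

# the exact inequality gcd_sum(q, X) <= tau(q) * X, at a few sample points
for q in (12, 30, 64, 97):
    for X in (10, 50, 200):
        assert gcd_sum(q, X) <= tau(q) * X
```

Since $\tau(q)\ll q^{\varepsilon}$, this recovers the bound quoted in the text.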
\subsection{Minor arcs again} The argument of Section~\ref{sec6.4} yields an acceptable bound for $\mathscr N_{\mathfrak m}$ provided that the estimate \eqref{C9a} remains valid in cases that are not tame. Hence we now complete the proof of Proposition~\ref{circle-method} by showing that indeed \eqref{C9a} holds in the wider context, uniformly for $\alpha\in\mathfrak m$ and $1\le |b_3|\le Z^{\omega/2}$. In doing so, we may suppose that $x_{31}$ is small, for otherwise our previous argument leading to \eqref{C9a} still applies. We write $$ T(\alpha,\mathbf x'_3) =W(b_3\alpha {\mathbf x'_3}^{\mathbf h'_3},\mathbf X''_3; \mathbf h''_3). $$ Then $$ W_3(b_3\alpha, \mathbf X) = \sum_{\mathbf x'_3} T(\alpha,\mathbf x'_3), $$ with the sum extending over $\frac{1}{2} X_{3j} \leq |x'_{3j}| \leq X_{3j}$. We apply Weyl's inequality to $ T(\alpha,\mathbf x'_3)$. Let $K=2^{|\mathbf h''_3|_1 - J_3+J'_3}$, and note that all entries in $\mathbf X''_3$ are at least as large as $Z^\omega$. Hence, whenever the real number $\gamma$ and $c\in\mathbb Z$ and $t\in\mathbb N$ are such that $|t\gamma-c|\le t^{-1}$, then by Lemma~\ref{Weyl}, one has \begin{equation}\label{C30} W(\gamma, \mathbf X''_3;\mathbf h''_3) \ll \langle\mathbf X''_3\rangle^{K+\varepsilon} \Big(\frac1{t} + \frac1{Z^\omega} + \frac{t}{{\mathbf X''_3}^{\mathbf h''_3}}\Big). \end{equation} By Dirichlet's theorem on diophantine approximation, there are $c$ and $t$ with $t\le Z^{-\omega} {\mathbf X''_3}^{\mathbf h''_3}$ and $|t\gamma-c|\le Z^\omega {\mathbf X''_3}^{-\mathbf h''_3} $. Then, on applying a familiar transference principle (see \cite[Exercise 2.8.2]{Va}) to \eqref{C30}, we find that $$ |W(\gamma, \mathbf X''_3;\mathbf h''_3)|^K \ll \langle\mathbf X''_3\rangle^{K+\varepsilon} \Big(\frac1{Z^\omega} + \frac{1}{t+ {\mathbf X''_3}^{\mathbf h''_3}|t\gamma-c|}\Big). $$ Observe that $K<H$. 
Consequently, for a given $\mathbf x'_3$, we either have $T(\alpha,\mathbf x'_3) \ll \langle\mathbf X''_3\rangle Z^{-\omega/(3H)}$ or there are $t=t(\mathbf x'_3)$ and $c=c( \mathbf x'_3)$ with $t\le Z^{\omega/3}$ and \begin{equation}\label{C31} \Big|b_3\alpha {\mathbf x'_3}^{\mathbf h'_3} - \frac{c}{t}\Big| \le \frac{Z^{\omega/3}}{t{\mathbf X''_3}^{\mathbf h''_3}}. \end{equation} Let $\mathscr X$ be the set of all $\mathbf x'_3$ where the latter case occurs. Then \begin{equation}\label{C32} W_3(b_3\alpha, \mathbf X) \ll \langle\mathbf X_3\rangle Z^{-\omega/(3H)} + \langle\mathbf X''_3\rangle \sum_{\mathbf x'_3 \in\mathscr X} \big(t+ {\mathbf X''_3}^{\mathbf h''_3}|tb_3\alpha {\mathbf x'_3}^{\mathbf h'_3}-c|\big)^{-1/H}. \end{equation} We write $Q={\mathbf X'_3}^{\mathbf h'_3}Z^\omega$ and apply Dirichlet's theorem again to find coprime numbers $a$, $q$ with $1\le q \le Q$ and $|qb_3\alpha -a|\le Q^{-1}$. On comparing this approximation to $b_3\alpha$ with that given by \eqref{C31}, we find that whenever $\mathbf x'_3\in\mathscr X$, then $$ |at {\mathbf x'_3}^{\mathbf h'_3} -cq | \le QZ^{\omega/3} {\mathbf X''_3}^{-\mathbf h''_3} + Q^{-1}t {\mathbf X'_3}^{-\mathbf h'_3} .$$ But $t\le Z^{\omega/3}$, and therefore, the second summand on the right does not exceed $Z^{-\omega/2}$. For the first summand, we recall the definition of small variables, which shows that $ {\mathbf X'_3}^{\mathbf h'_3}\le Z^{30\omega|\mathbf h_3|_1}$. It follows that $ {\mathbf X''_3}^{\mathbf h''_3}=\mathbf X_3^{\mathbf h_3}{\mathbf X'_3}^{-\mathbf h'_3}\ge Z {\mathbf X'_3}^{-\mathbf h'_3} \ge Z^{1-30\omega|\mathbf h_3|_1}$. From this, we see that $$ QZ^{\omega/3} {\mathbf X''_3}^{-\mathbf h''_3} = Z^{4\omega/3} {\mathbf X''_3}^{\mathbf h''_3}{\mathbf X'_3}^{-\mathbf h'_3} \le Z^{(60|\mathbf h_3|_1+2)\omega -1}, $$ and by \eqref{omegabound} the exponent here is negative. 
For large $Z$, it follows that $at {\mathbf x'_3}^{\mathbf h'_3} =cq$, whence $ t= q/(q, {\mathbf x'_3}^{\mathbf h'_3})$, and \eqref{C32} simplifies to $$ W_3(b_3\alpha, \mathbf X) \ll \langle\mathbf X_3\rangle Z^{-\omega/(3H)} + \langle\mathbf X''_3\rangle \sum_{\mathbf x'_3 \in\mathscr X} (q, {\mathbf x'_3}^{\mathbf h'_3})^{1/H} \big(q+ \mathbf X_3^{-\mathbf h_3}|b_3||qb_3\alpha-a|\big)^{-1/H}.$$ Here we can sum over {\em all} $\mathbf x'_3$ and apply an argument paralleling that leading from \eqref{C17} to \eqref{C17a}. This produces $$ W_3(b_3\alpha, \mathbf X) \ll \langle\mathbf X_3\rangle Z^{-\omega/(3H)} + \langle\mathbf X_3\rangle q^\varepsilon \big(q+ \mathbf X_3^{-\mathbf h_3}|qb_3\alpha-a|\big)^{-1/H}.$$ The bound \eqref{C9a} is now evident, and the proof of Proposition~\ref{circle-method} is complete. \section{Upper bound estimates} \subsection{The upper bound hypothesis} As we mentioned in the introduction, not only is asymptotic information of the type encoded in Hypothesis~\ref{H1} required as an input for the transition method in Section~\ref{sec8}, but certain upper bound estimates are also needed, for example, to handle the contribution to the count that comes from solutions of \eqref{torsor} where the summands are very unbalanced. Again, we formulate the requirements as a hypothesis that can then be checked in the particular cases at hand. We recall the definition of the block matrix \begin{equation}\label{newA} \mathscr{A} = \left( \begin{matrix} \mathscr{A}_1 & \mathscr{A}_2\\ \mathscr{A}_3 & \mathscr{A}_4 \end{matrix} \right) \in \Bbb{R}^{(J+1) \times (N+k)} \end{equation} in \eqref{matrix}.
In the slightly simpler setup of the torsor equation \eqref{torsor} and the height conditions \eqref{height} we have \begin{equation}\label{newA1} \mathscr{A}_1 = (\alpha_{ij}^{\nu}) \in \mathbb{R}_{\ge 0}^{J\times N} \end{equation} with $0 \leq i \leq k$, $1 \leq j \leq J_i$, $1 \leq \nu \leq N$ and \begin{equation}\label{A2new} \mathscr{A}_2 = (e_{ij}^\mu) \in \mathbb{R}^{J\times k} \text{ with $e_{ij}^\mu=\begin{cases}\delta_{\mu=i}h_{ij}&\text{$i<k$, $\mu < k$,} \\-h_{kj}&\text{$i=k$, $\mu < k$,}\\ -1&\text{$i<k$, $\mu = k$,}\\ h_{kj}-1 &\text{$i=k$, $\mu = k$.} \end{cases}$} \end{equation} This notation is more convenient for the analytic manipulations in the following sections. Throughout we assume that \begin{equation}\label{1c} \text{rk}(\mathscr{A}_1) = \text{rk}(\mathscr{A}) = R \quad \text{(say).} \end{equation} In our applications, this will be satisfied by Lemma~\ref{rank}, and, by Lemma~\ref{lem19}, $R$ plays the same role as in \eqref{eq:rkPic}. We define \begin{equation}\label{defc2} c_2 = J-R, \end{equation} so that by \eqref{eq:rkPic} this choice of $c_2$ is the expected exponent in \eqref{manin}. For any vector ${\bm \zeta}$ satisfying the properties specified in \eqref{zeta1}, where we allow more generally also $\zeta_i \geq 0$, and for arbitrary $\zeta_0 > 0$, we also assume that the system of $J+1$ linear equations \begin{equation}\label{1a} \begin{split} \left(\begin{matrix} \mathscr{A}_1\\ \mathscr{A}_3\end{matrix}\right) {\bm \sigma} = \Big(1 - h_{01}\zeta_0, \ldots, 1 - h_{kJ_k}\zeta_k, 1\Big)^{\top} \end{split} \end{equation} in $N$ variables has a solution ${\bm \sigma} \in \Bbb{R}_{>0}^{N}$. In our applications, this is ensured by Lemma~\ref{pos} (whose proof also works for $\zeta_i \geq 0$). \begin{remark} The condition $\text{rk}(\mathscr{A}) = \text{rk}(\mathscr{A}_1)$ puts some restrictions on the height matrix $\mathscr{A}_1$.
For instance, no row of $\mathscr{A}_1$ can vanish completely (since every column of $\mathscr{A}_2$ is linearly dependent on the columns of $\mathscr{A}_1$). For future reference, we remark that this implies that the set of conditions \eqref{height} for $x_{ij} \in \Bbb{Z} \setminus \{0\}$ implies $|x_{ij}| \leq B$ for all $(i, j)$. \end{remark} Now let $H \geq 1$, $0 < \lambda \leq 1$ and $\mathbf{b}, \mathbf{y} \in \Bbb{N}^{J}$. Let $N_{\mathbf{b}, \mathbf{y}}(B, H, \lambda)$ be the number of solutions $\mathbf{x} \in (\Bbb{Z}\setminus \{0\})^J$ satisfying the conditions \begin{equation}\label{equation} \begin{split} & \sum_{i=1}^k \prod_{j=1}^{J_i} (b_{ij} x_{ij})^{h_{ij}} = 0,\quad \prod_{i=0}^k \prod_{j=1}^{J_i} | y_{ij} x_{ij}|^{\alpha^\nu_{ij}} \leq B \quad (1 \leq \nu \leq N), \end{split} \end{equation} and at least one of the inequalities \begin{equation}\label{Hlambda} \begin{split} & \min_{ij} |x_{ij}| \leq H, \quad \quad \min_{1 \leq i \leq k} \prod_{j = 1}^{J_i} |x_{ij}|^{h_{ij}} < \Bigl(\max_{1 \leq i \leq k} \prod_{j = 1}^{J_i} |2x_{ij}|^{h_{ij}}\Bigr)^{1-\lambda}. \end{split} \end{equation} Note that for $\textbf{x} \in (\Bbb{Z}\setminus \{0\})^J$ satisfying \eqref{equation}, the first condition in \eqref{Hlambda} is always satisfied for $H=B$ and the second condition in \eqref{Hlambda} is never satisfied for $\lambda = 1$. Let $\mathscr{S}_{\textbf{y}}(B, H, \lambda)$ denote the set of all $\mathbf{x} \in [1, \infty)^J$ that satisfy \eqref{Hlambda} and the $N$ inequalities in the second part of \eqref{equation}. As in \eqref{gcd}, we denote by $S_{\rho}$, $1 \leq \rho \leq r$, subsets of the set of pairs $(i, j)$ with $0 \leq i \leq k$, $1 \leq j \leq J_i$ corresponding to the coprimality conditions. \begin{hyp}\label{H2} Let $c_2$ be the number introduced in \eqref{defc2} and let $\lambda$, ${\bm \zeta}$ be as in Hypothesis~\ref{H1}. 
Suppose that there exist ${\bm \eta} = (\eta_{ij}) \in \Bbb{R}_{> 0}^J$ and $ \delta_2, \delta_2^{\ast} > 0$ with the following properties: \begin{equation}\label{3} C_1({\bm \eta}) \colon \quad \sum_{(i, j) \in S_{\rho}}\eta_{ij} \geq 1+\delta_2 \quad \text{for all} \quad 1 \leq \rho \leq r, \end{equation} \begin{equation}\label{2a} N_{\mathbf{b}, \mathbf{b} \cdot \mathbf{y}}(B, H, \lambda) \ll B (\log B)^{c_2 -1+\varepsilon} (1+\log H) \mathbf{b}^{-{\bm \eta}} \langle\mathbf{y}\rangle^{-\delta_2^{\ast}} \end{equation} and \begin{equation}\label{continuous} \int_{\mathscr{S}_{\textbf{y}}(B, H, \lambda)} \prod_{ij} x_{ij}^{-h_{ij}\zeta_i} \,{\mathrm d}\mathbf{x} \ll B (\log B)^{c_2 -1+\varepsilon} (1+\log H) \langle \mathbf{y}\rangle^{-\delta_2^{\ast}} \end{equation} for any $\varepsilon > 0$. \end{hyp} The bound \eqref{2a} is the desired upper bound $B(\log B)^{c_2+\varepsilon}$ with some saving in the coefficients $\textbf{b}$, $\textbf{y}$ and with some extra logarithmic saving in the situation of condition \eqref{Hlambda}, i.\,e.,~ if one variable is short (that is, $\log H = o((\log B)^{1+\varepsilon})$) or the blocks $\prod_j |x_{ij}|^{h_{ij}}$ for $1 \leq i \leq k$ are unbalanced in size (so that the second assumption in \eqref{Hlambda} holds and we may choose $H$ very small even if all $x_{ij}$ are large). \subsection{Reduction to linear algebra} Our main applications involve the torsor equation \eqref{typeT}. In this case, Hypothesis~\ref{H2} can be verified simply by means of a linear program. This will be established in Proposition~\ref{propH2} below. We start with two elementary lemmas. \begin{lemma}\label{lattice} Let $\mathbf{v}\in \Bbb{Z}^3$ be primitive and let $H_1, H_2, H_3 > 0$. Then the number of primitive ${\mathbf u} \in \Bbb{Z}^3$ that satisfy $u_1v_1 + u_2v_2 + u_3v_3 = 0$ and that lie in the box $|u_i| \leq H_i$ $(1 \leq i \leq 3)$ is $O(1+H_1H_2|v_3|^{-1})$. \end{lemma} This is \cite[Lemma 3]{HB}.
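Lemma~\ref{lattice} can be checked by brute force in small instances. The Python sketch below counts primitive solutions directly and compares with the predicted order of magnitude; the constant $12$ is an ad hoc choice for these illustrative instances, not the constant implicit in the lemma:

```python
from math import gcd

def count_primitive_on_plane(v, H):
    """Brute-force count of primitive u in Z^3 with u.v = 0 and |u_i| <= H_i."""
    v1, v2, v3 = v
    H1, H2, H3 = H
    count = 0
    for u1 in range(-H1, H1 + 1):
        for u2 in range(-H2, H2 + 1):
            s = u1 * v1 + u2 * v2
            if s % v3 == 0:
                u3 = -s // v3
                if abs(u3) <= H3 and gcd(gcd(abs(u1), abs(u2)), abs(u3)) == 1:
                    count += 1
    return count

# two instances of the bound O(1 + H1*H2/|v3|), with an ad hoc constant
for v, H in (((1, 1, 1), (5, 5, 5)), ((3, 5, 101), (10, 10, 10))):
    assert count_primitive_on_plane(v, H) <= 12 * (1 + H[0] * H[1] / abs(v[2]))
```

For $v=(3,5,101)$ the plane meets the box only along the line $3u_1+5u_2=0$, $u_3=0$, and exactly two primitive points survive, well within the predicted bound.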
\begin{lemma}\label{gcd-lemma} Let $\alpha, \beta, \gamma \in \Bbb{N}$, $A, B, X_1, \ldots, X_r \geq 1$, $h_1, \ldots, h_r \in \Bbb{N}$ with $h_1 \leq \ldots \leq h_r$. Then $$\sum_{a \leq A} \sum_{b \leq B} \sum_{\substack{x_j \leq X_j\\ 1 \leq j \leq r}} (\alpha a, \beta b, \gamma \mathbf{x}^{\textbf{h}}) \ll (\alpha, \beta, \gamma)^{1/h_r}(\alpha, \beta)^{1-1/h_r} \tau(\alpha)\tau(\beta)\tau(\gamma)\tau_r(\alpha\beta\gamma) A B \langle \textbf{X} \rangle. $$ \end{lemma} \begin{proof} The left hand side of the formula is at most \begin{displaymath} \begin{split} & \sum_f f \sum_{\substack{a \leq A\\ f \mid \alpha a}} \sum_{\substack{b \leq B\\ f \mid \beta b}} \sum_{\substack{x_j \leq X_j\, (1 \leq j \leq r)\\ f \mid \gamma \textbf{x}^{\textbf{h}}}}1\leq AB \sum_f \frac{(f, \alpha)(f, \beta)}{f} \sum_{f_1 \cdots f_r = f/(f, \gamma)} \sum_{\substack{x_j \leq X_j\, (1 \leq j \leq r)\\ f_j \mid x_j^{h_j} }}1 \\ &\leq AB\langle \textbf{X} \rangle \sum_f \frac{(f, \alpha)(f, \beta) (f, \gamma)^{1/h_r}\tau_r(f)} {f^{1 + 1/h_r}} \leq \zeta(1 + 1/h_r)^rAB\langle \textbf{X} \rangle \sum_{a \mid \alpha} \sum_{b \mid \beta} \sum_{c \mid \gamma} \frac{abc^{1/h_r}\tau_r([a, b, c]) }{[a, b, c]^{1 + 1/h_r}}. \end{split} \end{displaymath} Since $abc^{\delta}[a, b, c]^{-1-\delta} \leq (a, b)^{1-\delta}(a, b, c)^{\delta}$ for $0 \leq \delta \leq 1$, the lemma follows. \end{proof} We apply the previous two lemmas to analyze the number of solutions $\textbf{x} \in (\Bbb{Z} \setminus\{0\})^J$ to the first equation in \eqref{equation} in the special case where $k = 3$, $J_1 = J_2 = 2$ and $h_{11} = h_{12} = h_{21} = h_{22} = 1$, cf.\ \eqref{typeT}. In this case, the equation reads \begin{equation}\label{thisequation} b_{11} b_{12} x_{11} x_{12} + b_{21} b_{22} x_{21} x_{22} + \prod_{j=1}^{J_3} (b_{3j} x_{3j})^{h_{3j}} = 0. 
\end{equation} Without loss of generality, assume \begin{equation}\label{h3jsorted} \text{$h_{31} \leq \ldots \leq h_{3J_3}$, and let $\nu$ be the largest index with $h_{3 \nu} = 1$.} \end{equation} If no such index exists, we put $\nu = 0$. For notational simplicity, we write \begin{equation}\label{defmu} \mu = 1 - h_{3J_3}^{-1} \in [0, 1). \end{equation} Suppose first that $\nu \geq 1$. Let us temporarily restrict to $\textbf{x}$ satisfying \begin{equation}\label{copr} (x_{11}x_{12}, x_{21}x_{22}, x_{31} \cdots x_{3\nu}) = 1. \end{equation} For $X_{ij} \leq |x_{ij}| \leq 2 X_{ij}$ in dyadic boxes, by Lemma~\ref{lattice} (with $x_{12}, x_{22}, x_{31}$ in the roles of $u_1, u_2, u_3$) and Lemma~\ref{gcd-lemma}, the number of such solutions to \eqref{thisequation} is \begin{displaymath} \begin{split} & \ll \langle \textbf{X}_0\rangle \underset{\substack{X_{11} \leq x_{11} \leq 2X_{11}\\ X_{21} \leq x_{21} \leq 2 X_{21}}}{\sum\sum} \sum_{\substack{X_{3j} \leq x_{3j}\leq 2X_{3j}\\ 2 \leq j \leq J_3}} \Big(1 + \frac{X_{12}X_{22} }{x_{31}^{-1}\prod_j (b_{3j}x_{3j})^{h_{3j}}} \Big(b_{11}b_{12}x_{11}, b_{21}b_{22}x_{21}, x_{31}^{-1}\prod_j (b_{3j} x_{3j})^{h_{3j}}\Big)\Big)\\ & \ll \langle \textbf{X}_0 \rangle \Big(X_{11}X_{21} \frac{\langle \textbf{X}_3\rangle}{X_{31}} + |\textbf{b}|^{\varepsilon}\Big( \frac{(b_{11}b_{12}, b_{21}b_{22}) }{\textbf{b}_3^{\textbf{h}_3}} \Big)^{\mu}X_{11}X_{12}X_{21}X_{22} \prod_{j} X_{3j}^{1 - h_{3j}}\Big). \end{split} \end{displaymath} By symmetry, this improves itself to \begin{equation}\label{sym1} \langle \textbf{X}_0 \rangle \Big( \frac{\min(X_{11}, X_{12}) \min(X_{21}, X_{22}) \langle \textbf{X}_3\rangle }{\max(X_{31}, \ldots, X_{3 \nu})} + |\textbf{b}|^{\varepsilon}\Big( \frac{(b_{11}b_{12}, b_{21}b_{22}) }{\textbf{b}_3^{\textbf{h}_3}} \Big)^{\mu} X_{11}X_{12}X_{21}X_{22} \prod_{j} X_{3j}^{1 - h_{3j}}\Big). 
\end{equation} Permuting the roles of $u_1, u_2, u_3$ in Lemma~\ref{lattice}, we obtain similarly the bound \begin{displaymath} \begin{split} & \ll \langle \textbf{X}_0\rangle \underset{\substack{X_{11} \leq x_{11} \leq 2X_{11}\\ X_{21} \leq x_{21} \leq 2 X_{21}}}{\sum\sum} \sum_{\substack{X_{3j} \leq x_{3j}\leq 2X_{3j}\\ 2 \leq j \leq J_3}} \Big(1 + \frac{X_{12}X_{31}}{b_{21}b_{22}x_{21}} \Big(b_{11}b_{12}x_{11}, b_{21}b_{22}x_{21}, \prod_j (b_{3j} x_{3j})^{h_{3j}}\Big)\Big)\\ & \ll \langle \textbf{X}_0 \rangle \Big(X_{11}X_{21} X_{32}\cdots X_{3J_3} + |\textbf{b}|^{\varepsilon} X_{11}X_{12}\langle \textbf{X}_3 \rangle \Big). \end{split} \end{displaymath} Again by symmetry, this improves itself to $$ \langle \textbf{X}_0 \rangle \Big(\frac{\min(X_{11}, X_{12}) \min(X_{21} ,X_{22}) \langle \textbf{X}_3\rangle }{\max(X_{31}, \ldots, X_{3 \nu})}+ |\textbf{b}|^{\varepsilon} \min(X_{11}X_{12}, X_{21}X_{22}) \langle \textbf{X}_3 \rangle \Big).$$ Together with \eqref{sym1}, we now see that the number of $\textbf{x} \in (\Bbb{Z} \setminus \{0\})^J$ satisfying \eqref{thisequation}, \eqref{copr} and $X_{ij} \leq |x_{ij}| \leq 2 X_{ij}$ does not exceed \begin{equation}\label{sym2} \begin{split} |\textbf{b}|^{\varepsilon} \langle \textbf{X}_0 \rangle \Big(&\frac{\min(X_{11}, X_{12}) \min(X_{21}, X_{22}) \langle \textbf{X}_3\rangle }{\max(X_{31}, \ldots, X_{3 \nu})} \\ &+ \frac{X_{11}X_{12}X_{21}X_{22}\langle \textbf{X}_3\rangle }{\max(X_{11} X_{12}, X_{21}X_{22}, (\textbf{b}_3^{\textbf{h}_3}(b_{11}b_{12}, b_{21}b_{22})^{-1})^{\mu} \textbf{X}_3^{\textbf{h}_3})} \Big). \end{split} \end{equation} We now replace the minima and maxima in \eqref{sym2} by suitable geometric means. With future applications in mind, we keep the result as general as possible.
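The replacement of minima and maxima by geometric means rests on the elementary fact that the weighted geometric mean $a^{\theta}b^{1-\theta}$, $0\le\theta\le 1$, always lies between $\min(a,b)$ and $\max(a,b)$ for $a,b>0$. A short numeric confirmation in Python (illustrative only):

```python
import random

def weighted_geom_mean(a, b, theta):
    # for a, b > 0 and 0 <= theta <= 1 this lies in [min(a,b), max(a,b)]
    return a ** theta * b ** (1 - theta)

random.seed(0)
for _ in range(200):
    a, b = random.uniform(0.1, 100.0), random.uniform(0.1, 100.0)
    theta = random.random()
    g = weighted_geom_mean(a, b, theta)
    assert min(a, b) <= g * (1 + 1e-12) and g <= max(a, b) * (1 + 1e-12)
```

This is the mechanism by which, say, $\min(X_{11},X_{12})$ is dominated by $X_{11}^{\tau_{11}}X_{12}^{\tau_{12}}$ whenever $\tau_{11},\tau_{12}\ge 0$, $\tau_{11}+\tau_{12}\ge1$ and all $X_{ij}\ge1$.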
For $\ell = 1, 2$ and ${\bm \tau}^{(\ell)} = (\tau^{(\ell)}_{ij}) \in \Bbb{R}_{> 0}^J$ with \begin{equation}\label{tau1} \begin{split} &\tau^{(\ell)}_{0j} = 1, \quad \tau^{(\ell)}_{11} + \tau^{(\ell)}_{12} \geq 1, \quad \tau^{(\ell)}_{21} + \tau^{(\ell)}_{22} \geq 1, \quad \sum_{j=1}^{\nu} \tau^{(\ell)}_{3j} \geq \nu-1, \quad \tau^{(\ell)}_{3j} = 1\, (j > \nu),\\ & \min(\tau^{(\ell)}_{11}, \tau^{(\ell)}_{12}) + \min(\tau^{(\ell)}_{21}, \tau^{(\ell)}_{22}) + \min(\tau^{(\ell)}_{31}, \ldots, \tau^{(\ell)}_{3\nu}) > 1 \end{split} \end{equation} (where $\nu$ is as in \eqref{h3jsorted}), we have $$\frac{\langle \textbf{X}_0\rangle\min(X_{11}, X_{12}) \min(X_{21}, X_{22}) \langle \textbf{X}_3\rangle }{\max(X_{31}, \ldots, X_{3 \nu})} \leq \textbf{X}^{{\bm \tau}^{(\ell)}}.$$ (The second line in \eqref{tau1} is not needed here, but will be required later when we remove condition \eqref{copr}.) Let $\bm \zeta'$ satisfy \eqref{zeta1} and let $\zeta'_0\in \Bbb{R}$ be arbitrary. Then $$\frac{\langle\textbf{X}_0\rangle X_{11}X_{12}X_{21}X_{22}\langle \textbf{X}_3\rangle }{\max(X_{11} X_{12}, X_{21}X_{22}, (\textbf{b}_3^{\textbf{h}_3}(b_{11}b_{12}, b_{21}b_{22})^{-1})^{\mu} \textbf{X}_3^{\textbf{h}_3})} \leq \Big(\frac{(b_{11}b_{12} b_{21}b_{22})^{1/2}}{ \textbf{b}_3^{\textbf{h}_3} } \Big)^{\mu\zeta_3'} \prod_{ij} X_{ij}^{1 - h_{ij} \zeta'_i} . 
$$ Thus we can bound \eqref{sym2} by $$ |\textbf{b}|^{\varepsilon } \Big( \textbf{X}^{{\bm \tau}^{(1)}} + \Big(\frac{(b_{11}b_{12} b_{21}b_{22})^{1/2}}{ \textbf{b}_3^{\textbf{h}_3} } \Big)^{\mu\zeta'_3} \prod_{ij} X_{ij}^{1 - h_{ij} \zeta'_i} \Big) $$ and also by $$ |\textbf{b}|^{\varepsilon+1} \Big( \textbf{X}^{{\bm \tau}^{(2)}} + \prod_{ij} X_{ij}^{1 - h_{ij} \zeta_i} \Big) $$ (upon choosing $\bm \zeta' = \bm \zeta$) and so, for any $0 < \alpha\leq 1$, by \begin{equation}\label{before-d} |\textbf{b}|^{\varepsilon+\alpha} \Big( \textbf{X}^{{\bm \tau}^{(1)}} + \Big(\frac{(b_{11}b_{12} b_{21}b_{22})^{1/2}}{ \textbf{b}_3^{\textbf{h}_3} } \Big)^{\mu\zeta'_3} \prod_{ij} X_{ij}^{1 - h_{ij} \zeta'_i} \Big)^{1-\alpha} \Big( \textbf{X}^{{\bm \tau}^{(2)}} + \prod_{ij} X_{ij}^{1 - h_{ij} \zeta_i} \Big)^{ \alpha}. \end{equation} We will apply this with $\alpha$ very small (but fixed). The idea of this maneuver is to separate the $\textbf{b}$- and $\textbf{y}$-decay in \eqref{2a} from the bound in $B$ and $H$. Before we proceed with the estimation, we remove the condition \eqref{copr}. Let us therefore assume that $(x_{11}x_{12}, x_{21}x_{22}, x_{31} \cdots x_{3\nu}) = d$. Then we can apply the previous analysis with $X_{ij}/d_{ij}$ in place of $X_{ij}$ for numbers $d_{ij}$ satisfying $d_{11} d_{12} = d_{21} d_{22} = d_{31} \cdots d_{3\nu} = d$. The second line in \eqref{tau1} and \eqref{zeta1} (recall that $h_{11} = h_{12} = h_{21} = h_{22} = h_{31} = \ldots = h_{3\nu} = 1$) ensure that summing \eqref{before-d} over all $d$ (and all such combinations of $d_{ij}$) yields a convergent sum. Thus the bound \eqref{before-d} remains true for the number of all $\textbf{x} \in (\Bbb{Z} \setminus \{0\})^J$ satisfying the first equation in \eqref{equation} and $X_{ij} \leq |x_{ij}| \leq 2X_{ij}$. We are currently working under the assumption $\nu \geq 1$, but this is only for notational convenience.
Indeed, if $\nu = 0$, we apply Lemma~\ref{lattice} with one of $u_1, u_2, u_3$ equal to 1, and in \eqref{sym2} we agree on the convention that the maximum of the empty set is 1. Condition \eqref{copr} is automatically satisfied in this case (the empty product being defined as 1), and hence the second line in \eqref{tau1} is not needed, so that we may define as usual the minimum of the empty set as $\infty$. With these conventions, \eqref{before-d} remains true also if $\nu = 0$. We now invoke the $N$ inequalities in \eqref{equation}. We choose $$ \bm \zeta' = (\zeta_1', \zeta_2', \zeta_3')= \Big( \frac{1}{2} - \frac{1}{5 h_{3J_3}}, \frac{1}{2} - \frac{1}{5 h_{3J_3}}, \frac{2}{5h_{3J_3}}\Big)$$ and \begin{equation}\label{tau1final} \bm \tau^{(1)} = \big(1 - h_{01}\zeta''_0, \ldots, 1 - h_{kJ_k}\zeta''_k\big) \end{equation} where $\zeta''_0 = 0$ and $\bm\zeta'' = (\zeta_1'', \zeta_2'', \zeta_3'')$ is given by \begin{equation*} \bm\zeta'' = \begin{cases} (1/3, 1/3, 1/3), & h_{3J_3} = 1,\\ (1/2, 1/2, 0), & h_{3J_3} > 1. \end{cases} \end{equation*} Then ${\bm \tau}^{(1)}$ satisfies \eqref{tau1}. By \eqref{1a}, there exists ${\bm \sigma}^{(1)} \in \Bbb{R}_{> 0}^N$ with \begin{equation}\label{sigma} |{\bm \sigma}^{(1)}|_1 \leq 1, \quad \mathscr{A}_1 {\bm \sigma}^{(1)} = {\bm \tau}^{(1)}. \end{equation} Such a vector also exists if ${\bm \tau}^{(1)}$ is replaced by ${\bm \tau}= (1 - h_{01}\zeta_0', \ldots, 1 - h_{3J_3}\zeta'_3)$.
Now, taking suitable combinations of the $N$ inequalities of the second condition in \eqref{equation}, we see that every $\textbf{x}$ satisfying these also satisfies $$\prod_{ij} |x_{ij}|^{\tau_{ij}^{(1)}} \leq B \textbf{y}^{- {\bm \tau}^{(1)}}, \quad \prod_{ij} |x_{ij}|^{1 - h_{ij} \zeta_i'} \leq B \prod_{ij} y_{ij}^{h_{ij}\zeta_i' - 1}.$$ Define \begin{equation*} \begin{split} {\bm \zeta}^{\ast} & = \Big(\zeta_1' - \frac{1}{2} \mu \zeta_3', \zeta_2' - \frac{1}{2} \mu \zeta_3', \ \zeta_3'(1 + \mu) \Big) = \Big( \frac{1}{2} - \frac{1}{5(1+\mu) h_{3J_3}}, \frac{1}{2} - \frac{1}{5 (1+\mu)h_{3J_3}}, \frac{2}{5(1+\mu)h_{3J_3}}\Big) \end{split} \end{equation*} with $\mu$ as in \eqref{defmu} and $\tilde{\bm \tau} = (1 - h_{ij} \zeta_i^{\ast})_{ij}$. We summarize our findings in the following lemma. \begin{lemma}\label{hypo1} In the situation of equation~\eqref{thisequation}, suppose that $\mathbf{b}, \mathbf{y} \in \Bbb{N}^J$, $1 \leq H \leq B$, $0 < \alpha, \lambda \leq 1$, $\tau_{\ast} \coloneqq \min_{ij}(\tau_{ij}^{(1)}, 1-h_{ij}\zeta_i') > 0$. Let ${\bm \zeta}$ be as in Hypothesis~\ref{H1} and ${\bm \tau}^{(2)} \in \Bbb{R}_{> 0}^J$ as in \eqref{tau1}. 
Then \begin{equation}\label{X1} \begin{split} N_{\textbf{b}, \textbf{b} \cdot \textbf{y}}(B, H, \lambda) \ll& |\textbf{b}|^{\varepsilon + \alpha} \Big( \langle \textbf{y} \rangle^{-\tau_{\ast}} \big( \textbf{b}^{-{\bm \tau}^{(1)}} + \textbf{b}^{-{\tilde{\bm \tau}}} \big) B\Big)^{1-\alpha} \left.\sum_{\textbf{X}}\right.^{\ast}\Big( \textbf{X}^{{\bm \tau}^{(2)}\alpha} + \prod_{ij} X_{ij}^{(1 - h_{ij} \zeta_i)\alpha} \Big) \end{split} \end{equation} where $\textbf{X} = (X_{ij})$ and the asterisk indicates that each $X_{ij} = 2^{\xi_{ij}}$ runs over powers of 2 and is subject to $\prod_{ij} X_{ij}^{\alpha^\nu_{ij}} \leq B$ for $1 \leq \nu \leq N$ and at least one of the inequalities \begin{equation*} \min_{ij} X_{ij} \leq H, \quad \quad \min_{1 \leq i \leq k} \prod_{j = 1}^{J_i} X_{ij}^{h_{ij}} < \Bigl(\max_{1 \leq i \leq k} \prod_{j = 1}^{J_i} (2X_{ij})^{h_{ij}}\Bigr)^{1-\lambda}. \end{equation*} \end{lemma} In a much simpler way, we derive the continuous analogue \begin{equation}\label{X2} \int_{\mathscr{S}_{\textbf{y}}(B, H, \lambda)} \prod_{ij} x_{ij}^{-h_{ij}\zeta_i} \,{\mathrm d}\mathbf{x} \ll \big(\langle \textbf{y} \rangle^{-\tau^{\dag}} B\big)^{1-\alpha} \left.\sum_{\textbf{X}}\right.^{\ast}\prod_{ij} X_{ij}^{(1 - h_{ij} \zeta_i)\alpha} \end{equation} with $\tau^{\dag} = \min_{ij}(1-h_{ij} \zeta_i ) > 0$ and the sum is subject to the same conditions. \\ As mentioned above, we will choose $\alpha$ in \eqref{X1} very small. The key property of ${\bm \tau^{(1)}}$ and ${\tilde{\bm \tau}}$ is that all their entries are $\geq 1/2$ where equality is only possible for ${\bm \tau^{(1)}}$ at indices $(ij)$ with $i\in \{1, 2\}$ if $h_{3J_3} \geq 2$. 
Since $|S_{\rho}| \geq 2$ for all $1 \leq \rho \leq r$, we conclude that the conditions $$C_1\big((1-\alpha){\bm \tau^{(1)}}\big), \quad C_1\big((1-\alpha)\tilde{\bm \tau}\big)$$ in \eqref{3} hold for sufficiently small $\alpha > 0$ provided that \begin{equation}\label{fail} \max_{ij} h_{ij} = 1 \,\,\text{or there exists no $\rho$ with } S_{\rho} = \{(i_1, j_1), (i_2, j_2)\}, i_1, i_2 \in \{1, 2\}. \end{equation} We now transform the $X$-sums in \eqref{X1} and \eqref{X2}. For an arbitrary vector ${\bm \tau} \in \Bbb{R}_{\geq 0}^J$, we rewrite a sum $\sum^{\ast}_{\textbf{X}} \textbf{X}^{{\bm \tau}\alpha}$ of the type appearing in \eqref{X1} and \eqref{X2} as \begin{equation}\label{typical} \underset{{\bm \xi} \in \Bbb{N}_0^J}{\left.\sum \right.^{\ast}} B^{\alpha \, \tilde{\bm \xi}^{\top} {\bm \tau}}, \quad\quad \tilde{\bm \xi} = \frac{\log 2}{\log B} {\bm \xi}, \end{equation} and now $\sum^{\ast}$ indicates that the sum is subject to \begin{equation}\label{poly1} \mathscr{A}_1^{\top} \tilde{\bm \xi} \leq (1, \ldots, 1)^{\top} \in \Bbb{R}^N \end{equation} (the inequality being understood componentwise) and at least one of the inequalities \begin{align}\label{poly2} & \tilde{\xi}_{ij} \leq \frac{\log H}{\log B} \quad \text{for some } i, j,\\ \label{poly3} & \min_{1 \leq i \leq k}\sum_{j=1}^{J_i} \tilde{\xi}_{ij} h_{ij} < \max_{1 \leq i \leq k}\sum_{j=1}^{J_i} \Big(\tilde{\xi}_{ij} +\frac{\log 2}{\log B}\Big) h_{ij}(1-\lambda). \end{align} For future reference, we note that \begin{equation}\label{ref} \max_{1 \leq i \leq k}\sum_{j=1}^{J_i} \Big(\tilde{\xi}_{ij} +\frac{\log 2}{\log B}\Big) h_{ij}(1-\lambda) = \max_{1 \leq i \leq k}\sum_{j=1}^{J_i} \tilde{\xi}_{ij} h_{ij}(1-\lambda) + O\Big(\frac{1}{\log B}\Big). 
\end{equation} For $0 \leq i \leq k$, $1 \leq j \leq J_i$, $0 < \lambda \leq 1$ and a permutation $\pi \in S_k$, we consider the closed, convex polytopes \begin{equation}\label{polytope} \begin{split} \mathscr{P} & = \{ {\bm \psi} \in \Bbb{R}^J : {\bm \psi} \geq 0, \, \mathscr{A}_1^{\top} {\bm \psi} \leq (1, \ldots, 1)^{\top}\},\\ \mathscr{P}_{ij}& = \{ {\bm \psi} \in \mathscr{P} : \psi_{ij} = 0\},\\ \mathscr{P}(\lambda, \pi) &= \Big\{ {\bm \psi} \in \mathscr{P} : \sum_{j=1}^{J_{\pi(1)}} \psi_{\pi(1), j} h_{\pi(1), j} \leq \ldots \leq \sum_{j=1}^{J_{\pi(k)}} \psi_{\pi(k), j} h_{\pi(k), j}, \\ & \quad\quad\quad\quad\quad \sum_{j=1}^{J_{\pi(1)}} \psi_{\pi(1), j} h_{\pi(1), j} \leq(1-\lambda) \sum_{j=1}^{J_{\pi(k)}} \psi_{\pi(k), j} h_{\pi(k), j}\Big\}. \end{split} \end{equation} We assume that \begin{equation}\label{simplex1} C_2({\bm \tau}) \colon \quad \max\{ {\bm \psi}^{\top} {\bm \tau} : {\bm \psi} \in \mathscr{P}\} = 1. \end{equation} The intersection of the hyperplane $\mathscr{H} \colon {\bm \psi}^{\top} {\bm \tau} = 1$ with any of the above polytopes is again a closed convex polytope, and we assume that the dimensions satisfy \begin{equation}\label{simplex2} C_3({\bm \tau}) \colon \quad \begin{array}{l} \dim(\mathscr{H} \cap \mathscr{P} ) \leq c_2,\\ \dim(\mathscr{H} \cap \mathscr{P}_{ij} ) \leq c_2 - 1, \quad 0 \leq i \leq k, 1 \leq j \leq J_i,\\ \dim(\mathscr{H} \cap \mathscr{P}(\lambda, \pi) ) \leq c_2 - 1, \quad \pi \in S_k. \end{array} \end{equation} With this notation and the assumptions \eqref{simplex1} and \eqref{simplex2}, we return to \eqref{typical}. Clearly the sum has $O((\log B)^J)$ terms, so the contribution of ${\bm \xi}$ with $$\tilde{\bm \xi}^{\top} {\bm \tau} \leq 1 - \frac{J \log\log B}{\alpha \log B} $$ to \eqref{typical} is $O(B^{\alpha})$. 
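To make the last assertion explicit: by \eqref{poly1} every admissible $\tilde{\bm \xi}$ lies in a fixed bounded region, so there are $O((\log B)^J)$ admissible ${\bm \xi}$, and each term with $\tilde{\bm \xi}^{\top} {\bm \tau} \leq 1 - J \log\log B/(\alpha \log B)$ satisfies

```latex
B^{\alpha \, \tilde{\bm \xi}^{\top} {\bm \tau}}
  \leq B^{\alpha} \, B^{- J \log\log B / \log B}
  = B^{\alpha} \, e^{- J \log\log B}
  = B^{\alpha} (\log B)^{-J}.
```

Multiplying by the number of terms gives the stated contribution $O(B^{\alpha})$.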
By \eqref{simplex1}, we may now restrict to \begin{equation}\label{restrict} 1 - \frac{J \log\log B}{\alpha \log B} \leq \tilde{\bm \xi}^{\top} {\bm \tau} \leq 1 \end{equation} in the sense that \begin{equation}\label{sensethat} \underset{{\bm \xi} \in \Bbb{N}_0^J}{\left.\sum \right.^{\ast}} B^{\alpha \, \tilde{\bm \xi}^{\top} {\bm \tau}} \ll B^{\alpha}\big(1 +\#\mathscr{X}_1 + \#\mathscr{X}_2\big) \end{equation} where $$\mathscr{X}_1 = \{{\bm \xi} \in \Bbb{N}_0^J : \eqref{poly1}, \eqref{poly2}, \eqref{restrict} \} , \quad \mathscr{X}_2 = \{{\bm \xi} \in \Bbb{N}_0^J : \eqref{poly1}, \eqref{poly3}, \eqref{restrict} \}.$$ We define $$\mathscr{Y}_1 = \{{\bm \xi} \in \Bbb{R}_{\ge 0}^J : \eqref{poly1}, \eqref{poly2}, \eqref{restrict} \} , \quad \mathscr{Y}_2 = \{{\bm \xi} \in \Bbb{R}_{\ge 0}^J : \eqref{poly1}, \eqref{poly3}, \eqref{restrict} \}$$ and bound $\#\mathscr{X}_1$ resp.\ $\#\mathscr{X}_2$ by the Lipschitz principle, i.\,e.,~ by the volume and the volume of the boundary of $\mathscr{Y}_1$ resp.\ $\mathscr{Y}_2$ (or a superset thereof). By the third condition in \eqref{simplex2} as well as \eqref{ref} and \eqref{restrict} we see that $\mathscr{Y}_2$ is contained in an $O_{\alpha}(\log\log B)$ neighborhood of a union of polytopes of dimension at most $c_2-1$ and side lengths $O(\log B)$, so that $$\#\mathscr{X}_2 \ll_{\alpha, \lambda} (\log B)^{c_2-1}(\log\log B)^{J-(c_2-1)} \ll (\log B)^{c_2 - 1+\varepsilon}.$$ Similarly, by the first two conditions in \eqref{simplex2} and \eqref{restrict} we see that $\mathscr{Y}_1$ is contained in an $O_{\alpha}(\log\log B)$ neighborhood of a union of parallelepipeds of dimension at most $c_2$, where at most $c_2-1$ of the side lengths of each parallelepiped are of size $O(\log B)$ and the remaining ones (if any) are of size $O(\log H)$.
We conclude $$\#\mathscr{X}_1 \ll_{\alpha} (\log B)^{c_2-1}(\log H + \log\log B) (\log\log B)^{J-c_2} \ll (\log B)^{c_2 - 1+\varepsilon} (1 + \log H).$$ We substitute the bounds for $\#\mathscr{X}_1$, $\#\mathscr{X}_2$ into \eqref{sensethat} and use this in \eqref{X1} and \eqref{X2}. From Lemma~\ref{hypo1} we conclude the following result. \begin{prop}\label{propH2} In the situation of equation~\eqref{thisequation}, let $\lambda$, ${\bm \zeta}$ be as in Hypothesis~\ref{H1}. Define the matrix $\mathscr{A}_1$ as in \eqref{newA1} and the polytopes $\mathscr{P}, \mathscr{P}_{ij}, \mathscr{P}(\lambda, \pi)$ as in \eqref{polytope}. Choose ${\bm \tau}^{(2)}$ satisfying \eqref{tau1}. Suppose that \eqref{fail} holds and that the conditions \begin{equation}\label{ass2} C_2( {\bm \tau}^{(2)}), \quad C_3( {\bm \tau}^{(2)}), \quad C_2((1 - h_{ij} \zeta_{i})_{ij}), \quad C_3((1 - h_{ij} \zeta_{i})_{ij}) \end{equation} hold as in \eqref{simplex1}, \eqref{simplex2}. Then Hypothesis~\ref{H2} is true. \end{prop} Checking condition \eqref{ass2} amounts to solving a linear program. In principle this can be done by hand (we show this in a special case in Appendix~\ref{A}), but a straightforward computer-assisted verification is more time-efficient. We can replace \eqref{fail} by the following condition: there exist vectors ${\bm \tau}^{(1)} \in \Bbb{R}^J$, ${\bm \sigma} \in \Bbb{R}^N$ satisfying \eqref{tau1final} and \eqref{sigma} such that $C_1({\bm \tau}^{(1)})$ holds. \section{The transition method}\label{sec8} In this section, we describe a method that derives an asymptotic formula for $N(B)$ as in \eqref{manin} from the input provided by Hypotheses~\ref{H1} and \ref{H2}. Our main result will be formulated at the end of the section. In the interest of brevity, we now choose $b_1=\ldots=b_k=1$ in \eqref{torsor}. No extra difficulties arise should one wish to handle the more general case, but a more elaborate notation would be needed.
All equations that occur in the examples treated in this paper may be interpreted to have coefficients $1$ only. We begin with some more notation. We continue to use the vector operations introduced in Section~\ref{dioph}. In addition, if $\mathscr{R} \subseteq \Bbb{R}^n$ and $\mathbf{x} \in \Bbb{R}^n$, then $\mathbf{x}\cdot \mathscr{R} = \{\mathbf{x} \cdot \mathbf{y} : \mathbf{y} \in \mathscr{R}\} \subseteq \Bbb{R}^n$. For $\textbf{v}= (v_1, \ldots, v_n) \in \Bbb{R}^n$, we write \begin{equation}\label{tilde} \widetilde{\textbf{v}} = (2^{v_1}, \ldots, 2^{v_n}) \in \Bbb{R}^n. \end{equation} For $\mathbf{g} \in \Bbb{N}^r$, we write $\mu(\mathbf{g}) = \prod_{\rho=1}^r \mu(g_\rho).$ We write $\mathbf{1} = (1, \ldots, 1)$, the dimension of the vector being understood from the context. For $0 < \Delta < 1$ let $f_{\Delta} \colon[0, \infty) \rightarrow [0, 1]$ be a smooth function with \begin{equation}\label{smooth0} \text{supp}(f_{\Delta}) \subseteq [0, 1 + \Delta), \quad f_{\Delta} = 1 \text{ on } [0, 1], \quad \frac{d^j}{dx^j} f_{\Delta}(x) \ll_j \Delta^{-j} \end{equation} whose Mellin transform $\widehat{f}_{\Delta}$ obeys, once $\delta_3>0$ and $A\geq 0$ are fixed, the inequality \begin{equation}\label{smooth} \frac{\,{\mathrm d}^j}{\,{\mathrm d} s^j} \widehat{f}_{\Delta}(s) \ll_{j, A, \delta_3} \frac{(1+\Delta |s|)^{-A}}{|s|} \end{equation} for all $j \in \Bbb{N}_0$, uniformly in $\delta_3 \leq \Re s < 2$. A construction of $f_{\Delta}$ is given in \cite[(2.3)]{BBS1}. 
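For readers who want a concrete instance of such a cutoff, the following sketch (our own illustration, not the construction from \cite{BBS1}, and we do not verify the Mellin bound \eqref{smooth} here) glues the constant values $1$ on $[0,1]$ and $0$ on $[1+\Delta, \infty)$ via the standard $C^{\infty}$ transition function built from $t \mapsto e^{-1/t}$; the chain rule then yields derivative bounds of size $\Delta^{-j}$ as in \eqref{smooth0}.

```python
import math

def bump(t):
    """C^infinity glue function: exp(-1/t) for t > 0, and 0 for t <= 0."""
    return math.exp(-1.0 / t) if t > 0 else 0.0

def smoothstep(t):
    """Smooth transition: equals 0 for t <= 0, equals 1 for t >= 1."""
    g, h = bump(t), bump(1.0 - t)
    return g / (g + h) if g + h > 0 else 0.0

def f_delta(x, delta):
    """Smooth cutoff: f = 1 on [0, 1], supported in [0, 1 + delta)."""
    return smoothstep((1.0 + delta - x) / delta)

# Sample values for Delta = 0.1: plateau, edge of plateau, outside support.
print(f_delta(0.5, 0.1), f_delta(1.0, 0.1), f_delta(1.2, 0.1))  # -> 1.0 1.0 0.0
```

Differentiating $j$ times sends each derivative through the rescaling $x \mapsto (1+\Delta-x)/\Delta$, which is what produces the factor $\Delta^{-j}$.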
From \eqref{smooth}, we infer the useful estimate \begin{equation}\label{useful} \mathscr{D}\Bigl(\mathbf{s}^{\mathbf{a}} \prod_{\nu=1}^N \widehat{f}_{\Delta}(s_{\nu})\Big) \ll \Delta^{-\| \mathbf{a}\|_1 - c} |\mathbf{s}|^{-c} \langle \textbf{s} \rangle^{-1} \end{equation} for $\mathbf{s} = (s_1, \ldots, s_{N}) \in \Bbb{C}^N$ with $2>\Re s_{\nu} \geq \delta_3 > 0$, $\mathbf{a} \in \Bbb{N}_0^N$, $c \geq 1$ and any linear differential operator $\mathscr{D}$ with constant coefficients in $s_1, \ldots, s_N$, the implied constant being dependent on $\textbf{a}, N, c, \mathscr{D}$. We write $\int^{(n)}$ for an iterated $n$-fold Mellin--Barnes integral. The lines of integration will be clear from the context or otherwise specified in the text. If all $n$ integrations are over the same line $(c)$, then we write this as $\int_{(c)}^{(n)}$. We continue to work subject to the conditions \eqref{1c}, \eqref{1a}. Also, we suppose that Hypotheses~\ref{H1} and \ref{H2} are available to us. With $\beta_i$ as in Hypothesis~\ref{H1} and $S_{\rho}$ as in \eqref{gcd}, we suppose that there is some $\delta_4 >0$ with \begin{equation}\label{1b} \sum_{(i,j) \in S_{\rho}} (1 - \beta_i h_{ij}) \geq 1 + \delta_4 \,\,\, (1 \leq \rho \leq r) \quad \text{ and } \quad \beta_ih_{ij} \leq 1 \,\, (1 \leq i \leq k, 1 \leq j \leq J_i). \end{equation} In order to efficiently work with the asymptotic formula in Hypothesis~\ref{H1}, it is necessary to rewrite the singular integral as a Mellin transform. With ${\bm \zeta}$ as in Hypothesis~\ref{H1} (in particular satisfying \eqref{zeta1}), we assume that \begin{equation}\label{J-cond} J_i \geq 2 \quad \text{whenever}\quad \zeta_i \geq 1/2. \end{equation} We also define \begin{equation*} J^* = J_1 + \ldots +J_k \end{equation*} for the number of variables appearing in the torsor equation. \begin{lemma}\label{todo} Let $\mathbf{b} \in (\Bbb{Z}\setminus\{0\})^k$ and $\mathbf{X} \in [1/2, \infty)^{J}$. 
For $1 \leq i \leq k$, put \begin{equation}\label{Ki} \mathscr{K}_i(z) = \begin{cases} \Gamma(z) \cos(\pi z/2), & h_{ij} \text{ odd for some } 1 \leq j \leq J_i,\\ \Gamma(z) \exp({\rm i} \pi z/2), &h_{ij} \text{ even for all } 1 \leq j \leq J_i. \end{cases} \end{equation} Then, on writing $z_k = 1 - z_1 - \ldots - z_{k-1}$, one has \begin{equation*} \mathscr{I}_{\mathbf{b}}(\mathbf{X}) =\frac{2^{J^*}}{\pi} \langle \textbf{X}_0\rangle\int_{(\zeta_1)} \cdots \int_{(\zeta_{k-1})} \prod_{i=1}^k \frac{ \mathscr{K}_i(z_i) }{b_i^{z_i}} \prod_{j=1}^{J_i} \Bigl(X_{ij}^{1 - h_{ij} z_i } \frac{1 - 2^{h_{ij}z_i-1}}{1 - h_{ij} z_i }\Bigr) \frac{\,{\mathrm d} z_1 \cdots \,{\mathrm d} z_{k-1}}{(2\pi {\rm i})^{k-1}} . \end{equation*} \end{lemma} Note that \eqref{zeta1} implies that $\Re z_k = \zeta_k$. \begin{proof} We start with the absolutely convergent Mellin identity $$e(w) = \int_{\mathscr{C}} \Gamma(s) \exp\left(\frac{1}{2}\text{sgn}(w) {\rm i} \pi s\right) |2\pi w|^{-s} \frac{\,{\mathrm d} s}{2\pi {\rm i}}$$ for $w \in \Bbb{R} \setminus \{0\}$ and $\mathscr{C}$ the contour \begin{equation*} \textstyle(-1-{\rm i}\infty, -1-{\rm i}] \cup [-1-{\rm i}, \frac{1}{k} - {\rm i}] \cup [ \frac{1}{k}- {\rm i}, \frac{1}{k} + {\rm i}] \cup [ \frac{1}{k} + {\rm i}, -1 + {\rm i}] \cup [-1 + {\rm i}, -1 + {\rm i} \infty), \end{equation*} which can simply be checked by moving the contour to the left and comparing power series. Integrating this over $\mathscr{Y}$ as in \eqref{E2} based on $$\int_{\frac{1}{2} Y \leq y \leq Y} y^{-hs} \,{\mathrm d} y = \frac{1- 2^{hs-1}}{1-hs} Y^{1-hs}$$ and using the definition \eqref{E4}, we obtain \begin{equation}\label{twosided} I_i(b_i \beta, \textbf{X}_i) = 2^{J_i} \int_{\mathscr{C}}\frac{ \mathscr{K}_i(z_i) }{(2\pi|b_i\beta|)^{z_i}} \prod_{j=1}^{J_i} \Bigl(X_{ij}^{1 - h_{ij} z_i } \frac{1 - 2^{h_{ij}z_i-1}}{1 - h_{ij} z_i }\Bigr) \frac{\,{\mathrm d} z_i}{2\pi {\rm i}} \end{equation} for every $i$.
Note that $\text{sgn}(\textbf{y}_i^{\textbf{h}_i})$ is always 1 if and only if $h_{ij}$ is even for all $1 \leq j \leq J_i$. At this point, we can straighten the contour and replace it with $\Re z_i = \zeta_i$. The expression is still absolutely convergent, provided that \eqref{J-cond} holds. We insert this formula into \eqref{E5} for $i = 1, \ldots, k-1$, obtaining \begin{displaymath} \begin{split} \mathscr{I}_{\textbf{b}}(\textbf{X}) = \langle\textbf{X}_0\rangle \int_{-\infty}^{\infty} &2^{J_1 + \ldots + J_{k-1}} \int^{(k-1)}_{\Re z_i = \zeta_{i}} \prod_{i=1}^{k-1} \frac{ \mathscr{K}_i(z_i) }{(2\pi|b_i|)^{z_i}} \prod_{j=1}^{J_i} \Bigl(X_{ij}^{1 - h_{ij} z_i } \frac{1 - 2^{h_{ij}z_i-1}}{1 - h_{ij} z_i }\Bigr) \frac{\,{\mathrm d} \textbf{z}}{(2\pi {\rm i})^{k-1}} \\ &\times I_{k}(b_k\beta, \textbf{X}_k) |\beta|^{-z_1 - \ldots - z_{k-1}} \,{\mathrm d}\beta . \end{split} \end{displaymath} The integral in $\beta$ is still absolutely convergent, by \eqref{E3} and \eqref{zeta1}. It is the two-sided Mellin transform of $ I_{k}(b_k\beta, \textbf{X}_k)$ in $\beta$ at $z_k = 1- z_1- \ldots - z_{k-1}$. An evaluation can be read off from \eqref{twosided} by Mellin inversion, and the lemma follows. \end{proof} We are now prepared to describe our method in detail. \subsection{Step 1: Initial manipulations}\label{sec51} Let $\chi \colon (\Bbb{Z} \setminus \{0\})^J \rightarrow [0, 1]$ be the characteristic function of the set of solutions to the torsor equation \eqref{torsor} subject to $b_1 = \ldots = b_k = 1$, and let $\psi \colon (\Bbb{Z} \setminus \{0\})^J \rightarrow [0, 1]$ be the characteristic function of the set of $J$-tuples of nonzero integers satisfying the coprimality conditions \eqref{gcd}. For $1 \leq \nu \leq N$, let \begin{equation}\label{defP} P_{\nu}(\textbf{x}) = \prod_{ij} |x_{ij}|^{\alpha_{ij}^{\nu}} \end{equation} denote the monomials appearing in the height conditions \eqref{height}. We start with some smoothing.
Let $0 < \Delta < 1/10$ and define $$ F_{\Delta, B}(\mathbf{x}) = \prod_{\nu = 1}^{N} f_{\Delta} \left(\frac{P_{\nu}(\mathbf{x})}{{B}}\right). $$ Then the counting function $$N_{\Delta}({B}) = \sum_{\mathbf{x} \in (\Bbb{Z}\setminus\{0\})^J} \psi(\mathbf{x}) \chi(\mathbf{x}) F_{\Delta, B}(\mathbf{x}) $$ satisfies \begin{equation}\label{sandwich} N_{\Delta}({ B}(1 - \Delta)) \leq N(B) \leq N_{\Delta}({B}). \end{equation} We remove the coprimality conditions encoded in $\psi$ by M\"obius inversion. As in \cite[Lemma 2.1]{BBS2}, we have $$N_{\Delta}({B}) = \sum_{\mathbf{g} \in \Bbb{N}^r} \mu(\mathbf{g}) \sum_{\mathbf{x} \in (\Bbb{Z}\setminus\{0\})^J} \chi({\bm \gamma} \cdot \mathbf{x}) F_{\Delta, B}({\bm \gamma} \cdot \mathbf{x}),$$ where for given $\mathbf{g} \in \Bbb{N}^r$, we wrote \begin{equation}\label{gamma} {\bm \gamma} = (\gamma_{ij}) \in\Bbb{N}^J, \quad \gamma_{ij} = {\rm lcm}\{g_{\rho} \mid (i, j) \in S_{\rho}\} \end{equation} for $0 \leq i \leq k$, $1 \leq j \leq J_i$. For later purposes, we state the following elementary lemma. \begin{lemma}\label{kgV} For ${\bm \gamma}\in \Bbb{N}^J$ as in \eqref{gamma}, $\delta > 0$, $1 \leq \rho \leq r$, and ${\bm \eta} = (\eta_{ij}) \in \Bbb{R}^J_{\geq 0}$, the series $$\sum_{\textbf{g} \in \Bbb{N}^r} {\bm \gamma}^{-{\bm \eta}} g_{\rho}^{\delta}$$ is convergent provided that $$\sum_{(i, j) \in S_{\rho}} \eta_{ij} > 1 + \delta$$ holds for all $1 \leq \rho \leq r$. \end{lemma} \begin{proof} Suppose that $\sum_{(i, j) \in S_{\rho}} \eta_{ij} \geq 1+\delta+ \delta_0$ for all $\rho$ and some $\delta_0 > 0$. The sum in question can be written as an Euler product, and a typical Euler factor has the form $$\sum_{{\bm \alpha} \in \Bbb{N}^r_0} p^{ \displaystyle\delta \alpha_\rho -\sum_{i, j} \eta_{ij}\max_{t : (i, j) \in S_t} \alpha_t} = 1 + O\Big(\sum_{\alpha=1}^{\infty} \frac{(1+\alpha)^r}{p^{\alpha(1+ \delta_0)}}\Big).$$ The statement is now clear.
\end{proof} For $1 \leq T \leq B$, we define $$N_{\Delta, T}({B}) = \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \sum_{\mathbf{x} \in (\Bbb{Z}\setminus\{0\})^J} \chi({\bm \gamma} \cdot \mathbf{x}) F_{\Delta, B}({\bm \gamma} \cdot \mathbf{x}).$$ By \eqref{2a}, \eqref{3} (recall $\Delta \leq 1/10$) and Lemma~\ref{kgV}, and by Rankin's trick, \begin{equation}\label{error1} \begin{split} |N_{\Delta, T}({B}) - N_{\Delta}({B}) | &\leq \sum_{|\mathbf{g}| > T } N_{{\bm \gamma}, {\bm \gamma}}(2B, 2B, 1) \ll B(\log B)^{c_2 + \varepsilon} \sum_{|\mathbf{g}| > T } {\bm \gamma}^{-{\bm \eta}}\\ & \leq B(\log B)^{c_2 + \varepsilon}\sum_{\mathbf{g} } {\bm \gamma}^{-{\bm \eta}} \Bigl(\frac{|\mathbf{g}|}{T}\Bigr)^{\delta_2 - \varepsilon}\ll B(\log B)^{c_2 + \varepsilon} T^{-\delta_2 }. \end{split} \end{equation} Next we write each factor $f_{\Delta}$ in the definition of $F_{\Delta, B}$ as its own Mellin inverse, so that $$N_{\Delta, T}({B}) = \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \sum_{\mathbf{x} \in (\Bbb{Z}\setminus\{0\})^J} \frac{ \chi({\bm \gamma} \cdot \mathbf{x}) }{{\bm \gamma}^\mathbf{v} }\prod_{ij} |x_{ij}|^{-v_{ij}} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}$$ where \begin{equation}\label{vs} \mathbf{v} = (v_{ij}) = \mathscr{A}_1 \mathbf{s} \in \Bbb{C}^J \end{equation} and $\mathscr{A}_1 = (\alpha_{ij}^{\nu}) \in \Bbb{R}^{J\times N}$ is as before. 
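As a toy illustration of the M\"obius inversion used above to remove coprimality conditions (a hypothetical one-dimensional analogue, unrelated to the torsor equation itself): the number of pairs $1 \leq x, y \leq n$ with $\gcd(x, y) = 1$ equals $\sum_{g \leq n} \mu(g) \lfloor n/g \rfloor^2$, which the following sketch checks against direct counting.

```python
from math import gcd

def mobius(n):
    """Moebius function mu(n) via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n is divisible by p^2
            result = -result
        p += 1
    return -result if n > 1 else result

def coprime_pairs_direct(n):
    """Count pairs 1 <= x, y <= n with gcd(x, y) = 1 by brute force."""
    return sum(1 for x in range(1, n + 1) for y in range(1, n + 1)
               if gcd(x, y) == 1)

def coprime_pairs_mobius(n):
    """Same count via Moebius inversion: sum_g mu(g) * floor(n/g)^2."""
    return sum(mobius(g) * (n // g) ** 2 for g in range(1, n + 1))

print(coprime_pairs_direct(50) == coprime_pairs_mobius(50))  # -> True
```

In the text, the role of $g$ is played by the vector $\mathbf{g}$ and the rescaling $x \mapsto gx$ by ${\bm \gamma} \cdot \mathbf{x}$, with the truncation $|\mathbf{g}| \leq T$ controlled by Lemma~\ref{kgV}.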
By partial summation, we obtain \begin{displaymath} \begin{split} \sum_{\mathbf{x} \in (\Bbb{Z}\setminus\{0\})^J} \frac{ \chi({\bm \gamma} \cdot \mathbf{x}) }{{\bm \gamma}^\mathbf{v} } \prod_{ij} |x_{ij}|^{-v_{ij}} & = \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} v_{ij}\Bigr)\int_{[1, \infty)^J} \sum_{0 < |x_{ij}| \leq X_{ij}} \chi({\bm \gamma} \cdot \mathbf{x}) \mathbf{X}^{-\mathbf{v} - \mathbf{1}} \,{\mathrm d}\mathbf{X}\\ &= \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\int_{[1, \infty)^J} \sum_{\frac{1}{2} X_{ij} < |x_{ij}| \leq X_{ij}} \chi({\bm \gamma} \cdot \mathbf{x}) \mathbf{X}^{-\mathbf{v} - \mathbf{1}} \,{\mathrm d}\mathbf{X}, \end{split} \end{displaymath} so that \begin{equation*} \begin{split} N_{\Delta, T}({B}) = \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\int_{[1, \infty)^J}\frac{\mathscr{N}_{{\bm \gamma}^{\ast}}(\mathbf{X})}{\mathbf{X}^{\mathbf{v} + \mathbf{1}}} \,{\mathrm d}\mathbf{X} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}} \end{split} \end{equation*} in the notation of Hypothesis~\ref{H1}, where \begin{equation}\label{gammaast} {\bm \gamma}^{\ast} = \Big(\prod_{j=1}^{J_i} \gamma_{ij}^{h_{ij}}\Big)_{1 \leq i \leq k} \in \Bbb{N}^k. \end{equation} \subsection{Step 2: Removing the cusps} We would like to insert the asymptotic formula from Hypothesis~\ref{H1}. This gives a meaningful error term only if $\min X_{ij}$ is not too small, and the formula is only applicable if \eqref{samesize} holds. 
Thus, for $0 < \delta<1, 0<\lambda \le 1$ we define the set $$\mathscr{R}_{\delta, \lambda} = \Bigl\{\mathbf{X} = (\textbf{X}_1, \ldots, \textbf{X}_{k}) \in [1, \infty)^J : \min_{i, j} X_{ij} \geq \big(\max_{i, j} X_{ij}\big)^{\delta}, \, \min_{1 \leq i \leq k} \textbf{X}_i^{\textbf{h}_i} \geq \big(\max_{1 \leq i \leq k} \textbf{X}_i^{\textbf{h}_i}\big)^{1-\lambda}\Bigr\}.$$ Correspondingly we put \begin{equation}\label{defNDeltaTdeltalambda} N_{\Delta, T, \delta, \lambda} = \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\int_{\mathscr{R}_{\delta, \lambda}}\frac{\mathscr{N}_{{\bm \gamma}^{\ast}}(\mathbf{X})}{\mathbf{X}^{\mathbf{v} + \mathbf{1}}} \,{\mathrm d}\mathbf{X} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}. \end{equation} While $\lambda$ is fixed, $\delta$ is allowed to depend on $B$ and will later be chosen as a negative power of $\log B$. In particular, all subsequent estimates will be uniform in $\delta$. \begin{lemma}\label{lemma2} We have $$N_{\Delta, T}({B}) - N_{\Delta, T, \delta, \lambda} \ll T^r B(\log B)^{c_2 + \varepsilon} (\delta + (\log B)^{-1}).$$ \end{lemma} \begin{proof} This is essentially \cite[Lemma 5.1]{BBS2}. The idea is to reverse all steps from Section~\ref{sec51} and apply the bound \eqref{2a}.
By a change of variables, we have \begin{displaymath} \begin{split} N_{\Delta, T, \delta, \lambda} = & \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\sum_{{\bm \sigma} \in \{0, 1\}^J} (-1)^{|{\bm \sigma}|_1} \\ &\times \int_{ \widetilde{-{\bm \sigma}} \cdot \mathscr{R}_{\delta, \lambda}} \sum_{0< |x_{ij}| \leq X_{ij}} \chi({\bm \gamma} \cdot \mathbf{x})(\widetilde{{\bm \sigma}} \cdot \mathbf{X})^{-\mathbf{v}} \frac{\,{\mathrm d}\mathbf{X}}{\langle \textbf{X} \rangle} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}},\end{split} \end{displaymath} where we recall the notation \eqref{tilde}. By partial summation, this equals \begin{displaymath} \begin{split} & \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \Bigl(\prod_{i, j} \frac{1}{1 - 2^{-v_{ij}}}\Bigr)\sum_{{\bm \sigma} \in \{0, 1\}^J} (-1)^{|{\bm \sigma}|_1} 2^{-\sum_{ij} \sigma_{ij} v_{ij}} \sum_{ \mathbf{x} \in \widetilde{-{\bm \sigma}} \cdot \mathscr{R}_{\delta, \lambda}} \frac{ \chi({\bm \gamma} \cdot \mathbf{x})}{{\bm \gamma}^{\mathbf{v}}\mathbf{x}^{\mathbf{v}}} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}. 
\end{split} \end{displaymath} We conclude that \begin{displaymath} \begin{split} |N_{\Delta, T}({B}) - N_{\Delta, T, \delta, \lambda} | \leq & \sum_{|\mathbf{g}| \leq T } \sum_{{\bm \sigma} \in \{0, 1\}^J} \Big|\int_{(1)}^{(N)} \Bigl(\prod_{i, j} \frac{1}{1 - 2^{-v_{ij}}}\Bigr) \\ &\times 2^{-\sum_{ij} \sigma_{ij} v_{ij}} \sum_{ \mathbf{x} \in (\Bbb{Z} \setminus\{0\})^J \setminus \widetilde{-{\bm \sigma}} \cdot \mathscr{R}_{\delta, \lambda}} \frac{ \chi({\bm \gamma} \cdot \mathbf{x})}{{\bm \gamma}^{\mathbf{v}}\mathbf{x}^{\mathbf{v}}} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}\Big|. \end{split} \end{displaymath} Finally we write each factor $(1 - 2^{-v_{ij}})^{-1}$ as a geometric series and apply Mellin inversion to recast the right-hand side as \begin{displaymath} \sum_{|\mathbf{g}| \leq T } \sum_{{\bm \sigma} \in \{0, 1\}^J} \sum_{\mathbf{k} \in \Bbb{N}_0^J} \sum_{ \mathbf{x} \in (\Bbb{Z} \setminus\{0\})^J \setminus \widetilde{-{\bm \sigma}} \cdot \mathscr{R}_{\delta, \lambda}} \chi({\bm \gamma} \cdot \mathbf{x})F_{\Delta, B}({\bm \gamma} \cdot (\widetilde{\textbf{k} + {\bm \sigma}} ) \cdot \mathbf{x}). \end{displaymath} Note that any $\mathbf{x} \not\in \widetilde{-{\bm \sigma}} \cdot \mathscr{R}_{\delta, \lambda}$ in the support of $F_{\Delta, B}({\bm \gamma} \cdot (\widetilde{\textbf{k} + {\bm \sigma}} ) \cdot \mathbf{x})$ satisfies $$\min_{ij} |x_{ij}| \leq ( (1+\Delta)B)^{\delta}\quad \text{or} \quad \min_{1 \leq i \leq k} \prod_{j = 1}^{J_i} |x_{ij}|^{h_{ij}} \leq \Bigl(\max_{1 \leq i \leq k} \prod_{j = 1}^{J_i} |2x_{ij}|^{h_{ij}}\Bigr)^{1-\lambda},$$ so that $$|N_{\Delta, T}({B}) - N_{\Delta, T, \delta, \lambda} | \leq 2^J \sum_{|\mathbf{g}| \leq T } \sum_{\mathbf{k} \in \Bbb{N}_0^J} N_{{\bm \gamma}, {\bm \gamma } \cdot \widetilde{\textbf{k}}}((1+\Delta)B, ((1+\Delta)B)^{\delta}, \lambda)$$ by \eqref{Hlambda}. The lemma follows from \eqref{2a}.
Note that $\delta_2^{\ast} > 0$ in \eqref{2a} ensures that the $\textbf{k}$-sum converges. \end{proof} \subsection{Step 3: The error term in the asymptotic formula} We insert Hypothesis~\ref{H1} into \eqref{defNDeltaTdeltalambda}. For convenience, we now write $\Psi_{\mathbf b}(\mathbf X) = N_{\mathbf b}(\mathbf X)- \mathscr E_{\mathbf b}\mathscr I_{\mathbf b}(\mathbf X)$. In this section, we estimate the contribution of the error $\Psi_{\mathbf b}(\mathbf X)$, which amounts to bounding $$E_{\Delta, T, \delta, \lambda} = \sum_{|\mathbf{g}| \leq T } \Bigl| \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\int_{\mathscr{R}_{\delta, \lambda}} \frac{\Psi_{{\bm \gamma}^{\ast}}(\mathbf{X})}{ \mathbf{X}^{\mathbf{v} +\mathbf{1}}} \,{\mathrm d}\mathbf{X} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}\Bigr|. $$ For $\mathbf{X} \in \mathscr{R}_{\delta, \lambda}$, we use \eqref{errorterm} and $(\min_{ij} X_{ij})^{-\delta_1} \leq \prod_{ij} X_{ij}^{-\delta\delta_1/J}$ to conclude that $$\Psi_{{\bm \gamma}^{\ast}}(\mathbf{X}) \ll {\bm \gamma}^{C\textbf{h}} \Big(\prod_{i=0}^k \prod_{j=1}^{J_i} X_{ij}^{1- h_{ij}\zeta_i + \varepsilon - \delta\delta_1/J}\Big).$$ Thus the $\mathbf{X}$-integral is absolutely convergent provided that \begin{equation}\label{provided} \Re v_{ij} > 1 - h_{ij}\zeta_i - \delta\delta_1/J \end{equation} holds for each $i, j$. We now choose appropriate contours for the $\mathbf{s}$-integral. By \eqref{vs}, the choice $\Re \mathbf{s} = {\bm \sigma} = (\sigma_{\nu}) \in\Bbb{R}_{>0}^N$ as in \eqref{1a} is admissible to ensure \eqref{provided}.
These contours also stay to the right of the poles of $\widehat{f}_{\Delta}$ at $s = 0$ (and in fact inside the validity of \eqref{smooth} and \eqref{useful} if $\delta_3$ is sufficiently small) and to the right of the poles of $(1 - 2^{-v_{ij}})^{-1}$ at $\Re v_{ij} = 0$ by \eqref{zeta1} if $\delta$ is sufficiently small. By \eqref{1a}, this ${\bm \sigma}$ satisfies $\sum \sigma_{\nu} = 1$. We now shift each $s_{\nu}$-contour to $\Re s_{\nu} = \sigma_{\nu} - \delta\delta_1/(2JA)$, where $$A = \max_{ij} \sum_\nu \alpha^\nu_{ij}.$$ Then $ \Re v_{ij} \geq 1 - h_{ij}\zeta_i - \delta\delta_1/(2J)$ in accordance with \eqref{provided}, and poles of any $ (1 - 2^{-v_{ij}})^{-1}$ or $\widehat{f}_{\Delta}(s_{\nu})$ remain to the left of the lines of integration provided that $\delta$ is less than a sufficiently small constant (it will later tend to zero as $B \rightarrow \infty$). Having shifted the $\mathbf{s}$-contour in this way, we estimate trivially. The $\mathscr{R}_{\delta, \lambda}$-integral is $\ll \delta^{-J}$, so that \begin{equation}\label{error2} \begin{split} E_{\Delta, T, \delta, \lambda}& \ll \delta^{-J} B^{1 -\frac{\delta\delta_1N}{2JA} }\sum_{|\mathbf{g}| \leq T } {\bm \gamma}^{C\textbf{h}} \int^{(N)} \Big| \langle \textbf{v} \rangle \prod_{\nu} \widehat{f}_{\Delta}(s_{\nu}) \Big| \, |\,{\mathrm d}\mathbf{s}|\\ & \ll T^{CS+r} \delta^{-J} B^{1 -\frac{\delta\delta_1N}{2JA} } \Delta^{-J+\varepsilon} \end{split} \end{equation} by \eqref{useful} (which is still applicable if $\delta_3$ is sufficiently small) with $\mathscr{D} = {\rm id}$, $c = \varepsilon$, $\| \mathbf{a} \|_1 = J$, where \begin{equation}\label{S} S = \sum_{\rho=1}^r \sum_{(i, j) \in S_{\rho}} h_{ij}. \end{equation} \subsection{Step 4: Inserting the asymptotic formula}\label{54} We now insert the main term in Hypothesis~\ref{H1} into \eqref{defNDeltaTdeltalambda}.
In order to compute this properly, we re-insert the cuspidal contribution and replace the range $\mathscr{R}_{\delta, \lambda}$ of integration with $[1,\infty)^J$. In this section, we estimate the error \begin{displaymath} E^{\ast}_{\Delta, T, \delta, \lambda} = \sum_{|\mathbf{g}| \leq T }\Bigl| \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\int_{[1, \infty)^J \setminus \mathscr{R}_{\delta, \lambda}} \frac{\mathscr{E}_{{\bm \gamma}^{\ast}} \mathscr{I}_{{\bm \gamma}^{\ast}}(\mathbf{X}) }{\mathbf{X}^{\mathbf{v} + \mathbf{1}}} \,{\mathrm d}\mathbf{X} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}\Bigr|. \end{displaymath} We interchange the $\mathbf{s}$- and $\mathbf{X}$-integral and compute the $\mathbf{s}$-integral first. Writing as before each $(1 - 2^{-v_{ij}})^{-1}$ as a geometric series, we obtain $$ \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}\mathbf{X}^{\mathbf{v} } }\Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr) \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}} = \sum_{\mathbf{k} \in \Bbb{N}_0^J}\int_{(1)}^{(N)} (\widetilde{\textbf{k}} \cdot {\bm \gamma}\cdot \mathbf{X})^{-\mathbf{v}} \langle \textbf{v} \rangle \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}, $$ and $\langle \textbf{v} \rangle \prod_{\nu} ( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}} ) $ is a linear combination of terms of the form $\prod_{\nu=1}^N s_{\nu}^{a_{\nu}} \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}$ for vectors $\mathbf{a} = (a_{\nu}) \in \Bbb{N}_0^N$ with $\| \mathbf{a} \|_1 = J$. The inverse Mellin transform of $s^a \widehat{f}_{\Delta}(s)$ is ${\tt D}^af_{\Delta}$ where ${\tt D}$ is the differential operator $f(x) \mapsto -x f'(x)$. 
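The integration-by-parts identity behind this step, namely that ${\tt D}$ corresponds to multiplication by $s$ on the Mellin side, can be checked numerically in a toy case. The sketch below uses the illustrative choice $f(x) = e^{-x}$ (so $\widehat{f}(s) = \Gamma(s)$), not the actual cutoff $f_{\Delta}$.

```python
import math

# Toy check (not part of the argument): for f(x) = exp(-x) the Mellin
# transform is Gamma(s), and the operator D: f -> -x f'(x) gives
# (Df)(x) = x exp(-x), whose Mellin transform should be s * Gamma(s).
def mellin(g, s, upper=50.0, n=100000):
    # plain midpoint rule on (0, upper); the integrand decays like exp(-x)
    h = upper / n
    return sum(g((k + 0.5) * h) * ((k + 0.5) * h) ** (s - 1)
               for k in range(n)) * h

s = 2.5
Df = lambda x: x * math.exp(-x)          # D applied to f(x) = exp(-x)
lhs = mellin(Df, s)                      # numerical Mellin transform of Df
rhs = s * math.gamma(s)                  # s * f_hat(s)
assert abs(lhs - rhs) < 1e-4
```

Iterating, the Mellin transform of ${\tt D}^a f$ is $s^a \widehat{f}(s)$, which is the form used above.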
Hence defining $$F^{(\mathbf{a})}_{\Delta, B}(\mathbf{x}) = \prod_{\nu = 1}^{N} {\tt D}^{a_{\nu}} f_{\Delta} \left(\frac{|P_{\nu}(\mathbf{x})|}{{B}}\right)$$ with $P_{\nu}$ as in \eqref{defP}, we see that $E^{\ast}_{\Delta, T, \delta, \lambda}$ is bounded by a linear combination of terms of the form \begin{displaymath} \begin{split} & \sum_{|\mathbf{g}| \leq T } \int_{[1, \infty)^J \setminus \mathscr{R}_{\delta, \lambda}} \frac{|\mathscr{E}_{{\bm \gamma}^{\ast}} \mathscr{I}_{{\bm \gamma}^{\ast}}(\mathbf{X})|}{\langle \textbf{X} \rangle} \sum_{\mathbf{k} \in \Bbb{N}_0^J} |F^{(\mathbf{a})}_{\Delta, B}(\widetilde{\textbf{k}} \cdot {\bm \gamma}\cdot \mathbf{X})|\,{\mathrm d}\mathbf{X}\\ & \ll \Delta^{-J} \sum_{|\mathbf{g}| \leq T }{\bm \gamma}^{\textbf{h}} \sum_{\mathbf{k} \in \Bbb{N}_0^J} \int_{[1, \infty)^J \setminus \mathscr{R}_{\delta, \lambda}} \Bigl(\prod_{ij} X_{ij}^{-h_{ij}\zeta_i}\Bigl)F_{0, B(1+\Delta)}(\widetilde{\textbf{k}} \cdot {\bm \gamma}\cdot \mathbf{X})\,{\mathrm d}\mathbf{X} \end{split} \end{displaymath} by Lemma~\ref{singint}, \eqref{E} and \eqref{smooth0}. By \eqref{continuous} with $\textbf{b} = (1, \ldots, 1)$, $\textbf{y} = \widetilde{\textbf{k}} \cdot {\bm \gamma}$ and $H = ((1+\Delta) B)^{\delta}$, we obtain \begin{equation}\label{error3} E^{\ast}_{\Delta, T, \delta, \lambda} \ll T^{S+r} \Delta^{-J}B(\log B)^{c_2+\varepsilon}(\delta + (\log B)^{-1}) \end{equation} with $S$ as in \eqref{S}. Again $\delta_2^{\ast} > 0$ in \eqref{continuous} ensures that the $\textbf{k}$-sum converges. 
Combining Lemma~\ref{lemma2}, \eqref{error2} and \eqref{error3} and choosing $\delta = (\log B)^{-1+\varepsilon}$, we have shown \begin{equation}\label{step5} N_{\Delta, T}({B}) = N^{(1)}_{\Delta, T}({B}) + O(T^{S+r} \Delta^{-J}B(\log B)^{c_2-1+\varepsilon}) \end{equation} where $$N^{(1)}_{\Delta, T}({B}) = \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \frac{1}{{\bm \gamma}^{\mathbf{v}}} \Bigl(\prod_{i, j} \frac{v_{ij}}{1 - 2^{-v_{ij}}}\Bigr)\int_{[1, \infty)^J } \frac{\mathscr{E}_{{\bm \gamma}^{\ast}} \mathscr{I}_{{\bm \gamma}^{\ast}}(\mathbf{X}) }{\mathbf{X}^{\mathbf{v} + \mathbf{1}}} \,{\mathrm d}\mathbf{X} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}.$$ We insert Lemma~\ref{todo} and integrate over $\mathbf{X}$. This gives \begin{displaymath} \begin{split} N^{(1)}_{\Delta, T}({B}) = \frac{2^{J^{\ast}}}{\pi} &\sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \int_{\Re z_i = \zeta_i}^{(k-1)} \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{{\bm \gamma}^{\mathbf{v}} ({\bm \gamma}^{\ast})^{\mathbf{z}}} \Bigl(\prod_{i=1}^{k} \mathscr{K}_i(z_i) \prod_{j=1}^{J_i} \frac{1 - 2^{h_{ij}z_i-1}}{1 - h_{ij} z_i } \Bigr) \\ &\times \Bigl(\prod_{i=0}^k \prod_{j=1}^{J_i} \frac{v_{ij}}{(1 - 2^{-v_{ij}})w_{ij}}\Bigr) \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{k-1}} \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}}. \end{split} \end{displaymath} where $w_{ij} = v_{ij} + h_{ij} z_i -1$ and we recall our convention $z_k = 1 - z_1 - \ldots - z_{k-1}$. If we write $\mathbf{w} = (w_{ij}) \in \Bbb{C}^J$, then by \eqref{vs} and \eqref{A2new}, we have \begin{equation}\label{zast} \mathbf{w} = \mathscr{A}_1 \mathbf{s} + \mathscr{A}_2 \mathbf{z}^{\ast}, \quad \mathbf{z}^{\ast} = (z_1, \ldots, z_{k-1}, 1). \end{equation} This explains the seemingly artificial definition of $\mathscr{A}_2$. 
We can simplify this first by recalling the definition \eqref{gammaast} of ${\bm \gamma}^{\ast}$, which implies ${\bm \gamma}^{\mathbf{v}} ({\bm \gamma}^{\ast})^{\mathbf{z}} = {\bm \gamma}^{\mathbf{w} + \mathbf{1}}.$ Next we use our convention $h_{0j} = 0$ and insert a redundant factor $2^{J_0} \prod_{j=1}^{J_0} (1 - 2^{h_{0j}z_0 - 1})$. We also write $\kappa = k-1$. In this way, we can recast $ N^{(1)}_{\Delta, T}({B})$ as \begin{displaymath} \frac{2^J}{\pi} \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{(1)}^{(N)} \int_{\Re z_i = \zeta_i}^{(\kappa)} \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{{\bm \gamma}^{\mathbf{w} + \mathbf{1}} } \Bigl(\prod_{i=1}^{k} \mathscr{K}_i(z_i)\Bigr) \frac{1}{\langle \textbf{w}\rangle } \frac{\phi(\mathbf{v})}{\phi(\mathbf{v} - \mathbf{w})} \prod_{\nu=1}^N\Bigl( \widehat{f}_{\Delta}(s_{\nu})B^{s_{\nu}}\Bigr) \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{\kappa}} \frac{\,{\mathrm d}\mathbf{s}}{(2\pi {\rm i})^{N}} \end{displaymath} where \begin{equation}\label{defphi} \phi(\mathbf{v}) = \prod_{i=0}^k \prod_{j=1}^{J_i} \frac{v_{ij}}{1 - 2^{-v_{ij}}}. \end{equation} \subsection{Step 5: Contour shifts}\label{peyre} In this section, we evaluate asymptotically $N^{(1)}_{\Delta, T}({B})$ by contour shifts. Let ${\bm \sigma} = (\sigma_{\nu}) \in \Bbb{R}_{>0}^N$ be as in \eqref{1a}. For some small $\varepsilon > 0$, we shift the $\mathbf{s}$-contour to $\Re s_{\nu} = \sigma_{\nu} + \varepsilon$ without crossing any poles. Shifting a little further to the left will pick up the poles at $\mathbf{w} = 0$, whose residues produce the main term for $N(B)$. To make this transparent, we make a change of variables as follows. By \eqref{1c} we have $\text{rk}(\mathscr{A}) =\text{rk}(\mathscr{A}_1\, \mathscr{A}_2) = R$, so we can choose $R$ linearly independent members of the linear forms $w_{ij}$ in $\mathbf{s}$ and $\mathbf{z}^{\ast} = (z_1, \ldots, z_{k-1}, 1)$, say $w^{(1)}, \ldots, w^{(R)}$, and then the remaining $w_{ij}$ are linearly dependent. 
Since also $\text{rk}(\mathscr{A}_1) = R$, we may, for fixed $\mathbf{z}$, change variables in the $\mathbf{s}$-integral by completing the $R$ functions $w^{(1)} , \ldots, w^{(R)} $ to a basis in any way such that the determinant of the Jacobian is $\pm 1$. We call the new variables $\mathbf{y} = (y_1, \ldots, y_N)$. We can also describe this in terms of matrices. We pick a maximal linearly independent set of $R$ rows $Z_{1}, \ldots, Z_{R}$ of the matrix $(\mathscr{A}_1\, \mathscr{A}_2)$. Let $Z_{R+1}, \ldots, Z_{J}$ denote the remaining rows of $(\mathscr{A}_1\, \mathscr{A}_2)$ and let $\mathscr{B} = (b_{kl}) \in \Bbb{R}^{(J-R) \times R}$ be the unique matrix satisfying \begin{equation}\label{beta} \mathscr{B} \left(\begin{smallmatrix} Z_{1} \\ \vdots \\ Z_{R} \end{smallmatrix}\right) = \left(\begin{smallmatrix} Z_{R+1} \\ \vdots \\ Z_{J} \end{smallmatrix}\right). \end{equation} That is, $\mathscr{B}$ expresses the remaining $w_{ij}$ in terms of the selected linearly independent set. Again by \eqref{1c}, we can also write the last row $(\mathscr{A}_3\, \mathscr{A}_4)$ of $\mathscr{A}$ as a linear combination of $Z_1, \ldots, Z_R$, say \begin{equation}\label{beta0} \sum_{\ell = 1}^R b_{\ell} Z_{\ell} = (\mathscr{A}_3\, \mathscr{A}_4). \end{equation} The coefficients $b_{kl}$ and $b_{\ell}$ play the same role as in Lemma~\ref{lem19}. Choose a matrix \begin{equation}\label{mathcalC} \mathscr{C} = (\mathscr{C}_1\, \mathscr{C}_2) = \left(\begin{smallmatrix} Z_{1} \\ \vdots \\ Z_{R} \\ \boxed{ \,\,\,\, \begin{smallmatrix} \\ \ast \\ \\\end{smallmatrix} \,\,\,\, }\boxed{ \,\,\,\, \begin{smallmatrix} \\ 0 \\ \\\end{smallmatrix} \,\,\,\, } \end{smallmatrix}\right) \in \Bbb{R}^{N \times (N+k)} , \quad (\mathscr{C}_1 \in \Bbb{R}^{N \times N}, \mathscr{C}_2 \in \Bbb{R}^{N \times k}), \end{equation} with $\boxed{\ast} \in \Bbb{R}^{(N-R)\times N}$ chosen such that $\mathscr{C}_1 \in \Bbb{R}^{N \times N}$ satisfies $\det \mathscr{C}_1 = 1$.
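In a concrete instance, the matrix $\mathscr{B}$ of \eqref{beta} is found by solving a small linear system. The following sketch (with made-up rows $Z_1, Z_2, Z_3$, purely for illustration, in the case $R = 2$) expresses a dependent row in terms of an independent pair via Cramer's rule.

```python
# Toy illustration of (beta): express a dependent row Z3 in the span of
# two linearly independent rows Z1, Z2 by solving a 2x2 system (Cramer).
Z1, Z2, Z3 = (1.0, 1.0), (1.0, -1.0), (3.0, 1.0)   # made-up rows

# solve b1*Z1 + b2*Z2 = Z3 componentwise
det = Z1[0] * Z2[1] - Z2[0] * Z1[1]
b1 = (Z3[0] * Z2[1] - Z2[0] * Z3[1]) / det
b2 = (Z1[0] * Z3[1] - Z3[0] * Z1[1]) / det

assert (b1, b2) == (2.0, 1.0)            # indeed 2*Z1 + 1*Z2 = Z3
```

Here the row $(2, 1)$ is the corresponding row of $\mathscr{B}$; the coefficients $b_{\ell}$ of \eqref{beta0} are obtained in exactly the same way.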
This is possible since $\text{rk}(\mathscr{A}_1) = R$ by \eqref{1c}. Given $\mathbf{s} \in \Bbb{C}^N$, $\mathbf{z} \in \Bbb{C}^{k-1}$, we define the vector \begin{equation}\label{defy} (y_1, \ldots, y_N)^{\top} = \mathbf{y} = \mathbf{y}(\mathbf{s}, \mathbf{z}^{\ast}) = \mathscr{C} (\mathbf{s}, \mathbf{z}^{\ast})^{\top} = \mathscr{C}_1\mathbf{s} ^{\top}+ \mathscr{C}_2 {\mathbf{z}^{\ast}}^{\top} . \end{equation} We write \begin{equation*} {\bm \eta} = \mathbf{y}({\bm \sigma}, (\zeta_1, \ldots, \zeta_{k-1}, 1)) \in \Bbb{R}^N, \quad {\bm \eta}^{\ast} = \mathbf{y}({\bm \sigma} +\varepsilon \cdot \mathbf{1}, (\zeta_1, \ldots, \zeta_{k-1}, 1)) \in \Bbb{R}^N \end{equation*} with ${\bm \sigma}$ as in \eqref{1a} and some fixed $\varepsilon > 0$. In the new variables $\mathbf{y}$, the path of integration $\Re s_{\nu} = \sigma_{\nu} + \varepsilon$ becomes $\Re y_{\nu} = \eta^{\ast}_{\nu}$. Moreover, by \eqref{beta} and \eqref{beta0}, we have \begin{equation}\label{linear1} \langle \textbf{w} \rangle =y_1 \cdots y_{R} \prod_{\iota=1}^{J-R} \mathscr{L}_\iota(\mathbf{y}), \quad \mathscr{L}_{\iota}(\mathbf{y}) = \sum_{\ell=1}^Rb_{\iota\ell} y_{\ell} \end{equation} and \begin{equation}\label{linear2} -1 + \sum_{\nu=1}^N s_{\nu} = \mathscr{L}(\mathbf{y}), \quad \mathscr{L}(\mathbf{y}) = \sum_{\ell=1}^Rb_{\ell} y_{\ell}. 
\end{equation} Thus we can recast $ N^{(1)}_{\Delta, T}({B})$ as \begin{equation}\label{reca} \begin{split} \frac{2^J}{\pi} \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \int_{\Re z_i = \zeta_i}^{(\kappa)} & \int_{\Re y_{\nu} = \eta_{\nu}^{\ast}}^{(N)} \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{{\bm \gamma}^{\mathbf{w} + \mathbf{1}} }\frac{\phi(\mathbf{v})}{\phi(\mathbf{v} - \mathbf{w})} \Bigl( \prod_{\nu=1}^N \widehat{f}_{\Delta}(s_{\nu})\Bigr)\Bigl(\prod_{i=1}^{k} \mathscr{K}_i(z_i)\Bigr) \\ & \times \frac{B^{1+\mathscr{L}(\mathbf{y})}}{y_1 \cdots y_{R} \prod_{\iota=1}^{J-R} \mathscr{L}_\iota(\mathbf{y}) } \frac{\,{\mathrm d}\mathbf{y}}{(2\pi {\rm i})^{N}} \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{\kappa}}, \end{split} \end{equation} where now $\mathbf{s}, \mathbf{v}, \mathbf{w}$ are linear forms in $\mathbf{y}, \mathbf{z}^{\ast}$ given by \eqref{vs}, \eqref{zast}, \eqref{beta} and \eqref{defy}. We now shift the $y_1, \ldots, y_{R}$-contours appropriately within a sufficiently small $\varepsilon$-neighborhood of ${\bm \eta}$ (in which in particular $ \phi(\mathbf{v})/\phi(\mathbf{v} - \mathbf{w}) \prod_{\nu} \widehat{f}_{\Delta}(s_{\nu}) $ is holomorphic), always keeping $\Re z_i = \zeta_i$. 
Recalling definitions \eqref{defphi} and \eqref{Ki} as well as $\textbf{v} - \textbf{w} = (1 - h_{ij} z_{i})_{ij}$, we record the bound \begin{equation}\label{residuebound} \begin{split} \mathscr{D}\Bigg( \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{{\bm \gamma}^{\mathbf{w}+\mathbf{1}} } \Bigl(\phi(\mathbf{v}) \prod_{\nu=1}^N \widehat{f}_{\Delta}(s_{\nu})\Bigr)& \Bigl(\frac{1}{\phi(\mathbf{v} - \mathbf{w})}\prod_{i=1}^{k} \mathscr{K}_i(z_i)\Bigr) \Bigg) \ll T^S \Delta^{-J-c}|\mathbf{s}|^{-c}_{\infty} \Bigl(\prod_{i=1}^k |z_i|^{\zeta_i-\frac{1}{2} -J_i+\varepsilon} \Bigr)\\ & = T^S\Delta^{-J-c} \Bigl(\prod_{i=1}^k |z_i|^{\zeta_i-\frac{1}{2} -J_i+\varepsilon} \Bigr)\big|\mathscr{C}_1^{-1}\mathbf{y} - \mathscr{C}_1^{-1}(\mathscr{C}_2\mathbf{z}^{\ast} )\big|^{-c}_{\infty} \end{split} \end{equation} that holds for any fixed linear differential operator $\mathscr{D}$ with constant coefficients in $s_1, \ldots, s_{N}, z_1, \ldots, z_{k-1}$ and any fixed $c > 0$. This follows from Stirling's formula, \eqref{useful}, \eqref{E} and \eqref{S}. In particular, choosing $c > N$ and recalling \eqref{J-cond}, this expression is absolutely integrable over $\mathbf{z}$ and $\mathbf{y}$. We return to \eqref{reca} and evaluate the $(y_1, \ldots, y_R)$-integral asymptotically by appropriate contour shifts. The integrals that arise are of the form \begin{equation*} B (\log B)^{\alpha_0} \int^{(R_0)} \frac{B^{\ell(\tilde{\textbf{y}})} H(\tilde{\textbf{y}})}{\ell_1(\tilde{\textbf{y}}) \cdots \ell_{J_0}(\tilde{\textbf{y}}) } \frac{\,{\mathrm d}\tilde{\textbf{y}}}{(2\pi {\rm i})^{R_0}} \end{equation*} where $\alpha_0 \in \Bbb{N}_0$, $\ell_1, \ldots, \ell_{J_0}$ are linear forms in $R_0$ variables spanning a vector space of dimension $R_0$, $\ell$ is a linear form, the contours of integration are in an $\varepsilon$-neighborhood of $\Re y_{\nu} = 0$ and $H$ is a holomorphic function in this region satisfying the bound \eqref{residuebound}; initially we have $R_0 = R$, $J_0 = J$, $\alpha_0 = 0$.
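The way such contour shifts produce powers of $\log B$ is governed by the model identity $\frac{1}{2\pi {\rm i}} \int_{(c)} B^{y}\, y^{-(m+1)} \,{\mathrm d} y = (\log B)^m/m!$, the residue of $B^y y^{-(m+1)}$ at $y = 0$. A toy numerical check of this model case (not part of the argument):

```python
import cmath, math

# Model identity behind the contour shifts: for B > 1 and c > 0,
#   (1/(2*pi*i)) * int_{Re y = c} B^y / y^(m+1) dy = (log B)^m / m!,
# the residue at y = 0.  Toy check with B = e, m = 2, c = 1.
B, m, c = math.e, 2, 1.0
T, n = 200.0, 200000                     # truncate at |Im y| <= T
h = 2 * T / n
total = 0.0
for k in range(n):
    t = -T + (k + 0.5) * h               # midpoint rule in t = Im y
    y = complex(c, t)
    total += (cmath.exp(y * math.log(B)) / y ** (m + 1)).real
integral = total * h / (2 * math.pi)     # dy = i dt cancels the i in 1/(2*pi*i)
expected = math.log(B) ** m / math.factorial(m)   # = 1/2
assert abs(integral - expected) < 1e-3
```

Each shift of a variable $y_{\ell}$ past a pole of $1/y_{\ell}$ contributes such a residue, which is how the exponent $\alpha_0$ grows while $J_0$ and $R_0$ decrease.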
As long as $\Re \ell(\tilde{\textbf{y}}) > 0$, we can shift one of the variables to the left (if appearing with positive coefficient) or to the right (if appearing with negative coefficient), getting a small power saving in $B$ in the remaining integral and picking up the residues on the way. Inductively we see that in each step $J_0 - R_0 +\alpha_0$ is nonincreasing. Recalling the definition of $c_2$ in \eqref{defc2}, we obtain eventually \begin{equation}\label{N1} \begin{split} N^{(1)}_{\Delta, T}({B}) = & c^{\ast} c_{\text{fin}}(T) c_{\infty}(\Delta)B (\log B)^{c_2} + O(T^{S+r+\varepsilon} \Delta^{-J-N-\varepsilon}B(\log B)^{c_2-1}) \end{split} \end{equation} for some constant $c^{\ast} \in \Bbb{Q}$ (to be computed in a moment) and \begin{equation}\label{defcinf} \begin{split} &c_{\text{fin}}(T) = \sum_{|\mathbf{g}| \leq T } \mu(\mathbf{g}) \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{\langle {\bm \gamma} \rangle } , \\ &c_{\infty}(\Delta) =\frac{2^J}{\pi} \int_{\Re z_i = \zeta_i}^{(\kappa)} \int_{\Re y_{\nu} = \eta^{\ast}_{\nu}}^{(N-R)} \Bigl( \prod_{\nu=1}^N \widehat{f}_{\Delta}(s_{\nu})|_{y_1 = \ldots = y_R = 0}\Bigr) \Bigl(\prod_{i=1}^{k} \mathscr{K}_i(z_i)\Bigr) \frac{\,{\mathrm d} y_{R+1} \cdots \,{\mathrm d} y_N}{(2\pi {\rm i})^{N-R}} \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{\kappa}}. \end{split} \end{equation} That the multiple integral in the formula for $c_{\infty}(\Delta)$ is absolutely convergent follows again from \eqref{residuebound}. Combining \eqref{N1} with \eqref{error1} and \eqref{step5}, we have shown \begin{equation}\label{eventually} N_{\Delta }(B) = c^{\ast} c_{\text{fin}} (T) c_{\infty}(\Delta) B (\log B)^{c_2} + O\big(B(\log B)^{c_2-1+\varepsilon} (T^{S+r} \Delta^{-J-N-\varepsilon} + T^{-\delta_2}\log B)\big) \end{equation} for any $1 < T < B$. \subsection{Step 6: Computing the leading constant} We proceed to compute explicitly the leading constant in \eqref{eventually}. 
In this subsection, we consider $c^{\ast}$ and $c_{\text{fin}}(T)$, and we start with the former. To this end, we observe that in the course of the contour shifts, only the polar behavior at $\textbf{w}=0$ is relevant, so that $$c^{\ast} = \lim_{B \rightarrow \infty} \frac{1}{(\log B)^{c_2}} \int^{(R)} B^{\mathscr{L}(y)} \prod_{\ell=1}^R F(y_\ell) \prod_{\iota=1}^{J-R} \mathscr{L}_{\iota}(\textbf{y})^{-1} \frac{\,{\mathrm d} \textbf{y}}{(2\pi {\rm i})^R}$$ for any function $F$ that is holomorphic except for a simple pole at $0$ with residue 1, provided the integral is absolutely convergent. We choose $F = \widehat{f}_{\Delta_0}$ for some $\Delta_0 > 0$ as in \eqref{smooth0}--\eqref{smooth}, recall the notation \eqref{linear1}--\eqref{linear2}, and insert the formula $s^{-1} = \int_0^1 t^{s-1} \,{\mathrm d} t$ for $\Re s > 0$. In this way we get the absolutely convergent expression \begin{displaymath} \begin{split} c^{\ast} & = \lim_{B \rightarrow \infty} \frac{1}{(\log B)^{c_2}} \int^{(R)} B^{\mathscr{L}(y)} \prod_{\ell=1}^R \widehat{f}_{\Delta_0}(y_\ell) \int_{[0, 1]^{J-R}} \prod_{ \iota=1}^{J-R} t_{\iota}^{\mathscr{L}_{\iota}(\textbf{y})-1} \,{\mathrm d} \textbf{t} \, \frac{\,{\mathrm d} \textbf{y}}{(2\pi {\rm i})^R}\\ & = \lim_{B \rightarrow \infty} \int^{(R)} B^{\mathscr{L}(y)} \prod_{\ell=1}^R \widehat{f}_{\Delta_0}(y_\ell) \int_{[0, \infty]^{J-R}} \prod_{\iota=1}^{J-R} B^{ -r_{\iota} \mathscr{L}_{\iota}(\textbf{y})} \,{\mathrm d} \textbf{r} \, \frac{\,{\mathrm d} \textbf{y}}{(2\pi {\rm i})^R}\\ & = \lim_{B \rightarrow \infty} \int_{[0, \infty]^{J-R}} \int^{(R)} \Big( \prod_{\ell=1}^R \widehat{f}_{\Delta_0}(y_\ell) \Big)B^{\sum_\ell (b_{\ell} -\sum_{\iota} r_{\iota} b_{\iota \ell} )y_{\ell} } \frac{\,{\mathrm d} \textbf{y}}{(2\pi {\rm i})^R}\, \,{\mathrm d} \textbf{r} \\ & = \lim_{B \rightarrow \infty} \int_{[0, \infty]^{J-R}} \prod_{\ell=1}^R f_{\Delta_0}\big(B^{ - b_{\ell} +\sum_{\iota} r_{\iota} b_{\iota \ell} } \big) \,{\mathrm d} \textbf{r}. 
\end{split} \end{displaymath} Here we used a change of variables along with $c_2 = J-R$ in the first step, cf.\ \eqref{defc2}, and Mellin inversion in the last step. This formula holds for every $\Delta_0 > 0$, so we can take the limit $\Delta_0\rightarrow 0$ getting \begin{equation}\label{cast-final} c^{\ast} = \text{vol}\Big\{ \textbf{r} \in [0, \infty]^{J-R} : b_{\ell} -\sum_{\iota=1}^{J-R} r_{\iota} b_{\iota \ell} \geq 0 \text{ for all } 1 \leq \ell \leq R \Big\} . \end{equation} Next we investigate $c_{\text{fin}}(T)$. We can complete the $\mathbf{g}$-sum at the cost of an error $$\sum_{|\mathbf{g}| > T }\Bigl| \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{\langle {\bm \gamma} \rangle } \Bigr| \ll \sum_{\mathbf{g} } \Bigl( \prod_{ij} \gamma_{ij}^{-1+h_{ij}\beta_i} \Bigr)\Bigl(\frac{|\mathbf{g}|}{T}\Bigr)^{\delta_4-\varepsilon } \ll T^{-\delta_4+\varepsilon} $$ by \eqref{E}, \eqref{gamma}, \eqref{gammaast}, \eqref{1b} and Lemma~\ref{kgV}, so that \begin{equation}\label{cfin-final} c_{\text{fin}}(T) = c_{\text{fin}} + O(T^{-\delta_4+\varepsilon }), \quad c_{\text{fin}} = \sum_{\mathbf{g} } \mu(\mathbf{g}) \frac{ \mathscr{E}_{{\bm \gamma}^{\ast}}}{\langle {\bm \gamma} \rangle }. \end{equation} Using \eqref{localdensities}, we can rewrite $ c_{\text{fin}} $ in terms of local densities (note that the sum is absolutely convergent). Recall that $\textbf{g} = (g_1, \ldots, g_r)$ is indexed by the coprimality conditions $S_1, \ldots, S_r$ in \eqref{gcd}. For a given choice of $\alpha_1, \ldots, \alpha_r \in \{0, 1\}$, let $$S(\bm \alpha) = \bigcup _{\alpha_{\rho} = 1} S_{\rho}, \quad \delta(ij, {\bm \alpha}) = \begin{cases} 1, & (i, j) \in S(\bm \alpha),\\ 0, & (i, j) \not\in S(\bm \alpha). 
\end{cases}$$ Then $$c_{\text{fin}} = \prod_p \sum_{{\bm \alpha} \in \{0, 1\}^r} \frac{ (-1)^{|{\bm \alpha}|_1} }{ p^{\#S(\bm \alpha) }} \cdot \lim_{L \rightarrow \infty}\frac{1}{p^{L (J-1)}} \#\Big\{\textbf{x} \bmod{ p^{L}} : \sum_{i=1}^k \prod_{j=1}^{J_i} (p^{\delta(ij, {\bm \alpha})} x_{ij})^{h_{ij}} \equiv 0 \bmod{p^{L}}\Big\}.$$ By inclusion-exclusion, this equals \begin{equation}\label{cfin-final-euler} c_{\text{fin}}= \prod_p \lim_{L \rightarrow \infty}\frac{1}{p^{L (J-1)}} \#\Bigg\{\textbf{x} \bmod{ p^{L}} : \begin{array}{l} \displaystyle \sum_{i=1}^k \prod_{j=1}^{J_i} x_{ij}^{h_{ij}} \equiv 0 \bmod{p^{L}},\\ (\{x_{ij} : (i, j) \in S_{\rho}\}, p) = 1 \text{ for } 1 \leq \rho \leq r \end{array}\Bigg\}. \end{equation} Combining \eqref{eventually} and \eqref{cfin-final}, we conclude \begin{equation*} N_{\Delta}(B) = c^{\ast} c_{\text{fin}} c_{\infty}(\Delta) B (\log B)^{c_2} + O\Big(B(\log B)^{c_2-1-\delta_0} \Delta^{-J-N-\varepsilon}\Big) \end{equation*} for $\delta_0 = \min(\delta_2, \min(\delta_4, 1)(S+r+1)^{-1})> 0$, upon choosing $T = (\log B)^{1/(S + r + 1)}$. Since $N_{\Delta}(B)$ is obviously nonincreasing in $\Delta$, we conclude from \eqref{sandwich} and the previous display that $N(B) =(1+o(1)) c^{\ast} c_{\text{fin}} c_{\infty} B (\log B)^{c_2}$ as $B\rightarrow \infty$ with \begin{equation}\label{cinftylimit} c_{\infty} = \lim_{\Delta \rightarrow 0}c_{\infty}(\Delta), \end{equation} and this limit must exist. We have proved \begin{theorem}\label{analytic-theorem} Suppose that we are given a diophantine equation \eqref{torsor} with $b_1=\dots=b_k=1$ and height conditions \eqref{height} whose variables are restricted by coprimality conditions \eqref{gcd}. Suppose that Hypotheses~\ref{H1} and \ref{H2} and \eqref{1c}, \eqref{1a}, \eqref{1b}, \eqref{J-cond} hold. Then we have the asymptotic formula \begin{equation}\label{rough} N(B) =(1+o(1)) c^{\ast} c_{\text{{\rm fin}}} c_{\infty} B (\log B)^{c_2}, \quad B \rightarrow \infty. 
\end{equation} Here $c^{\ast}$ is given in \eqref{cast-final} (using the notation \eqref{linear1}--\eqref{linear2}), $c_{\text{{\rm fin}}}$ in \eqref{cfin-final-euler}, $c_{\infty}$ in \eqref{cinftylimit} and \eqref{defcinf}, and $c_2$ in \eqref{defc2}. \end{theorem} \section{The Manin--Peyre conjecture}\label{sec9} In Sections~\ref{dioph}--\ref{sec8}, we established an asymptotic formula for a certain counting problem, subject to several hypotheses. By design, we presented this in an axiomatic style without recourse to the underlying geometry. In this section, we relate the asymptotic formula in Theorem~\ref{analytic-theorem} to the Manin--Peyre conjecture. In particular, we compute $c_{\infty}$ explicitly, and we will show (under conditions that are easy to check) that the leading constant $ c^{\ast} c_{\text{{\rm fin}}} c_{\infty} $ agrees with Peyre's constant for almost Fano varieties as in Part~\ref{part1}. This applies in particular to the spherical Fano varieties in Part~\ref{part3} of the paper. \subsection{Geometric interpretation of $c_{\infty}$} In this subsection, we establish the following alternative formulation of the constant $c_{\infty}$. Recall -- cf.\ \eqref{mathcalC} -- that the first $R$ rows of $\mathscr{C} = (\mathscr{C}_1 \mathscr{C}_2)$ are $R$ linearly independent rows of $(\mathscr{A}_1 \mathscr{A}_2)$, say indexed by a set $I$ of pairs $(i, j)$ with $0 \leq i \leq k$, $1 \leq j \leq J_i$ with $|I| = R$. Let \begin{equation}\label{def-cap-Phi} \Phi^{\ast}(\textbf{t}) = \sum_{i = 1}^k\prod_{(i, j) \in I} t_{ij}^{h_{ij}}, \end{equation} and let $\mathscr{F}$ be the affine $(R-1)$-dimensional hypersurface $\Phi^{\ast}(\textbf{t}) = 0$ over $\mathbb{R}$.
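Surface integrals over such a hypersurface $\mathscr{F}$ against the density $1/\|\nabla \Phi^{\ast}\|$ will appear below. In a toy model (with a made-up $\Phi^{\ast}$, purely for illustration) one can see numerically that when the solved-for variable occurs with exponent $1$, the surface element cancels the gradient factor exactly, leaving Lebesgue measure in the remaining variables:

```python
import math

# Toy sketch (made-up Phi*, for illustration only): Phi*(t1, t2) = t1 + t2^2,
# surface F: t1 = -t2^2, integrated against 1/||grad Phi*||.  Because the
# solved-for variable t1 appears with exponent 1, the surface element
# sqrt(1 + 4 t2^2) dt2 cancels ||grad Phi*|| = sqrt(1 + 4 t2^2) exactly,
# leaving plain Lebesgue measure dt2 in the remaining variable.
n = 100000
h = 2.0 / n                              # parametrize by t2 in [-1, 1]
total = 0.0
for k in range(n):
    t2 = -1.0 + (k + 0.5) * h
    grad = math.sqrt(1.0 + 4.0 * t2 * t2)    # ||grad Phi*|| on the surface
    surf = math.sqrt(1.0 + 4.0 * t2 * t2)    # surface element factor
    total += (1.0 / grad) * surf * h
assert abs(total - 2.0) < 1e-9           # = Lebesgue measure of [-1, 1]
```

This exact cancellation is what the simplifying assumption of a degree-one variable will buy us in the lemma below.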
Let $\chi_I$ be the characteristic function on the set $$\prod_{(i, j) \in I} |t_{ij}|^{\alpha^\mu_{ij}} \leq 1, \quad 1 \leq \mu \leq N.$$ In order to avoid technical difficulties that are irrelevant for the applications we have in mind, we make the simplifying assumption that \begin{equation}\label{simplifying} \text{one of the $k$ monomials in $\Phi^{\ast}$ consists of only one variable, which has exponent 1.} \end{equation} Without loss of generality, we can assume that this is the first monomial. (Assumption \eqref{simplifying} can be removed if necessary and follows from assumption \eqref{eq:assumption_real_density_strong}.) \begin{lemma} Suppose that $\{(1, j) \in I\} = \{(1, 1)\}$ and $h_{11} = 1$. Then $c_{\infty}$ is given by the surface integral \begin{equation}\label{cinf-final} c_{\infty} = 2^{J-R} \int_{\mathscr{F}} \frac{ \chi_I(\textbf{t}) }{ \| \nabla \Phi^{\ast}(\textbf{t})\|}\, {\rm d}\mathscr{F}\textbf{t}. \end{equation} \end{lemma} \begin{proof} We return to the definition \eqref{defcinf} of $c_{\infty}(\Delta)$ and compute the $\textbf{y}$-integral for fixed $\textbf{z}$. Let us write $\widehat{F}(\textbf{y}) = \prod_{\nu=1}^N \widehat{f}_{\Delta}(s_{\nu})$. We recall from \eqref{defy} that $ \mathbf{y} =\mathscr{C}_1\mathbf{s} + \mathscr{C}_2 \mathbf{z}^{\ast} $ with $\det \mathscr{C}_1 = 1$, and we view $\textbf{s}$ as a function of $\textbf{y}$ (for fixed $\textbf{z}$). 
By Mellin inversion one confirms the formula $$ \int_{\Re y_{\nu} = \eta^{\ast}_{\nu}}^{(N-R)} \widehat{F}(0, \ldots, 0, y_{R+1},\ldots y_N) \frac{\,{\mathrm d} y_{R+1} \cdots \,{\mathrm d} y_N}{(2\pi {\rm i})^{N-R}} = \int_{\Bbb{R}_{>0}^R} \int^{(N)}_{\Re y_{\nu} = \eta^{\ast}_{\nu}} \widehat{F}(\textbf{y}) t_1^{y_1} \cdots t_R^{y_R} \frac{\,{\mathrm d} \textbf{y}}{(2\pi {\rm i})^{N}} \frac{\,{\mathrm d}\textbf{t}}{\langle \textbf{t} \rangle} .$$ Note that by Mellin inversion, the $\textbf{t}$-integral on the right hand side is absolutely convergent, even though the combined $\textbf{y}, \textbf{t}$-integral is not. (This formula is a distributional version of the ``identity'' $\int_0^{\infty} t^{y-1} \,{\mathrm d} t = \delta_{y=0}$.) Let us write $\mathscr{C} = (\mathscr{C}_1\, \mathscr{C}_2) = (c_{\nu \mu})\in \Bbb{R}^{N \times (N+k)}$ and $\mathscr{C}_2\textbf{z}^{\ast} = \tilde{\textbf{z}} \in \Bbb{C}^N$. We change back to $\textbf{s}$-variables and compute the $\textbf{s}$-integral in the preceding display by Mellin inversion, getting $$ \int_{\Bbb{R}_{>0}^R} \prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{\ell = 1}^Rt_{\ell}^{-c_{\ell, \mu} } \Big) t_1^{\tilde{z}_1} \cdots t_R^{\tilde{z}_R} \frac{\,{\mathrm d}\textbf{t}}{\langle \textbf{t} \rangle} .$$ By construction this integral is absolutely convergent for every fixed $\textbf{z}$ with $\Re z_i = \zeta_i$. Plugging back into the definition, we obtain $$c_{\infty}(\Delta) = \frac{2^J}{\pi} \int^{(\kappa)}_{\Re z_i = \zeta_i} \prod_{i=1}^{k} \mathscr{K}_i(z_i) \int_{\Bbb{R}_{>0}^R}\prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{\ell = 1}^Rt_{\ell}^{-c_{\ell, \mu} } \Big) t_1^{\tilde{z}_1} \cdots t_R^{\tilde{z}_R} \frac{\,{\mathrm d}\textbf{t}}{\langle \textbf{t} \rangle} \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{\kappa}} .$$ Here the $\textbf{z}$-integral is absolutely convergent since the multiple integral in \eqref{defcinf} was absolutely convergent. 
The combined $\textbf{t}, \textbf{z}$-integral, however, is not absolutely convergent. Recall that $\kappa = k-1$, $z_k = 1 - z_1 - \ldots - z_{\kappa}$ and $\mathscr{K}_i(z)$ was defined in \eqref{Ki} with inverse Mellin transform $x \mapsto K_i(x)$, say, where $K_i(x) = \cos(x)$ or $\exp({\rm i} x)$. In order to avoid convergence problems, we define, for $\varepsilon > 0$, the function \begin{equation}\label{defKiepsilon} K_i^{(\varepsilon)}(x) = K_i(x) e^{-(\varepsilon x)^2} = \begin{cases} \cos(x)e^{-(\varepsilon x)^2} , & h_{ij} \text{ odd for some } 1 \leq j \leq J_i,\\ e^{ix}e^{-(\varepsilon x)^2} , &h_{ij} \text{ even for all } 1 \leq j \leq J_i. \end{cases} \end{equation} and its Mellin transform $\mathscr{K}^{(\varepsilon)}_i(z) = \int_0^{\infty} K^{(\varepsilon)}_i(x) x^{z-1} \,{\mathrm d} x$. This can be expressed explicitly in terms of confluent hypergeometric functions by \cite[3.462.1]{GR}, but we do not need this. It suffices to know that $\mathscr{K}^{(\varepsilon)}_i(z)$ is holomorphic in $\Re z > 0$, rapidly decaying on vertical lines, and we have the pointwise limit $\lim_{\varepsilon \rightarrow 0} \mathscr{K}^{(\varepsilon)}_i(z) = \mathscr{K}_i(z)$ for $0 < \Re z < 1$. The latter follows elementarily with one integration by parts by writing $$\int_0^{\infty} (K_i(x) - K_i^{(\varepsilon)}(x)) x^{z-1} {\,{\mathrm d} x} = \int_0^{\varepsilon^{-1/2}} + \int_{{\varepsilon^{-1/2}}}^{\infty} \ll \varepsilon^{1/2} + \varepsilon^{1/2} \rightarrow 0$$ for $\varepsilon \rightarrow 0$. Correspondingly we write \begin{equation*} c^{(\varepsilon)}_{\infty}(\Delta) = \frac{2^J}{\pi} \int^{(\kappa)}_{\Re z_i = \zeta_i} \prod_{i=1}^{k} \mathscr{K}^{(\varepsilon)}_i(z_i) \int_{\Bbb{R}_{>0}^R}\prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{\ell = 1}^Rt_{\ell}^{-c_{\ell, \mu} } \Big) t_1^{\tilde{z}_1} \cdots t_R^{\tilde{z}_R} \frac{\,{\mathrm d}\textbf{t}}{\langle \textbf{t} \rangle} \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{\kappa}}. 
\end{equation*} This multiple integral is now absolutely convergent, and by dominated convergence we have \begin{equation}\label{dominatedc} c_{\infty}(\Delta) = \lim_{\varepsilon \rightarrow 0} c^{(\varepsilon)}_{\infty}(\Delta). \end{equation} We interchange the $\textbf{t}$- and $\textbf{z}$-integral, fix $\textbf{t}$ and compute the $\textbf{z}$-integral. Mellin inversion yields \begin{equation*} \mathscr{K}^{(\varepsilon)}_k(1 - z_1 - \ldots - z_{\kappa}) = \int_0^{\infty} \int_{(\frac{1}{2}\zeta_k)} \mathscr{K}^{(\varepsilon)}_k(z_k) x^{ - z_1 - \ldots - z_{k}} \frac{\,{\mathrm d} z_k}{2\pi {\rm i}} \,{\mathrm d} x \end{equation*} for $\Re z_i = \zeta_i$, $1\leq i \leq \kappa$. Note that on the right hand side $\Re(z_1 + \ldots + z_k) < 1$ (which is why we chose $\Re z_k = \frac{1}{2} \zeta_k$). Again the double integral is not absolutely convergent, but the $x$-integral is absolutely convergent. In particular, after substituting this into the definition of $c^{(\varepsilon)}_{\infty}(\Delta)$, we may interchange the $x$-integral and the $z_1, \ldots, z_{\kappa}$-integral to conclude $$c^{(\varepsilon)}_{\infty}(\Delta) = \frac{2^J}{\pi} \int_{\Bbb{R}_{>0}^R} \int_0^{\infty} \int^{(k)} \prod_{i=1}^{k} \mathscr{K}^{(\varepsilon)}_i(z_i)\prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{\ell = 1}^Rt_{\ell}^{-c_{\ell, \mu} } \Big) t_1^{\tilde{z}_1} \cdots t_R^{\tilde{z}_R}x^{ - z_1 - \ldots - z_{k}} \frac{\,{\mathrm d}\mathbf{z}}{(2\pi {\rm i})^{k}} \,{\mathrm d} x \frac{\,{\mathrm d}\textbf{t}}{\langle \textbf{t} \rangle},$$ where $\Re z_i = \zeta_i$, $1 \leq i \leq \kappa$, $\Re z_k = \frac{1}{2}\zeta_k$. By Mellin inversion, we can now compute each of the $z_1, \ldots, z_{\kappa}$-integrals. 
We recall our notation $\tilde{\textbf{z}} = \mathscr{C}_2 \textbf{z}^{\ast}$, so $$\tilde{z}_j = \sum_{i=1}^{\kappa} c_{j, N+i} z_i + c_{j, N+k}.$$ This gives $$c^{(\varepsilon)}_{\infty}(\Delta) = \frac{2^J}{\pi}\int_{\Bbb{R}_{>0}^R} \int_0^{\infty} \Big[\prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{\ell = 1}^Rt_{\ell}^{-c_{\ell, \mu} } \Big) \Big] \Big[K^{(\varepsilon)}_k(x) \prod_{i=1}^{\kappa} K^{(\varepsilon)}_{i}\Big(x \prod_{\nu=1}^R t_{\nu}^{-c_{\nu, N+i}}\Big)\Big] \prod_{\nu=1}^R t_{\nu}^{c_{\nu, N+k}} \,{\mathrm d} x \frac{\,{\mathrm d}\textbf{t}}{\langle \textbf{t} \rangle}. $$ Changing variables $t_{\nu} \mapsto t_{\nu}^{-1}$ and then $x \mapsto 2 \pi x \prod_{\nu=1}^R t_{\nu}^{1+c_{\nu, N+k}}$, this becomes $$2^J \int_{\Bbb{R}_{>0}^R} \int_{-\infty}^{\infty} \Big[\prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{\ell = 1}^Rt_{\ell}^{c_{\ell, \mu} } \Big) \Big] \Big[K^{(\varepsilon)}_k(2\pi x\prod_{\nu=1}^R t_{\nu}^{1+c_{\nu, N+k}}) \prod_{i=1}^{\kappa} K^{(\varepsilon)}_{i}\Big(2\pi x \prod_{\nu=1}^R t_{\nu}^{c_{\nu, N+i}+1 + c_{\nu, N+k}}\Big)\Big] \,{\mathrm d} x\, \,{\mathrm d}\textbf{t}. $$ We re-index the variables $t_{\nu}$ as $t_{ij}$ with $(i, j) \in I$, as described prior to the statement of the lemma. By the definition of $(\mathscr{A}_1 \mathscr{A}_2)$ in \eqref{matrix}, we then have $$ \prod_{\nu=1}^R t_{\nu}^{c_{\nu, N+i} + 1 + c_{\nu, N+k}} = \prod_{(i, j) \in I} t_{ij}^{h_{ij}} \quad (1 \leq i \leq \kappa), \quad\quad \prod_{\nu=1}^R t_{\nu}^{1+c_{\nu, N+k}} = \prod_{(k, j) \in I} t_{kj}^{h_{kj}}, $$ so that \begin{displaymath} \begin{split} c^{(\varepsilon)}_{\infty}(\Delta) & = 2^J \int_{-\infty}^{\infty} \int_{\Bbb{R}_{>0}^R} \Big[\prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{(i, j) \in I} t_{ij}^{\alpha^\mu_{ij} } \Big) \Big] \Big[ \prod_{i=1}^{k} K^{(\varepsilon)}_{i}\Big(2\pi x \prod_{(i, j) \in I} t_{ij}^{h_{ij}} \Big)\Big] \,{\mathrm d} x\, \,{\mathrm d}\textbf{t} . 
\end{split} \end{displaymath} By symmetry, we may extend the $\textbf{t}$-integral to all of $\Bbb{R}^R$, recall \eqref{defKiepsilon} and write \begin{displaymath} \begin{split} c^{(\varepsilon)}_{\infty}(\Delta) & = 2^{J-R} \int_{-\infty}^{\infty} \int_{\Bbb{R}^R} \Psi_{\Delta}(\textbf{t}) e\big(x \Phi^{\ast}(\textbf{t})\big) \exp\big(-(\pi \varepsilon x)^2 \tilde{\Phi}(\textbf{t})\big) \,{\mathrm d} x \, \,{\mathrm d}\textbf{t} \end{split} \end{displaymath} with $\Phi^{\ast}$ as in \eqref{def-cap-Phi} and $$\Psi_{\Delta}(\textbf{t}) = \prod_{\mu= 1}^Nf_{\Delta} \Big(\prod_{(i, j) \in I}|t_{ij}|^{\alpha^\mu_{ij} } \Big), \quad \ \tilde{\Phi}(\textbf{t}) = 4\sum_{i = 1}^k\prod_{(i, j) \in I} t_{ij}^{2h_{ij}}.$$ We compute the $x$-integral, getting \begin{displaymath} \begin{split} c^{(\varepsilon)}_{\infty}(\Delta) & = \frac{2^{J-R}}{\sqrt{\pi} \varepsilon} \int_{\Bbb{R}^R} \Psi_{\Delta}(\textbf{t}) \exp\Big(- \frac{(\Phi^{\ast})^2(\textbf{t})}{\varepsilon^2 \tilde{\Phi}(\textbf{t})}\Big) \frac{ \,{\mathrm d}\textbf{t} }{\sqrt{\tilde{\Phi}(\textbf{t})}}. \end{split} \end{displaymath} By construction, this is absolutely convergent for every fixed $\varepsilon > 0$, and the limit as $\varepsilon \rightarrow 0$ exists by \eqref{dominatedc}. Let $\mathscr{U} \coloneqq \{\textbf{t} \in \Bbb{R}^R : |(\Phi^{\ast})^2(\textbf{t})/\tilde{\Phi}(\textbf{t}) | \leq 1/25\}$.
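The $x$-integral above was evaluated by the classical Gaussian formula \begin{equation*} \int_{-\infty}^{\infty} e(a x)\, e^{-(\pi \varepsilon x)^2 b} \,{\mathrm d} x = \frac{1}{\sqrt{\pi}\, \varepsilon \sqrt{b}} \exp\Big(-\frac{a^2}{\varepsilon^2 b}\Big) \qquad (b > 0), \end{equation*} applied with $a = \Phi^{\ast}(\textbf{t})$ and $b = \tilde{\Phi}(\textbf{t})$.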
Writing $$ \exp\Big(- \frac{(\Phi^{\ast})^2(\textbf{t})}{\varepsilon^2 \tilde{\Phi}(\textbf{t})}\Big) = \exp\Big(- \frac{(\Phi^{\ast})^2(\textbf{t})}{ \tilde{\Phi}(\textbf{t})}\Big) \exp\Big( (1 - \varepsilon^{-2}) \frac{(\Phi^{\ast})^2(\textbf{t})}{ \tilde{\Phi}(\textbf{t})}\Big),$$ we obtain \begin{equation*} c^{(\varepsilon)}_{\infty}(\Delta) = \frac{2^{J-R}}{\sqrt{\pi} \varepsilon} \int_{\mathscr{U}} \Psi_{\Delta}(\textbf{t}) \exp\Big(- \frac{(\Phi^{\ast})^2(\textbf{t})}{\varepsilon^2 \tilde{\Phi}(\textbf{t})}\Big) \frac{ \,{\mathrm d}\textbf{t} }{\sqrt{\tilde{\Phi}(\textbf{t})}} + O\Big(\frac{1}{\varepsilon}e^{(1 - \varepsilon^{-2})/25}\Big). \end{equation*} We now consider the equation \begin{equation}\label{implicit} \Phi^{\ast}(\textbf{t})/\sqrt{\tilde{\Phi}(\textbf{t})} - u = 0 \end{equation} for $|u| \leq 1/5$. It is only at this point that we use \eqref{simplifying}. We write $\textbf{t} = (t_{11}, \textbf{t}')$ and $$\Phi^{\ast}(\textbf{t}) = t_{11} + (\Phi^{\ast})'(\textbf{t}'), \quad \tilde{\Phi}(\textbf{t}) = 4t_{11}^2 + \tilde{\Phi}'(\textbf{t}').$$ Then for $u = 0$, the equation \eqref{implicit} has the unique solution $t_{11} = -(\Phi^{\ast})'(\textbf{t}')$, while for $0 < |u| \leq 1/5$, the equations for $u$ and $-u$ together lead to the two solutions $$t_{11} = \frac{-(\Phi^{\ast})'(\textbf{t}') \pm |u| \sqrt{4(\Phi^{\ast})'(\textbf{t}')^2 + \tilde{\Phi}'(\textbf{t}')(1 - 4u^2)}}{1 - 4u^2} \eqqcolon \phi^{\pm}_u(\textbf{t}').$$ For $u=0$, we have $\phi_0^+ = \phi_0^-$, and for notational simplicity we write $\phi_0^{\pm} = \phi = -(\Phi^{\ast})'$.
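For completeness, the formula for $\phi^{\pm}_u$ can be derived as follows: squaring \eqref{implicit} gives $(t_{11} + (\Phi^{\ast})'(\textbf{t}'))^2 = u^2 (4t_{11}^2 + \tilde{\Phi}'(\textbf{t}'))$, that is, the quadratic equation \begin{equation*} (1 - 4u^2)\, t_{11}^2 + 2 (\Phi^{\ast})'(\textbf{t}')\, t_{11} + (\Phi^{\ast})'(\textbf{t}')^2 - u^2 \tilde{\Phi}'(\textbf{t}') = 0, \end{equation*} whose discriminant equals $4u^2 \big(4(\Phi^{\ast})'(\textbf{t}')^2 + \tilde{\Phi}'(\textbf{t}')(1 - 4u^2)\big)$, so that the quadratic formula yields precisely the two roots $\phi^{\pm}_u(\textbf{t}')$.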
Changing variables, we obtain $$\frac{2^{J-R}}{\sqrt{\pi} \varepsilon} \int_{\mathscr{U}} \Psi_{\Delta}(\textbf{t}) \exp\Big(- \frac{(\Phi^{\ast})^2(\textbf{t})}{\varepsilon^2 \tilde{\Phi}(\textbf{t})}\Big) \frac{ \,{\mathrm d}\textbf{t} }{\sqrt{\tilde{\Phi}(\textbf{t})}} = \frac{2^{J-R}}{\sqrt{\pi} \varepsilon} \int_{-1/5}^{1/5} \exp\Big(- \frac{u^2}{\varepsilon^2 }\Big) \Theta(u) {\,{\mathrm d}}u, $$ where $$ \Theta(u) =\int_{\Bbb{R}^{R-1}} \Xi (\phi^{+}_u(\textbf{t}'), \textbf{t}') {\,{\mathrm d}}\textbf{t}' , \quad \Xi = \frac{2 \tilde{\Phi} \Psi_{\Delta} }{|2\tilde{\Phi} \Phi^{\ast}_{t_{11}} - \Phi^{\ast} \tilde{\Phi}_{t_{11}} | }.$$ By a Taylor expansion, we have $\Theta(u) = \Theta(0) + O(|u|)$ for $|u| \leq 1/5$, so that \begin{equation*} \begin{split} c_{\infty}(\Delta) = \lim_{\varepsilon \rightarrow 0}\frac{2^{J-R}}{\sqrt{\pi} \varepsilon} \int_{-1/5}^{1/5} \exp\Big(- \frac{u^2}{\varepsilon^2 }\Big) \Theta(u) {\,{\mathrm d}}u &= 2^{J-R}\Theta(0) =2^{J-R} \int_{\Bbb{R}^{R-1}} \Xi (\phi(\textbf{t}'), \textbf{t}') {\,{\mathrm d}}\textbf{t}' \\ &= 2^{J-R} \int_{\Bbb{R}^{R-1}} \frac{ \Psi_{\Delta} (\phi(\textbf{t}'), \textbf{t}') }{ | \Phi^{\ast}_{t_{11}} (\phi(\textbf{t}'), \textbf{t}')| } {\,{\mathrm d}}\textbf{t}' . \end{split} \end{equation*} Here we can let $\Delta \rightarrow 0$, obtaining \begin{equation}\label{eq:c_infty} \begin{split} c_{\infty} = 2^{J-R} \int_{\Bbb{R}^{R-1}} \frac{ \chi_I (\phi(\textbf{t}'), \textbf{t}') }{ | \Phi^{\ast}_{t_{11}} (\phi(\textbf{t}'), \textbf{t}')| } {\,{\mathrm d}}\textbf{t}' . \end{split} \end{equation} (Note that the denominator is $1$ by \eqref{simplifying}, but that this formula should also hold without this assumption.) We write this more symmetrically as follows.
If $t_{ij}$ is any component of $\textbf{t}'$, then by implicit differentiation, we have $$\phi_{t_{ij}}(\textbf{t}') = -\frac{\Phi^{\ast}_{t_{ij}}(\phi(\textbf{t}'),\textbf{t}')}{\Phi^{\ast}_{t_{11}}(\phi(\textbf{t}'), \textbf{t}')},$$ so that we can write $c_{\infty}$ as a surface integral \begin{displaymath} \begin{split} 2^{J-R} \int_{\Bbb{R}^{R-1}} \frac{ \chi_I(\phi(\textbf{t}'), \textbf{t}') }{ | \Phi^{\ast}_{t_{11}} (\phi(\textbf{t}'), \textbf{t}')| } \,{\mathrm d}\textbf{t}' = 2^{J-R}\int_{\mathscr{F}} \frac{\chi_I(\textbf{t}) }{\| \nabla \Phi^{\ast}(\textbf{t})\|} \,{\mathrm d}\mathscr{F}(\textbf{t}) \end{split} \end{displaymath} as claimed. \end{proof} \subsection{Comparison with the Manin--Peyre conjecture} \begin{theorem}\label{thm:manin-peyre} Let $X,H$ be as in Proposition~\ref{prop:peyre}. Suppose that the corresponding counting problem for $U \subset X$ given by Proposition~\ref{prop:countingproblem_abstract} satisfies all assumptions of Theorem~\ref{analytic-theorem}. Then the Manin--Peyre conjecture holds for $X$ with respect to $H$, that is, \begin{equation*} N_{X,U,H}(B) =(1+o(1)) c B(\log B)^{\rank \Pic X - 1} \end{equation*} with Peyre's constant $c$. \end{theorem} \begin{proof} By Proposition~\ref{prop:countingproblem_abstract}, \begin{equation*} N_{X,U,H}(B) = 2^{-\rank \Pic X} N(B) \end{equation*} for $N(B)$ as in \eqref{manin}. Formula \eqref{rough} in Theorem~\ref{analytic-theorem} states that \begin{equation*} N(B) =(1+o(1)) c^{\ast} c_{\text{{\rm fin}}} c_{\infty} B (\log B)^{c_2}.
\end{equation*} Comparing definition \eqref{eq:c_p} with expression \eqref{cfin-final-euler} for $c_{\mathrm{fin}}$, the definitions \eqref{cast-final-first} and \eqref{cast-final} of $c^\ast$, and definition \eqref{cinf-first} with expression \eqref{eq:c_infty} for $c_\infty$ (which are both valid since assumption \eqref{eq:assumption_real_density_strong} implies \eqref{simplifying}), Proposition~\ref{prop:peyre} shows that the leading constant for $N_{X,U,H}(B)$ is Peyre's constant; moreover, $c_2 = J-R = \rank \Pic X - 1$ by \eqref{eq:rkPic}, \eqref{defc2} and Lemma~\ref{rank}. Therefore, Proposition~\ref{prop:countingproblem_abstract} combined with \eqref{rough} agrees with the Manin--Peyre conjecture. \end{proof} The following part provides numerous applications and shows how the theory is applied in practice. \part{Application to spherical varieties}\label{part3} Having established the relevant theory in Part~\ref{part1} and Part~\ref{part2} of the paper, we are now prepared to prove Manin's conjecture for concrete families of varieties. In particular, as a consequence of Theorem~\ref{manin-cor}, we obtain Manin's conjecture for all smooth spherical Fano threefolds of semisimple rank one and type $T$. \section{Spherical varieties}\label{sec2} \subsection{Luna--Vust invariants} Let $G$ be a connected reductive group over $\overline{\Qd}$. Let $\overline{\Qd}(X)$ be the function field of a spherical $G$-variety $X$ over $\overline{\Qd}$. Only in this section and in Section~\ref{sec22}, let $B$ denote a Borel subgroup of $G$ with character group $\mathfrak{X}(B)$.
The \emph{weight lattice} is defined as \begin{align*} \mathscr{M} = \mathopen{}\mathclose\bgroup\left\{\chi \in \mathfrak{X}(B) : \begin{aligned} \text{there exists $f_\chi \in \overline{\Qd}(X)^\times$ such that}\\ \text{$b\cdot f_\chi = \chi(b)\cdot {f_\chi}$ for every $b \in B$} \end{aligned} \aftergroup\egroup\right\}\text{.} \end{align*} Note that for every $\chi\in\mathscr{M}$, the function $f_\chi$ is uniquely determined up to a constant factor because of the dense $B$-orbit in $X$. The \emph{set of colors} $\mathscr{D}$ is the set of $B$-invariant prime divisors on $X$ that are not $G$-invariant. Moreover, we have the \emph{valuation cone} $\mathscr{V} \subseteq \mathscr{N}_{\mathbb{Q}} = \Hom(\mathscr{M}, \mathbb{Q})$, which can be identified with the $\mathbb{Q}$-valued $G$-invariant discrete valuations on $\overline{\Qd}(X)^\times$. By Losev's uniqueness theorem \cite[Theorem~1]{los09a}, the combinatorial invariants $(\mathscr{M}, \mathscr{V}, \mathscr{D})$ uniquely determine the birational class of (i.\,e.,~the open $G$-orbit in) the spherical $G$-variety $X$ over $\overline{\Qd}$. Now let $\Delta$ be the set of all $B$-invariant prime divisors on $X$. There is a map $\mathfrak{c} \colon \Delta \to \mathscr{N}_\mathbb{Q}$ defined by $\langle \mathfrak{c}(D), \chi \rangle = \nu_D(f_\chi)$, where $\nu_D$ is the valuation on $\overline{\Qd}(X)^\times$ induced by the prime divisor $D$. For every $G$-orbit $Z \subseteq X$, we define $\mathscr{W}_Z = \{D \in \Delta : Z \subseteq D\}$. Then the collection \begin{align*} \CF X = \{(\cone(\mathfrak{c}(\mathscr{W}_Z)), \mathscr{W}_Z \cap \mathscr{D}) : Z \subseteq X \text{ is a $G$-orbit}\} \end{align*} is called the \emph{colored fan} of $X$. According to the Luna--Vust theory of spherical embeddings \cite{lv83, kno91}, the colored fan $\CF X$ uniquely determines the spherical $G$-variety~$X$ over $\overline{\Qd}$ among those in the same birational class.
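As a simple illustration, consider the toric case $G = B = T$, where $\mathscr{D} = \emptyset$: for the prime divisor $D_{\rho}$ attached to a ray $\rho$ of the fan with primitive generator $u_{\rho}$, we have \begin{equation*} \langle \mathfrak{c}(D_{\rho}), \chi \rangle = \nu_{D_{\rho}}(f_\chi) = \langle u_{\rho}, \chi \rangle, \quad \text{i.\,e.,~} \mathfrak{c}(D_{\rho}) = u_{\rho}, \end{equation*} so the colored fan is just the usual fan of toric geometry, with every colored cone of the form $(\sigma, \emptyset)$.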
The divisor class group $\Cl X$ can be computed from $\CF X$: by \cite[Proposition~4.1.1]{bri07}, the maps $\mathscr{M} \to \mathbb{Z}^\Delta$, $\chi \mapsto \Div f_\chi$ and $\mathbb{Z}^\Delta \to \Cl X$, $D \mapsto [D]$ fit into the exact sequence $\mathscr{M} \to \mathbb{Z}^\Delta \to \Cl X \to 0$. Spherical varieties with $\mathscr{V} = \mathscr{N}_{\mathbb{Q}}$ are called \emph{horospherical}. These include flag varieties and toric varieties. In the latter case, $G=B=T$ is a torus, and we have $\mathscr{V} = \mathscr{N}_{\mathbb{Q}}$ and $\mathscr{D} = \emptyset$. \subsection{Semisimple rank one}\label{sec:ssr1} Let $X$ be a spherical $G$-variety over $\overline{\Qd}$. If the connected reductive group~$G$ has semisimple rank one, we may assume $G = \mathrm{SL}_2\times \mathbb{G}_\mathrm{m}^r$ by passing to a finite cover. As a further simplification, we replace the action by a \emph{smart action} as introduced in \cite[Definition~4.3]{ab04}. As before, let $G/H = (\mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}^r)/H$ be the open orbit in $X$. Let $H'\times \mathbb{G}_\mathrm{m}^r = H\cdot \mathbb{G}_\mathrm{m}^r \subseteq \mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}^r$. Then the homogeneous space $\mathrm{SL}_2/H'$ is spherical, and hence either $H'$ is a maximal torus in $\mathrm{SL}_2$ (\emph{the case $T$}) or $H'$ is the normalizer of a maximal torus in $\mathrm{SL}_2$ (\emph{the case $N$}) or the homogeneous space $\mathrm{SL}_2/H'$ is horospherical. Since the action is smart, in the horospherical case $H'$ is either a Borel subgroup in $\mathrm{SL}_2$ (\emph{the case~$B$}) or the whole group $\mathrm{SL}_2$ (\emph{the case $G$}). Now let $T \subset G = \mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}^r$ be a maximal torus, and let $\alpha \in \mathfrak{X}(T) \cong \mathfrak{X}(B)$ be the simple root with respect to a Borel subgroup $B \subset G$. 
It follows from the general theory of spherical varieties that in the cases $T$ and $N$, we always have $\mathscr{V} = \{v \in \mathscr{N}_\mathbb{Q} : \langle v, \alpha \rangle \le 0\}$. The colored cones of the form $(\Qd_{\ge0}\cdot u, \emptyset) \in \CF X$, where $u \in \mathscr{N} \cap \mathscr{V}$ is a primitive element, correspond to the $G$-invariant prime divisors in $X$. Let $(\Qd_{\ge0} \cdot u_{0j}, \emptyset) \in \CF X$ for $j = 1, \dots, J_0$ be those with $u_{0j} \in \mathscr{V} \cap (-\mathscr{V})$, and let $(\Qd_{\ge0} \cdot u_{3j}, \emptyset) \in \CF X$ for $j = 1, \dots, J_3$ be those with $u_{3j} \notin \mathscr{V} \cap (-\mathscr{V})$. We denote by $D_{ij}$ the $G$-invariant prime divisor in $X$ corresponding to $(\Qd_{\ge0} \cdot u_{ij}, \emptyset) \in \CF X$. Then we have $\mathfrak{c}(D_{ij}) = u_{ij}$. We define $h_{3j} = -\langle u_{3j}, \alpha \rangle$. The following descriptions of the Cox rings in the different cases can be explicitly obtained from \cite[Theorem~4.3.2]{bri07} or \cite[Theorem~3.6]{gag14}. \smallskip Case $T$: There are two colors $D_{11}, D_{12} \in \mathscr{D}$, and we have $\mathfrak{c}(D_{11}) + \mathfrak{c}(D_{12}) = \alpha^\vee|_{\mathscr{M}}$. The Cox ring is given by \begin{equation}\label{eq:coxring_type_T} \mathscr{R}(X) = \overline{\Qd}[x_{01}, \dots, x_{0J_0}, x_{11}, x_{12}, x_{21}, x_{22}, x_{31}, \dots, x_{3J_3}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}^{h_{31}} \cdots x_{3J_3}^{h_{3J_3}}), \end{equation} cf.\ \eqref{typeT}, with \begin{align*} \deg(x_{11}) &= \deg(x_{21}) = [D_{11}] \in \Cl X\text{,}\quad \deg(x_{12}) = \deg(x_{22}) = [D_{12}] \in \Cl X\text{, and} \\ \deg(x_{ij}) &= [D_{ij}] \in \Cl X \text{ for $i \in \{0,3\}$.} \end{align*} \smallskip Case $N$: There is one color $D_{11} \in \mathscr{D}$, and we have $\mathfrak{c}(D_{11}) = \tfrac{1}{2}\alpha^\vee|_{\mathscr{M}}$.
The Cox ring is given by \begin{equation*} \mathscr{R}(X) = \overline{\Qd}[x_{01}, \dots, x_{0J_0}, x_{11}, x_{12}, x_{21}, x_{31}, \dots, x_{3J_3}] /(x_{11}x_{12}-x_{21}^2-x_{31}^{h_{31}} \cdots x_{3J_3}^{h_{3J_3}}) \end{equation*} with \begin{equation*} \deg(x_{11}) = \deg(x_{12}) = \deg(x_{21}) = [D_{11}] \in \Cl X\text{,}\quad \deg(x_{ij}) = [D_{ij}] \in \Cl X \text{ for $i \in \{0,3\}$.} \end{equation*} \smallskip Case $B$: We mention this case only for completeness since $X$ is isomorphic to a toric variety here (as an abstract variety with a different group action). There is one color $D_{11} \in \mathscr{D}$, and we have $\mathfrak{c}(D_{11}) = \alpha^\vee|_{\mathscr{M}}$. The Cox ring is given by $ \mathscr{R}(X) = \overline{\Qd}[x_{01}, \dots, x_{0J_0}, x_{11}, x_{12}]$ with \begin{equation*} \deg(x_{11}) = \deg(x_{12}) = [D_{11}] \in \Cl X\text{,}\quad \deg(x_{0j}) = [D_{0j}] \in \Cl X \text{.} \end{equation*} \smallskip Case $G$: We mention this case only for completeness since $X$ is a toric $\mathbb{G}^r_m$-variety here. We have $\mathscr{D} = \emptyset$. The Cox ring is given by $\mathscr{R}(X) = \overline{\Qd}[x_{01}, \dots, x_{0J_0}]$ with $ \deg(x_{0j}) = [D_{0j}] \in \Cl X. $ \subsection{Ambient toric varieties}\label{sec:ambient} Every quasiprojective variety $X$ with finitely generated Cox ring may be embedded into a toric variety $Y^\circ$ with nice properties, as described in \cite[3.2.5]{adhl15}. For a spherical variety $X$, this is explicitly described in \cite{gag17}. According to \cite[Theorem~4.3.2]{bri07}, the Cox ring of $X$ is generated by the union of sets $x_{D1}, \dots, x_{Dr_D} \in \mathscr{R}(X)$ for every $D \in \Delta$. We have $r_D = 1$ if $D \notin \mathscr{D}$ and $r_D \ge 2$ if $D \in \mathscr{D}$. Each $x_{Di}$ corresponds to a ray $\rho_{Di}$ in the fan $\Sigma^\circ$ of the ambient toric variety $Y^\circ$. Even if $X$ is projective, the quasiprojective toric variety $Y^\circ$ might not be projective. 
This is the case if and only if the colored cones in $\CF X$ do not cover $\mathscr{N}_\mathbb{Q}$. Any $\mathscr{W} \subseteq \Delta$ defines a pair $(\cone(\mathfrak{c}(\mathscr{W})), \mathscr{W} \cap \mathscr{D})$. If $\cone(\mathfrak{c}(\mathscr{W}))$ is strictly convex, we call the pair a \emph{supported colored cone} if $\cone(\mathfrak{c}(\mathscr{W}))^\circ \cap \mathscr{V} \ne \emptyset$ and an \emph{unsupported colored cone} if $\cone(\mathfrak{c}(\mathscr{W}))^\circ \cap \mathscr{V} = \emptyset$. If we can extend $\CF X$ by some of these unsupported colored cones to a collection $(\CF X)_{\rm ext}$ such that every face (in the sense of \cite[Definition~15.3]{tim11}) of a colored cone is again in $(\CF X)_{\rm ext}$, such that different colored cones intersect in faces, and such that the colored cones cover the whole space $\mathscr{N}_\mathbb{Q}$, then $(\CF X)_{\rm ext}$ yields a toric variety $Y$ that completes $Y^\circ$. We recall here how to obtain the fan $\Sigma$ of the toric variety $Y$ from the (possibly extended) colored fan $(\CF X)_{\rm ext}$. Let $\Psi_D = \{\rho_{D1}, \dots, \rho_{Dr_D}\}$, and define $\Psi_D^j = \Psi_D \setminus \{\rho_{Dj}\}$ for every $1 \le j \le r_D$. 
For every subset $\mathscr{W} \subseteq \Delta$, consider the sets of cones \begin{align*} \Phi(\mathscr{W}) = \bigg\{\cone\bigg(\bigcup_{D \in \mathscr{W}}\Psi_D \cup \bigcup_{D \in \Delta\setminus \mathscr{W}}\Psi_D^{j(D)}\bigg) : j \in \mathbb{N}^{\Delta\setminus \mathscr{W}},\ 1 \le j(D) \le r_D \bigg\}\text{.} \end{align*} Then we have \begin{equation}\label{eq:Sigma_max} \Sigma = \bigcup_{(\cone(\mathfrak{c}(\mathscr{W})), \mathscr{W} \cap \mathscr{D}) \in (\CF X)_{\rm ext}} \Phi(\mathscr{W}) \quad\text{and}\quad \Sigma_\mathrm{max} = \bigcup_{(\cone(\mathfrak{c}(\mathscr{W})), \mathscr{W} \cap \mathscr{D}) \in (\CF X)_{\rm ext,max}} \Phi(\mathscr{W})\text{.} \end{equation} \subsection{Manin's conjecture} We now present the main result of this paper, which implies all theorems stated in the introduction. \begin{theorem}\label{manin-cor} Let $X$ be a smooth split spherical almost Fano variety of semisimple rank one and type $T$ over $\mathbb{Q}$ with semiample $\omega_X^\vee$ satisfying \eqref{eq:toric_smooth} whose colored fan $\CF X$ contains a maximal cone without colors. The corresponding counting problem as in Proposition~\ref{prop:countingproblem_abstract} features a torsor equation \eqref{typeT} with exponents $h_{ij}$, a height matrix $\mathscr{A}$ as in \eqref{newA} and coprimality conditions $S_1, \ldots, S_r$ as in \eqref{gcd}. Choose $\bm \zeta $ satisfying \eqref{zeta1}, let $\lambda$ be as in \eqref{lambdabeta} and choose $\bm \tau^{(2)}$ as in \eqref{tau1}. With these data, assume that \eqref{fail} and \eqref{ass2} hold. Then the Manin--Peyre conjecture holds for $X$ with respect to the anticanonical height function \eqref{eq:height_definition}. \end{theorem} \begin{proof} It is enough to check all assumptions of Theorem~\ref{thm:manin-peyre}. We observe that $X$ is as in Proposition~\ref{prop:peyre} by our assumptions. In particular, by \eqref{eq:coxring_type_T}, its Cox ring is as required.
By \eqref{eq:Sigma_max}, a maximal cone without colors in $\CF X$ gives four maximal cones $\sigma \in \Sigma_\mathrm{max}$ such that the variables corresponding to the rays of $\sigma$ include precisely one of $x_{11},x_{21}$ and precisely one of $x_{12},x_{22}$ in \eqref{eq:coxring_type_T}; it is not hard to see that one of these four cones satisfies \eqref{eq:assumption_real_density_strong}. Next we check that Theorem~\ref{analytic-theorem} applies. The counting problem is of the required form by Proposition~\ref{prop:countingproblem_abstract} and \eqref{eq:coxring_type_T}. Hypothesis~\ref{H1} holds by Proposition~\ref{circle-method}, whose assumptions are satisfied by \eqref{eq:coxring_type_T} and which allows us to choose $${\bm \beta} = \Big( \frac{1}{2} - \frac{1}{5\max_{ij} h_{ij}}, \frac{1}{2} - \frac{1}{5\max_{ij} h_{ij}}, \frac{2}{5\max_{ij} h_{ij}}\Big), $$ so that \eqref{1b} holds. Moreover, we can (and will) choose ${\bm \zeta}$ so that \eqref{J-cond} holds. Hypothesis~\ref{H2} holds by Proposition~\ref{propH2}. The conditions \eqref{1c}, \eqref{1a} hold by Lemmas~\ref{rank} and \ref{pos}. \end{proof} The assumption \eqref{eq:toric_smooth} can be read off of the colored fan $\CF X$, using the method described in Section~\ref{sec:ambient}. The existence of a maximal cone without colors in $\CF X$ is straightforward to check and clearly holds in all our examples below; alternatively, \eqref{eq:assumption_real_density_strong} can be checked directly. As mentioned after Proposition \ref{propH2}, if \eqref{fail} fails, we can apply an alternative, but slightly more complicated criterion. Assumption \eqref{ass2} requires elementary linear algebra (and can be checked quickly by computer if desired). 
\begin{remark}\label{simpl} If the torsor equation is $x_{11} x_{12} + x_{21} x_{22} + x_{31} x_{32} = 0$, we can use \cite[Proposition 1.2]{BBS2} instead of Proposition~\ref{circle-method} to verify Hypothesis~\ref{H1}, which conveniently yields again ${\bm \beta} = (1/3 + \varepsilon, 1/3 + \varepsilon, 1/3 + \varepsilon)$ and more importantly \begin{equation*} \lambda = 1. \end{equation*} The advantage is that the third line of \eqref{simplex2} is trivially satisfied (the polytope is empty), so that checking \eqref{ass2} requires a little less computational effort. \end{remark} \section{Spherical Fano threefolds}\label{sec:fano3folds} \subsection{Geometry}\label{sec22} According to \cite[\S 6.3]{hofscheier}, all horospherical smooth Fano threefolds are either toric or flag varieties. Furthermore, there are nine smooth Fano threefolds over $\overline{\Qd}$ that are spherical, but not horospherical; they are equipped with an action of $G=\mathrm{SL}_2\times \mathbb{G}_\mathrm{m}$. The notation $T$ and $N$ in \cite[Table~6.5]{hofscheier} and in our Table~\ref{tab:classification_spherical} refers to the cases in Section~\ref{sec:ssr1}. \begin{table}[ht] \centering \begin{tabular}{ccccc} \hline rk Pic & Hofscheier & Mori--Mukai & torsor equation & remark\\ \hline\hline 2 & $T_1 12$ & II.31 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}^2$} & eq. $\mathbb{G}_\mathrm{a}^3$-cpct.\\ 2 & $N_1 6, N_1 7$ & II.30 & {$x_{11}x_{12}-x_{21}^2-x_{31}x_{32}$} & eq.
$\mathbb{G}_\mathrm{a}^3$-cpct.\\ 2 & $N_1 8$ & II.29 & {$x_{11}x_{12}-x_{21}^2-x_{31}x_{32}^2x_{33}$} & \\ \hline 3 & $T_1 18$ & III.24 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}$} & variety $X_1$ \\ 3 & $T_1 21$ & III.20 & {$x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2$} & variety $X_2$\\ 3 & $N_0 3$ & III.22 & $x_{11}x_{12}-x_{21}^2-x_{31}x_{32}$ & \\ 3 & $N_1 9$ & III.19 & $x_{11}x_{12}-x_{21}^2-x_{31}x_{32}$ & \\ \hline 4 & $T_0 3$ & IV.8 & $x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}$ & variety $X_3$ \\ 4 & $T_1 22$ & IV.7 & $x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}$ & variety $X_4$\\ \hline \end{tabular} \caption{Smooth Fano threefolds that are spherical, but not horospherical} \label{tab:classification_spherical} \end{table} We proceed to describe the four $T$ cases $X_1, \dots, X_4 $ in Table \ref{tab:classification_spherical} that are not equivariant $\mathbb{G}_\mathrm{a}^3$-compactifications \cite{arXiv:1802.08090} in more detail. In each case, we first construct a split form over $\mathbb{Q}$ following the elementary description from the Mori--Mukai classification, and then we give the description using the Luna--Vust theory of spherical embeddings from Hofscheier's list. Finally we describe in each case an ambient toric variety $Y_i$ satisfying \eqref{eq:toric_smooth} that can be used with Sections~\ref{sec:charts_torsors}--\ref{sec:tamagawa_cox}. Let $\varepsilon_1 \in \mathfrak{X}(B)$ be a primitive character of $\mathbb{G}_\mathrm{m}$ composed with the natural inclusion $\mathfrak{X}(\mathbb{G}_\mathrm{m}) \to \mathfrak{X}(B)$. \smallskip \subsubsection{$X_1$ of type III.24 and $X_4$ of type IV.7}\label{IV.7} Consider $\mathbb{P}^2_\mathbb{Q} \times \mathbb{P}^2_\mathbb{Q}$ with coordinates $(z_{11}:z_{21}:z_{31})$ and $(z_{12}:z_{22}:z_{32})$, and the hypersurface $W_4 = \mathbb{V}(z_{11}z_{12}-z_{21}z_{22}-z_{31}z_{32}) \subset \mathbb{P}^2_\mathbb{Q} \times \mathbb{P}^2_\mathbb{Q}$ of bidegree $(1,1)$. This is a smooth Fano threefold of type II.32. 
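Smoothness of $W_4$ can be checked directly: the partial derivatives of the defining form with respect to the first set of coordinates are \begin{equation*} \partial_{z_{11}} = z_{12}, \qquad \partial_{z_{21}} = -z_{22}, \qquad \partial_{z_{31}} = -z_{32}, \end{equation*} and these vanish simultaneously only if $(z_{12}:z_{22}:z_{32})$ fails to define a point of $\mathbb{P}^2_\mathbb{Q}$, which is impossible; hence the bihomogeneous Jacobian criterion applies at every point of $W_4$.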
It contains the curves \begin{align*} C_{01} &= \mathbb{V}(z_{11},z_{21},z_{32}) = \{(0:0:1)\}\times\mathbb{V}(z_{32})\text{,}\\ C_{02} &= \mathbb{V}(z_{12},z_{22},z_{31}) = \mathbb{V}(z_{31})\times\{(0:0:1)\} \end{align*} of bidegrees $(0,1)$ and $(1,0)$, respectively. Let $X_1$ be the blow-up of $W_4$ in the curve $C_{01}$. This is a smooth Fano threefold of type III.24. Moreover, let $X_4$ be the further blow-up in the curve $C_{02}$ (which is disjoint from the curve $C_{01}$ in $W_4$). This is a smooth Fano threefold of type IV.7. We may define an action of $G = \mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}$ on $W_4$ by \begin{equation*} (A,t)\cdot \mathopen{}\mathclose\bgroup\left( \begin{pmatrix} z_{11} & z_{22} \\ z_{21} & z_{12} \end{pmatrix} ,z_{31},z_{32}\aftergroup\egroup\right) = \mathopen{}\mathclose\bgroup\left(A\cdot \begin{pmatrix} z_{11} & z_{22} \\ z_{21} & z_{12} \end{pmatrix} \cdot \begin{pmatrix} t^{-1} & 0 \\ 0 & t \end{pmatrix},z_{31},z_{32}\aftergroup\egroup\right), \end{equation*} which turns $W_4$ into a spherical variety. The following description using the Luna--Vust theory of spherical embeddings can be easily verified. The lattice $\mathscr{M}$ has basis $(\frac{1}{2}\alpha + \varepsilon_1, \frac{1}{2}\alpha - \varepsilon_1)$. We denote the corresponding dual basis of the lattice $\mathscr{N}$ by $(d_1, d_2)$. Then there are two colors with valuations $d_1$ and $d_2$, and the valuation cone is given by $\mathscr{V} = \{v \in \mathscr{N}_\mathbb{Q} : \langle v, \alpha \rangle \le 0\}$. Since the curves $C_{01}$ and $C_{02}$ are $G$-invariant, the varieties $X_1$ and $X_4$ are spherical $G$-varieties, and the blow-up morphisms $X_4 \to X_1 \to W_4$ can be described by maps of colored fans. The following figure illustrates this. 
\begin{align*} \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (-1, 1) circle (2pt); \draw (1, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (-4,4); \draw (0,0) -- (4,-4); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (0, -1) node[anchor=east] {{\tiny{$u_{32}$}}}; \path (0, 1) node[anchor=south west] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south west] {{\tiny{$d_{1}$}}}; \path (-1, 1) node[anchor=south west] {{\tiny{$u_{02}$}}}; \path (1, -1) node[anchor=south west] {{\tiny{$u_{01}$}}}; \begin{scope} \clip (0,0) -- (-1,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (1,-1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (1,-1) -- (1,0) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (13pt); \end{scope} \begin{scope} \clip (1,0) -- (0,1) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,1) -- (-1,1) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (13pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longrightarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 
0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (1, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (4,-4); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (0, -1) node[anchor=east] {{\tiny{$u_{32}$}}}; \path (0, 1) node[anchor=south west] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south west] {{\tiny{$d_{1}$}}}; \path (1, -1) node[anchor=south west] {{\tiny{$u_{01}$}}}; \begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (1,-1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (0,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (1,-1) -- (1,0) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (13pt); \end{scope} \begin{scope} \clip (1,0) -- (0,1) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (17pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longrightarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (0, -1) node[anchor=west] {{\tiny{$u_{32}$}}}; \path (0, 1) node[anchor=east] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south] {{\tiny{$d_{1}$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} 
\begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \end{tikzpicture} \end{align*} Here the elements $u_{31} = -d_1$ and $u_{32} = -d_2$ are the valuations of the $G$-invariant prime divisors $\mathbb{V}(z_{31})$ and $\mathbb{V}(z_{32})$, respectively, while the elements $u_{01} = d_1-d_2$ and $u_{02} = -d_1+d_2$ are the valuations of the exceptional divisors $E_{01}$ and $E_{02}$ over $C_{01}$ and $C_{02}$, respectively. In particular, we see that $X_1$ is the fourth line and that $X_4$ is the last line of Hofscheier's list. The dotted circles in the colored fans of $X_1$ and $X_4$ specify projective ambient toric varieties $Y_1$ and $Y_4$, respectively. From the description of $\Sigma_\mathrm{max}$ in Section~\ref{sec:ambient}, we deduce that $Y_1$ and $Y_4$ are smooth, that $-K_{X_1}$ is ample on $Y_1$, and that $-K_{X_4}$ is ample on $Y_4$. Hence assumption~\eqref{eq:toric_smooth} holds. \subsubsection{$X_2$ of type III.20} Consider $\mathbb{P}^4_\mathbb{Q}$ with coordinates $(z_{11} : z_{12} : z_{21} : z_{22} : z_{33})$ and the hypersurface $Q = \mathbb{V}(z_{11}z_{12} - z_{21}z_{22} - z_{33}^2) \subset \mathbb{P}^4_\mathbb{Q}$. It contains the lines \begin{align*} C_{31} = \mathbb{V}(z_{12},z_{22},z_{33}), \quad C_{32} = \mathbb{V}(z_{11},z_{21},z_{33})\text{.} \end{align*} Let $X_2$ be the blow-up of $Q$ in the lines $C_{31}$ and $C_{32}$. This is a smooth Fano threefold of type III.20. 
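Note that $Q$ is indeed smooth: the Gram matrix of the quadratic form $z_{11}z_{12} - z_{21}z_{22} - z_{33}^2$ is block diagonal with two $2\times 2$ blocks of determinant $-\tfrac{1}{4}$ and the $1\times 1$ block $(-1)$, hence has determinant $-\tfrac{1}{16} \neq 0$ and full rank $5$. Moreover, the lines $C_{31}$ and $C_{32}$ lie on $Q$, since the form vanishes identically on $z_{12} = z_{22} = z_{33} = 0$ and on $z_{11} = z_{21} = z_{33} = 0$, respectively.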
We may define an action of $G = \mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}$ on $Q$ by \begin{equation*} (A,t)\cdot \mathopen{}\mathclose\bgroup\left( \begin{pmatrix} z_{11} & z_{22} \\ z_{21} & z_{12} \end{pmatrix} ,z_{33}\aftergroup\egroup\right) = \mathopen{}\mathclose\bgroup\left(A\cdot \begin{pmatrix} z_{11} & z_{22} \\ z_{21} & z_{12} \end{pmatrix} \cdot \begin{pmatrix} t^{-1} & 0 \\ 0 & t \end{pmatrix},z_{33}\aftergroup\egroup\right), \end{equation*} which turns $Q$ into a spherical variety. Since the lines $C_{31}$ and $C_{32}$ are $G$-invariant, the variety $X_2$ is a spherical $G$-variety. Since $X_2$ is also the blow-up of $W_4$ in the curve $C_{33} = \mathbb{V}(z_{31}, z_{32})$, it has the same birational invariants as $W_4$, and the blow-up morphisms $Q \leftarrow X_2 \to W_4$ can be described by maps of colored fans as illustrated in the following picture. \begin{align*} \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (-4,-4); \path (0, 1) node[anchor=south west] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south west] {{\tiny{$d_{1}$}}}; \path (-1, -1) node[anchor=north west] {{\tiny{$u_{33}$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,-1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-2, -2) -- (2, 0) -- cycle; \draw (0,0) circle (13pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longleftarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) 
-- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (-1, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (-4,-4); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (0, -1) node[anchor=west] {{\tiny{$u_{32}$}}}; \path (0, 1) node[anchor=south west] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south west] {{\tiny{$d_{1}$}}}; \path (-1, -1) node[anchor=north west] {{\tiny{$u_{33}$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (-1, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (-1,-1) -- (0,-1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (1,0) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (1,0) -- (0,1) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (17pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longrightarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (0, -1) node[anchor=west] {{\tiny{$u_{32}$}}}; \path (0, 1) node[anchor=east] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south] 
{{\tiny{$d_{1}$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \end{tikzpicture} \end{align*} In particular, we see that $X_2$ is the fifth line of Hofscheier's list. As before, the dotted circle in the colored fan of $X_2$ specifies a projective ambient toric variety $Y_2$, which satisfies \eqref{eq:toric_smooth}. \subsubsection{$X_3$ of type IV.8} Consider $W_3=\mathbb{P}^1_\mathbb{Q} \times \mathbb{P}^1_\mathbb{Q} \times \mathbb{P}^1_\mathbb{Q}$ with coordinates $(z_{01}:z_{02})$, $(z_{11}:z_{21})$ and $(z_{12}:z_{22})$. This is a smooth Fano threefold of type III.27. Let $C_{31}$ be the curve $\mathbb{V}(z_{02},z_{11}z_{12}-z_{21}z_{22})$ of tridegree $(0,1,1)$ on $W_3$. Let $X_3$ be the blow-up of $W_3$ in $C_{31}$. This is a smooth Fano threefold of type IV.8. We may define an action of $G = \mathrm{SL}_2 \times \mathbb{G}_\mathrm{m}$ on $W_3$ by \begin{equation*} (A,t)\cdot \mathopen{}\mathclose\bgroup\left(z_{01},z_{02}, \begin{pmatrix} z_{11} & z_{22} \\ z_{21} & z_{12} \end{pmatrix} \aftergroup\egroup\right) = \mathopen{}\mathclose\bgroup\left(t\cdot z_{01}, z_{02}, A\cdot \begin{pmatrix} z_{11} & z_{22} \\ z_{21} & z_{12} \end{pmatrix}\aftergroup\egroup\right)\text{,} \end{equation*} which turns $W_3$ into a spherical variety. Its Luna--Vust description is as follows. The lattice $\mathscr{M}$ has basis $(\alpha, \varepsilon_1)$. We denote the corresponding dual basis of the lattice $\mathscr{N}$ by $(d, \varepsilon_1^*)$. Then there are two colors with the same valuation $d = \frac{1}{2}\alpha^\vee$, and the valuation cone is given by $\mathscr{V} = \{v \in \mathscr{N}_\mathbb{Q} : \langle v, \alpha \rangle \le 0\}$.
Since the curve $C_{31}$ is $G$-invariant, the variety $X_3$ is a spherical $G$-variety, and the blow-up morphism $X_3 \to W_3$ can be described by the map of colored fans in the figure below. \begin{align*} \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (0, 3) -- (0, -3) -- (-3, -3) -- (-3, 3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (-1, 1) circle (2pt); \draw (0,0) -- (0,3); \draw (0,0) -- (-4,4); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{32}$}}}; \path (0, -1) node[anchor=west] {{\tiny{$u_{01}$}}}; \path (0, 1) node[anchor=west] {{\tiny{$u_{02}$}}}; \path (1, 0) node[anchor=north west] {{\tiny{$d$}}}; \path (-1, 1) node[anchor=east] {{\tiny{$u_{31}$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1,1) -- (-1,0) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \draw[densely dotted, thick] (0,0) -- (4,0); \begin{scope} \clip (1,0) -- (0,1) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,-1) -- (1,0) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (17pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longrightarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (0, 3) -- (0, -3) -- (-3, -3) -- (-3, 3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) 
circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (0,0) -- (0,3); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{32}$}}}; \path (0, -1) node[anchor=west] {{\tiny{$u_{01}$}}}; \path (0, 1) node[anchor=west] {{\tiny{$u_{02}$}}}; \path (1, 0) node[anchor=west] {{\tiny{$d$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \end{tikzpicture} \end{align*} Here the elements $u_{01} = -\varepsilon_1^*$ and $u_{02} = \varepsilon_1^*$ are the valuations of the $G$-invariant prime divisors $\mathbb{V}(z_{01})$ and $\mathbb{V}(z_{02})$, respectively, the element $u_{32} = -d$ is the valuation of the $G$-invariant prime divisor $\mathbb{V}(z_{11}z_{12}-z_{21}z_{22})$, and $u_{31} = -d + \varepsilon_1^*$ is the valuation of the exceptional divisor $E_{31}$ over $C_{31}$. This is the penultimate line of Hofscheier's list. The dotted circles in the colored fan of $X_3$ are meant to specify a projective ambient toric variety~$Y_3$, but since there are two colors with the same valuation $d$, the picture is ambiguous. There are three possibilities for which unsupported colored cones could be added to the colored cone of $X_3$ to obtain an ambient toric variety: \begin{enumerate} \item $(\cone(u_{01}, d), \{D_{11}\})$ and $(\cone(u_{02}, d), \{D_{11}\})$, \item $(\cone(u_{01}, d), \{D_{12}\})$ and $(\cone(u_{02}, d), \{D_{12}\})$, or \item $(\cone(u_{01}, d), \{D_{11}, D_{12}\})$ and $(\cone(u_{02}, d), \{D_{11}, D_{12}\})$. \end{enumerate} From the description of $\Sigma_\mathrm{max}$ in Section~\ref{sec:ambient}, we deduce that the ambient toric variety in case $(3)$ is singular. On the other hand, in cases $(1)$ and $(2)$, the ambient toric variety is smooth, and $-K_{X_3}$ is not ample but semiample on it.
We fix $Y_3$ to be as in case $(1)$, satisfying \eqref{eq:toric_smooth}. \subsection{Cox rings and torsors}\label{sec:crt} We proceed to compute explicitly the Cox rings $\mathscr{R}(X)$ in the examples from Section~\ref{sec22} using Section~\ref{sec:ssr1} together with \cite{arXiv:1408.5358} since we work over $\mathbb{Q}$ here. To obtain the universal torsor $\mathscr{T}=X_0$, we compute the set $Z_Y$ as in Section~\ref{sec:torsors_models}. Moreover, we give simplified expressions for $Z_X = Z_Y \cap \Spec \mathscr{R}(X)$, which can be verified using the equation $\Phi$. Finally the anticanonical class is computed using \cite[4.1 and 4.2]{bri97} or \cite[Proposition~3.3.3.2]{adhl15}. In the case of a spherical variety of semisimple rank one of type $T$ or $N$, this is simply the sum of all $B$-invariant divisors. \subsubsection{Type III.24} We have \begin{align*} \mathscr{R}(X_1) = \mathbb{Q}[x_{01},x_{11},x_{12},x_{21},x_{22},x_{31},x_{32}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}) \end{align*} with $\Pic X_1 \cong \Zd^3$, where \begin{align*} & \deg(x_{01})=(0,0,1), \quad \deg(x_{11})=\deg(x_{21})=(0,1,-1),\\ &\deg(x_{12})=\deg(x_{22})=(1,0,0), \quad \deg(x_{31})=(0,1,0), \quad \deg(x_{32})=(1,0,-1)\text{.} \end{align*} Note that each generator $x_{ij}$ of the Cox ring corresponds to the strict transform of $\mathbb{V}(z_{ij})$ or to the element $u_{ij}$ in Section~\ref{IV.7}. The anticanonical class is $-K_{X_1}=(2,2,-1)$. 
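As a quick consistency check, note that the given grading indeed makes the defining relation homogeneous: each of its three monomials has degree
\begin{align*}
\deg(x_{11}x_{12}) = \deg(x_{21}x_{22}) = \deg(x_{31}x_{32}) = (1,1,-1)\text{.}
\end{align*}
Moreover, subtracting this degree from the sum $(3,3,-2)$ of the degrees of the seven generators recovers $-K_{X_1}=(2,2,-1)$, in accordance with \cite[Proposition~3.3.3.2]{adhl15}.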
A universal torsor over $X_1$ is \begin{equation*} \mathscr{T}_1 = \Spec\mathscr{R}(X_1) \setminus Z_{Y_1} = \Spec\mathscr{R}(X_1) \setminus Z_{X_1}\text{,} \end{equation*} where \begin{align*} Z_{Y_1} &= \mathbb{V}(x_{11},x_{21},x_{31}) \cup \mathbb{V}(x_{11},x_{21},x_{32}) \cup \mathbb{V}(x_{12},x_{22},x_{01}) \cup \mathbb{V}(x_{12},x_{22},x_{32}) \cup \mathbb{V}(x_{01},x_{31})\text{,}\\ Z_{X_1} &= \mathbb{V}(x_{11},x_{21}) \cup \mathbb{V}(x_{12},x_{22},x_{32}) \cup \mathbb{V}(x_{01},x_{31})\text{.} \end{align*} \subsubsection{Type III.20} The Cox ring is \begin{align*} \mathscr{R}(X_2) = \mathbb{Q}[x_{11},x_{12},x_{21},x_{22},x_{31},x_{32},x_{33}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2) \end{align*} with $\Pic X_2 \cong \Zd^3$, where \begin{align*} & \deg(x_{11})=\deg(x_{21})=(0,1,0)\text{,}\quad \deg(x_{12})=\deg(x_{22})=(1,0,0)\text{,}\\ & \deg(x_{31})=(0,1,-1), \quad \deg(x_{32})=(1,0,-1), \quad \deg(x_{33}) = (0,0,1)\text{.} \end{align*} The anticanonical class is $-K_{X_2}=(2,2,-1)$. A universal torsor over $X_2$ is \begin{equation*} \mathscr{T}_2 = \Spec\mathscr{R}(X_2) \setminus Z_{Y_2} = \Spec\mathscr{R}(X_2) \setminus Z_{X_2}\text{,} \end{equation*} where \begin{align*} Z_{Y_2} &= \mathbb{V}(x_{11},x_{21},x_{31}) \cup \mathbb{V}(x_{11},x_{21},x_{33}) \cup \mathbb{V}(x_{12},x_{22},x_{32}) \cup \mathbb{V}(x_{12},x_{22},x_{33}) \cup \mathbb{V}(x_{31},x_{32}),\\ Z_{X_2} &= \mathbb{V}(x_{11},x_{21},x_{31}) \cup \mathbb{V}(x_{11},x_{21},x_{33}) \cup \mathbb{V}(x_{12},x_{22},x_{32}) \cup \mathbb{V}(x_{12},x_{22},x_{33}) \cup \mathbb{V}(x_{31},x_{32}). 
\end{align*} \subsubsection{Type IV.8} The Cox ring is \begin{align*} \mathscr{R}(X_3) = \mathbb{Q}[x_{01},x_{02},x_{11},x_{12},x_{21},x_{22},x_{31},x_{32}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}) \end{align*} with $\Pic X_3 \cong \Zd^4$, where \begin{align*} & \deg(x_{01})=(1,0,0,0), \quad \deg(x_{02})=(1,0,0,-1)\text{,}\\ & \deg(x_{11})=\deg(x_{21})=(0,0,1,0)\text{,}\quad \deg(x_{12})=\deg(x_{22})=(0,1,0,0)\text{,}\\ & \deg(x_{31})=(0,0,0,1), \quad \deg(x_{32})=(0,1,1,-1)\text{.} \end{align*} The anticanonical class is $-K_{X_3}=(2,2,2,-1)$. A universal torsor over $X_3$ is \begin{equation*} \mathscr{T}_3 = \Spec\mathscr{R}(X_3) \setminus Z_{Y_3} = \Spec\mathscr{R}(X_3) \setminus Z_{X_3}\text{,} \end{equation*} where \begin{align*} Z_{Y_3} &= \mathbb{V}(x_{11},x_{21},x_{31}) \cup \mathbb{V}(x_{11},x_{21},x_{32}) \cup \mathbb{V}(x_{12},x_{22}) \cup \mathbb{V}(x_{02},x_{32}) \cup \mathbb{V}(x_{01},x_{02}) \cup \mathbb{V}(x_{01},x_{31}),\\ Z_{X_3} &= \mathbb{V}(x_{11},x_{21}) \cup \mathbb{V}(x_{12},x_{22}) \cup \mathbb{V}(x_{02},x_{32}) \cup \mathbb{V}(x_{01},x_{02}) \cup \mathbb{V}(x_{01},x_{31}). \end{align*} \subsubsection{Type IV.7} The Cox ring is \begin{align*} \mathscr{R}(X_4) = \mathbb{Q}[x_{01},x_{02},x_{11},x_{12},x_{21},x_{22},x_{31},x_{32}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}) \end{align*} with $\Pic X_4 \cong \Zd^4$, where \begin{align*} & \deg(x_{01})=(0,0,0,1), \quad \deg(x_{02})=(0,0,1,0)\text{,}\\ & \deg(x_{11})=\deg(x_{21})=(0,1,0,-1)\text{,}\quad \deg(x_{12})=\deg(x_{22})=(1,0,-1,0)\text{,}\\ & \deg(x_{31})=(0,1,-1,0), \quad \deg(x_{32})=(1,0,0,-1)\text{.} \end{align*} The anticanonical class is $-K_{X_4} = (2,2,-1,-1)$. 
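As a consistency check, the gradings of $\mathscr{R}(X_3)$ and $\mathscr{R}(X_4)$ also make the defining relation homogeneous. For $X_4$, for instance, every monomial of the relation has degree
\begin{align*}
\deg(x_{11}x_{12}) = \deg(x_{21}x_{22}) = \deg(x_{31}x_{32}) = (1,1,-1,-1)\text{,}
\end{align*}
and subtracting this degree from the sum $(3,3,-2,-2)$ of the degrees of the eight generators gives back $-K_{X_4} = (2,2,-1,-1)$, in accordance with \cite[Proposition~3.3.3.2]{adhl15}.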
A universal torsor over $X_4$ is \begin{equation*} \mathscr{T}_4 = \Spec\mathscr{R}(X_4) \setminus Z_{Y_4} = \Spec\mathscr{R}(X_4) \setminus Z_{X_4}\text{,} \end{equation*} where \begin{align*} Z_{Y_4} &= \mathbb{V}(x_{11},x_{21},x_{01}) \cup \mathbb{V}(x_{11},x_{21},x_{31}) \cup \mathbb{V}(x_{11},x_{21},x_{32})\\ &\qquad \cup \mathbb{V}(x_{12},x_{22},x_{02}) \cup \mathbb{V}(x_{12},x_{22},x_{31}) \cup \mathbb{V}(x_{12},x_{22},x_{32})\\ &\qquad \cup \mathbb{V}(x_{02},x_{32}) \cup \mathbb{V}(x_{01},x_{02}) \cup \mathbb{V}(x_{01},x_{31}),\\ Z_{X_4} &= \mathbb{V}(x_{11},x_{21}) \cup \mathbb{V}(x_{12},x_{22}) \cup \mathbb{V}(x_{02},x_{32}) \cup \mathbb{V}(x_{01},x_{02}) \cup \mathbb{V}(x_{01},x_{31}). \end{align*} Note that this is the same variety as $\mathscr{T}_3$, but with a different action of $\mathbb{G}_{\mathrm{m},\mathbb{Q}}^4$. \subsection{Counting problems} Applying Proposition~\ref{prop:countingproblem_abstract} to the Cox rings of the previous section gives the following counting problems, in which $U$ is always the subset where all Cox coordinates are nonzero. To lighten the notation, we generally write $\{x, y\}$ to mean $x$ or $y$, and as in the introduction, we write $N_j(B)$ for $N_{X_j, U_j, H_j}(B)$. \begin{cor}\label{prop:countingproblem_line_11} {\rm (a)} We have \begin{align*} N_1(B) = \frac{1}{8} \#\left\{\mathbf{x} \in \mathbb{Z}^7_{\ne 0} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}=0, \quad \max|\mathscr{P}_1(\mathbf{x})| \le B,\\ &(x_{11},x_{21})=(x_{12},x_{22}, x_{32})=(x_{01},x_{31}) =1\\ \end{aligned} \right\}\text{,} \end{align*} where \begin{equation*} \mathscr{P}_1(\mathbf{x}) =\left\{ \begin{aligned} & x_{31}^2x_{32}^2x_{01}, x_{32}^2x_{01}^3\{x_{11}, x_{21}\}^2, x_{31}^2x_{32}\{x_{12}, x_{22}\},\\ & x_{31} \{x_{11}, x_{21}\} \{x_{12}, x_{22}\}^2, x_{01} \{x_{11}, x_{21}\}^2 \{x_{12}, x_{22}\}^2 \end{aligned} \right\}.
\end{equation*} \\ {\rm (b)} We have \begin{align*} N_2(B) = \frac{1}{8} \#\left\{\mathbf{x} \in \mathbb{Z}^7_{\ne 0} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2=0, \quad \max|\mathscr{P}_2(\mathbf{x})| \le B,\\ &(x_{11},x_{21},x_{31})=(x_{11},x_{21},x_{33})=1\\ &(x_{12},x_{22},x_{32})=(x_{12},x_{22},x_{33})=(x_{31}, x_{32})=1\\ \end{aligned} \right\}\text{,} \end{align*} where \begin{equation*} \mathscr{P}_2(\mathbf{x}) =\left\{ \begin{aligned} & x_{32}\{x_{11}, x_{21}\}^2 \{x_{12}, x_{22}\}, x_{32}^2x_{33}\{x_{11}, x_{21}\}^2, x_{31}\{x_{11}, x_{21}\} \{x_{12}, x_{22}\}^2, \\ & x_{31}^2x_{33}\{x_{12}, x_{22}\}^2, x_{31}^2x_{32}^2x_{33}^3 \end{aligned} \right\}. \end{equation*} \\ {\rm (c)} We have \begin{align*} N_3(B) = \frac{1}{16} \#\left\{\mathbf{x} \in \mathbb{Z}^8_{\ne 0} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}=0, \quad \max|\mathscr{P}_3(\mathbf{x})| \le B,\\ &(x_{11},x_{21})=(x_{12},x_{22})=(x_{02},x_{32})=(x_{01},x_{02})=(x_{01},x_{31})=1\\ \end{aligned} \right\}\text{,} \end{align*} where \begin{equation*} \mathscr{P}_3(\mathbf{x}) =\left\{ \begin{aligned} &x_{02}^2x_{31}^3 x_{32}^2 , x_{01}^2x_{31}x_{32}^2, x_{02}^2\{x_{11},x_{21}\}^2\{x_{12},x_{22}\}^2x_{31}\\ &x_{01}^2\{x_{11},x_{21}\}\{x_{12},x_{22}\}x_{32}, x_{01}x_{02}\{x_{11},x_{21}\}^2\{x_{12},x_{22}\}^2 \end{aligned} \right\}. 
\end{equation*} \\ {\rm (d)} We have \begin{align*} N_4(B) = \frac{1}{16} \#\left\{\mathbf{x} \in \mathbb{Z}^8_{\ne 0} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}=0, \quad \max|\mathscr{P}_4(\mathbf{x})| \le B,\\ &(x_{11},x_{21})=(x_{12},x_{22})=(x_{02},x_{32})=(x_{01},x_{02})=(x_{01},x_{31})=1\\ \end{aligned} \right\}\text{,} \end{align*} where \begin{equation*} \mathscr{P}_4(\mathbf{x}) =\left\{ \begin{aligned} &x_{01} x_{02} x_{31}^2 x_{32}^2 , x_{01}^2\{x_{11},x_{21}\}x_{31}x_{32}^2, x_{02}^2\{x_{12},x_{22}\}x_{31}^2x_{32},\\ &x_{01}^2\{x_{11},x_{21}\}^2\{x_{12},x_{22}\}x_{32}, x_{02}^2\{x_{11},x_{21}\}\{x_{12},x_{22}\}^2x_{31} , x_{01}x_{02}\{x_{11},x_{21}\}^2\{x_{12},x_{22}\}^2 \end{aligned} \right\}. \end{equation*} \end{cor} \begin{proof} This is a special case of Proposition~\ref{prop:countingproblem_abstract}. Note that the coprimality conditions are derived from the expressions for $Z_X$ (instead of $Z_Y$) from Section~\ref{sec:crt}. It can be explicitly verified using the equation $\Phi$ that this is correct even over $\mathbb{Z}$ as required here. \end{proof} \subsection{Application: Proof of Theorem~\ref{dim3}}\label{appl1} We now show how to use Theorem~\ref{manin-cor} in practice and complete the proof of Theorem~\ref{dim3} for the varieties $X_1, \ldots, X_4$. \subsubsection{The variety $X_4$}\label{X4} By Corollary~\ref{prop:countingproblem_line_11}(d), we have $J=8$ torsor variables $x_{ij}$ with $0 \leq i \leq 3$, $1 \leq j \leq 2$ satisfying the equation \begin{equation}\label{particular} x_{11}x_{12} + x_{21}x_{22} + x_{31}x_{32} = 0 \end{equation} (after changing the signs of $x_{22},x_{32}$) with $k=3$ and $h_{ij} = 1$ for $i \geq 1$, $h_{0j} = 0$. In particular, Remark~\ref{simpl} applies. 
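To spell out the harmless sign change: replacing $x_{22}$ by $-x_{22}$ and $x_{32}$ by $-x_{32}$ transforms the relation from Corollary~\ref{prop:countingproblem_line_11}(d) via
\begin{equation*}
x_{11}x_{12} - x_{21}(-x_{22}) - x_{31}(-x_{32}) = x_{11}x_{12} + x_{21}x_{22} + x_{31}x_{32}\text{,}
\end{equation*}
while the counting problem itself is unchanged, since the variables range over $\mathbb{Z}_{\ne 0}$, the height conditions only involve the absolute values $|\mathscr{P}_4(\mathbf{x})|$, and the coprimality conditions are insensitive to signs.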
We have $N=17$ height conditions with corresponding exponent matrix \begin{equation*} \mathscr{A}_1 = \left( \begin{smallmatrix} 1 & 2& 2& & & 2&2 &2 & 2& & & & &1 &1 & 1& 1\\ 1& & &2 &2 & & & & &2 & 2&2 &2 &1 & 1& 1& 1\\ & & 1& & & &2 & &2 & &1 & &1 & & &2 &2 \\ & & & 1& & 1&1 & & & 2& 2& & &2 & & 2& \\ &1 & & & &2 & &2 & & 1& & 1& &2 &2 & & \\ & & & &1 & & &1 &1 & & &2 &2 & &2 & & 2\\ 2 & 1& 1&2 &2 & & & & &1 & 1& 1&1 & & & & \\ 2 &2 & 2& 1& 1& 1&1 & 1& 1& & & & & & & & \\ \end{smallmatrix} \right)\in \Bbb{R}_{\geq 0}^{8 \times 17}, \quad \mathscr{A}_2 = \left( \begin{smallmatrix} &&-1\\&&-1\\1&&-1\\1&&-1\\&1&-1\\&1&-1\\-1&-1&\\-1&-1& \end{smallmatrix} \right) \in \Bbb{R}^{8 \times 3}. \end{equation*} As usual, missing entries indicate zeros. We have $r=5$ coprimality conditions with \begin{equation}\label{gcd1} \begin{split} &S_1 = \{(1, 1), (2, 1)\}, \quad S_2 = \{(1, 2), (2, 2)\}, \quad S_3 = \{(0, 2), (3, 2)\},\\ & S_4 = \{(0, 1), (0, 2)\}, \quad S_5 = \{(0, 1), (3, 1)\}. \end{split} \end{equation} We choose \begin{equation}\label{tauzeta} {\bm \tau}^{(2)} = (\underbrace{1, \ldots , 1}_{J_0}, \tfrac{2}{3}, \ldots, \tfrac{2}{3}). \end{equation} (In our case $J_0 = 2$, but we will use the same definition also in other cases later.) Using a computer algebra system, we confirm $C_2({\bm \tau}^{(2)} )$, $C_2((1 - h_{ij}/3)_{ij})$, and with $c_2 = 3$, we find $$\dim (\mathscr{H} \cap \mathscr{P}) = 3, \quad \dim (\mathscr{H} \cap \mathscr{P}_{ij}) = 2 \ \text{for all} \ (i,j),$$ confirming \eqref{ass2}. We have now checked all assumptions of Theorem~\ref{manin-cor}. We show in Appendix~\ref{A} how to derive Hypothesis~\ref{H2} without computer help and how to compute the Peyre constant in explicit algebraic terms. \subsubsection{The variety $X_3$} This is very similar to the previous case, so we can be brief. By Corollary~\ref{prop:countingproblem_line_11}(c), we have the same torsor variables as in the previous application satisfying \eqref{particular}. 
The corresponding exponent matrix is given by \begin{equation*} \mathscr{A}_1 = \left(\begin{smallmatrix} & 2& & & & &2 &2 &2 &2 & 1& 1&1 &1 \\ 2 & & 2& 2&2 &2 & & & & &1 & 1& 1&1 \\ & & &2 & &2 & &1 & &1 & &2 & &2 \\ & &2 &2 & & & 1&1 & & &2 &2 & & \\ & &2 & & 2& &1 & &1 & & 2& &2 & \\ & & & &2 & 2& & &1 &1 & & &2 &2 \\ 3 &1 & 1& 1&1 & 1& & & & & & & & \\ 2 &2 & & & & &1 &1 &1 &1 & & & & \\ \end{smallmatrix}\right) \in \Bbb{R}_{\geq 0}^{8 \times 14}. \end{equation*} We choose ${\bm \tau}^{(2)}$ as before and confirm \eqref{ass2} in the same way with $$\dim (\mathscr{H} \cap \mathscr{P}) = 3, \quad \dim (\mathscr{H} \cap \mathscr{P}_{ij}) = 1 \ \text{for} \ (i,j) = (0,1) \ \text{and} \ \dim (\mathscr{H} \cap \mathscr{P}_{ij}) = 2 \ \text{otherwise}.$$ \subsubsection{The variety $X_1$} Again the computations are a minor variation on the previous two cases. By Corollary~\ref{prop:countingproblem_line_11}(a), the height matrix is $$\mathscr{A}_1 = \left(\begin{smallmatrix} 1 & 3 & 3 & & & & & & & 1 & 1 & 1 & 1\\ & 2 & & & & 1 & 1& & &2&2&& \\ & & & &1 & & 2& & 2& &2 & &2\\ & & 2& & & & &1 & 1& & &2 & 2\\ & & &1 & & 2& &2 & &2 & &2 & \\ 2 & & & 2& 2&1 &1 &1 &1 & & & & \\ 2 & 2&2 &1 & 1& & & & & & & & \end{smallmatrix}\right) \in \Bbb{R}^{7 \times 13}_{\ge 0}.$$ We make the same choice \eqref{tauzeta} for ${\bm \tau}^{(2)}$, and confirm \eqref{ass2} with $c_2 = 2$ and $$\dim (\mathscr{H} \cap \mathscr{P}) = 2, \quad \dim (\mathscr{H} \cap \mathscr{P}_{ij}) = 0 \ \text{for} \ (i,j) =(1,2),(2,2),(3,1),\ \dim (\mathscr{H} \cap \mathscr{P}_{ij}) = 1 \ \text{otherwise}.$$ \subsubsection{The variety $X_2$} This case has some new features, as the torsor equation has a slightly different shape. 
By Corollary~\ref{prop:countingproblem_line_11}(b), we have $J_0 = 0$ and $J = 7$ torsor variables satisfying the more complicated torsor equation $$x_{11}x_{12} + x_{21} x_{22} + x_{31} x_{32} x_{33}^2 = 0.$$ The height matrix is given by $$\mathscr{A}_1 = \left(\begin{smallmatrix} 2& 2& & &2 & &1 &1 & & & & & \\ &1 & &1 & & & &2 & &2 & &2 &\\ & &2 &2 & &2 & & &1 &1 & & &\\ 1& & 1& & & & 2& & 2& & 2& &\\ & & & & & &1 &1 & 1& 1& 2& 2&2\\ 1& 1& 1& 1& 2& 2& & & & & & &2\\ & & & & 1& 1& & & & &1 &1 & 3 \end{smallmatrix}\right) \in \Bbb{R}^{7 \times 13}_{\geq 0}, \quad \mathscr{A}_2 = \left(\begin{smallmatrix} 1&&-1\\1&&-1\\&1&-1\\&1&-1\\-1&-1&\\-1&-1&\\ -2&-2&1 \end{smallmatrix}\right) \in \Bbb{R}^{7 \times 3}.$$ Proposition~\ref{circle-method} ensures the validity of Hypothesis~\ref{H1} with $\lambda = 1/45000$. We have $r=5$ coprimality conditions \begin{displaymath} \begin{split} & S_1 = \{(1, 1), (2, 1), (3, 1)\}, \quad S_2 = \{(1, 1), (2, 1), (3, 3)\}, \quad S_3 = \{(1, 2), (2, 2), (3, 2)\},\\ & S_4 = \{(1, 2), (2, 2), (3, 3)\}, \quad S_5 = \{(3, 1), (3, 2)\}. \end{split} \end{displaymath} We see that \eqref{fail} holds. We choose $${\bm \tau}^{(2)} = (\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, 1)$$ satisfying \eqref{tau1} and confirm $C_2({\bm \tau}^{(2)} )$, $C_2((1 - h_{ij}/3)_{ij})$. 
Finally, we note that $c_2 = 2$ and compute\footnote{Dimension $-1$ indicates that the set is empty.} \begin{displaymath} \begin{split} \dim(\mathscr{H} \cap \mathscr{P}) &= 2, \\ \dim(\mathscr{H} \cap \mathscr{P}_{ij}) &= \begin{cases} 1, &(i,j)=(3,1),(3,2),(3,3),\\ 0, &\text{otherwise}, \end{cases} \\ \dim(\mathscr{H} \cap\mathscr{P}(1/44800, \pi)) &= -1 \end{split} \end{displaymath} for the vector $(1 - h_{ij}/3)_{ij}$, and \begin{displaymath} \begin{split} \dim(\mathscr{H} \cap \mathscr{P}) &= 0, \\ \dim(\mathscr{H} \cap \mathscr{P}_{ij}) &= \begin{cases} 0, &(i,j)=(3,1),(3,2),\\ -1, &\text{otherwise}, \end{cases} \\ \dim(\mathscr{H} \cap\mathscr{P}(1/44800, \pi)) &= -1 \end{split} \end{displaymath} for the vector ${\bm \tau}^{(2)}$. This confirms \eqref{ass2}. \section{Higher-dimensional examples}\label{sec:geometry_X5_X6} \subsection{Geometry} Consider $G = \mathrm{SL}_2 \times \mathbb{G}_m^r$ and, for $i = 1, \dots, r$, let $\varepsilon_i\in \mathfrak{X}(B)$ be the image of a primitive character of $\mathbb{G}_\mathrm{m}$ under the natural inclusion $\mathfrak{X}(\mathbb{G}_\mathrm{m}) \to \mathfrak{X}(B)$ corresponding to the $i$-th factor $\mathbb{G}_\mathrm{m}$ of $G$. Let $T_{\mathrm{SL}_2} \subset \mathrm{SL}_2$ be a maximal torus, and let $\chi\colon T_{\mathrm{SL}_2} \to \mathbb{G}_m$ be a primitive character. We consider the subgroup \begin{align*} H = \{(\lambda, \chi(\lambda), 1, \dots, 1) : \lambda \in T_{\mathrm{SL}_2}\} \subset G\text{.} \end{align*} Then $G/H$ is a spherical homogeneous space of semisimple rank one and type $T$. The lattice $\mathscr{M}$ has basis $(\frac{1}{2}\alpha + \varepsilon_1, \frac{1}{2}\alpha - \varepsilon_1, \varepsilon_2, \dots, \varepsilon_{r})$. We denote the corresponding dual basis of the lattice $\mathscr{N}$ by $(d_1, d_2, e_3, \dots, e_{r+1})$. There are two colors $D_{11}$ and $D_{12}$ with valuations $d_1$ and $d_2$, respectively.
The valuation cone is given by $\mathscr{V} = \{v \in \mathscr{N}_\mathbb{Q} : \langle v, \alpha \rangle \le 0\}$. \subsubsection{The fourfold $X_5$} Let $r = 2$, and consider the polytope in $\mathscr{N}_\mathbb{Q}$ spanned by the vectors \begin{align*} d_1 &= (1, 0, 0), &d_2 &= (0, 1, 0), & u_{31}&= (0, -1, 0), &u_{32}&= (-1, 0, 0),\\ u_{33}&= (-1, 0, -1), &u_{01}&= (1, -1, 1), &u_{02}&= (1, -1, 0),& u_{03}&= (-1, 1, 0). \end{align*} The colored spanning fan of this polytope, as defined in \cite[Remark~2.6]{gh15}, contains the following maximal colored cones: \begin{align*} &(\cone(d_{1}, d_{2}, u_{33}), \{D_{11}, D_{12}\}), &&(\cone(d_{1}, u_{02}, u_{33}), \{D_{11}\}), &&(\cone(d_{2}, u_{03}, u_{33}), \{D_{12}\}),\\ &(\cone(u_{01}, u_{02}, u_{31}), \emptyset), &&(\cone(u_{01}, u_{03}, u_{32}), \emptyset), &&(\cone(u_{01}, u_{31}, u_{32}), \emptyset),\\ &(\cone(u_{31}, u_{32}, u_{33}), \emptyset), &&(\cone(u_{03}, u_{32}, u_{33}), \emptyset), &&(\cone(u_{02}, u_{31}, u_{33}), \emptyset). \end{align*} It can be verified that each colored cone satisfies the conditions of the smoothness criterion \cite[Th\'eor\`eme~A]{cam01}; see also \cite[Theorem~1.2]{gag15}. Let $X_5$ be the spherical embedding of $G/H$ corresponding to this colored fan. Then $X_5$ is a smooth Fano fourfold with Picard number $5$. The unsupported colored spanning fan of the polytope above (i.\,e.~including the unsupported colored cones) specifies a projective ambient toric variety $Y_5$. From the description of $\Sigma_\mathrm{max}$ in Section~\ref{sec:ambient}, we deduce that $Y_5$ is smooth and that $-K_{X_5}$ is ample on $Y_5$; hence \eqref{eq:toric_smooth} holds. \subsubsection{The fivefold $X_6$} Let $r = 3$, and consider the polytope in $\mathscr{N}_\mathbb{Q}$ spanned by the vectors \begin{align*} d_1 &= (1, 0, 0, 0), & d_2 &= (0, 1, 0, 0), & u_{31} &= (-1, 0, 1, 0), & u_{32} &= (-1, -1, 1, 0),\\ u_{01} &= (-1, 1, -1, -1), & u_{02} &= (1, -1, 0, 1), & u_{03} &= (0, 0, -1, 0).
\end{align*} The colored spanning fan of this polytope contains the following maximal colored cones: \begin{align*} &(\cone(d_1, d_2, u_{01}, u_{31}), \{D_{11}, D_{12}\}), &&(\cone(d_1, d_2, u_{02}, u_{31}), \{D_{11}, D_{12}\}),\\ &(\cone(d_1, u_{01}, u_{31}, u_{32}), \{D_{11}\}), &&(\cone(d_1, u_{02}, u_{31}, u_{32}), \{D_{11}\}),\\ &(\cone(d_1, u_{02}, u_{03}, u_{32}), \{D_{11}\}), &&(\cone(d_1, u_{01}, u_{03}, u_{32}), \{D_{11}\}),\\ &(\cone(d_2, u_{01}, u_{03}, u_{31}), \{D_{12}\}), &&(\cone(d_2, u_{02}, u_{03}, u_{31}), \{D_{12}\}),\\ &(\cone(u_{02}, u_{03}, u_{31}, u_{32}), \emptyset), &&(\cone(u_{01}, u_{03}, u_{31}, u_{32}), \emptyset). \end{align*} As in the previous example, we obtain a smooth spherical Fano fivefold $X_6$ with Picard number $3$ in a smooth projective ambient toric variety $Y_6$ on which $-K_{X_6}$ is ample. \subsubsection{The sixfold $X_7$} Let $r = 4$, and consider the polytope in $\mathscr{N}_\mathbb{Q}$ spanned by the vectors \begin{align*} d_1 &= (1, 0, 0, 0, 0), & d_2 &= (0, 1, 0, 0, 0), & u_{01} &= (0, 0, 1, 0, 0), & u_{02} &= (0, 0, 0, 1, 0),\\ u_{03} &= (0, 0, 0, 0, 1), & u_{31} &= (0, -1, 0, 0, 0), & u_{32} &= (-1, 0, 0, 0, 1), & u_{33} &= (-1, 0, 0, 0, 0), \\ u_{34} &= (-1, 0, -1, -1, -1), & u_{35} &= (-1, -1, -1, -1, -1). \end{align*} As above, we obtain a smooth spherical Fano sixfold $X_7$ with Picard number $5$ in a smooth projective ambient toric variety $Y_7$ on which $-K_{X_7}$ is ample. \subsubsection{The sevenfold $X_8$} Let $r = 5$, and consider the polytope in $\mathscr{N}_\mathbb{Q}$ spanned by the vectors \begin{align*} d_1 &= (1, 0, 0, 0, 0, 0), & d_2 &= (0, 1, 0, 0, 0, 0), & u_{01} &= (0, 0, 1, 0, 1, 0), \\ u_{02} &= (0, 0, 0, 1, 0, 1), & u_{03} &= (0, 0, 0, 0, 0, 1), & u_{04} &= (0, 0, 1, 0, 0, -1), \\ u_{05} &= (0, 0, 0, 1, 0, 0), & u_{06} &= (0, 0, 0, 0, 1, 1), & u_{31} &= (0, -1, 0, 0, 0, 0), \\ u_{32} &= (-1, 0, -1, -1, -1, -1), & u_{33} &= (-1, -1, 0, 0, 0, 0), & u_{34} &= (-1, -1, -1, -1, -1, -1). 
\end{align*} As above, we obtain a smooth spherical Fano sevenfold $X_8$ with Picard number $6$ in a smooth projective ambient toric variety $Y_8$ on which $-K_{X_8}$ is ample. \subsection{Cox rings and torsors} We argue as in Section~\ref{sec:crt}. \subsubsection{The fourfold $X_5$} The Cox ring is \begin{align*} \mathscr{R}(X_5) = \mathbb{Q}[x_{01}, x_{02}, x_{03}, x_{11}, x_{12}, x_{21}, x_{22}, x_{31}, x_{32}, x_{33}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}) \end{align*} with $\Pic X_5 \cong \Cl X_5 \cong \Zd^5$, where \begin{align*} &\deg(x_{01})=\deg(x_{33})= (1, 0, 0, 0, 0),\ \deg(x_{02})= (0, 1, 0, 1, 0),\ \deg(x_{03})= (0, 1, 0, 0, 0),\\ &\deg(x_{11})=\deg(x_{21})= (0, 0, 1, 0, 0),\ \deg(x_{12})=\deg(x_{22})= (0, 0, 0, 0, 1),\\ &\deg(x_{31})= (-1, 0, 0, -1, 1),\ \deg(x_{32})= (0, 0, 1, 1, 0) \end{align*} The anticanonical class is $ -K_{X_5} = \mathopen{}\mathclose\bgroup\left(1, 2, 2, 1, 2\aftergroup\egroup\right). $ A universal torsor over $X_5$ is \begin{equation*} \mathscr{T}_5 = \Spec\mathscr{R}(X_5) \setminus Z_{X_5}\text{,} \end{equation*} where \begin{align*} Z_{X_5} &=\mathbb{V}(x_{31}, x_{11},x_{21}) \cup \mathbb{V}(x_{02}, x_{12},x_{22}) \cup \mathbb{V}(x_{12},x_{22}, x_{31}) \cup \mathbb{V}(x_{32}, x_{11},x_{21})\\ &{} \cup \mathbb{V}(x_{31}, x_{03}) \cup \mathbb{V}(x_{02}, x_{32}) \cup \mathbb{V}(x_{02}, x_{03}) \cup \mathbb{V}(x_{33}, x_{01}) \cup \mathbb{V}(x_{12},x_{22}, x_{32}) \cup \mathbb{V}(x_{03}, x_{11},x_{21}). \end{align*} \subsubsection{The fivefold $X_6$}\label{sec:cox_X6} The Cox ring is \begin{align*} \mathscr{R}(X_6) = \mathbb{Q}[x_{01},x_{02},x_{03},x_{11},x_{12},x_{21},x_{22},x_{31},x_{32}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}^2) \end{align*} with $\Pic X_6 \cong \Cl X_6 \cong \Zd^3$, where \begin{align*} &\deg(x_{01}) = \deg(x_{02}) = (0, 0, -1),\ \deg(x_{03}) = (1, 0, 1),\ \deg(x_{11}) = \deg(x_{21}) = (1, 0, 0),\\ &\deg(x_{12}) = \deg(x_{22}) = (0, 1, 0),\ \deg(x_{31}) = (1, -1, 0),\ \deg(x_{32}) = (0, 1, 0). 
\end{align*} The anticanonical class is $-K_{X_6} = \mathopen{}\mathclose\bgroup\left(3,1,-1\aftergroup\egroup\right)$. A universal torsor over $X_6$ is \begin{equation*} \mathscr{T}_6 = \Spec\mathscr{R}(X_6) \setminus Z_{X_6}\text{,} \end{equation*} where \begin{align*} Z_{X_6} &= \mathbb{V}(x_{01},x_{02})\cup \mathbb{V}(x_{32}, x_{12}, x_{22})\cup \mathbb{V}(x_{03}, x_{31}, x_{11}, x_{21}). \end{align*} \subsubsection{The sixfold $X_7$}\label{sec:cox_X7} The Cox ring is \begin{align*} \mathscr{R}(X_7) = \mathbb{Q}[x_{01}, x_{02}, x_{03}, x_{11}, x_{12}, x_{21}, x_{22}, x_{31}, \dots, x_{35}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}x_{34}x_{35}^2) \end{align*} with $\Pic X_7 \cong \Cl X_7 \cong \Zd^5$, where \begin{align*} &\deg(x_{01}) = \deg(x_{02}) = (-1, -1, 0, 1, 0),\ \deg(x_{03}) = (-2, -1, 0, 1, 0),\\ &\deg(x_{11}) = \deg(x_{21}) = (0, 0, 0, 1, 0),\ \deg(x_{12}) = \deg(x_{22}) = (0, 0, 0, 0, 1),\\ &\deg(x_{31}) = (1, 1, 1, -1, 1),\ \deg(x_{32}) = (1, 0, 0, 0, 0),\ \deg(x_{33}) = (0, 1, 0, 0, 0),\\ &\deg(x_{34}) = (0, 0, 1, 0, 0),\ \deg(x_{35}) = (-1, -1, -1, 1, 0). \end{align*} The anticanonical class is $-K_{X_7} = \mathopen{}\mathclose\bgroup\left(-3, -2, 1, 4, 2\aftergroup\egroup\right)$. A universal torsor over $X_7$ is \begin{equation*} \mathscr{T}_7 = \Spec\mathscr{R}(X_7) \setminus Z_{X_7}\text{,} \end{equation*} where \begin{align*} Z_{X_7} &= \mathbb{V}(x_{01}, x_{02}, x_{03}, x_{34}) \cup \mathbb{V}(x_{01}, x_{02}, x_{03}, x_{35}) \cup \mathbb{V}(x_{01}, x_{02}, x_{32}, x_{34})\\ &\cup \mathbb{V}(x_{01}, x_{02}, x_{32}, x_{35}) \cup \mathbb{V}(x_{03}, x_{33}) \cup \mathbb{V}(x_{11}, x_{21}, x_{32})\\ & \cup \mathbb{V}(x_{11}, x_{21}, x_{33}) \cup \mathbb{V}(x_{12}, x_{22}, x_{31})\cup \mathbb{V}(x_{12}, x_{22}, x_{35}) \cup \mathbb{V}(x_{31}, x_{34}). 
\end{align*} \subsubsection{The sevenfold $X_8$}\label{sec:cox_X8} The Cox ring is \begin{align*} \mathscr{R}(X_8) = \mathbb{Q}[x_{01}, \dots, x_{06}, x_{11}, x_{12}, x_{21}, x_{22}, x_{31}, \dots, x_{34}] /(x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2x_{34}^2) \end{align*} with $\Pic X_8 \cong \Cl X_8 \cong \Zd^6$, where \begin{align*} &\deg(x_{01}) = (1, 1, 0, -1, 0, 0),\ \deg(x_{02}) = (1, 1, -1, 0, 0, 0),\\ &\deg(x_{03}) = \deg(x_{05}) = (0, 0, 1, 0, 0, 0),\ \deg(x_{04}) = \deg(x_{06}) = (0, 0, 0, 1, 0, 0),\\ &\deg(x_{11}) = \deg(x_{21}) = (0, 0, 0, 0, 1, 0),\ \deg(x_{12}) = \deg(x_{22}) = (0, 0, 0, 0, 0, 1),\\ &\deg(x_{31}) = (0, 1, 0, 0, -1, 1),\ \deg(x_{32}) = (0, 1, 0, 0, 0, 0),\\ &\deg(x_{33}) = (-1, -1, 0, 0, 1, 0), \deg(x_{34}) = (1, 0, 0, 0, 0, 0). \end{align*} The anticanonical class is $-K_{X_8} = \mathopen{}\mathclose\bgroup\left(2, 3, 1, 1, 1, 2\aftergroup\egroup\right)$. A universal torsor over $X_8$ is \begin{equation*} \mathscr{T}_8 = \Spec\mathscr{R}(X_8) \setminus Z_{X_8}\text{,} \end{equation*} where \begin{align*} Z_{X_8} &= \mathbb{V}( x_{01}, x_{02}, x_{32} ) \cup \mathbb{V}( x_{01}, x_{02}, x_{34}) \cup \mathbb{V}( x_{03}, x_{05} ) \cup \mathbb{V}( x_{04}, x_{06} )\\ & \cup \mathbb{V}( x_{11}, x_{21}, x_{33} ) \cup \mathbb{V}( x_{12}, x_{22}, x_{31} ) \cup \mathbb{V}( x_{12}, x_{22}, x_{34} )\cup \mathbb{V}( x_{31}, x_{32} ). 
\end{align*} \subsection{Counting problems} \begin{cor}\label{dim4cor} {\rm (a)} We have \begin{equation*} N_5(B) = \frac{1}{32}\#\left\{\mathbf{x} \in \mathbb{Z}_{\ne 0}^{10} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}=0, \quad \max|\mathscr{P}_5(\mathbf{x})| \le B\\ &(x_{31}, x_{11},x_{21})= (x_{02}, x_{12},x_{22})= (x_{12},x_{22}, x_{31})=1\\ &(x_{32}, x_{11},x_{21})= (x_{31}, x_{03})= (x_{02}, x_{32})=1\\ &(x_{02}, x_{03})= (x_{33}, x_{01})= (x_{12},x_{22}, x_{32})= (x_{03}, x_{11},x_{21})=1 \end{aligned} \right\}, \end{equation*} with {\begin{equation*} \mathscr{P}_5(\mathbf{x}) =\left\{ \begin{aligned} &\{x_{01},x_{33}\}^2 x_{02}^2 \{x_{12},x_{22}\} x_{31} \{x_{11},x_{21}\}^2, x_{32} \{x_{01},x_{33}\}^3 x_{02}^2 x_{31}^2 \{x_{11},x_{21}\} ,\\ &x_{03} \{x_{01},x_{33}\} x_{02} \{x_{12},x_{22}\}^2 \{x_{11},x_{21}\}^2 , x_{03} x_{32}^2 \{x_{01},x_{33}\}^3 x_{02} x_{31}^2 ,\\ &x_{03}^2 x_{32} \{x_{01},x_{33}\} \{x_{12},x_{22}\}^2 \{x_{11},x_{21}\} , x_{03}^2 x_{32}^2 \{x_{01},x_{33}\}^2 \{x_{12},x_{22}\} x_{31} \end{aligned} \right\}. \end{equation*}}\\ {\rm (b)} We have \begin{equation*} N_6(B) = \frac{1}{8}\#\left\{\mathbf{x} \in \mathbb{Z}_{\ne 0}^9 : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}^2=0, \quad \max|\mathscr{P}_6(\mathbf{x})| \le B\\ &(x_{01},x_{02})=(x_{32}, x_{12}, x_{22}) = (x_{03}, x_{31}, x_{11}, x_{21}) =1 \end{aligned} \right\}, \end{equation*} with {\begin{equation*} \mathscr{P}_6(\mathbf{x}) =\left\{ \begin{aligned} &\{x_{01},x_{02}\}\{x_{12},x_{22},x_{32}\}^4x_{31}^3, \{x_{01},x_{02}\}\{x_{11},x_{21}\}^3\{x_{12},x_{22},x_{32}\},\\ &\{x_{01},x_{02}\}^4x_{03}^3\{x_{12},x_{22},x_{32}\} \end{aligned} \right\}. 
\end{equation*}} {\rm (c)} We have \begin{equation*} N_7(B) = \frac{1}{32}\#\left\{\mathbf{x} \in \mathbb{Z}_{\ne 0}^{12} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}x_{34}x_{35}^2=0, \quad \max|\mathscr{P}_7(\mathbf{x})| \le B\\ &(x_{01}, x_{02}, x_{03}, x_{34}) = (x_{01}, x_{02}, x_{03}, x_{35}) = (x_{01}, x_{02}, x_{32}, x_{34}) =1\\ & (x_{01}, x_{02}, x_{32}, x_{35}) = (x_{03}, x_{33}) = (x_{11}, x_{21}, x_{32}) = 1\\ &(x_{11}, x_{21}, x_{33}) = (x_{12}, x_{22}, x_{31}) = (x_{12}, x_{22}, x_{35}) = (x_{31}, x_{34}) = 1 \end{aligned} \right\}, \end{equation*} with \begin{equation*} \mathscr{P}_7(\mathbf{x}) =\left\{ \begin{aligned} &x_{31}^2 x_{32} x_{33}^2 x_{34}^5 x_{35}^6 , \{x_{12},x_{22}\}^2 x_{32} x_{33}^2 x_{34}^5 x_{35}^4 , \{x_{11},x_{21}\} x_{31}^2 x_{33} x_{34}^4 x_{35}^5 , \\ &\{x_{11},x_{21}\} \{x_{12},x_{22}\}^2 x_{33} x_{34}^4 x_{35}^3 , x_{03} \{x_{11},x_{21}\}^2 x_{31}^2 x_{34}^2 x_{35}^3 ,\\ &x_{03} \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\}^2 x_{34}^2 x_{35} , x_{03}^2 \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\}^2 x_{32} x_{34} , \\ &x_{03}^3 \{x_{11},x_{21}\}^2 x_{31}^2 x_{32}^2 x_{35} , x_{03}^3 \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\} x_{31} x_{32}^2 , x_{03}^4 \{x_{12},x_{22}\}^2 x_{32}^5 x_{33}^2 x_{34} , \\ &x_{03}^5 x_{31}^2 x_{32}^6 x_{33}^2 x_{35} , x_{03}^5 \{x_{12},x_{22}\} x_{31} x_{32}^6 x_{33}^2 , \{x_{01},x_{02}\} x_{03} \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\}^2 x_{34} , \\ &\{x_{01},x_{02}\}^2 x_{03} \{x_{11},x_{21}\}^2 x_{31}^2 x_{35} , \{x_{01},x_{02}\}^2 x_{03} \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\} x_{31} , \\ &\{x_{01},x_{02}\}^3 \{x_{11},x_{21}\} \{x_{12},x_{22}\}^2 x_{33} x_{34} , \{x_{01},x_{02}\}^4 \{x_{12},x_{22}\}^2 x_{32} x_{33}^2 x_{34} , \\ &\{x_{01},x_{02}\}^4 \{x_{11},x_{21}\} x_{31}^2 x_{33} x_{35} , \{x_{01},x_{02}\}^4 \{x_{11},x_{21}\} \{x_{12},x_{22}\} x_{31} x_{33} , \\ &\{x_{01},x_{02}\}^5 x_{31}^2 x_{32} x_{33}^2 x_{35} , \{x_{01},x_{02}\}^5 \{x_{12},x_{22}\} x_{31} x_{32} x_{33}^2 \end{aligned} 
\right\}. \end{equation*} {\rm (d)} We have \begin{equation*} N_8(B) = \frac{1}{64}\#\left\{\mathbf{x} \in \mathbb{Z}_{\ne 0}^{14} : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2x_{34}^2=0, \quad \max|\mathscr{P}_8(\mathbf{x})| \le B\\ &( x_{01}, x_{02}, x_{32} ) = ( x_{01}, x_{02}, x_{34}) = ( x_{03}, x_{05} ) = ( x_{04}, x_{06} ) = 1\\ &( x_{11}, x_{21}, x_{33} ) = ( x_{12}, x_{22}, x_{31} ) = ( x_{12}, x_{22}, x_{34} ) = ( x_{31}, x_{32} ) = 1 \end{aligned} \right\}, \end{equation*} where $\mathscr{P}_8(\mathbf{x})$ is {\begin{equation*} \left\{ \begin{aligned} &\{x_{03},x_{05}\} \{x_{04},x_{06}\} x_{31}^2 x_{32}^4 x_{33}^3 x_{34}^5 , \{x_{03},x_{05}\} \{x_{04},x_{06}\} \{x_{12},x_{22}\}^2 x_{32}^4 x_{33} x_{34}^3 ,\\ &\{x_{03},x_{05}\} \{x_{04},x_{06}\} \{x_{11},x_{21}\} \{x_{12},x_{22}\}^2 x_{32}^3 x_{34}^2 , \{x_{03},x_{05}\} \{x_{04},x_{06}\} \{x_{11},x_{21}\}^3 x_{31}^2 x_{32} x_{34}^2 ,\\ &x_{02} \{x_{03},x_{05}\}^2 \{x_{04},x_{06}\} \{x_{11},x_{21}\}^3 x_{31}^2 x_{34} , x_{02}^2 \{x_{03},x_{05}\}^3 \{x_{04},x_{06}\} \{x_{11},x_{21}\} \{x_{12},x_{22}\}^2 x_{32} ,\\ &x_{02}^2 \{x_{03},x_{05}\}^3 \{x_{04},x_{06}\} \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\} x_{31} , x_{02}^3 \{x_{03},x_{05}\}^4 \{x_{04},x_{06}\} \{x_{12},x_{22}\}^2 x_{32} x_{33} ,\\ &x_{02}^4 \{x_{03},x_{05}\}^5 \{x_{04},x_{06}\} x_{31}^2 x_{33}^3 x_{34} , x_{02}^4 \{x_{03},x_{05}\}^5 \{x_{04},x_{06}\} \{x_{12},x_{22}\} x_{31} x_{33}^2 ,\\ &x_{01} \{x_{03},x_{05}\} \{x_{04},x_{06}\}^2 \{x_{11},x_{21}\}^3 x_{31}^2 x_{34} , x_{01}^2 \{x_{03},x_{05}\} \{x_{04},x_{06}\}^3 \{x_{11},x_{21}\} \{x_{12},x_{22}\}^2 x_{32} ,\\ &x_{01}^2 \{x_{03},x_{05}\} \{x_{04},x_{06}\}^3 \{x_{11},x_{21}\}^2 \{x_{12},x_{22}\} x_{31} , x_{01}^3 \{x_{03},x_{05}\} \{x_{04},x_{06}\}^4 \{x_{12},x_{22}\}^2 x_{32} x_{33} ,\\ &x_{01}^4 \{x_{03},x_{05}\} \{x_{04},x_{06}\}^5 x_{31}^2 x_{33}^3 x_{34} , x_{01}^4 \{x_{03},x_{05}\} \{x_{04},x_{06}\}^5 \{x_{12},x_{22}\} x_{31} x_{33}^2 \end{aligned} \right\}. 
\end{equation*}} \end{cor} \begin{proof} This is analogous to Corollary~\ref{prop:countingproblem_line_11}. \end{proof} \subsection{Application: Proof of Theorem~\ref{dim4}}\label{appl2} All cases can be proved exactly as in Section \ref{appl1}. \subsubsection{The variety $X_5$} By Corollary~\ref{dim4cor}(a), we have $J=10$ torsor variables $x_{ij}$ satisfying the equation $$x_{11}x_{12} + x_{21}x_{22} + x_{31} x_{32} x_{33} = 0.$$ We have $N=34$ height conditions with corresponding exponent matrix $$\mathscr{A}_1 = \left( \begin{smallmatrix} & & & & & & & & & & & & & & & & &1&1&1&1&1&1&1&1&2&2&2&2&2&2&3&3&3\\ & & & & & &1&1&1&1&1&2&2&2&2&2&2& & & & &1&1&1&1& & &2&2&2&2&1&2&2\\ 2&2&2&2&2&2&1&1&1&1&1& & & & & & &2&2&2&2&1&1&1&1&2&2& & & & &1& & \\ & & & &1&1& & & &2&2& & & &1&2&2& & &1&1& & &2&2& & & & &2&2& & &1\\ & &1&2& &2& & &2& &2& & &1& & &1& &2& &2& &2& &2& &1& &1& &1& & & \\ &1& &1& & & &2&2& & &1&2&2& & & &1&1& & &2&2& & & & &2&2& & & &1& \\ 1&2& & &2& & &2& &2& & &1& & &1& &2& &2& &2& &2& &1& &1& &1& & & & \\ 1& &1& & & &2& & & & &2&1&1&2&1&1& & & & & & & & &1&1&1&1&1&1&2&2&2\\ 2&1&2&1&1&1&2& & & & &1& & &1& & &1&1&1&1& & & & &2&2& & & & &2&1&1\\ 2&1&2&1&1&1&3&1&1&1&1&3&2&2&3&2&2& & & & & & & & & & & & & & & & & \end{smallmatrix}\right) , \quad \mathscr{A}_2 = \left( \begin{smallmatrix} & & -1\\ & & -1\\ & & -1\\ 1 & & -1\\ 1 & & -1\\ & 1 & -1\\ & 1 &-1\\ -1 & -1& \\ -1& -1& \\ -1& -1& \end{smallmatrix} \right).$$ Proposition~\ref{circle-method} gives us $\lambda = 1/34300$. We have $r=10$ coprimality conditions, and we see immediately in this and all other cases that \eqref{fail} holds. We choose \begin{equation*} {\bm \tau}^{(2)} = (1, 1, 1, \tfrac{2}{3}, \ldots, \tfrac{2}{3}) = (1 - h_{ij}/3)_{ij}. 
\end{equation*} We verify $C_2( {\bm \tau}^{(2)})$ and $C_2((1 - h_{ij}/3)_{ij})$ and compute and confirm \eqref{ass2} by $$\dim (\mathscr{H} \cap \mathscr{P}) = 4, \quad \dim (\mathscr{H} \cap \mathscr{P}_{ij}) = 3,\quad \dim(\mathscr{H} \cap \mathscr{P}(1/34300,\pi))=0.$$ \subsubsection{The variety $X_6$} By Corollary~\ref{dim4cor}(b), we have $J= 9$ torsor variables $x_{ij}$ satisfying the equation $$x_{11}x_{12} + x_{21}x_{22} + x_{31} x_{32}^2 = 0.$$ We have $N=24$ height conditions with corresponding exponent matrix $$\mathscr{A}_1 = \left(\begin{smallmatrix} & & & & & & & & & & & &1&1&1&1&1&1&1&1&1&4&4&4\\ 1&1&1&1&1&1&1&1&1&4&4&4& & & & & & & & & & & & \\ & & & & & & & & &3&3&3& & & & & & & & & &3&3&3\\ & & & & & &3&3&3& & & & & & & & & &3&3&3& & & \\ & & & &1&4& & &1& & &1& & & & &1&4& & &1& & &1\\ & &3&3&3& & & & & & & & & &3&3&3& & & & & & & \\ &4& &1& & & &1& & &1& & &4& &1& & & &1& & &1& \\ 3&3& & & &3& & & & & & &3&3& & & &3& & & & & & \\ 4& &1& & & &1& & &1& & &4& &1& & & &1& & &1& & \end{smallmatrix}\right), \quad \mathscr{A}_2 = \left(\begin{smallmatrix} & & -1\\ & & -1\\ & & -1\\ 1 & & -1\\ 1 & & -1\\ & 1 & -1\\ & 1 & -1\\ -1 & -1& \\ -2 & -2& 1 \end{smallmatrix}\right).$$ Proposition~\ref{circle-method} yields $\lambda = 1/34300$. We choose $${\bm \tau}^{(2)} = (1, 1, 1, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, 1)$$ satisfying \eqref{tau1}. 
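As a quick consistency check on the grading data entering these height conditions, recall the standard fact that, for a variety whose Cox ring is a hypersurface in a polynomial ring, the anticanonical class is the sum of the generator degrees minus the degree of the defining relation. For $X_6$, using the degrees from Section~\ref{sec:cox_X6}, \begin{align*} \sum_{i,j}\deg(x_{ij}) - \deg(x_{11}x_{12}) = (4, 2, -1) - (1, 1, 0) = (3, 1, -1)\text{,} \end{align*} in agreement with the anticanonical class $-K_{X_6}$ recorded there.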
We verify $C_2( {\bm \tau}^{(2)})$ and $C_2((1 - h_{ij}/3)_{ij})$ and compute \begin{displaymath} \begin{split} &\dim(\mathscr{H}\cap \mathscr{P})=2, \\ &\dim(\mathscr{H}\cap \mathscr{P}_{ij}) = -1, (i, j) = (1,1),(2,1), \quad \dim(\mathscr{H}\cap \mathscr{P}_{ij}) = 1\text{ otherwise},\\ &\dim(\mathscr{H} \cap \mathscr{P}(1/34300, \pi)) = -1 \text{ for all } \pi \end{split} \end{displaymath} for the vector $ (1 - h_{ij}/3)_{ij}$ and \begin{displaymath} \begin{split} & \dim(\mathscr{H}\cap \mathscr{P})=1, \\ & \dim(\mathscr{H}\cap \mathscr{P}_{ij}) = \begin{cases} 1,&(i, j) = (3,1),\\ 0, &(i,j)=(0,1),(0,2),(0,3),\\ -1, &\text{ otherwise}, \end{cases} \\ & \dim(\mathscr{H} \cap \mathscr{P}(1/34300, \pi)) = -1 \text{ for all } \pi \end{split} \end{displaymath} for the vector ${\bm \tau}^{(2)}$. This confirms \eqref{ass2}. \subsubsection{The variety $X_7$} By Corollary~\ref{dim4cor}(c), we have $J=12$ torsor variables $x_{ij}$ satisfying the equation $$x_{11}x_{12} + x_{21}x_{22} + x_{31}x_{32}x_{33}x_{34}x_{35}^2 = 0.$$ We have $N=80$ height conditions; the corresponding matrix $\mathscr{A}_1$ is {\tiny $$\left(\begin{smallmatrix} & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & & &1&1&1&1&2&2&2&2&2&2&3&3&3&3&4&4&4&4&4&4&4&4&5&5&5\\ & & & & & & & & & & & & & & & & & & & & & & & & & & & & & &1&1&1&1&2&2&2&2&2&2&3&3&3&3&4&4&4&4&4&4&4&4&5&5&5& & & & & & & & & & & & & & & & & & & & & & & & & \\ & & & & & & & & &1&1&1&1&1&1&2&2&2&2&3&3&3&3&3&3&4&4&5&5&5&1&1&1&1&1&1&1&1&1&1& & & & & & & & & & & & & & & &1&1&1&1&1&1&1&1&1&1& & & & & & & & & & & & & & & \\ & & & & & &1&1&1& & & &2&2&2& & &2&2& & & &2&2&2& & & & & & & &2&2& & & &2&2&2& & &1&1& & & & & &1&1&1& & & & & &2&2& & & &2&2&2& & &1&1& & & & & &1&1&1& & & \\ & & & &2&2& & &2& & &2& & &2& &2& &2& & &1& & &1& &2& & &1& &2& &2& & &1& & &1& &2& &2& & & &1&2& & &1& & &1& &2& &2& & &1& & &1& &2& &2& & & &1&2& & &1& & &1\\ & &1&1& &1& & & &2&2&2& & & &2&2& & &2&2&2& & & 
& & & & & &2&2& & &2&2&2& & & &1&1& & & &1&1&1& & & & & & & &2&2& & &2&2&2& & & &1&1& & & &1&1&1& & & & & & & \\ &2& &2& & & &2& & &2& & &2& &2& &2& & &1& & &1& &2& & &1& &2& &2& & &1& & &1& &2& &2& &2& &1& & & &1& & &1& &2& &2& & &1& & &1& &2& &2& &2& &1& & & &1& & &1& \\ 2& &2& & & &2& & &2& & &2& & & & & & &2&1&1&2&1&1& & &2&1&1& & & & &2&1&1&2&1&1& & & & & &2&1&1& &2&1&1&2&1&1& & & & &2&1&1&2&1&1& & & & & &2&1&1& &2&1&1&2&1&1\\ 1&1& & &1& & & & & & & & & & &1&1&1&1&2&2&2&2&2&2&5&5&6&6&6& & & & & & & & & & & & & & &1& & & &1& & & &1&1&1& & & & & & & & & & & & & & &1& & & &1& & & &1&1&1\\ 2&2&1&1&2&1&1&1&1& & & & & & & & & & & & & & & & &2&2&2&2&2& & & & & & & & & & &1&1&1&1&2&1&1&1&2&1&1&1&2&2&2& & & & & & & & & & &1&1&1&1&2&1&1&1&2&1&1&1&2&2&2\\ 5&5&4&4&5&4&4&4&4&2&2&2&2&2&2&1&1&1&1& & & & & & &1&1& & & &1&1&1&1& & & & & & &1&1&1&1&1& & & &1& & & & & & &1&1&1&1& & & & & & &1&1&1&1&1& & & &1& & & & & & \\ 6&4&5&3&4&3&5&3&3&3&1&1&3&1&1& & & & &1& & &1& & & & &1& & & & & & &1& & &1& & & & & & & &1& & & &1& & &1& & & & & & &1& & &1& & & & & & & &1& & & &1& & &1& & \end{smallmatrix}\right), $$} Proposition~\ref{circle-method} yields $\lambda = 1/70000$. We choose $${\bm \tau}^{(2)} = (1, 1, 1, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{3}{4}, \tfrac{3}{4}, \tfrac{3}{4}, \tfrac{3}{4}, 1)$$ satisfying \eqref{tau1}. 
We verify $C_2( {\bm \tau}^{(2)})$ and $C_2((1 - h_{ij}/3)_{ij})$ and compute \begin{displaymath} \begin{split} &\dim(\mathscr{H}\cap \mathscr{P})=4, \\ &\dim(\mathscr{H}\cap \mathscr{P}_{ij})= \begin{cases} 1,&(i, j) = (0,1), (0, 2),\\ 0, &(i,j)=(1, 1), (2, 1),\\ 2, & (i, j) = (1, 2), (2, 2),\\ 3, &\text{ otherwise}, \end{cases}\\ &\dim(\mathscr{H} \cap \mathscr{P}(1/70000, \pi)) = -1 \text{ for all } \pi \end{split} \end{displaymath} for the vector $ (1 - h_{ij}/3)_{ij}$ and \begin{displaymath} \begin{split} & \dim(\mathscr{H}\cap \mathscr{P})=0, \\ & \dim(\mathscr{H}\cap \mathscr{P}_{ij}) = \begin{cases} 0,&(i, j) = (3,1), (3, 2), (3,3), (3,4),\\ -1, &\text{ otherwise}, \end{cases}\\ & \dim(\mathscr{H} \cap \mathscr{P}(1/70000, \pi)) = -1 \text{ for all } \pi \end{split} \end{displaymath} for the vector ${\bm \tau}^{(2)}$. This confirms \eqref{ass2}. \subsubsection{The variety $X_8$} By Corollary~\ref{dim4cor}(d), we have $J=14$ torsor variables $x_{ij}$ with $0 \leq i \leq 3$, $J_0 = 6$, $J_1 = J_2 = 2$, $J_3 = 4$ satisfying the equation $$x_{11}x_{12} + x_{21}x_{22} + x_{31}x_{32}x_{33}^2x_{34}^2 = 0$$ with $k=3$. We have $N=156$ height conditions; it is straightforward to extract the corresponding matrices $\mathscr{A}_1$, $\mathscr{A}_2$ from Corollary~\ref{dim4cor}(d), which we do not spell out here for reasons of space. Proposition~\ref{circle-method} yields $\lambda = 1/70000$. We choose $${\bm \tau}^{(2)} = (1, 1, 1, 1, 1, 1, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, 1, 1)$$ satisfying \eqref{tau1}. 
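Although we do not display $\mathscr{A}_1$, $\mathscr{A}_2$, the grading of Section~\ref{sec:cox_X8} underlying them can be checked directly: the three monomials of the relation are homogeneous of the same degree, \begin{align*} \deg(x_{11}x_{12}) = \deg(x_{21}x_{22}) = \deg(x_{31}x_{32}x_{33}^2x_{34}^2) = (0, 0, 0, 0, 1, 1)\text{,} \end{align*} and the sum of the fourteen generator degrees minus this relation degree recovers $-K_{X_8} = (2, 3, 1, 1, 1, 2)$, as expected for a hypersurface Cox ring.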
We verify $C_2( {\bm \tau}^{(2)})$ and $C_2((1 - h_{ij}/3)_{ij})$ and compute \begin{displaymath} \begin{split} &\dim(\mathscr{H}\cap \mathscr{P})=5, \\ &\dim(\mathscr{H}\cap \mathscr{P}_{ij}) = \begin{cases} 0,&(i, j) = (1, 1), (2, 1),\\ 2, &(i,j)=(1,2),(2, 2),\\ 4, &\text{ otherwise}, \end{cases}\\ &\dim(\mathscr{H} \cap \mathscr{P}(1/70000, \pi)) = -1 \text{ for all } \pi \end{split} \end{displaymath} for the vector $ (1 - h_{ij}/3)_{ij}$ and \begin{displaymath} \begin{split} & \dim(\mathscr{H}\cap \mathscr{P})=3, \\ & \dim(\mathscr{H}\cap \mathscr{P}_{ij}) =\begin{cases} -1,&(i, j) = (1, 1), (1,2),(2, 1), (2, 2),\\ 0, &(i,j)=(3,4),\\ 3, &(i,j)=(3,1), (3,2),\\ 2, &\text{ otherwise}, \end{cases}\\ & \dim(\mathscr{H} \cap \mathscr{P}(1/70000, \pi)) = -1 \text{ for all } \pi \end{split} \end{displaymath} for the vector ${\bm \tau}^{(2)}$. This confirms \eqref{ass2}. \section{A singular example}\label{sec:Xdagger} As in Section~\ref{IV.7}, we consider the spherical $G$-variety $W_4 = \mathbb{V}(z_{11}z_{12}-z_{21}z_{22}-z_{31}z_{32}) \subset \mathbb{P}^2_\mathbb{Q} \times \mathbb{P}^2_\mathbb{Q}$. Let $\smash{\tX}^\dag \to W_4$ be the blow-up in the two disjoint $G$-invariant curves \begin{align*} C_{01} &= \mathbb{V}(z_{12},z_{22},z_{31}) = \mathbb{V}(z_{31})\times\{(0:0:1)\}\text{,}\quad C_{33} = \mathbb{V}(z_{31}, z_{32})\text{.} \end{align*} The anticanonical divisor $-K_{\smash{\tX}^\dag}$ is not ample but semiample. Moreover, $H^1(\smash{\tX}^\dag,\mathscr{O}_{\smash{\tX}^\dag}) = H^2(\smash{\tX}^\dag,\mathscr{O}_{\smash{\tX}^\dag}) = 0$ since $\smash{\tX}^\dag$ is smooth and rational. Hence $\smash{\tX}^\dag$ is an almost Fano variety. We obtain an anticanonical contraction $\pi\colon \smash{\tX}^\dag \to X^\dag$. Here $X^\dag$ is a singular Fano variety with desingularization $\smash{\tX}^\dag$. The sequence of morphisms $W_4 \leftarrow \smash{\tX}^\dag \to X^\dag$ corresponds to the following sequence of maps of colored fans. 
\begin{align*} \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (0, -1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,3); \draw (0,0) -- (-3, 0); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (0, -1) node[anchor=west] {{\tiny{$u_{32}$}}}; \path (0, 1) node[anchor=east] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south] {{\tiny{$d_{1}$}}}; \begin{scope} \clip (0,0) -- (0,1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (0, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longleftarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (-1, 0) circle (2pt); \draw (-1, -1) circle (2pt); \draw (-1, 1) circle (2pt); \draw (0, -1) circle (2pt); \draw (0,0) -- (0,3); \draw (0,0) -- (-3, 0); \draw (0,0) -- (-3, -3); \draw (0,0) -- (-3, 3); \draw (0,0) -- (0, -3); \path (-1, 0) node[anchor=north] {{\tiny{$u_{31}$}}}; \path (-1, -1) node[anchor=north west] {{\tiny{$u_{33}$}}}; \path (0, -1) node[anchor=north west] {{\tiny{$u_{32}$}}}; \path (-1, 1) node[anchor=north east] {{\tiny{$u_{01}$}}}; \path (0, 1) node[anchor=east] {{\tiny{$d_{2}$}}}; \path (1, 0) 
node[anchor=south] {{\tiny{$d_{1}$}}}; \begin{scope} \clip (0,0) -- (1,0) -- (0,-1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (0, -1) -- (-1, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (-1,-1) -- (-1,0) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (-1, 0) -- (-1, 1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \draw[densely dotted, thick] (0,0) -- (4,0); \begin{scope} \clip (0,1) -- (1,0) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (13pt); \end{scope} \begin{scope} \clip (-1,1) -- (0,1) -- (0,0) -- cycle; \draw[densely dotted,thick] (0,0) circle (17pt); \end{scope} \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-0.5, -2.24) -- (0.5, -2.24) -- (0.5, 2.24) -- (-0.5, 2.24) -- cycle; \path (0,0) node {{$\longrightarrow$}}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.7] \clip (-2.24, -2.24) -- (2.24, -2.24) -- (2.24, 2.24) -- (-2.24, 2.24) -- cycle; \fill[color=gray!30] (-3, 3) -- (3, -3) -- (-3, -3) -- cycle; \foreach \x in {-3,...,3} \foreach \y in {-3,...,3} \fill (\x, \y) circle (1pt); \draw (1, 0) circle (2pt); \draw (0, 1) circle (2pt); \draw (0, -1) circle (2pt); \draw (-1, -1) circle (2pt); \draw (-1, 1) circle (2pt); \draw (0,0) -- (3,0); \draw (0,0) -- (0,-3); \draw (0,0) -- (-3, -3); \draw (0,0) -- (-3, 3); \path (0,-1) node[anchor=west] {{\tiny{$u_{32}$}}}; \path (-1, -1) node[anchor=north west] {{\tiny{$u_{33}$}}}; \path (-1, 1) node[anchor=north east] {{\tiny{$u_{01}$}}}; \path (0, 1) node[anchor=east] {{\tiny{$d_{2}$}}}; \path (1, 0) node[anchor=south] {{\tiny{$d_{1}$}}}; \begin{scope} \clip (0,0) -- (1,0) -- (0,-1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \begin{scope} \clip (0,0) -- (0,-1) -- (-1, -1) -- cycle; \draw (0,0) circle (13pt); \end{scope} \begin{scope} \clip (0,0) -- (-1,-1) -- (-1,1) -- cycle; \draw (0,0) circle (9pt); \end{scope} \end{tikzpicture} \end{align*} We denote by $E_{31}$ the 
$G$-invariant exceptional divisor contracted by $\pi$. The singular locus of $X^\dag$ is $\pi(E_{31})$. The dotted circles in the colored fan of $\smash{\tX}^\dag$ specify a smooth projective ambient toric variety $Y^\dag$ such that $-K_{\widetilde{X}^\dag}$ is ample on $Y^\dag$. In the same way as before, a universal torsor of $\smash{\tX}^{\dag}$ can be obtained. The straightforward computations are omitted. This leads to the following counting problem. \begin{cor}\label{prop:countingproblem_line_8} We have \begin{align*} N^{\dag}(B) = \frac{1}{16} \#\left\{\mathbf{x} \in \mathbb{Z}_{\ne 0}^8 : \begin{aligned} &x_{11}x_{12}-x_{21}x_{22}-x_{31}x_{32}x_{33}^2=0, \quad \max|\mathscr{P}^\dag(\mathbf{x})| \le B\\ &(x_{11},x_{21},x_{33})=(x_{11},x_{21},x_{31})=(x_{01},x_{11},x_{21})=1\\ &(x_{12},x_{22})=(x_{01}, x_{32})=(x_{01}, x_{33})=(x_{31}, x_{32})=1\\ \end{aligned} \right\}, \end{align*} with \begin{equation*} \mathscr{P}^\dag(\mathbf{x}) =\left\{ \begin{aligned} &x_{01}x_{31}^2x_{32}^2x_{33}^3, \{x_{11},x_{21}\}x_{31}x_{32}^2x_{33}^2, \{x_{11},x_{21}\}^2\{x_{12},x_{22}\}x_{32},\\ &x_{01}^3 \{x_{12},x_{22}\}^2 x_{31}^2x_{33} , x_{01}^2 \{x_{11},x_{21}\} \{x_{12},x_{22}\}^2 x_{31} \end{aligned} \right\}. \end{equation*} \end{cor} By the same type of computations as before, one concludes Theorem~\ref{thm2} from Corollary~\ref{prop:countingproblem_line_8} and Theorem~\ref{manin-cor} applied to the almost Fano variety $\smash{\tX}^\dag$.
\section{Introduction} \label{sec:intro} The holographic principle~\cite{Tho93, Sus95, Bou02} posits that quantum theories of gravity are holographic: quantum gravity in~$(d+1)$ dimensions (hereafter referred to as the ``bulk'' theory) is dual to a lower-dimensional theory in~$d$ dimensions (the ``boundary''). Within this framework, the \textit{states} of particular physical relevance are those in which the bulk can be described semiclassically\footnote{Theories of quantum gravity that do not admit \textit{any} such states are possibly interesting as theoretical artifacts, but hardly of relevance to a physical study.}, i.e.~the states that can describe the emergence of classical spacetime. What, then, are the lower-dimensional holographic duals of such states? Because little is known about these duals in broad generality, we seek guidance from the most explicit manifestation of holography: the AdS/CFT correspondence~\cite{Mal97, GubKle98, Wit98a} (though ultimately our analysis in this paper will be quite general). Even within this relatively well-understood example, the answer to our question remains elusive: while bulk states with a semiclassical description correspond to the large-$N$, large-$\lambda$ limit of the dual local quantum field theory (QFT), the converse is manifestly false, as not all boundary states give rise to semiclassical gravity duals in this limit\footnote{Here, by ``semiclassical gravity'' we mean the usual definition of a system well-approximated by QFT on a curved background spacetime~$(M,g_{ab})$ (whose dynamics may be related to classical background matter fields, but backreaction from quantized fields can be ignored).} (though see e.g.~\cite{Van10} for early work on this question). 
A key observation makes the problem more tractable: the most fundamental necessary ingredient for a well-defined bulk QFT is not the bulk spacetime geometry~$(M,g_{ab})$ itself, but rather its \textit{causal structure}; it is precisely this structure that allows microcausality to be well-defined, a \textit{sine qua non} of local (relativistic) quantum field theory (bulk or boundary). This observation motivates a simpler, more basic question: \\ \noindent \textit{Which states of a boundary theory holographically describe a bulk with semiclassical causal structure, and what precisely is this description?}\\ In this paper, we will answer this question most precisely when the boundary theory is a QFT, but we will also give a broad approach to answering the same question when the boundary theory is more general; we will then examine some of the consequences of the answer. Our most explicit construction applies to any holographic description of quantum gravity that obeys the following properties: \begin{description} \item[\namedlabel{H1}{H1}]\textit{(Bulk)} When a semiclassical bulk exists, it contains some bulk matter field~$\phi(x)$ which is weakly interacting and thus can be treated perturbatively in the interaction coupling; \item[\namedlabel{H2}{H2}] \textit{(Boundary)} The lower-dimensional theory is a QFT that lives on a timelike geometry that can be embedded on the boundary of the bulk (which need not be an asymptotic boundary); \item[\namedlabel{H3}{H3}] \textit{(Bulk-to-boundary)} There exists an ``extrapolate dictionary'' that relates correlators of local operators~$\Ocal(X)$ in the boundary to an appropriate limit of correlators of dual local operators~$\phi(x)$ in the bulk: \be \lim_{x_i \to X_i} \ev{\phi(x_1)\cdots\phi(x_n)}_\mathrm{bulk} = \ev{\Ocal(X_1)\cdots\Ocal(X_n)}_\mathrm{boundary}, \ee where~$x_i$ and~$X_i$ denote points in the bulk and boundary, respectively. 
\end{description} We will call any duality obeying~\ref{H1}-\ref{H3} ``strong holography''; the large-$N$, large-$\lambda$ limit of AdS/CFT is one such example. Since much of our discussion is guided by intuition from AdS/CFT, we find it useful for pedagogical purposes to sometimes frame it in that language. We emphasize, however, that our results are applicable to more general forms of holography; we will later discuss generalized versions of conditions~\ref{H1}-\ref{H3} (though we remain agnostic about the existence of such forms of holography). Before proceeding, it is worth pausing to address some potential objections to the question posed above. First, as was pointed out in~\cite{Mar14}, emergent gravity is inherently nonlocal: it is described by a Hamiltonian that is purely a boundary term on shell. This result may raise a potential concern regarding the validity of property~\ref{H1}: how can the bulk be local? This concern is unfounded, as our requirement is only \textit{approximate} bulk locality in the appropriate limit. This is precisely the case in e.g.~AdS/CFT, where semiclassically the bulk is well-described by local quantum fields on a classical asymptotically AdS spacetime. Second, advocates of the so-called strong form of AdS/CFT would contend that any state of a UV-complete CFT should have a bulk dual with asymptotically AdS boundary conditions. Even if we adopt the idea that this strong form is true -- which is far from clear, especially when the boundary theory in question lives on a curved background, as allowed by our present discussion -- there is no tension with our question on the dual of semiclassical causal structure. We are interested in a \textit{sufficient} condition for the existence of a \textit{semiclassical} dual causal geometry and how this geometry is constructed, a question to which the strong form of AdS/CFT has no relevance. 
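For orientation, it may help to keep in mind the most familiar realization of~\ref{H3}; the formulas below are the standard AdS/CFT expressions and are not needed for our general arguments. For a free scalar~$\phi$ of mass~$m$ on~$\mathrm{AdS}_{d+1}$ of curvature radius~$\ell$, written in Poincar\'e coordinates with radial coordinate~$z$, the extrapolate dictionary reads \be \Ocal(X) = \lim_{z \to 0} z^{-\Delta}\, \phi(z,X), \qquad \Delta = \frac{d}{2} + \sqrt{\frac{d^2}{4} + m^2 \ell^2}, \ee so that correlators of the boundary operator~$\Ocal$ arise as rescaled boundary limits of bulk correlators of~$\phi$.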
We now summarize the answer to our question -- ``when does a state of a QFT have a dual semiclassical causal structure?'' -- in the context of strong holography. We build on the work of~\cite{EngHor16a,EngHor16c} to construct a field theoretic object, which we term a \textit{causal state}~$\widetilde{\ket{\psi}}$ or \textit{causal density matrix}~$\tilde{\rho}$, that encodes precisely the boundary information necessary to identify whether a dual bulk causal structure exists and (re)construct this dual if it does. In other words, the causal state~$\widetilde{\ket{\psi}}$ is obtained from a physical state~$\ket{\psi}$ via a coarse-graining that discards any information not encoded in the dual causal structure. Our approach leads us to several attractive insights: for example, causal state holography appears to be quantum error correcting. It also answers a longstanding question in AdS/CFT: what is the dual of the so-called causal wedge~\cite{BouLei12, HubRan12}? Additional interesting features include defining ``entropy-like'' measures of coarse-graining, causal RG flow, and possible duals of bulk conformal invariants. The relationship to the causal wedge deserves additional comment. Recall that due to superficial parallels between the causal wedge and the so-called entanglement wedge~\cite{CzeKar12, Wal12, HeaHub14}, some works have espoused the belief that the area of the rim of the causal wedge -- the so-called causal surface -- must be dual to an information-theoretic quantity in the boundary theory~\cite{HubRan12}. However, the realization of those expectations via the conjecture of~\cite{FreMos13} was disproved in~\cite{KelWal13}, whose own conjecture, again motivated by intuitions about entanglement, was recently disproved in~\cite{EngWal17}. 
More importantly, the general arguments of~\cite{EngWal17} make it clear that the geometric similarity between the causal wedge and the entanglement wedge is a red herring: the relationship between the area of the causal surface and the causal wedge is fundamentally different from the relationship between the area of the HRT surface~\cite{RyuTak06, HubRan07} and the entanglement wedge. Attempts to gain insight into the causal wedge by na{\"i}vely drawing on intuitions regarding the entanglement wedge are fundamentally misguided; the two should not be thought of as analogous objects. It is clear that a paradigm shift in the general approach to the causal wedge dual is required. Here we advocate a way of thinking about the causal wedge that does not rely upon entanglement-based intuition. We do not assume that the area of the causal surface is a special quantity, and we do not endorse the idea that it should have a simpler interpretation than the area of any other surface. Instead, we adopt the following stance: since the causal wedge is defined purely from the \textit{causal structure} of the bulk, only the causal structure of the causal wedge should have a natural holographic dual. This dual, as we will show, is none other than the (reduced) causal state~$\widetilde{\ket{\psi}}_\Rcal$. A potential objection to this viewpoint, of course, comes from the argument that the area of the causal surface should be ``distinguished'' if the Generalized Second Law~\cite{Bek73, Haw76} is to have a meaningful holographic interpretation on the boundary. Moreover, since the causal surface lies in the entanglement wedge, its area is obviously computable from the reduced density matrix~\cite{JafLew15, DonHar16}. These considerations do not, however, imply that the area, or more generally the generalized entropy, of the causal surface should be special \textit{from the perspective of the causal wedge}. 
Our opinion is that the area of the causal surface does not have a nice information-theoretic interpretation in terms of only data necessary to reconstruct the causal wedge; we do not exclude the possibility of a nice interpretation in terms of the full data necessary to reconstruct the entanglement wedge. \subsection*{Summary of Results} Let us give a brief preview of how Properties~\ref{H1}-\ref{H3} define a causal state and the corresponding dual causal structure. Consider a state~$\ket{\psi}$ with a semiclassical dual causal structure in the sense described above, and consider the~$n$-point function $\ev{\phi(x_1)\cdots\phi(x_n)}_\psi$ of some local quantum bulk field~$\phi(x)$. If~$\ket{\psi}$ is Hadamard (which we will assume throughout), then this correlator exhibits two types of singularities: when two or more of the points~$x_i$ are coincident, or when all of the~$x_i$ are null separated from a common point~$y$ with the corresponding Landau diagram obeying all conservation laws, as shown in Figure~\ref{subfig:generalLandau}. \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=4cm,page=1]{Figures-pics.pdf} \label{subfig:generalLandau} } \hspace{2cm} \subfigure[]{\includegraphics[width=3cm,page=2]{Figures-pics.pdf} \label{subfig:bndryLandau} } \caption{\subref{subfig:generalLandau}: In a general QFT with perturbative interactions, the correlator~$\ev{\phi(x_1)\cdots\phi(x_n)}_\psi$ is singular when all the~$x_i$ are null separated from a common vertex~$Y$, with the corresponding Landau diagram (shown here) obeying all conservation laws. This is the so-called ``lightcone singularity''. \subref{subfig:bndryLandau}: Reproduced from~\cite{EngHor16a}. 
From the point of view of the boundary field theory, these bulk-point singularities of a \textit{boundary} correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_n)}_\psi$ identify a bulk point~$p$.} \label{fig:Landau} \end{figure} The extrapolate dictionary dictates that, in the limit that the points $x_{i}$ are taken to boundary points $X_{i}$, the boundary~$n$-point correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_n)}_\psi$ inherits the singularity structure of~$\ev{\phi(x_1)\cdots\phi(x_n)}_\psi$. From a purely \textit{boundary} perspective, the correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_n)}_\psi$ now exhibits three types of singularities: the usual universal short-distance singularities, the usual universal light-cone singularities due to null separation on the boundary, and, when a semiclassical bulk exists, non-universal singularities arising when all the~$X_i$ are null-separated from a \textit{bulk} point~$p$, as shown in Figure~\ref{subfig:bndryLandau}. These latter singularities were dubbed ``bulk-point singularities'' in~\cite{MalSimZhi}, and their existence, investigated earlier in~\cite{PolSus99, GarGid09, HeePen09, Pen10, OkuPen10}, is a necessary condition for the existence of a semiclassical holographic dual causal structure (with perturbative dynamics). This condition alone, however, is not sufficient to ensure the existence of a dual semiclassical conformal geometry. How are we to determine whether these ``extra'' singularities in~$\ev{\Ocal(X_1) \ldots \Ocal(X_n)}_\psi$ are sourced by a well-behaved semiclassical bulk or are just artifacts of some strange, possibly pathological aspect of the field theory?
The answer was provided in~\cite{EngHor16a,EngHor16c}, where it was shown that -- when a semiclassical bulk exists -- the singularities in~$(d+3)$-point correlators (with~$d$ the dimension of the boundary spacetime) can be used to define special spacelike slices of the boundary spacetime; these were called \textit{lightcone cuts} because they correspond to the intersection of the lightcone of a bulk point~$p$ with the boundary~\cite{New76}. The geometry of these cuts yields an overdetermined system of algebraic equations for the conformal metric\footnote{That is, the metric up to an overall conformal factor: this object is precisely the causal structure.} at~$p$ of the bulk dual; by construction, these equations have a unique consistent solution whenever the singularities in question are sourced by a local, causal bulk. This boundary-to-bulk procedure may be made precise as follows: consider the~$(d+3)$-point correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}_\psi$ of some local operator~$\Ocal(X)$ in the boundary theory. If the construction of~\cite{EngHor16a,EngHor16c} from the singularity structure of this correlator gives rise to a well-defined conformal geometry, then that geometry \textit{is} the semiclassical holographic dual causal structure. Moreover, there exists a perturbatively interacting quantum field~$\phi(x)$ dual to~$\Ocal(X)$ which gives rise to the singularities in~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}_\psi$. This is explicit spacetime emergence. The causal state~$\widetilde{\ket{\psi}}$ advertised above is designed to capture precisely this singularity structure by coarse-graining over everything else. Roughly, we say that two states~$\ket{\psi}$ and~$\ket{\psi'}$ are causally equivalent, denoted~$\ket{\psi}\sim\ket{\psi'}$, if they produce the same singularity structure in the correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}$.
Then the causal state~$\widetilde{\ket{\psi}}$ associated to some state~$\ket{\psi}$ is precisely the equivalence class of~$\ket{\psi}$ under~$\sim$. In fact, we will argue that it is possible to quotient the full Hilbert space of the theory by~$\sim$, thereby producing a \textit{causal Hilbert space}~$\widetilde{\Hcal}$. The causal state is an element of~$\widetilde{\Hcal}$, which allows us to define the causal density matrix~$\tilde{\rho} \equiv \widetilde{\ket{\psi}} \widetilde{\bra{\psi}}$ as a linear operator on~$\widetilde{\Hcal}$. By construction, therefore, all that is needed to determine the semiclassical holographic dual causal structure of a state~$\ket{\psi}$ (and whether or not there even is one) is the causal state~$\widetilde{\ket{\psi}}$. We find it useful to further consider a generalization of the causal state $\widetilde{\ket{\psi}}$ (or density matrix~$\tilde{\rho}$): given a subregion~$\Rcal$ of the boundary spacetime, we may restrict our attention to the portion of the lightcone cuts that intersects~$\Rcal$. By performing this restriction to~$\Rcal$ in two distinct ways (one of which is a stricter version of the other), we obtain two insights. First, if we require that the correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}$ be singular when \textit{all} of the~$X_i$ are contained in~$\Rcal$, we find two intriguing features: \textit{(i)} there is redundancy in how the conformal metric at a particular bulk point~$p$ is encoded in the singularity structure of~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}$; \textit{(ii)} when~$\Rcal$ is too small, the singularity structure of~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}$ restricted to~$\Rcal$ is insufficient to reconstruct the conformal metric at~$p$.
These two features bear a strong resemblance to recent insights on \textit{(i)} quantum error correction and \textit{(ii)} quantum secret-sharing in AdS/CFT~\cite{AlmDon15, HaPPY, Har16, MinPol15}; our result may be viewed as a form of ``causal'' quantum error correction in a more general holographic setting than just AdS/CFT. \begin{figure}[t] \centering \includegraphics[width=2.5cm,page=3]{Figures-pics.pdf} \caption{The causal wedge $C_{W}[\Rcal]$ of some causal boundary region~$\Rcal$ is the intersection of the past and future of~$\Rcal$ in the bulk.} \label{fig:causalwedge} \end{figure} Second, consider the case where the correlator~$\ev{\Ocal(X_1) \ldots \Ocal(X_{d+3})}$ is singular when only \textit{some} of the~$X_i$ are required to be contained in~$\Rcal$. These singularities are sufficient to reconstruct the portion of the lightcone cuts that intersect~$\Rcal$, which in turn identify bulk points that lie in the intersection of the past and future of~$\Rcal$. When~$\Rcal$ is globally hyperbolic (colloquially known as a boundary causal diamond), this intersection is the causal wedge~$C_{W}[\Rcal]$ of~$\Rcal$, as shown in Figure~\ref{fig:causalwedge}. By an appropriate additional coarse-graining, we may define a \textit{reduced} causal state on~$\Rcal$,~$\widetilde{\ket{\psi}}_\Rcal$, which encodes the information necessary to reconstruct the conformal geometry of~$C_{W}[\Rcal]$. The amount of coarse-graining performed in going from~$\ket{\psi}$ to~$\widetilde{\ket{\psi}}$ can be captured by a number~$D_\psi$ which roughly encodes the ``size'' of the subspace of~$\Hcal$ that projects to~$\widetilde{\ket{\psi}}$; this object is similar in spirit to (but very different in construction from) the entanglement entropy. We now discuss more general forms of holography.
Motivated in part by the subtleties involved in the formulation of a holographic duality of two (approximately) local QFTs, we also consider more general dualities that obey a partial relaxation of ``strong holography''. We find that much of our discussion generalizes, albeit less explicitly, to any system obeying Property~\ref{H1} above and weaker versions of Properties~\ref{H2} and~\ref{H3}: \begin{description} \item[\namedlabel{H2prime}{H2$'$}] \textit{(Boundary)} The lower-dimensional theory is a well-defined dynamical theory that can be embedded on the boundary of the bulk (which need not be an asymptotic boundary). This boundary is timelike or null; \item[\namedlabel{H3prime}{H3$'$}] \textit{(Bulk-to-boundary)} There exists an ``extrapolate dictionary'' that relates correlators of local bulk fields $\phi(x)$ to \textit{some} object $O(X)$ in the boundary theory in the appropriate limit: \be \label{eq:Odef} \lim_{x_i \to X_i} \ev{\phi(x_1)\cdots\phi(x_n)}_\mathrm{bulk} = O(X_{1}\cdots X_{n})_\mathrm{boundary}, \ee where~$x_i$ and~$X_i$ denote points in the bulk and boundary spacetimes, respectively. \end{description} Because the left-hand side of equation~\eqref{eq:Odef} encodes bulk scattering, the object $O(X)$ is ``morally'' the bulk S-matrix; any reasonable holographic dual should therefore obey~\ref{H3prime}. We call a duality obeying properties~\ref{H1},~\ref{H2prime}, and~\ref{H3prime} ``weak holography'' (see also~\cite{NomSal16b} for work on a formulation of general holography). Many of our statements apply equally well to such systems, provided the objects $O(X)$ in the boundary theory are known. For most of this paper, we will specialize to the strong form of holography, where~$O(X)$ is the expectation value of a composite operator; in Section~\ref{sec:hol} we will examine which of our statements are generalizable to weak holography. This paper is structured as follows. 
In Section~\ref{sec:CDM}, we review and generalize the construction of~\cite{EngHor16a,EngHor16c} before defining precisely the equivalence relation~$\sim$ and the causal state and causal density matrix. In Section~\ref{sec:QEC} we discuss the similarities between our construction and quantum error correction. In Section~\ref{sec:causalwedge} we discuss the natural interpretation of the causal wedge that emerges from our formalism. Section~\ref{sec:hol} gives a generalization of our results for weak holography. We conclude in Section~\ref{sec:disc} with some discussion and future directions. \paragraph{Assumptions and Conventions:} We assume that the boundary theory lives on a~$C^{2}$ maximally extended,~$d$-dimensional, globally hyperbolic manifold $\partial M$ that is either timelike or null and geodesically complete (or, if this is a conformal boundary, it can be put in a conformal frame that is geodesically complete). We will restrict our attention to emergent conformal geometries on manifolds that are $C^{2}$, maximally extended, connected,~$(d+1)$-dimensional, and globally or AdS hyperbolic~\cite{Wal12}. The condition of global or AdS hyperbolicity in the bulk is convenient but not essential; we will comment on it in Section~\ref{sec:disc}. All correlation functions are assumed to be time-ordered. All other conventions are as in~\cite{Wald} unless otherwise stated. \section{The Causal Density Matrix} \label{sec:CDM} In this section, we describe precisely how the causal state~$\widetilde{\ket{\psi}}$ is defined, beginning with a review of the lightcone cut construction. We will restrict to the case of strong holography; the generalization to weak holography will be presented in Section~\ref{sec:hol}. 
\subsection{The Lightcone Cut Construction} \label{subsec:cutreview} In~\cite{EngHor16a, EngHor16c}, it was shown (in the context of AdS$_{d+1}$/CFT$_d$) that the singularity structure of~$(d+3)$-point CFT correlators can be used to obtain the causal structure of a semiclassical bulk gravitational dual. Note that while the construction of~\cite{EngHor16a, EngHor16c} restricted itself to the context of AdS/CFT, its ingredients apply more generally. The purpose of this subsection is both to review and generalize the salient results of~\cite{EngHor16a}. We will begin with the geometrical (bulk) aspects of the construction, and then relate them to the singularities of~$(d+3)$-point correlators in the dual field theory. To that end, recall that the future (past) of a point~$p$, denoted~$I^{+}(p)$~($I^{-}(p)$), is defined as the set of all points that can be reached from~$p$ via a future-directed (past-directed) timelike path. The boundary of the future (past), denoted~$\partial I^{+}(p)$~($\partial I^{-}(p)$), is an achronal surface generated by null future-directed (past-directed) geodesics from~$p$. These generators leave the surface after caustics and intersections, ensuring achronality. The future \textit{lightcone cut} of a point $p$, denoted $C^{+}(p)$, is the intersection of the boundary of the future of $p$ with the boundary, with a similar definition for the past lightcone cut $C^-(p)$: \be C^{\pm}(p)= \partial I^{\pm}(p)\cap \partial M. \ee We will use $ C(p)$ as shorthand whenever it does not matter if we are discussing the past or future lightcone cut of $p$, and we will call it ``the cut of $p$'' for short. \begin{figure}[t] \centering \includegraphics[width=4.5cm,page=4]{Figures-pics.pdf} \caption{The lightcone cuts of $p$ and $q$ are tangent due to a shared generator $\gamma$ (thick black line) of $\partial J(p)$ and $\partial J(q)$.} \label{fig:nullsep} \end{figure} We will need to extend several key results from~\cite{EngHor16a}. 
First, it was proven in~\cite{EngHor16a} that points $p$ in the causal wedge of $\partial M$, defined as~$C_{W}[\partial M] = I^{+}[\partial M]\cap I^{-}[\partial M]$, are in a one-to-one correspondence with lightcone cuts $C^{\pm}(p)$. Thus the causal wedge of $\partial M$ can be represented as the space of past and future lightcone cuts. Second, it was shown that if two lightcone cuts $C(p)$ and $C(q)$ are tangent at a (boundary) point $X$, then the corresponding bulk points $p$ and $q$ are null-separated\footnote{By null-separated, we mean that there exists a null curve connecting them, but no timelike one.}; see Figure~\ref{fig:nullsep} for an illustration. In~\cite{EngHor16a}, it was assumed that $ M$ was asymptotically AdS (and thus $\partial M$ was conformal to the Einstein Static Universe). We extend these results below in four propositions to include any boundary spacetime obeying our assumptions. \paragraph{Proposition 1: \label{prop1}} If $C(p)$ and $C(q)$ are tangent at a point $X$, then $p$ and $q$ are null-separated. \begin{proof} Let us work with past cuts for simplicity. Since $C^{-}(p)$ is by assumption $C^{1}$ at $X$, $\partial I^{-}(p)$ must be $C^{1}$ at $X$ as well. There is therefore a unique generator of $\partial I^{-}(p)$ at $X$, and this unique generator is orthogonal to $C^{-}(p)$ at $X$. Because $\partial M$ is causal, there is precisely one future-directed inwards-going null vector orthogonal to~$C^-(p)$ at~$X$. This is the generator $\gamma$ of $\partial I^{-}(p)$ at $X$. But the statements above hold equally well for $\partial I^{-}(q)$, so $\gamma$ is also a generator of $\partial I^{-}(q)$. Therefore $\gamma$ goes through both $p$ and $q$. Because $\gamma$ is a generator of both $\partial I^{-}(p)$ and $\partial I^{-}(q)$ and is achronal, $p$ and $q$ must be null-separated. \end{proof} \paragraph{Proposition 2: \label{prop2}} $C(p)$ is acausal whenever $p\in M$. \begin{proof} By global (or AdS) hyperbolicity, $C(p)$ is achronal.
If $\partial M$ is timelike everywhere on its intersection with $C(p)$, the result follows trivially as well: the intersection of a timelike surface with an achronal surface is spacelike. Suppose $\partial M$ is null somewhere on its intersection with $C(p)$. Then $C(p)$ must be spacelike unless $\partial I(p)$ shares a generator with $\partial M$ (this generator does not leave the congruence due to intersections). But then $p\in\partial M$. \end{proof} \paragraph{Proposition 3:\label{prop3}} $C(p)$ is a complete continuous spatial slice of any connected component of $\partial M$ that intersects $I(p)$. Moreover,~$C(p)$ is~$C^1$ on all but a measure-zero set. \\ \noindent The proof of this is identical to the proof in~\cite{EngHor16a}, thanks to Proposition 2. Propositions 1-3 immediately allow us to determine the sign of null-separation: if $C(p)$ and $C(q)$ are tangent at a point $X$, and in a neighborhood of $X$, $C(p)$ is in the causal past of $C(q)$, then there is a future-directed null geodesic from $p$ to $q$. \paragraph{Proposition 4:\label{prop4}} Let $p\in I^{+}[\partial M]$. Then~$(i)$ $C^{-}(p)$ is nonempty, and~$(ii)$ if $C^{-}(p)=C^{-}(q)$, then $p=q$. A similar result holds for future cuts. \begin{proof} $p\in I^{+}[\partial M]$ implies that $I^{-}(p)\cap \partial M \neq \varnothing$. Global (AdS) hyperbolicity implies that $\exists \ X \in \partial M$ such that $X\notin I^{-}(p)$. Therefore $\partial I^{-}(p)\cap \partial M \neq \varnothing$, proving~$(i)$. Next, suppose $C^{-}(q)=C^{-}(p)$. Then by Rademacher's theorem~\cite{Rademacher1919} (and the fact that the lightcone cuts are Lipschitz manifolds~\cite{HawEll}), $C^{-}(p)$ and $C^{-}(q)$ agree on a $C^{1}$ open set. Therefore $\partial I^{-}(p)$ and $\partial I^{-}(q)$, by the reasoning of the proof of Proposition 1, share an open set of generators. By global (AdS) hyperbolicity, $p=q$, proving~$(ii)$.
\end{proof} Note that because $\partial I(p)\cap \partial M$ is a spacelike slice of the geometry (for causal boundaries) by Proposition 3, this also means that two different lightcone cuts cannot correspond to one point (they would be causally separated from one another). The final result that we will use is a prescription for obtaining the conformal metric at a point $p$, i.e. the metric up to an overall function, from a sufficiently large discrete set of null vectors at $p$. This remains unaltered from~\cite{EngHor16a}. The idea is simple: if $\{\ell_{i}\}_{i=1}^{d+1}$ is a known set of $(d+1)$ linearly independent null vectors, then the metric at $p$ is determined, up to an overall factor\footnote{If the $\ell_{i}$ were not null, the metric would be fully determined.}, by the inner products of the $\ell_{i}$. By assumption, the $\ell_{i}$ are null, so $\ell_{i}\cdot \ell_{i}=0$ (no summation on the $i$ index). To fix the conformal metric, we need to determine the inner products $\ell_{i}\cdot \ell_{j}$ for $i\neq j$. Let $\{\eta_{k}\}$ be another known set of null vectors at $p$, which can be expanded in the $\ell_{i}$ basis: \be \eta_{k}=\sum\limits_{i}M_{ik}\ell_{i}. \ee Because the $\eta_{k}$ are null, their inner products $\eta_{k}\cdot\eta_{k}$ vanish. This yields a set of algebraic equations for the inner products $\ell_{i}\cdot \ell_{j}$: \be \label{eq:sys} 0=\eta_{k}\cdot \eta_{k}=\sum\limits_{i,j}M_{ik}M_{jk}\ell_{i}\cdot \ell_{j}. \ee Equation~\eqref{eq:sys} determines the metric at $p$ up to an overall factor in terms of the known coefficients~$M_{ij}$. These equations are generically overdetermined, but we are guaranteed a solution whenever there exists a well-defined bulk metric at $p$. However, we are of course interested in determining the conformal metric at $p$ from \textit{boundary} data rather than bulk data.
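The linear-algebra step in equation~\eqref{eq:sys} is easy to check numerically. The following sketch is purely illustrative and not part of the construction itself: it assumes a flat Minkowski metric at~$p$ (so that the ``true'' inner products are available for comparison) and invents concrete choices of the null vectors~$\ell_i$ and~$\eta_k$; it then recovers the inner products~$\ell_i\cdot\ell_j$, up to overall scale, from the nullness conditions alone.

```python
import numpy as np

# Illustrative only: a flat Minkowski metric at p stands in for the
# (unknown) bulk metric, so the recovered inner products can be checked.
g_true = np.diag([-1.0, 1.0, 1.0, 1.0])

# d+1 = 4 linearly independent null vectors ell_i (columns of L)
L = np.array([[1.0,  1.0, 0.0, 0.0],
              [1.0, -1.0, 0.0, 0.0],
              [1.0,  0.0, 1.0, 0.0],
              [1.0,  0.0, 0.0, 1.0]]).T

# extra null vectors eta_k, used only through their expansion in the ell_i
s2, s3 = 1 / np.sqrt(2), 1 / np.sqrt(3)
etas = np.array([[1.0, 0.0, -1.0, 0.0],
                 [1.0, 0.0, 0.0, -1.0],
                 [1.0,  s3,  s3,  s3],
                 [1.0,  s2,  s2, 0.0],
                 [1.0,  s2, 0.0,  s2]])

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # 6 unknowns

# each eta_k = sum_i M_ik ell_i; nullness of eta_k gives one homogeneous
# equation 0 = sum_{i<j} 2 M_ik M_jk (ell_i . ell_j), as in eq. (eq:sys)
A = np.array([[2 * M[i] * M[j] for (i, j) in pairs]
              for M in (np.linalg.solve(L, v) for v in etas)])

# the one-dimensional kernel of A is the conformal metric data at p
sol = np.linalg.svd(A)[2][-1]

truth = np.array([L[:, i] @ g_true @ L[:, j] for (i, j) in pairs])
sol *= truth[0] / sol[0]        # fix the overall (conformal) normalization
assert np.allclose(sol, truth, atol=1e-6)
```

With five generic $\eta_k$ the $5\times 6$ homogeneous system has a one-dimensional kernel, so the solution is unique up to the conformal factor, in line with the statement that equation~\eqref{eq:sys} is generically overdetermined yet consistent whenever a well-defined bulk metric exists.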
Using the approach above then requires being able to express null vectors at a bulk point $p$ in terms of objects defined on the boundary. This is precisely what the lightcone cuts do: if $C(p)$ and $C(q)$ are tangent at a point, then there is a null geodesic connecting $p$ and $q$, which corresponds to a null vector at $p$. More explicitly, using the cut-point correspondence, we may work in the space of lightcone cuts ${\cal M}$ instead of the bulk: a lightcone cut $C(p)$ is a point $P$ in the space of cuts ${\cal M}$. We \textit{define} $P$ and $Q$ to be null-separated if $C(p)$ and $C(q)$ are tangent, which is the same as defining $P$ and $Q$ to be null-separated when $p$ and $q$ are null-separated. Thus we have endowed ${\cal M}$ with the same Lorentzian structure as the bulk $M$. By taking enough lightcone cuts $C(q)$ that are tangent to $C(p)$, we generate enough null vectors in the lightcone of $P$ to write down the system of equations~\eqref{eq:sys} for the conformal metric at $P$. Because ${\cal M}$ has the same conformal metric as $M$, this is equivalently a system of equations for the conformal metric at $p$. Whenever the lightcone cuts are generated by a well-defined bulk metric, there will be a unique, consistent solution. In order to complete the reconstruction of the conformal metric from boundary data, we must determine how to obtain the lightcone cuts from boundary data. This can be accomplished by noting that the lightcone cuts $C^{\pm}(p)$ of a bulk point $p$ are precisely the boundary locations that are null (and achronally) separated from a single bulk point $p$. Thus, if $\{X_{i}\}_{i=1}^{n}$ is a set of $n$ points on $C^{\pm}(p)$, then they correspond to the minimal time separation at which we can draw a bulk Landau (position-space) diagram for some perturbatively-interacting bulk quantum field $\phi$ such that the diagram endpoints are all null-separated from the common bulk vertex $p$ (see Figure~\ref{fig:vertex}).
When this Landau diagram is on-shell, it produces a ``light-cone singularity'' in the $n$-point correlator of the perturbatively interacting field $\phi$. \begin{figure}[t] \centering \includegraphics[width=5cm,page=5]{Figures-pics.pdf} \caption{The lightcone cuts of a point $p$ constructed from its bulk-point singularities.} \label{fig:vertex} \end{figure} The extrapolate dictionary immediately implies that the time-ordered $n$-point correlator of the local boundary operator~$\Ocal(X)$ dual to $\phi(x)$ is also singular~\cite{MalSimZhi}. Thus the lightcone cuts consist of singularities of these boundary correlators. We may use this observation to construct the full cuts from correlators as follows. First, note that a set of $(d+1)$ null achronal geodesics in a $(d+1)$-dimensional bulk uniquely identifies a bulk point if they intersect; thus to identify a bulk point, we only need a singular $(d+1)$-point correlator. However, generically $(d+1)$-point correlators corresponding to a diagram with a bulk point vertex are not singular, as the bulk Landau diagram does not conserve energy-momentum at the vertex. To ensure conservation of energy-momentum,~\cite{MalSimZhi} considered instead $(d+2)$-point correlators. Thus, to uniquely fix a bulk point, we look for sets of $(d+2)$ points that result in singular correlators. To find the lightcone cuts,~\cite{EngHor16a} added one more point: a $(d+3)$-point correlator gives the freedom of moving one point to trace out the lightcone cut while moving another point to conserve energy-momentum. Keeping $(d+1)$ points fixed guarantees that the corresponding bulk point does not move around as the lightcone cut is traced out. An additional requirement of~\cite{EngHor16a} was that the singularity of the $(d+3)$-point correlator be at the smallest possible time-separation on the boundary.
This ensures that the object being traced out is the actual lightcone cut, rather than some other cross-section of the lightcone; this subtlety is due to geodesic intersections. The above construction is restricted to the boundary's causal wedge, as it requires conservation of energy-momentum at bulk points, though it needs a small modification in cases of collapsing black holes. In generic spacetimes, it permits the determination of the cuts of points anywhere in the causal wedge, but in certain nongeneric situations like eternal black holes, conservation of energy-momentum at the vertex excludes components of the causal wedge as well. To summarize the construction: consider any local operator~$\Ocal(X)$; if its time-ordered~$(d+3)$-point correlator has a singularity structure that consistently defines a causal geometry by equation~\eqref{eq:sys}, then that causal geometry is the emergent dual, and the operator~$\Ocal(X)$ is dual to some perturbatively interacting, local, causal dynamical field on it. \subsection{Causal States} \label{subsec:CDM} The lightcone cut construction reviewed above gives an explicit algorithm for generating an emergent causal structure from QFT data in the form of the singularities of time-ordered~$(d+3)$-point functions. \textit{A priori}, this data alone does not recover the entire bulk geometry, as it does not capture any information about the bulk conformal factor\footnote{It also by definition misses any spacetime inside an event horizon, and as discussed in~\cite{EngHor16a}, potentially also regions outside of event horizons in nongeneric spacetimes.}; thus we can think of the data contained in this singularity structure as a type of coarse-graining over the bulk conformal factor. This is the coarse-graining promised in Section~\ref{sec:intro}, which we will now use to define causal states. To develop some intuition, consider a (pure) state~$\ket{\psi}$ of the field theory. 
The key observation is that the conformal geometry dual to~$\ket{\psi}$ is encoded in just the singularity structure of~$O_\psi(X_i) \equiv \ev{\Ocal(X_1) \cdots \Ocal(X_{d+3})}_\psi$. We therefore coarse-grain out any other information about~$\ket{\psi}$ (or the theory) by defining an equivalence relation~$\sim$, where~$\ket{\psi_1} \sim \ket{\psi_2}$ if~$O_{\psi_1}(X_i)$ and~$O_{\psi_2}(X_i)$ have the same singularity structures. Then by construction, if~$\ket{\psi_1} \sim \ket{\psi_2}$, both~$\ket{\psi_1}$ and~$\ket{\psi_2}$ give rise to the same higher dimensional conformal geometry. The \textit{causal state} of~$\ket{\psi}$ is defined as the equivalence class of~$\sim$ to which~$\ket{\psi}$ belongs, which we denote as~$\widetilde{\ket{\psi}}$. Thus by construction, the causal states that give rise to well-defined dual conformal geometries are in one-to-one correspondence with those conformal geometries. We now proceed to formalize this construction. Consider the Hilbert space~$\Hcal$ of the QFT and the space~$\overline{\Lcal(\Hcal)}$ of all linear operators on~$\Hcal$\footnote{This space consists of \textit{all} linear operators, and not just bounded ones; hence the notation~$\overline{\Lcal(\Hcal)}$.}. A state~$\ket{\psi} \in \Hcal$ induces a map to the extended complex numbers~$\overline{\cnum}$ (also known as the Riemann sphere), corresponding to the complex numbers plus the ``point at infinity'': \be \ev{\cdot}_\psi: \overline{\Lcal(\Hcal)} \to \overline{\cnum}, \ee where~$\ev{A}_\psi \equiv \me{\psi}{A}{\psi}$ for any operator~$A \in\overline{\Lcal(\Hcal)}$. This map is just the usual expectation value (though note that in general~$A$ need not be Hermitian). 
Consider next any local operator~$\Ocal(X) \in \overline{\Lcal(\Hcal)}$; we use it to define a $\{0,1\}$-valued map on~$(d+3)$ copies of the QFT spacetime manifold~$\partial M$ (denoted as~$\partial M^{d+3}$) as follows\footnote{For CFTs, $\partial M$ should be taken to be in a standard conformal frame~\cite{EngHor15} to avoid issues with boundary geodesic incompleteness.}: \begin{subequations} \label{eq:Wmap} \bea W_{\psi}: \partial M^{d+3} &\to \{0,1\}\\ (X_1,\ldots,X_{d+3}) &\mapsto \begin{cases} 1 & \mbox{if } \left\|\left ( \prod\limits_{i=1}^{d+3}\partial_{i}^{k_{i}} \right) \ev{\Ocal(X_1) \cdots \Ocal(X_{d+3})}_\psi\right\| = \infty \mathrm{\ for \ some \ } \{k_{i}\} \\ 0 & \mbox{otherwise} \end{cases} \eea \end{subequations} where the norm is taken with respect to the metric on~$\partial M$ (in a standard conformal frame),~$\infty$ denotes the ``point at infinity'' of~$\overline{\cnum}$ (i.e.~the north pole of the Riemann sphere), and the $k_{i}$ are non-negative integers. The simplest example of such singularities is when all the $k_{i}$ vanish: i.e. when the correlator diverges. More generally, these singularities correspond to nonanalyticities in the correlator, which are captured by derivatives. The support of this map (that is, those points~$X_{i}$ on which~$W_{\psi} = 1$) consists of state-independent singularities (short-distance and boundary light-cone singularities) and state-dependent singularities (e.g.~bulk-sourced singularities). It is these latter singularities that define the lightcone cuts, and therefore the emergent causal geometry, when one exists. The map~$W$ thus precisely captures all of the information necessary to reconstruct the causal geometry, should it exist.
It is therefore natural to use this map to define the equivalence relation~$\sim$ by requiring that the maps~$W_{\psi_1}$ and~$W_{\psi_2}$ of any two equivalent states~$\ket{\psi_1}$ and~$\ket{\psi_2}$ coincide: \begin{defn} Let~$\ket{\psi_1}$ and~$\ket{\psi_2}$ be two states in the same Hilbert space with associated maps~$W_{\psi_1}$,~$W_{\psi_2}$. Then we say that~$\ket{\psi_1}$ and~$\ket{\psi_2}$ are \textit{causally equivalent}, written as~$\ket{\psi_1} \sim \ket{\psi_2}$, if we have \be W_{\psi_1}(X_1,\ldots,X_{d+3}) = W_{\psi_2}(X_1,\ldots,X_{d+3}) \ee for all~$\{X_i\}_{i=1}^{d+3}$. \end{defn} In principle, the relation~$\sim$ may depend on the choice of operator~$\Ocal$ used to define the map~$W_{\psi}$. However, the states in which we are ultimately interested are those with the property that the corresponding maps~$W_{\psi}$ are independent of choice of~$\Ocal$ (so that the same bulk conformal geometry arises for all of them); that is, whenever $\Ocal$ has a corresponding dual field in the bulk, which interacts perturbatively. We therefore do not bother labeling~$\sim$ by a choice of~$\Ocal$, though we will comment more on this issue in Section~\ref{sec:disc}. Note that the construction reviewed in Section~\ref{subsec:cutreview} made use of the lightcone cuts, rather than the map~$W_{\psi}$, to reconstruct the conformal geometry. Why do we not define the equivalence relation~$\sim$ using the cuts directly? The reason is that the discussion in Section~\ref{subsec:cutreview} uses statements originating from the bulk geometry, while here we are interested in defining the equivalence~$\sim$ in a purely field theoretic manner, applicable to all states whether or not they have a semiclassical holographic dual. Indeed, given an \textit{arbitrary} state~$\ket{\psi}$, the cut construction from the map~$W_{\psi}$, as outlined in Section~\ref{subsec:cutreview}, need not yield any well-defined surfaces. 
In such a case, the state~$\ket{\psi}$ does not have a semiclassical holographic dual. Thus though its map~$W_{\psi}$ exists independently of the holographic dual, it is meaningless to talk about its ``lightcone cuts''. However, in the case where~$\ket{\psi}$ \textit{does} have a well-defined semiclassical dual, the construction of Section~\ref{subsec:cutreview} is guaranteed to give rise to well-defined ``cuts'' of~$\ket{\psi}$. In such a case, the map~$W_{\psi}$ gives rise to a unique set of cuts, and any two such states~$\ket{\psi_1}$,~$\ket{\psi_2}$ with the same~$W_{\psi_{1,2}}$ must necessarily give rise to the same cuts, and therefore the same dual conformal geometry. The upshot is that working with~$W_{\psi}$ rather than the (potentially ill-defined) ``cuts'' makes the equivalence relation~$\sim$ a well-defined purely field-theoretic object. This precise definition of the equivalence relation~$\sim$ therefore gives us a precise field-theoretic definition of the causal state~$\widetilde{\ket{\psi}}$ as the equivalence class of~$\ket{\psi}$. It is natural to ask whether this equivalence relation is nontrivial; that is, whether there exist different states that are equivalent under this relation. If this were not the case, then states~$\ket{\psi}$ could be uniquely specified by the singularity structure of the correlator~$\ev{\Ocal(X_1) \cdots \Ocal(X_{d+3})}_\psi$. However, states are specified uniquely by the collection of expectation values of \textit{all} operators, not by the singularity structure of just one. This is a general expectation, but there is a clear argument that the equivalence relation cannot be trivial\footnote{We thank Gary Horowitz for this argument.}: in the large-$N$ limit of AdS/CFT, we may perturb a state at $\mathcal{O}(N^{0})$; this results in subleading corrections that do not change the singularity structure, but by construction change the state. 
Therefore multiple states must belong to the same causal equivalence class. This picture is of course perfectly consistent with the dual geometric interpretation: from the point of view of a holographic bulk dual, a particular state~$\ket{\psi}$ is dual to a bulk geometry, while a causal state~$\widetilde{\ket{\psi}}$ is dual to the causal structure of the bulk geometry. The coarse-graining from~$\ket{\psi}$ to~$\widetilde{\ket{\psi}}$ thus simply corresponds to a coarse-graining over all possible bulk conformal factors. \subsection{Reduced Causal States} \label{subsec:reduced} An important aspect of holography is subregion/subregion duality~\cite{BouLei12}: while it is valuable to work with the dual of the entire field theory, we now understand that \textit{subregions} of the field theory have sensible bulk subregion duals~\cite{JafLew15, DonHar16}. This property of holography is no less interesting in the present context, as it has applications to an open question in AdS/CFT: what is the dual of the causal wedge of some boundary region? We now develop some technology that will allow us to provide an answer in Section~\ref{sec:causalwedge}. To that end, note that the equivalence relation~$\sim$ was defined by the condition that $\ket{\psi_1} \sim \ket{\psi_2}$ if and only if the maps~$W_{\psi_1}$ and~$W_{\psi_2}$ agree on the entire CFT spacetime. There are two natural generalizations of this equivalence: one in which we require only some of the~$X_i$ in~$W_\psi(X_i)$ to lie in~$\Rcal$, and another, stronger version in which all the~$X_i$ are required to lie in~$\Rcal$. We will discuss the first generalization here; the second generalization has interesting ties to quantum error correction, and we postpone a more thorough investigation of it to Section~\ref{sec:QEC}.
Roughly, we define a ``reduced'' equivalence relation~$\sim_\Rcal$ by the condition that~$\ket{\psi_1} \sim_\Rcal \ket{\psi_2}$ if and only if the corresponding cuts of~$\ket{\psi_1}$ and~$\ket{\psi_2}$ agree within~$\Rcal$ regardless of what they may do elsewhere, thereby ``coarse-graining'' over any behavior of the cuts outside of the region~$\Rcal$ as shown in Figure~\ref{fig:rcs}. To make this precise, we define the equivalence relation~$\sim_\Rcal$ as follows: \begin{figure}[t] \centering \includegraphics[width=3.5cm,page=6]{Figures-pics.pdf} \caption{For the reduced causal density matrix of~$\Rcal$, only the structure of the lightcone cuts inside~$\Rcal$ matters: lightcone cuts that agree on~$\Rcal$ and do not agree elsewhere are identical as perceived by the reduced causal density matrix on~$\Rcal$. The dashed lines denote those portions of the cuts over which we coarse-grain.} \label{fig:rcs} \end{figure} \begin{defn} \label{defn:causalequivR} Let~$\Rcal$ be some subregion of the boundary spacetime, and let~$\ket{\psi_1}$ and~$\ket{\psi_2}$ be two states with associated maps~$W_{\psi_1}$,~$W_{\psi_2}$. Then we say that~$\ket{\psi_1}$ and~$\ket{\psi_2}$ are \textit{causally equivalent on~$\Rcal$}, written as~$\ket{\psi_1} \sim_\Rcal \ket{\psi_2}$, when for every~$k = 2,\ldots,d+3$ \be W_{\psi_1}(X_1,\ldots,X_k,Y_{k+1},\ldots,Y_{d+3}) = W_{\psi_2}(X_1,\ldots,X_k,Y'_{k+1},\ldots,Y'_{d+3}) \ee for all~$\{X_i\}_{i=1}^k \subset \Rcal$ such that at least two of the~$X_i$ are chronally related and for some~$\{Y_i\}_{i=k+1}^{d+3}$,~$\{Y'_i\}_{i=k+1}^{d+3} \not\subset \Rcal$. \end{defn} It follows easily from this definition that the equivalence relations~$\sim_\Rcal$ admit a nesting structure: if~$\ket{\psi_1} \sim_\Rcal \ket{\psi_2}$, then~$\ket{\psi_1} \sim_{\Rcal'} \ket{\psi_2}$ for any~$\Rcal' \subset \Rcal$. 
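This nesting structure can be illustrated in a finite toy model. In the sketch below (plain Python with invented data, not the field-theoretic construction), each ``state'' is encoded by a fingerprint recording which configurations of boundary points are singular, and the bookkeeping of Definition~\ref{defn:causalequivR} is simplified to requiring at least one point of a configuration to lie in the region:

```python
# Toy illustration only: model the map W_psi of a "state" by a fingerprint
# mapping point configurations to a singular/non-singular flag.  Two states
# are equivalent on a region when their fingerprints, restricted to
# configurations meeting that region, agree.  All data here is invented.

def restrict(fingerprint, region):
    """Keep only configurations with at least one point inside `region`."""
    return {pts: s for pts, s in fingerprint.items()
            if any(p in region for p in pts)}

def equivalent_on(f1, f2, region):
    return restrict(f1, region) == restrict(f2, region)

# Two "states" that agree on the subregion {0, 1} but differ near point 2.
psi1 = {(0, 1): True, (1, 2): False, (2, 3): True}
psi2 = {(0, 1): True, (1, 2): False, (2, 3): False}

R  = {0, 1, 2, 3}   # "entire boundary"
R1 = {0, 1}         # a subregion of R

print(equivalent_on(psi1, psi2, R1))  # True: equivalent on the subregion
print(equivalent_on(psi1, psi2, R))   # False: full equivalence is stronger
```

Since restricting to a smaller region only discards fingerprint entries, agreement on~$\Rcal$ implies agreement on any~$\Rcal' \subset \Rcal$, which is the nesting property; the example shows that the converse fails.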
In particular, if~$\Rcal$ is the entire boundary spacetime~$\partial M$, then~$\sim_{\partial M}$ is just the equivalence relation~$\sim$ defined in the previous section. Thus if~$\ket{\psi_1} \sim \ket{\psi_2}$, then~$\ket{\psi_1} \sim_\Rcal \ket{\psi_2}$ for any region~$\Rcal$. Now, let us define \textit{causal states on~$\Rcal$}, denoted as~$\widetilde{\ket{\psi}}_\Rcal$, as the equivalence classes of~$\sim_\Rcal$. The nesting structure of the equivalence relations~$\sim_\Rcal$ implies the following property: given a causal state~$\widetilde{\ket{\psi}}_\Rcal$ and a subregion~$\Rcal' \subset \Rcal$, we can obtain the causal state~$\widetilde{\ket{\psi}}_{\Rcal'}$ by taking the equivalence class of~$\widetilde{\ket{\psi}}_\Rcal$ under~$\sim_{\Rcal'}$; however, given a causal state~$\widetilde{\ket{\psi}}_{\Rcal'}$, it is not sensible to take an equivalence class under~$\sim_\Rcal$ to reconstruct~$\widetilde{\ket{\psi}}_\Rcal$. Thus, by going from~$\widetilde{\ket{\psi}}_\Rcal$ to~$\widetilde{\ket{\psi}}_{\Rcal'}$, information about the state is lost; this is the precise interpretation of the coarse-graining advertised above. If~$\Rcal' \subset \Rcal$, then~$\widetilde{\ket{\psi}}_{\Rcal'}$ is more coarse-grained than~$\widetilde{\ket{\psi}}_\Rcal$. \subsection{A Causal Hilbert Space and Causal Density Matrix} \label{subsec:causalspace} So far, the equivalence relations~$\sim_\Rcal$ serve a useful formal role to identify precisely what boundary information is sufficient to reconstruct a bulk causal geometry corresponding to the causal wedge of~$\Rcal$. However, it is natural to wonder whether there is some additional structure associated with this equivalence relation; for instance, does it induce a quotient space structure of the Hilbert space~$\Hcal$? To investigate this question, consider~$\Hcal$ as a vector space (that is, let us temporarily ignore the inner product structure on~$\Hcal$). 
Recall that an equivalence relation induces a quotient of~$\Hcal$ if and only if it is a \textit{congruence} on~$\Hcal$; that is, if it is compatible with the vector space structure of~$\Hcal$. In other words,~$\sim_\Rcal$ will induce a quotient space of~$\Hcal$ if and only if for any~$\ket{\psi}$,~$\ket{\psi'}$,~$\ket{\phi}$,~$\ket{\phi'} \in \Hcal$, \be \label{eq:congruence} \mbox{If } \ket{\psi} \sim_\Rcal \ket{\psi'} \mbox{ and } \ket{\phi} \sim_\Rcal \ket{\phi'} \mbox{ then } \ket{\psi+\phi} \sim_\Rcal \ket{\psi'+\phi'}. \ee We argue that the above relation does indeed hold, although we do not offer a proof. First, note that we have \begin{subequations} \bea O_{\psi + \phi}(X_{i}) &= O_{\psi}(X_{i}) + O_{\phi}(X_{i}) + \me{\psi}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\phi} + \me{\phi}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\psi}, \\ O_{\psi' + \phi'}(X_{i}) &= O_{\psi'}(X_{i}) + O_{\phi'}(X_{i}) + \me{\psi'}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\phi'} + \me{\phi'}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\psi'}. \eea \end{subequations} Clearly the singularity structures of the first two terms in each line above must agree within~$\Rcal$ (in the sense of Definition~\ref{defn:causalequivR}), since~$\ket{\psi} \sim_\Rcal \ket{\psi'}$ and~$\ket{\phi} \sim_\Rcal \ket{\phi'}$. Thus~$\sim_\Rcal$ will be a congruence if the mixed matrix elements~$\me{\phi}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\psi}$ do not contain any singularities besides those present in~$O_\psi$ and~$O_\phi$. To gain some intuition for these cross terms from AdS/CFT, consider the case where~$\ket{\psi}$ and~$\ket{\phi}$ have semiclassical duals\footnote{We thank Gary Horowitz for providing us with the following argument.}, and denote the energies of these states by~$E_\psi$ and~$E_\phi$.
Because we are necessarily working in the large-$N$ limit, these energies will generically be of order~$N^2$; thus~$\ket{\psi}$ will be composed of a superposition of energy eigenstates within a small energy window~$(E_\psi - \Delta, E_\psi + \Delta)$, where~$\Delta$ is subleading in~$1/N$, and likewise for~$\ket{\phi}$. If~$E_\psi \neq E_\phi$, these energy windows will not overlap, and thus~$\ket{\phi}$ and~$\ket{\psi}$ must be orthogonal:~$\inprod{\psi}{\phi} = 0$ (to leading order in~$1/N$). But since~$\Ocal(X_1)\cdots \Ocal(X_{d+3})$ consists of an order-unity number of copies of the local operator~$\Ocal(X)$, the energy of the state~$\Ocal(X_1)\cdots \Ocal(X_{d+3})\ket{\phi}$ must also be~$E_\phi$ to order~$N^2$, and thus by the same logic, the inner product~$\me{\phi}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\psi} = 0$ as well. Thus in this case, the matrix elements do not add any additional singularities to the correlator, as desired. In the case that~$\ket{\psi}$ and~$\ket{\phi}$ have the same energy or do not have semiclassical duals, or more generally if we work outside the context of AdS/CFT, we have less intuition for the behavior of the cross terms~$\me{\phi}{\Ocal(X_1)\cdots \Ocal(X_{d+3})}{\psi}$. Nevertheless, it seems reasonable to expect that the singularity structure of these matrix elements should arise only from the singularity structures of~$O_\psi$ and~$O_\phi$. If this expectation is borne out, these cross terms cannot add any new singularities, and therefore the singularity structures of~$O_{\psi + \phi}$ and~$O_{\psi' + \phi'}$ must match within~$\Rcal$\footnote{The roughness of this argument means we are necessarily ignoring the possibility of fine-tuned cases where the singularities cancel.}. This again supports the claim that~$\sim_\Rcal$ is a congruence on~$\Hcal$.
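The orthogonality step in the argument above can be made slightly more explicit. The following expansion in energy eigenstates is schematic; the window width~$\Delta$ and the subleading estimates are heuristic.

```latex
% Schematic expansion of the two states over their (non-overlapping)
% energy windows:
\be
\ket{\psi} = \sum_{|E - E_\psi| < \Delta} c_E \ket{E}, \qquad
\ket{\phi} = \sum_{|E' - E_\phi| < \Delta} d_{E'} \ket{E'},
\ee
% so orthonormality of the energy eigenstates gives, whenever
% |E_\psi - E_\phi| > 2\Delta,
\be
\inprod{\psi}{\phi} = \sum_{E,\,E'} \bar{c}_E\, d_{E'} \inprod{E}{E'} = 0.
\ee
% An order-unity number of insertions of \Ocal shifts the energy of the
% state only at subleading order in 1/N, so the same cancellation yields
% (to leading order in 1/N)
\be
\me{\phi}{\Ocal(X_1)\cdots\Ocal(X_{d+3})}{\psi} = 0.
\ee
```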
If the rough arguments outlined above are correct, then the equivalence relation~$\sim_\Rcal$ defines a quotient vector space~$\widetilde{\Hcal}_{\Rcal} \equiv \Hcal/\!\sim_\Rcal$, which we naturally dub the \textit{causal vector space of~$\Rcal$} (or just the \textit{causal vector space} when~$\Rcal$ is the entire QFT spacetime). Thus the causal states~$\widetilde{\ket{\psi}}_\Rcal$ are not just equivalence classes, but can be thought of as elements of the vector space~$\widetilde{\Hcal}_\Rcal$. Due to the nesting structure of the~$\sim_\Rcal$, this family of vector spaces obeys the property that for any~$\Rcal' \subset \Rcal$,~$\widetilde{\Hcal}_\Rcal$ can be quotiented by~$\sim_{\Rcal'}$ to obtain~$\widetilde{\Hcal}_{\Rcal'}$ (but not the other way around). Again, this is an instance of coarse-graining under exclusion. Moreover, any operator~$A \in \overline{\Lcal(\Hcal)}$ that obeys the property \be A\ket{\psi} \sim_\Rcal A\ket{\psi'} \mbox{ for all } \ket{\psi}, \ket{\psi'} \mbox{ such that } \ket{\psi} \sim_\Rcal \ket{\psi'} \ee gives rise to an operator~$\widetilde{A}_\Rcal$ on~$\widetilde{\Hcal}_\Rcal$. Finally, note that we can augment the vector space~$\widetilde{\Hcal}_\Rcal$ with an inner product to obtain a causal inner product space\footnote{This is due to the fact that an inner product can be introduced on any vector space (over~$\mathbb{R}$ or~$\cnum$) via the introduction of a Hamel basis. However, an inner product on~$\widetilde{\Hcal}_\Rcal$ is not inherited in any natural way from the inner product on~$\Hcal$, since given any~$\ket{\psi}$,~$\ket{\psi'}$,~$\ket{\phi}$,~$\ket{\phi'}$ in~$\Hcal$ such that~$\ket{\psi} \sim_\Rcal \ket{\psi'}$ and~$\ket{\phi} \sim_\Rcal \ket{\phi'}$, it need not be the case that~$\inprod{\psi}{\phi} = \inprod{\psi'}{\phi'}$.}. 
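If the congruence property indeed holds, then, as for any congruence on a vector space, $\sim_\Rcal$ amounts to quotienting~$\Hcal$ by the subspace of vectors equivalent to the zero vector, and class representatives may be chosen by projecting that subspace out. The finite-dimensional sketch below models this with an invented linear ``coarse-graining'' map~$W$ standing in for the (nonlinear) map~$W_\psi$, so that two vectors are equivalent exactly when their images under~$W$ agree:

```python
# Finite-dimensional toy of the quotient construction: with a linear
# coarse-graining W, psi ~ psi' exactly when psi - psi' lies in N = ker(W),
# and the quotient space is H / N.  The projector onto the orthogonal
# complement of N picks a canonical representative of each class.
import numpy as np

W = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])        # toy coarse-graining: R^3 -> R^2

# Projector onto ker(W)^perp, i.e. onto the row space of W.
P = W.T @ np.linalg.pinv(W.T)          # col space of W.T = row space of W

def representative(v):
    """Canonical representative of the class of v in H / ker(W)."""
    return P @ v

psi  = np.array([1.0, 2.0, 0.0])
phi  = np.array([0.0, 1.0, 3.0])
null = np.array([1.0, 1.0, -1.0])      # W @ null = 0, so psi ~ psi + null

print(np.allclose(representative(psi), representative(psi + null)))  # True

# Congruence: adding equivalent pairs lands in a single equivalence class.
print(np.allclose(representative(psi + phi),
                  representative((psi + null) + (phi + 2 * null))))  # True
```

In this linear toy, the amount of coarse-graining is simply $\dim \ker(W)$, which parallels the dimension count~$D_{\Rcal,\psi}$ introduced later in the text.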
We expect that the resulting inner product space will be complete (since~$\Hcal$ was) and is therefore a Hilbert space; however, subtleties arising from infinite sums may spoil the completeness property. If this is the case, $\widetilde{\Hcal}_\Rcal$ is nevertheless a normed vector space. Assuming the above considerations are correct (and that the resulting space is complete), we conclude that~$\widetilde{\Hcal}_\Rcal$ is a \textit{causal Hilbert space}, and we will term it thus throughout. This inner product structure allows us to define covectors, and thereby a \textit{causal density matrix on~$\Rcal$} as \be \label{eq:CDM} \tilde{\rho}_\Rcal \equiv \widetilde{\ket{\psi}}_\Rcal \widetilde{\bra{\psi}}_\Rcal. \ee This causal density matrix carries the same information as the causal state~$\widetilde{\ket{\psi}}_\Rcal$. As an aside, we should note that there is another sensible notion of what could be meant by a ``causal density matrix''. So far, we have restricted ourselves to pure states~$\ket{\psi}$ because they are elements of the Hilbert space~$\Hcal$, and therefore the causal state~$\widetilde{\ket{\psi}}_\Rcal$ is an element of the causal Hilbert space~$\widetilde{\Hcal}_\Rcal$. However, we may also be interested in mixed states, which are defined instead by a density matrix~$\rho$, which is a linear operator on~$\Hcal$. The map~$W$ defined in~\eqref{eq:Wmap} can then be defined using~$\rho$ by replacing the expectation value with~$\left\langle A \right\rangle_\rho \equiv \Tr(\rho A)$ for any~$A \in \overline{\Lcal(\Hcal)}$. This again defines an equivalence relation~$\sim_\Rcal$ by Definition~\ref{defn:causalequivR}, which may then be used to define an alternative causal density matrix~$\tilde{\rho}'_\Rcal$ as the equivalence class of~$\sim_\Rcal$ to which~$\rho$ belongs.
However, this definition is less appealing because unlike~$\widetilde{\ket{\psi}}_\Rcal$, it is not clear that the object~$\tilde{\rho}'_\Rcal$ can be thought of as an element of some structured ``causal'' vector space like~$\widetilde{\Hcal}_\Rcal$, and unlike~$\tilde{\rho}_\Rcal$, it is not an operator on the quotient space~$\widetilde{\Hcal}_\Rcal$. For these reasons, we do not make use of~$\tilde{\rho}'_\Rcal$ here. \section{Causal Quantum Error Correction} \label{sec:QEC} The definition of the equivalence relation~$\sim_\Rcal$ given above consisted of a ``partial'' restriction of the map~$W_{\psi}$ to the region~$\Rcal$ in the sense that only some of the points~$x_i$ were required to be in~$\Rcal$. From the perspective of the lightcone cuts, this condition is necessary to conserve momentum at all bulk points whose cuts intersect~$\Rcal$: the set of boundary points that render~$O_{\psi}(X_i)$ singular must span some minimum ``angle'', as shown in Figure~\ref{fig:bndryangle}. Defining the reduced causal state~$\widetilde{\ket{\psi}}_\Rcal$ in this way ensures that it captures the lightcone cuts that intersect~$\Rcal$, as we may use points outside of~$\Rcal$ to ensure energy-momentum conservation for obtaining them. We may, however, consider a stricter approach wherein we rely on nothing but data within $\Rcal$ itself, as is more parallel to notions of subregion/subregion duality in AdS/CFT. Let us therefore define a different causal state which captures this intuition. Consider restricting the map~$W_{\psi}$ \textit{entirely} to~$\Rcal$; that is, constrain \textit{all} the~$X_i$ to be contained in~$\Rcal$. 
Roughly, this constraint allows us to recover the open subsets of only those lightcone cuts that intersect $\Rcal$ on sufficiently large sets, i.e.~on sets that are sufficiently large to ensure energy-momentum conservation in the bulk\footnote{This restriction applies at least via the correlator construction; other ways of obtaining the lightcone cuts, should they exist, may circumvent this issue.}, and thus restricts the subset of the bulk that can be reconstructed. Increasing the size of~$\Rcal$ increases the maximum ``angle'' that can be spanned by points contained within~$\Rcal$, and thus allows the recovery of the conformal metric of progressively larger subsets of the bulk. \begin{figure}[t] \centering \includegraphics[width=3.5cm,page=7]{Figures-pics.pdf} \caption{In order to reconstruct the conformal metric at some bulk point~$p$ from singularities of the~$(d+3)$-point correlator~$O_{\psi}(X_i)$, the points~$X_i$ need to be sufficiently spread out around the boundary to ensure momentum conservation at~$p$.} \label{fig:bndryangle} \end{figure} \begin{figure}[t] \centering \subfigure[]{\includegraphics[width=4.5cm,page=8]{Figures-pics.pdf} \label{subfig:birdview} } \hspace{2cm} \subfigure[]{\includegraphics[width=4cm,page=9]{Figures-pics.pdf} \label{subfig:timelikestrips} } \caption{The boundary is divided into three timelike strips: $\Rcal_{1}, \Rcal_{2}, \Rcal_{3}$. If all the~$X_i$ are restricted to any one of the boundary subregions~$\Rcal_{i}$, the conformal metric cannot be reconstructed at all bulk points in causal contact with~$\Rcal_{i}$, as the corresponding Landau diagram does not conserve energy-momentum.
However, increasing the size of~$\Rcal_{i}$ to include both $\Rcal_{1}$ and $\Rcal_{2}$ allows the reconstruction of the conformal metric at points ``deeper'' in the bulk.} \label{fig:increaseR} \end{figure} This feature -- wherein bulk data that is not contained in a boundary region~$\Rcal'$ can instead be recovered from a sufficiently larger region~$\Rcal \supset \Rcal'$ -- is strongly reminiscent of quantum error correction and quantum secret sharing~\cite{AlmDon15}. We can make this connection clearer by formulating the observations above more precisely, starting with the following definition: \begin{defn} \label{defn:strongcausalequivR} Let~$\Rcal$ be some subregion of the boundary spacetime, and let~$\ket{\psi_1}$ and~$\ket{\psi_2}$ be two states with associated maps~$W_{\psi_1}$,~$W_{\psi_2}$. Then we say that~$\ket{\psi_1}$ and~$\ket{\psi_2}$ are \textit{strongly causally equivalent on~$\Rcal$}, written as~$\ket{\psi_1} \hat{\sim}_\Rcal \ket{\psi_2}$, when \be W_{\psi_1}(X_1,\ldots,X_{d+3}) = W_{\psi_2}(X_1,\ldots,X_{d+3}) \ee for all~$\{X_i\}_{i=1}^{d+3} \subset \Rcal$. \end{defn} This equivalence is just a much stronger version of that in Definition~\ref{defn:causalequivR}; we may use it analogously to define the \textit{strongly reduced causal state on~$\Rcal$} as the equivalence class~$\widehat{\widetilde{\ket{\psi}}}_\Rcal$ of~$\ket{\psi}$ under~$\hat{\sim}_\Rcal$. Importantly, if~$\ket{\psi}$ has a semiclassical bulk dual, then~$\widehat{\widetilde{\ket{\psi}}}_\Rcal$ only contains sufficient information to reconstruct the conformal metric at those bulk points~$p$ that are null-separated from~$(d+3)$ points in~$\Rcal$ by an allowed Landau diagram. More precisely, we define what is meant by a sufficient region~$\Rcal$ as follows: \begin{defn} Consider a state~$\ket{\psi}$ with a well-defined semiclassical holographic dual causal structure, and consider a point~$p$ in this dual geometry.
An open connected boundary region~$\Rcal$ is \textit{sufficient} for reconstructing the conformal metric at~$p$ if it contains at least~$(d+3)$ points null-separated from~$p$ by an allowed Landau diagram. Otherwise,~$\Rcal$ is \textit{insufficient} for reconstructing the conformal metric at~$p$. \end{defn} Note that in principle a set of~$(d+3)$ points null-separated from~$p$ (by an allowed bulk Landau diagram) is only sufficient to define the point~$p$, not to reconstruct the conformal metric at~$p$. However, because~$\Rcal$ is required to be open, the existence of a set of~$(d+3)$ points null-separated from~$p$ in fact implies that~$\Rcal$ contains enough points to reconstruct open intervals of the lightcone cut of~$p$ and the points in the neighborhood of~$p$. These in turn are sufficient to reconstruct the conformal metric at~$p$, justifying the above definition. The relationship to quantum error correction arises when we consider constructing sufficient regions from a union of insufficient ones. Let $p$ be a bulk point as before and consider a region~$\Rcal$ sufficient to reconstruct the conformal metric at~$p$. We may divide~$\Rcal$ into a number of disjoint regions~$\Rcal_i$, each of which is insufficient for reconstructing the conformal metric at~$p$; for simplicity, let us consider a case where~$\Rcal$ is the full boundary spacetime, which is divided into three strips, each of which is insufficient, but such that the union of any two is sufficient, as shown in Figure~\ref{fig:increaseR}. Then by the above definitions, access to the strongly reduced causal state on the union~$\Rcal_{ij} \equiv \Rcal_i \cup \Rcal_j$ of any two regions is sufficient to fix the conformal metric at~$p$. 
We thus find that bulk reconstruction via bulk point singularities has the following two properties: \begin{enumerate} \item Quantum Error Correction: the conformal metric at~$p$ is protected against erasure of any one of the~$\Rcal_i$; \item Quantum Secret Sharing: any one of the~$\Rcal_i$ does not contain knowledge of the conformal metric at~$p$. \end{enumerate} Here by ``erasure of~$\Rcal_i$'', we mean restriction of the map~$W_{\psi}$ to the complement of~$\Rcal_i$ in the sense of Definition~\ref{defn:strongcausalequivR}. The comparison with the usual notion of quantum error correction is striking. Recall that in the typical density-matrix based quantum error correction, the Hilbert space can be factorized via three causal diamonds~$\Rcal_i$ as~$\Hcal = \Hcal_1 \otimes \Hcal_2 \otimes \Hcal_3$, and a state~$\rho$ gives rise to reduced density matrices~$\rho_i$ on each of these three diamonds. ``Erasure'' in this case refers to the loss of one of the three diamonds (say~$\Rcal_3$), and quantum error correction is manifested in the fact that a bulk observable can be reconstructed from the reduced density matrix of the remaining regions~($\rho_{12}$). Quantum secret sharing refers to the fact that the reduced density matrix of a single diamond is insufficient to reconstruct this bulk observable. The discussion is almost unchanged in the present case, except that instead of reduced density matrices, we use the strongly reduced causal states. Then quantum error correction refers to the fact that the strongly reduced causal state of the union of two of the three strips is sufficient to reconstruct the conformal metric at~$p$, and quantum secret sharing is manifested in the fact that the strongly reduced causal state of only one strip is not. \section{The Dual to the Causal Wedge} \label{sec:causalwedge} Let us now address the bulk interpretation of the (reduced) causal state~$\widetilde{\ket{\psi}}_\Rcal$.
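The pair of properties above is exactly that of a $(2,3)$ threshold scheme. As a standalone analogy (this is classical secret sharing, not the holographic construction itself), the following minimal Shamir sketch shows how any two shares determine a secret while any single share reveals nothing:

```python
# (2,3) Shamir secret sharing: encode the secret as the constant term of a
# random linear polynomial mod a prime; any two evaluation points fix the
# line (error correction), while one point is consistent with every secret
# (secret sharing).
import random

P = 2**61 - 1  # a Mersenne prime modulus

def make_shares(secret, n=3):
    a = random.randrange(P)                      # random slope
    return [(i, (secret + a * i) % P) for i in range(1, n + 1)]

def recover(share1, share2):
    """Lagrange interpolation at x = 0 from two points of a line mod P."""
    (x1, y1), (x2, y2) = share1, share2
    inv = pow(x2 - x1, -1, P)                    # modular inverse (Py >= 3.8)
    return (y1 * x2 - y2 * x1) % P * inv % P

secret = 424242
s1, s2, s3 = make_shares(secret)
print(recover(s1, s2) == secret)   # True
print(recover(s2, s3) == secret)   # True: any pair suffices
# A single share (x, y) alone is consistent with every candidate secret s,
# since for each s there is a unique slope a with y = s + a*x (mod P).
```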
By construction,~$\widetilde{\ket{\psi}}_\Rcal$ contains the information both necessary and sufficient to reconstruct those portions of the lightcone cuts of~$\ket{\psi}$ (when they exist) that intersect~$\Rcal$. These cuts correspond precisely to those bulk points in~$I^+[\Rcal] \cap I^-[\Rcal]$, as shown in Figure~\ref{fig:causalwedgepoints}. In particular, when~$\Rcal$ is a causal diamond, this region is just the causal wedge~$C_W[\Rcal]$ of~$\Rcal$. Since open subsets of the cuts corresponding to points in~$C_W[\Rcal]$ can be used to reconstruct the conformal metric in~$C_W[\Rcal]$, we find that the reduced causal state~$\widetilde{\ket{\psi}}_\Rcal$ contains precisely the field theoretic information necessary and sufficient to reconstruct the conformal geometry of~$C_W[\Rcal]$! \begin{figure}[t] \centering \includegraphics[width=3.3cm,page=10]{Figures-pics.pdf} \caption{The portion of the lightcone cuts that intersect a region~$\Rcal$ correspond to those bulk points $p$ in~$J^+[\Rcal] \cap J^-[\Rcal]$; when~$\Rcal$ is a causal diamond, this region is precisely the causal wedge~$C_{W}[\Rcal]$.} \label{fig:causalwedgepoints} \end{figure} It is worth reminding the reader that part of our purpose in this paper is to propose a shift in the general approach to the causal wedge; in particular, because the causal wedge is defined in terms of the causal structure of the bulk geometry, we have no right to expect that anything beyond its causal structure should have a ``nice'' field theoretic dual. For the causal structure itself, this dual is precisely~$\widetilde{\ket{\psi}}_\Rcal$: \textit{the reduced causal state is the field-theoretic dual of (the causal structure of) the causal wedge}. Our position is motivated in part by the general arguments of~\cite{EngWal17}, which suggest that the area of the causal surface has no simple information-theoretic interpretation.
This is in contrast with the story regarding the HRT surface and the entanglement wedge; there the reduced density matrix~$\rho_\Rcal$ associated to some causal diamond~$\Rcal$ is understood to be the field-theoretic dual to the entanglement wedge, and the area of the HRT surface is dual to the von Neumann entropy of~$\rho_\Rcal$. Nevertheless, it is possible to draw some parallels between the reduced density matrix~$\rho_\Rcal$ and our causal state~$\widetilde{\ket{\psi}}_\Rcal$. The idea is to focus on the notion of coarse-graining: the reduced density matrix~$\rho_\Rcal$ is obtained from a state~$\rho$ by tracing out the degrees of freedom in the complement~$\overline{\Rcal}$ of~$\Rcal$: \be \rho_\Rcal = \Tr_{\overline{\Rcal}} \rho. \ee The von Neumann entropy of~$\rho_\Rcal$, also called the entanglement entropy of~$\Rcal$, is a measure of the coarse-graining performed in going from~$\rho$ to~$\rho_\Rcal$, and corresponds to the information lost in restricting the state to the region~$\Rcal$. Our causal state~$\widetilde{\ket{\psi}}$ is similarly obtained from~$\ket{\psi}$ via a coarse-graining over any information not contained in the map~$W_{\psi}$. Is there a number associated to this coarse-graining that quantifies the information lost in going from~$\ket{\psi}$ to~$\widetilde{\ket{\psi}}_\Rcal$? Likewise, is there a number that quantifies the information lost when going from~$\widetilde{\ket{\psi}}_\Rcal$ to~$\widetilde{\ket{\psi}}_{\Rcal'}$ for $\Rcal'\subset \Rcal$? We suggest answers to these questions below. \subsubsection*{A Measure of Coarse-Graining} Roughly speaking, the quantification of the coarse-graining performed in going from a state~$\ket{\psi}$ to a causal state~$\widetilde{\ket{\psi}}_\Rcal$ requires a measure of the number of unique physical states in the Hilbert space~$\Hcal$ that map to the same causal state~$\widetilde{\ket{\psi}}_\Rcal$.
A na\"ive such measure would be the cardinality of the causal state~$\widetilde{\ket{\psi}}_\Rcal$, thought of as a coset of the equivalence relation~$\sim_\Rcal$. However, a more natural measure can be obtained by exploiting the structure of the equivalence relation~$\sim_\Rcal$ on the Hilbert space~$\Hcal$. To that end, note that the property~\eqref{eq:congruence} from Section~\ref{subsec:causalspace} implies that for a given $\ket{\psi}$, the set of vectors \be \{\ket{\phi} \in \Hcal : \ket{\phi} \sim_\Rcal \ket{\psi} \} \ee is closed under addition and scalar multiplication. By augmenting the above set with the zero vector~$\ket{0}$\footnote{We emphasize that this is the zero vector, \textit{not} the vacuum state.}, we obtain a vector space: \be \Hcal_{\Rcal,\psi} \equiv \{\ket{0}\} \cup \{\ket{\phi} \in \Hcal : \ket{\phi} \sim_\Rcal \ket{\psi} \}. \ee The dimension~$D_{\Rcal,\psi} \equiv \mathrm{dim}(\Hcal_{\Rcal,\psi})$ is a well-defined object, and captures precisely the notion of how much coarse-graining has been performed in obtaining the causal state. However, note that it is constructed from the state in a radically different way than the entanglement entropy is; this difference is consistent with our claim that the causal and entanglement wedges are fundamentally different. Of course, because~$\Hcal$ is infinite-dimensional, it is possible for~$D_{\Rcal,\psi}$ to be infinite as well. In such a case, we imagine rendering~$\Hcal$ finite-dimensional via the usual introduction of some kind of regulator. \subsubsection*{Conformal Invariants in the Causal Wedge} Which \textit{bulk} objects are natural candidates for obtaining $\mathbb{C}$-numbers from the (reduced) causal state? Bulk considerations automatically lead us to consider different objects that interestingly turn out to be finite and independent of renormalization scheme.
Consider a state~$\ket{\psi}$ known (by the procedure outlined in Section~\ref{subsec:cutreview}) to have a semiclassical holographic dual, and consider a boundary causal diamond~$\Rcal$. We may consider using the causal state~$\widetilde{\ket{\psi}}_\Rcal$ to define numbers that encode properties of~$\widetilde{\ket{\psi}}_\Rcal$, regardless of the coarse-graining performed in obtaining this state from~$\ket{\psi}$. Conceivably, such objects should have bulk duals constructed from the conformal geometry of the causal wedge of~$\Rcal$. We may therefore consider bulk objects of the form \be \label{eq:Qdef} Q = \int_B \bm{f}, \ee where~$B$ is some bulk surface or region associated to~$\Rcal$ and~$\bm{f}$ is a form (of appropriate dimension) that is left unchanged by Weyl transformations of the bulk geometry. Note that the causal holographic information, which takes the same form as the integral above with~$B$ the causal surface and~$\bm{f}$ taken to be the natural volume form on~$B$, is \textit{not} Weyl invariant, since the natural volume form on~$B$ is not. In general it is unclear whether objects of the form~\eqref{eq:Qdef} have a clear field theoretic interpretation. However, in the special case where~$B$ is taken to be the causal wedge of~$\Rcal$, the integral above has an interpretation as an integral over the space of all cuts that intersect~$\Rcal$ (since each point in~$C_{W}[\Rcal]$ corresponds to a pair of cuts intersecting~$\Rcal$). Even in this special case, however, it is not clear if there is a natural choice of~$\bm{f}$: in even bulk dimensions, it is known that there are many allowed choices of local~$\bm{f}$, but no local Weyl-invariant~$\bm{f}$ exists in odd bulk dimensions~\cite{FefGra85,BaiEas94,FefGra07,GraHir08} (indeed, this is the reason there is no conformal anomaly in odd-dimensional CFTs). 
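The statement that the natural volume form fails to be Weyl invariant follows from simple conformal weight counting; the following standard sketch, in~$d+1$ bulk dimensions, also illustrates the kind of local~$\bm{f}$ available in even dimensions.

```latex
% Under a bulk Weyl rescaling g_{ab} -> \Omega^2 g_{ab}, the volume element
% and the fully contracted Weyl tensor scale with definite weights,
\be
\sqrt{-g} \;\to\; \Omega^{\,d+1}\sqrt{-g}, \qquad
C^{abcd}C_{abcd} \;\to\; \Omega^{-4}\, C^{abcd}C_{abcd},
\ee
% so, for example, in a four-dimensional bulk (d+1 = 4) the local density
\be
\bm{f} = C^{abcd}C_{abcd}\,\sqrt{-g}\;\mathrm{d}^{4}x
\ee
% is Weyl invariant, while the bare volume form alone (as in the causal
% holographic information) is not.  No analogous local invariant exists in
% odd bulk dimensions, as noted in the text.
```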
It may be the case that the objects~\eqref{eq:Qdef} are just uninteresting, or that~$\bm{f}$ needs to be constructed in some unusual way in odd dimensions. \subsubsection*{Causal RG Flow and Bulk Depth} We have so far been using the term ``coarse-graining'' in a broad sense to refer to the process of decreasing our knowledge about a state. However, it is worth asking if there is any relationship between causal states and coarse-graining in the strict sense of renormalization group (RG) flow: that is, can the causal density matrix be related to a flow from the UV to the IR? It has been recently shown by one of us in~\cite{Eng16} that bulk depth can be defined covariantly via the lightcone cuts: a point~$p$ was defined as deeper than a point~$q$ relative to a boundary subregion~$\Rcal$ if the lightcone cuts of~$p$ ``sandwich'' the lightcone cuts of~$q$, as shown in Figure~\ref{fig:bulkdepth}. Motion into the bulk was shown to correspond to both larger distances and longer time scales on the boundary, providing a precise way of relating RG flow to bulk depth. Since this approach towards bulk depth uses the lightcone cuts, our construction of the causal density matrix seems naturally related to it. Is RG flow defined by the lightcone cuts an RG flow of the causal density matrix? \begin{figure}[t] \centering \includegraphics[width=7cm,page=11]{Figures-pics.pdf} \caption{Adapted from~\cite{Eng16}. The lightcone cuts of two points~$p$ and~$q$ give a well-defined notion of relative bulk depth:~$p$ is deeper into the bulk than~$q$ relative to some region~$\Rcal$ if and only if the lightcone cuts of~$p$ in~$\Rcal$ sandwich those of~$q$.} \label{fig:bulkdepth} \end{figure} The answer is yes. It was shown in~\cite{Eng16} that defining bulk depth using lightcone cuts is equivalent to defining bulk depth using the causal wedge:~$p$ is deeper than~$q$ if every causal wedge containing~$p$ also contains~$q$.
This could have already been seen as a foreshadowing that the structure of lightcone cuts would give the dual to the causal wedge. An equivalent statement, using our causal state formulation, is that $p$ is deeper than $q$ if and only if every reduced causal state $\widetilde{\ket{\psi}}_\Rcal$ that can be used to reconstruct the conformal metric at~$p$ contains sufficient information to reconstruct the conformal metric at~$q$. Thus RG flow in the sense of a flow ``deeper into the bulk'' corresponds directly to a flow among reduced causal states over progressively larger boundary causal diamonds. \section{General Holography} \label{sec:hol} Since quantum gravity is expected to be holographic in broad generality, it is natural to ask which, if any, of our results are also valid beyond strong holography. In addressing this question, it is useful to use some kind of explicit general holographic framework as a guide. A convenient option is Bousso's covariant formulation of general holography~\cite{Bou99b,Bou99c,Bou99d,Bou02}. Let us therefore examine how our results generalize to Bousso's holographic framework, bearing in mind that the lessons we learn may apply even to dualities not described by it. The essence of Bousso's formalism~\cite{Bou99c} is the identification of distinguished hypersurfaces in a spacetime, termed \textit{preferred holographic screens}, that are particularly well-suited for general holography. These preferred holographic screens are defined as codimension-one hypersurfaces that admit a foliation into spacelike codimension-two surfaces~$\sigma(r)$ (where~$r$ indexes the foliation) called leaves, such that at least one of the two null expansions at each~$\sigma(r)$ vanishes: \be \theta(\sigma(r))=0. \ee By the Bousso bound~\cite{Bou99b}, the area (or more generally, the generalized entropy~\cite{BouFis15}) of distinguished cross-sections of these hypersurfaces bounds the entropy of their interior. 
This feature makes preferred holographic screens distinguished hypersurfaces for general holography. It is especially appealing that the AdS boundary is itself a very special type of preferred holographic screen (a so-called optimal screen). Generically, however, preferred holographic screens are not necessarily timelike, and they need not be restricted to the asymptotic boundary. The class of preferred holographic screens has since been broken down into two categories: past and future holographic screens~\cite{BouEng15a, BouEng15b}, which we shall refer to as ``screens'' for short. A past screen is characterized by the property that the two future-directed null expansions~$\theta_k(\sigma(r))$ and~$\theta_l(\sigma(r))$ of its leaves~$\sigma(r)$ are zero and strictly positive, respectively: \begin{subequations} \bea & \theta_k(\sigma(r))=0, \\ &\theta_l(\sigma(r))>0. \eea \end{subequations} A future screen is defined analogously, with~$\theta_l(\sigma(r))$ strictly negative. Now, consider a screen in a semiclassical spacetime that arises from some underlying theory of quantum gravity. It has been suggested that the interior of this screen, which must itself be described by perturbative quantum gravity, is holographically dual to some theory that can be embedded on the screen~\cite{NomSal16, NomSal16b}. What type of theory is this? Because screens have distinguished spatial slices, it is immediately clear that a theory living on a screen and dual to its interior will not generally be relativistic. Moreover, the leaves of screens obey an area law~\cite{BouEng15a, BouEng15b} (or more generally, the Generalized Second Law~\cite{BouEng15c}), implying that the entropy of the dual theory living on the screen is not constant. This in turn would appear to require violations of unitarity; see~\cite{Moo17} for a recent discussion of these issues. 
When the screen in question is null, there are subtleties: the leaves may have constant area, and the leaves are not distinguished slices. It is possible to define a theory on the screen using light-front quantization, but this approach is nonrelativistic. Finally, while there are examples of holographic screens of timelike signature, a generic holographic screen has mixed signature, alternating (in a way constrained by~\cite{BouEng15b}) between spacelike, timelike, and null. These considerations all indicate that general holography as formulated on holographic screens does not obey our definition of strong holography\footnote{The \textit{only} exceptions are conformal boundaries and null surfaces with vanishing expansion; any spatial cross section of these is a marginally trapped surface, allowing for the possibility that strong holography is obeyed.}. At first this might seem an insurmountable hurdle to generalizing any of the work in this paper to holographic screens. However, the situation is not so dire: recall that the conditions of weak holography require that (\ref{H1}) the \textit{bulk} quantum fields are well-approximated by local, relativistic quantum field theory;~(\ref{H2prime}) the boundary theory is well-defined on a timelike or null geometry; and~(\ref{H3prime}) \textit{some} version of the extrapolate dictionary holds (correlators may map to more exotic objects, for example). It then follows that the lightcone singularities of the bulk correlator must be dual to a singularity in some observable in the dual theory. That is, if~$\ev{\phi(x_{1})\cdots \phi(x_{d+3})}$ limits to~$O(X_{1},\ldots, X_{d+3})$ when the $x_{i}\rightarrow X_{i}$, then our formalism still applies as long as~$O$ represents a quantity that is computable in the boundary theory. 
In particular, if such an~$O(X_1,\ldots,X_{d+3})$ exists, then the lightcone cuts would be obtained by demanding that~$O(X_{1},\ldots, X_{d+3})$ be singular as two of the points $X_{i}$ are varied, thus tracing out a cut. By the propositions of Section~\ref{subsec:cutreview}, this cut will be a complete spacelike slice of any timelike or null connected past/future holographic screen. The procedure for obtaining algebraic equations for the conformal metric remains unaltered. Thus lightcone cuts of timelike or null holographic screens exist and have the requisite properties for the emergence of conformal geometry. Similarly, almost all of the other results of Section~\ref{sec:CDM} generalize. Since the boundary theory in question is either non-local, non-relativistic, non-unitary, or all of the above, it is not clear what the appropriate notion of a state is from the boundary perspective. However, under the assumption that there exists \textit{some} way of defining a boundary state, we may again identify any two states of the system under the equivalence relation~$\sim$ constructed from the object~$O$, thus allowing us to define causal states, reduced causal states, and causal density matrices\footnote{It is not clear in this case whether the structure of the original space of states of the boundary theory will be inherited by the space of causal states, so the existence of an object like the causal Hilbert space~$\widetilde{{\cal H}}$ is not guaranteed in general.}. Interestingly, the results of Section~\ref{sec:QEC} apply as well, suggesting that quantum gravity beyond AdS/CFT is generally quantum error correcting. 
We thus find that a well-defined theory living on a timelike or null space can have a semiclassical conformal geometry dual with an extrapolate dictionary as defined by Property~\ref{H3prime} if and only if there exists a quantity~$O(X_{1},\ldots, X_{d+3})$ such that the singularities of~$O$ give rise to a family of lightcone cuts, which in turn result in a system of algebraic equations with a nontrivial solution. In fact, when this is the case, this solution \textit{defines} the extrapolate dictionary. However, there remains a significant hurdle to applying this formalism to holographic screens in broad generality. In Section~\ref{subsec:cutreview}, we presented a generalization of the theorems in~\cite{EngHor16a} to include timelike and null boundaries that are not necessarily asymptotic boundaries (and when they are, they need not be asymptotically AdS). However, holographic screens are often spacelike, and generally have mixed signature. For instance, in the context of dS/CFT, the asymptotic timelike infinity of dS is an optimal screen that is spacelike. The proofs of Propositions~1-4 do not obviously generalize to such cases, but based on the fact that lightcone cuts may be defined in any spacetime, we expect that generalizations of these propositions should exist. If so, the procedure of obtaining the conformal geometry from singularities of~$O$ could be extended to arbitrary holographic screens. This would be an interesting and important question to address. \section{Discussion} \label{sec:disc} In this paper, we have defined causal states and causal density matrices, new constructs in local quantum field theory. As long as the strong holography conditions~\ref{H1}-\ref{H3} are obeyed, the causal density matrix simultaneously provides~\textit{(i)} a sufficient and necessary condition for the existence of a semiclassical holographic dual causal structure to a state, and~\textit{(ii)} an explicit procedure for obtaining the conformal geometry of this dual. 
The construction of the causal state involved coarse-graining a regular state over everything except the minimum information required to define a causal, (approximately) local holographic dual (when one exists). Under the strong holography conditions~\ref{H1}-\ref{H3}, we have argued that the space of causal states is a Hilbert space. Our definition allows us to define not just the causal states associated to the full boundary theory, but to also restrict to a subregion~$\Rcal$. We performed this restriction in two ways: a full restriction to~$\Rcal$ exhibits features typical of quantum error correction and quantum secret sharing, while a partial restriction to~$\Rcal$ yields the dual of the causal wedge. We emphasize that these statements are not limited to AdS/CFT, but rather constitute general results about the states of QFTs with semiclassical holographic duals. This, coupled with the holographic hypothesis, has the far-reaching implication that quantum error correction is a feature of semiclassical states of quantum gravity with a holographic extrapolate dictionary. Within the AdS/CFT correspondence, we have found that the reduced causal density matrix provides a natural answer to a longstanding question: what is the dual of the causal wedge? As recent arguments in~\cite{EngWal17} have shown that superficial parallels between the entanglement and causal wedges are misleading rather than instructive, we have proposed a new, alternative approach to the dual of the causal wedge. The crux of this approach is the observation that the causal wedge itself is defined purely in terms of the bulk conformal geometry, and therefore the primary innate property of the causal wedge is precisely its causal structure. The only natural field-theoretic dual is thus one that is sensitive \textit{only} to the bulk conformal geometry; this is precisely the reduced causal density matrix. 
Our argument thus implies that the natural dual to the causal wedge should be ignorant of the conformal factor, the bulk matter fields, and the bulk dynamics, all of which play no part in the definition of the causal wedge. Since (under the conditions of strong holography) we have argued that the space of causal states is a Hilbert space, we have exploited this structure to define a measure~$D_{\Rcal,\psi}$ of the amount of coarse-graining performed in going from a physical state~$\ket{\psi}$ to a reduced causal state~$\widetilde{\ket{\psi}}_\Rcal$; this is essentially a measure of the size of the space of states causally equivalent to~$\ket{\psi}$. We have also suggested that any objects constructed from the causal state alone should have a bulk interpretation as conformal invariants, though a detailed analysis of what these objects might be is beyond the scope of this paper. While our discussion of causal states (and causal density matrices) and their bulk duals has focused on states of QFTs, we have argued that many of our results continue to hold in the context of weak holography, where conditions~\ref{H1},~\ref{H2prime}, and~\ref{H3prime} hold. Importantly, this generalization relaxes the requirement that the boundary theory be a QFT and allows it to live on a null as well as timelike geometry. A full generalization to holographic theories on arbitrary spaces, and in particular to ones with spacelike boundaries, would be a natural next step, and would also be a significant step towards developing an understanding of the emergence of time. Since our primary purpose here was to define causal states and point out their importance in the reconstruction of semiclassical bulk duals, our analysis of their applications was necessarily brief and exploratory. Many interesting questions remain; we list a few below. 
\paragraph{Hilbert Space Factorization:} While we have established that parallels between the causal and entanglement pictures are misleading, it is nonetheless interesting to ask if the causal Hilbert space structure permits the definition of objects akin to entanglement or R\'enyi entropies from the causal density matrix. A natural approach is to factorize the causal Hilbert space in some way to obtain a definition of a partial trace of the causal density matrix. While it is unlikely that the causal Hilbert space factorizes over subregions like the physical Hilbert space does, it is possible that there exists a different, natural factorization that permits the definition of a partial trace over some part of the causal Hilbert space. \paragraph{Multiple Dual Geometries:} We defined the causal state using~$(d+3)$-point correlators of a local operator~$\Ocal(X)$. By construction, the emergent causal structure sourced by the singularities of $\Ocal(X)$ is such that high energy quanta of the field $\phi$ dual to $\Ocal$ (i.e. the characteristics of $\phi$) are null. It is interesting to note the possibility that two different local operators~$\Ocal_1(x)$ and~$\Ocal_2(x)$ give rise to two \textit{independent} sets of cuts, each of which gives rise to a different well-behaved conformal geometry. In this case, it would appear that \textit{both} conformal geometries are emergent from the same boundary theory state. There are two ways in which this might happen: first, if the high energy quanta of two bulk fields $\phi_{1}$ and $\phi_{2}$ have different propagation velocities\footnote{We thank David Simmons-Duffin for pointing this out.}, then the two apparently distinct conformal geometries capture different manifestations of dynamics on the same background. 
In such cases, there should exist a consistent way of embedding both fields $\phi_{1}$ and $\phi_{2}$ on the same geometry: the boundary state is really dual to one geometry. The second possibility is that there is no way of embedding the fields $\phi_{1}$ and $\phi_{2}$ in the same conformal geometry. In this latter case, we would say that the boundary state genuinely sources multiple bulk geometries. Can this happen? \begin{figure}[t] \centering \includegraphics[width=3.3cm,page=12]{Figures-pics.pdf} \caption{The entanglement wedge $E_{W}[\Rcal']$ is a proper subset of the causal wedge $C_{W}[\Rcal]$. The reduced causal density matrix of $\Rcal$ completely fixes the conformal geometry of $E_{W}[\Rcal']$. Thus, the reduced causal density matrix of $\Rcal$, which is defined using time-separated points, must significantly constrain the reduced density matrix of $\Rcal'$, which is defined spatially. } \label{fig:wedgenesting} \end{figure} \paragraph{Relationship to Entanglement:} Consider some boundary causal diamond~$\Rcal$ and a subset of it~$\Rcal' \subset \Rcal$ such that the entanglement wedge of~$\Rcal'$ lies within the causal wedge of~$\Rcal$: $E_{W}[\Rcal']\subset C_{W}[\Rcal]$, as shown in Figure~\ref{fig:wedgenesting}. Then the interior geometry of~$E_{W}[\Rcal']$ is fixed up to an overall function (i.e.~the conformal factor) by the reduced causal density matrix of~$\Rcal$. It is thus clear that the causal density matrix significantly constrains the reduced density matrix. This is somewhat surprising, however: the causal density matrix is by definition dependent on temporally-separated points, while the regular density matrix is not. This is quite likely related to the fact that the entanglement wedge is defined locally while the causal wedge is defined teleologically. 
Thus while we have argued that the causal wedge and entanglement wedge should be thought of in fundamentally different ways, it may be that the causal density matrix will allow purely field theoretic comparisons with the reduced density matrix. This could also potentially answer an RG flow question: we could give a definition of bulk depth via entanglement wedge inclusion instead of causal wedge inclusion (the latter of which is equivalent to the definition of bulk depth via the lightcone cuts~\cite{Eng16}). We would expect that one definition is constrained by the other; however, it is not clear what the precise relation is in the bulk. It is possible that this is more easily done by comparing the causal and regular density matrices. \paragraph{Pathological Geometries:} Some of the results presented in Section~\ref{subsec:cutreview} required the use of global or AdS hyperbolicity. This requirement, however, was used only for convenience; it is not required for the reconstruction of the conformal metric from open subsets of the lightcone cuts. This observation raises an interesting question: if the lightcone cut construction can be used to reconstruct non-globally hyperbolic or otherwise pathological spacetimes (e.g.~ones containing closed timelike curves), can these pathologies be interpreted from a boundary perspective? In other words, is there some structure to the singularities of~$O_\psi(X_i)$ which is indicative of a pathological dual causal geometry? Presumably, the corresponding boundary states should be pathological from a purely field theoretic perspective; thus answering this question would provide a boundary interpretation of causally pathological bulk duals. 
\section*{Acknowledgements} It is a pleasure to thank Chris Akers, Raphael Bousso, Chris Fewster, Daniel Harlow, Tom Hartman, Gary Horowitz, Bernard Kay, Juan Maldacena, Don Marolf, Mark Mezei, Eric Perlmutter, David Simmons-Duffin, Arkady Tseytlin, Mark Van Raamsdonk, and Aron Wall for useful discussions and correspondence. The work of NE was supported in part by NSF grant PHY-1620059. SF was supported by STFC grant ST/L00044X/1, and wishes to thank the UCSB physics department for hospitality while some of this work was completed. \bibliographystyle{jhep}
\section*{Nomenclature} \addcontentsline{toc}{section}{Nomenclature} \begin{IEEEdescription}[\IEEEusemathlabelsep\IEEEsetlabelwidth{$V_1,V_2,1$}] \item[$k$] Time-slot \item[$\mathcal{K}$] Set of time-slots $k$ \item[$x^\mathrm{j,PV}_{k}$] PV generation at time ${k}$, user ${j}$ \item[$x^\mathrm{j,Batt}_{k}$] Energy stored at time ${k}$, user ${j}$ \item[$\mathcal{J}$] Set of users $j$ \item[$\mathcal{B}$] Set of buyers $b$ \item[$\mathcal{S}$] Set of sellers $s$ \item[$\mathcal{P}$] Set of prices $p$ \item[${p}_{b}$] Offer price of buyer $b$ \item[${p}_{s}$] Bid price of seller $s$ \item[$\mathcal{Q}$] Set of quantities of energy $q$ \item[${q}_{b}$] Quantity of energy to purchase by buyer $b$ \item[${q}_{s}$] Quantity of energy to supply by seller $s$ \item[${t}_{d}$] Trading time \item[$\mathcal{ID}$] Set of identification index $id$ \item[$\mathcal{OB}$] Set of all orders in the order book \item[$Mo^{\delta}$] Market order with index $\delta$ \item[$p_{t}$] Transaction price \item[$q_{t}$] Quantity of energy to exchange \item[$Q_{T}$] Total of energy traded \item[$U$] Utility function \item[$C$] Cost function \item[$L_{min}$] Minimum value of trading prices \item[$L_{max}$] Maximum value of trading prices \end{IEEEdescription} \section{Introduction} Increasing penetration of renewable electricity generation and energy storage technology characterises the future of electrical power systems. These developments, along with advanced network communications, constitute one part of the smart grid vision. Undoubtedly, these technologies will bring about more active participation by end users. As a consequence, the network will confront changes in its structure as well as in its business model. Australia is one of the world leaders in PV installation: around 900,000 rooftop PV systems were installed in Australia between 2010 and 2012 \cite{mountain_chapter_2014}. 
The expansion of PV is accelerating as a result of factors such as rising electricity rates, subsidies and advances in technology, which are bringing more profitable solutions for users. However, incentives remain below customers' expectations. For example, one of the most common production subsidies for energy users is the feed-in tariff, in which households receive payments for the power exported to the grid. Nevertheless, this subsidy may not provide the revenue needed to recover the initial investment and the costs associated with energy generation. Users with PV systems and battery storage may take advantage of their surplus energy by optimising their energy consumption. Hence, users in a smart grid would be willing to seek more profitable alternatives to the current business model and to participate in a more efficient one. A clear example of innovation in this area is the pilot project named \textit{deX} (decentralised energy exchange), which is funded by the Australian Renewable Energy Agency (ARENA) and led by GreenSync \cite{deX}. The aim of that project is an online exchange platform for buying and selling grid services such as power from distributed energy resources. This scenario brings benefits for consumers through reduced electricity network bills. Furthermore, technological developments have brought more tools to facilitate the implementation of these models. For instance, some recent studies have considered using blockchain technology and smart contracts in electricity markets \cite{mihaylov_nrgcoin:_2014}, \cite{munsing_blockchains_2017}. A platform based on blockchain may be used to enhance the security of transactions, through a virtual currency, in a local energy market. Despite these enabling technologies, their complete deployment is not yet clear. Therefore, many emerging scenarios and uncertainties remain unresolved. 
In the context of a local low-voltage network with a small group of electricity users, a local trading market could be established as an alternative to the current business models. In \cite{bayram_survey_2014}, an overview of distributed energy trading in smart grids is presented. That survey explores the existing literature on trading algorithms involved in market frameworks such as distributed, centralised and simulation-based solutions. Likewise, the application and features of a local electricity market have been identified in \cite{ampatzis_local_2014}. In fact, these studies have shown that consumers and prosumers may operate a local market in order to obtain benefits which depend on factors such as load profiles, energy surplus and fair prices. Within this context, previous research has explored the opportunity of trading in micro-grid networks. For example, in \cite{matamoros_microgrids_2012}, the case of energy trading among isolated micro-grids has been addressed. To this end, the authors consider centralised as well as distributed approaches, formulated as a cost minimisation problem. Similarly, in \cite{cui_electricity_2014-1}, a welfare maximisation problem is described to deal with energy trading among micro-grids. While these studies focused on the interaction between networks, direct participation of low-voltage network users in a local market may also be established. In this paper, we consider that the units to be traded result from a surplus of energy generated and/or stored. As discussed in \cite{ilic_energy_2012}, \cite{endo_distributed_2017}, numerous approaches for energy trading have been introduced, considering different types of price discovery process. Independently of whether the market structure is centralised or not, it has been demonstrated that a scenario of prosumers with a surplus of energy and consumers in a local market is viable and reaches high levels of efficiency. 
In the distributed case, via a double auction, prices may be determined as a consequence of offers submitted continuously by agents. Through this mechanism, all agents may achieve some benefit even if they do not have a specific bidding strategy. Similarly, other works have shown the savings achieved by energy trading with game-theoretic approaches \cite{wang_game-theoretic_2014}, \cite{yaagoubi_energy_2015}. These studies encourage the use of bidding mechanisms in this context. Given these insights, the overall contribution of this paper is an analysis of energy trading in a local market, where prosumers and consumers are able to offer and to purchase energy. This paper seeks to partially bridge energy management systems and bidding mechanisms in a local market. Firstly, users minimise the cost of self-consumption, and subsequently they identify the amounts of energy and times at which to trade. Specifically, we consider both centralised and distributed electricity market mechanisms, in order to compare the benefits of each scheme. The distributed mechanism is based on the continuous double auction, which previous studies have shown to be a highly efficient method \cite{nicolaisen_market_2001}. In doing so, we illustrate the prices and the amounts of energy exchanged among agents in the market. This paper progresses as follows: the next section states the system model. This is followed by the description of the implementation in Section III. Then, Section IV presents simulation results and discussion. Finally, Section V concludes. \section{System Model} Our study focuses on a low-voltage network with \textit{distributed energy resources} (DER), as shown in Fig.~\ref{scenario}. 
While one group of households (prosumers) have PV systems, battery storage and \textit{home energy management systems} (HEMS), the other group consists of traditional customers willing to pay the rates defined by the grid operator. There are three components in our model: the local power network, the customers and the market for energy trading. \begin{figure}[h] \centering \includegraphics[width=2in]{Scenario1} \caption{Users in a low-voltage network} \label{scenario} \end{figure} \subsection{Local Power Network Model} We consider a smart grid system for energy trading at the local level. In order to minimise the costs associated with their use of energy, prosumers in the network have PV systems, battery storage and HEMS. Let $\mathcal{J}$ denote the set of all users $j$ in the local grid. Time is divided into time slots $k \in \left \{ 1,2,...,\mathcal{K}\right \}$, where $\mathcal{K}$ is the total number of time slots. We define $x_{k}^{j}$ as the amount of energy used by user $j \in {\mathcal{J}}$ in time slot $k$. There are two categories of users in power systems: consumers and prosumers. The model of the users is based on the CREST model \cite{mckenna_high-resolution_2016}, which is a high-resolution stochastic model of electricity demand. This model simulates electrical demand and generation due to appliances, lighting and photovoltaic systems. The first objective of the prosumers in $\mathcal{J}$ is to optimise their self-consumption considering their demand, the energy generated through the PV system $x_{k}^{j,PV}$, and the energy stored in the battery $x_{k}^{j,Batt}$. 
Therefore, the optimisation problem of a HEMS is given by: \begin{equation}\label{HEMS} \begin{aligned} & \underset{X}{\text{min}} & & \sum_{k=1}^\mathcal{K}{(s_{k}^{+}x_{k}^{+} - s_{k}^{-}x_{k}^{-}}) \\ & \text{s.t.} & & \text{satisfies storage device, comfort, power flow} \\ &&& \text{and energy balance constraints,} \\ &&& \forall{k} \in{\left \{1... \mathcal{K}\right \}}\\ \end{aligned} \end{equation} \noindent where $x_{k}^{+}$ and $x_{k}^{-}$ are the amounts of energy flowing from the grid and to the grid, respectively. The state variables in the model are $s_{k}^{+}$ and $s_{k}^{-}$. The former is associated with the price of energy in time slot $k$, and the latter with the incentive received for the contribution to the grid. In other words, $s_{k}^{+}$ and $s_{k}^{-}$ may be related to rates (e.g. flat, time-of-use) or incentives (e.g. feed-in tariff). The outcome of the previous process provides \textit{net load profiles} for users with HEMS. Given that prosumers have an excess of energy after their self-optimisation, a local market for energy trading may be established. \subsection{Energy Trading Model} The effectiveness and performance of the market depend on the mechanisms implemented. In our study, we have considered distributed and centralised schemes. \subsubsection{Distributed Market} The operation of the market in this case involves only the two parties interested in each trade. Hence it is a peer-to-peer (P2P) market based on bilateral contracts between agents (buyers and sellers). In order to achieve individual welfare, the agents submit offers/bids based on their preferences and costs. The local market is based on a \textit{continuous double auction} (CDA), where there is a set of buyers, $b\in{\mathcal{B}}$, and a set of sellers, $s\in{\mathcal{S}}$, willing to participate continuously in the market considering their trading prices ($p_{b}$, $p_{s}$) and their amounts of energy to purchase or supply ($q_{b}$, $q_{s}$) (see Fig.~\ref{market}). 
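The battery dispatch underlying the HEMS problem above can be illustrated with a short sketch. The following Python fragment is \emph{not} the MILP formulation of \eqref{HEMS}; it is a greedy self-consumption heuristic (charge on surplus, discharge on deficit), which is cost-optimal only when the import price $s_{k}^{+}$ exceeds the feed-in rate $s_{k}^{-}$ in every slot, as is the case for the ToU/FiT tariffs considered here. All function and variable names are illustrative, and comfort and power-flow constraints are omitted.

```python
# Illustrative sketch (not the paper's MILP): a greedy battery dispatch
# approximating the HEMS cost minimisation of Eq. (1). Valid as an optimum
# only when the import tariff s+ exceeds the feed-in tariff s- in every slot.

def hems_greedy(demand, pv, batt_cap, eta=1.0):
    """Return per-slot grid import x_plus and export x_minus.

    demand, pv : per-slot energy (kWh)
    batt_cap   : battery capacity (kWh); battery starts empty
    eta        : charging efficiency (1.0 = lossless)
    """
    soc = 0.0                              # battery state of charge
    x_plus, x_minus = [], []
    for d, g in zip(demand, pv):
        net = d - g                        # positive => deficit, negative => surplus
        if net < 0:                        # surplus: charge the battery first
            charge = min(-net, batt_cap - soc)
            soc += charge * eta
            x_plus.append(0.0)
            x_minus.append(-net - charge)  # export what the battery cannot absorb
        else:                              # deficit: discharge the battery first
            discharge = min(net, soc)
            soc -= discharge
            x_plus.append(net - discharge)
            x_minus.append(0.0)
    return x_plus, x_minus
```

The outputs $x_{k}^{+}$ and $x_{k}^{-}$ of this sketch play the role of the net load profiles that feed the trading stage.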
Previous studies have shown that high market efficiency may be directly attributed to continuous double auctions \cite{nicolaisen_market_2001}. This auction mechanism is widely used in stock markets around the world; examples include NASDAQ, the New York Stock Exchange and online markets such as the auctions conducted by eBay. In a CDA, the agents offer or bid during a trading time $t_{d}$, and their offers and bids are registered in an \textit{order book} $\mathcal{OB}(id,p,q,t)$, where each order has an index $id \in{\mathcal{ID}}$, price $p \in{\mathcal{P}}$, quantity $q \in{\mathcal{Q}}$ and the time $t\in{\mathcal{T}}$ at which the order was received. During the trading time, the process of arrivals in the order book follows a Poisson process with mean $\lambda$. Additionally, this is a multi-unit market where the units exchanged represent the flow of power between agents, which is their main motivation to participate in the market. \begin{figure}[h] \centering \includegraphics[width=1.5in]{market} \caption{Prosumers and Consumers submit continually their offers and bids.} \label{market} \end{figure} Let us define $Mo^{\delta} \in{\mathcal{OB}}$ as the market order with index ${\delta \in{\Delta}}$. An order becomes a market order if it is matched during the trading time $t_{d}$. Once the market is closed, the outcome of the trading is a set of market orders $Mo^{\delta}$, each with a transaction price $p_{t}^{\delta}$ and quantity $q_{t}^{\delta}$. For the matching process, there are two fundamental properties of offers/bids to be considered: the price, and the time at which the offer/bid was recorded in the order book. Hence, the best offer (buyers) is the earliest offer with the highest price. Likewise, in the case of bids (sellers), the best one is the earliest bid with the lowest price. To determine whether a transaction is completed or not, the best bid and the best offer are compared. 
If $p_{b} \geq p_{s}$, the orders match and the agents exchange energy. Otherwise, they remain in the order book. If a new offer/bid is not better than the current best one, it is added to the order book according to its arrival time and price. This process is executed repeatedly in the order book during the trading period. After the matching process, an order may have been only partially filled; if so, it remains at the top of the order book waiting for a new counter-order. Once the trading time has elapsed, the total energy traded $Q_{T}$ is given by: \begin{equation} Q_{T} = \sum_{\delta\in{\Delta}}{q_t^{\delta}} \end{equation} Conventionally, the participants of markets, buyers and sellers, define their offers and bids based on their preferences and associated costs. Since our interest is to assess the benefits of a local market, in our study the agents are \textit{zero intelligence plus} (ZIP) traders. In \cite{gode_allocative_1993}, Gode and Sunder designed zero intelligence traders: buyers and sellers submit their offers or bids randomly, subject to their constraints ($L_{max}$ and $L_{min}$ are the maximum and minimum price, respectively). In order to improve the performance of zero intelligence traders, Cliff and Bruten \cite{ZIP} developed ZIP traders. These agents have a profit margin which determines the difference between their limit prices and their offers or bids. Under this strategy, traders adapt and update their margins based on the matching of previous orders. Algorithm \ref{zi} shows an overview of the order book process with ZIP traders. 
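The price-time priority matching described above can be sketched as follows. This Python fragment is a minimal illustration rather than the simulation code used in the paper; the class and method names are hypothetical, and the convention that the earlier of the two matched orders sets the transaction price is one common CDA choice, assumed here for concreteness.

```python
# Minimal sketch of the CDA matching step: price-time priority, partial
# fills remain at the top of the book. Orders are [price-key, time, qty].

import heapq

class OrderBook:
    def __init__(self):
        self._bids = []   # max-heap on price: entries [-price, time, qty]
        self._asks = []   # min-heap on price: entries [price, time, qty]
        self.trades = []  # completed market orders as (p_t, q_t) pairs

    def submit(self, side, price, time, qty):
        if side == "buy":
            heapq.heappush(self._bids, [-price, time, qty])
        else:
            heapq.heappush(self._asks, [price, time, qty])
        self._match()

    def _match(self):
        while self._bids and self._asks:
            best_bid, best_ask = self._bids[0], self._asks[0]
            p_b, p_s = -best_bid[0], best_ask[0]
            if p_b < p_s:
                break                  # no cross: both orders rest in the book
            q = min(best_bid[2], best_ask[2])
            # convention: the earlier order sets the transaction price
            p_t = p_b if best_bid[1] <= best_ask[1] else p_s
            self.trades.append((p_t, q))
            best_bid[2] -= q
            best_ask[2] -= q
            if best_bid[2] == 0:
                heapq.heappop(self._bids)
            if best_ask[2] == 0:
                heapq.heappop(self._asks)
```

Summing the quantities of the recorded trades after the trading time elapses reproduces the total traded energy $Q_{T}$ defined above.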
\begin{algorithm} \caption{Algorithm for the ZIP traders} \label{zi} \begin{algorithmic}[1] \Procedure{Order Book}{$\mathcal{OB}$} \State initialization; \While {market is open} \State randomly select a new trader \State new order by ZIP-trader \If{buyer} \State new $\mathcal{OB}(id_{b},p_{b},q_{b},t)$ \Else \State new $\mathcal{OB}(id_{s},p_{s},q_{s},t)$ \EndIf \State allocation of new order in $\mathcal{OB}$ \State evaluate matching process \State \Comment{Update values of profit margins ---------- Buyers} \If{the last order was matched at price $p_{t}$} \State all buyers for which $p_{b}\geq p_{t}$, raise their margin; \If {the last trader was a seller} \State any active buyer for which $p_{b}\leq p_{t}$, \State lower their margin; \EndIf \Else \If {the last trader was a buyer} \State any active buyer with $p_{b}$ at or below \State the last order price, lower their margin; \EndIf \EndIf \State \Comment{Update values of profit margins ---------- Sellers} \If{the last order was matched at price $p_{t}$} \State all sellers for which $p_{s}\leq p_{t}$, raise their margin; \If {the last trader was a buyer} \State any active seller for which $p_{s}\geq p_{t}$, \State lower their margin; \EndIf \Else \If {the last trader was a seller} \State any active seller with $p_{s}$ at or above \State the last order price, lower their margin; \EndIf \EndIf \EndWhile \EndProcedure \end{algorithmic} \end{algorithm} \subsubsection{Centralised Market} In this structure, the optimal dispatch is decided based on the full information of consumers and prosumers. In order to maximise the global welfare, an energy allocation algorithm identifies the market equilibrium in each trading period. Commonly, the social welfare is formulated through utility and cost functions. Each consumer has a utility function $U(q_{b})$ that models the level of satisfaction obtained from purchasing energy. Likewise, each prosumer has a cost function $C(q_{s})$ that represents the costs associated with the amount of energy generated. 
Since the social welfare problem must ensure power balance, the maximisation of social welfare takes the form: \begin{equation} \begin{aligned} & \underset{q_{b}, q_{s}}{\text{max}} & & \sum_{t \in \mathcal{T}}{U(q_{b}^{t}) - C(q_{s}^{t})} \\ & \text{s.t.} & & \text{units constraints and load balance} \\ &&& \text{constraints} \\ \end{aligned} \end{equation} \noindent While the dispatch is decided through optimal allocation, the pricing methodology depends on the mechanism used to define a single \textit{market clearing price} (MCP). The value of this variable represents the price of each unit to be exchanged. Generally, the MCP is equal to the equilibrium price, i.e. the intersection of the supply and demand curves. As a result of this process, all bids with $p_{s} \leq MCP$, as well as all offers with $p_{b} \geq MCP$, are accepted. Consequently, consumers and prosumers are informed of the amount to be traded. Although this mechanism balances the market between agents, different methods have been developed to bring other properties to markets. In particular, the Vickrey-Clarke-Groves (VCG) mechanism is incentive compatible for optimising the social welfare through efficient allocation, where each agent pays an amount equal to the social cost/damage that they cause the other players \cite{tardos_introduction_2007}. Hence, truthfulness is a dominant strategy in this mechanism. In order to compare the two approaches in this centralised market, we evaluate both mechanisms in the formulated scenario. As this is a preliminary study, we do not yet consider network losses or constraints, in effect assuming a copper-plate network model. A full network representation will be integrated into the trading mechanism as part of future work. 
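As a minimal illustration of the clearing step described above (our own sketch, not the paper's implementation; the midpoint pricing rule is one common convention among several), a uniform-price clearing for a single trading period could look like:

```python
# Illustrative sketch: uniform-price clearing of one trading period.
# The MCP is approximated as the midpoint between the last accepted
# bid and ask prices (an assumed convention).
def clear_market(bids, asks):
    """bids/asks: lists of (price, qty). Returns (mcp, traded_qty)."""
    bq = sorted(([p, q] for p, q in bids), key=lambda o: -o[0])  # demand curve
    aq = sorted(([p, q] for p, q in asks), key=lambda o: o[0])   # supply curve
    traded, i, j = 0.0, 0, 0
    last_bid = last_ask = None
    while i < len(bq) and j < len(aq) and bq[i][0] >= aq[j][0]:
        q = min(bq[i][1], aq[j][1])
        traded += q
        last_bid, last_ask = bq[i][0], aq[j][0]
        bq[i][1] -= q
        aq[j][1] -= q
        if bq[i][1] == 0.0:
            i += 1
        if aq[j][1] == 0.0:
            j += 1
    if last_bid is None:
        return None, 0.0  # no intersection, nothing trades
    return (last_bid + last_ask) / 2.0, traded
```

All accepted sellers satisfy $p_s \leq$ MCP and all accepted buyers $p_b \geq$ MCP, as stated in the text.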
\section{Implementation} The proposed market consists of a group of dwellings, comprising a mix of consumers and prosumers, a market structure determining prices and matching of trades, a copper-plate network facilitating transport of energy, and a communications network enabling the flow of market-related information. Demand profiles, with $k=1$ minute resolution, are based on the CREST Demand Model \cite{mckenna_high-resolution_2016}. Fig.~\ref{tariffs} depicts the time-of-use (ToU) tariff and feed-in tariff (FiT) used in our model. Since ZIP traders perform better when the maximum and minimum constraints are defined, we use the tariff values throughout the day to define $L_{max}$ and $L_{min}$: the former depends on the ToU tariff and the latter on the FiT. These definitions are consistent in the sense that no buyer would pay more than the retailer's tariff (ToU) and no seller would sell units cheaper than the incentive (FiT) they would otherwise expect to receive. In summary, the process of our model is: \begin{description} \item[$\bullet$] Prosumers run HEMS to minimize their cost based on the optimisation problem (\refeq{HEMS}) formulated previously. To solve the problem, we used a mixed-integer linear program (MILP). \item[$\bullet$] Prosumers state the time-slots in which they have extra energy to trade. \item[$\bullet$] The input data for the market include the load profiles of prosumers and consumers, and the tariff values. \item[$\bullet$] The local market is run and the order book starts to receive orders during the trading periods. \item[$\bullet$] Agents accept the amount of units to be exchanged and their prices. \end{description} Additionally, two scenarios were evaluated. In the first, the energy to trade is the surplus generated by the PV systems. In the second, prosumers are willing to trade both the surplus they generate and energy stored in their batteries. 
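The HEMS optimisation itself depends on the full household model and is solved as a MILP; purely to illustrate the underlying idea, a simplified greedy stand-in that stores PV surplus in the battery and discharges it in the most expensive slots might look as follows (the function name and the single-pass charging simplification are ours, not the paper's):

```python
# Illustrative stand-in for the HEMS step (NOT the paper's MILP): a greedy
# heuristic that stores PV surplus in the battery and discharges it in the
# most expensive slots. It ignores the intra-day ordering of charge and
# discharge, which the real MILP enforces.
def greedy_battery_schedule(prices, demand, pv, cap, pmax):
    """Return the grid import per slot after battery self-consumption."""
    deficit = [max(d - g, 0.0) for d, g in zip(demand, pv)]
    surplus = [max(g - d, 0.0) for d, g in zip(demand, pv)]
    # energy charged from PV, limited by capacity and per-slot power
    stored = min(cap, sum(min(s, pmax) for s in surplus))
    grid = list(deficit)
    # serve the most expensive deficit slots from the battery first
    for t in sorted(range(len(grid)), key=lambda t: -prices[t]):
        use = min(grid[t], pmax, stored)
        grid[t] -= use
        stored -= use
    return grid
```

Whatever surplus remains after this self-consumption step is what a prosumer would state as available for trading.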
More specifications and results are presented in the next section. \begin{figure}[h] \centering \includegraphics[width=2.7in]{tariffs} \caption{Time-of-use tariff (ToU) and feed-in-tariff (FiT)} \label{tariffs} \end{figure} \section{Simulation Results and Discussion} For simulating the proposed system, we used load profiles from the CREST Model for 100 dwellings, of which 37 are prosumers and 63 are consumers. Following the HEMS process, prosumers identified their surplus, and therefore the quantities to trade in the market. \subsection{Scenario 1} In this case, the best time for participating in the market is around midday, as a consequence of the extra units generated by the PV systems. Moreover, users with HEMS meet their demand and store energy in their batteries to use at peak-time prices. For this reason, in our case study, the most productive trading periods were between 8 am and 3 pm. As mentioned before, the market was run under three mechanisms: centralised with equilibrium price, centralised with the VCG mechanism, and distributed P2P. To compare the transaction prices under each mechanism, we calculated the average transaction price $\left<{T_{p}}\right>$ during the day. \begin{figure}[h] \centering \includegraphics[width=3.5in]{avp1} \caption{Average transaction prices $\left<{T_{p}}\right>$ in scenario 1} \label{AVP} \end{figure} Fig.~\ref{AVP} shows the average transaction price for each hour. All prices tend to remain in the range between $L_{min}$ and $L_{max}$ (i.e. the FiT and ToU values); hence, both buyers and sellers obtain a benefit from the local market. In the P2P case, there are no large fluctuations during each trading period. This is due to the strategy used by the ZIP traders: agents learn during trading and modify their margins to participate in the market. The transaction prices converge rapidly, resulting in no significant variations. 
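The hourly $\left<{T_{p}}\right>$ reported in Fig.~\ref{AVP} can be computed as a quantity-weighted mean of the matched trades (the weighting convention is our assumption; the paper does not state it explicitly):

```python
# Sketch of the average transaction price <T_p> for one hour, taken as a
# quantity-weighted mean of the matched trades (an assumed convention).
def avg_transaction_price(trades):
    """trades: list of (price, qty) matched within one hour."""
    total_q = sum(q for _, q in trades)
    if total_q == 0:
        return None  # no trades in this hour
    return sum(p * q for p, q in trades) / total_q
```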
The number of traders and the units to trade differ in each trading period; therefore, the transaction price need not converge to the same value. There is a peak at 2 pm because of the change in the tariff at that time. Additionally, we can conclude that the VCG mechanism brings slightly lower prices in the centralised case. However, this is potentially associated with one feature of the VCG mechanism, its budget deficit: in this case, sellers accepted prices to the benefit of buyers, and those prices tend to be below the equilibrium price. Fig.~\ref{TE} presents the total energy traded $Q_{T}$ in each hour. In the first hours, the amount traded is small; it increases gradually to a peak around midday and then decreases progressively over time. Furthermore, the quantity exchanged with the P2P mechanism is slightly lower throughout the day. This is caused by factors associated with the auction process and the agents' strategies, such as order arrival times, price margins, and the evolution of the ZIP traders' learning process. In the centralised options, the traded units are identical because both use the same allocation method. \begin{figure}[h] \centering \includegraphics[width=3.5in]{totale1} \caption{Total energy traded $Q_{T}$ in scenario 1} \label{TE} \end{figure} Table \ref{t1} shows the savings and profits (in dollars) that consumers and prosumers would potentially obtain from participating in the local market during one day, and compares the profitability of each mechanism. Savings indicate the money buyers save by buying in the local market instead of from the grid operator; similarly, profit represents the extra money that sellers earn. \begin{table}[h] \centering \caption{Scenario 1. Simulation results for different mechanisms in the local market. 
These values represent the revenues for agents} \label{t1} \begin{tabular}{c|cccccc} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Time \\ (Hr)\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Centralised}} & \multicolumn{2}{c|}{\textbf{VCG}} & \multicolumn{2}{c}{\textbf{P2P}} \\ \cline{2-7} & \textbf{Savings} & \textbf{Profit} & \textbf{Savings} & \textbf{Profit} & \textbf{Savings} & \textbf{Profit} \\ \hline 8 & 0.01 & 0.04 & 0.01 & 0.04 & 0.01 & 0.01 \\ 9 & 0.10 & 0.34 & 0.12 & 0.32 & 0.06 & 0.29 \\ 10 & 0.61 & 0.42 & 0.65 & 0.39 & 0.14 & 0.64 \\ 11 & 1.42 & 0.21 & 1.46 & 0.18 & 0.45 & 0.67 \\ 12 & 1.15 & 0.35 & 1.18 & 0.33 & 0.57 & 0.39 \\ 13 & 0.74 & 0.47 & 0.76 & 0.46 & 0.64 & 0.30 \\ 14 & 0.11 & 0.34 & 0.17 & 0.30 & 0.10 & 0.36 \\ 15 & 0.01 & 0.20 & 0.02 & 0.20 & 0.02 & 0.27 \\ \hline \textbf{Total} & \textbf{4.19} & \textbf{2.39} & \textbf{4.37} & \textbf{2.21} & \textbf{1.98} & \textbf{2.93} \\ \hline \end{tabular} \end{table} From Table \ref{t1}, we can see that there are benefits for all agents in the market regardless of the mechanism used. For the buyers, the centralised mechanisms were more profitable, particularly the VCG mechanism. The sellers, in turn, achieved more profit from the distributed mechanism: even though more energy was exchanged under the centralised mechanisms than under the distributed one, the sellers' payoff can be higher because of the transaction prices. \subsection{Scenario 2} In this case, prosumers are willing to keep some energy in the battery to trade, instead of using the whole surplus only for self-consumption. Consequently, there are additional trading times throughout the day. The results of this scenario are shown in Figs.~\ref{AVP2} and \ref{TE2}. \begin{figure}[h!] \centering \includegraphics[width=3.5in]{avp} \caption{Average transaction prices $\left<{T_{p}}\right>$ in scenario 2} \label{AVP2} \end{figure} \begin{figure}[h!] 
\centering \includegraphics[width=3.5in]{totale} \caption{Total energy traded $Q_{T}$ in scenario 2} \label{TE2} \end{figure} An increase in trading hours is evident. Similar to the previous scenario, there are no significant changes within each trading period, and the prices respond to each mechanism and to the agents' strategies. Likewise, the change in the tariff value causes high price variation around 2 pm. The total energy traded in this scenario is substantially greater: whereas in the first scenario there was no energy to trade after 3 pm, here trading extended until 7 pm. Therefore, sellers have the opportunity to trade for longer, and buyers may avoid peak prices from the grid operator. After the peak time, sellers may start to charge their batteries again at better prices (shoulder and off-peak periods). As shown in Table \ref{t2}, the most profitable methods, for both sellers and buyers, are the centralised mechanisms. However, there were some periods when the P2P mechanism was better for sellers (from 11 am to 3 pm). \begin{table}[h] \centering \caption{Scenario 2. Simulation results for different mechanisms in the local market. 
These values represent the revenues for agents} \label{t2} \begin{tabular}{c|cccccc} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Time \\ (Hr)\end{tabular}}} & \multicolumn{2}{c|}{\textbf{Centralised}} & \multicolumn{2}{c|}{\textbf{VCG}} & \multicolumn{2}{c}{\textbf{P2P}} \\ \cline{2-7} & \textbf{Savings} & \textbf{Profit} & \textbf{Savings} & \textbf{Profit} & \textbf{Savings} & \textbf{Profit} \\ \hline 7 & 0.38 & 0.43 & 0.40 & 0.42 & 0.24 & 0.20 \\ 8 & 0.44 & 0.40 & 0.45 & 0.39 & 0.34 & 0.29 \\ 9 & 0.47 & 0.39 & 0.48 & 0.38 & 0.39 & 0.33 \\ 10 & 0.69 & 0.32 & 0.71 & 0.31 & 0.39 & 0.32 \\ 11 & 1.49 & 0.16 & 1.50 & 0.15 & 0.69 & 0.44 \\ 12 & 1.27 & 0.23 & 1.29 & 0.21 & 0.68 & 0.30 \\ 13 & 1.20 & 0.24 & 1.21 & 0.23 & 0.61 & 0.23 \\ 14 & 4.32 & 1.04 & 4.38 & 0.98 & 1.78 & 1.63 \\ 15 & 3.28 & 1.36 & 3.42 & 1.22 & 1.94 & 1.42 \\ 16 & 2.47 & 1.60 & 2.49 & 1.58 & 1.67 & 1.12 \\ 17 & 2.40 & 1.56 & 2.41 & 1.54 & 1.68 & 1.04 \\ 18 & 2.35 & 1.53 & 2.36 & 1.52 & 1.58 & 1.01 \\ 19 & 2.44 & 1.58 & 2.46 & 1.56 & 1.59 & 1.04 \\ \hline \textbf{Total} & \textbf{23.22} & \textbf{10.83} & \textbf{23.57} & \textbf{10.49} & \textbf{13.57} & \textbf{9.40} \end{tabular} \end{table} Both scenarios have shown profitable results for traders: regardless of the mechanism used, consumers and prosumers have a great opportunity to achieve revenues if they participate in a local market. \section{Conclusion} In this paper, we have assessed the performance and profitability of a local market, formulated considering both centralised and distributed mechanisms in a continuous double auction. The units traded represent the energy surplus of a group of users with PV systems, battery storage and HEMS. Our simulations and results have shown the benefits that the local market brings to its participants, independently of the mechanism implemented. 
For future work, it is of interest to extend the study of agent strategies with a more extensive analysis, including penalties and technical constraints in the network. \bibliographystyle{IEEEtran}
\section*{Introduction} The recent emergence of spin-orbit torque (SOT)\cite{Miron2011nat,brataas2014natnano,gambardella2011philtrans} from pure spin currents, has opened a new avenue for the controlled manipulation of magnetic moments in spintronic devices resulting in dramatically improved efficiency\cite{liu2012science,mellnik2014nature} and much lower power dissipation\cite{liu2012prl,avci2016natmat} compared to conventional spin-transfer torque (STT)\cite{ralph2008jmmm}. Thanks to the ability of SOT to compensate natural magnetic damping over spatially extended regions, there has been considerable interest in exploring the long range enhancement of spin wave (SW) propagation\cite{wang2011prl,padron2011apl,liu2018natcomm,demidov2014magnonicsapl,evelt2016apl,gladii2016apl,divinskiy2018arxiv} in a variety of nanoscale devices with the aim of developing an energy-efficient and ultra-high-speed beyond-CMOS spin-wave-based technology for signal processing\cite{chumak2014ntcom,balinskiy2018aip} and computation\cite{chumak2015ntmat,bracher2018jap,liu2018natcomm,chumak2019arXiv}, so-called magnonics\cite{neusser2009adv,kruglyak2010iop}. One of the most promising SOT devices for active, controllable SW generation on the nano-scale is the nano-constriction based spin Hall nano-oscillator (SHNO)\cite{Demidov2014apl,Kendziorczyk2016prb,Awad2016natphys,durrenfeld2017nanoscale,Chen2016procieee,divinskiy2017apl}. It can be easily fabricated using a wide range of different bilayer combinations\cite{Zahedinejad2018apl,mazraati2016apl,evelt2018pra,divinskiy2017apl} and the generated SWs can be directly observed using both electrical and optical microwave spectroscopy\cite{Zahedinejad2018apl,mazraati2016apl,evelt2018pra,divinskiy2017apl,Awad2016natphys,mazraati2018pra,mazraati2018arxiv}. 
Most importantly for applications, nano-constriction SHNOs exhibit highly robust mutual synchronization, both in long chains\cite{Awad2016natphys} and in two-dimensional arrays\cite{Zahedinejad2018arxiv}, which both improves their signal properties by orders of magnitude and lends itself to neuromorphic computing\cite{romera2018nature,grollier2016ieee}. A major limitation of nano-constriction SHNOs, however, is the localized nature of the SOT driven auto-oscillations.\cite{Dvornik2018prappl} The localization is a consequence of the easy-plane anisotropy and the geometry of the device, which lead to a negative magnetodynamic non-linearity, further exacerbated by the Oersted field from the drive current, and from the SOT itself.\cite{Dvornik2018prappl,Dvornik2018arxiv} It would be highly advantageous if the localization could be mitigated so as to generate truly propagating SWs. Not only should this lead to mutual synchronization over much longer distances, it would also make SOT driven SWs directly applicable to additional non-conventional computing schemes such as wave based computing\cite{lenk2011pr,klingler2014apl,chumak2015ntmat,fischer2017apl}. In a recent work\cite{evelt2018pra}, Evelt and coworkers demonstrated SOT driven propagating SWs in extended Bi-substituted YIG films with perpendicular magnetic anisotropy\cite{Soumah2018natcomm} (PMA). While the auto-oscillations could only be observed optically and did not exhibit any frequency tunability via the drive current, the demonstration raises the question of whether the addition of PMA to metal based nano-constriction SHNOs could potentially lead to SOT generated propagating SWs in more practical devices directly compatible with CMOS technology\cite{Zahedinejad2018apl}. Here we show, using 150 nm and 200 nm nano-constrictions in W/CoFeB/MgO material stacks with substantial PMA, that it is indeed possible to generate strongly current-tunable propagating SWs over a very wide frequency range of about 3--23 GHz. 
The SWs are studied using electrical microwave spectroscopy and modelled using micromagnetic simulations. Auto-oscillations are observed at currents as low as 0.15 mA, where they are still localized and exhibit negative non-linearity. As the current is increased, the non-linearity changes sign and the localized SWs exhibit a smooth transition into propagating SWs at about 0.5 mA. It is hence possible to seamlessly turn on and off the localization, which will allow the generation of ultra-short SW pulses driven by the current alone, which is much faster than using an external field \cite{divinskiy2016apl}. \subsection{Perpendicular magnetic anisotropy controlling the magnetodynamic non-linearity.} The rich non-linear magneto-dynamics in patterned magnetic thin films can be analytically described by a single non-linearity coefficient, $\mathcal{N}$, the magnitude and sign of which determine the strength and nature of magnon-magnon interactions, with positive and negative values signifying magnon repulsion and attraction, respectively.\cite{slavini2005approximate,slavin2009nonlinear,Dvornik2018prappl} As spin transfer torque and SOT can generate very high SW amplitudes, the sign and magnitude of $\mathcal{N}$ lead to distinctly different behavior of the auto-oscillations. A negative non-linearity makes the auto-oscillation frequency decrease with amplitude, eventually moving it into the magnonic band gap, where it first leads to SW self-localization, and can further promote the nucleation of magnetodynamical solitons such as SW bullets\cite{slavin2005prl,bonetti2010prl,dumas2013prl} in easy-plane magnetic films and magnetic droplets\cite{mohseni2013science,chung2018direct} in films with very large PMA. A large positive non-linearity, on the other hand, makes the auto-oscillation frequency \emph{increase} with amplitude to well above the ferromagnetic resonance (FMR) frequency, leading to the propagation of spin waves with a finite real wave-vector. 
A prominent example is the so-called spin torque driven Slonczewski modes\cite{Slonczewski1999jmmm,madami2011natnano,houshang2018natcomm}, which can also form SW beams\cite{Hoefer2008prb} in oblique fields, particularly useful for mutual synchronization\cite{Houshang2015natnano}. The non-linearity is generally governed by the applied field vector and/or an effective magnetic anisotropy tensor and is zero for an isotropic magnet regardless of the applied external field strength. The easy-plane shape anisotropy of a magnetic thin film holds the magnetization vector in the film plane and therefore the non-linearity strongly depends on the strength and orientation of the applied field. However, anisotropy induced by an interface to a different material can counteract the shape anisotropy and even pull the magnetization vector out of the film plane. Such PMA contributes a term with the opposite sign in the nonlinear coefficient compared to the shape anisotropy. The impact of PMA on the sign and strength of $\mathcal{N}$ can then be calculated for a thin magnetic film using the method given in Refs.~\citen{slavini2005approximate,slavin2009nonlinear} with the result plotted in Fig.~\ref{fig:1}b as a function of the strength of applied out-of-plane (OOP) field ($\theta_{\text{ex}}=80^{\circ}$) and the PMA field. \begin{figure} \begin{center} \includegraphics[width=15cm]{Fig1.png} \caption{\textbf{Device schematic, magnetoresistance, and non-linearity coefficient.} (a) Schematic of a SHNO with nano-constriction width \textit{w}. (b) Contour plot displaying the analytically calculated non-linearity coefficient ($\mathcal{N}$) as a function of PMA strength and applied out-of-plane ($\theta = 80^{\circ}$) field for a thin magnetic film with saturation magnetization, $\mu_0 M_{\text{S}} =$ 0.93 T; dashed black line indicates $\mathcal{N} = 0$. 
(c) Anisotropic magneto-resistance measured with 0.05 mA on a 200 nm nano-constriction under a rotating 70 mT in-plane field.} \label{fig:1} \end{center} \end{figure} One notes that $\mathcal{N}$ increases monotonically from negative (red regions) to positive (blue regions) values as a function of the OOP field strength and goes through zero at a certain field value (black dashed line) that depends on the PMA strength\cite{bonetti2010experimental}, \emph{i.e.}~the PMA shifts the point of zero non-linearity towards lower values of applied field. Adding PMA hence makes it possible to reach positive non-linearity at much lower fields. As the auto-oscillation threshold current decreases with decreasing field, it should in principle be possible to drive propagating SWs at much lower currents, which the SHNO can sustain without degradation. As a side note, any further increase of the PMA beyond the point where it completely compensates the shape anisotropy, \emph{i.e.}~where the magnetization equilibrium angle changes from in-plane (IP) to perpendicular to the plane, again results in a negative $\mathcal{N}$ (the second red region at high PMA in Fig.~\ref{fig:1}b). As a consequence, the current dependence of the auto-oscillation frequency again becomes negative\cite{Rippard2010prb,Mohseni2011pssrrl}, the generated spin waves self-localize, and can eventually nucleate magnetic droplet solitons\cite{mohseni2013science,Macia2014natnano,Lendinez2015prb,Lendinez2017prappl,Divinskiy2017prb,chung2018direct}. \section*{Results} \subsection{Nano-patterned SHNO device schematic and magneto-dynamics.} \sloppy A schematic of a nano-constriction SHNO is shown in Fig.~\ref{fig:1}a. The material stack consisted of sputtered $\beta$-W(5~nm)/Co$_{20}$Fe$_{60}$B$_{20}$(1.4~nm)/MgO(2~nm). The $\beta$-phase of W has been shown to produce large SOT\cite{zhang2016apl,Demasius2016natcomm} and a thinner CoFeB interfaced with MgO layer enhances PMA in the CoFeB layer\cite{ikeda2010natm}. 
The stack was fabricated on highly resistive Si substrates to both dissipate the local heat generated during operation and to reduce microwave losses; the SHNOs are hence CMOS compatible\cite{Zahedinejad2018apl}. A positive direct current (d.c.) is injected from the signal pad to ground along the $y$-direction while $\phi$ and $\theta$ define the IP and OOP field angles, respectively. Fig.~\ref{fig:1}c shows the in-plane angular dependence of the anisotropic magneto-resistance (AMR) measured for a 200 nm wide nano-constriction SHNO exhibiting a relatively large overall AMR value of 0.46$\%$ between the parallel and perpendicular orientations. Fig.~\ref{fig:2} summarizes the spin-torque ferromagnetic resonance (ST-FMR) measurements performed on a 6$\times$18$~\mu m^2$ microstrip of W/CoFeB/MgO to determine the magneto-dynamical parameters. The inset of Fig.~\ref{fig:2}a schematically illustrates the experimental set-up employed for the ST-FMR (see Methods). The main panel of Fig.~\ref{fig:2}a shows the extracted resonance peak positions obtained at different microwave frequencies ranging from 3 to 12 GHz under RF current excitation. The resonance field dependence on frequency can be well fitted with the Kittel equation yielding an effective magnetization $\mu_0 M_{\text{eff}}=$ 0.31~T with a gyromagnetic ratio of $\gamma/2\pi$ = 29.9~GHz/T. With the saturation magnetization, $\mu_0 M_{\text{S}} =$ 0.93 T obtained from Alternating Gradient Magnetometry (AGM) measurements, we extract a PMA field of $\mu_0 H_{\text{k}}^{\perp}=\mu_0(M_{\text{S}}-M_{\text{eff}})=$ 0.62 T, \emph{i.e.}~while we have substantial PMA, it is not strong enough to pull the magnetization out-of-plane. Fig.~\ref{fig:2}b displays the plot of linewidths extracted as half width at half maximum (HWHMs) from the same resonance peaks as a function of different microwave frequencies and the linear best fit of experimental data gives rise to a Gilbert damping constant of $\alpha$ = 0.023. 
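The extracted quantities above are related by simple expressions; as a numerical cross-check (using only the values quoted in the text; the helper names are ours), the PMA field and the in-plane Kittel dispersion can be evaluated as:

```python
import math

# Values quoted in the text (ST-FMR fit and AGM measurement)
mu0_Ms   = 0.93    # saturation magnetization mu0*Ms (T)
mu0_Meff = 0.31    # effective magnetization mu0*Meff from the Kittel fit (T)
gamma    = 29.9    # gyromagnetic ratio gamma/2pi (GHz/T)

# PMA field: mu0*Hk_perp = mu0*(Ms - Meff) -> 0.62 T, as stated
mu0_Hk = mu0_Ms - mu0_Meff

def kittel_in_plane(mu0_H):
    """In-plane Kittel frequency f = (gamma/2pi) sqrt(H (H + Meff)), in GHz."""
    return gamma * math.sqrt(mu0_H * (mu0_H + mu0_Meff))
```

For example, at an in-plane field of 0.2 T this gives a resonance near 9.5 GHz, consistent with the 3--12 GHz range probed in the ST-FMR measurements.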
The inset of Fig.~\ref{fig:2}b shows the current induced linewidth changes extracted at a fixed microwave frequency of 7 GHz for two opposite in-plane field orientations, the slope of which yields a high value of the spin-Hall angle (SHA), $\theta_{\text{SH}}=$ -0.41, typical for $\beta$-W and indicating the presence of large SOT from the spin-Hall effect in our devices\cite{Demasius2016natcomm}. \begin{figure}[h] \begin{center} \includegraphics[width=15cm]{Fig2.png} \caption{\textbf{ST-FMR measurements. } (a) Resonance frequency vs.~in-plane field (blue dots) with a Kittel fit (red line) yielding an effective magnetization $\mu_0M_{\text{eff}}= \mu_0(M_{\text{s}}-H_{\text{k}}^{\perp})$ of 0.31 T. Inset: Illustration of the ST-FMR measurement on a 6$\times$18$~\mu m^2$ wide microstrip. (b) Extracted line-width (HWHM) vs.~resonance frequency yielding a Gilbert damping constant of $\alpha =$ 0.023. Inset: Current dependent ST-FMR line-width for both positive (blue dots) and negative (red dots) field directions yielding a spin Hall angle of -0.41.} \label{fig:2} \end{center} \end{figure} \subsection{Propagating spin waves} Figure \ref{fig:3} shows color plots of the generated microwave power spectral density (PSD) as a function of OOP applied field strength measured for two different nano-constriction widths with fixed direct currents of $I_{\text{dc}} =$ 1 and 2 mA, respectively. In both measurements, the IP field angle was fixed at $\phi$ = 22$^\circ$ to ensure sufficient electrical sensitivity to the auto-oscillation signal while the OOP field angle $\theta$ = 80$^\circ$ was chosen so as to achieve large positive non-linearity in the active nano-constriction region. The orange circles, fitted with a solid orange line using the Kittel equation (Eq. (\ref{eq:FMRfreq})), represent the FMR frequencies obtained from ST-FMR measurements on microstrips under identical applied field conditions. 
As the FMR frequency corresponds to a wave vector of $\vec{k}$ = 0, it allows us to distinguish between propagating and localized spin-wave modes in the SHNO, since spatially localized modes with a frequency well below FMR have no well-defined real wave vector, while propagating modes with frequency higher than the FMR have a finite real $\vec{k}$. It is noteworthy that in all previously investigated nano-constriction and nano-gap SHNOs\cite{Demidov2012b,Liu2013,durrenfeld2017nanoscale}, the auto-oscillations remained localized, with frequencies below the FMR frequency of the magnetic material. As can be seen in Fig.~\ref{fig:3}a, this is in our case only true for $\mu_0 H<$ 0.2 T and at all higher fields, the auto-oscillation frequency lies up to several GHz above the FMR frequency. This general behavior was observed in all devices as, \emph{e.g.}, in Fig.~\ref{fig:3}b, where a larger nano-constriction with $w$ = 200 nm follows the same trend, but now with improved microwave characteristics. It can be noted that in addition to a higher output power, the auto-oscillations in the larger nano-constriction cross over into propagating spin waves already at $\mu_0 H \sim$ 0.15 T, indicating a higher PMA than in the smaller nano-constriction. This is a general trend for different nano-constriction widths, which we believe is an effect of an etch induced reduction of PMA at the nano-constriction edges, which affects the smaller nano-constrictions in greater proportion. \begin{figure}[h] \begin{center} \includegraphics[width=16cm]{Fig3.png} \caption{\textbf{Auto-oscillating propagating spin waves vs.~OOP field strength } Power spectral densities (PSDs) vs. out-of-plane field for a (a) 150 nm and (b) 200 nm nano-constriction. Orange data points are the ST-FMR resonances obtained under identical conditions on 4$\times$14$~\mu m^2$ wide microstrips with the solid line being a fit to the Kittel equation (Eq. 
(\ref{eq:FMRfreq}) in Methods) } \label{fig:3} \end{center} \end{figure} The field dependence in Fig.~\ref{fig:3} is entirely consistent with the expected behavior based on Fig.~\ref{fig:1}b, where the influence of the PMA field strength on the non-linearity ($\mathcal{N}$) is depicted. In weak magnetic fields, $\mathcal{N}$ is negative, leading to localization of the auto-oscillations, but in stronger fields, $\mathcal{N}$ changes sign to positive, resulting in propagating spin waves. The strong PMA clearly allows one to achieve a positive $\mathcal{N}$ at a lower out-of-plane magnetization angle $\theta_{\text{M}}$, which in turn effectively results in a higher spin-Hall efficiency via the $\sin(90^{\circ}-\theta_{\text{M}})$ dependence and, therefore, substantially reduces the operational current\cite{giordano2016scirep}. Owing to the strong PMA field of the thinner CoFeB layer in the present case, the lowest auto-oscillation threshold current density is 1.5 $\times$ 10$^{7}$ A/cm$^2$, considerably lower than that observed in our recent study of W/CoFeB/MgO SHNOs with thicker CoFeB and negligible PMA\cite{Zahedinejad2018apl}. Having demonstrated the ability of nano-constriction SHNO devices to generate propagating spin waves, we now present the current tunability of the propagating mode under fixed OOP field strengths. Fig.~\ref{fig:4}a-d show the current dependent PSD plots for a 150 nm wide SHNO. At 0.4 T, we observe a non-monotonic current dependence, with a red shift of the auto-oscillation frequency at lower currents followed by a blue shift at higher currents. The FMR frequency, as shown by the dashed line, separates the two opposite non-linearity regimes observed in our device, indicating a dramatic crossover from localized auto-oscillations at small currents to propagating ones at higher currents. 
This peculiar non-linearity driven non-monotonic behaviour at lower fields is a manifestation of a gradual change in the confinement potential of the auto-oscillation mode with current and is consistent with the theoretical predictions \cite{Dvornik2018prappl,Dvornik2018arxiv} and our simulation results discussed below. At higher fields, 0.6 T $<\mu_0 H<$ 1 T, only a blue-shifted behaviour is observed, highlighting the dominance of the large positive $\mathcal{N}$ caused by the PMA. It is noteworthy that the variation of the auto-oscillation frequency with electric current results in a very large positive value of the current tunability (${df}/{dI}$), reaching values over 4 GHz$/$mA in our devices (see Fig.~\ref{fig:4}b). The current dependence for the wider (200 nm) SHNO, shown in Fig.~\ref{fig:4}e-h, only exhibits a blue-shifted auto-oscillation frequency starting from the threshold current in all fields, 0.4 T $\leq \mu_0 H\leq$ 1 T. It is interesting to note that we no longer observe any auto-oscillation localization in the wider nano-constriction, even at the lowest field of 0.4 T, which is consistent with stronger PMA in wider nano-constrictions. We also emphasize that the spectral linewidth of the auto-oscillations, $\Delta{f}<$ 20 MHz, extracted using a Lorentzian fit, yields a quality factor $Q={f}/\Delta{f}$ of up to 1000, indicating a considerably higher degree of oscillation coherence of the generated propagating spin waves in our devices. In addition, our demonstration does not require very large PMA fields to excite propagating SWs, while the generation takes place over a wider frequency range of 5 to 22 GHz \cite{evelt2018pra}. 
\begin{figure} \begin{center} \includegraphics[width=16cm]{Fig4.png} \caption{\textbf{Current tunability of spin wave auto-oscillations at different out-of-plane applied field strengths.} PSDs of the spin-wave auto-oscillations vs.~current for (a-d) a $w=$ 150 nm SHNO and (e-h) a $w=$ 200 nm SHNO subject to four different out-of-plane field strengths (0.4, 0.6, 0.8, and 1.0 T). Orange dashed lines indicate the FMR frequency. } \label{fig:4} \end{center} \end{figure} \subsection{Micromagnetic simulations} Finally, we present micromagnetic simulations performed under conditions comparable to our electrical measurements to study the spatial profiles of the SW auto-oscillations for a 150 nm nano-constriction SHNO. All the magnetodynamical parameters used in the simulations are directly taken from the ST-FMR measurements discussed above. Fig.~\ref{fig:5}a-c show the current dependent PSD under three different fixed OOP field strengths, indicating an excellent agreement with our experimental results in Fig.~\ref{fig:4}a-c. At 0.4 T, we observe a similar non-monotonic current dependence of the auto-oscillation frequency, starting with a red-shifting frequency followed by a blue-shifted behavior (see Fig.~\ref{fig:5}a). It is interesting to note the appearance of multiple frequency steps at the lowest field. These are likely related to discrete changes in the wave vector and can also be observed in our experimental results at the same field (Fig.~\ref{fig:4}a), albeit as smoother transitions. At higher applied fields, these mode transitions disappear both in the experiments and in the simulations. The auto-oscillations then only show a blue-shifted frequency behaviour. \begin{figure} \begin{center} \includegraphics[width=16cm]{Fig5.png} \caption{\textbf{Micromagnetic simulations.} (a-c) Micromagnetically simulated PSDs as a function of applied direct current through a 150 nm nano-constriction under field conditions used in the experiments. 
(d) Snapshots of the instantaneous $m_z$ component at three different currents showing how the auto-oscillations transition from localized (0.15 mA) to propagating (0.4 \& 0.7 mA) with a wave vector that increases with current. (e) Cuts through Figure \ref{fig:5}d along the Y axis at the three different currents.} \label{fig:5} \end{center} \end{figure} To gain deeper insight into the non-monotonic frequency behaviour, we plot the spatial profiles of the simulated auto-oscillations at three representative currents in Fig.~\ref{fig:5}d. The nature of the auto-oscillations is qualitatively different at low and high current: in the low-current region, where $df/dI<0$, the auto-oscillations are clearly localized to the vicinity of the nano-constriction; at the two higher currents, where $df/dI>0$, the SWs clearly propagate with a wave vector that increases with current. Fig.~\ref{fig:5}e shows line cuts of the instantaneous $m_z$ component along the Y axis at the three different currents, highlighting the transition from a localized nature at 0.15 mA to propagating spin waves, with about twice as large a wave vector at 0.70 mA compared to 0.40 mA. \section*{Discussion} The capability to generate high-frequency SOT-driven coherent propagating spin waves in metal-based CMOS-compatible SHNO devices has particular potential for a number of reasons. First, the SW generation takes place already at very small operational currents, with thresholds as low as 0.15 mA and a straightforward development path towards even lower currents, making these oscillators the most amenable to adaptation in nanomagnonic circuits. Thanks to the large spin Hall angle provided by the $\beta$-W layer together with the strong PMA due to the thinner CoFeB layer, the critical threshold current density required to excite propagating spin waves with SOT in metal devices has been reduced by about two orders of magnitude (10$^{7}$ A/cm$^2$) compared to theoretical predictions based on Pt and zero PMA\cite{giordano2014apl}. 
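As a rough order-of-magnitude consistency check of this figure (a sketch under our own simplifying assumption that the threshold current flows mainly through the 5 nm thick W layer across the 150 nm wide constriction):

```python
# Threshold current density estimate for the nano-constriction SHNO.
# Assumption (ours, illustrative): the 0.15 mA threshold current is carried
# mainly by the 5 nm W layer over the full 150 nm constriction width.
I_th = 0.15e-3   # threshold current (A)
w    = 150e-9    # constriction width (m)
t_W  = 5e-9      # W layer thickness (m)

J_th = I_th / (w * t_W)    # current density (A/m^2)
J_th_cgs = J_th * 1e-4     # convert to A/cm^2

print(f"J_th ~ {J_th_cgs:.1e} A/cm^2")  # 2.0e+07, i.e. on the order of 10^7 A/cm^2
```

Accounting for the current shunted through the 1.4 nm CoFeB layer (using the resistivities quoted in the Methods) would assign only roughly 60\% of the current to the W layer, which does not change the order of magnitude.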
Further reduction of the critical current density should also be possible with yet higher PMA in yet thinner CoFeB. Second, their current tunability should allow these SHNOs to rapidly switch between localized and propagating spin waves, effectively acting as tunable nanoscale sources of ultra-short SW pulses and wave packets\cite{divinskiy2016apl}. Third, the wave nature of the propagating SWs will now allow for experimental realizations of SOT-driven spin wave computing \cite{klingler2014apl,fischer2017apl}, such as interference-based majority gates. Finally, the propagating spin wave modes could further boost the long-range mutual synchronization of SHNO chains and networks for neuromorphic computing applications. Given the recent experimental demonstration of neuromorphic vowel recognition using four mutually synchronized STNOs\cite{romera2018nature}, we believe that long-range mutual synchronization driven by propagating spin waves in low-operational-current SHNOs may turn out to be the most viable solution for scaling neuromorphic computing to large dynamical neural networks. \begin{methods} \subsection{Nano-constriction device fabrication.} A trilayer stack of W(5)/Co$_{20}$Fe$_{60}$B$_{20}$(1.4)/MgO(2) (thicknesses in nm) was grown at room temperature on an intrinsic high-resistivity Si substrate ($\rho_{Si}>$10 k$\Omega \cdot$cm) using an AJA Orion-8 magnetron sputtering system. DC and RF sputtering were sequentially employed for the deposition of the metallic and insulating layers, respectively. The chamber was evacuated to a base pressure of 2$\times$10$^{-8}$ mTorr, while the Ar gas pressure was maintained at 3~mTorr during the growth of all the layers. The deposition rate of W was kept at 0.09~\AA/s to obtain the high-resistivity $\beta$-phase exhibiting a large spin Hall angle \cite{mazraati2016apl,zhang2016apl,Zahedinejad2018apl}. The same deposition rate was maintained for the Co$_{20}$Fe$_{60}$B$_{20}$ layer, while the MgO layer was grown at 0.04~\AA/s. 
The \textit{as-deposited} stack was subsequently annealed at the chamber base pressure at 300~$^{\circ}$C for 1 hour to induce PMA. The annealing process was followed by deposition of 4 nm of SiO$_x$ to protect the MgO layer from degradation due to exposure to ambient conditions. The resistivities of $\beta$-phase W and Co$_{20}$Fe$_{60}$B$_{20}$ were measured as 213 $\mu \Omega \cdot$cm and 100 $\mu \Omega \cdot$cm, respectively. The trilayer stack was then patterned into an array of 4$\times$14$~\mu m^2$ rectangular mesas, and the nano-constriction SHNO devices with different widths were defined at the center of these mesas by a combination of electron beam lithography and argon ion beam etching, using negative electron beam resist as the etching mask. In addition, we patterned 6$\times$18$~\mu m^2$ microstrips for ST-FMR measurements. The fabrication process is detailed in Ref. \citen{Zahedinejad2018apl}. \subsection{Analytical calculation of the non-linearity coefficient for a thin magnetic film.} To calculate the nonlinear coefficient $\mathcal{N}$, shown in Figure \ref{fig:1}b, we start from the magnetic energy density, which consists of Zeeman, dipolar and PMA terms. Employing a well-known method \cite{slavini2005approximate,slavin2009nonlinear}, in which the Hamiltonian is expressed in elliptically polarized dimensionless variables by sequentially applying Holstein-Primakoff and Bogolyubov transformations, with the further elimination of the non-resonant three-wave processes, one can derive the final result (see Ref. 
\citen{mohseni2018prb} for the details) as: \begin{equation} \mathcal{N}=\frac{2 \omega_{0}}{\mathcal{A}}\left(\mathcal{T}-3\frac{|\mathcal{W}_1|^2+|\mathcal{W}_2|^2}{\omega_0}\right), \label{eq:nonlinearity} \end{equation} where \begin{align*} \mathcal{T} &=\left[3(u^2+|v|^2)^2-1\right]\mathcal{U}_1/2-3u (u^2+|v|^2)\left(v\mathcal{U}_2+v^*\mathcal{U}_2^*\right), \\ \mathcal{W}_1 &= 3(u^2+|v|^2)(u\mathcal{V}-v^*\mathcal{V}^*)/2-(u\mathcal{V}+v^*\mathcal{V}^*)/2, \\ \mathcal{W}_2 &= - u v^*(u\mathcal{V}-v^*\mathcal{V}^*). \end{align*} The FMR frequency, shown by a solid line in Figure \ref{fig:3}, can be calculated as: \begin{equation} \omega_{0}=\sqrt{\mathcal{A}^2-|\mathcal{B}|^2}. \label{eq:FMRfreq} \end{equation} The coefficients of the Bogolyubov transformation are: \begin{equation*} u=\text{sign} (\mathcal{A}) \sqrt{\frac{\mathcal{A}+\omega_0}{2 \omega_0}}, \qquad v=\frac{\mathcal{B}^*}{|\mathcal{B}|}\sqrt{\frac{\mathcal{A}-\omega_0}{2 \omega_0}}, \end{equation*} where \begin{align*} \mathcal{A} &= \omega_{\text{H}}-\frac{1}{2}(\omega_{\text{k}}-\omega_{\text{M}})\cos^2 \theta_{\text{M}},\\ \mathcal{B} &= -\frac{1}{2}(\omega_{\text{k}}-\omega_{\text{M}})\cos^2 \theta_{\text{M}},\\ \mathcal{V} &= (\omega_{\text{M}}-\omega_{\text{k}})\sin \theta_{\text{M}} \cos \theta_{\text{M}}, \\ \mathcal{U}_1 &= (\omega_{\text{k}}-\omega_{\text{M}})\sin \theta_{\text{M}} \left(\frac{3}{2} \cos^2 \theta_{\text{M}}-1\right), \\ \mathcal{U}_2 &=-\mathcal{B}/2. \end{align*} In the above expressions we used the notations $\omega_{\text{M}}= \gamma \mu_0 M_{\text{S}}$, $\omega_{\text{k}}=\gamma \mu_0 H_{\text{k}}^{\perp}$, $\omega_{\text{H}}=\gamma \mu_0 H_{\text{M}}$, where $M_{\text{S}}$ is the saturation magnetization, $H_{\text{k}}^{\perp}$ is the PMA field, and $H_{\text{M}}$ is the effective internal field. 
The latter can be defined together with the internal angle of magnetization $\theta_{\text{M}}$ from the following equations: \begin{align*} H_{\text{M}} \cos \theta_{\text{M}} &= H_{\text{ex}} \cos \theta_{\text{ex}} \\ (H_{\text{M}}-H_{\text{k}}^{\perp}+M_{\text{S}}) \sin \theta_{\text{M}} &= H_{\text{ex}} \sin \theta_{\text{ex}}, \end{align*} where $H_{\text{ex}}$ is the external magnetic field applied at the out-of-plane angle $\theta_{\text{ex}}$. \subsection{ST-FMR measurements.} The magneto-dynamical parameters of the devices under investigation were determined by performing ST-FMR measurements at room temperature on a 6$\times$18$~\mu m^2$ wide microstrip of the W(5)/Co$_{20}$Fe$_{60}$B$_{20}$(1.4)/MgO(2) stack. A radio-frequency (RF) current modulated at 98.76 Hz is injected through a high-frequency bias-T into the microstrip at frequencies characteristic of FMR (3-12 GHz), generating spin-orbit torques as well as an Oersted field under an externally applied IP magnetic field. The resulting torques excite the magnetic moment in the CoFeB layer to precess, leading to a time-dependent change in the resistance of the microstrip due to the AMR of the CoFeB layer\cite{liu2011prl}. The oscillating AMR mixes with the RF current to create a d.c. voltage, $V_{\text{mix}}$, across the microstrip, which is measured using the circuit displayed in the inset of Fig. \ref{fig:2}. All ST-FMR measurements shown in Fig. \ref{fig:2} were carried out by sweeping an in-plane field ($\phi$ = 30$^{\circ}$) from 350 to 0 mT, while the frequency of the input RF signal was kept fixed. To determine the SHA, we injected small dc currents in addition to the RF current through the \textit{dc} and \textit{rf} ports of a bias-T, respectively. The resonance feature in the voltage response from each field sweep was fitted to a sum of one symmetric and one anti-symmetric Lorentzian sharing the same resonance field and linewidth. In Fig. 
\ref{fig:3}, ST-FMR measurements on the microstrip were performed by sweeping the applied field at a fixed IP angle of $\phi$ = 22$^{\circ}$ and OOP angle of $\theta$ = 80$^{\circ}$ to measure the FMR frequency under conditions identical to those employed during the auto-oscillation measurements. \subsection{Microwave measurements.} All microwave electrical measurements were carried out at room temperature using a custom-built probe station, with the sample mounted at a fixed in-plane angle on an out-of-plane rotatable sample holder between the pole pieces of an electromagnet capable of producing a uniform magnetic field. A direct positive electric current, $I_\text{dc}$, was injected through the \textit{dc} port of a high-frequency bias-T, and the resulting auto-oscillation signal was then amplified by a low-noise amplifier with a gain of $\geq$ 32 dB and subsequently recorded using a spectrum analyzer from Rhode \& Schwarz (10 Hz-40 GHz) with a resolution bandwidth of 300 kHz. We measured multiple SHNO devices and restricted the maximum current to 1 mA for 150 nm and 2 mA for 200 nm nano-constriction devices in order to avoid irreversible changes in the output microwave characteristics due to device degradation. \subsection{Micromagnetic simulations.} The micromagnetic simulations were performed using the Graphics Processing Unit (GPU)-accelerated program MUMAX3\cite{Vansteenkiste2014} with input provided by the COMSOL simulations. A 150 nm nano-constriction SHNO device is modelled with $1024\times1024\times1$ cells with an individual cell size of $3.9\times3.9\times5$ nm$^3$. The material parameters employed in the simulations, such as the saturation magnetization $\mu_0 M_{\text{S}} =$ 0.93 T, the gyromagnetic ratio $\gamma/2\pi$ = 29.9~GHz/T, and the Gilbert damping constant $\alpha$ = 0.023, were obtained from ST-FMR measurements on a 6$\times$18$~\mu m^2$ wide microstrip of the W(5 nm)/Co$_{20}$Fe$_{60}$B$_{20}$(1.4 nm)/MgO(2 nm) stack. 
We used a PMA field of $\mu_0 H_{\text{k}}^{\perp}$= 0.57 T, slightly lower than the measured value of 0.62 T on a microstrip. The exchange stiffness constant of $A_{\text{ex}} = 19\times10^{-12}$ J/m for CoFeB was taken from Ref. \citen{sato2012cofeb}. The distribution of the charge current density and the resulting Oersted field landscape for the W/CoFeB bilayer were obtained with COMSOL simulations. The corresponding spin current is then estimated from the simulated charge current in the W layer, which generates a transverse pure spin current along the out-of-plane direction, using the spin Hall angle $\mathit{\theta_{\text{SH}}}=$ -0.41 obtained from ST-FMR measurements on a 6$\times$18$~\mu m^2$ wide microstrip. The magnetization dynamics are simulated by integrating the Landau-Lifshitz-Gilbert-Slonczewski equation over 62.5 ns, with the first 31 ns discarded in the subsequent analysis to exclude transient effects. The auto-oscillation spectra are obtained by performing a fast Fourier transform (FFT) of the simulated time evolution of the magnetization averaged over the sample volume. The full spatial maps of the magnetization are extracted from the time-domain data at a fixed time of 32 ns. \end{methods}
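For concreteness, the internal-field equations and the FMR frequency $\omega_0=\sqrt{\mathcal{A}^2-|\mathcal{B}|^2}$ above can be evaluated numerically. The sketch below (our illustration, standard library only; function names are ours) solves for $\theta_{\text{M}}$ and $H_{\text{M}}$ by bisection and uses the measured film parameters ($\mu_0 M_{\text{S}}$ = 0.93 T, $\mu_0 H_{\text{k}}^{\perp}$ = 0.62 T, $\gamma/2\pi$ = 29.9 GHz/T) at the auto-oscillation geometry $\theta_{\text{ex}}$ = 80$^{\circ}$:

```python
import math

# Film parameters from the ST-FMR measurements (fields quoted as mu0*H, in T)
MS, HK, GAMMA = 0.93, 0.62, 29.9   # mu0*Ms (T), mu0*Hk_perp (T), gamma/2pi (GHz/T)

def internal_state(H_ex, th_ex):
    """Solve H_M*cos(th_M) = H_ex*cos(th_ex) and
    (H_M - Hk + Ms)*sin(th_M) = H_ex*sin(th_ex) for (H_M, th_M) by bisection.
    Since Ms > Hk, the magnetization lags toward the plane: 0 < th_M < th_ex."""
    f = lambda t: (H_ex*math.cos(th_ex)/math.cos(t) - HK + MS)*math.sin(t) \
                  - H_ex*math.sin(th_ex)
    lo, hi = 1e-9, th_ex   # f(lo) < 0 < f(hi)
    for _ in range(80):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    th_M = 0.5*(lo + hi)
    return H_ex*math.cos(th_ex)/math.cos(th_M), th_M

def fmr_freq(H_ex, th_ex):
    """FMR frequency f0 = (gamma/2pi) * sqrt(A^2 - |B|^2), in GHz."""
    H_M, th_M = internal_state(H_ex, th_ex)
    c2 = math.cos(th_M)**2
    A = H_M - 0.5*(HK - MS)*c2
    B = -0.5*(HK - MS)*c2
    return GAMMA*math.sqrt(A*A - B*B)

print(fmr_freq(0.6, math.radians(80)))  # about 9.9 GHz at mu0*H = 0.6 T
```

Sweeping the field from 0.4 to 1 T with this sketch reproduces frequencies within the experimentally observed 5-22 GHz window.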
\section{Introduction}\label{sec:intro} The Calabi complex is a differential complex that was introduced by E.~Calabi in 1961~\cite{calabi}. It has an extended pre-history, though. One way to characterize it is as a canonical formal compatibility complex (the \emph{second Spencer sequence}) of the Killing equation on (pseudo-)Riemannian manifolds of constant curvature. The solutions of the Killing equation are (possibly only locally defined) infinitesimal isometries. In the special context of classical linear elasticity theory, the same operator also maps between the displacement and strain fields~\cite{pommaret-mech,eastwood,stven}. It is well known that for flat spaces (zero curvature) a complete set of formal compatibility conditions for the Killing equation is given by the linearized Riemann curvature operator, also known as the Saint-Venant compatibility operator in the context of elasticity~\cite{pommaret-mech,eastwood,stven}. Subsequent compatibility conditions are furnished by the Bianchi identities. Thus, it would also be reasonable to refer to it as the Killing-Riemann-Bianchi complex. Calabi's interest in the eponymous complex stemmed from the isomorphism between the cohomology of its global sections and the cohomology of the sheaf of Killing vectors. Given a fine resolution of a sheaf, like one provided by a locally exact sequence of differential operators on sections of vector bundles, the general machinery of homological algebra implies that the sheaf cohomology is in fact isomorphic to the cohomology of the global sections of its resolution, the resolution of the sheaf of locally constant functions by the de~Rham complex of differential forms on a manifold being the canonical example. The bulk of Calabi's original article was in fact spent proving that the hypotheses needed for applying this general result actually hold, thus providing a way to represent the cohomology of the Killing sheaf. 
It is the latter object that was of intrinsic interest, as it was in subsequent works by others~\cite{bbl,weil,berger}, motivated by the well known interpretations of its lowest cohomology groups: in degree-$0$ as the Lie algebra of global isometries and in degree-$1$ as the space of non-trivial infinitesimal deformations of the metric under the constant curvature restriction. Later, the Calabi complex was also seen as a non-trivial example of a formally exact compatibility complex~\cite{goldschmidt-calabi,gasqui-goldschmidt-fr,gasqui-goldschmidt,pommaret,eastwood} constructed for the Killing operator by the methods of the formal theory of partial differential equations developed by the school of D.~C.~Spencer~\cite{spencer-deform1,spencer-deform2,spencer,quillen,goldschmidt-lin}. More recently, the Calabi complex resurfaced in mathematical physics, in the context of the (pre-)symplectic and Poisson structure of relativistic classical field theories. In~\cite{kh-big,kh-peierls}, the author has shown that the degeneracy subspaces of the naturally defined pre-symplectic $2$-form and Poisson bivector on the infinite dimensional phase space of relativistic classical field theories with possible constraints and gauge invariance are controlled by the cohomology of some differential complexes. In the case of Maxwell-like theories~\cite[Sec.4.2]{kh-peierls}, this role is played by the de~Rham complex, while in the case of linearized gravity~\cite[Sec.4.4]{kh-peierls}, this role is played by the formal compatibility complex of the Killing operator. In other words, for linearization backgrounds of constant curvature (important examples include Minkowski and de~Sitter spaces, as well as quotients thereof), this is precisely the Calabi complex. 
When the linearization background is merely locally symmetric, rather than of constant curvature, the right complex to use is a slightly different one that was constructed by Gasqui and Goldschmidt~\cite{gasqui-goldschmidt-fr,gasqui-goldschmidt}. However, a discussion of the latter is beyond the scope of this work. The construction of similar complexes adapted to other background geometries appears to be an open problem. The degeneracy subspace of the Poisson bivector of a classical field theory is of importance because it translates almost directly into violations of a (strict) notion of locality of the corresponding quantum field theory, a subject that has recently been under intense investigation~\cite{dl,sdh,bds,fewster-hunt,fl,benini,hack-lingrav,bss}. The goal of this paper is to exploit the connection between the Calabi complex and Killing sheaf cohomologies, in a direction opposite to the original one of Calabi, for the purpose of obtaining results relevant to the above mentioned applications in mathematical physics. More precisely, we consider the computation of certain sheaf cohomologies to be much simpler than constructing quotient spaces of kernels of complicated differential operators. Thus, the ability to equate the Calabi cohomology groups, which for us are of primary interest, with Killing sheaf cohomology groups is a significant technical simplification. Along the way, we collect some relevant facts about the Calabi complex that are either difficult or impossible to find in the existing literature, along with other little known tools from the theory of differential complexes~\cite{tarkhanov} needed to prove the desired equivalence and to introduce cohomologies with compact supports. It is our hope that this treatment of the Calabi complex could serve as a model for the treatment of other differential complexes that are of importance in mathematical physics. 
In Section~\ref{sec:calabi}, we discuss the explicit form of the Calabi complex on any constant curvature pseudo-Riemannian manifold. The tensor bundles and differential operators between them are defined using notation and identities from the representation theory of the general linear group, which are reviewed in Appendix~\ref{app:yt-bkg}. The differential cochain homotopy operators defined in Section~\ref{sec:calabi-ops} and Appendix~\ref{app:yt-calabi}, and the relation of the formal adjoint Calabi complex to the Killing-Yano operator presented in Section~\ref{sec:calabi-adj} are likely new. Then, Section~\ref{sec:sheaves} recalls some general notions from sheaf cohomology, with emphasis on locally constant sheaves. It also covers the relation between the Calabi cohomology, with various supports, and the cohomologies of the sheaf of Killing vectors and the sheaf of Killing-Yano tensors. In Section~\ref{sec:killing} we discuss several methods for effectively computing the cohomologies of the Killing sheaf, also outside the constant curvature context. An important application of the above results is described in Section~\ref{sec:appl}, which uses the Calabi cohomology to determine the degeneracy subspaces of presymplectic and Poisson structures of linearized gravity on constant curvature backgrounds. This application, and its generalizations, constitutes the main motivation for this work. Finally, Section~\ref{sec:discuss} concludes with a discussion of the presented results and of how they could be generalized to other differential complexes of interest in the mathematical theory of classical and quantum gauge field theories in physics. It should be emphasized that the Killing sheaf cohomology can be identified with the cohomology of the Calabi complex only on pseudo-Riemannian spaces of constant curvature, where the latter complex is actually defined. The Killing sheaf itself has a wider domain of definition. 
In terms of applications to linearized gravity, the differential complexes that are to replace the Calabi complex on other background geometries are still expected to have cohomology isomorphic to that of the Killing sheaf. So, from that perspective, the Calabi complex is a particular case study and the Killing sheaf is an object of more permanent value. \section{The Calabi complex}\label{sec:calabi} Below, in Sections~\ref{sec:calabi-tens} and~\ref{sec:calabi-ops}, we shall explicitly describe the Calabi complex as a complex of differential operators between tensor bundles on a pseudo-Riemannian manifold $(M,g)$. Furthermore, we will explicitly list a corresponding sequence of differential operators that constitute a cochain homotopy from the Calabi complex to itself. The cochain maps induced by the homotopy operators will have the same principal symbol as the tensor Laplacian $\nabla_a \nabla^a$ induced by the Levi-Civita connection of the metric tensor $g$, though they will differ from it by lower-order terms. This geometric structure is very similar to that of the Hodge theory of the de~Rham complex on a Riemannian manifold. This structure is used in the later Section~\ref{sec:sh-res} to show the complex's local exactness. Finally, in Section~\ref{sec:calabi-adj}, we will describe the formal adjoint Calabi complex, with the formal adjoint cochain maps and homotopies playing roles analogous to the original ones. It turns out that, just as the Calabi complex resolves the sheaf of Killing vectors on $(M,g)$, its formal adjoint complex resolves the sheaf of rank-$(n-2)$ Killing-Yano tensors. A non-negligible amount of work~\cite{bbl,gasqui-goldschmidt-fr,goldschmidt-calabi,gasqui-goldschmidt,eastwood,pommaret}, though certainly not a large one, has been done on this differential complex since the original work~\cite{calabi} of Calabi in 1961. 
Its original presentation was in terms of Cartan's moving frame formalism, and much of the subsequent work did not put a strong emphasis on explicit formulas. Thus, it is a little difficult to find its presentation in terms of standard covariant derivatives on tensor bundles in the existing literature. We give such formulas below, together with a complete sequence of cochain homotopy operators from the complex to itself and their corresponding cochain maps. These formulas are apparently new, as their role was played by a more generic, but somewhat less natural, construction applicable to general elliptic complexes in~\cite{calabi,bbl,gasqui-goldschmidt-fr,goldschmidt-calabi}. The advantage of our version is the connection of the homotopy and cochain maps with the equations of linearized gravity and their coincidence, in low degrees, with other well known related operators, which include the Killing, linearized Riemann, Bianchi, de~Donder and Ricci trace operators. One could also argue that our resulting homotopy and cochain maps are simpler, because they never exceed second differential order (in contrast to fourth differential order). Furthermore, we find that the tensor bundles that constitute the nodes of the complex are best described as having fibers that carry irreducible representations of $\mathrm{GL}(n)$, where $n$ is the dimension of the base manifold; moreover, the principal symbols of the differential operators in the complex are $\mathrm{GL}(n)$-equivariant maps. Hence they are independent of the background metric, which is no longer true for the subleading terms. This observation appears to have escaped the attention of earlier works, thus requiring some seemingly ad-hoc constructions~\cite{calabi}. A notable exception is Eastwood~\cite{eastwood}, who also identified the principal symbol complex as an instance of the general notion of BGG resolutions~\cite{bgg} in representation theory. 
Taking advantage of this connection with representation theory, we explicitly describe the tensor bundles of the complex and the equivariant principal symbol maps between them in terms of Young diagrams. \subsection{Tensor bundles and Young symmetrizers}\label{sec:calabi-tens} As was mentioned in the Introduction, it is convenient to describe the various tensor bundles involved in the Calabi complex, as well as various maps between them, in terms of irreducible representations (\emph{irreps}) of the group $\mathrm{GL}(n)$, where $n = \dim M$ is the dimension of the base manifold $M$. Irreps of $\mathrm{GL}(n)$ are concisely presented using Young diagrams and corresponding Young tensor symmetrizers. An excellent reference on this topic is the book~\cite{fulton}, to which we refer the reader for complete details. For an uninitiated reader, we have briefly summarized the relevant concepts and formulas in Appendix~\ref{app:yt-bkg}. The expert reader is encouraged to skim the same appendix for the particulars of our notation. \begin{table} \caption{The table below lists the tensor bundles of the Calabi complex, the corresponding irreducible $\mathrm{GL}(n)$ representations (labeled by Young diagrams), and their fiber ranks, for $\dim M = n$. 
The rank is given by the famous \emph{hook formula}, which is discussed in Appendix~\ref{app:yt-bkg}.} \label{tbl:bundles} \begin{center} \begin{tabular}{ccc} bundle & Young diagram & fiber rank \\ \hline \rule[0.5ex]{0pt}{2.5ex} \\[-3ex] $C_0M \cong T^*M$ & \ydiagram{1} & $n$ \\ \\[-1ex] $C_1M \cong S^2M$ & \ydiagram{2} & $\frac{n(n+1)}{2}$ \\ \\[-1ex] $C_2M \cong RM$ & \ydiagram{2,2} & $\frac{n^2(n^2-1)}{12}$ \\ \\ $C_3M \cong BM$ & \ydiagram{2,2,1} & $\frac{n^2(n^2-1)(n-2)}{24}$ \\ \\[-2ex] \hline \rule[0.5ex]{0pt}{2.5ex} \\[-3ex] $C_lM$ & \ytableaushort{{1}{},{2}{},{\none[\raisebox{-.5ex}{\vdots}]},{l}} & $\frac{n^2(n^2-1)(n-2)\cdots(n-l+1)}{2(l+1)l(l-2)!}$ \\ \\[-2ex] \hline \rule[0.5ex]{0pt}{2.5ex} \end{tabular} \end{center} \end{table} Given a base manifold $M$ of dimension $n=\dim M$, we can construct tensor bundles over $M$ whose fibers carry irreducible representations of $\mathrm{GL}(n)$. Indeed, we will consider Young symmetrized sub-bundles $\mathrm{Y}^d T^*M$ of the bundle of covariant $k$-tensors $(T^*)^{\otimes k} M$, where $d$ is a Young diagram type with $k$ cells. The Calabi complex, to be introduced in the next section, is a complex of differential operators between certain tensor bundles over $M$. Let us denote the corresponding sequence of vector bundles by $C_l M$. More precisely, \begin{equation} C_0 = T^*, ~~ C_1 = \mathrm{Y}^{(2)} T^*, ~~ C_2 = \mathrm{Y}^{(2,2)} T^*, ~~ C_l = \mathrm{Y}^{(2,2,1^{l-2})} T^* ~ (l>2) . \end{equation} Note that the bundle $C_1 M$ corresponds to symmetric $2$-tensors, which we will also denote $S^2 M$. Also, as mentioned in the preceding section, since the bundle $C_2 M$ corresponds to $4$-tensors with the algebraic symmetries of the Riemann tensor, we will also denote it $RM$. And the bundle $C_3 M$, also denoted $BM$, corresponds to $5$-tensors with symmetries of the image of the Bianchi operator applied to a section of $RM$. 
The index $l$ essentially counts the number of rows in the corresponding Young diagram. So, for $l>n$, the number of rows exceeds the base dimension and the $C_l M$ bundles become trivial. These tensor bundles, the corresponding Young diagrams and their fiber ranks are illustrated in Table~\ref{tbl:bundles}. \subsection{Differential operators}\label{sec:calabi-ops} Below, given any $n$-dimensional pseudo-Riemannian manifold $(M,g)$ of constant curvature $k$ (normalized so that the Ricci scalar curvature\footnote{We follow~\cite{wald} for conventions regarding the definitions of curvature tensors and scalars.} is equal to $k$), we give explicit formulas for the differential operators constituting the Calabi complex, as well as formulas for the differential operators that constitute a cochain homotopy from the complex to itself and the corresponding induced cochain maps. In Calabi's original work~\cite{calabi}, the corresponding differential operators were constructed using an orthogonal coframe formalism. Thus, it has been difficult to find explicit formulas for these operators in the tensor formalism that is more prevalent in the physics literature on relativity. The cochain homotopy operators and the induced cochain maps coincide, in low degrees, with differential operators well known in the relativity literature. However, their explicit form in all degrees appears to be new. Furthermore, we explicitly demonstrate all the identities between these differential operators that lead to their homological algebra interpretations. We use a mixture of elementary arguments, as well as equivariance and standard $\mathrm{GL}(n)$-representation theoretic identities, unlike Calabi's original proofs~\cite{calabi} that relied on somewhat ad hoc algebraic constructions, and unlike the derivation of Gasqui and Goldschmidt~\cite{gasqui-goldschmidt-fr,gasqui-goldschmidt} that relied on the sophisticated theory of Spencer sequences. 
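Before turning to the operators themselves, the fiber ranks listed in Table~\ref{tbl:bundles} can be checked directly with the hook content formula for $\mathrm{GL}(n)$ irreps, $\dim = \prod_{(i,j)} (n+j-i)/h(i,j)$, the product running over the cells of the Young diagram with hook lengths $h(i,j)$. A short computational sketch (our illustration; the function name is ours):

```python
def hook_dim(shape, n):
    """Dimension of the GL(n) irrep labeled by the Young diagram `shape`
    (a partition, listed as row lengths), via the hook content formula:
    dim = prod over cells (i, j) of (n + j - i) / hook(i, j),  0-based i, j."""
    cols = [sum(1 for r in shape if r > j) for j in range(max(shape))]
    num = den = 1
    for i, row in enumerate(shape):
        for j in range(row):
            num *= n + j - i                       # "content" factor n + j - i
            den *= (row - j) + (cols[j] - i) - 1   # hook length of cell (i, j)
    return num // den

# Fiber ranks of the Calabi bundles C_0, ..., C_4 for n = dim M = 4:
shapes = [[1], [2], [2, 2], [2, 2, 1], [2, 2, 1, 1]]
print([hook_dim(s, 4) for s in shapes])  # [4, 10, 20, 20, 6]
```

For the two-column shapes $(2,2,1^{l-2})$ this reproduces the closed-form rank $n^2(n^2-1)(n-2)\cdots(n-l+1)/\big(2(l+1)l(l-2)!\big)$ from the table.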
First, we define a number of differential operators that will be convenient for our purposes. For homogeneous differential operators with constant coefficients, the operator is completely determined by the principal symbol. In general that is not the case, yet the presence of a preferred connection on tensor bundles (the $g$-compatible Levi-Civita connection) still allows us to specify operators by their principal symbols: the covariant derivative is applied to a tensor $k$ times, the derivative indices are fully symmetrized, and the principal symbol is applied to the result. The principal symbol of a $k$-th order differential operator between two Young symmetrized bundles $\mathrm{Y} T^*$ and $\mathrm{Y}' T^*$ is a linear map between them that depends polynomially on a covector $p\in T^*$. If the operator (or just its principal symbol) is $\mathrm{GL}(n)$-equivariant, then the principal symbol actually corresponds to an intertwiner between the $\mathrm{Y}^{(k)}\otimes \mathrm{Y}$ and $\mathrm{Y}'$ representations. Such an intertwiner is non-zero only if $\mathrm{Y}'$ appears in the irrep decomposition of the tensor product. Moreover, if $\mathrm{Y}'$ appears with single multiplicity, the intertwiner (and hence the principal symbol) is determined uniquely up to a scalar factor. It is an old result due to Pieri~\cite{fulton} that, in fact, the decomposition of the product $\mathrm{Y}^{(k)}\otimes \mathrm{Y}$ into irreps has only single multiplicities. Not all principal symbols of importance to us are equivariant. The main source of the lack of equivariance is the dependence on the metric $g$. However, if the metric itself is also allowed to transform, the principal symbol becomes equivariant again. For instance, if the operator is equivariant in this way and depends linearly on the metric in covariant form, it corresponds to an intertwiner between the representations $\mathrm{Y}^{(2)}\otimes \mathrm{Y}^{(k)} \otimes \mathrm{Y}$ and $\mathrm{Y}'$. 
Because of the presence of a double tensor product, Pieri's rule does not always apply, so sometimes more information is necessary to specify the desired intertwiner unambiguously. As a rule, these ambiguities will be resolved by giving explicit formulas. Observe that all the tensor bundles defined in Section~\ref{sec:calabi-tens} correspond to Young diagrams with at most two columns. We shall refer to the columns as \emph{left} and \emph{right}. Let $\d_L$ and $\d_R$, the \emph{left} and \emph{right exterior differentials}, be differential operators that increase by one the number of boxes in the, respectively, left or right column. They have equivariant principal symbols. We also define several operators whose principal symbols involve the metric. Two operators of order $0$ are the \emph{trace} $\operatorname{tr}$ and the \emph{metric exterior product} $(g\odot -)$, respectively, decreasing (contracting indices between the two columns) or increasing (multiplying by $g$ and symmetrizing) by one the number of boxes in each column. Two operators of order $1$ are the \emph{left} and \emph{right codifferentials} $\delta_L$ and $\delta_R$, which decrease (taking a covariant divergence and resymmetrizing, if necessary) by one the number of boxes in, respectively, the left or right column. Finally, we have the \emph{tensor Laplacian} $\square$, a differential operator of order $2$ that does not alter the Young symmetry. Explicit formulas for these operators, along with proofs that they respect the corresponding Young symmetries, are given in Appendices~\ref{app:yt-ops}, \ref{app:yt-comp}, \ref{app:yt-calabi}. 
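Pieri's rule invoked above can be made concrete: $\mathrm{Y}^{(k)}\otimes \mathrm{Y}_\lambda$ decomposes into the irreps labeled by the diagrams obtained from $\lambda$ by adding a horizontal strip of $k$ boxes (no two added boxes in the same column), each appearing with multiplicity one. A sketch enumerating these shapes (our illustration; the function name is ours):

```python
def pieri(shape, k):
    """Young diagrams obtained from `shape` (a partition as row lengths)
    by adding a horizontal strip of k boxes: the constituents of the
    GL(n) tensor product Y^(k) (x) Y_shape by Pieri's rule."""
    lam = list(shape) + [0]   # a horizontal strip adds at most one new row
    out = []

    def go(i, rem, new):
        if i == len(lam):
            if rem == 0:
                out.append(tuple(r for r in new if r))
            return
        # horizontal strip condition: lam[i] <= new_i, and new_i <= lam[i-1]
        cap = lam[i - 1] if i > 0 else lam[0] + rem
        for c in range(lam[i], min(cap, lam[i] + rem) + 1):
            go(i + 1, rem - (c - lam[i]), new + [c])

    go(0, k, [])
    return out

print(pieri([2], 2))     # 2nd-order symbol out of S^2:
                         # [(2, 2), (3, 1), (4,)] -- (2, 2) occurs exactly once
print(pieri([2, 2], 1))  # 1st-order symbol out of R: [(2, 2, 1), (3, 2)]
```

The single occurrence of $(2,2)$ in the first decomposition, and of $(2,2,1)$ in the second, is what fixes the principal symbols of $B_2$ and $B_3$ below up to scale.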
The differential operators constituting the Calabi complex, as well as the cochain homotopy and the induced cochain self-maps, fit into the following diagram: \begin{equation}\label{eq:calabi-diag} \begin{tikzcd} 0 \arrow{r} & C_0 \arrow{r}{B_1} \arrow{d}[swap]{P_0} & C_1 \arrow{r}{B_2} \arrow{d}[swap]{P_1} \arrow[dashed]{dl}[swap]{E_1} & C_2 \arrow{r}{B_3} \arrow{d}[swap]{P_2} \arrow[dashed]{dl}[swap]{E_2}& C_3 \cdots \arrow{r}{B_{n}} \arrow{d}[swap]{P_3} \arrow[dashed]{dl}[swap]{E_3}& C_{n} \arrow{r} \arrow{d}[swap]{P_n} \arrow[dashed]{dl}[swap]{E_{n}} & 0 \\ 0 \arrow{r} & C_0 \arrow{r}{B_1} & C_1 \arrow{r}{B_2} & C_2 \arrow{r}{B_3} & C_3 \cdots \arrow{r}{B_{n}} & C_{n} \arrow{r} & 0 \end{tikzcd} , \end{equation} where for simplicity we have used the symbol $C_l$ to stand for the space of sections $\Gamma(C_lM)$. The operators $B_l$ constitute a complex, because $B_{l+1} \circ B_l = 0$. The solid arrows in the diagram commute, $P_{l+1} \circ B_{l+1} = B_{l+1} \circ P_l$, so that the $P_l$ are cochain maps from the complex to itself. These cochain maps, $P_l = E_{l+1} \circ B_{l+1} + B_l \circ E_l$, are induced by the homotopy operators $E_l$, which appear as dashed arrows. Below, we give explicit formulas for each of these operators, discuss these identities, and relate them to well known differential operators from the literature on relativity. We follow the notational conventions of Appendices~\ref{app:yt-bkg}, \ref{app:yt-ops}. In particular, we use $:$ to separate fully antisymmetric tensor index groups belonging to different columns of the Young diagram, which characterizes the symmetry type of a given tensor. However, for simplicity, we also write $g_{ab} = g_{a:b}$ and $h_{ab} = h_{a:b}$.
\begin{align} \notag B_1[v]_{a:b} &= K[v]_{a:b} = \nabla_a v_b + \nabla_b v_a , \\ \notag B_2[h]_{ab:cd} &= -2\tilde{R}[h]_{ab:cd} \\ \notag &= \left( \nabla_{(a}\nabla_{c)} h_{bd} - \nabla_{(b}\nabla_{c)} h_{ad} - \nabla_{(a}\nabla_{d)} h_{bc} + \nabla_{(b}\nabla_{d)} h_{ac} \right) \\ & \qquad {} + \frac{k}{n(n-1)} (g\odot h)_{ab:cd} , \\ \notag B_3[r]_{abc:de} &= \bar{B}[r]_{abc:de} = \d_L[r]_{abc:de} = 3\nabla_{[a} r_{bc]:de} \\ &= \nabla_a r_{bc:de} + \nabla_b r_{ca:de} + \nabla_c r_{ab:de} , \\ \notag B_4[b]_{abcd:ef} &= \d_L[b]_{abcd:ef} = 4\nabla_{[a} b_{bcd]:ef} \\ &= \nabla_a b_{bcd:ef} - \nabla_b b_{cda:ef} + \nabla_c b_{dab:ef} - \nabla_d b_{abc:ef} , \\ B_l[b]_{a_1\cdots a_l:bc} &= \d_L[b]_{a_1\cdots a_l:bc} = l\nabla_{[a_1} b_{a_2\cdots a_l]:bc} \quad (l \ge 3) . \end{align} Note that we have introduced some suggestive alternative notations for the operators $B_l$ of low rank. In particular, $B_1 = K$ is the \emph{Killing} operator. Then, $B_2 = -2\tilde{R}$ is the linearized \emph{corrected Riemann} curvature operator,% \footnote{The same corrected curvature tensor can be obtained by linearizing the mixed form $R[g]_{ab}{}^{cd}$ of the Riemann tensor and then lowering all indices with the background metric. This linearized mixed Riemann tensor was previously used to isolate the gauge invariant metric perturbations on de~Sitter space in~\cite{roura}. That the linearized corrected Riemann tensor annihilates the Killing operator also follows from the classical analysis in~\cite{sw-pert}, which noted that the linearization of any tensor built only out of the metric and vanishing on the background spacetime is invariant under linearized diffeomorphisms.} % where $R[g+\lambda h] - \bar{R}[g+\lambda h] = \lambda \tilde{R}[h] + O(\lambda^2)$, with $\bar{R}[g]_{ab:cd} = \frac{k}{n(n-1)} (g_{ac} g_{bd} - g_{bc} g_{ad})$, cf.~Equation~\eqref{eq:Rb-def} in Appendix~\ref{app:yt-comp}.
The precise relation with the linearized Riemann tensor operator is \begin{equation}\label{eq:linR-def} \dot{R}[h] = -\frac{1}{2}B_2[h] + k \frac{2}{n(n-1)}(g\odot h). \end{equation} Note that the identity $R[g] - \bar{R}[g] = 0$ holds precisely when the metric $g$ has constant curvature $k$. Finally, $B_3 = \bar{B}$ is the background \emph{Bianchi} operator, which also happens to coincide with the left exterior differential $\d_L$. It satisfies the well known Bianchi identity $\bar{B}[R[g]] = 0$. The operators $B_l$ for $l>3$, which we may call \emph{higher Bianchi} operators, do not appear to have been studied in the literature on relativity. So, as mentioned in the Introduction, the Calabi complex might also be legitimately referred to as the Killing-Riemann-Bianchi complex. Now we give mostly elementary arguments for the composition identities $B_{l+1}\circ B_l = 0$. Recall that if $v$ is a vector field (identified with a section of $C_0M \cong T^*M$ using the metric), then the Lie derivative of the metric along $v$ is given by the Killing operator, $\mathcal{L}_v g = K[v]$. Now, suppose that $T[g]$ is any tensor field covariantly constructed out of the metric and its derivatives. Consider its linearization $T[g+\lambda h] = T[g] + \lambda \dot{T}[h] + O(\lambda^2)$. The linearization $\dot{T}$ annihilates the Killing operator if $T[g] = 0$~\cite{sw-pert}. This follows from the fact that $T[g]$ is itself a tensor field, so that \begin{equation} \mathcal{L}_v T[g] = \dot{T}[\mathcal{L}_v g] = \dot{T}\circ K[v] . \end{equation} Letting $T[g]_{ab:cd} = R[g]_{ab:cd} - \bar{R}[g]_{ab:cd}$ we obtain the identity $B_2\circ B_1 = -2\tilde{R}\circ K = 0$, since $T[g] = 0$ because $g$ has constant curvature $k$. Further, note that, since the metric is covariantly constant, $\nabla g = 0$, it is trivial to check that $\bar{B}[\bar{R}[g]] = 0$, for any $g$.
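As an elementary cross-check of $B_2\circ B_1 = 0$, in the flat case ($k=0$, with covariant derivatives replaced by commuting partials and the symmetrizations in $B_2$ therefore trivial) the identity can be verified symbolically for an arbitrary vector field. The following sympy sketch (our own illustration, not part of the text) does so in $3$ dimensions.

```python
import sympy as sp
from itertools import product

# Arbitrary smooth vector field v on flat R^3 (k = 0)
n = 3
x = sp.symbols(f'x0:{n}')
v = [sp.Function(f'v{a}')(*x) for a in range(n)]

# B_1 = K, the Killing operator: K[v]_ab = d_a v_b + d_b v_a
h = [[sp.diff(v[b], x[a]) + sp.diff(v[a], x[b]) for b in range(n)]
     for a in range(n)]

# B_2 with k = 0; for commuting partials the (..) symmetrizations are trivial
def B2(h, a, b, c, d):
    return (sp.diff(h[b][d], x[a], x[c]) - sp.diff(h[a][d], x[b], x[c])
            - sp.diff(h[b][c], x[a], x[d]) + sp.diff(h[a][c], x[b], x[d]))

# B_2 o B_1 = 0 identically, for every index combination
assert all(sp.simplify(B2(h, a, b, c, d)) == 0
           for a, b, c, d in product(range(n), repeat=4))
```

On a curved constant curvature background the same cancellation requires the commutator terms and the $k$-proportional correction, as in the argument given above.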
Combining the observation that $\bar{B}[\bar{R}[g]] = 0$ with the Bianchi identity, we find that $\bar{B}[R[g]-\bar{R}[g]] = 0$, for any $g$. Making the dependence of $\bar{B} = \bar{B}_g$ on $g$ explicit, the linearization of this identity gives \begin{multline} \bar{B}_{g+\lambda h}[R[g+\lambda h] - \bar{R}[g+\lambda h]] \\ = \bar{B}_g[R[g]-\bar{R}[g]] + \lambda(\bar{B}[\tilde{R}[h]] + \dot{B}[h,R[g]-\bar{R}[g]]) + O(\lambda^2) = 0 , \end{multline} where $\bar{B}_{g+\lambda h}[T] = \bar{B}[T] + \lambda \dot{B}[h,T] + O(\lambda^2)$. At first order in $\lambda$, we obtain the desired identity $B_3 \circ B_2 = -2\bar{B} \circ \tilde{R} = 0$. The remaining identities, $B_{l+1}\circ B_l = \d_L^2 = 0$ for $l>2$, follow from abstract representation theoretic reasons, described in more detail in Appendices~\ref{app:yt-comp} and~\ref{app:yt-calabi}. \begin{align} \label{eq:calabi-homot-start} E_1[h]_a &= D[h]_a = \nabla^b h_{ab} - \frac{1}{2} \nabla_a h , \\ E_2[r]_{a:b} &= \operatorname{tr}[r]_{a:b} = r_{ac:b}{}^c , \\ \notag E_3[b]_{ab:cd} &= \nabla^e b_{e ab:cd} + \frac{1}{2}\nabla^e (b_{c ab:d e} - b_{d ab:c e}) \\ \notag & \quad {} -\frac{1}{2} (\nabla_c b_{ab e:d}{}^e - \nabla_d b_{ab e:c}{}^e) \\ \notag & \quad {} - \frac{1}{2} (\nabla_a b_{c b e:d}{}^e - \nabla_a b_{d b e:c}{}^e \\ & \qquad {} +\nabla_b b_{a c e:d}{}^e - \nabla_b b_{a d e:c}{}^e) , \\ \notag E_4[b]_{abc:de} &= \nabla^f b_{f abc:de} + \frac{1}{3}\nabla^f (b_{d abc:e f} - b_{e abc:d f}) \\ \notag & \quad {} + \frac{1}{3} (\nabla_d b_{abc f:e}{}^f - \nabla_e b_{abc f:d}{}^f) \\ \notag & \quad {} + \frac{1}{6} (\nabla_a b_{d bc f:e}{}^f - \nabla_a b_{e bc f:d}{}^f \\ \notag & \qquad {} +\nabla_b b_{a d c f:e}{}^f - \nabla_b b_{a e c f:d}{}^f \\ & \qquad {} +\nabla_c b_{ab d f:e}{}^f - \nabla_c b_{ab e f:d}{}^f) , \\ \notag E_5[b]_{abcd:ef} &= \nabla^i b_{i abcd:ef} + \frac{1}{4}\nabla^i (b_{e abcd:f i} - b_{f abcd:e i}) \\ \notag & \quad {} - \frac{1}{4}(\nabla_e b_{abcd i:f}{}^i - \nabla_f b_{abcd i:e}{}^i) \\ & \quad {} - 
\frac{1}{12} (\nabla_{\{e\}} b_{\{abcd\} i:f}{}^i -\nabla_{\{f\}} b_{\{abcd\} i:e}{}^i) , \\ \label{eq:calabi-homot-end} E_{l+1}[b]_{a_1\cdots a_l:bc} &= (\delta_L[b] - (-1)^l l^{-1} \d_R \circ \operatorname{tr}[b])_{a_1\cdots a_l:bc} \quad (l\ge 2) . \end{align} The notation used in the formula for $E_5$ is defined in Appendix~\ref{app:yt-bkg}. Note that $E_1 = D$ is the \emph{de~Donder} operator, used as a linearized gauge fixing condition in the literature on relativity. Also, if $R[g]$ is the Riemann tensor of the metric $g$, then $E_2[R[g]] = \operatorname{tr}[R[g]]$ is the corresponding Ricci tensor. The higher homotopy operators $E_l$ for $l>2$ do not seem to have previously appeared in the literature on relativity. \begin{align} P_0[v]_a &= \square v_a + k\frac{1}{n} v_a , \\ P_1[h]_{ab} &= \square h_{ab} - k \frac{2}{n(n-1)} h_{ab} + 2k \frac{g_{ab} \operatorname{tr}[h]}{n(n-1)} , \\ P_2[r]_{ab:cd} &= \square r_{ab:cd} - k\frac{2}{n} r_{ab:cd} + 2k\frac{(g\odot \operatorname{tr}[r])_{ab:cd}}{n(n-1)} , \\ P_3[b]_{abc:de} &= \square b_{abc:de} - k\frac{(3n-7)}{n(n-1)} b_{abc:de} - 2k\frac{(g\odot \operatorname{tr}[b])_{abc:de}}{n(n-1)} , \\ P_4[b]_{abcd:ef} &= \square b_{abcd:ef} - k\frac{(4n-14)}{n(n-1)} b_{abcd:ef} + 2k\frac{(g\odot \operatorname{tr}[b])_{abcd:ef}}{n(n-1)} , \\ \notag P_l[b]_{a_1\cdots a_l:bc} &= \square b_{a_1\cdots a_l:bc} - k\frac{(ln-l^2+2)}{n(n-1)} b_{a_1\cdots a_l:bc} \\ & \qquad {} + (-)^l 2k \frac{(g\odot \operatorname{tr}[b])_{a_1\cdots a_l:bc}}{n(n-1)} \quad (l\ge 3) . \end{align} Note our notation $\square = \nabla^a\nabla_a$ for the tensor Laplacian, which is also known as the d'Alembertian in Lorentzian signature. The operator $P_0 = D\circ K$ gives the wave-like residual gauge equation, $P_0[v] = 0$, satisfied precisely by those vector fields $v$ for which the pure gauge perturbation $h = K[v]$ satisfies the de~Donder gauge condition $D[h] = 0$ in linearized gravity.
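The composition $P_0 = D\circ K$ can be checked directly in the flat case $k=0$, where $D[K[v]]_a = \partial^b(\partial_a v_b + \partial_b v_a) - \partial_a \partial^b v_b = \square v_a$. The following sympy sketch (our own, not part of the text; indices are raised with the Euclidean metric, so raising acts trivially) verifies this.

```python
import sympy as sp

# Flat-space (k = 0) check that P_0 = D o K is the tensor Laplacian.
n = 3
x = sp.symbols(f'x0:{n}')
v = [sp.Function(f'v{a}')(*x) for a in range(n)]

# Killing operator K[v]_ab = d_a v_b + d_b v_a
K = [[sp.diff(v[b], x[a]) + sp.diff(v[a], x[b]) for b in range(n)]
     for a in range(n)]
trK = sum(K[b][b] for b in range(n))

# de Donder operator D[h]_a = d^b h_ab - (1/2) d_a h, applied to h = K[v]
DK = [sum(sp.diff(K[a][b], x[b]) for b in range(n))
      - sp.diff(trK, x[a]) / 2 for a in range(n)]

# box v_a = d^b d_b v_a
box_v = [sum(sp.diff(v[a], x[b], 2) for b in range(n)) for a in range(n)]
assert all(sp.simplify(DK[a] - box_v[a]) == 0 for a in range(n))
```

On a constant curvature background the extra zeroth order term $k\frac{1}{n}v_a$ appears, as in the displayed formula for $P_0$.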
The operator $P_1 = \operatorname{tr}{} \circ (-2\tilde{R}) + K \circ D$ is the wave-like operator of the linearized Einstein equations for gravitational perturbations $h$ in de~Donder gauge $D[h] = 0$. These two operators are well known and can be found (or their close analogs can), for instance, in \cite[Sec.7.5]{wald}; more recently they appeared in~\cite{fewster-hunt,hack-lingrav,bdm}. The higher cochain maps and the corresponding identities appear to be new. However, the identity $P_2 = E_3 \circ \bar{B} - 2\tilde{R}\circ E_2$ is related to the non-linear wave equations satisfied by the Riemann and Weyl tensors on any vacuum background, sometimes known as the \emph{Penrose wave equation}. For linearized fields, a related equation is sometimes known as the \emph{Lichnerowicz Laplacian}. For more details, see references~\cite{ryan}, \cite[Sec.1.3]{lichnerowicz}, \cite[Sec.7.1]{chr-kl}, \cite[Exr.15.2]{mtw}, \cite[Eq.35]{bcjr}. \begin{rem}\label{rmk:elliptic} It is worth noting that we refer to the operators $P_l$ as wave-like because $P_l$ has the same principal symbol as the tensor Laplacian $\square = \nabla^a \nabla_a$, on Lorentzian manifolds also known as the d'Alembertian or wave operator, which is a hyperbolic differential operator. Note that the principal symbol of $P_l$ is determined only by the principal symbols of the $B_l$ and $E_l$. The principal symbols of $B_l$ are metric independent, while those of $E_l$ depend on the metric $g$ of the background pseudo-Riemannian manifold $(M,g)$. However, we are actually free to choose any metric, say $g'$ different from $g$, to construct the cochain homotopy operators, say $E'_l$. The principal symbols of the induced cochain maps $P'_l = E'_{l+1}\circ B_{l+1} + B_l \circ E'_l$ will then depend only on the one metric $g'$ and will equal the principal symbol of the tensor Laplacian $\square'$ defined with respect to $g'$.
Thus, if we choose $g'$ to be Riemannian, we can induce cochain maps $P'_l$ that are elliptic. The operators $P'_l$ will of course differ from the $P_l$ by terms of lower differential order that would depend on both $g$ and $g'$. This remark will be very useful in Proposition~\ref{prp:calabi-exact} in the discussion of the local exactness of the Calabi complex. \end{rem} \subsection{Formal adjoint complex}\label{sec:calabi-adj} Given a linear differential operator $f\colon \Gamma(E)\to \Gamma(F)$, between vector bundles $E\to M$ and $F\to M$, its \emph{formal adjoint} is a linear differential operator $f^*\colon \Gamma(\tilde{F}^*) \to \Gamma(\tilde{E}^*)$, where we have used the notation $\tilde{V}^* \cong V^* \otimes_M \Lambda^n M$ for the bundle of \emph{dual densities} of a vector bundle $V\to M$, defined as the tensor product of its linear dual bundle $V^*\to M$ with the bundle of densities $\Lambda^n M\to M$ on the base manifold of dimension $\dim M = n$. The formal adjoint operator is defined to be the unique differential operator such that a \emph{Green formula} holds, \begin{equation}\label{eq:gen-adj} \psi \cdot f[\xi] - f^*[\psi] \cdot \xi = \d G[\psi,\xi] , \end{equation} for any $\psi \in \Gamma(\tilde{F}^*)$, $\xi\in \Gamma(E)$, and some bilinear bidifferential operator \begin{equation} G\colon \Gamma(\tilde{F}^* \times_M E) \to \Gamma(\Lambda^{n-1} M). \end{equation} A formal adjoint operator always exists and is unique~\cite{anderson-small,anderson-big,tarkhanov}. In the presence of a background pseudo-Riemannian metric $g$ on $M$, we can canonically identify the trivial bundle $\mathbb{R}\times M$ with $\Lambda^n M$, via multiplication by the canonical volume form $\varepsilon_{a_1\cdots a_n}$ with respect to $g$ ($\varepsilon \in \Omega^n(M)$), and also $V \cong V^*$ for any tensor bundle $V\to M$, by lowering and raising indices with $g$, thus also canonically identifying $V \cong \tilde{V}^*$.
Below, we will take formal adjoints with respect to this identification. Recall the identity~\cite{levi-civita} \begin{equation} \varepsilon^{a a_2\cdots a_n} \varepsilon_{b a_2\cdots a_n} = (-1)^s(n-1)! \delta^a_b \end{equation} (where $s$ counts the number of minuses in the signature of the metric $g$, with $s=1$ for Lorentzian metrics with mostly-plus convention) and define \begin{equation} G^a = \frac{(-1)^s}{(n-1)!} \varepsilon^{a a_2 \cdots a_n} G_{a_2\cdots a_n} \end{equation} so that $G_{a_2\cdots a_n} = \varepsilon_{a a_2 \cdots a_n} G^a$. The right hand side of the formal adjoint equation~\eqref{eq:gen-adj} can then be rewritten as \begin{equation} (\d G)_{a_1\cdots a_n} = \frac{(-1)^s}{n!} \varepsilon_{a_1\cdots a_n} \varepsilon^{a b_2\cdots b_n} n \nabla_{a} G_{b_2\cdots b_n} = \varepsilon_{a_1\cdots a_n} \nabla_a G^a, \end{equation} with the whole equation becoming \begin{equation}\label{eq:adj} \psi\cdot f[\xi] - f^*[\psi]\cdot \xi = \nabla_a G^a[\psi,\xi] , \end{equation} where the dot indicates contraction of indices using the metric $g$ between two tensors of the same index structure. With this notation, the formal adjoint Calabi complex $(C_\bullet,B_\bullet^*)$ fits into the following diagram: \begin{equation}\label{eq:dual-calabi-diag} \begin{tikzcd} 0 \arrow[<-]{r} & C_0 \arrow[<-]{r}{B_1^*} \arrow[<-]{d}[swap]{P_0^*} & C_1 \arrow[<-]{r}{B_2^*} \arrow[<-]{d}[swap]{P_1^*} \arrow[<-,dashed]{dl}[swap]{E_1^*} & C_2 \arrow[<-]{r}{B_3^*} \arrow[<-]{d}[swap]{P_2^*} \arrow[<-,dashed]{dl}[swap]{E_2^*}& C_3 \cdots \arrow[<-]{r}{B_{n}^*} \arrow[<-]{d}[swap]{P_3^*} \arrow[<-,dashed]{dl}[swap]{E_3^*}& C_{n} \arrow[<-]{r} \arrow[<-]{d}[swap]{P_n^*} \arrow[<-,dashed]{dl}[swap]{E_{n}^*} & 0 \\ 0 \arrow[<-]{r} & C_0 \arrow[<-]{r}{B_1^*} & C_1 \arrow[<-]{r}{B_2^*} & C_2 \arrow[<-]{r}{B_3^*} & C_3 \cdots \arrow[<-]{r}{B_{n}^*} & C_{n} \arrow[<-]{r} & 0 \end{tikzcd} , \end{equation} where we have identified $\tilde{C}_i^* \cong C_i$ using the background metric. 
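The $\varepsilon$-contraction identity recalled above is easy to verify numerically. In the following sketch (our own, not part of the text) we use the mostly-plus Lorentzian metric in $n=4$ dimensions ($s=1$); since $\sqrt{|\det g|}=1$ for a diagonal $\pm 1$ metric, the components of the volume form coincide with the permutation symbol.

```python
import itertools, math
import numpy as np

def perm_sign(p):
    """Sign of a permutation, by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

n, s = 4, 1                                   # mostly-plus Lorentzian metric
g = np.diag([-1.0] + [1.0] * (n - 1))
ginv = np.linalg.inv(g)

eps = np.zeros((n,) * n)                      # epsilon, all indices down
for p in itertools.permutations(range(n)):
    eps[p] = perm_sign(p)

# raise all indices of one epsilon with the inverse metric
eps_up = np.einsum('ae,bf,cg,dh,efgh->abcd', ginv, ginv, ginv, ginv, eps)

# eps^{a a2 a3 a4} eps_{b a2 a3 a4} = (-1)^s (n-1)! delta^a_b
lhs = np.einsum('acde,bcde->ab', eps_up, eps)
rhs = (-1) ** s * math.factorial(n - 1) * np.eye(n)
assert np.allclose(lhs, rhs)
```

The factor $(-1)^s$ enters through $\det g^{-1}$ when all indices of one $\varepsilon$ are raised.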
Note that all the analogous identities are satisfied: the solid arrows in diagram~\eqref{eq:dual-calabi-diag} commute and the dashed arrows are homotopy operators inducing the vertical cochain maps, $P_i^* = B_{i+1}^* \circ E_{i+1}^* + E_i^* \circ B_i^*$. The main difference is that the $B_\bullet^*$ now decrease the degree index by one instead of increasing it. The usual numbering convention can be achieved by relabelling, but we shall not do so here, expecting that no confusion will arise. Recall that the final differential operator $B_n$ of the Calabi complex is \begin{equation} B_n[b]_{a_1 \cdots a_n : bc} = \d_L[b]_{a_1\cdots a_n : bc} = n \nabla_{[a_1} b_{a_2 \cdots a_n]:bc} , \end{equation} where $b\in \Gamma(C_{n-1} M)$. To compute its formal adjoint, let $c\in \Gamma(C_n M)$ and consider first the identity, derived in Appendix~\ref{app:yt-adj}, \begin{multline}\label{eq:dlDl-adj} \nabla_a (c^{a a_2\cdots a_n : bc} b_{a_2\cdots a_n : bc}) \\ = \frac{1}{n} c^{a a_2\cdots a_n : bc} \d_L[b]_{a a_2\cdots a_n : bc} + \delta_L[c]^{a_2\cdots a_n : bc} b_{a_2\cdots a_n : bc} . \end{multline} Note that the operators $\d_L$ and $\delta_L$ specifically produce tensors of the appropriate Young type. Therefore, the formal adjoint operator $B_n^*$ is given by the formula \begin{align} B_n^*[c]_{a_2\cdots a_n:bc} &= -\frac{1}{n}\delta_L[c]_{a_2\cdots a_n:bc} \\ &= -\frac{1}{n}\nabla^a c_{a a_2\cdots a_n:bc} - \frac{2}{n(n-1)} \nabla^a c_{[b|a_2\cdots a_n:|c]a} , \end{align} with the Green form represented by $G^a[c,b] = \frac{1}{n} c^{a a_2\cdots a_n:bc} b_{a_2\cdots a_n:bc}$. While this operator $B_n^*$ may look unfamiliar, after a further local invertible transformation the equation $B_n^*[c] = 0$ becomes equivalent to the well known \emph{rank-$(n-2)$ Killing-Yano equation}.
Let us define a rank-$(n-2)$ anti-symmetric tensor $y^{c_3\cdots c_n}$ such that \begin{align} c_{a_1\cdots a_n:bc} &= \varepsilon_{a_1\cdots a_n} y^{c_3\cdots c_n} \varepsilon_{bc c_3\cdots c_n} , \\ y^{c_3\cdots c_n} &= \frac{1}{2(n-2)!(n-1)!} \varepsilon^{a_1\cdots a_n} \, c_{a_1\cdots a_n:bc} \, \varepsilon^{bc c_3\cdots c_n} . \end{align} It is straightforward to check using the hook formula (Appendix~\ref{app:young}) that the tensor $c$ of Young type $(2,2,1^{n-2})$ has the same number of independent components as the tensor $y$ of Young type $(1^{n-2})$. To transform the equation satisfied by $c$ into the Killing-Yano equation satisfied by $y$, we will need the following identities, which follow from the general properties of the $\varepsilon$ tensor~\cite{levi-civita}: \begin{align} \varepsilon^{a a_2\cdots a_n} c_{a' a_2\cdots a_n:bc} \varepsilon^{bc c_3\cdots c_n} &= 2(n-2)!(n-1)! \delta^a_{a'} y^{c_3\cdots c_n} , \\ \varepsilon^{a a_2\cdots a_n} c_{b a_2\cdots a_n:a'c} \varepsilon^{bc c_3\cdots c_n} &= (n-1)!^2 y^{b_3\cdots b_n} \delta^{[a}_{a'} \delta^{c_3}_{b_3} \cdots \delta^{c_n]}_{b_n} . \end{align} Contracting one $\varepsilon$ tensor with each index group of the equation $B_n^*[c] = 0$ we get \begin{align} 0 &= \varepsilon^{a a_2\cdots a_n} B^*_n[c]_{a_2\cdots a_n:bc} \varepsilon^{bc c_3\cdots c_n} \\ \notag &= -\frac{1}{n}\nabla^{a'} \varepsilon^{a a_2\cdots a_n} c_{a' a_2\cdots a_n:bc} \varepsilon^{bc c_3\cdots c_n} \\ &\quad {} - \frac{2}{n(n-1)} \nabla^{a'} \varepsilon^{a a_2\cdots a_n} c_{b a_2\cdots a_n:ca'} \varepsilon^{bc c_3\cdots c_n} \\ &= -\frac{2}{n}(n-1)!(n-2)! \left(\nabla^a y^{c_3\cdots c_n} - \nabla^{[a} y^{c_3\cdots c_n]}\right) . \end{align} Note that the derivative $\nabla^a y^{c_3\cdots c_n}$ takes values in the tensor bundle of Young type $(1)\otimes (1^{n-2})$. Using the well-known Littlewood-Richardson rules~\cite{fulton,lrr} this representation decomposes into the direct sum $(1^{n-1}) \oplus (2,1^{n-3})$.
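Both the component count invoked above and the quoted decomposition reduce to dimension formulas that can be checked mechanically with the hook content formula, $\dim_{\mathrm{GL}(n)}\lambda = \prod_{(i,j)\in\lambda}(n+j-i)/\mathrm{hook}(i,j)$. A small Python sketch (ours, not part of the text):

```python
from math import comb, prod

def gl_dim(lam, n):
    """Dimension of the GL(n) irrep labeled by partition `lam`,
    via the hook content formula."""
    if not lam:
        return 1
    conj = [sum(1 for r in lam if r > j) for j in range(lam[0])]
    cells = [(i, j) for i, r in enumerate(lam) for j in range(r)]
    num = prod(n + j - i for i, j in cells)                 # contents
    den = prod((lam[i] - j) + (conj[j] - i) - 1             # hook lengths
               for i, j in cells)
    return num // den

for n in range(3, 9):
    # c of type (2,2,1^{n-2}) and y of type (1^{n-2}): equal component counts
    assert (gl_dim((2, 2) + (1,) * (n - 2), n)
            == gl_dim((1,) * (n - 2), n) == comb(n, 2))
    # (1) x (1^{n-2}) = (1^{n-1}) + (2,1^{n-3}): dimensions agree
    assert (n * gl_dim((1,) * (n - 2), n)
            == gl_dim((1,) * (n - 1), n) + gl_dim((2,) + (1,) * (n - 3), n))
```

Both counts come out to $n(n-1)/2$ for the $c$/$y$ comparison, in agreement with the equivalence of the two tensor descriptions.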
Note that the antisymmetrization of the above equation gives zero. Thus, the independent components of the equation satisfied by $y$ take values in a tensor bundle of Young type $(2,1^{n-3})$, which has two columns, of lengths $n-2$ (filled with indices belonging to $y$) and $1$ (filled with the index belonging to $\nabla$). It is also well-known that this representation can be isolated by antisymmetrizing along the columns and symmetrizing any two indices between the columns. In our case, the antisymmetrization has no effect ($y$ is already antisymmetric) and the symmetrization, after lowering all indices, gives the equation \begin{equation} KY[y]_{a c_3 \cdots c_n} = \nabla_{(a} y_{c_3)\cdots c_n} = 0 , \end{equation} which is none other than the \emph{rank-$(n-2)$ Killing-Yano equation}, whose solutions are called \emph{rank-$(n-2)$ Killing-Yano} tensors or \emph{Killing $(n-2)$-forms}~\cite{stepanov}. We refer to the differential operator $KY$ as the Killing-Yano operator. So, in the same sense that the Calabi complex constitutes the compatibility complex of the Killing equation on a constant curvature background, so does the formal adjoint Calabi complex for the rank-$(n-2)$ Killing-Yano equation on the same background. \subsection{Equations of finite type, twisted de~Rham complex}\label{sec:tw-dr} The Killing and Killing-Yano equations, which lie at the base of the Calabi complex and its formal adjoint, are well known examples of partial differential equations of \emph{finite type}~\cite{goldschmidt-lin,spencer,pommaret}. That is, in any neighborhood of a point $x\in M$ they admit only a finite dimensional space of solutions. Each solution is fully determined by its value and finitely many derivatives at $x$. For the Killing and Killing-Yano equations only the first derivatives are required. This is a strong kind of unique continuation.
Such equations are called \emph{regular} if the dimension of the solution space in a sufficiently small neighborhood of a point $x\in M$ is independent of $x$. That number may, however, differ from the dimension of the global solution space, which can be strictly smaller in the presence of topological or geometric obstructions to continuing local solutions to global ones. Regular equations of finite type have a very simple existence theory. Let $F\to M$ and $E\to M$ be two vector bundles, together with a differential operator $e\colon \Gamma(F) \to \Gamma(E)$ of order $l$ such that the equation $e[\psi] = 0$, for $\psi\in \Gamma(F)$, is finite type and regular. This means that there exists an integer $k$ such that the knowledge of $j^k\psi(x)$ for any $x\in M$ is sufficient to determine the components of all higher jets of $\psi$ at $x$. Prolongation of the equation to order $k$ (Appendix~\ref{app:jets}) gives the bundle map $p^{k-l} e\colon J^k F \to J^{k-l} E$. By the regularity hypothesis, the map is of constant rank, so its kernel $V = \ker p^{k-l} e \subseteq J^k F$ is a vector bundle over $M$. Since all higher derivatives of a solution $\psi$ at $x$ are uniquely determined by $j^k\psi(x)$ and $j^k\psi$ only takes values in $V$, there is a unique $n$-dimensional subspace of $T_{(x,v)}V$ that is tangent to the graph of $j^k\psi$ for a solution $\psi$ with $j^k\psi(x) = (x,v)$. These subspaces define an $n$-dimensional distribution on the total space of the bundle $V$ and it is straightforward to check that this distribution is involutive (Lie brackets of vector fields valued in the distribution remain valued in the distribution). Thus, by the theorem of Frobenius~\cite{lang}, $V$ is foliated by $n$-dimensional leaves tangent to the given distribution. Locally, these leaves are precisely the graphs of the $k$-jet prolongations of solutions to the equation $e[\psi] = 0$.
Thus the rank $\operatorname{rk} V$ is precisely the dimension of the local solution space on any sufficiently small, connected open set in $M$. As we have already mentioned, both the Killing and Killing-Yano operators, $K\colon \Gamma(T^*M) \to \Gamma(S^2M)$ and $KY\colon \Gamma(\Lambda^{n-2}M) \to \Gamma(\mathrm{Y}^{(2,1^{n-3})}T^*M)$, define finite type equations. By virtue of their covariance, they are also regular on any pseudo-Riemannian symmetric space, which includes constant curvature backgrounds. Furthermore, on constant curvature spaces, the dimensions of their local solution spaces are $\operatorname{rk} V_K = \operatorname{rk} V_{KY} = n(n+1)/2$~\cite{stepanov}. The $n$-dimensional distribution on $V$ and the resulting foliation described above can also be described in another way, namely as a flat linear connection on $V\subseteq J^kF$~\cite[Sec.2.1.3]{morita}. The connection is linear because the original equation $e[\psi] = 0$ is itself linear. A linear connection on $V\to M$ can alternatively be described by a first order differential operator $D\colon \Gamma(V) \to \Gamma(T^*M \otimes V)$ defined by the property \begin{equation} D[\omega j^k\psi] = \d \omega \otimes j^k\psi , \end{equation} for any $\omega \in C^\infty(M)$ and solution $\psi\in \Gamma(F)$ of $e[\psi] = 0$, where its $k$-jet is treated as a section $j^k\psi \colon M \to V$. That is, a section $\phi\in \Gamma(V)\subseteq \Gamma(J^kF)$ is covariantly constant ($D[\phi] = 0$) on an open set $U\subseteq M$ iff it coincides with the $k$-jet of a solution of $e[\psi] = 0$ on $U$. So, it is clear that the equations $e[\psi] = 0$ and $D[\phi] = 0$ are equivalent (their spaces of local solutions are locally isomorphic).
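As a concrete instance of these counts, on flat $\mathbb{R}^n$ the general Killing field is $v = Ax + b$ with $A$ antisymmetric, giving $n$ translations plus $n(n-1)/2$ rotations, $n(n+1)/2$ parameters in all, each solution being fixed by its $1$-jet at a point. A sympy sketch (our own illustration, not part of the text):

```python
import sympy as sp

# Flat R^n: verify that v = A x + b with A antisymmetric solves the
# Killing equation, and count the parameters: n(n+1)/2 in total.
n = 3
x = sp.Matrix(sp.symbols(f'x0:{n}'))
A = sp.Matrix(n, n, lambda i, j: 0 if i == j else
              sp.Symbol(f'a{min(i, j)}{max(i, j)}') * (1 if i < j else -1))
b = sp.Matrix(sp.symbols(f'b0:{n}'))
v = A * x + b

# flat Killing equation: K[v]_ab = d_a v_b + d_b v_a = 0
K = sp.Matrix(n, n, lambda a, c: sp.diff(v[c], x[a]) + sp.diff(v[a], x[c]))
assert K == sp.zeros(n, n)

# n translations (b) + n(n-1)/2 rotations (A) = n(n+1)/2 parameters
assert len(A.free_symbols | b.free_symbols) == n * (n + 1) // 2
```

On a curved constant curvature background the solutions differ, but the parameter count $\operatorname{rk} V_K = n(n+1)/2$ is the same.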
As discussed in Appendix~\ref{app:jets}, this equivalence means that there exist differential operators $f$, $f'$, $g$, $g'$, $p$ and $q$, which fit into the following diagram (again, for brevity we use the bundle symbols to stand in for their spaces of sections) \begin{equation}\label{eq:conn-equiv} \begin{tikzcd} F \ar[swap]{r}{e} \ar[shift left]{d}{f} & E \ar[shift left]{d}{f'} \ar[dashed,bend right,swap]{l}{p} \\ V \ar{r}{D} \ar[shift left]{u}{g} & T^*M\otimes V \ar[shift left]{u}{g'} \ar[dashed,bend left]{l}{q} \end{tikzcd} \end{equation} and satisfy the following identities: \begin{align} D \circ f &= f' \circ e , & g \circ f &= \mathrm{id} + p \circ e , \\ e \circ g &= g' \circ D , & f \circ g &= \mathrm{id} + q \circ D . \end{align} We have already seen that on solutions, the map $f$ simply agrees with the $k$-jet extension operator $j^k$. Thus, as a differential operator of order $k$, it can be chosen to be any projection of $J^k F$ to its subspace $V$. The choice of this projection then determines the differential operator $f'$. The differential operators $g$ and $g'$ are constructed in similar ways, making sure that $f$ and $g$ are mutual inverses on solutions. The freedom in the choice of $f$, $f'$, $g$ and $g'$ also determines the operators $p$ and $q$. When it comes to a specific case, say the Killing or Killing-Yano equation, its equivalence to a local constancy condition with respect to a connection can be made explicit only once the solutions are themselves explicitly known. Thus this equivalence is mostly of theoretical, though non-negligible, interest. Having defined the flat vector bundle $(V,D)$ corresponding to a regular equation of finite type, there is a standard procedure to construct a differential complex associated to it.
It is called the \emph{twisted de~Rham complex} associated to $(V,D)$, \begin{equation}\label{eq:tw-dr} \begin{tikzcd}[column sep=scriptsize] 0 \arrow{r} & V \arrow{r}{D} & \Lambda^1 M \otimes V \arrow{r}{D} & \Lambda^2 M \otimes V \cdots \arrow{r}{D} & \Lambda^n M \otimes V \arrow{r} & 0 , \end{tikzcd} \end{equation} where $D$ has been extended to a twisted de~Rham differential, defined on sections of $\Lambda^k M \otimes V$ by the condition \begin{equation} D[\omega \otimes \psi] = \d\omega \otimes \psi + (-1)^k \omega \wedge D\psi , \end{equation} for any $\omega\in \Gamma(\Lambda^k M)$ and $\psi \in \Gamma(V)$, where we recall that $D\psi$ is a section of $T^*M\otimes V = \Lambda^1 M \otimes V$ and apply the wedge product of forms in the obvious way. \begin{rem}\label{rmk:tw-dr} Locally (on sufficiently small contractible open sets), this twisted de~Rham complex consists of $\operatorname{rk} V$ copies of the ordinary de~Rham complex. Globally, of course, if the base manifold $M$ is not simply connected, the twisted de~Rham complex $(\Lambda^\bullet M \otimes V, D)$ will differ from $\operatorname{rk} V$ copies of the ordinary de~Rham complex $(\Lambda^\bullet M,\d)$ because of the possible non-trivial bundle structure of $V\to M$ or the non-trivial monodromy of $D$ (parallel transport with respect to $D$ along closed loops). The importance of the twisted de~Rham complex will become clear in Section~\ref{sec:sheaves} where we discuss the connection between the cohomology of differential complexes and sheaf cohomology. \end{rem} For later convenience, we shall denote the twisted de~Rham complexes associated to the Killing and Killing-Yano equations, respectively, by $(\Lambda^\bullet M\otimes V_K, D_K)$ and $(\Lambda^\bullet M\otimes V_{KY}, D_{KY})$. \section{Cohomology of locally constant sheaves}\label{sec:sheaves} The main reasons for introducing some of the general machinery of sheaves and sheaf cohomology below are twofold.
First, we shall make a connection between the abstract notion of sheaf cohomology and the cohomology of a differential complex. A priori, computing the cohomology of a differential complex is a very hard problem, because it involves solving partial differential equations. On the other hand, because of the flexibility of the general machinery of sheaf cohomology, it may be computable in some effective way, for instance, by reducing it to a problem in finite dimensional linear algebra. The canonical example of where this connection can be leveraged is the computation of the de~Rham cohomology groups of a manifold $M$ using the equivalent (through sheaf theoretic machinery) computation of the simplicial (or cellular) cohomology of a finite triangulation (or cell decomposition) of $M$. The second reason is that the ideas to be introduced give us some tools to show explicitly that the cohomologies of two different differential complexes are isomorphic as long as both complexes are \emph{formally exact}, \emph{locally exact} and \emph{resolve} the same sheaf in degree-$0$ (this terminology is introduced below). \subsection{Locally constant sheaves}\label{sec:sh-lc} Recall from Section~\ref{sec:tw-dr} that a regular linear differential equation of finite type has only a finite dimensional space of local solutions, with this dimension being constant over the base manifold. It so happens that, from an abstract point of view, it is convenient to view these local solutions as a \emph{locally constant sheaf} of vector spaces.
A \emph{sheaf} $\mathcal{F}$ of vector spaces on a topological space $M$~\cite{bredon,ks} is an assignment $U\mapsto \mathcal{F}(U)$ of a vector space (of \emph{local sections} over $U$, with $\mathcal{F}(\varnothing) = 0$) to each open $U\subseteq M$ satisfying the following axioms: \emph{(restriction)} for any inclusion of opens $U\subseteq V$ there exist linear restriction maps $\mathcal{F}(V)\to \mathcal{F}(U)$, also written $f\mapsto f|_U$, such that $U\subseteq U$ induces the identity map and $U\subseteq V\subseteq W$ induces $\mathcal{F}(W) \to \mathcal{F}(U)$ in agreement with the composition $\mathcal{F}(W) \to \mathcal{F}(V) \to \mathcal{F}(U)$; \emph{(descent)} any pair of opens $U$ and $V$ induces an exact sequence $0 \to \mathcal{F}(U\cup V) \to \mathcal{F}(U)\times \mathcal{F}(V) \to \mathcal{F}(U\cap V)$, where the first map is $f \mapsto (f|_U,f|_V)$ and the second one is $(f,g) \mapsto f|_{U\cap V} - g|_{U\cap V}$. We write $\Gamma(\mathcal{F}) = \Gamma(M,\mathcal{F}) = \mathcal{F}(M)$ for the vector space of \emph{global sections} of the sheaf $\mathcal{F}$. A sheaf is called \emph{locally constant} when the number $\odim \mathcal{F}_x = \max_{U\ni x} \dim \mathcal{F}(U)$, where $U$ ranges over connected open neighborhoods of $x\in M$, is finite and does not depend on $x$, so we can write $\odim\mathcal{F} = \odim\mathcal{F}_x$. Since $\dim \mathcal{F}(U)$ can only decrease for larger connected $U$, for any $x\in M$ there exists a connected neighborhood $U$ of $x$ such that the vector spaces of local sections over smaller connected neighborhoods stabilize (the restriction map becomes an isomorphism), so that we can write $\mathcal{F}(U) \cong \bar{F}$ for some fixed vector space $\bar{F}$ that we call the \emph{stalk} of $\mathcal{F}$. Clearly, $\dim \bar{F} = \odim \mathcal{F}$. Also, $\mathcal{F}$ is called \emph{constant} when it is locally constant and $\Gamma(\mathcal{F}) \cong \bar{F}$.
Given a vector bundle $F\to M$, the assignment $\mathcal{F}(U) = \Gamma(F,U)$ of local sections of $F$ over each open $U\subseteq M$ defines a sheaf $\mathcal{F}$ on $M$, called the \emph{sheaf of (germs of) sections} of $F\to M$. Similarly, it is straightforward to check that, given another vector bundle $E\to M$ and a linear differential operator $e\colon \Gamma(F) \to \Gamma(E)$, the sets $\S_e(U) = \{ \psi\in \Gamma(F,U) \mid e[\psi] = 0 \}$ of solutions of the partial differential equation $e[\psi] = 0$ also define a sheaf $\S_e$ on $M$, called the \emph{solution sheaf} of $e\colon \Gamma(F) \to \Gamma(E)$. Following the preceding discussion of equations of finite type, it should be clear that the solution sheaves $\mathcal{K} = \S_K$ (the \emph{Killing sheaf}) and $\mathcal{KY} = \S_{KY}$ (the \emph{Killing-Yano sheaf}) of the Killing and Killing-Yano equations are locally constant, provided the background pseudo-Riemannian manifold is chosen such that these equations are regular. Another important example is the constant sheaf $\mathbb{R}_M = \S_\d$ of locally constant functions, which solve the equation $\d f = 0$, for $f\in C^\infty(M)$ and $\d$ the de~Rham differential. Sheaves are important because every sheaf $\mathcal{F}$ (of vector spaces) on $M$ automatically comes with an abstract notion of \emph{sheaf cohomology} (vector spaces) $H^p(M,\mathcal{F})$, called the $p$-th or degree-$p$ cohomology of $\mathcal{F}$, or of $M$ with coefficients in $\mathcal{F}$. Moreover, all classical cohomology theories from algebraic topology can be identified with the cohomologies of certain sheaves. Further, some superficially different looking cohomology theories may be connected through the fact that they are both equivalent to the sheaf cohomology of the same sheaf.
In particular, the classical simplicial, cellular, singular, \v{C}ech and de~Rham cohomologies of a manifold $M$ all coincide~\cite{bott-tu,bredon,ks} because they are each equivalent to the cohomology of $M$ with coefficients in the sheaf $\mathbb{R}_M$ of locally constant functions. The intrinsic definition of sheaf cohomology is somewhat involved and not entirely intuitive (unless one is already intimately familiar with \v{C}ech cohomology and the notion of local coefficients). Fortunately, the intrinsic definition can be relegated to standard references~\cite{bredon,ks} in favor of an equivalent but more practical definition using \emph{acyclic resolutions}. To explain further, we need to introduce some terminology. A \emph{complex} of sheaves of vector spaces \begin{equation}\label{eq:sh-cplx} \begin{tikzcd} \cdots \arrow{r} & \mathcal{F}_i \arrow{r} & \mathcal{F}_{i+1} \arrow{r} & \cdots \end{tikzcd} \end{equation} consists of an assignment of linear maps $\mathcal{F}_i(U) \to \mathcal{F}_{i+1}(U)$ to each open $U\subseteq M$, in a way consistent with restriction maps, such that we have a complex of vector spaces of local sections (two successive maps compose to zero) \begin{equation} \begin{tikzcd} \cdots \arrow{r} & \mathcal{F}_i(U) \arrow{r} & \mathcal{F}_{i+1}(U) \arrow{r} & \cdots \end{tikzcd} \end{equation} for each open $U\subseteq M$. A local section in $\mathcal{F}_i(U)$ that is in the kernel of the corresponding map is called a \emph{cocycle} and a local section in $\mathcal{F}_i(U)$ that is in the image of the corresponding map is called a \emph{coboundary}. A sheaf complex is \emph{exact} when, for each $x\in M$, open neighborhood $U\subseteq M$ of $x$ and cocycle local section $\alpha \in \mathcal{F}_i(U)$, there exists a possibly smaller and $\alpha$-dependent open neighborhood $U'\subseteq U$ of $x$ such that $\alpha|_{U'}$ is a coboundary.
For a complex of sheaves, like~\eqref{eq:sh-cplx}, we could define its \emph{cohomology sheaves} $\H^i(\mathcal{F}_\bullet)$ (distinct from \emph{sheaf cohomology}, to be defined later), by starting with the assignment $\H^i(\mathcal{F}_\bullet)(U) = \ker(\mathcal{F}_i(U)\to \mathcal{F}_{i+1}(U)) / \operatorname{im} (\mathcal{F}_{i-1}(U) \to \mathcal{F}_i(U))$, which may not produce a sheaf but only a \emph{presheaf}, and applying the \emph{sheafification} construction to it. We will not go into the details of how sheafification turns presheaves into sheaves here, but they can be found in standard references~\cite{bredon,ks}. It suffices to point out that given a sheaf complex in non-negative degrees, $0 \to \mathcal{F}_0 \to \mathcal{F}_1 \to \cdots$, the vector space $\H^0(\mathcal{F}_\bullet)(U) \subseteq \mathcal{F}_0(U)$ consists of all cocycle local sections. In the sequel, we shall only need to refer to such cohomology sheaves in degree-$0$. Given a sheaf $\mathcal{F}$, if $\mathcal{F}_i \to \mathcal{F}_{i+1}$ is a complex of sheaves such that $\mathcal{F}_i = 0$ for $i < 0$, $\H^0(\mathcal{F}_\bullet) = \mathcal{F}$, and $\H^i(\mathcal{F}_\bullet) = 0$ for $i > 0$, we call it a \emph{resolution} of the sheaf $\mathcal{F}$. In the sequel, we shall only consider sheaves of sections of vector bundles or of solutions of some linear PDE, and only complexes of sheaves where the maps between the vector spaces of local sections are induced by restrictions of differential operators, for which the compatibility with restrictions is automatically satisfied. \subsection{Acyclic resolution by a differential complex}\label{sec:sh-res} The de~Rham complex~\cite{bott-tu} is the canonical example of a complex of sheaves of sections of vector bundles (differential forms on $M$), with maps induced by differential operators (de~Rham differentials). The Poincar\'e lemma then demonstrates that this complex of sheaves is exact, except in degree $0$, where its cohomology sheaf is the constant sheaf $\mathbb{R}_M$; in other words, it is a resolution of $\mathbb{R}_M$.
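Spelled out (a standard example; the notation $\Omega^p_M$ for the sheaf of $p$-forms, $\Omega^p_M(U) = \Gamma(\Lambda^p M, U)$, is introduced here only for this illustration), the de~Rham complex of sheaves, together with the sheaf it resolves, reads
\begin{equation}
	0 \to \mathbb{R}_M \to \Omega^0_M \xrightarrow{\d} \Omega^1_M
	\xrightarrow{\d} \cdots \xrightarrow{\d} \Omega^n_M \to 0 .
\end{equation}
In degree $0$ we have $\H^0 = \ker(\d\colon \Omega^0_M \to \Omega^1_M) = \mathbb{R}_M$, while exactness at $\Omega^p_M$ for $p>0$ follows from the Poincar\'e lemma: on a contractible open $U$ there is a homotopy operator $K$ with
\begin{equation}
	\d K \alpha + K \d \alpha = \alpha , \qquad \alpha \in \Omega^p_M(U) ,\ p > 0 ,
\end{equation}
so that any closed $p$-form $\alpha$ satisfies $\alpha = \d K\alpha$, hence is locally a coboundary.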
For simplicity, we shall call a \emph{differential complex} $(F_\bullet,f_\bullet)$ a sequence of vector bundles $F_i\to M$ and differential operators $f_i \colon \Gamma(F_{i-1}) \to \Gamma(F_i)$ satisfying $f_i \circ f_{i-1} = 0$, while implicitly setting $F_{-1} = 0$ and $f_0 = 0$. Given a differential complex, it is natural to define its cohomology vector spaces to be the cohomology of the cochain complex of global sections, $H^i(F_\bullet,f_\bullet) = H^i(\Gamma(F_\bullet),f_\bullet)$, which we also refer to as the cohomology with \emph{unrestricted supports}. Since differential operators do not increase supports, we can equally consider the cohomology of the differential complex with \emph{compact supports}, defined as $H_c^i(F_\bullet,f_\bullet) = H^i(\Gamma_c(F_\bullet),f_\bullet)$. A differential complex naturally defines a complex $0 \to \mathcal{F}_0 \to \mathcal{F}_1 \to \cdots$ of sheaves of sections of these bundles, $\mathcal{F}_i(U) = \Gamma(F_i,U)$. A differential complex is said to be \emph{locally exact} if it defines an exact complex of sheaves, except possibly in degree $0$, where the cohomology sheaf $\H^0(\mathcal{F}_\bullet)$ is the sheaf resolved by the complex. Local exactness is a very strong property that is crucial in the relation of the cohomology of a differential complex to sheaf cohomology, which we discuss next. In general, given a complex of sheaves $\mathcal{F}_i \to \mathcal{F}_{i+1}$, we call it an \emph{injective resolution} of a sheaf $\mathcal{F}$ if it is a \emph{resolution} of $\mathcal{F}$ (namely, $\mathcal{F}_i = 0$ for $i<0$ and it is exact except for $\H^0(\mathcal{F}_\bullet) = \mathcal{F}$), and each $\mathcal{F}_i$ is \emph{injective}. The injectivity condition is somewhat technical. The same can be said for the fact that every sheaf has an injective resolution. So we will not go into them here and defer to standard references instead~\cite{bredon,ks}. We will need these notions only for the following definition.
The \emph{degree-$i$ sheaf cohomology} vector spaces $H^i(\mathcal{F}) = H^i(M,\mathcal{F})$, also called the \emph{degree-$i$ cohomology of $M$ with coefficients in $\mathcal{F}$}, are defined as the cohomology vector spaces of the complex of global sections of any injective resolution $\mathcal{F}_i \to \mathcal{F}_{i+1}$ of $\mathcal{F}$, $H^i(\mathcal{F}) = H^i(\Gamma(\mathcal{F}_\bullet))$. It is important to note that sheaf cohomology is well defined. It does not depend on the chosen injective resolution, because the injectivity condition implies the existence of a homotopy equivalence between the complexes of global sections of any two such resolutions, thus forcing their cohomologies to be isomorphic. This is another technical fact that we shall not go into here. Instead, we make note of yet another technical fact that provides a practical way to compute sheaf cohomology. For that, we need two more definitions. A sheaf $\mathcal{F}$ is called \emph{acyclic} if $H^i(\mathcal{F}) = 0$ for all $i>0$, though as usual the degree-$0$ cohomology $H^0(\mathcal{F}) \cong \Gamma(\mathcal{F})$ is isomorphic to the vector space of global sections of $\mathcal{F}$. A sheaf $\mathcal{F}$ on $M$ is called \emph{soft} if for any closed $A\subseteq M$ the restriction maps $\mathcal{F}(M) \to \mathcal{F}(A)$ are surjective, where $\mathcal{F}(A) = \varinjlim_{U\supseteq A} \mathcal{F}(U)$ is the direct limit with $U$ ranging over all open sets that contain the closed set $A$. In other words, given an open $U\subseteq M$ and a closed subset $A \subseteq U$, a local section on $U$ can always be extended to a global one on $M$ without modification on $A$, but possibly modified on $U\setminus A$. What is really important for us is the following \begin{prop}\label{prp:sh-res} (i) If $\mathcal{F}$ is a sheaf on $M$, and $\mathcal{F}_i \to \mathcal{F}_{i+1}$ is a resolution of $\mathcal{F}$ by acyclic sheaves (\emph{acyclic resolution}), then $H^i(M,\mathcal{F}) \cong H^i(\Gamma(M,\mathcal{F}_\bullet))$.
(ii) Any soft sheaf on $M$ is acyclic. (iii) Given a vector bundle $F\to M$, the sheaf $\mathcal{F}$ of sections of $F$ is soft. \end{prop} \begin{proof} Any standard discussion of sheaf cohomology establishes (i) and (ii)~\cite{bredon,ks}. On the other hand, (iii) is simply a restatement of the well known Whitney extension theorem for smooth functions~\cite[Thm.2.3.6]{hoermander-I}. \end{proof} Note that the complex of sheaves corresponding to a differential complex then automatically consists of acyclic sheaves. The above proposition essentially tells us that, given a resolution of some sheaf $\mathcal{F}$ on a manifold $M$ by a locally exact differential complex $(F_\bullet,f_\bullet)$, the sheaf cohomology of $\mathcal{F}$ and the cohomology of the differential complex will coincide, $H^i(\mathcal{F}) \cong H^i(F_\bullet,f_\bullet)$. This observation will be particularly important later in Corollary~\ref{cor:calabi-sheaf}. Next, we discuss some conditions ensuring that the cohomologies of two given differential complexes are isomorphic. As we have now seen, local exactness is a very strong and useful property; unfortunately, it can be difficult to check in practice. Two weaker notions of exactness exist that are easier to check in practice. To formulate them, we refer to the notions of \emph{jets} and \emph{jet bundles}, together with associated constructions like \emph{prolongations} and \emph{principal symbols}, all briefly recalled in Appendix~\ref{app:jets}.
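Before turning to those weaker notions, let us illustrate all three parts of Proposition~\ref{prp:sh-res} at once with the de~Rham complex (a standard consequence, stated here only for orientation). Each sheaf of differential forms is the sheaf of sections of a vector bundle, hence soft by (iii) and acyclic by (ii), and the Poincar\'e lemma makes the de~Rham complex of sheaves a resolution of the constant sheaf $\mathbb{R}_M$. Part (i) then yields
\begin{equation}
	H^i(M,\mathbb{R}_M) \cong H^i\bigl(\Gamma(\Lambda^\bullet M), \d\bigr) = H^i_{\mathrm{dR}}(M) ,
\end{equation}
recovering the equivalence of sheaf and de~Rham cohomology mentioned earlier.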
Given a sequence of vector bundles $F_i$ and a complex of linear differential operators $f_{i}\colon F_{i-1} \to F_i$, each of order $k_i$, their prolongations define a complex of vector bundle morphisms, \begin{equation} \begin{tikzcd}[column sep=large] \cdots \arrow{r} & J^l F_{i-1} \arrow{r}{p^{l_i} f_i} & J^{l_i} F_i \arrow{r}{p^{l_{i+1}} f_{i+1}} & J^{l_{i+1}} F_{i+1} \arrow{r} & \cdots , \end{tikzcd} \end{equation} with $l_i = l - k_i$ and $l_{i+1} = l - k_i - k_{i+1}$, for each sufficiently large $l$. The differential complex is said to be \emph{formally exact} if the above complex of linear bundle maps over $M$ is exact, for all values of $l$ and $i$ for which it is defined. On the other hand, given $(x,p) \in T^*M$, the principal symbols of the differential operators $f_i$ define a complex of linear maps between the fibers of $F_i$ at $x$, \begin{equation} \begin{tikzcd}[column sep=large] \cdots \arrow{r} & F_{i-1,x} \arrow{r}{\sigma_{x,p} f_i} & F_{i,x} \arrow{r}{\sigma_{x,p} f_{i+1}} & F_{i+1,x} \arrow{r} & \cdots . \end{tikzcd} \end{equation} The differential complex is said to be \emph{elliptic} if the above complex is exact for every $(x,p)\in T^*M$, $p\ne 0$. These two weaker notions are distinct~\cite{smith}. Formal exactness is a good hypothesis for showing that differential operators factor in certain ways. On the other hand, ellipticity is a condition that can be used to prove local exactness, via the method of parametrices and fundamental solutions. However, the general question of determining necessary and sufficient conditions for local exactness for differential complexes is a difficult and still open problem. The main conjecture is sometimes known as \emph{Spencer's conjecture}: a formally exact, elliptic complex is locally exact~\cite{spencer,smith,shl-tarkh}. On the other hand, some supplementary sufficient conditions are known for an elliptic complex to be locally exact.
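As an example of checking ellipticity (a standard computation), take the de~Rham complex. Its principal symbol complex at $(x,p)\in T^*M$ is the Koszul complex of exterior multiplication by $p$,
\begin{equation}
	\cdots \to \Lambda^{k-1} T^*_x M \xrightarrow{p\wedge} \Lambda^k T^*_x M
	\xrightarrow{p\wedge} \Lambda^{k+1} T^*_x M \to \cdots .
\end{equation}
For $p\ne 0$, choose $v\in T_x M$ with $\langle p,v \rangle = 1$; the contraction $\iota_v$ is then an algebraic homotopy,
\begin{equation}
	p \wedge \iota_v \alpha + \iota_v (p \wedge \alpha) = \langle p,v \rangle\, \alpha = \alpha ,
\end{equation}
so any $\alpha$ with $p\wedge\alpha = 0$ satisfies $\alpha = p\wedge\iota_v\alpha$, the symbol complex is exact, and the de~Rham complex is elliptic.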
A prominent supplementary condition of this kind is the \emph{$\delta$-estimate}~\cite[Sec.1.3.13]{tarkhanov}, which first appeared in the works of Singer, Sweeney and MacKichan~\cite{spencer}. \begin{prop}\label{prp:tw-dr-exact} The twisted de~Rham complex associated to the flat bundle $(V,D)$ defined by a regular differential equation of finite type, defined in Equation~\eqref{eq:tw-dr}, is formally exact, elliptic and locally exact. \end{prop} \begin{proof} As noted in Remark~\ref{rmk:tw-dr}, the twisted de~Rham complex is locally (on sufficiently small contractible open sets) equivalent to $\operatorname{rk} V$ copies of the ordinary de~Rham complex. To see the equivalence, it suffices to locally choose a $D$-flat basis frame for $V$. Since all of the desired properties, formal exactness, ellipticity and local exactness, are purely local, it suffices to check them for the ordinary de~Rham complex. It is well known that each of these properties does hold for the de~Rham complex, having served as a model example for each. Formal exactness and ellipticity are discussed, for instance, in~\cite{spencer,pommaret,tarkhanov} and~\cite[\textsection XIX.4]{hoermander-III}. On the other hand, local exactness is essentially the content of the Poincar\'e lemma~\cite{bott-tu}. There is another way to establish local exactness that bypasses the Poincar\'e lemma and does not require an explicit local choice of a $D$-flat basis frame for $V$. In particular, as discussed for instance in the given references, local exactness and ellipticity are independent of such a choice. Then, local exactness follows provided the initial operator of the complex, the connection operator $D\colon \Gamma(V) \to \Gamma(T^*M\otimes V)$, satisfies the $\delta$-estimate. According to Example~1.3.58 of~\cite{tarkhanov}, any linear connection operator satisfies the $\delta$-estimate. Hence, by Theorem~1.3.61 of~\cite{tarkhanov}, the twisted de~Rham complex is locally exact.
\end{proof} As is well known in homological algebra, cochain maps and homotopies between them are important concepts, the first because they descend to cohomology, the second because equivalence up to homotopy descends to isomorphism on cohomology. When dealing with differential complexes, it becomes important to distinguish the case where the cochain maps and homotopies are defined by differential operators. The most important notion we will need is that of a \emph{formal homotopy equivalence}. Let $(F_\bullet,f_\bullet)$ and $(G_\bullet,g_\bullet)$ be two differential complexes. They are said to be \emph{formally homotopy equivalent} provided there exist differential operators $e_i$, $h_i$, $u_i$ and $v_i$ fitting into the diagram (we use the bundles to stand in for their spaces of sections) \begin{equation} \begin{tikzcd} \cdots \ar{r} & F_{i-1} \ar[swap]{r}{f_i} \ar[shift left]{d}{u_{i-1}} \ar[<-,shift right,swap]{d}{v_{i-1}} \ar[bend right,dashed]{l} & F_i \ar[swap]{r}{f_{i+1}} \ar[shift left]{d}{u_i} \ar[<-,shift right,swap]{d}{v_i} \ar[bend right,dashed,pos=.57,swap]{l}{e_i} & F_{i+1} \ar{r} \ar[shift left]{d}{u_{i+1}} \ar[<-,shift right,swap]{d}{v_{i+1}} \ar[bend right,dashed,pos=.4,swap]{l}{e_{i+1}} & \cdots \ar[bend right,dashed]{l} \\ \cdots \ar{r} & G_{i-1} \ar{r}{g_i} \ar[bend left,dashed]{l} & G_i \ar{r}{g_{i+1}} \ar[bend left,dashed,pos=.57]{l}{h_i} & G_{i+1} \ar{r} \ar[bend left,dashed,pos=.4]{l}{h_{i+1}} & \cdots \ar[bend left,dashed]{l} \end{tikzcd} , \end{equation} where the squares composed of solid arrows commute (cochain map condition on $u_i$ and $v_i$) and the dashed arrows are homotopy operators with respect to which $u_i$ and $v_i$ are quasi-inverses, $v_i \circ u_i - \mathrm{id} = e_{i+1}\circ f_{i+1} + f_i\circ e_i$ and $u_i \circ v_i - \mathrm{id} = h_{i+1}\circ g_{i+1} + g_i \circ h_i$. 
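To see how these identities act on cohomology (a routine check, included for completeness): if $\phi\in\Gamma(F_i)$ is a cocycle, $f_{i+1}[\phi] = 0$, then
\begin{equation}
	v_i \circ u_i [\phi] = \phi + e_{i+1}\circ f_{i+1}[\phi] + f_i\circ e_i[\phi]
	= \phi + f_i\bigl[ e_i[\phi] \bigr] ,
\end{equation}
so $v_i\circ u_i[\phi]$ differs from $\phi$ by a coboundary, and symmetrically for $u_i\circ v_i$. Hence the maps induced by $u_i$ and $v_i$ between $H^i(F_\bullet,f_\bullet)$ and $H^i(G_\bullet,g_\bullet)$ are mutually inverse isomorphisms.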
\begin{lem}\label{lem:f-exact} Consider two differential complexes $(F_\bullet,f_\bullet)$ and $(G_\bullet,g_\bullet)$ that start in degree $0$, and denote the corresponding complexes of sheaves of sections as $\mathcal{F}_i \to \mathcal{F}_{i+1}$ and $\mathcal{G}_i \to \mathcal{G}_{i+1}$. Suppose that both differential complexes are formally exact, except in degree $0$. Further, suppose that the equations $f_1[\phi] = 0$ and $g_1[\gamma] = 0$, with $\phi \in \Gamma(F_0)$ and $\gamma \in \Gamma(G_0)$, are equivalent, or in other words the degree-$0$ cohomology sheaves are isomorphic to some given sheaf $\mathcal{F} \cong \H^0(\mathcal{F}_\bullet) \cong \H^0(\mathcal{G}_\bullet)$. (i) Then there exists a formal homotopy equivalence between these differential complexes and their cohomologies are isomorphic, both with unrestricted and compact supports (or any other kind of restriction on supports): \begin{gather} H^i(F_\bullet,f_\bullet) \cong H^i(G_\bullet,g_\bullet) \quad \text{and} \quad H^i_c(F_\bullet,f_\bullet) \cong H^i_c(G_\bullet,g_\bullet) . \end{gather} (ii) If one of the differential complexes is locally exact, then both are locally exact and their cohomologies both compute the sheaf cohomology of $\mathcal{F}$: \begin{equation} H^i(M,\mathcal{F}) \cong H^i(F_\bullet,f_\bullet) \cong H^i(G_\bullet,g_\bullet) . \end{equation} \end{lem} \begin{proof} (i) Equivalence of the equations $f_1[\phi]=0$ and $g_1[\gamma]=0$ means (Appendix~\ref{app:jets}) that there exist differential operators, say $u_0\colon \Gamma(F_0) \to \Gamma(G_0)$ and $v_0\colon \Gamma(G_0)\to \Gamma(F_0)$, such that $v_0 \circ u_0[\phi] = \phi$ whenever $f_1[\phi] = 0$ and such that $u_0 \circ v_0 [\gamma] = \gamma$ whenever $g_1[\gamma] = 0$. In other words, there exist differential operators $e_1\colon \Gamma(F_1) \to \Gamma(F_0)$ and $h_1\colon \Gamma(G_1) \to \Gamma(G_0)$ such that $v_0 \circ u_0 - \mathrm{id} = e_1 \circ f_1$ and $u_0 \circ v_0 - \mathrm{id} = h_1 \circ g_1$.
These differential operators are the initial step in establishing the desired formal homotopy equivalence. We proceed by a standard induction argument from homological algebra (in fact, a version of this argument proves the independence of sheaf cohomology from the injective resolution used to compute it). Assume that all the desired differential operators have been defined up to $e_i$, $h_i$, $u_{i-1}$ and $v_{i-1}$, which also satisfy the desired identities. We can easily verify the identities \begin{align} (g_i \circ u_{i-1}) \circ f_{i-1} &= (g_i \circ g_{i-1}) \circ u_{i-2} = 0 , \\ (f_i \circ v_{i-1}) \circ g_{i-1} &= (f_i \circ f_{i-1}) \circ v_{i-2} = 0 , \end{align} which together with the formal exactness of the compositions $f_i \circ f_{i-1} = 0$ and $g_i \circ g_{i-1} = 0$ imply the factorizations $g_i \circ u_{i-1} = u_i \circ f_i$ and $f_i \circ v_{i-1} = v_i \circ g_i$, for some differential operators $u_i \colon \Gamma(F_i) \to \Gamma(G_i)$ and $v_i \colon \Gamma(G_i) \to \Gamma(F_i)$ (see Appendix~\ref{app:jets}). Further, we can also verify the identities \begin{align} (v_i \circ u_i - \mathrm{id} - f_i \circ e_i) \circ f_i &= (v_i \circ g_i) \circ u_{i-1} - f_i - f_i\circ e_i \circ f_i \\ &= f_i \circ (v_{i-1} \circ u_{i-1}) - f_i - f_i\circ e_i \circ f_i = 0 , \\ (u_i \circ v_i - \mathrm{id} - g_i \circ h_i) \circ g_i &= (u_i \circ f_i) \circ v_{i-1} - g_i - g_i\circ h_i \circ g_i \\ &= g_i \circ (u_{i-1} \circ v_{i-1}) - g_i - g_i\circ h_i \circ g_i = 0 , \end{align} which again together with formal exactness imply the factorizations $v_i \circ u_i - \mathrm{id} - f_i \circ e_i = e_{i+1} \circ f_{i+1}$ and $u_i \circ v_i - \mathrm{id} - g_i \circ h_i = h_{i+1} \circ g_{i+1}$, for some differential operators $e_{i+1} \colon \Gamma(F_{i+1}) \to \Gamma(F_i)$ and $h_{i+1} \colon \Gamma(G_{i+1}) \to \Gamma(G_i)$. This concludes the inductive step. 
Now, let us consider the cohomology of these complexes, $H^i(F_\bullet,f_\bullet) = H^i(\Gamma(F_\bullet), f_\bullet)$ and $H^i(G_\bullet,g_\bullet) = H^i(\Gamma(G_\bullet), g_\bullet)$. As is well known from homological algebra, a homotopy equivalence (of which a formal homotopy equivalence is a special kind) induces an isomorphism in cohomology: $H^i(F_\bullet, f_\bullet) \cong H^i(G_\bullet,g_\bullet)$. However, if the operators implementing the homotopy equivalence are differential operators, as in this case, we can replace unrestricted sections $\Gamma(-)$ by sections with compact supports $\Gamma_c(-)$, so that $H^i_c(F_\bullet,f_\bullet) = H^i(\Gamma_c(F_\bullet), f_\bullet)$ and $H^i_c(G_\bullet,g_\bullet) = H^i(\Gamma_c(G_\bullet), g_\bullet)$. The homotopy equivalence of the resulting complexes still holds because differential operators do not increase supports, and so we still have an isomorphism in cohomology: $H^i_c(F_\bullet, f_\bullet) \cong H^i_c(G_\bullet,g_\bullet)$. Incidentally, instead of compact supports, any other family of supports would do as well. (ii) By the local exactness hypothesis, both differential complexes provide resolutions of the sheaf $\mathcal{F}$ (which happens to be isomorphic to the solution sheaves $\S_{f_1} = \H^0(\mathcal{F}_\bullet)$ and $\S_{g_1} = \H^0(\mathcal{G}_\bullet)$). Then, by Proposition~\ref{prp:sh-res}, these resolutions are acyclic and hence the corresponding cohomologies with unrestricted supports compute the sheaf cohomology of $\mathcal{F}$. This concludes the proof. \end{proof} \subsection{Generalized Poincar\'e duality}\label{sec:dual} In Section~\ref{sec:sh-res}, we discussed how the cohomology $H^i(F_\bullet,f_\bullet)$ of a differential complex can, under optimal conditions, be equated with the cohomology $H^i(\mathcal{F})$ of the sheaf resolved by $(F_\bullet,f_\bullet)$. 
However, even under optimal conditions, this connection breaks down if we consider cohomology $H^i_c(F_\bullet,f_\bullet)$ with compact (or some other family of) supports instead of unrestricted ones. What we discuss below is a way to relate cohomology with compact supports to that with unrestricted supports, a kind of Poincar\'e duality. For the de~Rham complex on a manifold $M$, $\dim M = n$, a well known formulation of Poincar\'e duality is the isomorphism $H^p(M) \cong H^{n-p}_c(M)^*$~\cite[Rmk.5.7]{bott-tu} between cohomology in degree $p$ and the linear dual of compactly supported cohomology in degree $(n-p)$. This isomorphism is induced by the natural pairing between $p$-forms and compactly supported $(n-p)$-forms on $M$, which descends non-degenerately to cohomology. The goal of this section is to leverage the properties of the Calabi complex and its formal adjoint complex that were discussed in the preceding section to demonstrate a generalized version of Poincar\'e duality, which effectively computes the cohomology with compact supports in terms of sheaf cohomology. There are two ways to establish generalized Poincar\'e duality for a differential complex $(F_\bullet,f_\bullet)$ that would be applicable to the case of the Calabi complex and its formal adjoint. One of them, discussed in Section~\ref{sec:tw-dr-dual}, relies on the fact that the corresponding complex of sheaves resolves the sheaf of solutions of a regular differential equation of finite type (a locally constant sheaf). This method is somewhat more elementary. The other, discussed in Section~\ref{sec:elliptic-dual}, works for any elliptic complex, but requires some results from functional analysis and distribution theory. Either of these results, as will be shown in Section~\ref{sec:calabi-cohom}, can be applied to prove generalized Poincar\'e duality for the Calabi complex and its formal adjoint complex.
\subsubsection{Twisted de~Rham complex}\label{sec:tw-dr-dual} First, we will discuss the twisted de~Rham complex, as introduced in Section~\ref{sec:tw-dr}. The results will then apply to the Calabi complex and its formal adjoint by virtue of Lemma~\ref{lem:f-exact}. The strategy is straightforward and reproduces the logic of the proofs of the ordinary Poincar\'e duality, cf.~\cite[\textsection 5]{bott-tu}, \cite[Ch.11]{spivak}, or~\cite[Sec.V.4]{ghv}. First, generalized Poincar\'e duality is shown to hold on contractible open patches. Then, given a ``good cover'' of the manifold consisting of such patches, we use a version of the Mayer-Vietoris exact sequence as an inductive step to conclude that generalized Poincar\'e duality also holds on the entire manifold. First, recall that we denote the fiber of the vector bundle $V\to M$ by $\bar{V}$. Then, $\bar{V}^*$ is the fiber of the dual vector bundle $V^*\to M$. We are interested in the relation between the cohomology of the twisted de~Rham complex $H^i(\Lambda^\bullet M \otimes V,D)$ and the compactly supported cohomology of the formal adjoint complex, which happens to be $(\Lambda^\bullet M \otimes V^*,D)$, where the connection $D$ has been extended to $V^*\to M$ by the rule $\d(\xi\cdot \psi) = (D\xi) \cdot \psi + \xi \cdot (D\psi)$, with $\xi\in \Gamma(V^*)$ and $\psi\in \Gamma(V)$. Presuming that $M$ is oriented, which is a prerequisite for integrating top-degree forms, there is a duality pairing between elements of $\Gamma(\Lambda^p M \otimes V)$ and $\Gamma_c(\Lambda^{n-p} M \otimes V^*)$ given by the formula \begin{equation}\label{eq:tw-dr-pair} \langle \xi, \psi \rangle = \int_M \langle \xi \wedge \psi \rangle , \end{equation} where $\langle (\alpha \otimes \xi) \wedge (\beta \otimes \psi) \rangle = (\alpha \wedge \beta) \otimes (\xi \cdot \psi)$. 
The formal adjoint relation is established (up to signs) for $\xi\in \Gamma(\Lambda^{n-p-1}M\otimes V^*)$ and $\psi \in \Gamma(\Lambda^p M \otimes V)$ by the identity \begin{equation} \d\langle \xi \wedge \psi \rangle = \langle (D\xi) \wedge \psi \rangle - (-1)^{n-p} \langle \xi \wedge (D\psi) \rangle . \end{equation} \begin{lem}\label{lem:triv-dual} Let $U\subseteq M$ be an oriented contractible open set. Then, generalized Poincar\'e duality holds, $H^p(\Lambda^\bullet M\otimes V|_U,D) \cong H^{n-p}_c(\Lambda^\bullet M\otimes V^*|_U,D)^*$, because all of the cohomology spaces vanish except $H^0(\Lambda^\bullet M\otimes V|_U,D) \cong \bar{V}$ and $H^n_c(\Lambda^\bullet M\otimes V^*|_U,D) \cong \bar{V}^*$. \end{lem} \begin{proof} As we have already noted in the proof of Proposition~\ref{prp:tw-dr-exact}, a choice of a locally $D$-flat basis frame for $V$ over $U\subseteq M$ identifies the twisted de~Rham complex with $\operatorname{rk} V$ copies of the usual de~Rham complex. Since $U$ is contractible, such a choice is always possible. Moreover, the pairing~\eqref{eq:tw-dr-pair} reduces to the usual pairing between forms and compactly supported forms of complementary degrees on an oriented manifold. Thus, we can easily conclude that \begin{align} H^p(\Lambda^\bullet M\otimes V|_U, D) &= H^p(U)\otimes \bar{V} , \\ H^{n-p}_c(\Lambda^\bullet M\otimes V^*|_U, D) &= H^{n-p}_c(U) \otimes \bar{V}^* . \end{align} Recalling that, for contractible $U$, $H^p(U) = 0$ except for $H^0(U) = \mathbb{R}$ and $H^{n-p}_c(U) = 0$ except for $H^n_c(U) = \mathbb{R}$, concludes the proof. \end{proof} \begin{lem}[Mayer-Vietoris]\label{lem:mv} Consider two open subsets $U,W \subseteq M$.
We have the following long exact sequences in cohomology with unrestricted and compact supports, which we shall for brevity denote as $H^i(-) = H^i(\Lambda^\bullet M \otimes V|_{-}, D)$ and $H^i_c(-) = H^i_c(\Lambda^\bullet M \otimes V^*|_{-}, D)$: \begin{equation} \begin{tikzcd}[column sep=scriptsize,row sep=scriptsize] 0 \arrow{r} & H^0(U\cup W) \arrow{r} & H^0(U)\oplus H^0(W) \arrow{r}\arrow[draw=none]{d}[name=Z,shape=coordinate]{}& H^0(U\cap W) \arrow[rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}]{dll} \\ & H^1(U\cup W) \arrow{r} & H^1(U)\oplus H^1(W) \arrow{r} & H^1(U\cap W) \arrow{r} & \cdots \end{tikzcd} \end{equation} \begin{equation} \begin{tikzcd}[column sep=scriptsize,row sep=scriptsize] 0 \arrow{r} & H^0_c(U\cap W) \arrow{r} & H^0_c(U)\oplus H^0_c(W) \arrow{r}\arrow[draw=none]{d}[name=Z,shape=coordinate]{}& H^0_c(U\cup W) \arrow[rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}]{dll} \\ & H^1_c(U\cap W) \arrow{r} & H^1_c(U)\oplus H^1_c(W) \arrow{r} & H^1_c(U\cup W) \arrow{r} & \cdots \end{tikzcd} \end{equation} \end{lem} \begin{proof} Both long exact sequences in cohomology follow from short exact sequences of cochain complexes. These short exact sequences, where for brevity we write $\Gamma^i(-) = \Gamma(\Lambda^i M\otimes V|_-)$ and $\Gamma^i_c(-) = \Gamma_c(\Lambda^i M \otimes V^*|_-)$, are \begin{equation} \begin{tikzcd}[column sep=scriptsize,row sep=scriptsize] 0 \arrow{r} & \Gamma^i(U\cup W) \arrow{r} & \Gamma^i(U) \oplus \Gamma^i(W) \arrow{r} & \Gamma^i(U\cap W) \arrow{r} & 0 , \end{tikzcd} \end{equation} \begin{equation} \begin{tikzcd}[column sep=scriptsize,row sep=scriptsize] 0 \arrow{r} & \Gamma^i_c(U\cap W) \arrow{r} & \Gamma^i_c(U) \oplus \Gamma^i_c(W) \arrow{r} & \Gamma^i_c(U\cup W) \arrow{r} & 0 .
\end{tikzcd} \end{equation} In the first sequence, the maps are restrictions, $\alpha \mapsto (\alpha|_U,\alpha|_W)$ and $(\alpha,\beta) \mapsto \alpha|_{U\cap W} - \beta|_{U\cap W}$. The exactness follows from the usual ability to restrict and glue together smooth sections over open regions (their sheaf property), with a smooth partition of unity subordinate to the cover of $U\cup W$ by $U$ and $W$ providing the surjectivity of the last map. In the second sequence, the maps are extensions by zero, $\alpha \mapsto (\alpha^U_0, \alpha^W_0)$ and $(\alpha,\beta) \mapsto \alpha^{U\cup W}_0 - \beta^{U\cup W}_0$. The exactness follows from the existence of a smooth partition of unity adapted to the cover of $U\cup W$ by $U$ and $W$. These maps are clearly compatible with the connection differential operator $D$ and so are cochain maps. The general connection between short exact sequences of cochain complexes and long exact sequences in cohomology (Appendix~\ref{app:homalg}) gives the desired long exact sequences and concludes the proof. \end{proof} \begin{prop}\label{prp:tw-dr-dual} Given a flat vector bundle $(V,D)$ on an oriented $n$-dimensional manifold $M$, the unrestricted cohomology $H^p = H^p(\Lambda^\bullet M \otimes V, D)$ of the associated twisted de~Rham complex and the compactly supported cohomology $H^{n-p}_c = H^{n-p}_c(\Lambda^\bullet M\otimes V^*, D)$ of its formal adjoint complex satisfy generalized Poincar\'e duality: \begin{equation} H^p \cong (H^{n-p}_c)^* . \end{equation} \end{prop} Note the asymmetry of the isomorphism. The reverse identity $(H^p)^* \cong H^{n-p}_c$ also holds when the cohomology vector spaces are finite dimensional, but in general may not when they are infinite dimensional. \begin{proof} In this proof, we shall use induction over a special kind of open cover of $M$. An open cover $(U_k)$ of $M$ is called \emph{good} if it is locally finite, every nonempty finite intersection $U_{k_0} \cap \cdots \cap U_{k_m}$ is diffeomorphic to $\mathbb{R}^n$, and it is closed under finite intersections.
In particular, each of the $U_k$ is itself diffeomorphic to $\mathbb{R}^n$ and thus contractible. Good covers are known to exist for any manifold~\cite[Thm.5.1]{bott-tu}. Inducing an orientation on each element of the cover from the orientation on $M$, Lemma~\ref{lem:triv-dual} establishes the desired duality relation for any $U_k$ and thus the initial step of the inductive argument. Next, we show, provided the desired duality relation holds on any finite union $U_{k_0} \cup \cdots \cup U_{k_{m-1}}$ of $m$ sets, that it also holds on any finite union $U_{k_0} \cup \cdots \cup U_{k_m}$ of $m+1$ sets as well. Of course, we take all such unions to be oriented in a way compatible with the global orientation on $M$. Let $U = U_{k_m}$, $W = U_{k_0} \cup \cdots \cup U_{k_{m-1}}$ and notice that both $W$ and $W\cap U$ are finite unions of $m$ sets from the cover (recall that the cover is closed under intersections). The fact that the pairing~\eqref{eq:tw-dr-pair}, well defined on a given oriented, open $U\subseteq M$, descends to cohomology means that we always have a mapping $H^p(U) \to H_c^{n-p}(U)^*$, which may or may not be an isomorphism. It is in fact an isomorphism on $U$ and, by the inductive hypothesis, also on $W$ and $W\cap U$. 
Combining the long exact sequences of Lemma~\ref{lem:mv} for $W$ and $U$ together with these maps and isomorphisms, we obtain the following diagram (notice the arrow reversal by linear duality in the second row): \begin{equation*} \begin{tikzpicture}[baseline=(diag).base] \node[scale=.64] (diag) at (0,0){ \begin{tikzcd}[column sep=small] H^p(W\cup U) \ar{r} \ar[equal]{d} & H^p(W) \oplus H^p(U) \ar{r} \ar[equal]{d} & H^p(W\cap U) \ar{r} \ar{d} & H^{p+1}(W\cup U) \ar{r} \ar[equal]{d} & H^{p+1}(W) \oplus H^{p+1}(U) \ar[equal]{d} \\ H^{n-p}_c(W\cup U)^* \ar{r} & H^{n-p}_c(W)^* \oplus H^{n-p}_c(U)^* \ar{r} & H^{n-p}_c(W\cap U)^* \ar{r} & H^{n-p-1}_c(W\cup U)^* \ar{r} & H^{n-p-1}_c(W)^* \oplus H^{n-p-1}_c(U)^* \end{tikzcd} }; \end{tikzpicture} \end{equation*} Thus, by the $5$-lemma (Appendix~\ref{app:homalg}), the map in the center of the diagram is also an isomorphism and the inductive step is established. The only problem remaining is that a good cover is not always finite (though it can be chosen to be finite for compact manifolds). There is a way around that, however. Using a similar argument, one can show that the desired duality holds also on disjoint countable unions of finite unions of covering sets. It is at this stage that the asymmetry between the cohomologies with unrestricted and compact supports appears. Then, provided the manifold is second countable, one can choose a much coarser, yet finite, cover $(U'_k)$. The key property of this cover is that each of the non-empty finite intersections $U'_{k_0} \cap \cdots \cap U'_{k_m}$ is itself either a finite union of sets from $(U_k)$ or a disjoint countable union of those. The same $5$-lemma argument then shows that the desired generalized Poincar\'e duality relation $H^p \cong (H^{n-p}_c)^*$ holds on all of $M$. The technical details of this argument can be found in~\cite[Sec.V.4]{ghv}. 
\end{proof} \subsubsection{Elliptic complexes and Serre duality}\label{sec:elliptic-dual} Now we will discuss generic elliptic complexes, of which both the Calabi and the twisted de~Rham complexes are special cases. The result is essentially the same, though clearly more general. The arguments are somewhat less elementary and rely on some background in functional analysis and a result originally due to Serre~\cite{serre}. The Serre duality method also gives some more information: namely, that the cohomology does not change if we replace smooth functions by distributions with the same supports. Serre's original work was in the context of the Dolbeault complex in the theory of several complex variables. A good exposition of this result in the setting of general elliptic complexes can be found in~\cite{tarkhanov}. At this point it is convenient to recall some basic facts of distribution theory~\cite{schwartz,treves,reed-simon}. Recall that, for any vector bundle $F\to M$, we can interpret $\Gamma(F)$ and $\Gamma_c(F)$ as locally convex topological vector spaces, with the Whitney weak Fr\'echet topology for the former and an inductive limit over supports of similar Fr\'echet topologies for the latter, with the limit topology still locally convex but no longer Fr\'echet (metrizable). These are the usual topologies used in the theory of distributions. The spaces of \emph{distributional sections} $\Gamma'(F)$ and $\Gamma_c'(F)$ of $F$, with respectively compact and unrestricted supports, are defined as topological duals endowed with the strong topology (the usual distributional topology), $\Gamma'(F) = \Gamma(\tilde{F}^*)^*$ and $\Gamma_c'(F) = \Gamma_c(\tilde{F}^*)^*$. Recall that $\tilde{F}^* = \Lambda^n M \otimes F^*$ is the densitized dual bundle; the densitized dual of the densitized dual is the original bundle. 
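For orientation, consider the familiar special case of the bundle of differential $p$-forms, $F = \Lambda^p M$, on an oriented $n$-dimensional manifold. The orientation trivializes the density factor and the perfect wedge pairing $\Lambda^{n-p} M \otimes \Lambda^p M \to \Lambda^n M$ identifies
\begin{equation*}
	\tilde{F}^* = \Lambda^n M \otimes (\Lambda^p M)^* \cong \Lambda^{n-p} M ,
\end{equation*}
so that $\Gamma_c'(\Lambda^p M) = \Gamma_c(\Lambda^{n-p} M)^*$ recovers the classical space of de~Rham currents of degree $p$, which pair with compactly supported $(n-p)$-forms by integration.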
It so happens that, if we stick with the strong topology for dual spaces, the topological dual of $\Gamma'(F)$ is again $\Gamma(\tilde{F}^*)$ and that of $\Gamma_c'(F)$ is $\Gamma_c(\tilde{F}^*)$. So the spaces of smooth and distributional sections are \emph{reflexive} (with respect to the strong topology). Using the natural pairing \begin{equation} \langle \psi , \alpha \rangle = \int_M \psi\cdot \alpha \end{equation} between $\psi\in \Gamma(F)$ and $\alpha \in \Gamma_c(\tilde{F}^*)$, well-defined provided $M$ is oriented, we have the natural inclusions $\Gamma(F) \subset \Gamma_c'(F)$ and $\Gamma_c(F) \subset \Gamma'(F)$. By the \emph{Schwartz kernel theorem}, the continuous maps $G\colon \Gamma_c(F_1) \to \Gamma_c'(F_2)$ are in bijection with \emph{bidistributions}, elements $G\in \Gamma_c'(F_2 \boxtimes \tilde{F}^*_1)$, where $F_2 \boxtimes \tilde{F}^*_1 \to M\times M$ is the bundle with total space $F_2 \times \tilde{F}^*_1$ and the obvious projection onto its base, by the formula \begin{equation} (G\psi)(x) = \int_M G(x,y)\cdot \psi(y) . \end{equation} Let $\pi_1(x,y) = y$ and $\pi_2(x,y) = x$ denote the two projections $M\times M \to M$. We say that a bidistribution $G\in \Gamma_c'(F_2\boxtimes \tilde{F}^*_1)$ is \emph{properly supported} if $\pi_1\colon \operatorname{supp} G\to M$ is a proper map (the preimage of a compact set is compact). Differential operators define properly supported bidistributions, because their support lies on the diagonal of $M\times M$ by the crucial property that differential operators preserve supports. On the other hand, properly supported bidistributions need not preserve supports, though they still map compactly supported sections to compactly supported distributions. The amount by which the support of the image grows depends on the size of the support of the bidistribution in $M\times M$. Once we have introduced distributional sections, we can extend to them many operators that were previously defined only on smooth functions. 
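These notions are already familiar from elementary distribution theory. As a standard example (in the simplest case $M = \mathbb{R}$ with trivial line bundles), the identity map and the derivative have the Schwartz kernels
\begin{equation*}
	(\mathrm{id}\,\psi)(x) = \int_{\mathbb{R}} \delta(x-y)\, \psi(y)\, dy ,
	\qquad
	(\partial_x \psi)(x) = \int_{\mathbb{R}} \delta'(x-y)\, \psi(y)\, dy ,
\end{equation*}
both supported on the diagonal $\{x = y\} \subset \mathbb{R}\times\mathbb{R}$ and hence properly supported, in line with the support-preserving property of differential operators.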
For instance, any linear differential operator $f\colon \Gamma(F) \to \Gamma(E)$ between vector bundles $F\to M$ and $E\to M$ can be extended to act on distributions, $f\colon \Gamma'(F) \to \Gamma'(E)$ or even $f\colon \Gamma_c'(F) \to \Gamma_c'(E)$, according to the following formula: \begin{equation} \langle f[\alpha] , \psi \rangle = \langle \alpha , f^*[\psi] \rangle , \end{equation} for any $\psi \in \Gamma_c(\tilde{E}^*)$ and $\alpha \in \Gamma_c'(F)$, where $f^*$ is the formal adjoint of $f$ and $\langle -, - \rangle$ is the natural dual pairing between sections and distributions. Since this natural pairing is non-degenerate, this formula suffices to define $f$ on the larger domain. Any other operator defined on smooth sections for which the above formula applies can also be extended to distributions, possibly with a restriction on their supports. In particular, the operators of a differential complex $(F_\bullet,f_\bullet)$ can be extended to distributional sections. Then we can consider the cohomology of the complex in distributional sections, $H^i(\Gamma_c'(F_\bullet), f_\bullet)$, which may a priori be different from its cohomology in smooth sections $H^i(F_\bullet, f_\bullet) = H^i(\Gamma(F_\bullet), f_\bullet)$, and similarly with compact supports. Below we shall see some sufficient conditions for the cohomologies in smooth and distributional sections to coincide. A crucial concept in the general theory of differential complexes is that of a \emph{parametrix}~\cite[Ch.2]{tarkhanov}. Let the vector bundles $F_i$ with differential operators $f_i\colon \Gamma(F_{i-1}) \to \Gamma(F_i)$ constitute a differential complex $(F_\bullet,f_\bullet)$ on $M$. 
Then, a \emph{parametrix} is a sequence of bidistributions $G_i \in \Gamma_c'(F_{i-1} \boxtimes \tilde{F}^*_{i})$ such that \begin{equation}\label{eq:parametrix} \mathrm{id}_i - Q_{i} = G_{i+1} \circ f_{i+1} + f_i \circ G_i , \end{equation} where $\mathrm{id}_i \colon \Gamma_c(F_i) \to \Gamma_c(F_i)$ is the identity map and $Q_i \in \Gamma(F_{i} \boxtimes \tilde{F}^*_i) \subset \Gamma_c'(F_{i} \boxtimes \tilde{F}^*_i)$ is a smooth bidistribution. We say that the parametrix is \emph{properly supported} if each $G_i$ is a properly supported bidistribution. Obviously, if each $G_i$ is properly supported, then so is each $Q_i$. \begin{prop}\label{prp:ell-param} Let $(F_\bullet, f_\bullet)$ be an elliptic complex on an oriented manifold $M$. (i) Then, for any open neighborhood $U \subseteq M\times M$ of the diagonal $M\subset M\times M$, there exists a properly supported parametrix $G_i\in \Gamma'_c(F_{i-1} \boxtimes \tilde{F}^*_i)$ with support $\operatorname{supp} G_i \subseteq U$. (ii) Then also, the cohomologies of smooth and distributional sections are isomorphic: \begin{equation} H^i(\Gamma_c'(F_\bullet), f_\bullet) \cong H^i(F_\bullet,f_\bullet) \quad \text{and} \quad H^i(\Gamma'(F_\bullet), f_\bullet) \cong H^i_c(F_\bullet,f_\bullet) . \end{equation} \end{prop} \begin{proof} (i) The existence of a parametrix for any elliptic complex follows from Corollary~2.1.11 and Theorem~2.1.12 of~\cite{tarkhanov}. The support of an existing pa\-ra\-met\-rix can be restricted arbitrarily close to the diagonal since $G_i^\chi$, defined by $G_i^\chi[\psi] = \chi G_i[\psi]$, is a parametrix as long as $G_i$ is a parametrix and $\chi\in C^\infty(M\times M)$ is properly supported with $\chi \equiv 1$ on a neighborhood of the diagonal. (ii) By the defining Equation~\eqref{eq:parametrix}, the operators $Q_i$ are cochain homotopic to the identity operator, with respect to the cochain homotopy $G_i$. 
Further, being by hypothesis smooth and by (i) properly supported, they define smoothing operators, $Q_i\colon \Gamma'(F_i) \to \Gamma_c(F_i)$ and $Q_i\colon \Gamma_c'(F_i) \to \Gamma(F_i)$, when extended to distributions. It is then straightforward to see that the $Q_i$ and the inclusions of smooth sections in distributional ones (well defined because $M$ is oriented) constitute a homotopy equivalence between the complexes of smooth $(\Gamma(F_\bullet), f_\bullet)$ and distributional $(\Gamma_c'(F_\bullet), f_\bullet)$ sections, and similarly for compact supports. Thus, as desired, these complexes have isomorphic cohomologies. \end{proof} \begin{prop}[Serre, Tarkhanov]\label{prp:serre-dual} Given a differential complex $(F_\bullet, f_\bullet)$ (not necessarily elliptic) on an oriented manifold $M$ that is countable at infinity (there exists an exhaustion by a countable sequence of compact sets), let $(\tilde{F}^*_\bullet, f^*_\bullet)$ be its formal adjoint complex. The following are algebraic (the topologies may not agree) isomorphisms of vector spaces: \begin{align} H^i(F_\bullet, f_\bullet)^* &\cong H^i(\Gamma'(\tilde{F}^*_\bullet), f^*_\bullet) , & H^i(F_\bullet, f_\bullet) &\cong H^i(\Gamma'(\tilde{F}^*_\bullet), f^*_\bullet)^* , \\ H_c^i(F_\bullet, f_\bullet)^* &\cong H^i(\Gamma_c'(\tilde{F}^*_\bullet), f^*_\bullet) , & H_c^i(F_\bullet, f_\bullet) &\cong H^i(\Gamma_c'(\tilde{F}^*_\bullet), f^*_\bullet)^* , \end{align} where the cohomology vector spaces are endowed with the natural Hausdorff locally convex topology of a quotient of a subspace of the corresponding space of sections (be it smooth or distributional) and the topological duals are taken with the strong topology. \end{prop} \begin{proof} The original result of Serre~\cite{serre} appeared in the context of the Dolbeault differential complex in the theory of several complex variables. 
A detailed discussion and proof of the result for general differential complexes can be found in Sections~5.1.1 and 5.1.2 of~\cite{tarkhanov}. In particular, the desired conclusion can be found in Remark~5.1.9 thereof. Further conditions under which some of the duality isomorphisms are also continuous, and not merely algebraic, can be found there as well. \end{proof} Combining the two preceding propositions, it is easy to see that for any elliptic complex (subject to a countability condition on $M$) we have the Poincar\'e-Serre duality relation $H^i(F_\bullet, f_\bullet) = H^i_c(\tilde{F}^*_\bullet, f_\bullet^*)^*$. \subsection{The Calabi cohomology and homology}\label{sec:calabi-cohom} Below, we finally make use of the background information summarized in Sections~\ref{sec:sh-lc}, \ref{sec:sh-res}, and~\ref{sec:dual} and its consequences for the Calabi complex and its formal adjoint, $(C_\bullet,B_\bullet)$ and $(C_\bullet,B_\bullet^*)$, which were introduced in Section~\ref{sec:calabi}. Namely, we make precise the identification between their cohomologies and the sheaf cohomologies of the Killing and Killing-Yano sheaves, $\mathcal{K}$ and $\mathcal{KY}$, introduced in Section~\ref{sec:sh-lc}. The hope created by this identification is that the difficult problem of solving systems of differential equations, which appear in these complexes, can be replaced by the equivalent and potentially easier problem of computing sheaf cohomologies. The latter problem is potentially easier because of the many available methods of computing sheaf cohomology, some of which will be discussed in Section~\ref{sec:killing}. First, we introduce the basic definitions of Calabi cohomology and homology. Let us denote the cohomology of the Calabi complex (\emph{Calabi cohomology}) on a pseudo-Riemannian manifold $(M,g)$ of constant curvature as \begin{equation} HC^i(M,g) = H^i(C_\bullet, B_\bullet) = H^i(\Gamma(C_\bullet),B_\bullet) . 
\end{equation} Let us also denote the cohomology of the formal adjoint Calabi complex with compact supports (\emph{Calabi homology}) \begin{equation} HC_i(M,g) = H^i_c(C_\bullet,B^*_\bullet) = H^i(\Gamma_c(C_\bullet),B^*_\bullet) . \end{equation} The naming convention will be justified later by the generalized Poincar\'e duality relation in Corollary~\ref{cor:calabi-dual}. Similarly, we define the cohomology of the Calabi complex with compact supports (\emph{Calabi cohomology with compact supports}) as \begin{equation} HC_c^i(M,g) = H^i_c(C_\bullet, B_\bullet) = H^i(\Gamma_c(C_\bullet),B_\bullet) \end{equation} and the cohomology of the formal adjoint Calabi complex (\emph{locally finite Calabi homology}) as \begin{equation} HC^\mathit{lf}_i(M,g) = H^i(C_\bullet,B^*_\bullet) = H^i(\Gamma(C_\bullet),B^*_\bullet) . \end{equation} The following proposition is the main technical tool that we use to establish all other results in this section. \begin{prop}\label{prp:calabi-exact} Consider a pseudo-Riemannian manifold $(M,g)$ of constant curvature and dimension $n$. The corresponding Calabi complex $(C_\bullet, B_\bullet)$ is elliptic, formally exact and locally exact (except in degree $0$). The same is true for its formal adjoint complex $(C_\bullet,B_\bullet^*)$ (except in degree $n$). \end{prop} \begin{proof} In principle, we would need quite a bit of machinery for a full proof. Instead, we give a sketch of the main ideas and refer to the literature for technical details. The Calabi complex is actually an instance of a \emph{second Spencer sequence} construction~\cite{quillen,goldschmidt-lin,spencer,pommaret} applied to the Killing operator $B_1 = K$. This fact is demonstrated in the papers~\cite{gasqui-goldschmidt-fr,gasqui-goldschmidt,goldschmidt-calabi}. These papers make use of the general construction and properties of the differential complex constituting a second Spencer sequence demonstrated in~\cite{quillen,goldschmidt-lin}. 
In fact, the resulting differential complex gives a formally exact compatibility complex for the Killing operator, which is also an elliptic complex. This holds since the Killing operator $K$ is itself elliptic (has injective symbol, which follows from the property of being of finite type, cf.~Section~\ref{sec:tw-dr}) and formally integrable (contains all of its integrability conditions) on a constant curvature background. A more elementary argument for ellipticity can be made on representation theoretic grounds (Appendix~\ref{app:yt-bkg}). The fibers of the tensor bundles $C_iM$ carry irreducible representations of $\mathrm{GL}(n)$. Further, as mentioned in Remark~\ref{rmk:elliptic}, the principal symbols of the differential operators $B_i$ are all $\mathrm{GL}(n)$-equivariant maps $\sigma B_i\colon \mathrm{Y}^{(k_i)}T^* \otimes C_{i-1} \to C_i$ or equivalently $\sigma_p B_i \colon C_{i-1} \to C_i$, for $p\in T^*$. By Schur's lemma, the symbol map $\sigma B_i$ is then an isomorphism when restricted to an irreducible summand of the tensor product representation. The well-known Littlewood-Richardson rules~\cite{fulton,lrr} for tensor products of $\mathrm{GL}(n)$ representations then show that the $C_i$ irreps have been chosen precisely such that the symbol sequence $\sigma_p B_i$ is exact for $p\ne 0$. This representation-theoretic line of argument is a special case of the construction of what are known as BGG resolutions~\cite{bgg}. Finally, local exactness (except in degree $0$) can be established by checking, for the Killing operator, a sufficient condition known as the \emph{$\delta$-estimate}~\cite[Sec.1.3.13]{tarkhanov}. Equivalently, we can simply invoke Proposition~\ref{prp:tw-dr-exact}, since, being of finite type, the Killing operator is equivalent to a flat covariant operator (Section~\ref{sec:tw-dr}). A more elementary proof of local exactness was given in the original article by Calabi~\cite{calabi}. 
He relied on the well-known local exactness of the de~Rham complex and its relation to the simplified form of the complex in the flat (zero curvature) case. The non-zero curvature case was handled by embedding the constant curvature space in a flat one and then restricting and extending the relevant sheaves with respect to this embedding. Unfortunately, unlike the more sophisticated argument above, this simpler argument is unlikely to generalize when the Calabi complex is replaced by a more general one. To finish the proof, we note that the properties of formal exactness and ellipticity are obviously preserved by taking formal adjoints, so that they apply equally well to the formal adjoint Calabi complex $(C_\bullet,B_\bullet^*)$. The formal adjoint complex then serves as the formally exact compatibility complex for the Killing-Yano operator $B_n^* = KY$, which is also regular and of finite type on constant curvature backgrounds, as discussed in Section~\ref{sec:tw-dr}. Thus, repeating the same arguments as above establishes local exactness (except this time in degree $n$) for the adjoint complex as well. \end{proof} \begin{cor}\label{cor:calabi-equiv} There is a formal homotopy equivalence between the Calabi complex $(C_\bullet,B_\bullet)$ and the twisted de~Rham complex $(\Lambda^\bullet M \otimes V_K,D_K)$ resolving the Killing sheaf, $\H^0(C_\bullet,B_\bullet) = \mathcal{K}$. The same is true (up to a trivial renumbering) of the formal adjoint complex and the twisted de~Rham complex $(\Lambda^\bullet M \otimes V_{KY},D_{KY})$ resolving the Killing-Yano sheaf, $\H^n(C_\bullet,B^*_\bullet) = \mathcal{KY}$. \end{cor} \begin{proof} We already know that both the Calabi and twisted de~Rham complex associated to the Killing operator are formally exact, locally exact (Propositions~\ref{prp:calabi-exact} and~\ref{prp:tw-dr-exact}) and both resolve the Killing sheaf, since the operators $K$ and $D_K$ are equivalent (Section~\ref{sec:tw-dr}). 
Thus, by Lemma~\ref{lem:f-exact}, there exists a formal homotopy equivalence (realized by differential operators) between the two complexes. Noting that the exact same argument (with trivial changes) applies to the formal adjoint Calabi complex and the Killing-Yano sheaf concludes the proof. \end{proof} \begin{cor}\label{cor:calabi-dual} Provided the manifold $M$ is countable at infinity (there is an exhaustion by a countable sequence of compact sets) or is of finite type (has a finite ``good cover''), we have the following generalized Poincar\'e duality isomorphisms \begin{align} HC^i(M,g) &\cong HC_i(M,g)^* , & HC^i_c(M,g)^* &\cong HC_i^\mathit{lf}(M,g) , \\ HC^i(M,g)^* &\cong HC_i(M,g) , & HC^i_c(M,g) &\cong HC_i^\mathit{lf}(M,g)^* , \end{align} where isomorphisms are taken in the algebraic sense and duality is meant in the topological sense, as described in Proposition~\ref{prp:serre-dual}. \end{cor} Note that in the case when all cohomology vector spaces are finite dimensional, the distinction between algebraic or topological isomorphisms and duals is irrelevant. \begin{proof} There are two ways to establish the desired duality isomorphisms, each relying on slightly different conditions on $M$, reflected in the hypotheses. We should note that both require an orientation on $M$; thus we assume that $M$ is orientable and fix an orientation arbitrarily. The Mayer-Vietoris argument (Proposition~\ref{prp:tw-dr-dual}) establishes the duality isomorphisms \begin{align} H^i(\Lambda^\bullet M \otimes V_K,D_K) &\cong H^i_c(\Lambda^\bullet M \otimes V_K^*,D_K)^* \\ \text{and} \quad H^i(\Lambda^\bullet M \otimes V_{KY},D_{KY}) &\cong H^i_c(\Lambda^\bullet M \otimes V_{KY}^*,D_{KY})^* . 
\end{align} Under the finite type condition on $M$, an easy modification of the Mayer-Vietoris argument (Propositions~5.3.1 and~5.3.2 of~\cite{bott-tu}) also shows that each of these cohomology groups is finite dimensional, so the reverse duality isomorphisms hold as well. Finally, the formal homotopy equivalence of Corollary~\ref{cor:calabi-equiv} translates these isomorphisms into the desired duality relations for Calabi cohomology and homology. The Poincar\'e-Serre argument applies by virtue of the ellipticity of the Calabi and its formal adjoint complexes (Proposition~\ref{prp:calabi-exact}) and the hypothesis of countability at infinity. Combining the results of Propositions~\ref{prp:ell-param} and~\ref{prp:serre-dual} easily establishes the desired duality isomorphisms directly. \end{proof} \begin{cor}\label{cor:calabi-sheaf} Assume the same hypotheses on $M$ as in Corollary~\ref{cor:calabi-dual} and that the Calabi cohomology and homology are finite dimensional. Then the following identities hold (with respect to algebraic duals): \begin{align} HC^i(M,g) &\cong H^i(\mathcal{K}) , & HC^i_c(M,g) &\cong H^{n-i}(\mathcal{KY})^* , \\ HC_i(M,g) &\cong H^i(\mathcal{K})^* , & HC^\mathit{lf}_i(M,g) &\cong H^{n-i}(\mathcal{KY}) . \end{align} \end{cor} Note that we do expect the relevant cohomology and homology spaces to be finite dimensional in most applications. If the cohomology vector spaces happen to be infinite dimensional, then the correct (topological and algebraic) isomorphisms can be deduced from Corollary~\ref{cor:calabi-dual} and Proposition~\ref{prp:serre-dual}. \begin{proof} By Proposition~\ref{prp:calabi-exact} and Corollary~\ref{cor:calabi-equiv} we already know that the Calabi and its formal adjoint complexes are locally exact differential complexes that respectively resolve the Killing and Killing-Yano sheaves, $\mathcal{K}$ and $\mathcal{KY}$. 
Then, Lemma~\ref{lem:f-exact} establishes the isomorphisms $HC^i(M,g) \cong H^i(\mathcal{K})$ and $HC^\mathit{lf}_i(M,g) \cong H^{n-i}(\mathcal{KY})$. Finally, the duality isomorphisms of Corollary~\ref{cor:calabi-dual} establish the rest of the desired identities. Note that we have added the finite dimensionality hypothesis only to avoid explicitly specifying a topology on the relevant cohomology vector spaces, so that the topological and algebraic duals coincide. \end{proof} \section{The Killing sheaf and its cohomology}\label{sec:killing} In this section we concentrate on possible effective ways of computing the Killing sheaf cohomology (or rather the cohomology of any locally constant sheaf) of a pseudo-Riemannian manifold $(M,g)$ of constant curvature. We use \emph{effective} somewhat loosely, taking it to mean roughly that the computation either consists of finitely many steps involving only finite-dimensional linear algebra or reduces to a calculation that has already been done in the literature. In particular, any such method would be more effective than the brute force approach of trying to solve the systems of differential equations appearing in the Calabi complex. Since the interest in the cohomology of the Killing sheaf may extend beyond the constant curvature context, we always discuss the more general situation, specializing to the constant curvature case when necessary. There are two main possibilities: either the manifold $M$ is simply connected or it is not. They are discussed respectively in Sections~\ref{sec:simp-conn} and~\ref{sec:nonsimp-conn}. In the simply connected case, the sheaf cohomology can be expressed completely in terms of the de~Rham cohomology. The non-simply connected case is more complicated; there, several complementary but potentially overlapping methods may be used. None of them, unfortunately, gives a complete solution. 
Crucial to the discussion that follows (see Appendix~\ref{app:bndl-deform} for relevant notation and concepts related to $G$-bundles) is the notion of the monodromy representation of the fundamental group $\pi = \pi_1(M)$ of a manifold with respect to a flat connection $D$ on a vector bundle $V\to M$ (cf.~Section~\ref{sec:tw-dr}). Let us identify $\pi_1(M) = \pi_1(M,x)$ for some $x\in M$. The connection $D$ gives rise to a notion of parallel transport on $V$. Since the connection is flat, the parallel transport along a curve connecting $x,y\in M$ depends only on the homotopy class of the path with its endpoints fixed. Therefore, since parallel transport acts linearly, parallel transport along loops based at $x\in M$ induces a representation $\rho_V\colon \pi \to \mathrm{GL}(\bar{V})$, where $\bar{V}\cong V_x$ is the typical fiber of $V\to M$, called the \emph{monodromy representation}. Another common term is the \emph{holonomy representation}. However, we reserve the term \emph{holonomy} for the same concept associated specifically to the $g$-compatible Levi-Civita connection on $M$. If $V\to M$ is a vector $G$-bundle, then there necessarily is an associated representation of the structure group on $\bar{V}$, $\sigma_V \colon G \to \mathrm{GL}(\bar{V})$. When the connection $D$ preserves the $G$-bundle structure, parallel transport and hence monodromy factors through the associated representation. Hence $\rho_V = \sigma_V \circ \rho$, where $\rho\colon \pi \to G$ is the monodromy representation of $\pi$ in the structure group. Recall also that for any manifold $M$ there exists a unique (up to diffeomorphism) connected, simply connected \emph{universal cover} $\tilde{M} \to M$, where the projection map is a surjective local diffeomorphism. In fact, $\tilde{M}\to M$ is a $\pi$-principal bundle over $M$. The principal bundle action of $\pi$ on $\tilde{M}$ is called the action by \emph{deck transformations}. Note that $M\cong \tilde{M}/\pi$. 
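To fix ideas, consider the simplest non-trivial (standard) example, which will not be needed in the sequel. Take $M = S^1 = \mathbb{R}/\mathbb{Z}$, so that $\tilde{M} = \mathbb{R}$ and $\pi = \pi_1(S^1) \cong \mathbb{Z}$ acts by the deck transformations $x \mapsto x + n$, recovering $M \cong \tilde{M}/\pi$. On the trivial complex line bundle $V = S^1 \times \mathbb{C}$, the connection $D = d - i\theta \, dx$, for a real constant $\theta$, is flat (trivially so, in one dimension). Its parallel sections are proportional to $e^{i\theta x}$, so parallel transport once around the positively oriented generating loop multiplies a fiber vector by $e^{i\theta}$ and the monodromy representation is
\begin{equation*}
	\rho_V \colon \mathbb{Z} \to \mathrm{GL}(1,\mathbb{C}) , \qquad \rho_V(n) = e^{i n \theta} .
\end{equation*}
In particular, shifting $\theta$ by $2\pi$ leaves the monodromy unchanged, illustrating that flat bundles are classified, up to equivalence, by their monodromy representations rather than by the connection coefficients themselves.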
Deck transformations, being diffeomorphisms, commute with the de~Rham differential. Hence the action by deck transformations descends to de~Rham cohomology. We call it the \emph{deck representation} $\Delta^i\colon \pi \to \mathrm{GL}(H^i(\tilde{M}))$. The projection to $M$ pulls the bundle $V\to M$ back to $\tilde{V}\to \tilde{M}$ and the connection $D$ to $\tilde{D}$. Since the universal cover is simply connected, the pulled back bundle trivializes, $\tilde{V} \cong \bar{V} \times \tilde{M}$. Therefore, we have the isomorphism $H^i(\Lambda^\bullet \tilde{M} \otimes \tilde{V},\tilde{D}) \cong H^i(\tilde{M})\otimes \bar{V}$. It is not hard to see that the two sides are isomorphic not only as vector spaces but also as representations of the fundamental group $\pi$, with the right side transforming as the tensor product of the deck and monodromy representations $\Delta^i\otimes \rho_V$. Let us now fix the assumptions that $(M,g)$ is connected and that its Killing sheaf $\mathcal{K}_g$ is locally constant, and concretize the above ideas in this case. Recall from Section~\ref{sec:tw-dr} that $\mathcal{K}_g$ is then resolved by the twisted de~Rham complex associated to the flat vector bundle $(V_K,D_K)$. The typical fiber $\bar{V}_K$ of $V_K \to M$ consists of the germs of local Killing vector fields. Each local Killing vector field extends to a global one, and hence to an infinitesimal isometry, on the universal cover $(\tilde{M},\tilde{g})$. Thus, we can identify $\bar{V}_K$ with the Lie algebra $\mathfrak{g}$ of the Lie group $G = \mathrm{Isom}(\tilde{M},\tilde{g})$ of isometries of $(\tilde{M},\tilde{g})$. Infinitesimal isometries act on each other by the formula $\mathcal{L}_u v = [u,v]$, which corresponds to the infinitesimal adjoint representation $\mathrm{ad}\colon \mathfrak{g} \to \operatorname{End}(\mathfrak{g})$. 
This representation integrates to the adjoint representation $\mathrm{Ad} \colon G \to \mathrm{GL}(\mathfrak{g})$, which is how finite isometries act on Killing vector fields. Also, it is clear by construction that deck transformations act on $(\tilde{M},\tilde{g})$ by isometries. Let us denote this representation of the fundamental group $\pi = \pi_1(M)$ by isometries as $\rho\colon \pi \to G$. As described in Section~\ref{sec:monodr-def}, this information is equivalent to specifying a flat principal $G$-bundle $P\to M$ with monodromy representation $\rho$ of $\pi$ in $G$. Further, it is clear that $V_K \cong \mathfrak{g}_P$ is the vector $G$-bundle over $M$ associated to $P$ with respect to the adjoint action $\mathrm{Ad}$ of $G$ on $\mathfrak{g}$ and that $D_K$ is the connection associated to the flat principal connection on $P$. The monodromy representation of $\pi$ on $\bar{V}_K$ is then the \emph{composite adjoint monodromy representation} $\rho_V = \mathrm{Ad}_\rho = \mathrm{Ad}\circ \rho$. \subsection{Simply connected case}\label{sec:simp-conn} The simplest case is when the manifold $M$ is simply connected, that is, its fundamental group $\pi = \pi_1(M)$ is trivial. Let the locally constant sheaf $\mathcal{F}$ have stalk $\bar{F}$ so that it defines a flat vector bundle $(F,D)$, with $\bar{F}$ the typical fiber of $F\to M$ (Sections~\ref{sec:sh-lc} and~\ref{sec:tw-dr}). We know that the twisted de~Rham differential complex $(\Lambda^\bullet M\otimes F,D)$ is an acyclic resolution of $\mathcal{F}$. Hence their cohomologies agree. On the other hand, since $M$ is simply connected, we can choose a global $D$-flat basis frame for $F$ and identify the twisted de~Rham complex with $\operatorname{rk} F = \dim \bar{F}$ copies of the standard de~Rham complex. 
This argument proves \begin{thm}\label{thm:simp-conn} Let $(M,g)$ be a connected, simply connected pseudo-Riemannian manifold with locally constant Killing sheaf $\mathcal{K}_g$, resolved by the twisted de~Rham complex $(\Lambda^\bullet M \otimes V_K, D_K)$. Let $\mathfrak{g} \cong \bar{V}_K$ be the Lie algebra of isometries of $(M,g)$. Then the following isomorphisms hold: \begin{equation} H^i(\mathcal{K}_g) \cong H^i(\Lambda^\bullet M \otimes V_K, D_K) \cong H^i(M)\otimes \mathfrak{g} . \end{equation} In particular $H^0(\mathcal{K}_g) \cong \mathfrak{g}$ and $H^1(\mathcal{K}_g) = 0$. \end{thm} \subsection{Non-simply connected case}\label{sec:nonsimp-conn} The non-simply connected case is of course more complicated and we can offer only partial results, which we summarize in this paragraph. The simplest sub-case is when the fundamental group $\pi = \pi_1(M)$ of the pseudo-Riemannian manifold $(M,g)$ is finite (Section~\ref{sec:fin-fg}). The Killing sheaf cohomology is then the $\pi$-invariant subspace of the de~Rham cohomology of the universal covering space. If the fundamental group is not necessarily finite, we still have the following general result for the degree-$1$ cohomology of constant curvature spaces. We can equate $\dim H^1(\mathcal{K}_g)$ to the dimension of the space of possible infinitesimal deformations of the metric that preserve the constant curvature condition as well as the value of the scalar curvature itself. That observation was already made in the original work of Calabi~\cite{calabi} and in fact prompted his interest in a resolution of the Killing sheaf $\mathcal{K}_g$. This space of infinitesimal deformations can also be computed as the degree-$1$ group cohomology of $\pi$ with coefficients in a certain representation on the Lie algebra of isometries of the universal cover of $M$ (Section~\ref{sec:inf-fg-1}). Another result helps compute higher degree cohomology groups. 
The Killing sheaf, being locally constant, defines a \emph{local system} or a system of \emph{local coefficients} on $M$, a concept well known in algebraic topology. A general result from the theory of local systems is that the aforementioned group cohomology computes higher Killing sheaf cohomology groups up to the degree of asphericity of $M$ (Section~\ref{sec:loc-coef}). Finally, there is a general method for completely computing the Killing sheaf cohomology based on a presentation of the manifold $M$ as a finite simplicial set (Section~\ref{sec:simp-set}). \subsubsection{Finite fundamental group}\label{sec:fin-fg} The basic idea here is to take advantage of the complete decomposability of representations of a finite group and then apply Schur's lemma. As will be clear from the proof, it is the complete decomposability that is important, not the finiteness of $\pi$. So the same result actually holds under suitably weaker hypotheses. \begin{thm} Let $(M,g)$ be a connected pseudo-Riemannian manifold with fundamental group $\pi = \pi_1(M)$ and Killing sheaf $\mathcal{K}_g$, resolved by the twisted de~Rham complex $(\Lambda^\bullet M \otimes V_K, D_K)$. Let $\mathfrak{g} \cong \bar{V}_K$ be the Lie algebra of isometries of the universal cover $(\tilde{M},\tilde{g})$. If $\pi$ is finite, we have the following isomorphisms: \begin{equation} H^i(\mathcal{K}_g) \cong (H^i(\tilde{M})\otimes \mathfrak{g})^\pi , \end{equation} where the superscript $\pi$ denotes the $\pi$-invariant subspace with respect to the representation $\Delta^i\otimes\mathrm{Ad}_\rho$, the tensor product of the deck and composite adjoint monodromy representations. In particular $H^0(\mathcal{K}_g) \cong \mathfrak{g}^\pi$. \end{thm} \begin{proof} Consider the spaces of sections $\Omega_i = \Gamma(\Lambda^i \tilde{M}\otimes \tilde{V}_K)$, where $\tilde{V}_K \to \tilde{M}$ is the pullback of $V_K\to M$ along the universal covering projection $\tilde{M} \to M$. 
Let $\tilde{D}_K$ denote the pullback of $D_K$. As we have already discussed at the top of Section~\ref{sec:killing}, this pulled back bundle is trivial, $\tilde{V}_K \cong \mathfrak{g} \times \tilde{M}$. Moreover, by simple connectedness of $\tilde{M}$ and Theorem~\ref{thm:simp-conn}, we have the isomorphism $H^i = H^i(\Omega_\bullet,\tilde{D}_K) \cong H^i(\tilde{M})\otimes \mathfrak{g}$. As also discussed at the top of Section~\ref{sec:killing}, the spaces $\Omega_i$ carry representations of the fundamental group $\pi$, which also descend to the cohomologies $H^i$. Since $\pi$ is finite, it is well known that any representation thereof is completely decomposable~\cite{maschke}, that is, any subrepresentation has a direct sum complement subrepresentation. So, the subspace $\Omega_i^\pi \subset \Omega_i$ invariant under the action of $\pi$ (every element of $\pi$ acts as the identity operator) has a direct sum complement $\Omega_i^{\hat\pi}$, so that $\Omega_i \cong \Omega_i^\pi \oplus \Omega_i^{\hat\pi}$. This direct sum induces the short exact sequence \begin{equation}\label{eq:pi-short-exact} \begin{tikzcd} 0 \ar{r} & \Omega_i^\pi \ar{r} & \Omega_i \ar{r} & \Omega_i^{\hat\pi} \ar{r} & 0 . \end{tikzcd} \end{equation} It is straightforward to note that, by construction of the universal cover $\tilde{M}\to M$, the $\pi$-invariant subcomplex $(\Omega_\bullet^\pi,\tilde{D}_K)$ on $\tilde{M}$ is in fact cochain isomorphic to the complex $(\Gamma(\Lambda^\bullet M \otimes V_K), D_K)$ on $M$. Therefore the desired cohomology groups are $H^i(\Lambda^\bullet M \otimes V_K, D_K) \cong H^i(\Omega_\bullet^\pi, \tilde{D}_K)$. The complement $\Omega_i^{\hat\pi}$ naturally does not contain any non-zero vectors invariant under the action of $\pi$. In representation theoretic terminology, these two complementary subspaces are \emph{disjoint}. By Schur's lemma~\cite{schur}, the only equivariant map (\emph{intertwiner}) between any two disjoint representations is the zero map.
Note that the differentials $\tilde{D}_K$ and the maps in the short exact sequence~\eqref{eq:pi-short-exact} are in fact $\pi$-equivariant. By the general machinery of homological algebra (Appendix~\ref{app:homalg}) the short exact sequence~\eqref{eq:pi-short-exact} induces the long exact sequence \begin{equation}\label{eq:pi-long-exact} \begin{tikzcd} 0 \arrow{r} & H^0_\pi \arrow{r} & H^0 \arrow{r}\arrow[draw=none]{d}[name=Z,shape=coordinate]{}& H^0_{\hat\pi} \arrow[rounded corners, to path={ -- ([xshift=2ex]\tikztostart.east) |- (Z) [near end]\tikztonodes -| ([xshift=-2ex]\tikztotarget.west) -- (\tikztotarget)}]{dll} \\ & H^1_\pi \arrow{r} & H^1 \arrow{r} & H^1_{\hat\pi} \arrow{r} & \cdots \end{tikzcd} \end{equation} where $H^i_\pi = H^i(\Omega_\bullet^\pi, \tilde{D}_K)$, $H^i_{\hat\pi} = H^i(\Omega_\bullet^{\hat\pi}, \tilde{D}_K)$ and all the maps are also $\pi$-equivariant. It is clear that the representations carried by $H^i_\pi$ and $H^i_{\hat\pi}$ are also disjoint. Therefore, the maps connecting the rows of diagram~\eqref{eq:pi-long-exact} are all zero. In other words, each of the rows becomes a short exact sequence on its own. Invoking again complete decomposability of representations of $\pi$, we can write $H^i \cong H^i_\pi \oplus H^i_{\hat\pi}$ and hence identify $H^i_\pi \cong (H^i)^\pi$ with the subspace of $H^i$ on which $\pi$ acts trivially. Collecting the above arguments together, while recalling the sheaf cohomology identity $H^i(\mathcal{K}_g) \cong H^i(\Lambda^\bullet M \otimes V_K, D_K)$, we obtain the isomorphism $H^i(\mathcal{K}_g) \cong (H^i(\tilde{M})\otimes \mathfrak{g})^\pi$. Noting the special cases $H^0(\tilde{M}) = \mathbb{R}$ and $H^1(\tilde{M}) = 0$, as in Theorem~\ref{thm:simp-conn}, concludes the proof. 
\end{proof} \subsubsection{Degree-$1$ cohomology}\label{sec:inf-fg-1} Consider a $1$-parameter family of $n$-dimensional pseudo-Riemannian manifolds $(M,g(t))$ where each $g(t)$, for $t$ in some neighborhood of zero, has constant curvature, with scalar curvature independent of $t$: Riemann tensor equal to $\frac{k}{n(n-1)} g(t)\odot g(t)$. Let $g(0) = g$ and $\dot{g}(0) = h$. Then the linearization of the identity $R[g(t)] - \frac{k}{n(n-1)} g(t)\odot g(t) = 0$ at $t=0$ will give (cf.~Section~\ref{sec:calabi-ops}) \begin{equation} \dot{R}[h] - k\frac{2}{n(n-1)} (g\odot h) = -\frac{1}{2} C_2[h] = 0 . \end{equation} In other words, $h$ is a Calabi $1$-cocycle. It is possible that not every Calabi $1$-cocycle gives rise to an actual $1$-parameter family of deformations, since there may be algebraic obstructions% \footnote{The study of these obstructions follows the general ideas outlined by Kodaira and Spencer~\cite{spencer-deform1,spencer-deform2}. See also the related phenomenon of linearization instabilities~\cite{kh-linstab}.} % to solving for higher order terms in the expansion parameter $t$. However, at the infinitesimal level, there are no other conditions and we can identify infinitesimal deformations with Calabi $1$-cocycles. If the deformation family $g(t)$ is trivial, induced by a $1$-parameter family of diffeomorphisms of the manifold $M$, then it is well known that $h = K[v]$ for some $1$-form $v$ (vector field generating the diffeomorphism family, with index lowered by the metric $g$), in other words a Calabi $1$-coboundary. It is easy to see that Calabi $1$-coboundaries can be identified with infinitesimal trivial deformations. 
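Spelled out, the linearization above is the routine expansion (using bilinearity and the symmetry of the product $\odot$ in its two arguments, as for the Kulkarni--Nomizu product):

```latex
\frac{d}{dt}\Big[ R[g(t)] - \tfrac{k}{n(n-1)}\, g(t)\odot g(t) \Big]_{t=0}
  = \dot{R}[h] - \tfrac{k}{n(n-1)}\, (h\odot g + g\odot h)
  = \dot{R}[h] - \tfrac{2k}{n(n-1)}\, (g\odot h) ,
```

which is where the factor of $2$ in the displayed formula comes from.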
Therefore, the Calabi cohomology vector space $HC^1(M,g)$, and hence the Killing cohomology vector space $H^1(\mathcal{K}_g)$ isomorphic to it, is in bijective correspondence with the space of infinitesimal deformations of the metric $g$ within the space of constant curvature metrics of scalar curvature $k$, modulo infinitesimal diffeomorphisms. There is another way to look at this infinitesimal deformation space. It is well known that the only geodesically complete, simply connected, constant curvature spaces are the pseudo-Euclidean ($k=0$), pseudo-spherical ($k>0$) and pseudo-hyperbolic ($k<0$) spaces~\cite[Sec.2.4]{wolf-cc}. In Riemannian signature, these are respectively the ordinary Euclidean, spherical and hyperbolic spaces. In Lorentzian signature, these are respectively the Minkowski, de~Sitter and anti-de~Sitter spaces. Thus, the elements of a family $(M,g(t))$ of geodesically complete, constant curvature, pseudo-Riemannian manifolds of fixed scalar curvature $k$ all have isometric universal covers $(\tilde{M},\tilde{g})$. Moreover, since the action of the fundamental group $\pi = \pi_1(M)$ on its universal cover via deck transformations is by isometries, there is an injective group homomorphism $\rho\colon \pi \to G = \mathrm{Isom}(\tilde{M},\tilde{g})$, so that we have a subgroup $\rho(\pi) \subseteq G$ that acts on $\tilde{M}$ properly and discontinuously~\cite[Sec.1.8]{wolf-cc}. Conversely, for any subgroup $\pi'\subseteq G$ that acts on $\tilde{M}$ properly and discontinuously, the quotient $(M',g') = (\tilde{M},\tilde{g})/\pi'$ will be a manifold of the same constant curvature, but with fundamental group $\pi' = \pi_1(M')$. Thus, all geodesically complete $(M,g)$ of constant curvature arise in this way. Of course, any two subgroups $\pi',\pi'' \subseteq G$ that are conjugate, $\pi'' = a \pi' a^{-1}$ for some $a\in G$, give rise to isometric quotients.
In fact, we have just argued that the infinitesimal deformations of the representation $\rho\colon \pi \to G$, up to conjugation by $G$, are in bijection with infinitesimal constant curvature deformations of the metric $(M,g)$. It is well known that the deformations of the representation $\rho$ are in bijection with certain degree-$1$ \emph{group cohomology} of the fundamental group $\pi$. On the other hand, we have already seen that deformations of the constant curvature spaces are parametrized by the Killing sheaf cohomology $H^1(\mathcal{K}_g)$. Thus, computing the group cohomology may be an effective way of computing the Killing sheaf cohomology, at least in degree-$1$. The details of the definition of the representation $\rho$ are described at the top of Section~\ref{sec:killing} and are also subsumed by the more general discussion below. This connection between the degree-$1$ Killing sheaf cohomology, deformations of the geometry and group cohomology of the fundamental group $\pi$ extends far beyond the case of manifolds of constant curvature. We base what follows on the remark at the top of Section~\ref{sec:killing} and the contents of Appendix~\ref{app:bndl-deform}. If $(\tilde{M},\tilde{g})$ is the universal cover of $(M,g)$ and $G = \mathrm{Isom}(\tilde{M},\tilde{g})$ with Lie algebra $\mathfrak{g}$, then there is a naturally defined flat principal $G$-bundle $P\to M$. Then, the infinitesimal deformations of this flat principal $G$-bundle are in bijection with $H^1(\mathcal{K}_g)$, the degree-$1$ Killing sheaf cohomology group. That is because the flat vector bundle $(V_K,D_K)$, whose twisted de~Rham complex resolves the Killing sheaf, is isomorphic to the associated bundle $\mathfrak{g}_P \to M$ with connection $D$ induced by the flat principal connection on $P$.
Recall that the fibers of $\mathfrak{g}_P$ transform under the adjoint representation $\mathrm{Ad}\colon G \to \mathrm{GL}(\mathfrak{g})$ and that parallel transport with respect to the flat connection on $P$ defines a representation $\rho\colon \pi \to G$ of the fundamental group $\pi = \pi_1(M)$. Their composition $\mathrm{Ad}_\rho = \mathrm{Ad} \circ \rho$, as already mentioned at the top of Section~\ref{sec:killing}, is known as the \emph{composite adjoint monodromy representation}. In the case of spaces of constant curvature, the infinitesimal deformations of the flat principal bundle $P\to M$ are the same thing as the infinitesimal deformations of the given constant curvature metric, fixing the value of the curvature. \begin{thm} Given the notations and hypotheses of the above paragraph, the following isomorphisms between the Killing sheaf cohomology and group cohomology of $\pi$ with coefficients in $\mathrm{Ad}_\rho$ hold: \begin{equation} H^0(\mathcal{K}_g) \cong H^0(\pi,\mathrm{Ad}_\rho) \cong \mathfrak{g}^\pi , \quad H^1(\mathcal{K}_g) \cong H^1(\pi,\mathrm{Ad}_\rho) . \end{equation} \end{thm} This result is a direct consequence of Proposition~\ref{prp:cohom-equiv} of Appendix~\ref{app:bndl-deform}. Unfortunately, we cannot use the same methods to establish isomorphisms between the group and sheaf cohomologies in higher degrees. See, however, Section~\ref{sec:loc-coef}. The connection between group cohomology of $\pi$ and deformations of a flat principal bundle is well known, cf.~\cite{goldman}. The connection between, specifically, the cohomology of the Killing sheaf, infinitesimal deformations of the corresponding principal bundle, and group cohomology seems to be less well known, but is mentioned explicitly in~\cite[App.A.2]{acm}. 
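To illustrate how concrete such group cohomology computations are, consider the simplest infinite fundamental group $\pi = \mathbb{Z}$: a $1$-cocycle is freely determined by its value on a generator, and $1$-coboundaries form the image of $A - I$, where $A$ is the matrix by which the generator acts (standing in for $\mathrm{Ad}_\rho$ of the generator). Hence $H^0 = \ker(A - I)$ and $H^1 = \operatorname{coker}(A - I)$. The following sketch (our own toy code; the matrices are hypothetical, not derived from any particular spacetime) computes the dimensions:

```python
import numpy as np

def z_cohomology(A, tol=1e-10):
    """Group cohomology of pi = Z acting on V = R^n via the matrix A:
    dim H^0 = dim ker(A - I), dim H^1 = dim coker(A - I),
    both equal to n - rank(A - I)."""
    n = A.shape[0]
    r = np.linalg.matrix_rank(A - np.eye(n), tol=tol)
    return int(n - r), int(n - r)

# Trivial action: every vector is invariant, so H^0 = H^1 = V.
print(z_cohomology(np.eye(3)))          # (3, 3)
# Rotation by 90 degrees in the plane: no invariants, no deformations.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
print(z_cohomology(R))                  # (0, 0)
```

For $\pi = \mathbb{Z}^k$ or a general finitely presented group, the cocycles acquire compatibility conditions coming from the relations, but the computation remains finite dimensional linear algebra of the same kind.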
\subsubsection{Cohomology with local coefficients}\label{sec:loc-coef} We have just noted, in Section~\ref{sec:inf-fg-1}, a geometric relation between degree-$1$ locally constant sheaf cohomology and cohomology of the fundamental group. A more general connection between the cohomology of a locally constant sheaf, or equivalently cohomology with coefficients in a local system~\cite[Ch.VI]{whitehead}, and group cohomology of the fundamental group has also been noticed in pure algebraic topology. In fact, that is how the notion of group cohomology first arose. The original goal was to calculate the cohomology of a space (with or without coefficients in a non-trivial local system) in terms of data specifying its homotopy type. Following some early work by Hurewicz, Hopf and Eilenberg, Eilenberg and MacLane~\cite{eilenberg-maclane} introduced what are now known as $K(\pi,1)$ spaces (topological spaces with $\pi_1 = \pi$ and $\pi_i = 0$ for all $i>1$) and computed all of their cohomology groups by introducing an algebraic construction based on the knowledge of the group $\pi$. We now call this construction \emph{group cohomology}~\cite{group-cohom}. They further showed that the same construction works also for any topological space $M$, not just a $K(\pi,1)$, for cohomology in degree $i$, with $0 < i \le p$, as long as the space $M$ is $p$-aspherical, that is, $\pi_i = 0$ for $1 < i \le p$. This result, applied to the Killing sheaf, gives the following \begin{prop} Let $(M,g)$ be a connected pseudo-Riemannian manifold with locally constant Killing sheaf $\mathcal{K}_g$ and universal cover $(\tilde{M},\tilde{g})$. Denote by $G = \mathrm{Isom}(\tilde{M},\tilde{g})$ the group of isometries of the universal cover and let $\mathfrak{g}$ be its Lie algebra. The fundamental group $\pi = \pi_1(M)$ acts on $\mathfrak{g}$ via the composite adjoint monodromy representation $\mathrm{Ad}_\rho \colon \pi \to \mathrm{GL}(\mathfrak{g})$.
If the manifold $M$ is $p$-aspherical, meaning $\pi_i(M) = 0$ for $1 < i \le p$, then we have the following isomorphisms: \begin{equation} H^i(\mathcal{K}_g) = H^i(\pi,\mathrm{Ad}_\rho) \quad \text{for $0 \le i \le p$.} \end{equation} \end{prop} For higher degree cohomology there are other contributions to the cohomology groups. There is still a homomorphism $H^i(\pi,\mathrm{Ad}_\rho) \to H^i(\mathcal{K}_g)$, but it need no longer be an isomorphism~\cite[Sec.1.4.2]{morita}. Later, Postnikov~\cite{postnikov-dok,postnikov-ru} proposed a full solution for algebraically determining all the cohomology groups of a space based on its homotopy type. Postnikov's method encodes the full homotopy type of a space in terms of its homotopy groups and certain additional algebraic data known as a \emph{Postnikov system} or \emph{tower}. This construction is currently more commonly known in its topological form~\cite[Ch.IX]{whitehead}. For $p$-aspherical spaces and cohomology in degree $i$, with $0< i \le p$, Postnikov's construction coincides with group cohomology. In general, the two constructions do differ in degrees higher than the degree of asphericity. Unfortunately, both Postnikov's encoding of the homotopy type and his algebraic reconstruction of the cohomology are rather complicated and do not appear to have gained much popularity. They seem to be fully described only in his original monograph~\cite{postnikov-ru} or its translation~\cite{postnikov-en}, both being rather obscure references. At the moment, it is not clear to us what the modern state of the art is for reconstructing the cohomology of a space with coefficients in a local system in terms of the space's homotopy type. \subsubsection{Simplicial set cohomology}\label{sec:simp-set} The last mathematical tool we will discuss, which can aid in the computation of the cohomologies of a locally constant sheaf, is simplicial cohomology with local coefficients.
The idea is to substitute the underlying manifold $M$ with a combinatorial structure like a \emph{simplicial complex} or a \emph{simplicial set}. Then, provided the combinatorial model is finite, the corresponding cohomology theory reduces to the computation of the cohomology of a finite dimensional cochain complex, and thus to finite dimensional linear algebra. We defer to the discussion in \cite[Sec.I.4.7--10]{gelfand-manin} for technical details. A disadvantage of this method is that finite combinatorial models only cover the case of compact manifolds. Non-compact manifolds require either an infinite combinatorial model or a non-trivial extension of the formalism. Another inconvenience, besides the need for an explicit decomposition of $M$ into simplices, is the need to define a discrete analog of parallel transport on the simplicial model to reproduce the composite adjoint monodromy representation $\mathrm{Ad}_\rho$. That is usually done by associating a copy of $\mathfrak{g}$ to each vertex of the simplicial model for $M$ and explicitly assigning a coherent set of linear isomorphisms between these copies to the edges connecting them, such that the composition of the isomorphisms of the edges along a closed loop is equal to the $\mathrm{Ad}_\rho$ action of the corresponding element of $\pi$. These choices are simplified if all vertices can be collapsed into a single one, which is allowed for simplicial sets. Such a construction is always possible when $M$ is compact and results in a so-called \emph{reduced simplicial set}~\cite{rsset}.
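Since everything reduces to finite dimensional linear algebra, the core computation fits in a few lines. The sketch below (a minimal illustration of the method, not code from any library) takes the differential matrices of a finite cochain complex, as would be produced by a simplicial model with a chosen monodromy assignment, and returns the cohomology dimensions via $\dim H^i = \dim C^i - \operatorname{rank} d_i - \operatorname{rank} d_{i-1}$. The usage lines treat the circle as a reduced simplicial set with one vertex and one edge, carrying a rank-$1$ local system with monodromy $a$, so that the single differential is $(a - 1)$:

```python
import numpy as np

def cochain_cohomology(diffs, dims):
    """Cohomology dimensions of a finite cochain complex
    0 -> C^0 -> C^1 -> ... -> C^p -> 0, where diffs[i] is the
    matrix of d_i : C^i -> C^{i+1} and dims[i] = dim C^i.
    Uses dim H^i = dim C^i - rank d_i - rank d_{i-1}."""
    ranks = [int(np.linalg.matrix_rank(d)) for d in diffs]
    betti = []
    for i, n in enumerate(dims):
        r_out = ranks[i] if i < len(diffs) else 0
        r_in = ranks[i - 1] if i > 0 else 0
        betti.append(int(n) - r_out - r_in)
    return betti

# Circle as a reduced simplicial set: one vertex, one edge, rank-1
# local system with monodromy a; the only differential is (a - 1).
print(cochain_cohomology([np.zeros((1, 1))], [1, 1]))   # trivial monodromy: [1, 1]
print(cochain_cohomology([np.array([[1.0]])], [1, 1]))  # monodromy a = 2: [0, 0]
```

The same function applies verbatim to larger simplicial models; only the assembly of the differential matrices from the simplices and the edge isomorphisms grows in complexity.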
\section{Application to linearized gravity}\label{sec:appl} Recently, the symplectic and Poisson structure of linear classical field theories has been studied by the author within a very general framework~\cite{kh-peierls,kh-big} (see also~\cite{forger-romero,bfr,hs} for related work), which admits in particular any linear field theory whose gauge-fixed equations of motion can be formulated as a hyperbolic PDE system with possible constraints and residual gauge freedom. Certain sufficient geometric conditions need to be satisfied for a field theory to fit into that framework. The framework can then precisely characterize the degeneracies of the presymplectic and Poisson tensors on the solution space of the theory. These sufficient conditions require the gauge generator and the constraint operator to fit into differential complexes, and the degeneracies of the presymplectic and Poisson tensors are then characterized using the cohomology of these complexes. Once computed, these presymplectic and Poisson degeneracies are important for classifying the charges, locality, superselection sectors and quantization of the corresponding classical theory. The well known examples of Maxwell electromagnetism and Maxwell $p$-forms~\cite{sdh,bds,benini} fit into this framework~\cite[Sec.4.2]{kh-peierls}, invoking the de~Rham complex. Linearized gravity on a constant curvature Lorentzian manifold also fits into this framework, with the role of the de~Rham complex replaced by the Calabi complex or, as appropriate, the formal adjoint Calabi complex. For linearized gravity on an arbitrary background, we would need to make use of different differential complexes. The Calabi complex would be replaced by complexes defined by the property that they (at least formally) resolve the sheaf of Killing vectors on the given background (cf.~Section~\ref{sec:sh-res}). The corresponding formal adjoint complexes would play a role as well.
This connection to the Killing sheaf, even without explicitly knowing the needed differential complexes, shows that the Killing sheaf cohomology plays a similar role both in the constant curvature context and more generally. Thus the ability to compute the Killing sheaf cohomology in as many circumstances as possible (as discussed in Section~\ref{sec:killing}) should take us a large part of the way towards understanding the presymplectic and Poisson degeneracies of linearized gravity on general backgrounds. Unfortunately, about half the desired information would still be missing, since it is not clear which sheaf cohomology theory would control the cohomology of the formal adjoint differential complex. In the case of constant curvature, we were able to identify it as the cohomology of the sheaf of rank-$(n-2)$ Killing-Yano tensors, which is resolved by the formal adjoint Calabi complex (Section~\ref{sec:calabi-adj}). It is currently not clear how to identify its analog in the case of a general background, without knowing the full differential complex that (formally) resolves the sheaf of Killing vectors. Now, specialized to the case of linearized gravity on a constant curvature background $(M,g)$, the analysis of~\cite{kh-peierls,kh-big} concludes that the presymplectic and Poisson tensors are actually non-degenerate (with spacelike compact support for solutions and compact support for smeared observables) if and only if the following two conditions are satisfied: \emph{(global recognizability)} a certain bilinear pairing between degree-$1$ Calabi cohomology with spacelike compact supports and degree-$1$ Calabi homology is non-degenerate, and \emph{(global parametrizability)} a certain bilinear pairing between on-shell degree-$1$ Calabi cohomology with spacelike compact supports and on-shell degree-$1$ timelike finite Calabi homology is non-degenerate.
The descriptions of off-shell or on-shell Calabi cohomologies with \emph{spacelike compact} supports, $HC^i_{\mathrm{sc}}$ or $HC^i_{\square,\mathrm{sc}}$, and of off-shell or on-shell \emph{timelike finite} Calabi homology, $HC_i^\mathit{tf}$ or $HC_i^{\square,\mathit{tf}}$, go beyond the scope of the current work. However, they are defined and studied in detail in~\cite{kh-cohom} (similar ideas appear also in~\cite{benini}). In fact, the results of~\cite{kh-cohom} show how to express these non-standard cohomologies in terms of the standard ones with unrestricted or compact supports, and similarly for homology. Recall also (Section~\ref{sec:calabi-cohom}) that the latter are isomorphic to appropriate cohomologies (or their linear duals) of the Killing or Killing-Yano sheaves, $\mathcal{K}_g$ or $\mathcal{KY}_g$. Using all of these results, we are able to translate the non-degeneracy requirements as follows: \emph{(global recognizability)} a certain bilinear pairing between \begin{align} HC^1_{\mathrm{sc}}(M,g) &\cong H^{n-2}(M,\mathcal{KY}_g)^* \\ \text{and} \quad HC_1(M,g) &\cong H^1(M,\mathcal{K}_g)^* \end{align} is non-degenerate, \emph{(global parametrizability)} a certain bilinear pairing between \begin{align} \notag HC^1_{\square,\mathrm{sc}}(M,g) &\cong H^{n-1}(M,\mathcal{KY}_g)^* \oplus H^{n-2}(M,\mathcal{KY}_g)^* \\ \notag \text{and} \quad HC^{\square,\mathit{tf}}_1(M,g) &\cong H^1(M,\mathcal{K}_g)^* \oplus H^0(M,\mathcal{K}_g)^* \end{align} is non-degenerate. Notice that we have succeeded in expressing the vector spaces on which these pairings are defined purely in terms of Killing and Killing-Yano sheaf cohomologies. Checking non-degeneracy of course requires an explicit expression for the required bilinear pairings. Such expressions can be obtained from the general framework of~\cite{kh-peierls,kh-big}. However, there are two cases where we do not need such detailed information, and these are the ones with which we shall content ourselves here.
For instance, if all the relevant cohomology vector spaces are trivial, then the only possible, trivial bilinear pairing is automatically non-degenerate. On the other hand, if the paired vector spaces have different dimensions, then every possible pairing between them must be degenerate. We conclude this section by listing several well known Lorentzian backgrounds for which the methods of Section~\ref{sec:killing} allow us to determine all or a few of the cohomologies of the Killing sheaf. For the reasons discussed above, we make note of the Killing-Yano sheaf cohomologies only for constant curvature backgrounds. \begin{table} \begin{center}% \begin{tabular}{llll} spacetime & topology & $b^i = \dim H^i(\mathcal{K})$ & $c^i = \dim H^i(\mathcal{KY})$ \\ \hline\rule[0.5ex]{0pt}{2.5ex}% Minkowski & $\mathbb{R}^n$ & $b^0 = \frac{n(n+1)}{2}$ & $c^0 = \frac{n(n+1)}{2}$ \\ open FLRW & $\mathbb{R}^n$ & $b^0 = \frac{(n-1)n}{2}$ \\ de~Sitter & $\mathbb{R}\times S^{n-1}$ & $b^0 = b^{n-1} = \frac{n(n+1)}{2}$ & $c^0 = c^{n-1} = \frac{n(n+1)}{2}$ \\ closed FLRW & $\mathbb{R}\times S^{n-1}$ & $b^0 = b^{n-1} = \frac{(n-1)n}{2}$ \\ Schwarzschild & $\mathbb{R}^2\times S^2$ & $b^0 = b^2 = 4$ \\ Tangherlini & $\mathbb{R}^2\times S^{n-2}$ & $b^0 = b^{n-2} = \frac{(n-2)(n-1)}{2}$ \\ Kerr & $\mathbb{R}^2\times S^2$ & $b^0 = b^2 = 2$ \\ Myers-Perry & $\mathbb{R}^2\times S^{n-2}$ & $b^0 = b^{n-2} = 1+N$ \end{tabular}% \caption{A list of some well known, simply connected solutions of (cosmological) vacuum Einstein equations, together with their topology and non-vanishing dimensions of Killing or Killing-Yano sheaf cohomologies. Note that $b^0$ always counts the number of independent global Killing vectors, and similarly for $c^0$. The Tangherlini solutions generalize the Schwarzschild one to higher dimensions and the Myers-Perry solutions do the same for Kerr~\cite{emparan-reall}.
For the latter, $N$ counts the number of rotational symmetries, which varies depending on the variant of the solution. We only consider the exterior regions for black hole solutions.\label{tab:simp-conn}} \end{center} \end{table} The easiest case is that of simply connected spacetimes. Then, the Killing sheaf cohomology is just the de~Rham cohomology tensored with the Lie algebra of global isometries (Section~\ref{sec:simp-conn}), with an analogous result for any other locally constant sheaf. Many of the well known exact solutions are in fact defined on simply connected underlying manifolds, including Minkowski space, black hole solutions and cosmological solutions. A few explicit examples are listed in Table~\ref{tab:simp-conn}. Note that only the Minkowski and de~Sitter spaces are of constant curvature, so that the Calabi complex could be defined on them. For these backgrounds, it makes sense to also compute the Killing-Yano sheaf cohomologies $H^i(\mathcal{KY})$. However, since we know that the number of linearly independent rank-$(n-2)$ Killing-Yano tensors on these spaces is the same as the number of linearly independent Killing vectors (Section~\ref{sec:calabi-adj}), the cohomology vector spaces are isomorphic, $H^i(\mathcal{KY}) \cong H^i(\mathcal{K})$. In the non-simply connected case, we can rely on the results of Sections~\ref{sec:fin-fg}, \ref{sec:inf-fg-1} and~\ref{sec:loc-coef}, according to which we can equate the Killing sheaf cohomologies with the group cohomology of the fundamental group with coefficients in the composite adjoint monodromy representation, at least up to the degree of asphericity of the underlying spacetime manifold. Unfortunately, there does not seem to exist a comprehensive list of exact solutions of Einstein's equations indexed by spacetime topology. So it takes some effort to find explicit examples of exact solutions on non-simply connected spacetimes. 
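As an aside, the simply connected entries of Table~\ref{tab:simp-conn} can be reproduced mechanically from Theorem~\ref{thm:simp-conn}: tensoring with $\mathfrak{g}$ just multiplies each Betti number of $M$ by $\dim \mathfrak{g}$. A toy sanity check for the de~Sitter row (the dimension $n$ and the Betti numbers of $\mathbb{R}\times S^{n-1}$ are put in by hand):

```python
def killing_sheaf_dims(betti_M, dim_g):
    """Simply connected case: dim H^i(K_g) = b_i(M) * dim(g)."""
    return [b * dim_g for b in betti_M]

n = 4                                  # spacetime dimension
dim_g = n * (n + 1) // 2               # dim so(1, n), the de Sitter isometry algebra
betti_dS = [1] + [0] * (n - 2) + [1]   # Betti numbers of R x S^{n-1}
print(killing_sheaf_dims(betti_dS, dim_g))  # [10, 0, 0, 10]
```

The output reproduces $b^0 = b^{n-1} = \frac{n(n+1)}{2}$ from the de~Sitter row for $n=4$; the other simply connected rows follow from the same one-liner with the appropriate Betti numbers and isometry algebra dimensions.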
A rich source of examples comes from quotients of simply connected spacetimes (such as those mentioned in the preceding paragraph) by a discrete, freely acting subgroup $\pi$ of the isometry group. The quotient is a manifold because the action of $\pi$ is free and the metric descends to the quotient because the action of $\pi$ on the original spacetime is by isometries. The group $\pi$ then becomes the fundamental group of the quotient. \begin{table} \begin{center}% \begin{tabular}{llllll} $M$ & $\pi_1(M)$ & Bianchi sym. & additional sym. & $b^0$ & $b^1$ \\ \hline\rule[0.5ex]{0pt}{2.5ex}% $\mathbb{R}\times T^3$ & $\mathbb{Z}^3$ & $\mathbb{R}^3$ & $1$ & $3$ & $6$ \\ $\mathbb{R}\times T^3$ & $\mathbb{Z}^3$ & $\mathrm{VII}(0)$ & $1$ & $2$ & $4$ \\ $\mathbb{R}\times T^3$ & $\mathbb{Z}^3$ & $\mathbb{R}^3$ & $SO(2)$ & $3$ & $6$ \\ $\mathbb{R}\times T^3$ & $\mathbb{Z}^3$ & $\mathbb{R}^3$ & $SO(3)$ & $3$ & $5$ \end{tabular} \caption{Known values of $b^i = \dim H^i(\mathcal{K}_g)$ for a generic spatially homogeneous spacetime $(M,g)$ with given topology and symmetry properties. See text for more details.\label{tab:fg-z3}} \end{center}% \end{table} A nearly exhaustive study of possible quotients of $4$-dimensional cosmological solutions (meaning spatially homogeneous ones) has been carried out in~\cite{kth,kth2,kodama}. A complete presentation of the results is rather complicated and is relegated to the original references. A particular cosmological solution $(M,g)$ is identified by (a) the topology of the spacetime $M$, (b) the topology and isometry group of the universal cover $(\tilde{M},\tilde{g})$, (c) a number of continuous \emph{metric parameters} specifying $\tilde{g}$, and (d) a number of continuous \emph{moduli} (or \emph{Teichm\"uller parameters}) specifying the quotient class. There may also be additional discrete parameters, but we ignore them here, since they do not affect the number of continuous parameters.
According to the discussion of Section~\ref{sec:inf-fg-1}, the number of moduli (denoted by $N_\mathrm{m}$ in~\cite{kodama}) is in fact equal% \footnote{The space of moduli may not always be a smooth manifold, but may have algebraic singularities. Still, the number of moduli is the dimension of the generic subset of the moduli space, which is a smooth manifold. This dimension is also equal to $b^1 = \dim H^1(\mathcal{K})$. At singular points of the moduli space, $b^1$ may actually exceed the number of moduli, so at those points a more careful analysis is needed.} % to $b^1 = \dim H^1(\mathcal{K}_g)$. We shall not specify any metric parameters, since, as long as they take generic values, they do not affect the number of moduli. For simplicity, we only consider the examples with toroidal spatial topology $M = \mathbb{R} \times T^3$, where $T^3 = S^1 \times S^1 \times S^1$. Hence, the fundamental group is $\pi_1(M) = \mathbb{Z}^3$ and the universal cover is $\mathbb{R}^4$. The identity component of the isometry group of $(\tilde{M},\tilde{g})$ is then a semidirect product of a $3$-dimensional transitive Bianchi group and an additional connected Lie group. Let us concentrate on the cases of either Bianchi type $I \cong \mathbb{R}^3$ or $\mathrm{VII}(0)$. Under these conditions, we can read off all the remaining possibilities and information from Table~IV of~\cite{kodama}. They are summarized in Table~\ref{tab:fg-z3}. Note that $b^0 = \dim H^0(\mathcal{K}_g)$ counts the number of independent global Killing vectors on $(M,g)$. The number of independent Killing vectors on $(\tilde{M},\tilde{g})$ equals the dimension of the Bianchi group (always $3$) plus the dimension of the additional symmetry group.
The number of independent Killing vectors not broken by compactification to $T^3$ can be deduced from the explicit presentation of the isometry groups $\mathrm{Isom}(\tilde{M},\tilde{g})$ and the discrete subgroups effecting the compactification, which, for the examples given in Table~\ref{tab:fg-z3}, can be found in~\cite[Sec.3]{kodama}. Many more examples can be read off from Tables~IV, VII and Section~5.3 of~\cite{kodama}. It appears difficult to locate literature on explicit calculations that are equivalent to computing higher Killing sheaf cohomologies for other non-simply connected spacetimes. \section{Discussion and generalizations}\label{sec:discuss} We have reviewed in detail the algebraic, geometric and analytical properties of the Calabi differential complex~\cite{calabi}. In Section~\ref{sec:calabi} we have defined the nodes of the complex in terms of Young symmetrized tensor bundles and given explicit formulas for the differential operators between them, verifying through explicit calculations that they in fact constitute a complex (Appendix~\ref{app:young}). Such explicit formulas are otherwise difficult to extract from the existing literature, especially in terms of tensor variables, as opposed to the moving coframe variables used in Calabi's original work. Further, our formulas work for pseudo-Riemannian backgrounds of any signature, generalizing from the standard purely Riemannian context. We have also identified a differential operator cochain homotopy (Equations~\eqref{eq:calabi-diag}, \eqref{eq:calabi-homot-start}--\eqref{eq:calabi-homot-end}) that generates a cochain map from the complex to itself with a Laplacian-like principal symbol. This cochain homotopy map may be new. However, its lower order terms coincide with well known geometric operators from the theory of linearized gravity (General Relativity).
Another interesting and likely novel observation involved the formal adjoint complex (Section~\ref{sec:calabi-adj}), whose initial differential operator turned out to be equivalent to the rank-$(n-2)$ Killing-Yano operator, in analogy with the Killing operator in the original complex. In Sections~\ref{sec:sheaves} and~\ref{sec:killing} we showed that the Calabi complex is elliptic and locally exact. Hence, it resolves the sheaf of Killing vectors on the given constant curvature pseudo-Riemannian manifold. The same is true for the formal adjoint complex and the sheaf of rank-$(n-2)$ Killing-Yano tensors. Thus the cohomology of the Calabi complex could be expressed in terms of the Killing sheaf cohomology, while that of its formal adjoint in terms of the Killing-Yano sheaf cohomology. When a sheaf is locally constant (covering the relevant cases on constant curvature pseudo-Riemannian manifolds), its cohomology can be effectively computed in many circumstances using tools from algebraic topology, thus enabling effective computation of the Calabi cohomology. These methods were reviewed in Section~\ref{sec:killing}, specialized to the Killing sheaf. Finally, in Section~\ref{sec:appl}, we discussed a physical application that motivated this work. The results collected in this work, together with those of~\cite{kh-cohom,kh-peierls,kh-big}, imply that knowledge of Killing and Killing-Yano sheaf cohomologies allows some degree of control over the degeneracy subspaces of the presymplectic and Poisson structures within the classical field theory of linearized gravity on constant curvature backgrounds. Unfortunately, the above results do not apply directly to linearized gravity on arbitrary Lorentzian manifolds, only to those of constant curvature, on which the Calabi complex is defined. However, the Calabi complex serves as a case study for the more general situation and the same results partially generalize to general backgrounds.
In particular, we can already make the following conclusions. In general, the Calabi complex will have to be replaced by a different differential complex, which will likely depend on some of the algebraic characteristics of the Lorentzian manifold (such as its isometries and the algebraic type of the curvature tensor and its derivatives). This complex would be identified, as was the Calabi complex~\cite{gasqui-goldschmidt-fr,gasqui-goldschmidt}, by the property of being a formally exact compatibility complex of the Killing operator. Such a complex is known to exist under general conditions and also to be elliptic, since the Killing operator is itself elliptic~\cite{quillen,goldschmidt-lin}. Further, under a generic condition, it can be shown to be locally exact (Section~\ref{sec:sh-res}). The local exactness property connects the cohomology of this complex to that of the Killing sheaf, which can be effectively computed, at least in many circumstances, when the sheaf is locally constant. Unfortunately, one piece of the puzzle remains incomplete. The connection between the cohomology of the formal adjoint complex and sheaf cohomology depends on knowledge of the initial operator in that differential complex, which is the adjoint of the final operator of the differential complex resolving the Killing sheaf. In the Calabi case it is equivalent to the Killing-Yano operator. However, since the differential complex is expected to change depending on the Lorentzian manifold, so is this initial operator. Thus, it is not clear which sheaf cohomology will replace the Killing-Yano sheaf cohomology in the general case. Hence, in future work, it would be very interesting to investigate these differential complex resolutions of the Killing sheaf, especially computing their differential operators explicitly. 
Besides the general existence results~\cite{quillen,goldschmidt-lin}, such a complex has already been constructed for locally symmetric spaces ($\nabla_a R_{bcde} = 0$)~\cite{gasqui-goldschmidt-fr,gasqui-goldschmidt}. Also, heuristic arguments suggest that they could be partially constructed by linearizing the so-called `ideal' characterizations of certain exact families of solutions of Einstein's equations. These include Schwarzschild~\cite{saez-schw}, Kerr~\cite{saez-kerr} and some perfect fluid~\cite{ferr-fluid} solutions. An `ideal' characterization consists of a number of tensor fields, locally and covariantly defined using the metric and its derivatives, which vanish iff the given metric is locally isometric to a particular geometry from the desired family. For instance, the vanishing of the Riemann tensor $R$ is an ideal characterization of the flat geometry, while the vanishing of the corrected Riemann tensor $R-\bar{R}$ (Section~\ref{sec:calabi-ops}) does the same for a constant curvature geometry. It should be clear from these examples that the linearization of the tensors that constitute such an ideal characterization gives an operator whose composition with the Killing operator is formally exact. At the moment it is not completely clear what geometric interpretation can be given to the subsequent differential operators in the desired formally exact differential complex. Finally, one can easily imagine situations where the number of independent solutions to the Killing equations changes over the background pseudo-Riemannian manifold. The Killing sheaf is then no longer locally constant and many of the techniques described in this work are no longer applicable. In those cases, perhaps some insight can be gained from the theory of constructible sheaves~\cite[Ch.4]{dimca}, \cite[Ch.VIII]{ks}, which are allowed to deviate from being locally constant in a controlled way. 
\section*{Acknowledgments} The author would like to thank Mauro Carfora and Wouter von Limbek for helpful discussions on deformations of isometry and flat principal bundle structures. Thanks to Micha\l{} Wrochna for feedback on an earlier version of the manuscript. Thanks also to Benjamin Lang and Alexander Schenkel for useful references on the geometry of principal bundles. The kind hospitality of the Erwin Schr\"odinger Institute during the workshop ``Algebraic Quantum Field Theory: Its Status and Its Future,'' where this work was first publicly presented, is also acknowledged.
\section{Introduction } % Since 1991, when Gunstensen and Rothman \cite{Gunstensen} invented the technique, several multi-component lattice Boltzmann equation (MCLBE) variants have been developed to address different flow regimes \cite{Succi, Kruger, Huang}. The idea remains a milestone of statistical physics; however, all current MCLBE variants depart substantially from \cite{Gunstensen}, which developed directly from Rothman's earlier immiscible lattice gas cellular automata, \cite{RothmanKeller, Roth_Zal}. % Presently, variants are classified by their physical content \cite{Raabe}. Where the kinetics of phase separation must be considered, ``free--energy'' methods \cite{SwiftOO96pre, m_r_swift} and their thermodynamically-consistent extensions, due to Wagner et al., \cite{Wagner, Wagner_and_Li, Wagner_and_Pooley}, are appropriate tools. For workers with a background in molecular simulation, the Shan--Chen method \cite{Shan_Chen} is a natural choice. In continuum immiscible hydrodynamics, one incorporates the dynamic conditions of stress continuity (i.e. physical principles) and the kinematic condition of mutual impenetrability (with purely logical content), \cite{Landau}, as boundary conditions between separate flows. In this regime it is safe to use the chromodynamic, color-gradient or phase-field method, which we define as a combination of algorithms due to Lishchuk \cite{Lishchuk:02-0} (who uses earlier ideas of Brackbill, \cite{Brackbill}) and d'Ortona et al. \cite{dOrtona}. Chromodynamic MCLBE uses an immersed boundary force \cite{Brackbill, Peskin}, appropriate corrections being applied to the velocity \cite{Guo2002}, alongside a computationally-efficient, analytic component segregation \cite{dOrtona} that distributes an interface which, for continua, should be sharp. 
(Note, Reiss and Phillips \cite{Reiss} developed an inter-facial perturbation to replace immersed boundary forces, which is the most physically consistent encapsulation of MCLB inter-facial tension as a perturbation to the stress.) The method is the most direct descendant of Gunstensen's original, in which the problems of lattice pinning and faceting have been reduced, Reiss and Dellar \cite{Dellar3, Reiss0} having identified their origin and a means to reduce the impact of the unphysical interface width scale. Such limitations notwithstanding, the chromodynamic method is robust, transparent, has low micro-current and allows direct parameterization of inter-facial tension, width \cite{Halliday_PRE1_2007} and the separated fluids' viscosity contrast \cite{Xu}; the interface propagation in the base model is reasonably understood \cite{Kehrwald, Subheder} (but see below) and different CG models have been applied successfully to numerical study of steady and unsteady flow, \cite{Liu0, Leclaire, Ba, Wen}. Here we further investigate the fundamentals of the dynamics and kinematics of a chromodynamic MCLB interface, when it separates fluids at density ratio $\Lambda$. Our data aim to support results by Ba et al. \cite{Ba} and Wen et al. \cite{Wen}, who have benchmarked the technique in complex flow situations using multi-relaxation time (MRT) collision schemes and generalizations of the segregation method of \cite{dOrtona}. % Use of a MRT collision scheme complicates the relationship between model kinematics (which originate in the re-color step; see Sec.~\ref{sec_intro}) and model dynamics (which are extracted by Chapman-Enskog analysis), \cite{Burgin}. But MRT schemes have the decisive advantage of stability. Hence, we develop a Dellar-type MRT scheme for chromodynamic MCLBE which couples model kinematics and dynamics clearly. 
Taking this model as representative of chromodynamic MRT schemes, we extend previous work \cite{Burgin}, to measure the extent to which such models meet appropriate dynamic and kinematic conditions. To achieve this, one should consider fully transient flows. We do so, first with plane and, later, curved interfaces. By making direct comparison with appended semi-analytic calculations, which invoke kinematic and dynamic conditions, we answer the questions: to what extent do the lattice fluids move together at the interface, and to what extent is the continuous traction condition met? % We organize as follows. In Sec.~\ref{sec_intro} we present background detail of our model; in Sec.~\ref{sec_derivation} we derive a MRT scheme for it; in Sec.~\ref{sec_results} we present and use semi-analytic tests alongside refined versions of existing tests, to assess its performance. In Sec.~\ref{sec_conclusions}, we present our conclusions. Details are presented in the appendices. % % % \section{Background : Density difference chromodynamic MCLBE } \label{sec_intro} % Represent red and blue fluid components by distribution functions $R_i(\mathbf{r}, t)$ and $B_i(\mathbf{r}, t)$, where: % \begin{equation} f_i(\mathbf{r}, t) = R_i(\mathbf{r}, t) + B_i(\mathbf{r}, t). \end{equation} % Above, $i=0,1,\ldots,(Q-1)$ indexes the $Q$ lattice links in the model (Fig.~\ref{figA1}). Let $\rho=(\rho_R+\rho_B)$, $\rho_R$, $\rho_B$, $\delta_t$, $c_{i \alpha} $, $w_i$, $u$ and $c_s$ denote nodal density, red nodal density, blue nodal density, time step, the $\alpha$ component of the $i^{th}$ lattice basis vector, the weight for link $i$, fluid velocity and the color-blind speed of sound (or the geometrical lattice tensor isotropy constant). Other symbols have their usual meanings. 
A MRT collision scheme, for a single fluid subject to a body force, $G_{\alpha}(\mathbf{r})$, has kinetic equation: % \begin{eqnarray} \nonumber f_i(\mathbf{r}+\delta_t \mathbf{c}_i, t+\delta_t) = && f_i(\mathbf{r}, t) - \sum_{j=0}^{Q-1} A_{ij} ( f_j(\mathbf{r}, t) - f^{(0)}_j(\rho, \mathbf{u}) )\\ \label{equ_evolution} && + F_{1 i} + F_{2 i}, \end{eqnarray} % where, after \cite{Ba, Wen}, equilibrium $f_i^{(0)}$ is modified to allocate mass away from the rest link ($i=0$), generating a density contrast \cite{Ba, Wen, Liu}: % \begin{equation} \label{equ_equ} f_i^{(0)}(\rho, \mathbf{u}) = \rho \phi_i + w_i \rho\left( \frac{u_\alpha c_{i \alpha}}{c_s^2}+ \frac{u_\alpha u_\beta c_{i\alpha} c_{i\beta}}{2 c_s^4} - \frac{u^2}{2 c_s^2} \right), \end{equation} % with: % \begin{eqnarray} \label{eq_phi} \phi_i = \begin{cases} \frac{\alpha_R \rho_R}{\rho}+\frac{\alpha_B \rho_B}{\rho}, & i = 0, \\ \nonumber k w_i \left[ (1-\alpha_R) \frac{ \rho_R}{\rho} +(1-\alpha_B)\frac{\rho_B}{\rho} \right], & i \neq 0, \end{cases} \\ \end{eqnarray} % where $k=\frac{9}{5}$, in D2Q9. Above, $\alpha_R$ and $\alpha_B$ are considered shortly when discussing the role of $\phi_i$. % In Eq.~(\ref{equ_evolution}), $A_{ij}$ is a collision matrix element and ``sources'' $F_{1i}$ and $F_{2i}$ correct the dynamics for the effects of large density contrasts and $\mathbf{G}$ respectively \cite{Burgin}. 
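To make the role of $\phi_i$ concrete, the modified equilibrium of Eqs.~(\ref{equ_equ}) and (\ref{eq_phi}) can be evaluated directly. The sketch below (Python; the D2Q9 ordering, with odd $i$ on the longer links as in Fig.~\ref{figA1}, and all function names are our assumptions, not part of any MCLBE code) verifies that the $\phi_i$ redistribution leaves the density and momentum moments intact:

```python
import numpy as np

# D2Q9 basis following Fig. A1: i=0 is the rest link, odd i are the
# longer (diagonal) links.  This explicit ordering is an assumption.
c = np.array([[0, 0], [1, 1], [1, 0], [1, -1], [0, -1],
              [-1, -1], [-1, 0], [-1, 1], [0, 1]])
w = np.array([4/9] + [1/36, 1/9]*4)    # rest, then alternating diag/axis
cs2 = 1/3                              # color-blind sound speed squared
k = 9/5                                # D2Q9 value quoted in the text

def phi(rho_R, rho_B, alpha_R, alpha_B):
    """phi_i: re-allocates rest-link mass to set the density contrast."""
    rho = rho_R + rho_B
    p = k*w*((1 - alpha_R)*rho_R/rho + (1 - alpha_B)*rho_B/rho)
    p[0] = alpha_R*rho_R/rho + alpha_B*rho_B/rho
    return p

def f_eq(rho_R, rho_B, u, alpha_R, alpha_B):
    """Modified equilibrium with the phi_i replacing the usual w_i rho."""
    rho = rho_R + rho_B
    cu = c @ u
    return (rho*phi(rho_R, rho_B, alpha_R, alpha_B)
            + w*rho*(cu/cs2 + cu**2/(2*cs2**2) - (u @ u)/(2*cs2)))
```

The check works because $k\sum_{i\neq 0} w_i = \frac{9}{5}\cdot\frac{5}{9} = 1$, so the $\phi_i$ sum to unity and, by lattice symmetry, contribute nothing to the first moment.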
Term $F_{1i}$ is expressed in tensor Hermite polynomials: % \begin{equation} \label{equ_source1} F_{1i} = w_i T_{\alpha \beta}(\rho_R, \rho_B, \rho^N,\Lambda, \mathbf{u}) (c_{i \alpha } c_{i \beta} - c_s^2 \delta_{\alpha \beta}), \end{equation} % and to embed $\mathbf{G}$ we use the form devised by Luo \cite{Luo}: % \begin{eqnarray} \nonumber F_{2 i} & = & w_i \bigg( \frac{\mathbf{G} \cdot \mathbf{c}_{i} }{ c_s^2} + \frac{1}{2 c_s^4 } \left( 1 - \frac{\lambda_3}{2} \right) \times \\ \label{equ_source2} && ( G_{\alpha} u_{\beta} + G_{\beta} u_{\alpha} ) (c_{i \alpha } c_{i \beta} - c_s^2 \delta_{\alpha \beta})\bigg). \end{eqnarray} % Term $T_{\alpha \beta}$ and eigenvalue $\lambda_3$ (which determines lattice fluid kinematic viscosity) are considered in Appendix~\ref{sec_appendix1}. Note, we assume force-adjusted macroscopic observables: % \begin{eqnarray} \label{equ_u_def} (\rho_R, \rho_B) = \sum_i \left( R_i,B_i \right), \quad \mathbf{u} = \frac{\sum_i f_i(\mathbf{r}, t) \mathbf{c}_i }{\rho } + \frac{\mathbf{G}}{2\rho}. \end{eqnarray} % Return now to the density contrast mechanism embedded in $f_i^{(0)}$ and $F_{1i}$. Parameters $\alpha_R$ and $\alpha_B$ are chosen such that: % \begin{equation} \label{equ_fred} \Lambda = \frac{\rho_{0R}}{\rho_{0B}} = \frac{c_{sB}^2}{c_{sR}^2} = \left( \frac{1-\alpha_B}{1-\alpha_R} \right), \end{equation} % i.e. to control density contrast, $\Lambda$, via the sonic speed. Eq.~(\ref{equ_fred}) supports a condition for mechanical stability, $\rho_{0R}c^2_{sR} = \rho_{0B}c^2_{sB}$, where $\rho_{0C}$ is the density deep within the component $C = R,B$. 
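Eq.~(\ref{equ_fred}) leaves one of the two segregation parameters free: fixing $\alpha_R$ and the target contrast $\Lambda$ determines $\alpha_B$. A one-line helper (hypothetical name) makes the inversion explicit:

```python
def alpha_B_from(Lambda_, alpha_R):
    """Invert Lambda = (1 - alpha_B)/(1 - alpha_R) for alpha_B."""
    return 1.0 - Lambda_*(1.0 - alpha_R)
```

For example, $\Lambda = 8$ with $\alpha_R = 0.9$ returns $\alpha_B = 0.2$, the pair used for the plane-interface benchmark below; requiring $\alpha_B \in [0,1)$ then bounds the contrast attainable for a given $\alpha_R$.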
% Components are identified by a color index $\rho^N(\mathbf{r},t)$: % \begin{equation} \label{equ_rhoN_def} \rho^N (\mathbf{r},t) \equiv \frac{ \left( \frac{\rho_R (\mathbf{r},t) }{\rho_{0R} } - \frac{\rho_B (\mathbf{r},t) }{\rho_{0B} } \right) }{ \left( \frac{\rho_R (\mathbf{r},t) }{\rho_{0R} } + \frac{\rho_B (\mathbf{r},t) }{\rho_{0B} } \right) } \in [-1,1], \end{equation} % \cite{Ba, Wen, Liu}, in terms of which inter-facial tension is created by the action of force: % \begin{equation} \label{equ_stforce} \mathbf{G} = \frac{1}{2} \sigma K \mathbf{\nabla} \rho^N, \end{equation} % where $\sigma$ is the inter-facial tension and the mean curvature is measured as follows \cite{Brackbill}: % \begin{equation} K = \underline{ \mathbf{\nabla} } \cdot \hat{n}, \quad \hat{n} = - \left( \frac{\nabla \rho^N }{ |\nabla \rho^N |} \right), \end{equation} % for a red drop, with the usual convention on surface normal, $\hat{n}$. Color field $\rho^N$ is considered continuous, changing rapidly only in the inter-facial region. Its variation may be sharpened \cite{Dellar3, Reiss0} and it may be used to control kinematic viscosity, by setting $\nu(\rho^N) = \frac{1}{6} \left(\frac{2}{\lambda_3 (\rho^N)} - 1\right)$, \cite{Xu3D, Xu2D}. Kinetic-scale, post-collision color segregation is an adaptation of \cite{dOrtona}: % \begin{eqnarray} \label{eq_re_color} \nonumber C_i^{++}(\mathbf{r},t) & = & \frac{\rho_C(\mathbf{r}, t) }{\rho(\mathbf{r}, t) }f_i (\mathbf{r}, t)^+ \\ & \pm & \beta \frac{\phi_i (\mathbf{r},t) \rho_R(\mathbf{r}, t) \rho_B(\mathbf{r}, t )}{\rho(\mathbf{r}, t)} \hat{\mathbf{n}} \cdot \delta _t \hat{\mathbf{c}}_i, \end{eqnarray} % where superscript $+$ ($++$) denotes a post-collision (post re-color) quantity and $\beta$ is a chosen parameter \cite{dOrtona}. This simple segregation rule is mass-conserving, local (given a director, $\hat{\mathbf{n}}$) and ``bottom-up'', i.e. a kinetic scale postulate. It is usually ignored in deriving macroscopic model behavior. 
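The re-color step of Eq.~(\ref{eq_re_color}) is purely local once the director $\hat{\mathbf{n}}$ is supplied. A minimal sketch (our naming, with $\delta_t = 1$ and the bare lattice vectors $\mathbf{c}_i$ in the projection, both assumptions) exhibits its conservation properties:

```python
import numpy as np

# D2Q9 basis of Fig. A1 (i=0 rest link; odd i on the diagonals) and the
# phi_i of the modified equilibrium; the ordering is our assumption.
c = np.array([[0, 0], [1, 1], [1, 0], [1, -1], [0, -1],
              [-1, -1], [-1, 0], [-1, 1], [0, 1]])
w = np.array([4/9] + [1/36, 1/9]*4)
k = 9/5

def phi(rho_R, rho_B, alpha_R, alpha_B):
    rho = rho_R + rho_B
    p = k*w*((1 - alpha_R)*rho_R/rho + (1 - alpha_B)*rho_B/rho)
    p[0] = alpha_R*rho_R/rho + alpha_B*rho_B/rho
    return p

def recolor(f_post, rho_R, rho_B, n_hat, beta, alpha_R, alpha_B):
    """d'Ortona-type segregation: + sign for red, - sign for blue."""
    rho = rho_R + rho_B
    seg = (beta*phi(rho_R, rho_B, alpha_R, alpha_B)
           * rho_R*rho_B/rho * (c @ n_hat))
    return rho_R/rho*f_post + seg, rho_B/rho*f_post - seg
```

By construction $R_i^{++} + B_i^{++} = f_i^+$, and since $\sum_{i\neq 0} w_i \mathbf{c}_i = \mathbf{0}$ the segregation flux sums to zero, so the nodal mass of each color is conserved separately.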
However, Eq. (\ref{eq_re_color}) is consistent with a modified equation for uniform fluid motion \cite{Burgin}: % \begin{eqnarray} \label{eq_rhor3} \nonumber && \frac{D \rho_R}{D t} +\frac{1}{2} \delta_t \frac{\partial^2 \rho_R}{\partial t^2} \\ \nonumber &=& \frac{k}{2} c_s^2 (1-\alpha_R) \delta_t \nabla^2 \left( \frac{\rho_R^2}{\rho} \right)\\ \nonumber && + \frac{k}{2} c_s^2 (1-\alpha_B) \delta_t \nabla^2 \left( \frac{\rho_R \rho_B}{\rho} \right)\\ \nonumber&& + \frac{1}{2}\delta_t u_\alpha u_\beta \partial_\alpha \partial_\beta \rho_R \\ \nonumber&& -\delta_t \beta (1-\alpha_R) k c_s^2 n_\gamma \partial_\gamma \left( \frac{\rho_R^2 \rho_B}{\rho^2} \right) \\ \nonumber&& -\delta_t \beta (1-\alpha_B) k c_s^2 n_\gamma \partial_\gamma \left( \frac{\rho_R \rho_B^2}{\rho^2} \right) \\ && + 2 \delta_t c_s^4 \partial_{\alpha} \partial_{\beta}\left( \frac{ \rho_R T_{\alpha \beta } }{ \rho} \right). \end{eqnarray} % Above, the last term on the right hand side originates in the correction term, $F_{1i}$ (see Eq.~(\ref{equ_source1})). Burgin et al. \cite{Burgin} give this term for an LBGK collision model; on neglecting it they find, by solving Eq.~(\ref{eq_rhor3}): $\rho_R (\mathbf{r},t) = \frac{\rho_{0R}}{2} \big(1 + \tanh(\beta \hat{\mathbf{n}} \cdot (\mathbf{r} - \mathbf{u}t)) \big)$, with equivalent behavior for $\rho_B$. When substituted in Eq.~(\ref{equ_rhoN_def}), these variations reveal a smoothly varying color index: % \begin{eqnarray} \label{equ_rhoN} \rho^N (\mathbf{r},t) = \tanh\left[ \beta \hat{\mathbf{n}} \cdot (\mathbf{r}- \mathbf{u}t ) \right]. \end{eqnarray} % Quantity $\rho^N$ is a material invariant, at leading order (see below). On the other hand, the last term in Eq.~(\ref{eq_rhor3}) constitutes an error associated with pure advection, present even in uniform flow, which is shown to restrict the applicability of the method. 
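The algebra behind Eq.~(\ref{equ_rhoN}) is easily confirmed numerically: substituting the tanh profiles for $\rho_R$ and $\rho_B$ into the definition of $\rho^N$, Eq.~(\ref{equ_rhoN_def}), collapses exactly to a single tanh, independent of the density ratio. A short sketch, with illustrative parameter values of our choosing:

```python
import numpy as np

beta, rho0R, rho0B = 0.7, 8.0, 1.0     # e.g. Lambda = 8
x = np.linspace(-10.0, 10.0, 201)      # comoving coordinate n.(r - u t)

# Burgin et al. interface profiles for the two colors
rho_R = 0.5*rho0R*(1 + np.tanh(beta*x))
rho_B = 0.5*rho0B*(1 - np.tanh(beta*x))

# Color index: bulk-scaled density difference over bulk-scaled sum
rhoN = (rho_R/rho0R - rho_B/rho0B)/(rho_R/rho0R + rho_B/rho0B)
```

The bulk-scaled sum in the denominator is identically unity, which is why $\rho^N$ remains a pure tanh of width $\sim 1/\beta$ even at large $\Lambda$.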
% As remarked above, taking the order $\delta_t$ terms in Eq.~(\ref{eq_rhor3}): % \begin{equation} \label{equ_kinematic_origin} \frac{\partial \rho_R}{\partial t} + u_\gamma \partial_\gamma \rho_R \approx 0, \quad \frac{\partial \rho_B}{\partial t} + u_\gamma \partial_\gamma \rho_B \approx 0, \end{equation} which is useful in deriving our MRT scheme, in Sec.~(\ref{sec_derivation}), where Eq.~(\ref{equ_kinematic_origin}) is taken to imply that on short timescales, $t_0$, the color index is an approximate material invariant, which eliminates its $t_0$ derivatives from the Euler equation. Note, Eqs.~(\ref{equ_source1}), (\ref{equ_source2}) and (\ref{equ_stforce}) require numerical gradients. Typically, compact second order stencils, relying on lattice isotropies, are found to be sufficient in MCLBE, but higher order, non-compact versions (see Sec.~(\ref{sec_stencils})) are helpful here. % % % \section{ MRT scheme for large density difference chromodynamic MCLBE} \label{sec_derivation} % Dellar \cite{Dellar2003, Dellar2008} developed an MRT scheme for single component flow, which was extended to accommodate the force, $\mathbf{G}$, used in chromodynamic lattice Boltzmann multi-component flow \cite{Xu}. % Here, we further adapt that method to completely immiscible fluids, with density contrast $\Lambda$, where it is necessary to consider large density gradients in the region of rapidly changing $\rho^N$. Dellar's is arguably the most aesthetic and logically consistent MRT scheme. $\mathbf{A}$ is defined by its eigenvalues and eigenvectors, only a subset of which must be chosen, a majority being assigned in the Chapman-Enskog process. Working from a weighted orthogonal modal basis introduced by Junk \cite{Junk}, Dellar \cite{Dellar2008, Dellar2003} devised a MRT scheme with less coupling between the density, momentum and stress modes and the 3 ``ghost'' modes (in D2Q9) than is present in the more commonly used MRT scheme of Lallemand and Luo \cite{Lallemand}. 
% We derive, in Appendix~\ref{sec_appendix1}, a MRT scheme-based model, generalized to chromodynamic immiscible fluids. Our analysis, performed in $D2Q9$, attempts to clarify the coupling between collision and model kinematics. See also \cite{Burgin}. The resulting scheme involves a set of macroscopic modes, $\mathbf{h}^{(p)}$, defined in Table~\ref{tabA1}; a majority representing observables, e.g. momentum components. % % % \begin{figure}[ht] \begin{center} \includegraphics[width=4cm]{latticegrid2.png} \caption{ Schematic. Square D2Q9 lattice with our indexing convention. Odd values of $i$ identify the longer links.} \label{figA1} \end{center} \end{figure} % % % \begin{table*}[htb] \resizebox{13.7cm}{!}{ \begin{tabular}{ |c|c|c|c|c|L|c| } \hline eigenvector & component & definition & eigenvalue, $\lambda_p$ & mode, $m^{(p)}$ & physical interpretation & equilibrium \\ \hline $\mathbf{h}^{(0)}$ & $h_i^{(0)}$ & $w_i$ & 0 & $\rho$ & density & $\rho$ \\ \hline $\mathbf{h}^{(1)} $ & $h_i^{(1)}$ & $w_ic_{ix}$ & 0 & $\rho u_x$ & x momentum & $\rho u_x$ \\ \hline $\mathbf{h}^{(2)} $ & $h_i^{(2)}$ & $w_i c_{iy}$ & 0 & $\rho u_y$ & y momentum & $\rho u_y$ \\ \hline $\mathbf{h}^{(3)} $ & $h_i^{(3)}$ & $w_i c_{ix}^2$ & $\lambda_3$ & $\Pi_{xx}$ & Momentum flux component & $\Pi_{xx}^{(0)}$ \\ \hline $\mathbf{h}^{(4)} $ & $h_i^{(4)}$ & $w_i c_{iy}^2$ & $\lambda_3$ & $\Pi_{yy}$ & Momentum flux component & $\Pi_{yy}^{(0)}$ \\ \hline $\mathbf{h}^{(5)} $ & $h_i^{(5)}$ & $w_i c_{ix}c_{iy}$ & $\lambda_3$ & $\Pi_{xy}$ & Momentum flux component & $\Pi_{xy}^{(0)}$ \\ \hline $\mathbf{h}^{(6)} $ & $h_i^{(6)}$ & $g_i$ & $\lambda_6$ & $N$ & - & 0 \\ \hline $\mathbf{h}^{(7)} $ & $h_i^{(7)}$ & $g_i c_{ix}$ & $\lambda_7$ & $J_x$ & - & 0 \\ \hline $\mathbf{h}^{(8)} $ & $h_i^{(8)}$ & $g_i c_{iy}$ & $\lambda_7$ & $J_y$ & - & 0 \\ \hline \end{tabular} } \caption{ Collision matrix eigenspectrum. 
Left row eigenvectors (projectors), $\mathbf{h}^{(p)}, p = 0,1,...,8$, corresponding eigenvalues, corresponding physical significance (if any) and corresponding equilibria for mode $m^{(p)} \equiv \sum_i h^{(p)}_if_i$ of the collision matrix, $\mathbf{A}$.} \label{tabA1} \end{table*} % % We define a projection matrix, comprised of orthogonal left row collision matrix eigenvectors, $\mathbf{h}^{(p)}$, each a projector of a particular mode, $m^{(p)}$, \begin{equation} \nonumber \mathbf{M}\equiv \left( \mathbf{h}^{(0)}, \mathbf{h}^{(1)}, \cdots, \mathbf{h}^{(8)} \right)^T, \end{equation} % such that: % \begin{eqnarray} \nonumber &&\left(m^{(0)},m^{(1)},...,m^{(8)}\right)^T = \mathbf{M}\ \mathbf{f} \\ \nonumber &&= \left( \rho, \rho u_x, \rho u_y, \Pi_{xx}, \Pi_{yy}, \Pi_{xy}, N, J_x, J_y \right)^T, \end{eqnarray} % (see Table~\ref{tabA1}). Above, column vector $\mathbf{f} \equiv \left( f_0, f_1,...,f_8 \right)^T $. We define all the $\mathbf{h}^{(p)}$ as weighted polynomial expressions in the lattice basis of Fig.~\ref{figA1}, because a subset (of the $\mathbf{h}^{(p)}$) is naturally identified as such when deriving the dynamics: see Appendix~\ref{sec_appendix1}. Project Eq.~(\ref{equ_evolution}) using left multiplication by $\mathbf{M}$: % \begin{equation} \label{equ_temp1} \mathbf{M \ f^+} = \mathbf{M \ f} + \mathbf{M \ A \ M^{-1}} \left( \mathbf{M \ f^{(0)}} -\mathbf{M \ f} \right) +\mathbf{M \ F}, \end{equation} % where $\mathbf{F}$ is the column vector whose elements are $F_i = F_{1i} + F_{2i}$. The projected evolution equation decomposes to forced scalar relaxations for each mode: % \begin{eqnarray} \label{equ_temp} \nonumber m^{(p)+} &=& m^{(p)} + \lambda_p \left( m^{(0) (p)} - m^{(p)} \right) + S^{(p)}, \\ S^{(p)} &=& \sum_{j=0}^8 M_{pj} F_j, \quad p = 0,1,2,...,(Q-1). \end{eqnarray} % In Eq.~(\ref{equ_temp}), we use the properties of the $\mathbf{h}^{(p)}$, from which $\mathbf{M\ A}=\mathbf{ \Lambda \ M} $, i.e. 
$\mathbf{ \Lambda} = \mathbf{M\ A \ M^{-1}}$, with $\mathbf{\Lambda}\equiv diag (\lambda_0,\ \lambda_1,...,\lambda_8)$. Note, zero eigenvalues are associated with physical modes subject to conservation principles. % Developing a MRT scheme now reduces to specifying equilibria, $ m^{(0) (p)}$, and sources $ S^{(p)}$, such that a Chapman-Enskog expansion of the kinetic scale dynamics predicts that the physical modes (Tab.~\ref{tabA1}) conform with the continuity and Navier-Stokes equations. See Appendix~\ref{sec_appendix1}. % An advantage of Dellar's approach is that $\mathbf{M}$ may be inverted, using lattice isotropies. The modal evolutions in Eq.~(\ref{equ_temp}) are inverted to yield $\mathbf{f^+} = \mathbf{M^{-1} \ m^+}$, so the post-collision distribution function is constructed directly from the post-collision $m^{(p)+}$: % \begin{eqnarray} \nonumber f_i^+&=&(M)_{ij}^{-1}\ m_j^+\\ \nonumber &=& w_i \Bigg\{ \bigg[ 2-\frac{3}{2}\left( c_{ix}^2+c_{iy}^2 \right) \bigg] \rho \\ \nonumber & & \ \ \ \ \ + 3 \left( (\rho u_x)^+ c_{ix} + (\rho u_y)^+ c_{iy}\right) \\ \nonumber & & \ \ \ \ \ + \frac{9}{2} \left( \Pi_{xx}^+ c_{ix}^2 +2\Pi_{xy}^+ c_{ix}c_{iy} +\Pi_{yy}^+ c_{iy}^2 \right)\\ \nonumber & & \ \ \ \ \ -\frac{3}{2} \left(\Pi_{xx}^+ + \Pi_{yy}^+\right)\\ \nonumber & & \ \ \ \ \ + \frac{1}{4} g_i N^+ + \frac{3}{8} g_i \left( J_x^+ c_{ix} + J_y^+ c_{iy} \right) \Bigg\}, \end{eqnarray} % with $(\rho u_x)^+$, $(\rho u_y)^+$, $\rho^+$, $\Pi_{xx}^+$, $\Pi_{xy}^+$, $\Pi_{yy}^+$, $N^+$, $J_x^+$ and $J_y^+$ given explicitly in Eqs.~(\ref{moev1} - \ref{moev7}). Of course, color is finally re-allocated according to Eq.~(\ref{eq_re_color}). % Tensor $T_{\alpha \beta}$ in Eqs.~(\ref{equ_source1}), (\ref{eq_rhor3}) is shown, in Appendix~\ref{sec_appendix1}, Eq.~(\ref{equ_T_identity}), to be identical to that of Burgin et al. \cite{Burgin}, for an LBGK model. 
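The quoted reconstruction can be verified directly. Building $\mathbf{M}$ from the bare polynomial projectors of Table~\ref{tabA1} (so that $\mathbf{M}\mathbf{f}$ returns the observables) and assuming Dellar's ghost vector $g_i$ to take the value $1$ on the rest link, $-2$ on the short links and $4$ on the long links (the text does not list $g_i$, so this is our assumption), the weighted polynomial expansion above is exactly $\mathbf{M}^{-1}$:

```python
import numpy as np

# D2Q9 basis of Fig. A1: i=0 rest; odd i are the longer (diagonal) links.
c = np.array([[0, 0], [1, 1], [1, 0], [1, -1], [0, -1],
              [-1, -1], [-1, 0], [-1, 1], [0, 1]], dtype=float)
w = np.array([4/9] + [1/36, 1/9]*4)
# Assumed Dellar ghost vector: 1 (rest), 4 (diagonal), -2 (axis links)
g = np.array([1.0] + [4.0, -2.0]*4)

cx, cy = c[:, 0], c[:, 1]
# Rows project out (rho, rho ux, rho uy, Pi_xx, Pi_yy, Pi_xy, N, Jx, Jy)
M = np.stack([np.ones(9), cx, cy, cx**2, cy**2, cx*cy, g, g*cx, g*cy])

# Columns of the quoted reconstruction f_i^+ = (M^{-1})_{ij} m_j^+
R = np.stack([w*(2 - 1.5*(cx**2 + cy**2)), 3*w*cx, 3*w*cy,
              w*(4.5*cx**2 - 1.5), w*(4.5*cy**2 - 1.5), 9*w*cx*cy,
              0.25*w*g, 0.375*w*g*cx, 0.375*w*g*cy], axis=1)
```

The factors $\frac{1}{4}$ and $\frac{3}{8}$ are the inverse norms of $g_i$ and $g_i c_{ix}$ in the $w_i$-weighted inner product, since $\sum_i w_i g_i^2 = 4$ and $\sum_i w_i g_i^2 c_{ix}^2 = \frac{8}{3}$.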
% % % \section{ Results and Discussion} \label{sec_results} % The accuracy of our multi-component scheme of Sec.~\ref{sec_derivation} is assessed against the conditions of mutual impenetrability (model kinematics) and viscous stress transmission (model dynamics). Transfer of momentum between immiscible fluids is controlled by boundary conditions which refer to both kinematics and dynamics. In Sec.~\ref{sec_test2} we develop two transient test-bench flows which rely upon these conditions and compare them with data. % We mainly consider, here, the dynamics of the scheme, its kinematics having been effectively assessed by Burgin et al., \cite{Burgin}, on the following argument. % Whilst the work of Burgin et al. uses an LBGK collision method (to highlight the connection between the model kinematics and dynamics), the key tests applied consider performance in uniform flow, with a flat interface, i.e. $\mathbf{G} = \mathbf{0}$. In this regime, there is no practical distinction between the operation of MRT and LBGK schemes. Put another way, Burgin's simulation data apply to the chromodynamic MCLBE MRT method of Sec.~\ref{sec_derivation}. (Note, however, we have confirmed this explicitly.) Moreover, the kinetic equation source due to density difference effects (see Eq.~(\ref{equ_source1})) is identical to that for LBGK collision. We consider here curved fluid-fluid interfaces, as well as plane interfaces. No assessment would be complete without consideration of the inter-facial micro-current. % For all the data presented below, we relax the ghost modes of our MRT scheme to equilibrium, i.e. $\lambda_7=\lambda_8=1$. % % % \subsection{Plane Interfaces} % The data in Fig.~\ref{Ba_test} compare simulation and theory. We test the steady-state of uni-directional, pressure-driven flow, with the transverse density stratification illustrated in Fig.~\ref{fig_BaTest}. Note, we do not benchmark against the solution for discontinuous variation of density (see e.g. 
Ba et al., \cite{Ba}). Instead, we compare simulation data (crosses) with a semi-analytical solution in Appendix~\ref{sec_sol_Ba_test}, which accounts for the effects of continuous variation of density at the interface (continuous line). For these data, the simulation width $L_x = 200$, $\alpha_B = 0.2$, $\alpha_R = 0.9$ (corresponding to a density contrast between the separated components' bulks of $\Lambda = 8$) and $\nu_B = \nu_R = 0.333$. These data compare well with theory and with data generated by identical tests applied to the MRT schemes of Ba et al. \cite{Ba}, which are based upon equivalent MCLBE interface schemes and traditional MRT collision operators. Note, however, that we find it necessary to use the high order stencils of Appendix~\ref{sec_stencils} to compute density gradients. It is important to note that the steady-state data in Fig.~\ref{Ba_test} do not verify instantaneous compliance with the kinematic (impenetrability) and dynamic (continuous traction) conditions. For that, one needs a transient flow. Semi-analytical solutions for multi-component flow with flat and curved interfaces, which reference the key boundary conditions at issue, are derived in Appendix~\ref{sec_test2}. % % % \begin{figure}[H] \begin{center} \includegraphics[width=9cm]{ba2.png} \caption{Transverse variation of the flow velocity for the test illustrated in Fig.~\ref{fig_BaTest}. Simulation data are represented by crosses and semi-analytic theory which accounts for the transverse variation of the density is indicated by the continuous line. For these data, $L_x=200$, $\alpha_B = 0.2$, $\alpha_R = 0.9$, $\Lambda=8$, $\nu_1=\nu_2 = 0.333$. } \label{Ba_test} \end{center} \end{figure} % % % In Appendix~\ref{sec_test2}, we consider the temporal decay of uni-directional flows of two liquids of different density separated by a flat interface. The systems have a defined initial velocity profile and the motion decays to rest. 
The geometry and flow initial conditions defining our tests are shown schematically in e.g. Fig.~\ref{fig_system}. The density and, with it, the kinematic viscosity change at the interface, which is tangentially sheared. We have obtained analytical benchmarks for this problem, in the sharp interface limit, in Appendix~\ref{sec_test2}, by a straightforward application of Sturm-Liouville theory \cite{Arfken}. Fig.~\ref{fig_DynamicsTest} compares simulation data (crosses) and the analytical solution, for a large range of density contrasts, $\Lambda$ (see caption). For these data, shear viscosity $\eta = 0.166$ and segregation parameter $\beta = 0.5$ are constant whilst kinematic viscosity $\nu = \frac{\eta}{\rho}$ changes. This change is assumed discontinuous in Appendix~\ref{sec_test2}, whereas in simulation density varies across the interface. Even so, it is clear that these data confirm that the continuous traction condition operates across the interface throughout the transient, not simply that the correct \emph{steady-state} profile is obtained. This assertion is supported by the data in Tab.~(\ref{tab_time_var_eps}), which show the domain relative error between the semi-analytic solution for $u(x,t)$ and the simulated solution, $u^*(x,t)$: % \begin{equation} \label{equ_aps_defn} \epsilon (t) = \frac{ \sum_i |u(x_i,t) -u^*(x_i,t)|^2}{\max(u^*(x_i,t))^2}, \end{equation} % which never exceeds $1\%$. Above, $x_i$ denotes the discrete, ``on-lattice'' value of the transverse co-ordinate. In Fig.~(\ref{fig_DynamicsTest}) the denser fluid is on the right. Its greater density means that it is not accelerated by the traction of the fluid on the left as strongly as the fluid on the left is accelerated by the traction of the fluid on the right. 
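The error measure of Eq.~(\ref{equ_aps_defn}) is straightforward to evaluate on the lattice; a sketch with our (hypothetical) function name:

```python
import numpy as np

def epsilon(u_theory, u_sim):
    """Squared residuals summed over lattice sites, normalised by the
    squared peak simulated velocity."""
    u_theory, u_sim = np.asarray(u_theory), np.asarray(u_sim)
    return np.sum(np.abs(u_theory - u_sim)**2) / np.max(u_sim)**2
```

Identical profiles return zero; the normalisation by the squared peak simulated velocity makes the measure insensitive to the overall flow amplitude.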
% \begin{figure*}[htb] \begin{center} \includegraphics[width=18cm]{SLFlowTestData.png} \caption{Comparison of simulation data (crosses) and semi-analytical solution (see Appendix~(\ref{sec_test2})) for a large range of density contrasts, $\Lambda$. For these data, shear viscosity $\eta =$ constant, whilst kinematic viscosity, $\nu = \frac{\eta}{\rho}$, changes. The interface centers on $x=500$ lattice units, with the fluid on the right in all these figures being the denser fluid. For panels (A)--(D), $\Lambda = 1, 20, 31.25, 50$. These data confirm continuity of velocity and correct transmission of stress across a flat, sheared interface. } \label{fig_DynamicsTest} \end{center} \end{figure*} % % % \begin{table}[h!] \centering \resizebox{6cm}{!} {% \begin{tabular}{| c | c | c | c | c |} \hline \multicolumn{1}{|c|}{} & \multicolumn{4}{ c|}{Lattice relative error (\%)} \\ \hline T(lu) & $\Lambda=10$ & $\Lambda=20$ & $\Lambda = 31.25$ & $\Lambda=50$ \\ [0.5ex] \hline 1000 & 0.299 & 0.560 & 0.754 & 0.979 \\ 10000 & 0.114 & 0.199 & 0.252 & 0.300 \\ 20000 & 0.080 & 0.135 & 0.167 & 0.196 \\ 50000 & 0.047 & 0.075 & 0.092 & 0.115 \\ [0.5ex] \hline \end{tabular} }% \caption {Time variation of $\epsilon (t)$, defined in Eq.~(\ref{equ_aps_defn}).} \label{tab_time_var_eps} \end{table} Note that data were matched between simulation and theory by equating the non-dimensional groups which scale the MCLBE dynamics and the corresponding unidirectional Navier-Stokes equation (Eq.~\ref{eq:Main}), as follows: $\frac{\nu(\lambda_3)^* T^*}{ H^{* 2}} = \frac{ \nu T }{ H^2 }$, where the quantities with (without) asterisks are in lattice (physical) units. From this, we find the simulation time corresponding to physical time $T$ as: % \begin{equation} T^* =\frac{ \nu }{\nu(\lambda_3)^*} \left( \frac{H^*}{H} \right)^2 T. \end{equation} % % % \subsection{Curved Interfaces} % Consider now curved interfaces in two dimensions. 
% The expected dependence of the inter-facial pressure step on the surface tension parameter, $\sigma$, was, naturally, confirmed for the range of $\Lambda \in[10^{-3},10^3]$ (the range of data in Tables~\ref{tab_micro1} and \ref{tab_micro2}) and $\sigma \in[0,0.2]$. We proceed to consider other tests. % % % \subsubsection{Inter-facial Micro-current} % We study a red drop, initialized with radius $R=60$, on a lattice of size $200\times200$, with periodic boundary conditions. An inter-facial micro-current is present in all MCLBE models; see Fig.~\ref{fig_micro_flow}. It has been argued \cite{Halliday_micro} that micro-current circulation is a ``correct'' hydrodynamic response to the application of a force, or perturbation, which is not native to the continuum scale (where an interface is discontinuous). It might be argued that a micro-current is a correct hydrodynamic response to an incorrect external force. We return to this point shortly. % For the particular case of chromodynamic MCLBE, the spatial pattern of non-isotropic numerical errors not offset by pressure (density) changes drives a persistent circulation. The sources of numerical error lie in numerical derivatives, discretization error associated with the Chapman-Enskog expansion, and the re-color step. With an interface force, setting $K=\frac{1}{R}$ (i.e. circumventing a numerical calculation of $K$) after Eq.~(\ref{equ_stforce}) significantly reduces micro-current activity \cite{Halliday_micro}. Figure \ref{fig_micro_flow} below compares the micro-current flow field, at $\Lambda = 10$, for calculated and fixed curvature drops. Flow field vectors are normalized in each plot. The flow in the case of fixed curvature is actually much weaker (refer to Tables~\ref{tab_micro1} and \ref{tab_micro2}) and more restricted to the inter-facial region. We will return to this matter shortly. 
With $\Lambda=1$ (no density contrast), numerical error derives only from the interface force, with the dominant contribution arising from calculation of the local interface curvature, $K$. In the presence of component density differences, we introduce a need to correct the dynamics, which, as we see in Sec.~(\ref{sec_derivation}), introduces strong inter-facial density gradients. Evolution equation source terms which rely on numerical derivatives of density add error to that already present in the Lishchuk, or interface, force. Here, we make a quantitative assessment of the impact of that additional error.
%
We present micro-current data for a range of separated components' density contrast, $\Lambda$, in Tables~\ref{tab_micro1} (fixed $K$) and \ref{tab_micro2}. Based on the above discussion, the magnitude of the micro-current depends on $\Lambda$ (and, of course, $|\mathbf{G}|$), but is largely independent of collision scheme. This is confirmed in the data in Tables~\ref{tab_micro1} and \ref{tab_micro2}. (We note that changing the collision model to an LBGK scheme does not alter any of these data by more than a few percent.) For small density contrasts ($\Lambda$ close to unity), when density contrast correction terms are small, the domain maximum micro-current flow velocity magnitude, $|\mathbf{u}|_{max} = \max( |\mathbf{u}|)$, is small. As the value of $\Lambda$ increases (or decreases, in the case of a drop less dense than its surroundings) the micro-current intensity increases.
%
For small density contrasts ($\Lambda$ close to unity), the micro-current regime is different, now being dominated by the interface force. First, we note a dramatic reduction in micro-current recorded in both Tables~\ref{tab_micro1} (fixed $K$) and \ref{tab_micro2}, as inter-facial density gradients reduce in size. Second, in comparing data for $\Lambda\in [0.1,10]$ \emph{between} Tables~\ref{tab_micro1} (fixed $K$) and \ref{tab_micro2}, we observe the signature reduction in micro-current activity when we eliminate reliance on a $K$ computed from second numerical gradients.
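The activity metric used in Tables~\ref{tab_micro1} and \ref{tab_micro2} is simply the domain maximum of the flow speed; a minimal sketch, with a synthetic circulating field of known peak speed standing in for MCLBE output:

```python
import numpy as np

def micro_current_activity(ux, uy):
    """Domain maximum of the flow speed, |u|_max = max(|u|)."""
    return np.sqrt(ux**2 + uy**2).max()

# Synthetic circulating field with known peak speed A (an illustrative value).
A = 2.0e-5
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x, indexing="ij")
r2 = X**2 + Y**2
ux = -A * Y * np.exp((1.0 - r2) / 2.0)   # |u| = A r exp((1 - r^2)/2), peaked at r = 1
uy = A * X * np.exp((1.0 - r2) / 2.0)

u_max = micro_current_activity(ux, uy)   # = A, attained on the ring r = 1
```

In practice the same reduction is applied to a simulation snapshot once the flow has reached steady state.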
%
For larger density contrasts, where the principal cause of the circulation is presumably the density contrast, the data of Tables~\ref{tab_micro1} and \ref{tab_micro2} both comply with a scaling $|\mathbf{u}|_{max} \sim 7.4 \times 10^{-3} \Lambda$.
%
%
%
\begin{figure}[ht] \begin{center} \includegraphics[width=6cm]{Microcurrent_subplot_MRT_v.png} \caption{ Normalized micro-current flow excerpt for $\Lambda = 10$ in the vicinity of a drop, radius $R=60$, for fixed $K=\frac{1}{R}$ (left) and numerically calculated curvature $K$ (see Eq.~(\ref{equ_stforce})) (right). See Tables~\ref{tab_micro1} and \ref{tab_micro2} to scale these velocity fields. The circulation in the case of fixed curvature is more localized. } \label{fig_micro_flow} \end{center} \end{figure}
%
%
%
\begin{table}[h!] \centering \begin{tabular}{||c c c c||} \hline \multicolumn{4}{|c|}{MRT : Fixed K} \\ \hline \hline \(\Lambda\) & \(\alpha_B\) & \(\alpha_R \) & \( |\textbf{u}|_{max} \times 10^5 \) \\ [0.5ex] \hline\hline 0.001 & 0.9995 & 0.5000 & 11.2 \\ 0.010 & 0.9950 & 0.5000 & 1.0 \\ 0.100 & 0.9500 & 0.5000 & 3.64\(\times 10^{-4}\) \\ 10 & 0.5000 & 0.9500 & 1.32\(\times 10^{-4}\) \\ 100 & 0.5000 & 0.9950 & 1.8 \\ 1000 & 0.5000 & 0.9995 & 7.8 \\ [1ex] \hline \end{tabular} \caption{Micro-current activity for a range of separated components' density contrast. For these data, the interface curvature calculation (see Eq.~(\ref{equ_stforce})) has been replaced by assigning $K=\frac{1}{R}$. The full flow field for the case of $\Lambda = 10$ is shown in Fig.~\ref{fig_micro_flow} (left). } \label{tab_micro1} \end{table}
%
%
%
\begin{table}[h!]
\centering \begin{tabular}{||c c c c||} \hline \multicolumn{4}{|c|}{MRT : Calc K} \\ \hline \hline \(\Lambda\) & \(\alpha_B\) & \(\alpha_R \) & \( |\textbf{u}|_{max} \times 10^5 \) \\ [0.5ex] \hline\hline 0.001 & 0.9995 & 0.5000 & 11.2 \\ 0.010 & 0.9950 & 0.5000 & 3.3 \\ 0.100 & 0.9500 & 0.5000 & 1.0 \(\times 10^{-1}\) \\ 10 & 0.5000 & 0.9500 & 1.8 \(\times 10^{-2}\)\\ 100 & 0.5000 & 0.9950 & 1.8 \\ 1000 & 0.5000 & 0.9995 & 7.8 \\ [1ex] \hline \end{tabular} \caption{Micro-current activity for a range of separated components' density contrast. The full flow field for the case of $\Lambda = 10$ is shown in Fig.~\ref{fig_micro_flow} (right). } \label{tab_micro2} \end{table}
%
%
%
\subsubsection{Kinematics of Curved Interfaces}
%
Previous work \cite{Burgin} considered kinematics of a flat interface. Fig.~\ref{fig_drop_flow}(C) shows the flow produced when blue fluid passes a tethered, cylindrical red drop at density contrast $\Lambda = 5$, once the micro-current is subtracted. The Reynolds number must be kept very small here, to restrict deformation, and the drop is held circular by large surface tension. Hence these data correspond to the challenging regime of small Reynolds and capillary number, which accounts for the large micro-current. The resulting Stokes' regime flow of internal and external fluid is apparently tangential to the curved interface at all points and continuous across it, i.e. we observe that, in the inter-facial region, $v_n = 0$ and $v_t$ is continuous. This accords with the kinematic condition of mutual impenetrability. Note that the flow in Fig.~\ref{fig_drop_flow}(C) is not the solved flow past a three-dimensional spherical drop; the drop here is cylindrical.
%
%
%
\begin{figure*}[htb] \begin{center} \includegraphics[width=18cm]{Sub_micro.png} \caption{Low Re internal flow past a tethered, cylindrical drop for $\Lambda = 5$. Flow outside the drop has been suppressed.
Panel (A) shows the total flow, in which the velocity field clearly has a non-physical component perpendicular to the interface. (B) shows the micro-current error, measured from the frozen phase field in (A), without external flow, and (C) shows the physical flow exposed by subtracting the micro-current. The solid black line represents the centre of the interface between the fluids ($\rho^N = 0$ contour). The internal and external flows are clearly parallel to the interface.} \label{fig_drop_flow} \end{center} \end{figure*}
%
%
%
\subsubsection{Dynamics of Curved Interfaces}
%
In appendix Sec.~(\ref{sec_test2}), we consider the temporal decay of a ``unidirectional'' flow of two liquids of different density separated by a curved interface. For this test, the system again has a defined initial velocity profile and the motion decays to rest. The geometry and flow initial conditions defining our test are shown schematically in Fig.~\ref{fig_system}. The assumed density and, with it, the kinematic viscosity change at the interface, which is tangentially sheared. In all cases, the denser fluid is on the left, which accounts for its smaller acceleration. We have obtained an analytical solution for this problem, in the sharp interface limit, in appendix Sec.~(\ref{sec_test2}), using adapted Sturm-Liouville theory. Fig.~\ref{fig_DynamicsTest2} compares simulation data (crosses) and the analytical solution, for a range of density contrasts, $\Lambda$ (see caption), which is smaller than that in Fig.~\ref{fig_DynamicsTest}. This reduction reflects the introduction of a curved interface. For these data, $R_0 = 120$, $R=360$, shear viscosity $\eta = 0.333$ and segregation parameter $\beta = 0.3$ are constant, whilst kinematic viscosity $\nu = \frac{\eta}{\rho}$ changes. This change is assumed discontinuous in the treatment of appendix Sec.~(\ref{sec_test2}), whereas in simulation density varies across the interface.
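Matching these simulations to the semi-analytical solution again proceeds by equating the non-dimensional group $\frac{\nu T}{H^2}$ between lattice and physical units, as for the flat-interface test; a sketch with illustrative parameter values (not taken from the paper):

```python
def lattice_time(T, nu, nu_latt, H, H_latt):
    """Lattice time T* from the matching nu* T* / H*^2 = nu T / H^2
    (asterisked quantities in lattice units)."""
    return (nu / nu_latt) * (H_latt / H) ** 2 * T

# Illustrative values only: physical nu (m^2/s), H (m), T (s); lattice nu*, H* (lu).
nu, H, T = 1.0e-6, 1.0e-3, 1.0
nu_latt, H_latt = 0.1, 500.0
T_star = lattice_time(T, nu, nu_latt, H, H_latt)   # simulation time in lattice units
```

Recall that the semi-analytical treatment assumes a sharp interface, whereas the simulated density varies smoothly across it.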
Even so, these data confirm correct transient transmission of stress across the interface in our model, not simply that the correct \emph{steady-state} profile is obtained.
%
%
%
\begin{figure*}[htb] \begin{center} \includegraphics[width=16cm]{SLFlowTestData2.png} \caption{ Comparison of simulation data (crosses) and semi-analytical solution (see Sec.~(\ref{sec_test2})) for a range of density contrasts, $\Lambda$. For these data, shear viscosity $\eta =$ constant, whilst kinematic viscosity, $\nu = \frac{\eta}{\rho}$, changes. In all these figures, the interface centers on $r=120$ lattice units, with the fluid on the left the denser. For panels (A)--(D), $\Lambda = 1, 2, 3, 5$; note that this range is smaller than that shown in Fig.~\ref{fig_DynamicsTest}. These data confirm continuity of velocity and correct transmission of stress across a curved interface.} \label{fig_DynamicsTest2} \end{center} \end{figure*}
%
%
%
The introduction of curvature undoubtedly reduces the range of density contrast over which the method recovers correct inter-facial conditions but, in general, the data presented in this section confirm that chromodynamic MRT schemes with a density difference do recover the correct boundary conditions at the interface.
%
%
%
\section{Conclusions} \label{sec_conclusions}
%
Using a single fluid formulation, we have developed a convenient multiple-relaxation-time (MRT) multi-component lattice Boltzmann scheme (MCLBE) for simulating completely immiscible fluids with a density contrast, $\Lambda$, using the chromodynamic variant. Our technique is based upon the method of Dellar \cite{Dellar2003, Dellar2008}. The model evolves a set of physical and non-physical (ghost) modes of the system, equal in number to the cardinality of the lattice basis set, then constructs an explicit distribution function \emph{a posteriori}. We place all corrections to the target dynamics (the weakly compressible Navier-Stokes equations) in the kinetic-scale evolution equation.
Significantly, the latter rely on density gradients, which can be large when $\Lambda$ is large, which limits applications to moderate density contrast. We present in the appendices enhanced (but non-compact) stencils for gradient calculation which improve performance. Equivalent MRT schemes, due to Ba et al. \cite{Ba} and, earlier, Liu et al. \cite{Liu}, pioneered our essential approach. These authors showed the clear benefits of MRT collision models in benchmarking against complex flow simulations. To complement this work, we focus, here, on fundamental, physical compliance in chromodynamic MCLBE MRT schemes. We produce data which compare well with the steady-state tests devised by Ba et al. \cite{Ba}, but also with new theory, as follows. We assess our model dynamics against semi-analytical solutions to transient flow test cases which reference, explicitly, the kinematic condition of mutual impenetrability and the dynamic interface boundary condition of continuous traction. Broadly, data compare well with these solutions, confirming satisfactory, instantaneous compliance with kinematic and dynamic conditions at the simulation interface.
%
Whilst the Dellar-type MRT scheme we develop here is operationally equivalent to that of Ba et al., it has an advantage. Practically, it has improved implementability: the post-collision distribution function is explicitly constructed from modes with simple, scalar relaxation. Theoretically, the connection between model kinematics and dynamics is visible. This is a consequence of placing all density-difference dynamics corrections in the kinetic-scale source term.
%
MCLBE MRT schemes are not without limitations, chief among them the well-known MCLBE inter-facial micro-current. Here, our simulations of curved interfaces suggest that it may be removed completely from steady-state simulations.
Further, data presented for curved interfaces conform to our understanding of the inter-facial micro-current (see \cite{Halliday_micro}), but, as expected, the dynamics corrective terms increase the micro-current activity associated with the method, roughly in proportion to $\Lambda$; for $\Lambda > 10$, their contribution to the spurious signal exceeds that arising from the surface tension perturbation.
%
%
%
\pagebreak
\section{Introduction\label{section_intro}} Atomically thin group-VIB transition metal dichalcogenides (TMDs) have emerged as a class of two-dimensional (2D) semiconductors with exciting optical properties. The monolayers feature direct band gaps in the visible frequency range \cite{mak2010atomically,splendiani2010emerging}, with band edges located at the degenerate $\mathbf{K}$ and $-\mathbf{K}$ corners of the hexagonal Brillouin zone (BZ), which constitute the valley degree of freedom. Optical properties of TMDs are dominated by the hydrogen-like bound states of the electron and hole at the valleys, where strong Coulomb interaction leads to exceptionally large exciton binding energies \cite{yu2015valley}. As the monolayers are only bound by weak van der Waals (vdW) interactions, heterostructures of different TMDs can be flexibly engineered without the requirement of lattice matching, which allows vast opportunities to engineer and extend optoelectronic properties. TMD heterobilayers typically feature the type-II alignment, where the electron and hole energetically favor the two opposite layers. Interlayer excitons (IXs), in which the electron and hole constituents are separated into the two layers, therefore become the lowest energy configuration and can dominate the photoluminescence \cite{rivera2015observation}. Compared with monolayer excitons, IXs in TMD bilayers exhibit ultralong recombination lifetime and spin-valley lifetime due to the reduced electron-hole wave function overlap, and electrically tunable resonance and strong dipolar interaction due to the permanent electrical dipole, providing rich possibilities in optoelectronic applications \cite{gong2014vertical,ceballos2014ultrafast,fang2014strong,rivera2015observation,rivera2016valley,lin2015atomically}. Without the requirement of lattice matching, small differences in lattice constant and crystalline orientation lead to the formation of moir\'e patterns, i.e.
spatial modulation of atomic registries from the interference of two mismatched atomic lattices. The sensitive dependence of electronic structures on the atomic stacking registries therefore results in spatial variation of the local band gap in heterobilayers \cite{ zhang2017interlayer}. The moir\'e-patterned band gap corresponds to a spatial modulation of the IX energy, which can be tuned electrically by means of the Stark effect, while the spatial profile can be engineered through the variation of twisting angles~\cite{wang2012three}. IXs in such a moir\'e potential can be exploited in two functioning regimes: (i) excitonic emitters localized in trap arrays in relatively large moir\'e; and (ii) excitonic superlattices in relatively small moir\'e, where exciton hopping leads to the formation of mini-bands \cite{yu2017moire}. Evidence of moir\'e excitons in both regimes has been reported, with distinct spectroscopic features in various heterobilayers \cite{seyler2019signatures,tran2019evidence,jin2019observation,alexeev2019resonantly, brotons2020spin, li2020dipolar}. However, a systematic analysis of how the moir\'e exciton optical properties vary between the two regimes is still lacking, which is one of the issues that we address in this work. \begin{figure*} \centering \setlength{\abovecaptionskip}{0.cm} \includegraphics[width=1\textwidth]{fig_twistvsstrain.jpg} \caption{Comparison between interlayer excitons in twisting and heterostrain induced moir\'e patterns. (a) Real space configuration of bilayers subject to twisting (upper panel) or volume-preserving heterostrain (lower panel). Black rhombus depicts a moir\'e supercell. The in-plane polarizations of a $\mathbf{K}$ valley exciton at different high symmetry locations (dashed circles) are schematically shown. At $R^h_h$ and $R^X_h$ stacking locals, circular (elliptical) polarization is allowed in the twisted (heterostrained) moir\'e.
At the $R^M_h$ stacking local, in-plane linear polarization is forbidden (allowed) in the twisted (heterostrained) moir\'e. (b) Mismatched monolayer BZs of twisted and heterostrained bilayers. The blue and red color represent the bottom and top layer, respectively. Blue (magenta) solid dots are Dirac points of the bottom (top) layer. For the strain case, we show both the Dirac point locations with (solid dots) and without (empty dots) considering the strain induced gauge potential. Red arrows mark the center-of-mass kinematic momentum of a bright IX, which corresponds to the mismatch of the Dirac points in the two layers. The middle dashed box illustrates the various momenta in a bright IX with electron and hole from each layer. (c) Distinct distribution of light cones (cylinders) with respect to the IX kinetic energy dispersion (gray parabola) in the twisted (upper panel) and heterostrained (lower panel) moir\'e. Compared to twisted moir\'e, two major differences are noted in heterostrained moir\'e: (1) exchange of the elliptical polarization at the three main light cones; (2) overall shift of the light cone locations (yellow arrow) with respect to the kinetic energy dispersion. } \label{fig_twistvsstrain} \end{figure*} On the other hand, moir\'e patterns can also be created by applying layer-dependent strain (i.e. heterostrain) to lattice matched bilayers \cite{zhang2019magnetotransport,yu2020giant,zhai2020theory}. In particular, under a volume preserving strain, the bilayer moir\'e will exhibit a large-scale interference pattern nearly indistinguishable from that introduced by twisting and/or lattice mismatch (Fig.~\ref{fig_twistvsstrain}a). Compared to twisting, the heterostrain approach allows in situ tunability of the moir\'e, where the periodicity can in principle be controlled via substrates mechanically, thermally or piezoelectrically~\cite{frisenda2017biaxial,roldan2015strain,deng2018strain,yang2021strain,han2021experimental}.
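The near-equivalence of the two constructions can be made concrete. If the upper layer's lattice vectors are obtained from the lower layer's by a linear map $A$ (a rotation for twist, a volume-preserving stretch for heterostrain), the moir\'e reciprocal vectors are $\mathbf{g}_i=\mathbf{G}_i-A^{-T}\mathbf{G}_i$. A sketch, with an illustrative lattice constant and parameter values:

```python
import numpy as np

a = 3.3  # monolayer lattice constant (Angstrom); an illustrative value
G = (4 * np.pi / (np.sqrt(3) * a)) * np.array([0.0, 1.0])  # one reciprocal primitive

def moire_g(G, A):
    """Moire reciprocal vector g = G - G', where the upper-layer lattice vectors
    are A @ (lower-layer ones), so its reciprocal vectors are G' = inv(A).T @ G."""
    return G - np.linalg.inv(A).T @ G

theta = np.deg2rad(1.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # twist by theta
eps = 0.01
S = np.diag([1.0 + eps, 1.0 / (1.0 + eps)])       # volume-preserving heterostrain

g_twist = moire_g(G, R)
g_strain = moire_g(G, S)
```

For a pure twist this gives $|\mathbf{g}|=2\sin(\theta/2)|\mathbf{G}|$, i.e. a moir\'e period $\sim a/\theta$; a strain of magnitude $\epsilon\sim\theta$ produces a comparably long period, hence the nearly indistinguishable large-scale patterns.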
Besides, strain can be unintentionally introduced by unavoidable deformation in the fabrication of moir\'e superlattices, where the interplay of uniaxial strain and twisting can lead to dramatic elongation of the moir\'e towards a one-dimensional structure. Linearly polarized IX photoluminescence from strain elongated moir\'e traps has been reported in TMD heterobilayers \cite{bai2020excitons}. In heterostrained moir\'e superlattices, distinct IX properties can be expected compared to the twisting induced ones. A qualitative picture is that the breaking of rotational symmetry can change optical selection rules from circular to linear polarization, but a systematic study of how this happens has not been carried out yet. In this work we systematically study the mini-band dispersion and optical properties of IXs in heterostrain induced moir\'e, in comparison with those in the moir\'e induced by twisting of various angles. The two types of moir\'e patterns realize the valley mismatch between electron and hole in different manners, which results in distinct distributions of light cones with respect to the exciton kinetic energy dispersion (Fig.~\ref{fig_twistvsstrain}c). The broken rotational symmetry in the strain induced moir\'e manifests as an exchange in the positions of the main light cones with different polarized dipoles. Besides, the strain also introduces a constant gauge potential on either electron or hole, which shifts the dispersion of the exciton with respect to its crystal momenta in the moir\'e superlattice, leading to dramatic changes of the kinetic energies at the main and Umklapp light cones. Upon band folding by the moir\'e superlattice potential, the mini-band dispersion, wave function spatial profile, and optical properties are entirely distinct, even if the moir\'e superlattice potentials have similar real-space profiles in the two cases.
We also show the evolution of the excitonic mini-bands and the optical dipoles of the bright states inside the light cones with the decrease of moir\'e periodicity, during which the excitonic wave functions evolve from localized wave packets to extended Bloch states. We explore various types of strain configurations and the interplay of twisting and heterostrain, and provide comprehensive diagrams of the optical properties of moir\'e IXs in the strain-parameter space. These studies form the basis of strain and twisting engineering of moir\'e exciton optical properties for potential photonic applications. The rest of the paper is organized as follows. In Sec.~\ref{section_formalism} we give a brief description of the IX momentum eigenstates, the relation of the kinetic energy to the crystal momentum that can be defined from the superlattice Bloch function form, as well as the moir\'e potential that couples the momentum eigenstates. In Sec.~\ref{section_twist}, IXs in twisted heterobilayers are first investigated with the variation of twisting angles to explore the transition between the small and large moir\'e regimes. Sec.~\ref{section_strain} presents the strained moir\'e IXs in comparison with the twisting case. The change of BZ geometry and the effective gauge potential from strain are first introduced, and the resultant consequences on the light cone distribution, mini-band dispersion, wave functions, and optical properties are analysed. At last, we briefly discuss properties of IXs under the interplay of twisting and strain. \section{IX momentum eigenstate and moir\'e potential\label{section_formalism}} \subsection{Exciton momentum eigenstates in misaligned bilayers} TMDs~\cite{liu2015electronic} possess a hexagonal BZ with massive Dirac cones located at the corners $\tau\mathbf{K}$, where $\tau=\pm 1$ is the valley index. Excitons in TMDs exhibit spin-valley locking due to the large spin-orbit splitting.
For TMD heterobilayers with type-II band alignment, IXs are composed of electrons and holes located at the $\tau'\mathbf{K}'$ and $\tau\mathbf{K}$ valleys in different layers. Here we use a prime to denote quantities from the upper layer. Apart from the misalignment of the Dirac cones in energy, the locations of the Dirac cones in momentum space are affected by the configuration of the bilayer. In a twisted bilayer, Dirac cones in the upper layer are rotated with respect to those in the lower layer (Fig.~\ref{fig_twistvsstrain}b upper panel). In a strained bilayer, Dirac cones are first shifted away from the corners of the distorted BZ (red shaded area in Fig.~\ref{fig_twistvsstrain}b lower panel) due to the breaking of three-fold rotational symmetry (empty magenta dots). Moreover, the Dirac cones are further translated by a pseudogauge potential caused by changes of the hopping energy (from empty to solid magenta dots), which will be discussed in Sec.~\ref{section_formalism:StrainEffects}. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig_potential.jpg} \caption{(a) Light cones in the extended BZ scheme for a twisted moir\'e. The black hexagons denote the moir\'e BZs, purple arrows mark $\mathbf{g}_1, \mathbf{g}_2$, and $\mathbf{g}_3$. Red stars stand for main light cones, and blue/pink stars represent the 1st/2nd Umklapp light cones. The origin $\mathbf{Q}=0$ is marked by the brown dot and the brown circle is the equi-energy ring from main light cones. (b) The moir\'e potential landscape for R-type $\text{WSe}_2/\text{MoSe}_2$, where the red rhombus marks a moir\'e supercell. A, B, and C label three high symmetry locals. The potential inside the red dashed circle can be approximated as a harmonic trap in the small twist angle regime.
Upper right inset: Cross-section of the potential along the long axis of the supercell (black solid curve) and the harmonic approximation (red dashed curve).} \label{fig_potential} \end{figure} Using Bloch wave functions of a pair of electron and hole from each individual layer, $\psi^e_{\mathbf{k}_e}(\mathbf{r}_e)$ and $ \psi^{h}_{\mathbf{k}_h}(\mathbf{r}_h)$, one can construct the IX momentum eigenstate. Since Coulomb interaction conserves the center-of-mass (COM) momentum $\mathbf{Q}$ of electron and hole, it is a good quantum number to characterize the IX momentum eigenstate~\cite{yu2018brightened}: \begin{eqnarray} \label{IXmomentumeigenstate} & &X_{\tau'\tau,\mathbf{Q}}(\mathbf{R},\mathbf{r}_{eh})\notag\\ &=& \sum_{\Delta\mathbf{Q}}\Phi(\Delta\mathbf{Q})\psi^e_{\tau'\mathbf{K}'+\frac{m_e}{M_0}\mathbf{Q}+\Delta\mathbf{Q}}\psi^{h*}_{\tau\mathbf{K}-\frac{m_h}{M_0}\mathbf{Q}+\Delta\mathbf{Q}}\notag\\ &=& e^{i(\mathbf{Q}+\tau'\mathbf{K}'-\tau\mathbf{K})\cdot\mathbf{R}}U_{\tau'\tau,\mathbf{Q}}(\mathbf{R},\mathbf{r}_{eh}) \end{eqnarray} In the above, the coordinates and momenta of electrons and holes have been replaced by their COM and relative motion counterparts $\mathbf{R} = \frac{m_e}{M_0}\mathbf{r}_e+\frac{m_h}{M_0}\mathbf{r}_h$, $\mathbf{r}_{eh} = \mathbf{r}_e-\mathbf{r}_h$, $\mathbf{k}_e = \tau'\mathbf{K}'+\frac{m_e}{M_0}\mathbf{Q}+\Delta\mathbf{Q}$, and $\mathbf{k}_h = \tau\mathbf{K}-\frac{m_h}{M_0}\mathbf{Q}+\Delta\mathbf{Q}$ with $M_0 = m_e+m_h$ the exciton mass, $\Delta\mathbf{Q}$ the relative momentum, and $\Phi(\Delta\mathbf{Q})$ the relative motion wave function. The COM momentum $\mathbf{Q}$ is also called the \textit{kinetic momentum} since it is associated with the IX's kinetic energy $\hbar^2\mathbf{Q}^2/2M_0$. $U_{\tau'\tau,\mathbf{Q}}(\mathbf{R},\mathbf{r}_{eh})$ in the last line of Eq.~(\ref{IXmomentumeigenstate}) is a periodic function built from the periodic parts of the electron and hole's Bloch wave functions~\cite{yu2018brightened}. 
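The momentum bookkeeping in Eq.~(\ref{IXmomentumeigenstate}) is easy to verify numerically: for any relative momentum $\Delta\mathbf{Q}$, the pair $(\mathbf{k}_e,\mathbf{k}_h)$ defined above differs by exactly the phase momentum $\mathbf{Q}+\tau'\mathbf{K}'-\tau\mathbf{K}$ of the last line. A sketch with illustrative masses and momenta (none taken from the paper):

```python
import numpy as np

# Illustrative numbers only: effective masses (units of the bare electron mass)
# and valley/COM momenta (inverse Angstrom) are not taken from the paper.
m_e, m_h = 0.5, 0.4
M0 = m_e + m_h                      # exciton mass

K  = np.array([1.20, 0.00])         # tau K   (hole layer)
Kp = np.array([1.19, 0.02])         # tau' K' (electron layer)
Q  = np.array([0.01, -0.02])        # COM (kinetic) momentum

def pair_momenta(dQ):
    """Electron/hole momenta entering Eq. (1) for relative momentum dQ."""
    k_e = Kp + (m_e / M0) * Q + dQ
    k_h = K - (m_h / M0) * Q + dQ
    return k_e, k_h

k_e, k_h = pair_momenta(np.array([0.30, 0.10]))
com = k_e - k_h                     # = Q + K' - K, independent of dQ
```

The difference $\mathbf{k}_e-\mathbf{k}_h$ collapses to $\mathbf{Q}+\mathbf{K}'-\mathbf{K}$ for every $\Delta\mathbf{Q}$, consistent with the Bloch-type phase factor in the last line of Eq.~(\ref{IXmomentumeigenstate}).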
This makes the IX momentum eigenstate of the Bloch type, where $\mathbf{k}_c = \mathbf{Q}+\tau'\mathbf{K}'-\tau\mathbf{K}$ plays the role of the \textit{crystal momentum}. In the following, we will focus on the $\tau=\tau'=+$ valley in R-stacking (or parallel) heterobilayers. The other valley is related by time-reversal symmetry. Also, a consistent coordinate system has been chosen throughout the work, i.e. the zigzag (armchair) crystalline direction as the $x$ ($y$) axis. Note that the crystal momentum $\mathbf{k}_c$ and the kinetic momentum $\mathbf{Q}$ are different for IXs in the moir\'e. This contrasts with intralayer excitons in monolayers, or IXs in aligned lattice-matched heterobilayers, where the two momenta are identical due to $\mathbf{K}'-\mathbf{K}\equiv0$. When an exciton is converted into a photon, momentum conservation requires that the crystal momentum satisfies $\mathbf{k}_c\approx 0$. Therefore, intralayer excitons in monolayers or IXs in aligned lattice-matched heterobilayers have vanishing kinetic momentum (brown dot in Fig.~\ref{fig_potential}a). In contrast, bright IXs in the moir\'e have finite kinetic momentum $\mathbf{Q}_{lc}=\mathbf{K}-\mathbf{K}'+\mathbf{G}$ (stars in Fig.~\ref{fig_potential}a), where $\mathbf{G}$ denotes a moir\'e reciprocal lattice vector. $\mathbf{Q}_{lc}$ defines the location of the moir\'e light cones, inside which direct conversion to a photon is permitted (Fig.~\ref{fig_twistvsstrain}b). Moir\'e IX momentum eigenstates in various light cones possess distinct optical dipoles coupled to different elliptically polarized light~\cite{yu2015anomalous} (Fig.~\ref{fig_twistvsstrain}c). In the case of a twisted heterobilayer moir\'e, the three equivalent innermost light cones are located at $\mathbf{Q}_0=\mathbf{K}-\mathbf{K}'$, $C_3\mathbf{Q}_0$, and $C^2_3\mathbf{Q}_0$ (red stars in Fig.~\ref{fig_potential}a, $C_3$ denotes three-fold rotation).
These are the main light cones, in which the momentum eigenstates hold dominant optical dipoles. Farther away are the Umklapp light cones, which represent Umklapp recombination processes with much weaker dipoles. Such a categorization can also be applied to the strained heterobilayer moir\'e, although three-fold rotational symmetry is broken. \subsection{Moir\'e potential} Moir\'e patterns from misaligned bilayers exhibit a spatially modulated local atomic registry that repeats on a much larger scale than the monolayer lattice constant. Different local atomic configurations in a moir\'e render local-to-local variation in the interlayer vdW interaction, forming a lateral modulation of the local band gap and interlayer distance~\cite{yu2018brightened}. The IX thus experiences a moir\'e potential, which can be modeled by~\cite{wang2017interlayer,yu2017moire,wu2018theory,yu2020electrically} \begin{eqnarray} \label{moirepotentialformula} V(\mathbf{R}) = \sum_{n = 1}^3 2V_0\cos\left(\mathbf{g}_n\cdot\mathbf{R}-\varphi\right), \end{eqnarray} where $\mathbf{g}_1$, $\mathbf{g}_2$, and $\mathbf{g}_3 = -\mathbf{g}_1-\mathbf{g}_2$ are the primitive reciprocal lattice vectors of the moir\'e superlattice separated by $120^\circ$ (purple arrows in Fig.~\ref{fig_potential}a). The information on the moir\'e geometry is contained in the reciprocal lattice vectors, which vary for different moir\'e superlattices (e.g. the two cases in Fig.~\ref{fig_twistvsstrain}). Values of $V_0$ and $\varphi$ depend on the materials and their stacking configuration~\cite{wu2017topological,wu2018theory}. For example, $V_0 = 9.122$ meV and $\varphi \approx 0.57\pi$ for the R-type $\text{WSe}_2/\text{MoSe}_2$ heterobilayer~\cite{yu2017moire,yu2020electrically}. The potential profile in a twisted moir\'e superlattice is shown in Fig.~\ref{fig_potential}b. Each potential minimum (B) is connected with three saddle points (A) and three maxima (C).
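Eq.~(\ref{moirepotentialformula}) is straightforward to evaluate on a grid, which is how landscapes like Fig.~\ref{fig_potential}b are produced; a minimal sketch using the quoted R-type $\text{WSe}_2/\text{MoSe}_2$ parameters (the moir\'e period is an arbitrary choice here):

```python
import numpy as np

V0, phi = 9.122e-3, 0.57 * np.pi    # eV and radians, values quoted in the text
a_m = 100.0                          # moire period in Angstrom (arbitrary here)

# Primitive moire reciprocal vectors 120 degrees apart, with g3 = -g1 - g2.
g = 4 * np.pi / (np.sqrt(3) * a_m)
g1 = g * np.array([1.0, 0.0])
g2 = g * np.array([-0.5, np.sqrt(3) / 2.0])
g3 = -g1 - g2

def V(X, Y):
    """Moire potential of Eq. (2) evaluated on a grid."""
    out = np.zeros_like(X)
    for gn in (g1, g2, g3):
        out += 2.0 * V0 * np.cos(gn[0] * X + gn[1] * Y - phi)
    return out

x = np.linspace(0.0, 2.0 * a_m, 300)
X, Y = np.meshgrid(x, x, indexing="ij")
Vgrid = V(X, Y)   # landscape with minima (B), saddles (A) and maxima (C) per cell
```

The potential is bounded by $\pm 6V_0$; contouring `Vgrid` reproduces the triangular pattern of minima, saddles and maxima described above.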
The three locals A, B, and C correspond to the $R_h^h$, $R_h^X$, and $R_h^M$ high symmetry stacking registries, respectively (Fig.~\ref{fig_twistvsstrain}a). Here, $R_h^\mu$ denotes R-type stacking, with the $\mu$ site of the electron layer vertically aligned with the hexagon center $(h)$ of the hole layer. $M$ and $X$ represent metal and chalcogen atoms. The momentum space Hamiltonian describing the moir\'e exciton Bloch states in the slowly varying moir\'e potential reads~\cite{yu2017moire,yu2020electrically} \begin{eqnarray} \label{twistHamiltonian} H &=& \sum_l \left[\left(E_X+\frac{\hbar^2 \vert \mathbf{Q}_l \vert ^2}{2M_0}\right) \ket{\mathbf{Q}_l} \bra {\mathbf{Q}_l} \right]\nonumber\\ &+& \sum_l \left[\sum_{n=1}^3(V_0 \ket{\mathbf{Q}_l+\mathbf{g}_n} \bra {\mathbf{Q}_l} +h.c.)\right] \end{eqnarray} where $\ket{\mathbf{Q}_l}$ is the momentum eigenstate in the $l$th mini BZ, and $E_X\approx 1.40$ eV is the IX resonance energy, which is tunable with an electric field. The second line illustrates that each momentum eigenstate is coupled to six neighboring states with momentum differences $\pm\mathbf{g}_n$, which leads to the formation of mini-bands. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig_twistband.jpg} \caption{Dispersion relations of IX in twisted heterobilayer moir\'e. (a) The dispersion in the mini BZ at $\theta = 2^\circ$ in the absence of moir\'e potential. $\pm\mathbf{K}_m$ are the mini BZ corners. There are two degenerate points at $\mathbf{\Gamma}$, which consist of three main light cones and three 1st Umklapp light cones, respectively. The yellow cone stands for the photon dispersion (light cone), whose slope is very sharp, so only the states near the $\mathbf{\Gamma}$ point can couple to light directly. (b) The dispersion in the presence of moir\'e potential. Gaps are opened at the degenerate points in (a). (c) The first 16 energy levels at $\mathbf{\Gamma}$ with the variation of twist angle from $0.3^\circ$ to $2.0^\circ$.
States inside the red box exhibit equal energy spacing and linear scaling. There is a plateau state around $1.39$ eV, whose energy barely changes with $\theta$. Red ellipses mark the hybridization of this state with other states forming anti-crossings. The red dots mark the states whose wave functions are shown in Fig.~\ref{fig_twistexciton}b.} \label{fig_twistband} \end{figure} \begin{figure*} \centering \setlength{\abovecaptionskip}{0.cm} \includegraphics[width=1\textwidth]{fig_twistexciton.jpg} \caption{Wave function distribution for the low energy exciton mini-bands in twisted heterobilayer moir\'e of various twist angles. (a) Wave function densities of the first 6 states at $\theta = 0.3^\circ, 1.0^\circ$, and $3.0^\circ$. Wave functions at small twist angle possess harmonic-oscillator forms, with the orbital symmetry labeled on the first row. (b) Evolution of the plateau state with twist angle (c.f. Fig.~\ref{fig_twistband}c, where the corresponding states are marked by red dots). It is mainly distributed around the A locals and hybridizes with states located around the B point if their energies intersect ($\theta = 0.7^\circ$). (c) The effective coupling range dictated by the moir\'e potential strength. The light cone (red stars) distribution in the extended mini-BZs is shown at different twist angles, where the circle has radius $Q_p \equiv \frac{\sqrt{2M_0V_0}}{\hbar}$; the subscript p stands for moir\'e potential. Momentum eigenstates within the range characterized by $Q_p$ are strongly coupled by the moir\'e potential. (d) In-plane polarization of moir\'e excitons for $\theta = 0.3^\circ$ to $3.0^\circ$. Red/blue color stands for right/left circular polarization, circle size stands for dipole amplitude.} \label{fig_twistexciton} \end{figure*} \section{Interlayer excitons in twisted heterobilayer moir\'e\label{section_twist}} We first investigate IXs in twisted heterobilayer moir\'e with various twisting angles.
We take an R-type $\text{WSe}_2/\text{MoSe}_2$ bilayer and consider the spin singlet moir\'e exciton~\cite{yu2018brightened}. \subsection{Energy dispersion} Figs.~\ref{fig_twistband}a,b show the IX dispersion along $-\mathbf{K}_m$-$\mathbf{\Gamma}$-$\mathbf{K}_m$ in the mini BZ without and with the moir\'e potential at twist angle $\theta=2^\circ$. In the absence of moir\'e potential, the dispersion only comes from the parabolic kinetic energy. The momenta $\mathbf{Q}_{\text{lc}}$, where the light cones reside, are folded onto the $\mathbf{\Gamma}$ point. The first energy level at $\mathbf{\Gamma}$ is three-fold degenerate, consisting of three main light cones (red stars in Fig.~\ref{fig_potential}a). The second level is composed of three 1st Umklapp light cones (blue stars in Fig.~\ref{fig_potential}a). In the presence of moir\'e potential, the degeneracy is broken with gap opening. If the energy separation between the two levels is smaller than the potential energy, i.e., $\Delta E < V_0$ (Fig.~\ref{fig_twistband}a), coupling between main and 1st Umklapp light cones will occur~\cite{yu2020electrically}. With the variation of twist angle, energy levels at the $\mathbf{\Gamma}$ point will evolve and certain states may mix (Fig.~\ref{fig_twistband}c). The kinetic momentum $\mathbf{Q}$ increases approximately linearly with $\theta$, thus the kinetic energy evolves as $|\mathbf{Q}|^2 \sim \theta^2$. In the small angle regime ($\theta < 1.5^\circ$), the separation of kinetic energy between the main and 1st Umklapp light cones is smaller than or comparable to the moir\'e potential strength, i.e., $\Delta E\lesssim V_0$. Thus, strong coupling between main and Umklapp light cones is expected. When the twist angle is large ($\theta \gtrsim 3.0^\circ$), $\Delta E > V_0$, and the main and Umklapp light cones are effectively decoupled. Several interesting features can be identified in the evolution of energy levels with twist angle.
First, the lowest few levels exhibit equal spacing and scale linearly with $\theta$ when $\theta$ is small (red box in Fig.~\ref{fig_twistband}c). Second, there is a plateau state around $1.39$ eV whose energy is robust against variation of $\theta$. It corresponds to the 16th state at $\theta=0.35^\circ$ and becomes the 2nd when $\theta>1.1^\circ$, since the other levels increase with $\theta$ and rise to higher energies. \subsection{Wave function distribution} It is also interesting to look at the wave function distribution for the first few states at the $\mathbf{\Gamma}$ point (Fig.~\ref{fig_twistexciton}a). One can notice that the first few states are localized around potential minima (B point) at small angles, e.g., $\theta = 0.3^\circ$. In particular, the profiles of the first few states at small angles are analogous to the eigenstates of a two-dimensional harmonic oscillator with the constraint of three-fold rotational symmetry. Consequently, we can label the states as $\chi_{n,l}$, where $n$ is the principal quantum number, and $l$ is the angular momentum quantum number modulo 3 (1st row of Fig.~\ref{fig_twistexciton}a). These features and the scaling of energy levels discussed in the previous paragraph can be easily understood by realizing that the low energy IXs are trapped at the potential minima (B point), which can be well approximated by a harmonic oscillator potential (Fig.~\ref{fig_potential}b inset) $V(\rho)\approx 1.47V_0\left(4\pi^2\rho^2/a_m^2-3\right)$, with $\rho$ the radial distance measured from the B point and $a_m$ the moir\'e period. Since the frequency of the harmonic oscillator $\omega \propto a_m^{-1}\propto \theta$, one recovers the equal spacing and linear-in-$\theta$ scaling at low energies. When $\theta$ is enlarged, the high symmetry locals in the moir\'e shrink rapidly and hopping between adjacent trapping sites in different supercells emerges, thus the wave functions start to spread (2nd row of Fig.~\ref{fig_twistexciton}a).
For even larger twist angles, the harmonic oscillator approximation breaks down; for example, the first three states mainly occupy the B, A, and C locals, respectively, when $\theta = 3.0^\circ$ (3rd row of Fig.~\ref{fig_twistexciton}a). Such wave function evolution can also be understood from a momentum space perspective. Momentum eigenstates in neighboring moir\'e BZs are coupled by the moir\'e potential, whose strength can be characterized by an effective momentum length $Q_p$ satisfying $\hbar^2Q_p^2/2M_0 = V_0$. The purple circles in Fig.~\ref{fig_twistexciton}c delimit the region defined by the effective momentum. At small twist angles, the circle covers a large number of light cones. Many momentum eigenstates participate in constructing a moir\'e exciton eigenstate, and the strong coupling among them yields a localized wave function. At large twist angles, the separation between light cones is increased, rendering a dramatically reduced number of light cones inside this circle. The wave functions of the excitons are built from fewer coupled momentum eigenstates and inherit their extended Bloch wave nature. Now let us look at some features of the plateau state around $1.39$ eV. It corresponds to the second state when $\theta > 1.1^\circ$, while its ordering changes with smaller $\theta$, and hybridization with other states occurs around certain angles. It is mainly distributed around the A high symmetry locals at most twist angles, as shown by the top and bottom panels of Fig.~\ref{fig_twistexciton}b. This explains its insensitivity to variation of $\theta$: Excitons residing around the A locals experience a rather flat potential landscape with magnitude around $-V_0$, whose energy barely changes with the size of the moir\'e. Therefore, the plateau state exhibits an almost constant energy around $E_X-V_0\approx1.39$ eV.
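The momentum-space picture can be made semi-quantitative by counting the light cones inside the circle of radius $Q_p=\sqrt{2M_0V_0}/\hbar$. The numbers below ($V_0$, kinetic-energy scale, lattice constant) are stand-in assumptions, and the light cones are idealized as sitting on the moir\'e reciprocal lattice.

```python
import numpy as np

V0 = 0.010        # moire potential amplitude (eV), assumed
hbar2_2M = 0.038  # hbar^2 / (2 M_0) in eV nm^2, assumed exciton-mass scale
a = 0.33          # monolayer lattice constant (nm), assumed

def light_cones_in_range(theta_deg, nmax=30):
    """Count light cones (idealized to sit on the moire reciprocal lattice)
    inside the circle of radius Q_p, where hbar^2 Q_p^2 / (2 M_0) = V0."""
    Q_p = np.sqrt(V0 / hbar2_2M)
    b = 4 * np.pi * np.deg2rad(theta_deg) / (np.sqrt(3) * a)
    b1 = b * np.array([1.0, 0.0])
    b2 = b * np.array([0.5, np.sqrt(3) / 2])
    return sum(np.linalg.norm(m * b1 + n * b2) <= Q_p
               for m in range(-nmax, nmax + 1)
               for n in range(-nmax, nmax + 1))

n_small = light_cones_in_range(0.3)   # dozens of strongly coupled light cones
n_large = light_cones_in_range(3.0)   # only the one at the origin survives
```

With these assumed parameters, the count drops from dozens of strongly coupled light cones at $0.3^\circ$ to a single one at $3.0^\circ$, mirroring the localized-to-extended crossover of the wave functions.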
At twist angles where the energy level of the plateau state intersects with others, strong hybridization occurs between it and those states located around B, with the appearance of avoided crossings, as marked by the red ellipses in Fig.~\ref{fig_twistband}c. For example, at $\theta=0.7^\circ$ (middle panel of Fig.~\ref{fig_twistexciton}b), the plateau state hybridizes with an extended $W_{2,-1}$ state, rendering large densities around the B local. \subsection{Optical properties} Next we investigate the optical properties of twisted moir\'e IXs at various angles. We focus on the first six states at $\mathbf{\Gamma}$ (Fig.~\ref{fig_twistband}b). One finds that all the states couple with circularly polarized light except the 3rd and 6th states, whose in-plane dipole vanishes (Fig.~\ref{fig_twistexciton}d). Red/blue color represents right/left circular polarization, and the size of the circles denotes the dipole amplitude. When $\theta < 1.5^\circ$, $\Delta E\lesssim V_0$ and the effective momentum $Q_p$ that characterizes the range of the moir\'e potential covers many light cones (Fig.~\ref{fig_twistexciton}c), so states in the main and 1st Umklapp light cones couple significantly. Thus, strong optical dipoles from main light cones and weak dipoles from Umklapp light cones are mixed, producing comparable dipole strengths for different states. The insensitivity of the plateau state energy against variation of $\theta$ is also reflected in the optical properties. The switching of polarization in the 4th and 5th states from $\theta = 0.6^\circ$ to $0.9^\circ$ is related to the reordering of the plateau state: it corresponds to the 5th state at $\theta = 0.6^\circ$, then exchanges order with its neighbor and becomes the 4th state at $\theta = 0.9^\circ$. With the twist angle enlarged, the coupling between main and 1st Umklapp light cones becomes weaker. Optical dipoles of the first three states are mostly contributed by the three main light cones.
The fourth to sixth states are mainly governed by the three 1st Umklapp light cones with much weaker dipole intensities. At $\theta = 3.0^\circ$, the wave functions for the three lowest states with dominant dipoles are well separated (3rd row of Fig.~\ref{fig_twistexciton}a): The 1st to 3rd states are centered at the B, A, and C locals, respectively. Consequently, the A and B locals in the moir\'e supercells are coupled uniquely to $\sigma_+$ and $\sigma_-$ polarized light, respectively, and in-plane polarization is forbidden at the C locals (Fig.~\ref{fig_twistvsstrain}a upper panel)~\cite{yu2017moire, yu2018brightened}. The 4th to 6th states are much more spread out; nevertheless, the location-dependent optical properties still apply. For instance, the 4th (5th) state contributes $\sigma_+$ ($\sigma_-$) light emitted at the A (B) local (among other places), although the intensity is much weaker. The observed optical properties in both the small and large angle regimes can be understood through symmetry analysis. The photon polarization from optical recombination of moir\'e excitons depends on the rotational symmetry. At small twist angles, the photon polarization is dictated by the symmetry of the local orbitals of the wave functions (first row of Fig.~\ref{fig_twistexciton}a). The moir\'e exciton in a wide potential well can be approximated as a wave packet \cite{yu2017moire} $\chi_{n,l} = \sum_{\mathbf{Q}} e^{-i(\mathbf{Q}-\mathbf{Q}_0) \cdot \mathbf{R}_c} W_{n,l}(\mathbf{Q}) X_{\mathbf{Q}}$, where $W_{n,l}(\mathbf{Q})$ is the 2D harmonic envelope wave function constrained by three-fold rotational symmetry.
Setting the wave packet center $\mathbf{R}_c$ at the B point, the in-plane optical dipole $\mathbf{D}_\parallel$ can be analyzed via \begin{eqnarray} \label{localorbitaldipole} \hat{e}_+ \cdot \mathbf{D}_\parallel &\sim& W_{n,l}(\mathbf{Q}_0)+e^{i\frac{2\pi}{3}} W_{n,l}(C_3 \mathbf{Q}_0)+e^{i\frac{4\pi}{3}}W_{n,l}(C_3^2 \mathbf{Q}_0)\notag\\ \hat{e}_- \cdot \mathbf{D}_\parallel &\sim& W_{n,l}(\mathbf{Q}_0)+ W_{n,l}(C_3 \mathbf{Q}_0)+W_{n,l}(C_3^2 \mathbf{Q}_0) \end{eqnarray} For example, at $\theta = 0.3^\circ$, $W_{0,0}(\mathbf{Q})=W_{0,0}(Q)$ is a real function of $Q$ (Fig.~\ref{fig_twistexciton}a). This yields $\hat{e}_+ \cdot \mathbf{D}_\parallel \sim W_{0,0}(Q_0)+e^{i\frac{2\pi}{3}} W_{0,0}(Q_0)+e^{i\frac{4\pi}{3}} W_{0,0}(Q_0)=0$, while $\hat{e}_- \cdot \mathbf{D}_\parallel \sim W_{0,0}(Q_0)+ W_{0,0}(Q_0)+W_{0,0}(Q_0)$ is finite. By the same token, the above orbital optical selection rule determines that states with $l = 0$ emit $\sigma_-$ light, states with $l = -1$ emit $\sigma_+$ light, while states with $l = 1$ have vanishing in-plane polarization, which is consistent with results of Fig.~\ref{fig_twistexciton}d (e.g., first column). At large angles (e.g., $\theta = 3.0^\circ$), momentum eigenstates in neighboring BZs are weakly coupled, so wave functions become more extended. The optical dipoles instead depend on real-space atomic configurations. The electron and hole Bloch functions have distinct $C_3$ eigenvalues about different rotation centers ($h$ center, chalcogen site, or metal site, Fig.~\ref{fig_twistvsstrain}a middle panel). Moir\'e excitons $\chi$ at different high symmetry locals have distinct rotation centers, thus they exhibit location-dependent $C_3$ transformations~\cite{yu2018brightened,yu2017moire} \begin{eqnarray} \label{C3transofmration} C_3\chi_A = e^{-i\frac{2\pi}{3}}\chi_A, C_3\chi_B = e^{i\frac{2\pi}{3}}\chi_B, C_3\chi_C = \chi_C \end{eqnarray} Photons converted from such excitons possess the same symmetry. 
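The phase sums in Eq.~(\ref{localorbitaldipole}) can be verified in a few lines, using only the fact that a $C_3$-constrained envelope obeys $W_{n,l}(C_3\mathbf{Q}_0)=e^{il\frac{2\pi}{3}}W_{n,l}(\mathbf{Q}_0)$; the scalar envelope value at $\mathbf{Q}_0$ is set to unity as a placeholder.

```python
import numpy as np

def dipole_components(l):
    """sigma+/sigma- sums of Eq. (localorbitaldipole) for an envelope obeying
    W(C3^k Q0) = exp(i l 2pi k / 3) W(Q0), with W(Q0) set to 1."""
    w = np.exp(1j * l * 2 * np.pi / 3 * np.arange(3))  # W at Q0, C3 Q0, C3^2 Q0
    d_plus = np.sum(np.exp(1j * 2 * np.pi / 3 * np.arange(3)) * w)
    d_minus = np.sum(w)
    return d_plus, d_minus

# l = 0:  sigma+ sum vanishes, sigma- survives -> emits sigma- light
# l = -1: sigma+ survives, sigma- vanishes     -> emits sigma+ light
# l = +1: both sums vanish                     -> no in-plane dipole
```

Each sum is nonzero only when all three phases line up, which reproduces the $l$-dependent selection rule stated above.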
Since the first six states have specific rotational centers, moir\'e excitons in the large angle regime exhibit location-dependent optical selection rules, as schematically shown in Fig.~\ref{fig_twistvsstrain}a. \section{Interlayer excitons in heterostrained moir\'e\label{section_strain}} In this section we consider IXs in moir\'e formed by various types of heterostrain~\footnote{Strain can be induced by various methods, such as pulling on suspended sheets with an electrostatic gate, or by bending a flexible substrate\cite{frisenda2017biaxial,roldan2015strain,deng2018strain,yang2021strain,han2021experimental}. In experiments, errors/uncertainties may occur when applying strain to the samples. First, not all of the strain applied to the substrate is transferred to the sample~\cite{liu2014strain,li2020efficient}. This uncertainty can be calibrated by measuring the strain transfer efficiency. Besides, the strain distribution might be nonuniform due to interactions between the substrate and the sample. Experimentally, strain with uncertainty $<0.1\%$ on a few microns' scale can be achieved~\cite{liu2014strain,goldsche2018tailoring}. The optical properties of interlayer excitons usually show no qualitative changes for strain variations of $<0.1\%$ (Fig.~\ref{fig_generalstrain}, with exceptions near transition boundaries).}. For simplicity, we assume that only the top layer is strained. We compare properties of IXs in strained moir\'e with those in the twisted case. \subsection{Strain effect}\label{section_formalism:StrainEffects} To characterize the IX in a moir\'e formed by heterostrain, we first give a brief introduction to the geometric and electronic effects of strain~\cite{vozmediano2010gauge,gerardo2017electronic,fang2018electronic,rostami2015theory}. First, strain introduces geometric changes to the lattice structure (Fig.~\ref{fig_twistvsstrain}a lower panel).
Such geometric variations shift the location of Dirac cones to $\tau(I+S)^{-1}\mathbf{K}$ (dashed pink circles in Fig.~\ref{fig_twistvsstrain}b lower panel), where $I$ is the identity matrix and $S$ is the strain tensor. Apart from geometric effects on the crystalline structure, strain also modifies the hopping energy along different directions. In the limit of small strain, such variation can be captured by a pseudogauge potential in terms of strain tensor components \begin{eqnarray} \mathbf{A} = \frac{\sqrt{3}}{2a}\beta(\epsilon_{xx}-\epsilon_{yy},-2\epsilon_{xy})^T, \end{eqnarray} where $\beta$ is a material-dependent parameter, for instance, $\beta \approx 2.30$ for WSe$_2$~\cite{fang2018electronic}. Consequently, the two Dirac points in the monolayer BZ are further shifted oppositely (to respect time-reversal symmetry) by the pseudogauge potential towards \begin{eqnarray} \tau\mathbf{D} = \tau(I+S)^{-1}\mathbf{K} -\tau \mathbf{A}, \end{eqnarray} as shown schematically from pink circles to pink dots in Fig.~\ref{fig_twistvsstrain}b lower panel. The first term represents the distorted Dirac points due to geometric distortion discussed in the previous paragraph. In contrast to the twisting case, the Dirac points are shifted away from the BZ corners. \subsection{Volume-preserving heterostrain} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig_strainband.jpg} \caption{(a) Light cone distribution in the moir\'e formed by volume-preserving strain along the zigzag direction. Black hexagons denote the moir\'e BZs. Red and blue/pink stars represent main and 1st/2nd Umklapp light cones. The brown dot, arrow and dashed circles represent the origin, gauge potential $\mathbf{A}$, and equi-energy rings, respectively. (b) The kinetic energy dispersion of heterostrained moir\'e excitons at $\epsilon = 3.0\%$. Three-fold degeneracies at $\mathbf{\Gamma}$ are broken due to strain. 
The composition of light cones for the different levels at $\mathbf{\Gamma}$ is also labeled. (c) Dispersion of heterostrained moir\'e excitons in the presence of moir\'e potential. (d) Comparison of energy levels at $\mathbf{\Gamma}$ between twisted (blue) and heterostrained (red dashed) moir\'e excitons. Note that strain intensity has been converted into angles to facilitate the comparison (i.e., $\epsilon/\pi \times 180^\circ$).} \label{fig_strainband} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{fig_strainwavefunc.jpg} \caption{(a) Wave functions of the first 6 lowest states of moir\'e excitons in volume-preserving heterostrained moir\'e along the zigzag direction. The wave functions' three-fold rotational symmetry is broken, while the mirror symmetry along the long axis of the moir\'e unit cell remains. (b) The in-plane polarization of strained moir\'e excitons from $\epsilon = 0.52\%$ to $5.2\%$. Red/blue color stands for positive/negative helicities, and ellipse size stands for dipole amplitude.} \label{fig_strainwavefunc} \end{figure*} Here we consider volume-preserving strain~\cite{bi2019designing,shabani2021deep}, i.e., stretching the top layer along the zigzag direction while compressing it to the same extent in the perpendicular direction (Fig.~\ref{fig_twistvsstrain}a). The strain tensor reads $S = \text{diag}(\epsilon,-\epsilon)$, where $\epsilon$ is the strain strength. In this case, the large-scale moir\'e landscape resembles that of a twisted bilayer (Fig.~\ref{fig_twistvsstrain}a). This similarity also applies to the superlattice potentials based on the local approximation for long-period moir\'e (Appendix~\ref{section_similarity}). This allows us to focus first on the effects of the pseudo-gauge potential by studying the volume-preserving strain. From the discussion in Sec.~\ref{section_formalism:StrainEffects}, we know that strain shifts the Dirac cones away from the BZ corners.
Thus, in the presence of strain the kinetic momentum in Eq.~(\ref{twistHamiltonian}) should be replaced by \begin{eqnarray} \mathbf{Q}_l^s = \mathbf{Q}_l+\mathbf{A}. \end{eqnarray} Fig.~\ref{fig_strainband}a shows the distribution of light cones in volume-preserving strained moir\'e, where the brown arrow denotes the shift caused by the gauge potential $\mathbf{A}$ with respect to the origin (brown dot). Interestingly, here the size of $\mathbf{A}$ is almost identical to the length of the mini BZ boundary, which results in the following redistribution of the light cones. The $C_3$ symmetry of the three main light cones is broken, which is manifested in the exchange of the dipole polarization of the two main light cones on the two sides of the yellow arrow in Fig.~\ref{fig_twistvsstrain}c~\cite{yu2015anomalous}. In contrast to the twisting case in Fig.~\ref{fig_potential}a, the three main light cones no longer sit on the same equi-energy circle (smaller ring in Fig.~\ref{fig_strainband}a), with one of them shifted to higher energy (larger ring). Meanwhile, one of the 1st Umklapp light cones moves towards the origin and stays close to the smaller equi-energy circle. As will be shown later, this change will suppress the optical strength of the low energy states. Now we look at the effects of heterostrain on the dispersion of IXs. Fig.~\ref{fig_strainband}b shows the exciton dispersion without the moir\'e potential along the direction marked by the black dashed line in Fig.~\ref{fig_strainband}a. The first (approximate) degeneracy at $\mathbf{\Gamma}$ consists of two degenerate main light cones and a 1st Umklapp light cone with slightly different energy. The second level at $\mathbf{\Gamma}$ originates from the remaining main light cone, and the third level comprises two degenerate 2nd Umklapp light cones (pink stars near the larger ring in Fig.~\ref{fig_strainband}a).
Such redistribution of main and Umklapp light cones among the states, as compared to the case of twisting (Fig.~\ref{fig_twistband}a), will yield distinct optical properties. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig_strain90.jpg} \caption{Moir\'e excitons from volume-preserving heterostrain along the armchair direction. (a) The light cone distribution. Black hexagons denote the moir\'e BZs. Red and blue/pink stars represent main and 1st/2nd Umklapp light cones. The brown dot, arrow and dashed circles represent the origin, gauge potential $\mathbf{A}$, and equi-energy rings, respectively. (b) The in-plane polarization of strained moir\'e excitons from $\epsilon = 0.52\%$ to $5.2\%$. Red/blue color stands for positive/negative helicities, and ellipse size stands for dipole amplitude.} \label{fig_strain90} \end{figure} For different types of moir\'e superlattices, e.g., twisting vs heterostrain, the moir\'e potential is only affected via changes of the moir\'e primitive reciprocal lattice vectors (Eq.~(\ref{moirepotentialformula})). This only changes the spatial profile, not the magnitude, of the moir\'e potential. Analogous to the twisting case, momentum eigenstates in different moir\'e BZs are coupled through the Fourier components of the moir\'e potential. On the other hand, the kinetic energy is affected by the strain-induced pseudogauge potential and -- different from the twisting case -- $\mathbf{Q}_l$ should be replaced by $\mathbf{Q}_l^s$ in Eq.~(\ref{twistHamiltonian}) for strained moir\'e excitons. The mini-bands in the presence of the moir\'e potential are shown in Fig.~\ref{fig_strainband}c, and they resemble those of the twisted moir\'e (Fig.~\ref{fig_twistband}b). Such similarity extends to the evolution of energy levels at $\mathbf{\Gamma}$ versus the variation of $\epsilon$ or $\theta$ (Fig.~\ref{fig_strainband}d).
This is because the energy levels only depend on the strength of the moir\'e potential and the distribution of light cones, irrespective of their nature (main or Umklapp)\footnote{The similarity of $A$ and $K_m$ is caused by the material-dependent parameter $\beta$. For volume-preserving strain, $A = \frac{\sqrt{3}\beta}{2a}(\epsilon_{xx}-\epsilon_{yy})=\frac{\sqrt{3}\beta \epsilon}{a}$, and $K_m = \frac{4\pi}{3a_m}\approx \frac{4\pi\epsilon}{3a}$, where $a_m\approx a/\epsilon$ is the moir\'e period. The ratio of the two quantities becomes $\frac{A}{K_m} = \frac{3\sqrt{3}\beta}{4\pi}$, which only depends on $\beta$. The ratio is approximately 0.95 (0.99) for $\text{WSe}_2$ ($\text{MoSe}_2$), whose $\beta=2.3\,(2.4)$.}. Instead, the different distributions of light cones between the heterostrain and twisting cases are reflected in the wave functions (Fig.~\ref{fig_strainwavefunc}a). In general, the local densities of the wave functions lose three-fold rotational symmetry. However, the mirror symmetry with respect to the long axis of the supercell is retained. Despite the changes in symmetry, the features of the spatial distribution remain unchanged: for small strain strength, wave functions exhibit local orbital features at potential minima; for large strain strength, wave functions exhibit extended Bloch wave forms, where quasi-1D stripe patterns can be spotted for the high energy states. The optical properties of strained moir\'e excitons are also remarkably distinct from the twisted ones (Fig.~\ref{fig_strainwavefunc}b). Compared with twisted excitons, one finds: (i) The first four states at $\mathbf{\Gamma}$ possess comparable dipole strength independent of strain intensity.
In contrast, for twisted moir\'e excitons the first two states possess prominent dipole strength, especially at large twist angles (Fig.~\ref{fig_twistexciton}d); (ii) The 3rd and 6th states have finite dipole strength, in contrast to the vanishing contributions in the case of twisting; (iii) The coupled light in strained moir\'e is elliptically instead of circularly polarized. The first difference results from the light cone redistribution. As shown in Fig.~\ref{fig_strainband}b, the lower states at the $\mathbf{\Gamma}$ point are composed of two main light cones and one 1st Umklapp light cone, while the higher states comprise two 2nd Umklapp light cones and one main light cone. The averaging of strong and weak dipoles from main and Umklapp light cones makes the first four states exhibit comparable dipole strengths. The second and third differences can be attributed to the distinct symmetry properties. The breaking of three-fold rotational symmetry leads to optical dipoles with left and right circular polarization of different strengths, thus forming elliptically polarized light. Notice that the 3rd state is still located around the C locals at $\epsilon = 5.24\%$ (Fig.~\ref{fig_strainwavefunc}a); however, the symmetry breaking at the C locals ensures that in-plane polarized light-matter coupling is permitted, in contrast to the twisting case. The 6th state, in turn, has a strip-like distribution that spreads across different locals, which leads to a finite in-plane dipole, as compared to its vanishing counterpart in twisted moir\'e. Similar to the case of twisting, the change of the ordering of the plateau state is also reflected in the optical properties between the 4th and 5th states for strained moir\'e excitons. One can also apply the volume-preserving strain along the armchair direction, with the strain tensor $S = \text{diag}(-\epsilon,\epsilon)$.
The light cone distribution exhibits an approximate mirror reflection in the $\mathbf{Q}_y$ direction compared with the above case (Fig.~\ref{fig_strain90}a vs Fig.~\ref{fig_strainband}a). Such a relation in the light cone distribution guarantees that the optical dipoles in the two cases are similar (Fig.~\ref{fig_strain90}b vs Fig.~\ref{fig_strainwavefunc}b). \subsection{Uniaxial strain} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{fig_realstrain.jpg} \caption{Moir\'e excitons from uniaxial heterostrain. (a) Real-space lattice of a heterobilayer, where yellow/green atoms are from the upper/lower layer. The black rhombus depicts the moir\'e supercell. (b) Wave function densities of the first six lowest states at $\epsilon = 3.5\%$. Green rhombuses mark the supercells. (c) The moir\'e potential. (d) The light cone distribution. Black lines depict the moir\'e BZ boundaries. Red and blue/pink stars represent main and 1st/2nd Umklapp light cones. The brown dot, arrow and dashed circles represent the origin, gauge potential $\mathbf{A}$, and equi-energy rings, respectively. (e) Comparison of energy levels at $\mathbf{\Gamma}$ from volume-preserving (blue) and uniaxial (red dashed) strain. (f) The in-plane polarization of the first six lowest states. (g)-(i) The in-plane polarization of the first three states in the case of general $\epsilon_{xx}$ and $\epsilon_{yy}$. In (f)-(i), red/blue color stands for positive/negative helicities, and ellipse size stands for dipole amplitude. } \label{fig_realstrain} \end{figure*} In this section we consider moir\'e formed from uniaxial heterostrain along the zigzag direction. The strain tensor reads $S=\text{diag}(\epsilon,-\nu\epsilon)$, where the Poisson ratio $\nu=0.19$ for WSe$_2$~\cite{ccakir2014mechanical}. This geometry allows us to explore the interplay of the effects of strong geometric distortion and the pseudo-gauge potential.
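The interplay of the two strain effects entering $\tau\mathbf{D}=\tau(I+S)^{-1}\mathbf{K}-\tau\mathbf{A}$ can be tabulated in a few lines for any strain tensor. The lattice constant and the orientation of $\mathbf{K}$ below are assumed for illustration; $\beta=2.30$ is the WSe$_2$ value quoted above.

```python
import numpy as np

a = 0.33     # WSe2 lattice constant (nm), assumed value
beta = 2.30  # pseudogauge parameter for WSe2 (value quoted in the text)
K = (4 * np.pi / (3 * a)) * np.array([1.0, 0.0])  # BZ corner, zigzag along x

def dirac_point(S):
    """Shifted Dirac point D (for tau = +1): geometric distortion of the
    BZ corner combined with the pseudogauge shift -A."""
    A = (np.sqrt(3) * beta / (2 * a)) * np.array([S[0, 0] - S[1, 1],
                                                  -2 * S[0, 1]])
    return np.linalg.inv(np.eye(2) + S) @ K - A

eps = 0.03
D_vp = dirac_point(np.diag([eps, -eps]))         # volume-preserving, zigzag
D_un = dirac_point(np.diag([eps, -0.19 * eps]))  # uniaxial, Poisson ratio 0.19
ratio = 3 * np.sqrt(3) * beta / (4 * np.pi)      # |A| / K_m, volume-preserving
```

The same routine with $S=\text{diag}(\epsilon,\epsilon)$ gives $\mathbf{A}=0$, recovering the purely geometric, twist-like limit, and the last line reproduces the ratio $A/K_m=3\sqrt{3}\beta/4\pi\approx0.95$ quoted for volume-preserving strain.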
Due to the different deformations along the zigzag and armchair directions, the moir\'e superlattice geometry is changed significantly compared to that of a twisted or volume-preserving strained bilayer. The supercell is compressed along the short axis, forming strip-like supercells (Fig.~\ref{fig_realstrain}a). The mini BZ is also deformed dramatically, which affects the location of the Dirac cones significantly and redistributes the light cones in a very different manner (Fig.~\ref{fig_realstrain}d). A main light cone is shifted away from the origin. Besides, two of the 2nd Umklapp light cones (pink stars) stay much closer to the origin. All the light cones are close to each other in the $\mathbf{Q}_y$ direction but become well separated along $\mathbf{Q}_x$. In the presence of moir\'e potential, light cones along $\mathbf{Q}_y$ are expected to be strongly coupled. Compared with volume-preserving strain, the valley mismatch becomes smaller for uniaxial strain. This results in an overall lowering of the energy levels, and band hybridization mediated by the plateau state is absent within the studied energy range, since the plateau state has higher energy (Fig.~\ref{fig_realstrain}e). As for the wave function distribution, the first six states are all located around the B locals (Fig.~\ref{fig_realstrain}b). Their profiles are analogous to the eigenstates of a 1D harmonic oscillator, as the potential minima are compressed into strips. Such strip-like features are a manifestation of the strong coupling between the light cones along the $\mathbf{Q}_y$ direction due to the compression in momentum space. The light cone redistribution also affects the optical properties. Since one of the main light cones is away from the origin, and there is a mixture of main and Umklapp light cones at both low and high energies near the two equi-energy circles (Fig.~\ref{fig_realstrain}d), the first six states exhibit comparable dipole strength (Fig.~\ref{fig_realstrain}f).
Moreover, all the eigenstates except the ground state are built from light cone combinations distinct from those of the volume-preserving case. For the ground state, which has a similar optical dipole composition to the volume-preserving case, the compressed short axis of the supercell dictates that the vertically oriented elliptical polarization becomes narrower and more linear-like (cf. Fig.~\ref{fig_strainwavefunc}b). \subsection{General strain tuning} In principle, strain can be tuned independently along the two axes. To explore the effects of strain on the optical properties of moir\'e IXs more systematically, we tune the two strain components $\epsilon_{xx}$ and $\epsilon_{yy}$ continuously from the volume-preserving ($\epsilon_{xx}=-\epsilon_{yy}$) to the biaxial ($\epsilon_{xx}=\epsilon_{yy}$) configuration. Fig.~\ref{fig_generalstrain} shows the diagrams of the optical properties of the first three states of moir\'e IXs at $\mathbf{\Gamma}$ in the strain-parameter space. The blank areas on the diagram correspond to situations where the superlattices become quasi-1D with gigantic periods; there the results are not provided due to numerical limitations. The upper three panels describe the amplitude of the dipoles, where positive (negative) values indicate that the semi-major axis of an elliptical polarization is parallel (perpendicular) to the x axis. The lower three panels describe the ellipticity angle of the polarization, where $\pm45^\circ$ corresponds to $\sigma_\pm$ circular polarization. The two yellow dashed lines in the first panel correspond to the cases of uniaxial strain with $\epsilon_{yy}=-\nu \epsilon_{xx}$ and $\epsilon_{xx}=-\nu \epsilon_{yy}$, where $\nu = 0.19$. The two diagonal directions in each panel describe the cases of volume-preserving and biaxial strain, respectively. In particular, a biaxially strained bilayer, whose pseudo-gauge potential vanishes, resembles a lattice-mismatched heterobilayer.
Consequently, the optical properties of the former case can be employed to qualitatively understand those of the latter. Furthermore, due to similar moir\'e landscapes with identical symmetries, a biaxially strained bilayer and a twisted bilayer with equal moir\'e period share identical in-plane polarization (cf. Fig.~\ref{fig_twistexciton}d). \begin{figure*} \centering \includegraphics[width=1\textwidth]{fig_generalstrain.jpg} \caption{Diagrams of IX optical properties under strain tuning. The upper three panels describe the dipole amplitude, which is measured in units of the dipole of the intralayer exciton. Positive (negative) values indicate that the semi-major axis of an elliptical polarization is parallel (perpendicular) to the x axis. The two yellow dashed lines in the first panel correspond to $\epsilon_{yy}=-\nu \epsilon_{xx}$ and $\epsilon_{xx}=-\nu \epsilon_{yy}$, where $\nu = 0.19$. The lower three panels describe the ellipticity angle of the polarization, where $\pm45^\circ$ stands for $\sigma_\pm$ circular polarization. Results along $\epsilon_{xx}=\epsilon_{yy}$ in the last panel are meaningless, as the in-plane polarization vanishes there. Some specific dipole polarizations are marked as rings in the upper panels, where the center corresponds to the strain configuration, the size represents the amplitude, and red/blue color indicates positive/negative helicity.} \label{fig_generalstrain} \end{figure*} \subsection{Mixture of twisting and heterostrain\label{section_mixture}} In this section, we briefly discuss the mixture of twisting and heterostrain. In general, the strain and rotation operators do not commute. Also, the mixture of twisting and heterostrain deforms the BZ nontrivially, which makes the problem quite complicated. In the following, we illustrate the effects of the interplay by applying volume-preserving strain along the zigzag direction to the top layer, followed by twisting.
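That strain and rotation do not commute can be seen already at the level of the $2\times2$ deformation matrices applied to the top layer; the angle and strain magnitude below are illustrative.

```python
import numpy as np

theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # twist of the top layer
F = np.eye(2) + np.diag([0.0105, -0.0105])        # volume-preserving stretch

# Strain-then-twist and twist-then-strain give different net deformations,
# hence different moire patterns; the residual is of order theta * epsilon.
difference = np.linalg.norm(R @ F - F @ R)
```

Since the residual is of order $\theta\epsilon$, the two orderings differ only weakly for small twist and strain, but the difference is generically nonzero and deforms the moir\'e pattern.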
Fig.~\ref{fig_mix}a illustrates the evolution of the moir\'e potential landscape when the twist angle is fixed while the strain strength is varied. As the strain strength increases and approaches the twist angle, the supercell stretches and gradually becomes a 1D strip. Dirac cones in the two layers are separated in a more complicated manner in the presence of the nontrivially distorted BZ and the pseudo-gauge potential arising from the superposition of twisting and strain. This results in a redistribution of the light cones that affects the wave function distribution as well as the optical properties. Fig.~\ref{fig_mix}b shows the wave function distribution of the three lowest states at $\mathbf{\Gamma}$ for different mixtures. Although they still exhibit localization around potential extrema at the high-symmetry locations, their profiles become irregular due to the lack of both $C_3$ rotation and mirror symmetry. When $\epsilon \leq 1.05\%$, the three main light cones maintain approximate degeneracy, so the two lowest states exhibit dominant optical dipole strength (Figs.~\ref{fig_mix}c, d). As the strain increases and approaches the size of the twist angle, the main light cones become more separated and are shifted toward high energy states (Fig.~\ref{fig_mix}e). Thus, high energy states can be tuned to exhibit dominant dipole strength via strain engineering (Fig.~\ref{fig_mix}c). Strain can also be unintentionally introduced during device fabrication, with a general intensity and direction~\cite{shabani2021deep}. In Appendix~\ref{section_mixappendix} we provide diagrams of the optical properties of moir\'e IXs in a moir\'e with a fixed twist angle but arbitrary strain intensity and direction, i.e., $S=\epsilon \begin{pmatrix} \cos 2\phi & \sin 2\phi \\ \sin 2\phi & -\cos 2\phi \\ \end{pmatrix} $, where the strain direction $\phi$ is defined as the angle between the stretching direction and the x axis (Figs.~\ref{fig_pt_mix_R0d5} and \ref{fig_pt_mix_R2}).
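As a quick numerical sanity check (a sketch for illustration, not part of the paper's calculations), the strain tensor $S(\epsilon,\phi)$ above is traceless for any stretching direction $\phi$, which is exactly what makes it volume-preserving: the deformation $I+S$ has $\det(I+S)=1-\epsilon^2$, i.e. the area is preserved to first order in $\epsilon$.

```python
import math

# Sketch (not from the paper's code): the volume-preserving strain
# tensor S(eps, phi), with phi the angle between the stretching
# direction and the x axis.
def strain_tensor(eps, phi):
    c, s = math.cos(2 * phi), math.sin(2 * phi)
    return [[eps * c, eps * s], [eps * s, -eps * c]]

def trace(M):
    return M[0][0] + M[1][1]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# phi = 0 stretches along x and compresses along y by the same amount.
S0 = strain_tensor(0.01, 0.0)
assert S0[0][0] == 0.01 and S0[1][1] == -0.01

# S is traceless for every phi, so det(I + S) = 1 - eps^2:
# area is preserved to first order in eps.
for k in range(7):
    phi = k * math.pi / 6
    S = strain_tensor(0.01, phi)
    I_plus_S = [[1 + S[0][0], S[0][1]], [S[1][0], 1 + S[1][1]]]
    assert abs(trace(S)) < 1e-12
    assert abs(det2(I_plus_S) - (1 - 0.01 ** 2)) < 1e-10
```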
Such diagrams might be utilized as a toolbox for strain estimation based on optical measurements. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig_mix.jpg} \caption{Moir\'e excitons from the mixture of twisting and volume-preserving strain. (a) Moir\'e potential landscape with fixed $\theta = 2^\circ$ and $\epsilon = 0.52\%$, $0.87\%$, $1.75\%$, respectively. The unit cell deforms as the strain strength approaches the size of the twist angle. (b) Wave functions of the first three states under the same conditions as in (a). (c) In-plane polarization of moir\'e excitons at a fixed twist angle ($2^\circ$) and different strain intensities. Red/blue color stands for positive/negative helicity, and the ellipse size for dipole amplitude. (d) Light cone distribution at $\theta = 2^\circ$ and $\epsilon = 1.05\%$. Black lines depict the moir\'e BZ boundaries. Red and blue/pink stars represent main and 1st/2nd Umklapp light cones. The brown dot and dashed circles represent the origin and equi-energy rings, respectively. (e) Light cone distribution at $\theta = 2^\circ$ and $\epsilon = 1.57\%$.} \label{fig_mix} \end{figure} \section{Summary} To summarize, we have investigated the evolution of the wave functions and optical properties of interlayer excitons in moir\'e patterns formed by different twist angles and heterostrain strengths. The wave function evolution is governed by the local atomic alignments and the light cone distribution. In the regime of small twist angle or strain strength, densely arranged light cones dictate that the wave functions are localized orbitals and the optical selection rules are orbital-dependent. In the opposite limit, sparsely arranged light cones render the wave functions extended and Bloch-like. This results in various states possessing different rotation centers and, consequently, location-dependent optical selection rules.
Compared with a twisted moir\'e, low-energy carriers in a heterostrained moir\'e are additionally affected by the distorted BZ and an effective gauge potential in momentum space, so that moir\'e excitons possess distinct wave functions and exhibit elliptically polarized optical selection rules. Owing to the redistribution of light cones and the 1D stripe-like wave functions in various strain configurations, high energy states can be tuned to exhibit strong optical responses. These results show that strain engineering can be utilized to manipulate the light cone distribution and control the optical properties of moir\'e excitons.
\section{Introduction} The Internet is gradually becoming the unified network infrastructure for all our communication and business needs. Large enterprises, in particular, rely increasingly on Internet-based Virtual Private Networks (VPNs) that typically interconnect several, possibly remote, sites via a wide-area network (WAN). Depending on the company, the VPNs may have various uses, including carrying Voice-over-IP (VoIP) to drive down communication expenses, sharing geographically distributed company resources, providing a real-time service, etc. However, it is well known that wide-area networks today face several problems, including congestion, failure of various network elements and protocol misconfigurations. These may result in periods of degraded quality-of-service, or even lack of connectivity, perceived by the end-user. To deal with these problems, several measures can be taken at the end-points, at the edge, or inside the network. One approach is to use redundant communication paths to improve end-to-end performance\footnote{We consider reliability as an extreme case of quality-of-service (QoS), because from a user's perspective a ``failure'' has the same effect as several packets lost in a row. At one extreme, packets may sporadically get dropped or delayed; this is typically referred to as a QoS problem. At the other extreme, a failure may lead to a long-lasting loss of connectivity; this is typically referred to as a reliability problem. In the middle, several packets may get mistreated in a short time period, which is also typically considered a QoS problem. To cover the entire range of cases, we often refer to quality-of-service and reliability together as ``performance''.}. This idea is not new. The Resilient Overlay Network (RON) architecture \cite{ron} proposed that participating nodes maintain multiple paths to each other, in order to preserve their connectivity in the face of Internet failures.
The more practical alternative to resilient overlays, multi-homing \cite{akella1, akella2}, advocates that each edge network connect to the Internet over multiple Internet Service Providers (ISPs), in order to increase the probability of finding an available path to any destination. Both approaches essentially suggest to establish and intelligently use redundant communication paths. Several vendors have already developed products along these lines \cite{routescience, internap, fatpipe}. A significant body of research has also investigated the performance of such approaches and algorithms for monitoring, dynamic path switching and other aspects \cite{ron,akella1, akella2,upenn1, upenn2, chennee-cooexisting, dovrolis-infocom06, overlay-tcp, Kelly, under-over, selfish-shenker}. We too are looking at how to use control at the edge and utilize redundant communication paths to improve end-to-end performance. What we bring to the table is a mechanism for proactively leveraging several paths at the same time. We propose to replicate and transmit packets over several carefully selected, redundant, independent paths. The goal is to increase the probability that at least one copy will be received correctly and on time. In other words, we propose to combine proactive replication over a set of redundant links with the traditional reactive dynamic switching among (sets of) links. Our approach is inspired by the Redundant Array of Inexpensive Disks (RAID) \cite{raid}. The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disk drives which yields better performance than that of a Single Large Expensive Drive (SLED), and appears to the computer as a single logical storage unit or drive. Furthermore, disk arrays were made fault-tolerant by redundantly storing information in various ways.
Our approach is analogous to ``disk mirroring'', or RAID-1, which duplicates all content on a backup disk; in RAID terminology, our approach would thus be called RAIL-1. Similarly to RAID, we propose to replicate packets over multiple, relatively inexpensive, independent paths, i.e., to create a {\em Redundant Array of Internet Links (RAIL)}, which appears to the application as a single ``superior'' link. To evaluate RAIL performance, we have built a prototype called RAILedge. We show that using RAIL yields better performance (both quality-of-service and reliability) than using any of the underlying paths alone. In addition, we evaluate the performance of applications, such as VoIP and TCP, over RAIL and seek to optimize the relevant application-level metrics. In particular, we propose an additional mechanism, called {\em delay padding}, which complements RAIL when there is a significant disparity between the underlying paths. There are several issues that need to be investigated. How large is the performance benefit from RAIL and how does it depend on the characteristics of the underlying paths? What is the tradeoff between the performance benefit and the bandwidth cost of replicating every packet over multiple connections? How does RAIL interact with higher layers, such as TCP and VoIP applications? Does RAIL introduce reordering? How should one choose the links that constitute the RAIL, so that they complement each other and optimize application performance? In this paper, we address these questions. With regard to the bandwidth cost, we argue that it is worthwhile and that RAIL is a simple, cost-efficient approach for achieving good quality-of-service over redundant paths. The first argument is from a cost point of view. As bandwidth gets cheaper and cheaper, combining multiple inexpensive links becomes competitive with buying a single, more expensive, private line. Furthermore, we show that two paths are sufficient to get most of the benefit.
In addition, the cost of a connection is fixed rather than usage-based. Once one pays the initial cost of an additional connection to a second ISP (which companies using multi-homing have already done), there is no reason not to fully utilize it. The second argument is from a performance point of view, which may be a strict requirement for critical applications. RAIL-ing traffic over $n$ paths provides more robustness to short-term ``glitches'' than dynamic path switching between the same $n$ paths. This is because there are limits on how fast path switching mechanisms can (i) confidently detect glitches and (ii) react to them without causing instability to the network. For example, if a few VoIP packets are sporadically dropped, a path switching system should probably not react, while RAIL can still successfully deliver copies of the lost packets arriving from the redundant paths. Our findings can be summarized as follows. \begin{itemize} \item First, we demonstrate that proactively replicating packets over a {\em Redundant Array of Internet Links (RAIL)} significantly improves the end-to-end performance. We quantify the improvement in terms of network-level as well as application-level metrics. In this process, we use and derive analytical models for the performance of VoIP-over-RAIL and TCP-over-RAIL. We also use a working prototype of RAILedge. \item Second, we design and evaluate a {\em delay padding} mechanism to complement RAIL when there is a significant delay disparity among the underlying paths. This is useful both for VoIP (where it plays a proxy-playout role) and for TCP (where it may remove reordering). \item Third, we show that two paths provide most of the benefit, while additional paths bring decreasing returns. The two preferred paths should be carefully selected based on their quality, similarity/disparity and correlation.
\end{itemize} \begin{figure}[t] \begin{center} \centerline{\includegraphics[scale=0.37,angle=-90]{./figs/general/topology2-eps.eps}} \end{center} \vspace{-10pt} \caption{An example of a Redundant Array of Internet Links (RAIL) connecting two remote sites.} \label{fig:topology} \end{figure} The structure of the rest of the paper is as follows. Section \ref{sec:related} discusses related work. Section \ref{sec:system} describes the RAILedge design, some implementation details and the experimental setup. Section \ref{sec:evaluation} evaluates the performance improvement brought by RAIL in terms of general network-level metrics (subsection \ref{sec:network-level}), VoIP quality (subsection \ref{sec:voip}) and TCP throughput (subsection \ref{sec:tcp}); we also study the sensitivity to the characteristics of the underlying paths. In this evaluation, we use analysis, Matlab simulation, actual packet traces collected over Internet backbones, and testbed experiments. Section \ref{sec:discussion} discusses the bigger picture, including possible extensions and open questions. Section \ref{sec:conclusion} concludes the paper. \section{Related Work} \label{sec:related} The use of redundancy is a well-known technique for improving system reliability \cite{reliability-book}. In the networking context, a common technique is to use redundant diverse paths in order to improve end-to-end performance. Multi-homing and routing overlays both exploit path diversity, primarily to improve availability in case of failures, and secondarily performance in case of congestion on one of the paths. Today, several vendors provide services that combine multi-homing (i.e. the connection of an edge network to several different ISPs) with additional control capabilities at the edge (such as monitoring and dynamic ISP switching, QoS mechanisms, and compression) so as to optimize cost and performance \cite{routescience,internap, fatpipe}.
Overlay networks provide additional control not only at the edge but also at intermediate nodes \cite{ron}. Several researchers are studying the performance of multi-homing and overlay routing, and have proposed algorithms for monitoring and path switching. The pioneering Resilient Overlay Networks project is described in \cite{ron}. Measurement-based performance evaluations of multi-homing can be found in \cite{akella1, akella2}. The benefit from path switching and the effect on application performance was quantified in \cite{upenn1, upenn2}. \cite{selfish-shenker} and \cite{under-over} took a game-theoretic approach to selfish route control and to the relation between the overlay and the underlying network, respectively. The theoretical frameworks proposed in \cite{overlay-tcp, Kelly} formulated the problem of joint multi-path route and rate control and provided a sufficient condition for the stability of such decentralized algorithms. \cite{chennee-cooexisting, dovrolis-infocom06} also demonstrated that overlays can cause instability, and \cite{dovrolis-infocom06} used randomization to break synchronization. In the media streaming community, the idea of path diversity is traditionally combined with multiple-description coding: complementary streams are simultaneously sent over independent paths, to achieve resilience to loss in a bandwidth-efficient manner. \cite{john-video-diversity} proposed to transmit multiple-description video over independent paths; in follow-up work \cite{john-cdn}, the same authors used this idea to design a content-delivery network. \cite{yi-voice-diversity} applied the same idea to Voice-over-IP and also designed a playout scheduling algorithm to handle multi-path transmission. The same authors did a simulation study on the effect of replication and path diversity on TCP transfers \cite{steinbech-tcp-diversity}. Our work fits in this scope as follows.
It is related to multi-homing and overlay approaches in that it tries to improve end-to-end performance by connecting edge networks via several different ISPs and by exploiting their path diversity. The novel aspect we focus on is the proactive replication of every packet over the available paths in a single RAIL. This aspect is orthogonal to the online decision of switching traffic between RAILs (i.e. sets of paths). However, in this paper we still explore how to choose and manage the physical paths that constitute a single RAIL. Similarly to \cite{upenn1, upenn2}, we are looking at application-level metrics, particularly for VoIP and TCP. In contrast to the media-streaming work, we transmit redundant as opposed to complementary descriptions, operating on the assumption that bandwidth is not the issue. Our delay padding algorithm resembles playout buffering \cite{yi-voice-diversity} in that it tries to smooth out the network delay jitter; however, it is implemented at an edge device instead of the end-point, and acts only as a playout proxy without dropping packets. As the acronym ``RAIL'' indicates, our approach is inspired by the {\em Redundant Array of Inexpensive Disks} (or {\em RAID}), an idea for improving disk reliability and performance, proposed in the classic SIGMOD'88 paper by D. Patterson, G. Gibson and R. Katz \cite{raid}. The basic idea of RAID was to combine multiple small, inexpensive disk drives into an array of disk drives which yields performance exceeding that of a Single Large Expensive Drive (SLED), and appears to the computer as a single logical storage unit or drive. Furthermore, disk arrays can be made fault-tolerant by redundantly storing information in various ways. Five types of array architectures, RAID-1 through RAID-5, were defined by the Berkeley paper, each providing disk fault-tolerance and each offering different trade-offs in features and performance.
The different levels of RAID in the original taxonomy \cite{raid} correspond to various functions of an intelligent network device connected to several ISPs. E.g. a network device that load-balances the outgoing traffic over the available paths increases the throughput; it could be named RAIL-0, because it corresponds to {\em striping}, or {\em RAID level 0}, in \cite{raid}. In this paper, we focus on packet replication over several paths, which is analogous to {\em disk mirroring}, or {\em RAID level 1}, in \cite{raid}. Similarly to RAID advocating multiple small inexpensive disks instead of a single large expensive one, we believe that, as bandwidth gets cheaper and cheaper, redundant replication of packets over independent, inexpensive Internet connections becomes the simplest cost-efficient approach for achieving high quality-of-service and reliability. \section{System Design} \label{sec:system} \subsection{RAIL Mechanisms Overview} \label{sec:rail-mechanisms} RAIL improves the packet delivery between two remote local area networks (LANs) by connecting them through multiple wide-area paths. The paths are chosen to be as independent as possible, e.g. belonging to different Internet Service Providers. Fig.\ref{fig:topology} shows an example of two disjoint paths: Link 1 goes through ISP-A and ISP-C, and Link 2 goes through ISP-B and ISP-D. (The simplest configuration would be to have both LANs connected to the same two ISPs.) For simplicity, we describe the system using two paths only; the same ideas apply to $n>2$ paths. A RAILedge device is required to connect each LAN to the wide-area paths. Each packet that transitions from the LAN to the WAN via the RAILedge is replicated at the RAILedge and sent out over both WAN links. Copies of the same packet travel in parallel through the different WAN links and eventually arrive at the receiving RAILedge. There are three possibilities: both copies arrive, one copy arrives, or no copy arrives.
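Assuming independent per-packet loss rates $p_1$ and $p_2$ on the two paths (independence is an assumption here, revisited in the evaluation), the probabilities of these three outcomes are easy to write down; a minimal sketch:

```python
# Sketch: outcome probabilities for one packet replicated over two
# independent paths with loss rates p1 and p2 (independence is an
# assumption; the paths are chosen to be as disjoint as possible).
def rail_outcomes(p1, p2):
    both = (1 - p1) * (1 - p2)           # both copies arrive
    one = p1 * (1 - p2) + (1 - p1) * p2  # exactly one copy arrives
    none = p1 * p2                       # packet lost over RAIL
    return both, one, none

both, one, none = rail_outcomes(0.01, 0.02)
assert abs(both + one + none - 1) < 1e-12
# The RAIL loss rate is the product of the individual loss rates.
assert none == 0.01 * 0.02
```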
The receiving RAILedge examines every packet coming in from the WAN and suppresses any duplicates; i.e. it forwards the first copy of each packet toward its destination but discards any copies arriving later. The result is clear: the probability that the packet is lost (i.e. that both copies are lost) is lower than over a single path, and the delay experienced is the minimum of the delays on the two paths. Overall, the application perceives a virtual RAIL link that is better than the underlying physical links. \begin{figure}[t] \begin{center} \centerline{\includegraphics[scale=0.25,angle=-90]{./figs/general/railedge-eps.eps}} \end{center} \vspace{-10pt} \caption{Components of our prototype RAILedge.} \label{fig:railedge} \end{figure} In summary, the RAILedge performs three basic operations: (i) packet duplication, (ii) forwarding over all redundant Internet links and (iii) duplicate suppression. RAILedge-RAILedge communication happens over VPN tunnels, to ensure that every RAIL-ed packet is received by the intended RAILedge. We implement tunneling with a simple encapsulation/decapsulation scheme; our header includes the ID of the sending RAILedge and a sequence number, which is used to suppress duplicates at the receiving RAILedge. All RAILedge operations are transparent to the end-user. The components of a RAILedge device are shown in Fig.\ref{fig:railedge} and the steps taken upon reception of a packet are summarized in Fig.\ref{fig:rail-lan-wan}. \begin{figure} {\centering \subfigure[Packet from LAN to WAN.] {\includegraphics[scale=0.3,angle=-90]{figs/general/lan2wan-eps.eps}} \subfigure[Packet from WAN to LAN.] {\includegraphics[scale=0.3,angle=-90]{figs/general/wan2lan-eps.eps}} \caption{\label{fig:rail-lan-wan} RAIL functions upon reception of a packet.}} \end{figure} There is a component of the RAILedge that we are not going to examine in this paper: link monitoring and selection.
This module is responsible for monitoring the performance of every physical path, computing appropriate quality metrics, and choosing the best subset of paths to constitute the RAIL, over which packets should be replicated. Link monitoring and dynamic selection is a research problem in itself, with an extensive and growing literature. In this paper, we do not study dynamic path switching.\footnote{Intuitively, we expect that dynamic RAIL switching is a less constrained problem than single-path switching because (i) redundant transmission in a single RAIL provides robustness to short-term problems and (ii) most paths have consistent behavior over longer time scales.} Instead, we focus on (i) evaluating the replication of packets over {\em all} paths that constitute the RAIL under study and (ii) giving recommendations on how to statically select these paths. This is still useful for a typical use of RAIL: initially, the user compares different ISPs and decides which is the best set to subscribe to; after subscription, the user replicates packets over all ISPs. \subsection{Delay Padding} \label{sec:padding-mechanism} Delay padding is a mechanism that complements the basic RAIL mechanism when there is delay disparity between the paths. The idea is the following. The default behavior of the receiving RAILedge is to forward the first copy and discard all copies that arrive later. However, this may not always be the best choice when there is significant delay disparity between the two paths. In such cases, one can construct pathological scenarios where the default RAIL policy results in patterns of delay jitter that adversely affect the application. One example is VoIP: the playout buffering algorithm at the receiver tries to estimate the delay jitter and adapt to it. This playout algorithm is unknown to us and out of our control; even worse, it is most likely designed to react to delays caused by real single paths, not by virtual RAIL paths.
For example, when path 1 is much faster than path 2, most of the time RAIL will forward copies arriving from path 1. The playout buffer may adapt and closely match it, by choosing a playout deadline slightly above the delay of path 1. When packets are lost on the fast path, the copies arriving from the slow path will arrive too late to be played out, and will be useless. In this scenario, a better use of the two paths would be to ``equalize'' the delay on the two paths by artificially delaying the packets arriving from the fast path; hence the name ``delay padding''. Essentially, delay padding acts as a proxy for playout, located at the RAILedge, and presents the receiver with the illusion of a roughly constant one-way delay. The main difference from a playout algorithm at the end-host is that delay padding does not drop packets that arrive late for playout. \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.35]{./figs/general/delayPadding.eps}} \end{center} \vspace{-20pt} \caption{Delay padding: artificially delay some packets so that all packets experience the same one-way delay.} \label{fig:paddingAlg} \end{figure} Fig. \ref{fig:paddingAlg} demonstrates the main idea of delay padding, for packets in the same VoIP flow. The goal is to minimize jitter, i.e. to make all packets experience the same, roughly constant, one-way delay $D$, shown as a straight line. For every packet $i$, two copies arrive: the first one is marked with a circle, the second with a diamond. The actual time at which RAIL forwards the packet is marked with an ``X''. Without padding, RAIL would normally forward the first copy, which incurred one-way delay $n_{RAIL}=\min\{delay_1, delay_2\}$. With padding, we compare $n_{RAIL}$ to the target one-way delay $D$. \begin{itemize} \item In cases 1 and 2: $n_{RAIL}<D$. We wait for an additional ``padding'' time $D-n_{RAIL}$ before forwarding the packet. \item In case 3: $n_{RAIL}>D$.
We forward the packet immediately, without further delay. (Instead, a playout algorithm at the receiver would just drop the late packets.) \end{itemize} The target one-way delay $D$ is chosen so as to maximize the overall voice quality (MOS): $D=\arg\max_{d}\,MOS(d)$, where $d$ ranges over the feasible one-way delays. $D$ should be chosen taking into account the statistics of the two paths and the delay budget. Adaptation of this value should be allowed only on much larger time scales. We discuss the choice of $D$ to optimize $MOS$, as well as the performance improvement from delay padding, in the section on VoIP evaluation (\ref{sec:voip-quality}). Delay padding may prove a useful mechanism for TCP as well. For example, it could be used to remove reordering, caused by RAIL for certain combinations of paths. This is discussed further in the section on reordering (\ref{sec:reordering-general}) and in the section on the effect of reordering on TCP in particular (\ref{sec:reordering}). A practical implementation of delay padding for VoIP would require (i) the ability to identify voice packets and keep per-flow state and (ii) calculations of timing in terms of relative instead of absolute one-way delay. An implementation of reordering removal for TCP would not necessarily require per-flow state; it could just use the sequence numbers on the aggregate flow between the two RAILedges. \subsection{RAIL Prototype and Experimental Setup} In order to evaluate RAIL performance, we developed a RAILedge prototype that implements the functionality described in Section \ref{sec:rail-mechanisms}. Our prototype runs on Linux and consists of a control-plane and a data-plane agent, both running in user space. All routing and forwarding functionality is provided by the Linux kernel. The control plane is responsible for configuring the kernel with static routes and network interfaces. The data plane is responsible for the packet processing, i.e. encapsulation/decapsulation, duplication, duplicate suppression and delay padding.
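The padding rule described above (cases 1 and 2: pad up to $D$; case 3: forward immediately, never drop) reduces to a single comparison per packet. A minimal sketch of the decision logic, for illustration only (not the prototype's actual code):

```python
# Sketch of the delay-padding rule (not the prototype's actual code):
# a packet whose first copy arrives with one-way delay n_rail is held
# until the target delay D, and forwarded immediately if already late.
def forward_time(send_time, n_rail, D):
    if n_rail < D:
        return send_time + D   # cases 1-2: pad by D - n_rail
    return send_time + n_rail  # case 3: forward at once, never drop

# Cases 1 and 2: early copies are padded up to D = 50 ms.
assert forward_time(0.0, 0.030, 0.050) == 0.050
# Case 3: a late copy is forwarded immediately (no drop).
assert forward_time(0.0, 0.080, 0.050) == 0.080
```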
In particular, the kernel forwards each received packet to the data-plane agent, which processes it appropriately and forwards it back to the kernel for regular IP forwarding; see Fig.\ref{fig:railedge}. Our user-space prototype is sufficient for a network connected to the Internet through a T1 or T3 line: without considering duplicate packets, a RAILedge running on a 1.9 GHz CPU with 512 MB of DRAM forwards up to 100,000 minimum-size packets per second (about 51 Mbps) and up to 62,500 average-size (400-byte) packets per second (about 200 Mbps), while introducing negligible jitter. For higher-end links, we would need a different prototype that implements the entire data path in kernel space. \begin{figure}[t] \begin{center} \centerline{\includegraphics[scale=0.3,angle=-90]{./figs/general/testbedRAIL-eps.eps}} \end{center} \vspace{-20pt} \caption{Experimental setup for RAIL.} \label{fig:testbed-RAIL} \end{figure} Fig. \ref{fig:testbed-RAIL} shows our experimental setup. Two Linux boxes, Host-A and Host-B, communicate through prototype RAILedges A and B, respectively. The two RAILedges are connected directly through two of their physical interfaces (eth2-eth2, eth3-eth3), thus emulating the wide-area Links 1 and 2 shown in Fig.\ref{fig:topology}. We used Netem \cite{netem} on interfaces eth2 and eth3 to emulate the properties of wide-area networks in a controlled way. The current version of Netem emulates variable delay, loss, duplication and reordering, and is included in the Linux kernel. We also emulated WAN links of various bandwidths, using the rate limiting functionality in Linux (iproute2/tc). \section{Performance evaluation} \label{sec:evaluation} In section \ref{sec:network-level}, we show that RAIL outperforms any of the underlying physical paths in terms of network-level metrics, i.e. it reduces loss and delay/jitter, improves availability, and does not make reordering any worse than it already is on the underlying paths.
In sections \ref{sec:voip} and \ref{sec:tcp} we look at the improvement in terms of application-level metrics for VoIP (MOS) and TCP (throughput); we also look at how this improvement varies with the characteristics, combinations and number of underlying paths. \subsection{RAIL improves network-level metrics} \label{sec:network-level} RAIL statistically dominates any of the underlying paths, i.e. it presents the end-systems with a virtual path with better statistics in terms of network-level metrics (loss, delay, jitter and availability). This is intuitively expected: at the very least, RAIL could use just one of the paths and ignore the other; having more options should only improve things. A natural consequence is that any application performance metric calculated using these statistics (e.g. loss rate, average delay, jitter percentiles) should also be improved by RAIL; we found this to be indeed the case when computing metrics for VoIP and TCP. In addition to the statistics, we also looked at pathological sample paths, e.g. cases where reordering or special patterns of jitter may arise; we show that RAIL does not make things worse than they already are, and that delay padding is able to handle these cases. \subsubsection{Loss} Clearly, for independent paths, RAIL decreases the average packet loss rate from $p_1$, $p_2$ to $p=p_1p_2$. One can derive some useful rules of thumb based on this simple fact. {\em Number of paths.} Given that the actual loss rates are very small in practice ($p_i \ll 0.1$), every new independent path reduces the loss rate $p=p_1p_2\cdots p_n$ by at least an order of magnitude. For similar paths ($p_1=\cdots=p_n=p$), it is easy to see that the loss probability $P_{RAIL}(k)=p^k$ is a decreasing, convex function of the number of paths $k$. Therefore, most of the benefit comes from adding the $2^{nd}$ path, and additional paths bring only decreasing returns.
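This rule of thumb is easy to check numerically; a minimal sketch, assuming independent, identical paths and an illustrative (not measured) per-path loss rate:

```python
# Sketch: loss probability over k independent, identical paths with
# per-path loss rate p is P_RAIL(k) = p**k; the marginal improvement
# shrinks rapidly with k, so the 2nd path brings most of the benefit.
def p_rail(p, k):
    return p ** k

p = 0.01  # 1% per-path loss, an illustrative value
losses = [p_rail(p, k) for k in range(1, 5)]
assert abs(losses[1] - 1e-4) < 1e-12
assert losses[0] > losses[1] > losses[2] > losses[3]

# In absolute terms, the 2nd path removes 0.0099 of the loss rate,
# while the 3rd removes only 0.000099: decreasing returns.
gain_2nd = losses[0] - losses[1]
gain_3rd = losses[1] - losses[2]
assert gain_2nd > 99 * gain_3rd
```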
However, adding a second path with a significantly {\em different} (smaller) loss rate dominates the product and makes a big difference. {\em Correlation.} In practice, the physical paths underlying RAIL may overlap. E.g. consider two paths that share a segment with loss rate $p_{shared}$, and also have independent segments with $p_1=p_2=p$. Loss is experienced on a single path w.p. $p_{single}=1-(1-p)(1-p_{shared})$. Loss is experienced over RAIL w.p. $p_{RAIL}=1-(1-p^2)(1-p_{shared})$. Fig. \ref{fig:shared-gain} plots $p_{RAIL}$ vs. $p$ for various values of $p_{shared}$. Clearly, $p_{RAIL}$ increases in both $p$ and $p_{shared}$. The lossier the shared part, $p_{shared}$, compared to the independent part, $p$, the less improvement we get by using RAIL (the curves for $p_{RAIL}$ and $p_{single}$ get closer and closer). Therefore, one should not only look at the end-to-end behavior of candidate paths, but also at the quality of their shared part, and choose the combination of paths that yields the lowest overall $p_{RAIL}$. \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.45]{./figs/network-level/lossShared.eps}} \end{center} \vspace{-30pt} \caption{{\em The effect of shared loss.} Consider two paths with shared loss rate $p_{shared}$ and independent loss $p_1=p_2=p$ each. Here we plot the end-to-end $p_{RAIL}$ and $p_{single}$ vs. $p$, for various values of $p_{shared}$.} \label{fig:shared-gain} \end{figure} RAIL also decreases the {\em burstiness of loss}. Due to lack of space, we omit the analysis and refer the reader to section \ref{sec:voip-testbed} for testbed experiments that demonstrate this fact. \subsubsection{Availability} The simplest way to view a ``failure'' is as a long-lasting period of loss, and we can talk about the percentage of time a path spends in failure. Then, the arguments we made for loss in the previous section apply here as well. E.g. for RAIL to fail, both paths must fail; the downtime decreases fast with the number and quality of paths.
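As a quick sketch of this independence argument (assuming independent link failures, which correlated outages would weaken), the RAIL downtime fraction is simply the square of the per-link downtime fraction:

```python
# Sketch: if each of two independent links is down a fraction f of
# the time, the RAIL link is down only f**2 of the time (independence
# is an assumption; correlated failures would weaken this).
hours_per_week = 7 * 24

def rail_downtime(f):
    return f ** 2

# 2% downtime per link (about 3.4 hours/week) ...
per_link = 0.02
assert abs(per_link * hours_per_week - 3.36) < 0.01
# ... becomes 0.04% over RAIL.
assert abs(rail_downtime(per_link) - 0.0004) < 1e-12
```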
Table \ref{table:Ken-downtime} gives a concrete idea of how much RAIL decreases the downtime. \begin{table}[h] \begin{center} \begin{tabular}{c|c} If both Internet links have & Then the RAIL link has \\ that much {\em bad} time: & that much {\em medium} time: \\ \hline {\footnotesize 10\% (2.5 hours/day)} & {\footnotesize 1\% (1.5 hours/week)} \\ \hline {\footnotesize 2\% (3+ hours/week)} & {\footnotesize 0.04\% (3.5 hours/year)}\\ \hline {\footnotesize 0.5\% (1- hours/week)} & {\footnotesize 0.0025\% (15 min/year)} \\ \hline {\footnotesize 0.1\% (45 min/month)} & {\footnotesize 0.0001\% (30 sec/year)} \end{tabular} \caption{\label{table:Ken-downtime}RAIL reduces downtime {\em and} improves quality} \end{center} \end{table} Note that RAIL not only reduces the time we spend in a ``bad period'', but also improves the user experience from ``bad'' to ``medium'' during that period. We demonstrate this in detail in the VoIP section (in particular, see Table \ref{table:MOStable}). \subsubsection{Delay and Jitter} When a packet $i$ is RAIL-ed over two independent paths, the two copies experience one-way delays $d_1(i)$ and $d_2(i)$, and the packet forwarded by RAIL (the copy that arrived first) experiences $d(i)=\min\{d_1(i), d_2(i)\}$. If the cumulative distribution function (CDF) of $d_j,~j=1,2$, is $F_j(t)=Pr[d_j \le t]$, then the delay CDF for RAIL is: \begin{equation} \begin{split} F(t)=Pr[d\le t]=Pr[\min\{d_1,d_2\}\le t]~~~~~~~~~~~~~~~\\ =1-Pr[d_1> t~\text{and}~d_2>t]=1-(1-F_1(t))(1-F_2(t)) \end{split} \end{equation} It is easy to see that RAIL statistically dominates either of the two paths. Indeed, the percentage of packets experiencing delay more than $t$ over RAIL is $1-F(t)=(1-F_1(t))(1-F_2(t))$, which is smaller than the percentage of packets exceeding $t$ on either of the two links ($1-F_i(t)$). This means that the entire delay CDF is shifted up and to the left, thus $F$ dominates $F_1$ and $F_2$. Any quality metrics calculated based on these statistics (e.g.
the average delay, percentiles, etc.) will be better for RAIL than for either of the two paths. Rather than plotting arbitrary distributions at this point, we choose to demonstrate the delay and jitter improvement in some practical scenarios considered in the VoIP section (\ref{sec:voip}). \subsubsection{\label{sec:reordering-general}Reordering} An interesting question is whether RAIL introduces reordering, which may be harmful to TCP performance. In this section, we show that RAIL does not make things worse than they already are on the underlying paths. RAIL cannot reorder packets if each underlying path does not reorder and does not drop packets. RAIL may translate loss on one path into late arrivals from the other path, which is actually an improvement. {\em {\bf Proposition 1.} If each path does not drop or reorder packets, then RAIL cannot introduce reordering.}\\ {\bf Proof.} Let us assume that RAIL can reorder packets. Fig.\ref{fig:reorder-example}(a) shows an example sequence of out-of-order packets forwarded by the receiving RAILedge: (3,5,4). The same arguments hold for any sequence $(i,k,j)$ with $i<j<k$. Packets 3 and 5 must have arrived through different paths (otherwise one of the paths would have dropped packet 4 or reordered it). Say 3 arrives from the top path and 5 from the bottom path. Then the copy of 3 sent on the bottom path must have arrived between 3 and 5 (otherwise RAIL would have forwarded the bottom copy of 3 first). What happened to packet 4 sent on the bottom path? If it arrived between 3 and 5, then there would be no out-of-order at RAIL; if it arrived after 5, then the bottom path would have reordered 4 and 5, which we assumed is not the case; and we have assumed that 4 is not dropped either. We reached a contradiction, which means that RAIL cannot reorder packets if both paths are well behaved to start with.
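Proposition 1 can also be checked by simulation; the sketch below (our own toy model, not RAILedge code) builds two in-order, loss-free arrival sequences and verifies that forwarding the first-arriving copy of each packet never reorders:

```python
import random

def rail_forward_order(arr1, arr2):
    # arr_j[i] = arrival time of packet i's copy on path j.
    # RAIL forwards each packet at its first arrival and suppresses
    # the duplicate; return sequence numbers in forwarding order.
    first = [min(a, b) for a, b in zip(arr1, arr2)]
    return [i for _, i in sorted((t, i) for i, t in enumerate(first))]

random.seed(0)
n = 1000
arr1, arr2 = [], []
t1 = t2 = 0.0
for i in range(n):
    send = i * 10.0                       # packets sent every 10 ms
    # Each path delivers every packet, in order (arrival times
    # strictly increasing per path), with random queueing delay:
    t1 = max(t1, send) + random.uniform(5, 20); arr1.append(t1)
    t2 = max(t2, send) + random.uniform(5, 20); arr2.append(t2)

out = rail_forward_order(arr1, arr2)
assert out == sorted(out)  # Proposition 1: no reordering introduced
```

The check mirrors the proof: the per-packet first-arrival time is the minimum of two increasing sequences, which is itself increasing.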
\begin{figure} \begin{center} {\centering \subfigure[If each path does not reorder or drop packets, then RAIL cannot reorder them.] {\includegraphics[scale=0.3,angle=-90]{figs/general/reorder-theorem-eps.eps}}} {\centering \subfigure[RAIL converts loss on the faster path to reordering, if a packet is dropped on the fast path and $dt<d_2-d_1$.] {\includegraphics[scale=0.3,angle=-90]{figs/general/reorder-example-2-eps.eps}}} \end{center} \vspace{-10pt} \caption{\label{fig:reorder-example}RAIL and Reordering} \end{figure} {\em {\bf Proposition 2.} RAIL may translate loss on the faster path to late arrivals from the slower path. If the inter-packet spacing at the sender is smaller than the delay difference of the two paths, then the packets arrive out of order.} \\ {\bf Example.} In Fig.\ref{fig:reorder-example}(b), we consider paths 1 and 2, with one-way delays $d_1<d_2$. Two packets $n$ and $m$ are sent with spacing $dt$ between them. If packet $n$ is lost on the fast path, and $dt \le d_2-d_1$, then $n$ will arrive at the RAILedge after $m$ and the RAILedge will forward them out of order. The larger the delay difference $d_2-d_1$ and the smaller the spacing $dt$ between packets, the larger the reordering gap. {\em {\bf Fact 3.} Better late than never.}\\ {\bf Discussion.} For VoIP, it does not hurt to receive packets late, as opposed to not receiving them at all. However, out-of-order packets may hurt TCP performance. Testbed experiments in section \ref{sec:reordering} show that TCP performs better when $x\%$ of packets arrive out of order than when $x\%$ of packets are lost. Furthermore, the delay padding component is designed to handle the timely delivery of packets. We will revisit this fact in section \ref{sec:reordering}.
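For concreteness, Proposition 2 can be checked with the numbers of a hypothetical instance (the delay values below are ours, not measured):

```python
d1, d2 = 20.0, 45.0   # one-way delays (ms) of the fast and slow path
dt = 10.0             # inter-packet spacing at the sender

# Packet n is sent at time 0 and lost on the fast path, so it only
# arrives via the slow path; packet m is sent dt later and arrives
# first via the fast path.
arrival_n = 0.0 + d2
arrival_m = dt + d1

# Since dt <= d2 - d1, m overtakes n and RAIL forwards out of order:
assert dt <= d2 - d1
assert arrival_m < arrival_n
# The reordering gap grows with d2 - d1 and shrinks with dt:
print(arrival_n - arrival_m)   # (d2 - d1) - dt = 15.0 ms
```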
\subsection{\label{sec:voip}RAIL improves VoIP performance} \subsubsection{Voice-over-IP Quality} \label{sec:voip-quality} A subjective measure used to assess Voice-over-IP quality is the Mean Opinion Score (MOS), which is a rating on a scale from 1 (worst) to 5 (best) \cite{P.800}. Another equivalent metric is the $I$ rating, defined in the E-model \cite{g.107}, which also provides a translation between $I$ and $MOS$; in this paper, we convert and present voice quality in the MOS scale only, even when we do some calculations in the $I$ scale. \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.3]{./figs/voice/MOScontours-forplayout.eps}} \end{center} \vspace{-20pt} \caption{Voice Quality as a function of both loss and delay.} \label{fig:MOScontours} \end{figure} VoIP quality has two aspects. The first is speech quality, and it depends primarily on how many and which packets are dropped in the network and/or at the playout buffer. The standards \cite{g.107,g.113} express the speech quality as a function of the packet loss rate, $MOS_{speech}(loss~rate)$, for various codecs. The second aspect of VoIP quality is interactivity, i.e. the ability to comfortably carry on an interactive conversation; \cite{g.107,g.114} express this aspect as a function of the average one-way delay, $MOS_{interactivity}(avg~delay)$, for various conversation types. These two aspects can be added together (in the appropriate $I$ scale defined in \cite{g.107}) to give an overall MOS rating: $MOS=MOS_{speech}+MOS_{interactivity}$. This is the metric we will use throughout this section. We do not present the details of these formulas in this submission, due to lack of space. The interested reader is referred to the ITU-T standards \cite{g.107, g.113, g.114} or to comprehensive tutorials on the subject \cite{athina-voip,cole}.
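For readers unfamiliar with the $I$/$R$ scale, the standard E-model mapping from an $R$ rating to MOS (ITU-T G.107) is sketched below; impairments such as loss and delay subtract from $R$, which is why the two quality aspects can simply be added in that scale before converting to MOS:

```python
def r_to_mos(r):
    # Standard E-model R-to-MOS conversion (ITU-T G.107).
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# A perfect narrowband call tops out at MOS 4.5; R = 0 maps to MOS 1:
print(r_to_mos(100))   # 4.5
print(r_to_mos(0))     # 1.0
```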
What the reader needs to keep in mind is that there are either formulas or tables for $MOS_{speech}(loss~rate)$ and $MOS_{interactivity}(avg~delay)$, and that $MOS=MOS_{speech}+MOS_{interactivity}$. This is a commonly used methodology for assessing VoIP quality, e.g. see \cite{athina-voip,upenn1}. Fig.\ref{fig:MOScontours} shows contours of MOS as a function of loss and delay, based on the data provided in the ITU-T standards, considering the G.711 codec and free conversation. {\bf The effect of playout.} In the assessment of VoIP, one should take into account the function of the playout algorithm at the receiver, which determines the playout deadline $D_{playout}$: packets with one-way delay exceeding $D_{playout}$ are dropped. As $D_{playout}$ increases, the one-way delay increases (thus making interactivity worse), but fewer packets are dropped due to late arrival for playout (thus making speech quality better). Therefore, there is a tradeoff in choosing $D_{playout}$, and one should choose $D_{opt}=\arg\max_{D_{playout}} MOS(D_{playout})$. This tradeoff is depicted in Fig. \ref{fig:MOScontours} and is also responsible for the shape of the $MOS(D_{one~way})$ curves of Fig.\ref{fig:mos-vs-delay}, which clearly have a maximum at $D_{opt}$. The value $D_{opt}$ depends on the loss, delay and jitter of the underlying paths, as well as on the delay budget consumed in components other than the playout. Recall that $D_{playout}$ is only a part of the total $D_{one~way}=D_{end~systems}+D_{network}+D_{playout}$ and that packets arriving late contribute to the total loss ($packet~loss=(network~loss)+Pr[d>D_{playout}]$). {\em The effect of RAIL.} In the previous section, we saw that RAIL decreases (i) the loss rate, (ii) the average delay and (iii) the percentage of late packets. Therefore, it also improves the $MOS$, which is a function of these three statistics.
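The choice of $D_{opt}$ described above amounts to a one-dimensional sweep over candidate deadlines. The sketch below illustrates it with a toy MOS function of our own invention (the real curves come from the ITU-T tables); late packets are counted as lost, as in the text:

```python
import random

def best_playout(delays, candidates, mos):
    # D_opt = argmax over D_playout of MOS(D_playout), where packets
    # with one-way delay > D_playout are counted as lost.
    def score(D):
        late = sum(d > D for d in delays) / len(delays)
        return mos(D, late)
    return max(candidates, key=score)

def toy_mos(d_playout, loss):
    # Toy stand-ins for MOS_speech(loss) and MOS_interactivity(delay):
    return (4.4 - 25.0 * loss) + (-0.004 * d_playout)

random.seed(1)
delays = [random.gauss(80, 25) for _ in range(5000)]  # one-way delays (ms)
print(best_playout(delays, range(60, 300, 10), toy_mos))
```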
\subsubsection{Railing VoIP over representative Internet Paths} In this section, we use realistic packet traces to simulate the behavior of WAN links. In particular, we use the packet traces provided in \cite{athina-traces}, which are collected over the backbone networks of major ISPs, by sending probes that emulate G.711 traffic. Fig.~\ref{fig:trace-b1-m2}(a) and (b) show the delay experienced on two paths between San Jose, CA and Ashburn, VA. The two paths belong to two different ISPs and experience different delay patterns. Fig.\ref{fig:trace-b1-m2}(c) shows the one-way delay experienced by packets RAIL-ed over these two paths. Packets were sent every 10ms. Although there is no network loss in these example paths, packets may still be dropped if they arrive after their playout deadline. Because the action of playout is out of the control of the RAILedge, we consider the entire range of fixed one-way playout deadlines (out of which 70ms are considered consumed at the end-systems). The resulting $MOS$ is shown in Fig.\ref{fig:mos-vs-delay} as a function of $D_{one~way}$.\footnote{The curve $MOS(D_{one~way})$ has a maximum which corresponds to the $D_{playout}^{opt}$ that optimizes the loss-delay tradeoff in the overall $MOS$.} Notice that the $MOS$ curve for RAIL is higher than both curves corresponding to the individual links, for the entire range of delays considered. \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.3]{./figs/voice/testB-traces-playout140.eps}} \end{center} \vspace{-30pt} \caption{One-way delay experienced when packets are transmitted over example path 1, example path 2, and using RAIL over these two paths ($d_{RAIL}=\min\{d_1,d_2\}$).} \label{fig:trace-b1-m2} \end{figure} \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.3]{./figs/voice/MOS-vs-delay.eps}} \end{center} \vspace{-20pt} \caption{MOS vs.
playout deadline for the traces in Fig.\ref{fig:trace-b1-m2}.} \label{fig:mos-vs-delay} \end{figure} In general, RAIL always improves VoIP quality because it presents the application with a better virtual path in terms of loss, delay and jitter. However, the relative improvement of RAIL vs. a single path depends (i) on the behavior of the two paths and (ii) on the playout algorithm. This was just an illustrative example of RAIL over two specific paths. We now consider additional representative traces and their combinations using RAIL. We consider six packet traces from \cite{athina-traces}, shown in Fig.~\ref{fig:alltraces}. We call the traces ``good'', ``medium'' and ``bad'', to roughly describe the VoIP performance they yield.\footnote{E.g. we call the two traces on the top row ``good'' because they have almost constant delay, and negligible or no loss. We call the two traces in the middle row ``medium'' because they are good most of the time, except for a period of high delay/jitter/loss. Finally, the traces in the bottom row have very high delay (up to 400ms!) and jitter.} We then considered pairs of paths for all the combinations of good/medium/bad quality, by choosing one trace from the left and the second trace from the right of Fig.\ref{fig:alltraces}. Table \ref{table:MOStable} shows the MOS for each one of the 6 paths, as well as for these 9 combinations using RAIL.\footnote{In all cases, a conservative fixed playout deadline of 200ms was considered; 40ms delay has also been added for the end-systems.} One can see that the combined link (RAIL) provides one ``class'' better quality than any of the individual links. For example, if at least one path is good ($MOS>4$), then it dominates and the RAIL link is good, regardless of the second link. Two medium links (roughly $3<MOS<4$) give a good RAIL link ($MOS>4$) and two bad links ($MOS<2$) give a medium RAIL link, i.e. there is one class of service improvement.
This is intuitively expected, because RAIL multiplexes and uses the best of both paths. In addition, we did informal in-house listening tests: we simulated the transmission of actual speech samples over these traces and had people listen to the reconstructed sound. It was clear that the RAIL-ed sample sounded much better. \begin{figure}[t!] \begin{center} \centerline{\includegraphics[scale=0.4]{./figs/voice/alltraces.eps}} \end{center} \vspace{-20pt} \caption{{\footnotesize Six representative packet traces, collected over wide-area paths of Internet backbones \cite{athina-traces}. We plot one-way delay vs. packet sequence number; when a packet is lost we give it a 0 value.}} \label{fig:alltraces} \vspace{-10pt} \end{figure} \begin{table}[th!] \begin{center} \begin{tabular}{c||c|c|c} {\bf RAIL}& \multicolumn{3}{c}{\bf Path 2}\\ \hline \multicolumn{1}{c||}{\bf Path 1} & Good-2 & Medium-2 & Bad-2 \\ \multicolumn{1}{c||}{~} & (4.19) & (3.02) & (1.19) \\ \hline \hline Good-1 & ~& ~& ~~\\ (4.21) & 4.21 & 4.21 & 4.21\\ \hline Medium-1 & ~& ~ & ~\\ (3.87) & 4.21 & 4.21 & 4.00\\ \hline Bad-1 & ~& ~& ~~\\ (1.76) & 4.20& 3.97 & 3.09\\ \end{tabular} \caption{\label{table:MOStable}Voice Quality (in terms of MOS score) for the 6 representative paths, and for their 9 combinations using RAIL.} \end{center} \vspace{-10pt} \end{table} Notice that this quality improvement is in addition to the availability improvement of Table \ref{table:Ken-downtime}: RAIL not only reduces the time spent in ``bad/medium'' periods, but also improves the experience of the user during those periods, from ``bad'' to ``medium'' and from ``medium'' to ``good''. \subsubsection{\label{sec:voip-testbed}Testbed experiments for VoIP-over-RAIL} In this section, we use our testbed to demonstrate the improvement that RAIL brings to VoIP quality over the entire range of path conditions. We used Netem to control the loss and delay parameters of each path.
We sent probes to emulate the transmission of voice traffic.\footnote{200B UDP packets were sent every 20ms (corresponding to G.711 at 64kbps and 20ms packetization: 160B payload and 40B RTP/UDP/IP headers) for 2min duration.} \begin{figure} {\centering \subfigure[Packet loss measured on each path.] {\includegraphics[scale=0.3]{figs/voice/testbed/loss-netem.eps}} \subfigure[Resulting MOS (speech quality)] {\includegraphics[scale=0.3]{figs/voice/testbed/loss-netem-MOS.eps}} \caption{\label{fig:loss-testbed} Testbed experiments on the effect of packet loss on VoIP with/without RAIL. }} \end{figure} First, we looked at the {\em loss rate}. We applied uniform loss with the same loss rate $p$ on both paths, from 1 to 20\%, which is quite high but may happen during short periods of bursty loss. As expected, the voice stream experiences loss rate $p^2$ when transmitted over RAIL, and $p$ over a single link. Indeed, in Fig.\ref{fig:loss-testbed}(a), the measured $45$-degree red line (for a single link) agrees with $p$; the measured blue line (for RAIL) agrees with the theoretical $p^2$ dashed purple line. This loss reduction results in a speech quality improvement of up to 1.5 MOS units. Fig. \ref{fig:loss-testbed}(b) shows that the MOS (averaged over the entire duration) is practically constant when we use RAIL, while the MOS over a single link decreases rapidly with increasing loss rate. A side-benefit is that speech quality varies less with time, which is less annoying for the user. \begin{table}[h!]
\begin{center} \begin{tabular}{c||c|c|c|c|c} \hline \multicolumn{6}{c} {\bf Number of packets lost in burst} \\ \hline {\bf {\footnotesize Loss}}& \multicolumn{5}{c}{\bf Loss Rate}\\ {\bf {\footnotesize Corr.} }& 10\% & 20\% & 30\% &40\% & 50\% \\ \hline\hline {\footnotesize 0\%} & {\footnotesize 99/{\bf 11}}& {\footnotesize 203/{\bf 58}} & {\footnotesize 298/{\bf 101}} & {\footnotesize 399/{\bf 160}} &{\footnotesize 514/{\bf 249}} \\%& {\footnotesize 581/{\bf 366}} \\ {\footnotesize 20} & {\footnotesize 27/{\bf 1}} & {\footnotesize 127/{\bf 14}} & {\footnotesize 257/{\bf 62}} & {\footnotesize 362/{\bf 158}} &{\footnotesize 512/{\bf 242}} \\%& {\footnotesize 615/{\bf 396}} \\ {\footnotesize 40} & {\footnotesize 6/{\bf 0}} & {\footnotesize 45/{\bf 1}} & {\footnotesize 144/{\bf 33}} & {\footnotesize 340/{\bf 129}} &{\footnotesize 479/{\bf 251}} \\%& {\footnotesize 655/{\bf 426}}\\ {\footnotesize 60} & {\footnotesize 0/{\bf 0}} & {\footnotesize 18/{\bf 0}} & {\footnotesize 76/{\bf 8}} & {\footnotesize 248/{\bf 82}} &{\footnotesize 537/{\bf 258}} \\%& {\footnotesize 729/{\bf 573}}\\ {\footnotesize 80} & {\footnotesize 0/{\bf 0}} & {\footnotesize 0/{\bf 0}} & {\footnotesize 14/{\bf 0}} & {\footnotesize 123/{\bf 12}} &{\footnotesize 466/{\bf 288}} \\%& {\footnotesize 809/{\bf 752}}\\ \hline \end{tabular} \caption{\label{table:inburst}Number of packets lost in burst (out of 1000 total) on a single path (shown in regular font) vs. RAIL (shown in bold font).} \end{center} \vspace{-15pt} \end{table} \begin{table}[h!] 
\begin{center} \begin{tabular}{c||c|c|c|c|c} \hline \multicolumn{6}{c} {\bf Number of bursts} \\ \hline {\bf {\footnotesize Loss}}& \multicolumn{5}{c}{\bf Loss Rate}\\ {\bf {\footnotesize Corr.} }& 10\% & 20\% & 30\% &40\% & 50\% \\ \hline\hline {\footnotesize 0\%} & {\footnotesize 88/{\bf 11}}& {\footnotesize 161/{\bf 52}} & {\footnotesize 204/{\bf 93}} & {\footnotesize 243/{\bf 137}} &{\footnotesize 237/{\bf 197}} \\ {\footnotesize 20\%}& {\footnotesize 22/{\bf 1}}& {\footnotesize 93 /{\bf 13}} & {\footnotesize 185/{\bf 52}} & {\footnotesize 197/{\bf 122}} &{\footnotesize 230/{\bf 178}} \\ {\footnotesize 40\%}& {\footnotesize 5/{\bf 0}}& {\footnotesize 37/{\bf 1}} & {\footnotesize 99/{\bf 28}} & {\footnotesize 175/{\bf 90}} &{\footnotesize 198/{\bf 159}} \\ {\footnotesize 60\%}& {\footnotesize 0/{\bf 0}}& {\footnotesize 13/{\bf 0}} & {\footnotesize 50/{\bf 7}} & {\footnotesize 124/{\bf 57}} &{\footnotesize 164/{\bf 145}} \\ {\footnotesize 80\%}& {\footnotesize 0/{\bf 0}}& {\footnotesize 0/{\bf 0}} & {\footnotesize 4/{\bf 0}} & {\footnotesize 53/{\bf 7}} &{\footnotesize 100/{\bf 97}} \\ \hline \end{tabular} \caption{\label{table:numburst}Number of bursts (out of 1000) on a single path (in regular font) vs. RAIL (in bold).} \end{center} \vspace{-15pt} \end{table} \begin{table}[h!] 
\begin{center} \begin{tabular}{c||c|c|c|c|c} \hline \multicolumn{6}{c} {\bf Maximum burst size} \\ \hline {\bf {\footnotesize Loss}}& \multicolumn{5}{c}{\bf Loss Rate}\\ {\bf {\footnotesize Corr.} }& 10\% & 20\% & 30\% &40\% & 50\% \\ \hline\hline {\footnotesize 0\%} & {\footnotesize 2/{\bf 1}}& {\footnotesize 5/{\bf 4}} & {\footnotesize 7/{\bf 2}} & {\footnotesize 7/{\bf 3}} &{\footnotesize 11/{\bf 3}} \\ {\footnotesize 20\%} & {\footnotesize 3/{\bf 1}}& {\footnotesize 4/{\bf 2}} & {\footnotesize 5/{\bf 3}} & {\footnotesize 8/{\bf 7}} &{\footnotesize 17/{\bf 5}} \\ {\footnotesize 40\%} & {\footnotesize 2/{\bf 0}}& {\footnotesize 3/{\bf 1}} & {\footnotesize 8/{\bf 4}} & {\footnotesize 6/{\bf 5}} &{\footnotesize 10/{\bf 7}} \\ {\footnotesize 60\%} & {\footnotesize 0/{\bf 0}}& {\footnotesize 2/{\bf 0}} & {\footnotesize 4/{\bf 2}} & {\footnotesize 10/{\bf 4}} &{\footnotesize 19/{\bf 7}} \\ {\footnotesize 80\%} & {\footnotesize 0/{\bf 0}}& {\footnotesize 0/{\bf 0}} & {\footnotesize 10/{\bf 0}} & {\footnotesize 8/{\bf 2}} &{\footnotesize 24/{\bf 16}} \\ \hline \end{tabular} \caption{\label{table:maxburst}Maximum size of burst (i.e. max number of consecutive packets lost) on a single path (in regular font) vs. RAIL (in bold font). The average burst size for RAIL is 1 in most cases.} \end{center} \vspace{-15pt} \end{table} Second, we looked at the {\em burstiness of loss}, which is an important aspect because it can lead to loss of entire phonemes, thus degrading speech intelligibility. To control burstiness, we controlled the ``correlation'' parameter in Netem.\footnote{The Netem correlation coefficient does increase the loss burstiness, but does not directly translate to burstiness parameters, such as burst duration. An artifact of its implementation \cite{netem} is that increasing correlation decreases the measured loss rate (for loss rates $<50\%$).
However, this does not matter: our point is to compare RAIL to a single path under the same loss conditions.} We tried all combinations of $(loss~rate,~loss~correlation)$ and measured the following metrics of bursty loss: (i) number of packets lost in bursts, (ii) number of bursts, (iii) average burst size, and (iv) maximum burst size. In Tables \ref{table:inburst}, \ref{table:numburst} and \ref{table:maxburst}, we show the numbers measured over one link in regular font, and the numbers measured over RAIL in bold. Clearly, all metrics are significantly reduced with RAIL compared to the single-path case, which demonstrates that RAIL reduces loss burstiness. This good property is intuitively expected, as it is less likely that both paths experience a burst at the same time. Third, we experimented with {\em delay jitter}. We considered two paths with the same mean delay (100ms), and used Netem to generate delay according to a paretonormal distribution, with the same statistics on both paths. We fixed the mean delay at 100ms for both paths, and experimented with the entire range of delay variability (standard deviation from 10ms to 100ms and delay correlation from 0\% to 100\%). \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.35]{./figs/voice/testbed/jitter-improvement2.eps}} \end{center} \vspace{-30pt} \caption{{\footnotesize Improvement in speech quality using RAIL vs. using a single path, considering the full range of two factors: (i) the delay variability of the underlying paths (captured here by the standard deviation of delay) and (ii) the playout at the receiver (captured here by the jitter allowed). Delay was configured in Netem to be paretonormal distributed, with mean=100ms and correlation=0.}} \label{fig:jitter-improvement} \end{figure} To begin with, we set the delay correlation to 0 and increased the standard deviation of delay. We observed that RAIL reduces the jitter experienced by the VoIP stream.
This results in fewer packets being late for playout and thus better speech quality. The exact improvement depends (i) on the delay variability of the underlying paths (captured here by the standard deviation of delay) and (ii) on the playout at the receiver (captured here by the jitter allowed at the playout). Fig.\ref{fig:jitter-improvement} shows the improvement in speech quality (in MOS) compared to a single path, for a range of these two parameters ($std~dev$ 20-80ms and jitter level acceptable at playout 20-100ms). One can make several observations. First, RAIL always helps (i.e. the benefit is $>0$); this is because RAIL presents the end-system with a better virtual path. Second, there is a maximum in every curve (each curve corresponds to a certain path delay variability): when the playout is intolerant to jitter, it drops most packets anyway; when the playout can absorb most of the jitter itself, the help of RAIL is not needed; therefore, RAIL provides most of its benefit in the middle, when it is needed to reduce the perceived jitter below the threshold acceptable for playout. Finally, the entire curve moves to the right and lower for paths with higher delay variability. In addition, we experimented with delay correlation (which results in several consecutive packets arriving late and getting dropped at the playout) and observed that RAIL decreased this correlation by multiplexing the two streams. Finally, we experimented with RAIL-ed VoIP and several non-RAILed TCP flows interfering with it. The idea was to have loss and delay caused by cross-traffic rather than artificially injected by Netem. RAIL brought improvements of the same order of magnitude as observed before. \subsubsection{Delay Padding} \begin{figure}[t!]
\begin{center} \centerline{\includegraphics[scale=0.35]{./figs/voice/padding/padding-100-20-50-20-80.eps}} \end{center} \vspace{-20pt} \caption{Padding decreases jitter for RAIL over two paths with different average delay ($100ms$ and $50ms$) and similar delay variability (e.g. $std~dev=20ms$ for both).} \label{fig:padding-diffavg} \end{figure} \begin{figure}[h] \begin{center} \centerline{\includegraphics[scale=0.35]{./figs/voice/padding/stdev-vs-dprop.eps}} \end{center} \vspace{-20pt} \caption{The larger the delay disparity between the two paths, the more padding is needed.} \label{fig:padding-delta-avg} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[scale=0.35]{./figs/voice/padding/padding-100-20-100-5-various.eps} \end{center} \vspace{-20pt} \caption{Padding decreases jitter for RAIL over paths with the same average delay ($100ms$) but different jitter ($std~dev=20ms,5ms$). The more padding, the less jitter.} \label{fig:padding-sameavg} \end{figure} The delay padding algorithm, described in section \ref{sec:padding-mechanism}, acts as a proxy playout at the receiving RAILedge: it artificially adds delay (``padding'') in order to create the illusion of constant one-way delay. In this section, we use Matlab simulations to demonstrate the effect of padding. Fig.\ref{fig:padding-diffavg} considers the case when the two paths differ in their average delay; this can be due to, e.g., a difference in propagation and/or transmission delay. Notice the difference between (b), RAIL without padding, and (c), RAIL with padding. Fig.\ref{fig:padding-delta-avg} shows that the larger the disparity between the two paths, the more padding is needed to smooth out the stream. Fig. \ref{fig:padding-sameavg} considers the case when two paths have the same average delay but differ significantly in delay jitter, e.g. due to different utilization. Fig. \ref{fig:padding-sameavg}(a) plots the delay on the two paths on the same graph; Fig.
\ref{fig:padding-sameavg}(b) shows what RAIL does without padding; Fig. \ref{fig:padding-sameavg}(c) and (d) show that the stream can be smoothed out by adding more padding. The appropriate amount of padding should be chosen so as to maximize the overall MOS, as discussed in section \ref{sec:voip-quality}. \subsection{\label{sec:tcp}RAIL improves TCP performance} In section \ref{sec:network-level}, we saw that RAIL statistically dominates the underlying paths in terms of network-level statistics. Therefore, performance metrics computed based on these statistics, such as the average throughput, should be improved. In section \ref{sec:analysis-tcp}, we analyze the throughput of long-lived TCP flows and show that this is indeed the case. However, there may be pathological cases, e.g. when reordering falsely triggers fast retransmit; this is what we study in section \ref{sec:reordering}, where we show that, for most practical cases, RAIL helps TCP as well. \subsubsection{\label{sec:analysis-tcp}Analysis of long-lived TCP-over-RAIL} {\em A simple formula.} Let us consider two paths with loss rates and round-trip times $(p_1, RTT_1)$ and $(p_2, RTT_2)$ respectively, and w.l.o.g. $RTT_1 \le RTT_2$. The simple rule of thumb from \cite{floyd} predicts that the long-term TCP throughput for each path is: $T_i=\frac{1.22}{RTT_i\sqrt{p_i}}, \text{for }i=1,2$. What is the long-term TCP throughput using RAIL over these two paths?
Following a reasoning similar to \cite{floyd}, we find that: \vspace{-10pt} \begin{equation} T=\frac{1.22}{E[RTT]\sqrt{p_1p_2}}, \text{ where:} \label{eq:tcp-rail-formula} \end{equation} \begin{equation} E[RTT]=RTT_1\frac{1-p_1}{1-p_1p_2}+RTT_2\frac{p_1(1-p_2)}{1-p_1p_2} \label{eq:tcp-E[RTT]} \end{equation} \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.35,angle=-90]{./figs/tcp/prioni-eps.eps}} \end{center} \vspace{-20pt} \caption{The simple steady-state model for TCP \cite{floyd}.} \label{fig:tcp-prioni} \vspace{-10pt} \end{figure} {\bf Proof.} Fig. \ref{fig:tcp-prioni} shows the simple steady-state model considered in \cite{floyd}. The network drops a packet when the congestion window increases to $W$ packets. The congestion window is cut in half ($W/2$), and then it increases by one packet per round-trip time until it reaches $W$ packets again; at that point, the network drops a packet again and the steady-state model continues as before. Let us look at a single congestion epoch. For this simple model, the number of packets sent during the congestion epoch is $\frac{W}{2}+(\frac{W}{2}+1)+\dots+(\frac{W}{2}+\frac{W}{2})=\frac{3W^2}{8}+\frac{3W}{4}$. For a packet to be lost, both copies sent over the two paths must be lost. Therefore, the loss rate is $p=p_1p_2=\frac{1}{number~of~packets}=\frac{1}{\frac{3W^2}{8}+\frac{3W}{4}} \simeq \frac{8}{3W^2}$ and $W\simeq \sqrt{\frac{8}{3p_1p_2}}$. The only difference from \cite{floyd} is that the round-trip time as perceived by TCP-over-RAIL is no longer constant, but depends on whether a packet is lost on any of the paths. Provided that the packet is received on at least one path, which has prob.
$(1-p_1p_2)$, we are still in the same congestion epoch and \vspace{-10pt} \begin{equation} RTT= \begin{cases} RTT_1,~w.p.~(1-p_1)\\ RTT_2,~w.p.~p_1(1-p_2) \end{cases} \end{equation} Therefore, the conditional expectation of RTT is given by Eq.(\ref{eq:tcp-E[RTT]}), and the TCP throughput over RAIL is on average: \begin{equation} \frac{(number~of~packets)}{ (\frac{W}{2}+1)\cdot E[RTT]} \simeq \frac{1.22}{E[RTT]\sqrt{p_1p_2}} \end{equation} Essentially, RAIL appears to the TCP flow as a virtual path with loss rate $p=p_1p_2$ and round-trip time $E[RTT]$. Notice that there are two factors to take into account in Eq.(\ref{eq:tcp-rail-formula}): a multiplication in loss ($p_1p_2$) and an averaging in delay ($E[RTT]$). The loss rate for RAIL is smaller than on either of the two links: $p<p_1$ and $p<p_2$. The same is not true for the delay, which is a weighted average: $RTT_1<E[RTT]<RTT_2$. {\em Implications.} Let us now use this simple formula to study the sensitivity of TCP-over-RAIL throughput to the characteristics of the underlying paths. {\bf Fact 1.} {\em TCP throughput is better over RAIL than over either of the two paths: $T>T_1$ and $T>T_2$.} {\bf Proof.} First, consider $RTT_1=RTT_2=RTT$. Then the RAIL link is equivalent to a single link with $p=p_1p_2$, which is better than either of the two by an order of magnitude. What happens when $RTT_1<RTT_2$? It is easy to see that RAIL is better than the slower path (2), because RAIL has both smaller loss and shorter RTT than the slow path (2): \vspace{-10pt} \begin{equation} \frac{T}{T_2}=\frac{1}{\sqrt{p_1}}\frac{RTT_2}{E[RTT]}>1 \cdot 1=1 \end{equation} Is RAIL better than the faster path (1) as well? RAIL is better in terms of loss but worse in terms of delay ($E[RTT]>RTT_1$). It turns out that the multiplicative decrease in loss dominates the averaging in delay.
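Eq.(\ref{eq:tcp-rail-formula}) and Fact 1 are easy to evaluate numerically; the sketch below (helper names are ours) compares RAIL against each underlying path for an example pair of paths:

```python
from math import sqrt

def t_single(p, rtt):
    # Rule-of-thumb long-lived TCP throughput (packets/s) [floyd].
    return 1.22 / (rtt * sqrt(p))

def t_rail(p1, rtt1, p2, rtt2):
    # RAIL looks like a virtual path with loss p1*p2 and RTT E[RTT].
    e_rtt = (rtt1 * (1 - p1) + rtt2 * p1 * (1 - p2)) / (1 - p1 * p2)
    return 1.22 / (e_rtt * sqrt(p1 * p2))

# Example: 1% loss on each path, 10 ms and 100 ms round-trip times.
p1 = p2 = 0.01
rtt1, rtt2 = 0.010, 0.100
t = t_rail(p1, rtt1, p2, rtt2)

# Fact 1: RAIL beats both the fast and the slow path.
assert t > t_single(p1, rtt1) and t > t_single(p2, rtt2)
print(t / t_single(p1, rtt1))   # roughly 9x better than the fast path
```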
\begin{figure} \begin{center} \centerline{\includegraphics[scale=0.45]{./figs/tcp/tcpbenefit-rail-vs-fast.eps}} \end{center} \vspace{-20pt} \caption{Consider two paths with the same $p$ and different $RTT_1<RTT_2$, for the full range of $p$'s and $RTT$'s. The figure shows the ratio of TCP throughput over RAIL vs. TCP over the fast link. RAIL performs roughly 10 times better for the range of practical interest.} \label{fig:tcp-rail-vs-fastlink} \end{figure} In Fig.\ref{fig:tcp-rail-vs-fastlink}, we consider $p_1=p_2=p$, fix $RTT_1=10ms$, and consider the full range of $p$ and $RTT_2$. We plot the ratio between the throughput for TCP-over-RAIL vs. TCP-over-fast-link. \vspace{-10pt} \begin{equation} \begin{split} \frac{T}{T_1}=\frac{1}{\sqrt{p}}\frac{RTT_1}{E[RTT]}~\text{where }~~~~~~~~~~\\ \frac{1}{\sqrt{p}}>1~~~\text{and}~ \frac{RTT_1}{E[RTT]}=\frac{1+p}{1+p\frac{RTT_2}{RTT_1}} \le 1 \end{split} \end{equation} We see that TCP does 4--10 times better over RAIL than over the fast link (1) for all practical cases: loss rates up to 10\% and differences in delay up to 100ms. Indeed, the difference in $RTT$ cannot exceed some tens of milliseconds (e.g., due to propagation or transmission delays), and $p$ is typically small, except for short time periods. {\em How many paths?} For $n$ paths with characteristics $(p_i, RTT_i)$, $i=1,\dots,n$, where $RTT_1<RTT_2<\dots<RTT_n$, and following similar derivations, we find that: \begin{equation} \begin{split} T(n)=\frac{1.22}{E[RTT]\sqrt{p_1p_2\cdots p_n}}, \text{where:}~~~~~~~~~~~~\\ E[RTT]=\frac{[RTT_1+RTT_2\,p+\dots+RTT_n\,p^{n-1}](1-p)}{1-p^n}\\ \end{split} \label{eq:tcp-kpaths} \end{equation} (here $E[RTT]$ is written for the equal-loss case $p_1=\dots=p_n=p$; note that large RTTs enter with geometrically discounted weights). The multiplicative factor $\sqrt{p_1\cdots p_n}$ dominates the averaging in $E[RTT]$. For $p_1=p_2=\dots=p_n=p$, $T(n)$ is an increasing function of $n$: adding more paths of similar loss rate improves throughput, although in practice most of the benefit comes from the first two paths.
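Eq.(\ref{eq:tcp-kpaths}) can likewise be evaluated for a growing number of paths. A small Python sketch (the helper name is ours; written for the equal-loss case $p_i=p$):

```python
# Sketch of the n-path extension for equal loss rates p_1 = ... = p_n = p.
# RTTs are in seconds, sorted so that rtts[0] <= ... <= rtts[-1].

def rail_throughput_n(p, rtts):
    """TCP-over-RAIL throughput over n paths with common loss rate p."""
    n = len(rtts)
    # E[RTT] = [RTT1 + RTT2*p + ... + RTTn*p^(n-1)] * (1-p) / (1 - p^n)
    e_rtt = sum(r * p**i for i, r in enumerate(rtts)) * (1 - p) / (1 - p**n)
    # Loss multiplies across paths: sqrt(p_1 ... p_n) = p^(n/2).
    return 1.22 / (e_rtt * p ** (n / 2))

rtts = [0.010, 0.030, 0.050]
for n in (1, 2, 3):
    # Throughput grows with each added path in this idealized model.
    print(n, rail_throughput_n(0.01, rtts[:n]))
```

For $n=1$ the expression reduces to the single-path formula $1.22/(RTT_1\sqrt{p})$, which is a quick sanity check on the implementation.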
\subsubsection{\label{sec:reordering}Testbed Experiments on Reordering and TCP} In Section \ref{sec:reordering-general}, we saw that RAIL does not introduce reordering if both paths are well behaved, but may convert loss on the fast path into late packets, and in the extreme even into out-of-order packets, under some conditions ($dt\le d_2-d_1$). It is well known that reordering may have an adverse effect on TCP, as it falsely triggers fast retransmit. In this section, we use testbed experiments to show that, even in cases where RAIL converts loss to reordering, this is actually beneficial for TCP. \begin{figure} \begin{center} \centerline{\includegraphics[scale=0.35,angle=-90]{./figs/general/testbedSingle-eps.eps}} \end{center} \vspace{-10pt} \caption{Simplified experimental setup for testing the effect of reordering vs. loss on TCP.} \label{fig:testbed-Single} \vspace{-10pt} \end{figure} \begin{figure} \begin{center} {\includegraphics[scale=0.4]{figs/tcp/reorderVSloss.eps}} \end{center} \vspace{-10pt} \caption{\label{fig:loss-vs-reorder} Testbed experiments comparing the effect of loss vs. reordering on the throughput of a single TCP flow.} \vspace{-10pt} \end{figure} Recall that RAIL does not cause reordering; it only translates loss into reordering. Therefore, the fair question to ask is not how ``TCP does with reordering vs. without reordering'' but instead how ``TCP does with $x\%$ of packets arriving out-of-order vs. $x\%$ of packets being lost''. {\em{\bf Fact 3-revisited.} Better late than never (and the earlier the better)}. We used the simplified testbed shown in Fig.\ref{fig:testbed-Single} to inject a controlled amount of loss and reordering, using Netem, on a single TCP flow. Fig.\ref{fig:loss-vs-reorder} shows the results of the comparison. First, we introduced x\% of loss, ranging from 0 to 20\%; the TCP throughput is shown as a dashed line. Then we introduced x\% of reordering for a range of reordering gaps/delays, i.e.
the packets arrive 10--90ms later than they should; the resulting TCP throughput is shown as a separate bold line for each delay value. We see that TCP performs much better with reordering than with loss; therefore, it is indeed better to receive packets ``late than never''. Not surprisingly, the smaller the delay in delivery, the better the performance. Furthermore, TCP today has several standard options to deal with reordering, including SACK, DSACK, and timestamps. We found that turning SACK on further improved the performance of TCP under reordering in Fig.\ref{fig:loss-vs-reorder}. In summary, we expect RAIL to help TCP in all practical cases, i.e., for small loss rates and delay differences between the paths on the order of 10--50ms. As an extreme measure, one can use the delay padding mechanism not only for voice, but also as a TCP ordering buffer, to completely eliminate reordering. \section{Future Directions} \label{sec:discussion} We envision a RAIL-network architecture, where RAILedges are control points that use path redundancy, route control, and application-specific mechanisms to improve WAN performance. A first extension has to do with topology. So far, we considered two RAILedge devices connecting two remote sites via multiple redundant links. We envision that this can be generalized to a virtual multipoint network, or {\em RAILnet}, where multiple edge networks are reliably interconnected, as shown in Fig.\ref{fig:railnet}. Each participating edge network is located behind its own RAILedge, and each RAILedge pair communicates over at least two Internet links. The RAILnet interface represents the local point of attachment to a RAILnet and should present itself as a regular interface to a multi-access subnet.
\begin{figure} \centerline{{\includegraphics[scale=0.28,angle=-90]{figs/general/hugh1-eps.eps}}} \vspace{-10pt} \caption{RAILnet: a virtual multipoint reliable network} \label{fig:railnet} \vspace{-10pt} \end{figure} Second, we are interested in combining the proactive replication of RAIL with some kind of route control, in particular (i) selection of the right subset of physical paths within the same RAIL and (ii) dynamic switching among them. In this paper, we focused on the first part (i.e., combinations of paths with various characteristics, different numbers of paths, and paths that are similar to or different from each other) and tried to give recommendations on how to statically select among them. The second aspect is dynamic switching among sets of paths. We expect this to be less constrained than single-path switching, because (i) redundant transmission is robust to short-lived problems and (ii) physical paths tend to have consistent behavior over long time scales. Therefore, RAIL should relieve much of the urgency in dynamic path-switching decisions. One could further enhance the functionality of RAILedge. So far, we focused on replication of packets over multiple paths. Several other functions can be naturally added to an edge network device, including monitoring and path switching, compression, quality-of-service mechanisms, and protocol-specific acceleration. For example, one could decide to RAIL part of the traffic (e.g., VoIP or critical applications) and use striping for the remaining traffic; this could correspond to RAIL-0 in the RAID taxonomy \cite{raid}. There are some additional interesting questions we are currently pursuing as direct extensions of this work. First, we continue to study TCP over RAIL, using more accurate TCP models and considering also short-lived connections; we are also working on a modification of our delay-padding algorithm to remove reordering at the receiving RAILedge.
Second, we are investigating the effect of RAIL on the rest of the traffic. E.g., when there is significant disparity in bandwidth, we expect RAIL-ed TCP to cause congestion on the limited-bandwidth path. Furthermore, what is the interaction between competing RAILs? Finally, it would be interesting to explore the benefit of adding additional RAILedges in the middle of the network. The RAILnet architecture can be incrementally deployed by gradually adding more RAILedges. If widely deployed, it has the potential to fundamentally change the dynamics and economics of wide-area networks. \vspace{-20pt} \section{Conclusion} \label{sec:conclusion} \vspace{-10pt} We proposed and evaluated the Redundant Array of Internet Links (RAIL), a mechanism for improving packet delivery by proactively replicating packets over multiple Internet links. We showed that RAIL significantly improves performance in terms of network- as well as application-level metrics. We studied different combinations of underlying paths: we found that most of the benefit comes from two carefully managed paths; we also designed a delay-padding algorithm to hide significant disparities among paths. RAIL can be gracefully combined with, and greatly enhance, other techniques currently used in overlay networks, such as dynamic path switching. Ultimately, it has the potential to greatly affect the dynamics and economics of wide-area networks. \vspace{-20pt} {\footnotesize
\section{Introduction}\label{S:Introduction} Let $G$ be a connected reductive algebraic group. A normal algebraic variety $X$ is called a spherical $G$-variety if there exists an algebraic action $G\times X\rightarrow X$ such that the restriction of the action to a Borel subgroup $B$ of $G$ has an open orbit in $X$. In this case, we say that the action is spherical. \vspace{.25cm} Let $P_1,\dots, P_k \subset G$ be a list of parabolic subgroups containing the same Borel subgroup $B$ and let $X$ denote the product variety $X=G/P_1\times \cdots \times G/P_k$. Then $X$ is a smooth, hence normal, $G$-variety via the diagonal action. The study of functions on an affine cone over $X$ is important for understanding the decompositions of tensor products of representations of $G$, see~\cite{PopovVinberg,Panyushev99}. In particular, determining when the diagonal action of $G$ on $X$ is spherical is important for understanding the multiplicity-free representations of $G$. In his groundbreaking article~\cite{Littelmann}, Littelmann initiated the classification problem and gave a list of all possible pairs of maximal parabolic subgroups $P_1,P_2$ such that $G/P_1\times G/P_2$ is a spherical $G$-variety. In~\cite{MWZ1}, for the group $G=SL(n)$, and in~\cite{MWZ2}, for $G=Sp(2n)$, Magyar, Weyman, and Zelevinsky classified the parabolic subgroups $P_1,\dots, P_k$ such that the product $X=G/P_1\times \cdots \times G/P_k$ is a spherical $G$-variety. According to~\cite{MWZ1}, if $X$ is a spherical $G$-variety, then the number of factors is at most 3, and $k=3$ occurs only in special cases. Therefore, the gist of the problem lies in the case $k=2$. This case was settled in full detail by Stembridge. In~\cite{Stembridge}, for a semisimple complex algebraic group $G$, Stembridge listed all pairs of parabolic subgroups $(P_1,P_2)$ such that $G/P_1\times G/P_2$ is a spherical $G$-variety. \vspace{.5cm} For motivational purposes, we will mention some recent related developments.
Let $K$ be a connected reductive subgroup of $G$ and let $P$ be a parabolic subgroup of $G$. One of the major open problems in the classification of spherical actions is the following: What are the possible triplets $(G,K,P)$ such that $G/P$ is a spherical $K$-variety? When $K$ is a Levi subgroup of a parabolic subgroup $Q$, this question is equivalent to asking when $G/Q\times G/P$ is a spherical $G$-variety via the diagonal action; it has a known solution, as we mentioned earlier. For an explanation of this equivalence, see~\cite[Lemma 5.4]{AP}. In~\cite{AP}, Avdeev and Petukhov gave a complete answer to the above question in the case $G=SL(n)$. If we assume that $K$ is a symmetric subgroup of $G$, then our initial question is equivalent to asking when $G/P\times K/B_K$ has an open $K$-orbit via its diagonal action. Here, $B_K$ is a Borel subgroup of $K$. In this case, the answer is recorded in~\cite{Heetal}. See also the related work of Pruijssen~\cite{VanPruijssen}. Finally, let us mention another extreme situation where the answer is known: $G$ is an exceptional simple group, $P$ is a maximal parabolic subgroup, and $K$ is a maximal reductive subgroup of $G$, see~\cite{Niemann}. \vspace{.5cm} We go back to the products of flag varieties and let $P$ and $Q$ be two parabolic subgroups of $G$. From now on we will call a product variety of the form $G/P\times G/Q$ a double flag variety. If the diagonal action of $G$ on a double flag variety $X=G/P\times G/Q$ is spherical, then we will call $X$ a spherical double flag variety for $G$. As shown by Littelmann in his previously mentioned article, the problem of deciding whether a double flag variety is spherical is closely related to the study of the invariants of a maximal unipotent subgroup in the coordinate ring of an affine cone over $X$. In turn, this study is closely related to the combinatorics of the $G$-orbits in $X$.
In this regard, our goal in this note is to prove the following result on the poset of inclusion relationships between the $G$-orbit closures in a spherical double flag variety. \begin{Theorem}\label{T:main} Let $G$ denote the special linear group $SL(n+1)$. If $X$ is a spherical double flag variety for $G$, then the poset of $G$-orbit closures in $X$ is a lattice. \end{Theorem} In fact, we have a precise description of the possible lattices in Theorem~\ref{T:main}. It turns out that the Hasse diagram of such a lattice looks like a ``ladder'', or the lattice is a chain, see Theorem~\ref{T:Type A G orbits}. \vspace{.5cm} The structure of our paper is as follows. In the next section we set up our notation and review some basic facts about the double cosets of parabolic subgroups. In Subsection~\ref{SS:2}, we show that the inclusion poset of $G$-orbit closures in $G/P\times G/Q$ is isomorphic to the inclusion poset of $P$-orbit closures in $G/Q$. In Subsection~\ref{SS:4} we review the concept of tight Bruhat order due to Stembridge. We use the information gained from this subsection in our analysis of the cases considered in the subsequent Section~\ref{S:main}, where we prove our main result. \section{Preliminaries}\label{S:Preliminaries} \subsection{}\label{SS:1} For simplicity, let us assume that $G$ is a semisimple simply-connected complex algebraic group and let $B$ be a Borel subgroup of $G$. Let $T$ be a maximal torus of $G$ that is contained in $B$. The unipotent radical of $B$ is denoted by $U$, so that $B=UT$. We denote by $\Phi$ the root system corresponding to the pair $(G,T)$ and we denote by $\Delta$ the subset of simple roots corresponding to $B$. A parabolic subgroup $P$ of $G$ is said to be standard with respect to $B$ if the inclusion $B\subseteq P$ holds true. In this case, $P$ is uniquely determined by a subset $I \subseteq \Delta$.
\vspace{.25cm} The Weyl group of $(G,T)$, that is $N_G(T)/T$, is denoted by $W$ and we use the letter $R$ to denote the set of simple reflections $s_\alpha \in W$, where $\alpha \in \Delta$. We will allow ourselves to abuse notation by using the letter $I$ (and $J$) to denote either a subset of $\Delta$ or the corresponding subset of simple reflections in $R$. The {\em length} of an element $w\in W$, denoted by $\ell(w)$, is the minimal number of Coxeter generators $s_i\in R$ that is needed for the equality $w = s_1\cdots s_k$ to hold true. In this case, the product $s_1\cdots s_k$ is called a reduced expression for $w$. Note that when $W$ is the symmetric group of permutations $S_{n+1}$, the length of a permutation $w=w_1\dots w_{n+1}\in S_{n+1}$ is equal to the number of pairs $(i,j)$ with $1\leq i < j \leq n+1$ and $w_i > w_j$; such pairs are called the inversions of $w$. \vspace{.25cm} The Bruhat-Chevalley order on $W$ can be defined by declaring $v\leq w$ $(w,v\in W)$ if a reduced expression of $v$ is obtained from a reduced expression $s_1\cdots s_k$ of $w$ by deleting some of the simple reflections $s_i$. \vspace{.25cm} Let $X(T):=\mt{Hom}(T,\mathbb{G}_m)$ denote the group of characters of the maximal torus $T$. Let $\{ \omega_1,\dots, \omega_r\}$ denote the set of fundamental weights corresponding to the set of simple roots $\Delta =\{\alpha_1,\dots, \alpha_r \}$. By our assumptions on $G$, we have $\omega_i \in X(T)$ for every $i\in \{1,\dots, r\}$. The Weyl group $W$ acts on the weight lattice, that is, on $X(T)$. Let $\mathbf{E}$ denote the real vector space that is spanned by the fundamental weights, so that $\mathbf{E} = X(T)\otimes_\mathbb{Z} \mathbb{R}$. The action of $W$ on the weight lattice extends to give a linear action on $\mathbf{E}$. A vector $\theta$ from $\mathbf{E}$ is called a dominant vector if it is of the form $\theta = a_1 \omega_1 +\cdots + a_r \omega_r$, where the $a_i$'s are nonnegative real numbers. Let $W(\omega_i)$ ($i\in \{1,\dots, r\}$) denote the isotropy group of $\omega_i$ in $W$.
Then $W(\omega_i)$ ($i\in \{1,\dots, r\}$) is a parabolic subgroup of $W$, and furthermore, the subgroup of $G$ that is generated by $B$ and $W(\omega_i)$ is a maximal parabolic subgroup. \subsection{}\label{SS:2} Let $G$ act on two irreducible varieties $X_1$ and $X_2$, and let $x_i \in X_i$, $i=1,2$, be two points in general position. If $G_i \subset G$ denotes the stabilizer subgroup of $x_i$ in $G$, then $\mt{Stab}_G(x_1\times x_2)$ coincides with the stabilizer in $G_1$ of a point in general position from $G/G_2$ (or, equivalently, with the stabilizer in $G_2$ of a point in general position from $G/G_1$), see~\cite{Panyushev99}. \vspace{.25cm} Let $P_1$ and $P_2$ be two parabolic subgroups of $G$. By applying the idea from the previous paragraph to the double flag variety $X:= G/P_1\times G/P_2$, where $B\subset P_1\cap P_2$, we notice that the study of $G$-orbits in $X$ reduces to the study of $P_1$-orbits in the flag variety $G/P_2$. But more is true; this correspondence between $G$-orbits and $P_1$-orbits respects the inclusions of their closures in the Zariski topology. \begin{Lemma}\label{L:poset isom} The poset of $G$-orbit closures in $X$ is isomorphic to the poset of $P_2$-orbit closures in $G/P_1$. \end{Lemma} \begin{proof} Let $X$ denote $G/P_1 \times G/P_2$. The canonical projection $\pi : X\rightarrow G/P_2$ is $G$-equivariant and it turns $X$ into a homogeneous fiber bundle over $G/P_2$ with fiber $G/P_1$ at every point $gP_2$ ($g\in G$) of the base $G/P_2$. To distinguish it from the other fibers, let us denote by $Y$ the fiber $G/P_1$ at the `origin' $eP_2$ of $G/P_2$. Then any $G$-orbit in $X$ meets $Y$. Note also that if $g\cdot y \in Y$ for some $g\in G$ and $y\in Y$, then $g\in P_2$. There are two useful consequences of this observation: 1) $Y$ is a $P_2$-variety; 2) any $G$-orbit $O$ meeting $Y$ actually meets $Y$ along a $P_2$-orbit.
Therefore, the map $O\mapsto O\cap Y$ gives a bijection between the set of all $G$-orbits in $X$ and the set of all $P_2$-orbits in $Y$. Since $G$ and $P_2$ are connected algebraic groups, the Zariski closures of their orbits are irreducible. Furthermore, the boundaries of the orbit closures are unions of orbits of smaller dimensions. At the same time, $Y$ is closed in $X$; therefore, the extension of the orbit-correspondence map, \begin{align} \overline{O} \longmapsto \overline{O}\cap Y, \end{align} gives a poset isomorphism between the inclusion orders on the Zariski closures of $G$-orbits in $X$ and the Zariski closures of $P_2$-orbits in $Y$. This finishes the proof of our lemma. \end{proof} \begin{Remark} By looking at the $(P_1,P_2)$-double cosets in $G$, we see that, as far as the combinatorics of orbit closures is concerned, there is no real difference between the study of $P_1$-orbits in $G/P_2$ and the study of $P_2$-orbits in $G/P_1$. \end{Remark} \subsection{}\label{SS:3} We preserve our assumptions and notation from the previous subsections; $P_1$ and $P_2$ are two standard parabolic subgroups with respect to $B$. If $I$ and $J$ are the subsets of $R$ (or, of $\Delta$) that determine $P_1$ and $P_2$, respectively, then we will write $P_I$ (resp. $P_J$) in place of $P_1$ (resp. $P_2$). The Weyl groups of $P_I$ and $P_J$ will be denoted by $W_I$ and $W_J$, respectively. In this subsection, we will present some well-known facts regarding the set of $(W_I,W_J)$-double cosets in $W$, denoted by $W_I \backslash W / W_J$. \vspace{.25cm} First of all, the set $W_I \backslash W / W_J$ is in a bijection with the set of $P_I$-orbits in $G/P_J$, see~\cite[Section 21.16]{Borel}. For $w\in W$, we denote by $[w]$ the double coset $W_I w W_J$. Let $$ \pi : W \rightarrow W_I\backslash W / W_J $$ denote the canonical projection onto the set of $(W_I,W_J)$-double cosets.
It turns out that the preimage in $W$ of every double coset in $W_I\backslash W / W_J$ is an interval with respect to the Bruhat-Chevalley order, hence it has a unique maximal and a unique minimal element, see~\cite{Curtis85}. Moreover, if $[w], [w']\in W_I\backslash W / W_J$ are two double cosets with minimal elements $u_1$ and $u_2$ and maximal elements $w_1$ and $w_2$, respectively, then $u_1 \leq u_2$ if and only if $w_1 \leq w_2$, see~\cite{HohlwegSkandera}. It follows that $W_I\backslash W / W_J$ has a natural combinatorial partial ordering defined by $$ [w] \leq [w'] \iff u_1 \leq u_2 \iff w_1 \leq w_2, $$ where $[w],[w'] \in W_I\backslash W / W_J$ and $u_1,w_1$ and $u_2,w_2$ are the minimal and maximal elements of $[w]$ and $[w']$, respectively. This partial order is geometric in the following sense: if $O_1$ and $O_2$ are two $P_I$-orbits in $G/P_J$ with corresponding double cosets $[v_1]$ and $[v_2]$, respectively, then $O_1 \subseteq \overline{O_2}$ if and only if $[v_1] \leq [v_2]$. The bar over $O_2$ stands for the Zariski closure in $G/P_J$. \vspace{.25cm} Now let $[w]$ be a double coset represented by an element $w\in W$ such that $\ell(w) \leq \ell(v)$ for every $v\in [w]$. It turns out that the set of all such minimal length double coset representatives is given by ${}^{I}{W} \cap W^J$, where $^{I}{W}$ stands for the set of minimal length coset representatives for $W_I \backslash W$, and $W^J$ for the set of minimal length coset representatives for $W/W_J$. We denote $^{I}{W} \cap W^J$ by $X_{I,J}^-$. Set $H= I \cap w J w^{-1}$. Then $ uw \in W^J$ for $u\in W_I$ if and only if $u$ is a minimal length coset representative for $W_I/W_H$. In particular, every element of $W_I w W_J$ has a unique expression of the form $uwv$, where $u\in W_I$ is a minimal length coset representative of $W_I/W_H$, $v\in W_J$, and $\ell(uwv) = \ell(u)+\ell(w)+\ell(v)$. \vspace{.25cm} Another characterization of the sets $X_{I,J}^-$ is as follows. For $w\in W$, the {\em right ascent set} is defined as \begin{align*} \mt{Asc}_R(w) = \{ s\in R :\ \ell(ws) > \ell(w) \}.
\end{align*} The {\em right descent set}, $\mt{Des}_R(w)$, is the complement $R-\mt{Asc}_R(w)$. Similarly, the {\em left ascent set} of $w$ is \begin{align*} \mt{Asc}_L(w) = \{ s\in R :\ \ell(sw) > \ell(w) \}\ \text{ ($=\mt{Asc}_R(w^{-1})$).} \end{align*} Then \begin{align}\label{A:Xminus} X_{I,J}^- &= \{ w\in W:\ I\subseteq \mt{Asc}_L(w)\ \text{ and } J\subseteq \mt{Asc}_R(w) \}\\ &= \{ w\in W:\ I^c\supseteq \mt{Des}_R(w^{-1})\ \text{ and } J^c\supseteq \mt{Des}_R(w) \} \end{align} \vspace{.25cm} For our purposes we need the distinguished set of maximal length representatives of the double cosets. It is given by \begin{align}\label{A:Xplus} X_{I,J}^+ &= \{ w\in W:\ I\subseteq \mt{Des}_R(w^{-1})\ \text{ and } J\subseteq \mt{Des}_R(w) \}\\ &= \{ w\in W:\ I^c\supseteq \mt{Asc}_R(w^{-1})\ \text{ and } J^c\supseteq \mt{Asc}_R(w) \} \end{align} For a proof of this characterization of $X_{I,J}^+$, see~\cite[Theorem 1.2(i)]{Curtis85}. \begin{Remark}\label{R:opposite} The Bruhat-Chevalley orders on $X_{I,J}^-$ and $X_{I,J}^+$ are isomorphic. \end{Remark} \subsection{}\label{SS:4} We mentioned in the introductory section that Littelmann classified the pairs of parabolic subgroups $(P_I,P_J)$ corresponding to fundamental dominant weights such that the diagonal $G$-action on $G/P_I\times G/P_J$ is spherical. Said differently, we know all pairs $(I,J)$ of subsets of $R$ such that \begin{itemize} \item $|I| = |J|= |R|-1$, and \item $G/P_I\times G/P_J$ is a spherical double flag variety for $G$. \end{itemize} In particular, under the maximality assumption on the subsets $I$ and $J$, the poset of $G$-orbit closures is a chain, see~\cite[Proposition 3.2]{Littelmann}. In the light of our Lemma~\ref{L:poset isom}, this is equivalent to the statement that, with respect to the Bruhat-Chevalley order, the set $X_{I,J}^+$ is a chain.
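For small symmetric groups, these descent-set characterizations can be checked directly by machine. The following Python sketch (our own code; simple reflections are indexed $1,\dots,n$ as in the text) computes $X_{I,J}^-$ and $X_{I,J}^+$ in $S_4$ for a sample pair $(I,J)$ and verifies that they have the same cardinality, namely the number of double cosets:

```python
# Computational check of the descent-set characterizations of the minimal
# (X^-) and maximal (X^+) double coset representatives, in W = S_4.

from itertools import permutations

n = 3                      # W = S_{n+1} = S_4, R = {1, 2, 3}
W = list(permutations(range(1, n + 2)))

def inverse(w):
    """Inverse permutation in one-line notation."""
    v = [0] * len(w)
    for pos, val in enumerate(w):
        v[val - 1] = pos + 1
    return tuple(v)

def des_R(w):
    """Right descent set: those s_i with w_i > w_{i+1}."""
    return {i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1]}

R = set(range(1, n + 1))
I, J = {1, 2}, {1, 3}      # so I^c = {3} and J^c = {2}

# X^- : Des_R(w^{-1}) contained in I^c and Des_R(w) contained in J^c.
X_minus = [w for w in W
           if des_R(inverse(w)) <= R - I and des_R(w) <= R - J]
# X^+ : I contained in Des_R(w^{-1}) and J contained in Des_R(w).
X_plus = [w for w in W
          if I <= des_R(inverse(w)) and J <= des_R(w)]

print(len(X_minus), len(X_plus))   # one representative per double coset each
```

For this choice of $(I,J)$ there are exactly two double cosets; the identity lies in $X_{I,J}^-$ and the longest permutation lies in $X_{I,J}^+$, as expected.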
\vspace{.5cm} We mention also that the classification of Littelmann was extended by Stembridge to cover all pairs of subsets $(I,J)$ of $R$ such that $G/P_I\times G/P_J$ is a spherical double flag variety for $G$. See Corollaries 1.3.A -- 1.3.D, 1.3.E6, 1.3.E7, and 1.3.\{E8,F4,G2\} in~\cite{Stembridge}. \begin{Remark}\label{R:in order} \begin{enumerate} \item We call a spherical double flag variety $G/P_I\times G/P_J$ trivial if one of the factors is isomorphic to a point, that is, $P_I=G$ or $P_J=G$. In the cases of E8, F4, and G2, all of the spherical double flag varieties are trivial. \item In the cases of A--D, E6, and E7, if $G/P_I\times G/P_J$ is a spherical double flag variety for $G$, then at least one of the subsets $I$ and $J$ is maximal, that is to say, of cardinality $|R|-1$. Without loss of generality we always choose $I$ to be the maximal one. \end{enumerate} \end{Remark} \subsection{}\label{SS:5} In this subsection we will review the useful concept of the ``tight Bruhat order.'' We maintain our notation from the previous subsections. \vspace{.5cm} One way to define the Bruhat-Chevalley order on $W$ is to use the reflection representation of $W$ as a group of isometries of $\mathbf{E}$. Let $\langle\ ,\rangle$ denote the $W$-invariant inner product on $\mathbf{E}$, and let $\theta \in \mathbf{E}$ be a vector such that $\langle \theta, \beta \rangle \geq 0$ for all $\beta \in \Phi^+$. Such a vector is called dominant. It is indeed dominant in the sense of Subsection~\ref{SS:1}. \vspace{.5cm} It is well known that the stabilizer of a dominant vector is a parabolic subgroup $W_J\subset W$, where $J=\{s_\alpha \in R:\ \langle \theta ,\alpha \rangle=0\}$. Thus, as a set, the minimal length coset representatives $W^J \subset W$ of the quotient $W/W_J$ can be identified with the orbit $W \theta$. Following Stembridge, we are going to call the orbit map $w\mapsto w\cdot \theta$ the evaluation.
\vspace{.5cm} A proof of the following result can be found in~\cite{Stembridge02}. \begin{Proposition}\label{P:BC1} Let $\theta \in \mathbf{E}$ be a dominant vector with stabilizer $W_J$. The evaluation map induces a poset isomorphism between the Bruhat-Chevalley order on $W^J$ and the orbit $W\theta$ with partial order $\leq_B$ defined by the transitive closure of the following relations: $$ \mu \leq_B s_\beta(\mu) \ \text{ for all $\beta \in \Phi^+$ such that $\langle \mu ,\beta \rangle > 0$.} $$ \end{Proposition} \vspace{.5cm} Let $I$ be a subset of $R$ and let $\Phi_I \subset \Phi$ denote the root subsystem corresponding to the parabolic subgroup $W_I$. We denote by $\Phi_I^+$ the intersection $\Phi^+\cap \Phi_I$. If $\theta$ is a dominant vector and its stabilizer subgroup is $W_J$ with $J\subset R$, then we define \begin{align}\label{A:double quotient} (W\theta)_I:= \{\mu \in W\theta :\ \langle \mu ,\beta \rangle \geq 0 \ \text{ for all $\beta \in \Phi^+_I$} \}. \end{align} \vspace{.5cm} A proof of the following result can be found in~\cite[Proposition 1.5]{Stembridge05}. \begin{Proposition}\label{P:BC2} Let $I,J\subset R$ be two sets of Coxeter generators for $W$ and let $\theta \in \mathbf{E}$ be a dominant vector with stabilizer $W_J$. Then the evaluation map induces a poset isomorphism between the (restriction of the) Bruhat-Chevalley order on $X_{I,J}^-$ and $(W\theta)_I$ with partial order defined by the transitive closure of the relations $$ \mu \leq_B s_\beta(\mu) \ \text{ for all $\beta \in \Phi^+$ such that $s_\beta (\mu)\in (W\theta)_I$ and $\langle \mu ,\beta \rangle > 0$}. $$ \end{Proposition} \vspace{.5cm} Now we come to the definition of a critical notion for our proof. There is a natural partial ordering on $\mathbf{E}$ defined by \begin{align}\label{A:natural} \nu \preceq \mu \iff \mu-\nu \in \mathbb{R}^+ \Phi^+.
\end{align} It turns out that, when the interpretation of the Bruhat-Chevalley ordering given in Proposition~\ref{P:BC1} is used, there is a natural order-reversing implication: \begin{align}\label{A:implication} \mu \leq_B \nu \implies \nu \preceq \mu. \end{align} If the converse implication also holds, then the poset $W\theta$ is called tight. More precisely, a subposet $(M,\leq_B)$ of the Bruhat-Chevalley order on $(W\theta,\leq_B)$ is called tight if $$ \mu \leq_B \nu \iff \nu \preceq \mu $$ for all $\nu,\mu$ in $M\subset \mathbf{E}$. \vspace{.5cm} In the light of part 2 of our Remark~\ref{R:in order}, we assume that $I\subset R$ is a maximal subset of the form $I= R-\{s\}$ for some $s\in R$. Also, we assume that there exists a dominant $\theta \in \mathbf{E}$ such that $W_J$ is its stabilizer subgroup. Now, by \cite[Theorem 2.3]{Stembridge05}, we see that if $W^J$ is tight, then $X_{I,J}^- = X_{R-\{s\},J}^-$ is a chain. The list of tight quotients is also given in~\cite{Stembridge05}; $(W^J,\leq_B)$ is tight if and only if $W$ is of rank at most 2, or $J=R$, or one of the following holds: \begin{itemize} \item $W\cong A_n$ and $J^c=\{ s_j \}$ $(1\leq j \leq n )$ or $J^c=\{ s_j,s_{j+1} \}$ $(1\leq j \leq n-1 )$, \item $W\cong B_n$ and $J^c=\{ s_1 \},\{ s_2 \},\{ s_n \}$, or $J^c=\{ s_1,s_2 \}$, \item $W\cong D_n$ and $J^c=\{ s_1 \},\{ s_2 \}$ or $J^c=\{ s_n \}$, \item $W\cong E_6$ and $J^c=\{ s_1 \}$ or $J^c=\{ s_6 \}$, \item $W\cong E_7$ and $J^c=\{ s_7 \}$, \item $W\cong F_4$ and $J^c=\{ s_1 \}$ or $J^c=\{ s_4 \}$, or \item $W\cong H_3$ and $J^c=\{ s_1 \}$ or $J^c=\{ s_3 \}$. \end{itemize} Therefore, in these cases (when $I$ is maximal and $J$ is as in this list) we know that $X_{I,J}^- = X_{R-\{s\},J}^-$ is a chain. We finish our preliminaries section by listing the remaining cases under the assumption that $I$ is of the form $R-\{s\}$ for some $s\in R$.
\begin{itemize} \item $W\cong A_n$ \begin{enumerate} \item $I^c\in \{ \{s_2\}, \{s_{n-1}\}\}$ and $J^c=\{s_p,s_q\}$ with $1<p <p+1<q<n$; \item $I^c \in \{ \{s_1\}, \{s_n\}\}$ and $|J^c| \geq 2$ (but $J^c\neq \{ s_j,s_{j+1} \}$ $(1\leq j \leq n-1 )$); \item $I^c \in \{ \{s_2\},\dots,\{s_{n-1}\}\}$, and $J^c=\{s_1,s_j\}$ or $J^c=\{s_j,s_n\}$ with $2< j < n-1$. \end{enumerate} \item $W\cong C_n$ \begin{enumerate} \item $I^c=\{s_n\}$ and $|J^c|=1$; \item $I^c=J^c=\{s_1\}$. \end{enumerate} \item $W\cong D_n$ ($n\geq 4$) \begin{enumerate} \item $I^c=\{s_n\}$ and $J^c= \{s_l,s_i \}$ with $1\leq i \leq n$ and $1\leq l \leq 2$; \item $I^c\in \{ \{s_1 \},\{s_2\}\}$, and $J^c \subsetneq \{s_1,s_2,s_n\}$ or $J^c\subseteq \{s_{n-1},s_n\}$ or $J^c=\{s_{n-2}\}$; \item ($n=4$ case only) $I^c= \{s_1\}$ and $J^c=\{s_2,s_3\}$, or $I^c=\{s_2\}$ and $J^c=\{s_1,s_3\}$. \end{enumerate} \item $W\cong E_6$ \begin{enumerate} \item $I^c\in \{ \{s_1\},\{s_6\}\}$ and $J^c=\{s_1,s_6\}$. \end{enumerate} \end{itemize} \section{Proof of the main result}\label{S:main} The Weyl group of $(SL(n+1),T)$, where $T$ is the maximal torus of diagonal matrices, is isomorphic to the symmetric group $S_{n+1}$. A set of Coxeter generators $R \subset S_{n+1}$ is given by the set $$R= \{ s_i = (i,\, i+1) :\ i=1,\dots, n\},$$ where $(i,\, i+1)$ is the simple transposition that interchanges $i$ and $i+1$ and leaves everything else fixed. To ease our notation, whenever it is clear from the context, we will denote the simple transposition $s_i$ by its index $i$. \vspace{.25cm} In the light of Lemma~\ref{L:poset isom}, Subsection~\ref{SS:3}, and Subsection~\ref{SS:5}, to prove our main result, Theorem~\ref{T:main}, it will suffice to analyze the Bruhat-Chevalley order on the set of distinguished double coset representatives, $X_{I,J}^+$.
We will do this analysis on a case-by-case basis for \begin{enumerate} \item $I^c\in \{ \{2\}, \{{n-1}\}\}$ and $J^c=\{p,q\}$ ($1<p <p+1<q<n$); \item $I^c \in \{ \{1\}, \{n\}\}$ and $|J^c| \geq 2$ (but $J^c\neq \{ j, {j+1} \}$ $(1\leq j \leq n-1 )$); \item $I^c \in \{ \{2\},\dots,\{n-1\}\}$, and $J^c=\{1,j\}$ or $J^c=\{j, n\}$ with $2< j < n-1$. \end{enumerate} \subsection{Case 1.} We start with a general remark which we will use in the sequel. \begin{Remark}\label{R:can induct} Let $w\in S_{n+1}$ be a permutation whose one-line notation ends with the decreasing string $k \ k-1 \dots 2 \ 1$. In this case, any element in the upper interval $[w,w_0]\subset S_{n+1}$ has the same ending. In other words, if $w'\in [w,w_0]$, then the last $k$ entries of $w'$ are exactly $k,k-1,\dots, 1$ in this order. Similarly, if $w$ begins with the decreasing string $n+1\ n \dots \ k$ for some $k\in \{1,\dots, n+1\}$, then any element in the upper interval $[w,w_0]\subset S_{n+1}$ has the same beginning. So, essentially, these elements form an upper interval of $S_{n+1}$, which is isomorphic to $S_{n+1-k}$. In a similar way, if we consider the set of permutations that start with the string $1\, 2\,\dots k$, then we obtain a lower interval that is isomorphic to $S_{n+1-k}$ in $S_{n+1}$. \end{Remark} Now we proceed to give our proof, starting with the sub-case $I^c = \{2\}$. Let $w=w_1\dots w_{n+1}$ be an element, in one-line notation, from $X_{I,J}^+$. Recall that $$ X_{I,J}^+ =\{ w\in W:\ I^c\supseteq \mt{Asc}_R(w^{-1})\ \text{ and } J^c\supseteq \mt{Asc}_R(w) \}. $$ The meaning of $I^c =\{ 2\} \supseteq \mt{Asc}_R(w^{-1})$ is that either $ \mt{Asc}_R(w^{-1})=\emptyset$, in which case $w$ is equal to $w_0$, the longest permutation, or $ \mt{Asc}_R(w^{-1})=\{2\}$, hence $2$ comes before $3$ in $w$, and there are no other consecutive pairs $(a,a+1)$ such that $a$ comes before $a+1$ in $w$. Note also that $ \mt{Asc}_R(w)$ cannot be empty unless $X_{I,J}^+=\{ w_0\}$.
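The membership conditions above are straightforward to check by machine. The following brute-force sketch (ours, not from the paper; the function names are our own) enumerates $X_{I,J}^+$ inside $S_{n+1}$ directly from the ascent-set description:

```python
from itertools import permutations

def ascents(w):
    """Asc_R(w): positions i (1-based) with w_i < w_{i+1}."""
    return {i + 1 for i in range(len(w) - 1) if w[i] < w[i + 1]}

def inverse(w):
    """One-line notation of w^{-1}."""
    v = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        v[val - 1] = pos
    return tuple(v)

def X_plus(n, Ic, Jc):
    """Brute-force X_{I,J}^+ in S_{n+1}:
    Asc_R(w^{-1}) subset of I^c and Asc_R(w) subset of J^c."""
    return [w for w in permutations(range(1, n + 2))
            if ascents(inverse(w)) <= set(Ic) and ascents(w) <= set(Jc)]

# smallest instance of Case 1: n = 5, I^c = {2}, J^c = {2, 4}
elems = X_plus(5, {2}, {2, 4})
```

For $n=5$, $I^c=\{2\}$, $J^c=\{2,4\}$ this returns six permutations, the smallest (in Bruhat-Chevalley order) being $2\,1\,6\,5\,4\,3$, in agreement with the analysis of Case 1 carried out below.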
\vspace{.5cm} We continue with the assumption that $w\neq w_0$. Suppose $ J^c = \{ p,q \}$ for $1 < p < p+1 < q < n$. We are going to write $L_1$ for the segment $w_1 w_2 \dots w_{p}$, $L_2$ for the segment $w_{p+1} \dots w_{q}$, and $L_3$ for the segment $w_{q+1} \dots w_{n+1}$. By our assumptions, all three of these segments are decreasing sequences. In particular, since $2$ comes before $3$ in $w$, $2$ cannot appear in $L_3$. In fact, $2$ and $3$ cannot appear in the same segment. First, we assume that $p=2$. Since any element of $X_{I,J}^+$ has descents (at least) at the positions $J=\{1,\widehat{2},3,4,\dots, \widehat{q},\dots, n\}$, the bottom element $\tau_0$ is either of the form \begin{align}\label{A:1 or 2} \tau_0=2\ 1\ | n+1 \ n \dots n-q + 3 \ \ n-q +2\, \ n-q +1 \dots 3, \end{align} or it is of the form \begin{align}\label{A:2 or 1} \tau_0=n+1 \ n \dots n-q + 4 \ \ 2\ \ 1\ | \ n-q + 3 \ \ n-q +2 \dots 3. \end{align} The bars between numbers indicate the possible positions of ascents. Note that the number of inversions of the former permutation is $1+ {n-1 \choose 2}$, and the number of inversions of the latter is \begin{align*} f_n(q) &:= \left( \sum_{i=1}^{q-2} n+1-i \right) + 1 + \left( \sum_{i=q+1}^{n} n+1-i \right) \\ &= {n+1 \choose 2} +1 - (n+1 -q) - (n+1- (q-1)), \end{align*} which is always greater than the former. Therefore, the minimal element $\tau_0$ of $X_{I,J}^+$ starts with $2\ 1$ (as in~\ref{A:1 or 2}). \vspace{.5cm} This element has a single ascent at the $2$-nd position. We will analyze the covers of $\tau_0$. Since an upward covering in Bruhat-Chevalley order is obtained by moving a larger number to the front, the entry $n+1$ of $L_2$ moves into $L_1$, and accordingly either 2 or 1 from $L_1$ moves into $L_2$. \vspace{.5cm} Recall that each double coset $W_I z W_J$ is an interval of $W$ in Bruhat-Chevalley order and $X_{I,J}^+$ consists of maximal elements of these intervals (see \cite[Theorem 1.2(ii)]{Curtis85}).
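The two inversion counts can be checked numerically. The sketch below (ours) does so for the instance $n=6$, $q=4$, where the candidate beginning with $2\ 1$ has $1+\binom{5}{2}=11$ inversions while the other candidate has $f_6(4)=15$:

```python
from math import comb

def inversions(w):
    """Number of pairs i < j with w_i > w_j (= Bruhat length of w)."""
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

n, q = 6, 4
first  = (2, 1, 7, 6, 5, 4, 3)   # candidate starting with 2 1
second = (7, 6, 2, 1, 5, 4, 3)   # candidate starting with n+1  n ...

# closed form f_n(q) from the display above
f = comb(n + 1, 2) + 1 - (n + 1 - q) - (n + 2 - q)
```

Since $f_n(q)-\bigl(1+\binom{n-1}{2}\bigr)=2q-4>0$ for $q>2$, the first candidate is always the shorter one.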
It follows from this critical observation that, to obtain a covering of $\tau_0$, $1$ has to move, and it becomes the last entry of $L_2$. In other words, the permutation $$ \tau_1= n+1\ 2 \ | \ n \dots n-q + 4 \ \ 1\ | \ n-q +3 \ \ n-q +2 \dots 3 $$ is the unique element in $X_{I,J}^+$ that covers $\tau_0$. \vspace{.5cm} Next, we analyze the covers of $\tau_1$; it has only two possible coverings which are obtained as follows: 1) 2 moves into $L_2$ and $n$ moves into $L_1$, 2) $1$ moves into $L_3$ and $n-q+3$ moves into $L_2$. The resulting elements are \begin{align*} \tau_2 &= n+1\ n \ | \ n-1 \dots n-q + 4 \ \ 2\ \ 1\ |\ n-q +3 \ \ n-q +2 \dots 3, \\ \tau_3 &= n+1\ 2 \ | \ n \dots n-q + 3\ | \ n-q +2 \ \ n-q +1 \dots 3\ \ 1. \end{align*} It is not difficult to see that each of these two elements is covered by the same element, namely $$ \tau_4 = n+1\ n \ | \ n-1 \dots n-q + 4 \ \ n-q+3\ \ 2\ \ |\ \ n-q +2 \dots 3 \ 1. $$ Observe that, in $\tau_4$, the only entry that can be moved is 2 and this is possible only if the inequality $q \leq n-1$ holds. This agrees with our assumption on $q$. Therefore, there exists a unique cover of $\tau_4$, which is $w_0$. Note that all that is said above is independent of $n$ as long as $p=2$ and $3 < q < n$. Hence, our poset is as in Figure~\ref{F:stretched diamond}. \begin{figure}[htp] \centering \begin{tikzpicture}[scale=.65] \begin{scope} \node at (0,-5) (t0) {$\tau_0$}; \node at (0,-2.5) (t1) {$\tau_1$}; \node at (-2,0) (t2) {$\tau_2$}; \node at (2,0) (t3) {$\tau_3$}; \node at (0,2.5) (t4) {$\tau_4$}; \node at (0,5) (t5) {$\tau_5=w_0$}; \draw[-, thick] (t0) to (t1); \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t4); \draw[-, thick] (t4) to (t5); \end{scope} \end{tikzpicture} \caption{The Bruhat-Chevalley order on $X_{I,J}^+$ for Case 1.} \label{F:stretched diamond} \end{figure} Finally, we look at the case for $p>2$.
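Before treating $p>2$, the six-element poset just described can be confirmed by brute force in the smallest instance $n=5$, $p=2$, $q=4$ (so $I^c=\{2\}$, $J^c=\{2,4\}$). The following sketch (ours; it uses the standard tableau criterion for the Bruhat-Chevalley order) recovers exactly the Hasse diagram of Figure~\ref{F:stretched diamond}, with $\tau_0=2\,1\,6\,5\,4\,3$, $\tau_1=6\,2\,5\,1\,4\,3$, $\tau_2=6\,5\,2\,1\,4\,3$, $\tau_3=6\,2\,5\,4\,3\,1$, $\tau_4=6\,5\,4\,2\,3\,1$ and $\tau_5=w_0$:

```python
from itertools import permutations

def ascents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] < w[i + 1]}

def inverse(w):
    v = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        v[val - 1] = pos
    return tuple(v)

def bruhat_leq(u, v):
    """Tableau criterion: u <= v iff every sorted prefix of u is
    entrywise <= the corresponding sorted prefix of v."""
    return all(a <= b for i in range(1, len(u))
               for a, b in zip(sorted(u[:i]), sorted(v[:i])))

# X_{I,J}^+ for n = 5, I^c = {2}, J^c = {2, 4}
X = [w for w in permutations(range(1, 7))
     if ascents(inverse(w)) <= {2} and ascents(w) <= {2, 4}]

less = {(u, v) for u in X for v in X if u != v and bruhat_leq(u, v)}
covers = {(u, v) for (u, v) in less
          if not any((u, z) in less and (z, v) in less for z in X)}
```

The computed cover relations form precisely the "stretched diamond": a chain $\tau_0\lessdot\tau_1$, the incomparable pair $\tau_2,\tau_3$ above $\tau_1$, and the chain $\tau_4\lessdot w_0$ on top.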
The only difference between this and the $p=2$ case is that the elements of $X_{I,J}^+$ all begin with the decreasing string $n+1\ \ n \dots n-p+4$ of length $p-2$. By using Remark~\ref{R:can induct} and induction, we reduce this case to the case of $p=2$. Therefore, our poset $X_{I,J}^+$ is isomorphic to the one in Figure~\ref{F:stretched diamond}. \vspace{.25cm} We proceed with the second sub-case of Case 1; we assume that $I^c= \{n-1\}$ and $J^c=\{ p,q\}$ with $2\leq p< p+1 < q \leq n-1$. As in the previous sub-case, for an element $w\in X_{I,J}^+$ these conditions imply that $w$ is of the form $w= L_1 | L_2 | L_3$, where $L_i$, $i=1,2,3$ are decreasing sequences of lengths $p, q-p$ and $n+1-q$, respectively, and the number $n-1$ appears before $n$ in $w$. It follows that the smallest element of $X_{I,J}^+$ is of the form $$ \tau_0=w_1\dots w_p | w_{p+1}\dots w_q | w_{q+1} \dots w_{n+1} = n-1 \ n-2 \dots n-q \ | \ n+1\ \ n \ \ n-q -1 \ \ n-q-2 \dots 1. $$ Then arguing exactly as in the previous case one sees that the poset under consideration is also of the form depicted in Figure~\ref{F:stretched diamond}. \subsection{Case 2.} We start with the sub-case $I^c =\{1\}$, and we allow $J^c$ to be an arbitrary proper subset $J^c \subset \{1,\dots, n\}$. Let $w= w_1\dots w_{n+1}$ be an element from $X_{I,J}^+$ and let $v_1\dots v_{n+1}$ denote the inverse $w^{-1}$ of $w$. Since $\mt{Asc}_R(w^{-1}) \subseteq \{1\}$, we have either $w=w^{-1}=w_0$, or \begin{align}\label{A:obviously} v_1 < v_2 > v_3 > \cdots > v_{n+1}. \end{align} Let $V'$ denote the set of permutations whose entries satisfy the inequalities in (\ref{A:obviously}) and set $$ V:=V'\cup \{w_0\}. $$ Then $V$ has $n+1$ elements, and furthermore, $(V,\leq)$ is a chain. But in Bruhat-Chevalley order we have $$ u \leq v \iff u^{-1} \leq v^{-1} \ \text{ for every } u,v\in S_{n+1}. $$ Therefore, $V^{-1}:= \{ v^{-1} :\ v\in V\}$ is a chain also. It follows that, as a subposet of $V^{-1}$, $X_{I,J}^+$ is a chain as well.
This finishes the proof of the first part of Case 2. Next, we assume that $I^c=\{n\}$ and let $w= w_1\dots w_{n+1}\in X_{I,J}^+$. If $w^{-1}=v_1\dots v_{n+1}$ denotes the inverse of $w$, then, as before, we have either $w=w^{-1}=w_0$, or \begin{align}\label{A:obviously2} v_1 > v_2 > \cdots > v_n < v_{n+1}. \end{align} By arguing as in the previous paragraph we see that $X_{I,J}^+$ is a chain in this case as well, and hence, the proof of Case 2 is finished. \subsection{Case 3.} Now, we proceed with the proof of Case 3, but since we have symmetry, we will consider the case of $I^c=\{i\}$ with $2\leq i \leq n-1$ and $J^c=\{1, j\}$ with $2 < j <n-1$ only. Let us note also that as the number $i\in I^c$ grows up to $\lfloor \frac{n+1}{2} \rfloor$ we get more freedom to position $i$ and $i+1$ in an element $w\in X_{I,J}^+$; this makes $X_{I,J}^+$ grow taller as a poset. Now we are ready to present the structure of our poset in detail. \vspace{.5cm} A generic element $w=w_1 \dots w_{n+1}$ from $X_{I,J}^+$ is viewed as a concatenation of three segments, $w=L_1 L_2 L_3$ where $L_1=w_1$, $L_2=w_2\dots w_j$, and $L_3 = w_{j+1}\dots w_{n+1}$. The possible ascents are at the $1$-st and at the $j$-th positions. At the same time, if $w\neq w_0$, then we have $w^{-1} \neq w_0$, therefore, $w^{-1}$ has an ascent at the $i$-th position. This means that $i$ comes before $i+1$ in $w$ and there are no other pairs $(a,a+1)$ such that $a$ comes before $a+1$ in $w$. Therefore, $i$ and $i+1$ are always contained in distinct segments except for $w=w_0$. In particular, $i$ appears either in $L_1$ or in $L_2$. \vspace{.25cm} We proceed to determine the smallest element $\tau_0$ of $X_{I,J}^+$. Let us write $\tau_0$ in the form $\tau_0=L_1L_2L_3$ as in the previous paragraph and let $k$ be the number in $L_1$. We observe that if $k\neq n+1$, then we have $k=i$.
Indeed, if we assume otherwise that $k\neq i$ and that $k\neq n+1$, then we find that $k+1$ comes after $k$ in $\tau_0$; this is a contradiction. As a consequence of this observation we see that $\tau_0$ starts either with $n+1$ or with $i$. On the other hand, if $k=n+1$, then by interchanging $k$ with the first entry of $L_2$ we obtain another element in $X_{I,J}^+$ and this new element is smaller than $\tau_0$ in Bruhat-Chevalley order. This is a contradiction as well. Therefore, in $\tau_0$, we have $i$ as the first entry. Now there are two easy cases; 1) $j \leq i$ and $\tau_0$ is of the form \begin{align}\label{A:bottom element 1} \tau_0= i \ \ i-1 \dots i-j+1 \ | \ n+1 \ \ n \dots i+1 \ \ i- j \ \ i-j -1 \dots 1. \end{align} 2) $j > i$ and $\tau_0$ is of the form \begin{align}\label{A:bottom element 2} \tau_0= i \ \ n+1 \ \ n \dots n+2 - (j-i) \ \ i-1 \ \ i-2 \dots 1 \ | \ n+1 - (j-i) \ \ n - (j-i) \dots i+1. \end{align} Note that the vertical bar is between the $j$-th and the $j+1$-st positions. \vspace{.25cm} We proceed with some observations regarding how the posets climb up in the Bruhat-Chevalley order on $X_{I,J}^+$, starting with $\tau_0$'s as in (\ref{A:bottom element 1}) and (\ref{A:bottom element 2}). First of all, if $\tau_0$ is as in (\ref{A:bottom element 1}), then to get a covering relation, there is only one possible interchange, namely, moving $i-j+1 \in L_2$ into $L_3$. In this case, to maintain the descents, the number that is replaced by $i-j+1$ has to be $n+1$, which goes into the first entry of $L_2$. In other words, the unique $w\in X_{I,J}^+$ that covers $\tau_0$ is \begin{align}\label{A:bottom element 3} w= i \ \ n+1 \ \ i-1 \dots i-j +2 \ | \ n \dots i+1 \ \ i- j+1 \ \ i- j \ \ i-j -1 \dots 1. 
\end{align} It is easy to verify that there are exactly two elements that cover $w$: \begin{align}\label{A:bottom element 4} w_{(2)}= n+1 \ \ i \ \ i-1 \dots i-j +2 \ | \ n \dots i+1 \ \ i- j+1 \ \ i- j \ \ i-j -1 \dots 1 \end{align} and \begin{align}\label{A:bottom element 5} w^{(2)}= i \ \ n+1 \ \ n \ \ i-1 \dots i-j+3 \ | \ n -1 \dots i+1 \ \ i- j+2 \ \ i- j +1 \dots 1. \end{align} By Remark~\ref{R:can induct} we see that all elements that lie above $w_{(2)}$ in $X_{I,J}^+$ start with $n+1$. Also, since there is no ascent at the $1$-st position for such elements, the resulting upper interval $[w_{(2)},w_0]$ in $X_{I,J}^+$ is isomorphic to a double coset poset in $S_{n+1}$ with $I^c=\{i\}$ and $J^c=\{j\}$, hence it is a chain. \vspace{.5cm} There are two covers of $w^{(2)}$; one of them, $w_{(3)}$, is an element of the interval $[w_{(2)},w_0]$ (hence $w_{(3)}$ covers $w_{(2)}$ as well). The other cover of $w^{(2)}$ is \begin{align}\label{A:bottom element 6} w^{(3)}= i \ \ n+1 \ \ n \ \ n-1 \ \ i-1 \dots i-j+4 \ | \ n-2 \dots i+1 \ \ i- j+3 \ \ i- j +2 \dots 1. \end{align} Now the pattern is clear; $w^{(3)}$ has exactly two covers, one of which lies in $[w_{(3)},w_0]$ and the other $w^{(4)}$ has a similar structure. Therefore, the bottom portion of the resulting poset is a `ladder', as depicted in Figure~\ref{F:Anii2}, and the chains $w^{(p)}$ and $w_{(p)}$, $p\geq 3$, climb up to meet for the first time either at $w_0$, or at \begin{align}\label{A:bottom element 7} w^{(m+1)}=w_{(m+1)}= n+1 \ \ n\dots n+1 - (j-2) \ \ i \ | \ n+1 - (j-1) \dots \widehat{i} \dots 2 \ \ 1. \end{align} In the latter case, of course, $w_0$ is the unique cover of $w^{(m+1)}=w_{(m+1)}$ and it is easy to check from (\ref{A:bottom element 7}) that this happens if and only if $n+1-(j-1) > i$. In both of these cases, the height of our poset does not exceed $j$.
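As a sanity check (ours, not from the paper), the smallest instance with $2<j\leq i$, namely $n=5$, $i=3$, $j=3$ (so $I^c=\{3\}$, $J^c=\{1,3\}$), can be enumerated by brute force. One finds exactly six elements — $\tau_0=3\,2\,1\,6\,5\,4$, $w=3\,6\,2\,5\,4\,1$, $w_{(2)}=6\,3\,2\,5\,4\,1$, $w^{(2)}=3\,6\,5\,4\,2\,1$, the meeting element $6\,5\,3\,4\,2\,1$, and $w_0$ — arranged as in Figure~\ref{F:Anii2}:

```python
from itertools import permutations

def ascents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] < w[i + 1]}

def inverse(w):
    v = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        v[val - 1] = pos
    return tuple(v)

def bruhat_leq(u, v):
    # tableau criterion for the Bruhat-Chevalley order
    return all(a <= b for i in range(1, len(u))
               for a, b in zip(sorted(u[:i]), sorted(v[:i])))

def inversions(w):
    return sum(1 for i in range(len(w))
                 for j in range(i + 1, len(w)) if w[i] > w[j])

# I^c = {3}, J^c = {1, 3} in S_6
X = [w for w in permutations(range(1, 7))
     if ascents(inverse(w)) <= {3} and ascents(w) <= {1, 3}]
bottom = min(X, key=inversions)
```

In particular, the two middle elements $w_{(2)}$ and $w^{(2)}$ are incomparable, as the ladder shape requires.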
\begin{figure}[htp] \centering \begin{tikzpicture}[scale=.6] \begin{scope} \node at (0,-7.5) (t0) {$\tau_0$}; \node at (0,-5) (t1) {$w$}; \node at (-2,-2.5) (t2) {$w^{(2)}$}; \node at (-2,0) (t4) {$w^{(3)}$}; \node at (2,-2.5) (t3) {$w_{(2)}$}; \node at (2,0) (t5) {$w_{(3)}$}; \node at (-2,2.5) (t6) {$w^{(m-1)}$}; \node at (2,2.5) (t7) {$w_{(m-1)}$}; \node at (-2,5) (t8) {$w^{(m)}$}; \node at (2,5) (t9) {$w_{(m)}$}; \node at (-2,6.5) (t10) {}; \node at (2,6.5) (t11) {}; \draw[-, thick] (t0) to (t1); \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t5); \draw[-, thick] (t2) to (t5); \draw[dotted, ultra thick] (t5) to (t7); \draw[dotted, ultra thick] (t4) to (t6); \draw[dotted, ultra thick] (t8) to (t10); \draw[dotted, ultra thick] (t9) to (t11); \draw[-, thick] (t6) to (t8); \draw[-, thick] (t7) to (t9); \draw[-, thick] (t6) to (t9); \end{scope} \end{tikzpicture} \caption{The Bruhat-Chevalley order on $X_{I,J}^+$ for $I^c=\{i\}$, $J^c=\{1,j\}$, where $2< j \leq i$.} \label{F:Anii2} \end{figure} Now we look at the covers of $\tau_0$ in the case of (\ref{A:bottom element 2}). In this case, there are exactly two covers of $\tau_0$: \begin{align}\label{A:bottom element 21} w_{(1)}= n+1 \ \ n \dots n+2 - (j-i) \ \ i \ \ i-1 \dots 1 \ | \ n+1 - (j-i) \ \ n - (j-i) \dots i+1 \end{align} and \begin{align}\label{A:bottom element 22} w^{(1)}= i \ \ n+1 \ \ n \dots n+2-(j-i) \ \ n+1 - (j-i) \ \ i-1 \ \ i-2 \dots 2 \ | \ \ n - (j-i) \dots i+1 \ \ 1. \end{align} The elements of $X_{I,J}^+$ that cover (\ref{A:bottom element 21}) and (\ref{A:bottom element 22}) are found in a similar way to those of (\ref{A:bottom element 4}) and (\ref{A:bottom element 5}). We depict the bottom portion of $X_{I,J}^+$ for $\tau_0$ as in (\ref{A:bottom element 2}) in Figure~\ref{F:Anii22}.
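Again, one can check the smallest instance by machine (our sketch, not from the paper): for $n=6$, $i=2$, $j=4$ (so $I^c=\{2\}$, $J^c=\{1,4\}$) the poset has five elements — the bottom element $2\,7\,6\,1\,5\,4\,3$ of the shape (\ref{A:bottom element 2}), its two incomparable covers $2\,7\,6\,5\,4\,3\,1$ and $7\,6\,2\,1\,5\,4\,3$, the meeting element $7\,6\,5\,2\,4\,3\,1$, and $w_0$:

```python
from itertools import permutations

def ascents(w):
    return {i + 1 for i in range(len(w) - 1) if w[i] < w[i + 1]}

def inverse(w):
    v = [0] * len(w)
    for pos, val in enumerate(w, start=1):
        v[val - 1] = pos
    return tuple(v)

def bruhat_leq(u, v):
    # tableau criterion for the Bruhat-Chevalley order
    return all(a <= b for i in range(1, len(u))
               for a, b in zip(sorted(u[:i]), sorted(v[:i])))

# I^c = {2}, J^c = {1, 4} in S_7
X = [w for w in permutations(range(1, 8))
     if ascents(inverse(w)) <= {2} and ascents(w) <= {1, 4}]
```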
Note that, as in the previous case, the chains of the poset climb up to meet for the first time either at $w_0$, or at \begin{align*} w^{(m+1)}=w_{(m+1)}= n+1 \ \ n\dots n+1 - (j-2) \ \ i \ | \ n+1 - (j-1) \dots \widehat{i} \dots 2 \ \ 1. \end{align*} The latter situation occurs if and only if $n+1-(j-1) > i$, or, equivalently, $n > i+j -2$. \begin{figure}[htp] \centering \begin{tikzpicture}[scale=.6] \begin{scope} \node at (0,-5) (t1) {$\tau_0$}; \node at (-2,-2.5) (t2) {$w^{(1)}$}; \node at (-2,0) (t4) {$w^{(2)}$}; \node at (2,-2.5) (t3) {$w_{(1)}$}; \node at (2,0) (t5) {$w_{(2)}$}; \node at (-2,2.5) (t6) {$w^{(m-1)}$}; \node at (2,2.5) (t7) {$w_{(m-1)}$}; \node at (-2,5) (t8) {$w^{(m)}$}; \node at (2,5) (t9) {$w_{(m)}$}; \node at (-2,6.5) (t10) {}; \node at (2,6.5) (t11) {}; \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t5); \draw[-, thick] (t2) to (t5); \draw[dotted, ultra thick] (t5) to (t7); \draw[dotted, ultra thick] (t4) to (t6); \draw[dotted, ultra thick] (t8) to (t10); \draw[dotted, ultra thick] (t9) to (t11); \draw[-, thick] (t6) to (t8); \draw[-, thick] (t7) to (t9); \draw[-, thick] (t6) to (t9); \end{scope} \end{tikzpicture} \caption{The Bruhat-Chevalley order on $X_{I,J}^+$ for $I^c=\{i\}$, $J^c=\{1,j\}$, where $2\leq i < j$.} \label{F:Anii22} \end{figure} This finishes the proof of Case 3. By combining with Stembridge's results on tight Bruhat order and the classification of the spherical double flag varieties for $SL(n+1)$, we now have a proof of the following result. \begin{Theorem}\label{T:Type A G orbits} Let $G$ denote $SL(n+1)$ and let $P_I$ and $P_J$ be two standard parabolic subgroups of $G$. If $G/P_I \times G/P_J$ is a spherical double flag variety, then the inclusion poset $(Z,\subseteq)$ of $G$-orbit closures is either a chain or one of the ``ladder lattices'' as depicted in Figure~\ref{F:ladder}.
More precisely, we have \begin{enumerate} \item if $|I^c|=|J^c|=1$, then $Z$ is isomorphic to a chain; \item if $|I^c|=1$ and $J^c = \{ s_j, s_{j+1}\}$ ($1\leq j \leq n-1$), then $Z$ is isomorphic to a chain; \item if $I^c\in \{ \{s_2\}, \{s_{n-1}\}\}$ and $J^c=\{s_p,s_q\}$ ($1<p <p+1<q<n$), then the Hasse diagram of $Z$ is as in Figure~\ref{F:stretched diamond}; \item if $I^c \in \{ \{s_1\}, \{s_n\}\}$ and $|J^c| \geq 2$ (but $J^c\neq \{ s_j,s_{j+1} \}$ $(1\leq j \leq n-1 )$), then $Z$ is isomorphic to a chain; \item if $I^c \in \{ \{s_2\},\dots,\{s_{n-1}\}\}$, and $J^c=\{s_1,s_j\}$ or $J^c=\{s_j,s_n\}$ with $2< j < n-1$, then \begin{enumerate} \item the Hasse diagram of $Z$ is as in (A) in Figure~\ref{F:ladder} for $2 < j \leq i$ and $i+j-2 < n$; \item the Hasse diagram of $Z$ is as in (B) in Figure~\ref{F:ladder} for $2 < j \leq i$ and $i+j-2 \geq n$; \item the Hasse diagram of $Z$ is as in (C) in Figure~\ref{F:ladder} for $j > i \geq 2$ and $i+j-2 < n$; \item the Hasse diagram of $Z$ is as in (D) in Figure~\ref{F:ladder} for $j > i \geq 2$ and $i+j-2 \geq n$. \end{enumerate} \end{enumerate} \end{Theorem} \begin{comment} \begin{Remark} We anticipate that the following statement is true across all types: If $|J^c|=1$, then the poset of $G$ orbit closures is a chain, otherwise it is one of the ladder lattices as in Figure~\ref{F:ladder}. These cases will be handled in an upcoming manuscript. 
\end{Remark} \end{comment} \begin{figure}[htp] \centering \begin{tikzpicture}[scale=.6] \begin{scope}[xshift= -9.5cm] \node at (0,-9) (t00) {(A)}; \node at (0,-7.5) (t0) {$\tau_0$}; \node at (0,-5) (t1) {$w$}; \node at (-2,-2.5) (t2) {$w^{(2)}$}; \node at (-2,0) (t4) {$w^{(3)}$}; \node at (2,-2.5) (t3) {$w_{(2)}$}; \node at (2,0) (t5) {$w_{(3)}$}; \node at (-2,2.5) (t6) {$w^{(m-1)}$}; \node at (2,2.5) (t7) {$w_{(m-1)}$}; \node at (-2,5) (t8) {$w^{(m)}$}; \node at (2,5) (t9) {$w_{(m)}$}; \node at (0,7.5) (t10) {$w_{(m+1)}$}; \node at (0,10) (t11) {$w_0$}; \draw[-, thick] (t0) to (t1); \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t5); \draw[-, thick] (t2) to (t5); \draw[dotted, ultra thick] (t5) to (t7); \draw[dotted, ultra thick] (t4) to (t6); \draw[-, thick] (t6) to (t8); \draw[-, thick] (t7) to (t9); \draw[-, thick] (t6) to (t9); \draw[-, thick] (t8) to (t10); \draw[-, thick] (t9) to (t10); \draw[-, thick] (t10) to (t11); \end{scope} \begin{scope}[xshift=-3.1cm] \node at (0,-9) (t00) {(B)}; \node at (0,-7.5) (t0) {$\tau_0$}; \node at (0,-5) (t1) {$w$}; \node at (-2,-2.5) (t2) {$w^{(2)}$}; \node at (-2,0) (t4) {$w^{(3)}$}; \node at (2,-2.5) (t3) {$w_{(2)}$}; \node at (2,0) (t5) {$w_{(3)}$}; \node at (-2,2.5) (t6) {$w^{(m-1)}$}; \node at (2,2.5) (t7) {$w_{(m-1)}$}; \node at (-2,5) (t8) {$w^{(m)}$}; \node at (2,5) (t9) {$w_{(m)}$}; \node at (0,7.5) (t10) {$w_{0}$}; \draw[-, thick] (t0) to (t1); \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t5); \draw[-, thick] (t2) to (t5); \draw[dotted, ultra thick] (t5) to (t7); \draw[dotted, ultra thick] (t4) to (t6); \draw[-, thick] (t6) to (t8); \draw[-, thick] (t7) to (t9); \draw[-, thick] (t6) to (t9); \draw[-, thick] (t8) to (t10); \draw[-, thick] (t9) to (t10); \end{scope} \begin{scope}[xshift=3.1cm] \node at (0,-9) (t00) {(C)}; \node at (0,-5) (t1) {$\tau_0$}; \node at (-2,-2.5) (t2) 
{$w^{(2)}$}; \node at (-2,0) (t4) {$w^{(3)}$}; \node at (2,-2.5) (t3) {$w_{(2)}$}; \node at (2,0) (t5) {$w_{(3)}$}; \node at (-2,2.5) (t6) {$w^{(m-1)}$}; \node at (2,2.5) (t7) {$w_{(m-1)}$}; \node at (-2,5) (t8) {$w^{(m)}$}; \node at (2,5) (t9) {$w_{(m)}$}; \node at (0,7.5) (t10) {$w_{(m+1)}$}; \node at (0,10) (t11) {$w_0$}; \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t5); \draw[-, thick] (t2) to (t5); \draw[dotted, ultra thick] (t5) to (t7); \draw[dotted, ultra thick] (t4) to (t6); \draw[-, thick] (t6) to (t8); \draw[-, thick] (t7) to (t9); \draw[-, thick] (t6) to (t9); \draw[-, thick] (t8) to (t10); \draw[-, thick] (t9) to (t10); \draw[-, thick] (t10) to (t11); \end{scope} \begin{scope}[xshift=9.5cm] \node at (0,-9) (t00) {(D)}; \node at (0,-5) (t1) {$\tau_0$}; \node at (-2,-2.5) (t2) {$w^{(2)}$}; \node at (-2,0) (t4) {$w^{(3)}$}; \node at (2,-2.5) (t3) {$w_{(2)}$}; \node at (2,0) (t5) {$w_{(3)}$}; \node at (-2,2.5) (t6) {$w^{(m-1)}$}; \node at (2,2.5) (t7) {$w_{(m-1)}$}; \node at (-2,5) (t8) {$w^{(m)}$}; \node at (2,5) (t9) {$w_{(m)}$}; \node at (0,7.5) (t10) {$w_{0}$}; \draw[-, thick] (t1) to (t2); \draw[-, thick] (t1) to (t3); \draw[-, thick] (t2) to (t4); \draw[-, thick] (t3) to (t5); \draw[-, thick] (t2) to (t5); \draw[dotted, ultra thick] (t5) to (t7); \draw[dotted, ultra thick] (t4) to (t6); \draw[-, thick] (t6) to (t8); \draw[-, thick] (t7) to (t9); \draw[-, thick] (t6) to (t9); \draw[-, thick] (t8) to (t10); \draw[-, thick] (t9) to (t10); \end{scope} \end{tikzpicture} \caption{The ladder posets.} \label{F:ladder} \end{figure} \vspace{.5cm} \textbf{Acknowledgements.} We are grateful to John Stembridge for several reasons, including for his Maple codes and for answering our questions about his work. We thank Bill Graham for bringing this problem to our attention. We thank Roman Avdeev for his comments on the first version of this manuscript. 
Finally, we thank the referee for a very careful reading of our paper and for constructive suggestions.
\section{Introduction} {\it Binary differential equations} (BDE) appear widely in geometric problems. A BDE in two variables $x, y$ has the form \begin{equation}\label{bde} a(x,y)\, dy^2+2 b(x,y) \, dx dy + c(x,y) \, dx^2=0 \end{equation} with smooth functions $a, b, c$ of $x, y$. It is regarded as a smooth map $\mathbb{R}^2 \to \mathbb{R}^3$ assigning $(x, y) \mapsto (a, b, c)$, and we consider the $C^\infty$-topology on the mapping space. Put $\delta(x,y):=b(x,y)^2-a(x,y)c(x,y)$. If $\delta>0$, the BDE locally defines two foliations which are transverse to each other. The {\it discriminant curve} is given by $\delta=0$, at which the integral curve of the BDE generically has a cusp. Two germs of BDEs $F$ and $G$ are equivalent if there is a local diffeomorphism in the $xy$-plane sending the integral curves of $F$ to those of $G$; topological equivalence is defined analogously. Several classification results are known for germs of (families of) BDEs \cite{BT1, BT2, BFT, CBR, DR, Davydov1, Davydov2, Tari1, Tari2}. As a specific geometric setting, consider a surface locally given by $z=f(x,y)$; {\it asymptotic curves} are integral curves of the BDE \begin{equation}\label{abde} f_{yy} \, dy^2 + 2 f_{xy} \, dx dy + f_{xx} \, dx^2=0 \end{equation} (called an {\em asymptotic BDE}, for short). Asymptotic BDEs form a thin subset of the space of general BDEs. The discriminant curve coincides with the {\it parabolic curve} in the surface theory; denote it by $\mathcal{P}$. Note that the above equivalence relation of BDEs preserves the discriminant, but loses any information about inflections of integral curves; thus the theory of general BDEs is useful for analyzing parabolic curves, but not for flecnodal curves. In this paper, we are interested in bifurcation phenomena of asymptotic BDEs.
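Where $\delta>0$, the two tangent directions at a point can be computed by solving the quadratic $a\,p^2+2b\,p+c=0$ in the slope $p=dy/dx$. A minimal numerical sketch (ours; the function name is our own):

```python
import math

def bde_slopes(a, b, c):
    """Real slopes p = dy/dx solving a p^2 + 2 b p + c = 0 at a point.

    delta = b^2 - a c > 0 gives two transverse directions,
    delta = 0 corresponds to the discriminant curve, delta < 0 to none."""
    delta = b * b - a * c
    if delta < 0:
        return []
    if a == 0:                      # one direction is vertical (dx = 0)
        return [] if b == 0 else [-c / (2 * b)]
    r = math.sqrt(delta)
    return sorted([(-b - r) / a, (-b + r) / a])
```

For instance, the BDE $dy^2 - dx^2 = 0$ has the two slopes $\pm 1$ everywhere, while $dy^2 + dx^2 = 0$ has none.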
In \cite{SKSO}, we studied generic $1$ and $2$-parameter families of surfaces in real projective $3$-space $\mathbb{P}^3$ and presented a classification of Monge forms under projective transformations in accordance with equisingularity types of central projection; indeed it is a natural extension of a well-known classification of jets of a {\it generic} surface given by Arnold-Platonova (cf. \cite{Arnold, Platonova, Landis}) and is related to a work of Uribe-Vargas on $1$-parameter bifurcations of parabolic and flecnodal curves \cite{UV}. There are in total 20 normal forms of parabolic and flat Monge forms up to codimension $4$ (Table \ref{main_table1} in \S 2). For each normal form, we will carefully check the criteria in the topological classification of general BDEs due to Davydov, Bruce and Tari \cite{BT1, BT2, BFT, Davydov1, Davydov2, Tari1, Tari2}, which determine the diffeomorphic/topological type of our asymptotic BDEs (Propositions \ref{BDE0}, \ref{BDE1} and \ref{BDE2}). It then turns out that asymptotic BDEs at parabolic points arising in generic $2$-parameter families of surfaces realize all generic types of IDEs (implicit differential equations) of codimension $2$ classified in Tari \cite{Tari1}. The BDE at flat umbilical points is more remarkable; while our degenerate flat umbilic class $\Pi^f_2$ generically appears in a $2$-parameter family, its asymptotic BDE is not equivalent to any type of BDE of codimension $2$ with $(a(0), b(0),c(0))=(0,0,0)$ classified in Tari \cite{Tari2}, but it is equivalent to the normal form \begin{equation}\label{oliver} xdy^2+2ydxdy+x^2dx^2=0, \end{equation} which is actually one of the types of codimension $3$ in the space of (general) BDEs obtained by Oliver \cite{Oliver} (Remark \ref{rem_BDE2}).
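The agreement of low-order terms with (\ref{oliver}) is easy to verify: for the $\Pi^f_2$ Monge form (taking the $+$ sign) the asymptotic BDE has coefficients $(a,b,c)=(2x+12y^2,\; 2y+3\alpha x^2,\; 12x^2+6\alpha xy)$, which all vanish at the origin and whose lowest-order parts match the shapes $x$, $y$, $x^2$ of the coefficients of (\ref{oliver}); the actual equivalence of course requires a coordinate change, cf. \cite{Oliver}. A sympy sketch (ours):

```python
import sympy as sp

x, y, alpha = sp.symbols('x y alpha')

# Pi^f_2 Monge form (with the + sign): f = x y^2 + x^4 + y^4 + alpha x^3 y
f = x*y**2 + x**4 + y**4 + alpha*x**3*y

a = sp.diff(f, y, 2)      # dy^2-coefficient of the asymptotic BDE
b = sp.diff(f, x, y)      # half of the dx dy-coefficient
c = sp.diff(f, x, 2)      # dx^2-coefficient
```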
\begin{figure} \centering \includegraphics[height=5cm]{bde.pdf}\;\;\; \includegraphics[height=5cm]{Pif2_g.pdf} \caption{BDE of type (\ref{oliver}) in \cite{Oliver} (left) and degenerate flat point ($D_5$) of type $\Pi_2^f$ in \cite{SKSO} (right). } \label{oliver1} \end{figure} Next, we determine the bifurcation diagrams for generic $2$-parameter families of surfaces. We show that the obtained families of asymptotic BDEs are topologically versal (as families of general BDEs) in the sense of Tari \cite{Tari1}, except for the class $\Pi_2^f$ mentioned above. Then, bifurcations of the parabolic curve are read off from the bifurcation diagrams depicted in \cite{Tari1}. However, this is not useful for analyzing the flecnodal curve; for instance, unlike general BDEs, the $A_3$-transition of the asymptotic BDE at a point of type $\Pi_{v,1}^p$ creates a `figure-eight' flecnodal curve, as was first observed by F. Aicardi (Trieste, 1997) and A. Ortiz-Rodr\'iguez (Paris, 1999) through computer experiments (cf. \cite{UV}). Therefore, by direct computations, we examine the bifurcation of the flecnodal curve explicitly in examples. We also present the bifurcation diagram of type $\Pi^f_2$, which would be completely new in the literature. \subsection*{Acknowledgement} The first and second authors thank the organizers of the 14th Workshop on Real and Complex Singularities for giving them a nice opportunity to work together. The first author is supported by JSPS grant no.16J02200. The third author is partly supported by JSPS grants no.24340007 and 15K13452. \section{Monge forms at parabolic and flat points} We briefly recall our classification of Monge forms via projective transformations obtained in \cite{SKSO}. Take an affine chart $\mathbb{R}^3=\{[x:y:z:1]\} \subset \mathbb{P}^3$, and consider germs of surfaces in $\mathbb{R}^3$ at the origin given by Monge forms $z=f(x, y)$ with $f(0)=0$ and $df(0)=0$.
We say that two germs or jets of surfaces are {\it projectively equivalent} if there is a projective transformation on $\mathbb{P}^3$ sending one to the other. Projective transformations preserving the origin and the $xy$-plane form a $10$-dimensional subgroup of $PGL(4)$, and it acts on the space $J=\mathfrak{m}_{x,y}^2/\mathfrak{m}_{x,y}^{k+1}$ of $k$-jets of Monge forms in a canonical way. In \cite{Platonova81, Platonova} (cf. \cite{Arnold, Landis}), Platonova studied a projectively invariant stratification of $J$ with codimension $\le 2$, and it has recently been extended by Kabata \cite{Kabata} systematically up to codimension $4$ so that each stratum is characterized by singularity types of central projections which the surface-germ possesses. Here, the {\it central projection} of a surface $M$ from a viewpoint $q \in \mathbb{P}^3$ means the restriction to $M$ of a canonical projection $\pi_q:\mathbb{P}^3-\{q\} \to \mathbb{P}^2$; at each point $p \in M$, the projection is locally described as a map-germ $\mathbb{R}^2, 0 \to \mathbb{R}^2, 0$ in local coordinates centered at $p \in M$ and $\pi_q(p) \in \mathbb{P}^2$, respectively, and we consider its singularity type (the class up to {\it $\mathcal{A}$-equivalence}, i.e., the equivalence relation of map-germs via natural actions of diffeomorphism-germs of the source and the target). The singularity type measures how the line contacts $M$: from a point on a non-asymptotic line, the projection is of fold type ${\rm I\hspace{-.1em}I}_2: (y, x^2)$ ($2$-point contact), and from a point on an asymptotic line, it is of cusp type ${\rm I\hspace{-.1em}I}_3:(y, x^3+xy)$ in general ($3$-point contact), and plenty of degenerate types of map-germs appear, which are not determined only by the contact order, e.g. the parabolic curve $\mathcal{P}$ is formed by points at which the projection has the beaks/lips singularity ${\rm I}_2: (y, x^3\pm x^2y)$ or worse (Figure \ref{beaks}).
\begin{figure} \centering \includegraphics[width=10cm]{view1.png} \caption{Central projection: Parabolic points are characterized as points at which the projection has the lips/beaks singularity or worse.} \label{beaks} \end{figure} The normal forms of parabolic and flat umbilical Monge forms are listed in Table \ref{main_table1} below (\cite{SKSO, Platonova}), where $k$ is the order of jets, $\mbox{cod}$ is the codimension of strata, and the last column $\mbox{proj.}$ means singularity types of central projection of the surface at the origin from viewpoints on the asymptotic line (the type in bracket indicates a more degenerate singularity type of projection from some isolated viewpoint specially chosen on the line). Let $M \subset \mathbb{P}^3$ be a non-singular surface. Suppose that an open subset $U \subset M\cap \mathbb{R}^3$ is parametrized by a graph $z=f(x,y)$ of a function. Since we are working in projective geometry, we may define the Monge form at $p$ by the $k$-jet of $f$ at $p$ off the linear term $j^1f(p)$. Then the {\it Monge-Taylor map} $\theta: U \to J$ is locally defined with respect to this affine chart. Take an open cover of $M$ by such affine charts so that $M$ is locally given by graphs of functions. By a standard transversality argument, we easily see that any class $\Pi^*_{*,*}$ with codimension $\le 2$ appears for a generic embedding $M \to \mathbb{P}^3$ (\cite{Platonova81}), and any class with $\mbox{cod}=k>2$ appears in a generic family of embeddings $M \times V \to \mathbb{P}^3$ ($V \subset \mathbb{R}^{k-2}$). \begin{table}[h] $$ \begin{array}{l | l | c c | l } \mbox{class} & \mbox{normal form} & k & \mbox{cod} & \mbox{proj. 
} \\ \hline \hline \Pi^p_{{\rm I},1} & y^2+x^3 + xy^3+\alpha x^4 & 4 & 1 & {\rm I}_2 \, ({\rm I}_3) \\ \Pi^p_{{\rm I},2} & y^2+x^3\pm xy^4+\alpha x^4+\beta y^5 + x^2\phi_3 & 5 & 2 & {\rm I}_2 \, ({\rm I}_4) \\ \Pi^p_{c,1} & y^2+x^2 y + \alpha x^4 \;\;\; (\alpha\not=0, \frac{1}{4}) & 4 & 2 & {\rm I\hspace{-.1em}I\hspace{-.1em}I}_2\, ({\rm I\hspace{-.1em}I\hspace{-.1em}I}_3) \\ \hline \Pi^p_{c,2}& y^2+x^2 y + \frac{1}{4} x^4 + \alpha x^5+ y \phi_4 \; (\alpha\not=0) & 5 & 3 & {\rm I\hspace{-.1em}I\hspace{-.1em}I}_2 \\ \Pi^p_{c,4} & y^2+ x^2 y + x^5 + y\phi_4 & 5 & 3 & {\rm I\hspace{-.1em}V}_1 \\ \Pi^p_{{\rm I},3} & y^2+x^3+ xy^5+\alpha x^4+\phi & 6 & 3 & {\rm I}_2\, ({\rm I}_{5})\\ \Pi^p_{v,1}& y^2\pm x^4 +\alpha x^3y+\beta x^2y^2 \;\; (\beta\not=\pm\frac{3}{8}\alpha^2) & 4 & 3 & {\rm V}_1\, ({\rm VI}) \\ \Pi^f_{1} & xy^2\pm x^3 + \alpha x^3 y + \beta y^4 & 4 & 3 & {\rm I}_2^\pm, {\rm I}_3 ({\rm I}_4) \\ \hline \Pi^p_{c,3} & y^2+x^2 y + \frac{1}{4} x^4 + y \phi_4 & 5 & 4 & {\rm I\hspace{-.1em}I\hspace{-.1em}I}_3\, ({\rm I\hspace{-.1em}I\hspace{-.1em}I}_{4}) \\ \Pi^p_{c,5} & y^2+ x^2 y \pm x^6+y(\phi_4+\phi_5) & 6 & 4 & {\rm I\hspace{-.1em}V}_2 \\ \Pi^p_{{\rm I},4} &y^2+x^3+\alpha x^4+\phi & 6 & 4 & {\rm I}_2\, ({\rm I}_{6})\\ \Pi^p_{v,2}& y^2 \pm x^4+ \alpha x^3y \pm \frac{3}{8}\alpha^2 x^2y^2 & 4 & 4 & {\rm V}_1\, ({\rm VI}_1)\\ \Pi^p_{v,3} & y^2+ x^5 + y(\phi_3+\phi_4) & 5 & 4 & {\rm V}_2\, ({\rm VI}_2) \\ \Pi^f_{2} & xy^2 + x^4 \pm y^4+\alpha x^3 y & 4 & 4 & {\rm I}_2^- ({\rm I\hspace{-.1em}I\hspace{-.1em}I}) \\ \end{array} $$ \caption{\small Monge forms at parabolic and flat points are obtained in \cite{Arnold, Platonova, Landis} for $\mbox{cod}=1,2$ and \cite{SKSO} for $\mbox{cod}=3,4$. In the list, $\alpha, \beta, \cdots$ are leading moduli parameters, $\phi_r$ denotes a generic homogeneous polynomial of degree $r$ and $\phi=\beta y^5 + \gamma y^6 + x^2(\phi_3 + \phi_4)$. The double-sign $\pm$ corresponds in the same order within each of $\Pi^p_{v,1}$ and $\Pi^p_{v,2}$.
} \label{main_table1} \end{table} \section{Binary differential equations} \subsection{General BDE} A BDE (\ref{bde}) falls into one of two cases. In the first case, the functions $a, b, c$ do not all vanish at the origin simultaneously; then the BDE is just an implicit differential equation (IDE). In the second case, all the coefficients of the BDE vanish at the origin. Stable topological models of BDEs belong to the first case; they arise when the discriminant is smooth (or empty). If the unique direction at any point of the discriminant is transverse to it (i.e. integral curves form a family of cusps), then the BDE is stable and smoothly equivalent to $dy^2+xdx^2=0$, as was classically known from Cibrario \cite{CBR} and also Dara \cite{DR}. If the unique direction is tangent to the discriminant, then the BDE is stable and smoothly equivalent to $dy^2+(-y+\lambda x^2)dx^2=0$ with $\lambda\neq 0,\frac{1}{16}$, as shown by Davydov \cite{Davydov1, Davydov2}; the corresponding point in the plane is called a \emph{folded singularity} -- more precisely, a \emph{folded saddle} if $\lambda<0$, a \emph{folded node} if $0<\lambda<\frac{1}{16}$ and a \emph{folded focus} if $\frac{1}{16}<\lambda$, see Figure \ref{fig1}. \begin{figure} \centering \includegraphics[width=10cm]{folded1.pdf}\\ \caption{Folded singularities: saddle (left), node (center) and focus (right)}\label{fig1} \end{figure} In both cases, the topological classification of generic $1$ and $2$-parameter families of BDEs has been established in Bruce-Fletcher-Tari \cite{BT1, BFT} and Tari \cite{Tari1, Tari2}, respectively. We will use those results later. We also need a generic $3$-parameter family of BDEs studied by Oliver \cite{Oliver}. \subsection{BDE of asymptotic curves} We are concerned with (degenerate) parabolic points and flat umbilic points of a surface. In fact, the asymptotic BDE is intimately related to the singularity type of the Monge form.
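For a surface given as a graph $z=f(x,y)$, the asymptotic BDE is $f_{yy}\,dy^2+2f_{xy}\,dxdy+f_{xx}\,dx^2=0$, so its coefficients are $(a,b,c)=(f_{yy},f_{xy},f_{xx})$ and its discriminant $b^2-ac$ cuts out the parabolic curve. The following sympy sketch (our own illustration; the two germs are chosen only as models of the transverse and tangent cases of \S 3.1) computes these data:

```python
import sympy as sp

x, y = sp.symbols('x y')

def asymptotic_bde(f):
    # coefficients (a, b, c) of the asymptotic BDE  a dy^2 + 2b dxdy + c dx^2 = 0
    return sp.diff(f, y, 2), sp.diff(f, x, y), sp.diff(f, x, 2)

def discriminant(f):
    a, b, c = asymptotic_bde(f)
    return sp.expand(b**2 - a*c)

# transverse case (Cibrario model): f = y^2 + x^3
f1 = y**2 + x**3
d1 = discriminant(f1)           # -12*x : the parabolic curve {x = 0} is smooth
a1, b1, _ = asymptotic_bde(f1)
p1 = sp.simplify(-b1/a1)        # double root p = dy/dx on the discriminant: p = 0,
                                # the x-direction, transverse to {x = 0}

# tangent case (Davydov, folded singularity): f = y^2 + x^2*y
f2 = y**2 + x**2*y
d2 = discriminant(f2)           # 4*x**2 - 4*y : the parabolic curve is {y = x^2}
a2, b2, _ = asymptotic_bde(f2)
p2 = sp.simplify(-b2/a2)        # p = -x : slope 0 at the origin, hence tangent
                                # there to {y = x^2}, whose slope 2x also vanishes
```

On the discriminant the BDE has the double root $p=-b/a$, which is how the unique direction is computed above.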
The parabolic curve can be seen as the locus where the Monge form has $A_{\geq2}$-singularities; when the Monge form has an $A_3^\pm$-singularity, the surface has a cusp of Gauss, which corresponds to the class $\Pi^p_{c,1}$ -- in this case the asymptotic BDE has a folded saddle singularity (resp. a folded node or focus singularity) if the Monge form has a singularity of type $A_3^-$ (resp. $A_3^+$). The transitions in $1$-parameter families occur generically in three ways, at the following singularities of the Monge form: non-versal $A_3$, $A_4$ and $D_4$ (flat umbilic) \cite{BFT}. For $2$-parameter families, $A_3,$ $A_4$, $A_5$ and $D_5$ singularities of the Monge form generically appear. Below, the Monge form is written as $$f(x,y)=\sum_{2\le i+j} c_{ij}\, x^i y^j, $$ and the $k$-jet $j^kf(0)$ is assumed to coincide with the normal form as in Table \ref{main_table1} for each class. \begin{prop} \label{BDE0} The following classes in Table \ref{main_table1} correspond to structurally stable types of BDE given in \cite{CBR, DR, Davydov1}. \begin{enumerate} \item[$(\Pi^p_{{\rm I},k})$] $(1 \le k\le 4)$. The parabolic curve is smooth and the unique direction defined by $\delta=0$ is transverse to the curve; the asymptotic BDE is smoothly equivalent to $$dy^2+x\, dx^2=0.$$ \item[$(\Pi^p_{c,k})$] $(k=1,4,5)$. The parabolic curve is smooth and the unique direction defined by $\delta=0$ is tangent to the curve; the asymptotic BDE is smoothly equivalent to $$dy^2+(-y+ \lambda x^2)dx^2=0$$ with $\lambda=6(c_{40}-\frac{1}{4})\not=0$, where $c_{40}$ is the coefficient of $x^4$ in the normal form. \end{enumerate} \end{prop} \noindent {\sl Proof} :\; The results follow from the comments in \S 3.1 above. In the second case, i.e., $j^4f= y^2+x^2y+c_{40}x^4$, the $2$-jet of the asymptotic BDE is transformed to the above form via $x=\bar{x}$ and $y=-\frac{1}{2}\bar{x}^2-\bar{y}$. 
\hfill $\Box$ \begin{rem}\label{rem_BDE0}\upshape As $c_{40}=0$ in the normal forms of classes $\Pi^p_{c,4}$ and $\Pi^p_{c,5}$, we see $\lambda=-\frac{3}{2}<0$, thus the asymptotic BDE has a folded saddle at the origin. The {\it folded saddle-node bifurcation} (cf. Fig.2 in \cite{Tari1}) occurs at $\lambda=0$. That is the case $c_{40}=\frac{1}{4}$, which corresponds to the classes $\Pi^p_{c,k}\; (k=2,3)$ dealt with below. Notice that the other exceptional value $\lambda=\frac{1}{16}$ is not related to our classification of Monge forms given by projection-types (Table \ref{main_table1}). That is, the {\it folded node-focus bifurcation} of the asymptotic BDE occurs within the same class $\Pi^p_{c,1}$ (cf. Fig.3 in \cite{Tari1}), and $\lambda=\frac{1}{16}$ imposes a condition on coefficients of order greater than $4$ in the normal form, which is independent of the geometry of the central projection of the surface. We should remark that $\Pi^p_{c,4}$ and $\Pi^p_{c,5}$ cause $1$ and $2$-parameter bifurcations of the flecnodal curve, respectively. For instance, during a $1$-parameter bifurcation of type $\Pi^p_{c,4}$, a butterfly point moves on the flecnodal curve and passes through this degenerate cusp of Gauss at the bifurcation moment, see \cite[\S 4]{SKSO} and \cite{UV}. \end{rem} \begin{rem}\label{rem_BDE1}\upshape At an elliptic point of a smooth surface in $\mathbb R^3$ there is a unique pair of conjugate directions for which the included angle (i.e. the angle between these directions) is minimal. These directions are called {\it characteristic directions} and are determined in terms of the coefficients of the first and second fundamental forms. These directions are not preserved by projective transformations in general, but at a cusp of Gauss, Oliver \cite{Oliver2} shows that the characteristic directions are invariant under projective transformations. 
We can use the normal form $\Pi^p_{c,1}$ to obtain the BDE associated with the characteristic directions; it is indeed smoothly equivalent to $dy^2+(-y+\lambda x^2)dx^2=0$ with $\lambda=-6c_{40}+\frac{3}{2}\not=0$. The configurations of asymptotic and characteristic curves at a cusp of Gauss are given in \cite{Oliver2}. \end{rem} \begin{prop}\label{BDE1} The following classes correspond to some topological types of BDE with codimension $1$. \begin{enumerate} \item[$(\Pi^p_{v,1})$] The Monge form has an $A_3$-singularity at the origin, at which the parabolic curve has a Morse singularity; the asymptotic BDE is topologically equivalent to the non-versal $A_3^\pm$-transitions with Morse type 1 in \cite{BFT} $$dy^2+(\pm x^2 \pm y^2) dx^2=0.$$ \item[$(\Pi^p_{c,2})$] The Monge form has an $A_4$-singularity at the origin, at which the parabolic curve is smooth; the asymptotic BDE is topologically equivalent to the well-folded saddle-node type in \cite{BFT, Davydov2} $$dy^2+(-y + x^3) dx^2=0,$$ provided the coefficient $c_{50}$ of $x^5$ in the normal form is non-zero. \item[$(\Pi^f_{1})$\,\;] The Monge form has a $D_4^\pm$-singularity at the origin, at which the parabolic curve has a Morse singularity; the asymptotic BDE is topologically equivalent to the bifurcation of star/$1$-saddle types in \cite{BT1} \begin{eqnarray*} D_4^+: && ydy^2-2x dxdy -y dx^2=0 \;\; \mbox{\rm (star)};\\ D_4^-: &&ydy^2+2x dxdy +y dx^2=0 \;\; \mbox{\rm ($1$-saddle)}. \end{eqnarray*} \end{enumerate} \end{prop} \noindent {\sl Proof} :\; In \cite{SKSO, Kabata}, the class $\Pi^p_{v,1}$ is explicitly described as follows. Let $z_0=y^2+c_{20}x^2+\sum_{i+j\ge 3} c_{ij}x^iy^j \in J$. 
Then, $z_0$ is projectively equivalent to $\Pi^p_{v,1}$ if and only if $$c_{20}=c_{30}=c_{21}=0, \; \; c_{40}\neq0, \;\; S:=3 c_{31}^2 + 8 c_{40}(c_{12}^2 - c_{22})\not=0.$$ In fact, exactly the same condition appears in \cite[p.501, Case 1]{BFT} as the condition for the $A_3^\pm$-transition: $S\not=0$ means that the $2$-jet $j^2\delta(0)$ is non-degenerate (ibid), thus the normal form follows from Theorem 2.7 (and Prop. 4.1) in \cite{BFT}. For the class $\Pi^p_{c,2}$, $z_0$ (of the above form) is projectively equivalent to $\Pi^p_{c,2}$ if and only if $$c_{20}=c_{30}=B=0, \; c_{40}\not=0, \; A\not=0,$$ with $B:=c_{21}^2c_{40}-4c_{40}^2$ and $A:=c_{21}^2c_{50}+4c_{12}c_{40}^2-2c_{21}c_{31}c_{40}$. This condition is the same as the one for the $A_4$-transition in \cite[p.502, Case 2]{BFT}, and then the normal form of the BDE is obtained (\cite[Prop. 4.2]{BFT}, also see \cite{Davydov2}). For the class $\Pi^f_{1}$, the asymptotic BDE is given in \cite[Cor. 5.3]{BT1}: indeed, for our normal form of $\Pi^f_{1}$, the parabolic curve is defined by $3 x^2 - y^2 + 18 \beta x y^2+ \cdots =0$, hence it has a node at the origin for arbitrary $c_{31}=\alpha, c_{04}=\beta$. \hfill $\Box$ \begin{prop}\label{BDE2} The following classes correspond to some topological types of BDE with codimension $\ge 2$. \begin{enumerate} \item[$(\Pi^p_{v,2})$] The Monge form has an $A_3$-singularity at the origin, at which the parabolic curve has a cusp singularity; the asymptotic BDE is topologically equivalent to the cusp type in \cite{Tari1} $$dy^2+(\pm x^2+y^3) dx^2=0, $$ provided $C_1 :=\mp 5c_{50}c_{31}^3+12c_{41}c_{31}^2\mp 24c_{32}c_{31}+32c_{23}\neq 0 $. \item[$(\Pi^p_{v,3})$] The Monge form has an $A_4$-singularity at the origin, at which the parabolic curve has a Morse singularity; the asymptotic BDE is topologically equivalent to the non-transversal Morse type in \cite{Tari1} $$dy^2+(xy+x^3) dx^2=0$$ provided $C_2:=c_{31}\neq 0$. 
\item[$(\Pi^p_{c,3})$] The Monge form has an $A_5$-singularity at the origin, at which the parabolic curve is smooth; the asymptotic BDE is topologically equivalent to the folded degenerate elementary type in \cite{Tari1} $$dy^2+(-y \pm x^4) dx^2=0,$$ provided $C_3:=c_{60}-\frac{1}{2}c_{41}\neq 0$. \item[($\Pi^f_{2})$\,\;] The Monge form has a $D_5$-singularity at the origin, at which the parabolic curve has a cusp singularity; the asymptotic BDE is topologically equivalent to a cusp type 2 in \cite{Oliver} $$xdy^2+2ydxdy+x^2dx^2=0.$$ \end{enumerate} \end{prop} \noindent {\sl Proof} :\; For each of the first three classes, the claim follows from Proposition 4.1 and Theorem 1.1 of Tari \cite{Tari1}. Let $S, A, B$ be as in the proof of Proposition \ref{BDE1}. As shown in \cite{SKSO}, the condition for $z_0=y^2+c_{20}x^2+o(2)$ to be equivalent to $\Pi_{v,2}^p$ is given by $$c_{20}=c_{30}=c_{21}=S=0,\;\; c_{40}\neq0,$$ which is entirely the same as the condition of (iii) in \cite[p.156]{Tari1} ($C_1$ is given by $C$ in the bottom of that page). Also the condition for $\Pi_{v,3}^p$ is given by $$c_{20}=c_{30}=c_{21}=c_{40}=0,$$ and that for $\Pi_{c,3}^p$ is given by $$c_{20}=c_{30}=B=A=0.$$ The same conditions can be found in (ii) and (i) in \cite[p.156]{Tari1} respectively ($C_2, C_3$ are given by $c_3$ in (ii) and $A$ in (i) in {\it ibid.}). Hence, those asymptotic BDEs are equivalent to the normal forms presented in Theorem 1.1 in \cite{Tari1}. For the last class $\Pi^f_{2}$, the $1$-jet of the asymptotic BDE is given by $j^1(a,b,c)(0)=(2x,2y,0)$ and the parabolic curve is defined by $- 4 y^2+24 x^3 +\cdots =0$, namely it has a cusp at the origin. Thus the corresponding BDE is equivalent to one of the types described in Theorem 3.4 of \cite{Oliver}. \hfill $\Box$ \begin{rem}\label{rem_BDE2} \upshape In Tari \cite{Tari2} and Oliver \cite{Oliver}, BDEs with the discriminant having a cusp are classified. 
Notice that the BDE for $\Pi^f_{2}$ in Proposition \ref{BDE2} is equivalent to one of `type 2' in Oliver's classification \cite{Oliver}, which appears in a generic $3$-parameter family of general BDEs, while the type appears in a generic $2$-parameter family of asymptotic BDEs by our classification of Monge forms \cite{SKSO}. This is not surprising, for asymptotic BDEs form a thin subset of the space of all BDEs. In fact, it is shown in \cite[Prop.2.1(2)]{Tari2} that for a general BDE with a cusp, the $1$-jet is reduced by linear changes of coordinates and multiplication by non-zero constants to $j^1(a,b,c)(0)=(x, \pm y+ \alpha x, 0)$ ($\alpha \in \mathbb{R}$), while in our case it is reduced to the particular form $j^1(a,b,c)(0)=(x, y, 0)$ as seen in the proof of Proposition \ref{BDE2}. This explains the gap in codimension between the two classifications. It would be interesting to gain a deeper understanding of the geometry of the asymptotic BDE at a flat umbilic point. \end{rem} \begin{rem} \upshape In \cite{SKSO, Kabata}, our classification of Monge forms has been achieved in accordance with singularity types of central projections, or, almost equivalently, the contact of the surface with lines, while Tari \cite{Tari1} described types of asymptotic BDEs in terms of singularities of height functions and singularities of parabolic curves, which reflects the contact of the surface with planes. These two different approaches lead to the same conditions $C_1, C_2, C_3\not=0$. This should be explained by a duality between the contact with lines and the contact with planes. \end{rem} \section{Families of Monge forms and BDEs} In Propositions \ref{BDE1} and \ref{BDE2}, we have compared our Monge forms in \cite{SKSO} with the types in the classification of general BDEs given by Tari \cite{Tari1}. In this section, we compare families of Monge forms and families of BDEs. 
Given an $s$-parameter family $f(x,y,\lambda)\; (=f_{\lambda}(x,y)): U \times \mathbb{R}^s \to \mathbb{R}$ $(U \subset \mathbb{R}^2)$, we define a family of Monge-Taylor maps $$\theta: U \times \mathbb{R}^{s} \to J, \quad \theta(p,\lambda):=j^kf_{\lambda}(p).$$ Below, for each class in Table \ref{main_table1}, we take a family of Monge forms whose Monge-Taylor map $\theta$ is transverse to the corresponding stratum in the jet space $J$ (Table \ref{table2}). We show that the associated family of asymptotic BDEs is topologically versal in the sense of Tari \cite{Tari1}. \subsection{Transverse families of Monge forms} For instance, recall the cases of $\Pi_{v,k}^p$ as in Propositions \ref{BDE1} and \ref{BDE2}. Write $z=\sum_{2\le i+j\le k} c_{ij}x^iy^j \in J$ as before, and regard the $c_{ij}$ as coordinates of $J$. The locus of parabolic Monge forms in $J$ is defined by $c_{20}c_{02}-c_{11}^2=0$, thus the tangent space to the locus at $z_0=y^2+o(2)$ is defined by the $1$-form $dc_{20}=0$ in $T_{z_0}J=J$. Let $z_0=y^2+o(2)$ be of type $\Pi_{v,1}^p$. By the local defining equation of the stratum, its tangent space at $z_0$ is given by the linear equations $$dc_{20}=dc_{30}=dc_{21}=0\quad \mbox{on} \;\; T_{z_0}J.$$ In particular, take $z_0=y^2\pm x^4 +\alpha x^3y+\beta x^2y^2+o(4)$ with $S(z_0)=3\alpha^2\mp 8\beta\not=0$ and $f: U \to \mathbb{R}$ a representative of it; $z_0=j^4f(0)$. The Monge-Taylor map $\theta: U \to J$ sends $p \mapsto j^4f(p)$ off the linear term. Then, the image $d\theta(T_0U)$ and $\partial/\partial c_{20}$ span the normal space (i.e. the quotient of $T_{z_0}J$ by the tangent space of the stratum), and thus the $1$-parameter family $f(x,y,t)=f(x,y)+tx^2$ induces a family of Monge-Taylor maps $\theta: U\times \mathbb{R} \to J$ which is transverse to the stratum at the origin $(0,0)$. 
For the class $\Pi_{v,2}^p$, let $$\textstyle f(x,y)=y^2\pm x^4+\alpha x^3y\pm \frac{3}{8}\alpha^2 x^2y^2+\sum_{i+j=5}c_{ij}x^iy^j+o(5).$$ Then the tangent space of the stratum at $z_0=j^4f(0)$ is defined by $$dc_{20}=dc_{30}=dc_{21}=dS=0\quad \mbox{on} \;\; T_{z_0}J.$$ Since $dS=6c_{31}dc_{31}+8(c_{12}^2-c_{22})dc_{40}+16c_{12}c_{40}dc_{12}-8c_{40}dc_{22}$, we have $dS=6\alpha dc_{31}\mp 3 \alpha^2 dc_{40} \mp 8dc_{22}$ at $z_0$. Then the condition that $$\textstyle \mbox{rank}\; [\; dc_{20}, dc_{30}, dc_{21}, dS\;]^T \left[d\theta(\frac{\partial}{\partial x})\; d\theta(\frac{\partial}{\partial y})\right]=2$$ is written down as $$C_1:=\mp 5c_{50}c_{31}^3+12c_{41}c_{31}^2 \mp 24c_{32}c_{31}+32c_{23}\neq 0,$$ which is exactly the condition required in Proposition \ref{BDE2} (\cite[p.156]{Tari1}). Then we can easily find a desired $2$-parameter deformation of $f$; for instance, when $c_{23}\not=0$ and the other fifth order coefficients vanish (then $C_1\not=0$), we may take $f(x,y,t,u)=f(x,y)+t x^2+u x^2y$. Also for the other cases in Table \ref{main_table1}, any representative $f(x,y)$ of the normal form (the $k$-jet of the Monge form) admits a deformation whose Monge-Taylor map is transverse to the stratum, provided the Taylor coefficients of $f$ of higher order ($> k$) are chosen to be appropriately generic, if necessary. In Table \ref{table2}, we collect examples of such families of Monge forms deforming the normal forms in Table \ref{main_table1}. Here we omit the stable case dealt with in Proposition \ref{BDE0}. 
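The differential $dS$ used above is readily checked mechanically; here is a small sympy sketch (our own verification, shown for the `$+$' branch $c_{40}=1$, $c_{22}=\frac{3}{8}\alpha^2$):

```python
import sympy as sp

c31, c40, c12, c22, alpha = sp.symbols('c31 c40 c12 c22 alpha')

# S as in the proof of Proposition BDE1 (notation of the text)
S = 3*c31**2 + 8*c40*(c12**2 - c22)
grad = {v: sp.diff(S, v) for v in (c31, c40, c12, c22)}

# evaluate at z0 of class Pi^p_{v,2} with the '+' sign:
# c40 = 1, c31 = alpha, c12 = 0, c22 = (3/8)*alpha^2
at = {c40: 1, c31: alpha, c12: 0, c22: sp.Rational(3, 8)*alpha**2}
dS = {v: g.subs(at) for v, g in grad.items()}
# dS = 6*alpha*dc31 - 3*alpha^2*dc40 - 8*dc22 on this branch
```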
\begin{table} $$ \begin{array}{l | l l} \mbox{class} & \mbox{family}\\ \hline\hline \Pi^p_{c,2}& y^2+x^2 y + \frac{1}{4} x^4 + \alpha x^5 + t x^3 &(\alpha\not=0) \\ \Pi^p_{c,3}(\pm) & y^2+x^2 y + \frac{1}{4} x^4 +\gamma x^4y + t x^3 + u x^4 & (\gamma \lessgtr 0) \\ \hline \Pi^p_{v,1}(\pm, +) & y^2+ x^4 +\alpha x^3y+\beta x^2y^2 + t x^2 & (\beta> \frac{3}{8}\alpha^2) \\ \Pi^p_{v,1}(\pm, -) & y^2-x^4 +\alpha x^3y+\beta x^2y^2 + t x^2 & (\beta< \frac{3}{8}\alpha^2) \\ \Pi^p_{v,2}(\pm) & y^2 \pm x^4 + \alpha x^3y \pm \frac{3}{8}\alpha^2 x^2y^2 + \gamma x^2y^3+ t x^2+u x^2y & (\gamma\not=0) \\ \Pi^p_{v,3} & y^2+ x^5 + \gamma x^3y + t x^2+u x^2y & (\gamma\not=0) \\ \hline \Pi^f_{1}(\pm) & xy^2\pm x^3 + \alpha x^3 y + \beta y^4 + t x^2 \\ \Pi^f_{2}(\pm) & xy^2 + x^4 \pm y^4+\alpha x^3 y + t x^2+ux^3 \end{array} $$ \caption{\small Examples of families of Monge forms (with parameters $t, u$) whose Monge-Taylor maps are transverse to the strata. } \label{table2} \end{table} \subsection{Versal families of BDE} As a general theory, the germ of a BDE (\ref{bde}) with $a(0)\not=0$ is canonically transformed to the germ of an IDE $$p^2+\frac{a(x,y)c(x,y)-b(x,y)^2}{a(x,y)^2}=0 \qquad \left(p=\frac{dy}{dx}\right)$$ by a simple coordinate change $\bar{x}=x$ and $\bar{y}=y+\int_0^x \frac{b(s,y)}{a(s,y)}ds$. For an IDE, $p^2+\varphi(x,y)=0$, and moreover for a family of IDEs, $p^2+\varphi(x,y,t,u)=0$, there are known useful criteria for detecting genericity, by which generic classifications of IDEs and of families of IDEs have been achieved; see Tari \cite{Tari1} for details (also \cite{BFT}). Given a deformation of a parabolic Monge form $y^2+o(2)$, we obtain a family of IDEs of asymptotic curves, and apply Tari's criteria to it. 
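Before checking the families, the reduction can be illustrated on a single germ. The following sympy sketch (our own verification) writes the asymptotic BDE of the cusp-of-Gauss jet $f=y^2+x^2y+c_{40}x^4$ as a quadratic in $p=dy/dx$ and applies the coordinate change from the proof of Proposition \ref{BDE0}, recovering the modulus $\lambda=6(c_{40}-\frac{1}{4})$:

```python
import sympy as sp

x, y, p, c40 = sp.symbols('x y p c40')
X, Y, P = sp.symbols('X Y P')

# asymptotic BDE of f = y^2 + x^2*y + c40*x^4 as a quadratic a*p^2 + 2*b*p + c in p = dy/dx
f = y**2 + x**2*y + c40*x**4
a, b, c = sp.diff(f, y, 2), sp.diff(f, x, y), sp.diff(f, x, 2)
Q = a*p**2 + 2*b*p + c

# the change x = X, y = -X^2/2 - Y used in the proof; then p = dy/dx = -X - P, P = dY/dX
Qbar = sp.expand(Q.subs({x: X, y: -X**2/2 - Y, p: -X - P}) / 2)

# Qbar takes the Davydov form P^2 - Y + lam*X^2 with lam = 6*(c40 - 1/4)
lam = Qbar.coeff(X, 2)
```

For this polynomial jet the transformed equation is exactly of Davydov's form, with no remainder terms.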
Now we check the criteria for the asymptotic IDEs and BDEs of the following four examples of families of Monge forms in Table \ref{table2}: \begin{itemize} \item[($\Pi_{v,2}^p$)] $p^2\pm 6 x^2 + \gamma y^3+ u y +t - \frac{3}{2}u^2 x^2 - 3 \gamma t x^2 y=0$ $(\gamma\not=0)$ (we set $\alpha=0$ for simplicity). It is just an IDE of cusp type in the sense of \cite{Tari1}. We check the criterion in Proposition 3.5 (ii) of \cite{Tari1}: $$ \left| \begin{array}{cc} \varphi_t(0) & \varphi_{ty}(0) \\ \varphi_u (0) & \varphi_{uy}(0) \end{array}\right| =-1\not=0, $$ thus, by Theorem 3.6 of \cite{Tari1}, our family of asymptotic BDEs is fiber topologically equivalent to $$dy^2+(\pm x^2+y^3+uy +t) dx^2=0. $$ \item[($\Pi_{v,3}^p$)] $\textstyle p^2+3 \gamma x y+10 x^3+t + u y - \frac{3}{2} u^2 x^2 - 5 \gamma u x^3=0$ $(\gamma\not=0)$. It is of non-transverse Morse singularity type. We check the criterion in Proposition 3.3 (ii) of \cite{Tari1}: $$ \left| \begin{array}{ccc} 0&1&6\\ \varphi_t(0) & \varphi_{ty}(0) & \varphi_{txx}(0)\\ \varphi_u(0) & \varphi_{uy}(0) & \varphi_{uxx}(0)\\ \end{array}\right| =-6\not=0, $$ thus, by Theorem 3.4 of \cite{Tari1}, our family is fiber topologically equivalent to $$dy^2+(xy+x^3+ux^2+t) dx^2=0.$$ \item[($\Pi_{c,3}^p$)] $\textstyle p^2 + y \mp \frac{15}{2}\gamma x^4+3 t x + 6 u x^2 \pm 6 \gamma x^2 y=0$ $(\gamma\not=0)$. It is of folded degenerate elementary singularity type. We check the criterion in Proposition 3.1 (ii) of \cite{Tari1}: $$ \left| \begin{array}{cc} \varphi_{tx}(0) & \varphi_{txx}(0) \\ \varphi_{ux} (0) & \varphi_{uxx}(0) \end{array}\right| =-36\not=0, $$ thus, by Theorem 3.2 of \cite{Tari1}, our family is fiber topologically equivalent to $$dy^2+(-y\pm x^4+ux^2+tx) dx^2=0.$$ \end{itemize} \begin{itemize} \item[($\Pi^f_{1}$)] This case cannot be reduced to a family of IDEs; it is a genuine $1$-parameter family of BDEs, so we refer to Example 4.1 in \cite{BT2}. Consider $F=(12\beta y^2+2x)p^2+2(3\alpha x^2+2y)p+(6\alpha xy+2t+6x)=0$. 
In this case the linear part of $F$ provides all the topological information about the family of BDEs (\cite{BT1,BT2}). We check the versality criterion in Proposition 2.1 of \cite{BT2}: $$ \left| \begin{array}{ccc} 2 & 0&0 \\ 0&2 & 0\\ 6&0&2 \end{array}\right| =8\not=0, $$ thus we can reduce the $1$-jet of the BDE to the form $(y+t)dy^2\pm 2xdxdy \pm y dx^2=0$. Combining Theorem 3.5 and Example 4.1 in \cite{BT2} with $\phi(p)=(F_x+pF_y)(0,0,0,p)=6p^2+6$, our family is fiber topologically equivalent to \begin{eqnarray*} (y+t)dy^2-2 x dxdy - y dx^2&=&0,\\ (y+t)dy^2+2x dxdy + y dx^2&=&0. \end{eqnarray*} \end{itemize} Also for the families of Monge forms of types $\Pi_{v,1}^p$, $\Pi_{c,2}^p$ in Table \ref{table2}, it can be seen that the families of asymptotic BDEs are respectively equivalent to \begin{eqnarray*} dy^2+(\pm x^2\pm y^2+t) dx^2&=&0 \quad \mbox{(see \cite{BFT})},\\ dy^2+(-y+x^3+tx) dx^2&=&0 \quad \mbox{(see \cite{Tari1})}. \end{eqnarray*} Bifurcation diagrams of generic $2$-parameter families of IDEs have been clearly depicted in Tari \cite{Tari1, Tari2}. Therefore, we can deduce from those figures the bifurcation diagrams of parabolic curves for generic $2$-parameter families of parabolic Monge forms. In the next section, we compute the bifurcation of the flecnodal curve at parabolic and flat umbilic points. \section{Bifurcation diagrams for $2$-parameter families of surfaces} \subsection{Flecnodal curve} A point of a surface in $\mathbb{P}^3$ is {\it flecnodal} if an asymptotic line at that point has more than $3$-point contact with the surface. The closure of the set of such points is called the {\it flecnodal curve}, denoted by $\mathcal{S}$, which is an important characteristic of the surface; $\mathcal{S}$ lies in the hyperbolic domain and meets the parabolic curve $\mathcal{P}$ at (ordinary or degenerate) cusps of Gauss $\Pi_{c,*}^p$. 
A flecnodal point is characterized as a point at which the projection along an asymptotic line has the swallowtail singularity ${\rm I\hspace{-.1em}I}_4: (y, x^4+yx)$ or worse. From this fact, a local defining equation of $\mathcal{S}$ is obtained in a very neat way \cite{Saji, Kabata}. Suppose that the origin $0\in \mathbb{R}^3$ is a flecnodal point of a surface $z=f(x,y)$ such that the $x$-axis is the asymptotic line at $0$. In deforming the line, one has a $2$-dimensional freedom; thus the projection along the $x$-axis, $(x,y)\mapsto (y, f(x,y))$, admits the $2$-parameter deformation $$F_{v,w}(x,y) = (y-vx, f(x,y)-wx).$$ Let $\lambda=0$ be the equation defining the singular point set (contour generator) of $F_{v,w}$ and let $\eta$ be a vector field on a neighborhood of the origin in $\mathbb{R}^2$ which spans $\ker dF_{v,w}$ where $\lambda=0$, i.e. $$\textstyle \lambda(x,y,v,w):=\det dF_{v,w}(x,y) \;\; \mbox{and} \;\; \eta(x,y,v,w):=\frac{\partial}{\partial x}+v\frac{\partial}{\partial y}.$$ Then the swallowtail singularity of $F_{v,w}$, and thus the curve $\mathcal{S}$, is characterized by the three equations $$\lambda=\eta\lambda=\eta\eta\lambda=0.$$ The equation $\lambda=0$ can always be solved for $w$. Eliminating $v$ from the last two equations, we obtain an equation in the variables $x, y$, or parametrizations $x=x(v)$, $y=y(v)$, defining $\mathcal{S}$ around $(x,y)=(0,0)$. Furthermore, there generically appear some isolated points on the curve $\mathcal{S}$ at which an asymptotic line has more than $4$-point contact with the surface, i.e. the projection $F_{v,w}$ admits the butterfly singularity ${\rm I\hspace{-.1em}I}_5: (y, x^5+yx)$; we call such a point a {\it butterfly point} for short. It is defined by the additional equation $\eta\eta\eta\lambda=0$ on $\mathcal{S}$. 
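This recipe is easy to run symbolically. A sympy sketch (our own illustration; as a test germ we take $f=y^2+x^2y+\frac{1}{4}x^4+x^5$, a degenerate cusp of Gauss of class $\Pi^p_{c,2}$ with $\alpha=1$, our choice):

```python
import sympy as sp

x, y, v, w = sp.symbols('x y v w')

# test germ (class Pi^p_{c,2} with alpha = 1; our choice)
f = y**2 + x**2*y + sp.Rational(1, 4)*x**4 + x**5

# F_{v,w}(x,y) = (y - v*x, f - w*x), lambda = det dF_{v,w}, eta = d/dx + v*d/dy
lam = sp.expand(sp.Matrix([[sp.diff(y - v*x, x), sp.diff(y - v*x, y)],
                           [sp.diff(f - w*x, x), sp.diff(f - w*x, y)]]).det())
eta = lambda g: sp.diff(g, x) + v*sp.diff(g, y)

e1, e2 = sp.expand(eta(lam)), sp.expand(eta(eta(lam)))
# lam = 0 just solves for w; the curve S is cut out by e1 = e2 = 0 in (x, y)
vsol = sp.solve(e2, v)[0]          # here e2 is linear in v
S = sp.expand(e1.subs(v, vsol))    # -2*(y + x^2/2 + 10*x^3 + 100*x^4)
```

Up to a non-zero factor, the resulting equation agrees, at $t=0$, with the equation of $\mathcal{S}$ found for the family $\Pi^p_{c,2}$ in \S 5.2 below.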
The parabolic curve $\mathcal{P}$ is obtained as the locus of points where the singular point set of the projection $F_{v,w}$ is not smooth, i.e., $$\textstyle \lambda=\frac{\partial \lambda}{\partial x}=\frac{\partial \lambda}{\partial y}=0.$$ In the figures below, $\mathcal{P}$ is drawn in black and $\mathcal{S}$ in gray. \subsection{$1$-parameter bifurcations} For the three classes of codimension $1$ in Proposition \ref{BDE1}, we confirm the bifurcations of the curves $\mathcal{P}$ and $\mathcal{S}$ depicted in \cite{UV} by direct computations using the families of Monge forms in Table \ref{table2}. In \cite{UV}, bifurcations of $\mathcal{S}$ at hyperbolic points are also classified and positive/negative flecnodal points are considered. We do not enter the full scope of the classification at hyperbolic points, since we focus mainly on bifurcations at parabolic and flat umbilic points. \ \noindent $\bullet \;\; (\Pi^p_{v,1}(\pm,\pm))$ As seen in Proposition \ref{BDE1}, at a point of type $\Pi^p_{v,1}$, the non-versal $A_3^\pm$-transition of the BDE occurs (cf. \cite{SKSO}). According to the signs of coefficients in the normal form, there are four types. Among them, there are two types for which the flecnodal curve is created/canceled when passing through the point. Unexpectedly, the flecnodal curve has the form of a `figure-eight', as mentioned in the Introduction. Obviously, Proposition \ref{BDE1} alone does not help in understanding the appearance of the figure-eight curve, because the equivalence of BDEs does not preserve inflections of integral curves. Let us confirm this fact by a direct computation using the normal form of $\Pi^p_{v,1}$. 
Let $$f(x,y, t)=y^2+ x^4 + x^2y^2 + t x^2.$$ Solving the equations $\lambda=\eta\lambda=\eta\eta\lambda=0$, we have $$\textstyle (x,y)= \left(\mp \frac{v(2+v^2)\sqrt{-t-v^2}}{{\sqrt{2}(-2+v^2)\sqrt{2+v^2-v^4}}}, \pm \frac{(2+v^2)\sqrt{-t-v^2}}{\sqrt{2(2+v^2-v^4)}}\right)$$ with $t\le -v^2$ and $|v| \ll 1$, which parametrizes the part of the flecnodal curve $\mathcal{S}$ sitting in the half planes $y\ge0$ and $y\le 0$. The parabolic curve $\mathcal{P}$ is given by $t + (6+t) x^2 + y^2 + 6 x^4 - 3 x^2 y^2=0$. As $t$ varies, an elliptic Morse bifurcation of the parabolic curve occurs and the created flecnodal curve has the form of a figure-eight, as depicted in $(+, +)$, Figure \ref{1para}. No butterfly point appears on $\mathcal{S}$, since $\eta\eta\eta\lambda\not=0$ for $(x,y)$ near the origin. Also for the form $f(x,y, t)=y^2- x^4 + x^2y^2 + t x^2$, the parabolic curve undergoes a hyperbolic Morse bifurcation and the figure-eight flecnodal curve also arises, as depicted in $(-, +)$. In the other two cases, $f(x,y, t)=y^2\pm x^4 - x^2y^2 + t x^2$, the curves bifurcate as depicted in $(\pm, -)$. \ \noindent $\bullet \;\; (\Pi^p_{c,2})$ In this case, the $A_4$-transition of the asymptotic BDE occurs; during this process, a pair of cusps of Gauss (tangency points of $\mathcal{P}$ and $\mathcal{S}$) is created/canceled. Take $$\textstyle f(x,y, t)=y^2 + x^2 y + \frac{1}{4} x^4 + x^5 + t x^3$$ and project it along the $x$-axis. Then $\mathcal{P}$ is a cubic curve: $2y + 20 x^3+x^2 +6 tx=0$, and $\mathcal{S}$ is given by $y+100 x^4+10 x^3+ (\frac{1}{2} + 20 t) x^2 + 3 t x +t^2=0$. \ \noindent $\bullet \;\; (\Pi^f_1(\pm))$ Take $$\textstyle f_\pm (x,y,t)=x y^2 \pm x^3 + x^3 y + t x^2.$$ As a test, we consider the projection along the $x$-axis (and the $y$-axis). In the case of $f_+$, $\mathcal{P}$ is given by $12 x^2 - 9 x^4 - 4 y^2+4 t x =0$, and $\mathcal{S}$ is given by $x=0$. 
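The parabolic curves above are Hessian computations and are easily double-checked; a sympy sketch (our own verification) for the first two families:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

def hess(f):
    # vanishing locus = parabolic curve P of the graph z = f(x, y)
    return sp.expand(sp.diff(f, x, 2)*sp.diff(f, y, 2) - sp.diff(f, x, y)**2)

# Pi^p_{v,1}(+,+): Hessian = 4*(t + (6+t)*x^2 + y^2 + 6*x^4 - 3*x^2*y^2)
h1 = hess(y**2 + x**4 + x**2*y**2 + t*x**2)

# Pi^p_{c,2}: Hessian = 2*(2*y + x^2 + 20*x^3 + 6*t*x), the cubic curve above
h2 = hess(y**2 + x**2*y + sp.Rational(1, 4)*x**4 + x**5 + t*x**3)
```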
In the case of $f_{-}$, $\mathcal{P}$ is given by $12 x^2 + 9 x^4 + 4 y^2-4 t x =0$, and $\mathcal{S}$ has three branches consisting of the $y$-axis and two smooth curves having an intersection point moving along the $x$-axis. \begin{figure}[p] \centering \includegraphics[width=11.5cm]{1para_b.png} \caption{$1$-parameter bifurcations of $\Pi_{v,1}^p$, $\Pi_{c,2}^p$ and $\Pi_1^f$ (\cite{UV}, \cite{SKSO}).} \label{1para} \end{figure} \subsection{$2$-parameter bifurcations} For each of the four classes of codimension $2$ in Proposition \ref{BDE2}, we draw a new picture of the bifurcation diagram using the family in Table \ref{table2}. The bifurcation of $\mathcal{P}$ can indeed be read off from Figure 8, Figure 6 and Figure 4 in Tari \cite{Tari1}, respectively. We compute the curve $\mathcal{S}$ and add some new branches to Tari's figures. \ \noindent $\bullet \;\; (\Pi_{v,3}^p)$ Consider the family of Monge forms $$f(x,y, t,u)=y^2+ x^5 + x^3y + t x^2+u x^2y,$$ and then we have Figure \ref{Piv3} below (cf. Figure 8 in \cite{Tari1}). The parabolic curve $\mathcal{P}$ is given by the equation $p(x,y,t,u)=12 x y+ 40 x^3- 9 x^4+4 t + 4 u y - 4 u^2 x^2 - 12 u x^3 =0$. When $(t,u)=(0,0)$, $\mathcal{P}$ consists of the $y$-axis and the smooth curve $y=-\frac{10}{3} x^2 + \frac{3}{4} x^3$; hence, $\mathcal{P}$ has a node at the origin. Solving $p=p_x=p_y=0$, we see that a hyperbolic Morse bifurcation of $\mathcal{P}$ occurs at a point of type $\Pi_{v, 1}^p$ when the parameter crosses the smooth curve $t=\frac{1}{27}u^3 (10+\frac{3}{4} u)$ (no.2 and 8 in Figure \ref{Piv3}). The $A_4$-transition at which two cusps of Gauss are created/canceled appears along a smooth curve in the $ut$-plane, which is the $\Pi_{c,2}^p$-locus (no.4 and 12). 
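Both the equation of $\mathcal{P}$ and the $\Pi_{v,1}^p$-locus stated above can be confirmed symbolically (a sympy sketch; our own verification):

```python
import sympy as sp

x, y, t, u = sp.symbols('x y t u')

f = y**2 + x**5 + x**3*y + t*x**2 + u*x**2*y
p = sp.expand(sp.diff(f, x, 2)*sp.diff(f, y, 2) - sp.diff(f, x, y)**2)
claimed = 12*x*y + 40*x**3 - 9*x**4 + 4*t + 4*u*y - 4*u**2*x**2 - 12*u*x**3
# p coincides with the stated equation of the parabolic curve P

# p_y = 12*x + 4*u forces x = -u/3; then p_x = 0 gives y = -10*u**2/9, and
# p = 0 yields the stated Morse-bifurcation curve t = u^3*(10 + 3*u/4)/27
sing = {x: -u/3, y: -sp.Rational(10, 9)*u**2,
        t: sp.Rational(1, 27)*u**3*(10 + sp.Rational(3, 4)*u)}
```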
On the other hand, solving equations $\lambda=\eta\lambda=\eta\eta\lambda=0$ for $y$, $\mathcal{S}$ is expressed by two branches $$\textstyle y=-\frac{1}{2} (u^3 +7 u^2 x + 20 x^2 + 18 u x^2 + 18 x^3 \pm (u + 3 x)h(x,y,t,u))$$ where $h= \sqrt{-4 t + u^4 + 8 u^3 x + (40 u + 28 u^2) x^2 + (80 + 48 u) x^3 + 36 x^4}$. In particular, the defining equation is written as $y^2 +100 x^4 + 20 x^2 y + 18 x^3 y + u^3 y + t (u + 3 x)^2 + u^2 (-10 x^3 + 7 x y) - 6 u (5 x^4 - 3 x^2 y)=0$. If $(t,u)=(0,0)$, $\mathcal{S}$ has a $5/2$-cusp at the origin which is tangent to the $x$-axis. By $\eta\eta\eta \lambda=0$, one can find some points of $\mathcal{S}$ at which the butterfly singularity appears in the projection; such isolated butterfly points are traced in Figure \ref{Piv3}. Two dotted branches between no.12 and 13 indicate the bifurcation of class $\Pi_{4,5}^h:\;xy+x^4+y^5+\alpha xy^3+\beta x^3y+x\phi_4$ in \cite[\S 5]{SKSO}, where a butterfly point passes through a double point of $\mathcal{S}$, and a dotted curve between no.3 and 4 corresponds to the class $\Pi_{c,4}^p$ as noted in Remark \ref{rem_BDE0}, where a butterfly point passes through a degenerate cusp of Gauss. A lengthy computation shows that the butterfly point degenerates into the class $\Pi_{3,5}^h:\;xy+x^3+y^5+\alpha xy^3+x\phi_4$ with $\alpha=0$ in \cite[\S 4]{SKSO}, when the parameter $(t,u)$ lies on a smooth curve $t=\frac{1}{4}u^4+o(4)$; an elliptic Morse bifurcation of $\mathcal{S}$ appears on one half branch (no.6), and a hyperbolic Morse bifurcation appears on the other branch (no.10). \begin{figure} \centering \includegraphics[width=11.5cm]{Piv3.png} \caption{Bifurcation of $\Pi_{v,3}^p$. The diagram consists of branches named by $\Pi_{v,1}^p$ (no.2, 8), $\Pi_{c,2}^p$ (no.4, 12), $\Pi_{3,5}^h$ (no.6, 10), $\Pi_{4,5}^h$ (between no.12-13) and $\Pi_{c,4}^p$ (between no.3-4). 
} \label{Piv3} \end{figure} \ \noindent $\bullet \;\; (\Pi_{v,2}^p(\pm))$ Let $$\textstyle f(x,y, t,u)=y^2 \pm x^4 + x^2 y^3 + t x^2 + u x^2 y.$$ In entirely the same way, we have Figures \ref{Piv2+} and \ref{Piv2-} (cf. Figure 6 in \cite{Tari1}). When $(t,u)=(0,0)$, $\mathcal{P}$ is defined by $x^2 \pm \frac{1}{6}y^3+ 3 x^4 y \mp x^2 y^4 =0$, and $\mathcal{S}$ by $16 x^2 + 9 y^7\mp 66 x^2 y^4+o(7)=0$. No butterfly point occurs, since $\eta\eta\eta\lambda\not=0$. \begin{figure} \centering \includegraphics[width=8cm]{Piv2+.png} \caption{Bifurcations of $\Pi_{v,2}^p(+)$.} \label{Piv2+} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{Piv2-.png} \caption{Bifurcations of $\Pi_{v,2}^p(-)$.} \label{Piv2-} \end{figure} \ \noindent $\bullet \;\; (\Pi_{c,3}^p(\pm))$ Consider the family of Monge forms $$\textstyle f(x,y, t,u)=y^2 + x^2 y + \frac{1}{4} x^4 \pm x^4 y + t x^3 + u x^4.$$ In entirely the same way, we have Figures \ref{Pic3+} and \ref{Pic3-} (cf. Figure 4 in \cite{Tari1}). When $(t,u)=(0,0)$, $\mathcal{P}$ and $\mathcal{S}$ are given by $y=-\frac{1}{2}x^2\pm7 x^4 - 38 x^6 +o(6)$ and $y=-\frac{1}{2}x^2 \pm 7 x^4 - 138 x^6+o(6)$, respectively. No butterfly point occurs, since $\eta\eta\eta\lambda\not=0$. \begin{figure} \centering \includegraphics[width=9cm]{Pic3+.png} \caption{Bifurcations of $\Pi_{c,3}^p(+)$.} \label{Pic3+} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{Pic3-.png} \caption{Bifurcations of $\Pi_{c,3}^p(-)$.} \label{Pic3-} \end{figure} \ $\bullet \;\; (\Pi_2^f(\pm))$ This case must be new, since it is not versal as a family of BDEs. Take $$f(x,y, t,u)=xy^2+ x^4 \pm y^4 + t x^2+u x^3.$$ First, consider the case $(+)$ (the coefficient of $y^4$ is $+1$). Any tangent line is asymptotic, so we choose, as a test, the projection along the $y$-axis and its deformation: $(x,y)\mapsto (x-vy, f(x,y)-wy)$. The result is as follows; see Figure \ref{Pif2+} below. 
It is easy to find the equation of $\mathcal{P}$: $p(x,y,t,u)=6 x^3 - y^2 +t x + 3 u x^2 + 6 t y^2 + 18 u x y^2 + 36 x^2 y^2=0$. Hence, when $(t,u)=(0,0)$, $\mathcal{P}$ has an ordinary cusp at the origin. Solving $p=p_x=p_y=0$ yields an equation, $t(32t-12u^2)=0$, that defines the locus of $(t,u)$ for which $\mathcal{P}$ has a singularity at some $(x,y)$. When the parameter crosses the component $t=0$ (no.6 and 9 in Figure \ref{Pif2+}), the bifurcation of type $\Pi_1^f$ occurs, and around the curve $32t=12u^2$ (no.4 and 11) the bifurcation of type $\Pi_{v,1}^p$ appears. Solving $\lambda=\eta\lambda=\eta\eta\lambda=0$, $x,y$ can be parametrized by $v$, and then we can draw the curve $\mathcal{S}$ (we remark that $v$ may go to $\infty$, which means that the asymptotic line at such a point of $\mathcal{S}$ is close to the $x$-axis; it is then better to switch to the projection along the $x$-axis when analyzing $\mathcal{S}$). In the case $(t,u)=(0,0)$, $\mathcal{S}$ has an ordinary cusp together with two smooth components passing through the origin. The bifurcation of $\mathcal{S}$ occurs as follows: at no.3, a tacnode bifurcation (self-tangency of two branches) appears (this class is denoted by $\Pi_{4,4}^h:\;xy+x^4\pm y^4+\alpha xy^3+\beta x^3y$ in \cite{SKSO}). Between no.7 and 8 (and also between no.13 and 2), two tacnode bifurcations arise successively, and from no.12 to no.13 there appear two events of type $\Pi_{c,2}^p$, at each of which an $A_4$-transition occurs, i.e. two cusps of Gauss (at which $\mathcal{P}$ and $\mathcal{S}$ are tangent) are canceled. There must be two branches between no.7-8 (also no.13-2, no.12-13) when general coefficients of order $5$ are taken in the normal form $f$ (for the particular form above, the symmetry $y\leftrightarrow -y$ makes these two branches coincide). No butterfly point occurs for $\eta\eta\eta\lambda\not=0$ near the origin. The case $\Pi_2^f(-)$ is slightly simpler than $\Pi_2^f(+)$ (Figure \ref{Pif2-}). 
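As a cross-check on the displayed equation of $\mathcal{P}$: the parabolic curve of the graph of $f$ is the zero set of the Hessian determinant $f_{xx}f_{yy}-f_{xy}^2$, which for this family equals $4p$ identically. The following short {\sc SymPy} sketch (the normalising factor $4$ and the script are our own illustration, not part of the computation above) confirms this and the singular point of $\mathcal{P}$ at the origin for $(t,u)=(0,0)$:

```python
import sympy as sp

x, y, t, u = sp.symbols('x y t u')

# Monge form of the family Pi_2^f(+)
f = x*y**2 + x**4 + y**4 + t*x**2 + u*x**3

# Parabolic curve P: zero set of the Hessian determinant of f
hess = sp.diff(f, x, 2)*sp.diff(f, y, 2) - sp.diff(f, x, y)**2

# Equation of P quoted in the text
p = (6*x**3 - y**2 + t*x + 3*u*x**2 + 6*t*y**2
     + 18*u*x*y**2 + 36*x**2*y**2)

# hess = 4*p identically in (x, y, t, u)
assert sp.expand(hess - 4*p) == 0

# At (t,u) = (0,0) the lowest-order part of p is 6x^3 - y^2 (ordinary cusp):
# p and both first derivatives vanish at the origin
p0 = p.subs({t: 0, u: 0})
assert p0.subs({x: 0, y: 0}) == 0
assert sp.diff(p0, x).subs({x: 0, y: 0}) == 0
assert sp.diff(p0, y).subs({x: 0, y: 0}) == 0
```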
\begin{figure} \centering \includegraphics[width=11cm]{Pif2+.png} \caption{Bifurcation of $\Pi_2^f(+)$.} \label{Pif2+} \end{figure} \vspace{1cm} \begin{figure} \centering \includegraphics[width=10cm]{Pif2-.png} \caption{Bifurcation of $\Pi_2^f(-)$.} \label{Pif2-} \end{figure}
\section{Introduction} The behaviour of dynamical systems on complex networks has been studied from a variety of viewpoints over the past 30 years, and a range of tools have been developed to understand cooperative and competitive processes on the network. This is true in many application areas, but particularly in the area of neuroscience. Oscillatory network models in this area are inspired by oscillatory behaviour present at scales ranging from whole brain regions made of neuronal populations down to single cells, and the networks are fundamental to the organisation of neural systems. For example, a recent review~\cite{AshComNic16} discussed oscillatory models in neuroscience. In the context of pathological states or disease associated brain dynamics, epilepsy serves as a classical example of the importance of oscillatory network dynamics linked to the generation and propagation of epileptiform activity, as discussed in a recent review~\cite{wendling16}. The particular model we study here, given in~\cite{benj12pheno}, is based on the work of~\cite{kalitzin10}, as a network model of epileptic seizures. However, here we are concerned more with the abstract problem of the spreading of noise-induced escapes throughout an oscillatory network. In a recent paper \cite{act17domino} we considered {\em sequential noise-induced escapes} for a network of systems where there is escape from a ``shallow" equilibrium attractor to a ``deep" equilibrium attractor at each node. We extend several ideas from that paper to the case where there is bi-stability between steady and oscillatory attractors. As in \cite{act17domino}, each node when uncoupled has two attractors, a stable steady state (that can be destabilised by noise) and a more deeply stable oscillatory attractor (that is more resistant to noise). Starting with the system in the steady attractor, we say it ``escapes" when it crosses a threshold to the basin of the oscillatory attractor. 
Related work includes, for example, the study of Benayoun {\em et al.}~\cite{ben10ava}, who consider the spreading of noise-induced activity in a network of excitatory and inhibitory neurones. More generally, sequential transitions between stable/unstable attractors have been implicated in a diverse range of brain functions associated with neuronal timing, coding and integration, as well as coordination and coherence \cite{rabinovich08,rabinovich11}. The time of escape is a random variable that reflects the details of the nonlinear dynamics and the properties of the noise process. In the uncoupled case and for a memoryless escape process, the escapes will be uncorrelated and one can consider independent processes: there will be a random sequence of escapes corresponding to the order in which the nodes escape. More precisely, suppose that a number of bistable dynamical systems each have a ``quiescent'' attractor and an ``active'' attractor, such that in the presence of low amplitude noise there are noise-induced transitions from ``quiescent'' to ``active'' state (that we call ``escape" of the system) but not vice versa. Coupling of such systems can promote (or suppress) the escape of other nodes on the network. There may be critical values of the coupling, as identified in \cite{act17domino,BFG2007a,BFG2007b}, at which the qualitative nature of the escape changes; these changes are associated with bifurcations on the basin boundaries of the attractors through which typical transitions occur. In this sense one can see the sequences of escapes, and their relative timings and probabilities, as emergent properties of the network. Throughout this paper we link our work to the Eyring-Kramers escape time~\cite{eyring35activ,kramers40brown} between potential minima. 
In the classical one-dimensional case the expected escape time $T$ from a local (quadratic) minimum $x$ of a potential $V$, over the unique local (quadratic) maximum $z$, is given by \begin{equation} T \simeq \frac{2\pi}{\sqrt{V''(x)|V''(z)|}}\text{e}^{[V(z) - V(x)]/\varepsilon}. \label{eq:kramers1D} \end{equation} We also make use of the multidimensional analogue of \eqref{eq:kramers1D} that assumes minima are separated by a unique saddle~\cite{eyring35activ,kramers40brown}. The first proof of the multidimensional Eyring-Kramers law, including a definition of $\simeq$, was given in \cite{bovier04meta} using, among other things, potential theory. We also use an analysis based on \cite{berg08EK} that gives generalised Kramers' scalings near a pitchfork bifurcation on the basin boundary of the local minimum from which escape occurs. In this paper we examine in Section~\ref{sec:1node} the behaviour of a single phenomenological node considered in \cite{benj12pheno} that has bistability between steady and oscillatory attractors. We study in detail the noise-induced escapes from steady to oscillatory attractors and characterise a condition such that the escape from the steady attractor occurs more frequently than escape from the oscillatory attractor in the limit of low noise. We use standard mean escape-time theory to obtain closed-form expressions for the mean escape time from steady state, and derive upper and lower bounds in terms of the problem parameters, thus improving the asymptotic estimates presented in \cite{benj12pheno}. In Section~\ref{sec:2node} we consider two coupled identical bistable nodes of the form discussed in Section~\ref{sec:1node}. For the cases of a pair of bidirectionally coupled, unidirectionally coupled and uncoupled nodes we are able to use a potential theory analysis of the stochastically forced coupled system to explain the scalings of mean escape times as a function of coupling strength, as observed numerically by \cite{benj12pheno}. 
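The formula \eqref{eq:kramers1D} can be made concrete with a small sketch. Here we evaluate it for the illustrative double-well potential $V(x)=x^4/4-x^2/2$ (our choice for illustration only, not a potential used in this paper), which has a minimum at $x=-1$ and barrier maximum at $z=0$, so the barrier height is $1/4$:

```python
import math

# Eyring-Kramers estimate T ~ 2*pi/sqrt(V''(x)|V''(z)|) * exp([V(z)-V(x)]/eps)
# for the illustrative double well V(x) = x^4/4 - x^2/2 (our choice):
# minimum at x = -1, barrier maximum at z = 0, barrier height 1/4.

def V(x):
    return x**4 / 4 - x**2 / 2

def Vpp(x):
    return 3 * x**2 - 1

def kramers_time(xmin, zmax, eps):
    prefactor = 2 * math.pi / math.sqrt(Vpp(xmin) * abs(Vpp(zmax)))
    return prefactor * math.exp((V(zmax) - V(xmin)) / eps)

# The mean escape time grows exponentially as the noise strength eps -> 0
for eps in (0.25, 0.1, 0.05):
    print(eps, kramers_time(-1.0, 0.0, eps))
```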
In particular for strong bidirectional coupling we find (somewhat counterintuitively) that the mean escape time for one node is greatly increased by the coupling, but the mean escape time of the second node is greatly reduced: this completes the work presented in \cite{benj12pheno}, which concerns only the first escape of one of the nodes. As previously discussed in a symmetric context \cite{BFG2007a,BFG2007b}, this behaviour is due to the presence of bifurcations in the basin boundary of the steady attractor that correspond to synchronisation, though here the phase dynamics of the coupled oscillations adds an extra complication. We extend this to some simple networks in Section~\ref{sec:master}. In this context we introduce a master equation approach to the problem of sequential escapes in such a bistable network. For bidirectionally coupled networks of identical bistable nodes, this approach gives a good abstract representation of the sequential escape process as long as the coupling is sufficiently weak. For the system considered, this description breaks down via a bifurcation process that occurs when the coupling strength reaches a critical value. Finally, we briefly discuss some open problems and extensions of this work in Section~\ref{sec:discuss}. \section{Single node escape times} \label{sec:1node} The phenomenological network model for seizure onset studied in~\cite{benj12pheno} considers idealised nodes that can be stable in either a steady or an oscillatory state. This is probably the simplest planar system that gives coexistence of steady and oscillatory attractors. In \cite{benj12pheno,kalitzin10} the motivation was to regard this as a representation of the brain activity measured by electroencephalogram (EEG) that may be in a healthy (non-oscillating) or seizure (oscillating) state. 
We consider the complex-valued noise-driven system \begin{equation} \mathrm{d} z(t) = f(z)\mathrm{d} t +\alpha \mathrm{d} W(t) \label{eq:ben1} \end{equation} where $W(t)=u+i v$ is a complex Wiener process ($u$ and $v$ are real independent Wiener processes) with noise amplitude $\alpha>0$, and \begin{equation} f(z) = (-\nu +i\omega)z +2z|z|^2 - z|z|^4, \label{eq:benf} \end{equation} where the sign of $\nu$ is chosen so that the origin is stable for $\nu>0$, consistent with the radial dynamics below; \eqref{eq:ben1} can be thought of as a noise-driven truncated normal form of a Bautin bifurcation: \begin{equation} \dot{z}=f(z). \label{eq:bennonoise} \end{equation} For $\nu<0$ the only attractor of (\ref{eq:bennonoise}) is a stable periodic orbit surrounding an unstable equilibrium at the origin. This equilibrium becomes stable in a subcritical Hopf bifurcation at $\nu=0$ and in the regime $0<\nu<1$ the system exhibits bistability with an attracting fixed point and an attracting limit cycle separated by an unstable limit cycle: the stable and unstable periodic orbits meet in a saddle-node bifurcation at $\nu=1$. The parameter $\omega$ controls the frequency of the oscillations and here we fix $\omega=20$ as in~\cite{benj12pheno}. Figure~\ref{fig:1nodedyn} summarises the dynamics of \eqref{eq:ben1}, where one realisation of \eqref{eq:ben1} for $\alpha=0.2$ and $\nu=0.5$ is shown in the phase space of \eqref{eq:benf} in panel (a). The time series of the realisation is shown in panel (b). Panel (c) summarises the bifurcation diagram of~\eqref{eq:ben1}. \begin{figure}[t] \includegraphics[width=\textwidth]{ckaa_netescs_fig1.pdf} \caption{The single node noise-driven dynamics of \eqref{eq:ben1}. Panels (a) and (b) show one realisation in green for $\nu=0.5$, $\alpha=0.2$ and $\omega=20$ in the phase space of \eqref{eq:benf} containing a stable equilibrium at the origin, an unstable limit cycle (dotted line) and stable limit cycle (solid line). The bifurcation diagram of \eqref{eq:benf} is shown in panel (c), with the Hopf (HB) and saddle-node (SN) bifurcations marked. 
The dashed lines in panel (c) indicate the unstable equilibrium at the origin for $\nu<0$ and the unstable limit cycle for $0<\nu<1$. We show in Section~2 that for $\nu<3/4$ the limit cycle is more stable than the equilibrium, in the sense that the potential for the radial dynamics is lower. } \label{fig:1nodedyn} \end{figure} In the presence of noise of amplitude $\alpha>0$, both steady and oscillatory attractors of (\ref{eq:ben1}) show stochastic fluctuations and there are occasional transitions between these two metastable states, driven by occasional large fluctuations in the noise. For low noise amplitude, (\ref{eq:ben1}) shows similar behaviour to the underlying deterministic system, whereas for large noise the dynamics are dominated by large stochastic fluctuations. The realisation shown in Figure~\ref{fig:1nodedyn} is computed in {\sc{Matlab}} using the Heun method for stochastic differential equations~\cite{kloeden03num} with the initial condition at the origin and step size $h=10^{-5}$. The trajectory spends some time near the origin but the stochastic fluctuations eventually drive it past the basin boundary represented by the unstable limit cycle and it is then attracted to the stable limit cycle. The time series shows this transition from small, noise-dominated fluctuations to an oscillatory regime. In the presence of noise, we define, analogously to \cite{act17domino}, the escape time $\tau$ of the node to be the time at which the trajectory switches from being close to the origin (quiescent) to being close to the stable limit cycle (active). More precisely, if the noise-free system has a stable equilibrium at $|z|=0$ and a stable limit cycle at $|z|=R_{\max}$ separated by an unstable limit cycle at $0<|z|=R_c<R_{\max}$, then the escape time for a given threshold $\xi\in(R_c,R_{\max})$ is $$ \tau = \inf\{t>0:|z(t)|>\xi~\mbox{given}~z(0)=0\}. 
$$ Note that $\tau$ is a random variable that depends on the noise realisation and reflects the influence of the noise on the nonlinear dynamics. For small enough noise the escape time has a cumulative distribution $Q(t) = \mathbb{P} \{ \tau<t\}$ with an exponential tail \cite{berg13kramers} and the mean escape time $T$ from the steady to the oscillatory attractor is \begin{equation} T=\mathbb{E}(\tau) = \int_{t=0}^{\infty} t \frac{d}{dt}Q(t)\,dt= \int_{t=0}^{\infty} [1-Q(t)]\,dt. \label{eq:meanesc} \end{equation} This is what we aim to quantify in the following section. \subsection{Mean escape times for a single node} \label{sec:1nodemean} To determine the mean escape time, we transform~\eqref{eq:ben1} into polar coordinates given by $z(t)=R(t)\exp[\imath \theta(t)]$ with $R(t)\geq 0$ and $\theta(t)$ considered modulo $2\pi$. This gives \begin{align} \mathrm{d} R & = \Bigg[ - \nu R + 2R^3 - R^5 + \frac{\alpha^2}{2R} \Bigg] \mathrm{d} t +\alpha \mathrm{d} W_R \label{eq:benr}\\ \mathrm{d} \theta &= \omega \mathrm{d} t + \frac{\alpha}{R} \mathrm{d} W_\theta \label{eq:benth} \end{align} where $W_R$ and $W_\theta$ are independent standard Wiener processes. The $\alpha^2/(2R)$ drift term in \eqref{eq:benr} arises from It\^{o}'s Lemma; see for example \cite{daffer98,gard83hand}. As the $R$ equation is independent of $\theta$ we can consider the problem of escape to oscillatory behaviour as a one-dimensional potential (well/energy) problem for $R(t)$ \begin{equation*} {\mathrm{d} R} = -\frac{\partial V}{\partial R} \mathrm{d} t+\alpha \mathrm{d} W_R, \end{equation*} where the potential function, $V$, is given by \begin{equation} V:=\frac{\nu R^2}{2} - \frac{R^4}{2} + \frac{R^6}{6}-\frac{\alpha^2}{2} \ln{R}. \label{eq:Vnontrunc} \end{equation} The maxima and minima of $V$ correspond to the equilibrium and limit cycles of the full system. 
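As a quick consistency check (ours), the radial drift in \eqref{eq:benr}, including the It\^{o}-correction term, is exactly the negative gradient of the potential \eqref{eq:Vnontrunc}; this can be verified symbolically:

```python
import sympy as sp

R, nu, alpha = sp.symbols('R nu alpha', positive=True)

# Potential (eq. Vnontrunc), including the Ito-correction term -(alpha^2/2) ln R
V = nu*R**2/2 - R**4/2 + R**6/6 - alpha**2/2*sp.log(R)

# Radial drift of (eq. benr)
drift = -nu*R + 2*R**3 - R**5 + alpha**2/(2*R)

# drift = -dV/dR identically
assert sp.simplify(drift + sp.diff(V, R)) == 0
```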
Note that the potential depends on $\alpha$; for $\alpha=0.3$ and $\nu=0.5$, $\frac{\partial V}{\partial R}=0$ at exactly one point, the potential well corresponding to the stable limit cycle. However, as the noise amplitude increases the potential barrier decreases and disappears, and so the escape time of the node decreases. More precisely, one can determine the bifurcation behaviour of the ODE \begin{equation} -\frac{\mathrm{d} R}{\mathrm{d} t} = \frac{\mathrm{d} V}{\mathrm{d} R} =- \frac{\alpha^2}{2R} + \nu R - 2R^3 + R^5 \label{eq:delalpha} \end{equation} in the region $R>0$ with $V$ as in \eqref{eq:Vnontrunc} as a function of $\alpha>0$ and $\nu>0$. One can verify (eliminating $R$ from the conditions $V'(R)=V''(R)=0$) that there are saddle-node bifurcations for this system at \begin{equation} \nu^3-\nu^2-\frac{9}{2}\nu \alpha^2+\frac{27}{16}\alpha^4+4\alpha^2=0 \label{eq:saddlenodecurve} \end{equation} and that this curve has a cusp point (where $V'(R)=V''(R)=V'''(R)=0$) at $(\nu,\alpha)=(4/3,4\sqrt{3}/9)$. Hence one can verify the existence of three equilibria for $R>0$ in a region near $\alpha=\nu=0$ bounded by $0<\alpha<\nu/2+O(\nu^2)$. Within the bounded region of $(\alpha,\nu)$ given by (\ref{eq:saddlenodecurve}) there are three distinct equilibria of \eqref{eq:delalpha} at parameter-dependent locations that we denote $$ 0<R_{\min}<R_{c}<R_{\max}. $$ The equilibria $R_{\min}$ and $R_{\max}$ are attractors corresponding to minima of $V$ while $R_{c}$ is a local maximum (unstable) that forms a {\em gate} (boundary) between the basins of the two attractors in the terminology of \cite{berg08EK}. Note that for $\alpha=0$ and $0<\nu<1$ the three equilibria are at $R_{\min}=0$, $R_{c}=R_c^0:=\sqrt{1-\sqrt{1-\nu}}$ and $R_{\max}=\sqrt{1+\sqrt{1-\nu}}$. 
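These claims are easy to check computationally. Multiplying $V'(R)=0$ by $2R$ and writing $s=R^2$ gives the cubic $2s^3-4s^2+2\nu s-\alpha^2=0$, whose positive roots are the squared equilibrium radii; the sketch below (parameter values are our own choice) verifies exactly that the cusp point lies on \eqref{eq:saddlenodecurve} and that there are three equilibria inside the bounded region:

```python
import numpy as np
from fractions import Fraction

# Cusp point (nu, alpha) = (4/3, 4*sqrt(3)/9), so alpha^2 = 16/27 exactly;
# check it satisfies the saddle-node curve (eq. saddlenodecurve) in exact arithmetic
nu, a2 = Fraction(4, 3), Fraction(16, 27)
disc = nu**3 - nu**2 - Fraction(9, 2)*nu*a2 + Fraction(27, 16)*a2**2 + 4*a2
assert disc == 0

# For nu = 0.5, alpha = 0.05 (inside the region 0 < alpha < nu/2 + O(nu^2)):
# V'(R) = 0 iff 2 s^3 - 4 s^2 + 2 nu s - alpha^2 = 0 with s = R^2 > 0
nu, alpha = 0.5, 0.05
roots = np.roots([2.0, -4.0, 2*nu, -alpha**2])
pos = sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
assert len(pos) == 3          # squared radii R_min^2 < R_c^2 < R_max^2
print([float(np.sqrt(s)) for s in pos])
```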
\begin{figure} \centering \includegraphics[width=\textwidth]{ckaa_netescs_fig2.pdf} \caption{The effect of varying $\nu$ and $\alpha$ on the potential function $V$ for $R>0$. The curves $V$ (a) and $\frac{\partial V}{\partial R}$ (b) for $\nu=0.5$ and different values of $\alpha$. In panel (a) the deeper well corresponds to the stable limit cycle and the peak to the unstable limit cycle; note the asymptote at $R=0$ for $\alpha>0$. Each extremum in panel (a) corresponds to a zero of $\frac{\partial V}{\partial R}$ in panel (b). The saddle-node bifurcation curve is shown in the $(\nu,\alpha)$-plane with cusp point marked $+$ in panel (c). The parameter points marked as coloured dots in (c) for $\nu=0.5$ correspond to the curves in panels (a) and (b); see also the legend. The line $\nu=2\alpha$ is also marked and for $\nu<1$ this is very close to the bifurcation curve. For $\nu>2\alpha$ there is a well of $V$ near the origin and $\frac{\partial V}{\partial R}=0$ for three values of $R>0$; for $\nu<2\alpha$ there is no well near the origin and $\frac{\partial V}{\partial R}=0$ only once in $R>0$. } \label{fig:1nodepot} \end{figure} Figure~\ref{fig:1nodepot} shows the potential $V$ and its derivative $\frac{\partial V}{\partial R}$ for different values of $\alpha$ along with the saddle-node bifurcation curve in the $(\nu,\alpha)$-plane. Since $V$ is symmetric under reflection at $R=0$, where the $\ln R$ term creates an asymptote for $\alpha>0$, we plot $V$ and $\frac{\partial V}{\partial R}$ for $R>0$ only. The zeros of $\frac{\partial V}{\partial R}$ in panel (b) correspond to the extrema of $V$ and the equilibria $R_c^0$ and $R_{\max}^0$ are marked. Panel (c) shows the cusp point of the saddle-node bifurcation curve in the $(\nu,\alpha)$-plane with the line $\nu=2\alpha$. The dots mark the parameter values of the curves in panels (a) and (b); note that $\nu=0.5, \alpha=0.3$ is not within the bounded region and the corresponding curve in panel (b) only has one zero. 
Figure~\ref{fig:1nodepot} shows that for $\alpha>0$, whenever $R_c$ exists it satisfies $R_{c}<R_{c}^0$. We choose a threshold $\xi$ such that $$ R_c<\xi<R_{\max}. $$ Although the leading order escape times are independent of $\xi$, for comparability to \cite{benj12pheno} we take $\xi=R_c^0$ in this section. In later sections we take a fixed value of $\xi=0.5$. The mean escape time $T(\nu,\alpha)$ from (\ref{eq:meanesc}) can be found by considering solutions $R(t)$ of the SDE (\ref{eq:benr}) and defining the mean first escape time $$ W_\xi(R_0) := \mathbb{E} ( \inf\{t>0~:~R(t)\geq \xi,~\mbox{given}~R(0)=R_0\}). $$ This mean escape time $W_{\xi}(R)$ satisfies a Poisson problem \begin{equation} \frac{\alpha^2}{2} \frac{\mathrm{d} ^2}{\mathrm{d} R^2} W_{\xi}(R) - V'(R)\frac{\mathrm{d} }{\mathrm{d} R} W_{\xi}(R) =-1, \ \ \lim_{R\rightarrow 0+} |W_{\xi}(R)|<\infty, \ \ W_{\xi}(\xi)=0. \label{eq:pois} \end{equation} If we define $u(R)=\frac{\mathrm{d} W_{\xi}}{\mathrm{d} R}$ then \eqref{eq:pois} can be simplified using an integrating factor $\exp(\frac{-2V}{\alpha^2})$. The boundary conditions imply that $W_{\xi}(\xi) = 0$ and $\frac{\mathrm{d} W_{\xi}}{\mathrm{d} R}\exp (\frac{-2V}{\alpha^2}) \to 0$ as $R \to 0$. Integrating and applying the boundary conditions gives \begin{equation} W_{\xi}(R)= \frac{2}{\alpha^2} \int_{x=R}^{\xi} \int^x_{y=0} \exp\bigg[\frac{2(V(x)-V(y))}{\alpha^2}\bigg] \mathrm{d} y\, \mathrm{d} x. \label{eq:WarVs} \end{equation} Substituting the expression for $V$ from \eqref{eq:Vnontrunc} gives \begin{align*} W_{\xi}(R) &= \frac{2}{\alpha^2} \int_{x=R}^{\xi} \int^x_{y=0} \exp\bigg(\frac{2}{\alpha^2}\bigg[ \frac{\alpha^2}{2}( \ln{y}- \ln{x} ) + \frac{\nu( x^2-y^2)}{2}+ \frac{y^4-x^4}{2}+\frac{x^6-y^6}{6} \bigg]\bigg) \mathrm{d} y\, \mathrm{d} x. 
\end{align*} Therefore, as $T(\nu, \alpha) = W_{\xi}(0)$ we have \begin{equation} T(\nu, \alpha) = \frac{2}{\alpha^2} \int_{x=0}^{\xi} \int^x_{y=0} \frac{y}{x} \exp\bigg(\frac{\nu(x^2-y^2)+(y^4-x^4)+(x^6-y^6)/3}{\alpha^2}\bigg) \mathrm{d} y\, \mathrm{d} x. \label{eq:meanT} \end{equation} Kramers' formula \cite{berg13kramers} uses Laplace's method to give an asymptotic expression for \eqref{eq:WarVs}: \begin{equation} T_{K}(\nu,\alpha) = \frac{2\pi}{\sqrt{|V''(R_c)|V''(R_{\min})}} \exp \left[\frac{2(V(R_c)-V(R_{\min}))}{\alpha^2} \right] \end{equation} as $\alpha\rightarrow 0$. Moreover, one can obtain upper and lower bounds on (\ref{eq:meanT}) that are valid for general $0<\nu<1$ and $\alpha>0$ (cf. \cite{ashwin16quant}). We write \begin{equation} p=x^2-y^2,~~q=x^2+y^2,~~\Rightarrow~~ \frac{\partial(p,q)}{\partial(x,y)}= \left|\begin{array}{rr} 2x & -2y \\ 2x & 2y \end{array}\right|=8 xy. \end{equation} The triangle defined by $(x,y)$ such that $0<y<x<\xi$ transforms to $0<q<2\xi^2$, $0<p<\min(q,2\xi^2-q)$, so we have \begin{equation} T(\nu, \alpha) = \frac{1}{\alpha^2} \int_{q=0}^{2\xi^2} \int_{p=0}^{\min(q,2\xi^2-q)} \frac{1}{2(p+q)} \exp\bigg(\frac{p(\nu-q+p^2/12+q^2/4)}{\alpha^2}\bigg) \mathrm{d} p\, \mathrm{d} q. 
\label{eq:T} \end{equation} Noting that $0<p<q$ in the region of integration implies $q<p+q<2q$, and noting in addition that $q^2/3>p^2/12+q^2/4>q^2/4$ in this region, we obtain the following estimates for the integrand of (\ref{eq:T}): \begin{align*} \frac{1}{2q}\exp\bigg(\frac{p(\nu-q+q^2/3)}{\alpha^2}\bigg) &>\frac{1}{2(p+q)}\exp\bigg(\frac{p(\nu-q+p^2/12+q^2/4)}{\alpha^2}\bigg)\\ \frac{1}{4q}\exp\bigg(\frac{p(\nu-q+q^2/4)}{\alpha^2}\bigg)&< \frac{1}{2(p+q)}\exp\bigg(\frac{p(\nu-q+p^2/12+q^2/4)}{\alpha^2}\bigg) \end{align*} Hence, we have lower and upper bounds $T_l(\nu,\alpha)<T(\nu,\alpha)<T_u(\nu,\alpha)$ given by: \begin{align} T_l(\nu, \alpha) & = \int_{q=0}^{\xi^2} \int_{p=0}^{q} \frac{1}{4q\alpha^2} \exp\bigg(\frac{p(\nu-q+q^2/4)}{\alpha^2}\bigg) \mathrm{d} p\, \mathrm{d} q,\\ T_u(\nu, \alpha) &= \int_{q=0}^{2\xi^2} \int_{p=0}^{q} \frac{1}{2q\alpha^2} \exp\bigg(\frac{p(\nu-q+q^2/3)}{\alpha^2}\bigg) \mathrm{d} p\, \mathrm{d} q. \end{align} The inner integrals can be evaluated to give \begin{align} T_l(\nu, \alpha) & = \int_{q=0}^{\xi^2} \frac{e^{\frac{q(\nu-q+q^2/4)}{\alpha^2}}-1}{4q(\nu-q+q^2/4)} \mathrm{d} q, \label{eq:Tl} \\ T_u(\nu, \alpha) &= \int_{q=0}^{2\xi^2} \frac{e^{\frac{q(\nu-q+q^2/3)}{\alpha^2}}-1}{2q(\nu-q+q^2/3)} \mathrm{d} q.\label{eq:Tu} \end{align} Moreover, note that Laplace's method can be used to get an asymptotic estimate for these bounds in the case of fixed $\nu$ and $\alpha\rightarrow 0$. Define \begin{align*} f_l(q)=(4q(\nu-q+q^2/4))^{-1},~~&f_u(q)=(2q(\nu-q+q^2/3))^{-1},\\ g_l(q)=q(-\nu+q-q^2/4),~~&g_u(q)=q(-\nu+q-q^2/3). \end{align*} One can verify that for a fixed $\nu \in (0,1)$, the functions $g_l(q)$ and $g_u(q)$ have unique minima on $[0,\xi^2]$ and $[0,2\xi^2]$ at $c_l=(4-2\sqrt{4-3\nu})/3$ and $c_u=1-\sqrt{1-\nu}$, respectively. It is also easy to verify that $f_l(c_l) \neq 0$ and $f_u(c_u) \neq 0$. Thus, to leading order as $\alpha \to 0$, 
\begin{align*} T_l(\nu, \alpha) &\sim f_l(c_l)\sqrt{\frac{2\pi\alpha^2}{|g_l^{\prime \prime}(c_l)|}}\exp \left[-\frac{g_l(c_l)}{\alpha^2}\right], \\ T_u(\nu, \alpha) &\sim f_u(c_u)\sqrt{\frac{2\pi\alpha^2}{|g_u^{\prime \prime}(c_u)|}}\exp\left[-\frac{g_u(c_u)}{\alpha^2}\right]. \end{align*} The functions derived here show the direct connection between our analytic results for the escape time of one node given by~\eqref{eq:benr} and the classic one-dimensional Kramers' escape time~\eqref{eq:kramers1D}, as well as the dependence on the excitability $\nu$ and the noise amplitude $\alpha$. \Fref{fig:1nodeaTnT} shows the integral $T(\nu,\alpha)$ numerically estimated using {\sc Maple} from \eqref{eq:T} plotted against $\alpha$ for different values of $\nu$ in panel (a), and plotted against $\nu$ for different values of $\alpha$ in panel (b); compare with Figure~5 of~\cite{benj12pheno}. Panel (a) shows the lower and upper bounds $T_l(\nu,\alpha)$ and $T_u(\nu,\alpha)$, also numerically estimated using {\sc Maple} from~\eqref{eq:Tl} and~\eqref{eq:Tu} respectively. The approximation of $T(\nu,\alpha)$ from~\cite{benj12pheno} is shown for comparison for each $\nu$ value. In panel (b) a cross is marked on the curve $\alpha=0.05$ at $\nu=0.2$; we use these values in subsequent sections. The panels also show points that are numerical approximations of the mean escape time, computed in {\sc Matlab} using the Heun method. For each point, two hundred realisations of~\eqref{eq:ben1} were computed with step size $h=10^{-2}$. As the radial dynamics \eqref{eq:benr} are independent of $\omega$, we fix $\omega=0$ in our computations. The figure uses threshold $\xi=R_c^0$, as in \cite{benj12pheno}, which corresponds to the amplitude of the unstable periodic orbit of \eqref{eq:benf} for $\alpha=0$. Comparing to the approximation of $T$ from~\cite{benj12pheno} we find reliable bounds and good agreement with the numerics over a large range of $(\alpha,\nu)$. \begin{figure}[ht!] 
\centering \includegraphics[width=\textwidth]{ckaa_netescs_fig3.pdf} \caption{Numerical approximations of the mean escape time $T(\nu,\alpha)$ from $R=0$ to threshold $\xi=R_c^0$ (solid) from (\ref{eq:T}), along with mean escape times from direct numerical simulations (dots). Panel (a) shows the upper bound $T_u(\nu,\alpha)$ and lower bound $T_l(\nu,\alpha)$ computed from (\ref{eq:Tu}) and (\ref{eq:Tl}) respectively (dashed). The dotted curves show the estimate of $T(\nu,\alpha)$ from Benjamin {\em et al.}~{\cite[Figure 5]{benj12pheno}}. The cross in panel (b) is on the curve $\alpha=0.05$ at $\nu=0.2$ and corresponds to $T(0.2,0.05)=121.64$.} \label{fig:1nodeaTnT} \end{figure} \section{Sequential escape times for coupled bistable nodes} \label{sec:2node} We now consider $N$ identical bistable nodes of the type \eqref{eq:ben1} discussed in Section~\ref{sec:1node}, coupled linearly as in~\cite{benj12pheno}: \begin{equation} \mathrm{d} z_i(t) =\left[ f(z_i) + \beta \sum_{j\neq i} A_{ji}(z_j - z_i) \right]\mathrm{d} t +\alpha\, \mathrm{d} W_i (t), \label{eq:neteq} \end{equation} for $i=1,...,N$, where $f$ is defined in \eqref{eq:benf} and depends on $\nu$. This generalises the setting of \cite{act17domino} to a case of bistable nodes where one of the attractors is periodic. For this network, $A_{ji}\in\{0,1\}$ are the entries of the adjacency matrix and $\beta\geq 0$ is the coupling strength: we assume that $A_{ii}=0$ for all $i$. We fix $0<\nu<1$ and $\nu>2\alpha$ to ensure that each individual node is in the bistable regime with attracting equilibria for the radial dynamics (\ref{eq:benr}) at $R_{\min}$ and $R_{\max}$. For sequential escapes, we will assume the parameter regime is such that the rate of return from $R_{\max}$ to $R_{\min}$ is very small relative to the rate of escape from $R_{\min}$ to $R_{\max}$. 
This can be quantified in terms of the potential (\ref{eq:Vnontrunc}): for $\alpha=0$ and $0<\nu<1$ we have $$ V(R_{\min})=0,~~V(R_{\max})=\frac{\nu}{2}-\frac{1}{3}\left(1+(1-\nu)^{\frac{3}{2}}\right). $$ One can verify for this case that there are two attractors and \begin{equation} V(R_{\max})<V(R_{\min}) \label{eq:marginalzero} \end{equation} if and only if $0<\nu<3/4$. Moreover, for $0<\nu<3/4$ fixed and increasing $\alpha$, (\ref{eq:marginalzero}) is maintained as long as there are still three roots: the $-(\alpha^2/2) \ln R$ dependence means that $V(R_{\min})$ increases while $V(R_{\max})$ decreases with $\alpha$. If all nodes start in the quiescent state, it is natural to ask how the coupling affects the sequence of escape times of the nodes in the network \cite{act17domino}. We discuss the general set-up in the next section and then focus on the example of two coupled nodes. \subsection{Statistics of sequential escapes} \label{sec:seqstats} We fix a threshold $\xi>0$ for all nodes and consider the first escape time for the $i$th node $$ \tau^{(i)} := \inf\{t>0~:~ |z_i(t)|\geq \xi~\mbox{ given }~ z_i(0)=0\} $$ from the quiescent state, assuming that all nodes start at $z_k=0$ ($k=1,\ldots,N$) at time $t=0$. The distribution of the random variable $\tau^{(i)}$ is affected by the noise realisation and the chosen threshold $\xi$, as well as the behaviour of other nodes in the network via the coupling structure $A_{ij}$ and strength $\beta$. We choose $\xi$ such that the region $|z_i|<\xi$ contains the whole basin of attraction of the trivial solution $z=0$ in the limit $\alpha\rightarrow 0$. Note that the coupling deforms the basin of attraction and so it may be necessary to choose $\xi$ somewhat greater than $R_c$, depending on $\beta$. For a fixed threshold $\xi$ and initial condition, the independence of the noise paths means that, with probability one, no two escapes will be at precisely the same time. 
Therefore, there will be a sequential ordering of nodes corresponding to the order in which they escape. Using the notation of~\cite{act17domino}, there is a permutation $s(k)$ of $\{1,\ldots,N\}$ such that \begin{equation} 0<\tau^{(s(1))}<\tau^{(s(2))}<\cdots<\tau^{(s(N))} \label{eq:defines} \end{equation} where the times $\tau^{(s(k))}$ and the permutation $s(k)$ are random variables that depend on the realisation of the noise. We also define $$ \tau^{i}:=\tau^{(s(i))} $$ which can be thought of as the time until the $i$th escape, and we write $\tau^{0}=0$. For any integers $0\leq \ell<k\leq N$ \begin{equation} \tau_N^{k|\ell}:=\tau^{k}-\tau^{\ell} \label{eq:fpt} \end{equation} is the first passage time between the $\ell$th and $k$th escapes. Although \cite{act17domino} considers both the timing and sequence of escapes, in this paper we concentrate primarily on $\tau^{k|\ell}_N$. Sequential escape can be understood in terms of this set of first passage times, whose distributions have exponential tails and are therefore governed by Kramers-type rates. In these cases, the essential information about the sequential escapes is given by the mean first passage time between escapes $\ell$ and $k$, that is, the expectation of $\tau_N^{k|\ell}$: \begin{equation} T_N^{k|\ell} := \mathbb{E}\left(\tau_N^{k|\ell}\right)=\int_{t=0}^{\infty} \left[1-Q_N^{k|\ell}(t)\right]\, \mathrm{d} t, \label{eq:mfpt} \end{equation} where \begin{equation} Q_N^{k|\ell}(t) = \mathbb{P}(\tau_N^{k|\ell} \leq t) \label{eq:cdfpt} \end{equation} is the cumulative distribution of first passage times from $\ell$ to $k$. Note that if $k>\ell>n$ then $\tau_N^{k|n}=\tau_N^{k|\ell}+\tau_N^{\ell|n}$, and so taking expectations we have \begin{equation} T_N^{k|n}=T_N^{k|\ell}+T_N^{\ell|n}. \end{equation} Section~\ref{sec:master} returns to this general case in more detail, while for the rest of this section we consider specific examples of sequential escapes for \eqref{eq:neteq} in the case $N=2$. 
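In the uncoupled, memoryless idealisation each node escapes at an independent exponential time, and the statistics of \eqref{eq:fpt} are then explicit: for $N=2$ i.i.d. rates $\lambda$ one has $T_2^{1|0}=1/(2\lambda)$ and, by the memoryless property, $T_2^{2|1}=1/\lambda$, with the additivity $T_2^{2|0}=T_2^{1|0}+T_2^{2|1}$. A Monte Carlo sketch of this (the exponential clocks are our idealisation of the uncoupled limit, not the SDE itself):

```python
import random

random.seed(1)
lam = 1.0           # escape rate of each uncoupled node
n_samples = 200_000

t10 = t20 = 0.0     # accumulate first and second escape times
for _ in range(n_samples):
    a = random.expovariate(lam)
    b = random.expovariate(lam)
    t10 += min(a, b)      # time of the first escape
    t20 += max(a, b)      # time of the second escape

T10, T20 = t10 / n_samples, t20 / n_samples
T21 = T20 - T10
# Theory: T10 = 1/(2 lam) = 0.5 and T21 = 1/lam = 1.0 (memoryless restart)
print(T10, T21)
```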
We consider three coupling scenarios for \eqref{eq:neteq} with $N=2$: disconnected ($\beta=0$ or $A_{12}=A_{21}=0$); unidirectional ($A_{12}=1$ but $A_{21}=0$); and bidirectional ($A_{12}=A_{21}=1$). The study~\cite{benj12pheno} investigates the influence of $\beta$ on the mean first passage time until the first node has escaped, i.e. $T_2^{1|0}$, but clearly $T_2^{2|1}$ is also of interest. The paper \cite[Figure~6(b)]{benj12pheno} shows a number of limiting behaviours that we aim to explain here using the potential function for the coupled system. Here, we focus mainly on the case of two bidirectionally coupled nodes with brief comparison to the unidirectionally coupled and uncoupled cases. \subsection{Two bidirectionally coupled nodes} \label{sec:2binet} Writing system~\eqref{eq:neteq} for $N=2$ with bidirectional coupling ($A_{12}=A_{21}=1$) in polar coordinates $z_i(t) = R_i(t)\exp[\imath \theta_i(t)]$ we have \begin{align} \mathrm{d} R_i &= \bigg[-(\nu+\beta)R_i + 2R_i^3 - R_i^5 + \beta R_k\cos(\phi) + \frac{\alpha^2}{2R_i} \bigg]\mathrm{d} t +\alpha\,\mathrm{d} W_{R_i} , \label{eq:2r}\\ \mathrm{d} \phi & = -\beta \bigg( \frac{R_k}{R_i} + \frac{R_i}{R_k} \bigg)\sin{\phi} \,\mathrm{d} t + \alpha \bigg( \frac{1}{R_i}\mathrm{d} W_{\theta_i} - \frac{1}{R_k}\mathrm{d} W_{\theta_k} \bigg) . \label{eq:2phi} \end{align} where $\phi = \theta_i-\theta_k$ is the phase difference between the two nodes and $W_{R_i}$, $W_{\theta_i}$ and $W_{\theta_k}$ are independent Wiener processes, for $i,k\in\{1,2\}$ with $k\neq i$. The subsystem~\eqref{eq:2r} can be written as a noise-driven potential system \begin{equation} \mathrm{d} R_1 = - \frac{\partial V}{\partial R_1}\,\mathrm{d} t + \alpha\,\mathrm{d} W_{R_1},~~~~ \mathrm{d} R_2 = - \frac{\partial V}{\partial R_2}\,\mathrm{d} t + \alpha\,\mathrm{d} W_{R_2}, \label{eq:2rpot} \end{equation} for the potential \begin{equation} V=\frac{1}{2} \bigg[ \frac{R_1^6+R_2^6}{3} - (R_1^4 + R_2^4) + (\nu+\beta)(R_1^2 + R_2^2) - \alpha^2\ln(R_1R_2)\bigg] - \beta R_1R_2 \cos \phi. 
\label{eq:2biV} \end{equation} The phase difference $\phi$ governed by (\ref{eq:2phi}) has, for $\beta>0$ and the low noise limit $\alpha\rightarrow 0$, a stable fixed point at $\phi=0$ and an unstable fixed point at $\phi=\pi$ if the $R_{i}$ are bounded away from zero. All local minima of the potential \eqref{eq:2biV} will have $\phi=0$, as will all saddles that act as gates between basins of attraction. Hence we restrict to $\phi=0$ from here on and, analogously to \cite{act17domino}, we perform a bifurcation analysis of the noise-free version of (\ref{eq:2r}) (retaining the noise-induced drift term $\alpha^2/(2R_i)$), namely \begin{align} \frac{\mathrm{d}}{\mathrm{d} t} R_i &= -(\nu+\beta)R_i + 2R_i^3 - R_i^5 + \beta R_k + \frac{\alpha^2}{2R_i} , \label{eq:2rnonoise} \end{align} for $\nu=0.2$ and $\alpha=0.05$. \Fref{fig:2bibif} shows the bifurcation diagram of $\beta$ against $R_1$, analogously to \cite{act17domino}. Panels (b)--(d) show the $(R_1,R_2)$-plane with the equilibria of \eqref{eq:2rnonoise} and potential contours of \eqref{eq:2biV} for $\phi=0$. Each panel (b)--(d) also shows a typical realisation of~\eqref{eq:neteq} computed in {\sc{Matlab}} using the Heun method with initial point $z_1=z_2=0$. The bifurcation diagram of (\ref{eq:2rnonoise}) depicted in \Fref{fig:2bibif}(a) is computed in {\sc Auto}~\cite{AutoOrig} and shows two symmetric, simultaneous saddle-node (SN) bifurcations at $\beta_{\rm{SN}}=0.0154297$, the second of which is difficult to discern as three equilibria (the saddle and sink involved in the SN and the sink of the deepest well) have very close values in $R_1$. There is a pitchfork bifurcation at $\beta_{\rm{PF}}=0.164917$. These bifurcations split the diagram into three regimes: \begin{itemize} \item $0<\beta<\beta_{\rm{SN}}$ has nine equilibria: one source, four sinks and four saddles. \item $\beta_{\rm{SN}}<\beta<\beta_{\rm{PF}}$ has five equilibria: one source, two saddles and two sinks. \item $\beta>\beta_{\rm{PF}}$ has three equilibria: two sinks and one saddle.
\end{itemize} As in \cite{act17domino} the first regime corresponds to {\em weak coupling}, the second to {\em intermediate coupling} and the third to {\em strong coupling} (see also \cite{BFG2007a,BFG2007b}). The remainder of this paper examines the influence of these regimes on the escape times. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{ckaa_netescs_fig4.pdf} \caption{Bifurcation diagram and corresponding phase portraits for $\nu=0.2$ and $\alpha=0.05$. Panel (a) shows the bifurcation diagram of $R_1$ against $\beta$ for \eqref{eq:2rnonoise}. There are three regimes separated by two simultaneous saddle-node (SN) bifurcations and a pitchfork bifurcation (PF). Panels (b)--(d) show typical realisations plotted with contour lines of potential $V$ for $\phi=0$ at $\beta$ values representative of each coupling regime. Specifically, for the weak coupling regime $\beta=0.01$ (b), for the intermediate coupling regime $\beta=0.1$ (c) and for the strong coupling regime $\beta=1$ (d). Equilibria of \eqref{eq:2rnonoise} are shown as $\bullet$ for sinks, $\blacksquare$ for sources and $\blacktriangle$ for saddles. The straight lines $R_i=\xi=0.5$ show the thresholds used to quantify escapes. } \label{fig:2bibif} \end{figure} \subsection{Estimating escape times for two coupled nodes} \label{sec:2est} We numerically compute $T_2^{1|0}$, $T_2^{2|0}$ and $T_2^{2|1}$ in {\sc{Matlab}} by fixing example parameters $\nu=0.2$ and $\alpha=0.05$ and computing an ensemble of $2000$ realisations of \eqref{eq:neteq} for $N=2$ using the Heun method with step size $h=10^{-3}$. Note that we set $\omega=0$ in \eqref{eq:neteq} as the radial dynamics do not depend on $\omega$. We choose threshold $\xi=0.5$ to determine the escape times, as shown in \Fref{fig:2bibif}(b)--(d). The first and second escape times $\tau_2^{1|0}$ and $\tau_2^{2|0}$ are averaged over the ensemble to give numerical approximations of $T_2^{1|0}$ and $T_2^{2|0}$, while $T_2^{2|1} =T_2^{2|0} - T_2^{1|0}$.
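The ensemble computation can be sketched as follows (in Python rather than {\sc Matlab}; a stochastic Heun step applies the same noise increment in the predictor and corrector stages). The parameter values here are illustrative only: the noise amplitude is deliberately larger than in the text so that escapes occur quickly in a short run, and the ensemble is small.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters only: alpha is larger than in the text so that
# escapes happen quickly enough for a short demonstration run.
nu, beta, alpha, xi = 0.2, 0.1, 0.15, 0.5
h, t_max, n_real = 1e-2, 500.0, 100

def drift(R):
    """Deterministic part of the radial equations with phi = 0."""
    Rk = R[:, ::-1]                       # the opposite node
    return (-(nu + beta)*R + 2*R**3 - R**5 + beta*Rk
            + alpha**2/(2*R))

tau1 = np.full(n_real, np.nan)            # time of first escape
tau2 = np.full(n_real, np.nan)            # time of second escape
R = np.full((n_real, 2), 0.2)             # start near the quiescent state

for n in range(int(t_max/h)):
    dW = rng.normal(scale=np.sqrt(h), size=R.shape)
    # Stochastic Heun step: predictor-corrector on the drift, same noise;
    # reflecting at R = 0 keeps the radius (and the 1/R term) positive.
    R_pred = np.abs(R + drift(R)*h + alpha*dW)
    R = np.abs(R + 0.5*(drift(R) + drift(R_pred))*h + alpha*dW)
    t = (n + 1)*h
    esc = (R > xi).sum(axis=1)
    tau1[np.isnan(tau1) & (esc >= 1)] = t
    tau2[np.isnan(tau2) & (esc >= 2)] = t

done = ~np.isnan(tau2)                    # realizations with both escapes
frac_done = done.mean()
T_10 = tau1[done].mean()
T_20 = tau2[done].mean()
T_21 = T_20 - T_10
```

With the larger noise amplitude the potential barrier is shallow, so essentially every realization completes both escapes well within the run; the quantitative values in the text require the smaller $\alpha=0.05$ and a much larger ensemble.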
Using numerical integration of the one-node case \eqref{eq:meanT}, we first estimate the limits for $\beta\rightarrow 0$ and $\beta\rightarrow\infty$ for bidirectional coupling. In the infinite coupling limit, the two systems are strongly synchronised and act as a single node with the same $\nu$ but attenuated noise, $\alpha/\sqrt{2}$. For $\nu=0.2$, $\alpha=0.05$ and $\xi=0.5$ in the limit $\beta\rightarrow \infty$ we expect \begin{align} T^{1|0}_2&\rightarrow {\cal{T}}_{(1)}=T\left(\nu, \alpha/\sqrt{2}\right)\approx6312.21, \label{eq:lim1} \\ T^{2|1}_2&\rightarrow 0. \nonumber \end{align} The uncoupled limit has independence of escapes, so the approximate mean first escape time is half the mean escape time of one node. For $\nu=0.2$, $\alpha=0.05$ and $\xi=0.5$ in the limit $\beta\rightarrow 0$ (or in the uncoupled case for all $\beta$) we expect \begin{align} T^{1|0}_2&\rightarrow {\cal{T}}_{(2)}=\frac{T(\nu,\alpha)}{2}\approx 96.51, \label{eq:lim2}\\ T^{2|1}_2&\rightarrow {\cal{T}}_{(3)}=T(\nu,\alpha) \approx 193.01. \label{eq:lim3} \end{align} We also note that for the unidirectional case, the first escape $T_2^{1|0}$ will be either from the driving node with mean $T(\nu,\alpha)$ or from the driven node with mean $T(\nu,\alpha/\sqrt{2})$. The sum of the rates of escape corresponds in the limit $\beta\rightarrow \infty$ to \begin{equation} T_2^{1|0}\rightarrow {\cal{T}}_{(4)}= \frac{T(\nu,\alpha)T(\nu, \frac{\alpha}{\sqrt{2}})}{T(\nu,\alpha)+T(\nu, \frac{\alpha}{\sqrt{2}})}\approx 188.01. \label{eq:lim4} \end{equation} For the bidirectionally coupled case, we also find that many features of the scalings of first passage times $T_2^{1|0}$ and $T_2^{2|1}$ can be understood from the coupling regimes of the deterministic potential system \eqref{eq:2rpot}. We estimate these scalings in the three coupling regimes using generalized Eyring-Kramers Laws~\cite{berg08EK,berg13kramers} for saddles that are multidimensional and/or passing through bifurcation.
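The limits \eqref{eq:lim2}--\eqref{eq:lim4} are elementary rate algebra: for independent escape channels with exponential tails the rates add, so mean times combine harmonically. A short Python check (evaluating \eqref{eq:lim4} with the rounded one-node times quoted above gives $\approx 187.3$; the quoted $188.01$ presumably uses unrounded one-node means):

```python
def combine(*times):
    """Mean waiting time for the first of several independent, exponentially
    distributed escape channels: the rates add, so times combine harmonically."""
    return 1.0/sum(1.0/t for t in times)

T_alpha = 193.01        # quoted one-node mean escape time T(nu, alpha)
T_alpha_att = 6312.21   # quoted T(nu, alpha/sqrt(2)), attenuated noise

T_uncoupled = combine(T_alpha, T_alpha)   # cf. cal(T)_(2): two equal channels
T_uni = combine(T_alpha, T_alpha_att)     # cf. cal(T)_(4): driving + driven
```

Two identical channels halve the mean time, recovering ${\cal{T}}_{(2)}=T(\nu,\alpha)/2$ as a special case.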
For the given value of $\nu$ and $\alpha$ we relate the Eyring-Kramers times $T_K$ to the numerical experiments $T$ using a common linear transformation of the form $$ T \simeq A T_K +B $$ where the constants $A,B$ are determined from the one-node case. Specifically, they are the unique solution such that the one-node estimates for $\mathcal{T}_{(1)}$ and $\mathcal{T}_{(2)}$ are correct, namely $$ T(\nu,\alpha)=A T_{K}(\nu,\alpha)+B,~~T(\nu,\alpha/\sqrt{2})=A T_{K}(\nu,\alpha/\sqrt{2})+B, $$ so that $A$ and $B$ do not depend on $\beta$. For $\nu=0.2$ and $\alpha=0.05$ we find $A=4.38$ and $B=-295$. \paragraph{First escape time statistics} The mean first escape time $T^{1|0}_2$ is associated with escape over one of two possible saddles for weak and intermediate coupling, $\beta<\beta_{\rm{PF}}$. These saddles merge into a single synchronised saddle for strong coupling, $\beta>\beta_{\rm{PF}}$, where $T^{1|0}_2$ is associated with escape over the only remaining saddle. A multidimensional Eyring-Kramers Law~\cite{berg13kramers} gives an asymptotic approximation $\widehat{T}_2^{1|0}$ for $T_2^{1|0}$. Denote by $x$ the potential minimum where neither node has escaped and denote by $y$ one of the two saddles that undergo the pitchfork bifurcation, or the only saddle for $\beta>\beta_{\rm{PF}}$. Then we compute \begin{equation} \widehat{T}_2^{1|0}(\beta) = \frac{2\pi}{|\lambda_1(y)|}\sqrt{\frac{|{\rm{det}}(\nabla^2V(y))|}{{\rm{det}}(\nabla^2V(x))}}{\rm{e}}^{[V(y)-V(x)]/\varepsilon} \label{eq:EKfirst} \end{equation} where $\varepsilon=\alpha^2/2$ and $\lambda_1(y)$ is the single negative eigenvalue of the Hessian $\nabla^2V(y)$. Here we use the two node potential $V$ given by \eqref{eq:2biV} with $\alpha=0.05$, $\nu=0.2$ and $\phi=0$. This breaks down for $\beta \to \beta_{\rm{PF}}$ where $\lambda_1(y)\to 0$. Berglund and Gentz~\cite{berg08EK} derive a multidimensional Eyring-Kramers Law for escapes from a potential well over a saddle that undergoes pitchfork bifurcation.
This corresponds to the case where $\beta$ passes through $\beta_{\rm{PF}}$ and gives an asymptotic expression $\widetilde{T}_2^{1|0}$ for $T_2^{1|0}$ on either side of the pitchfork bifurcation. Denote by $z_{\pm}$ the two saddles for $\beta<\beta_{\rm{PF}}$ that merge at the pitchfork bifurcation, and by $z$ the saddle for $\beta>\beta_{\rm{PF}}$. Denote by $\mu_1<0<\mu_2$ the eigenvalues of $\nabla^2V(z_{\pm})$ and by $\lambda_1<0<\lambda_2$ the eigenvalues of $\nabla^2V(z)$. Finally we let $x$ be as before. Then by \cite[Corollary 3.8]{berg08EK} and noting $\varepsilon=\alpha^2/2$: \begin{equation} \widetilde{T}_{2}^{1|0}(\beta) = \left\{ \begin{array}{cl} \displaystyle{2\pi \sqrt{\frac{\mu_2 +(2\varepsilon C_4)^{1/2}}{|\mu_1|{\rm{det}}(\nabla^2V(x))}}\frac{{\rm{e}}^{[V(z_{\pm})-V(x)]/\varepsilon}}{\Psi_-(\mu_2/(2\varepsilon C_4)^{1/2})}[1+E_-(\varepsilon,\mu_2)]} & \mbox{ for }\beta<\beta_{\rm{PF}}\\ \displaystyle{ 2\pi \sqrt{\frac{\lambda_2 +(2\varepsilon C_4)^{1/2}}{|\lambda_1|{\rm{det}}(\nabla^2V(x))}}\frac{{\rm{e}}^{[V(z)-V(x)]/\varepsilon}}{\Psi_+(\lambda_2/(2\varepsilon C_4)^{1/2})}[1+E_+(\varepsilon,\lambda_2)]}& \mbox{ for }\beta\geq\beta_{\rm{PF}} \end{array}\right. \label{eq:EKTfirst} \end{equation} The coefficient $C_4>0$ is the coefficient of the quartic expansion of the potential near the bifurcation point and the $\Psi_{\pm}$ are given in \cite{berg08EK} as \begin{align*} \Psi_+(\gamma) &= \sqrt{\frac{\gamma(1+\gamma)}{8\pi}}{\rm{e}}^{\frac{\gamma^2}{16}}K_{1/4}\left( \frac{\gamma^2}{16} \right),\\ \Psi_-(\gamma) &= \sqrt{\frac{\pi \gamma(1+\gamma)}{32}}{\rm{e}}^{-\frac{\gamma^2}{64}}\left[ I_{-1/4}\left( \frac{\gamma^2}{64} \right) +I_{1/4}\left( \frac{\gamma^2}{64} \right) \right]. \end{align*} Here $I_{\pm 1/4}$ and $K_{1/4}$ denote modified Bessel functions of the first and second kinds, respectively.
The error functions $E_{\pm}$ are bounded and tend to zero as $\varepsilon\rightarrow 0$ in some neighbourhood of $\lambda_1=0$: we set $E_{\pm}=0$ to define $\widetilde{T}^{1|0}_2(\beta)$. The quantities in (\ref{eq:EKTfirst}) are available in terms of properties of the potential $V$ and so numerically computable from the parameters. \paragraph{Second escape time statistics} The mean second escape time $T^{2|1}_2$ has three, rather than two, identifiable regimes. For $\beta<\beta_{\rm{SN}}$ it is associated with noise-induced escape from the attracting state where only one of the nodes has escaped. Here we again use the multidimensional Eyring-Kramers Law~\cite{berg13kramers} to find an asymptotic approximation $\widehat{T}_2^{2|1}$ for $T_2^{2|1}$ for $0\leq \beta<\beta_{\rm{SN}}$. Specifically, let $x$ and $z$ be the sink and saddle respectively that undergo the saddle-node bifurcation at $\beta_{\rm{SN}}$. Using the two node potential $V$ given by \eqref{eq:2biV} with $\alpha=0.05$, $\nu=0.2$ and $\phi=0$, we compute \begin{equation} \widehat{T}_2^{2|1}(\beta) = \frac{2\pi}{|\lambda_1(z)|}\sqrt{\frac{|{\rm{det}}(\nabla^2V(z))|}{{\rm{det}}(\nabla^2V(x))}}{\rm{e}}^{[V(z)-V(x)]/\varepsilon} \label{eq:EKTsec} \end{equation} where $\varepsilon=\alpha^2/2$ and $\lambda_1(z)$ is the single negative eigenvalue of the Hessian $\nabla^2V(z)$. This is valid for $\beta\ll\beta_{\rm{SN}}$, but it breaks down for $\beta \to \beta_{\rm{SN}}$ where $\lambda_1(z)\to 0$. The approximations \eqref{eq:EKTfirst} and \eqref{eq:EKTsec} are asymptotic for small $\alpha$. For $\beta_{\rm{SN}}<\beta<\beta_{\rm{PF}}$ the second escape is associated with following the unstable manifold from one of the two saddles that exist in the region. In this case, the second escape does not pass through a second potential well of \eqref{eq:2biV} and so the escape times $\tau_2^{2|1}$ are no longer exponentially distributed beyond this point.
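All quantities entering \eqref{eq:EKfirst} and \eqref{eq:EKTsec} can be computed directly from the potential \eqref{eq:2biV}. The following Python sketch locates a sink and a saddle of $V$ (with $\phi=0$) by Newton iteration on the analytic gradient, starting from rough, illustrative initial guesses appropriate to the weak coupling regime, and then evaluates \eqref{eq:EKTsec}:

```python
import numpy as np

nu, beta, alpha = 0.2, 0.01, 0.05         # weak coupling, beta < beta_SN
eps = alpha**2/2

def grad(R):
    """Analytic gradient of the two-node potential V (phi = 0)."""
    R1, R2 = R
    g1 = R1**5 - 2*R1**3 + (nu + beta)*R1 - alpha**2/(2*R1) - beta*R2
    g2 = R2**5 - 2*R2**3 + (nu + beta)*R2 - alpha**2/(2*R2) - beta*R1
    return np.array([g1, g2])

def hess(R):
    """Analytic Hessian of V; the off-diagonal coupling term is -beta."""
    d = [5*r**4 - 6*r**2 + (nu + beta) + alpha**2/(2*r**2) for r in R]
    return np.array([[d[0], -beta], [-beta, d[1]]])

def potential(R):
    R1, R2 = R
    return 0.5*((R1**6 + R2**6)/3 - (R1**4 + R2**4)
                + (nu + beta)*(R1**2 + R2**2)
                - alpha**2*np.log(R1*R2)) - beta*R1*R2

def newton(R0, n_iter=50):
    """Newton iteration on grad(V) = 0; finds sinks and saddles alike."""
    R = np.array(R0, dtype=float)
    for _ in range(n_iter):
        R = R - np.linalg.solve(hess(R), grad(R))
    return R

# Rough, illustrative initial guesses: the sink where node 1 has escaped,
# and the saddle gating the escape of node 2.
x = newton([1.4, 0.1])
z = newton([1.4, 0.3])

lam1 = np.linalg.eigvalsh(hess(z))[0]     # single negative eigenvalue
T_hat_21 = (2*np.pi/abs(lam1)
            * np.sqrt(abs(np.linalg.det(hess(z)))/np.linalg.det(hess(x)))
            * np.exp((potential(z) - potential(x))/eps))
```

The same routine with the symmetric quiescent sink and a pitchfork saddle as initial guesses evaluates \eqref{eq:EKfirst}; near the bifurcations the Newton iteration still converges but the prefactor loses validity, as discussed above.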
For example, if we assume $R_1$ escapes before $R_2$ over the saddle $x$ then we can numerically estimate $T^{2|1}_2$ by parametrizing the unstable manifold $W^u(x)$ by $(r_1(t),r_2(t))$ between $R_1=\xi$ and $R_2=\xi$, \begin{equation} \widetilde{T}_2^{2|1} = \inf\{t~:~r_1(s)=\xi=r_2(t+s)~\mbox{ for some }s\}. \label{eq:T21approx1} \end{equation} Specifically, we compute orbit segments that lie on the unstable manifold $W^u(x)$ as solutions of a multi-segment boundary value problem set up in the software package {\sc{Auto}} \cite{AutoOrig,auto}; for general theory of the computation of manifolds with {\sc{Auto}} see \cite{doedel07lec,kraus07comp}. We rescale the deterministic part of equations \eqref{eq:2r} so that the integration time becomes a parameter of the system. We fix the parameters $\nu=0.2$, $\alpha=0.05$, $\phi=0$ and $\beta=0.0155$, very close to but just past the saddle-node bifurcation. We compute one orbit segment that has one end in the linear unstable direction of $x$ and the other at the threshold $R_1=\xi$. We then compute a second orbit segment that has one end equal to the end of the first segment, while the other end of the orbit segment lies on the threshold $R_2=\xi$. We continue the system with $\beta$ as the main increasing continuation parameter up to $\beta_{\rm{PF}}$ whilst monitoring the integration time of the second orbit segment. Here we use variable step size $10^{-6}<h<1$ and $\text{\sc{ntst}}=300$ mesh points. For $\beta>\beta_{\rm{PF}}$ the second escape is associated with fluctuations away from synchrony near the synchronous unstable manifold of the single saddle. In this case we can estimate the scaling with $\beta$ by considering these fluctuations. Introducing the deviation $\delta$ from synchrony and threshold $\xi$, we approximate the dynamics through the region $$ R_1=R+\delta,\quad R_2=R-\delta.
$$ For trajectories passing through $R=\xi$, $\delta=0$ we have for $\nu=0.2$ and $\alpha=0.05$ \begin{align*} \dot{R} &\approx 0.12125,\\ \mathrm{d} \delta&\approx - L \delta\, \mathrm{d} t + \alpha\, \mathrm{d} W_t, \end{align*} where the value of $\dot{R}$ is given at $R=\xi=0.5$ and $\delta=0$, and $L=(2\beta-0.329834)$ is the transverse eigenvalue at the saddle; note that $L=0$ at the pitchfork bifurcation. If we assume that the process for $\delta$ is in equilibrium then $\delta\sim N(0,\alpha^2/(2L))$. This means the first escape will happen approximately at a time $T$ before $R=\xi$ such that $0.12125\, T \approx \alpha/\sqrt{2L}$. The second escape will happen roughly the same time after $R=\xi$, meaning we have an estimate \begin{equation} \widetilde{T}_2^{2|1} \sim 2T = (\alpha/\Delta) \sqrt{2/L}= (0.05/0.12125)\sqrt{2/(2\beta-0.329834)}, \label{eq:T21approx2} \end{equation} where $\Delta \approx 0.12125$ is the value of $\dot{R}$ at the crossing of $R=\xi$. \begin{figure}[t] \includegraphics[width=\textwidth]{ckaa_netescs_fig5.pdf} \caption{Numerically computed mean first passage times $T_2^{1|0}$ (a) and $T_2^{2|1}$ (b) against $\beta$ for two nodes with bidirectional coupling (Bi), unidirectional coupling (Uni) and disconnected (Dis) for $\nu=0.2$, $\alpha=0.05$ and threshold $\xi=0.5$. The grey lines indicate the saddle-node (SN) and the pitchfork (PF) bifurcations in the case of bidirectional coupling. Panel (a) shows mean first passage times $T_2^{1|0}$ (cf.~\cite[Figure~6(b)]{benj12pheno}) with $A\widehat{T}_2^{1|0}+B$ (grey dashed) where $\widehat{T}_2^{1|0}$ is given by \eqref{eq:EKfirst} and $A\widetilde{T}_2^{1|0}+B$ (black dashed) where $\widetilde{T}_2^{1|0}$ is given by \eqref{eq:EKTfirst}. Here $A=4.38$ and $B=-295$. The predicted asymptotic escape times ${\cal{T}}_{(1)},{\cal{T}}_{(2)}$ and ${\cal{T}}_{(4)}$ for each network in the limit $\beta\rightarrow \infty$ are shown. Note that in the limit $\beta\rightarrow 0$, all times limit to ${\cal{T}}_{(2)}$.
Panel (b) also shows $A\widehat{T}_2^{2|1}+B$ (black dashed) where $\widehat{T}_2^{2|1}$ is given by \eqref{eq:EKTsec} and $\widetilde{T}_2^{2|1}$ (black dashed) given by \eqref{eq:T21approx1}--\eqref{eq:T21approx2}. The predicted asymptotic escape time ${\cal{T}}_{(3)}$ is also plotted and all times limit to ${\cal{T}}_{(3)}$ for $\beta \to 0$. The error bars for $T_2^{2|1}$ for the bidirectional and unidirectional coupling become much larger around $\beta_{\rm{PF}}$ due to the small escape times. } \label{fig:2escT} \end{figure} Figure~\ref{fig:2escT} shows numerical simulations of $T_2^{1|0}$ and $T_2^{2|1}$ plotted with error bars against $\beta$ for two nodes with bidirectional coupling, unidirectional coupling and disconnected. We mark the estimated limits ${\cal{T}}_{(i)}$ for $i=1,\ldots,4$ given by \eqref{eq:lim1}--\eqref{eq:lim4} computed for $\nu=0.2$, $\alpha=0.05$ and $\xi=0.5$ by numerical integration of \eqref{eq:meanT}. We mark the location of the saddle-node and pitchfork bifurcations in each panel. Panel (a) shows $A\widehat{T}_2^{1|0}+B$ and $A\widetilde{T}_2^{1|0}+B$ where $\widehat{T}_2^{1|0}$ and $\widetilde{T}_2^{1|0}$ are the classic and modified multidimensional Eyring-Kramers times given by \eqref{eq:EKfirst} and \eqref{eq:EKTfirst} respectively. Panel (b) shows $A\widehat{T}_2^{2|1}+B$ where $\widehat{T}_2^{2|1}$ is the multidimensional Eyring-Kramers time given by \eqref{eq:EKTsec}, and the additional estimates $\widetilde{T}_2^{2|1}$ given by \eqref{eq:T21approx1} and \eqref{eq:T21approx2} are also shown. In both panels the estimated limits agree well with those found numerically for the full system. In panel (a) the approximation $A\widehat{T}_2^{1|0}+B$ has a clear asymptote at the pitchfork bifurcation, whereas the two curves $A\widetilde{T}_2^{1|0}+B$ meet at $\beta_{\rm{PF}}$; note that the general shape is consistent with the numerically computed $T_2^{1|0}$.
Panel (b) shows $A\widehat{T}_2^{2|1}+B$ is close to $T_2^{2|1}$ for $\beta=10^{-3}$ but the two curves diverge rapidly as $\beta \to \beta_{\rm{SN}}$. The approximations $\widetilde{T}_2^{2|1}$ also diverge from $T_2^{2|1}$ at the bifurcations, but follow the general shape of $T_2^{2|1}$ for $\beta_{\rm{SN}} \ll \beta \ll \beta_{\rm{PF}}$ and $\beta>\beta_{\rm{PF}}$. The accuracy of our numerical simulations also breaks down for times around $10^{1}$, as can be seen from the large error bars around the bidirectional and unidirectional curves. This is due to small escape times and the fixed step size used in our computations. \section{A master equation approach to sequential escape} \label{sec:master} In this section we use a master equation approach to model sequential escape on a network of $N$ nodes. We assume that each node can undergo a transition from quiescent state $0$ to active state $1$ as a memoryless Markov jump process, and as above we assume there are no transitions from $1$ back to $0$, so that the associated Markov chain is eventually trapped in an absorbing state. Using this, at least in the weak coupling limit, we find good agreement not only with the mean sequential escape times but also with their distributions. \subsection{Sequential escapes for weak coupling} We characterise the state of a system such as \eqref{eq:neteq} at time $t$ by the vector $$ X(t)=\{X_i\}\in \{0,1\}^N, $$ where node $X_j$ changes from $0$ to $1$ according to a memoryless process with rate $r^{X}_j$: this rate may depend on the current state $X$. For a fixed threshold $\xi>0$ one may think of $X_i(t)$ as the discrete random variable that changes from zero to one at the first escape time $\tau^{(i)}$ of node $i$, i.e. $$ X_i(t)=\left\{ \begin{array}{cl} 0 & \mbox{ if }0\leq t<\tau^{(i)}\\ 1 & \mbox{ otherwise.} \end{array}\right. $$ The set of such states forms the vertices of an $N$-dimensional hypercube and there is a directed edge if a transition is possible from one state to the other.
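The hypercube of states is easy to enumerate: an $N$-node network has $2^N$ states and $N\,2^{N-1}$ directed edges, since each permissible transition flips exactly one node from $0$ to $1$. A minimal Python sketch:

```python
from itertools import product

def hypercube(N):
    """States of an N-node network and the permissible 0 -> 1 transitions."""
    states = list(product((0, 1), repeat=N))
    edges = []
    for X in states:
        for j in range(N):
            if X[j] == 0:                   # node j can still escape
                Y = X[:j] + (1,) + X[j + 1:]
                edges.append((X, Y, j))     # edge labelled by escaping node
    return states, edges

states2, edges2 = hypercube(2)
states3, edges3 = hypercube(3)
```

For $N=2$ this gives the four states and four directed edges of the square, and for $N=3$ the eight states and twelve edges of the cube.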
The irreversible nature of the process means there is a directed cycle-free sequence of transitions on $\{0,1\}^N$ that leads from $X(0)=\{0\}^N$ to $X = \{1\}^N$. Figure~\ref{fig:hyper23} shows possible states of (a) two node and (b) three node networks as vertices of two- and three-dimensional hypercubes, respectively. The directed edges are labelled with their corresponding transition rates; for example, in panel (a) the rate $r_2^{[0,0]}$ is the transition rate of node 2 given that we are in state $X=[0,0]$. \begin{figure} \centering \includegraphics[width=13.5cm]{ckaa_netescs_fig6.pdf} \caption{Hypercube of states for a network of (a) two nodes and (b) three nodes. Each vertex of the hypercube is a state $X$ of the network and the directed edges indicate permissible transitions between states at state-dependent transition rates $r_j^X$. } \label{fig:hyper23} \end{figure} Let $p_{X}(t)$ represent the probability that the system is in state $X$ at time $t>0$: we study $p_{X}(t)$ using a master equation approach. Define the {\em origin} operator $O_j:\{0,1\}^N\rightarrow \{0,1\}^N$ by \begin{equation*} [O_j(X)]_k:=\left\{ \begin{array}{rl} X_k & \mbox{ if }k\neq j\\ 0 & \mbox{ if }k=j. \end{array} \right. \end{equation*} The probability $p_{X}$ satisfies the master equation: \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t} p_{X} = \sum_{j~:~X_j=1} r^{O_j(X)}_j p_{O_j(X)}- \sum_{j~:~X_j=0} r^{X}_j p_{X} \end{equation} for $X\in \{0,1\}^N$. The first sum represents the rate at which probability arrives at state $X$ and the second sum represents the rate at which it leaves state $X$. This gives a system of linear ordinary differential equations (ODEs) that can be represented as \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t} p = M p \label{eq:meq} \end{equation} for the $2^N$ dimensional vector $p(t)$, where $M$ is a $2^N\times 2^N$ matrix.
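The matrix $M$ can be assembled mechanically from the master equation: the column for state $X$ carries $-\sum_{j:X_j=0}r^X_j$ on the diagonal and $r^X_j$ in the row of each state reachable from $X$, so every column sums to zero (probability is conserved). A Python sketch with a hypothetical state-dependent rate function:

```python
import numpy as np
from itertools import product

def master_matrix(N, rate):
    """Assemble M for dp/dt = M p, given rate(X, j): the escape rate of
    quiescent node j when the network is in state X."""
    states = list(product((0, 1), repeat=N))
    idx = {X: i for i, X in enumerate(states)}
    M = np.zeros((2**N, 2**N))
    for X in states:
        for j in range(N):
            if X[j] == 0:                 # node j may still escape
                Y = X[:j] + (1,) + X[j + 1:]
                r = rate(X, j)
                M[idx[Y], idx[X]] += r    # probability arriving at Y
                M[idx[X], idx[X]] -= r    # probability leaving X
    return M, states

def rate(X, j):
    # Hypothetical rates: each escape speeds up the remaining nodes.
    return 0.01 * (1 + sum(X))

M, states = master_matrix(3, rate)
```

The column of the all-active state is identically zero, reflecting that it is absorbing.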
For example, the two node network shown in \Fref{fig:hyper23}(a) is governed by \begin{equation*} M = \begin{bmatrix} -r_1^{[0,0]} - r_2^{[0,0]} & 0 & 0 & 0 \\ r_1^{[0,0]} & -r_2^{[1,0]} & 0 & 0 \\ r_2^{[0,0]} & 0 & -r_1^{[0,1]} & 0\\ 0 & r_2^{[1,0]} & r_1^{[0,1]} & 0 \end{bmatrix} \ \ {\rm{and }} \ \ p = \begin{bmatrix} p_{[0,0]}\\ p_{[1,0]}\\ p_{[0,1]}\\ p_{[1,1]} \end{bmatrix} \end{equation*} with solution $p(t)= \exp(Mt)\, p(0)$. The eigenvalues of $M$ are $\lambda_1=-r_1^{[0,0]} - r_2^{[0,0]}$, $\lambda_2=- r_2^{[1,0]}$, $\lambda_3=- r_1^{[0,1]}$ and $\lambda_4=0$ with corresponding eigenvectors $v_i$. In particular, the unique zero eigenvalue $\lambda_4$ has eigenvector $v_4=[0, 0, 0, 1]^T$ showing that the state $X=[1,1]$ is an absorbing state for the system. \subsection{The all-to-all coupled case} \label{sec:evol} We define the probability that $k$ out of $N$ nodes have escaped by time $t$ to be \begin{equation} p_{N,k} (t):= {\mathbb{P}}\{ |\{i~:~X_i(t)=1\}|=k\}. \label{eq:pNkdef} \end{equation} Explicit formulae for $p_{N,k}$ can be found for the all-to-all coupled case, where the rate $r^X_j$ depends only on the number of escaped nodes, i.e. where $$ r_{j}^X = r_{|\{i~:~X_i=1\}|} $$ and the rate $r_i$ is the rate at which each of the remaining nodes escapes, given that $i\in\{0,\ldots,N\}$ of them have already escaped (we use the convention that $r_N=0$). For example, for a two node network note that $p_{2,0}=p_{[0,0]}$, $p_{2,1} = p_{[0,1]} + p_{[1,0]}$ and $p_{2,2}=p_{[1,1]}$. In the uncoupled case with rate $r$ at each node, substituting $N=2$ and $k=0,1,2$ we obtain \begin{align*} p_{2,0} & = {\rm{e}}^{-2rt},\\ p_{2,1}& = 2{\rm{e}}^{-rt}(1- {\rm{e}}^{-rt}), \\ p_{2,2}& = -2 {\rm{e}}^{-rt}+ {\rm{e}}^{-2rt}+1. \end{align*} More generally, for the all-to-all coupled case the $2^N$ equations of \eqref{eq:meq} for the $p_{X}$ can be reduced to $N+1$ equations for the $p_{N,k}$, where $\sum_{k=0}^N p_{N,k}=1$.
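The uncoupled expressions can be checked against the full four-state system \eqref{eq:meq}: integrating $\dot p=Mp$ with all rates equal to a common $r$ reproduces the independent-escape formulas, with $p_{2,1}=2{\rm e}^{-rt}(1-{\rm e}^{-rt})$ (the factor $2$ counting which of the two nodes escaped first). A Python sketch with an illustrative rate $r$:

```python
import numpy as np

r = 0.7                                   # illustrative common escape rate
# Master matrix for the uncoupled two-node chain, states ordered as
# [p_{[0,0]}, p_{[1,0]}, p_{[0,1]}, p_{[1,1]}].
M = np.array([[-2*r,  0.0,  0.0, 0.0],
              [   r,   -r,  0.0, 0.0],
              [   r,  0.0,   -r, 0.0],
              [ 0.0,    r,    r, 0.0]])

def rk4_step(p, h):
    """One classical Runge-Kutta step for dp/dt = M p."""
    k1 = M @ p
    k2 = M @ (p + 0.5*h*k1)
    k3 = M @ (p + 0.5*h*k2)
    k4 = M @ (p + h*k3)
    return p + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

t_end, h = 1.0, 1e-3
p = np.array([1.0, 0.0, 0.0, 0.0])        # all nodes quiescent at t = 0
for _ in range(int(round(t_end/h))):
    p = rk4_step(p, h)

p20, p21, p22 = p[0], p[1] + p[2], p[3]

# Closed forms for two independent exponential escapes at rate r.
e = np.exp(-r*t_end)
p20_exact, p21_exact, p22_exact = e**2, 2*e*(1 - e), 1 - 2*e + e**2
```
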
The resulting linear system has $N+1$ eigenvalues \begin{equation} \lambda_k = -(N-k)r_k \label{eq:geneigris} \end{equation} for $k=0, \dots, N$. As the non-zero off-diagonal entries are $-\lambda_k$, the $k^{\rm{th}}$ equation is \begin{equation} \frac{\mathrm{d} }{\mathrm{d} t} p_{N,k} =- \lambda_{k-1}p_{N,k-1} + \lambda_k p_{N,k}. \label{eq:genris} \end{equation} \begin{prop} Assume that the $\lambda_k<0$ for $k=0,\ldots,N-1$ are distinct and all nodes are quiescent at time $t=0$, i.e. $p_{N,0}(0)=1$. Then the solution of \eqref{eq:genris} is given by \begin{equation} p_{N,k} (t) = \left[\prod_{i=0}^{k-1} \lambda_i\right] \left[ \sum_{j=0}^k \frac{{\rm{e}}^{\lambda_j t}}{\prod_{n \neq j} (\lambda_n -\lambda_j)} \right]. \label{eq:PNk} \end{equation} \label{prop:master} \end{prop} \begin{proof} We show this by induction for any $N>0$ and all integers $0\leq k\leq N$. It is clear that $\frac{\mathrm{d}}{\mathrm{d} t} p_{N,0} = \lambda_0 p_{N,0}$ has the solution $p_{N,0}={\rm{e}}^{\lambda_0 t}$, for any $N$. It follows that $\frac{\mathrm{d}}{\mathrm{d} t} {p}_{N,1} = -\lambda_0 p_{N,0} + \lambda_1 p_{N,1}$ has the solution $$ p_{N,1} = \frac{\lambda_0}{\lambda_1 - \lambda_0}{\rm{e}}^{\lambda_0 t} + \frac{ \lambda_0}{\lambda_0 - \lambda_1}{\rm{e}}^{\lambda_1 t}, $$ and $\frac{\mathrm{d}}{\mathrm{d} t} {p}_{N,2} = - \lambda_1 p_{N,1} + \lambda_2 p_{N,2}$ has the solution $$ p_{N,2} = \frac{\lambda_0 \lambda_1}{(\lambda_0 - \lambda_1)(\lambda_0-\lambda_2)}{\rm{e}}^{\lambda_0 t} + \frac{\lambda_0 \lambda_1}{(\lambda_1 - \lambda_0)(\lambda_1 - \lambda_2)}{\rm{e}}^{\lambda_1 t} + \frac{\lambda_0 \lambda_1}{(\lambda_2 - \lambda_0)(\lambda_2 - \lambda_1)}{\rm{e}}^{\lambda_2 t}. $$ Now assume that the result holds for some $k<N$.
Using \eqref{eq:genris} we write $\dot{p}_{N,k+1} (t) = -\lambda_k p_{N,k} + \lambda_{k+1} p_{N,k+1}.$ Using the integrating factor $\text{e}^{-\lambda_{k+1}t}$ and~\eqref{eq:PNk} gives \begin{align*} \frac{\text{d}}{\text{d}t}\left( p_{N,k+1} \text{e}^{-\lambda_{k+1}t} \right) &= -\lambda_{k}p_{N,k}\text{e}^{-\lambda_{k+1}t},\\ &=-\prod_{i=0}^{k} \lambda_i \left[ \sum_{j=0}^k \frac{{\rm{e}}^{(\lambda_j -\lambda_{k+1})t}}{\prod_{n \neq j} (\lambda_n -\lambda_j)} \right]. \end{align*} Integrating both sides with respect to $t$ we obtain \begin{equation*} p_{N,k+1}e^{-\lambda_{k+1}t}=-\prod_{i=0}^k \lambda_i \left[ \sum^{k}_{j=0} \frac{{\rm{e}}^{(\lambda_j -\lambda_{k+1})t}}{(\lambda_j - \lambda_{k+1})\prod_{n \neq j} (\lambda_n -\lambda_j)} + C \right], \end{equation*} and so \begin{equation*} p_{N,k+1}=\prod_{i=0}^k \lambda_i \left[ \sum^{k}_{j=0} {\rm{e}}^{\lambda_{j}t}\prod_{n=0,n \neq j}^{k+1} \frac{1}{(\lambda_n -\lambda_j)} + C{\rm{e}}^{\lambda_{k+1}t} \right]. \end{equation*} Using the initial condition $X(0)=\{0\}^N$, equivalently $p_{N,k+1}(0) = 0$, we find \begin{align*} C &= -\sum_{j=0}^k \prod_{n=0,n \neq j}^{k+1} \frac{1}{(\lambda_n -\lambda_j)}=\prod_{n=0}^{k} \frac{1}{(\lambda_n -\lambda_{k+1})}, \end{align*} which follows from the identity \begin{equation} \sum_{j=0}^{k+1} \prod_{n=0,n\neq j}^{k+1} \frac{1}{(\lambda_n-\lambda_j)}=0. \label{eq:identity} \end{equation} Equation (\ref{eq:identity}) can be shown by considering the Lagrange interpolating polynomial $$ P(x)=\sum_{j=0}^{k+1} \prod_{n=0,n\neq j}^{k+1} \frac{(x-\lambda_n)}{(\lambda_j-\lambda_n)}, $$ which has degree at most $k+1$ and is equal to $1$ at the $k+2$ points $x=\lambda_0,\ldots,\lambda_{k+1}$, and hence is identically equal to $1$. Its coefficient of $x^{k+1}$ therefore vanishes, that is $\sum_{j=0}^{k+1}\prod_{n=0,n\neq j}^{k+1}(\lambda_j-\lambda_n)^{-1}=0$, which is \eqref{eq:identity} up to the overall sign $(-1)^{k+1}$.
Therefore \begin{align*} p_{N,k+1}&=\prod_{i=0}^k \lambda_i \left[ \sum^{k}_{j=0} {\rm{e}}^{\lambda_{j}t}\prod_{n=0,n \neq j}^{k+1} \frac{1}{(\lambda_n -\lambda_j)} + {\rm{e}}^{\lambda_{k+1}t}\prod_{n=0}^{k} \frac{1}{(\lambda_n -\lambda_{k+1})} \right],\\ &= \prod_{i=0}^{k} \lambda_i \left[ \sum_{j=0}^{k+1} \frac{{\rm{e}}^{\lambda_j t}}{\prod_{n \neq j} (\lambda_n -\lambda_j)} \right]. \end{align*} Hence the statement is true for $k+1$: the result follows by induction. \end{proof} If there is an $i\neq j$ such that $\lambda_i=\lambda_j$ then the linear system \eqref{eq:genris} has a resonance and the explicit form of solution \eqref{eq:PNk} will be modified. We do not consider this here except to note that in the uncoupled case $r_j=r>0$ and \eqref{eq:geneigris} means there are no resonances. Assuming that the escapes are governed by Kramers-type rates that vary continuously with $\beta$, the no-resonance condition is expected to hold for weak enough coupling. As an example, two nodes with bidirectional coupling give $r_0=r_1^{[0,0]} = r_2^{[0,0]}$ and $r_1 = r_1^{[0,1]} = r_2^{[1,0]}$ so (\ref{eq:genris}) reduces to $\dot{p}_{2,0} = -2 r_0 p_{2,0}$, $\dot{p}_{2,1} = 2 r_0 p_{2,0} - r_1p_{2,1}$ and $\dot{p}_{2,2} = r_1p_{2,1}$. The eigenvalues of the linear system are $\lambda_0=-2r_0$, $\lambda_1=-r_1$, and $\lambda_2=0$.
Hence by \eqref{eq:PNk} we have \begin{align} p_{2,0} & = {\rm{e}}^{\lambda_0 t}, \nonumber\\ p_{2,1} &= \frac{\lambda_0}{\lambda_0-\lambda_1} \left( {\rm{e}}^{\lambda_1 t} - {\rm{e}}^{\lambda_0 t}\right),\label{eq:p202122} \\ p_{2,2} & = \frac{\lambda_0 \lambda_1}{\lambda_0-\lambda_1} \left( \frac{{\rm{e}}^{\lambda_0 t}}{\lambda_0} - \frac{{\rm{e}}^{\lambda_1 t}}{\lambda_1}\right) + 1.\nonumber \end{align} Note that the reduction to a closed master equation with $N+1$ variables is only possible for the all-to-all coupled case, where symmetry between nodes means that the rates depend only on how many nodes have escaped and not on which: the probability of a particular permutation $s(i)$ satisfying \eqref{eq:defines} is $1/N!$. \subsection{Estimation of sequential escape times} \label{sec:estmaster} Since $p_{N,k}(t)$ is the probability that precisely $k$ nodes have escaped, the solutions $p_{N,k}(t)$ can be used to determine the mean escape times. If we associate escape times to the $X_i$ by $$ \tau^{k}= \inf\{t>0~:~|\{i~:~X_i(t)=1\}|=k\}, $$ note that \eqref{eq:pNkdef} can be expressed as $$ p_{N,k} (t)= {\mathbb{P}}\{ \tau^{k}\leq t<\tau^{k+1}\} $$ and \begin{equation} q_{N,k}(t) := {\mathbb{P}}\{\tau^{k} \leq t\} = \sum_{\ell=k}^N p_{N,\ell}. \label{eq:mastcum} \end{equation} The mean time to the $k$th escape ($k>0$) is the expectation of $\tau^k$. For example, using \eqref{eq:p202122} for $N=2$ we have that $q_{2,1}= 1-p_{2,0}=1-{\rm e}^{\lambda_0 t}$ and $q_{2,2}=p_{2,2}=1+(\lambda_1{\rm e}^{\lambda_0t}-\lambda_0{\rm e}^{\lambda_1 t})/(\lambda_0-\lambda_1)$, so that $$ T_2^{1|0}=\int_{t=0}^{\infty} t \frac{\mathrm{d}}{\mathrm{d} t}[q_{2,1}(t)] \,\mathrm{d} t = \frac{1}{|\lambda_0|}=\frac{1}{2r_0} $$ while $T_2^{2|0}=1/|\lambda_1|+1/|\lambda_0|=1/r_1+1/(2r_0)$ for the bidirectionally-coupled two-node network considered in \sref{sec:2binet}.
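Both Proposition~\ref{prop:master} and the mean time formulas are easy to verify numerically: the sketch below integrates \eqref{eq:genris} for $N=3$ with hypothetical distinct rates and compares against the closed form \eqref{eq:PNk}.

```python
import numpy as np

N = 3
r = [0.9, 0.5, 0.3]                       # hypothetical distinct rates
lam = np.array([-(N - k)*r[k] for k in range(N)] + [0.0])

def p_closed(k, t):
    """Closed form (eq:PNk) for p_{N,k}(t) with p_{N,0}(0) = 1."""
    pref = np.prod(lam[:k])               # empty product is 1 for k = 0
    total = 0.0
    for j in range(k + 1):
        denom = np.prod([lam[n] - lam[j] for n in range(k + 1) if n != j])
        total += np.exp(lam[j]*t)/denom
    return pref*total

def deriv(p):
    """Right-hand side of eq:genris."""
    dp = lam*p
    dp[1:] -= lam[:-1]*p[:-1]
    return dp

h, t_end = 1e-3, 2.0
p = np.zeros(N + 1)
p[0] = 1.0                                # all nodes quiescent at t = 0
for _ in range(int(round(t_end/h))):      # RK4 integration of eq:genris
    k1 = deriv(p); k2 = deriv(p + 0.5*h*k1)
    k3 = deriv(p + 0.5*h*k2); k4 = deriv(p + h*k3)
    p = p + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

p_exact = np.array([p_closed(k, t_end) for k in range(N + 1)])
```

The chosen rates give the distinct eigenvalues $\lambda=(-2.7,-1.0,-0.3,0)$, so the no-resonance hypothesis of the proposition is satisfied.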
The cumulative distribution \eqref{eq:cdfpt} can be approximated for any $N\geq k>\ell\geq 0$ by considering \eqref{eq:genris} with initial conditions $p_{N,\ell}(0)=1$ rather than $p_{N,0}(0)=1$. For this case we have $$ \widehat{Q}_N^{k|\ell}(t) = \sum_{j\geq k} p_{N,j}(t), $$ which approximates the distribution ${\mathbb{P}}\{\tau_N^{k|\ell}\leq t\}$. Note that \begin{equation} \widehat{Q}_N^{k|0}(t) = q_{N,k}(t). \label{eq:Qk0} \end{equation} In the case $N=2$ we obtain \begin{equation} \widehat{Q}_2^{2|1}(t)= 1-{\rm e}^{\lambda_1 t} \label{eq:Q221} \end{equation} and so $T_2^{2|1}=1/|\lambda_1|=1/r_1$. The master equation approach presented above is only expected to be valid in the weak coupling regime, i.e. $\beta<\beta_{\rm{SN}}$. For $\beta>\beta_{\rm{SN}}$ the hypercube representation of states of the network is no longer valid; see \cite{BFG2007a}. We use the simulations discussed in \sref{sec:2binet} for $\beta=10^{-2}$ with $\nu=0.2$, $\alpha=0.05$ and threshold $\xi=0.5$ as $Q_2^{1|0}(t)$, $Q_2^{2|0}(t)$ and $Q_2^{2|1}(t)$. We take the mean escape times $T_2^{1|0}(0.01) =133.5$ and $T_2^{2|1}(0.01)=80.94$ and compute $r_0 = 1/(2T_2^{1|0}(0.01)) \approx 0.00375$ and $r_1 = 1/ T_2^{2|1}(0.01) \approx 0.0124$. We substitute these values into \eqref{eq:Qk0} for $k=1,2$ and $N=2$, and into \eqref{eq:Q221}. \Fref{fig:simmeq} shows $Q_2^{1|0}(t)$, $Q_2^{2|0}(t)$ and $Q_2^{2|1}(t)$ for $\beta=0.01$ plotted on one graph against time (linear axis) with $\widehat{Q}_2^{1|0}(t)$, $\widehat{Q}_2^{2|0}(t)$ and $\widehat{Q}_2^{2|1}(t)$.
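For the two-node case the fitted master-equation distributions are explicit: $\widehat{Q}_2^{1|0}(t)=1-{\rm e}^{\lambda_0 t}$, $\widehat{Q}_2^{2|0}(t)=q_{2,2}(t)$ and $\widehat{Q}_2^{2|1}(t)=1-{\rm e}^{\lambda_1 t}$, with the rates matched to the measured means as above. A short Python sketch evaluating these curves:

```python
import numpy as np

# Rates matched to the measured means at beta = 0.01 (see text).
T10, T21 = 133.5, 80.94
r0, r1 = 1.0/(2*T10), 1.0/T21
lam0, lam1 = -2*r0, -r1

t = np.linspace(0.0, 1500.0, 1501)
Qhat10 = 1 - np.exp(lam0*t)               # first escape distribution
Qhat21 = 1 - np.exp(lam1*t)               # second escape given the first
# Complete escape distribution q_{2,2}; note q_{2,2}(0) = 0.
Qhat20 = 1 + (lam1*np.exp(lam0*t) - lam0*np.exp(lam1*t))/(lam0 - lam1)
```

The curves are nondecreasing, start from zero, and satisfy $\widehat{Q}_2^{2|0}\leq \widehat{Q}_2^{1|0}$ pointwise, since the second escape cannot precede the first.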
\begin{figure} \centering \includegraphics[width=0.9\textwidth]{ckaa_netescs_fig7.pdf} \caption{Cumulative distributions $Q_2^{1|0}(t)$, $Q_2^{2|0}(t)$ and $Q_2^{2|1}(t)$ (blues) for $\beta=0.01$, plotted with the master equation cumulative distributions $\widehat{Q}_2^{k|\ell}$ for the predicted values $r_0\approx 0.00375$ and $r_1\approx 0.0124$ (oranges).} \label{fig:simmeq} \end{figure} \Fref{fig:simmeq} shows a good agreement between the simulations and the cumulative distributions obtained with the master equation approach in the weak coupling regime. For the intermediate and strong coupling regimes, as illustrated for example in Figure~\ref{fig:2bibif}, the assumptions behind the master equation are no longer valid: the transition distributions may be far from well-modelled by exponentials. This is beyond the scope of this paper and we leave it to future work. \section{Discussion} \label{sec:discuss} In this paper we explained a number of features of the escape times discussed in \cite{benj12pheno}. We gave improved estimates for the one-node mean escape time showing the dependence on excitability $\nu$ and noise amplitude $\alpha$. We investigated sequential escapes for a system of identical bistable nodes and the dependence on coupling strength. To this end, we used the example of two bidirectionally connected nodes. We derived expressions for the infinite-coupling and uncoupled limits using numerical integration of the mean escape time of one node. Moreover, we gave asymptotic approximations for the first and second escape times in each of the strong, intermediate and weak coupling regimes. Here we made use of variations of the multidimensional Eyring-Kramers Law~\cite{berg08EK, berg13kramers} to explain the escape times through a pitchfork bifurcation. We compared these estimates to numerical simulations.
One of the more surprising results of this paper is that for the model system \eqref{eq:neteq} with fixed excitability $\nu$ and noise level $\alpha$, an increase in $\beta$ has opposite effects on the first and second mean escape times: Figure~\ref{fig:2escT} demonstrates that $T_2^{1|0}$ monotonically increases for small $\beta$ while $T_2^{2|1}$ monotonically decreases. The aggregate effect is that the mean time to complete escape $T_{2}^{2|0}$ decreases and then increases. Essentially this is due to (a) strong coupling causing synchronization of the escapes while at the same time weakening the effect of noise on the system: it takes longer to escape because the noise in the strongly coupled system is averaged. It is also due to (b) nonlinear effects of the interaction of bifurcations on the basin boundary in the coupled system with the most likely escape paths. In the weak coupling regime, states of the network lie at the vertices of a hypercube. We used a master equation approach to find an expression for the probability of being in a given network state in the weakly all-to-all coupled case, where the rates of escape from states with $k$ active nodes are equal. We find good agreement between our numerical simulations and the distribution of escape times from a master equation model. On the other hand, the synchronization of escapes means that the master equation model breaks down above some critical coupling strength. As noted in \cite{act17domino}, sequential escape statistics should be of interest for characterising and modelling a wide range of processes in applications: in this work we extend these ideas to a simple case where there is bistability between an equilibrium and a limit cycle attractor. However, the simple nature of the bistable nodes we considered means that the phases effectively uncouple from the radial dynamics. For more general bistable oscillators the phase dynamics will not be so easily reduced.
At least in the weak noise limit, it should be possible to develop master equation models suitable for intermediate and strong coupling. There are many open problems: not just generalization to more heterogeneous networks, but also non-potential systems. The Eyring-Kramers law~\cite{berg13kramers} and the generalisation used here for non-quadratic saddles~\cite{berg08EK} require an explicit expression for the potential landscape of the system. However, the system describing two nodes with unidirectional coupling is not a potential system and the results do not hold for this case. Analysis of escape times of non-potential systems could be applied to, for example, energy landscapes derived from neuroimaging data~\cite{ezaki17energy, tka14search}, and the bifurcations on the basin boundaries of these systems could provide valuable insight into the emergent transient dynamics. \section*{Acknowledgements} We particularly thank the following people for their advice and perceptive suggestions: Oscar Benjamin, Chris Bick, Vadim Biktashev, Jan Sieber, John Terry, Kyle Wedgewood. We thank Robin Chapman for the proof of the identity used in Proposition~\ref{prop:master}.
\section{Introduction} It is well-known that modern differential geometry explicitly describes the dynamics of Hamiltonians. If $Q$ is an $m$-dimensional configuration manifold and $\mathbf{H}:T^{\ast }Q\rightarrow \mathbf{R}$ is a regular Hamiltonian function, then there is a unique vector field $X$ on $T^{\ast }Q$ such that the dynamic equations are given by \begin{equation} \,\,i_{X}\Phi =d\mathbf{H} \label{1.1} \end{equation} where $\Phi $ denotes the symplectic form. The triple $(T^{\ast }Q,\Phi ,X)$ is called a \textit{Hamiltonian system} on the cotangent bundle $T^{\ast }Q.$ Nowadays, there are many studies on Hamiltonian mechanics, formalisms, systems and equations; see \cite{deleon, tekkoyun} and the references therein. There are real, complex, paracomplex and other analogues, and analogous constructions can be obtained in different spaces. Quaternions were invented by Sir William Rowan Hamilton as an extension of the complex numbers. Hamilton's defining relation is most succinctly written as \begin{equation} i^{2}=j^{2}=k^{2}=ijk=-1. \label{1.2} \end{equation} Compared to the calculus of vectors, quaternions have slipped into relative obscurity. They do, however, still find use in the computation of rotations. Many physical laws in classical, relativistic, and quantum mechanics can be written neatly by means of quaternions. Some physicists hope to find a deeper understanding of the universe by restating basic principles in terms of quaternion algebra. It is well-known that quaternions are useful for representing rotations in both quantum and classical mechanics \cite{dan}. A Cliffordian manifold is a quaternion manifold; therefore, all properties defined on a quaternion manifold of dimension $8n$ are also valid for a Cliffordian manifold. Thus, it is possible to construct mechanical equations on a Cliffordian K\"{a}hler manifold. The paper is structured as follows.
In Section 2, we recall Cliffordian K\"{a}hler manifolds. In Section 3, we introduce Hamiltonian equations related to mechanical systems on a Cliffordian K\"{a}hler manifold. In the conclusion, we discuss some geometrical and physical results about the Hamiltonian equations and fields obtained on the base manifold. \section{Preliminaries} Hereafter, all mappings and manifolds are assumed to be smooth, i.e.\ infinitely differentiable, and the sum is taken over repeated indices. By $\mathcal{F}(M)$, $\chi (M)$ and $\Lambda ^{1}(M)$ we denote the set of functions on $M$, the set of vector fields on $M$ and the set of 1-forms on $M$, respectively. \subsection{Cliffordian K\"{a}hler Manifolds} Here, we recall the main concepts and structures given in \cite{yano, burdujan}. Let $M$ be a real smooth manifold of dimension $m.$ Suppose that there is a 6-dimensional vector bundle $V$ consisting of tensors $F_{i}$ $(i=1,2,...,6)$ of type (1,1) over $M.$ Such a local basis $\{F_{1},F_{2},...,F_{6}\}$ is called a canonical local basis of the bundle $V$ in a neighborhood $U$ of $M$. Then $V$ is called an almost Cliffordian structure on $M$, and the pair $(M,V)$ is called an almost Cliffordian manifold with $V$. Hence, an almost Cliffordian manifold $M$ is of dimension $m=8n.$ If there exists on $(M,V)$ a global basis $\{F_{1},F_{2},...,F_{6}\},$ then the basis $\{F_{1},F_{2},...,F_{6}\}$ is called a global basis for $V$. An almost Cliffordian connection on the almost Cliffordian manifold $(M,V)$ is a linear connection $\nabla $ on $M$ which preserves the vector bundle $V$ by parallel transport. This means that if $\Phi $ is a (local or global) cross-section of the bundle $V$, then $\nabla _{X}\Phi $ is also a (local or global, respectively) cross-section of $V$, for $X$ an arbitrary vector field on $M$.
If, for any canonical basis $\{J_{1},J_{2},...,J_{6}\}$ of $V$ in a coordinate neighborhood $U$, the identities \begin{equation} g(J_{i}X,J_{i}Y)=g(X,Y),\text{ }\forall X,Y\in \chi (M),\text{ }\ i=1,2,...,6, \label{2.2} \end{equation} hold, then the triple $(M,g,V)$, where $V$ is an almost Cliffordian structure and $g$ is a Riemannian metric, is called an almost Cliffordian Hermitian manifold or a metric Cliffordian manifold, and $(g,V)$ is called an almost Cliffordian metric structure. Since each $J_{i}$ $(i=1,2,...,6)$ is an almost Hermitian structure with respect to $g$, setting \begin{equation} \Phi _{i}(X,Y)=g(J_{i}X,Y),~\text{ }i=1,2,...,6, \label{2.3} \end{equation} for any vector fields $X$ and $Y$, we see that the $\Phi _{i}$ are 6 local 2-forms. If the Levi-Civita connection $\nabla =\nabla ^{g}$ on $(M,g,V)$ preserves the vector bundle $V$ by parallel transport, then $(M,g,V)$ is called a Cliffordian K\"{a}hler manifold, and the almost Cliffordian structure $V$ of $M$ is called a Cliffordian K\"{a}hler structure. A Cliffordian K\"{a}hler manifold is a Riemannian manifold $(M^{8n},g)$. For example, $\mathbf{R}^{8n}$ is the simplest example of a Cliffordian K\"{a}hler manifold.
Let $\left\{ x_{i},x_{n+i},x_{2n+i},x_{3n+i},x_{4n+i},x_{5n+i},x_{6n+i},x_{7n+i}\right\} ,$ $i=\overline{1,n},$ be a real coordinate system on $\mathbf{R}^{8n}.$ Denote by $\left\{ \frac{\partial }{\partial x_{i}},\frac{\partial }{\partial x_{n+i}},\frac{\partial }{\partial x_{2n+i}},\frac{\partial }{\partial x_{3n+i}},\frac{\partial }{\partial x_{4n+i}},\frac{\partial }{\partial x_{5n+i}},\frac{\partial }{\partial x_{6n+i}},\frac{\partial }{\partial x_{7n+i}}\right\} $ and $\{dx_{i},dx_{n+i},dx_{2n+i},dx_{3n+i},dx_{4n+i},dx_{5n+i},dx_{6n+i},dx_{7n+i}\}$ the natural bases over $\mathbf{R}$ of the tangent space $T(\mathbf{R}^{8n})$ and the cotangent space $T^{\ast }(\mathbf{R}^{8n})$ of $\mathbf{R}^{8n},$ respectively$.$ The structures $J_{1},J_{2},J_{3}$ act on these bases as follows: \begin{equation} \begin{array}{c} J_{1}(\frac{\partial }{\partial x_{i}})=\frac{\partial }{\partial x_{n+i}},% \text{ }J_{1}(\frac{\partial }{\partial x_{n+i}})=-\frac{\partial }{\partial x_{i}},\text{ }J_{1}(\frac{\partial }{\partial x_{2n+i}})=\frac{\partial }{% \partial x_{4n+i}},\text{ }J_{1}(\frac{\partial }{\partial x_{3n+i}})=\frac{% \partial }{\partial x_{5n+i}}, \\ J_{1}(\frac{\partial }{\partial x_{4n+i}})=-\frac{\partial }{\partial x_{2n+i}},\text{ }J_{1}(\frac{\partial }{\partial x_{5n+i}})=-\frac{\partial }{\partial x_{3n+i}},\text{ }J_{1}(\frac{\partial }{\partial x_{6n+i}})=% \frac{\partial }{\partial x_{7n+i}},\text{ }J_{1}(\frac{\partial }{\partial x_{7n+i}})=-\frac{\partial }{\partial x_{6n+i}}, \\ J_{2}(\frac{\partial }{\partial x_{i}})=\frac{\partial }{\partial x_{2n+i}},% \text{ }J_{2}(\frac{\partial }{\partial x_{n+i}})=-\frac{\partial }{\partial x_{4n+i}},\text{ }J_{2}(\frac{\partial }{\partial x_{2n+i}})=-\frac{\partial }{\partial x_{i}},\text{ }J_{2}(\frac{\partial }{\partial x_{3n+i}})=\frac{% \partial }{\partial x_{6n+i}}, \\ J_{2}(\frac{\partial }{\partial x_{4n+i}})=\frac{\partial }{\partial x_{n+i}}% ,\text{ 
}J_{2}(\frac{\partial }{\partial x_{5n+i}})=-\frac{\partial }{% \partial x_{7n+i}},\text{ }J_{2}(\frac{\partial }{\partial x_{6n+i}})=-\frac{% \partial }{\partial x_{3n+i}},\text{ }J_{2}(\frac{\partial }{\partial x_{7n+i}})=\frac{\partial }{\partial x_{5n+i}}, \\ J_{3}(\frac{\partial }{\partial x_{i}})=\frac{\partial }{\partial x_{3n+i}},% \text{ }J_{3}(\frac{\partial }{\partial x_{n+i}})=-\frac{\partial }{\partial x_{5n+i}},\text{ }J_{3}(\frac{\partial }{\partial x_{2n+i}})=-\frac{\partial }{\partial x_{6n+i}},\text{ }J_{3}(\frac{\partial }{\partial x_{3n+i}})=-% \frac{\partial }{\partial x_{i}}, \\ J_{3}(\frac{\partial }{\partial x_{4n+i}})=\frac{\partial }{\partial x_{7n+i}% },\text{ }J_{3}(\frac{\partial }{\partial x_{5n+i}})=\frac{\partial }{% \partial x_{n+i}},\text{ }J_{3}(\frac{\partial }{\partial x_{6n+i}})=\frac{% \partial }{\partial x_{2n+i}},\text{ }J_{3}(\frac{\partial }{\partial x_{7n+i}})=-\frac{\partial }{\partial x_{4n+i}}.% \end{array} \label{2.4} \end{equation} A canonical local basis$\{J_{1}^{\ast },J_{2}^{\ast },J_{3}^{\ast }\}$ of $% V^{\ast }$ of the cotangent space $T^{\ast }(M)$ of manifold $M$ satisfies the condition as follows: \begin{equation} J_{1}^{\ast 2}=J_{2}^{\ast 2}=\text{ }J_{3}^{\ast 2}=J_{1}^{\ast }J_{2}^{\ast }\text{ }J_{3}^{\ast 2}J_{2}^{\ast }J_{1}^{\ast }=-I, \label{2.6} \end{equation}% defining by% \begin{equation} \begin{array}{c} J_{1}^{\ast }(dx_{i})=dx_{n+i},\text{ }J_{1}^{\ast }(dx_{n+i})=-dx_{i},\text{ }J_{1}^{\ast }(dx_{2n+i})=dx_{4n+i},\text{ }J_{1}^{\ast }(dx_{3n+i})=dx_{5n+i}, \\ J_{1}^{\ast }(dx_{4n+i})=-dx_{2n+i},\text{ }J_{1}^{\ast }(dx_{5n+i})=-dx_{3n+i},\text{ }J_{1}^{\ast }(dx_{6n+i})=dx_{7n+i},\text{ }% J_{1}^{\ast }(dx_{7n+i})=-dx_{6n+i} \\ J_{2}^{\ast }(dx_{i})=dx_{2n+i},\text{ }J_{2}^{\ast }(dx_{n+i})=-dx_{4n+i},% \text{ }J_{2}^{\ast }(dx_{2n+i})=-dx_{i},\text{ }J_{2}^{\ast }(dx_{3n+i})=dx_{6n+i}, \\ J_{2}^{\ast }(dx_{4n+i})=dx_{n+i},\text{ }J_{2}^{\ast }(dx_{5n+i})=-dx_{7n+i},\text{ }J_{2}^{\ast 
}(dx_{6n+i})=-dx_{3n+i},\text{ }% J_{2}^{\ast }(dx_{7n+i})=dx_{5n+i}, \\ J_{3}^{\ast }(dx_{i})=dx_{3n+i},\text{ }J_{3}^{\ast }(dx_{n+i})=-dx_{5n+i},% \text{ }J_{3}^{\ast }(dx_{2n+i})=-dx_{6n+i},\text{ }J_{3}^{\ast }(dx_{3n+i})=-dx_{i}, \\ J_{3}^{\ast }(dx_{4n+i})=dx_{7n+i},\text{ }J_{3}^{\ast }(dx_{5n+i})=dx_{n+i},% \text{ }J_{3}^{\ast }(dx_{6n+i})=dx_{2n+i},\text{ }J_{3}^{\ast }(dx_{7n+i})=-dx_{4n+i}.% \end{array} \label{2.7} \end{equation} \section{Hamiltonian Mechanics} Here, we obtain the Hamiltonian equations and the Hamiltonian mechanical systems of quantum and classical mechanics structured on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V).$ Firstly, let $(\mathbf{R}^{8n},V)$ be a standard Cliffordian K\"{a}hler manifold. Denote by $J_{1}^{\ast }$, $\lambda _{J_{1}^{\ast }}$ and $\omega _{J_{1}^{\ast }}$ a component of the almost Cliffordian structure $V^{\ast }$, the Liouville form and a 1-form on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$, respectively$.$ Then \begin{eqnarray*} \omega _{J_{1}^{\ast }} &=&\frac{1}{2}% (x_{i}dx_{i}+x_{n+i}dx_{n+i}+x_{2n+i}dx_{2n+i}+x_{3n+i}dx_{3n+i} \\ &&+x_{4n+i}dx_{4n+i}+x_{5n+i}dx_{5n+i}+x_{6n+i}dx_{6n+i}+x_{7n+i}dx_{7n+i}) \end{eqnarray*}% and \begin{eqnarray*} \lambda _{J_{1}^{\ast }} &=&J_{1}^{\ast }(\omega _{J_{1}^{\ast }})=\frac{1}{2% }(x_{i}dx_{n+i}-x_{n+i}dx_{i}+x_{2n+i}dx_{4n+i}+x_{3n+i}dx_{5n+i} \\ &&-x_{4n+i}dx_{2n+i}-x_{5n+i}dx_{3n+i}+x_{6n+i}dx_{7n+i}-x_{7n+i}dx_{6n+i}). \end{eqnarray*}% It is well-known that if $\Phi _{J_{1}^{\ast }}$ is a closed K\"{a}hler form on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V),$ then $\Phi _{J_{1}^{\ast }}$ is also a symplectic structure on the Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$.
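The action tables of the structures can be sanity-checked numerically. The sketch below (for $n=1$, i.e.\ on $\mathbf{R}^{8}$; not part of the original text) builds the matrices of $J_{1},J_{2},J_{3}$ from Eq.~(\ref{2.4}) and verifies that each squares to $-I$ as in Eq.~(\ref{1.2}), is orthogonal with respect to the Euclidean metric as in Eq.~(\ref{2.2}), and that distinct structures anticommute:

```python
import numpy as np

def make_J(images):
    """Build an 8x8 matrix from the signed images of e0,...,e7 (n = 1 in Eq. (2.4))."""
    J = np.zeros((8, 8))
    for col, (sign, row) in enumerate(images):
        J[row, col] = sign  # J e_col = sign * e_row
    return J

# Signed images of the basis vectors under J1, J2, J3, read off from Eq. (2.4).
J1 = make_J([(1, 1), (-1, 0), (1, 4), (1, 5), (-1, 2), (-1, 3), (1, 7), (-1, 6)])
J2 = make_J([(1, 2), (-1, 4), (-1, 0), (1, 6), (1, 1), (-1, 7), (-1, 3), (1, 5)])
J3 = make_J([(1, 3), (-1, 5), (-1, 6), (-1, 0), (1, 7), (1, 1), (1, 2), (-1, 4)])

I = np.eye(8)
for J in (J1, J2, J3):
    assert np.allclose(J @ J, -I)   # each J_k is an almost complex structure
    assert np.allclose(J.T @ J, I)  # each J_k is orthogonal, cf. Eq. (2.2)
for A, B in ((J1, J2), (J1, J3), (J2, J3)):
    assert np.allclose(A @ B, -B @ A)  # distinct structures anticommute
```

The anticommutation relations together with $J_k^2=-I$ are the Clifford-algebra behaviour underlying the structure relations of the bundle $V$.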
Consider the Hamiltonian vector field $X$ associated with the Hamiltonian energy $\mathbf{H}$, given by% \begin{equation} \begin{array}{c} X=X^{i}\frac{\partial }{\partial x_{i}}+X^{n+i}\frac{\partial }{\partial x_{n+i}}+X^{2n+i}\frac{\partial }{\partial x_{2n+i}}+X^{3n+i}\frac{\partial }{\partial x_{3n+i}} \\ +X^{4n+i}\frac{\partial }{\partial x_{4n+i}}+X^{5n+i}\frac{\partial }{% \partial x_{5n+i}}+X^{6n+i}\frac{\partial }{\partial x_{6n+i}}+X^{7n+i}\frac{% \partial }{\partial x_{7n+i}}.% \end{array} \label{4.2} \end{equation} Then \begin{equation} \Phi _{J_{1}^{\ast }}=-d\lambda _{J_{1}^{\ast }}=dx_{n+i}\wedge dx_{i}+dx_{4n+i}\wedge dx_{2n+i}+dx_{5n+i}\wedge dx_{3n+i}+dx_{7n+i}\wedge dx_{6n+i} \label{4.3} \end{equation}% and% \begin{equation} \begin{array}{c} i_{X}\Phi _{J_{1}^{\ast }}=\Phi _{J_{1}^{\ast }}(X)=X^{n+i}dx_{i}-X^{i}dx_{n+i}+X^{4n+i}dx_{2n+i}-X^{2n+i}dx_{4n+i} \\ +X^{5n+i}dx_{3n+i}-X^{3n+i}dx_{5n+i}+X^{7n+i}dx_{6n+i}-X^{6n+i}dx_{7n+i}.% \end{array} \label{4.4} \end{equation} Moreover, the differential of the Hamiltonian energy is given by% \begin{equation} \begin{array}{c} d\mathbf{H}=\frac{\partial \mathbf{H}}{\partial x_{i}}dx_{i}+\frac{\partial \mathbf{H}}{\partial x_{n+i}}dx_{n+i}+\frac{\partial \mathbf{H}}{\partial x_{2n+i}}dx_{2n+i}+\frac{\partial \mathbf{H}}{\partial x_{3n+i}}dx_{3n+i} \\ +\frac{\partial \mathbf{H}}{\partial x_{4n+i}}dx_{4n+i}+\frac{\partial \mathbf{H}}{\partial x_{5n+i}}dx_{5n+i}+\frac{\partial \mathbf{H}}{\partial x_{6n+i}}dx_{6n+i}+\frac{\partial \mathbf{H}}{\partial x_{7n+i}}dx_{7n+i}.% \end{array} \label{4.5} \end{equation} According to \textbf{Eq.}(\ref{1.1}), equating \textbf{Eq. }(\ref{4.4}) and \textbf{Eq. 
}(\ref{4.5}), the Hamiltonian vector field is found as follows:% \begin{equation} \begin{array}{c} X=-\frac{\partial \mathbf{H}}{\partial x_{n+i}}\frac{\partial }{\partial x_{i}}+\frac{\partial \mathbf{H}}{\partial x_{i}}\frac{\partial }{\partial x_{n+i}}-\frac{\partial \mathbf{H}}{\partial x_{4n+i}}\frac{\partial }{% \partial x_{2n+i}}-\frac{\partial \mathbf{H}}{\partial x_{5n+i}}\frac{% \partial }{\partial x_{3n+i}} \\ +\frac{\partial \mathbf{H}}{\partial x_{2n+i}}\frac{\partial }{\partial x_{4n+i}}+\frac{\partial \mathbf{H}}{\partial x_{3n+i}}\frac{\partial }{% \partial x_{5n+i}}-\frac{\partial \mathbf{H}}{\partial x_{7n+i}}\frac{% \partial }{\partial x_{6n+i}}+\frac{\partial \mathbf{H}}{\partial x_{6n+i}}% \frac{\partial }{\partial x_{7n+i}}% \end{array} \label{4.6} \end{equation} Suppose that the curve \begin{equation} \alpha :\mathbf{R}\rightarrow \mathbf{R}^{8n} \label{4.7} \end{equation}% is an integral curve of the Hamiltonian vector field $X$, i.e., \begin{equation} X(\alpha (t))=\overset{.}{\alpha },\,\,t\in \mathbf{R}. \label{4.8} \end{equation}% In local coordinates, we have \begin{equation} \alpha (t)=(x_{i},x_{n+i},x_{2n+i},x_{3n+i},x_{4n+i},x_{5n+i},x_{6n+i},x_{7n+i}) \label{4.9} \end{equation}% and% \begin{equation} \begin{array}{c} \overset{.}{\alpha }(t)=\frac{dx_{i}}{dt}\frac{\partial }{\partial x_{i}}+% \frac{dx_{n+i}}{dt}\frac{\partial }{\partial x_{n+i}}+\frac{dx_{2n+i}}{dt}% \frac{\partial }{\partial x_{2n+i}}+\frac{dx_{3n+i}}{dt}\frac{\partial }{% \partial x_{3n+i}} \\ +\frac{dx_{4n+i}}{dt}\frac{\partial }{\partial x_{4n+i}}+\frac{dx_{5n+i}}{dt}% \frac{\partial }{\partial x_{5n+i}}+\frac{dx_{6n+i}}{dt}\frac{\partial }{% \partial x_{6n+i}}+\frac{dx_{7n+i}}{dt}\frac{\partial }{\partial x_{7n+i}}.% \end{array} \label{4.10} \end{equation}% Considering \textbf{Eq. }(\ref{4.8}), equating \textbf{Eq. }(\ref{4.6}) and \textbf{Eq. 
}(\ref{4.10}), it follows% \begin{equation} \begin{array}{c} \frac{dx_{i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{n+i}},\text{ }% \frac{dx_{n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{i}},\text{ }\frac{% dx_{2n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{4n+i}},\text{ }\frac{% dx_{3n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{5n+i}}, \\ \frac{dx_{4n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{2n+i}},\text{ }% \frac{dx_{5n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{3n+i}},\text{ }% \frac{dx_{6n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{7n+i}},\text{ }% \frac{dx_{7n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{6n+i}}.% \end{array} \label{4.11} \end{equation}% Thus, the equations obtained in \textbf{Eq. }(\ref{4.11}) are seen to be \textit{Hamiltonian equations} with respect to component $J_{1}^{\ast }$ of almost Cliffordian structure $V^{\ast }$ on Cliffordian K\"{a}hler manifold $% (\mathbf{R}^{8n},V),$ and then the triple $(\mathbf{R}^{8n},\Phi _{J_{1}^{\ast }},X)$ is seen to be a \textit{Hamiltonian mechanical system }% on Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$. Secondly, let $(\mathbf{R}^{8n},V)$ be a Cliffordian K\"{a}hler manifold. Suppose that an element of almost Cliffordian structure $V^{\ast }$, a Liouville form and a 1-form on Cliffordian K\"{a}hler manifold $(\mathbf{R}% ^{8n},V)$ are denoted by $J_{2}^{\ast }$, $\lambda _{J_{2}^{\ast }}$ and $% \omega _{J_{2}^{\ast }}$, respectively$.$ Putting \begin{eqnarray*} \omega _{J_{2}^{\ast }} &=&\frac{1}{2}% (x_{i}dx_{i}+x_{n+i}dx_{n+i}+x_{2n+i}dx_{2n+i}+x_{3n+i}dx_{3n+i} \\ &&+x_{4n+i}dx_{4n+i}+x_{5n+i}dx_{5n+i}+x_{6n+i}dx_{6n+i}+x_{7n+i}dx_{7n+i}) \end{eqnarray*}% we have \begin{eqnarray*} \lambda _{J_{2}^{\ast }} &=&J_{2}^{\ast }(\omega _{J_{2}^{\ast }})=\frac{1}{2% }(x_{i}dx_{2n+i}-x_{n+i}dx_{4n+i}-x_{2n+i}dx_{i}+x_{3n+i}dx_{6n+i} \\ &&+x_{4n+i}dx_{n+i}-x_{5n+i}dx_{7n+i}-x_{6n+i}dx_{3n+i}+x_{7n+i}dx_{5n+i}). 
\end{eqnarray*} Assume that $X$ is a Hamiltonian vector field related to Hamiltonian energy $% \mathbf{H}$ and given by \textbf{Eq. }(\ref{4.2}). Considering \begin{equation} \Phi _{J_{2}^{\ast }}=-d\lambda _{J_{2}^{\ast }}=dx_{n+i}\wedge dx_{4n+i}+dx_{2n+i}\wedge dx_{i}+dx_{5n+i}\wedge dx_{7n+i}+dx_{6n+i}\wedge dx_{3n+i}, \label{4.12} \end{equation}% then we calculate% \begin{equation} \begin{array}{c} i_{X}\Phi _{J_{2}^{\ast }}=\Phi _{J_{2}^{\ast }}(X)=X^{n+i}dx_{4n+i}-X^{4n+i}dx_{n+i}+X^{2n+i}dx_{i}-X^{i}dx_{2n+i} \\ +X^{5n+i}dx_{7n+i}-X^{7n+i}dx_{5n+i}+X^{6n+i}dx_{3n+i}-X^{3n+i}dx_{6n+i}.% \end{array} \label{4.13} \end{equation}% According to \textbf{Eq.}(\ref{1.1}), if we equal \textbf{Eq. }(\ref{4.5}) and \textbf{Eq. }(\ref{4.13}), it follows% \begin{equation} \begin{array}{c} X=-\frac{\partial \mathbf{H}}{\partial x_{2n+i}}\frac{\partial }{\partial x_{i}}+\frac{\partial \mathbf{H}}{\partial x_{4n+i}}\frac{\partial }{% \partial x_{n+i}}+\frac{\partial \mathbf{H}}{\partial x_{i}}\frac{\partial }{% \partial x_{2n+i}}-\frac{\partial \mathbf{H}}{\partial x_{6n+i}}\frac{% \partial }{\partial x_{3n+i}} \\ -\frac{\partial \mathbf{H}}{\partial x_{n+i}}\frac{\partial }{\partial x_{4n+i}}+\frac{\partial \mathbf{H}}{\partial x_{7n+i}}\frac{\partial }{% \partial x_{5n+i}}+\frac{\partial \mathbf{H}}{\partial x_{3n+i}}\frac{% \partial }{\partial x_{6n+i}}-\frac{\partial \mathbf{H}}{\partial x_{5n+i}}% \frac{\partial }{\partial x_{7n+i}}% \end{array} \label{4.14} \end{equation} Considering \textbf{Eq. }(\ref{4.8}), \textbf{Eq. }(\ref{4.10}) and\textbf{\ Eq. 
}(\ref{4.14}) are equal, so we find the equations% \begin{equation} \begin{array}{c} \frac{dx_{i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{2n+i}},\text{ }% \frac{dx_{n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{4n+i}},\text{ }% \frac{dx_{2n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{i}},\text{ }% \frac{dx_{3n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{6n+i}}, \\ \frac{dx_{4n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{n+i}},\text{ }% \frac{dx_{5n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{7n+i}},\text{ }% \frac{dx_{6n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{3n+i}},\text{ }% \frac{dx_{7n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{5n+i}}.% \end{array} \label{4.15} \end{equation}% In the end, the equations obtained in \textbf{Eq. }(\ref{4.15}) are \textit{Hamiltonian equations} with respect to the component $J_{2}^{\ast }$ of the standard almost Cliffordian structure $V^{\ast }$ on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V),$ and then the triple $(\mathbf{R}^{8n},\Phi _{J_{2}^{\ast }},X)$ is a \textit{Hamiltonian mechanical system} on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$. Thirdly, let $(\mathbf{R}^{8n},V)$ be a standard Cliffordian K\"{a}hler manifold. 
By $J_{3}^{\ast }$, $\lambda _{J_{3}^{\ast }}$ and $\omega _{J_{3}^{\ast }},$ we denote a component of the almost Cliffordian structure $V^{\ast }$, the Liouville form and a 1-form on the Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$, respectively$.$ Let $\omega _{J_{3}^{\ast }}$ be given by \begin{eqnarray*} \omega _{J_{3}^{\ast }} &=&\frac{1}{2}% (x_{i}dx_{i}+x_{n+i}dx_{n+i}+x_{2n+i}dx_{2n+i}+x_{3n+i}dx_{3n+i} \\ &&+x_{4n+i}dx_{4n+i}+x_{5n+i}dx_{5n+i}+x_{6n+i}dx_{6n+i}+x_{7n+i}dx_{7n+i}) \end{eqnarray*}% Then it holds \begin{eqnarray*} \lambda _{J_{3}^{\ast }} &=&J_{3}^{\ast }(\omega _{J_{3}^{\ast }})=\frac{1}{2% }(x_{i}dx_{3n+i}-x_{n+i}dx_{5n+i}-x_{2n+i}dx_{6n+i}-x_{3n+i}dx_{i} \\ &&+x_{4n+i}dx_{7n+i}+x_{5n+i}dx_{n+i}+x_{6n+i}dx_{2n+i}-x_{7n+i}dx_{4n+i}). \end{eqnarray*}% It is well-known that if $\Phi _{J_{3}^{\ast }}$ is a closed K\"{a}hler form on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V),$ then $\Phi _{J_{3}^{\ast }}$ is also a symplectic structure on the Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$. Consider the Hamiltonian vector field $X$ associated with the Hamiltonian energy $\mathbf{H}$, given by \textbf{Eq. }(\ref{4.2}). Taking into account \begin{equation} \Phi _{J_{3}^{\ast }}=-d\lambda _{J_{3}^{\ast }}=dx_{3n+i}\wedge dx_{i}+dx_{n+i}\wedge dx_{5n+i}+dx_{2n+i}\wedge dx_{6n+i}+dx_{7n+i}\wedge dx_{4n+i}, \label{4.17} \end{equation}% we find% \begin{equation} \begin{array}{c} i_{X}\Phi _{J_{3}^{\ast }}=\Phi _{J_{3}^{\ast }}(X)=X^{3n+i}dx_{i}-X^{i}dx_{3n+i}+X^{n+i}dx_{5n+i}-X^{5n+i}dx_{n+i} \\ +X^{2n+i}dx_{6n+i}-X^{6n+i}dx_{2n+i}+X^{7n+i}dx_{4n+i}-X^{4n+i}dx_{7n+i}.% \end{array} \label{4.18} \end{equation}% According to \textbf{Eq.}(\ref{1.1}), \textbf{Eq. }(\ref{4.5}) and \textbf{% Eq. 
}(\ref{4.18}) are equated, and we obtain the Hamiltonian vector field given by% \begin{equation} \begin{array}{c} X=-\frac{\partial \mathbf{H}}{\partial x_{3n+i}}\frac{\partial }{\partial x_{i}}+\frac{\partial \mathbf{H}}{\partial x_{5n+i}}\frac{\partial }{% \partial x_{n+i}}+\frac{\partial \mathbf{H}}{\partial x_{6n+i}}\frac{% \partial }{\partial x_{2n+i}}+\frac{\partial \mathbf{H}}{\partial x_{i}}% \frac{\partial }{\partial x_{3n+i}} \\ -\frac{\partial \mathbf{H}}{\partial x_{7n+i}}\frac{\partial }{\partial x_{4n+i}}-\frac{\partial \mathbf{H}}{\partial x_{n+i}}\frac{\partial }{% \partial x_{5n+i}}-\frac{\partial \mathbf{H}}{\partial x_{2n+i}}\frac{% \partial }{\partial x_{6n+i}}+\frac{\partial \mathbf{H}}{\partial x_{4n+i}}% \frac{\partial }{\partial x_{7n+i}}.% \end{array} \label{4.19} \end{equation} Taking into account \textbf{Eq. }(\ref{4.8}) and equating \textbf{Eq. }(\ref{4.10}) and \textbf{Eq. }(\ref{4.19}) yields% \begin{equation} \begin{array}{c} \frac{dx_{i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{3n+i}},\text{ }% \frac{dx_{n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{5n+i}},\text{ }% \frac{dx_{2n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{6n+i}},\text{ }% \frac{dx_{3n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{i}}, \\ \frac{dx_{4n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{7n+i}},\text{ }% \frac{dx_{5n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{n+i}},\text{ }% \frac{dx_{6n+i}}{dt}=-\frac{\partial \mathbf{H}}{\partial x_{2n+i}},\text{ }% \frac{dx_{7n+i}}{dt}=\frac{\partial \mathbf{H}}{\partial x_{4n+i}}.% \end{array} \label{4.20} \end{equation}% Finally, the equations obtained in \textbf{Eq. 
}(\ref{4.20}) are the \textit{Hamiltonian equations} with respect to the component $J_{3}^{\ast }$ of the almost Cliffordian structure $V^{\ast }$ on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V),$ and the triple $(\mathbf{R}^{8n},\Phi _{J_{3}^{\ast }},X)$ is a \textit{Hamiltonian mechanical system} on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$. \section{Conclusion} The formalism of Hamiltonian mechanics has been described intrinsically, taking into account the basis $\{J_{1}^{\ast },J_{2}^{\ast },J_{3}^{\ast }\}$ of the almost Cliffordian structure $V^{\ast }$ on the standard Cliffordian K\"{a}hler manifold $(\mathbf{R}^{8n},V)$. Hamiltonian models are a very important tool since they provide a simple method for describing mechanical systems. In solving problems in classical mechanics, rotational mechanical systems then become easily usable models. Since, as is well-known, physical phenomena do not take place all over the space, a new model for dynamic systems on subspaces is needed. Therefore, equations (\ref{4.11}), (\ref{4.15}) and (\ref{4.20}) are only a first step towards understanding how Cliffordian geometry can be used in solving problems in different physical areas. For further research, the Hamiltonian vector fields derived here are suggested for dealing with problems in the electrical, magnetic and gravitational fields of quantum and classical mechanics.
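As a small numerical illustration (a sketch, not from the paper: we take $n=1$ and a hypothetical quadratic Hamiltonian $\mathbf{H}=\frac{1}{2}\sum_{k}x_{k}^{2}$), the equations (\ref{4.11}) can be written compactly as $\dot{x}=J_{1}\nabla \mathbf{H}$ with $J_{1}$ the matrix of the structure from Eq.~(\ref{2.4}); since $J_{1}$ is skew-symmetric, the energy is conserved along the flow:

```python
import numpy as np

# J_1 on R^8 (n = 1), with columns read off from Eq. (2.4):
# J1 e0 = e1, J1 e1 = -e0, J1 e2 = e4, J1 e3 = e5,
# J1 e4 = -e2, J1 e5 = -e3, J1 e6 = e7, J1 e7 = -e6.
J1 = np.zeros((8, 8))
for a, b in [(0, 1), (2, 4), (3, 5), (6, 7)]:
    J1[b, a] = 1.0   # J1 e_a = e_b
    J1[a, b] = -1.0  # J1 e_b = -e_a

assert np.allclose(J1 @ J1, -np.eye(8))  # almost complex structure
assert np.allclose(J1.T, -J1)            # skew-symmetry => energy conservation

def hamiltonian(x):
    return 0.5 * np.dot(x, x)  # hypothetical quadratic energy

def vector_field(x):
    # Eq. (4.11) written compactly: X = J1 grad(H), and grad(H) = x here
    return J1 @ x

# Integrate the flow with a classical RK4 step and watch the energy.
x = np.arange(1.0, 9.0)
h0, dt = hamiltonian(x), 1e-2
for _ in range(1000):
    k1 = vector_field(x)
    k2 = vector_field(x + 0.5 * dt * k1)
    k3 = vector_field(x + 0.5 * dt * k2)
    k4 = vector_field(x + dt * k3)
    x = x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

assert abs(hamiltonian(x) - h0) < 1e-6  # H conserved up to integration error
```

The same check applies verbatim to (\ref{4.15}) and (\ref{4.20}) with the matrices of $J_{2}$ and $J_{3}$, which are likewise skew-symmetric.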
\section{Introduction} As pointed out by Svante Janson in his seminal work \cite{Janson01}, in many random combinatorial problems, the interesting statistic is the sum of independent and identically distributed (i.i.d.)\ random variables conditioned on some exogenous integer-valued random variable. In general, the exogenous random variable is itself a sum of integer-valued random variables. Here, we are interested in the law of $N^{-1} (Y_1 + \dots + Y_N)$ conditioned on a specific value of $X_1 + \dots + X_N$, that is to say, in the conditional distribution \[ \mathcal{L}_N \mathrel{\mathop:}= \mathcal{L}(N^{-1} (Y_1 + \dots + Y_N) \ |\ X_1 + \dots + X_N = m) , \] where $m$ and $N$ are integers and the $(X_i, Y_i)$ for $1 \leqslant i \leqslant N$ are i.i.d.\ copies of a vector $(X, Y)$ of random variables with $X$ integer-valued. In \cite{Janson01}, Janson proves a general central limit theorem (with convergence of all moments) for this kind of conditional distribution under some reasonable assumptions and gives several applications to classical combinatorial problems: occupancy in urns, hashing with linear probing, random forests, branching processes, etc. Following this work, one natural question arises: is it possible to obtain a general Berry-Esseen inequality for these models? The first Berry-Esseen inequality for a conditional model is given by Malcolm P.\ Quine and John Robinson in \cite{QR82}. They study the particular case of the occupancy problem, i.e., the case when the random variable $X$ is Poisson distributed and $Y = \mathbbm{1}_{\{ X = 0 \} }$. To the best of our knowledge, it is the only result in this direction for this kind of conditional distribution. Our paper is organized as follows. In Section \ref{sec:BE}, we present the model and state our main results (Theorems \ref{th:BE_cond_strong} and \ref{th:BE_cond_strong_U}). In Section \ref{sec:example}, we describe classical examples. The last section is dedicated to the proofs. 
\section{Conditional Berry-Esseen inequality}\label{sec:BE} For all $n \geqslant 1$, we consider a vector of random variables $(\rva{X}{n},\rva{Y}{n})$ such that $\rva{X}{n}$ is integer-valued and $\rva{Y}{n}$ real-valued. Let $N_n$ be a natural number such that $N_n \to \infty$ as $n$ goes to infinity. Let $(\rva[i]{X}{n},\rva[i]{Y}{n})_{ 1\leqslant i\leqslant N_n}$ be an i.i.d.\ sample distributed as $(\rva{X}{n},\rva{Y}{n})$ and define \[ \rva[k]{S}{n} \mathrel{\mathop:}= \sum_{i=1}^{k} \rva[i]{X}{n} \quad \text{and} \quad \rva[k]{T}{n}\mathrel{\mathop:}=\sum_{i=1}^{k} \rva[i]{Y}{n}, \] for $k\in \intervallentff{1}{N_n}$. To lighten notation, define $S_n \mathrel{\mathop:}= \rva[N_n]{S}{n}$ and $T_n \mathrel{\mathop:}= \rva[N_n]{T}{n}$. Let $m_n \in \ensnombre{Z}$ be such that $\mathbb{P}(S_n = m_n) > 0$. The purpose of the paper is to prove a Berry-Esseen inequality for the conditional distributions \[ \mathcal{L}(U_n) \mathrel{\mathop:}= \mathcal{L}( T_n | S_n = m_n) . \] \begin{hypothesis}\label{hyp} \renewcommand{\theenumi}{(A\arabic{enumi})} \renewcommand{\labelenumi}{\theenumi} Suppose that there exist positive constants $c_1$, $\tilde{c}_2$, $c_2$, $c_3$, $\tilde{c}_4$, $c_4$, $c_5$, $c_6$, $c_7$, and $\eta_0$, such that: \begin{enumerate} \setcounter{enumi}{\theassumption} \item \label{ass:tll} $ \gamma_n\mathrel{\mathop:}= 2\pi \sd{\rva{X}{n}} N_n^{1/2}\mathbb{P}(S_n = m_n) \geqslant c_1$; \item \label{ass:var_X} $\tilde{c}_2 \leqslant \sd{\rva{X}{n}} \mathrel{\mathop:}= \Var\left(\rva{X}{n}\right)^{1/2} \leqslant c_2$; \item \label{ass:rho_X} $\tm{\rva{X}{n}} \mathrel{\mathop:}= \mathbb{E}\bigl[ \abs{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}^3 \bigr] \leqslant c_3 \sd{\rva{X}{n}}^3$; \item \label{ass:var_Y} $\tilde{c}_4 \leqslant \sd{\rva{Y}{n}} \mathrel{\mathop:}= \Var\left(\rva{Y}{n}\right)^{1/2} \leqslant c_4$; \item \label{ass:rho_Y} $\tm{\rva{Y}{n}} \mathrel{\mathop:}= \mathbb{E}\bigl[\abs{\rva{Y}{n}-\mathbb{E}[\rva{Y}{n}]}^3\bigr] \leqslant c_5 
\sd{\rva{Y}{n}}^3$; \item \label{ass:corr} the correlations $r_n \mathrel{\mathop:}= \Cov\left(\rva{X}{n},\rva{Y}{n}\right) \sd{\rva{X}{n}}^{-1} \sd{\rva{Y}{n}}^{-1}$ satisfy $|r_n| \leqslant c_6 < 1$; \item \label{ass:fc_XY} for $\rva{Y'}{n} \mathrel{\mathop:}= \rva{Y}{n} -\mathbb{E}[\rva{Y}{n}] - \Cov(\rva{X}{n},\rva{Y}{n}) \sd{\rva{X}{n}}^{-2} (\rva{X}{n}-\mathbb{E}[\rva{X}{n}] )$, for all $s \in \intervalleff{-\pi}{\pi}$, and for all $t \in \intervalleff{-\eta_0}{\eta_0}$, \[ \abs{\mathbb{E}\bigl[ e^{i(s\rva{X}{n} + t\rva{Y'}{n})} \bigr]} \leqslant 1 - c_7 \bigl( \sd{\rva{X}{n}}^2 s^2 + \sd{\rva{Y'}{n}}^2 t^2 \bigr) . \] \end{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \renewcommand{\labelenumi}{\theenumi.} \end{hypothesis} Obviously, Assumption \ref{hyp} is very close to the set of assumptions of the central limit theorem established in \cite[Theorem 2.3]{Janson01}. In particular, \ref{ass:tll} is a consequence of $m_n = N_n\mathbb{E}[\rva{X}{n}]+O\bigl(\sd{\rva{X}{n}} N_n^{1/2}\bigr)$, \ref{ass:rho_X}, and \ref{ass:fc_XY} (see the proof of Theorem 2.3 in \cite{Janson01}). By \cite[Lemma 4.1]{Janson01}, $\sd{\rva{X}{n}}^2 \leqslant 4 \mathbb{E}[\abs{X - \mathbb{E}[X]}^3]$, so $\tilde{c}_2$ can be chosen as $1/(4c_3)$. \ref{ass:corr} is not very restrictive and holds in the examples provided in Section \ref{sec:example}. Following \cite{Janson01}, we introduce $\rva{Y'}{n}$ in \ref{ass:fc_XY} in order to work with a centered variable uncorrelated with $\rva{X}{n}$. If $(X, Y')$ is a vector of centered and uncorrelated random variables, then \begin{align*} \abs{\mathbb{E}\bigl[ e^{i(sX + tY')} \bigr]} & = 1 - \frac{1}{2} \bigl( \sd{X}^2 s^2 + \sd{Y'}^2 t^2 \bigr) + o(s^2 + t^2) , \end{align*} so \ref{ass:fc_XY} is reasonable if the vectors $(\rva{X}{n}, \rva{Y'}{n})$ are identically distributed. 
\begin{proposition}\label{cor:cv_distrib} Assume that \[ m_n = N_n\mathbb{E}[\rva{X}{n}]+O\bigl(\sd{\rva{X}{n}} N_n^{1/2}\bigr), \] that $(\rva{X}{n}, \rva{Y}{n})$ converges in distribution to $(X, Y)$ as $n \to \infty$, and that, for every fixed $r > 0$, \[ \limsup_{n \to \infty} \mathbb{E}\left[|\rva{X}{n}|^r\right] < \infty \quad \text{and} \quad \limsup_{n \to \infty} \mathbb{E}\left[|\rva{Y}{n}|^r\right] < \infty . \] Suppose further that the distribution of $X$ has span 1 and that $Y$ is not almost surely equal to an affine function $c+dX$ of $X$. Then, Assumption \ref{hyp} is satisfied. \end{proposition} The proof is omitted since the proposition follows from Corollary 2.1 and Theorem 2.3 in \cite{Janson01}. \begin{theorem}\label{th:BE_cond_strong} Under Assumption \ref{hyp}, $\tau_n^2 \mathrel{\mathop:}= \sd{\rva{Y}{n}}^2 (1-r_n^2) > 0$ and we have \begin{equation}\label{eq:be} \sup_{x \in \ensnombre{R}} \left|\mathbb{P}\left(\frac{U_n-N_n \mathbb{E}\left[\rva{Y}{n}\right]-r_n \sd{\rva{Y}{n}}\sd{\rva{X}{n}}^{-1}(m_n - N_n\mathbb{E}\left[\rva{X}{n}\right])}{N_n^{1/2}\tau_n}\leqslant x \right)-\Phi(x)\right|\leqslant \frac{C}{N_n^{1/2}}, \end{equation} where $\Phi$ denotes the standard normal cumulative distribution function and $C$ is a positive constant that only depends on $\tilde{c}_2$, $c_2$, $c_3$, $\tilde{c}_4$, $c_4$, $c_5$, $c_6$, $c_7$, $\eta_0$, and $c_1$. \end{theorem} Note that the standardization of the variables $U_n$ involved in \eqref{eq:be} is not the natural one. The following results correct this defect of standardization. 
\begin{proposition}\label{prop:moment_U} Under \ref{ass:tll}, \ref{ass:rho_X}, \ref{ass:var_Y}, \ref{ass:rho_Y}, and \ref{ass:fc_XY}, there exist two positive constants $d_1$ and $d_2$ depending only on $c_3$, $c_4$, $c_5$, $c_7$, and $c_1$ such that, for $N_n \geqslant 3$, \begin{equation}\label{eq:moment} \abs{\mathbb{E}\left[U_n\right] - N_n \mathbb{E}[\rva{Y}{n}] - r_n \sd{\rva{Y}{n}} \sd{\rva{X}{n}}^{-1} (m_n - N_n\mathbb{E}[\rva{X}{n}])} \leqslant d_1 \end{equation} and \begin{equation}\label{eq:moment2} \abs{\Var\left(U_n\right) - N_n \tau_n^2} \leqslant d_2 N_n^{1/2}. \end{equation} \end{proposition} \begin{theorem}\label{th:BE_cond_strong_U} Under Assumption \ref{hyp}, we have \begin{equation}\label{eq:be_2} \sup_{x \in \ensnombre{R}} \left|\mathbb{P}\left(\frac{U_n- \mathbb{E}\left[U_n\right]}{\Var\left(U_n\right)^{1/2}}\leqslant x \right)-\Phi(x)\right|\leqslant \frac{\widetilde{C}}{N_n^{1/2}}, \end{equation} where $\widetilde{C}$ is a constant that only depends on $\tilde{c}_2$, $c_2$, $c_3$, $\tilde{c}_4$, $c_4$, $c_5$, $c_6$, $c_7$, $\eta_0$, and $c_1$. \end{theorem} Furthermore, as in \cite{Janson01}, the results of Theorems \ref{th:BE_cond_strong} and \ref{th:BE_cond_strong_U} simplify considerably in the special case when the vector $(\rva{X}{n}, \rva{Y}{n})$ does not depend on $n$, that is to say when we consider an i.i.d.\ sequence instead of a triangular array. This is a consequence of Proposition \ref{cor:cv_distrib}. \section{Classical examples}\label{sec:example} In this section, we describe the examples mentioned in \cite{Janson01} and \cite{Holst79}. Each of them satisfies the assumptions of Proposition \ref{cor:cv_distrib}, as shown in \cite{Janson01}, leading to a Berry-Esseen inequality. \subsection{Occupancy problem}\label{exocc} In the classical occupancy problem, $m$ balls are thrown uniformly at random into $N$ urns. The resulting numbers of balls $(Z_1, \dots, Z_N)$ have a multinomial distribution. 
It is well known that $(Z_1, \dots, Z_N)$ is also distributed as $(X_1, \dots, X_N)$ conditioned on $\{ \sum_{i=1}^N X_i = m \}$, where the random variables $X_i$ are i.i.d., with $X_i \sim \mathcal{P}(\lambda),$ for any arbitrary $\lambda > 0$. The classical occupancy problem studies the number of empty urns $U = \sum_{i=1}^N \mathbbm{1}_{\{Z_i = 0\}}$, which is distributed as $ \sum_{i=1}^N \mathbbm{1}_{\{X_i = 0\}}$ conditioned on $\{ \sum_{i=1}^N X_i = m \}$. Now, if $m = m_n \to \infty$ and $N = N_n \to \infty$ with $m_n/N_n \to \lambda \in \intervalleoo{0}{\infty}$, we can take $\rva{X}{n} \sim \mathcal{P}(\lambda_n)$ with $\lambda_n \mathrel{\mathop:}= m_n/N_n$, $\rva{Y}{n} = \mathbbm{1}_{\{\rva{X}{n} = 0\}}$, and apply Proposition \ref{cor:cv_distrib} to obtain a Berry-Esseen inequality for $U_n = \sum_{i=1}^{N_n} \mathbbm{1}_{\{Z_i = 0\}}$. \begin{remark} In \cite{QR82}, the authors prove a Berry-Esseen inequality for the occupancy problem in a more general setting: the probability of landing in each urn may be different. The tools they developed will be used in the sequel to prove our results. \end{remark} \begin{remark} Here, we need a result for triangular arrays, and not only for i.i.d.\ sequences. Indeed, if we took $\rva{X}{n} = X$ with $X \sim \mathcal{P}(\lambda)$, we would only have \[ m_n = N_n(\lambda + o(1)) = N_n \mathbb{E}[\rva{X}{n}] + o(N_n) . \] But Proposition \ref{cor:cv_distrib} requires \[ m_n = N_n \mathbb{E}[X] + O(N_n^{1/2}) , \] which is stronger. This remark applies to the following examples as well. \end{remark} \subsection{Bose-Einstein statistics} This example is borrowed from \cite{Holst79} (see also \cite{Feller68}). Consider $N$ urns and put $m$ indistinguishable balls in the urns in such a way that each distinguishable outcome has the same probability $1/ \binom{m+N-1}{m}$. Let $Z_k$ be the number of balls in the $k$\th{} urn. 
It is well known that $(Z_1, \dots, Z_N)$ is distributed as $(X_1, \dots, X_N)$ conditioned on $\{\sum_{i=1}^N X_i = m\}$, where the random variables $X_i$ are i.i.d., with $X_i \sim \mathcal{G}(p),$ for any arbitrary $p \in \intervalleoo{0}{1}$. If $m = n$ and $N = N_n \to \infty$ with $N_n/n \to p$, take $\rva{X}{n} \sim \mathcal{G}(p_n)$ with $p_n = N_n/n$ to obtain a Berry-Esseen inequality for any sequence of variables of the type $U_n = \sum_{i=1}^{N_n} f(Z_i)$. \subsection{Branching processes} Consider a Galton-Watson process, beginning with one individual, where the number of children of an individual is given by a random variable $X$ having finite moments. Assume further that $\mathbb{E}[X]=1$. We number the individuals as they appear. Let $X_i$ be the number of children of the $i$\th{} individual and $S_k \mathrel{\mathop:}= \sum_{i=1}^k X_i$. It is well known (see \cite[Example 3.4]{Janson01} and the references therein) that the total progeny is $N \geqslant 1$ if and only if \begin{equation} \label{GW1} \forall k \in \{ 0, \dots , N-1 \} \quad S_k \geqslant k \quad \text{and} \quad S_N = N-1 . \end{equation} This type of conditioning differs from the one studied in the present paper but, by \cite[Corollary 2]{Wendel75} and \cite[Example 3.4]{Janson01}, if we ignore the cyclic order of $X_1, \dots, X_N$, then $X_1, \dots, X_N$ have the same distribution conditioned on \eqref{GW1} as conditioned on $\{ S_N = N-1 \}$. Applying Proposition \ref{cor:cv_distrib} with $N = n$ and $m = n-1$, we obtain a Berry-Esseen inequality for any sequence of variables $U_n$ distributed as $T_n = \sum_{i=1}^n f(X_i)$ conditioned on $\{ S_n = n-1 \}$. For instance, if $f(x) = \mathbbm{1}_{\{x = 3\}}$, $U_n$ is the number of individuals with three children given that the total progeny is $n$. \subsection{Random forests} Consider a uniformly distributed random labeled rooted forest with $m$ vertices and $N$ roots with $N < m$. 
Without loss of generality, we may assume that the vertices are $1, \dots, m$ and, by symmetry, that the roots are the first $N$ vertices. Following \cite{Janson01}, this model can be realized as follows. The sizes of the $N$ trees in the forest are distributed as $(X_1, \dots , X_N)$ conditioned on $\{ \sum_{i=1}^N X_i = m \}$, where the random variables $X_i$ are i.i.d.\ and Borel distributed with any arbitrary parameter $\mu \in \intervalleoo{0}{1}$, i.e. \[ \mathbb{P}(X_i = l) = e^{-\mu l} \frac{(\mu l)^{l-1}}{l!} , \quad l \geqslant 1 \] (see, e.g., \cite{FPV98} or \cite{Janson01a} for more details). Then, the $i$\th{} tree is drawn uniformly among the trees of size $X_i$. Proposition \ref{cor:cv_distrib} provides a Berry-Esseen inequality for any sequence of variables of the type $U_n = \sum_{i=1}^{N_n} f(Z_i)$ where $N_n \to \infty$ and $Z_1$, ..., $Z_{N_n}$ are the sizes of the trees in the forest. For instance, if $f(x) = \mathbbm{1}_{\{ x = K \}}$, $U_n$ is the number of trees of size $K$ in the forest (see, e.g., \cite{Kolchin84,Pavlov77,Pavlov96}). \subsection{Hashing with linear probing} Hashing with linear probing is a classical model in theoretical computer science that appeared in the 60's. It was first studied from a mathematical point of view in \cite{Knuth74}. For more details on the model, we refer to \cite{FPV98, Janson01a, Marckert01-1, Chassaing02, ChF03, Janson05}. The model describes the following experiment. One throws $n$ balls sequentially into $m$ urns at random with $m > n$; the urns are arranged in a circle and numbered clockwise. A ball that lands in an occupied urn is moved to the next empty urn, always moving clockwise. The length of the move is called the displacement of the ball and we are interested in the sum of all displacements which is a random variable denoted $d_{m,n}$. After throwing all balls, there are $N \mathrel{\mathop:}= m-n$ empty urns. These divide the occupied urns into blocks of consecutive urns. 
We consider that the empty urn following a block belongs to this block. Following \cite{Knuth98a,FPV98}, Janson \cite{Janson01a} proves that the lengths of the blocks and the sums of displacements inside each block are distributed as $(X_1, Y_1)$, ..., $(X_N, Y_N)$ conditioned on $\{ \sum_{i=1}^N X_i = m \}$, where the random vectors $(X_i, Y_i)$ are i.i.d.\ copies of a vector $(X, Y)$ of random variables: $X$ being Borel distributed with any arbitrary parameter $\mu \in \intervalleoo{0}{1}$ and $Y$ given $\{ X = l \}$ being distributed as $d_{l,l-1}$. In particular, $d_{m, n}$ is distributed as $\sum_{i=1}^N Y_i$ conditioned on $\{ \sum_{i=1}^N X_i = m \}$. If $m = m_n \to \infty$ and $N = N_n = m_n - n \to \infty$ with $n/m_n \to \mu \in \intervalleoo{0}{1}$, we take $\rva{X}{n}$ following the Borel distribution with parameter $\mu_n \mathrel{\mathop:}= n/m_n$ to get a Berry-Esseen inequality for $d_{m_n, n}$, by Proposition \ref{cor:cv_distrib}. \section{Proofs}\label{sec:preuve} Recall that $U_n$ is distributed as $T_n$ conditioned on $\{S_n=m_n\}$. Following the procedure of \cite{Janson01}, we consider the projection \begin{align*} \rva{Y'}{n} = \rva{Y}{n} -\mathbb{E}[\rva{Y}{n}]-\Cov(\rva{X}{n},\rva{Y}{n}) \sd{\rva{X}{n}}^{-2} (\rva{X}{n}-\mathbb{E}[\rva{X}{n}]). \end{align*} Then $\mathbb{E}[\rva{Y'}{n}] = 0$ and $\Cov(\rva{X}{n},\rva{Y'}{n}) = \mathbb{E}[\rva{X}{n} \rva{Y'}{n}] = 0$. Besides, \ref{ass:fc_XY} and \ref{ass:corr} are satisfied by $\rva{Y'}{n}$. By \ref{ass:var_Y} and \ref{ass:corr}, \[ \sd{\rva{Y'}{n}}^2 = \sd{\rva{Y}{n}}^2 (1 - r_n^2) \in [\tilde{c}_4^2(1 - c_6^2), c_4^2], \] so \ref{ass:var_Y} is satisfied by $\rva{Y'}{n}$. 
Finally, by the Minkowski inequality, \ref{ass:rho_X}, \ref{ass:rho_Y}, and the fact that $\abs{r_n} \leqslant 1$, \begin{align*} \norme{\rva{Y'}{n}}_3 & \leqslant \norme{\rva{Y}{n}-\mathbb{E}[\rva{Y}{n}]}_3 + \abs{r_n} \sd{\rva{X}{n}}\sd{\rva{Y}{n}}\sd{\rva{X}{n}}^{-2} \norme{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}_3 \\ & \leqslant \tm{\rva{Y}{n}}^{1/3} + \sd{\rva{Y}{n}} \sd{\rva{X}{n}}^{-1} \tm{\rva{X}{n}}^{1/3} \\ & \leqslant \sd{\rva{Y}{n}}(c_3^{1/3} + c_5^{1/3})\\ & \leqslant \sd{\rva{Y'}{n}}(1-c_6^2)^{-1/2}(c_3^{1/3} + c_5^{1/3}). \end{align*} Hence, $\rva{Y'}{n}$ satisfies \ref{ass:rho_Y}. Consequently, all conditions hold for the vector $(\rva{X}{n},\rva{Y'}{n})$ too. Moreover, \[ T'_n \mathrel{\mathop:}= \sum_{i=1}^{N_n} \rva[i]{Y'}{n} = T_n - N_n\mathbb{E}[\rva{Y}{n}] - \Cov(\rva{X}{n},\rva{Y}{n}) \sd{\rva{X}{n}}^{-2} (S_n-N_n \mathbb{E}[\rva{X}{n}]). \] So, conditioned on $\{S_n=m_n\}$, we have $T'_n=T_n - N_n \mathbb{E}[\rva{Y}{n}] - r_n \sd{\rva{Y}{n}} \sd{\rva{X}{n}}^{-1} (m_n - N_n\mathbb{E}[\rva{X}{n}])$. Hence the conclusions in Theorems \ref{th:BE_cond_strong} and \ref{th:BE_cond_strong_U} for $(\rva{X}{n},\rva{Y}{n})$ and $(\rva{X}{n},\rva{Y'}{n})$ are the same. Thus, it suffices to prove the theorems for $(\rva{X}{n},\rva{Y'}{n})$. In other words, we will henceforth assume that $\mathbb{E}\left[\rva{Y}{n}\right]=\mathbb{E}\left[\rva{X}{n} \rva{Y}{n}\right]=0$, $r_n=0$ and $\tau_n^2=\sd{\rva{Y}{n}}^2$. Moreover, the constants $c_4'$, $\tilde{c}_4'$, $c_5'$, $c_6'$, and $c_7'$ for $(X,Y')$ are linked to those of $(X,Y)$ by the following relations: $c_4'=c_4$, $\tilde{c}_4'=\tilde{c}_4(1-c_6^2)^{1/2}$, $c_5'=(1-c_6^2)^{-3/2}(c_3^{1/3}+c_5^{1/3})^3$, $c_6'=0$, and $c_7'=c_7$. In the proofs, we omit the primes. 
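To fix ideas, here is what this projection looks like in the occupancy example of Subsection \ref{exocc} (a sketch, writing $\lambda$ for $\lambda_n$). There, $\rva{X}{n} \sim \mathcal{P}(\lambda)$ and $\rva{Y}{n} = \mathbbm{1}_{\{\rva{X}{n} = 0\}}$, so $\mathbb{E}[\rva{Y}{n}] = e^{-\lambda}$, $\sd{\rva{X}{n}}^2 = \lambda$, $\sd{\rva{Y}{n}}^2 = e^{-\lambda}(1-e^{-\lambda})$, and $\Cov(\rva{X}{n},\rva{Y}{n}) = -\lambda e^{-\lambda}$, whence \[ \rva{Y'}{n} = \mathbbm{1}_{\{\rva{X}{n} = 0\}} - e^{-\lambda} + e^{-\lambda} (\rva{X}{n} - \lambda) \quad \text{and} \quad \tau_n^2 = \sd{\rva{Y}{n}}^2 (1-r_n^2) = e^{-\lambda} - (1+\lambda) e^{-2\lambda} , \] which is the classical asymptotic variance (per urn) of the number of empty urns. Note also that $r_n^2 = \lambda e^{-\lambda}/(1-e^{-\lambda}) < 1$, so \ref{ass:corr} indeed holds in this example. 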
\medskip The proofs of Theorems \ref{th:BE_cond_strong} and \ref{th:BE_cond_strong_U} intensively rely on the use of Fourier transforms through the functions $\varphi_n$ and $\psi_n$ defined by \begin{equation} \label{def:psi_n} \varphi_n(s,t) \mathrel{\mathop:}= \mathbb{E}\left[\exp\left\{is\left(\rva{X}{n}-\mathbb{E}\left[\rva{X}{n}\right]\right)+it\rva{Y}{n}\right\}\right] \quad \text{and} \quad \psi_n(t) \mathrel{\mathop:}= 2\pi \mathbb{P}(S_n=m_n) \mathbb{E}\left[\exp\left\{itU_n\right\}\right] . \end{equation} The controls of these functions (respectively of their derivatives) needed in the proofs are postponed to Subsection \ref{subsec:tech_res} in Lemmas \ref{lem:bartlett} and \ref{lem:maj_phi_n} (resp.\ in Lemma \ref{lem:maj_der_phi_n}). In particular, \eqref{eq:bartlett}, \eqref{eq:maj_expo_st}, \eqref{eq:maj_der_phi_n}, and \eqref{esp1b_array} will be used several times in the sequel. \subsection{Proof of Theorem \ref{th:BE_cond_strong}} We follow the classical proof of the Berry-Esseen theorem (see e.g.\ \cite{Feller71}) combined with the procedure in \cite{QR82}. As shown in \cite{Loeve55} (page 285) or \cite{Feller71}, the left-hand side of \eqref{eq:be} is dominated by \begin{equation*} \frac{2}{\pi}\int_{0}^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \left|\frac{\psi_n(u\sd{\rva{Y}{n}}^{-1} N_n^{-1/2})}{2\pi\mathbb{P}(S_n=m_n)}-e^{-u^2/2}\right|\frac{du}{u}+\frac{24 \sd{\rva{Y}{n}}^{-1} N_n^{-1/2}}{\eta \pi \sqrt{2\pi}}, \end{equation*} where $\eta > 0$ is arbitrary. We choose to define \begin{equation}\label{eq:eta} \eta \mathrel{\mathop:}= \min \bigg( \frac{2}{9} (c_4 c_5)^{-1}, \eta_0 \bigg)>0. 
\end{equation} From \eqref{eq:bartlett} of Lemma \ref{lem:bartlett} and a Taylor expansion, \begin{align*} &u^{-1}\left|\frac{\psi_n(u\sd{\rva{Y}{n}}^{-1} N_n^{-1/2})}{2\pi\mathbb{P}(S_n=m_n)}-e^{-u^2/2}\right| = u^{-1}e^{-u^2/2}\left|\frac{e^{u^2/2}\psi_n(u\sd{\rva{Y}{n}}^{-1} N_n^{-1/2})}{2\pi\mathbb{P}(S_n=m_n)}-1\right| \\ & \leqslant e^{-u^2/2} \sup_{0\leqslant \theta \leqslant u} \left|\frac{\partial}{\partial t}\left[\frac{e^{t^2/2}\psi_n(t\sd{\rva{Y}{n}}^{-1} N_n^{-1/2})}{2\pi\mathbb{P}(S_n=m_n)}\right]_{t=\theta}\right| \\ & \leqslant \gamma_n^{-1}e^{-u^2/2} \sup_{0\leqslant \theta \leqslant u} \left\{\int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi \sd{\rva{X}{n}} N_n^{1/2}} \left|\frac{\partial}{\partial t}\left[e^{t^2/2}\varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right]_{t=\theta}\right| ds\right\}. \end{align*} By \ref{ass:tll}, $\gamma_n\geqslant c_1$. Now we split the integration domain of $s$ into \[ A_1 \mathrel{\mathop:}= \left\{s:\; |s|< \varepsilon \sd{\rva{X}{n}} N_n^{1/2}\right\} \quad \textrm{and} \quad A_2 \mathrel{\mathop:}= \left\{s:\; \varepsilon \sd{\rva{X}{n}} N_n^{1/2}\leqslant |s| \leqslant \pi \sd{\rva{X}{n}} N_n^{1/2}\right\}, \] where \begin{equation}\label{eq:epsilon} \varepsilon \mathrel{\mathop:}= \min \bigg( \frac{2}{9} (c_2 c_3)^{-1}, \pi \bigg) \end{equation} and decompose \begin{equation*} u^{-1}\left|\frac{\psi_n(u\sd{\rva{Y}{n}}^{-1} N_n^{-1/2})}{2\pi\mathbb{P}(S_n=m_n)}-e^{-u^2/2}\right|\leqslant \sup_{0 \leqslant \theta\leqslant u} \left[I_1(n,u, \theta) + I_2(n,u, \theta)\right], \end{equation*} where \begin{align} I_1(n, u, \theta)&= \gamma_n^{-1} \int_{A_1} e^{-(u^2+s^2)/2} \abs{\frac{\partial}{\partial t} \left[e^{(t^2+s^2)/2}\varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right]_{t=\theta}} ds, \label{def:I_1}\\ I_2(n, u, \theta)&= \gamma_n^{-1} e^{-u^2/2} \int_{A_2} \abs{\frac{\partial}{\partial 
t}\left[e^{t^2/2}\varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right]_{t=\theta}} ds. \label{def:I_2} \end{align} Lemmas \ref{lem:I1} and \ref{lem:I2} state that there exist positive constants $C_1$ and $C_2$, only depending on $\tilde{c}_2$, $c_2$, $c_3$, $c_5$, $c_7$, and $c_1$, such that, for $N_n \geqslant \max(12^3c_3^2, 12^3c_5^2, 2)$, \begin{equation}\label{eq:I1} \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \sup_{0\leqslant \theta \leqslant u} I_1(n, u, \theta) du \leqslant \frac{C_1}{N_n^{1/2}}, \end{equation} and \begin{equation}\label{eq:I2} \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \sup_{0\leqslant \theta \leqslant u} I_2(n, u, \theta)du \leqslant \frac{C_2}{N_n^{1/2}}. \end{equation} So, \[ \sup_{x \in \ensnombre{R}} \left|\mathbb{P}\left(\frac{U_n}{N_n^{1/2}\sd{\rva{Y}{n}}}\leqslant x \right)-\Phi(x)\right| \leqslant \frac{C}{N_n^{1/2}} \] with \begin{equation*} C \mathrel{\mathop:}= \max \biggl( C_1 + C_2 +\frac{24}{\widetilde c_4 \pi \sqrt{2\pi}} \biggl( \min \biggl( \frac{2}{9} (c_4 c_5)^{-1}, \eta_0 \biggr)\biggr)^{-1} , 12^{3/2}c_3, 12^{3/2}c_5, \sqrt{2} \biggr) . \end{equation*} \subsection{Proof of Proposition \ref{prop:moment_U}} \paragraph{Proof of \eqref{eq:moment}} We adapt the proof given in \cite{Janson01}. Using the definition \eqref{def:psi_n} of $\psi_n$, and differentiating under the integral sign of \eqref{eq:bartlett} of Lemma \ref{lem:bartlett}, we have \begin{align*} &\abs{\mathbb{E}\left[U_n\right]} = \abs{\frac{-i\psi_n'(0)}{2\pi \mathbb{P}(S_n=m_n)}} \nonumber\\ & \leqslant \gamma_n^{-1}N_n \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} \abs{\frac{\partial \varphi_n}{\partial t}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)} \cdot \abs{\varphi_n^{N_n-1}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)} ds. 
\end{align*} Using \eqref{esp1b_array} of Lemma \ref{lem:maj_der_phi_n} with $t = 0$, \ref{ass:rho_X}, \ref{ass:var_Y}, and \ref{ass:rho_Y}, we deduce \[ \abs{\frac{\partial \varphi_n}{\partial t}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)} \leqslant \frac{s^2}{2}\frac{\tm{\rva{Y}{n}}^{1/3}\tm{\rva{X}{n}}^{2/3}}{\sd{\rva{X}{n}}^2 N_n} \leqslant \frac{c_3^{2/3} c_4 c_5^{1/3}}{2 N_n} s^2. \] Then using \eqref{eq:maj_expo_st} of Lemma \ref{lem:maj_phi_n} (with $l=1$ and $t = 0$) and for $N_n \geqslant 3$, \[ \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} \abs{\frac{\partial \varphi_n}{\partial t}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)} \cdot \abs{\varphi_n^{N_n-1}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)} ds \leqslant \frac{c_3^{2/3} c_4 c_5^{1/3}}{2 N_n} \int_{-\infty}^{+\infty} s^2 e^{-2 c_7 s^2/3} ds. \] So, \eqref{eq:moment} holds with \[ d_1 \mathrel{\mathop:}= 2^{-1} c_3^{2/3} c_4 c_5^{1/3} c_1^{-1} \int_{-\infty}^{+\infty} s^2 e^{-2 c_7 s^2/3} ds . \] \paragraph{Proof of \eqref{eq:moment2}} Since $\tau_n^2 = \sd{\rva{Y}{n}}^2$ and $\mathbb{E}\left[U_n\right]$ is bounded, it suffices to show that the quantity $\abs{\mathbb{E}\left[U_n^2\right] - N_n \sd{\rva{Y}{n}}^2}$ is bounded by some $d_2' N_n^{1/2}$ to prove \eqref{eq:moment2}. 
Proceeding as before, \begin{align} & \mathbb{E}\left[U_n^2\right] = \frac{-\psi_n''(0)}{2\pi \mathbb{P}(S_n=m_n)}\nonumber\\ & = - \gamma_n^{-1} N_n(N_n-1) \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \left(\frac{\partial \varphi_n}{\partial t}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)\right)^2\varphi_n^{N_n-2}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)ds\label{eq:moment3}\\ & \quad - \gamma_n^{-1} N_n \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \frac{\partial^2 \varphi_n}{\partial t^2}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)\varphi_n^{N_n-1}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)ds\label{eq:moment4} \end{align} where \begin{equation}\label{def:vn} v_n \mathrel{\mathop:}= (m_n - {N_n}\mathbb{E}\left[\rva{X}{n}\right])/(\sd{\rva{X}{n}} N_n^{1/2}). \end{equation} First, by \eqref{esp1b_array} of Lemma \ref{lem:maj_der_phi_n} with $t = 0$ and by \eqref{eq:maj_expo_st} of Lemma \ref{lem:maj_phi_n} (with $l=2$ and $t = 0$), one has, for $N_n \geqslant 3$, \begin{align*} & \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} \abs{\frac{\partial \varphi_n}{\partial t}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)}^2 \abs{\varphi_n^{N_n-2}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)} ds \\ & \qquad \leqslant \frac{c_3^{4/3} c_4^2 c_5^{2/3}}{4 N_n^2} \int_{-\infty}^{+\infty} s^4 e^{- c_7 s^2/3}ds. \end{align*} Hence, by \ref{ass:tll}, the term \eqref{eq:moment3} is bounded by \begin{equation*} d_2'' \mathrel{\mathop:}= \frac{c_3^{4/3} c_4^2 c_5^{2/3}}{4 c_1} \int_{-\infty}^{+\infty} s^4 e^{- c_7 s^2/3}ds. \end{equation*} Second, we study the term \eqref{eq:moment4}. 
We want to show that \[ \Delta_n \mathrel{\mathop:}= \gamma_n^{-1} \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \frac{\partial^2 \varphi_n}{\partial t^2}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}}, 0\right) \varphi_n^{N_n-1}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}}, 0\right)ds + \sd{\rva{Y}{n}}^2 \] is bounded by some $d_2'''/N_n^{1/2}$. By \eqref{eq:bartlett} with $t=0$, \[ \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right) ds = 2 \pi \mathbb{P}(S_n = m_n) \sd{\rva{X}{n}} N_n^{1/2} = \gamma_n, \] so \begin{align*} \Delta_n & = \gamma_n^{-1} \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \left( \frac{\partial^2 \varphi_n}{\partial t^2}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right) + \sd{\rva{Y}{n}}^2 \varphi_n\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)\right) \varphi_n^{N_n-1} \left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)ds \\ &= \gamma_n^{-1} \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi\sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \mathbb{E}\bigg[ {\rva{Y}{n}}^2 f(s) \bigg] \cdot\varphi_n^{N_n-1} \left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},0\right)ds \end{align*} where \[ f(s) = - \left(e^{is\sd{\rva{X}{n}}^{-1} N_n^{-1/2}(\rva{X}{n}-\mathbb{E}[\rva{X}{n}])} -\mathbb{E}\Big[ e^{is\sd{\rva{X}{n}}^{-1} N_n^{-1/2}(\rva{X}{n}-\mathbb{E}[\rva{X}{n}])} \Big]\right). \] Applying Taylor's theorem yields \begin{align*} \abs{f(s)} & \leqslant \abs{s} \sup_u \left\lvert - i\frac{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}{\sd{\rva{X}{n}} N_n^{1/2}} e^{iu\sd{\rva{X}{n}}^{-1} N_n^{-1/2}(\rva{X}{n}-\mathbb{E}[\rva{X}{n}])} \right. \left. 
+ \mathbb{E}\bigg[ i\frac{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}{\sd{\rva{X}{n}} N_n^{1/2}} e^{iu\sd{\rva{X}{n}}^{-1} N_n^{-1/2}(\rva{X}{n}-\mathbb{E}[\rva{X}{n}])} \bigg]\right\rvert \\ & \leqslant \frac{\abs{s}}{N_n^{1/2}} \bigg( \abs{ \frac{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}{\sd{\rva{X}{n}}}} + \mathbb{E}\bigg[ \abs{ \frac{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}{\sd{\rva{X}{n}}}} \bigg] \bigg). \end{align*} Thus, using H\"older's inequality, \begin{align*} \abs{\mathbb{E}[{\rva{Y}{n}}^2 f(s)]} & \leqslant \frac{\abs{s}}{N_n^{1/2}} \mathbb{E}\bigg[ {\rva{Y}{n}}^2 \bigg( \abs{\frac{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}{\sd{\rva{X}{n}}}} + \mathbb{E}\bigg[ \abs{\frac{\rva{X}{n}-\mathbb{E}[\rva{X}{n}]}{\sd{\rva{X}{n}}}} \bigg] \bigg) \bigg] \\ & \leqslant \frac{\sd{\rva{Y}{n}}^2 \abs{s}}{N_n^{1/2}} \bigg( \frac{\tm{\rva{Y}{n}}^{2/3}}{\sd{\rva{Y}{n}}^2} \frac{\tm{\rva{X}{n}}^{1/3}}{\sd{\rva{X}{n}}} + 1 \bigg) \\ & \leqslant \frac{\abs{s} c_4^2}{N_n^{1/2}} \bigg(c_5^{2/3}c_3^{1/3}+1 \bigg) \end{align*} where the last inequality is obtained using \ref{ass:var_X}, \ref{ass:rho_X}, \ref{ass:var_Y}, and \ref{ass:rho_Y}. Now, by \ref{ass:tll} and the upper bound in \eqref{eq:maj_expo_st} (with $l=1$ and $t = 0$), we get, for $N_n\geqslant 3$, \[ \abs{\Delta_n} \leqslant \frac{c_4^2}{c_1 N_n^{1/2}} \bigg(c_5^{2/3}c_3^{1/3}+1 \bigg) \int_{-\infty}^{+\infty} \abs{s} e^{-s^2 c_7 (N_n-1)/N_n} ds \leqslant \frac{d_2'''}{N_n^{1/2}}, \] with \begin{equation*} d_2''' \mathrel{\mathop:}= c_4^2 c_1^{-1}( c_5^{2/3} c_3^{1/3} +1) \int_{-\infty}^{+\infty} \abs{s} e^{-2 s^2 c_7/3} ds. \end{equation*} Finally, \[ \abs{\Var(U_n) - N_n \sd{\rva{Y}{n}}^2} \leqslant (d_1^2 + d_2'' + d_2''') N_n^{1/2} \mathrel{=}: d_2 N_n^{1/2} . \] Then the proof of \eqref{eq:moment2} is complete. 
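For reference (an elementary remark, not needed elsewhere), the Gaussian integrals appearing in the constants $d_1$, $d_2''$, and $d_2'''$ above have closed forms: for $a > 0$, \[ \int_{-\infty}^{+\infty} s^2 e^{-a s^2} \, ds = \frac{\sqrt{\pi}}{2 a^{3/2}}, \qquad \int_{-\infty}^{+\infty} s^4 e^{-a s^2} \, ds = \frac{3\sqrt{\pi}}{4 a^{5/2}}, \qquad \int_{-\infty}^{+\infty} \abs{s} e^{-a s^2} \, ds = \frac{1}{a}, \] so that, for instance, $d_1 = 2^{-1} c_3^{2/3} c_4 c_5^{1/3} c_1^{-1} \cdot \frac{\sqrt{\pi}}{2} \bigl( \frac{3}{2 c_7} \bigr)^{3/2}$. 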
\subsection{Proof of Theorem \ref{th:BE_cond_strong_U}} Write \begin{align*} \abs{\mathbb{P}\left( \frac{U_n - \mathbb{E}[U_n]}{\Var\left(U_n\right)^{1/2}} \leqslant x \right) - \Phi(x)} & \leqslant \abs{\mathbb{P}\left( \frac{U_n}{N_n^{1/2} \sd{\rva{Y}{n}}} \leqslant a_n x + b_n \right) - \Phi(a_n x + b_n)} \\ & \hspace{5cm} + \abs{\Phi(a_n x + b_n) - \Phi(x)} \\ \end{align*} where \[ a_n \mathrel{\mathop:}= \frac{\Var(U_n)^{1/2}}{N_n^{1/2} \sd{\rva{Y}{n}}} \quad \text{and} \quad b_n \mathrel{\mathop:}= \frac{\mathbb{E}[U_n]}{N_n^{1/2} \sd{\rva{Y}{n}}}. \] The previous estimates of $\mathbb{E}[U_n]$ and $\Var(U_n)$ yield, \[ \abs{a_n - 1} \leqslant \abs{a_n^2 - 1} \leqslant d_2 \tilde{c}_4^{-2} N_n^{-1/2} \quad \text{and} \quad \abs{b_n} \leqslant d_1 \tilde{c}_4^{-1} N_n^{-1/2}. \] Then for $N_n^{1/2} \geqslant 2 \tilde{c}_4^{-2} d_2$, $a_n \geqslant 1/2$ and applying Taylor's theorem to $\Phi$, one gets \begin{align*} \abs{\Phi(a_n x + b_n) - \Phi(x)} & \leqslant \abs{(a_n - 1) x + b_n} \sup_t \frac{e^{-t^2/2}}{\sqrt{2\pi}} \\ & \leqslant \frac{N_n^{-1/2}}{\sqrt{2\pi}} \max(d_2 \tilde{c}_4^{-2}, d_1 \tilde{c}_4^{-1}) (\abs{x} + 1) e^{-(|x|/2 - d_1 \tilde{c}_4^{-1})^2/2}\\ \end{align*} the supremum being over $t$ between $x$ and $a_n x + b_n$. The last function in $x$ being bounded, we can define \[ C'\mathrel{\mathop:}= \frac{1}{\sqrt{2\pi}} \max(d_2 \tilde{c}_4^{-2}, d_1 \tilde{c}_4^{-1} ) \sup_{x \in \ensnombre{R}} \Big[(\abs{x} + 1) e^{-(|x|/2 - d_1 \tilde{c}_4^{-1})^2/2} \Big] . \] Finally, we apply \eqref{eq:be} and \eqref{eq:be_2} holds with $\widetilde{C}\mathrel{\mathop:}= C+\max(C', 2\tilde{c}_4^{-2} d_2)$. \subsection{Technical results}\label{subsec:tech_res} Recall that $v_n = (m_n - {N_n}\mathbb{E}\left[\rva{X}{n}\right])/(\sd{\rva{X}{n}} N_n^{1/2})$ and $\gamma_n = 2\pi\mathbb{P}(S_n=m_n) \sd{\rva{X}{n}} N_n^{1/2}$. 
Moreover, \begin{equation*} \varphi_n(s,t) = \mathbb{E}\left[\exp\left\{is\left(\rva{X}{n}-\mathbb{E}\left[\rva{X}{n}\right]\right)+it \rva{Y}{n}\right\}\right] \quad \text{and} \quad \psi_n(t) = 2\pi \mathbb{P}(S_n=m_n) \mathbb{E}\left[\exp\left\{itU_n\right\}\right] . \end{equation*} \begin{lemma}\label{lem:bartlett} One has \begin{equation}\label{eq:bartlett} \psi_n(t)=\frac{1}{\sd{\rva{X}{n}} N_n^{1/2}} \int_{-\pi \sd{\rva{X}{n}} N_n^{1/2}}^{\pi \sd{\rva{X}{n}} N_n^{1/2}} e^{-isv_n} \varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},t\right) ds \end{equation} \end{lemma} \begin{proof} Indeed, since \[ \int_{-\pi}^\pi e^{is(S_n - m_n)} ds = 2 \pi \mathbbm{1}_{\{ S_n = m_n \}}, \] we have \begin{align*} \psi_n(t) & = 2\pi \mathbb{P}(S_n=m_n) \mathbb{E}\left[\exp\left\{itU_n\right\}\right] \\ & = 2\pi \mathbb{E}\left[ \exp\left\{it T_n \right\} \mathbbm{1}_{\{S_n = m_n\}} \right] \\ & = \int_{-\pi}^{\pi} \mathbb{E}\left[\exp\left\{is \left(S_n -m_n\right)+it T_n \right\} \right]ds \\ & = \int_{-\pi}^{\pi} e^{-is\left(m_n-N_n\mathbb{E}\left[\rva{X}{n}\right]\right)}\varphi_n^{N_n}(s,t)ds, \end{align*} which leads to \eqref{eq:bartlett} after the change of variable $s'=s\sd{\rva{X}{n}} N_n^{1/2}$. \end{proof} Now we give controls on the function $\varphi_n$ and its partial derivatives (see Lemmas \ref{lem:maj_phi_n} and \ref{lem:maj_der_phi_n}). \begin{lemma} \label{lem:maj_phi_n} Under \ref{ass:fc_XY}, for any integer $l \geqslant 0$, $\abs{s} \leqslant \pi \sd{\rva{X}{n}} N_n^{1/2}$, and $\abs{t} \leqslant \eta_0 \sd{\rva{Y}{n}} N_n^{1/2}$, one gets \begin{equation} \label{eq:maj_expo_st} \abs{\varphi_n^{N_n-l} \bigg(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}}, \frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}} \bigg)} \leqslant e^{-(s^2+t^2) \cdot c_7 \cdot (N_n - l)/N_n}. \end{equation} \end{lemma} \begin{proof} The proof is a mere consequence of the inequality $1 + x \leqslant e^x$ that holds for any $x\in\ensnombre{R}$. 
\end{proof} \begin{lemma} \label{lem:maj_der_phi_n} For any $s$ and $t$, one has \begin{equation} \label{eq:maj_der_phi_n} \left|\frac{\partial \varphi_n}{\partial t} \left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right| \leqslant \frac{\sd{\rva{Y}{n}}}{N_n^{1/2}} (|s|+|t|) ; \end{equation} and \begin{align}\label{esp1b_array} \left\lvert \frac{\partial \varphi_n}{\partial t} \left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}}, \frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}} \right)\right\rvert & \leqslant \frac{\sd{\rva{Y}{n}}}{N_n^{1/2}} |t| + \frac{\sd{\rva{Y}{n}}}{N_n} \bigg[ \frac{s^2}{2} \bigg( \frac{\tm{\rva{X}{n}}}{\sd{\rva{X}{n}}^3}\bigg)^{2/3} \bigg( \frac{\tm{\rva{Y}{n}}}{\sd{\rva{Y}{n}}^3}\bigg)^{1/3} \nonumber \\ & + \abs{st} \bigg( \frac{\tm{\rva{X}{n}}}{\sd{\rva{X}{n}}^3}\bigg)^{1/3} \bigg( \frac{\tm{\rva{Y}{n}}}{\sd{\rva{Y}{n}}^3}\bigg)^{2/3} + \frac{t^2}{2} \bigg( \frac{\tm{\rva{Y}{n}}}{\sd{\rva{Y}{n}}^3}\bigg) \bigg]. \end{align} \end{lemma} \begin{proof} We apply Taylor's theorem to the function defined by \[ (s,t) \mapsto f(s,t)=\frac{\partial \varphi_n}{\partial t} \left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right). 
\] We obtain \eqref{eq:maj_der_phi_n} using \[ \left|f(s,t)-f(0,0)\right|\leqslant |s|\sup_{\theta, \theta' \in \intervalleff{0}{1}} \left|\frac{\partial f}{\partial s}\left(\theta s,\theta' t\right)\right|+|t|\sup_{\theta, \theta' \in \intervalleff{0}{1}} \left|\frac{\partial f}{\partial t}\left(\theta s,\theta' t\right)\right| \] and \eqref{esp1b_array} using \begin{align*} \left|f(s,t)-f(0,0)\right| & \leqslant |s| \left|\frac{\partial f}{\partial s}\left(0,0\right)\right|+|t| \left|\frac{\partial f}{\partial t}\left(0,0\right)\right| + \frac{s^2}{2}\sup_{\theta, \theta' \in \intervalleff{0}{1}} \left|\frac{\partial^2 f}{\partial s^2}\left(\theta s,\theta' t\right)\right|\\ & \qquad + |st|\sup_{\theta, \theta' \in \intervalleff{0}{1}} \left|\frac{\partial^2 f}{\partial t\partial s}\left(\theta s,\theta' t\right)\right|+\frac{t^2}{2}\sup_{\theta, \theta' \in \intervalleff{0}{1}} \left|\frac{\partial^2 f}{\partial t^2}\left(\theta s,\theta' t\right)\right|. \end{align*} The partial derivatives of $f$ are estimated by mixed moments of $\rva{X}{n}$ and $\rva{Y}{n}$ and then bounded above by H\"older's inequality. \end{proof} The following lemma is a result due to Quine and Robinson (\cite[Lemma 2]{QR82}). \begin{lemma}\label{lem:lem2} Define \[ l_{1,n} \mathrel{\mathop:}= \tm{\rva{X}{n}} \sd{\rva{X}{n}}^{-3} N_n^{-1/2} \qquad \text{and} \qquad l_{2,n} \mathrel{\mathop:}= \tm{\rva{Y}{n}} \sd{\rva{Y}{n}}^{-3} N_n^{-1/2}. \] If $l_{1,n} \leqslant 12^{-3/2}$ and $l_{2,n} \leqslant 12^{-3/2}$, then, for all \[ (s, t) \in R \mathrel{\mathop:}= \left\{(s,t):\; |s|<\frac{2}{9}l_{1,n}^{-1}, |t|<\frac{2}{9}l_{2,n}^{-1}\right\}, \] we have \begin{align*} \left\lvert \frac{\partial}{\partial t}\bigg[e^{(s^2+t^2)/2} \right. & \left. 
\varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\bigg]\right\rvert \leqslant C_4(|s|+|t|+1)^3(l_{1,n}+l_{2,n})\exp\left\{\frac{11}{24}\left(s^2+t^2\right)\right\}, \end{align*} with $C_4 \mathrel{\mathop:}= 161$. \end{lemma} \begin{remark} We make explicit the constant $C_4$ appearing at the end of the proof of Lemma 2 in \cite{QR82}. For all $v$ and $s$ in $R_2$ as defined in \cite{QR82}, one has \begin{align*} \frac{(\abs{v} + 2\abs{s})}{(\abs{v} + \abs{s} + 1)^3(\ell_{1,n}+\ell_{2,n})} e^{-(v^2 + s^2)/24} & \leqslant 108 \cdot \sqrt{6} \cdot e^{-1/2} \leqslant 161. \end{align*} \end{remark} By \ref{ass:var_X} and \ref{ass:rho_X}, \begin{equation*} l_{1,n} \leqslant c_3 N_n^{-1/2} \leqslant c_2 c_3 \sd{\rva{X}{n}}^{-1} N_n^{-1/2}, \end{equation*} which implies that $\sd{\rva{X}{n}} N_n^{1/2} \leqslant c_2 c_3 l_{1,n}^{-1}$. Similarly, \begin{equation*} l_{2,n} \leqslant c_5 N_n^{-1/2} \leqslant c_4 c_5 \sd{\rva{Y}{n}}^{-1} N_n^{-1/2}, \end{equation*} and $\sd{\rva{Y}{n}} N_n^{1/2} \leqslant c_4 c_5 l_{2,n}^{-1}$. Now we are able to establish \eqref{eq:I1}. \begin{lemma}\label{lem:I1} There exists a positive constant $C_1$, only depending on $c_3$, $c_5$, $c_1$ such that, for $N_n \geqslant 12^3\max(c_3^2, c_5^2)$, \begin{equation*} \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \sup_{0\leqslant \theta \leqslant u} I_1(n, u, \theta) du \leqslant \frac{C_1}{N_n^{1/2}}. \end{equation*} \end{lemma} \begin{proof} The definitions of $\eta$ in \eqref{eq:eta} and $\varepsilon$ in \eqref{eq:epsilon} imply that, for $s\in A_1$ and $u$ and $\theta$ as in the integral in the statement above, one has \begin{align*} |s| & < \varepsilon \sd{\rva{X}{n}} N_n^{1/2} \leqslant \frac{2}{9} l_{1,n}^{-1} \quad \text{and} \quad |\theta| \leqslant |u| \leqslant \eta \sd{\rva{Y}{n}} N_n^{1/2} \leqslant \frac{2}{9} l_{2,n}^{-1}, \end{align*} which ensures that $(s, \theta) \in R$ as specified in Lemma \ref{lem:lem2}. 
Moreover, for $N_n \geqslant 12^3\max(c_3^2, c_5^2)$, we have $l_{1,n} \leqslant 12^{-3/2}$ and $l_{2,n} \leqslant 12^{-3/2}$. Now, applying Lemma \ref{lem:lem2} in \eqref{def:I_1} and using \ref{ass:tll}, we get \begin{align*} \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} & \sup_{0 \leqslant \theta \leqslant u} I_1(n, u, \theta) du \\ & \leqslant \gamma_n^{-1} C_4 (l_{1,n}+l_{2,n}) \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \int_{A_1} (|s|+|u|+1)^3 e^{-(s^2+u^2)/24} ds du \\ & \leqslant N_n^{-1/2} c_1^{-1} C_4 (c_3 + c_5) \int_{\ensnombre{R}^2} (|s|+|u|+1)^3 e^{-(s^2+u^2)/24}dsdu \end{align*} and the result follows with \[ C_1 \mathrel{\mathop:}= c_1^{-1} C_4 (c_3 + c_5) \int_{\ensnombre{R}^2} (|s|+|u|+1)^3 e^{-(s^2+u^2)/24} ds du . \] \end{proof} \begin{remark} Actually, Lemma \ref{lem:I1} is valid as soon as $N_n \geqslant \max(c_3^2, c_5^2)$: the constants in the proof of Lemma 2 in \cite{QR82} can be improved. \end{remark} Now we are able to prove \eqref{eq:I2}. \begin{lemma}\label{lem:I2} There exists a positive constant $C_2$, only depending on $c_1$, $\tilde{c}_2$, $c_2$, $c_3$, and $c_7$, such that, for $N_n \geqslant 2$, \begin{equation*} \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \sup_{0\leqslant \theta \leqslant u} I_2(n, u, \theta)du \leqslant \frac{C_2}{ N_n^{1/2}}. \end{equation*} \end{lemma} \begin{proof} We use the bounds \eqref{eq:maj_expo_st} with $t=\theta$ and $l=1$, \eqref{eq:maj_der_phi_n}, and $\abs{\varphi_n} \leqslant 1$ to get \begin{align*} &\abs{\frac{\partial}{\partial t} \left[e^{t^2/2}\varphi_n^{N_n}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{t}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right]_{t=\theta}}\\ & = e^{\theta^2/2}\left|\varphi_n^{N_n-1}\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{\theta}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right| \cdot\left|\theta \varphi_n\left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{\theta}{\sd{\rva{Y}{n}} N_n^{1/2}}\right) \right. \\ & \hspace{7cm} \left.
+ \frac{N_n^{1/2}}{\sd{\rva{Y}{n}}} \frac{\partial \varphi_n}{\partial t} \left(\frac{s}{\sd{\rva{X}{n}} N_n^{1/2}},\frac{\theta}{\sd{\rva{Y}{n}} N_n^{1/2}}\right)\right|\\ &\leqslant (\abs{s} + 2\abs{\theta})e^{\theta^2/2-(s^2+\theta^2) \cdot c_7(N_n-1)/N_n}, \end{align*} for $s\in A_2$ and $u$ and $\theta$ as in the integral in the statement of the Lemma. Finally, using \eqref{def:I_2}, we get that, for $N_n \geqslant 2$, \begin{align*} & \int_0^{\eta \sd{\rva{Y}{n}} N_n^{1/2}} \sup_{0 \leqslant \theta \leqslant u} I_2(n, u, \theta) du \\ & \leqslant 2 \gamma_n^{-1} \int_0^{+\infty} \int_{\varepsilon \sd{\rva{X}{n}}N_n^{1/2}}^{+\infty} \sup_{0 \leqslant \theta \leqslant u} \bigg[ (s + 2\theta) \exp \bigg( \frac{\theta^2}{2} \bigg( 1 - 2c_7\frac{N_n-1}{N_n} \bigg) \bigg) \bigg] \\ & \hspace{10cm}\cdot e^{-u^2/2-s^2 \cdot c_7(N_n-1)/N_n} ds du \\ & \leqslant 2 c_1^{-1} \int_0^{+\infty} \int_{\varepsilon \sd{\rva{X}{n}}N_n^{1/2}}^{+\infty} (s + 2u) e^{-\min(1, c_7)u^2/2-s^2 c_7/2} ds du \\ & \leqslant e^{-N_n c_7 \varepsilon^2 \sd{\rva{X}{n}}^2/2} \left(\frac{c_1^{-1} c_7^{-1}\sqrt{2\pi}}{\sqrt{\min(1, c_7)}} + \frac{ 4 c_1^{-1}}{\min(1, c_7)} \frac{1}{c_7\varepsilon \sd{\rva{X}{n}} N_n^{1/2}}\right)\\ &\leqslant C_2'e^{-C_2''N_n} \end{align*} where \begin{equation*} C_2' \mathrel{\mathop:}= c_1^{-1}c_7^{-1} \left( \frac{\sqrt{2\pi}}{\sqrt{\min(1, c_7)}} + \frac{4}{\min(1, c_7) \min \bigg( \frac{2}{9} (c_2 c_3)^{-1}, \pi \bigg) \tilde{c}_2} \right) \end{equation*} and $C_2'' \mathrel{\mathop:}= \frac{c_7 \tilde{c}_2^2}{2} \min \bigg( \frac{2}{9} (c_2 c_3)^{-1}, \pi \bigg)^2 $. The result follows, writing \[ C_2' e^{-C_2'' N_n} = \frac{C_2' (C_2'')^{-1/2}}{N_n^{1/2}} (C_2'' N_n)^{1/2} e^{- C_2'' N_n} \leqslant \frac{C_2' (C_2'')^{-1/2}}{N_n^{1/2}} (1/2)^{1/2} e^{-1/2} \mathrel{=}: \frac{C_2}{N_n^{1/2}}, \] since $x^{1/2} e^{-x}$ attains its maximum at $x=1/2$. \end{proof} \bibliographystyle{plain}
\section{Executive Summary} \subsection{Introduction} The discovery of charmonium in the November revolution of 1974 was a watershed moment in the establishment of Quantum Chromodynamics (QCD) as the fundamental field theory of the strong force and, more generally, in the establishment of the Standard Model of particle physics. Since that time, theoretical and experimental investigations in quarkonium physics have continued to be pillars of the international research effort in QCD, and the source of many new insights. There are several reasons for this continuing interest and activity in quarkonium physics. One reason is that the nonrelativistic nature of heavy-quarkonium systems makes it easier to unravel the complicated effects of QCD dynamics. This allows one to use powerful effective-field-theory tools in the theoretical analyses of these systems. Another reason is that some quarkonium systems have particularly clean experimental signatures, which allows detailed studies with high statistics even in the challenging environment of a hadron collider at the energy frontier. Finally, quarkonium systems provide a unique laboratory in which to understand the interplay between perturbative and nonperturbative effects in QCD, because the heavy-quark mass provides a natural boundary between the perturbative and nonperturbative regimes. In this Snowmass White Paper, we describe two aspects of quarkonium physics in which dramatic progress can be expected through an interplay of theory with experiments at the frontiers in high-energy physics: (1) the spectroscopy of quarkonia above the open-heavy-flavor thresholds, and (2) the production of quarkonia at large transverse momentum. Progress on both of these problems will be driven by experiments at both the energy frontier and the intensity frontier. In addition to the dramatic progress that can be expected in these two areas, incremental progress can be expected in many other areas of quarkonium physics. 
See Refs.~\cite{Brambilla:2004wf,Brambilla:2010cs} for detailed accounts of the opportunities in these other areas. \subsection{Activity in Quarkonium Physics \label{sec:QWG}} The strong, continuing interest in quarkonium physics within the high energy physics community is well illustrated by the activities of the Quarkonium Working Group (QWG). The aim of this organization, which consists of about 100 high-energy physicists, is to further interactions between experiment and theory on quarkonium physics, to identify problems at the forefront of quarkonium physics, and to facilitate the solutions of those problems. The QWG has sponsored the writing of two extensive reviews on quarkonium physics in 2004 and in 2010, which have highlighted the frontier issues in theory and experiment \cite{Brambilla:2004wf,Brambilla:2010cs}. The impacts of these reviews are evident in their numbers of citations, which have reached 600 and 400 as of August 2013, respectively. The QWG sponsors an international workshop on quarkonium physics on a regular basis. The most recent QWG workshop was held in Beijing in April 2013, and the next one will be held at CERN in November 2014. Other physics groups also sponsor conferences that are devoted to quarkonium physics, with several such conferences typically taking place each year. The activity in quarkonium physics at the intensity frontier is illustrated dramatically by the impact of the $B$-factory experiments in this area. The most highly cited paper by the Belle collaboration is its 2003 paper on the discovery of the $X(3872)$, which has over 800 citations as of August 2013. Of Belle's 15 most highly cited papers, 5 are on quarkonium physics. The $4^{\rm th}$ most highly cited paper by the BaBar collaboration is their 2005 paper on the discovery of the $Y(4260)$, which has about 470 citations as of August 2013. Of BaBar's 30 most highly cited papers, 9 are on quarkonium physics.
Given that quarkonium physics was barely mentioned as a motivation for these experiments, their impacts in this area have been especially remarkable. There is every reason to expect that the intensity-frontier experiments, such as Belle~II, and the energy-frontier experiments, such as LHCb, will have comparable impacts on quarkonium spectroscopy. One indication of the activity in quarkonium physics at the energy frontier is the number of quarkonium-related papers that have emerged from the experimental collaborations at the LHC. To date, about 250 LHC quarkonium papers have been submitted to the arXiv. While the bulk of these results have been in the areas of production and spectroscopy, some also address other areas of quarkonium physics. This high level of activity can be expected to continue at future runs of the LHC. \subsection{Quarkonium Spectroscopy} The spectroscopy of heavy quarkonia above the open-heavy-flavor threshold is an aspect of quarkonium physics in which dramatic progress can be expected through experiments at the intensity frontier and theoretical efforts to understand the results. Experiments have uncovered entirely new exotic hadronic spectroscopies in the open-charm and open-bottom threshold regions. Understanding the QCD spectrum in these regions is one of the most interesting and exciting challenges to theorists today. Quarkonium systems provide a relatively clean environment to probe strong dynamics in the threshold regions. The lessons learned should be more generally applicable to the more complicated situation with the light hadron spectrum. One of the basic properties of QCD is the hadron spectrum. The elementary constituents of hadrons are quarks ($q$), antiquarks ($\bar q$), and gluons ($g$). Their interactions are described by QCD, a vector $SU(3)$ color force between quarks ($3$) and antiquarks ($\bar 3$) mediated by an octet of gluons. There are six flavors of quarks ($u,d,s,c,b,t$) observed in nature.
The masses of the $u$, $d$, and $s$ quarks are small compared to the scale of QCD ($\Lambda_{\rm QCD}$) while the $c$, $b$, and $t$ quarks are heavy. QCD dynamics simplify in both the $m_{\rm quark} \rightarrow 0$ limit, because of the resulting chiral symmetries, and in the $m_{\rm quark} \rightarrow \infty$ limit, because of heavy quark symmetry and the possibility of using nonrelativistic effective field theories to describe the heavy quarks. Because QCD is confining, only color-singlet hadron states exist in nature. The low-lying spectrum of hadrons is usually classified by valence (flavor) quark content. In the naive quark model, mesons are $q\bar q$ states and baryons are $qqq$ states. For heavy quarks, the low-lying mesons are called charmonium ($c\bar c$) and bottomonium ($b \bar b$) or, collectively, quarkonium. Below the open-heavy-flavor thresholds for pairs of heavy-light mesons ($D\bar D$ and $B \bar B$, respectively), these quarkonium states are narrow and are, thus, ideal systems for precision studies of QCD dynamics. Low-lying $b\bar c$ systems also have these properties. Hadronic states involving $t$ quarks do not have a chance to form before the $t$ quark decays through weak interactions. Because quarkonium systems contain heavy quarks, one expects nonrelativistic dynamics to apply. Since the discovery of the first charmonium state, the $J/\psi$, in 1974, simple models that treat the complicated QCD dynamics of the gluons and light quarks as an effective potential between the heavy quark and antiquark have been employed. This dynamical assumption is the Born-Oppenheimer approximation commonly used in atomic and molecular physics. Potential models enjoyed tremendous success in the early years -- accurately predicting the excitation spectrum and the electromagnetic and hadronic transitions among these narrow quarkonium states.
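The workhorse of these early analyses was the funnel-shaped Cornell potential, which joins short-distance one-gluon exchange to a linearly confining long-distance term. The following is a minimal sketch of this standard form; the parameter value quoted in the comments is a typical fit, not tied to any one analysis.

```latex
% Cornell ("Coulomb plus linear") potential between a static heavy quark and antiquark:
\begin{equation*}
V(r) \;=\; -\,\frac{4}{3}\,\frac{\alpha_s}{r} \;+\; \sigma r ,
\end{equation*}
% where 4/3 is the color factor for a color-singlet $Q\bar{Q}$ pair, $\alpha_s$ is the
% strong coupling, and $\sigma \simeq 0.18~\mathrm{GeV}^2$ is a typical string tension.
% The spectrum then follows from the nonrelativistic Schr\"odinger equation with
% reduced mass $m_Q/2$, with spin-dependent corrections treated perturbatively.
```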
With the advent in the late 1990s of a new generation of high-luminosity $B$ factories (PEPII, KEKB, and CESRII) and a high-luminosity charm factory (BEPCII), more detailed studies of the properties of the known states became possible, as well as the observation of many previously unobserved narrow spin-singlet states in the charmonium spectrum ($\eta_c$, $\eta_c^{\prime}$, $h_c$) and in the bottomonium spectrum ($\eta_b$, $\eta_b'$, $h_b$, $h_b'$, \ldots). Two contemporaneous theoretical developments put the studies of quarkonium on more rigorous grounds. Effective field theories (NRQCD, potential NRQCD, ...) enabled rigorous calculations using perturbation theory. Lattice QCD simulations with full QCD dynamics, except for the $b$ quark, enabled calculations of masses and decays of the low-lying states of quarkonium systems. All these results strengthened the agreement with theory expectations, at least for quarkonia below the open-heavy-flavor thresholds. A decade ago, a new state, the $X(3872)$, was observed at Belle and quickly confirmed by CDF, \mbox{D\O}, and $\babar$. Initial attempts to identify it as a missing state of the conventional charmonium spectrum failed as more of its properties were measured. The $X(3872)$ mass is very close to a strong decay threshold ($D^{*0} \bar D^0$) and its decays exhibit significant isospin violation. This was a surprise and upset the simple picture that had been so successful for 30 years. New states and new surprises followed quickly. Additional states have been discovered in the threshold regions for both the charmonium and bottomonium systems. They have been collectively denoted as the $XYZ$ states. Some of these new states have conventional quarkonium interpretations, but others clearly do not fit.
Particularly difficult to interpret is the $Y(4260)$ state: a $1^{--}$ state that is only barely observable as an $s$-channel resonance in $e^+e^-$ collisions and which appears at an energy where no conventional charmonium state is expected. Finally, a number of charged quarkonium states have now been observed. The best established are the charged bottomonium states $Z^+_b(10610)$ and $Z^+_b(10650)$ and the charged charmonium state $Z^+_c(3900)$. Clearly such states must have tetraquark structures. The present list of $XYZ$ states extends to almost two dozen states. In retrospect, the failure of the model of mesons as simple $q\bar q$ states is to be expected, because QCD has a much richer structure than the quark model. The excitation spectra of quarkonium systems reflect these additional degrees of freedom. Exciting the gluonic (string) degrees of freedom is expected to produce hybrid states. Tetraquark states with additional light-quark valence pairs ($Q\bar q\bar Q q'$) should appear near the open-heavy-flavor thresholds. The dynamics of such tetraquark states has many possibilities (meson molecules, compact tetraquarks, diquarkonium, hadroquarkonium, \ldots). Lattice QCD groups have made preliminary calculations of the spectrum of charmonium that reveal the low-lying hybrid states as well as the conventional charmonium states. Lattice calculations of tetraquark systems are also possible, but these are technically very difficult at the present time. The next generation of collider experiments with capabilities of studying quarkonium states (BESIII, LHCb, Belle~II, PANDA) are online now or will be in the near future. With much higher production rates for quarkonium states, we can expect valuable clues to help disentangle the physics of the open-heavy-flavor threshold region. Hadron collider experiments (ATLAS, CMS, LHCb) will likely provide the first detailed studies of the $b\bar c$ system as well.
\subsection{Quarkonium Production} The production of heavy quarkonia at large transverse momentum $p_T$ is an aspect of quarkonium physics in which dramatic progress can be expected through the interaction of theory with experiments at the energy frontier. The best prospects for an understanding of quarkonium production that is based rigorously on QCD are in the region of $p_T$ that is much larger than the quarkonium mass. The Tevatron experiments began to reach this region in their measurements of charmonium production, and the LHC experiments have begun to reach this region in their measurements of bottomonium production. In future runs of the LHC, both charmonium and bottomonium production will be measured deep into the relevant large-$p_T$ regions, with very high statistics. This will provide powerful tests of theoretical frameworks for quarkonium production. Interactions between theory and experiment have played a key role in the history of quarkonium production. Since the first measurements of quarkonium production cross sections in CDF Run~I, quarkonium production experiments have stimulated theoretical developments, which have, in turn, motivated increasingly extensive and challenging experimental measurements. During the last two decades, this process has continued at an intense level and has led to many impressive achievements in both experiment and theory. Quarkonium production continues to be relevant to the international high-energy physics program in numerous ways. Quarkonium production is interesting in its own right, as an aspect of QCD that is not fully understood. It is also useful as a laboratory for exploring experimental techniques within a setting in which experimental signatures are very clean and very high statistics can be accumulated. Quarkonium production is also useful as a theoretical laboratory for QCD. 
It can be used to test and extend our understanding of factorization theorems, which are the theoretical foundation for all perturbative calculations in QCD. New theoretical concepts that have been developed in order to understand large higher-order perturbative corrections to quarkonium production, such as those that arise from kinematic enhancements and from large endpoint logarithms, could have wider applicability in the calculation of high-energy cross sections. Insights gained from studying quarkonium production are also relevant to physics beyond the Standard Model. For example, certain quarkonium production processes can be used to measure Higgs couplings and so to probe for physics beyond the standard model. If that new physics involves nonrelativistic bound states, then the techniques that have been developed for understanding quarkonium production will be directly applicable. The current standard method for calculating quarkonium production rates is the nonrelativistic QCD (NRQCD) factorization approach, which is based on the effective field theory NRQCD. In this approach, production rates are expressed as perturbatively calculable partonic cross sections multiplied by nonperturbative constants called NRQCD matrix elements. Some of the NRQCD matrix elements must be determined through fits of NRQCD factorization predictions to experimental data, but they can then be used to make predictions for other quarkonium production processes. Universality of the NRQCD matrix elements is an essential feature of NRQCD factorization, and so it is important to test the predictions of NRQCD factorization in as many processes as possible. The NRQCD factorization approach is a conjecture that has not been proven to all orders in $\alpha_s$. Therefore, at present, it must be regarded as a model, to be tested by experiment, rather than as a consequence of QCD. 
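Schematically, the NRQCD factorization formula just described takes the following form; the notation is generic, following the standard effective-field-theory treatment.

```latex
% NRQCD factorization for inclusive production of a quarkonium H:
\begin{equation*}
\sigma(H+X) \;=\; \sum_{n} \hat{\sigma}\big(Q\bar{Q}(n)+X\big)\,
\big\langle \mathcal{O}^{H}(n) \big\rangle ,
\end{equation*}
% where $n$ runs over the color and angular-momentum channels of the $Q\bar{Q}$ pair,
% the $\hat{\sigma}$ are the perturbatively calculable partonic cross sections, and the
% NRQCD matrix elements $\langle \mathcal{O}^{H}(n) \rangle$ are the nonperturbative
% constants, scaling with definite powers of the heavy-quark relative velocity $v$.
```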
Since standard methods for proving factorization fail unless the hard-scattering momentum transfer $p_T$ is much larger than the initial- and final-state hadron masses, it seems likely that NRQCD factorization, if it is correct, holds only for $p_T$ much larger than the quarkonium mass. An important recent theoretical development is the next-to-leading-power (NLP) fragmentation approach, in which quarkonium production rates are expressed as perturbatively calculable partonic cross sections convolved with fragmentation functions, up to corrections suppressed by a factor $m_Q^4/p_T^4$, where $m_Q$ is the heavy-quark mass. Unlike the NRQCD factorization formula, the NLP factorization formula has been proven to all orders in $\alpha_s$. The NLP fragmentation approach becomes more predictive if NRQCD factorization is used to express the fragmentation functions in terms of NRQCD matrix elements. The NLP fragmentation approach then organizes the NRQCD factorization expression for the cross section according to powers of $m_Q^2/p_T^2$, which facilitates the calculation of higher-order corrections and the resummation of large logarithms. NRQCD factorization predictions have now been computed at next-to-leading order (NLO) in $\alpha_s$ for many production processes. In general, these predictions agree with the experimental data from $pp$, $p\bar p$, $ep$, and $e^+e^-$ colliders for the production of $S$-wave and $P$-wave quarkonia at large $p_T$. The most notable exception is the polarization of the $J/\psi$, $\psi(2S)$, and $\Upsilon(nS)$ ($n=1,2,3$) at the Tevatron and at the LHC. This serious discrepancy between theory and experiment deserves further investigation from both the theoretical and experimental sides. Definitive measurements of the polarizations will be carried out at the LHC. 
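The structure of the NLP fragmentation expansion discussed above can be summarized schematically as follows; the labels here are illustrative rather than a fixed convention.

```latex
% Leading-power (single-parton) plus next-to-leading-power (Q\bar{Q}-pair)
% fragmentation contributions to inclusive quarkonium production at large p_T:
\begin{equation*}
d\sigma_{H}(p_T) \;=\; \sum_{i} d\hat{\sigma}_{i} \otimes D_{i\to H}
\;+\; \sum_{n} d\hat{\sigma}_{Q\bar{Q}(n)} \otimes D_{Q\bar{Q}(n)\to H}
\;+\; \mathcal{O}\!\big(m_Q^4/p_T^4\big) ,
\end{equation*}
% where $\otimes$ denotes a convolution over the momentum fraction, the first sum runs
% over single partons $i$, and the second (suppressed by $m_Q^2/p_T^2$) runs over
% $Q\bar{Q}$-pair states $n$. Applying NRQCD factorization to the fragmentation
% functions $D$ expresses them in terms of the same NRQCD matrix elements,
% organizing the cross section in powers of $m_Q^2/p_T^2$.
```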
An important issue with regard to quarkonium polarizations is that prompt $J/\psi$ and $\Upsilon(1S)$ production rates include feeddown from $S$-wave and $P$-wave quarkonium states of higher mass. The inclusion of feeddown in the theoretical predictions brings in additional theoretical uncertainties. Therefore, it would be very useful for future experiments to separate the feeddown contributions from direct-production contributions. At the frontiers of high-energy physics, there are many opportunities for further interesting work on quarkonium production. In Run~2 of the LHC, the extension of the energy frontier to $13$~TeV and the increase in luminosity in comparison with Run~1 will make it possible to extend the $p_T$ reach of quarkonium studies. Since NRQCD factorization, if it is correct, is expected to hold only for values of $p_T$ that are much larger than the quarkonium mass, it is important to measure quarkonium production with high statistics at the highest possible values of $p_T$. Measurements of production and polarization for the $\chi_{cJ}$ and $\chi_{bJ}$ states and for new processes, such as associated production with $W$ or $Z$ bosons, would provide valuable additional tests of the theory. $B_c$ studies could also provide new insights into the production of mesons with nonzero flavor. While many of the key tests of the theory of charmonium production could be accomplished in upcoming runs of the LHC, the LHC upgrade would afford the opportunity to push studies of $b\bar b$ states to still higher values of $p_T$. The $b\bar b$ systems provide particularly important tests of the validity of NRQCD because they are more nonrelativistic than the $c\bar c$ systems. Precision measurements of cross sections and polarizations of $b\bar b$ states at values of $p_T$ that are much larger than the quarkonium mass will be challenging, but they are crucial tests of the NRQCD factorization approach.
A future high-energy $e^+e^-$ collider will allow studies of quarkonium production in two-photon collisions. In a Higgs factory mode, it would afford new opportunities to make precision measurements of the Higgs couplings to Standard Model particles. The Higgs decay to $J/\psi+\gamma$ provides, perhaps, the only realistic means with which to probe the $Hc\bar c$ coupling. Higgs decays to $J/\psi+Z$ and $\Upsilon+Z$ are other quarkonium-related avenues for probing Higgs couplings. Double-charmonium production has been the focus of interesting studies at the $B$ factories and it will be investigated further at Belle~II. By taking advantage of the high luminosity at Belle~II, one should be able to observe final states, such as $J/\psi+J/\psi$, that were not produced at sufficient rates to have been observed previously at $B$ factories. Accurate measurements of the cross sections for exclusive charmonium production will provide motivation for developing understanding and control of large logarithms of $p_T^2/m_q^2$ that appear in the theoretical predictions. \section{Quarkonium Spectroscopy$^{\,}$\protect\footnotemark} \footnotetext{Authors: Eric Braaten, Estia Eichten, Stephen Lars Olsen, Todd K.~Pedlar} \subsection{Introduction} The strongly interacting particles in the Standard Model are quarks and gluons. The strongly interacting particles in Nature are mesons and baryons. In the Standard Model, quarks and gluons are related to mesons and baryons by the long-distance regime of QCD, which remains the least understood aspect of the theory. Since first-principle calculations using lattice QCD are still not practical for many long-distance phenomena, a number of models motivated by the color structure of QCD have been proposed. However, so far at least, predictions of these QCD-motivated models that pertain to the spectrum of hadrons have a less-than-stellar record. 
In spite of decades of experimental searches, unambiguous examples of hadrons predicted by these models, such as pentaquarks, the $H$-dibaryon, mesons with exotic $J^{PC}$ quantum numbers, etc., have not been found. On the other hand, a number of states that do not fit into the conventional picture of hadrons {\it have} been found. A compelling, unified understanding of these new states has not yet emerged. This gap between quark/gluon theory and meson/baryon observations remains a major deficiency in our current level of understanding of elementary particle processes. For the reasons discussed above, theoretical and experimental investigations of the spectroscopy of quarkonium and quarkonium-like hadrons have had a profound influence on the development of our current level of understanding of the relation between quarks/gluons and hadrons. It is essential (and probably inevitable) that these investigations will continue to be pursued, with the goals of ultimately establishing a reliable QCD-motivated theoretical model that can accurately describe measured phenomena and experimental observations of the hadron spectra predicted by this model. Insights gained from these activities will likely have applications far beyond the limited area of quarkonium physics. One of the most basic properties of QCD is its spectrum: the list of particles that are stable or at least sufficiently long-lived to be observed as resonances. The elementary constituents in QCD are quarks ($q$), antiquarks ($\bar q$), and gluons ($g$), and QCD requires that they be confined into color-singlet clusters called hadrons. The most stable hadrons are the clusters predicted by the quark model: mesons ($q \bar q$), baryons ($q q q$), and antibaryons. Hundreds of meson resonances and dozens of baryon resonances have been observed, most of which are conventional mesons and conventional baryons. Until recently, no other types of clusters had been unambiguously identified.
The general expectation in the high energy physics community has been that other types of clusters were probably broad and were probably strongly mixed with conventional mesons and baryons, in which case identifying them would be a complicated problem in hadron physics. This expectation has been shattered by recent discoveries of charged quarkonia in the bottomonium ($b \bar b$) and charmonium ($c \bar c$) sectors of QCD. The discoveries of these manifestly exotic tetraquark mesons have exposed an embarrassing gap in our qualitative understanding of QCD. The discoveries of the charmonium and bottomonium tetraquarks are the culmination of a decade of surprising experimental results on heavy quarkonium. It began with the discovery of the $X(3872)$ by the Belle Collaboration in September 2003. This $c \bar c$ meson has decays that severely violate isospin symmetry, disfavoring its identification as conventional charmonium. It continued with the discovery of the $Y(4260)$ by the $\babar$ Collaboration in July 2005. This $c \bar c$ meson has $J^{PC}$ quantum numbers $1^{--}$, but it is produced very weakly in $e^+ e^-$ annihilation. Over the past decade, many additional $c \bar c$ mesons have been observed. Those with masses below the open-charm threshold fit neatly into the predicted charmonium multiplets. However, most of those above the open-charm threshold have no compelling conventional charmonium assignments. The list of these neutral $c \bar c$ mesons, which have been labeled $X$, $Y$, or $Z$, has grown to more than a dozen states. They are candidates for exotic clusters, such as charmonium hybrids ($c \bar c g$) or charmonium tetraquarks ($c \bar c q \bar q$). However, none of these neutral $c \bar c$ mesons is manifestly exotic, with $J^{PC}$ or flavor quantum numbers that are incompatible with conventional charmonium.
Thus, despite their unexpected properties, one could not exclude these states from simply being complicated manifestations of the strong interactions between hadrons. The recent discoveries of charged quarkonium-like mesons in the $b \bar b$ and $c \bar c$ sectors eliminate the possibility of dismissing the $XYZ$ mesons as complicated manifestations of the strong interactions. In November 2011, the Belle Collaboration discovered the $Z_b^+(10610)$ and $Z_b^+(10650)$, whose decays into $\Upsilon \, \pi^+$ reveal their constituents to be $b \bar b u \bar d$. In April 2013, the BESIII Collaboration discovered the $Z_c^+(3900)$, whose decays into $J/\psi\, \pi^+$ reveal its constituents to be $c \bar c u \bar d$. This state was almost immediately confirmed by the Belle Collaboration. Despite being above the thresholds for decays into pairs of heavy-light mesons, these charged tetraquark mesons are relatively narrow. They are a glimpse into a sector of the QCD spectrum that has remained hidden for all these decades. The observation of many neutral $XYZ$ mesons with unexplained properties and the discoveries of the charged $Z_b$ and $Z_c$ mesons have revealed a serious gap in our qualitative understanding of QCD. The challenge of understanding the $Q \bar Q$ mesons above the open-heavy-flavor threshold has now become urgent. Calculations from first principles using lattice QCD should eventually be capable of revealing the spectrum of these mesons. However, understanding their properties will probably also require the development of phenomenological descriptions that are well motivated by QCD. Fortunately, experiments at the intensity frontier and at the energy frontier of high energy physics are well positioned to address this problem.
Electron-positron colliders in Beijing (BEPC II), which is now running in the charmonium region, and in Japan (Super KEK-B), which will begin running in the bottomonium region in 2016, provide clean environments for the study of heavy quarkonia and quarkonium-related states. At the Large Hadron Collider, the ATLAS, CMS, and LHCb detectors have already contributed to quarkonium spectroscopy using proton-proton collisions. They should be able to do much more after the beam energy upgrade in 2015 and after a subsequent luminosity upgrade at a later date. The dedicated $B$-physics experiment LHCb has a particularly large potential for discovery in the realm of quarkonium spectroscopy. Finally, the PANDA experiment at the FAIR facility in Darmstadt will utilize antiproton-proton annihilation in the charm-anticharm threshold region to study charmonium and other neutral $c \bar c$ mesons beginning around 2019. With the wealth of additional clues provided by all these experiments, the emergence of a theoretical understanding of the quarkonium spectrum above the open-heavy-flavor threshold is inevitable. \subsection{Experimental Results} \begin{sidewaystable} \caption{ New $c\bar{c}$ and $b\bar{b}$ mesons above the open-heavy-flavor threshold. The {\bf bold} states have been observed by at least two independent experiments and with significance greater than 5$\sigma$. The masses $M$ and widths $\Gamma$ are weighted averages from the Experiments, with uncertainties added in quadrature. In the $J^{PC}$ column, a question mark (?) indicates an unmeasured value. For charged states, $C$ is that of a neutral isospin partner. In the Process column, the decay modes are indicated in parentheses and ellipses (...) indicate an inclusive measurement. For each Experiment, the statistical significance is given in standard deviations (\#$\sigma$) unless it is not provided (np). These Tables are adapted and updated from a table in Ref.~\cite{Eidelman:2012vu} that was prepared in May 2012.
} \setlength{\tabcolsep}{0.21pc} \label{tab:Q-QbarI} \begin{center} \begin{tabular}{lccclll} \hline\hline \rule[10pt]{-1mm}{0mm} State & $M$~(MeV) & $\Gamma$~(MeV) & $J^{PC}$ & Process~(decay mode) & Experiment~(\#$\sigma$) & 1$^{\rm st}$~observation \\[0.7mm] \hline \rule[10pt]{-1mm}{0mm} $X(3823)$& 3823.1$\pm$1.9 & $<24$ & $?^{?-}$ & $B \to K + (\chi_{c1}\gamma)$ & Belle~\cite{Bhardwaj:2013rmw}~(3.8) & Belle~2013 \\[0.7mm] $\bm{X(3872)}$& 3871.68$\pm$0.17 & $<1.2$ & $1^{++}$ & $B \to K + (J/\psi\, \pi^+\pi^-)$ & Belle~\cite{Choi:2003ue,Choi:2011fc}~(12.8), \babar~\cite{Aubert:2008gu}~(8.6) & Belle~2003 \\[0.7mm] & & & & $p\bar p \to (J/\psi\, \pi^+\pi^-)+ ...$ & CDF~\cite{Acosta:2003zx,Abulencia:2006ma,Aaltonen:2009vj}~(np), {\rm D\O}~\cite{Abazov:2004kp}~(5.2) & \\[0.7mm] & & & & $B \to K + (J/\psi\, \pi^+\pi^-\pi^0)$ & Belle~\cite{Abe:2005ix}\footnote{\label{notinc-footnote} Not included in the averages for $M$ and $\Gamma$.}~(4.3), \babar~\cite{delAmoSanchez:2010jr}\textsuperscript{\ref{notinc-footnote}}~(4.0) & \\[0.7mm] & & & & $B \to K + (D^0 \bar D^0 \pi^0)$ & Belle~\cite{Gokhroo:2006bt,Aushev:2008su}\textsuperscript{\ref{notinc-footnote}}~(6.4), \babar~\cite{Aubert:2007rva}\textsuperscript{\ref{notinc-footnote}}~(4.9) & \\[0.7mm] & & & & $B \to K + (J/\psi\, \gamma)$ & Belle~\cite{Bhardwaj:2011dj}\textsuperscript{\ref{notinc-footnote}}~(4.0), \babar~\cite{Aubert:2006aj,Aubert:2008rn}\textsuperscript{\ref{notinc-footnote}}~(3.6)& \\[0.7mm] & & & & $B \to K + (\psi(2S)\, \gamma)$ & \babar~\cite{Aubert:2008rn}\textsuperscript{\ref{notinc-footnote}}~(3.5), Belle~\cite{Bhardwaj:2011dj}\textsuperscript{\ref{notinc-footnote}}~(0.4) & \\[0.7mm] & & & & $pp \to (J/\psi\, \pi^+\pi^-)+ ...$ & LHCb~\cite{Aaij:2012lhcb}~(np) & \\[1.9mm] $\bm{X(3915)}$ & $3917.5\pm1.9$ & 20$\pm 5$ & $0^{++}$ & $B \to K + (J/\psi\, \omega)$ & Belle~\cite{Abe:2004zs}~(8.1), \babar~\cite{Aubert:2007vj}~(19) & Belle~2004 \\ [0.7mm] & & & & $e^+e^- \to e^+e^- + (J/\psi\, \omega)$
& Belle~\cite{Uehara:2009tx}~(7.7), \babar~\cite{delAmoSanchez:2010jr, Lees:2012xs}~(7.6) & \\[1.9mm] $\bm{\chi_{c2}(2P)}$ & $3927.2\pm2.6$ & 24$\pm$6 & $2^{++}$ & $e^+e^-\to e^+e^- + (D\bar{D})$ & Belle~\cite{Uehara:2005qd}~(5.3), \babar~\cite{:2010hka}~(5.8) & Belle~2005 \\ [1.9mm] $X(3940)$ & $3942^{+9}_{-8}$ & $37^{+27}_{-17}$ & $?^{?+}$ & $e^+e^- \to J/\psi + (D^* \bar D)$ & Belle~\cite{Abe:2007sya}~(6.0) & Belle~2007 \\ [0.7mm] &&&& $e^+e^- \to J/\psi + (...)$ & Belle~\cite{Abe:2007jn}~(5.0) \\ [1.9mm] $\bm{G(3900)}$ & $3943\pm21$ & 52$\pm$11 & $1^{--}$ & $e^+e^- \to \gamma + (D \bar D)$ & \babar~\cite{Aubert:2006mi}~(np), Belle~\cite{Pakhlova:2008zza}~(np) & \babar~2007 \\ [1.9mm] $Y(4008)$ & $4008^{+121}_{-\ 49}$ & 226$\pm$97 & $1^{--}$ & $e^+e^- \to \gamma + (J/\psi\, \pi^+\pi^-)$ & Belle~\cite{Belle:2007sj}~(7.4) & Belle~2007 \\[1.9mm] $\bm{Y(4140)}$ & $4144.5\pm2.6$ & $15^{+11}_{-\ 7}$ & $?^{?+}$ & $B \to K + (J/\psi\, \phi)$ & CDF~\cite{Aaltonen:2009tz,Aaltonen:2011at}~(5.0), CMS~\cite{Yetkin:2013iza}~($>$5) & CDF~2009 \\[1.9mm] $X(4160)$ & $4156^{+29}_{-25}$ & $139^{+113}_{-65}$ & $?^{?+}$ & $e^+e^- \to J/\psi + (D^* \bar D^*)$ & Belle~\cite{Abe:2007sya}~(5.5) & Belle~2007 \\[1.9mm] \hline \end{tabular} \end{center} \end{sidewaystable} \begin{sidewaystable} \caption{ New $c\bar{c}$ and $b\bar{b}$ mesons above the open-heavy-flavor threshold (continuation of Table~\ref{tab:Q-QbarI}).
} \setlength{\tabcolsep}{0.21pc} \label{tab:Q-QbarII} \begin{center} \begin{tabular}{lccclll} \hline\hline \rule[10pt]{-1mm}{0mm} State & $M$~(MeV) & $\Gamma$~(MeV) & $J^{PC}$ & Process~(decay mode) & Experiment~(\#$\sigma$) & 1$^{\rm st}$ observation \\[0.7mm] \hline \rule[10pt]{-1mm}{0mm} $\bm{Y(4260)}$ & $4263^{+8}_{-9}$ & 95$\pm$14 & $1^{--}$ & $e^+e^- \to \gamma + (J/\psi\, \pi^+\pi^-)$ & \babar~\cite{Aubert:2005rm,Aubert:2008ic}~(8.0), CLEO~\cite{He:2006kg}~(5.4) & \babar~2005 \\ [0.7mm] & & & & & Belle~\cite{Belle:2007sj}~(15) &\\[0.7mm] & & & & $e^+e^-\to (J/\psi\, \pi^+\pi^-)$ & CLEO~\cite{Coan:2006rv}~(11)& \\[0.7mm] & & & & $e^+e^-\to (J/\psi\, \pi^0\pi^0)$ & CLEO~\cite{Coan:2006rv}~(5.1) & \\[1.9mm] $Y(4274)$ & $4274.4^{+8.4}_{-6.7}$ & $32^{+22}_{-15}$ & $?^{?+}$ & $B\to K + (J/\psi\, \phi )$ & CDF~\cite{Aaltonen:2011at}~(3.1) & CDF~2010 \\[1.9mm] $X(4350)$ & $4350.6^{+4.6}_{-5.1}$ & $13.3^{+18.4}_{-10.0}$ & 0/2$^{++}$ & $e^+e^-\to e^+e^- \,(J/\psi\, \phi)$ & Belle~\cite{Shen:2009vs}~(3.2) & Belle~2009 \\ [1.9mm] $\bm{Y(4360)}$ & $4361\pm13$ & 74$\pm$18 & $1^{--}$ & $e^+e^-\to\gamma + (\psi(2S)\, \pi^+\pi^-)$ & \babar~\cite{Aubert:2006ge}~(np), Belle~\cite{:2007ea}~(8.0) & \babar~2007 \\ [1.9mm] $X(4630)$ & $4634^{+\ 9}_{-11}$ & $92^{+41}_{-32}$ & $1^{--}$ & $e^+e^-\to\gamma\, (\Lambda_c^+ \Lambda_c^-)$ & Belle~\cite{Pakhlova:2008vn}~(8.2) & Belle~2007 \\ [1.9mm] $Y(4660)$ & 4664$\pm$12 & 48$\pm$15 & $1^{--}$ & $e^+e^-\to\gamma + (\psi(2S)\, \pi^+\pi^-)$ & Belle~\cite{:2007ea}~(5.8) & Belle~2007 \\ [0.7mm] \hline $\bm{Z_c^+(3900)}$ & $3898\pm 5$ & $51\pm 19$ & $1^{?-}$ & $Y(4260) \to \pi^- + (J/\psi\, \pi^+)$ & BESIII~\cite{Ablikim:2013mio} (np), Belle~\cite{Liu:2013dau} (5.2) & BESIII~2013\\[1.9mm] & & & & $e^+e^- \to \pi^- + (J/\psi\, \pi^+)$ & Xiao {\em et al.}~\cite{Xiao:2013iha}\footnote{Not included in the averages for $M$ and $\Gamma$.} (6.1)& \\[1.9mm] $Z_1^+(4050)$ & $4051^{+24}_{-43}$ & $82^{+51}_{-55}$ & ?& $ B \to K + (\chi_{c1}(1P)\, 
\pi^+)$ & Belle~\cite{Mizuk:2008me}~(5.0), \babar~\cite{Lees:2011ik}~(1.1) & Belle~2008 \\[1.9mm] $Z_2^+(4250)$ & $4248^{+185}_{-\ 45}$ & 177$^{+321}_{-\ 72}$ &?& $ B \to K + (\chi_{c1}(1P)\, \pi^+)$ & Belle~\cite{Mizuk:2008me}~(5.0), \babar~\cite{Lees:2011ik}~(2.0) & Belle~2008 \\[1.9mm] $Z^+(4430)$ & $4443^{+24}_{-18}$ & $107^{+113}_{-\ 71}$ & ?& $B \to K + (\psi(2S)\, \pi^+)$ & Belle~\cite{Choi:2007wga,Mizuk:2009da}~(6.4), \babar~\cite{:2008nk}~(2.4) & Belle~2007 \\[1.9mm] \hline\hline $Y_b(10888)$ & 10888.4$\pm$3.0 & 30.7$^{+8.9}_{-7.7}$ & $1^{--}$ & $e^+e^- \to (\Upsilon(nS)\, \pi^+\pi^-)$ & Belle~\cite{Chen:2008pu,Abe:2007tk}~(2.0)& Belle~2010 \\[0.7mm] \hline $Z_{b}^+(10610)$ & 10607.2$\pm$2.0 & 18.4$\pm$2.4 & $1^{+-}$ & $\Upsilon(5S) \to \pi^- + (\Upsilon(nS)\,\pi^+)$,~$n=1,2,3$~ & Belle~\cite{Adachi:2011XXX,Bondar:2011pd}~(16) & Belle~2011 \\[0.7mm] & & & & $\Upsilon(5S) \to \pi^- + (h_b(nP)\,\pi^+)$,~$n=1,2$ & Belle~\cite{Adachi:2011XXX,Bondar:2011pd}~(16) & \\[1.9mm] $Z_{b}^+(10650)$ & 10652.2$\pm$1.5 & 11.5$\pm$2.2 & $1^{+-}$ & $\Upsilon(5S)\to\pi^- + (\Upsilon(nS)\,\pi^+)$,~$n=1,2,3$ & Belle~\cite{Adachi:2011XXX,Bondar:2011pd}~(16)& Belle~2011 \\[0.7mm] & & & & $\Upsilon(5S) \to \pi^- + (h_b(nP)\,\pi^+)$,~$n=1,2$ & Belle~\cite{Adachi:2011XXX,Bondar:2011pd}~(16)& \\[1.9mm] \hline\hline \end{tabular} \end{center} \end{sidewaystable} Ten years have passed since the announcement by the Belle Collaboration at the 2003 Lepton-Photon Conference of the discovery of a new charmonium-like state lying above the open-charm threshold, referred to as $X(3872)$~\cite{Choi:2003ue,Choi:2011fc}. The $X(3872)$ was only the first of many quarkonium-like states above the open-heavy-flavor threshold that were discovered over the course of the last decade. These states are listed in Tables~\ref{tab:Q-QbarI} and \ref{tab:Q-QbarII}. The list is separated into four sections. 
The first section is the list of new neutral $c \bar c$ mesons, which overpopulate the region from the open-charm threshold up to approximately 4700 MeV compared to the number of expected conventional charmonium states. The second section is the list of charged $c \bar c$ mesons, which are manifestly exotic tetraquarks. The third and fourth sections are a single new neutral $b \bar b$ meson and the two charged $b \bar b$ mesons. The charged $Q \bar Q$ mesons presumably belong to isospin triplets that include a neutral meson. The neutral mesons $Z_c^0(3900)$ and $Z_b^0(10610)$ have been observed, but they are not listed in the Tables. Additional information about many of these states can be found in several reviews \cite{Godfrey:2008nc,Barnes:2009zza,Pakhlova:2010zz}. We proceed to summarize the new quarkonia below the open-heavy-flavor threshold that have been discovered in the last decade. We then discuss selected aspects of the new quarkonium-like states above the open-heavy-flavor thresholds. \subsubsection{Quarkonia below the open-heavy-flavor thresholds} \noindent Prior to 2002, the knowledge of the spectra of narrow, below-threshold charmonium and bottomonium states was fairly limited. In the case of bottomonium, the masses, full widths, and dilepton partial widths of the triplet-$S$ states were known. The masses of the triplet-$P$ states were also known. Dipion decays among triplet-$S$ states had been observed, and the radiative cascades involving the triplet-$P$ states had also been measured, although with relatively poor precision. Additionally, a number of radiative decays of the ground-state triplet-$S$, the $\Upsilon(1S)$, had been measured. Glaringly absent from the bottomonium spectrum, however, were the singlet-$S$ and singlet-$P$ states, as were the triplet-$D$ states, whose ground states were expected to lie between the ground and first excited triplet-$P$ levels. Charmonium was on somewhat firmer footing.
All the masses and full widths of the triplet-$S$ and triplet-$P$ states were known. The radiative transition rates among them were known to good precision, and many hadronic decays of each of these states had been observed. In addition, the ground-state singlet-$S$, the $\eta_c(1S)$, had also been observed by several experiments, though its full width was a hotly disputed matter. As in the case of bottomonium, the singlet-$P$ state had not yet been observed, though hints had been obtained by two experiments. In the intervening decade, the picture for both the charmonium spectrum and the bottomonium spectrum below their respective open-heavy-flavor thresholds has become much clearer. In charmonium, work by E835, CLEO, BES/BESIII, Belle and $\babar$ has firmly established all three below-threshold singlet states, $\eta_c(1S)$, $\eta_c(2S)$ and $h_c(1P)$. Their masses and widths are now well measured, and many of their key decay modes are known. Each of these states has subsequently served as a vehicle for discovery of other new states, particularly in bottom meson decays and in studies of resonances lying above the open-charm thresholds. In bottomonium, the $\Upsilon(1D)$ was first observed by CLEO in 2002~\cite{Bonvicini:2004yj}, and finally confirmed by $\babar$ in 2010~\cite{delAmoSanchez:2010kz}. This was the first firmly established $D$-wave quarkonium state. In 2008, the $\eta_b(1S)$ was observed by $\babar$~\cite{Aubert:2008ba} and CLEO~\cite{Bonvicini:2009hs}. The announcement of weak evidence for the $h_b(1P)$ at $\babar$~\cite{Lees:2011zp} was followed by the discovery of strong signals for both $h_b(1P)$ and $h_b(2P)$ in dipion transitions from $\Upsilon(5S)$ at Belle~\cite{Adachi:2011ji}. Subsequent analyses of the principal radiative decays of $h_b(1P)$ and $h_b(2P)$ yielded a more precise measurement of the mass of $\eta_b(1S)$ and very strong evidence for its radial excitation, $\eta_b(2S)$~\cite{Mizuk:2012pb}.
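The masses and widths quoted here and in Tables~\ref{tab:Q-QbarI} and \ref{tab:Q-QbarII} are combined across experiments by the standard inverse-variance weighted average. A minimal sketch of the procedure (the two $X(3872)$ mass values below are illustrative placeholders, not the actual experimental inputs):

```python
# Inverse-variance weighted average of independent measurements, with the
# uncertainty of the average obtained by combining the weights in quadrature.
# The input values below are illustrative placeholders, not experimental data.

def weighted_average(measurements):
    """Combine (value, uncertainty) pairs into a weighted mean.

    Weights are 1/sigma_i^2, and the uncertainty of the mean follows
    from 1/sigma^2 = sum_i 1/sigma_i^2.
    """
    weights = [1.0 / sigma ** 2 for _, sigma in measurements]
    mean = sum(w * value for w, (value, _) in zip(weights, measurements))
    mean /= sum(weights)
    sigma = sum(weights) ** -0.5
    return mean, sigma

m, dm = weighted_average([(3871.8, 0.3), (3871.4, 0.6)])  # MeV
print(f"M = {m:.2f} +/- {dm:.2f} MeV")
```

When an experiment reports separate statistical and systematic uncertainties, they are presumably first combined in quadrature before entering the weights, as the Table captions indicate.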
\subsubsection{$X(3872)$} \label{sec:X3872} The $X(3872)$ was first observed~\cite{Choi:2003ue} by the Belle Collaboration decaying to $J/\psi\, \pi^+\pi^-$ and produced opposite a charged kaon in the decay of charged $B$ mesons. The state was quickly confirmed by inclusive production in $p \bar p$ collisions by CDF~\cite{Acosta:2003zx} and {\rm D\O}~\cite{Abazov:2004kp} and also by $\babar$ in the discovery channel~\cite{Aubert:2008gu}. The observation by Belle~\cite{Bhardwaj:2011dj} and $\babar$~\cite{Aubert:2006aj} of the decay mode $J/\psi\, \gamma$ established the $C$-parity as $+$. A 2006 analysis by the CDF collaboration of the $J/\psi\, \pi^+\pi^-$ decay mode~\cite{Abulencia:2006ma} reduced the possible $J^{PC}$ quantum numbers to $1^{++}$ and $2^{-+}$. A 2010 analysis by $\babar$~\cite{delAmoSanchez:2010jr} of the $J/\psi\, \pi^+\pi^-\pi^0$ decay mode indicated a moderate preference for $2^{-+}$ over $1^{++}$. In May 2013, LHCb published the results of a full five-dimensional angular analysis of the $J/\psi\, \pi^+\pi^-$ decays of $X(3872)$~\cite{Aaij:2012lhcb}, which ruled out $2^{-+}$ by over $8\sigma$ compared to $1^{++}$. The $1^{++}$ quantum numbers are those of the conventional $^3P_1$ charmonium state $\chi_{c1}(2P)$, but the $X(3872)$ has other properties that make such an assignment problematic. A very important property of the $X(3872)$ is that its mass is extremely close to the $D^{*0} \bar D^0$ threshold. Its energy relative to that threshold is measured to be $- 0.3 \pm 0.4$~MeV. The $1^{++}$ quantum numbers of $X(3872)$ imply that it has an $S$-wave coupling to $D^* \bar D$. As discussed in Sec.~\ref{sec:molecule}, the universal properties of $S$-wave near-threshold resonances then guarantee that the $X(3872)$ must be a charm-meson molecule whose constituents are a superposition of $D^{*0} \bar D^0$ and $D^0 \bar D^{*0}$.
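These universal properties tie the spatial size of the molecule to its binding energy: in the zero-range limit the bound-state wave function is proportional to $e^{-r/a}/r$ with $a = \hbar/\sqrt{2\mu E_X}$, so the root-mean-square separation of the constituents is $a/\sqrt{2}$. A minimal numerical sketch, assuming a binding energy of 0.3~MeV and approximate charm-meson masses:

```python
# Size of an S-wave bound state just below threshold, in the universal
# (zero-range) limit: psi(r) ~ exp(-r/a)/r with a = hbar/sqrt(2*mu*E_B),
# and rms separation a/sqrt(2).  Masses below are approximate values.
from math import sqrt

HBARC = 197.327    # hbar*c in MeV fm
M_D0 = 1864.8      # MeV
M_DSTAR0 = 2006.9  # MeV

def rms_separation(binding_energy):
    """Root-mean-square constituent separation in fm for E_B in MeV."""
    mu = M_D0 * M_DSTAR0 / (M_D0 + M_DSTAR0)     # reduced mass, MeV
    a = HBARC / sqrt(2.0 * mu * binding_energy)  # universal length scale, fm
    return a / sqrt(2.0)

print(f"rms separation for E_B = 0.3 MeV: {rms_separation(0.3):.1f} fm")
```

This reproduces the few-fm scale of the prediction and makes the $E_X^{-1/2}$ scaling explicit; the precise number depends on the adopted binding energy and on corrections to the universal limit.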
One simple universal prediction for such a loosely bound $S$-wave molecule is that the root-mean-square separation of its constituents scales with its binding energy $E_X$ as $E_X^{-1/2}$ \cite{Braaten:2003he}. If the binding energy of the $X(3872)$ is 0.3~MeV, the root-mean-square separation of its constituent charm mesons is predicted to be a remarkable 5~fm. The property of the $X(3872)$ that was initially most surprising is that its decays revealed a dramatic violation of isospin symmetry. In the discovery mode $J/\psi\, \pi^+\pi^-$, the $\pi^+\pi^-$ system is compatible with the decay of a virtual $\rho$, so the decay mode must have isospin 1. In the decay mode $J/\psi\, \pi^+\pi^- \pi^0$, the $\pi^+\pi^- \pi^0$ system is compatible with the decay of a virtual $\omega$, so the decay mode must have isospin 0. The decay modes $J/\psi\, \pi^+\pi^-$ and $J/\psi\, \pi^+\pi^- \pi^0$ were observed to have comparable branching fractions, indicating a severe violation of isospin symmetry. This can be explained by the fact that the mass of the $X(3872)$ is extremely close to the $D^{*0} \bar D^0$ threshold but about 8~MeV below the $D^{*+} D^-$ threshold, given that the isospin of the $X(3872)$ would be 0 if it were not so close to these thresholds. The branching fractions into $J/\psi\, \pi^+\pi^-$ and $J/\psi\, \pi^+\pi^- \pi^0$ are comparable in spite of the much stronger coupling of $X(3872)$ to $J/\psi\, \omega$, because the decay rate of $\rho$ into $\pi^+\pi^-$ is much larger than that of $\omega$ into $\pi^+\pi^- \pi^0$ and because the decay into $J/\psi\, \omega$ proceeds through the tail of the $\omega$ resonance. \subsubsection{New $1^{--}$ mesons} \label{sec:one--} Several new $J^{PC} = 1^{--}$ mesons have been observed by CLEO, Belle and \babar, spanning a range of masses between 3900 MeV and 4700 MeV. All of them were produced via Initial State Radiation (ISR), which fixes their quantum numbers to be that of the photon.
Most of these states were observed in dipion transitions to either $J/\psi$ or $\psi(2S)$ -- the only exceptions being $G(3900)$~\cite{Aubert:2006mi}, which was observed decaying to $D\bar{D}$, and $X(4630)$~\cite{Pakhlova:2008vn}, which was observed decaying to $\Lambda_c^+ \Lambda_c^-$. The most well-established of these states, the $Y(4260)$, was observed in ISR decaying to $J/\psi\, \pi^+\pi^-$ by \babar~\cite{Aubert:2005rm,Aubert:2008ic}, CLEO~\cite{He:2006kg} and Belle~\cite{Belle:2007sj}. It was later observed through direct $e^+e^-$ annihilation on resonance by CLEO~\cite{Coan:2006rv} -- an analysis which also produced evidence of the $Y(4260)$ decaying by a neutral dipion transition to $J/\psi\, \pi^0\pi^0$. Thanks to the large Belle data sample at $\Upsilon(4S)$ and the dedicated BES III running at $\sqrt{s} = 4260$ MeV, the statistics accumulated for the $Y(4260)$ are sufficient for detailed studies of resonant substructure in its decays to $J/\psi\, \pi^+\pi^-$. As noted below in Sec.~\ref{sec:charged}, these studies have revealed additional unexpected states, in particular the charmonium tetraquark $Z_c^+$. The $Y(4260)$ resonance is a candidate for a charmonium hybrid. One of the basic characteristics of a quarkonium hybrid is that its $Q \bar Q$ wave function is strongly suppressed near the origin. The production rate in $e^+ e^-$ annihilations of a hybrid with quantum numbers $1^{--}$ is therefore strongly suppressed. The $Y(4260)$ is produced so weakly in $e^+ e^-$ annihilation that it appears as a small local maximum in the cross section near the deep minimum between the conventional charmonium states $\psi(4160)$ and $\psi(4415)$. In bottomonium, a candidate for an additional $J^{PC}=1^{--}$ resonance above the open-bottom threshold was observed by Belle at a mass just above that of the $\Upsilon(5S)$. 
In 2008, Belle published the analysis of $\Upsilon(5S)\rightarrow \Upsilon(nS)\, \pi^+\pi^-$~\cite{Abe:2007tk}, which indicated branching fractions that were two orders of magnitude larger than expected. In 2010, Belle published the results of a study of the cross sections for $e^+e^-\rightarrow \Upsilon(nS)\, \pi^+\pi^-$ as a function of $\sqrt{s}$~\cite{Chen:2008pu}, in which it was found that the peak cross section for this process lies at 10.89~GeV, rather than at 10.86~GeV, where the total hadronic cross section peaks. This result has been interpreted as arising from a distinct $1^{--}$ resonance very close to $\Upsilon(5S)$~\cite{Hambrock:2013tpa}, but this has not yet been confirmed. Since it is not predicted by quark potential models, this $Y_b(10890)$ resonance, if confirmed, is likely to have an unconventional quark structure, such as a bottomonium hybrid, tetraquark or molecule. \subsubsection{New positive $C$-parity mesons} Positive $C$-parity charmonium-like mesons can be produced at $e^+e^-$ colliders via the two-photon fusion process $\gamma \gamma \rightarrow c\bar{c}$ or the double charmonium process $e^+e^- \rightarrow J/\psi + c\bar{c}$. They can also be produced by more complicated processes, such as the decays of $B$ mesons or inclusive production in hadron collisions, in which case their $C$-parity can be revealed by their decay products, such as $J/\psi$ plus a light vector meson $\omega$ or $\phi$. The $X(3915)$ has the most substantial pedigree, having been observed by both Belle~\cite{Abe:2004zs,Uehara:2009tx} and $\babar$~\cite{Aubert:2007vj,delAmoSanchez:2010jr,Lees:2012xs} and having been produced both in $B$ decays and in $\gamma\gamma$ fusion. In all these cases, the $X$ is observed in its decay to $J/\psi\, \omega$. Spin-parity measurements by the $\babar$ Collaboration prefer $0^{++}$. 
This makes the $X(3915)$ a candidate for the $\chi_{c0}(2P)$, but this assignment is disfavored by its near mass degeneracy with the candidate $\chi_{c2}(2P)$ state (see paragraph below) and its narrow width, which suggests that its decays into $D \bar D$ must be suppressed. Both Belle~\cite{Uehara:2005qd} and $\babar$~\cite{:2010hka} have observed a state at a mass of 3927~MeV produced by $\gamma\gamma$ fusion, which decays to $D\bar{D}$. The analysis of spin and $C$-parity favors an assignment of $2^{++}$. Since there is no strong evidence disfavoring this assignment, we have denoted this state $\chi_{c2} (2P)$. The $X(3940)$ has been observed only by the Belle Collaboration in $e^+ e^-$ collisions, but using two different methods. It has been observed through the exclusive final state $J/\psi\, D^* \bar D$ as a resonance in $D^* \bar D$~\cite{Abe:2007sya}. It has also been observed through the inclusive final state $J/\psi + X$ as a peak in the recoil momentum distribution for $J/\psi$~\cite{Abe:2007jn}. The CDF Collaboration has demonstrated that general-purpose detectors at hadron colliders can also contribute to quarkonium spectroscopy by their discovery of the $Y(4140)$ through its decay into $J/\psi\, \phi$~\cite{Aaltonen:2009tz,Aaltonen:2011at}. The $Y(4140)$ has been confirmed at the LHC by the CMS Collaboration~\cite{Yetkin:2013iza}. \subsubsection{Charged $Q \bar Q$ mesons} \label{sec:charged} The observations of charged resonances above the open-heavy-flavor thresholds that couple strongly to charmonium or bottomonium have led to significant interest in both the experimental and theoretical communities. Such resonances, decaying to a heavy quarkonium state and a charged light-quark meson are manifestly exotic, requiring a minimal quark content of $Q \bar Q q \bar q$. 
Given the existence of this new category of tetraquark hadrons, the planned data sets at the upgraded Super KEK-B collider as well as the current and future data sets from the LHC experiments offer significant discovery potential. The Belle Collaboration has reported the observation of three charged charmonium-related resonances with statistical significances greater than $5\sigma$: the $Z_1^+(4050)$ and $Z_2^+(4250)$ were observed in the $\chi_{c1}(1P)\, \pi^+$ final state~\cite{Mizuk:2008me} and $Z^+(4430)$ was observed in $\psi(2S)\, \pi^+$~\cite{Choi:2007wga,Mizuk:2009da}. $\babar$ searched for similar structures in the same final states~\cite{Lees:2011ik,:2008nk}, but was unable to observe statistically significant excesses for any of them. More recently, in decays of the $Y(4260)$, both the BES III~\cite{Ablikim:2013mio} and the Belle~\cite{Liu:2013dau} Collaborations announced the observation of a new charged charmonium-like resonance, $Z_c^+(3900)$, which decays to $J/\psi\, \pi^+$. The Belle Collaboration utilized a sample of $Y(4260)$ mesons produced using Initial State Radiation (ISR), while the BES Collaboration's result was based on a sample of $e^+e^-$ annihilations recorded on the peak of the $Y(4260)$. Both experiments observed signals with greater than $5\sigma$ significance. In addition, an analysis based on CLEO-c data taken at $\sqrt{s} = 4170 $ MeV claimed a $6.0\sigma$ observation of the $Z_c(3900)$ in the $J/\psi\, \pi^+$ final state~\cite{Xiao:2013iha}, although the mass measurement is incompatible by nearly $3\sigma$ with the Belle and BES results. In addition, the same study presented $3\sigma$ evidence for a neutral isospin partner $Z_c^0$ in the $J/\psi\, \pi^0$ decay channel at a similar mass. Charged bottomonium-like mesons were discovered by the Belle Collaboration in 2011. 
In an effort to understand the unexpectedly large rate of production of the singlet-$P$ states $h_b(2P)$ and $h_b(1P)$ observed in $\pi^+\pi^-$ transitions from $\Upsilon(5S)$, the Belle Collaboration studied the resonant substructure in these transitions~\cite{Adachi:2011XXX,Bondar:2011pd}. Two significant peaks, denoted $Z_b(10610)$ and $Z_b(10650)$, were observed in the invariant mass of $\Upsilon(nS)\, \pi^+$ and in that of $h_b(nP)\, \pi^+$ as well. In fact, it was observed that the rate of production of $h_b(nP)$ in $\pi^+\pi^-$ transitions from $\Upsilon(5S)$ was fully accounted for by charged pion cascades involving the $Z_b$ states, and large fractions of the transitions to the lower $\Upsilon(nS)$ states occur through the $Z_b$ as well. Both charged $Z_b$ states were observed in dipion transitions to $\Upsilon(1S,2S,3S)$ and $h_b(1P,2P)$ with consistent mass and width measurements for all of the five final states. Belle subsequently obtained 4.9$\sigma$ evidence for the neutral isospin partner of the $Z_b(10610)$ in $\pi^0\pi^0$ transitions from $\Upsilon(5S)$ to $\Upsilon(2S)$~\cite{Adachi:2012im}. The $Z_b(10610)$ and $Z_b(10650)$ states lie very close to the $BB^*$ and $B^*B^*$ thresholds, respectively, and have therefore prompted speculation that the two states are, in fact, molecular states of these bottom meson pairs. This speculation is supported by the subsequent observation by Belle~\cite{Adachi:2012cx} of large rates of decay of the $Z_b$ states into the expected $B^{(*)}$ meson pairs. The $Z_c^+(3900)$ is the only charged quarkonium resonance that has been observed in more than one experiment.
Since the only known production mechanism of the $Z_b$ states is $\Upsilon(5S)$ decays, and the Belle Collaboration has the only substantial $\Upsilon(5S)$ data sample, the observations of $Z_b(10610)$ and $Z_b(10650)$ are likely to remain unconfirmed until further data are taken on the $\Upsilon(5S)$ resonance, unless these states can be observed inclusively in $\Upsilon\, \pi^+$ decays in the LHC experiments. The confirmation of these exotic $b \bar b$ mesons is clearly important. The discovery of additional tetraquark mesons may provide essential clues for our understanding of this new aspect of the QCD spectrum. We therefore anticipate exciting results when new experiments come online in the next decade, providing data samples sufficiently large to explore this new territory with much increased precision. \subsection{Phenomenological Models} The properties of the neutral and charged $XYZ$ states are presumably described by QCD. The ultimate challenge to theory is to derive those properties directly from QCD. A more modest challenge for theory is to develop a phenomenological description of these states motivated by QCD that accurately reproduces the properties of the states that have already been observed and predicts which additional states should be observed and which ones should not. Thus far, theory has failed even to meet this modest challenge. Below, we describe briefly the phenomenological models of the $XYZ$ mesons that have been proposed. Such a model can be specified most simply by identifying its colored constituents and how they are clustered within the meson. A more detailed description might specify the forces between the constituents. Thus far none of the phenomenological models that have been proposed has produced a compelling explanation for the neutral and charged $XYZ$ states that have been observed. \subsubsection{Conventional quarkonium} The only constituents in a conventional quarkonium are the heavy quark and antiquark ($Q \bar Q$).
In a quark potential model, the $Q$ and $\bar Q$ interact through a potential $V(r)$ that can be approximated by a color-Coulomb potential proportional to $1/r$ at short distances and a confining potential that is linear in $r$ at long distances. The quarkonium spectrum in the quark potential model consists of spin-symmetry multiplets: $S$-wave multiplets with $J^{PC}$ quantum numbers $\{ 0^{-+}, 1^{--} \}$, $P$-wave multiplets $\{ 1^{+-}, (0,1,2)^{++} \}$, $D$-wave multiplets $\{ 2^{-+}, (1,2,3)^{--} \}$, $F$-wave multiplets $\{ 3^{+-}, (2,3,4)^{++} \}$, etc. The splittings between multiplets are approximately the same in charmonium and bottomonium. The splittings within spin-symmetry multiplets are approximately 3 times smaller in bottomonium than in charmonium. Quark potential models provide a very accurate description of the quarkonium states below their open-heavy-flavor threshold. In the absence of large mixings with other types of mesons, they should also provide a good description of conventional quarkonium states above the open-heavy-flavor threshold. In addition to the quarkonium masses, potential models provide predictions for the radiative transition rates between quarkonium states. Models for their hadronic transition rates based on the multipole expansion of QCD have also been developed. Predictions for charmonium states above the open-charm threshold are given in Refs.~\cite{Eichten:2004uh,Barnes:2005pb,Eichten:2007qx}. \subsubsection{Meson molecules} \label{sec:molecule} If the mass of a quarkonium-like meson is sufficiently close to the open-heavy-flavor threshold for a pair of heavy-light mesons, it is at least plausible to interpret the pair of mesons as constituents of the quarkonium-like meson. For such an interpretation to be plausible, the widths of the constituents must be narrower than that of the meson.
The charm mesons that are narrow enough to be plausible constituents are the $S$-wave mesons $D$, $D^*$, $D_s$, and $D_s^*$ and the $P$-wave mesons $D_1$, $D_2^*$, $D_{s0}^*$, $D_{s1}$, and $D_{s2}^*$. Pairs of these mesons with zero net strangeness define 25 thresholds between 3730~MeV and 5146~MeV. Thus the mass of any given $c \bar c$ meson in this region has a high probability of randomly being within 50~MeV of one of the thresholds. The interactions between the heavy-light mesons at sufficiently low energies are due to pion exchange; the tensor term in the pion-exchange potential is singular at short distances. In channels where the pion-exchange potential is attractive, whether the mesons are bound into molecules or unbound depends crucially on the short-distance region. For reasonable choices of a short-distance cutoff, there are bound states near threshold for $D^* \bar D$ with isospin 0 and $J^{PC} = 0^{-+}$ and $1^{++}$ and for $D^* \bar D^*$ with isospin 0 and $J^{PC} = 0^{++}$, $0^{-+}$, $1^{+-}$, and $2^{++}$ \cite{Tornqvist:1993ng}. The corresponding bottom meson pairs are predicted to form molecules in these channels that are more strongly bound. The pattern of quantum numbers for the molecules can be changed by taking into account the potentials from the exchange of other mesons. If the mass of a quarkonium-like meson is extremely close to the threshold for a pair of mesons and if it has an $S$-wave coupling to them, it can be identified rigorously as a loosely bound molecule consisting of those mesons \cite{Braaten:2003he}. The binding energy must be much less than the energy scale set by the range of the pion-exchange interactions between the pair of mesons, which is about 10~MeV for charm meson pairs and about 3~MeV for bottom meson pairs. In this case, the molecule has universal properties that are determined by its binding energy and its width.
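The count of 25 zero-net-strangeness thresholds can be checked directly: pairing each narrow meson with an antimeson of the same family gives $4\cdot5/2 = 10$ nonstrange and $5\cdot6/2 = 15$ strange combinations. A sketch with approximate meson masses (the exact endpoint values depend on which isospin members are used):

```python
# Enumerate the meson-antimeson thresholds with zero net strangeness that
# can be formed from the narrow charm mesons.  Masses (MeV) are approximate.
nonstrange = {"D": 1864.8, "D*": 2007.0, "D_1": 2421.0, "D_2*": 2461.0}
strange = {"D_s": 1968.3, "D_s*": 2112.2, "D_s0*": 2317.8,
           "D_s1": 2459.5, "D_s2*": 2572.6}

def thresholds(masses):
    """Unordered meson-antimeson pairs drawn from one family."""
    names = sorted(masses)
    return {(a, b): masses[a] + masses[b]
            for i, a in enumerate(names) for b in names[i:]}

# Zero net strangeness: nonstrange with anti-nonstrange, strange with anti-strange.
zero_strangeness = {**thresholds(nonstrange), **thresholds(strange)}
print(len(zero_strangeness), "thresholds between",
      min(zero_strangeness.values()), "and", max(zero_strangeness.values()), "MeV")
```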
The quintessential example of such a molecule is the $X(3872)$, as discussed in Sec.~\ref{sec:X3872}. \subsubsection{Quarkonium hybrids} \label{sec:hybrid} A quarkonium hybrid has constituents $Q \bar Q g$, where $g$ is a constituent gluon. Since the color charge of a gluon is octet, the $Q \bar Q$ pair is also in a color-octet state. In the lowest-energy multiplets of quarkonium hybrids, the constituent gluon $g$ is a magnetic gluon that can be assigned the quantum numbers $J^{PC} = 1^{+-}$ and can be interpreted as a vector particle in a $P$-wave orbital. If the $Q \bar Q$ pair is in an $S$-wave state, the states of the hybrid form the spin-symmetry multiplet $\{ 1^{--}, (0,1,2)^{-+} \}$. If the $Q \bar Q$ pair is in a $P$-wave state, the states of the hybrid form a supermultiplet that can be decomposed into the spin-symmetry multiplets $\{ 1^{++}, (0,1,2)^{+-} \}$, $\{ 0^{++}, 1^{+-} \}$, and $\{ 2^{++}, (1,2,3)^{+-} \}$. The quantum numbers $1^{-+}$, $0^{+-}$, and $2^{+-}$ are exotic: they are not possible for a conventional quarkonium that consists of $Q \bar Q$ only. As described in Sec.~\ref{sec:lattice}, calculations of the $c \bar c$ meson spectrum using lattice QCD have found that the lowest charmonium hybrids fill out an $S$-wave multiplet and a $P$-wave supermultiplet. As described in Sec.~\ref{sec:latticeNRQCD}, calculations of the $b \bar b$ meson spectrum using lattice NRQCD without dynamical quarks have found that the lowest bottomonium hybrid spin-symmetry multiplets are $\{ 1^{--}, (0,1,2)^{-+} \}$, $\{ 1^{++}, (0,1,2)^{+-} \}$, and $\{ 0^{++}, 1^{+-} \}$, followed by an excited $\{ 1^{--}, (0,1,2)^{-+} \}$ multiplet. An alternative model of the quarkonium hybrids is provided by the Born-Oppenheimer approximation, in which the $Q$ and $\bar Q$ move adiabatically in the presence of gluon and light-quark fields whose energy levels are those generated by static $Q$ and $\bar Q$ sources. 
In the absence of dynamical light quarks, the Born-Oppenheimer potentials can be calculated straightforwardly using lattice QCD. As described in Sec.~\ref{sec:BO}, if the Born-Oppenheimer approximation is applied to bottomonium without light quarks, the lowest spin-symmetry multiplets are $\{ 1^{--}, (0,1,2)^{-+} \}$ and $\{ 1^{++}, (0,1,2)^{+-} \}$, followed by radial excitations of these two multiplets, and then by $\{ 0^{++}, 1^{+-} \}$. The lowest Born-Oppenheimer potential for a quarkonium hybrid has a minimum for a $Q \bar Q$ separation of about 0.3~fm, and it is repulsive at short distances. As a consequence, the wave function at the origin for the $Q \bar Q$ in the quarkonium hybrid is very small. A $1^{--}$ quarkonium hybrid should therefore be produced very weakly in $e^+ e^-$ annihilation compared to a conventional $1^{--}$ quarkonium. One model-independent prediction for decays of a quarkonium hybrid is that decay into a pair of $S$-wave mesons is suppressed \cite{Kou:2005gt,Close:2005iz}. The dominant decays of charmonium hybrids were therefore previously expected to be into an $S$-wave and $P$-wave charm-meson pair, provided these states are kinematically accessible. The discoveries of tetraquark $Q \bar Q$ mesons provide previously unanticipated decay channels for quarkonium hybrids. \subsubsection{Compact tetraquarks} A compact tetraquark has constituents $Q \bar Q q \bar q$ that are in overlapping orbitals. In the simplest quark potential models for tetraquarks, the four quarks interact through pair-wise potentials between the six quark pairs. In such models, the mass of a tetraquark must be below the thresholds for pairs of heavy-light mesons ($Q \bar q$ and $\bar Q q$) with appropriate quantum numbers and below the thresholds for a quarkonium ($Q \bar Q$) plus a light ($\bar q q$) meson with appropriate quantum numbers \cite{Vijande:2007ix}.
If the tetraquark is above either of these thresholds, there is no potential barrier that prevents it from falling apart into a pair of mesons. In more complicated quark potential models, there can also be 3-quark and 4-quark potentials that may be able to stabilize tetraquarks above the meson pair thresholds \cite{Vijande:2011im}. \subsubsection{Diquarkonium tetraquarks} One possibility for substructure in a $Q \bar Q q \bar q$ tetraquark is that its constituents are clustered into diquarks $Q q$ and $\bar Q \bar q$ \cite{Drenska:2010kg}. Their color states are antitriplet and triplet, respectively. The spin quantum number of a diquark can be 0 or 1. In the naive diquark model, the only degrees of freedom come from the spins and flavors of the quarks inside the diquarks and from the relative motion of the diquarks, whose orbitals are determined by the orbital angular momentum and radial quantum numbers. The tetraquark energy levels are then completely determined by group theory in terms of the orbital energies and the spin-spin interaction strengths, which are phenomenological parameters. The lowest energy states of the tetraquark are those where the diquark pair is in an $S$-wave state. If we consider only the light quark flavors $u$ or $d$, the $Q \bar Q$ tetraquark spectrum consists of degenerate isosinglet and isovector multiplets with $J^P$ quantum numbers $0^+$, $0^+$, $1^+$, $1^+$, $1^+$, and $2^+$ \cite{Drenska:2010kg}. The $J^{PC}$ quantum numbers of the neutral tetraquarks are $0^{++}$, $1^{++}$, $1^{+-}$, and $2^{++}$, with 4, 4, 2, and 2 states, respectively. If we include the $s$ quark, there are additional isodoublet and isosinglet states. If we also consider orbital angular momentum excitations and radial excitations of the diquark pair, there is a vast proliferation of predicted states.
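The level counting for the $S$-wave states quoted above is a simple exercise in coupling the two diquark spins; schematically,
\begin{equation}
0 \otimes 0 \to J = 0, \qquad 0 \otimes 1 \ {\rm and}\ 1 \otimes 0 \to J = 1, \qquad 1 \otimes 1 \to J = 0, 1, 2,
\end{equation}
which, together with parity $P = +$ for an $S$-wave diquark pair, yields six states for each flavor multiplet.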
\subsubsection{Hadro-quarkonium} Hadro-quarkonium consists of a color-singlet $q \bar q$ pair bound to a compact core that is a color-singlet $Q \bar Q$ pair \cite{Dubynskiy:2008mq}. The binding can be attributed to a color analog of the van der Waals force between neutral atoms. An essentially equivalent model is a molecule consisting of a light meson bound to a quarkonium. Hadro-quarkonium is motivated primarily by the fact that most of the $XYZ$ states have been observed through their hadronic transitions to a single charmonium state by emitting a specific light vector meson. This is explained naturally if the charmonium state and the light meson are already present as constituents of the $XYZ$ meson. \subsubsection{Born-Oppenheimer tetraquarks} The structure of a $Q \bar Q q \bar q$ tetraquark could be similar to that of a quarkonium hybrid in the Born-Oppenheimer approximation, in which the $Q$ and $\bar Q$ move adiabatically in the presence of gluon and light-quark fields whose energy levels are those generated by static $Q$ and $\bar Q$ sources \cite{Braaten:2013boa}. In the case of tetraquarks, the gluon and light-quark fields have non-singlet flavor quantum numbers. A simple predictive assumption for the tetraquark Born-Oppenheimer potentials is that they differ only by an offset in energy from the corresponding quarkonium hybrid potentials. This implies that the pattern of spin-symmetry multiplets for the tetraquarks is similar to that for the quarkonium hybrids. \subsection{Theoretical approaches within QCD} QCD presumably provides a fundamental description of the $Q \bar Q$ mesons above the open-heavy-flavor threshold. In this section, we describe theoretical approaches to the problem that are directly based on QCD. \subsubsection{Lattice QCD} \label{sec:lattice} The only well-developed systematically improvable method for calculating observables of QCD that are inherently nonperturbative is lattice gauge theory. 
This method involves discretizing the quark and gluon fields on a Euclidean space-time lattice, evaluating the functional integrals over the Grassmann quark fields analytically to get a determinant, and then evaluating the functional integrals over the gluon fields using Monte Carlo methods. This method can only be applied directly to observables that can be extracted from Euclidean correlation functions of the quark and gluon fields. For such observables, all the errors in the calculation can be quantified. There are statistical errors from the limited sample of gluon field configurations and also from the determination of the QCD parameters, which are the coupling constant $\alpha_s$ and the quark masses. There are also systematic errors from the extrapolation of the lattice volume to infinity, the extrapolation of the lattice spacing to zero, and the extrapolation of the light quark masses to their physical values. The mass of the charm quark is small enough that lattice gauge theory, with presently available computational resources, can be applied directly to $c \bar c$ mesons. The most extensive studies of the $c \bar c$ meson spectrum above the $D \bar D$ threshold have been carried out by Dudek, Edwards, Mathur and Richards \cite{Dudek:2007wv} and extended by the Hadron Spectrum Collaboration \cite{Liu:2012ze}. The most recent published calculations used an anisotropic lattice with $24^3 \times 128$ sites and a spatial lattice spacing of about 0.12~fm. Their gauge field configurations were generated using dynamical $u$, $d$, and $s$ quarks, with the $s$ quark having its physical mass and the $u$ and $d$ quarks unphysically heavy, corresponding to a pion mass of about 400~MeV. On a cubic lattice, there are 20 channels analogous to the $J^{PC}$ quantum numbers in the continuum. 
For each of the 20 lattice $J^{PC}$ channels, the Hadron Spectrum Collaboration calculated the $c \bar c$ meson spectrum from the Euclidean time dependence of the cross-correlators for a set of operators whose number ranged from 4 to 26, depending on the channel. The operators included ``hybrid'' operators constructed out of the $c$ quark field and the gluon field strength, and ``charmonium'' operators constructed out of the $c$ quark field and other combinations of covariant derivatives. They considered two lattice volumes and verified that their results were insensitive to the lattice volume. Since they only considered one value for the lattice spacing and one value for the $u$ and $d$ masses, they could not quantify the systematic errors associated with the lattice spacing or the light quark masses. The Hadron Spectrum Collaboration identified 46 states in the $c \bar c$ meson spectrum with high statistical precision \cite{Liu:2012ze}. These states had spins $J$ as high as 4 and masses as high as 4.6~GeV. Four of the states had exotic quantum numbers that were not possible for a pure $c \bar c$ state: $1^{-+}$, $0^{+-}$, and $2^{+-}$. In some of the nonexotic $J^{PC}$ channels, there were several states that could not be accommodated by the multiplets for conventional charmonium. In particular, a total of 6 states were identified in both the $1^{--}$ and $1^{+-}$ channels. The states that are more strongly excited by charmonium operators are plausible candidates for conventional charmonium. They fill out complete $1S$, $2S$, $3S$, $1P$, $2P$, $1D$, and $1F$ multiplets and there are also candidates for the $3P$ and $1G$ multiplets. The states that are more strongly excited by hybrid operators are plausible candidates for charmonium hybrids. They can be assigned to the heavy-quark spin-symmetry multiplets $\{ 1^{--}, (0,1,2)^{-+} \}$, $\{ 1^{++}, (0,1,2)^{+-} \}$, $\{ 0^{++}, 1^{+-} \}$, and $\{ 2^{++}, (1,2,3)^{+-} \}$ \cite{Liu:2012ze}.
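The identification of these states rests on the spectral decomposition of the Euclidean cross-correlators: schematically,
\begin{equation}
C_{ij}(t) = \langle 0 |\, \mathcal{O}_i(t)\, \mathcal{O}_j^\dagger(0)\, | 0 \rangle
= \sum_n \langle 0 | \mathcal{O}_i | n \rangle \langle n | \mathcal{O}_j^\dagger | 0 \rangle\, e^{-E_n t},
\end{equation}
so that the energies $E_n$ are extracted from the exponential fall-off in Euclidean time, while the overlap factors $\langle 0 | \mathcal{O}_i | n \rangle$ indicate whether a given state is excited more strongly by charmonium operators or by hybrid operators.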
Since the probabilities for light-quark constituents in a conventional quarkonium or in a quarkonium hybrid are expected to be small, it is plausible that their results give the correct pattern for the spectrum of charmonium and charmonium hybrids in QCD. Definitive calculations of the spectrum of $c \bar c$ mesons with all systematic errors quantified would require much more extensive calculations. The calculations of Ref.~\cite{Liu:2012ze} would have to be repeated at several smaller lattice spacings in order to extrapolate to zero lattice spacing. They would have to be repeated for several smaller pion masses in order to extrapolate to the physical $u$ and $d$ masses. The list of operators would have to be expanded to include ``tetraquark'' ($c \bar c q \bar q$) operators that are constructed out of light-quark fields as well as the $c$ quark field. A tetraquark operator that factors into color-singlet $c \bar q$ and $\bar c q$ operators has an enhanced amplitude for exciting scattering states consisting of a pair of charm mesons, which have a discrete spectrum on the lattice. One complication that could become increasingly severe as the $u$ and $d$ quark masses are decreased is the effect of 3-hadron scattering states consisting of a $c\bar c$ meson and a pair of light mesons. The masses of the one or two lightest $c \bar c$ mesons in each of 13 $J^{PC}$ channels have also been calculated by Bali, Collins, and Ehmann \cite{Bali:2011rd}. They used isotropic lattices with $16^3 \times 32$ sites and $24^3 \times 48$ sites and lattice spacings of 0.115~fm and 0.077~fm, respectively. Their gauge field configurations were generated using dynamical $u$ and $d$ quarks with equal masses that correspond to a pion mass of 1010~MeV, 400~MeV, or 280~MeV. In each $J^{PC}$ channel, the masses of the two lightest $c \bar c$ mesons were extracted from the cross-correlators of 3 operators. 
For most of the $J^{PC}$ channels, the operators were charmonium operators, but the operators for the $2^{+-}$ channel were hybrid operators. Comparison with the results of Ref.~\cite{Liu:2012ze} suggests that several of the states should be interpreted as charmonium hybrids, namely the states with the exotic quantum numbers $1^{-+}$ and $2^{+-}$ and the first excited states for $2^{-+}$ and $3^{+-}$. In Ref.~\cite{Bali:2011rd}, the mixing between $c \bar c$ mesons and pairs of charm mesons was also studied in the channels $0^{-+}$, $1^{--}$, and $1^{++}$. The radiative transition rates between $Q \bar Q$ mesons can also be calculated using lattice QCD. They can be extracted from the Euclidean correlation functions of three operators, one of which is the electromagnetic current operator. The first such calculations for excited $c \bar c$ mesons were carried out by Dudek, Edwards, and Thomas using lattice QCD without dynamical quarks \cite{Dudek:2009kk}. They used an anisotropic lattice with $12^3 \times 48$ sites and a lattice spacing of about 0.1~fm. They calculated the electric dipole transition rate between each of the four lowest-energy $1^{--}$ states and the $\chi_{c0}$. For the fourth $1^{--}$ state, the transition rate is very small, consistent with its identification as a charmonium hybrid. They also calculated the magnetic dipole transition rate between each of the four lowest-energy $1^{--}$ states and the $\eta_c$. \subsubsection{Lattice NRQCD} \label{sec:latticeNRQCD} Since the mass of the $b$ quark is larger than that of the $c$ quark by about a factor of 3, lattice QCD calculations for $b \bar b$ mesons require a lattice spacing that is 3 times smaller to obtain the same accuracy as for $c \bar c$ mesons. The resulting increase in the number of lattice points by a factor of $3^4$ puts lattice QCD calculations for $b \bar b$ mesons beyond the reach of the computational power that is currently available. An alternative is to use lattice NRQCD.
Nonrelativistic QCD is an effective field theory for QCD in which the heavy quark is treated nonrelativistically. Lattice NRQCD has been very successful in calculating the properties of the bottomonium states below the $B \bar B$ threshold. (For a recent example, see Ref.~\cite{Dowdall:2011wh}.) Since the $b$ and $\bar b$ remain nonrelativistic in the $b \bar b$ mesons above the $B \bar B$ threshold, NRQCD should also be applicable to those mesons. Lattice NRQCD without any dynamical quarks has been used by Juge, Kuti, and Morningstar to calculate the energies of the lightest bottomonium hybrid mesons \cite{Juge:1999ie}. They used an anisotropic lattice with $15^3 \times 45$ sites and a spatial lattice spacing of about 0.11~fm. They used the Euclidean time-dependence of the correlator of a single hybrid operator in each of four $J^{PC}$ channels to determine the energies of the hybrid states. The ground-state hybrid spin-symmetry multiplet was determined to be $\{ 1^{--}, (0,1,2)^{-+} \}$. The lowest-energy excited multiplets are $\{ 1^{++}, (0,1,2)^{+-} \}$ and $\{ 0^{++}, 1^{+-} \}$, followed by an excited $\{ 1^{--}, (0,1,2)^{-+} \}$ multiplet. \subsubsection{Born-Oppenheimer approximation} \label{sec:BO} Because the mass of a heavy quark is large, the $Q$ and $\bar Q$ in a $Q \bar Q$ meson move slowly in response to the gluon and light-quark fields, while, in comparison, the responses of the gluon and light-quark fields to the motion of the $Q$ and $\bar Q$ are almost instantaneous. These features are exploited in the Born-Oppenheimer approximation. The relative motion of the $Q$ and $\bar Q$ is described by the Schroedinger equation with Born-Oppenheimer (B-O) potentials $V_n(r)$ defined by the energy levels of the gluon and light-quark fields in the presence of static $Q$ and $\bar Q$ sources separated by a distance $r$.
The energy levels for the gluon and light-quark fields can be specified by the quantum number $+\Lambda$ or $-\Lambda$, where $\Lambda = 0,1,2,\ldots$ (or $\Lambda = \Sigma,\Pi,\Delta,\ldots$), for the projection of their total angular momentum $\bm{J}_{\rm light}$ on the $Q \bar Q$ axis and by their $CP$ quantum number $\eta = \pm 1$ (or $\eta = g,u$). The sign of $\pm \Lambda$ is relevant only for $\Lambda = \Sigma$. The B-O potentials can therefore be labeled $\Sigma_g^\pm$, $\Sigma_u^\pm$, $\Pi_g$, $\Pi_u$, \ldots. The centrifugal potential in the Schroedinger equation is proportional to $(\bm{L} - \bm{J}_{\rm light})^2$, where $\bm{L}$ is the sum of $\bm{J}_{\rm light}$ and the orbital angular momentum of the $Q \bar Q$ pair. The raising and lowering operators $J_{\rm light}^+$ and $J_{\rm light}^-$ in the $\bm{L} \cdot\bm{J}_{\rm light}$ term introduce couplings between the B-O potentials. In the leading Born-Oppenheimer approximation, these coupling terms are neglected and $L$ is a good quantum number. The distinct energy levels from solutions to the Schroedinger equation for a specific B-O potential can therefore be labeled $nL$, where $n$ is a radial quantum number and $L = S, P, D, \ldots$. A systematic expansion around the leading Born-Oppenheimer approximation that takes into account nonadiabatic effects from the motion of the $Q \bar Q$ pair can be developed by treating the $L^- J_{\rm light}^+$ and $L^+ J_{\rm light}^-$ terms in the centrifugal potential as perturbations. In the leading Born-Oppenheimer approximation, each of the energy levels $nL$ for the Schroedinger equation with a specific B-O potential corresponds to a multiplet of $Q \bar Q$ mesons. In the flavor-singlet sector, the energy levels in the ground-state B-O potential $\Sigma_g^+$ can be interpreted as conventional quarkonia while those in the excited B-O potentials $\Pi_u$, $\Sigma_u^-$, \ldots can be interpreted as quarkonium hybrids. 
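In the leading Born-Oppenheimer approximation, the energy levels $nL$ are therefore obtained from a radial Schroedinger equation of the schematic form
\begin{equation}
\left[ -\frac{1}{2\mu} \frac{d^2}{dr^2}
+ \frac{\langle (\bm{L} - \bm{J}_{\rm light})^2 \rangle}{2 \mu r^2}
+ V_n(r) \right] u(r) = E\, u(r),
\end{equation}
where $\mu = m_Q/2$ is the reduced mass of the $Q \bar Q$ pair and the expectation value of the centrifugal term is evaluated with the $L^\mp J_{\rm light}^\pm$ coupling terms neglected.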
In the flavor-nonsinglet sectors, the energy levels in the B-O potentials can be interpreted as tetraquark mesons. The lowest flavor-singlet Born-Oppenheimer potentials have been calculated by Juge, Kuti, and Morningstar using lattice QCD without any dynamical quarks \cite{Juge:1999ie}. Their finest lattices were $10^3 \times 30$ and $14^3 \times 56$, with spatial lattice spacings of about 0.19~fm and 0.23~fm, respectively. The ground-state potential $\Sigma_g^+$ behaves like the phenomenological potential of quark-potential models. The most attractive of the excited B-O potentials is the $\Pi_u$ potential, which has a minimum near 0.3~fm. This potential is linear in $r$ at long distances, and it is repulsive at short distances, where it approaches the energy of gluon and light-quark fields in the presence of a static color-octet $Q \bar Q$ source. The solutions to the Schroedinger equation for the $b \bar b$ pair in the excited B-O potentials reveal that the lowest energy level for bottomonium hybrids is the $P$-wave level $\Pi_u(1P)$, whose supermultiplet consists of two degenerate spin-symmetry multiplets: $\{ 1^{--}, (0,1,2)^{-+} \}$ and $\{ 1^{++}, (0,1,2)^{+-} \}$. The next lowest energy levels are the radially excited $P$-wave level $\Pi_u(2P)$ and the $S$-wave level $\Sigma_u^-(1S)$, which consists of a single spin-symmetry multiplet $\{ 0^{++}, 1^{+-} \}$. If the nonadiabatic couplings between B-O potentials were taken into account, the $\Pi_u(nP)$ supermultiplets would be split and the energy ordering of the $\Pi_u(2P)$ and $\Sigma_u^-(1S)$ levels would presumably be reversed, so that they would agree with the results of the lattice NRQCD calculations of bottomonium hybrids described in Sec.~\ref{sec:latticeNRQCD}. In the presence of static $Q$ and $\bar Q$ sources, the lowest-energy state of the gluon and light-quark fields need not be localized near the sources. The localized fields can be accompanied by additional light hadrons.
These additional hadrons complicate the calculation of a B-O potential, which should really be defined as the minimal energy for {\it localized} gluon and light-quark fields with the appropriate quantum numbers. This problem is not so severe in the absence of light quarks, because the only light hadrons are glueballs, which have rather large masses. The problem is more serious if there are dynamical light quarks, and it becomes increasingly severe as the masses of the $u$ and $d$ quarks are decreased to their physical values. It may be possible to overcome this problem, or at least ameliorate it, by using the cross-correlators of multiple operators to determine the B-O potentials. \subsection{Scientific Opportunities} In this section, we describe the opportunities in the near and mid-term future for adding to our understanding of the spectroscopy of quarkonia and related states. We describe, in roughly chronological order in terms of readiness for data taking, the prospects for results in this arena from BESIII, Belle II, the LHC experiments, and PANDA. \subsubsection{BESIII @ BEPC} BESIII is the only experiment currently taking data using $e^+ e^-$ collisions. It is expected to continue running for eight to ten years -- although its current specific run plan carries through only 2015. It has already made significant contributions to the study of charmonium and related states in the region above the open-charm threshold, both confirming neutral states and discovering new charged and neutral states. Its decision to run at the $Y(4260)$ has supplied interesting information well beyond what may have been expected, including the discovery of the charmonium tetraquark $Z_c^+$. Among the possibilities that have good discovery potential would be continued running on the $Y(4260)$ resonance and additional running on other higher $1^{--}$ resonances.
These priorities compete with the desires to run at the $\psi(3770)$ for studies of $D$ mesons and at the $\psi(4170)$ for studies of $D_s$ mesons, but they should be given consideration. Given the tantalizing possibility of the $Y(4260)$'s identification as a charmonium hybrid, more data taken on resonance will certainly help support or deny this possibility. With a proven detector and an excellent accelerator facility that is currently operating, BESIII alone has the immediate opportunity to add greatly to our understanding of the $c \bar c$ meson spectrum above the open-charm threshold. We expect many new results from BESIII in the near term. \subsubsection{Belle II @ Super KEK-B} The KEK-B accelerator at KEK was dismantled after the completion of the operation of the Belle experiment in 2010, and currently is undergoing a substantial upgrade of the entire facility to what will be known as Super KEK-B. The design luminosity of Super KEK-B is $8 \times 10^{35}~{\rm cm}^{-2}\, {\rm s}^{-1}$, nearly 40 times the peak instantaneous luminosity achieved by KEK-B. The plans are to begin running at the $\Upsilon(4S)$ resonance in 2016, and ultimately to collect an integrated luminosity of 50 ${\rm ab}^{-1}$. The Belle II detector also represents a substantial upgrade to the Belle detector system -- including new particle-ID systems, central drift chamber, and silicon and pixel detectors for increased vertexing capabilities. The Belle Collaboration has had remarkable success in quarkonium spectroscopy, particularly in the study of states accessible through $B$ decays, double charmonium production, and by means of Initial State Radiation (ISR), as well as those accessible in $e^+e^-$ annihilation near the $\Upsilon(5S)$. This gives reason for optimism concerning the prospects for further success with the upgraded detector system and the much increased luminosities.
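To put these design parameters in perspective, a rough estimate of the minimum running time needed to accumulate 50~${\rm ab}^{-1}$ at the design instantaneous luminosity is
\begin{equation}
t \sim \frac{50~{\rm ab}^{-1}}{8 \times 10^{35}~{\rm cm}^{-2}\,{\rm s}^{-1}}
= \frac{5 \times 10^{43}~{\rm cm}^{-2}}{8 \times 10^{35}~{\rm cm}^{-2}\,{\rm s}^{-1}}
\approx 6 \times 10^{7}~{\rm s},
\end{equation}
which corresponds to several calendar years of data taking at realistic duty factors.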
Many interesting states have been observed by Belle in running at the $\Upsilon(4S)$, but most, if not all of them, require significantly increased statistics in order to better characterize their properties and decays. With projected increases of factors of forty to fifty in integrated luminosity, the prospects for these studies (and new discoveries) are bright indeed. In particular, we would also like to note the importance of running at the $\Upsilon(5S)$, and possibly above the resonance, for elucidating the character of the charged and neutral $Z_b$ tetraquark states, and transitions to lower bottomonia. The clean event environment provided by $e^+e^-$ annihilations offers great advantages for the study of both the tetraquark states and conventional bottomonia -- as well as discovery potential for new states higher in mass. It would be a great loss if a large increase in statistics at the $\Upsilon(5S)$ and above were not accumulated in Belle II. \subsubsection{LHC upgrade} Currently the LHC is in a two-year shutdown for an upgrade to the RF and superconducting magnets, enabling not only an increase in the center-of-mass energy but also a large increase in the instantaneous luminosity. Already, ATLAS, CMS and LHCb have made contributions to the study of some of the states we have discussed in this paper. LHCb made the definitive determination of the quantum numbers $1^{++}$ of the $X(3872)$. LHCb and CMS have made extensive measurements of the production rate of the $X(3872)$. CMS has confirmed the CDF observation of the $Y(4140)$ in the $J/\psi\, \phi$ decay mode. Another indication of the capabilities of the LHC experiments is provided by the conventional bottomonium states $\chi_{bJ}(3P)$, which were discovered by ATLAS and confirmed by D\O\ and by LHCb. With the planned increases in both energy and luminosity over the next decade, the LHC experiments promise to be even more fruitful sources of new results in this area.
\subsubsection{PANDA @ FAIR} PANDA, an experiment planned for the FAIR facility in Germany, offers a possibility not exploited since the experiments E760 and E835 at Fermilab: antiproton-proton annihilation on resonance into charmonium states. States of all $J^{PC}$ quantum numbers may be directly accessed by this technique, so long as they have large enough branching fractions to $p\bar{p}$. To produce conventional charmonium in $p \bar p$ collisions, all three of the quarks in the proton must annihilate with antiquarks in the $\bar p$. Tetraquark mesons with constituents $c \bar c q \bar q$ can be produced in $p \bar p$ collisions by annihilating only two of the three quarks in the $p$. Thus tetraquark charmonium states may couple more strongly to $p \bar p$ than conventional charmonium states with a similar energy. PANDA is expected to begin data taking around 2019. This gives other experiments, such as LHCb, the opportunity to discern which new states might couple strongly enough to $p\bar{p}$ to be ripe targets for study at PANDA. \section{Quarkonium Production$^{1,2}$} \addtocounter{footnote}{1}\footnotetext{Authors: Geoffrey T.~Bodwin, Eric Braaten, James Russ} \addtocounter{footnote}{1}\footnotetext{More detailed accounts of technical aspects of quarkonium production and comparisons between theory and experiment can be found in Refs.~\cite{Brambilla:2004wf,Brambilla:2010cs,Bodwin:2012ft}} \subsection{Introduction} Quarkonium production rates have been studied intensely since the discovery of the $J/\psi$. One aspect of quarkonium production that makes it an interesting problem is the variety of different quarkonia that can be studied. Nature has provided three sets of heavy-quarkonium systems: charmonium ($c \bar c$), bottomonium ($b \bar b$), and $b \bar c$ mesons. The most attractive targets for production measurements are the narrowest states. 
Most of them are below the threshold for pairs of heavy-light mesons, although there may also be exceptional, narrow states above threshold, such as the $X(3872)$. In the charmonium and bottomonium systems, the states whose production rates are most easily measured at hadron colliders are the spin-triplet S-wave states: the $J/\psi$ and the $\psi(2S)$ in the charmonium system and the $\Upsilon(1S)$, the $\Upsilon(2S)$, and the $\Upsilon(3S)$ in the bottomonium system. They have significant decay modes into $\mu^+ \mu^-$ and $e^+ e^-$, which allow accurate measurements and provide useful triggers. The next most easily measured states are the spin-triplet P-wave states: the $\chi_{cJ}(1P)$ states ($J=0,1,2$) in the charmonium system, and the $\chi_{bJ}(1P)$ and $\chi_{bJ}(2P)$ states in the bottomonium system. They have radiative decays into a spin-singlet S-wave state and a photon. At $e^+ e^-$ colliders, the production rates of many of the other quarkonium states below the heavy-flavor threshold can be measured. In the $b \bar c$ system, the only meson whose production rate has been measured thus far is the spin-singlet ground state $B_c^-$, which has been observed through its decays into $J/\psi\, \ell^- \bar\nu_\ell$ (Ref.~\cite{Abulencia:2006zu}) and its decays into $J/\psi\, \pi^\pm$ (Ref.~\cite{Aaltonen:2007gv}). Lattice QCD should provide a reliable value for the partial width for this decay mode, which can be combined with the lifetime measurement to determine a cross section. Measurements of the relative branching ratios for the $B_c$ into hadronic modes would give additional constraints on theoretical models of the decay process. Measurements of the production rate of the $B_c$ are valuable in order to test the predicted dependence of the rate on the heavy-quark mass ratio. Measurements of the production rates of any of the excited $b \bar c$ mesons may be challenging. 
An important consequence of the large mass of the heavy quark is the suppression of nonperturbative (low-momentum-transfer) interactions that change the spin state of the quarkonium. Consequently, quarkonium polarizations in hard-scattering production processes are amenable to perturbative theoretical analyses. The polarizations of quarkonia are therefore important observables with which to test quarkonium-production theory. \subsection{Importance of quarkonium production} \subsubsection{Intrinsic importance} The production rate of specific hadrons in high-energy collisions is a fundamental problem in QCD that is important in its own right. The hadrons whose production rate should be easiest to understand are heavy quarkonia --- bound states consisting of a heavy quark and a heavy antiquark. The large quark mass $m_Q$ implies that the creation of the heavy quark and antiquark can be described using perturbative QCD. The fact that heavy quarkonia are nonrelativistic bound states allows the application of theoretical tools that simplify and constrain the analyses of nonperturbative effects that are associated with their formation. Hence, heavy quarkonium production provides a unique laboratory in which to explore the interplay between perturbative and nonperturbative effects in QCD. Recent theoretical developments, which are described in Sec.~\ref{sec:prodtheory}, have improved the prospects for a rigorous and quantitative understanding of quarkonium production in the kinematic region of large momentum transfer\footnote{\label{pT-footnote} Throughout this paper, we use $p_T$ to denote the large momentum transfer. If either of the colliding particles is a hadron, then $p_T$ denotes the quarkonium transverse momentum. If both incoming particles are leptons or photons, then $p_T$ denotes the quarkonium momentum in the center-of-mass frame.} $p_T$. 
Experiments at the energy frontier can measure quarkonium production at much larger $p_T$ than ever before, providing definitive tests of this new theoretical framework. \subsubsection{Laboratory for experimental analysis} Some of the most important decay modes of heavy elementary particles have analogs in quarkonium decays. For example, the decay $Z^0 \to \mu^+ \mu^-$ has the quarkonium analogs $\Upsilon(1S) \to \mu^+ \mu^-$ and $J/\psi \to \mu^+ \mu^-$. In a high-energy hadron collider, it is possible to accumulate very large data samples of quarkonia. Analysis methods for decays of very heavy particles can therefore be tested by making use of the analogous quarkonium decays. An important example is the measurement of the polarization or spin alignment of a particle. The polarization is often measured by choosing a polarization frame and measuring the polar-angle distribution with respect to the polarization axis. However, such a measurement, by itself, does not provide the best control of systematic errors. The large data samples of the $J/\psi$, the $\Upsilon(1S)$, and other quarkonium states that were accumulated at the Tevatron were used to measure their polarizations \cite{Acosta:2001gv,Abulencia:2007us,Abazov:2008aa}. These measurements are also being pursued at the LHC \cite{Chatrchyan:2012woa}. They are demonstrating that, in order to get the best control over systematic errors, it is essential to measure the polarization for various choices of the polarization frame and to use frame-independent relations to constrain the measurements \cite{Faccioli:2010kd}. The power of the frame-independent relations is that they are sensitive to the presence of residual background contributions, since backgrounds tend to transform differently between reference frames than do decay products. These lessons have not yet been exploited in measurements of the polarizations of heavy elementary particles, such as the $W$, the $Z^0$, and the top quark.
Furthermore, these lessons may be useful in determining the spins of any new heavy elementary particles that are discovered. \subsubsection{Laboratory for QCD theory} \label{sec:QCDlab} Quarkonium production can be used as a laboratory to test theoretical concepts that are central to the perturbative QCD theoretical program. For example, perturbative QCD at high momentum transfer $p_T$ relies heavily on resummations of large logarithms of $p_T$ to achieve accurate predictions. Such resummations can be tested in quarkonium production at large $p_T$, where the clean experimental signals for the $J/\psi$ and the $\Upsilon(1S)$ that are provided by the $\mu^+\mu^-$ decay channel allow high-statistics measurements in which background contributions are under good control. Precise measurements might allow tests of subleading contributions to resummation formulas. Factorization theorems are the theoretical foundation for all perturbative calculations in QCD. Quarkonium production could be used to test the validity of factorization theorems at leading and subleading powers of $p_T^2$. Such tests would improve our understanding of the interplay of perturbative and nonperturbative physics in QCD and might also lead to theoretical insights into the applicability of factorization concepts to new processes. Exclusive quarkonium production may afford a unique opportunity to tackle the long-standing problem of endpoint logarithms in exclusive processes in QCD. In exclusive quarkonium production, such logarithms occur at scales of order $m_Q$ or higher, and, therefore, can be analyzed entirely within perturbation theory. Analyses of endpoint logarithms in quarkonium production might lead to insights into the nature of nonperturbative endpoint logarithms in exclusive processes for light hadrons and to methods that would allow one to organize and resum them. 
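A sketch of the structure of such factorization theorems, at the leading and first subleading power in $m_Q^2/p_T^2$, is
\begin{equation}
d\sigma_{A+B \to H+X} = \sum_i d\hat\sigma_{A+B \to i+X} \otimes D_{i \to H}
+ \sum_n d\hat\sigma_{A+B \to Q \bar Q(n)+X} \otimes D_{Q \bar Q(n) \to H}
+ O(m_Q^4/p_T^4),
\end{equation}
where the first sum runs over fragmentation functions for single partons $i$ and the second over fragmentation functions for $Q \bar Q$ pairs in various color and angular-momentum channels $n$.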
\subsubsection{Probes for new physics\label{sec:probes-new-physics}} Quarkonium production can be used as a tool to probe for new physics. The production of the $J/\psi$ or the $\Upsilon(1S)$, followed by their decays to $\mu^+\mu^-$, provides a clean experimental signal that can be measured with high statistics. If the theory of quarkonium production can be made sufficiently precise, then discrepancies between theory and experiment at large $p_T$ or at large $\sqrt{s}$ might signal physics beyond the Standard Model. Physics beyond the Standard Model could include bound states of new heavy particles. New heavy-particle bound states are an essential feature of most technicolor models. There are supersymmetric extensions of the Standard Model in which some of the SUSY partners are most easily discovered through their bound states. Specifically, there are models in which the stop or gluino can be most easily discovered through decays of stoponium or gluino-onium into photons. The theoretical techniques that are required for understanding quarkonium production in QCD are relevant for understanding the production of nonrelativistic bound states of new particles. Decays of heavy elementary particles into quarkonia could be used to study the couplings of those heavy particles to heavy quarks. For example, one could measure quarkonium production in decays of the Higgs particle. The decay of the Standard Model Higgs to $J/\psi+\gamma$ or $\Upsilon+\gamma$ involves an interference between a direct process, in which the Higgs decays to a $Q\bar Q$ virtual pair, and an indirect process, in which a top or $W$ loop produces a pair of photons or $Z^0$'s, one of which decays into a quarkonium \cite{Bodwin:2013gca}. The direct processes, and, hence, the decay rates are sensitive to the $HQ\bar Q$ coupling. 
Although it may be difficult to observe, the decay of the Higgs into $J/\psi+\gamma$ is the only process, as far as is known, that can be used to probe the $Hc\bar c$ coupling directly at the LHC. One could also use the decay of the Higgs in the channel $H\to Z+Z^*\to Z+J/\psi$ or $Z+\Upsilon$ to probe the $HZZ$ coupling \cite{Isidori:2013cla}. \subsection{Theoretical Framework} \label{sec:prodtheory} The earliest attempts to describe quarkonium production were the {\it color-singlet model} \cite{Kartvelishvili:1978id,Chang:1979nn,Berger:1980ni,Baier:1981uk} and the {\it color-evaporation model} \cite{Fritzsch:1977ay,Halzen:1977rs}, whose origins go back almost to the discovery of charmonium. They were superseded in the 1990's by the {\it nonrelativistic QCD (NRQCD) factorization approach} \cite{Bodwin:1994jh}, which uses an effective field theory to separate perturbative and nonperturbative effects of QCD. Although it has not been derived rigorously from QCD, NRQCD factorization remains a theoretically and phenomenologically viable description of quarkonium production. It is the default model for most current experimental studies. A more recent theoretical development is the {\it next-to-leading power (NLP) fragmentation approach} \cite{Kang:2011zza,Kang:2011mg,Fleming:2012wy}. It is believed to be valid up to corrections that go as $m_Q^4/p_T^4$. Hence, its predictions should become most accurate at high values of $p_T$ that are accessible at the energy frontier. These modern theoretical approaches to quarkonium production are described in more detail below. \subsubsection{NRQCD Factorization Approach\label{sec:nrqcd-fact}} NRQCD is an effective field theory for the sector of QCD that includes a heavy quark ($Q$) and a heavy antiquark ($\bar Q$) whose velocities in the $Q\bar Q$ rest frame, denoted by $v$, are nonrelativistic ($v\ll 1$). 
The NRQCD factorization conjecture \cite{Bodwin:1994jh} states that the inclusive cross section for producing a quarkonium state $H$ with sufficiently large momentum transfer $p_T$ (see footnote~\ref{pT-footnote}) can be written as a sum of products of short-distance $Q\bar Q$ production cross sections times long-distance {\it NRQCD matrix elements}: \begin{equation} d \sigma[A+B \to H + X] = \sum_n d \sigma[A+B \to (Q \bar Q)_n + X]~ \langle {\cal O}_n^H \rangle. \label{NRQCD-fact} \end{equation} The sum over $n$ extends over the color and angular-momentum states of the $Q \bar Q$ pair. The short-distance cross sections $d \sigma$ are essentially inclusive partonic cross sections for creating a $Q\bar Q$ pair, convolved with parton distributions if the colliding particles $A$ and $B$ are hadrons. They are process dependent and have perturbative expansions in powers of $\alpha_s(m_Q)$. The NRQCD matrix element $\langle{\cal O}_n^H \rangle$ is essentially the probability for a $Q\bar Q$ pair in the state $n$ to evolve into the heavy quarkonium $H$. These nonperturbative factors are process-independent constants that scale with definite powers of the relative velocity $v$. Hence, the NRQCD factorization formula is a double expansion in powers of $\alpha_s$ and $v$. The inclusive annihilation decay rate of a quarkonium state satisfies a factorization formula that is analogous to Eq.~(\ref{NRQCD-fact}), but with different NRQCD matrix elements. The NRQCD production matrix elements are vacuum expectation values of four-fermion operators in NRQCD, but with a projection onto an intermediate state of the quarkonium $H$ plus anything: \begin{equation} \langle {\cal O}_n^H \rangle=\langle 0|\chi^\dagger \kappa_n\psi \biggl(\sum_X |H+X\rangle\langle H+X|\biggr) \psi^\dagger \kappa'_n\chi|0\rangle. 
\label{NRQCDme} \end{equation} Here, $\psi^\dagger$ and $\chi$ are two-component (Pauli) fields that create a heavy quark and a heavy antiquark, respectively, and $\kappa_n$ and $\kappa_n'$ are direct products of Pauli and color matrices.\footnote{ The color-octet NRQCD matrix elements also include Wilson lines that run from the quark and antiquark fields to infinity \cite{Nayak:2005rw}. For simplicity, we have omitted these Wilson lines in Eq.~(\ref{NRQCDme}).} A key feature of NRQCD factorization is that quarkonium production can occur through the creation of color-octet, as well as color-singlet, $Q\bar Q$ pairs. The color-singlet contributions at the leading order in $v$ correspond to the contributions of the color-singlet model. Hence, the NRQCD factorization approach contains all of the production processes of the color-singlet model, as well as additional production processes that involve color-singlet production at higher orders in $v$ and color-octet production. The leading color-singlet NRQCD production matrix element in Eq.~(\ref{NRQCD-fact}) is simply related to the leading color-singlet NRQCD decay matrix element in the analogous factorization formula for the annihilation decay rate of the quarkonium $H$. Hence, the leading color-singlet NRQCD production matrix elements can be determined from quarkonium electromagnetic decay rates (up to corrections of order $v^4$). The color-octet NRQCD production matrix elements are treated as phenomenological parameters. The predictive power of NRQCD factorization comes from truncating the expansion in $v$ so as to reduce the number of NRQCD matrix elements that must be fixed by phenomenology. The truncation in $v$ is more accurate for bottomonium than for charmonium. ($v^2\approx 0.1$ for the $\Upsilon(1S)$; $v^2\approx 0.23$ for the $J/\psi$.) The truncation in $v$ can reduce the number of nonperturbative constants to just a few for each quarkonium spin multiplet. 
The truncation for S-wave states that is used in current phenomenology includes the NRQCD matrix elements through relative order $v^4$. For a spin-triplet S-wave quarkonium state $H$, such as the $J/\psi$ or the $\Upsilon(1S)$, the nonperturbative factors are reduced to a single color-singlet matrix element $\langle{\cal O}^{H}(^3S_1^{[1]})\rangle$, which is of leading order in $v$, and three color-octet matrix elements, $\langle{\cal O}^{H}(^1S_0^{[8]})\rangle$, $\langle{\cal O}^{H}(^3S_1^{[8]})\rangle$, and $\langle{\cal O}^{H}(^3P_J^{[8]})\rangle$, which are of relative orders $v^3$, $v^4$, and $v^4$, respectively. In the notations for these NRQCD operators, the quantities in parentheses are the angular-momentum ($^{2S+1}L_J$) and color state (singlet or octet) of the $Q\bar Q$ pair that evolves into the quarkonium state $H$. \subsubsection{NLP Fragmentation Approach} For very large momentum transfer\textsuperscript{\ref{pT-footnote}} $p_T$, the inclusive cross section to produce a hadron can be simplified. The contribution to the cross section at leading power in $p_T$ ($1/p_T^4$ in $d \sigma/dp_T^2$) can be written as a sum of single-parton production cross sections convolved with {\it single-parton fragmentation functions} \cite{Collins:1981uw}: \begin{equation} d \sigma[A+B \to H + X] = \sum_i d\hat\sigma[A+B\to i+X]\otimes D[i\to H] . \label{1-ple-frag} \end{equation} The sum over $i$ extends over the types of partons (gluons, quarks, and antiquarks). The short-distance cross sections $d \hat\sigma$ are essentially inclusive partonic cross sections for producing the single parton $i$, convolved with parton distributions if the colliding particles $A$ and $B$ are hadrons. They have perturbative expansions in powers of $\alpha_s(p_T)$. The fragmentation function $D_{i\to H}(z)$ is the nonperturbative probability distribution in the longitudinal momentum fraction $z$ of the hadron $H$ relative to the parton $i$. 
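For quarkonia, these fragmentation functions are not purely nonperturbative objects: they can be computed in perturbation theory up to NRQCD long-distance matrix elements. As an illustration (a standard lowest-order result, quoted here for orientation rather than derived in this report), gluon fragmentation through the color-octet $^3S_1$ channel takes the simple form
\begin{equation}
D_{g\to H}(z,\mu_0=2m_Q)=\frac{\pi\alpha_s(2m_Q)}{24\,m_Q^3}\,
\langle {\cal O}^{H}(^3S_1^{[8]})\rangle\,\delta(1-z)+{\cal O}(\alpha_s^2),
\end{equation}
so that, at the initial scale, the gluon transfers all of its longitudinal momentum to the $Q\bar Q$ pair; evolution from $\mu_0$ up to the scale $p_T$ then generates a nontrivial $z$ dependence. 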
The leading power (LP) factorization formula in Eq.~(\ref{1-ple-frag}) was proven for $e^+ e^-$ annihilation by Collins and Soper \cite{Collins:1981uw}. For a light hadron $H$, the corrections are of order $\Lambda_{\rm QCD}^2/p_T^2$. The proof does not seem to have been extended to hadron collisions, except in the case of heavy quarkonium, for which a proof has been sketched recently by Nayak, Qiu, and Sterman \cite{Nayak:2005rt}. In this case, the leading corrections are of order $m_Q^2/p_T^2$. The LP fragmentation formula in Eq.~(\ref{1-ple-frag}) has been used to describe quarkonium production at large $p_T$. The fragmentation functions for a heavy quarkonium can be calculated in perturbative QCD, up to nonperturbative multiplicative constants \cite{Braaten:1993rw,Braaten:1993mp}. The fragmentation functions have been calculated in the NRQCD factorization framework to leading order in $\alpha_s$ for all the phenomenologically relevant channels and to next-to-leading order (NLO) for the color-octet $^3S_1$ channel \cite{Braaten:2000pc}. In some channels, the leading power of $p_T$ described by parton fragmentation does not appear in the NRQCD factorization formula until NLO or even NNLO in $\alpha_s$. For these channels, the LP fragmentation formula in Eq.~(\ref{1-ple-frag}) can be used to calculate the leading power of $p_T$ with much less effort than a fixed-order calculation. Furthermore, the evolution equations for the single-parton fragmentation functions can be used to sum large logarithms of $p_T^2/m_Q^2$ to all orders in $\alpha_s$. Unfortunately, the usefulness of the LP fragmentation formula in Eq.~(\ref{1-ple-frag}) for quarkonium has proved to be limited. Explicit calculations have revealed that, in some channels, it does not give the largest contribution until $p_T$ is almost an order of magnitude larger than the quarkonium mass \cite{Chang:1994aw,Chang:1996jt}. 
In these channels, the corrections of relative order $m_Q^2/p_T^2$ apparently have large coefficients. An important recent theoretical development is the extension of the factorization formula in Eq.~(\ref{1-ple-frag}) to the first subleading power of $p_T^2$. Kang, Qiu, and Sterman proved that the contributions of relative order $m_Q^2/p_T^2$ can be written as a sum of $Q\bar Q$ production cross sections convolved with {\it double-parton fragmentation functions} \cite{Kang:2011zza,Kang:2011mg}: \begin{equation} \sum_n d\hat\sigma[A+B\to (Q\bar Q)_n +X]\otimes D[(Q\bar Q)_n \to H]. \label{2-ple-frag} \end{equation} The sum over $n$ extends over the color (singlet or octet) and Lorentz structures (vector or axial vector) of the energetic $Q \bar Q$ pair. The short-distance cross sections $d \hat\sigma$ are essentially inclusive partonic cross sections for producing an energetic $Q \bar Q$ pair, convolved with parton distributions if the colliding particles $A$ and $B$ are hadrons. They have perturbative expansions in powers of $\alpha_s(p_T)$. The double-parton fragmentation functions $D_{(Q\bar Q)_n \to H}(z, \zeta,\zeta')$ are nonperturbative probability distributions in the longitudinal momentum fraction $z$ of the quarkonium $H$ relative to the $Q \bar Q$ pair. They also depend on the relative longitudinal momentum fractions $\zeta$ and $\zeta'$ of the $Q$ and the $\bar Q$. The evolution equations for the double-parton fragmentation functions can be used to sum large logarithms of $p_T^2/m_Q^2$ to all orders in $\alpha_s$. The NLP fragmentation formula is obtained by adding Eq.~(\ref{2-ple-frag}) to Eq.~(\ref{1-ple-frag}). The corrections are of order $\Lambda_{\rm QCD}^2/p_T^2$ and $m_Q^4/p_T^4$. This factorization formula has also been derived by making use of soft collinear effective theory \cite{Fleming:2012wy,Fleming:2013qu}. 
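Written out, the NLP fragmentation formula obtained by adding Eq.~(\ref{2-ple-frag}) to Eq.~(\ref{1-ple-frag}) reads
\begin{equation}
d\sigma[A+B \to H + X] = \sum_i d\hat\sigma[A+B\to i+X]\otimes D[i\to H]
+\sum_n d\hat\sigma[A+B\to (Q\bar Q)_n +X]\otimes D[(Q\bar Q)_n \to H]
+{\cal O}\!\left(\frac{\Lambda_{\rm QCD}^2}{p_T^2},\frac{m_Q^4}{p_T^4}\right),
\end{equation}
which makes explicit that the double-parton term supplies the first power correction in $m_Q^2/p_T^2$ to the leading-power single-parton term. 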
The NLP fragmentation approach by itself lacks the predictive power of NRQCD factorization, because the fragmentation functions $D[i\to H]$ and $D[(Q\bar Q)_n \to H]$ are nonperturbative functions of the momentum fractions that must be determined phenomenologically. The NLP fragmentation approach can be given predictive power through the NRQCD factorization conjecture, which implies that a fragmentation function can be written as a sum of products of short-distance functions times NRQCD matrix elements, in analogy to Eq.~(\ref{NRQCD-fact}). If one truncates the expansion in the relative velocity $v$, as is described in Sec.~\ref{sec:nrqcd-fact}, then the nonperturbative factors can be reduced to a few constants for each quarkonium spin multiplet. The NLP fragmentation cross section, expressed in terms of the NRQCD matrix elements, should account for the first two terms in the expansion of the NRQCD factorization cross section in powers of $m_Q^2/p_T^2$. \subsubsection{Status of Proofs of NRQCD Factorization} A diagrammatic proof of NRQCD factorization for a given hard-scattering process consists of a demonstration that diagrams in each order in $\alpha_s$ can be reorganized so that (1) all soft singularities cancel or can be absorbed into NRQCD matrix elements and (2) all collinear singularities and spectator interactions can be absorbed into parton distributions of incoming hadrons. This reorganization of low-virtuality singularities is essential in order to define $Q \bar Q$ production cross sections that are free of low-momentum contributions and, hence, calculable in perturbation theory. In the NLP fragmentation approach, the factorization of the production cross section into the form of Eqs.~(\ref{1-ple-frag}) and (\ref{2-ple-frag}), up to corrections that are suppressed by factors of $\Lambda_{\rm QCD}^2/p_T^2$ or $m_Q^4/p_T^4$, has already been proven \cite{Kang:2011zza,Kang:2011mg,Fleming:2012wy}. 
It remains to prove that the fragmentation functions satisfy the NRQCD factorization conjecture. This step has been demonstrated for gluon fragmentation functions through next-to-next-to-leading order (NNLO) in $\alpha_s$ \cite{Nayak:2005rw,Nayak:2005rt,Nayak:2006fm}. However, a proof to all orders in $\alpha_s$ is essential, because potential violations of factorization involve soft gluons, for which $\alpha_s$ is not a good expansion parameter. In the absence of a rigorous proof of NRQCD factorization, we must rely on experiment in order to decide whether NRQCD factorization is a valid model of quarkonium production. Standard methods for proving factorization of soft contributions are valid only up to corrections that are suppressed as $1/p_T^4$. In particular, the NLP fragmentation approach suggests that the NRQCD factorization formula in Eq.~(\ref{NRQCD-fact}) is likely to hold only up to corrections that are suppressed by a factor $m_Q^4/p_T^4$ or a factor $\Lambda_{\rm QCD}^2/p_T^2$. Therefore, it is very important to test the predictions of NRQCD factorization with high precision at the highest accessible values of $p_T$. \subsubsection{$k_T$ Factorization} The $k_T$-factorization approach is an alternative to standard collinear factorization in which one writes cross sections in terms of parton distributions that depend on the transverse momenta of the partons, as well as on their longitudinal momentum fractions. The $k_T$-factorization approach contains some contributions at leading order in $\alpha_s$ that would appear only in higher orders in collinear factorization. This property may be useful in certain kinematic situations in which large corrections can appear in higher orders in calculations in collinear factorization. The predictions of the $k_T$-factorization approach agree well with many existing measurements of quarkonium production cross sections and polarizations. 
However, because the $k_T$-dependent parton distributions are known phenomenologically with much less precision than the collinear parton distributions, $k_T$-factorization predictions can contain large uncertainties that may not yet be accurately quantified. Moreover, one must be cognizant of the fact that there could be hidden biases in the predictions that stem from the choices of parametrizations of the $k_T$-dependent parton distributions. Furthermore, in practice, the $k_T$-dependent parton distributions model large-$k_T$ behavior that would be calculated from first principles in collinear factorization. In the applications of $k_T$ factorization to quarkonium production, it is always assumed that production occurs only through color-singlet $Q\bar Q$ channels. Therefore, the $k_T$-factorization approach contains uncanceled infrared divergences, which appear in higher orders in $v$, for example, in calculations of $P$-wave quarkonium production at the leading non-trivial order in $v$. \subsubsection{Large Higher-Order Perturbative Corrections} \label{sec:HOCor} The short-distance coefficients $d \sigma_n$ in the NRQCD factorization formula in Eq.~(\ref{NRQCD-fact}) are partonic cross sections for creating a $Q\bar Q$ pair that can be calculated as perturbation series in $\alpha_s$. For most of the important production processes, they have been calculated to next-to-leading order (NLO) in $\alpha_s$. In some cases, the NLO corrections are uncomfortably large. Understanding the origin of the large radiative corrections can lead to new insights into the production mechanisms. In inclusive quarkonium production at large $p_T$, corrections of NLO in $\alpha_s$ are surprisingly large in some channels --- sometimes exceeding the leading-order (LO) contribution by an order of magnitude. This situation has raised questions about the convergence of the perturbation expansion. 
The large higher-order corrections may be understood as arising from a combination of two effects. The first effect is that higher powers of $\alpha_s$ can be offset by a less rapid fall-off with $p_T$. For example, for production of the $J/\psi$ through the color-singlet $S$-wave channel, the contribution to $d \sigma/dp_T^2$ at leading order in $\alpha_s$ goes as $\alpha_s^3 m_c^4/p_T^8$, and so it is strongly suppressed at large $p_T$. The NLO contribution goes as $\alpha_s^4 m_c^2/p_T^6$, so it is less strongly suppressed. The NNLO contribution goes as $\alpha_s^5/p_T^4$, owing to contributions in which a gluon fragments into the $J/\psi$. Higher orders exhibit this same leading-power behavior, $1/p_T^4$, but are suppressed by additional powers of $\alpha_s$. The second effect is that the contributions from $Q \bar Q$ fragmentation into quarkonium can enter with large coefficients. Gluon fragmentation contributions scale as the leading power $1/p_T^4$, while the $Q \bar Q$ fragmentation contributions scale as $m_Q^2/p_T^6$, and so gluon fragmentation will dominate at sufficiently large $p_T$. However, as we have mentioned, there are channels in which both contributions enter at the same order in $\alpha_s$, but the value of $p_T$ at which gluon fragmentation begins to dominate is almost an order of magnitude larger than the quarkonium mass~\cite{Chang:1994aw,Chang:1996jt}. The NLP fragmentation approach has the potential to increase significantly the accuracy of predictions for inclusive quarkonium production at large $p_T$. In many color and angular-momentum channels, the accuracy of the existing NLO calculations is at best leading order at large $p_T$. At very large $p_T$, the accuracy is at best leading order because single-parton fragmentation contributions that behave as $1/p_T^4$ enter first at NLO in a strict expansion in $\alpha_s$. 
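Collecting the scalings quoted above for the color-singlet $S$-wave channel,
\begin{equation}
\left.\frac{d\sigma}{dp_T^2}\right|_{^3S_1^{[1]}}\sim
\alpha_s^3\,\frac{m_c^4}{p_T^8}
+\alpha_s^4\,\frac{m_c^2}{p_T^6}
+\alpha_s^5\,\frac{1}{p_T^4}+\cdots,
\end{equation}
which shows at a glance why each additional power of $\alpha_s$ can be compensated by two fewer powers of $1/p_T$, up to the leading-power behavior $1/p_T^4$. 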
At intermediate $p_T$, the accuracy is at best leading order because $Q \bar Q$ fragmentation contributions that may have large coefficients and that behave as $m_Q^2/p_T^6$ enter first at NLO. In fact, the accuracy in these channels is not even leading order, because there are large logarithms of $p_T/m_Q$ at higher orders in $\alpha_s$. Within NRQCD factorization, a complete calculation to NNLO in $\alpha_s$ would be prohibitively difficult and would only decrease the relative error to $\alpha_s \log(p_T/m_Q)$ at large $p_T$. Within the fragmentation approach, the relative error could be decreased to order $\alpha_s^2$ at large $p_T$ by calculating the fragmentation functions to NLO in $\alpha_s$ and using the evolution equations for the fragmentation functions to sum the logarithms of $p_T/m_Q$. Such a calculation is feasible, and it would give higher accuracy at large $p_T$ than a full calculation to NNLO in $\alpha_s$. \subsubsection{Exclusive quarkonium production} A collision that is initiated by an electron and a positron or by two photons can produce exclusive two-quarkonium final states. The exclusive production rate can be calculated, at sufficiently large CM momentum $p_T$, by making use of a simplified version of the NRQCD factorization formula in Eq.~(\ref{NRQCD-fact}) in which the color-octet NRQCD matrix elements are omitted. The corrections to this formula should be suppressed by at least $v^4$. In this simplified factorization formula, the inclusive partonic production cross sections are, of course, replaced by the appropriate exclusive cross sections. For exclusive $e^+e^-$ production of two quarkonium states and for exclusive production of a quarkonium and a light meson in $B$-meson decays, factorization theorems have been established for processes that proceed at leading order without a helicity flip of the heavy quark \cite{Bodwin:2010fi}. 
For exclusive processes that proceed through a helicity flip at leading order, such as $e^+e^-\to J/\psi +\eta_c$, factorization has not been proven because there are complications from the ``endpoint regions'' of momentum space, in which a gluon carries away most of the longitudinal momentum of a spectator quark \cite{Bodwin:2013ys}. It is possible that NRQCD factorization still holds in these cases because the endpoint regions cannot have a virtuality lower than $m_Q$. Exclusive quarkonium production cross sections have also been calculated within the light-cone formalism for exclusive hard-scattering processes \cite{Chernyak:1977fk,Efremov:1979qk,Lepage:1979za,Lepage:1979zb,Lepage:1980fj,Chernyak:1980dk}. Examples of such calculations can be found in Refs.~\cite{Bondar:2004sv,Braguta:2008tg} and references therein. Calculations in the light-cone approach neglect parton transverse momenta in comparison with longitudinal momenta and are, therefore, sometimes simpler than calculations in the NRQCD factorization approach \cite{Jia:2008ep}. In first approximation, the light-cone approach takes into account certain contributions that are of higher order in $v$ in the NRQCD factorization approach. However, existing calculations in the light-cone approach rely on the use of model light-cone quarkonium distributions. In some calculations, the small-momentum behavior of the model light-cone distributions has been constrained by making use of information about the quarkonium wave function. However, the large-momentum tails of the model light-cone distributions may not be consistent with the large-momentum behavior of QCD \cite{Bodwin:2006dm}. An important limitation of the light-cone approach is that, because it neglects the transverse momentum of partons, it does not reproduce the endpoint regions of QCD properly \cite{Jia:2010fw,Bodwin:2013ys}. 
The endpoint regions give contributions that are enhanced by logarithms of $p_T^2/m_Q^2$ in helicity-flip processes, such as $e^+e^-\to J/\psi +\eta_c$. \subsection{Issues That Should Be Addressed in Future Work} NLO corrections in the NRQCD factorization framework have been computed for many quarkonium production processes. NLO computations for inclusive production include $J/\psi$, $\psi(2S)$, $\chi_{cJ}$ and $\Upsilon(nS)$ production cross sections and polarizations at the Tevatron and the LHC, $J/\psi$ and $\psi(2S)$ production cross sections at RHIC, $J/\psi$ photoproduction cross sections and polarization at HERA, the $J/\psi+\eta_c$ production cross section, and the $J/\psi+X$ and $J/\psi+X(\hbox{non-$c\bar c$})$ production cross sections in $e^+e^-$ annihilation at the $B$ factories. NLO computations for exclusive production include the $e^+e^-\to J/\psi+\eta_c$ and $e^+e^-\to J/\psi+\chi_{cJ}$ double-quarkonium production cross sections. Generally, data and the NLO predictions of NRQCD factorization for quarkonium production agree, within errors.\footnote{See the global fit of NRQCD matrix elements of Butensch\"on and Kniehl \cite{Butenschoen:2011yh,Butenschoen:2012qh} and Ref.~\cite{Brambilla:2010cs} for some further details.} There are three significant exceptions. These are discussed below, along with some additional issues that should be addressed in future experimental and theoretical work. \subsubsection{$e^+e^-\to J/\psi + X({\rm non-}c \bar c)$} Inclusive charmonium production in $e^+e^-$ collisions at the $B$ factories has shown some discrepancies with theoretical predictions. The inclusive cross section for $J/\psi$ production can be split into the cross section to produce $J/\psi + c\bar c +X$, in which there are charm mesons accompanying the $J/\psi$, and the cross section to produce $J/\psi + X({\rm non-}c \bar c)$, in which there are only light hadrons accompanying the $J/\psi$. 
The measured cross section for $J/\psi + X({\rm non-}c \bar c)$ production at Belle \cite{Pakhlov:2009nj} is about a factor two lower than NRQCD factorization predictions \cite{Zhang:2009ym,Butenschoen:2011yh}. However, for several reasons, the apparent conflict between the Belle measurement and the theoretical predictions may not be significant. First, the typical $J/\psi$ CM momentum $p_T$ for this process is quite low --- less than 3~GeV --- suggesting that corrections to factorization of order $m_c^4/p_T^4$ may not be under control. Second, the most recent Belle measurements \cite{Pakhlov:2009nj} imply a value for the inclusive $J/\psi$ production cross section that is about a factor of two smaller than the value that was measured by the BaBar Collaboration \cite{Aubert:2001pd}. Independent measurements of the cross sections to produce $J/\psi + X$ and $J/\psi + X({\rm non-}c \bar c)$ would therefore be valuable. Third, the Belle measurement of cross sections for $J/\psi$ plus light hadrons includes only events with greater than four charged tracks, and the corrections owing to events with four or fewer charged tracks are not known. A determination of the effect of events with four or fewer charged tracks is necessary in order to make a direct comparison between theory and experiment. \subsubsection{$\gamma\gamma \to J/\psi + X$} The cross section for inclusive $J/\psi$ production in $\gamma\gamma$ scattering that was measured by DELPHI at LEP~II \cite{Abdallah:2003du} lies above the NRQCD factorization prediction that is based on the global fits of NRQCD matrix elements \cite{Butenschoen:2011yh} by more than an order of magnitude. However, the experimental error bars are very large, and the discrepancies with the prediction all lie at values of $p_T$ less than $2.7$~GeV, where corrections to factorization of order $m_c^4/p_T^4$ may not be under control. 
Clearly, measurements of greater precision and at higher values of $p_T$ are needed in order to make a meaningful comparison with the prediction of NRQCD factorization. \subsubsection{$J/\psi$ and $\Upsilon$ Polarization} The $J/\psi$ polarization that is observed in CDF Run~II \cite{Abulencia:2007us} seems to be incompatible with the combined constraints of the measured $d\sigma/dp_T$ from CDF Run~II \cite{Abulencia:2007us} and the measured $d\sigma/dp_T$ from HERA \cite{Adloff:2002ex,Aaron:2010gz}. The CDF Run~II measurement indicates that the $J/\psi$ is slightly longitudinally polarized in the helicity frame. NRQCD predictions for polarization of the $J/\psi$ vary dramatically, depending on the data that are used to determine the NRQCD matrix elements. A prediction of strong transverse polarization \cite{Butenschoen:2012px} arises when one includes in the fits of the NRQCD matrix elements both the CDF Run~II data and the HERA data for $d\sigma/dp_T$ down to a $p_T$ of $3$~GeV. Inclusion of the low-$p_T$ HERA data is crucial in obtaining strong transverse polarization. At such low values of $p_T$, one might doubt that corrections to factorization of order $m_c^4/p_T^4$ are under control. A prediction of moderate transverse polarization \cite{Gong:2012ug} arises when one includes data for $d\sigma/dp_T$ with $p_T$ greater than $7$~GeV from both CDF Run~II and LHCb and uses NRQCD factorization predictions to correct for feeddown from the $\psi(2S)$ and the $\chi_{cJ}$ states. A prediction of near-zero transverse polarization \cite{Chao:2012iv} arises when one includes in the fits of the NRQCD matrix elements both the CDF Run~II data for $d\sigma/dp_T$ with $p_T$ greater than $7$~GeV and the CDF Run~II polarization measurement. It is very important to resolve the ambiguities in the fits of the NRQCD matrix elements. 
Further measurements of $J/\psi$ production in new processes, at different values of $\sqrt{s}$ and rapidity, and in $ep$ collisions at larger values of $p_T$ would all help to resolve these ambiguities. It is worth noting that the CDF Run~I \cite{Affolder:2000nn} and Run~II \cite{Abulencia:2007us} $J/\psi$ polarization measurements are incompatible, although the CDF collaboration states \cite{Abulencia:2007us} that the Run~II measurement supersedes the Run~I measurement. Recent results from $J/\psi$ polarization measurements at LHCb \cite{Aaij:2013hsa} and from $J/\psi$ and $\psi(2S)$ polarization measurements at CMS \cite{Chatrchyan:2013cla} show no evidence for large transverse polarization in the helicity frame out to $p_T=57$~GeV. CDF Run~II polarization measurements for all three $\Upsilon(nS)$ states have been analyzed in several different polarization frames. A frame-invariant relation has been used to check the consistency of these measurements~\cite{CDF:2011ag}. These Run~II results agree with the Run~I CDF results in the helicity frame and extend those results to larger values of $p_T$. The Run~II D\O\ result for the $\Upsilon(1S)$ in the helicity frame disagrees with both CDF measurements, but has much less precision than the new CDF result. A recent CMS measurement of the $\Upsilon(nS)$ polarizations in $pp$ collisions at 7~TeV shows behavior that is similar to that of the CDF Tevatron results in multiple reference frames and in a frame-independent analysis \cite{Chatrchyan:2012woa}. The trends of the $\Upsilon$ measurements as functions of $p_T$, in both $pp$ and $p\bar p$ production, agree with the trends in the charmonium system: there are no large polarization effects at high $p_T$. 
A new NLO prediction for the polarizations of the $\Upsilon(nS)$ states at the LHC \cite{Gong:2013qka} is in agreement with data from the CMS collaboration \cite{Chatrchyan:2012woa} for the $\Upsilon(1S)$ and $\Upsilon(2S)$ states, but not for the $\Upsilon(3S)$ state for $p_T > 30$~GeV. The agreement of the new predictions with CDF Run~II data is not quite as good. Clearly, it will be important to address the issue of feeddown from higher-mass $b \bar b$ states in future experimental measurements of $\Upsilon(nS)$ polarizations. \subsubsection{Need for Resummation of Logs of $p_T^2/m_c^2$} LHC data for $J/\psi$ production, particularly data at large values of $p_T$, show a slight shape discrepancy in comparison with NRQCD predictions --- especially the predictions of Butensch\"on and Kniehl \cite{Butenschoen:2011yh}, which are based on global fits of the NRQCD matrix elements that include data with $p_T$ as low as $3$~GeV. This shape discrepancy may be an indication that resummation of logarithms of $p_T^2/m_c^2$ is needed in the NRQCD factorization predictions at large $p_T$. The NLP fragmentation approach provides a framework with which to carry out this resummation at both the leading power and the first subleading power in $m_Q^2/p_T^2$. \subsubsection{The Issue of Feeddown} Comparisons between theory and experiment are complicated by feeddown from heavier quarkonium states. The prompt production rates for $J/\psi$ include feeddown from the $\psi(2S)$ and $\chi_{cJ}(1P)$ states and the prompt production rates for the $\Upsilon(1S)$ include feeddown from the $\Upsilon(2S)$, $\Upsilon(3S)$, $\chi_{bJ}(1P)$, and $\chi_{bJ}(2P)$ states. Theoretical predictions, in contrast, are typically for direct production rates of the $J/\psi$ and $\Upsilon(1S)$ states. Theoretical predictions have been made that include the effects of feeddown.
However, it should be kept in mind that they are obtained by adding the predictions for several direct production rates, which may not be equally reliable. In the case of unpolarized cross sections, the feeddown contributions are typically on the order of $30$--$40\%$, which is not significant in comparison with the theoretical uncertainties. However, the feeddown contributions could have an important effect on the polarizations. Given these considerations, it would be very useful for experiments to separate the feeddown contributions from the direct-production contributions, especially for polarization studies. Measurements of $\chi_{cJ}$ and $\chi_{bJ}$ production rates and polarizations would provide additional tests of NRQCD factorization. For the $\chi$ states, the theory is more constrained than for the $\psi$ and $\Upsilon$ states because there are two, rather than four, NRQCD matrix elements that enter at the leading non-trivial order in $v$. Consequently, it might be possible to make more stringent tests of NRQCD factorization for the $\chi_{cJ}$ and $\chi_{bJ}$ states than for the $S$-wave quarkonium states. \subsubsection{Universality of NRQCD Matrix Elements} Universality of the NRQCD matrix elements is a crucial prediction of the NRQCD factorization approach. It should be tested in as wide a range of production processes as possible. In the case of spin-triplet $S$-wave states, measurements of associated production of quarkonia or measurements of photoproduction or leptoproduction processes, in addition to hadroproduction processes, may be essential in order to pin down the values of the three most important color-octet NRQCD matrix elements. Those values are needed in order to make firm predictions of cross sections and, especially, polarizations.
\subsubsection{Large Logarithms in Exclusive Double-Quarkonium Production} NRQCD factorization predictions for exclusive double-quarkonium production in $e^+e^-$ annihilation generally agree with experimental measurements, within rather large experimental and theoretical uncertainties. For many years, a large discrepancy existed between theoretical predictions and experimental measurements of the cross section for $e^+e^-\to J/\psi+\eta_c$. This discrepancy was resolved by a shift in the measured cross section and by the incorporation of corrections of higher order in $v$ and $\alpha_s$ into the theoretical predictions. The corrections of next-to-leading order (NLO) in $v^2$ and the corrections of NLO in $\alpha_s$ each increase the cross section by about a factor of two. The large size of the corrections of NLO in $v^2$ is not believed to be a problem for the $v$ expansion because these corrections arise from three different sources, each of which seems to have a well-behaved $v$ expansion \cite{Bodwin:2006ke,Bodwin:2007ga}. The large size of the corrections of NLO in $\alpha_s$ is more problematic. It is a consequence of large double and single logarithms of $p_T^2/m_c^2$ \cite{Jia:2010fw}. It has been shown that the double logarithms arise from the endpoint region \cite{Bodwin:2013ys}, in which the hard scattering transfers almost all of the momentum of a spectator parton to an active parton. However, it is not known how to resum the double (or single) endpoint logarithms to all orders in perturbation theory. Such a resummation would allow one to have greater confidence in the reliability of the theoretical predictions. \subsection{Opportunities at the Frontiers of High-Energy Physics} In this section, we describe the opportunities in the near- and long-term future for adding to our understanding of quarkonium production. 
We describe the prospects for progress in this arena at the Large Hadron Collider, Belle-II, the LHC upgrade, and future high-energy $e^+ e^-$ and $ep$ colliders. \subsubsection{Large Hadron Collider} The extension of the energy frontier to 13~TeV in Run~2 of the LHC will offer the prospect of an extended $p_T$ reach for quarkonium studies. The experiments are refining their trigger strategies to cope with the higher luminosity and the larger cross section. It is important that quarkonium studies not be ignored amid the emphasis on the search for new physics. The importance of quarkonium decay modes of the Higgs boson as probes of Higgs coupling constants was mentioned in Sec.~\ref{sec:probes-new-physics}. The exploration of the high-$p_T$ aspects of quarkonium production and polarization that are outlined here will require large integrated luminosity. In this regard, it is important that the experiments ensure that quarkonium triggers retain high efficiency for dimuon decays of the $J/\psi$ and the $\Upsilon(1S)$ at large $p_T$. The current benchmark theoretical model for quarkonium production, NRQCD factorization, has not been proven by theoretical means to be a consequence of QCD. In the absence of further progress in establishing NRQCD factorization theoretically, we must rely on experiment to decide whether it is a valid model of quarkonium production. The NLP fragmentation approach suggests that NRQCD factorization is likely to hold only for $p_T$ much greater than the quarkonium mass. Therefore, it is very important to measure quarkonium production rates and polarizations with high statistics at the highest possible values of $p_T$. Measurements of production cross sections differential in both rapidity and $p_T$ could provide additional important tests, as could measurements of the production rates and polarizations of $\chi_{cJ}$ and $\chi_{bJ}$ states.
In making measurements of prompt quarkonium production, it is very important to separate feeddown contributions from the direct-production rates. Measurements of new production processes, such as the associated production of a quarkonium state and a $W$ or $Z$ boson, would also yield important tests of theoretical models. The LHCb detector has demonstrated its ability to measure the properties of quarkonium states in a variety of decay modes. In the case of $B_c$ studies, the luminosity limitation may restrict the range of final states that can be addressed, but the experiment has already shown its ability to make good determinations of relative branching ratios and should have enough data in Run~2 to explore the physics implications of a quarkonium state that has net flavor content. It is not clear whether the experiment has a chance to observe any of the $B_c$ excited states. \subsubsection{Belle-II at SuperKEKB} Double-charmonium production processes have been the focus of interesting measurements at the $B$ factories that were not anticipated in the designs of these machines. At Belle, the cross sections for producing $J/\psi + \eta_c(1S)$, $J/\psi + \chi_{c0}(1P)$, and $J/\psi + \eta_c(2S)$ have all been measured. The large increase in luminosity at Belle~II should make it possible to resolve processes in which the $J/\psi$ recoils against other onia. In particular, it should be possible to measure the cross section for producing $J/\psi + J/\psi$, which proceeds through the annihilation of $e^+ e^-$ into two photons \cite{Bodwin:2002kk}. It may also be possible to observe processes in which a $\chi_{cJ}(1P)$ or an $\eta_c(1S)$ recoils against an onium. As we have mentioned in Sec.~\ref{sec:HOCor}, the theory of exclusive production of double-quarkonium states is plagued by large logarithms of $p_T^2/m_Q^2$, where $p_T$ is the CM momentum of the quarkonium. 
These logarithms appear in channels for which the production amplitude is helicity suppressed at leading order in $\alpha_s$. Such channels include $J/\psi + \eta_c$ and, for some combinations of helicities, $J/\psi + \chi_{cJ}$. Work is in progress with the goal of understanding the origins of such large logarithms and resumming them to all orders in perturbation theory. If the perturbation series can be brought under control through such a resummation, then the resummation could be tested by measuring the exclusive production of double-charmonium states at Belle-II. Measurements of rates into specific helicity states would provide particularly stringent tests of the theory. If the center-of-mass energy at Belle-II can be increased above the $B_c^+ B_c^-$ threshold at 12.5~GeV, that would allow measurements of decays of the $B_c$ that are hopeless at a hadron collider. It might even allow the observation of some of the excited states in the $B_c$ spectrum. \subsubsection{LHC Upgrade} It is likely that many of the tests of quarkonium-production theory that we have outlined will be accomplished in the upcoming runs of the LHC. The higher energy and luminosity of an upgraded LHC might however be of considerable importance for testing the theory of the production of $b\bar b$ quarkonium states, as the criterion that $p_T$ be much larger than the bottomonium mass may not be easy to satisfy in precision measurements. Additional precision in measurements of charmonium production at higher values of $p_T$ could also drive the theory to a new level of precision beyond NLO in $\alpha_s$ and beyond the leading non-trivial order in $v$. \subsubsection{Future $e^+e^-$ Collider} One important application of a future $e^+e^-$ collider will be to serve as a Higgs factory. This will make possible the measurements of Higgs couplings to the SM particles. 
As we have mentioned in Sec.~\ref{sec:probes-new-physics}, rare Higgs decays involving quarkonia offer unique probes of some couplings. In particular, the decay $H\to J/\psi + \gamma$ seems to provide the only realistic probe of the $H c \bar c$ coupling. Measurements of inclusive quarkonium production in $\gamma \gamma$ collisions at a high-energy $e^+ e^-$ collider could provide an important test of NRQCD factorization. In particular, measurements of inclusive $J/\psi$ production at higher values of $p_T$ than were accessible at LEP could resolve the tension between global fits of NRQCD matrix elements and the LEP measurements. \subsubsection{Future $ep$ Collider} The large range of theoretical predictions for quarkonium polarizations in hadroproduction is a consequence of the fact that the color-octet NRQCD matrix elements are poorly determined at present by the high-$p_T$ hadroproduction data. Such ambiguities could likely be largely resolved by measuring quarkonium production cross sections with high precision at $p_T> 10$~GeV at a future $ep$ collider. Measurements of quarkonium polarization at a high-energy $ep$ collider would provide further stringent tests of the theory. An electron-ion collider has emerged as one of the top priorities of the U.S.\ nuclear-physics community. An electron-ion collider at CERN has also been proposed.
\section{Introduction}\label{sec:Intro} \subsection{Background} Let $M$ be a $2$--dimensional surface with boundary. A map from $M$ to a Riemannian manifold $N$ is said to span a given collection $\Gamma\subset N$ of Jordan curves if its restriction to $\partial M$ is a weakly monotone parametrization of $\Gamma$. Consider the problem of finding a weakly conformal map of minimal area among maps spanning $\Gamma$. When $M$ is a disc this amounts to the classical Problem of Plateau, with first general solutions going back to \cite{Dou31, Rad30, Cou37} for $N=\mathbb{R}^n$ and to \cite{Mor48} for homogeneously regular Riemannian manifolds $N$. When $M$ is a surface of higher topological type, possibly with several boundary components, the problem is known as the Plateau-Douglas problem. It was first considered in \cite{Dou39, Shi39, Cou40} with different non-degeneracy conditions; complete modern solutions appeared in \cite{Jos85,TT88}. One may further ask whether it is possible to find a weakly conformal map of minimal area spanning $\Gamma$ in a fixed relative homotopy class. In general, such maps need not exist, see \cite{Jost84, Lem78}. However, Lemaire \cite{Lem82} showed the existence of an area minimizer in a fixed relative homotopy class under the assumption that $N$ has trivial second homotopy group, while Jost \cite{Jos85} proved the existence of an area minimizer inducing the same action on fundamental groups as a given map. Schoen-Yau \cite{SY79} and Sacks-Uhlenbeck \cite{SU82} considered the related problem of finding a mapping of minimal area from a closed (i.e. compact and without boundary) surface $M$ to $N$ inducing the same action on fundamental groups as a given map. Finally, White \cite{White88} introduced the notion of $d$--homotopy type for Sobolev maps from a closed manifold of any dimension to a Riemannian manifold and proved the existence of mappings of minimal energy in a given $d$--homotopy class for suitable integers $d$. 
Recently, the classical Plateau and the Plateau-Douglas problems have been solved in metric spaces of various generality in \cite{Nik79, Jost94, MZ10, OvdM14, LW15-Plateau, Cre-Plateau-sing} and \cite{FW-Plateau-Douglas, Creutz-Fitzi}, respectively. In the present article we strengthen the results of Lemaire \cite{Lem82} and Jost \cite{Jos85} mentioned above and generalize them to the setting of proper geodesic metric spaces admitting a local quadratic isoperimetric inequality. For this purpose, we introduce and study a notion of $1$--homotopy classes of Sobolev maps relative to a given collection of Jordan curves. Our notion is akin to $d$--homotopy of Sobolev maps defined on a closed manifold introduced by White \cite{White88} and studied in \cite{White88, HL03}. It provides better control than the induced action on the fundamental group. We then solve the Plateau-Douglas problem in relative $1$--homotopy classes and show that solutions are locally H\"older continuous and conformal in a weak metric sense. If the underlying space has trivial second homotopy group then relatively $1$--homotopic maps are relatively homotopic. To our knowledge, our results are already partially new for Riemannian manifolds. We further obtain an analog for closed surfaces, generalizing the results in \cite{SY79, SU82} mentioned above. \begin{defn}\label{def:qii} A complete metric space $X$ is said to admit a local quadratic isoperimetric inequality if there exist $C, l_0>0$ such that every Lipschitz curve $c\colon S^1\to X$ of length $\ell(c)\leq l_0$ is the trace of a Sobolev map $u\in W^{1,2}(\mathbb D, X)$ with $$\operatorname{Area}(u)\leq C\cdot \ell(c)^2.$$ \end{defn} For the notions related to Sobolev maps we refer to Section~\ref{sec:prelims}. 
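As a simple illustration of Definition~\ref{def:qii}, let $X=\mathbb{R}^n$. Given a Lipschitz curve $c\colon S^1\to\mathbb{R}^n$, the cone over $c$ with apex $p:=c(1)$, $$u(re^{i\theta}):= p + r\cdot\bigl(c(e^{i\theta})-p\bigr),$$ defines a Sobolev disc with trace $c$ and $$\operatorname{Area}(u)\leq \frac{1}{2}\int_{S^1}|c(e^{i\theta})-p|\cdot|c'(e^{i\theta})|\,d\theta\leq \frac{1}{4}\cdot \ell(c)^2,$$ since every point of $c$ lies at distance at most $\ell(c)/2$ from $p$. Thus $\mathbb{R}^n$ admits a (global) quadratic isoperimetric inequality with $C=\frac{1}{4}$ and no restriction on $\ell(c)$; the optimal constant in $\mathbb{R}^n$ is $\frac{1}{4\pi}$, attained by round circles.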
The class of spaces admitting a local quadratic isoperimetric inequality contains all homogeneously regular Riemannian manifolds \cite{Mor48}, compact Lipschitz manifolds, complete $\rm{CAT}(\kappa)$--spaces, compact Alexandrov spaces, some sub-Riemannian manifolds, and many more spaces, cf. \cite[Section 8]{LW15-Plateau}. \subsection{Relative $1$--Homotopy classes of Sobolev maps} Let $\Gamma\subset X$ be the disjoint union of $k\ge 1$ rectifiable Jordan curves in a proper geodesic metric space $X$ admitting a local quadratic isoperimetric inequality. Let $M$ be a smooth compact oriented surface with $k$ boundary components, and let $g$ be an auxiliary Riemannian metric on $M$. We denote by $[\Gamma]$ the family of weakly monotone parametrizations of $\Gamma$, i.e. uniform limits of homeomorphisms $\partial M\to \Gamma$, and by $\Lambda(M,\Gamma,X)$ the family of Sobolev maps $u\in W^{1,2}(M, X)$ such that the trace $\operatorname{tr}(u)$ has a continuous representative in $[\Gamma]$. Let $h\colon K\to M$ be a $C^1$--smooth triangulation of $M$, and $\varrho\colon K^1\to X$ a continuous map such that $\varrho|_{\partial K}\in [\Gamma]$, where $K^1$ denotes the $1$--skeleton of $K$ and $\partial K\subset K^1$ is the subset of $K$ homeomorphic to $\partial M$. The homotopy class of $\varrho$ relative to $\Gamma$ is the family $$[\varrho]_\Gamma:= \{\varrho'\colon K^1\to X \mid\ \varrho'\textrm{ continuous },\ \varrho'|_{\partial K}\in[\Gamma],\ \varrho\sim\varrho'\textrm{ rel }\Gamma\},$$ where $\varrho$ and $\varrho'$ are said to be homotopic relative to $\Gamma$, denoted $\varrho\sim \varrho'$ rel $\Gamma$, if there exists a homotopy $F\colon K^1\times[0,1]\to X$ from $\varrho$ to $\varrho'$ with $F(\cdot, t)|_{\partial K}\in [\Gamma]$ for every $t$. The $1$--homotopy class $u_{\#, 1}[h]$ relative to $\Gamma$ of an element $u\in \Lambda(M,\Gamma, X)$ will be defined in Section~\ref{sec:1-homot}. In the following theorem we summarize its most important properties. 
These could in fact be used to give an equivalent definition of $u_{\#, 1}[h]$, see the remark after the theorem. \begin{thm}\label{thm:intro-properties-1-hom-class-Sobolev} Every $u\in \Lambda(M, \Gamma, X)$ has a well-defined relative homotopy class $u_{\#, 1}[h]$ of continuous maps from $K^1$ to $X$ whose restriction to $\partial K$ is in $[\Gamma]$. It satisfies: \begin{enumerate} \item If $u$ has a representative $\bar{u}$ which is continuous on the whole of $M$ then $$u_{\#,1}[h] = [\bar{u}\circ h|_{K^1}]_\Gamma.$$ \item If $u,v\in \Lambda(M,\Gamma, X)$ satisfy $u_{\#,1}[h] = v_{\#, 1}[h]$ then, for every triangulation $\tilde{h}\colon \tilde{K}\to M$ of $M$, we have $$u_{\#,1}[\tilde{h}] = v_{\#,1}[\tilde{h}].$$ \item For every $L>0$ there exists $\varepsilon>0$ such that if $u,v\in\Lambda(M, \Gamma, X)$ induce the same orientation on $\Gamma$, and $$d_{L^2}(u,v)\leq \varepsilon,\quad \max\left\{E_+^2(u,g), E_+^2(v,g)\right\}\leq L,$$ then $u_{\#,1}[h] = v_{\#,1}[h].$ \end{enumerate} \end{thm} Here, $E_+^2(u,g)$ denotes the Reshetnyak energy of $u$ with respect to $g$, see Section~\ref{sec:prelims}. Maps in $\Lambda(M,\Gamma, X)$ can be approximated in the $L^2$--distance by \emph{continuous} maps in $\Lambda(M,\Gamma, X)$ with the same trace and control on the energy, see Lemma~\ref{lem:approx-cont-area-energy-bounded}. Thus properties (i) and (iii) in Theorem~\ref{thm:intro-properties-1-hom-class-Sobolev} imply that the $1$--homotopy class $u_{\#,1}[h]$ is well-defined. The argument used to prove (ii) also shows that, if $u\in \Lambda(M,\Gamma, X)$ and $\varphi\colon M\to X$ is continuous with $\varphi|_{\partial M}\in[\Gamma]$, then $u_{\#,1}[h] = [\varphi\circ h|_{K^1}]_\Gamma$ holds for one triangulation $h$ if and only if it holds for every triangulation. In this case we say that $u\in\Lambda(M,\Gamma, X)$ is $1$--homotopic to $\varphi$ relative to $\Gamma$, denoted by $u\sim_1 \varphi$ rel $\Gamma$. 
\subsection{Homotopic Plateau-Douglas problem} Let $\Gamma$, $X$ be as above, and let $M$ be a smooth compact oriented and \emph{connected} surface with $k\ge 1$ boundary components. Given a continuous map $\varphi\colon M\to X$ with $\varphi|_{\partial M}\in[\Gamma]$, set $$a(M, \varphi, X):= \inf\{\operatorname{Area}(u): \text{$u\in\Lambda(M, \Gamma, X)$, $u\sim_1\varphi$ rel $\Gamma$}\},$$ where $\inf\varnothing=\infty$ by convention. Moreover, set $a^*(M, \varphi, X):= \inf a(M^*, \varphi^*, X)$, where the infimum is taken over all \emph{primary reductions} of $(M,\varphi)$, that is, pairs $(M^*, \varphi^*)$ consisting of \begin{enumerate} \item[(i)] a smooth surface $M^*$ obtained from $M$ by cutting $M$ along a smooth closed simple non-contractible curve $\alpha$ in the interior of $M$ and gluing smooth discs to the two new boundary components; \item[(ii)] a continuous map $\varphi^*\colon M^*\to X$ which agrees with $\varphi$ on $M\setminus \alpha$. \end{enumerate} We say that $\varphi$ satisfies the \emph{homotopic Douglas condition} if \begin{equation}\label{eq:Douglas-condition} a(M,\varphi,X) < a^*(M,\varphi,X). \end{equation} As an illustration, if the induced homomorphism $\varphi_*\colon \pi_1(M) \to \pi_1(X)$ of fundamental groups is injective then $\varphi$ satisfies the homotopic Douglas condition \eqref{eq:Douglas-condition} and, in par\-ti\-cu\-lar, $a(M,\varphi,X)<\infty$, see Proposition~\ref{prop:induced-homom-Douglas}. In the statement below, we fix $\Gamma,\ X$, and $M$ as above, and let $\varphi\colon M\to X$ be a continuous map with $\varphi|_{\partial M}\in [\Gamma]$. 
\begin{thm}\label{thm:Plateau-Douglas-homot-intro} If $\varphi$ satisfies the homotopic Douglas condition \eqref{eq:Douglas-condition} then: \begin{enumerate} \item There exist $u\in\Lambda(M, \Gamma, X)$ and a Riemannian metric $g$ on $M$ such that $u$ is $1$--homotopic to $\varphi$ relative to $\Gamma$, $u$ is infinitesimally isotropic with respect to $g$, and $\operatorname{Area}(u) = a(M, \varphi, X)$. \item Any such $u$ has a representative $\bar{u}$ which is locally H\"older continuous in the interior of $M$ and extends continuously to the boundary $\partial M$. \item If $X$ has trivial second homotopy group then $\bar{u}$ is homotopic to $\varphi$ relative to $\Gamma$. \end{enumerate} \end{thm} Moreover, the metric $g$ can be chosen such that it has constant curvature $-1$, $0$, or $1$ and $\partial M$ is geodesic. See Section~\ref{sec:prelims} for the definition of infinitesimal isotropy, which is a metric variant of weak conformality. Here, $\bar{u}$ and $\varphi$ are called homotopic relative to $\Gamma$ if they are homotopic through a family of maps whose restriction to $\partial M$ is in $[\Gamma]$. We remark that homotopy classes (relative to $\Gamma$) need not contain continuous, infinitesimally isotropic area minimizers if $\pi_2(X)$ is non-trivial, compare \cite[Chapter 5]{Jost84}. Theorem~\ref{thm:Plateau-Douglas-homot-intro} generalizes and strengthens \cite[Theorem 2.2]{Jos85} and \cite[Theorem 1.7]{Lem82}, see also \cite[Theorem 5.1]{Jost94} for a homotopic variant of the Dirichlet problem in metric spaces. Note that control on the relative $1$--homotopy class is, in general, strictly stronger than the control on the action on fundamental groups in \cite{Jos85}, see Example~\ref{ex:1-hom-stronger-action-fundgrp}. An analog of Theorem~\ref{thm:Plateau-Douglas-homot-intro} for closed surfaces, generalizing results in \cite{SY79,SU82}, will be discussed in Section~\ref{sec:sol}.
We remark that the local quadratic isoperimetric inequality is crucial to the stability statement (iii) in Theorem~\ref{thm:intro-properties-1-hom-class-Sobolev}. Example~\ref{ex:stabilityfail} exhibits a space where the stability of $1$--homotopy classes from closed surfaces fails. Compare with \cite{Creutz-Fitzi}, where the Plateau-Douglas problem was recently solved in spaces without a local quadratic isoperimetric inequality. \subsection{Outline} The idea for defining the relative $1$--homotopy type of a map $u\in\Lambda(M,\Gamma,X)$ is, like in \cite{White88}, to consider small perturbations of $C^1$--smooth triangulations of $M$ in such a way that the restriction of $u$ to the $1$--skeleton of a ``generic'' perturbed triangulation is essentially continuous. In Section~\ref{sec:admissible-deformations}, we introduce \emph{admissible deformations} on $M$ which accomplish this and prove that the relative homotopy class of such restrictions is essentially independent of the perturbation, see Theorem~\ref{thm:homotopic-1-skeleton}. This crucially uses the local quadratic isoperimetric inequality. In Section~\ref{sec:1-homot} we show that the way we perturb a given triangulation does not affect the relative homotopy type of the restrictions to generic $1$--skeleta. Together with a continuous approximation of Sobolev maps (see Lemma~\ref{lem:approx-cont-area-energy-bounded}) and the results of Section~\ref{sec:admissible-deformations}, this leads to a well-defined notion of relative $1$--homotopy class for Sobolev maps, which is moreover independent of the chosen triangulation. The main results in Section~\ref{sec:1-homot} are Theorems~\ref{thm:rel-hom-indep-wiggling} and \ref{thm:stability-1-homotopic} from which Theorem~\ref{thm:intro-properties-1-hom-class-Sobolev} will follow. 
As already mentioned, our notion of relative $1$--homotopy class is related to the $d$--homotopy type, studied primarily for Sobolev maps defined on closed manifolds in \cite{White88, HL03, HL05}. While these articles also discuss the case of manifolds with boundary, Sobolev maps in their setting are required to have a fixed Lipschitz trace. This is suitable for solving the Dirichlet problem in $d$--homotopy classes but cannot be applied to the Plateau--Douglas problem since it is not possible to control the boundary behaviour of elements of $\Lambda(M,\Gamma, X)$. In Sections~\ref{sec:homot-Douglas-cond} and \ref{sec:sol} we use an approach analogous to that in \cite{FW-Plateau-Douglas} in order to solve the homotopic Plateau-Douglas problem. Unlike in \cite{FW-Plateau-Douglas}, we need to control the relative $1$--homotopy type of the primary reductions appearing in the proofs of Propositions~\ref{prop:equi-cont} and \ref{prop:lower-bound-rel-systole}. Lemma~\ref{lem:1-hom-reduction} provides the necessary technical tool for this. We furthermore provide a simple sufficient condition (see Proposition~\ref{prop:induced-homom-Douglas}) that ensures the homotopic Douglas condition \eqref{eq:Douglas-condition} is satisfied. Section~\ref{sec:sol} is devoted to the proof of Theorem~\ref{thm:Plateau-Douglas-homot-intro}. We present and prove Theorem~\ref{thm:area-min-hom-class-without-bdry}, which is an analog of Theorem~\ref{thm:Plateau-Douglas-homot-intro} for closed surfaces. \section{Preliminaries}\label{sec:prelims} \subsection{Terminology} A \emph{surface}, in this work, refers to a smooth compact oriented surface with (possibly) non-empty boundary, and a \emph{closed surface} is a surface with empty boundary. We denote by $\partial M$ and ${\rm int}(M)=M\setminus \partial M$ the boundary and interior of a surface $M$, respectively. 
The Euler characteristic of a connected surface $M$ satisfies $\chi(M)=2-2p-k$, where $k\ge 0$ is the number of components of $\partial M$, and $p$ is the genus of the closed surface obtained by gluing a disc along every boundary component of $M$. For a metric space $X$ and $m\ge 0$, we denote by ${\mathcal H}_X^m$ the Hausdorff $m$--measure on $X$. If $X$ is a manifold equipped with a Riemannian metric $g$, we denote ${\mathcal H}_g^m={\mathcal H}_X^m$. The Lebesgue measure of a subset $A\subset \mathbb{R}^m$ is denoted by $|A|$. \subsection{Triangulations} A triangulation of a surface $M$ is a homeomorphism $h\colon K\to M$ from a cell-complex $K$, equipped with the length metric which restricts to the Euclidean metric on every cell $\Delta$ of $K$. We additionally assume throughout the paper that triangulations are $C^1$--smooth, i.e. $h|_\Delta$ is a $C^1$--diffeomorphism onto its image for every cell $\Delta$ of $K$ (cells are closed by definition). The $j$--skeleton $K^j$ of $K$ is the union of the cells of $K$ with dimension $\le j$, and $\partial K\subset K^1$ is the subset of $K$ homeomorphic to $\partial M$. \subsection{Semi-norms} The energy of a semi-norm $s$ on (Euclidean) $\mathbb{R}^2$ is defined by $$\mathbf{I}_+^2(s):= \max\{s(v)^2: v\in\mathbb{R}^2, |v|=1\}.$$ The jacobian of a norm $s$ on $\mathbb{R}^2$ is the unique number ${\mathbf J}(s)$ such that $${\mathcal H}^2_{(\mathbb{R}^2, s)}(A) = {\mathbf J}(s) \cdot |A|$$ for some and thus every subset $A\subset\mathbb{R}^2$ with $|A|>0$. For a degenerate semi-norm $s$ we set ${\mathbf J}(s):= 0$. Notice that we always have ${\mathbf J}(s)\leq \mathbf{I}_+^2(s)$. A semi-norm $s$ on $\mathbb{R}^2$ is called isotropic if $s=0$ or if $s$ is a norm and the ellipse of maximal area contained in $\{v\in\mathbb{R}^2: s(v)\leq 1\}$ is a round Euclidean ball.
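As a concrete illustration of these notions, let $s(v):=|Av|$ for an invertible linear map $A\colon\mathbb{R}^2\to\mathbb{R}^2$ with singular values $\sigma_1\geq\sigma_2>0$. Then $$\mathbf{I}_+^2(s)=\sigma_1^2,\qquad {\mathbf J}(s)=\sigma_1\cdot\sigma_2,$$ since $s$ is induced by the inner product $\langle A\cdot, A\cdot\rangle$ and hence the Hausdorff $2$--measure of $(\mathbb{R}^2,s)$ equals $|\det A|$ times the Lebesgue measure. This illustrates the inequality ${\mathbf J}(s)\leq \mathbf{I}_+^2(s)$. Moreover, the unit ball $\{v\in\mathbb{R}^2: s(v)\leq 1\}$ is an ellipse with semi-axes $\sigma_1^{-1}$ and $\sigma_2^{-1}$, so $s$ is isotropic if and only if $\sigma_1=\sigma_2$, i.e. if and only if $A$ is a conformal linear map.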
\subsection{Sobolev maps with metric targets} Let $(X, d)$ be a complete metric space and let $M$ be a smooth compact $m$--dimen\-sional manifold, possibly with non-empty boundary. Fix a Riemannian metric $g$ on $M$ and let $\Omega\subset M$ be open and bounded. Denote by $L^2(\Omega, X)$ the collection of measurable and essentially separably valued maps $u\colon \Omega\to X$ such that for some and thus every $x\in X$ the function $u_x(z):= d(x, u(z))$ belongs to the classical space $L^2(\Omega)$. For $u,v\in L^2(\Omega, X)$ we define $$d_{L^2}(u,v):= \left(\int_\Omega d^2(u(z), v(z))\,d{\mathcal H}_g^m(z)\right)^{\frac{1}{2}},$$ and we say that a sequence $(u_n)\subset L^2(\Omega, X)$ converges in $L^2(\Omega, X)$ to $u\in L^2(\Omega, X)$ if $d_{L^2}(u_n,u)\to 0$ as $n\to\infty$. The following definition is due to Reshetnyak \cite{Res97,Res06}. \begin{defn} A map $u\in L^2(\Omega, X)$ belongs to the Sobolev space $W^{1,2}(\Omega, X)$ if there exists $h\in L^2(\Omega)$ such that $u_x\in W^{1,2}(\Omega)$ and $|\nabla u_x|_g\leq h$ almost everywhere on $\Omega$, for every $x\in X$. \end{defn} Several other notions of Sobolev spaces exist in the literature and we refer the reader to \cite[Chapter 10]{HKST15} for an overview of some of them. We will use in particular \emph{Newton-Sobolev} spaces which are equivalent to $W^{1,2}(\Omega,X)$ if $\Omega$ is a bounded Lipschitz domain, see Proposition~\ref{prop:Newton-Sobolev-rep} for a precise statement. If $u\in W^{1,2}(\Omega, X)$ then for almost every $z\in \Omega$ there exists a unique semi-norm $\ap\md u_z$ on $T_zM$ such that $$\operatorname{ap} \lim_{v\to 0} \frac{d(u(\exp_z(v)), u(z)) - \ap\md u_z(v)}{|v|_g}=0,$$ where $\operatorname{ap}\lim$ is the approximate limit, see e.g. \cite{Kar07}. Next, we specialize to the case that $M$ has dimension $m=2$. We define the notions of energy, jacobian and isotropy of a semi-norm on $(T_zM, g(z))$ by identifying it with $(\mathbb{R}^2,|\cdot|)$ via a linear isometry.
\begin{defn} Let $u\in W^{1,2}(\Omega, X)$. The Reshetnyak energy of $u$ with respect to $g$ and the parametrized (Hausdorff) area of $u$ are given, respectively, by $$E_+^2(u, g):= \int_{\Omega} \mathbf{I}_+^2(\ap\md u_z)\,d{\mathcal H}^2_{g}(z),\quad \operatorname{Area}(u):= \int_{\Omega} {\mathbf J}(\ap\md u_z)\,d{\mathcal H}^2_g(z).$$ \end{defn} We have that the parametrized area of a Sobolev map is invariant under precompositions with biLipschitz homeo\-morphisms, and thus independent of the Riemannian metric $g$. The energy $E_+^2$ is invariant only under precompositions with conformal diffeomorphisms, and thus depends on $g$. Our notation reflects these facts. Finally, if $u$ satisfies Lusin's property (N) then the area formula \cite{Kir94}, \cite{Kar07} for metric space valued Sobolev maps yields $$\operatorname{Area}(u) = \int_X\#u^{-1}(x)\,d{\mathcal H}^2_X(x).$$ \begin{defn} A map $u\in W^{1,2}(\Omega, X)$ is called infinitesimally isotropic with respect to the Riemannian metric $g$ if for almost every $z\in \Omega$ the semi-norm $\ap\md u_z$ on $(T_zM, g(z))$ is isotropic. \end{defn} If $X$ is a Riemannian manifold, or more generally a space with property (ET) (cf. \cite[Definition 11.1]{LW15-Plateau}), then infinitesimal isotropy is equivalent to weak conformality, see \cite[Theorem 11.3]{LW15-Plateau}. Next, we recall the definition of the trace of a Sobolev map. Let $\Omega \subset {\rm int}(M)$ be a Lipschitz domain. Then for every $z\in \partial \Omega$ there exist an open neighborhood $U\subset M$ and a biLip\-schitz map $\psi\colon (0,1)\times [0,1)\to M$ such that $\psi((0,1)\times (0,1)) = U\cap \Omega$ and $\psi((0,1)\times\{0\}) = U\cap \partial\Omega$. Let $u\in W^{1,2}(\Omega, X)$. For almost every $s\in (0,1)$ the map $t\mapsto u\circ\psi(s,t)$ has an absolutely continuous representative which we denote by the same expression. 
The trace of $u$ is defined by $$\operatorname{tr}(u)(\psi(s,0)):= \lim_{t\searrow 0} (u\circ\psi)(s,t)$$ for almost every $s\in(0,1)$. It can be shown (see \cite{KS93}) that the trace is independent of the choice of the map $\psi$ and defines an element of $L^2(\partial \Omega, X)$. \begin{prop}\label{prop:good-cont-filling} Let $X$ be a proper metric space admitting a local quadratic isoperimetric inequality. Let $\Omega$ be a Lipschitz Jordan domain in the interior of $M$ and let $u\in W^{1,2}(\Omega, X)$ have a continuous trace. Then for every $\varepsilon>0$ there exists a continuous map $v\colon\overline{\Omega}\to X$ with $v|_{\partial \Omega} = \operatorname{tr}(u)$, $v\in W^{1,2}(\Omega, X)$, and \begin{align*} \operatorname{Area}(v)\leq \operatorname{Area}(u) + \varepsilon\cdot E_+^2(u, g),\quad E_+^2(v, g) \leq \left(1+\varepsilon^{-1}\right)\cdot E_+^2(u, g). \end{align*} \end{prop} It follows, in particular, that if a closed curve $\gamma$ in $X$ is the trace of a Sobolev disc then $\gamma$ is contractible. \begin{proof} By possibly doubling $M$ we may assume that $M$ has no boundary. Now, there exists a conformal diffeomorphism from a bounded open subset of $\mathbb{R}^2$ onto an open subset of $M$ which contains $\overline{\Omega}$. Since area and energy are invariant under conformal diffeomorphisms we may assume that $\Omega$ is a bounded Lipschitz Jordan domain in $\mathbb{R}^2$. We write $E_+^2(u)$ for the energy of $u$. Fix $\varepsilon>0$. We first show the existence of a minimizer $v\in W^{1,2}(\Omega, X)$ of $$A_\varepsilon(v):= \operatorname{Area}(v) + \varepsilon\cdot E_+^2(v),$$ subject to the condition $\operatorname{tr}(v) = \operatorname{tr}(u)$. For this let $(v_n)\subset W^{1,2}(\Omega, X)$ be a minimizing sequence for $A_\varepsilon$ with $\operatorname{tr}(v_n) = \operatorname{tr}(u)$ for all $n$. 
Then $(v_n)$ has bounded energy and thus, by \cite[Lemma 4.11]{LW15-Plateau} and \cite[Theorems 1.13 and 1.12.2]{KS93}, a subsequence converges in $L^2(\Omega, X)$ to a map $v\in W^{1,2}(\Omega, X)$ with $\operatorname{tr}(v) = \operatorname{tr}(u)$. By the lower semi-continuity of area and energy it follows that $v$ is a minimizer of $A_\varepsilon$. Next, we claim that for every Lipschitz domain $\Omega'\subset \Omega$ and every $w\in W^{1,2}(\Omega', X)$ with $\operatorname{tr}(w) = \operatorname{tr}(v|_{\Omega'})$ we have $E_+^2(v|_{\Omega'}) \leq \left(1+\varepsilon^{-1}\right)\cdot E_+^2(w)$ and thus $v$ is $\left(1+\varepsilon^{-1}\right)$--quasiharmonic in the sense of \cite{LW16-harmonic}. Indeed, if $w\in W^{1,2}(\Omega', X)$ satisfies $\operatorname{tr}(w) = \operatorname{tr}(v|_{\Omega'})$ then the map $w'$ which agrees with $w$ on $\Omega'$ and with $v$ on $\Omega\setminus \Omega'$ belongs to $W^{1,2}(\Omega, X)$ and satisfies $\operatorname{tr}(w') = \operatorname{tr}(u)$ by \cite[Theorem 1.12.3]{KS93}. Since $v$ minimizes $A_\varepsilon$ we obtain $$\operatorname{Area}(v|_{\Omega'}) + \varepsilon\cdot E_+^2(v|_{\Omega'}) \leq \operatorname{Area}(w) + \varepsilon\cdot E_+^2(w) \leq (1+\varepsilon)\cdot E_+^2(w),$$ where the last inequality uses that $\operatorname{Area}(w)\leq E_+^2(w)$; this implies the claim. Finally, since $v$ is quasiharmonic and has a continuous trace, it follows from \cite[Theorem 1.3]{LW16-harmonic} that $v$ has a continuous representative which continuously extends to the boundary. This representative, which we denote again by $v$, satisfies the properties in the statement of the proposition. \end{proof} \begin{prop}\label{prop:Newton-Sobolev-rep} Let $\Omega$ be a Lipschitz domain in the interior of $M$.
A measurable and essentially separably valued map $u\colon\Omega\to X$ belongs to $W^{1,2}(\Omega, X)$ if and only if there exist a map $v\colon\overline{\Omega}\to X$ and a Borel function $\rho\colon\overline{\Omega}\to[0,\infty]$ in $L^2(\Omega)$ such that $v=u$ almost everywhere and \begin{equation}\label{eq:upper-grad-ineq} d(v(\gamma(a)),v(\gamma(b)))\leq \int_a^b \rho(\gamma(t))|\gamma'(t)|{\rm d} t \end{equation} for every Lipschitz curve $\gamma\colon[a,b]\to \overline{\Omega}$. In this case, we have \begin{equation}\label{eq:energy} E_+^2(u,g) = \inf\{\|\rho\|_{L^2(\Omega, g)}^2:\ \rho \text{ satisfies \eqref{eq:upper-grad-ineq}}\} \end{equation} and $v(z) = \operatorname{tr}(u)(z)$ for ${\mathcal H}^1_g$--almost every $z\in\partial \Omega$. \end{prop} The map $v$ in the claim is called a \emph{Newton--Sobolev representative} of $u$, and $\rho$ an \emph{upper gradient} of $v$. Inequality \eqref{eq:upper-grad-ineq} is known as the upper gradient inequality. \begin{proof} The existence of $v$ and $\rho$ as in the claim implies that $u\in W^{1,2}(\Omega,X)$, see \cite[Chapter 7]{HKST15}. For the opposite implication, by possibly doubling $M$ we may assume $M$ has no boundary. Since $\partial \Omega$ is Lip\-schitz, there exists a Lipschitz domain $\widehat \Omega\subset M$ containing $\overline\Omega$ and a map $\hat u\in W^{1,2}(\widehat \Omega,X)$ with $\hat u|_\Omega=u$, see the proof of \cite[Lemma 3.4]{LW15-Plateau}. There exist a map $v\colon \widehat \Omega\to X$ and a Borel function $\rho\colon \widehat \Omega\to [0,\infty]$ satisfying \eqref{eq:upper-grad-ineq} for all Lipschitz curves $\gamma\colon [a,b]\to \widehat\Omega$, cf. \cite[Theorems 7.1.20 and 7.4.5]{HKST15}. The maps $v|_{\overline\Omega}$ and $\rho|_{\overline\Omega}$ satisfy the claim. The equality \eqref{eq:energy} follows e.g. from \cite[Theorem 7.1.20 and Lemma 6.2.2]{HKST15}.
Let $\psi\colon (0,1)\times [0,1)\to M$ be as in the definition of the trace, so that $\operatorname{tr}(u)(\psi(s,0))=\lim_{t\to 0}u\circ\psi(s,t)$ for a.e. $s\in (0,1)$. A Fubini-type argument shows that $$v(\psi(s,0))=\lim_{t\to 0}u\circ\psi(s,t)$$ for a.e. $s\in (0,1)$. This completes the proof. \end{proof} We illustrate the use of Newton-Sobolev representatives in the next lemma. Recall that a metric space is said to be $C$--quasiconvex if any two points can be joined by a Lipschitz curve of length at most $C$ times their distance. \begin{lem}\label{lem:ug} Let $h\colon K\to M$ be a Lipschitz map from a cell-complex $K$, and $A\subset K^1$ a $C$--quasiconvex subset of the 1-skeleton. If $v\colon M\to X$ is a Newton-Sobolev representative of a map in $W^{1,2}(M,X)$ and $\rho\in L^2(M)$ is an upper gradient of $v$ with $ L:=\int_A\rho^2\circ h{\rm d} {\mathcal H}^1<\infty,$ then $v\circ h|_A$ is $\frac 12$--H\"older continuous with constant $(CL)^{\frac 12}{\rm Lip}(h)$. \end{lem} \begin{proof} For $x,y\in A$, let $\gamma\colon [0,\ell(\gamma)]\to A$ be a simple unit speed curve joining $x$ and $y$ with $\ell(\gamma)\le Cd(x,y)$. By \eqref{eq:upper-grad-ineq} we have \begin{align*} d(v\circ h(x),v\circ h(y))&\le \int_0^{\ell(\gamma)}\rho\circ h(\gamma(t))|(h\circ\gamma)'(t)|{\rm d} t \le {\rm Lip}(h)\int_0^{\ell(\gamma)}\rho\circ h(\gamma(t)){\rm d}t\\ &\le\ {\rm Lip}(h)\ell(\gamma)^{\frac 12}L^{\frac 12}\le (CL)^{\frac 12}{\rm Lip}(h)d(x,y)^{\frac 12}. \end{align*} \end{proof} \section{Admissible deformations on a surface}\label{sec:admissible-deformations} The notion of admissible deformation on a surface given below, in the spirit of \cite{HL05}, will be used to define $1$--homotopy classes relative to given Jordan curves. We remark that the deformations in \cite{HL05, White88} keep the boundary fixed and are thus not suitable for studying the Plateau-Douglas problem. The deformations in \cite{White86, White88, HL03} for closed surfaces also do not adapt to our purposes.
\begin{defn}\label{def:admdef} An admissible deformation on a surface $M$ is a smooth map $\Phi\colon M\times \mathbb{R}^m\to M$, for some $m\in\mathbb{N}$, such that $\Phi_\xi:= \Phi(\cdot, \xi)$ is a diffeomorphism for every $\xi\in\mathbb{R}^m$ and $\Phi_0=\operatorname{id}_M$, and such that the derivative of $\Phi^p:= \Phi(p, \cdot)$ at the origin satisfies \begin{equation*} D\Phi^p(0)(\mathbb{R}^m) = \left\{\begin{array}{l@{\;\;\text{if}\;}l} T_pM & p\in {\rm int}(M)\\ T_p(\partial M) &p\in\partial M. \end{array}\right. \end{equation*} \end{defn} If $\Phi\colon M\times\mathbb{R}^m\to M$ is an admissible deformation on $M$ and $\varphi\colon M\to M$ is a diffeomorphism then $\Phi'(p,\xi):= \varphi(\Phi(\varphi^{-1}(p),\xi))$ also defines an admissible deformation. \begin{prop}\label{prop:existence-admissible-deformations} There exist admissible deformations on every surface. \end{prop} \begin{proof} Let $\eta_1, \eta_2\colon [0,\infty)\to [0,\infty)$ be smooth functions such that $\eta_1(0)=0$, $\eta_1'(0)>0$, $\eta_2(0)>0$ and $\eta_1(t)=\eta_2(t) = 0$ for all $t\geq 1$. We use $\eta_1,\eta_2$ to define smooth vector fields $X_1, X_2$ on $M$ as follows. Each boundary component of $M$ has a neighborhood which is diffeomorphic to $S^1\times[0,2)$. On such a boundary component we define $X_1$ and $X_2$ by $X_1(z,t) = \eta_1(t) \frac{\partial}{\partial t}$ and $X_2(z,t) = \eta_2(t)\frac{\partial }{\partial z}$, written in coordinates $(z,t)\in S^1\times[0,2)$. Now, extend $X_1,X_2$ to all of $M$ by setting them to be zero outside these neighborhoods. It is easy to see that there exist smooth vector fields $X_3, \dots, X_m$ on $M$, for some $m$, with support in the interior of $M$ such that the vectors $X_1(p), \dots, X_m(p)$ span $T_pM$ for every $p$ in the interior of $M$. For every $k=1,\dots, m$, the flow $\varphi_{X_k, t}$ along $X_k$ is defined for all times $t\in\mathbb{R}$. 
Now the map $\Phi\colon M\times\mathbb{R}^m\to M$ given by $$\Phi(p,\xi):= \varphi_{X_1, \xi_1}\circ \varphi_{X_2, \xi_2}\circ\dots\circ\varphi_{X_m, \xi_m}(p)$$ defines an admissible deformation on $M$. \end{proof} Let $M$ be a surface, which we equip with a Riemannian metric $g$. Let $\Phi\colon M\times \mathbb{R}^m\to M$ be an admissible deformation on $M$ and let $h\colon K\to M$ be a triangulation of $M$. For $\xi\in\mathbb{R}^m$ let $h_\xi\colon K\to M$ be the triangulation given by $h_\xi:= \Phi_\xi\circ h$. The following variant of \cite[Lemma 5]{HL05} plays a key role throughout the article (see also \cite[Lemma 3.3]{HL03} for closed manifolds). \begin{lem}\label{lem:crucial-ineq-deformation} There exist an open ball $B_{\Phi, h}\subset \mathbb{R}^m$ centered at the origin and $C>0$ such that for every Borel function $\rho\colon M\to [0,\infty]$ we have \begin{equation}\label{eq:first-ineq-crucial-lemma} \int_{B_{\Phi, h}}\left(\int_{K^0\cap \partial K} \rho\circ h_\xi(z)\,d{\mathcal H}^0(z)\right)\,d\xi \leq C\int_{\partial M}\rho\,d{\mathcal H}^1_g \end{equation} and, for every $l\in\{0,1,2\}$, \begin{equation}\label{eq:second-ineq-crucial-lemma} \int_{B_{\Phi, h}}\left(\int_{K^l\setminus \partial K} \rho\circ h_\xi(z)\,d{\mathcal H}^l(z)\right)\,d\xi \leq C\int_M\rho\,d{\mathcal H}^2_g. \end{equation} \end{lem} \begin{proof} We only prove \eqref{eq:second-ineq-crucial-lemma} and leave the similar proof of \eqref{eq:first-ineq-crucial-lemma} to the reader. Let $\Delta$ be a closed cell of some dimension $l$ in $K$ and suppose $\Delta$ is not contained in $\partial K$. Define a map $H\colon \Delta\times\mathbb{R}^m\to M$ by $H(z,\xi):= \Phi(h(z), \xi)$. 
The properties of $\Phi$ and $h$ imply that $$D H(z, 0)(\mathbb{R}^l\times\mathbb{R}^m) = T_{h(z)}M$$ for every $z\in \Delta$ and therefore there exist $\varepsilon, c>0$ such that the jacobian of the differential of $H$ satisfies \begin{equation}\label{eq:lower-bound-jac} {\mathbf J}(D H(z,\xi)) \geq c \end{equation} for every $(z,\xi)\in \Delta\times \overline{B}(0, 2\varepsilon)$. Since $H|_{\Delta\times B(0, 2\varepsilon)}$ is $C^1$ up to the boundary, we may extend $H$ to a map $ H\colon \tilde\Delta\times B(0, 2\varepsilon)\to \tilde M$ satisfying \eqref{eq:lower-bound-jac} for some open manifolds $\tilde\Delta\subset \mathbb{R}^l$ and $\tilde M$ containing $\Delta$ and $M$, respectively, by possibly making $c$ smaller. We now claim that there exists $L\geq 0$ such that \begin{equation}\label{eq:upper-bound-levelsets} {\mathcal H}^{l+m-2}(H^{-1}(x) \cap \Delta\times B(0,\varepsilon))\leq L \end{equation} for every $x\in M$. In order to prove this, fix $(z,\xi)\in \Delta\times \overline{B}(0,\varepsilon)$. Let $F\colon \tilde\Delta\times B(0,2\varepsilon) \to \mathbb{R}^{l+m-2}$ be a $C^1$ map such that $F(z,\xi)=0$ and such that the map $$\tilde{H}\colon \tilde \Delta\times B(0,2\varepsilon)\to \tilde M\times\mathbb{R}^{l+m-2}$$ given by $\tilde{H} = (H, F)$ satisfies $$D\tilde{H}(z,\xi) (\mathbb{R}^l\times\mathbb{R}^m) = T_{H(z,\xi)}\tilde M\times \mathbb{R}^{l+m-2}.$$ There exist $\delta>0$ and open neighborhoods $U\subset \tilde \Delta\times B(0,2\varepsilon)$ of $(z,\xi)$ and $V\subset \tilde M$ of $H(z,\xi)$ such that the restriction of $\tilde{H}$ to $U$ is a biLipschitz homeomorphism with image $V\times B(0,\delta)$. Let $G$ be the inverse of $\tilde{H}|_U$, so that $$H^{-1}(x)\cap U = G(\{x\}\times B(0,\delta))$$ for every $x\in V$. It follows that there exists $L'$ such that $${\mathcal H}^{l+m-2}(H^{-1}(x)\cap U) \leq L'$$ for every $x\in V$. 
Since $\Delta\times\overline{B}(0,\varepsilon)$ is compact we can cover it by finitely many such open sets $U$ and the claim follows for a suitable number $L$. Finally, let $\rho\colon M\to[0,\infty]$ be a Borel function. From the co-area formula and the inequalities \eqref{eq:lower-bound-jac} and \eqref{eq:upper-bound-levelsets} we conclude \begin{equation*} \begin{split} \int_{B(0,\varepsilon)}\int_\Delta \rho\circ H(z, \xi)&\,d{\mathcal H}^l(z)\,d\xi\\ &\leq c^{-1} \int_{B(0,\varepsilon)}\int_\Delta \rho\circ H(z, \xi)\, {\mathbf J}(D H(z,\xi))\,d{\mathcal H}^l(z)\,d\xi\\ & = c^{-1}\int_M\rho(x) \cdot {\mathcal H}^{l+m-2}\big(H^{-1}(x)\cap(\Delta\times B(0,\varepsilon))\big)\,d{\mathcal H}^2_g(x)\\ &\leq \frac{L}{c} \int_M\rho(x)\,d{\mathcal H}^2_g(x). \end{split} \end{equation*} This proves \eqref{eq:second-ineq-crucial-lemma} with $B_{\Phi, h}:= B(0,\varepsilon)$ and $C=\frac{L}{c}$. \end{proof} Lemma~\ref{lem:crucial-ineq-deformation} has the following immediate corollary. \begin{cor}\label{cor:aeagree} If $N\subset M$ and $E\subset \partial M$ satisfy ${\mathcal H}^2_g(N)={\mathcal H}^1_g(E)=0$ then, for almost every $\xi\in B_{\Phi,h}$, we have that $h_\xi(x)\notin N$ for ${\mathcal H}^1$--a.e. $x\in K^1\setminus \partial K$ and $h_\xi(x)\notin E$ for every $x\in K^0\cap \partial K$. \end{cor} In the next statement, we denote by $u\circ h_\xi|_{K^1}$ the map which agrees with $u\circ h_\xi$ on $K^1\setminus\partial K$ and with $\operatorname{tr}(u)\circ h_\xi$ on $\partial K$. \begin{prop}\label{prop:restriction-1-skeleton} Let $X$ be a complete metric space and $u\in W^{1,2}(M,X)$. Then $u\circ h_\xi|_{K^1\setminus\partial K}$ is essentially continuous for a.e. $\xi\in B_{\Phi,h}$. If $u$ has continuous trace then $u\circ h_\xi|_{K^1}$ is essentially continuous for a.e. $\xi\in B_{\Phi,h}$, and extends continuously to $K$ in case $X$ is proper and admits a local quadratic isoperimetric inequality.
\end{prop} \begin{proof} Let $v\colon M\to X$ be a Newton-Sobolev representative of $u$ with upper gradient $\rho\in L^2(M)$ (cf. Proposition~\ref{prop:Newton-Sobolev-rep}), and $A:=\overline{K^1\setminus \partial K}$. Since $$ \int_A\rho^2\circ h_\xi{\rm d}{\mathcal H}^1<\infty $$ for a.e. $\xi\in B_{\Phi,h}$ by \eqref{eq:second-ineq-crucial-lemma}, Lemma~\ref{lem:ug} implies that $v\circ h_\xi|_A$ is continuous for a.e. $\xi\in B_{\Phi,h}$. The first claim now follows from Corollary~\ref{cor:aeagree} applied to the null-set $\{u\ne v\}$. Suppose $\operatorname{tr}(u)$ has a continuous representative $\eta$. The set $\{v|_{\partial M}\ne \eta\}\subset \partial M$ has null ${\mathcal H}^1_g$--measure by Proposition~\ref{prop:Newton-Sobolev-rep}, in particular $v\circ h_\xi|_{\partial K}=\eta\circ h_\xi|_{\partial K}$ ${\mathcal H}^1$--a.e., for a.e. $\xi$. The argument above together with Corollary~\ref{cor:aeagree} applied to $\{u\ne v\}$ and $\{v|_{\partial M}\ne \eta\}$ implies that, for a.e. $\xi\in B_{\Phi,h}$, the map \begin{align*} w_\xi(x):= \left\{ \begin{array}{ll} v\circ h_\xi(x), & x\in K^1\setminus\partial K\\ \eta\circ h_\xi(x), & x\in \partial K \end{array}\right. \end{align*} is a continuous representative of $u\circ h_\xi|_{K^1}$. If $X$ is proper and admits a local quadratic isoperimetric inequality and $\Delta$ is a $2$--cell of $K$ then $\operatorname{tr}(u\circ h_\xi|_\Delta)=w_\xi|_{\partial\Delta}$ for a.e. $\xi\in B_{\Phi,h}$ by Proposition~\ref{prop:Newton-Sobolev-rep}. Applying Proposition~\ref{prop:good-cont-filling} on each $2$--cell and gluing these together yields the desired continuous extension. \end{proof} Now, suppose that $M$ has $k\geq 1$ boundary components and let $\Gamma$ be the disjoint union of $k$ rectifiable Jordan curves in a proper metric space $X$ admitting a local quadratic isoperimetric inequality. Recall the definition of homotopy relative to $\Gamma$ from the introduction. 
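In the arguments below, such homotopies between continuous maps $\varrho,\varrho'\colon K^1\to X$ defined on the $1$--skeleton of a triangulation $h\colon K\to M$ will always appear in the following concrete form: a continuous map $$H\colon K^1\times[0,1]\to X,\qquad H(\cdot,0)=\varrho,\qquad H(\cdot,1)=\varrho',$$ such that $H|_{\partial K\times\{t\}}\in[\Gamma]$ for every $t\in[0,1]$; compare the constructions in the proofs of Theorem~\ref{thm:homotopic-1-skeleton} and Lemma~\ref{lem:close-1-skeleton-implies-homotopic-rel} below.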
\begin{thm}\label{thm:homotopic-1-skeleton} Let $u\in\Lambda(M,\Gamma, X)$. Then there exists a negligible set $N\subset B_{\Phi, h}$ such that the continuous representatives of $u\circ h_\xi|_{K^1}$ and $u\circ h_\zeta|_{K^1}$ are homotopic relative to $\Gamma$ for all $\xi, \zeta\in B_{\Phi, h}\setminus N$. \end{thm} \begin{proof} Denote by $\eta$ the continuous representative of $\operatorname{tr}(u)$. Let $v\colon M\to X$ be a Newton-Sobolev representative of $u$ with upper gradient $\rho\in L^2(M)$ as in Proposition~\ref{prop:Newton-Sobolev-rep}, and set $\bar v:=v$ on ${\rm int}(M)$ and $\bar v:=\eta$ on $\partial M$. By the proof of Proposition~\ref{prop:restriction-1-skeleton}, there exists a null-set $N_0\subset B_{\Phi,h}$ such that $\bar v\circ h_\xi|_{K^1}$ is the continuous representative of $u\circ h_\xi|_{K^1}$ whenever $\xi\in B_{\Phi,h}\setminus N_0$. We claim that there exists $\xi_0\in B_{\Phi,h}\setminus N_0$ such that the map $H_\xi\colon K^1\times [0,1]\to M$ given by $H_\xi(x,t):=\Phi(h(x),\xi_0+t(\xi-\xi_0))$ satisfies \begin{align}\label{eq:riesz} \int_0^1\int_{K^l\setminus \partial K}\rho^2\circ H_\xi{\rm d}{\mathcal H}^l{\rm d}t<\infty,\quad l=0,1, \end{align} for a.e. $\xi\in B_{\Phi,h}$. Let us first finish the proof assuming \eqref{eq:riesz}. It is enough to show that there exists a null-set $N\subset B_{\Phi,h}$ containing $N_0$ such that $\bar v\circ h_{\xi_0}|_{K^1}\sim \bar v\circ h_{\xi}|_{K^1}$ rel $\Gamma$, whenever $\xi\in B_{\Phi,h}\setminus N$. Indeed, from this it follows that $\bar v\circ h_{\xi}|_{K^1}\sim \bar v\circ h_{\zeta}|_{K^1}$ rel $\Gamma$ for every $\xi,\zeta\in B_{\Phi,h}\setminus N$. Note that $\bar v\circ H_\xi(\cdot,0)=\bar v\circ h_{\xi_0}|_{K^1}$, $\bar v\circ H_\xi(\cdot,1)=\bar v\circ h_\xi|_{K^1}$ and that $\bar v\circ H_\xi|_{\partial K\times [0,1]}$ is continuous with $\bar v\circ H_\xi|_{\partial K\times \{t\}}\in [\Gamma]$ for every $\xi\in B_{\Phi,h}$ and $t\in [0,1]$.
Fix a $1$--cell $e$ of $K$ not contained in $\partial K$ and let $A:=e\times [0,1]$. We show that $\bar v\circ H_\xi|_{\partial A}$ is continuous and the trace of a Sobolev map, for a.e. $\xi\in B_{\Phi,h}\setminus N_0$. By Proposition~\ref{prop:good-cont-filling} this implies that $\bar v\circ H_\xi|_{\partial A}$ has a continuous extension to $A$, and choosing a continuous extension for each $A$ we obtain the desired homotopy relative to $\Gamma$ between $\bar v\circ h_{\xi_0}|_{K^1}$ and $\bar v\circ h_\xi|_{K^1}$, for a.e. $\xi\in B_{\Phi,h}\setminus N_0$. Since ${\rm Lip}(H_\xi)\cdot\rho\circ H_\xi|_A$ is an upper gradient of $v\circ H_\xi|_A$, it follows from \eqref{eq:riesz} and Lemma~\ref{lem:ug} that $v\circ H_\xi|_A\in W^{1,2}(A,X)$ and $\operatorname{tr}(v\circ H_\xi|_A)=v\circ H_\xi|_{\partial A}$ for a.e. $\xi\in B_{\Phi,h}\setminus N_0$. For a.e. $\xi\in B_{\Phi,h}\setminus N_0$, we have that $\bar v\circ H_\xi(z_0,\cdot)=v\circ H_\xi(z_0,\cdot)$ is H\"older continuous for $z_0\in K^0\setminus \partial K$ by \eqref{eq:riesz} and Lemma~\ref{lem:ug}, and $$ \bar v\circ H_\xi(z_0,t)= v\circ H_\xi(z_0,t)\quad \textrm{ a.e. }t\in [0,1] $$ for $z_0\in K^0\cap \partial K$, by Corollary~\ref{cor:aeagree} and a Fubini-type argument. Thus $\bar v\circ H_\xi|_{\partial A}$ is the continuous representative of $v\circ H_\xi|_{\partial A}$ for a.e. $\xi\in B_{\Phi,h}\setminus N_0$. This completes the proof that $\bar v\circ H_\xi|_{\partial A}$ is continuous and the trace of a Sobolev function, for a.e. $\xi\in B_{\Phi,h}\setminus N_0$. It remains to show \eqref{eq:riesz}.
Define $$ f(\xi):=\chi_{B_{\Phi,h}}(\xi)\left(\int_{K^0\setminus\partial K}\rho^2\circ h_\xi{\rm d}{\mathcal H}^0+\int_{K^1\setminus\partial K}\rho^2\circ h_\xi{\rm d}{\mathcal H}^1\right),\quad \xi\in \mathbb{R}^m.$$ Then $f\in L^1(\mathbb{R}^m)$ by \eqref{eq:second-ineq-crucial-lemma} and thus there exists $\xi_0\in B_{\Phi,h}\setminus N_0$ such that the Riesz potential $R_1f(\xi_0):=\int_{\mathbb{R}^m}\frac{f(\xi_0+\xi)}{|\xi|^{m-1}}{\rm d}\xi$ is finite (cf. \cite[Theorem 3.22]{hei01}). Integrating in spherical coordinates we have $\int_{S^{m-1}}\int_0^\infty f(\xi_0+tw){\rm d}t{\rm d}w= R_1f(\xi_0)<\infty.$ Since $$\int_0^1\int_{K^l\setminus \partial K}\rho^2\circ H_\xi{\rm d}{\mathcal H}^l{\rm d}t\le \int_0^\infty f(\xi_0+t(\xi-\xi_0)){\rm d}t= \frac{1}{|\xi-\xi_0|}\int_0^\infty f\left(\xi_0+s\frac{\xi-\xi_0}{|\xi-\xi_0|}\right) {\rm d}s$$ for $l=0,1$ and $\xi\in B_{\Phi,h}\setminus\{\xi_0\}$, \eqref{eq:riesz} follows. \end{proof} We end this section with the following lemma which will be used in the proofs of the theorems in the next section. \begin{lem}\label{lem:conv-restr-1-skeleton} Let $u\in W^{1,2}(M, X)$ and let $(u_n)\subset W^{1,2}(M,X)$ be an energy bounded sequence converging to $u$ in $L^2(M,X)$. Then for almost every $\xi\in B_{\Phi, h}$ there exists a subsequence $(u_{n_j})$ such that the continuous representative of $u_{n_j}\circ h_\xi|_{K^1\setminus \partial K}$ converges uniformly to the continuous representative of $u\circ h_\xi|_{K^1\setminus\partial K}$ as $j\to\infty$. \end{lem} \begin{proof} By passing to a subsequence we may assume that $u_n\to u$ almost everywhere in $M$. For each $n\in\mathbb{N}$, let $v_n\colon M\to X$ be a Newton-Sobolev representative of $u_n$ with upper gradient $\rho_n\in L^2(M)$ satisfying $$ \|\rho_n\|_{L^2(M,g)}^2\le 2E_+^2(u_n,g), $$ cf. Proposition~\ref{prop:Newton-Sobolev-rep}.
By the proof of Proposition~\ref{prop:restriction-1-skeleton} and Corollary~\ref{cor:aeagree}, there exists a negligible set $N_0\subset B_{\Phi, h}$ such that for every $\xi\in B_{\Phi,h}\setminus N_0$ the map $v_n\circ h_\xi|_{K^1\setminus \partial K}$ is the continuous representative of $u_n\circ h_\xi|_{K^1\setminus \partial K}$ for every $n\in\mathbb{N}$ and \begin{equation}\label{eq:vn-tou} v_n\circ h_\xi|_{K^1\setminus \partial K} \rightarrow u\circ h_\xi|_{K^1\setminus \partial K} \end{equation} ${\mathcal H}^1$--a.e. as $n\to\infty$. Set $A:=\overline{K^1\setminus\partial K}$. Fatou's lemma and \eqref{eq:second-ineq-crucial-lemma} imply that \begin{align*} \int_{B_{\Phi,h}}\left(\liminf_{n\to\infty}\int_A\rho_n^2\circ h_\xi{\rm d}{\mathcal H}^1\right){\rm d}\xi&\le \liminf_{n\to\infty}\int_{B_{\Phi,h}}\int_A\rho_n^2\circ h_\xi{\rm d}{\mathcal H}^1{\rm d}\xi\\ &\le C\liminf_{n\to\infty}\int_M\rho_n^2{\rm d}{\mathcal H}^2_g<\infty. \end{align*} Therefore, for almost every $\xi\in B_{\Phi, h}\setminus N_0$, we have $$\liminf_{n\to\infty}\int_A\rho_n^2\circ h_\xi{\rm d}{\mathcal H}^1<\infty.$$ By Lemma~\ref{lem:ug}, the Arzel\`a--Ascoli theorem and \eqref{eq:vn-tou}, for such $\xi$ there exists a subsequence $(v_{n_j}\circ h_\xi|_A)_{j\in\mathbb{N}}$ which is uniformly $\frac 12$--H\"older continuous and converges uniformly to the continuous representative of $u\circ h_\xi|_{K^1\setminus \partial K}$ as $j\to\infty$. \end{proof} \section{The relative $1$--homotopy class of Sobolev maps}\label{sec:1-homot} Throughout this section, let $X$ be a proper geodesic metric space admitting a local quadratic isoperimetric inequality. Let $\Gamma\subset X$ be the disjoint union of $k\geq 1$ rectifiable Jordan curves, and let $M$ be a surface with $k$ boundary components. We fix a Riemannian metric $g$ on $M$. Let $\Phi\colon M\times\mathbb{R}^m\to M$ be an admissible deformation on $M$.
Theorem~\ref{thm:homotopic-1-skeleton} shows that for every $u\in\Lambda(M,\Gamma, X)$ and every triangulation $h\colon K\to M$ of $M$ we have $$[u\circ h_\xi|_{K^1}]_\Gamma = [u\circ h_\zeta|_{K^1}]_\Gamma$$ for almost all $\xi,\zeta\in B_{\Phi, h}$. We denote the common relative homotopy class by $u_{\#, 1}[h]$. The following theorem shows that $u_{\#,1}[h]$ is independent of the choice of deformation $\Phi$ and that inducing the same relative homotopy class is independent of the triangulation $h$. \begin{thm}\label{thm:rel-hom-indep-wiggling} Let $X$, $\Gamma$, $M$, $\Phi$ be as above. Let $u\in \Lambda(M,\Gamma, X)$ and let $h\colon K\to M$ be a triangulation of $M$. The relative homotopy class $u_{\#,1}[h]$ does not depend on the choice of admissible deformation $\Phi$. Moreover, if $v\in\Lambda(M,\Gamma, X)$ is such that $v_{\#,1}[h] = u_{\#,1}[h]$ then we have $v_{\#,1}[\tilde{h}] = u_{\#,1}[\tilde{h}]$ for any triangulation $\tilde{h}\colon \tilde{K}\to M$. \end{thm} We will need the following two lemmas in the proof. \begin{lem}\label{lem:approx-cont-area-energy-bounded} Let $u\in W^{1,2}(M, X)$ have continuous trace. Then for all $\varepsilon, \delta>0$ there exists a continuous map $\hat{u}\colon M\to X$ in $W^{1,2}(M,X)$ with $\hat{u}|_{\partial M} = \operatorname{tr}(u)$, $d_{L^2}(u, \hat{u})<\varepsilon$, and \begin{equation}\label{eq:bound-area-energy-cont} \operatorname{Area}(\hat{u}) \leq \operatorname{Area}(u) + \delta \cdot E_+^2(u,g),\quad E_+^2(\hat{u}, g) \leq \left(1+\delta^{-1}\right) \cdot E_+^2(u,g). \end{equation} \end{lem} \begin{proof} Let $u$ be as in the statement of the lemma and let $\varepsilon, \delta>0$. Fix an admissible deformation $\Phi$ on $M$ and let $\varepsilon'>0$ be sufficiently small, to be determined later. Choose a triangulation $h\colon K\to M$ of $M$ in such a way that for every $\xi\in B_{\Phi, h}$ we have ${\mathcal H}^2_g(h_\xi(\Delta))<\varepsilon'$ for every $2$--cell $\Delta\subset K$. 
It follows from (the proof of) Proposition~\ref{prop:restriction-1-skeleton} that, for almost every $\xi\in B_{\Phi, h}$, the map $u\circ h_\xi|_{K^1}$ is essentially continuous and its restriction to the boundary of each open $2$--cell $\Delta\subset K$ coincides with the trace of the Sobolev map $u\circ h_\xi|_{\Delta}$. Fix such $\xi$ and abbreviate $H:= h_\xi$. It thus follows that if $\Delta$ is an open $2$--cell then the map $u|_{H(\partial \Delta)}$ is essentially continuous and the trace of the Sobolev map $u|_{H(\Delta)}$. By Proposition~\ref{prop:good-cont-filling} there thus exists a continuous map $u_\Delta\colon H(\overline{\Delta})\to X$ which extends the continuous representative of $u|_{H(\partial \Delta)}$, belongs to $W^{1,2}(H(\Delta), X)$ and satisfies $$\operatorname{Area}(u_\Delta) \leq \operatorname{Area}(u|_{H(\Delta)}) + \delta\cdot E_+^2(u|_{H(\Delta)}, g)$$ as well as $$E_+^2(u_\Delta, g) \leq \left(1+\delta^{-1}\right)\cdot E_+^2(u|_{H(\Delta)}, g).$$ It follows from the Sobolev-Poincar\'e inequality (see \cite[Section 2]{Heb99} for closed manifolds), from \cite[Corollary 1.6.3]{KS93} and H\"older's inequality that \begin{equation*} \begin{split} \int_{H(\Delta)} d^2(u_\Delta(z), u(z))\,d{\mathcal H}^2_g(z)&\leq C\cdot {\mathcal H}_g^2(H(\Delta))\cdot \left[E_+^2(u_\Delta, g) + E_+^2(u|_{H(\Delta)}, g)\right]\\ &\leq C\varepsilon' \left(2+\delta^{-1}\right) \cdot E_+^2(u|_{H(\Delta)}, g) \end{split} \end{equation*} for some constant $C$ depending on $(M,g)$. Finally, let $\hat{u}\colon M\to X$ be the continuous map obtained by gluing the maps $u_\Delta$ along their boundaries.
Then $\hat u\in W^{1,2}(M, X)$ by \cite[Theorem 1.12.3]{KS93} and, taking the sum over all $\Delta$ in the three inequalities above, we obtain the inequalities in \eqref{eq:bound-area-energy-cont} as well as $$\int_{M} d^2(\hat{u}(z), u(z))\,d{\mathcal H}^2_g(z)\leq C\varepsilon' \left(2+\delta^{-1}\right) \cdot E_+^2(u,g).$$ Upon choosing $\varepsilon'>0$ sufficiently small, this yields $d_{L^2}(\hat{u}, u)<\varepsilon$. \end{proof} \begin{lem}\label{lem:close-1-skeleton-implies-homotopic-rel} Let $X$, $\Gamma$, $M$ be as above. Then there exists $\delta>0$ with the following property. Let $h\colon K\to M$ be a triangulation and let $\varrho,\varrho'\colon K^1\to X$ be continuous such that $\varrho|_{\partial K},\varrho'|_{\partial K}\in [\Gamma]$ are homotopic via a family of maps in $[\Gamma]$. If $$\sup_{z\in K^1\setminus \partial K} d(\varrho(z), \varrho'(z))< \delta$$ and if for every component $C$ of $\partial K$ for which the Jordan curve $\varrho(C)$ is not contractible in $X$ we have \begin{equation}\label{eq:bound-on-boundary} \sup_{z\in C} d(\varrho(z), \varrho'(z))< \delta \end{equation} then $\varrho$ and $\varrho'$ are homotopic relative to $\Gamma$. \end{lem} The condition \eqref{eq:bound-on-boundary} cannot be omitted, as easy examples show. The lemma will also be used in the proof of Theorem~\ref{thm:stability-1-homotopic}, where it will be essential that we do not impose any condition akin to \eqref{eq:bound-on-boundary} for the components $C$ of $\partial K$ which are mapped to contractible Jordan curves. \begin{proof} Since $X$ is proper, geodesic and admits a local quadratic isoperimetric inequality it follows from \cite[Theorem 5.2]{LWY20}, \cite[Proposition 2.2]{LWY20}, and from the proof of \cite[Proposition 6.2]{LWY20} that there exists $r_0>0$ such that every closed curve in $X$ of diameter at most $4r_0$ is contractible. Recall that $\Gamma=\Gamma_1\cup\dots\cup \Gamma_k$ is the disjoint union of rectifiable Jordan curves.
We may assume that $3r_0\leq\operatorname{diam}(\Gamma_i)$ for every $i$. There exists $0<\delta<r_0/3$ such that whenever $x,y\in \Gamma$ satisfy $d(x,y)\leq 9\delta$ then they belong to the same Jordan curve $\Gamma_i$ and one of the two segments of $\Gamma_i$ joining $x$ and $y$ has diameter at most $r_0$. Let $\varrho, \varrho'\colon K^1\to X$ be as in the statement of the lemma with this specific choice of $\delta$. After possibly adding vertices to $K^1\setminus \partial K$ we may further assume that the image under $\varrho$ and $\varrho'$ of any edge in the closure of $K^1\setminus \partial K$ has diameter at most $\delta$. We now construct a homotopy $H\colon K^1\times[0,1]\to X$ relative to $\Gamma$ between $\varrho$ and $\varrho'$. Let $H(\cdot, 0) = \varrho$ and $H(\cdot,1)=\varrho'$. For each $z_0\in K^0\setminus \partial K$ let $H(z_0,\cdot)$ be a (constant speed) geodesic from $\varrho(z_0)$ to $\varrho'(z_0)$. For each component $C$ of $\partial K$ and each $z\in K^0\cap C$, let $H(z,\cdot)$ be a weakly monotone parametrization of one of the segments in $\varrho(C)$ joining $\varrho(z)$ to $\varrho'(z)$ in such a way that, for every edge $e\subset C$, the map $H|_{\partial (e\times [0,1])}$ is contractible in $\varrho(C)$. By \eqref{eq:bound-on-boundary}, in the case that $\varrho(C)$ is not contractible, we may choose $H(z,\cdot)$ to have diameter at most $r_0$ for every $z\in C\cap K^0$. For every edge $e\subset \partial K$ the map $H|_{\partial (e\times[0,1])}$ admits a continuous extension with image in $\Gamma$ such that for each $t\in[0,1]$ the map $H(\cdot, t)$ is weakly monotone. Moreover, for every edge $e\subset K^1$ not intersecting $\partial K$ the curve $H|_{\partial(e\times[0,1])}$ has diameter at most $4\delta$ and thus admits a continuous extension to $e\times [0,1]$. Finally, let $e\subset K^1$ be an edge which intersects (but is not contained in) some component $C$ of $\partial K$.
Notice that the image of $H|_{\partial (e\times[0,1])}$ is contained in the $3\delta$--neighbourhood of $\varrho(C)$. Thus, if $\varrho(C)$ is contractible then $H|_{\partial (e\times[0,1])}$ admits a continuous extension to $e\times[0,1]$. If $\varrho(C)$ is not contractible then, by construction, the image of $H|_{\partial (e\times[0,1])}$ has diameter at most $4r_0$ and hence admits again a continuous extension to $e\times[0,1]$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:rel-hom-indep-wiggling}] Let $u\in \Lambda(M,\Gamma, X)$ and let $h\colon K\to M$ be a triangulation. We wish to show that the relative homotopy class, which we denote by $u_{\#, 1}[h,\Phi]$ for the moment, is independent of the choice of admissible deformation $\Phi$. Let $(u_n)$ be a sequence of continuous maps $u_n\colon M\to X$ converging in $L^2(M,X)$ to $u$, with $u_n|_{\partial M} = \operatorname{tr}(u)$ and $u_n\in W^{1,2}(M,X)$ for every $n$, and such that the energy of $u_n$ is bounded independently of $n$. Such a sequence exists by Lemma~\ref{lem:approx-cont-area-energy-bounded} and we call it a good approximating sequence for $u$. We first claim that there exists a subsequence $(n_j)$ such that $u_{\#, 1}[h,\Phi] = [u_{n_j}\circ h|_{K^1}]_\Gamma$ for all $j\geq 1$. Indeed, by Proposition~\ref{prop:restriction-1-skeleton} and Theorem~\ref{thm:homotopic-1-skeleton} there exists a negligible subset $N\subset B_{\Phi, h}$ such that for $\xi,\xi'\in B_{\Phi, h}\setminus N$ the maps $u\circ h_\xi|_{K^1}$ and $u\circ h_{\xi'}|_{K^1}$ are essentially continuous and their continuous representatives are homotopic relative to $\Gamma$. Since $u_n|_{\partial M} = \operatorname{tr}(u)$ it follows with Lemma~\ref{lem:conv-restr-1-skeleton} that for almost every $\xi_0\in B_{\Phi, h}\setminus N$ there is a subsequence $(n_j)$ such that the maps $u_{n_j}\circ h_{\xi_0}|_{K^1}$ converge uniformly to the continuous representative of $u\circ h_{\xi_0}|_{K^1}$ as $j\to \infty$. 
Fix such $\xi_0$ and such a subsequence $(n_j)$. Lemma~\ref{lem:close-1-skeleton-implies-homotopic-rel} thus implies that there exists $j_0$ such that $u_{n_j}\circ h_{\xi_0}|_{K^1}$ is homotopic relative to $\Gamma$ to the continuous representative of $u\circ h_{\xi_0}|_{K^1}$ for every $j\geq j_0$. Since $u_{n_j}$ is continuous the maps $u_{n_j}\circ h_{\xi_0}|_{K^1}$ and $u_{n_j}\circ h|_{K^1}$ are homotopic relative to $\Gamma$. It thus follows that for all $j\geq j_0$ the continuous representative of $u\circ h_\xi|_{K^1}$ is homotopic relative to $\Gamma$ to $u_{n_j}\circ h|_{K^1}$ for every $\xi\in B_{\Phi, h}\setminus N$. Upon reindexing the subsequence we may assume that $j_0=1$. This proves the claim. It easily follows from the claim that $u_{\#,1}[h,\Phi]$ is independent of $\Phi$. Indeed, let $\tilde{\Phi}$ be another admissible deformation on $M$. On the one hand, the claim shows that there exists a subsequence $(n_j)$ such that $$u_{\#, 1}[h,\Phi] = [u_{n_j}\circ h|_{K^1}]_\Gamma$$ for all $j\geq 1$. Applying the claim again with $\Phi$ replaced by $\tilde{\Phi}$ and with $(u_n)$ replaced by $(u_{n_j})$ we see that there is a further subsequence $(n_{j_l})$ such that $$u_{\#, 1}[h,\tilde{\Phi}] = [u_{n_{j_l}}\circ h|_{K^1}]_\Gamma$$ for all $l\geq 1$. From this it follows that $u_{\#,1}[h,\Phi] = u_{\#,1}[h,\tilde{\Phi}]$, which proves the first statement of the theorem. The second statement of the theorem also follows from the claim. Indeed, let $v\in\Lambda(M,\Gamma, X)$ be such that $v_{\#, 1}[h] = u_{\#,1}[h]$ and let $(v_n)$ be a good approximating sequence for $v$. The claim shows that we can find a subsequence $(n_j)$ such that $$[u_{n_j}\circ h|_{K^1}]_\Gamma = u_{\#,1}[h] = v_{\#,1}[h] = [v_{n_j}\circ h|_{K^1}]_\Gamma$$ for all $j\geq 1$. Let $\tilde{h}\colon \tilde{K}\to M$ be another triangulation.
Since $u_{n_j}$ and $v_{n_j}$ are continuous it is easy to see that $$[u_{n_j}\circ\tilde{h}|_{\tilde{K}^1}]_\Gamma = [v_{n_j}\circ \tilde{h}|_{\tilde{K}^1}]_\Gamma$$ for all $j\geq 1$, compare with \cite[Lemma 2.1]{HL03}. The claim now implies that $u_{\#,1}[\tilde{h}] = v_{\#,1}[\tilde{h}]$, which proves the second statement of the theorem. \end{proof} \begin{prop} Let $\varphi\colon M\to X$ be a continuous map such that $\varphi|_{\partial M}\in[\Gamma]$ and let $u\in\Lambda(M, \Gamma, X)$. Then $$u_{\#,1}[h] = [\varphi\circ h|_{K^1}]_\Gamma$$ holds for one triangulation $h\colon K \to M$ if and only if it holds for every triangulation. \end{prop} \begin{proof} Let $h\colon K\to M$ be a triangulation of $M$ such that $$u_{\#,1}[h] = [\varphi\circ h|_{K^1}]_\Gamma$$ and let $(u_n)$ be a good approximating sequence for $u$ as in the first paragraph of the proof of Theorem~\ref{thm:rel-hom-indep-wiggling}. By the claim in the second paragraph of that proof, there exists a subsequence $(n_j)$ such that $u_{\#,1}[h] = [u_{n_j}\circ h|_{K^1}]_\Gamma$ for all $j\geq 1$ and hence $$[u_{n_j}\circ h|_{K^1}]_\Gamma = [\varphi\circ h|_{K^1}]_\Gamma$$ for all $j\geq 1$. Let $\tilde{h}\colon\tilde{K}\to M$ be another triangulation of $M$. Since $u_{n_j}$ and $\varphi$ are continuous we have $$[u_{n_j}\circ \tilde{h}|_{\tilde{K}^1}]_\Gamma = [\varphi\circ\tilde{h}|_{\tilde{K}^1}]_\Gamma$$ for all $j\geq 1$, compare with \cite[Lemma 2.1]{HL03}. After possibly passing to a further subsequence we have $u_{\#,1}[\tilde{h}] = [u_{n_j}\circ \tilde{h}|_{\tilde{K}^1}]_\Gamma$ for all $j\geq 1$ and hence $$u_{\#,1}[\tilde{h}]=[\varphi\circ\tilde{h}|_{\tilde{K}^1}]_\Gamma.$$ This concludes the proof. \end{proof} \begin{defn} Two maps $u,v\in\Lambda(M,\Gamma, X)$ are said to be $1$--homotopic relative to $\Gamma$, denoted $u\sim_1 v$ rel $\Gamma$, if for some and thus every triangulation $h$ of $M$ we have $u_{\#,1}[h] = v_{\#,1}[h]$.
If $u\in\Lambda(M,\Gamma, X)$ and $\varphi\colon M\to X$ is continuous with $\varphi|_{\partial M}\in[\Gamma]$ then $u$ and $\varphi$ are said to be $1$--homotopic relative to $\Gamma$, denoted $u\sim_1\varphi$ rel $\Gamma$, if for some and thus every triangulation $h\colon K\to M$ we have $u_{\#,1}[h] = [\varphi\circ h|_{K^1}]_\Gamma$. \end{defn} If $u,v\in \Lambda(M,\Gamma,X)$, $u\sim_1 v$ rel $\Gamma$ and $\psi\colon M\to M$ is a diffeomorphism then $u\circ \psi\sim_1 v\circ\psi$ rel $\Gamma$, see the remark after Definition~\ref{def:admdef}. \begin{thm}\label{thm:stability-1-homotopic} Let $X$, $\Gamma$, $M$ be as above and let $g$ be a Riemannian metric on $M$. Then for every $L>0$ there exists $\varepsilon>0$ such that if $u,v\in\Lambda(M,\Gamma, X)$ induce the same orientation on $\Gamma$ and satisfy $$\max\left\{E_+^2(u,g), E_+^2(v,g)\right\}\leq L\quad\textrm{and}\quad d_{L^2}(u,v)\leq \varepsilon,$$ then $u$ and $v$ are $1$--homotopic relative to $\Gamma$. \end{thm} Notice that the theorem does not imply the stability of $1$--homotopy classes relative to $\Gamma$, since the $L^2$--limit of a sequence in $\Lambda(M,\Gamma,X)$ with uniformly bounded energy need not belong to $\Lambda(M,\Gamma,X)$. An analog of Theorem~\ref{thm:stability-1-homotopic} holds for closed surfaces (where $\Gamma=\varnothing$ and $\Lambda(M,\Gamma,X)=W^{1,2}(M,X)$) and in this case implies the stability of $1$--homotopy classes in the presence of a local quadratic isoperimetric inequality. Example~\ref{ex:stabilityfail} below shows that the local quadratic isoperimetric inequality is crucial for this. \begin{proof} We argue by contradiction and assume the statement is not true. Then there exist energy bounded sequences $(u_n),(v_n)\subset\Lambda(M,\Gamma, X)$ such that, for every $n\in\mathbb{N}$, we have $d_{L^2}(u_n,v_n)\leq \frac{1}{n}$ and $u_n$ and $v_n$ induce the same orientation on $\Gamma$, yet $u_n$ is not $1$--homotopic to $v_n$ relative to $\Gamma$.
After possibly passing to a subsequence, we may assume by the Rellich-Kondrachov compactness theorem \cite[Theorem 1.13]{KS93} and by \cite[Lemma 2.4]{FW-Plateau-Douglas} that there exists $u\in W^{1,2}(M, X)$ such that the sequences $(u_n)$ and $(v_n)$ both converge to $u$ in $L^2(M, X)$. Fix an admissible deformation $\Phi$ on $M$ and a triangulation $h\colon K\to M$. By Proposition~\ref{prop:restriction-1-skeleton} and Theorem~\ref{thm:homotopic-1-skeleton} there exists a negligible set $N\subset B_{\Phi, h}$ such that for all $\xi,\zeta\in B_{\Phi, h}\setminus N$ and all $n\in\mathbb{N}$ we have that $u_n\circ h_\xi|_{K^1}$ and $u_n\circ h_\zeta|_{K^1}$ are essentially continuous and their continuous representatives are homotopic relative to $\Gamma$ and that the same is true when $u_n$ is replaced by $v_n$ and $u$. It moreover follows from Lemma~\ref{lem:conv-restr-1-skeleton} that for almost every $\xi_0\in B_{\Phi, h}\setminus N$ there exists a subsequence $(n_j)$ such that the continuous representatives of $u_{n_j}\circ h_{\xi_0}|_{K^1\setminus \partial K}$ and of $v_{n_j}\circ h_{\xi_0}|_{K^1\setminus \partial K} $ both converge uniformly to the continuous representative of $u\circ h_{\xi_0}|_{K^1\setminus \partial K}$. Fix such $\xi_0$ and denote by $\varrho_j$ and $\varrho'_j$ the continuous representatives of $u_{n_j}\circ h_{\xi_0}|_{K^1}$ and of $v_{n_j}\circ h_{\xi_0}|_{K^1}$, respectively. Denote by $C_m$ and $\Gamma_m$, $m=1,\dots, k$, the components of $\partial K$ and $\Gamma$, respectively. Notice that the sequences $(\varrho_j|_{\partial K})_j$ and $(\varrho'_j|_{\partial K})_j$ both converge in $L^2(\partial K, X)$ to $u\circ h_{\xi_0}|_{\partial K}$ by \cite[Theorem 1.12.2]{KS93}. Thus, after possibly relabelling the components, we may assume that $$\varrho_j(C_m) = \Gamma_m = \varrho'_j(C_m)$$ for all sufficiently large $j$ and every $m=1,\dots, k$. 
Since $\varrho_j|_{\partial K}$ and $\varrho'_j|_{\partial K}$ induce the same orientation on $\Gamma$ it follows, in particular, that $\varrho_j|_{\partial K}$ and $\varrho'_j|_{\partial K}$ are homotopic via a family of maps in $[\Gamma]$ for every sufficiently large $j$. Let $m$ be such that $\Gamma_m$ is not contractible. Then it follows from the remark after Proposition~\ref{prop:good-cont-filling} together with the proof of \cite[Proposition 5.1]{FW-Plateau-Douglas} that the families $(\varrho_j|_{C_m})$ and $(\varrho'_j|_{C_m})$ are both equi-continuous. Hence, after possibly passing to a subsequence, we may assume that both sequences converge uniformly to the continuous representative of $u\circ h_{\xi_0}|_{C_m}$. It thus follows that for every sufficiently large $j$ the maps $\varrho_j$ and $\varrho'_j$ satisfy the hypotheses of Lemma~\ref{lem:close-1-skeleton-implies-homotopic-rel}. In particular, it follows that there exists $j_0$ such that $\varrho_j$ and $\varrho_j'$ are homotopic relative to $\Gamma$ for every $j\geq j_0$. Hence, for every $\xi\in B_{\Phi, h}\setminus N$ and every $j\geq j_0$ we have that the continuous representatives of $u_{n_j}\circ h_{\xi}|_{K^1}$ and $v_{n_j}\circ h_{\xi}|_{K^1}$ are homotopic relative to $\Gamma$. This shows that $u_{n_j}$ and $v_{n_j}$ are $1$--homotopic relative to $\Gamma$, which is a contradiction, concluding the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:intro-properties-1-hom-class-Sobolev}] Statements (ii) and (iii) follow from Theorems~\ref{thm:rel-hom-indep-wiggling} and \ref{thm:stability-1-homotopic}. As for statement (i), suppose $u$ has a continuous representative $\bar{u}\colon M\to X$. We have $u\circ h_\xi|_{K^1} = \bar{u}\circ h_\xi|_{K^1}$ ${\mathcal H}^1$--a.e., for almost every $\xi$ by Corollary~\ref{cor:aeagree} and hence $$[u\circ h_\xi|_{K^1}]_\Gamma =[\bar u\circ h_\xi|_{K^1}]_\Gamma= [\bar{u}\circ h|_{K^1}]_\Gamma$$ for almost every $\xi$. This proves statement (i). 
\end{proof} \begin{example}\label{ex:stabilityfail} Consider the surface of revolution $C\subset \mathbb{R}^3$ of the graph of $$f\colon (0,1]\to [1/3,1], \quad f(x)= (2+\sin(1/x))/3.$$ The compact set $C\cup \{0\}\times \overline{\mathbb D}\subset \mathbb{R}^3$ equipped with the subspace metric is not geodesic, but by adding a countable number of suitable line segments parallel to the $x$--axis, connecting points on $C$ to $\{0\}\times \overline{\mathbb D}$, we obtain a compact subset of $\mathbb{R}^3$ bi-Lipschitz equivalent to a geodesic space $Y$. It is not difficult to see that $Y$, and thus $X:=S^1\times Y$, fails to admit a local quadratic isoperimetric inequality. Let $x_n\to 0$ be the sequence of local minima of $f$, and $h_n\colon S^1\to Y$ the constant speed parametrizations corresponding to the circles $(\{x_n\}\times \mathbb{R}^2)\cap C$. The maps $$u_n\colon S^1\times S^1\to X,\quad (z,z')\mapsto (z,h_n(z'))$$ are bi-Lipschitz for each $n$, and converge uniformly to the map $u(z,z')=(z,h(z'))$, where $h\colon S^1\to Y$ is the constant speed parametrization of the circle corresponding to $\{(0,z'/3):\ z'\in S^1\}\subset Y$. However, one can check that the maps $h_n$ are all non-contractible and pairwise $1$--homotopic, while $h$ is contractible. It follows that $u$ cannot lie in the common $1$--homotopy class of the maps $u_n$. \end{example} The example above can be modified so that the maps $u_n$ form an area minimizing sequence in their common $1$--homotopy class. Considering the set $C\cup \{0\}\times\overline{\mathbb D}$ with the metric inherited from $\mathbb{R}^3$ in the example above, we obtain a \emph{non-geodesic} space with a local quadratic isoperimetric inequality where the stability of $1$--homotopy classes of maps from closed surfaces fails.
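To make the example concrete, the local minima of $f$ and the radii of the circles parametrized by the $h_n$ can be computed explicitly. This verification is our addition and not part of the original example:

```latex
% Local minima of f(x) = (2+\sin(1/x))/3 occur exactly where \sin(1/x) = -1:
\[
  x_n = \frac{1}{\tfrac{3\pi}{2} + 2\pi n}\,, \qquad
  f(x_n) = \frac{2 + \sin\!\left(\tfrac{3\pi}{2} + 2\pi n\right)}{3}
         = \frac{2-1}{3} = \frac{1}{3}\,.
\]
% Hence every circle (\{x_n\} \times \mathbb{R}^2) \cap C has radius 1/3, equal
% to the radius of the limit circle \{(0, z'/3) : z' \in S^1\}, and x_n \to 0,
% consistent with the uniform convergence u_n \to u asserted in the example.
```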
\section{The homotopic Douglas condition and its consequences}\label{sec:homot-Douglas-cond} Let $X$ be a proper geodesic metric space admitting a local quadratic isoperimetric inequality, and let $\Gamma\subset X$ be the disjoint union of $k\geq 1$ rectifiable Jordan curves. Let $M$ be a connected surface with $k$ boundary components, and let $\varphi\colon M\to X$ be a continuous map such that $\varphi|_{\partial M}\in[\Gamma]$. \begin{prop}\label{prop:induced-homom-Douglas} If the induced homomorphism $\varphi_*\colon \pi_1(M)\to \pi_1(X)$ on fundamental groups is injective then $\varphi$ satisfies the homotopic Douglas condition \eqref{eq:Douglas-condition}. \end{prop} \begin{proof} We first claim that $a(M,\varphi, X)<\infty$. Let $l_0>0$ be as in the definition of the local quadratic isoperimetric inequality. Since $\Gamma$ is a finite union of rectifiable Jordan curves there exists $0<r_0<l_0/3$ such that every subcurve of $\Gamma$ of diameter at most $r_0$ has length at most $l_0/3$. Moreover, we may choose $r_0$ small enough that every closed loop in $X$ of diameter at most $2r_0$ is contractible, cf. the proof of Lemma~\ref{lem:close-1-skeleton-implies-homotopic-rel}. Now, fix a triangulation of $M$ all of whose $2$--cells are triangles. We identify the $1$--skeleton of the triangulation with a subset of $M$ and denote it by $M^1$. Choosing the triangulation sufficiently fine we may assume that for each $1$--cell $e\subset M^1$ we have $\operatorname{diam}(\varphi(e))< r_0$. Let $u\colon M^1\to X$ be the continuous map which agrees with $\varphi$ on the $0$--skeleton $M^0$ and such that for each $1$--cell $e\subset M^1$ the following holds: if $e$ is contained in $\partial M$ then $u|_e$ is the constant speed parametrization of $\varphi(e)$; if $e$ is not contained in $\partial M$ then $u|_e$ is a geodesic.
It follows that for every $2$--cell $\Delta\subset M$ the curve $u|_{\partial \Delta}$ is Lipschitz and, since each of its three edges has length at most $l_0/3$, has length at most $l_0$; it thus has a continuous Sobolev extension to $\Delta$ (which we denote $u|_{\Delta}$) by the local quadratic isoperimetric inequality and Lemma~\ref{lem:approx-cont-area-energy-bounded}. Also note that $u|_e$ is end-point homotopic to $\varphi|_e$ by the choice of $r_0$. The continuous map $\bar u\colon M\to X$ obtained by gluing all the $u|_{\Delta}$ together is a Sobolev map and satisfies $\bar u|_{M^1} \sim \varphi|_{M^1}$ rel $\Gamma$. It thus follows that $\bar u\sim_1\varphi$ relative to $\Gamma$. The map $\bar u$ has finite area and thus we obtain $a(M,\varphi, X)<\infty$, as claimed. Since the induced homomorphism $\varphi_*\colon \pi_1(M)\to \pi_1(X)$ on fundamental groups is injective, it follows that if $\alpha$ is a simple closed non-contractible curve in the interior of $M$ then $\varphi\circ \alpha$ is not contractible. Consequently, there are no primary reductions $(M^*, \varphi^*)$ of $(M,\varphi)$ and hence $a^*(M,\varphi,X)=\infty$ by definition. Since $a(M,\varphi, X)<\infty$ this shows that $\varphi$ satisfies the homotopic Douglas condition. \end{proof} \begin{prop}\label{prop:equi-cont} Let $g$ be a Riemannian metric on $M$. Then for every $\eta>0$ and $L>0$ the family $$\{\operatorname{tr}(u): \text{$u\in\Lambda(M,\Gamma, X)$, $u\sim_1 \varphi$ rel $\Gamma$, $E_+^2(u,g)\leq L$, $\operatorname{Area}(u)\leq a^*(M,\varphi, X) - \eta$}\}$$ is equi-continuous. \end{prop} A corresponding result without fixing relative $1$--homotopy classes is contained in \cite[Proposition 5.1]{FW-Plateau-Douglas}. In order to control the relative $1$--homotopy class of the maps that we construct in the proof, we will use the following technical lemma.
In the next statement, $\alpha$ is a smooth simple closed non-contractible curve in the interior of $M$, and $M^*$ is the smooth surface obtained from $M$ by cutting $M$ along $\alpha$ and gluing smooth discs to the two newly created boundary components. \begin{lem}\label{lem:1-hom-reduction} Let $u\in\Lambda(M,\Gamma, X)$ be $1$--homotopic to $\varphi$ relative to $\Gamma$ and let $A\subset M$ be a biLipschitz cylinder such that $A\cap \partial M$ is connected and one boundary component of $A$ coincides with $\alpha$. Suppose there is $v\in \Lambda(M^*, \Gamma, X)$ inducing the same orientation on $\Gamma$ as $u$ and satisfying $v|_{M\setminus A} = u$. Then $\varphi\circ \alpha$ is contractible and $v$ is $1$--homotopic to $\varphi^*$ relative to $\Gamma$, whenever $\varphi^*\colon M^*\to X$ is continuous and coincides with $\varphi$ on $M\setminus \alpha$. \end{lem} \begin{proof} Let $A'\subset M$ be a biLipschitz cylinder with piecewise smooth boundary components and such that $A'$ contains a small neighborhood of $A$ in $M$. The boundary component $\alpha'$ of $A'$ which is homotopic to $\alpha$ outside $A$ is contained in the interior of $M$. Let $\beta'$ be the other boundary component of $A'$. If $\gamma:= A\cap \partial M$ is not empty then $\beta'$ contains $\gamma$. Let $h\colon K\to M$ be a triangulation of $M$ such that $h(K^1)$ contains $\alpha'$ and $\beta'$. Let $K'$ be the sub-complex of $K$ obtained by removing the interior of cells that get mapped to the interior of $A'$. Let $K^*$ be the complex obtained from $K'$ by adding two cells, each glued along the preimage of $\alpha'$ and $\beta'$, respectively, and extend $h|_{K'}$ to a triangulation $h^*\colon K^*\to M^*$ of $M^*$. Let $C\subset K'^1$ be the preimage of $\beta'\cap\partial M$ under $h$. Let $U\subset M$ be a small neighborhood of $\alpha$ whose closure is contained in the interior of $A'$.
Using vector fields as in the proof of Proposition~\ref{prop:existence-admissible-deformations} it is not difficult to construct admissible deformations $\Phi\colon M\times\mathbb{R}^m\to M$ on $M$ and $\Phi^*\colon M^*\times \mathbb{R}^m\to M^*$ on $M^*$ which agree on $(M\setminus U)\times B(0,\varepsilon)$ for some sufficiently small $\varepsilon>0$. On $K'^1\setminus C$ the maps $h_\xi=\Phi_\xi\circ h$ and $h^*_\xi = \Phi^*_\xi\circ h^*$ agree for sufficiently small $\xi$ and stay outside $A$, so we have $$v\circ h^*_\xi|_{K'^1\setminus C} = u\circ h_\xi|_{K'^1\setminus C}$$ for a.e. small $\xi$. Since $u$ and $v$ induce the same orientation on $\Gamma$ it follows that $v\circ h^*_\xi|_{K'^1}$ is homotopic to $u\circ h_\xi|_{K'^1}$ relative to $\Gamma$ for almost every sufficiently small $\xi$. Now, $u\circ h_\xi|_{K'^1}$ is homotopic to $\varphi\circ h|_{K'^1}$ relative to $\Gamma$ for almost every $\xi$ sufficiently small. Let $\Omega\subset M^*$ be the Lipschitz Jordan domain bounded by $\alpha'$. Since $v\circ\Phi^*_\xi|_{\partial \Omega}$ is the trace of the Sobolev disc $v\circ\Phi^*_\xi|_{\Omega}$ for almost every small $\xi$ it follows from Proposition~\ref{prop:good-cont-filling} that the continuous representative of $v\circ\Phi^*_\xi\circ \alpha'$ is contractible and hence $\varphi\circ\alpha'$ and therefore $\varphi\circ\alpha$ are contractible. Let $\varphi^*\colon M^*\to X$ be a continuous extension of $\varphi|_{M\setminus \alpha}$ to $M^*$. Since $$\varphi^*\circ h^*|_{K'^1} = \varphi\circ h|_{K'^1}$$ and the $1$--skeletons of $K^*$ and $K'$ agree it follows that $v\circ h^*_\xi|_{K^{*1}}$ is homotopic to $\varphi^*\circ h^*|_{K^{*1}}$ relative to $\Gamma$ for almost every $\xi$ sufficiently small. This shows that $v$ is $1$--homotopic to $\varphi^*$ relative to $\Gamma$. \end{proof} The proof of Proposition~\ref{prop:equi-cont} is almost the same as that of \cite[Proposition 5.1]{FW-Plateau-Douglas}, so we only give a rough sketch. 
\begin{proof}[Proof of Proposition~\ref{prop:equi-cont}] Denote by $\mathscr A$ the family of maps $u\in \Lambda(M,\Gamma,X)$ such that $u\sim_1 \varphi$ rel $\Gamma$, $E^2_+(u,g)\le L$ and $\operatorname{Area}(u)\le a^*(M,\varphi,X)-\eta$. Suppose the claim is not true. Then there exists $\varepsilon_0>0$ and, for each $\delta>0$, a map $u\in \mathscr A$ such that the image of some boundary arc with length $\le \delta$ has length $\ge \varepsilon_0$. By considering a conformal chart containing the short boundary arc and using the Courant-Lebesgue lemma \cite[Lemma 7.3]{LW15-Plateau} we see that there exists an arc $\beta\colon I\to M$ connecting two boundary points on either side (and outside) of the short boundary arc, for which $u\circ \beta\in W^{1,2}(I,X)$ agrees with the continuous representative of $\operatorname{tr}(u)$ at the end-points, and $\ell(u\circ\beta)\le \pi[E^2_+(u,g)/\log(1/\delta)]^{1/2}$. Since $\Gamma$ consists of rectifiable Jordan curves, there exists $\delta'>0$ so that any points on $\Gamma$ with distance at most $\delta'$ belong to the same component and the shorter of the arcs joining them has length $<\min\{\varepsilon_0,\eta'\}$, where $0<\eta'<l_0/2$ is such that $C(2\eta')^2<\eta/2$. Here $C$ and $l_0$ are the constants in the local quadratic isoperimetric inequality of $X$. Thus, by choosing $\delta>0$ small enough, it follows that $\ell(u\circ\beta)<\eta'$ and moreover the image $\Gamma^+$ of the longer boundary arc $\gamma^+$ joining the endpoints of $\beta$ has length $<\eta'$. Let $\alpha\subset \operatorname{int} M$ be a smooth Jordan curve bounding an annulus $A\subset M$ together with the curve $\alpha':=\gamma^+\cup\beta$ such that $u\circ\alpha\in W^{1,2}(S^1,X)$. In the surface $M^*$ obtained by cutting $M$ along $\alpha$ and gluing discs to the newly created boundary curves, $\alpha'$ bounds a Lipschitz Jordan domain $\Omega$. 
If $\Gamma_0$ is the concatenation of $u\circ\beta$ and $\Gamma^+=\operatorname{tr}(u)\circ\gamma^+$, then $\ell(\Gamma_0)<2\eta'$ and, by \cite[Lemma 4.8]{LW-intrinsic}, $\Gamma_0$ is the trace of a Sobolev map $w_\Omega\in W^{1,2}(\Omega,X)$ with $\operatorname{Area}(w_\Omega)< C(2\eta')^2<\eta/2$. We define $v$ as $w_{\Omega}$ and $u|_{M\setminus A}$ on the respective sets. To define $v$ on the remaining smooth disc $\Omega'\subset M^*$, map $A$ diffeomorphically to an annulus $A'\subset \Omega'$ identifying $\alpha$ with $\partial\Omega'$, and $\alpha'$ with a Jordan curve (compactly contained in $\Omega'$) that bounds a copy $\Omega''$ of $\Omega$, and set $v|_{\Omega''}=w_\Omega$ and $v|_{A'}=u|_A$ (after the diffeomorphic identifications). The gluing theorem \cite[Theorem 1.12.3]{KS93} implies that $v\in W^{1,2}(M^*,X)$ and by construction $v\in \Lambda(M^*,\Gamma,X)$ with $v$ and $u$ inducing the same orientation on $\Gamma$. Lemma~\ref{lem:1-hom-reduction} implies that $v$ is $1$--homotopic to $\varphi^*$ rel $\Gamma$ for any primary reduction $(M^*,\varphi^*)$ of $(M,\varphi)$. Now the estimate \[ \operatorname{Area}(v)=\operatorname{Area}(u|_{M\setminus A})+2\operatorname{Area}(w_\Omega)+\operatorname{Area}(u|_A)<\operatorname{Area}(u)+\eta \] yields a contradiction with the fact that $u\in\mathscr A$, completing the proof. \end{proof} In the next proposition, we assume that the Euler characteristic $\chi(M)$ of $M$ is strictly negative so that $M$ admits a hyperbolic metric, that is, a Riemannian metric on $M$ of constant curvature $-1$ and such that $\partial M$ is geodesic. \begin{prop}\label{prop:lower-bound-rel-systole} For every $\eta>0$ and $L>0$ there exists $\varepsilon>0$ with the following property. 
If $u\in\Lambda(M,\Gamma, X)$ is $1$--homotopic to $\varphi$ relative to $\Gamma$ and such that $$\operatorname{Area}(u)\leq a^*(M,\varphi, X)-\eta,$$ and if $g$ is a hyperbolic metric on $M$ satisfying $E_+^2(u,g)\leq L$ then the relative systole of $(M,g)$ satisfies $\operatorname{sys}_{\rm rel}(M,g)\geq \varepsilon$. \end{prop} The relative systole $\operatorname{sys}_{\rm rel}(M,g)$ of $(M,g)$ is the minimal length of curves $\beta$ in $M$ of the following form. Either $\beta$ is closed and not contractible in $M$ via a family of closed curves, or the endpoints of $\beta$ lie on the boundary of $M$ and $\beta$ is not contractible via a family of curves with endpoints on $\partial M$. The proof of the proposition is almost the same as that of \cite[Proposition 6.1]{FW-Plateau-Douglas} and we only sketch it. Lemma~\ref{lem:1-hom-reduction} will be used again to control the relative $1$--homotopy type of the primary reductions appearing in the proof. \begin{proof} Let $\beta_0$ be the geodesic realizing the systole $\lambda:=\operatorname{sys}_{\rm rel}(M,g)$. We may use a collar neighbourhood to find a ``parallel'' Jordan curve $\beta\colon I\to M$ for which $u\circ\beta\in W^{1,2}(I,X)$ and $\ell(u\circ\beta)\le 2[\lambda E^2_+(u,g)]^{1/2}$, see \cite[Lemma 6.2]{FW-Plateau-Douglas}. If $\beta$ connects two boundary points, then $I$ is a closed interval and the proof is analogous to that of Proposition~\ref{prop:equi-cont}. Namely, using the notation from the proof of Proposition~\ref{prop:equi-cont} and supposing the relative systole $\lambda$ is small enough, we may assume the boundary points are on the same boundary component and the image $\Gamma^+$ of one boundary arc $\gamma^+$ connecting them has small length, so that the concatenation $\Gamma_0$ of $u\circ\beta$ and $\Gamma^+$ satisfies $\ell(\Gamma_0)<2\eta'$.
We let $\alpha\subset \operatorname{int} M$ be a closed Jordan curve bounding a (closed) annulus $A$ with $\alpha':=\gamma^+\cup\beta$ such that $u\circ\alpha\in W^{1,2}(S^1,X)$. In the surface $M^*$ obtained from $M$ by cutting along $\alpha$, $\alpha'$ bounds a Jordan domain $\Omega$ containing $A$ and we let $w_\Omega\in W^{1,2}(\Omega,X)$ satisfy $\operatorname{tr}(w_\Omega)=\Gamma_0$ and $\operatorname{Area}(w_\Omega)< C(2\eta')^2<\eta/2$. Defining $v\in \Lambda(M^*,\Gamma,X)$ as in the proof of Proposition~\ref{prop:equi-cont}, we reach the same contradiction with the fact that $\operatorname{Area}(u)\le a^*(M,\varphi,X)-\eta$. If $\beta_0$ is a closed geodesic, we construct $M^*$ and $v$ essentially as in the proof of \cite[Proposition 6.1]{FW-Plateau-Douglas} (keeping any components without boundary, and defining $v$ on them analogously). We omit the details. \end{proof} \section{Solution of the homotopic Plateau-Douglas problem}\label{sec:sol} Let $X$ be a proper geodesic metric space admitting a local quadratic isoperimetric inequality and let $\Gamma\subset X$ be the disjoint union of $k\geq 1$ rectifiable Jordan curves. Let $M$ be a connected surface with $k$ boundary components and let $\varphi\colon M\to X$ be a continuous map such that $\varphi|_{\partial M}\in [\Gamma]$. \begin{prop}\label{prop:tech-min-seq-1-hom} Suppose $\chi(M)<0$. Let $(u_n)\subset\Lambda(M, \Gamma, X)$ be a sequence such that each $u_n$ is $1$--homotopic to $\varphi$ relative to $\Gamma$ and $$\sup_n \operatorname{Area}(u_n)<a^*(M,\varphi, X).$$ Let $(g_n)$ be a sequence of hyperbolic metrics on $M$. Then there exist $u\in\Lambda(M,\Gamma, X)$ which is $1$--homotopic to $\varphi$ relative to $\Gamma$ and a hyperbolic metric $g$ on $M$ such that $$\operatorname{Area}(u)\leq \limsup_{n\to\infty} \operatorname{Area}(u_n)\quad\text{ and }\quad E_+^2(u,g)\leq \limsup_{n\to\infty} E_+^2(u_n, g_n).$$ \end{prop} \begin{proof} Let $(u_n)$ and $(g_n)$ be as in the statement of the proposition.
By \cite[Theorem 1.2 and (5.2)]{FW-Morrey} there exist hyperbolic metrics $\tilde{g}_n$ such that $$E_+^2(u_n, \tilde{g}_n) \leq \frac{4}{\pi}\cdot \operatorname{Area}(u_n) +1.$$ After possibly replacing $g_n$ by $\tilde{g}_n$ and passing to a subsequence, we may therefore assume that the energies $E_+^2(u_n, g_n)$ are uniformly bounded and converge to a limit denoted by $m$. By Proposition~\ref{prop:lower-bound-rel-systole}, the relative systoles of $(M, g_n)$ are uniformly bounded away from zero. Therefore, by the Mumford compactness theorem (see \cite[Theorem 3.3]{FW-Plateau-Douglas} and \cite[Theorem 4.4.1]{DHT10} for the fact that the diffeomorphisms may be chosen to be orientation preserving), there exist orientation preserving diffeomorphisms $\psi_n\colon M\to M$ and a hyperbolic metric $h$ on $M$ such that, after possibly passing to a subsequence, the Riemannian metrics $\psi_n^*g_n$ smoothly converge to $h$. For $n\in\mathbb{N}$ define a map by $v_n:= u_n\circ\psi_n$ and notice that $v_n\in\Lambda(M,\Gamma, X)$. Since $\psi_n$, when viewed as a map from $(M, h)$ to $(M, g_n)$, is $\lambda_n$-biLipschitz with $\lambda_n\to 1$ it follows that $$\lim_{n\to\infty} E_+^2(v_n, h) = m.$$ By \cite[Lemma 2.4]{FW-Plateau-Douglas} and the metric space valued Rellich-Kondrachov theorem (see \cite[Theorem 1.13]{KS93}) there exists $v\in W^{1,2}(M,X)$ such that a subsequence $(v_{n_j})$ converges in $L^2(M,X)$ to $v$. The lower semi-continuity of energy implies that $E_+^2(v, h)\leq m$. Since each $u_n$ is $1$--homotopic to $\varphi$ relative to $\Gamma$ and each $\psi_n$ is orientation preserving it follows that all the maps $v_n$ induce the same orientation on $\Gamma$. By Theorem~\ref{thm:stability-1-homotopic} there thus exists $j_0\in\mathbb{N}$ such that $v_{n_j}$ is $1$--homotopic to $v_{n_{j_0}}$ for every $j\geq j_0$. 
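The convergence $E_+^2(v_n,h)\to m$ invoked above rests on a biLipschitz comparison of energies; the following crude estimate (our addition, with a non-optimal exponent) makes the step explicit:

```latex
% If \psi_n : (M,h) \to (M,g_n) is \lambda_n-biLipschitz, then the squared
% gradient and the area element are each distorted by at most \lambda_n^2, so
\[
  \lambda_n^{-4}\, E_+^2(u_n, g_n) \;\le\; E_+^2(v_n, h)
  \;\le\; \lambda_n^{4}\, E_+^2(u_n, g_n),
  \qquad v_n = u_n \circ \psi_n.
\]
% Since \lambda_n \to 1 and E_+^2(u_n, g_n) \to m, squeezing yields
% E_+^2(v_n, h) \to m.
```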
It follows that for $j\geq j_0$ the maps $w_j:= v_{n_j}\circ \psi_{n_{j_0}}^{-1}\in \Lambda(M,\Gamma, X)$ satisfy $$w_j\sim_1 v_{n_{j_0}}\circ \psi_{n_{j_0}}^{-1}=u_{n_{j_0}} \sim_1 \varphi\textrm{ rel }\Gamma.$$ The sequence $(w_j)$ converges in $L^2(M,X)$ to the map $u:= v\circ\psi_{n_{j_0}}^{-1}$ and, setting $g:=(\psi_{n_{j_0}}^{-1})^*h$, we furthermore have $$E_+^2(u,g)\leq \lim_{j\to \infty} E_+^2(w_j, g)= \lim_{j\to\infty} E_+^2(v_{n_j}, h) = m.$$ Finally, Proposition~\ref{prop:equi-cont} implies that the family $\{\operatorname{tr}(w_j): j\in\mathbb{N}\}$ is equi-continuous and hence, after passing to a further subsequence, we may assume that the sequence $(\operatorname{tr}(w_j))$ converges uniformly to some continuous map $\gamma\colon \partial M\to X$. As the uniform limit of weakly monotone parametrizations of $\Gamma$, the map $\gamma$ is also a weakly monotone parametrization of $\Gamma$. Since $(\operatorname{tr}(w_j))$ converges in $L^2(\partial M, X)$ to $\operatorname{tr}(u)$ it follows that $\operatorname{tr}(u)=\gamma$ and hence $u\in \Lambda(M,\Gamma, X)$. Since $u$ and $w_j$ induce the same orientation on $\Gamma$ and since $w_j$ is $1$--homotopic to $\varphi$ relative to $\Gamma$ for every $j$ sufficiently large, it follows from Theorem~\ref{thm:stability-1-homotopic} that $u$ is $1$--homotopic to $\varphi$ relative to $\Gamma$ as well. The lower semi-continuity of area and invariance of area under diffeomorphisms imply that $$\operatorname{Area}(u) \leq \liminf_{j\to\infty}\operatorname{Area}(w_j) \leq \limsup_{n\to\infty} \operatorname{Area}(u_n).$$ This concludes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Plateau-Douglas-homot-intro}] Let $X$, $M$, $\Gamma$ be as in the statement of the theorem and let $\varphi\colon M\to X$ be a continuous map with $\varphi|_{\partial M}\in[\Gamma]$ satisfying the Douglas condition \eqref{eq:Douglas-condition}. We start by proving (i) in the case $\chi(M)<0$.
The family $$\Lambda_{\rm min}:= \{u\in\Lambda(M,\Gamma, X): \text{$u \sim_1 \varphi$ relative to $\Gamma$ and $\operatorname{Area}(u) = a(M,\varphi, X)$}\}$$ is not empty. Indeed, this follows from Proposition~\ref{prop:tech-min-seq-1-hom}, applied to a sequence $(u_n)\subset\Lambda(M,\Gamma, X)$ and an arbitrary sequence of hyperbolic metrics such that $u_n$ is $1$--homotopic to $\varphi$ relative to $\Gamma$ for every $n$ and $$\operatorname{Area}(u_n) \to a(M, \varphi, X)$$ as $n$ tends to infinity. Next, set $$m:= \inf\{E_+^2(u, g): \text{$u\in \Lambda_{\rm min}$, $g$ hyperbolic metric}\}$$ and choose sequences $(u_n)$ and $(g_n)$, where $u_n\in \Lambda_{\rm min}$ and where the $g_n$ are hyperbolic metrics on $M$, such that $$\lim_{n\to\infty} E_+^2(u_n, g_n)= m.$$ Applying Proposition~\ref{prop:tech-min-seq-1-hom} to these sequences we obtain a map $u\in \Lambda_{\rm min}$ and a hyperbolic metric $g$ on $M$ such that $E_+^2(u,g) = m$. It now follows from \cite[Corollary 1.3]{FW-Morrey} that $u$ is infinitesimally isotropic with respect to $g$. We are left with the case $\chi(M)\ge 0$. If $k=1$ and ${\rm genus}(M)=0$, the result follows from \cite[Theorems 1.2 and 1.4]{FW-Plateau-Douglas} since in this case any two maps inducing the same orientation on $\Gamma$ are $1$--homotopic. In the remaining case $k=2$ and ${\rm genus}(M)=0$, one uses the Mumford compactness theorem for flat metrics normalized to have volume 1 (see \cite[Theorem 4.4.1]{DHT10} for the case of closed surfaces) and a flat collar lemma to prove an analog of Proposition~\ref{prop:lower-bound-rel-systole}. Replacing Proposition~\ref{prop:lower-bound-rel-systole} by this analog, the proof of Proposition~\ref{prop:tech-min-seq-1-hom} remains valid, and the argument above then works verbatim. See the proof of \cite[Theorem 1.2]{FW-Plateau-Douglas} for more discussion. This concludes the proof of statement (i). To show (ii) let $u$ and $g$ be as in statement (i).
Then $u$ is a local area minimizer and it follows from the proof of \cite[Theorem 1.4]{FW-Plateau-Douglas} that $u$ has a representative $\bar{u}$ which is locally H\"older continuous in the interior of $M$ and continuously extends to the boundary $\partial M$, thus proving statement (ii). Statement (iii) is a direct consequence of the following lemma. \end{proof} \begin{lem}\label{lem:pi2} Let $X$ be a metric space, let $\Gamma\subset X$ be the disjoint union of $k\geq 1$ Jordan curves, and let $M$ be a smooth compact surface with $k$ boundary components. If $X$ has trivial second homotopy group then two continuous maps $\varphi, \psi\colon M\to X$ with $\varphi|_{\partial M}, \psi|_{\partial M}\in [\Gamma]$ are $1$--homotopic relative to $\Gamma$ if and only if they are homotopic relative to $\Gamma$. \end{lem} We provide the easy proof for completeness; compare with \cite[Lemma 2.1]{Lem82}. \begin{proof} Let $X$, $M$, $\Gamma$ be as in the statement of the lemma and let $\varphi, \psi\colon M\to X$ be continuous maps such that $\varphi|_{\partial M}, \psi|_{\partial M}\in [\Gamma]$. It is clear that if $\varphi$ and $\psi$ are homotopic relative to $\Gamma$ then they are, in particular, $1$--homotopic relative to $\Gamma$. In order to prove the opposite direction, suppose $\varphi$ and $\psi$ are $1$--homotopic relative to $\Gamma$ and let $F\colon K^1\times [0,1]\to X$ be a homotopy from $\varphi$ to $\psi$ such that $F(\cdot, t)\in[\Gamma]$ for all $t$. Let $G$ be the continuous map which coincides with $F$ on $K^1\times [0,1]$ and with $\varphi$ and $\psi$ on $K\times\{0\}$ and $K\times\{1\}$, respectively. For every $2$--cell $\Delta\subset K$ the restriction of $G$ to $\partial(\Delta\times[0,1])$ extends to a continuous map on $\Delta\times[0,1]$ since $X$ has trivial second homotopy group. The map $\bar{G}\colon K\times[0,1]\to X$ obtained in this way is a homotopy relative to $\Gamma$ between $\varphi$ and $\psi$.
\end{proof} Observe that being $1$--homotopic is a more restrictive condition than inducing the same action on fundamental groups. \begin{example}\label{ex:1-hom-stronger-action-fundgrp} Let $X=S^1\times S^1$ be the standard torus, $\Gamma=\{1\}\times S^1\cup\{e^{i\pi}\}\times S^1\subset X$, and $M=[0,1]\times S^1$. The maps $\varphi_\pm\in \Lambda(M,\Gamma,X)$ given by $\varphi_\pm(t,z)=(e^{\pm i\pi t},z)$ induce the same action $\pi_1(M)\to \pi_1(X)$ and agree on $\partial M$, but are not $1$--homotopic relative to $\Gamma$. Note that $\varphi_\pm$ are both conformal area minimizers in $\Lambda(M,\Gamma,X)$. \begin{comment} Let $X = (S^1\times[0,1]\times\{0,1\})/_\sim$ with the identification $(z,i, 0)\sim(z,i,1)$ for $i=0,1$. Equip $X$ with the natural length metric coming from the length metric on $S^1\times[0,1]$. Then $X$ admits a local quadratic isoperimetric inequality and the subset $\Gamma:= (S^1\times\{0,1\}\times\{0,1\})/_\sim$ is the union of two rectifiable Jordan curves. Set $M:=S^1\times[0,1]$ and define continuous maps $u_0,u_1\colon M \to X$ by $u_i(z,t):= [(z,t,i)]$ for $i=0,1$. Then $u_0$ and $u_1$ belong to $\Lambda(M,\Gamma, X)$ and they induce the same homomorphism on fundamental groups but they are not $1$--homotopic relative to $\Gamma$. \end{comment} \end{example} We finish the paper by discussing an analog of Theorem~\ref{thm:Plateau-Douglas-homot-intro} for closed surfaces, that is, $k=0$. In this case $\Gamma=\varnothing$ and consequently $\operatorname{tr}(u)\in [\Gamma]$ is a vacuous condition; in particular $\Lambda(M,\Gamma,X)=W^{1,2}(M,X)$. We say that two maps are $1$--homotopic if they are $1$--homotopic relative to $\Gamma=\varnothing$. We assume throughout this discussion that {\bf $X$ is compact}, so that the Rellich-Kondrachov compactness theorem is applicable for any energy bounded sequence in $W^{1,2}(M,X)$. 
(The assumption $\operatorname{tr}(u)\in [\Gamma]$ prevents a sequence from escaping to infinity when $\Gamma\ne \varnothing$, and we prevent the same here by assuming compactness.) Thus the results in Section~\ref{sec:1-homot} about $1$--homotopy remain valid with these interpretations. Note that, with the convention $\operatorname{sys}_{\rm rel}(M)=\operatorname{sys}(M)$, Proposition~\ref{prop:lower-bound-rel-systole} (and thus Proposition~\ref{prop:tech-min-seq-1-hom}) also remains valid with the same proof. The following theorem extends \cite[Theorem 4.4]{SU82} and \cite[Theorem 3.1]{SY79} to non-smooth target spaces. \begin{thm}\label{thm:area-min-hom-class-without-bdry} Suppose $M$ is a closed surface, and $X$ a compact geodesic metric space admitting a local quadratic isoperimetric inequality. If a continuous map $\varphi\colon M\to X$ satisfies the homotopic Douglas condition, then there exist $u\in W^{1,2}(M,X)$ and a Riemannian metric $g$ on $M$ such that $u$ is $1$--homotopic to $\varphi$, $u$ is infinitesimally isotropic with respect to $g$, and $$\operatorname{Area}(u) = a(M, \varphi, X).$$ Furthermore, any such $u$ has a representative $\bar{u}$ which is H\"older continuous in $M$. If $X$ has trivial second homotopy group then $\bar{u}$ is homotopic to $\varphi$. \end{thm} \begin{proof} The proof of Theorem~\ref{thm:Plateau-Douglas-homot-intro} (as well as that of Lemma~\ref{lem:pi2}) remains valid under the hypotheses of the claim (see the discussion above), except for the existence of $u$ and $g$ in the case $\chi(M)\ge 0$, i.e.~$M=S^2$ or $M=S^1\times S^1$. In the first case we may choose $u\equiv {\rm constant}$ and $g$ the standard metric on $S^2$, since $\varphi$ is $1$--homotopic to a constant map.
In the second case $M=S^1\times S^1$ we use Mumford's compactness theorem for flat metrics with volume normalized to 1 to obtain analogs of Propositions~\ref{prop:lower-bound-rel-systole} and \ref{prop:tech-min-seq-1-hom} and proceed as in the proof of Theorem~\ref{thm:Plateau-Douglas-homot-intro}. \end{proof}
\section{Introduction} The presence of dusty disks around main-sequence stars serves as a marker for the existence of planetesimals. Without collisions among planetesimals, or their evaporation, the dust would not be replenished, and it would have disappeared from the system long ago. Thus, debris disks indicate that the planet-formation process is occurring, or has occurred. In particular, the study of disks around low-mass stars illuminates the planet-formation process in the relatively low-radiation environments analogous to the solar system. Resolved images of these systems help to constrain their physical and geometrical properties. Scattered-light images sample the whole disk regardless of its temperature and this, coupled with the fact that optical and near-infrared detectors have higher angular resolution than far-infrared and submillimeter ones, allows for a rich understanding of the systems. Here we present coronagraphic scattered-light images of the disk around the G2 V star HD~107146. This is the first debris disk resolved in scattered light around a solar-like star. The observations presented here address the issue of the diversity of planet formation histories within stars of the same spectral type, like HD~107146 and our sun. To date, only one other debris disk around a non-A star (the M0 star AU Mic, see \citealp{kri04,liu04,kal04}) has been resolved in scattered light. The disk around HD~107146 was marginally resolved by \citet{wil04} at 450 $\mu$m. Submillimeter and mid-infrared measurements indicate that its fractional excess luminosity is $f_d=L_d/L_*=1.2 \times 10^{-3}$, due to a dust mass of 0.1--0.4 M$_\earth$, comparable to that of the $\beta$ Pictoris disk. An inner hole ($>$31 AU) is suggested by the lack of IRAS 25 $\mu$m detection. No gas measurements have been reported. The Hipparcos distance of the host star is 28.5 pc, its luminosity is 1.1 L$_\sun$ and age estimates range from 30 to 250 Myr \citep{wil04}.
The star is a candidate periodic V-band photometric variable ($P=7$d, \citealp{koe02}). \section{Observations \label{observations}} HD~107146 and the PSF reference star HD~120066 were observed with the ACS/HRC on UT~2004 June 5 and UT~2004 July 20. The observations were conducted as part of the guaranteed observing time awarded to the ACS Investigation Definition Team (Prop. ID 9987 and 10330). The images were taken with the F606W (broad $V$) and F814W (broad $I$) filters. For each band and each target, a short direct exposure was followed by a coronagraphic exposure using the 1.8'' mask. The coronagraphic exposures for the target were 2330 sec long in F606W, and 2150 sec long in F814W. For HD~120066 they were 1990 sec in F606W, and 2250 sec in F814W. Here we present a summary of the reduction procedure: a more detailed description, applied to different targets, can be found in \citet{kri04} and \citet{cla03}. The results of the measurements are in Table \ref{tab_res}. To measure the magnitudes of the target and the PSF reference star, we integrated the direct image flux within a circular aperture $>$6 arcsecs in radius, which includes the saturated stellar core\footnote{Instrument Science Reports, R.L. Gilliland, 05 Jan 2004, www.stsci.edu/hst/acs/documents/isrs}. The transformation between counts/sec and magnitudes was obtained using the STSDAS synthetic photometry package SYNPHOT, which simulates most HST observing configurations. The coronagraphic image of HD~120066 was normalized to, aligned with, and subtracted from the image of HD~107146. Alignment of the images is accurate to within $\pm0.2$ pixels ($\pm0.005$''). The resulting subtracted image was smoothed using a 3$\times$3 median filter and corrected for geometric distortion. Figure \ref{disk} shows the result. To increase the signal-to-noise ratio of the displayed images, we have performed an additional 5$\times$5 median smoothing and re-binned by a factor of two. 
Subtraction errors, caused by mismatches in the colors of the stars or PSF time-variability, dominate the emission within $\sim2''$ from the star, and contribute light at large distances. With only one reference star, we cannot quantify very precisely the magnitudes of these two error sources. The ACS Instrument Handbook\footnote{ACS Instrument Handbook, www.stsci.edu/hst/acs/documents/handbooks/cycle13/cover.html} indicates that a mismatch of three spectral classes would produce subtraction residuals for a star of this brightness of the order of 0.2 $\mu$Jy/arcsec$^{2}$ (or 25.73 mag/arcsec$^{2}$ in the V-band), 5'' away from the target. The actual errors in these observations are likely to be smaller, because the V-I colors of the two stars are the same within errors: for HD~120066 we measure V=$6.32\pm0.05$, V-I=$0.68\pm0.07$. The instrument handbook also indicates that typical time-dependent PSF variations within an orbit (due to variations in focus) will result in errors of the same order. However, Figure \ref{rad_prof} suggests that this may underestimate the error in our observations, as subtraction residuals at 6.5'' occur at the level of $\sim$1 $\mu$Jy/arcsec$^{2}$. To quantify the systematic error in the normalization constant between the target and the PSF reference star, we compare the value of this constant obtained by using four different methods: taking the ratio of the stellar flux in each band, taking the ratio of the flux of each star away from the saturated columns, taking the ratio of the number of saturated pixels per unit time in the direct images for each band, and adjusting the value of the constant to produce the cleanest visual PSF subtraction. For each filter, the four methods yield the same normalization constant within 1\%, which corresponds to photometric errors of 0.1 and 0.2 mag arcsec$^{-2}$ in the brightest and faintest regions of the disk, respectively.
Given that this is an estimate of a systematic error, in what follows we propagate it linearly to estimate uncertainties in calculated quantities. For the quantities calculated below, this error dominates over all others, including the $\sim$5\% random photometric error. \section{Results \label{results}} The circumstellar disk is clearly seen in the subtracted images. Within the limitations of the observations, it appears elongated along the SE-NW direction, and featureless, except for the fact that the SW side is brighter than the NE side. The excess color of the disk with respect to the star is $\Delta (V-I)=0.4\pm0.3$ (Table \ref{tab_res}). This would be the intrinsic color of the disk if the scattering phase function were independent of wavelength. We also detect an object 6.8'' southwest of the star. The time baseline between the observations taken in the two filters is too short to detect any difference in the relative positions between the star and the object. Subtracting smooth elliptical fits from the object reveals residual spiral structure. Therefore, we believe this to be a faint background spiral galaxy (V=19.4, V-I=1.2). The galaxy lies at a position angle different from that of the offset of the SCUBA 450 $\mu$m emission with respect to the optical position of the star, and from that of the submillimeter map's extension \citep{wil04}. This suggests that the sub-millimeter measurements are not contaminated by the presence of the galaxy. The slightly red color of the disk with respect to the star suggests the presence of small grains, which leads us to interpret the NE-SW brightness asymmetry as being due to preferential forward-scattering in an inclined disk. This interpretation is consistent with the slightly elongated shape of the observed disk. By fitting elliptical isophotes to the disk image, we conclude that the disk minor axis has a position angle of $58^\circ \pm 5^\circ$. The bright SW region is symmetric with respect to this axis.
The measurements are consistent with the picture of a circular disk inclined $25^\circ \pm 5^\circ$ from the plane of the sky. Assuming that the disk is optically thin (as implied by its low $f_d$ value), we can map the surface density \citep{cla03}. We deproject the disk, assuming that it is intrinsically circular, and multiply the resulting image by $r^2$, where $r$ is the distance to the star, to correct for the geometric dilution of starlight. Finally, we divide the deprojected image by a Henyey-Greenstein phase function, adjusting $g$, the scattering asymmetry parameter, until the front- and back-scattering regions have the same brightness (Table \ref{tab_res}). After dividing by the stellar luminosity, the result (Figure \ref{disk}, bottom panels) is a map of the scattering optical depth, which we write as $\tau \omega$, the optical depth times the albedo in the band. The scattering optical depth is proportional to the surface density \citep{wei99, cla03}. To eliminate the galaxy, we fitted its isophotes to a series of ellipses of varying centers, ellipticities, and position angles and generated a model to be subtracted from the image. Within the level of the errors, the disks shown in the bottom panels of Figure \ref{disk} are azimuthally featureless and they have the same shape. The observed morphology is a broad ring, with maximum opacity at 130 AU and a FWHM of 85 AU. By taking medians of annular sections of the deprojected disks, we map the scattering optical depth as a function of distance (Figure \ref{colors}, top panel). The shape of $\tau \omega$ can be parametrized, as $r^p$, by two power laws: $p=1.6\pm0.5$ (from 80 to 130 AU) and $p=-2.8\pm0.3$ (from 130 to 185 AU). Given the large subtraction residuals within $\sim60$~AU from the star, our observations are not inconsistent with the presence of dust within this radius, nor with a constant optical depth within 80 AU. 
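For reference, the Henyey-Greenstein phase function used in this deprojection is not written out above; its standard form, with $\theta$ the scattering angle and $g$ the asymmetry parameter, is

```latex
% Henyey-Greenstein scattering phase function, normalized over the sphere;
% g=0 gives isotropic scattering, g -> 1 strongly forward-peaked scattering.
\phi_{\rm HG}(\theta) = \frac{1}{4\pi}\,
   \frac{1-g^{2}}{\left(1+g^{2}-2g\cos\theta\right)^{3/2}} .
```

In an optically thin inclined disk, $\theta$ at each point follows from the inclination and position angle, so dividing the deprojected image by $\phi_{\rm HG}$ with an appropriately chosen $g$ removes the front/back brightness asymmetry.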
However, we clearly detect a decrease in the dust opacity within 130 AU: the normalization error would have to be larger than calculated ($>5$\%) for the observations to be consistent with constant opacity within this limit. The difference in the shape of $\tau \omega$ between the two bands is not significant, given the subtraction residuals in the deprojected F814W image. For the adopted values of $g$, the scattering optical depth is similar in both bands. To measure the color of the deprojected disk, we take the ratio of the two deprojected images and obtain medians of annular sections of the result (Figure \ref{colors}, bottom panel). The color can be reliably determined only between 100 and 180 AU. Between these limits the disk is uniform in color with the mean of the ratio in $\tau \omega$ of the two filters being $1.3\pm0.3$. The uncertainty is determined by the uncertainty in the $g$ values. The uncertainty in the mean is $\pm0.06$. \section{Analysis and Discussion\label{anal}} The outer radius of the disk resolved in scattered light is similar to the submillimeter one. To fit the thermal emission, \citet{wil04} used a single-temperature modified blackbody function, in which the emission is $\propto Q_\lambda \ B_\lambda$, with $Q_\lambda=1-\exp[-(\lambda_0/\lambda)^\beta]$ and $\lambda_0=100 \mu$m (the characteristic grain size). They concluded that $T=51\pm4$ K and $\beta=0.69\pm0.15$. A detailed model of the thermal infrared emission of the system is beyond the scope of this Letter. Here we note that if we constrain the thermal emission to originate at 130~AU (the radius of maximum brightness in scattered light), and use the values of $\lambda_0$, $\beta$, and $T$ from \citet{wil04}, the disk will not be in thermal equilibrium with the stellar radiation (Backman \& Paresce 1993, Eqn. 1). In the context of this model, the only way to preserve thermal equilibrium is by reducing $\lambda_0$, the characteristic particle size.
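The role of $\lambda_0$ can be made explicit with a standard order-of-magnitude energy balance (this scaling is our own illustration, not taken from \citet{wil04}): grains that absorb efficiently at optical wavelengths but emit with efficiency $Q_\lambda\sim(\lambda_0/\lambda)^\beta$ in the far infrared satisfy

```latex
% Absorbed power per grain scales as L_*/(4 pi r^2); emitted power as
% <Q> sigma T^4, with emission-weighted efficiency <Q> ~ (lambda_0 T)^beta:
\frac{L_*}{4\pi r^{2}} \;\propto\; \lambda_0^{\beta}\, T^{4+\beta}
\quad\Longrightarrow\quad
T \;\propto\; \left(\frac{L_*}{\lambda_0^{\beta}\, r^{2}}\right)^{1/(4+\beta)} .
```

Since a blackbody at 130 AU around a 1.1 L$_\sun$ star reaches only ${\sim}25$ K (from $T_{\rm bb}=278\,{\rm K}\,(L_*/L_\sun)^{1/4}(r/{\rm AU})^{-1/2}$), sustaining 40--50 K there requires inefficient emitters, i.e.\ a smaller $\lambda_0$.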
By keeping $\beta=0.7$ constant, we fit the data with T$\sim45\pm5$~K and $\lambda_0=2\pm1 \mu$m. With $\beta=1$ we obtain T$\sim40\pm5$~K and $\lambda_0=15\pm3 \mu$m. The precise values of $\beta$ and $\lambda_0$ are poorly constrained, as shown by the fact that both sets of parameters fit the thermal data and satisfy thermal equilibrium. For comparison, \citet{den00} found that for $\beta$ Pic, $\epsilon$ Eri, Vega, and Fomalhaut, $\lambda_0=10-100 \mu$m and $\beta=0.8-1.1$. The 25 $\mu$m IRAS non-detection \citep{wil04} and the results of the fit (with $\beta=0.7$) imply that the dust surface density at 130~AU is $\gtrsim$5 times larger than at 60 AU, similar to the conclusion reached in scattered light. Even though it is poorly constrained, the value of $\lambda_0$ suggests the presence of small grains in the disk. The scattering asymmetry parameter, $g=0.2-0.3$, is also consistent with the presence of small grains. Similar values of $g$ are obtained in other debris disks: e.g. $g=0.15-0.25$ for HD~141569A \footnote{Because of a typo, the value of $g$ for the HD~141569A disk is quoted as $g=0.25-0.35$ in \citet{cla03}} \citep{cla03} and $g=0.4$ for AU Mic \citep{kri04}. For the standard ``astronomical silicate'' \citep{dra84, lao93}, such values indicate the presence of submicron grains, although the actual predicted size is not very sensitive to the value of $g$ \citep{wein01}. The color is also sensitive to grain size. Assuming compact astronomical silicate particles with size distribution $s^{-3.5}$, where $s$ is the particle radius, one can obtain a color ratio between the two bands as large as $\sim1.2$ if the lower limit of the size distribution is $\sim$0.3 $\mu$m. With this dust model a grey disk is observed when the dust particles are $\gtrsim1\mu$m.
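Grain size also sets the dynamical importance of radiation pressure. As a check of the figure adopted below, the standard expression for the ratio of radiative to gravitational force on a spherical grain of radius $s$, evaluated with a radiation pressure coefficient $Q_{\rm pr}=1$, $\rho_d=1.25$ g cm$^{-3}$, and $M_*\simeq M_\sun$ (our assumed values), gives

```latex
% Both forces fall as r^{-2}, so beta_RP depends on grain size
% but not on distance from the star:
\beta_{RP} \;=\; \frac{F_{\rm rad}}{F_{\rm grav}}
           \;=\; \frac{3\, L_{*}\, Q_{\rm pr}}{16\pi\, G\, M_{*}\, c\, \rho_{d}\, s}
           \;\simeq\; \frac{0.5}{s/\mu{\rm m}}
\qquad (L_{*} = 1.1\, L_\sun).
```

Because both forces scale as $r^{-2}$, this size-dependent ratio is independent of distance from the star.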
The ratio of the force due to radiation pressure to the gravitational force is $\beta_{RP}=0.5/s$, with $s$ in $\mu$m (assuming a unity radiation pressure coefficient and a dust density of $\rho_d$=1.25 g cm$^{-3}$, see \citealp{tak01}). Grains with $\beta_{RP}>0.5$ or $s<1 \mu$m will be expelled from the system on dynamical timescales. In other words, the mean color is consistent with the presence of grains smaller than the radiation pressure limiting size. A more detailed analysis (with a more realistic dust model) is necessary to confirm this result, although a similar situation has been found for the debris disk around HD~141569 (Ardila et al. 2005). The appearance of a symmetric ring is one of the features of the models by \citet{tak01}. Another is dust segregation: their models predict that the smallest grains present in the system ($\beta_{RP}\sim0.5$) should reside at the largest radii, and larger grains should remain closer to the star. The actual parameters of the segregation depend on the behavior of the gas density at the disk edge. Even in the absence of gas, smaller grains should, on average, reside farther out than larger grains. Over the wavelength span of the two bands, we detect no significant difference in the surface density profile or any systematic color change as a function of distance. Comparisons among observations performed over a wider range of wavelengths (for example NICMOS vs. ACS/HRC) are crucial to detect any size separation. \citet{ken04} show that increased collisions among planetesimals due to the formation of planets with radii larger than 1000 km can generate dust rings. In their models, the outer edge of the ring marks the position at which planets are starting to form. From their calculations, a planet at 170 AU could grow to the appropriate size if the mass in solids of the disk is between $\sim10$ and $\sim70$ times larger than that of the minimum mass solar nebula.
The range is due to the uncertainty in the age of HD~107146. The dust ring maximum fractional luminosity would be $f_d\sim10^{-2}$, larger than the observations. On the other hand, this kind of model produces very axisymmetric structures, similar to those seen in Figure 1. An alternative to local planet formation is the migration of a planet formed at a smaller radius. \citet{wei96} show that gravitational interactions among multiple large bodies can scatter one of them into a large, eccentric orbit on dynamical timescales. Dynamical friction will induce large eccentricities in smaller bodies, increasing their collision rate and generating broad rings \citep{ken99}. The timescale for this process will depend on the relative masses of the planet and the planetesimals. This scenario would imply that at least two giant planets are present in the HD~107146 system and that the presence of the hole reflects the relative paucity of an underlying population of planetesimals, expelled by the (giant) planets. A 50 Myr old, 10 M$_{Jup}$ planet would have $m_I\sim23$ mag \citep{bur97}. If the PSF is spread over 4 HRC pixels, each would have a brightness of 16.5 mag/arcsec$^2$. In the F814W subtracted image of HD~107146, the mean surface brightness within 1.8'' is $15.9 \pm 2$ mag/arcsec$^2$, which implies that the planet would not be detected photometrically. Does this disk represent an earlier stage in the evolution of our solar system? The observed dust disk is larger and proportionally much wider than the solar system Kuiper Belt (KB), and it has $\sim$4 orders of magnitude more dust \citep{gre04}. In order to look like the KB, the disk would have to shrink by a factor of $\sim$3 and become narrower by a factor of $\sim$8. This evolutionary path seems contrary to current ideas about the solar system. \citet{gom03} argue that the KB formed closer in and was pushed out by interactions with Neptune.
Additionally, \citet{lev03} argue that the sharp exterior edge of the KB is determined by a 2:1 resonance with Neptune. Defining the edge of the HD~107146 disk at 185~AU implies a planet at 116~AU. There is no dynamical or photometric signature of this object in the scattered light image. We believe therefore that HD~107146 is unlikely to evolve into a system like our own. \acknowledgments We wish to thank the team from \citet{wil04} for providing us with the name of their target before publication. ACS was developed under NASA contract NAS 5-32865, and this research has been supported by NASA grant NAG5-7697. We are grateful for an equipment grant from Sun Microsystems, Inc. The Space Telescope Science Institute is operated by AURA, Inc., under NASA contract NAS5-26555.